US20110022387A1 - Correcting transcribed audio files with an email-client interface


Info

Publication number
US20110022387A1
US20110022387A1 (Application US12/746,352)
Authority
US
United States
Prior art keywords
transcription
email
audio data
user
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/746,352
Inventor
Paul M. Hager
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
III Holdings 1 LLC
Original Assignee
Vovision LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vovision LLC
Priority to US12/746,352
Assigned to VOVISION, LLC (Assignors: HAGER, PAUL M.)
Publication of US20110022387A1
Assigned to III HOLDINGS 1, LLC (Assignors: VOVISION, LLC)
Legal status: Abandoned

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/10 - Text processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/10 - Office automation; Time management
    • G06Q10/107 - Computer-aided management of electronic mailing [e-mailing]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/06 - Message adaptation to terminal or network requirements
    • H04L51/066 - Format adaptation, e.g. format conversion or compression
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/50 - Network services
    • H04L67/75 - Indicating network or usage conditions on the user display
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/28 - Constructional details of speech recognition systems
    • G10L15/30 - Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07 - User-to-user messaging characterised by the inclusion of specific contents
    • H04L51/10 - Multimedia information

Definitions

  • Audio messages can include personal greetings and information or business-related instructions and information. In either case, it may be useful or required that the audio messages be transcribed in order to create written records of the messages.
  • Nuance Communications, Inc. provides a number of software programs, trademarked “Dragon,” that take audio files in .WAV format, .MP3 format, or other audio formats and translate such files into text files.
  • the Dragon software also provides mechanisms for comparing audio files to text files in order to “learn” and improve future transcriptions.
  • the “learning” mechanism included in the Dragon software is only intended to learn based on a voice-dependent model, which means that the same person trains the software program over time.
  • learning mechanisms in existing transcription software are often non-continuous and include set training parameters that limit the amount of training that is performed.
  • Embodiments of the present invention provide methods and systems for correcting transcribed text.
  • One method includes a user sending, via an email-client interface, one or more emails that include audio data to a transcription server.
  • the emails may be sent from one or more data sources running email-clients and include audio data to be transcribed.
  • the audio data is transcribed based on a voice model to generate text data.
  • the method also includes making the text data available to the user over at least one computer network and receiving corrected text data over the at least one computer network from the user.
  • the method includes modifying the voice model based on the corrected text data.
  • Embodiments of the present invention also provide systems for correcting transcribed text.
  • One system includes a transcription server, at least one translation server, an email-client correction interface, and at least one training server.
  • the transcription server receives audio data from one or more audio data sources and the translation server can transcribe the audio data based on a voice model to generate text data.
  • the email-client correction interface is accessible by a user from within an email-client and provides the user with access to the text data.
  • the transcription server also receives corrected text data from one or more users.
  • the training server modifies the voice model based on the corrected text data.
  • Additional embodiments of the invention also provide methods of performing audio data transcription.
  • One method includes obtaining audio data from at least one audio data source, such as a voice over IP system or a voicemail system, transcribing the audio data based on a voice-independent model to generate text data, and sending the text data to an owner of the audio data as an email message.
  • Embodiments of the invention also provide a method of requesting a transcription of audio data.
  • the method includes displaying a send-for-transcription button within an email-client interface on a computer-controlled display, and automatically sending a selected email message and associated audio data to a transcription server as a request for a transcription of the associated audio data when a user selects the send-for-transcription button.
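A rough sketch of what a send-for-transcription action could do is shown below: forward the selected message, with its audio attachment, to a transcription intake address. This is illustrative only; the intake address, SMTP host, and function names are assumptions rather than details from the patent.

```python
import smtplib
from email.message import EmailMessage

TRANSCRIPTION_INTAKE = "transcribe@example-transcription-server.com"  # assumed address

def send_for_transcription(selected_msg: EmailMessage, user_addr: str,
                           smtp_host: str = "localhost") -> None:
    """Forward a selected email and its audio attachment as a transcription request."""
    request = EmailMessage()
    request["From"] = user_addr                  # lets the server identify the account
    request["To"] = TRANSCRIPTION_INTAKE
    request["Subject"] = "Transcription request: " + (selected_msg["Subject"] or "")
    request.set_content("Please transcribe the attached audio.")

    # Copy audio attachments (e.g., voice mail .wav/.mp3 files) onto the request.
    for part in selected_msg.iter_attachments():
        if part.get_content_maintype() == "audio":
            request.add_attachment(part.get_content(),
                                   maintype="audio",
                                   subtype=part.get_content_subtype(),
                                   filename=part.get_filename() or "voicemail.wav")

    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(request)
```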
  • Additional embodiments of the invention also provide a system for generating a transcription of audio data.
  • the system includes a transcription server and a translation server.
  • the transcription server is configured to receive at least one email message and associated audio data from an email-client, identify an account based on the at least one email message, and obtain stored account settings associated with the identified account.
  • the translation server is configured to generate a transcription of the associated audio data based on the account settings and a voice-independent model.
  • FIGS. 1 and 2 schematically illustrate systems for transcribing audio data according to various embodiments of the invention.
  • FIG. 3 illustrates an email-client interface according to an embodiment of the invention.
  • FIG. 4 illustrates a process for transcribing audio data using the email-client interface according to an embodiment of the invention.
  • FIG. 5 illustrates the transcription server of FIGS. 1 and 2 according to an embodiment of the invention.
  • FIG. 6 illustrates a file transcription, correction, and training method according to an embodiment of the invention.
  • FIG. 7 illustrates another file transcription, correction, and training method according to an embodiment of the invention.
  • FIG. 8 illustrates a correction method according to an embodiment of the invention.
  • FIGS. 9-10 illustrate a correction notification according to an embodiment of the invention.
  • FIGS. 11-14 illustrate an email-client correction interface according to an embodiment of the invention.
  • FIG. 15 illustrates a message notification according to an embodiment of the invention.
  • embodiments of the invention include hardware, software, and electronic components or modules that, for purposes of discussion, may be illustrated and described as if the majority of the components were implemented solely in hardware.
  • the electronic based aspects of the invention may be implemented in software.
  • a plurality of hardware and software based devices, as well as a plurality of different structural components, may be utilized to implement the invention.
  • the specific configurations illustrated in the drawings are intended to exemplify embodiments of the invention. Other alternative configurations are possible.
  • FIG. 1 illustrates a transcription system 10 for transcribing audio data according to an embodiment of the invention.
  • the system 10 includes a transcription server 20 , a data source running an email-client 30 , and a third party device 40 .
  • the transcription server 20 includes, among other things, a voice file directory 52 , a queue server 54 , and a translation server 56 .
  • the transcription server is described in more detail below.
  • the data source email-client 30 and the third party device 40 can be connected to the transcription server 20 via a wide area network 50 such as a cellular network or the Internet.
  • the data source email-client 30 can include a stand-alone email-client, such as Outlook manufactured by Microsoft™ or Lotus Notes manufactured by IBM™.
  • the data source email-client 30 can include a browser-based email-client, such as Hotmail, Gmail, Yahoo, AOL, etc.
  • the data source email-client 30 can provide one or more email-client interfaces (e.g., via one or more plug-ins or additional software modules installed and used as part of the email-client 30 ) that allow a user to request, view, manage, and correct transcribed text data.
  • a user sends information from the data source email-client 30 through the wide area network 50 (e.g. a cellular network, the Internet, etc.) to the transcription server 20 .
  • the transcription server 20 places the information in the voice file directory 52 related to an account for the user that sent the information.
  • the information to be transcribed is placed in the queue server 54 before being routed to the translation server 56 to be transcribed.
  • After the information has been transcribed it is sent back through the wide area network 50 and may, optionally, be sent to a third party device 40 for correction. In some embodiments, if the information is not sent to a third party device 40 for correction or if the third party device 40 has finished correcting the transcription, the information is sent back to the data source email-client 30 .
  • FIG. 2 illustrates an exemplary embodiment of the system 10 from FIG. 1 .
  • the transcription server 20 can include or can be connected to an email server 20 a that receives email messages from a client computer 30 a or other devices running email-clients, such as a personal digital assistant (“PDA”) 30 b , a Blackberry device 30 c , or a mobile phone 30 d . In other embodiments, additional devices that support email-clients may also be used.
  • the system 10 also includes a third party device 40 .
  • the third party device 40 can receive messages including transcribed text to be corrected or checked before the text is sent back to the user. As described below, in some embodiments, the third party device 40 provides one or more email-client interfaces for viewing and correcting transcribed text.
  • FIG. 3 illustrates an embodiment of an email-client interface 60 .
  • the email-client interface 60 allows a user to interact with the transcription server 20 from FIGS. 1 and 2 .
  • the email-client interface 60 is provided through an email-client, such as the data source email-client 30 .
  • the email-client can include a stand-alone email-client, such as Outlook manufactured by Microsoft™ or Lotus Notes manufactured by IBM™.
  • the email-client can include a browser-based email-client, such as Hotmail, Gmail, Yahoo, AOL, etc.
  • the email-client interface 60 is provided by a plug-in or additional software module that is installed and used with the email-client, which allows a user to access and manage transcribed text from within a standard email-client and without having to launch and access a separate interface for managing transcribed text.
  • the email-client interface 60 includes a send button 62 , a quick play button 64 , a search field 66 , and an options button 68 .
  • the send button 62 allows the user to send one or more selected email messages that include audio data to the transcription server 20 .
  • the search field 66 allows a user to search messages that have already been sent to the transcription server 20 . As a result, the search field 66 allows a user to access information within the transcription system 10 without having to access a web interface.
  • the quick play button 64 allows the user to play audio data related to a message that has already been sent to the transcription server 20 .
  • the options button 68 allows a user to modify features related to the email-client interface 60 and an email-client correction interface described below. In some embodiments, the options button 68 allows a user to modify account settings related to delivery settings, transcription settings, format settings, and the like. In other embodiments, the email-client interface 60 includes additional buttons and functionality.
  • the email-client correction interface is also accessed from within an email-client, such as the data source email-client 30 or an email-client executed by the third party device 40 .
  • the email-client correction interface is also provided by a plug-in or additional software module that is installed and used with the email-client.
  • the email-client correction interface can be part of the same plug-in providing the email-client interface 60 .
  • the email-client correction interface allows a user to access a web-based correction interface from within an email-client, eliminating the need to launch a separate web browsing application or interface. Aspects of the email-client correction interface include, among other things, the ability to view and correct transcriptions of audio data, monitor the transcription status of audio data sent to the transcription server, and modify account settings. The email-client correction interface is described in greater detail below with respect to FIGS. 11-14 .
  • FIG. 4 illustrates a process 70 for using the email-client interface 60 to send messages including audio data through the transcription system 10 .
  • the user selects one or more email messages including audio data to be transcribed (step 72 ).
  • the selected email messages include attached audio data representing voice mail messages. Selecting the email messages may include highlighting the messages, opening individual messages, or any other acceptable selection technique.
  • the user selects the send button 62 from the email-client interface 60 to forward the selected email messages to the transcription server 20 (step 74 ). Additionally or alternatively, the user can reply to a message from the transcription server 20 , make changes or corrections to the transcribed text, and send the message back to the transcription server 20 , as described below.
  • identifying information is taken from the email messages to identify a user account (step 76 ).
  • the identifying information is metadata taken from the email message.
  • the metadata may include, among other things, information such as a sender's email address and IP address.
  • identifying information is included in the body of the email message and extracted to identify a user account.
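A minimal sketch of this identification step, assuming the account is found by looking up the sender's address taken from the message headers (the registry and names below are illustrative, not the patent's actual mechanism):

```python
import email
from email import policy
from email.utils import parseaddr

# Illustrative account registry; the patent's server consults stored account records.
ACCOUNTS = {"alice@example.com": "account-1001"}

def identify_account(raw_message: bytes):
    """Extract identifying metadata (here, the sender's address) and map it to an account."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    sender = parseaddr(msg.get("From", ""))[1].lower()
    return ACCOUNTS.get(sender)  # None if no matching account exists
```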
  • the message is sent to a voice file directory 52 related to that account (step 78 ).
  • Account settings such as, for example, destination information and formatting information, may be modified for each account. The account settings can be modified or accessed through a system interface, such as the email-client correction interface.
  • the messages stored in the voice file directory 52 awaiting transcription are polled into a queue server 54 (step 80 ).
  • the queue server 54 holds the messages until a translation server 56 becomes available.
  • the queue server 54 routes the messages to the available translation server 56 (step 82 ).
  • the messages enter the translation server 56 and the audio data associated with the message is transcribed (step 84 ).
  • the transcription server can also receive messages with corrected transcribed text. If the transcription server 20 receives a message including corrected transcribed text, the transcription server 20 compares the original transcribed text with the user-corrected transcribed text. After the transcription server 20 has compared the original and the user-corrected text, a message including the user-corrected text or the differences between the original text and the user-corrected text is sent to a training queue to update the voice model, as described below.
  • the transcribed text may be sent to a third party for correction or may be sent directly to one or more destinations specified in the user's account settings (step 86 ).
  • the transcribed text can be sent to a destination in an email message (e.g., embedded or as an attached file).
  • if the transcribed text is not sent to a third party, it is sent directly to the training queue to update the voice model (step 90 ).
  • the third party will correct the transcription using, for example, the email-client correction interface described below (step 88 ).
  • the transcribed and/or corrected text is sent to the training queue to update the voice model (step 90 ).
  • the transcribed text is then sent back to the user (step 92 ).
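The flow of steps 72 through 92 can be pictured with a small single-process Python sketch. The transcribe() stub stands in for the translation server's speech engine, and every name here is an assumption rather than the patented implementation:

```python
import queue

voice_file_queue: queue.Queue = queue.Queue()  # stands in for queue server 54

def poll_voice_directory(directory_entries):
    """Step 80: move messages awaiting transcription into the queue."""
    for entry in directory_entries:
        voice_file_queue.put(entry)

def transcribe(audio_bytes: bytes) -> str:
    """Placeholder for translation server 56; not a real speech recognizer."""
    return "<transcribed text>"

def run_translation_worker(correct, train, deliver):
    """Steps 82-92: transcribe, optionally correct, train, and return results."""
    while not voice_file_queue.empty():
        msg = voice_file_queue.get()
        text = transcribe(msg["audio"])
        final_text = correct(text) if msg.get("send_to_third_party") else text
        train(text, final_text)              # step 90: update the voice model
        deliver(msg["account"], final_text)  # step 92: send text back to the user

# Example run with trivial stand-ins for correction, training, and delivery:
poll_voice_directory([{"account": "account-1001", "audio": b"...",
                       "send_to_third_party": False}])
run_translation_worker(lambda t: t, lambda orig, corr: None, print)
```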
  • a more detailed description of the transcription server 20 is provided below.
  • the transcription server 20 receives audio data 100 from one or more of the audio data sources 30 .
  • the transcription server 20 includes or is connected to one or more intermediary servers, such as an email server 20 a , that receive messages from the audio data sources 30 .
  • Additional intermediary servers may be present such as a voice over IP (“VoIP”) server 20 c and a short message service (“SMS”) server 20 b to receive audio data from additional sources.
  • the messages can be received continuously or in batch form, and can be sent to the transcription server 20 and/or pulled by the transcription server 20 in any manner (e.g., continuously, in batch form, and the like).
  • the transcription server 20 is adapted to request messages at regular intervals and/or to be responsive to a user command or to some other event.
  • the audio data sources 30 and/or any intermediary servers store the converted message(s) until requested by the transcription server 20 or a separate polling computer.
  • the transcription server 20 or the separate polling computer can manage the messages. For example, in one implementation, the transcription server 20 or a separate polling computer establishes a priority for received messages to be transcribed.
  • the transcription server 20 or a separate polling computer also determines a source of a received message (e.g., the audio data source 30 that transmitted the message). For example, the transcription server 20 or separate polling computer can use metadata taken from the email containing audio data to identify the source of a particular message. In additional embodiments, other types of identifying data can be used to identify the source of a received message.
  • the transcription server 20 or separate polling computer places the messages and/or the associated audio data to be transcribed into one or more queue servers 54 .
  • the queue servers 54 look for an open or available processor or translation server 56 .
  • the transcription server 20 includes multiple translation servers 56 , although a different number of translation servers 56 (e.g., physical or virtual) is possible.
  • the queue servers 54 route audio data to the available translation server 56 .
  • the translation server 56 transcribes the audio data to generate text data and, in some embodiments, indexes the message.
  • the translation servers 56 index the messages using a database to identify discrete words.
  • the translation server 56 can use an extensible markup language (“XML”), structured query language (“SQL”), mySQL, idx, or other database language to identify discrete words or phrases within the transcribed text.
  • In addition to transcribing audio data included in messages as just described, some embodiments of a translation server 56 generate an index of keywords based upon the transcribed text. For example, in some embodiments, the translation server 56 removes those words that are less commonly searched and/or less useful for searching (e.g., I, the, a, an, but, and the like) from transcribed text, which leaves a number of keywords that can be stored in memory available to the translation servers 56 .
  • the resulting “keyword index” includes the exact positions of each keyword in the transcribed text, and, in some cases, includes the exact location of each keyword in the corresponding audio data. This keyword index enables users to perform searches on transcribed text.
  • a user accessing the transcribed text associated with particular audio data can select one or more words from the keyword index of the message generated earlier.
  • the exact locations (e.g., page and/or line numbers) of such words can be provided quickly and efficiently—in many cases significantly faster and with less processing power than performing a standard search for the word through the entire transcribed text.
  • the system 10 can provide the keyword index to a user in any suitable manner, such as in a pop-up or pull-down menu included in an interface of the system 10 , such as the email-client correction interface, during text correction or searching of transcribed text (described below).
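A minimal sketch of building the keyword index described above, assuming word-position granularity; the stopword list is a small illustrative subset:

```python
import re

STOPWORDS = {"i", "the", "a", "an", "but", "and", "or", "to", "of"}  # illustrative subset

def build_keyword_index(transcribed_text: str) -> dict:
    """Map each keyword to the word positions where it occurs in the transcript."""
    index = {}
    for position, word in enumerate(re.findall(r"[A-Za-z']+", transcribed_text.lower())):
        if word in STOPWORDS:
            continue
        index.setdefault(word, []).append(position)
    return index

index = build_keyword_index("Call the office and confirm the Tuesday appointment")
assert index["tuesday"] == [6]  # exact position relative to the full transcript
```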
  • a translation server 56 generates two or more possible candidates for a transcription of a spoken word or phrase from audio data.
  • the most likely candidate is displayed or otherwise used to generate the transcribed text, and the less likely candidate(s) are saved in a memory accessible by the translation server 56 and/or by another server or third party device 40 as needed.
  • This capability can be useful, for example, during correction of the transcribed text (described below). In particular, if a word in the transcribed text is wrong, a user can obtain other candidate(s) identified by the translation server 56 during transcription, which can speed up and/or simplify the correction process.
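One plausible way to keep the less likely candidates alongside the displayed word is a per-word record, sketched below (the data structure is an assumption made for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class TranscribedWord:
    best: str                                        # candidate shown in the transcript
    alternates: list = field(default_factory=list)   # less likely candidates, kept for correction

def suggest_corrections(transcript: list, position: int) -> list:
    """Return the alternate candidates saved for the word at a given position."""
    return transcript[position].alternates

transcript = [TranscribedWord("meet"), TranscribedWord("at"),
              TranscribedWord("to", alternates=["two", "too"])]
print(suggest_corrections(transcript, 2))  # ['two', 'too']
```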
  • the system 10 can allow a user to search transcribed text for particular words and/or phrases. This searching capability can be used during correction of transcribed text as described below or when a transcribed text file is searched for particular words (whether a search for such words is performed on the file alone or in combination with one or more other files). For example, using the indexed message, a user viewing generated text data can select a word or phrase included in the text data and, in some embodiments, can hear the corresponding portion of the audio data from which the text data was generated.
  • the system 10 is adapted to enable a user to search some or all transcribed text files accessible by the transcription server 20 , regardless of whether such files have been corrected. Also, the system 10 can enable a user to search transcribed text using Boolean and/or other search terms.
  • Search results can be generated in a number of manners, such as in a table form enabling a user to select one or more files in which a word or phrase has been found and/or one or more locations at which a word or phrase has been found in particular text data.
  • the search results can also be sorted in one or more manners according to one or more rules (e.g., date, relevance, number of instances in which the word or phrase has been found in text data, and the like) and can be printed, displayed, or exported as desired.
  • the search results also provide the text around the found word or phrase.
  • the search results can also include additional information, such as the number of instances in which a word or phrase has been found in a transcribed text file and/or the number of transcribed text files in which a word or phrase has been found.
  • the audio data and text data can be stored internally by the transcription server 20 or can be stored externally to one or more data storage devices (e.g., databases, servers, and the like).
  • a user (e.g., a user associated with a particular audio data source email-client 30 ) decides how long audio data and/or text data is stored by the transcription server 20 , after which time the audio data and/or text data can be automatically deleted, over-written, or stored in another storage device (e.g., a relatively low-accessibility mass storage device).
  • An interface of the system 10 (e.g., the email-client correction interface) can be used to configure these retention settings.
  • a data source email-client 30 connects to the transcription server 20 over a network, such as the Internet, one or more local or wide-area networks 50 , or the like, in order to obtain audio data and/or corresponding, generated text data.
  • a user uses the data source email-client 30 to access the email-client correction interface associated with the transcription server 20 to obtain generated text data and/or corresponding audio data. For example, using the email-client correction interface, the user can request particular audio data and/or the corresponding text data.
  • the requested data is obtained from the transcription server 20 and/or a separate data storage device and is transmitted to the user for display via the interface.
  • the transcription server 20 sends audio data and/or corresponding generated text data to the user as an email message.
  • the transcription server 20 can send an email message to a user that includes the audio data and the text data as attached files.
  • the transcription server 20 sends an email message to a user that includes a notification that audio data and/or text data is available for the user.
  • a user uses the email-client correction interface in order to listen to the audio data, view the text data, and/or to correct the text data.
  • a user can reply to the email message sent from the transcription server 20 , correct the transcription, and send the corrected transcription back to the transcription server 20 .
  • the transcription server then updates the voice model based on a comparison of the original transcribed text and the user-corrected transcribed text. If the user replies directly to the transcription server, the user does not need to access the email-client correction interface, web interface, or other interfaces of the system 10 .
  • the user can choose to correct only parts of transcribed text. If the user corrects only a portion of the transcribed text, the email-client (e.g., the email-client correction interface) recognizes that only a portion of the text has changed and transmits only the corrected portion of the text to the transcription server 20 for use in training the voice model. By submitting only the corrected or changed portion of the transcribed text, the amount of data transmitted to the transcription server 20 for processing is reduced.
  • another email-client interface, a web-based interface, the transcription server 20 , or another device included in the system 10 can determine what portions of transcribed text have been changed and can limit transmission and/or processing of the changed text accordingly.
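Identifying only the changed portions can be sketched with Python's standard difflib; the word-level granularity is a choice made for illustration, not a detail from the patent:

```python
import difflib

def changed_portions(original: str, corrected: str) -> list:
    """Identify only the regions the user actually changed, for upload to the server."""
    orig_words, corr_words = original.split(), corrected.split()
    matcher = difflib.SequenceMatcher(a=orig_words, b=corr_words)
    changes = []
    for op, a1, a2, b1, b2 in matcher.get_opcodes():
        if op != "equal":
            changes.append({"original": " ".join(orig_words[a1:a2]),
                            "corrected": " ".join(corr_words[b1:b2])})
    return changes

print(changed_portions("meet you at to o'clock", "meet you at two o'clock"))
# [{'original': 'to', 'corrected': 'two'}]
```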
  • the transcription server 20 can send a return email message to the user after the transcription server 20 transcribes the submitted audio file.
  • the email message can inform the user that the submitted audio data was transcribed and that corresponding text data is available.
  • the email message from the transcription server 20 can include the submitted audio data and/or the generated text data.
  • the system 10 can also enable a user to provide destination settings for audio data and/or text data on a per-generated-text-data basis.
  • a user before or after audio data is transcribed, a user specifies a particular destination for the text data.
  • certain implementations allow a user to specify destination settings in an email message. For example, if the user sends an email message to the transcription server 20 that includes audio data, the user can specify destination information in the email message. After the audio message is transcribed and the generated text data is corrected (if applicable), the transcription server 20 sends an email message to the identified recipient (e.g., via a SMTP server).
  • the transcription server 20 transmits data (e.g., audio data and/or text data) to the third party device 40 or another destination device using file transfer protocol (“FTP”). The transmitted data can also be protected by a secure socket layer (“SSL”) mechanism (e.g., a bank-level certificate).
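A sketch of such a delivery using Python's standard ftplib with TLS protection; the host, credentials, and file names are placeholders:

```python
import ftplib

def deliver_by_ftp(host: str, user: str, password: str,
                   local_path: str, remote_name: str) -> None:
    """Upload generated text or audio data over FTP, protected by TLS."""
    with ftplib.FTP_TLS(host) as ftp:
        ftp.login(user, password)
        ftp.prot_p()  # switch the data channel to the encrypted connection
        with open(local_path, "rb") as fh:
            ftp.storbinary(f"STOR {remote_name}", fh)
```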
  • the system 10 includes an email-client correction interface and a streaming translation server 102 that a user accesses (e.g., via the data source email-client 30 ) to view generated text.
  • the email-client correction interface and the streaming translation server 102 also enable a user to stream the entire audio data corresponding to the generated text data and/or to stream any desired portion of the audio data corresponding to selected text data.
  • the email-client correction interface and the streaming translation server 102 enable a user to select (e.g., click-on, highlight, mouse over, etc.) a portion of the text in order to hear the corresponding audio data.
  • the email-client correction interface and the streaming translation server 102 enable a user to specify a number of seconds that the user desires to hear before and/or after a selected portion of text data.
  • the email-client correction interface also enables a user to correct generated text data. For example, if a user listens to audio data and determines that a portion of the corresponding generated text data is incorrect, the user can correct the generated text data via the email-client correction interface.
  • the email-client correction interface automatically identifies potentially incorrect portions of generated text data by displaying potentially incorrect portions of the generated text data in a particular color or other format (e.g., via a different font, highlighting in bold, italics, underline, or any other manner).
  • the email-client correction interface also displays portions of the generated text in various colors or other formats depending on the confidence that the portion of the generated text is correct.
  • the email-client correction interface also inserts a placeholder (e.g., an image, an icon, etc.) into text that marks portions of the generated text where text is missing (i.e., the transcription server 20 could not generate text based on the audio data).
  • a user selects the placeholder in order to hear the audio data corresponding to the missing text and can insert the missing text accordingly.
  • some embodiments of the email-client correction interface automatically generate words similar to incorrectly-generated words.
  • a user selects a word (e.g., by highlighting, clicking, or by any other suitable manner) within generated text data that is or appears to be incorrect.
  • the email-client correction interface suggests similar words, such as in a pop-up menu, pull-down menu, or in any other format. The user selects a word or words from the list of suggested words in order to make a desired correction.
  • the translation server(s) 56 are configured to automatically determine speakers in an audio file. For example, the translation server 56 processes audio files for drastic changes in voice or audio patterns. The translation server 56 then analyzes the patterns in order to identify the number of individuals or sources speaking in an audio file.
  • in some embodiments, a user or information associated with the audio file (e.g., information included in the email message containing the audio data, or stored in a separate text file associated with the audio data) specifies the number of speakers in the audio file.
  • a user uses an interface of the system 10 (e.g., the email-client correction interface) to specify the number of speakers in an audio file before or after the audio file is transcribed.
  • the translation server(s) 56 can generate a speaker list that marks the number of speakers and/or the times in the audio file where each speaker speaks.
  • the translation server(s) 56 can use the speaker list when creating or formatting the corresponding text data to provide markers or identifiers of the speakers (e.g., Speaker 1 , Speaker 2 , etc.) within the generated text data.
  • a user can update the speaker list in order to change the number of speakers included in an audio file, change the identifier of the speakers (e.g., to the names of the speakers), and/or specify that two or more speakers identified by the translation server(s) 56 relate to a single speaker or audio source.
  • a user can use an interface of the system 10 (e.g., the email-client correction interface) to modify the speaker list or to upload a new speaker list.
  • a user can change the identifiers of the speakers by updating a field of the email-client correction interface that identifies a particular speaker.
  • each speaker identifier displayed within generated text data can be placed in a user-editable field.
  • changing an identifier of a speaker in one field automatically changes the identifier for the speaker throughout the generated text data.
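Because the speaker identifiers resolve through a single mapping, one edit can propagate through the whole transcript. A small sketch, with illustrative names:

```python
def label_speakers(segments, speaker_names=None):
    """Render transcript segments with speaker markers, as in the generated text data.

    `segments` is a list of (speaker_id, text) pairs from speaker detection;
    `speaker_names` optionally maps ids to user-supplied names.
    """
    speaker_names = speaker_names or {}
    lines = []
    for speaker_id, text in segments:
        name = speaker_names.get(speaker_id, f"Speaker {speaker_id}")
        lines.append(f"{name}: {text}")
    return "\n".join(lines)

segments = [(1, "How did the demo go?"), (2, "It went well.")]
print(label_speakers(segments, {1: "Dana"}))
# Dana: How did the demo go?
# Speaker 2: It went well.
```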
  • the system 10 also formats transcribed text data based on one or more templates, such as templates adapted for particular users or businesses (e.g., medical, legal, engineering, or other fields). For example, after generating text data, the system 10 (e.g., the translation server(s) 56 ) compares the text data with one or more templates. If the format or structure of the text data corresponds to the format or structure of a template and/or if the text data includes one or more keywords associated with a template, the system 10 formats the text data based on that template.
  • the system 10 is configured to automatically apply a template to text data if text data corresponds to the template. Therefore, as the system 10 “learns” and improves its transcription quality, as described below, the system 10 also “learns” and improves its application of templates.
  • a user uses an interface of the system 10 (e.g., the email-client correction interface) to manually specify a template to be applied to text data. For example, a user can select a template to apply to text data from a drop down menu or other selection mechanism included in the interface.
  • the system 10 can store the formatted text data and can make the formatted text data available for review and correction, as described below.
  • the system 10 stores or retains the unformatted text data separately from the formatted text data. By retaining the unformatted text data, the text data can be applied to new or different templates.
  • the system 10 can use the unformatted text data to train the system 10 , as described below.
  • the system 10 is configured to allow a user to create a customized template and upload the template to the system.
  • a user uses a word processing application, such as Microsoft® Word®, to create a text file that defines the format and structure of a customized template.
  • the user then uploads the text file to the system 10 using an interface of the system 10 (e.g., the email-client interface 60 and/or the email-client correction interface).
  • the system 10 reformats uploaded templates.
  • the system 10 can store predefined templates and/or customized templates in a mark-up language, such as XML or HTML.
  • Templates can be associated with a particular user or a group of users. For example, only users with certain permission may be allowed to use or apply particular templates. In other embodiments, a user can upload one or more templates that only he or she can use or apply. Settings and restrictions for predefined and/or customized templates can be configured by a user or an administrator using an interface of the system 10 .
  • the system 10 enables a user to configure one or more commands that replace transcribed text with different text (illustrated in the sketch below). For example, a user configures the system 10 to insert the current date into text data whenever audio data and/or corresponding text data includes the word “date” or the phrases “today's date,” “current date,” or “insert today's date.” Similarly, in another embodiment, the system 10 is configured to start a new paragraph within transcribed text data each time audio data and/or corresponding text data includes the word “paragraph,” the phrase “new paragraph,” or a similar identifier.
  • the commands can be defined on a per user basis and/or on a group of users basis, and settings or restrictions for the commands can be set by a user or an administrator using the system 10 .
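A dictionary-driven substitution suggests how such per-user commands might be applied to transcribed text; the command set and function below are assumptions, not the patent's implementation:

```python
import datetime

def apply_text_commands(transcribed: str) -> str:
    """Replace recognized command words with their expansions, per user settings."""
    commands = {
        "new paragraph": "\n\n",
        "today's date": datetime.date.today().isoformat(),
    }
    result = transcribed
    for spoken, replacement in commands.items():
        result = result.replace(spoken, replacement)
    return result

print(apply_text_commands("the meeting on today's date went well new paragraph next item"))
```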
  • Some embodiments of the system 10 also enable a user correcting text data via the email-client correction interface to create commands and/or keyboard shortcuts.
  • the system is configured so that a user can use the commands and/or keyboard shortcuts to stream audio data, add common words or phrases to text data, play audio data, pause audio data, or start or select objects or functions provided through the email-client correction interface or other interfaces of the system 10 .
  • a user uses the email-client correction interface to configure the commands and/or keyboard shortcuts.
  • the commands and/or keyboard shortcuts can be stored on a user level and/or a group level.
  • An administrator can also configure commands and/or keyboard shortcuts that can be made available to one user or multiple users. For example, users with particular permissions may be allowed to use particular commands and/or keyboard shortcuts.
  • the email-client correction interface reacts to commands spoken by the user.
  • the system 10 is configured to permit a user to create commands that when spoken by the user cause the email-client correction interface to perform certain actions.
  • the user can say “play,” “pause,” “forward,” “backward,” etc. to control the playing of the audio data by the email-client correction interface.
  • Other commands include insert, delete, or edit text in transcribed text data. For example, when user says “date,” the email-client correction interface inserts date information into transcribed text data.
  • the system 10 also performs translations of transcribed text data.
  • the email-client correction interface or another interface of the system 10 includes features to permit a user to request a translation of transcribed text data into another language.
  • the transcription server 20 includes one or more language translation modules configured to create text data in a particular language based on generated text data in another language.
  • the system is also configured to process a request from an audio source (e.g., an individual submitting an email message with an attached audio file to the transcription server 20 ) to translate the file into a specific language when the audio file is submitted.
  • corrections made by a user through the email-client correction interface are transmitted to the transcription server 20 .
  • the transcription server 20 includes a training server 104 .
  • the training server 104 can use the corrections made by a user to “learn” so that future incorrect translations are avoided.
  • since audio data is received from one or more audio data sources 30 representing multiple “speakers,” and since the email-client correction interface can be accessible over a network by multiple users, the training server 104 receives corrections from multiple users and, therefore, uses a voice-independent model to learn from multiple speakers or audio data sources.
  • the system 10 transcribes audio files of a predetermined size (e.g., over 20 minutes in length) in pieces in order to “pre-train” the translation server(s) 56 .
  • the transcription server 20 and/or the translation server(s) 56 can divide an audio file into segments (e.g., 1 to 5 minute segments).
  • the translation server(s) 56 can then transcribe one or more of the segments and the resulting text data can be made available to a user for correction (e.g., via the email-client correction interface).
  • the translation server(s) 56 transcribe the complete audio file.
  • the transcription of the complete audio file is made available to a user for correction.
  • Using the small segments of the audio file to pre-train the translation server(s) 56 helps increase the accuracy of the transcription of the complete audio file, which can save time and can prevent errors.
  • the complete audio file is transcribed before or in parallel with one or more smaller segments of the same audio file.
  • a user can then immediately review and correct the text for the complete audio file or can wait until the individual segments are transcribed and corrected before correcting the text of the complete audio file.
  • a user can request a re-transcription of the complete audio file after one or more individual segments are transcribed and corrected.
  • the transcription server 20 and/or the translation server(s) 56 automatically re-transcribes the complete audio file.
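The segmentation described above (dividing a long audio file into short pieces for pre-training) can be sketched with Python's standard wave module, assuming WAV input and a fixed segment length:

```python
import wave

def split_wav(path: str, segment_seconds: int = 120):
    """Split a long WAV file into fixed-length segments for pre-training."""
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_segment = params.framerate * segment_seconds
        index = 0
        while True:
            frames = src.readframes(frames_per_segment)
            if not frames:
                break
            out_path = f"{path}.part{index}.wav"
            with wave.open(out_path, "wb") as dst:
                dst.setparams(params)   # header frame count is patched on close
                dst.writeframes(frames)
            yield out_path
            index += 1
```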
  • the voice independent model developed by the transcription server 20 can be shared and used by multiple transcription servers 20 .
  • the voice independent model developed by a transcription server 20 can be copied to or shared with other transcription servers 20 .
  • the model can be copied to other transcription servers 20 based on a predetermined schedule, anytime the model is updated, on a manual basis, etc.
  • a lead transcription server 20 collects audio and text data from other transcription servers 20 (e.g., audio and text data which has not been applied to a training server) and transfers the data to a lead training server 104 .
  • the lead transcription server 20 can collect the audio and text data during periods of low network or processor usage.
  • the individual training servers 104 of one or more transcription servers 20 can also take turns processing batches of audio data and copying updated voice models to other transcription servers 20 (e.g., in a predetermined sequence or schedule), which can ensure that each transcription server 20 is using the most up-to-date voice model.
  • individuals may be hired to correct transcribed audio files (“correctors”). The correctors may be paid on a per-line, per-word, per-file, or time basis, and the transcription server 20 can track performance data for the correctors.
  • the performance data can include line counts, usage counts, word counts, etc. for individual correctors and/or groups of correctors.
  • the transcription server 20 enables a user (e.g., an administrator) to access the performance data via an interface of the system 10 (e.g., an email-client correction interface or a website). The user can use the interface to input personal information associated with the performance data, such as the correctors' names, employee numbers, etc.
  • the user can also use the interface to initiate and/or specify payments to be made to the correctors.
  • the performance data (and any related information provided by a user, such as an administrator) can be stored in a database and/or can be exported to an external accounting system, such as accounting systems and solutions provided by Paychex, Inc. or QuickBooks® provided by Intuit, Inc.
  • the transcription server 20 can send the performance data to an external accounting system via a direct connection or an indirect connection, such as the Internet.
  • the transcription server 20 can also generate a file that can be stored to a portable data storage medium (e.g., a compact disk, a jump drive, etc.). The file can then be uploaded to an external accounting system from the portable data storage medium.
  • An external account system can use the performance data to pay the correctors, generate financial documents, etc.
  • a user may not desire or need transcribed text data to be corrected.
  • a user may not want text data that is substantially accurate to be corrected.
  • the system 10 can allow a user to designate an accuracy threshold, and the system 10 can apply the threshold to determine whether text data should be corrected. For example, if generated text data has a percentage or other measurement of accurate words (as determined by the transcription server 20 ) that is equal to or greater than the accuracy threshold specified by the user, the system 10 can allow the text data to skip the correction process (and the associated training or learning process).
  • the system 10 can deliver any generated text data that skips the correction process directly to its destination (e.g., directly sent to a user via an email message, directly stored to a database, etc.).
  • the accuracy threshold can be set by a user using any described interface of the system 10 .
  • the threshold can be applied to all text data or only to particular text data (e.g., only text data generated based on audio data received from a particular audio source, only text data that is associated with a particular destination, etc.).
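The accuracy-threshold gate reduces to a single comparison; a sketch, assuming the transcription engine reports a per-file accuracy estimate:

```python
def needs_correction(estimated_accuracy: float, threshold: float = 0.95) -> bool:
    """Skip the correction (and associated training) step when the engine's own
    accuracy estimate meets the user's threshold."""
    return estimated_accuracy < threshold

# 97% estimated accuracy against a 95% threshold: deliver directly, skip correction.
print(needs_correction(0.97))  # False
```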
  • FIG. 6 illustrates an exemplary transcription, correction, and training method or process performed by the system 10 .
  • the transcription, correction, and training process of the system 10 can be a continual process by which files enter the system 10 and are moved through the series of steps shown in FIG. 6 .
  • the transcription server 20 receives audio data 100 from one or more data source email-clients 30 .
  • the transcription server 20 places the audio data 100 into one or more queues 54 (step 120 ).
  • the audio data 100 is transmitted from a queue 54 to a translation server 56 .
  • the translation server 56 transcribes the audio data to generate text data, and indexes the audio data (step 122 ).
  • the audio data and/or generated text data is made available to a user for review and/or correction via the email-client correction interface (step 124 ).
  • the user makes the corrections and submits the corrections to the training server 104 of the transcription server 20 (step 128 ).
  • the corrections are placed in a training queue and are prepared for archiving (step 130 ).
  • the training server 104 obtains all the corrected files from the training queue and begins a training cycle for an independent voice model (step 132 ). In other embodiments, the training server 104 obtains such corrected files immediately, rather than periodically.
  • the training server 104 can be a server that is separate from the transcription server 20 , and can update the transcription server 20 and/or other servers on a continuous or periodic basis. In other embodiments, the training server 104 , transcription server 20 , and any other servers associated with the system 10 are defined by the same computer. It should be understood that, as used herein and in the appended claims, the terms “server,” “queue,” “module”, etc. are intended to encompass hardware and/or software adapted to perform a particular function.
  • any portion or all of the transcription, correction, and training process performed by the system 10 can be performed by one or more polling managers (e.g., associated with the transcription server 20 , the training server 104 , or other servers).
  • the transcription server 20 and/or the training server 104 utilizes one or more “flags” to indicate a stage of a file.
  • these flags can include: (1) waiting for transcription; (2) transcription in progress; (3) waiting for correction; (4) correction completed; (5) waiting for training; (6) training in progress; (7) retention; (8) move to history pending; and (9) history.
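These nine flags form a simple linear progression; a sketch using an enum, with names paraphrased from the list above:

```python
import enum

class FileStage(enum.Enum):
    WAITING_FOR_TRANSCRIPTION = 1
    TRANSCRIPTION_IN_PROGRESS = 2
    WAITING_FOR_CORRECTION = 3
    CORRECTION_COMPLETED = 4
    WAITING_FOR_TRAINING = 5
    TRAINING_IN_PROGRESS = 6
    RETENTION = 7
    MOVE_TO_HISTORY_PENDING = 8
    HISTORY = 9

def next_stage(stage: FileStage) -> FileStage:
    """The polling manager advances a file one stage at a time, in order."""
    return FileStage(min(stage.value + 1, FileStage.HISTORY.value))
```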
  • the only action required by a user as a message moves through different stages of the system 10 is to indicate that correction of the message has been completed.
  • a less automated system can exist, requiring more input from a user during the transcription, correction, and training process.
  • Another example of a method by which messages are processed in the system 10 is illustrated in FIG. 7 .
  • a polling manager is used to control the timing of file processing in the system.
  • at least a portion of the transcription, correction, and training process is moved along by alternating actions of a polling manager.
  • the polling manager runs on a relatively short time interval to move files from stage to stage within the transcription, correction, and training process.
  • the polling manager can move multiple files in different stages to the next stage at the same time.
  • the polling manager locates files to enter the transcription, correction, and training process. For example, the polling manager can check a list of FTP servers/locations for new files. New files identified by the polling manager are downloaded (step 202 ) and added to the database (step 204 ). When a file arrives, the polling manager flags the file “waiting for transcription” (step 206 ). The polling manager then executes and moves the file to a transcription queue (step 208 ), after which time the next available server/processor transcribes the file (step 210 ) on a first-in, first-out basis, unless a different priority is assigned.
  • the polling manager flags the file “transcription in progress.”
  • the polling manager flags the file “waiting for correction” (step 212 ), and the file is made available to a user for correction (e.g., through the email-client correction interface).
  • the polling manager flags the file “correction completed” (step 214 ).
  • the polling manager then flags the file “waiting for training,” and moves the corrected file into a waiting to be trained queue (step 216 ).
  • the polling manager flags the file “training in progress.” After the training process, the polling manager flags the file “retention.” In some embodiments, a user-defined retention determines when and whether files are archived. During the time in which a file is being archived (step 220 ), the polling manager flags the file “move to history pending.” When a file has been archived, the polling manager flags the file “history.”
  • the archival process allows files to move out of the system 10 immediately or based at least in part upon set retention rules. Archived or historical files allow the system 10 to keep current files available quickly while older files can be encrypted, compressed, and stored. Archived files can also be returned to a user (step 222 ) in any manner as described above.
  • the email-client correction interface shows the stage of one or more files in the transcription, correction, and training process. This process can be automated and database driven so that all files are used to build and train the voice independent model.
  • a database-driven system 10 allows redundancy within the system. Multiple servers can share the load of the process described above. Also, multiple servers across different geographic regions can provide backup in the event of a natural disaster or other problem at one or more sites.
  • FIG. 8 illustrates a correction method according to an embodiment of the invention.
  • the correction process of FIG. 8 begins when audio data is received by the transcription server 20 and is transcribed (step 250 ).
  • the transcription server 20 can receive audio data from one or more devices running email-clients 30 , such as a computer 30 a , a PDA 30 b , a Blackberry device 30 c , a mobile phone 30 d , etc.
  • the transcription server 20 can send a correction notification to a user who is assigned to the correction of transcribed audio data associated with a particular owner or destination. For example, as the transcription server 20 transcribes voicemail messages for a particular member of an organization, the transcription server 20 can send a notification to a secretary or assistant of the member.
  • An administrator can use an interface of the system 10 (e.g., the email-client interface 60 ) to configure one or more recipients who are to receive the correction notifications for a particular destination (e.g., a particular email account).
  • An administrator can also specify settings for notifications, such as the type of notification to send (e.g., email, text, audio, etc.), the addresses or identifiers of the notification recipients (e.g., email addresses), the information to be included in the notifications, etc.
  • an administrator can establish rules for sending correction notifications, such as transcriptions associated with audio data received by the transcription server 20 from a particular audio data source should be corrected by particular users.
  • an administrator can set one or more accuracy thresholds, which can dictate when transcribed audio data skips the correction process.
  • FIG. 9 illustrates an email correction notification 254 according to an embodiment of the invention that is listed in an inbox 255 of an email application.
  • the email correction notification 254 is listed as an email message in the inbox 255 similar to other email messages 256 received from other sources.
  • the inbox 255 can display the sender of the email correction notification 254 (i.e., the transcription server 20 ), an account or destination associated with the audio data and generated text data (e.g., an account number), and an identifier of the source of the audio data (e.g., the name of an individual that sent the message).
  • the identifier of the source of the audio data can optionally include an address or location of the audio data source.
  • the inbox 255 lists additional information about the notification 254 , such as the size of the email correction notification 254 , the time the notification 254 was sent, and/or the date that the notification 254 was sent.
  • a user can select the notification 254 (e.g., by clicking on, highlighting, etc.) in the inbox 255 .
  • the email application can display the contents of the notification 254 , as shown in FIG. 10 .
  • the contents of the email correction notification 254 can include similar information as displayed in the inbox 255 .
  • the contents of the email correction notification 254 can also indicate the length of the audio data transcribed by the transcription server 20 and the day, date, and/or time that the audio data was received by the transcription server 20 .
  • the user can access the email-client correction interface from his or her email-client. However, if the user does not have access to the email-client correction interface, a link 257 to a web interface is provided in the email correction notification.
  • FIGS. 11-14 illustrate the email-client correction interface 260 according to an embodiment of the invention.
  • the user can access the email-client correction interface 260 to review and correct the generated text data (if needed) (step 262 ).
  • the email-client correction interface 260 is accessed from within the email-client. For example, when a user receives a correction notification indicating that the user has messages that either have been corrected or are ready to be corrected, the user can access the email-client correction interface 260 without launching a separate web browsing application.
  • a user can also reply directly to a correction notification that includes transcribed text, correct the transcribed text in the body of the message, and send the corrected transcribed text back to the transcription server 20. After the corrected transcribed text is received by the transcription server 20, the voice model is updated accordingly.
  • the user may first be prompted to enter credentials and/or identifying information via a login screen 264 of the interface 260 .
  • the login screen 264 can include one or more selection mechanisms and/or input mechanisms 266 that enable a user to select or enter credentials and/or identifying information.
  • the login screen 264 can include input mechanisms 266 for entering a username and a password.
  • the input mechanisms 266 can be case sensitive and/or can be limited to a predetermined set and/or number of characters. For example, the input mechanisms 266 can be limited to approximately 30 non-space characters.
  • a user can enter his or her username and password (e.g., as set by the user or an administrator) and can select a log in selection mechanism 268 .
  • a user can select a help selection mechanism 270 in order to access instructions, tips, help web pages, electronic manuals, etc. for the email-client correction interface 260 .
  • the email-client correction interface 260 verifies the entered information, and, if verified, the email-client correction interface 260 displays a main page 272 , as shown in FIG. 12 .
  • the main page 272 includes a navigation area 274 and a view area 276 .
  • the navigation area 274 includes one or more selection mechanisms for accessing standard functions of the email-client correction interface 260 .
  • the navigation area 274 includes a help selection mechanism 278 and a log off selection mechanism 280 .
  • a user can select the help selection mechanism 278 in order to access instructions, tips, help web pages, electronic manuals, etc. for the email-client correction interface 260 .
  • a user selects the log off selection mechanism 280 in order to exit the email-client correction interface 260 .
  • the email-client correction interface 260 returns the user to the login screen 264.
  • the navigation area 274 also includes an inbox selection mechanism 282 , a my history selection mechanism 284 , a settings selection mechanism 286 , a help selection mechanism 288 , and/or a log off selection mechanism 290 .
  • a user selects the inbox selection mechanism 282 in order to view the main page 272 .
  • the user selects the my history selection mechanism 284 in order to access previously corrected transcriptions.
  • the email-client correction interface 260 displays a history page (not shown) similar to the main page 272 that lists previously corrected transcriptions.
  • the history page can display correction date(s) for each transcription.
  • a user can select the settings selection mechanism 286 in order to access one or more setting pages (not shown) of the email-client correction interface 260 .
  • the setting pages can enable a user to change his or her notification preferences, email-client correction interface 260 preferences (e.g., change a username and/or password, set a time limit for transcriptions displayed in a history page), etc.
  • a user can use the settings pages to specify destination settings for audio data and/or generated text data, configure commands and keyboard shortcuts, specify accuracy thresholds, turn on or off particular features of the email-client correction interface 260 and/or the system 10 , etc.
  • the number and degree of settings configurable by a particular user via the settings pages are based on the permissions of the user.
  • An administrator can use the setting pages to specify global settings, group settings (e.g., associated with particular permissions), and individual settings.
  • an administrator can use a setting page of the email-client correction interface 260 to specify users of the email-client correction interface 260 and can establish usernames and passwords for users.
  • an administrator can use a setting page of the email-client correction interface 260 to specify notification parameters, such as who receives particular notifications, what type of notifications are sent, what information is included in the notifications, etc.
  • the view area 276 lists transcriptions (e.g., associated with the logged-in user) that need attention (e.g., correction).
  • the view area 276 includes one or more filter selection mechanisms 292 that a user can use to filter and/or sort the listed transcriptions. For example, a user can use a filter selection mechanism 292 to filter and/or sort transcriptions by creation date, priority, etc.
  • the view area 276 can also list additional information for each transcription. For example, as shown in FIG. 12, the view area 276 can list a file name, a checked-out-by parameter, a checked-out-on parameter, a creation date, and a priority for each listed transcription. The view area 276 can also include an edit selection mechanism 294 and a complete selection mechanism 296 for each transcription.
  • the user can select a transcription to correct (step 298 ).
  • As shown in FIG. 12, to correct a particular transcription, the user selects the edit selection mechanism 294 associated with the transcription.
  • the email-client correction interface 260 displays a correction page 300 , an example of which is shown in FIG. 13 .
  • the correction page 300 includes the navigation area 274 , as described above with respect to FIG. 12 , and a correction view area 302 .
  • the correction view area 302 displays the text data 303 generated by the transcription.
  • a user can edit the text data 303 by deleting text, inserting text, cutting text, copying text, etc. within the correction view area.
  • the correction view area 302 also includes a recording control area 304 .
  • the recording control area 304 can include one or more selection mechanisms for listening to or playing the audio data associated with the text data 303 displayed in the correction view area 302 .
  • the recording control area 304 can include a play selection mechanism 306 , a stop selection mechanism 308 , and a pause selection mechanism 310 .
  • a user can select the play selection mechanism 306 to play the audio data from the beginning and can select the stop selection mechanism 308 to stop the audio data.
  • a user can select the pause selection mechanism 310 to pause the audio data.
  • selecting the pause selection mechanism 310 after pausing the audio data causes the correction interface 260 to continue playing the audio data (e.g., from the point at which the audio data was paused).
  • the recording control area 304 can also include a continue from cursor selection mechanism 312 .
  • a user can select the continue from cursor selection mechanism 312 in order to start playing the audio data at a location corresponding to the position of the cursor within the text data 303 .
  • for example, if the cursor is positioned immediately before the word “Once” in the text data 303, the email-client correction interface 260 plays the audio data starting from the word “Once.”
  • the recording control area 304 also includes a playback control selection mechanism 314 that a user can use to specify a number of seconds of audio to play before the audio data corresponding to the cursor position. For example, as shown in FIG. 13, a user can specify 1 to 8 seconds using the playback control selection mechanism 314 (e.g., by dragging an indicator along the timeline or in another suitable manner).
  • the user can select the continue from cursor selection mechanism 312, which causes the email-client correction interface 260 to play the audio data starting at the cursor position minus the number of seconds specified by the playback control selection mechanism 314, as illustrated in the sketch below.
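  • By way of illustration, the “continue from cursor” behavior reduces to a simple timestamp computation. The sketch below assumes, hypothetically, that the transcription provides a start time for each word:

```python
# Illustrative sketch only: compute where playback should begin when
# "continue from cursor" is selected with a 1-8 second pre-roll. The
# per-word timestamps are an assumed data structure.
def playback_start(word_start_times: list[float], cursor_word: int,
                   pre_roll_seconds: float = 0.0) -> float:
    """word_start_times[i] is the start time, in seconds, of word i."""
    start = word_start_times[cursor_word] - pre_roll_seconds
    return max(start, 0.0)   # never seek before the start of the audio

# e.g., cursor on word 12 with 3 seconds of context:
# seek_to = playback_start(times, 12, pre_roll_seconds=3.0)
```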
  • the recording control area 304 also includes a speed control mechanism (not shown) that allows a user to decrease and increase the playback speed of audio data.
  • the recording control area 304 includes a speed control mechanism that includes one or more selection mechanisms (e.g., buttons, timelines, etc.). A user can select (e.g., click, drag, etc.) the selection mechanisms in order to increase or decrease the playback speed of the audio data.
  • the speed control mechanism can also include a selection mechanism that a user can select in order to play audio data at normal speed.
  • a user can hide the recording control area 304 .
  • the correction view area 302 can include one or more selection mechanisms 315 (e.g., tabs) that enable a user to choose whether to view the text data 303 only (e.g., by selecting a full text tab 315 a ) or to view the text data 303 and the recording control area 304 (e.g., by selecting a listen/text tab 315 b ).
  • the correction view area 302 can also include a save selection mechanism 316 .
  • a user can select the save selection mechanism 316 in order to save the current state of the corrected text data 303 .
  • a user can select the save selection mechanism 316 at any time during the correction process.
  • the correction view area 302 can also include a table 318 that lists, among other things, the system's confidence in its transcription quality. For example, as shown in FIG. 13 , the correction view area 302 can list the total number of words in the text data 303 , the number of low-confidence words in the text data 303 , the number of medium-confidence words in the text data 303 , and/or the number of high-confidence words in the text data. “Low” words can include words that are least likely to be correct. “Medium” words can include words that are moderately likely to be correct. “High” words can include words that are very likely to be correct.
  • if the number of low-confidence words in the text data 303 is close to the total number of words in the text data 303, it may be more efficient for the user to delete the text data 303 and manually retype it while listening to the corresponding audio data (see the sketch below). This situation may occur if the audio data was received from an audio data source from which the system 10 has not previously received data, or has not previously received a significant amount of data.
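  • A minimal sketch of how the confidence counts in table 318 might be computed follows; the bucket boundaries and the retype heuristic are assumptions for illustration, not values from the disclosure:

```python
# Illustrative sketch only: count words in each confidence bucket for
# table 318 and flag transcriptions that may be faster to retype.
def confidence_summary(word_confidences: list[float]) -> dict:
    buckets = {"low": 0, "medium": 0, "high": 0}
    for c in word_confidences:
        if c < 0.5:
            buckets["low"] += 1
        elif c < 0.8:
            buckets["medium"] += 1
        else:
            buckets["high"] += 1
    buckets["total"] = len(word_confidences)
    # When nearly all words are low confidence, retyping from the audio
    # may be faster than correcting word by word.
    buckets["suggest_retype"] = buckets["low"] > 0.8 * buckets["total"]
    return buckets
```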
  • the user corrects the transcription as necessary via the email-client correction interface 260 (step 320 ) and submits or saves the corrected transcription (step 322 ).
  • a user can select the save selection mechanism 316 included in the correction page 300 .
  • the email-client correction interface 260 displays a save options page 330 , as shown in FIG. 14 .
  • the save options page 330 can include the navigation area 274 , as described above with respect to FIGS. 12 and 13 , and a save options view area 332 .
  • the save options view area 332 can display one or more selection mechanisms for saving the current state of the corrected text data 303 .
  • the options view area 332 can include a save recording selection mechanism 334 , a save and mark as complete selection mechanism 336 , and a save, mark as complete and send to owner selection mechanism 338 .
  • a user can select the save recording selection mechanism 334 in order to save the current state of the text data 303 with any corrections made by the user. The user is then returned to the main page 272 .
  • a user may select the save recording selection mechanism 334 if the user has not finished making corrections to the text data 303 but wants to stop working on the corrections at the current time.
  • a user may also select the save recording selection mechanism 334 if the user wants to periodically save corrections when working on long transcriptions.
  • the save recording selection mechanism 334 is the default selection.
  • a user can select the save and mark as complete selection mechanism 336 in order to save the corrections made by the user and move the transcription to the user's history. Once the corrections are saved and moved to the history folder, the user can access the corrected transcription (e.g., via the history page of the email-client correction interface 260 ) but may not be able to edit the corrected transcription.
  • a user can select the save, mark as complete and send to owner selection mechanism 338 in order to save the corrected transcription, move the corrected transcription to the user's history folder, and send the corrected transcription and/or the associated audio data to the owner or destination of the audio data (e.g., the owner's email address).
  • a destination for corrected transcriptions can include files and multiple devices running email-clients.
  • the email-client correction interface 260 can send a message notification to the owner of the transcription that includes the corrected transcription (e.g., as text within the message or as an attached file).
  • FIG. 15 illustrates an email message notification 339 according to an embodiment of the invention. As shown in FIG. 15 , the notification 339 includes the corrected transcription.
  • the user can select an accept selection mechanism 340 in order to accept the selected option or can select a cancel selection mechanism 342 in order to cancel the selected option.
  • the email-client correction interface 260 returns the user to the correction page 300 .
  • a user can also select a complete selection mechanism 296 included in the main page 272 of the email-client correction interface 260 in order to submit or save transcriptions.
  • the email-client correction interface 260 displays the save options page 330 as described above with respect to FIG. 14 .
  • the email-client correction interface 260 automatically saves any previous corrections made to the transcription associated with the complete selection mechanism 296, moves the corrected transcription to the user's history folder, and sends the completed transcription and/or the corresponding audio data to the owner or destination associated with the transcription.
  • the transcription server 20 utilizes multiple threads to transcribe multiple files concurrently, as sketched below. This process can use a single database or a cluster of databases holding temporary information to assist multi-threaded transcription on the same or different machines.
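  • A minimal sketch of such concurrent transcription workers follows; the in-memory queue stands in for the database-backed queue described above, and all names here are illustrative assumptions:

```python
# Illustrative sketch only: a pool of transcription threads sharing one
# work queue. A real deployment would back the queue with the database
# (or database cluster) described above.
import queue
import threading

def transcribe(audio_path: str) -> str:
    # Stand-in for the actual speech-to-text call.
    return f"transcript of {audio_path}"

def worker(jobs: "queue.Queue[str | None]") -> None:
    while True:
        path = jobs.get()
        if path is None:        # sentinel: shut this worker down
            jobs.task_done()
            return
        try:
            text = transcribe(path)
            # ...store text, queue correction notifications, etc.
        finally:
            jobs.task_done()

jobs: "queue.Queue[str | None]" = queue.Queue()
workers = [threading.Thread(target=worker, args=(jobs,)) for _ in range(4)]
for w in workers:
    w.start()
```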
  • The functions of each system or device included in embodiments of the present invention can also be performed by one or more machines and/or one or more virtual machines.

Abstract

Methods and systems for requesting a transcription of audio data. One method includes displaying a send-for-transcription button within an email-client interface on a computer-controlled display, and automatically sending a selected email message and associated audio data to a transcription server as a request for a transcription of the associated audio data when a user selects the send-for-transcription button.

Description

    RELATED APPLICATIONS
  • The present application is a continuation-in-part of International Application PCT/US2007/066791 filed on Apr. 17, 2007, which claims priority to U.S. Provisional Application 60/792,640 filed on Apr. 17, 2006, the entire contents of which are both hereby incorporated by reference. The present application also claims priority to U.S. Provisional Application 60/992,187 filed on Dec. 4, 2007; U.S. Provisional Application 61/005,456 filed on Dec. 4, 2007; and U.S. Provisional Application 61/076,054 filed on Jun. 26, 2008, the entire contents of which are all hereby incorporated by reference.
  • BACKGROUND OF THE INVENTION
  • Each day individuals and companies receive multiple audio messages. These audio messages can include personal greetings and information or business-related instructions and information. In either case, it may be useful or required that the audio messages be transcribed in order to create written records of the messages.
  • Software currently exists that generates written text based on audio data. For example, Nuance Communications, Inc. provides a number of software programs, trademarked “Dragon,” that take audio files in .WAV format, .MP3 format, or other audio formats and translate such files into text files. The Dragon software also provides mechanisms for comparing audio files to text files in order to “learn” and improve future transcriptions. The “learning” mechanism included in the Dragon software, however, is only intended to learn based on a voice-dependent model, which means that the same person trains the software program over time. In addition, learning mechanisms in existing transcription software are often non-continuous and include set training parameters that limit the amount of training that is performed.
  • SUMMARY OF THE INVENTION
  • Embodiments of the present invention provide methods and systems for correcting transcribed text. One method includes a user sending, via an email-client interface, one or more emails that include audio data to be transcribed to a transcription server. The emails may be sent from one or more data sources running email-clients. The audio data is transcribed based on a voice model to generate text data. The method also includes making the text data available to the user over at least one computer network and receiving corrected text data over the at least one computer network from the user. In addition, the method includes modifying the voice model based on the corrected text data.
  • Embodiments of the present invention also provide systems for correcting transcribed text. One system includes a transcription server, at least one translation server, an email-client correction interface, and at least one training server. The transcription server receives audio data from one or more audio data sources, and the translation server can transcribe the audio data based on a voice model to generate text data. The email-client correction interface is accessible by a user from within an email-client and provides the user with access to the text data. The transcription server also receives corrected text data from one or more users. The training server then modifies the voice model based on the corrected text data.
  • Additional embodiments of the invention also provide methods of performing audio data transcription. One method includes obtaining audio data from at least one audio data source, such as a voice over IP system or a voicemail system, transcribing the audio data based on a voice-independent model to generate text data, and sending the text data to an owner of the audio data as an email message.
  • Embodiments of the invention also provide a method of requesting a transcription of audio data. The method includes displaying a send-for-transcription button within an email-client interface on a computer-controlled display, and automatically sending a selected email message and associated audio data to a transcription server as a request for a transcription of the associated audio data when a user selects the send-for-transcription button.
  • Further embodiments of the invention provide a system for requesting a transcription of audio data. The system includes a transcription server and an email-client interface. The email-client interface displays at least one email message associated with audio data to a user, displays a send-for-transcription button to the user, receives a selection of the at least one email message from the user, receives a selection of the send-for-transcription button from the user, and automatically sends the at least one email message and associated audio data to the transcription server as a request for a transcription of the associated audio data in response to the user's selection of the send-for-transcription button.
  • Additional embodiments of the invention also provide a system for generating a transcription of audio data. The system includes a transcription server and a translation server. The transcription server is configured to receive at least one email message and associated audio data from an email-client, identify an account based on the at least one email message, and obtain stored account settings associated with the identified account. The translation server is configured to generate a transcription of the associated audio data based on the account settings and a voice-independent model.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the drawings:
  • FIGS. 1 and 2 schematically illustrate systems for transcribing audio data according to various embodiments of the invention.
  • FIG. 3 illustrates an email-client interface according to an embodiment of the invention.
  • FIG. 4 illustrates a process for transcribing audio data using the email-client interface according to an embodiment of the invention.
  • FIG. 5 illustrates the transcription server of FIGS. 1 and 2 according to an embodiment of the invention.
  • FIG. 6 illustrates a file transcription, correction, and training method according to an embodiment of the invention.
  • FIG. 7 illustrates another file transcription, correction, and training method according to an embodiment of the invention.
  • FIG. 8 illustrates a correction method according to an embodiment of the invention.
  • FIGS. 9-10 illustrate a correction notification according to an embodiment of the invention.
  • FIGS. 11-14 illustrate an email-client correction interface according to an embodiment of the invention.
  • FIG. 15 illustrates a message notification according to an embodiment of the invention.
  • DETAILED DESCRIPTION
  • Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways.
  • In addition, it should be understood that embodiments of the invention include hardware, software, and electronic components or modules that, for purposes of discussion, may be illustrated and described as if the majority of the components were implemented solely in hardware. However, based on a reading of this detailed description, one of ordinary skill in the art would recognize that, in at least one embodiment, the electronic based aspects of the invention may be implemented in software. As such, it should be noted that a plurality of hardware and software based devices, as well as a plurality of different structural components, may be utilized to implement the invention. Furthermore, and as described in subsequent paragraphs, the specific configurations illustrated in the drawings are intended to exemplify embodiments of the invention. Other alternative configurations are possible.
  • FIG. 1 illustrates a transcription system 10 for transcribing audio data according to an embodiment of the invention. As shown in FIG. 1, the system 10 includes a transcription server 20, a data source running an email-client 30, and a third party device 40. The transcription server 20 includes, among other things, a voice file directory 52, a queue server 54, and a translation server 56. The transcription server is described in more detail below. The data source email-client 30 and the third party device 40 can be connected to the transcription server 20 via a wide area network 50 such as a cellular network or the Internet.
  • Information flow through the system 10 begins in the data source email-client 30. The data source email-client 30 can include a stand-alone email-client, such as Outlook manufactured by Microsoft™ or Lotus Notes manufactured by IBM™. In other embodiments, the data source email-client 30 can include a browser-based email-client, such as Hotmail, Gmail, Yahoo, AOL, etc. As described below, in addition to providing standard emailing operations, the data source email-client 30 can provide one or more email-client interfaces (e.g., via one or more plug-ins or additional software modules installed and used as part of the email-client 30) that allow a user to request, view, manage, and correct transcribed text data.
  • A user sends information from the data source email-client 30 through the wide area network 50 (e.g., a cellular network, the Internet, etc.) to the transcription server 20. The transcription server 20 places the information in the voice file directory 52 related to an account for the user that sent the information. The information to be transcribed is placed in the queue server 54 before being routed to the translation server 56 to be transcribed. After the information has been transcribed, it is sent back through the wide area network 50 and may, optionally, be sent to a third party device 40 for correction. In some embodiments, if the information is not sent to a third party device 40 for correction or if the third party device 40 has finished correcting the transcription, the information is sent back to the data source email-client 30.
  • FIG. 2 illustrates an exemplary embodiment of the network 10 from FIG. 1. The transcription server 20 can include or can be connected to an email server 20 a that receives email messages from a client computer 30 a or other devices running email-clients, such as a personal digital assistant (“PDA”) 30 b, a Blackberry device 30 c, or a mobile phone 30 d. In other embodiments, additional devices that support email-clients may also be used. The system 10 also includes a third party device 40. The third party device 40 can receive messages including transcribed text to be corrected or checked before the text is sent back to the user. As described below, in some embodiments, the third party device 40 provides one or more email-client interfaces for viewing and correcting transcribed text.
  • FIG. 3 illustrates an embodiment of an email-client interface 60. The email-client interface 60 allows a user to interact with the transcription server 20 from FIGS. 1 and 2. In some embodiments, the email-client interface 60 is provided through an email-client, such as the data source email-client 30. The email-client can include a stand-alone email-client, such as Outlook manufactured by Microsoft™ or Lotus Notes manufactured by IBM™. In other embodiments, the email-client can include a browser-based email-client, such as Hotmail, Gmail, Yahoo, AOL, etc. In some embodiments, the email-client interface 60 is provided by a plug-in or additional software module that is installed and used with the email-client, which allows a user to access and manage transcribed text from within a standard email-client and without having to launch and access a separate interface for managing transcribed text.
  • As shown in FIG. 3, the email-client interface 60 includes a send button 62, a quick play button 64, a search field 66, and an options button 68. The send button 60 allows the user to send one or more selected email messages that include audio data to the transcription server 20. The search field 66 allows a user to search messages that have already been sent to the transcription server 20. As a result, the search field 66 allows a user to access information within the transcription system 10 without having to access a web interface. The quick play button 64 allows the user to play audio data related to a message that has already been sent to the transcription server 20. The options button 68 allows a user to modify features related to the email-client interface 60 and an email-client correction interface described below. In some embodiments, the options button 68 allows a user to modify account settings related to delivery settings, transcription settings, format settings, and the like. In other embodiments, the email-client interface 60 includes additional buttons and functionality.
  • In conjunction with the email-client interface 60, the email-client correction interface is also accessed from within an email-client, such as the data source email-client 30 or an email-client executed by the third party device 40. In some embodiments, the email-client correction interface is also provided by a plug-in or additional software module that is installed and used with the email-client. The email-client correction interface can be part of the same plug-in providing the email-client interface 60.
  • The email-client correction interface allows a user to access a web-based correction interface from within an email-client, eliminating the need to launch a separate web browsing application or interface. Aspects of the email-client correction interface include, among other things, the ability to view and correct transcriptions of audio data, monitor the transcription status of audio data sent to the transcription server, and modify account settings. The email-client correction interface is described in greater detail below with respect to FIGS. 11-14.
  • FIG. 4 illustrates a process 70 for using the email-client interface 60 to send messages including audio data through the transcription system 10. The user selects one or more email messages including audio data to be transcribed (step 72). In some embodiments, the selected email messages include attached audio data representing voice mail messages. Selecting the email messages may include highlighting the messages, opening individual messages, or any other acceptable selection techniques. After step 72, the user selects the send button 62 from the email-client interface 60 to forward the selected email messages to the transcription server 20 (step 74). Additionally or alternatively, the user can reply to a message from the transcription server 20, make changes or corrections to the transcribed text, and send the message back to the transcription server 20, as described below.
  • When the messages arrive at the transcription server 20, identifying information is taken from the email messages to identify a user account (step 76), as sketched below. In some embodiments, the identifying information is metadata taken from the email message. The metadata may include, among other things, information such as a sender's email address and IP address. In other embodiments, identifying information is included in the body of the email message and extracted to identify a user account. After the account is identified, the message is sent to a voice file directory 52 related to that account (step 78). Account settings, such as, for example, destination information and formatting information, may be modified for each account. The account settings can be modified or accessed through a system interface, such as the email-client correction interface.
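  • By way of illustration, step 76 might be sketched as follows using standard email parsing from Python's standard library; the account lookup table and the choice of the From header are hypothetical stand-ins:

```python
# Illustrative sketch only: identify a user account from the metadata
# of an incoming email message (step 76).
from email import message_from_bytes
from email.utils import parseaddr

ACCOUNTS = {"user@example.com": "account-001"}   # illustrative only

def identify_account(raw_message: bytes) -> str | None:
    msg = message_from_bytes(raw_message)
    _, sender = parseaddr(msg.get("From", ""))
    return ACCOUNTS.get(sender.lower())
```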
  • The messages stored in the voice file directory 52 awaiting transcription are polled into a queue server 54 (step 80). The queue server 54 holds the messages until a translation server 56 becomes available. When a translation server 56 becomes available, the queue server 54 routes the messages to the available translation server 56 (step 82). The messages enter the translation server 56 and the audio data associated with the message is transcribed (step 84). As described below, the transcription server can also receive messages with corrected transcribed text. If the transcription server 20 receives a message including corrected transcribed text, the transcription server 20 compares the original transcribed text with the user-corrected transcribed text. After the transcription server 20 has compared the original and the user-corrected text, a message including the user-corrected text or the differences between the original text and the user-corrected text is sent to a training queue to update the voice model, as described below.
  • After the audio data has been transcribed, the transcribed text may be sent to a third party for correction or may be sent directly to one or more destinations specified in the user's account settings (step 86). As described above, the transcribed text can be sent to a destination in an email message (e.g., embedded or as an attached file). In some embodiments, if the transcribed text is not sent to a third party, it is sent directly to the training queue to update the voice model (step 90). If the transcribed text is sent to a third party for correction, the third party will correct the transcription using, for example, the email-client correction interface described below (step 88). After step 88, the transcribed and/or corrected text is sent to the training queue to update the voice model (step 90). The transcribed text is then sent back to the user (step 92). A more detailed description of the transcription server 20 is provided below.
  • As shown in FIG. 5, the transcription server 20 receives audio data 100 from one or more of the audio data sources 30. In some embodiments, as noted above, the transcription server 20 includes or is connected to one or more intermediary servers, such as an email server 20 a, that receive messages from the audio data sources 30. Additional intermediary servers may be present such as a voice over IP (“VoIP”) server 20 c and a short message service (“SMS”) server 20 b to receive audio data from additional sources. The messages can be received continuously or in batch form, and can be sent to the transcription server 20 and/or pulled by the transcription server 20 in any manner (e.g., continuously, in batch form, and the like). For example, in some embodiments, the transcription server 20 is adapted to request messages at regular intervals and/or to be responsive to a user command or to some other event. In some embodiments, rather than immediately transmitting the converted message(s) to the transcription server 20, the audio data sources 30 and/or any intermediary servers store the converted message(s) until requested by the transcription server 20 or a separate polling computer. By requesting messages from the audio data sources 30 and/or any intermediary servers, the transcription server 20 or the separate polling computer can manage the messages. For example, in one implementation, the transcription server 20 or a separate polling computer establishes a priority for received messages to be transcribed. The transcription server 20 or a separate polling computer also determines a source of a received message (e.g., the audio data source 30 that transmitted the message). For example, the transcription server 20 or separate polling computer can use metadata taken from the email containing audio data to identify the source of a particular message. In additional embodiments, other types of identifying data can be used to identify the source of a received message.
  • Once the transcription server 20 or separate polling computer receives one or more messages (received by request or otherwise), the transcription server 20 or separate polling computer places the messages and/or the associated audio data to be transcribed into one or more queue servers 54. The queue servers 54 look for an open or available processor or translation server 56. As shown in FIG. 5, the transcription server 20 includes multiple translation servers 56, although a different number of translation servers 56 (e.g., physical or virtual) are possible. Upon identifying an available translation server 56, the queue servers 54 route audio data to the available translation server 56. The translation server 56 transcribes the audio data to generate text data and, in some embodiments, indexes the message. The translation servers 56 index the messages using a database to identify discrete words. For example, the translation server 56 can use an extensible markup language (“XML”), structured query language (“SQL”), mySQL, idx, or other database language to identify discrete words or phrases within the transcribed text.
  • In addition to transcribing audio data included in messages as just described, some embodiments of a translation server 56 generate an index of keywords based upon the transcribed text. For example, in some embodiments, the translation server 56 removes those words that are less commonly searched and/or less useful for searching (e.g., I, the, a, an, but, and the like) from transcribed text, which leaves a number of keywords that can be stored in memory available to the translation servers 56. The resulting “keyword index” includes the exact positions of each keyword in the transcribed text, and, in some cases, includes the exact location of each keyword in the corresponding audio data. This keyword index enables users to perform searches on transcribed text. For example, a user accessing the transcribed text associated with particular audio data (whether for purposes of correcting any errors in the transcribed text or for searching within the transcribed text) can select one or more words from the keyword index of the message generated earlier. In so doing, the exact locations (e.g., page and/or line numbers) of such words can be provided quickly and efficiently—in many cases significantly faster and with less processing power than performing a standard search for the word through the entire transcribed text. The system 10 can provide the keyword index to a user in any suitable manner, such as in a pop-up or pull-down menu included in an interface of the system 10, such as the email-client correction interface, during text correction or searching of transcribed text (described below).
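  • A minimal sketch of such a keyword index follows; the stop-word list and tokenization rules are illustrative assumptions, not the disclosed implementation:

```python
# Illustrative sketch only: remove common stop words and map each
# remaining word to its exact positions in the transcribed text.
STOP_WORDS = {"i", "the", "a", "an", "but", "and", "or", "of", "to"}

def build_keyword_index(transcript: str) -> dict[str, list[int]]:
    index: dict[str, list[int]] = {}
    for position, word in enumerate(transcript.lower().split()):
        token = word.strip(".,!?\"'")
        if token and token not in STOP_WORDS:
            index.setdefault(token, []).append(position)
    return index

# index["flu"] -> every word position at which "flu" occurs, so the
# interface can jump to (and play) the matching part of the audio.
```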
  • Also, in some embodiments, a translation server 56 generates two or more possible candidates for a transcription of a spoken word or phrase from audio data. The most likely candidate is displayed or otherwise used to generate the transcribed text, and the less likely candidate(s) are saved in a memory accessible by the translation server 56 and/or by another server or third party device 40 as needed. This capability can be useful, for example, during correction of the transcribed text (described below). In particular, if a word in the transcribed text is wrong, a user can obtain other candidate(s) identified by the translation server 56 during transcription, which can speed up and/or simplify the correction process.
  • Once audio data is transcribed, the system 10 can allow a user to search transcribed text for particular words and/or phrases. This searching capability can be used during correction of transcribed text as described below or when a transcribed text file is searched for particular words (whether a search for such words is performed on the file alone or in combination with one or more other files). For example, using the indexed message, a user viewing generated text data can select a word or phrase included in the text data and, in some embodiments, can hear the corresponding portion of the audio data from which the text data was generated. In some embodiments, the system 10 is adapted to enable a user to search some or all transcribed text files accessible by the transcription server 20, regardless of whether such files have been corrected. Also, the system 10 can enable a user to search transcribed text using Boolean and/or other search terms.
  • Search results can be generated in a number of manners, such as in a table form enabling a user to select one or more files in which a word or phrase has been found and/or one or more locations at which a word or phrase has been found in particular text data. The search results can also be sorted in one or more manners according to one or more rules (e.g., date, relevance, number of instances in which the word or phrase has been found in text data, and the like) and can be printed, displayed, or exported as desired. In some embodiments, the search results also provide the text around the found word or phrase. The search results can also include additional information, such as the number of instances in which a word or phrase has been found in a transcribed text file and/or the number of transcribed text files in which a word or phrase has been found.
  • After the translation servers 56 index and translate audio data, the audio data and/or the generated text data is stored. The audio data and text data can be stored internally by the transcription server 20 or can be stored externally to one or more data storage devices (e.g., databases, servers, and the like). In some embodiments, a user (e.g., a user associated with a particular audio data source email-client 30) decides how long audio data and/or text data is stored by the transcription server 20, after which time the audio data and/or text data can be automatically deleted, over-written, or stored in another storage device (e.g., a relatively low-accessibility mass storage device). An interface of the system 10 (e.g., the email-client correction interface) enables a user to specify a time limit for audio data and/or text data stored by the transcription server 20.
  • As shown in FIGS. 1 and 2, a data source email-client 30 connects to the transcription server 20 over a network, such as the Internet, one or more local or wide-area networks 50, or the like, in order to obtain audio data and/or corresponding, generated text data. A user uses the data source email-client 30 to access the email-client correction interface associated with transcription server 20 to obtain generated text data and/or corresponding audio data. For example, using the email-client interface correction, the user can request particular audio data and/or the corresponding text data. The requested data is obtained from the transcription server 20 and/or a separate data storage device and is transmitted to the user for display via the interface.
  • The transcription server 20 sends audio data and/or corresponding generated text data to the user as an email message. The transcription server 20 can send an email message to a user that includes the audio data and the text data as attached files. In other embodiments, the transcription server 20 sends an email message to a user that includes a notification that audio data and/or text data is available for the user. A user uses the email-client correction interface in order to listen to the audio data, view the text data, and/or to correct the text data. As described above, in some embodiments, a user can reply to the email message sent from the transcription server 20, correct the transcription, and send the corrected transcription back to the transcription server 20. The transcription server then updates the voice model based on a comparison of the original transcribed text and the user-corrected transcribed text. If the user replies directly to the transcription server, the user does not need to access the email-client correction interface, web interface, or other interfaces of the system 10.
  • In other embodiments, the user can choose to correct only parts of transcribed text. If the user corrects only a portion of the transcribed text, the email-client (e.g., the email-client correction interface) recognizes that only a portion of the text has changed and transmits only the corrected portion of the text to the transcription server 20 for use in training the voice model. By submitting only the corrected or changed portion of the transcribed text, the amount of data transmitted to the transcription server 20 for processing is reduced. In other embodiments, another email-client interface, a web-based interface, the transcription server 20, or another device included in the system 10 can determine what portions of transcribed text have been changed and can limit transmission and/or processing of the changed text accordingly.
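  • By way of illustration, the changed portions of a transcript can be computed with a standard sequence comparison; the sketch below uses Python's difflib, and the payload format is a hypothetical assumption:

```python
# Illustrative sketch only: transmit just the corrected portions of a
# transcript back to the server for training, rather than the full text.
import difflib

def changed_portions(original: str, corrected: str) -> list[dict]:
    orig_words, corr_words = original.split(), corrected.split()
    matcher = difflib.SequenceMatcher(None, orig_words, corr_words)
    changes = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            changes.append({
                "op": op,                    # "replace", "delete", or "insert"
                "original_span": (i1, i2),   # word positions in the original
                "new_text": " ".join(corr_words[j1:j2]),
            })
    return changes   # only this list need be transmitted for training
```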
  • If a user forwards or sends an email message to the transcription server 20 that includes audio data, the transcription server 20 can send a return email message to the user after the transcription server 20 transcribes the submitted audio file. The email message can inform the user that the submitted audio data was transcribed and that corresponding text data is available. As previously noted, the email message from the transcription server 20 can include the submitted audio data and/or the generated text data.
  • The system 10 can also enable a user to provide destination settings for audio data and/or text data on a per-generated-text-data basis. In some embodiments, before or after audio data is transcribed, a user specifies a particular destination for the text data. As described above, certain implementations allow a user to specify destination settings in an email message. For example, if the user sends an email message to the transcription server 20 that includes audio data, the user can specify destination information in the email message. After the audio message is transcribed and the generated text data is corrected (if applicable), the transcription server 20 sends an email message to the identified recipient (e.g., via a SMTP server).
  • In some embodiments, to protect the privacy and security of the audio and text data, the transcription server 20 transmits data (e.g., audio data and/or text data) to the third party device 40 or another destination device using file transfer protocol (“FTP”). The transmitted data can also be protected by a secure socket layer (“SSL”) mechanism (e.g., a bank level certificate).
  • In one embodiment, the system 10 includes an email-client correction interface and a streaming translation server 102 that a user accesses (e.g., via the data source email-client 30) to view generated text. As described below with respect to FIG. 11, in some embodiments, the email-client correction interface and the streaming translation server 102 also enable a user to stream the entire audio data corresponding to the generated text data and/or to stream any desired portion of the audio data corresponding to selected text data. For example, the email-client correction interface and the streaming translation server 102 enable a user to select (e.g., click on, highlight, mouse over, etc.) a portion of the text in order to hear the corresponding audio data. In addition, in some embodiments, the email-client correction interface and the streaming translation server 102 enable a user to specify a number of seconds that the user desires to hear before and/or after a selected portion of text data.
  • The email-client correction interface also enables a user to correct generated text data. For example, if a user listens to audio data and determines that a portion of the corresponding generated text data is incorrect, the user can correct the generated text data via the email-client correction interface. In some embodiments, the email-client correction interface automatically identifies potentially incorrect portions of generated text data by displaying those portions in a particular color or other format (e.g., via a different font, highlighting, bold, italics, underlining, or any other manner). The email-client correction interface also displays portions of the generated text in various colors or other formats depending on the confidence that the portion of the generated text is correct. The email-client correction interface also inserts a placeholder (e.g., an image, an icon, etc.) into the text to mark portions of the generated text where text is missing (i.e., where the transcription server 20 could not generate text based on the audio data). A user selects the placeholder in order to hear the audio data corresponding to the missing text and can insert the missing text accordingly.
  • In order to assist a user in correcting generated text data, some embodiments of the email-client correction interface automatically generate words similar to incorrectly-generated words. In this regard, a user selects a word (e.g., by highlighting, clicking, or by any other suitable manner) within generated text data that is or appears to be incorrect. Upon such selection, the email-client correction interface suggests similar words, such as in a pop-up menu, pull-down menu, or in any other format. The user selects a word or words from the list of suggested words in order to make a desired correction.
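  • A minimal sketch of serving such suggestions from the saved transcription candidates (described above) follows; the candidates data structure is an illustrative assumption:

```python
# Illustrative sketch only: offer the less likely transcription
# candidates, saved during translation, as correction suggestions
# for a word the user has flagged as incorrect.
def suggestions(candidates: dict[int, list[str]], word_position: int,
                limit: int = 5) -> list[str]:
    """candidates maps a word position to its n-best list, most likely
    candidate first; the runners-up are shown as suggestions."""
    return candidates.get(word_position, [])[1:limit + 1]

# e.g., candidates[7] == ["their", "there", "they're"] -> suggest the
# last two when the user flags word 7 as incorrect.
```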
  • In some embodiments, the translation server(s) 56 are configured to automatically determine speakers in an audio file. For example, the translation server 56 processes audio files for drastic changes in voice or audio patterns. The translation server 56 then analyzes the patterns in order to identify the number of individuals or sources speaking in an audio file. In other embodiments, a user or information associated with the audio file (e.g., information included in the email message containing the audio data, or stored in a separate text file associated with the audio data) identifies the number of speakers in an audio file before the audio file is transcribed. For example, a user uses an interface of the system 10 (e.g., the email-client correction interface) to specify the number of speakers in an audio file before or after the audio file is transcribed.
  • After identifying the number of speakers in an audio file, the translation server(s) 56 can generate a speaker list that marks the number of speakers and/or the times in the audio file where each speaker speaks. The translation server(s) 56 can use the speaker list when creating or formatting the corresponding text data to provide markers or identifiers of the speakers (e.g., Speaker 1, Speaker 2, etc.) within the generated text data. In some embodiments, a user can update the speaker list in order to change the number of speakers included in an audio file, change the identifier of the speakers (e.g., to the names of the speakers), and/or specify that two or more speakers identified by the translation server(s) 56 relate to a single speaker or audio source. Also, in some embodiments, a user can use an interface of the system 10 (e.g., the email-client correction interface) to modify the speaker list or to upload a new speaker list. For example, a user can change the identifiers of the speakers by updating a field of the email-client correction interface that identifies a particular speaker. For example, each speaker identifier displayed within generated text data can be placed in a user-editable field. In some embodiments, changing an identifier of a speaker in one field automatically changes the identifier for the speaker throughout the generated text data.
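  • By way of illustration, applying a speaker list to generated text might be sketched as follows; the segment layout and the renaming map are hypothetical assumptions, not the disclosed format:

```python
# Illustrative sketch only: label generated text with speaker
# identifiers, and let a renaming map change an identifier everywhere
# at once (e.g., "Speaker 1" -> an actual name).
def label_speakers(segments: list[tuple[str, str]],
                   names: dict[str, str] | None = None) -> str:
    """segments: (speaker_id, text) pairs in spoken order."""
    names = names or {}
    return "\n".join(
        f"{names.get(speaker, speaker)}: {text}" for speaker, text in segments
    )

# label_speakers([("Speaker 1", "Hello."), ("Speaker 2", "Hi.")],
#                {"Speaker 1": "Dr. Smith"})
```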
  • In some embodiments, the system 10 also formats transcribed text data based on one or more templates, such as templates adapted for particular users or businesses (e.g., medical, legal, engineering, or other fields). For example, after generating text data, the system 10 (e.g., the translation server(s) 56) compares the text data with one or more templates. If the format or structure of the text data corresponds to the format or structure of a template and/or if the text data includes one or more keywords associated with a template, the system 10 formats the text data based on the template. For example, if the system 10 includes a template specifying the following format:
  • Date:
  • Type of Illness:
  • and text data generated by the system 10 is “the date today is September the 12th, the year is 2007, the illness is flu,” the system 10 automatically applies the template to the text data in order to create the following formatted text data:
  • Date: Sep. 12, 2007
  • Type of Illness: Flu
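  • By way of a non-limiting illustration, the template step in this example might be sketched as follows; the keyword-extraction rules (and the hard-coded month) are simplifying assumptions for illustration:

```python
# Illustrative sketch only: map keywords found in transcribed text onto
# the two-field template shown above.
import re
from datetime import datetime

def apply_illness_template(text: str) -> str:
    day = re.search(r"september the (\d+)", text, re.IGNORECASE)
    year = re.search(r"year is (\d{4})", text)
    illness = re.search(r"illness is (\w+)", text, re.IGNORECASE)
    date = datetime(int(year.group(1)), 9, int(day.group(1))) if day and year else None
    return "\n".join([
        f"Date: {date:%b. %d, %Y}" if date else "Date:",
        f"Type of Illness: {illness.group(1).title()}" if illness else "Type of Illness:",
    ])

# apply_illness_template("the date today is September the 12th, "
#                        "the year is 2007, the illness is flu")
# -> "Date: Sep. 12, 2007\nType of Illness: Flu"
```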
  • In some embodiments, the system 10 is configured to automatically apply a template to text data if text data corresponds to the template. Therefore, as the system 10 “learns” and improves its transcription quality, as described below, the system 10 also “learns” and improves its application of templates. In other embodiments, a user uses an interface of the system 10 (e.g., the email-client correction interface) to manually specify a template to be applied to text data. For example, a user can select a template to apply to text data from a drop down menu or other selection mechanism included in the interface.
  • The system 10 can store the formatted text data and can make the formatted text data available for review and correction, as described below. In some embodiments, the system 10 stores or retains the unformatted text data separately from the formatted text data. By retaining the unformatted text data, the text data can be applied to new or different templates. In addition, the system 10 can use the unformatted text data to train the system 10, as described below.
  • The system 10 is configured to allow a user to create a customized template and upload the template to the system. For example, a user uses a word processing application, such as Microsoft® Word®, to create a text file that defines the format and structure of a customized template. The user then uploads the text file to the system 10 using an interface of the system 10 (e.g., the email-client interface 60 and/or the email-client correction interface). In some embodiments, the system 10 reformats uploaded templates. For example, the system 10 can store predefined templates and/or customized templates in a mark-up language, such as XML or HTML.
  • Templates can be associated with a particular user or a group of users. For example, only users with particular permissions may be allowed to use or apply particular templates. In other embodiments, a user can upload one or more templates that only he or she can use or apply. Settings and restrictions for predefined and/or customized templates can be configured by a user or an administrator using an interface of the system 10.
  • In some embodiments, alternatively or in addition to configuring templates, the system 10 enables a user to configure one or more commands that replace transcribed text with different text. For example, a user configures the system 10 to insert the current date into text data whenever audio data and/or corresponding text data includes the word “date” or the phrases “today's date,” “current date,” or “insert today's date.” Similarly, in another embodiment, system 10 is configured to start a new paragraph within transcribed text data each time audio data and/or corresponding text data includes the word “paragraph,” the phrase “new paragraph,” or a similar identifier. The commands can be defined on a per user basis and/or on a group of users basis, and settings or restrictions for the commands can be set by a user or an administrator using the system 10.
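  • A minimal sketch of such command replacement follows; the command table is an illustrative assumption, and multi-word phrases such as “new paragraph” would need phrase matching, which is omitted for brevity:

```python
# Illustrative sketch only: replace single spoken command words in a
# transcript with different text, e.g. "date" -> the current date.
from datetime import date

COMMANDS = {
    "paragraph": lambda: "\n\n",
    "date": lambda: date.today().strftime("%B %d, %Y"),
}

def expand_commands(transcript: str) -> str:
    out = []
    for word in transcript.split():
        action = COMMANDS.get(word.lower().strip(".,"))
        out.append(action() if action else word)
    return " ".join(out)

# expand_commands("meeting held on date paragraph attendees were present")
```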
  • Some embodiments of the system 10 also enable a user correcting text data via the email-client correction interface to create commands and/or keyboard shortcuts. In one example, the system is configured so that a user can use the commands and/or keyboard shortcuts to stream audio data, add common words or phrases to text data, play audio data, pause audio data, or start or select objects or functions provided through the email-client correction interface or other interfaces of the system 10. In some embodiments, a user uses the email-client correction interface to configure the commands and/or keyboard shortcuts. The commands and/or keyboard shortcuts can be stored on a user level and/or a group level. An administrator can also configure commands and/or keyboard shortcuts that can be made available to one user or multiple users. For example, users with particular permissions may be allowed to use particular commands and/or keyboard shortcuts.
  • In one embodiment, the email-client correction interface reacts to commands spoken by the user. In another embodiment, the system 10 is configured to permit a user to create commands that, when spoken by the user, cause the email-client correction interface to perform certain actions. In some embodiments, the user can say “play,” “pause,” “forward,” “backward,” etc. to control the playing of the audio data by the email-client correction interface. Other commands insert, delete, or edit text in transcribed text data. For example, when the user says “date,” the email-client correction interface inserts date information into transcribed text data.
  • In some embodiments, the system 10 also performs translations of transcribed text data. For example, the email-client correction interface or another interface of the system 10 includes features that permit a user to request a translation of transcribed text data into another language. The transcription server 20 includes one or more language translation modules configured to create text data in a particular language based on generated text data in another language. The system 10 is also configured to accept a request from an audio source (e.g., an individual submitting an email message with an attached audio file) to translate the file into a specific language when the audio file is submitted to the transcription server 20.
  • With continued reference to the illustrated embodiment of FIG. 5, corrections made by a user through the email-client correction interface are transmitted to the transcription server 20. As shown in FIG. 5, the transcription server 20 includes a training server 104. The training server 104 can use the corrections made by a user to “learn” so that future incorrect translations are avoided. In some embodiments, since audio data is received from one or more audio data sources 30 representing multiple “speakers,” and since the email-client correction interface can be accessible over a network by multiple users, the training server 104 receives corrections from multiple users and, therefore, uses a voice independent model to learn from multiple speakers or audio data sources.
  • In some embodiments, the system 10 transcribes audio files of a predetermined size (e.g., over 20 minutes in length) in pieces in order to “pre-train” the translation server(s) 56. For example, the transcription server 20 and/or the translation server(s) 56 can divide an audio file into segments (e.g., 1 to 5 minute segments). The translation server(s) 56 can then transcribe one or more of the segments and the resulting text data can be made available to a user for correction (e.g., via the email-client correction interface). After the transcribed segments are corrected and any corrections are applied to the training server 104 in order to “teach” the system 10, the translation server(s) 56 transcribe the complete audio file. After the complete audio file is transcribed, the transcription of the complete audio file is made available to a user for correction. Using the small segments of the audio file to pre-train the translation server(s) 56 helps increase the accuracy of the transcription of the complete audio file, which can save time and can prevent errors. In some embodiments, the complete audio file is transcribed before or in parallel with one or more smaller segments of the same audio file. Once the complete audio file is transcribed, a user can then immediately review and correct the text for the complete audio file or can wait until the individual segments are transcribed and corrected before correcting the text of the complete audio file. In addition, a user can request a re-transcription of the complete audio file after one or more individual segments are transcribed and corrected. In some embodiments, if the complete audio file is transcribed before or in parallel with smaller segments and the transcription of the complete audio file has not been corrected by the time the individual segments are transcribed and corrected, the transcription server 20 and/or the translation server(s) 56 automatically re-transcribes the complete audio file.
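By way of illustration only, the following Python sketch plans the pre-training segmentation described above; the 20-minute threshold and 2-minute segment length follow the examples in the text, while the function and constant names are assumptions.

```python
SEGMENT_SECONDS = 120        # within the 1 to 5 minute range given above
LONG_FILE_SECONDS = 20 * 60  # the example threshold for "large" files

def plan_pretraining_segments(duration: float) -> list:
    """Return (start, end) offsets, in seconds, of the segments that are
    transcribed and corrected first to pre-train the voice model; files
    at or under the threshold are processed whole."""
    if duration <= LONG_FILE_SECONDS:
        return [(0.0, duration)]
    segments, start = [], 0.0
    while start < duration:
        end = min(start + SEGMENT_SECONDS, duration)
        segments.append((start, end))
        start = end
    return segments

print(plan_pretraining_segments(25 * 60))  # a 25-minute file -> 2-minute pieces
```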
  • The voice independent model developed by the transcription server 20 can be shared and used by multiple transcription servers 20. For example, in some embodiments, the voice independent model developed by a transcription server 20 can be copied to or shared with other transcription servers 20. The model can be copied to other transcription servers 20 based on a predetermined schedule, anytime the model is updated, on a manual basis, etc. In some embodiments, a lead transcription server 20 collects audio and text data from other transcription servers 20 (e.g., audio and text data which has not been applied to a training server) and transfers the data to a lead training server 104. The lead transcription server 20 can collect the audio and text data during periods of low network or processor usage. The individual training servers 104 of one or more transcription servers 20 can also take turns processing batches of audio data and copying updated voice models to other transcription servers 20 (e.g., in a predetermined sequence or schedule), which can ensure that each transcription server 20 is using the most up-to-date voice model.
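By way of illustration only, a Python sketch of copying an updated voice model to peer servers follows; local directories stand in for remote transcription servers 20, and all names here are assumptions.

```python
import pathlib
import shutil
import tempfile

def replicate_model(model_path: pathlib.Path, peers: list) -> None:
    """Copy the current voice independent model to each peer's model
    directory (simulated here as local folders)."""
    for peer in peers:
        shutil.copy2(model_path, peer / model_path.name)

# Demonstration with temporary folders standing in for other servers.
with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    model = root / "voice_model.bin"
    model.write_bytes(b"model-weights")
    peers = [root / f"server{i}" for i in range(3)]
    for peer in peers:
        peer.mkdir()
    replicate_model(model, peers)
    print([sorted(f.name for f in peer.iterdir()) for peer in peers])
```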
  • In some embodiments, individuals may be hired to correct transcribed audio files (“correctors”), and the correctors may be paid on a per-line, per-word, per-file, or time basis, and the transcription server 20 can track performance data for the correctors. The performance data can include line counts, usage counts, word counts, etc. for individual correctors and/or groups of correctors. In some embodiments, the transcription server 20 enables a user (e.g., an administrator) to access the performance data via an interface of the system 10 (e.g., an email-client correction interface or a website). The user can use the interface to input personal information associated with the performance data, such as the correctors' names, employee numbers, etc. In some embodiments, the user can also use the interface to initiate and/or specify payments to be made to the correctors. The performance data (and any related information provided by a user, such as an administrator) can be stored in a database and/or can be exported to an external accounting system, such as accounting systems and solutions provided by Paychex, Inc. or QuickBooks® provided by Intuit, Inc. The transcription server 20 can send the performance data to an external accounting system via a direct connection or an indirect connection, such as the Internet. The transcription server 20 can also generate a file that can be stored to a portable data storage medium (e.g., a compact disk, a jump drive, etc.). The file can then be uploaded to an external accounting system from the portable data storage medium. An external accounting system can use the performance data to pay the correctors, generate financial documents, etc.
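By way of illustration only, the following Python sketch serializes per-corrector performance data as CSV for upload to an external accounting system; the field names and sample records are assumptions.

```python
import csv
import io

def export_performance(records: list) -> str:
    """Serialize per-corrector performance data (line and word counts)
    for an external accounting system; field names are illustrative."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["name", "employee_number", "lines", "words"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

print(export_performance([
    {"name": "A. Smith", "employee_number": "1001", "lines": 420, "words": 5310},
    {"name": "B. Jones", "employee_number": "1002", "lines": 365, "words": 4890},
]))
```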
  • In some embodiments, a user may not desire or need transcribed text data to be corrected. For example, a user may not want text data that is substantially accurate to be corrected. In these situations, the system 10 can allow a user to designate an accuracy threshold, and the system 10 can apply the threshold to determine whether text data should be corrected. For example, if generated text data has a percentage or other measurement of accurate words (as determined by the transcription server 20) that is equal to or greater than the accuracy threshold specified by the user, the system 10 can allow the text data to skip the correction process (and the associated training or learning process). The system 10 can deliver any generated text data that skips the correction process directly to its destination (e.g., directly sent to a user via an email message, directly stored to a database, etc.). In some embodiments, the accuracy threshold can be set by a user using any described interface of the system 10. The threshold can be applied to all text data or only to particular text data (e.g., only text data generated based on audio data received from a particular audio source, only text data that is associated with a particular destination, etc.).
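By way of illustration only, the accuracy-threshold check reduces to a comparison like the following Python sketch; the function name and the example threshold values are assumptions.

```python
def needs_correction(accuracy: float, threshold: float) -> bool:
    """True if generated text data should enter the correction (and
    training) workflow; text data at or above the user's accuracy
    threshold is delivered directly to its destination."""
    return accuracy < threshold

print(needs_correction(0.97, 0.95))  # False -> skip correction, deliver
print(needs_correction(0.80, 0.95))  # True  -> route to a corrector
```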
  • FIG. 6 illustrates an exemplary transcription, correction, and training method or process performed by the system 10. The transcription, correction, and training process of the system 10 can be a continual process by which files enter the system 10 and are moved through the series of steps shown in FIG. 6. As shown in FIG. 6 (also with reference to FIGS. 1-3), the transcription server 20 receives audio data 100 from one or more data source email-clients 30. Next, the transcription server 20 places the audio data 100 into one or more queues 54 (step 120). Once a translation server or processor 56 is available, the audio data 100 is transmitted from a queue 54 to a translation server 56. The translation server 56 transcribes the audio data to generate text data, and indexes the audio data (step 122).
  • After the audio data is indexed and transcribed, the audio data and/or generated text data is made available to a user for review and/or correction via the email-client correction interface (step 124). If the text data needs to be corrected (step 126), the user makes the corrections and submits the corrections to the training server 104 of the transcription server 20 (step 128). The corrections are placed in a training queue and are prepared for archiving (step 130). Periodically, the training server 104 obtains all the corrected files from the training queue and begins a training cycle for the voice independent model (step 132). In other embodiments, the training server 104 obtains such corrected files immediately, rather than periodically. The training server 104 can be a server that is separate from the transcription server 20, and can update the transcription server 20 and/or other servers on a continuous or periodic basis. In other embodiments, the training server 104, transcription server 20, and any other servers associated with the system 10 are defined by the same computer. It should be understood that, as used herein and in the appended claims, the terms “server,” “queue,” “module,” etc. are intended to encompass hardware and/or software adapted to perform a particular function.
  • Any portion or all of the transcription, correction, and training process performed by the system 10 can be performed by one or more polling managers (e.g., associated with the transcription server 20, the training server 104, or other servers). In some embodiments, the transcription server 20 and/or the training server 104 utilizes one or more “flags” to indicate a stage of a file. By way of example only, these flags can include: (1) waiting for transcription; (2) transcription in progress; (3) waiting for correction; (4) correction completed; (5) waiting for training; (6) training in progress; (7) retention; (8) move to history pending; and (9) history.
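By way of illustration only, the nine flags might be modeled as an enumeration, as in the following Python sketch; the enum itself is an assumption, since the specification names only the flag values.

```python
from enum import Enum

class FileStage(Enum):
    """The nine stage flags listed above, in processing order."""
    WAITING_FOR_TRANSCRIPTION = 1
    TRANSCRIPTION_IN_PROGRESS = 2
    WAITING_FOR_CORRECTION = 3
    CORRECTION_COMPLETED = 4
    WAITING_FOR_TRAINING = 5
    TRAINING_IN_PROGRESS = 6
    RETENTION = 7
    MOVE_TO_HISTORY_PENDING = 8
    HISTORY = 9

print(FileStage.WAITING_FOR_TRANSCRIPTION.name)
```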
  • In some embodiments, the only action required by a user as a message moves through different stages of the system 10 is to indicate that correction of the message has been completed. In other embodiments, a less automated system can exist, requiring more input from a user during the transcription, correction, and training process.
  • Another example of a method by which messages are processed in the system 10 is illustrated in FIG. 7. In this embodiment, a polling manager is used to control the timing of file processing in the system. In particular, at least a portion of the transcription, correction, and training process is moved along by alternating actions of a polling manager. In some embodiments, the polling manager runs on a relatively short time interval to move files from stage to stage within the transcription, correction, and training process. Although not required, the polling manager can move multiple files in different stages to the next stage at the same time.
  • With reference to the exemplary embodiment illustrated in FIG. 7, the polling manager locates files to enter the transcription, correction, and training process. For example, the polling manager can check a list of FTP servers/locations for new files. New files identified by the polling manager are downloaded (step 202) and added to the database (step 204). When a file arrives, the polling manager flags the file “waiting for transcription” (step 206). The polling manager then executes and moves the file to a transcription queue (step 208), after which time the next available server/processor transcribes the file (step 210) on a first-in, first-out basis, unless a different priority is assigned. Once the file is assigned to a server/processor for transcription, the polling manager flags the file “transcription in progress.” When transcription of the file is complete, the polling manager flags the file “waiting for correction” (step 212), and the file is made available to a user for correction (e.g., through the email-client correction interface). When a user is done correcting the file, the polling manager flags the file “correction completed” (step 214). The polling manager then flags the file “waiting for training,” and moves the corrected file into a waiting to be trained queue (step 216). During the time in which the training process runs (step 218), the polling manager flags the file “training in progress.” After the training process, the polling manager flags the file “retention.” In some embodiments, a user-defined retention determines when and whether files are archived. During the time in which a file is being archived (step 220), the polling manager flags the file “move to history pending.” When a file has been archived, the polling manager flags the file “history.”
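By way of illustration only, the polling manager's stage advancement can be sketched as a linear state machine over these flags, as in the following Python sketch; the advance function is an assumption.

```python
# The flag sequence from FIG. 7, in the order the polling manager applies it.
STAGES = [
    "waiting for transcription", "transcription in progress",
    "waiting for correction", "correction completed",
    "waiting for training", "training in progress",
    "retention", "move to history pending", "history",
]

def advance(flag: str) -> str:
    """Move a file's flag to the next stage; 'history' is terminal."""
    i = STAGES.index(flag)
    return STAGES[min(i + 1, len(STAGES) - 1)]

flag = "waiting for transcription"
while flag != "history":
    flag = advance(flag)
    print(flag)
```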
  • The archival process moves files out of the system 10 immediately or based at least in part upon set retention rules. Archiving allows the system 10 to keep current files quickly available while older files are encrypted, compressed, and stored.
  • In some embodiments, the email-client correction interface shows the stage of one or more files in the transcription, correction, and training process. This process can be automated and database driven so that all files are used to build and train the voice independent model.
  • It should be noted that a database-driven system 10 allows redundancy within the system. Multiple servers can share the load of the process described above. Also, multiple servers across different geographic regions can provide backup in the event of a natural disaster or other problem at one or more sites.
  • FIG. 8 illustrates a correction method according to an embodiment of the invention. The correction process of FIG. 8 begins when audio data is received by the transcription server 20 and is transcribed (step 250). As described above with respect to FIGS. 1-2, the transcription server 20 can receive audio data from one or more devices running email-clients 30, such as a computer 30 a, a PDA 30 b, a BlackBerry® device 30 c, a mobile phone 30 d, etc.
  • The transcription server 20 can send a correction notification to a user who is assigned to correct transcribed audio data associated with a particular owner or destination. For example, as the transcription server 20 transcribes voicemail messages for a particular member of an organization, the transcription server 20 can send a notification to a secretary or assistant of the member. An administrator can use an interface of the system 10 (e.g., the email-client interface 60) to configure one or more recipients who are to receive the correction notifications for a particular destination (e.g., a particular email account). An administrator can also specify settings for notifications, such as the type of notification to send (e.g., email, text, audio, etc.), the addresses or identifiers of the notification recipients (e.g., email addresses), the information to be included in the notifications, etc. For example, an administrator can establish rules for sending correction notifications, such as a rule that transcriptions associated with audio data received by the transcription server 20 from a particular audio data source should be corrected by particular users. In addition, as described above, an administrator can set one or more accuracy thresholds, which can dictate when transcribed audio data skips the correction process.
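By way of illustration only, such routing rules might reduce to a lookup like the following Python sketch; the addresses and the rule table are hypothetical.

```python
# Hypothetical rule table: destination account -> assigned correctors.
ROUTING_RULES = {
    "member@example.com": ["assistant@example.com"],
    "default": ["correction-pool@example.com"],
}

def correction_recipients(destination: str) -> list:
    """Return who receives the correction notification for a
    transcription addressed to the given destination account."""
    return ROUTING_RULES.get(destination, ROUTING_RULES["default"])

print(correction_recipients("member@example.com"))
print(correction_recipients("unlisted@example.com"))
```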
  • FIG. 9 illustrates an email correction notification 254, according to an embodiment of the invention, listed in an inbox 255 of an email application. As shown in FIG. 9, the email correction notification 254 is listed as an email message in the inbox 255, similar to other email messages 256 received from other sources. For example, the inbox 255 can display the sender of the email correction notification 254 (i.e., the transcription server 20), an account or destination associated with the audio data and generated text data (e.g., an account number), and an identifier of the source of the audio data (e.g., the name of an individual that sent the message). As shown in FIG. 9, the identifier of the source of the audio data can optionally include an address or location of the audio data source. In some embodiments (e.g., depending on the email application used), the inbox 255 lists additional information about the notification 254, such as the size of the email correction notification 254, the time the notification 254 was sent, and/or the date that the notification 254 was sent.
  • To read the email correction notification 254, a user can select the notification 254 (e.g., by clicking on or highlighting it) in the inbox 255. After the user selects the notification 254, the email application can display the contents of the notification 254, as shown in FIG. 10. The contents of the email correction notification 254 can include similar information as displayed in the inbox 255. The contents of the email correction notification 254 can also indicate the length of the audio data transcribed by the transcription server 20 and the day, date, and/or time that the audio data was received by the transcription server 20. To correct the transcription, the user can access the email-client correction interface from his or her email-client. However, if the user does not have access to the email-client correction interface, a link 257 to a web interface is provided in the email correction notification.
  • FIGS. 11-14 illustrate the email-client correction interface 260 according to an embodiment of the invention. After a user receives a correction notification 254, the user can access the email-client correction interface 260 to review and correct the generated text data (if needed) (step 262). The email-client correction interface 260 is accessed from within the email-client. For example, when a user receives a correction notification indicating that the user has messages that either have been corrected or are ready to be corrected, the user can access the email-client correction interface 260 without launching a separate web browsing application. Additionally, a user can reply directly to a correction notification that includes transcribed text, correct the transcribed text in the body of the message, and send the corrected transcribed text back to the transcription server 20. After sending the corrected transcribed text back to the transcription server 20, the voice model is updated accordingly.
  • As shown in FIG. 11, to access the email-client correction interface 260, the user may first be prompted to enter credentials and/or identifying information via a login screen 264 of the interface 260. For example, the login screen 264 can include one or more selection mechanisms and/or input mechanisms 266 that enable a user to select or enter credentials and/or identifying information. As shown in FIG. 11, the login screen 264 can include input mechanisms 266 for entering a username and a password. The input mechanisms 266 can be case sensitive and/or can be limited to a predetermined set and/or number of characters. For example, the input mechanisms 266 can be limited to approximately 30 non-space characters. A user can enter his or her username and password (e.g., as set by the user or an administrator) and can select a log in selection mechanism 268. Alternatively, a user can select a help selection mechanism 270 in order to access instructions, tips, help web pages, electronic manuals, etc. for the email-client correction interface 260.
  • After the user enters his or her credentials and/or identifying information, the email-client correction interface 260 verifies the entered information, and, if verified, the email-client correction interface 260 displays a main page 272, as shown in FIG. 12. The main page 272 includes a navigation area 274 and a view area 276. The navigation area 274 includes one or more selection mechanisms for accessing standard functions of the email-client correction interface 260. For example, as shown in FIG. 12, the navigation area 274 includes a help selection mechanism 278 and a log off selection mechanism 280. As described above, a user can select the help selection mechanism 278 in order to access instructions, tips, help web pages, electronic manuals, etc. for the email-client correction interface 260. A user selects the log off selection mechanism 280 in order to exit the email-client correction interface 260. In some embodiments, if a user selects the log off selection mechanism 280, the email-client correction interface 260 returns the user to the login screen 264.
  • As shown in FIG. 12, the navigation area 274 also includes an inbox selection mechanism 282, a my history selection mechanism 284, a settings selection mechanism 286, a help selection mechanism 288, and/or a log off selection mechanism 290. A user selects the inbox selection mechanism 282 in order to view the main page 272. The user selects the my history selection mechanism 284 in order to access previously corrected transcriptions. In some embodiments, if a user selects the my history selection mechanism 284, the email-client correction interface 260 displays a history page (not shown) similar to the main page 272 that lists previously corrected transcriptions. Alternatively or in addition to displaying the information displayed in the main page 272 (e.g., file name, checked out by, checked in by, creation date, priority), the history page can display correction date(s) for each transcription.
  • A user can select the settings selection mechanism 286 in order to access one or more setting pages (not shown) of the email-client correction interface 260. The setting pages can enable a user to change his or her notification preferences, email-client correction interface 260 preferences (e.g., change a username and/or password, set a time limit for transcriptions displayed in a history page), etc. For example, as described above, a user can use the settings pages to specify destination settings for audio data and/or generated text data, configure commands and keyboard shortcuts, specify accuracy thresholds, turn on or off particular features of the email-client correction interface 260 and/or the system 10, etc. In some embodiments, the number and degree of settings configurable by a particular user via the settings pages are based on the permissions of the user. An administrator can use the setting pages to specify global settings, group settings (e.g., associated with particular permissions), and individual settings. In addition, an administrator can use a setting page of the email-client correction interface 260 to specify users of the email-client correction interface 260 and can establish usernames and passwords for users. Furthermore, as described above with respect to FIGS. 9 and 10, an administrator can use a setting page of the email-client correction interface 260 to specify notification parameters, such as who receives particular notifications, what type of notifications are sent, what information is included in the notifications, etc.
  • As shown in FIG. 12, the view area 276 lists transcriptions (e.g., associated with the logged-in user) that need attention (e.g., correction). In some embodiments, the view area 276 includes one or more filter selection mechanisms 292 that a user can use to filter and/or sort the listed transcriptions. For example, a user can use a filter selection mechanism 292 to filter and/or sort transcriptions by creation date, priority, etc.
  • The view area 276 can also list additional information for each transcription. For example, as shown in FIG. 12, the view area 276 can list a file name, a checked out by parameter, a checked out on parameter, a creation date, and a priority for each listed transcription. The view area 276 can also include an edit selection mechanism 294 and a complete selection mechanism 296 for each transcription.
  • Returning to FIG. 8, after a user accesses the email-client correction interface 260, the user can select a transcription to correct (step 298). As shown in FIG. 12, to correct a particular transcription, the user selects the edit selection mechanism 294 associated with the transcription. When a user selects an edit selection mechanism 294, the email-client correction interface 260 displays a correction page 300, an example of which is shown in FIG. 13. The correction page 300 includes the navigation area 274, as described above with respect to FIG. 12, and a correction view area 302. The correction view area 302 displays the text data 303 generated by the transcription. A user can edit the text data 303 by deleting text, inserting text, cutting text, copying text, etc. within the correction view area.
  • In some embodiments, the correction view area 302 also includes a recording control area 304. The recording control area 304 can include one or more selection mechanisms for listening to or playing the audio data associated with the text data 303 displayed in the correction view area 302. For example, as shown in FIG. 13, the recording control area 304 can include a play selection mechanism 306, a stop selection mechanism 308, and a pause selection mechanism 310. A user can select the play selection mechanism 306 to play the audio data from the beginning and can select the stop selection mechanism 308 to stop the audio data. Similarly, a user can select the pause selection mechanism 310 to pause the audio data. In some embodiments, selecting the pause selection mechanism 310 after pausing the audio data causes the correction interface 260 to continue playing the audio data (e.g., from the point at which the audio data was paused).
  • As shown in FIG. 13, the recording control area 304 can also include a continue from cursor selection mechanism 312. A user can select the continue from cursor selection mechanism 312 in order to start playing the audio data at a location corresponding to the position of the cursor within the text data 303. For example, if a user places a cursor within the text data 303 before the word “Once” and selects the continue from cursor selection mechanism 312, the email-client correction interface 260 plays the audio data starting from the word “Once.” In some embodiments, the recording control area 304 also includes a playback control selection mechanism 314 that a user can use to specify a number of seconds of audio to play before the cursor position. For example, as shown in FIG. 13, a user can specify 1 to 8 seconds using the playback control selection mechanism 314 (e.g., by dragging an indicator along the timeline or in another suitable manner). After setting the playback control selection mechanism 314, the user can select the continue from cursor selection mechanism 312, which causes the email-client correction interface 260 to play the audio data starting at the cursor position minus the number of seconds specified by the playback control selection mechanism 314.
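By way of illustration only, the continue-from-cursor behavior amounts to an offset computation like the following Python sketch; the word-level index and the function name are assumptions (the specification states only that audio data is indexed during transcription).

```python
# Hypothetical word-level index from transcription: word -> offset (seconds).
index = [("Once", 12.4), ("upon", 12.9), ("a", 13.1), ("time", 13.3)]

def playback_start(word_pos: int, rewind_seconds: float) -> float:
    """Start offset for 'continue from cursor': the audio position of the
    word under the cursor, minus the 1-8 second lead-in set on the
    playback control selection mechanism."""
    return max(0.0, index[word_pos][1] - rewind_seconds)

print(playback_start(0, 3.0))  # plays from 9.4 s, 3 s before "Once"
```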
  • In some embodiments, the recording control area 304 also includes a speed control mechanism (not shown) that allows a user to decrease and increase the playback speed of audio data. For example, the recording control area 304 includes a speed control mechanism that includes one or more selection mechanisms (e.g., buttons, timelines, etc.). A user can select (e.g., click, drag, etc.) the selection mechanisms in order to increase or decrease the playback speed of audio data by a particular amount. In some embodiments, the speed control mechanism can also include a selection mechanism that a user can select in order to play audio data at normal speed.
  • In some embodiments, a user can hide the recording control area 304. For example, as shown in FIG. 13, the correction view area 302 can include one or more selection mechanisms 315 (e.g., tabs) that enable a user to choose whether to view the text data 303 only (e.g., by selecting a full text tab 315 a) or to view the text data 303 and the recording control area 304 (e.g., by selecting a listen/text tab 315 b).
  • The correction view area 302 can also include a save selection mechanism 316. A user can select the save selection mechanism 316 in order to save the current state of the corrected text data 303. A user can select the save selection mechanism 316 at any time during the correction process.
  • The correction view area 302 can also include a table 318 that lists, among other things, the system's confidence in its transcription quality. For example, as shown in FIG. 13, the correction view area 302 can list the total number of words in the text data 303, the number of low-confidence words in the text data 303, the number of medium-confidence words in the text data 303, and/or the number of high-confidence words in the text data. “Low” words can include words that are least likely to be correct. “Medium” words can include words that are moderately likely to be correct. “High” words can include words that are very likely to be correct. In some embodiments, if the number of low words in the text data 303 is close to the number of total words in the text data 303, it may be useful for the user to delete the text data 303 and manually retype the text data 303 by listening to the corresponding audio data. This situation may occur if the audio data was received from an audio data source that the system 10 has not previously received data from or has not previously received significant data from.
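By way of illustration only, the counts in table 318 could be produced by bucketing per-word recognizer confidences, as in the following Python sketch; the bucket boundaries are assumptions, since the specification does not define “low,” “medium,” and “high” numerically.

```python
def confidence_summary(words: list) -> dict:
    """Bucket per-word confidence scores into the total/low/medium/high
    counts shown in table 318; the boundaries here are assumed."""
    summary = {"total": len(words), "low": 0, "medium": 0, "high": 0}
    for _word, confidence in words:
        if confidence < 0.4:
            summary["low"] += 1
        elif confidence < 0.75:
            summary["medium"] += 1
        else:
            summary["high"] += 1
    return summary

print(confidence_summary(
    [("Once", 0.93), ("upon", 0.55), ("a", 0.97), ("thyme", 0.21)]))
```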
  • Returning to FIG. 8, after a user selects a transcription to correct, the user corrects the transcription as necessary via the email-client correction interface 260 (step 320) and submits or saves the corrected transcription (step 322). As described above with respect to FIG. 13, to submit or save corrected text data 303, a user can select the save selection mechanism 316 included in the correction page 300. In some embodiments, when a user selects the save selection mechanism 316, the email-client correction interface 260 displays a save options page 330, as shown in FIG. 14. The save options page 330 can include the navigation area 274, as described above with respect to FIGS. 12 and 13, and a save options view area 332. The save options view area 332 can display one or more selection mechanisms for saving the current state of the corrected text data 303. For example, as shown in FIG. 14, the save options view area 332 can include a save recording selection mechanism 334, a save and mark as complete selection mechanism 336, and a save, mark as complete and send to owner selection mechanism 338. A user can select the save recording selection mechanism 334 in order to save the current state of the text data 303 with any corrections made by the user. The user is then returned to the main page 272. A user may select the save recording selection mechanism 334 if the user has not finished making corrections to the text data 303 but wants to stop working on the corrections at the current time. A user may also select the save recording selection mechanism 334 if the user wants to periodically save corrections when working on long transcriptions. In some embodiments, the save recording selection mechanism 334 is the default selection.
  • A user can select the save and mark as complete selection mechanism 336 in order to save the corrections made by the user and move the transcription to the user's history. Once the corrections are saved and moved to the history folder, the user can access the corrected transcription (e.g., via the history page of the email-client correction interface 260) but may not be able to edit the corrected transcription.
  • A user can select the save, mark as complete and send to owner selection mechanism 338 in order to save the corrected transcription, move the corrected transcription to the user's history folder, and send the corrected transcription and/or the associated audio data to the owner or destination of the audio data (e.g., the owner's email address). As described above, a destination for corrected transcriptions can include files and multiple devices running email clients. For example, the email-client correction interface 260 can send a message notification to the owner of the transcription that includes the corrected transcription (e.g., as text within the message or as an attached file). FIG. 15 illustrates an email message notification 339 according to an embodiment of the invention. As shown in FIG. 15, the notification 339 includes the corrected transcription.
  • Once a user selects a save option, the user can select an accept selection mechanism 340 in order to accept the selected option or can select a cancel selection mechanism 342 in order to cancel the selected option. In some embodiments, if a user selects the cancel selection mechanism 342, the email-client correction interface 260 returns the user to the correction page 300.
  • A user can also select a complete selection mechanism 296 included in the main page 272 of the email-client correction interface 260 in order to submit or save transcriptions. In some embodiments, if a user selects a complete selection mechanism 296 included in the main page 272, the email-client correction interface 260 displays the save options page 330 as described above with respect to FIG. 14. In other embodiments, if a user selects a complete selection mechanism 296 included in the main page 272, the email-client correction interface 260 automatically saves any previous corrections made to the transcription associated with the complete selection mechanism 296, moves the corrected transcription to the user's history folder, and sends the completed transcription and/or the corresponding audio data to the owner or destination associated with the transcription.
  • The embodiments described above and illustrated in the figures are presented by way of example only and are not intended as a limitation upon the concepts and principles of the invention. As such, it will be appreciated by one having ordinary skill in the art that various changes in the elements and their configuration and arrangement are possible without departing from the spirit and scope of the present invention. For example, in some embodiments the transcription server 20 utilizes multiple threads to transcribe multiple files concurrently. This process can use a single database or a cluster of databases holding temporary information to assist in multiple-thread transcription on the same or different machines. The functions of each system or device included in embodiments of the present invention can also be performed by one or more physical machines and/or one or more virtual machines.
  • Various features and advantages of the invention are set forth in the following claims.

Claims (20)

1. A method of requesting a transcription of audio data, the method comprising:
displaying a send-for-transcription button within an email-client interface on a computer-controlled display; and
automatically sending a selected email message and associated audio data to a transcription server as a request for a transcription of the associated audio data when a user selects the send-for-transcription button.
2. The method of claim 1, further comprising displaying a status of the selected email message within the email-client interface, wherein the status indicates at least one of whether the selected email message has been sent to the transcription server, whether transcribed text based on the associated audio data has been received, and whether corrected text data has been received associated with the transcribed text.
3. The method of claim 2, further comprising playing the associated audio data within the email-client interface so that the audio data is audible to a user.
4. The method of claim 1, further comprising receiving the transcription of the associated audio data from the transcription server.
5. The method of claim 4, further comprising displaying the transcription of the associated audio data to a user within the email-client interface.
6. The method of claim 5, further comprising receiving corrected text data associated with the transcription of the associated audio data from a user within the email-client interface.
7. The method of claim 6, further comprising sending the corrected text data to the transcription server.
8. A system for requesting a transcription of audio data, the system comprising:
a transcription server;
an email-client interface displaying at least one email message associated with audio data to a user, displaying a send-for-transcription button to the user, receiving a selection of the at least one email message from the user, receiving a selection of the send-for-transcription button from the user, and automatically sending the at least one email message and associated audio data to the transcription server as a request for a transcription of the associated audio data in response to the user's selection of the send-for-transcription button.
9. The system of claim 8, wherein the email-client interface displays a status associated with the at least one email message, wherein the status includes at least one of whether the at least one email message has been sent to the transcription server, whether transcribed text based on the associated audio data has been received, and whether corrected text data has been received associated with the transcribed text.
10. The system of claim 8, wherein the email-client interface plays the associated audio data so that the associated audio data is audible to a user.
11. The system of claim 8, wherein the transcription server generates the transcription of the associated audio data based on a voice independent model.
12. The system of claim 8, wherein the transcription server identifies an account associated with the at least one email message based on at least one of an email address and an internet protocol address associated with the at least one email message.
13. The system of claim 12, wherein the transcription server obtains stored account settings associated with the identified account, the account settings including at least one of transcribed text delivery settings, transcription settings, and transcription format settings.
14. The system of claim 13, wherein the transcription server generates the transcription of the associated audio data based on the account settings.
15. The system of claim 8, wherein the transcription server generates the transcription of the associated audio data and sends the transcription of the associated audio data to the email-client interface.
16. The system of claim 15, wherein the email-client interface displays the transcription of the associated audio data to a user, receives corrected text data associated with the transcription of the associated audio data from the user, and sends the corrected text data to the transcription server.
17. The system of claim 16, wherein the transcription server modifies a voice-independent model based on the corrected text data.
18. A system for generating a transcription of audio data, the system comprising:
a transcription server configured to receive at least one email message and associated audio data from an email-client, identify an account based on the at least one email message, and obtain stored account settings associated with the identified account; and
a translation server configured to generate a transcription of the associated audio data based on the account settings and a voice-independent model.
19. The system of claim 18, wherein the account settings include at least one of transcribed text delivery settings, transcription settings, and transcription format settings.
20. The system of claim 18, wherein the transcription server identifies an account based on at least one of an email address and an internet protocol address associated with the at least one email message.
US12/746,352 2007-12-04 2008-12-04 Correcting transcribed audio files with an email-client interface Abandoned US20110022387A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/746,352 US20110022387A1 (en) 2007-12-04 2008-12-04 Correcting transcribed audio files with an email-client interface

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US545607P 2007-12-04 2007-12-04
US99218707P 2007-12-04 2007-12-04
US7605408P 2008-06-26 2008-06-26
PCT/US2008/085498 WO2009073768A1 (en) 2007-12-04 2008-12-04 Correcting transcribed audio files with an email-client interface
US12/746,352 US20110022387A1 (en) 2007-12-04 2008-12-04 Correcting transcribed audio files with an email-client interface

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2007/066791 Continuation-In-Part WO2007121441A2 (en) 2006-04-17 2007-04-17 Methods and systems for correcting transcribed audio files
PCT/US2008/085498 A-371-Of-International WO2009073768A1 (en) 2006-04-17 2008-12-04 Correcting transcribed audio files with an email-client interface

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/158,311 Continuation US9715876B2 (en) 2006-04-17 2014-01-17 Correcting transcribed audio files with an email-client interface

Publications (1)

Publication Number Publication Date
US20110022387A1 true US20110022387A1 (en) 2011-01-27

Family

ID=40473483

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/746,352 Abandoned US20110022387A1 (en) 2007-12-04 2008-12-04 Correcting transcribed audio files with an email-client interface
US14/158,311 Active 2029-09-23 US9715876B2 (en) 2006-04-17 2014-01-17 Correcting transcribed audio files with an email-client interface

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/158,311 Active 2029-09-23 US9715876B2 (en) 2006-04-17 2014-01-17 Correcting transcribed audio files with an email-client interface

Country Status (2)

Country Link
US (2) US20110022387A1 (en)
WO (1) WO2009073768A1 (en)

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110022386A1 (en) * 2009-07-22 2011-01-27 Cisco Technology, Inc. Speech recognition tuning tool
US20110276325A1 (en) * 2010-05-05 2011-11-10 Cisco Technology, Inc. Training A Transcription System
US20120035925A1 (en) * 2010-06-22 2012-02-09 Microsoft Corporation Population of Lists and Tasks from Captured Voice and Audio Content
US20120054284A1 (en) * 2010-08-25 2012-03-01 International Business Machines Corporation Communication management method and system
US20120316873A1 (en) * 2011-06-09 2012-12-13 Samsung Electronics Co. Ltd. Method of providing information and mobile telecommunication terminal thereof
US20130030806A1 (en) * 2011-07-26 2013-01-31 Kabushiki Kaisha Toshiba Transcription support system and transcription support method
US20130132079A1 (en) * 2011-11-17 2013-05-23 Microsoft Corporation Interactive speech recognition
US20130317818A1 (en) * 2012-05-24 2013-11-28 University Of Rochester Systems and Methods for Captioning by Non-Experts
US20140025764A1 (en) * 2010-07-22 2014-01-23 At & T Intellectual Property I, L.P. System and Method for Efficient Unified Messaging System Support for Speech-to-Text Service
US20140303974A1 (en) * 2013-04-03 2014-10-09 Kabushiki Kaisha Toshiba Text generator, text generating method, and computer program product
US8879695B2 (en) 2010-08-06 2014-11-04 At&T Intellectual Property I, L.P. System and method for selective voicemail transcription
US20150055764A1 (en) * 2008-07-30 2015-02-26 At&T Intellectual Property I, L.P. Transparent voice registration and verification method and system
US20150221306A1 (en) * 2011-07-26 2015-08-06 Nuance Communications, Inc. Systems and methods for improving the accuracy of a transcription using auxiliary data such as personal data
US9224387B1 (en) * 2012-12-04 2015-12-29 Amazon Technologies, Inc. Targeted detection of regions in speech processing data streams
US9235645B1 (en) 2010-03-26 2016-01-12 Open Invention Network, Llc Systems and methods for managing the execution of processing jobs
US9245522B2 (en) 2006-04-17 2016-01-26 Iii Holdings 1, Llc Methods and systems for correcting transcribed audio files
US9361880B2 (en) 2008-12-23 2016-06-07 Interactions Llc System and method for recognizing speech with dialect grammars
US20160164979A1 (en) * 2013-08-02 2016-06-09 Telefonaktiebolaget L M Ericsson (Publ) Transcription of communication sessions
US9628603B2 (en) * 2014-07-23 2017-04-18 Lenovo (Singapore) Pte. Ltd. Voice mail transcription
US20170195019A1 (en) * 2014-09-19 2017-07-06 Huawei Technologies Co., Ltd. Multi-User Multiplexing Method, Base Station, and User Terminal
US9715876B2 (en) 2006-04-17 2017-07-25 Iii Holdings 1, Llc Correcting transcribed audio files with an email-client interface
US9772816B1 (en) * 2014-12-22 2017-09-26 Google Inc. Transcription and tagging system
US9787819B2 (en) * 2015-09-18 2017-10-10 Microsoft Technology Licensing, Llc Transcription of spoken communications
US10192176B2 (en) 2011-10-11 2019-01-29 Microsoft Technology Licensing, Llc Motivation of task completion and personalization of tasks and lists
US10389876B2 (en) 2014-02-28 2019-08-20 Ultratec, Inc. Semiautomated relay method and apparatus
US10417336B1 (en) * 2010-03-26 2019-09-17 Open Invention Network Llc Systems and methods for identifying a set of characters in a media file
US10748523B2 (en) 2014-02-28 2020-08-18 Ultratec, Inc. Semiautomated relay method and apparatus
US10878721B2 (en) 2014-02-28 2020-12-29 Ultratec, Inc. Semiautomated relay method and apparatus
US10917519B2 (en) 2014-02-28 2021-02-09 Ultratec, Inc. Semiautomated relay method and apparatus
US11017034B1 (en) 2010-06-28 2021-05-25 Open Invention Network Llc System and method for search with the aid of images associated with product categories
US20210210094A1 (en) * 2016-12-27 2021-07-08 Amazon Technologies, Inc. Messaging from a shared device
US11216145B1 (en) 2010-03-26 2022-01-04 Open Invention Network Llc Method and apparatus of providing a customized user interface
US20220005478A1 (en) * 2009-02-27 2022-01-06 Nec Corporation Mobile wireless communications device with speech to text conversion and related methods
US11367445B2 (en) * 2020-02-05 2022-06-21 Citrix Systems, Inc. Virtualized speech in a distributed network environment
US11430435B1 (en) 2018-12-13 2022-08-30 Amazon Technologies, Inc. Prompts for user feedback
US11539900B2 (en) 2020-02-21 2022-12-27 Ultratec, Inc. Caption modification and augmentation systems and methods for use by hearing assisted user
US11575791B1 (en) * 2018-12-12 2023-02-07 8X8, Inc. Interactive routing of data communications
US11664029B2 (en) 2014-02-28 2023-05-30 Ultratec, Inc. Semiautomated relay method and apparatus
US11881936B2 (en) * 2018-11-13 2024-01-23 Email On Acid, Llc E-mail testing and rendering platform
US11922113B2 (en) 2021-01-12 2024-03-05 Email On Acid, Llc Systems, methods, and devices for e-mail rendering

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9973450B2 (en) 2007-09-17 2018-05-15 Amazon Technologies, Inc. Methods and systems for dynamically updating web service profile information by parsing transcribed message strings
US8352264B2 (en) * 2008-03-19 2013-01-08 Canyon IP Holdings, LLC Corrective feedback loop for automated speech recognition
CN102750365B (en) * 2012-06-14 2014-09-03 华为软件技术有限公司 Retrieval method and system of instant voice messages, user device and server
US9678947B2 (en) 2014-11-21 2017-06-13 International Business Machines Corporation Pattern identification and correction of document misinterpretations in a natural language processing system
CN105869654B (en) * 2016-03-29 2020-12-04 阿里巴巴集团控股有限公司 Audio message processing method and device
US10388272B1 (en) 2018-12-04 2019-08-20 Sorenson Ip Holdings, Llc Training speech recognition systems using word sequences
US10573312B1 (en) 2018-12-04 2020-02-25 Sorenson Ip Holdings, Llc Transcription generation from multiple speech recognition systems
US11170761B2 (en) 2018-12-04 2021-11-09 Sorenson Ip Holdings, Llc Training of speech recognition systems
US11017778B1 (en) 2018-12-04 2021-05-25 Sorenson Ip Holdings, Llc Switching between speech recognition systems
US11061638B2 (en) 2019-09-17 2021-07-13 The Toronto-Dominion Bank Dynamically determining an interface for presenting information to a user
US11488604B2 (en) 2020-08-19 2022-11-01 Sorenson Ip Holdings, Llc Transcription of audio

Citations (88)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5875436A (en) * 1996-08-27 1999-02-23 Data Link Systems, Inc. Virtual transcription system
US5956681A (en) * 1996-12-27 1999-09-21 Casio Computer Co., Ltd. Apparatus for generating text data on the basis of speech data input from terminal
US6173259B1 (en) * 1997-03-27 2001-01-09 Speech Machines Plc Speech to text conversion
US6222909B1 (en) * 1997-11-14 2001-04-24 Lucent Technologies Inc. Audio note taking system and method for communication devices
US6243677B1 (en) * 1997-11-19 2001-06-05 Texas Instruments Incorporated Method of out of vocabulary word rejection
US6275849B1 (en) * 1997-05-02 2001-08-14 Telefonaktiebolaget Lm Ericsson (Publ) Communication system for electronic messages
US6308151B1 (en) * 1999-05-14 2001-10-23 International Business Machines Corp. Method and system using a speech recognition system to dictate a body of text in response to an available body of text
US6366882B1 (en) * 1997-03-27 2002-04-02 Speech Machines, Plc Apparatus for converting speech to text
US6404762B1 (en) * 1998-06-09 2002-06-11 Unisys Corporation Universal messaging system providing integrated voice, data and fax messaging services to pc/web-based clients, including a session manager for maintaining a session between a messaging platform and the web-based clients
US6411685B1 (en) * 1999-01-29 2002-06-25 Microsoft Corporation System and method for providing unified messaging to a user with a thin web browser
US20020159573A1 (en) * 2001-04-25 2002-10-31 Hitzeman Bonnie P. System allowing telephone customers to send and retrieve electronic mail messages using only conventional telephonic devices
US6483899B2 (en) * 1998-06-19 2002-11-19 At&T Corp Voice messaging system
US20020178002A1 (en) * 2001-05-24 2002-11-28 International Business Machines Corporation System and method for searching, analyzing and displaying text transcripts of speech after imperfect speech recognition
US20030002643A1 (en) * 2001-06-29 2003-01-02 Seibel Richard A. Network-attached interactive unified messaging device
US20030009528A1 (en) * 2001-07-08 2003-01-09 Imran Sharif System and method for using an internet appliance to send/receive digital content files as E-mail attachments
US6507643B1 (en) * 2000-03-16 2003-01-14 Breveon Incorporated Speech recognition system and method for converting voice mail messages to electronic mail messages
US20030036903A1 (en) * 2001-08-16 2003-02-20 Sony Corporation Retraining and updating speech models for speech recognition
US20030046350A1 (en) * 2001-09-04 2003-03-06 Systel, Inc. System for transcribing dictation
US20030050777A1 (en) * 2001-09-07 2003-03-13 Walker William Donald System and method for automatic transcription of conversations
US6535586B1 (en) * 1998-12-30 2003-03-18 At&T Corp. System for the remote notification and retrieval of electronically stored messages
US20030068023A1 (en) * 2001-10-10 2003-04-10 Bruce Singh E-mail card: sending e-mail via telephone
US20030105631A1 (en) * 2001-12-03 2003-06-05 Habte Yosef G. Method for generating transcribed data from verbal information and providing multiple recipients with access to the transcribed data
US20030122922A1 (en) * 2001-11-26 2003-07-03 Saffer Kevin D. Video e-mail system and associated method
US6643291B1 (en) * 1997-06-18 2003-11-04 Kabushiki Kaisha Toshiba Multimedia information communication system
US20030220784A1 (en) * 2002-05-24 2003-11-27 International Business Machines Corporation System and method for automated voice message transcription and delivery
US20030223556A1 (en) * 2002-05-29 2003-12-04 Yun-Cheng Ju Electronic mail replies with speech recognition
US6697841B1 (en) * 1997-06-24 2004-02-24 Dictaphone Corporation Dictation system employing computer-to-computer transmission of voice files controlled by hand microphone
US6697458B1 (en) * 2000-07-10 2004-02-24 Ulysses Esd, Inc. System and method for synchronizing voice mailbox with e-mail box
US6704394B1 (en) * 1998-03-25 2004-03-09 International Business Machines Corporation System and method for accessing voice mail from a remote server
US6738800B1 (en) * 1999-06-28 2004-05-18 Telefonaktiebolaget Lm Ericsson (Publ) Method and apparatus for organizing and accessing electronic messages in a telecommunications system
US6775359B1 (en) * 1999-12-28 2004-08-10 Comverse Ltd. Voice reply to incoming e-mail messages, via e-mail
US6775651B1 (en) * 2000-05-26 2004-08-10 International Business Machines Corporation Method of transcribing text from computer voice mail
US20040172245A1 (en) * 2003-02-28 2004-09-02 Lee Rosen System and method for structuring speech recognized text into a pre-selected document format
US6823306B2 (en) * 2000-11-30 2004-11-23 Telesector Resources Group, Inc. Methods and apparatus for generating, updating and distributing speech recognition models
US20040252679A1 (en) * 2002-02-26 2004-12-16 Tim Williams Stored voice message control extensions
US20050010411A1 (en) * 2003-07-09 2005-01-13 Luca Rigazio Speech data mining for call center management
US20050013419A1 (en) * 2003-07-15 2005-01-20 Pelaez Mariana Benitez Network speech-to-text conversion and store
US20050015443A1 (en) * 2000-10-10 2005-01-20 Alex Levine Personal message delivery system
US20050028212A1 (en) * 2003-07-31 2005-02-03 Laronne Shai A. Automated digital voice recorder to personal information manager synchronization
US6857008B1 (en) * 2000-04-19 2005-02-15 Cisco Technology, Inc. Arrangement for accessing an IP-based messaging server by telephone for management of stored messages
US6865258B1 (en) * 1999-08-13 2005-03-08 Intervoice Limited Partnership Method and system for enhanced transcription
US6868143B1 (en) * 2002-10-01 2005-03-15 Bellsouth Intellectual Property System and method for advanced unified messaging
US20050058260A1 (en) * 2000-11-15 2005-03-17 Lasensky Peter Joel Systems and methods for communicating using voice messages
US20050076109A1 (en) * 2003-07-11 2005-04-07 Boban Mathew Multimedia notification system and method
US20050100142A1 (en) * 2003-11-10 2005-05-12 International Business Machines Corporation Personal home voice portal
US20050102139A1 (en) * 2003-11-11 2005-05-12 Canon Kabushiki Kaisha Information processing method and apparatus
US6901364B2 (en) * 2001-09-13 2005-05-31 Matsushita Electric Industrial Co., Ltd. Focused language models for improved speech input of structured documents
US20050163289A1 (en) * 2004-01-23 2005-07-28 Rami Caspi Method and system for providing a voice mail message
US20050187766A1 (en) * 2004-02-23 2005-08-25 Rennillo Louis R. Real-time transcription system
US6937986B2 (en) * 2000-12-28 2005-08-30 Comverse, Inc. Automatic dynamic speech recognition vocabulary based on external sources of information
US6965666B1 (en) * 2001-10-26 2005-11-15 Sprint Spectrum L.P. System and method for sending e-mails from a customer entity in a telecommunications network
US6980953B1 (en) * 2000-10-31 2005-12-27 International Business Machines Corp. Real-time remote transcription or translation service
US20050288926A1 (en) * 2004-06-25 2005-12-29 Benco David S Network support for wireless e-mail using speech-to-text conversion
US20060029197A1 (en) * 2004-07-30 2006-02-09 Avaya Technology Corp. One-touch user voiced message
US20060047518A1 (en) * 2004-08-31 2006-03-02 Claudatos Christopher H Interface for management of multiple auditory communications
US7016844B2 (en) * 2002-09-26 2006-03-21 Core Mobility, Inc. System and method for online transcription services
US7023968B1 (en) * 1998-11-10 2006-04-04 Intel Corporation Message handling system
US7035804B2 (en) * 2001-04-26 2006-04-25 Stenograph, L.L.C. Systems and methods for automated audio transcription, translation, and transfer
US20060095259A1 (en) * 2004-11-02 2006-05-04 International Business Machines Corporation Method and system of enabling intelligent and lightweight speech to text transcription through distributed environment
US20060123347A1 (en) * 2004-12-06 2006-06-08 Joe Hewitt Managing and collaborating with digital content using a dynamic user interface
US20060135128A1 (en) * 2004-12-21 2006-06-22 Alcatel Systems and methods for storing personal messages
US20060140360A1 (en) * 2004-12-27 2006-06-29 Crago William B Methods and systems for rendering voice mail messages amenable to electronic processing by mailbox owners
US20060168259A1 (en) * 2005-01-27 2006-07-27 Iknowware, LP System and method for accessing data via Internet, wireless PDA, smartphone, text to voice and voice to text
US20060193450A1 (en) * 2005-02-25 2006-08-31 Microsoft Corporation Communication conversion between text and audio
US7103154B1 (en) * 1998-01-16 2006-09-05 Cannon Joseph M Automatic transmission of voice-to-text converted voice message
US20060223502A1 (en) * 2003-04-22 2006-10-05 Spinvox Limited Method of providing voicemails to a wireless information device
US7130401B2 (en) * 2004-03-09 2006-10-31 Discernix, Incorporated Speech to text conversion system
US7130918B2 (en) * 2000-04-27 2006-10-31 Microsoft Corporation Mobile internet voice service
US20060245434A1 (en) * 2005-04-29 2006-11-02 Microsoft Corporation Delegated presence for unified messaging/unified communication
US20060287854A1 (en) * 1999-04-12 2006-12-21 Ben Franklin Patent Holding LLC Voice integration platform
US20070005709A1 (en) * 2004-06-18 2007-01-04 2Speak, Inc. Method and system for providing a voice e-mail messaging service
US20070041522A1 (en) * 2005-08-19 2007-02-22 AT&T Corp. System and method for integrating and managing E-mail, voicemail, and telephone conversations using speech processing techniques
US20070047702A1 (en) * 2005-08-25 2007-03-01 Newell Thomas J Message distribution system
US20070083656A1 (en) * 1994-05-13 2007-04-12 J2 Global Communications, Inc. Systems and method for storing, delivering, and managing messages
US20070129949A1 (en) * 2005-12-06 2007-06-07 Alberth William P Jr System and method for assisted speech recognition
US20070156400A1 (en) * 2006-01-03 2007-07-05 Wheeler Mark R System and method for wireless dictation and transcription
US20070203901A1 (en) * 2006-02-24 2007-08-30 Manuel Prado Data transcription and management system and method
US20070208570A1 (en) * 2006-03-06 2007-09-06 Foneweb, Inc. Message transcription, voice query and query delivery system
US20070299664A1 (en) * 2004-09-30 2007-12-27 Koninklijke Philips Electronics, N.V. Automatic Text Correction
US7346505B1 (en) * 2001-09-28 2008-03-18 At&T Delaware Intellectual Property, Inc. System and method for voicemail transcription
US20080102863A1 (en) * 2006-10-31 2008-05-01 Research In Motion Limited System, method, and user interface for searching for messages associated with a message service on a mobile device
US20080198981A1 (en) * 2007-02-21 2008-08-21 Jens Ulrik Skakkebaek Voicemail filtering and transcription
US20080221884A1 (en) * 2007-03-07 2008-09-11 Cerra Joseph P Mobile environment speech processing facility
US20080300873A1 (en) * 2007-05-30 2008-12-04 James Siminoff Systems And Methods For Securely Transcribing Voicemail Messages
US20090070109A1 (en) * 2007-09-12 2009-03-12 Microsoft Corporation Speech-to-Text Transcription for Personal Communication Devices
US7539086B2 (en) * 2002-10-23 2009-05-26 J2 Global Communications, Inc. System and method for the secure, real-time, high accuracy conversion of general-quality speech into text
US20090276215A1 (en) * 2006-04-17 2009-11-05 Hager Paul M Methods and systems for correcting transcribed audio files
US7706520B1 (en) * 2005-11-08 2010-04-27 Liveops, Inc. System and method for facilitating transcription of audio recordings, with auditing

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7596606B2 (en) * 1999-03-11 2009-09-29 Codignotto John D Message publishing system for publishing messages from identified, authorized senders
US6418410B1 (en) 1999-09-27 2002-07-09 International Business Machines Corporation Smart correction of dictated speech
US20060271365A1 (en) * 2000-09-18 2006-11-30 International Business Machines Corporation Methods and apparatus for processing information signals based on content
US6775360B2 (en) * 2000-12-28 2004-08-10 Intel Corporation Method and system for providing textual content along with voice messages
GB2427500A (en) * 2005-06-22 2006-12-27 Symbian Software Ltd Mobile telephone text entry employing remote speech to text conversion
US7949529B2 (en) * 2005-08-29 2011-05-24 Voicebox Technologies, Inc. Mobile systems and methods of supporting natural language human-machine interactions
CA2527813A1 (en) * 2005-11-24 2007-05-24 9160-8083 Quebec Inc. System, method and computer program for sending an email message from a mobile communication device based on voice input
US20110022387A1 (en) 2007-12-04 2011-01-27 Hager Paul M Correcting transcribed audio files with an email-client interface

Cited By (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9858256B2 (en) 2006-04-17 2018-01-02 III Holdings 1, LLC Methods and systems for correcting transcribed audio files
US10861438B2 (en) 2006-04-17 2020-12-08 III Holdings 1, LLC Methods and systems for correcting transcribed audio files
US9245522B2 (en) 2006-04-17 2016-01-26 III Holdings 1, LLC Methods and systems for correcting transcribed audio files
US11594211B2 (en) 2006-04-17 2023-02-28 III Holdings 1, LLC Methods and systems for correcting transcribed audio files
US9715876B2 (en) 2006-04-17 2017-07-25 III Holdings 1, LLC Correcting transcribed audio files with an email-client interface
US20150055764A1 (en) * 2008-07-30 2015-02-26 AT&T Intellectual Property I, L.P. Transparent voice registration and verification method and system
US9369577B2 (en) * 2008-07-30 2016-06-14 Interactions LLC Transparent voice registration and verification method and system
US9361880B2 (en) 2008-12-23 2016-06-07 Interactions LLC System and method for recognizing speech with dialect grammars
US20220005478A1 (en) * 2009-02-27 2022-01-06 NEC Corporation Mobile wireless communications device with speech to text conversion and related methods
US20110022386A1 (en) * 2009-07-22 2011-01-27 Cisco Technology, Inc. Speech recognition tuning tool
US9183834B2 (en) * 2009-07-22 2015-11-10 Cisco Technology, Inc. Speech recognition tuning tool
US11216145B1 (en) 2010-03-26 2022-01-04 Open Invention Network LLC Method and apparatus of providing a customized user interface
US10417336B1 (en) * 2010-03-26 2019-09-17 Open Invention Network LLC Systems and methods for identifying a set of characters in a media file
US11209967B1 (en) * 2010-03-26 2021-12-28 Open Invention Network LLC Systems and methods for identifying a set of characters in a media file
US20230333688A1 (en) * 2010-03-26 2023-10-19 Google LLC Systems and Methods for Identifying a Set of Characters in a Media File
US9235645B1 (en) 2010-03-26 2016-01-12 Open Invention Network, LLC Systems and methods for managing the execution of processing jobs
US9386256B1 (en) * 2010-03-26 2016-07-05 Open Invention Network LLC Systems and methods for identifying a set of characters in a media file
US11520471B1 (en) 2010-03-26 2022-12-06 Google LLC Systems and methods for identifying a set of characters in a media file
US20110276325A1 (en) * 2010-05-05 2011-11-10 Cisco Technology, Inc. Training A Transcription System
US9009040B2 (en) * 2010-05-05 2015-04-14 Cisco Technology, Inc. Training a transcription system
US9009592B2 (en) * 2010-06-22 2015-04-14 Microsoft Technology Licensing, LLC Population of lists and tasks from captured voice and audio content
US20120035925A1 (en) * 2010-06-22 2012-02-09 Microsoft Corporation Population of Lists and Tasks from Captured Voice and Audio Content
US11017034B1 (en) 2010-06-28 2021-05-25 Open Invention Network LLC System and method for search with the aid of images associated with product categories
US9215203B2 (en) * 2010-07-22 2015-12-15 AT&T Intellectual Property I, L.P. System and method for efficient unified messaging system support for speech-to-text service
US9672826B2 (en) 2010-07-22 2017-06-06 Nuance Communications, Inc. System and method for efficient unified messaging system support for speech-to-text service
US20140025764A1 (en) * 2010-07-22 2014-01-23 AT&T Intellectual Property I, L.P. System and Method for Efficient Unified Messaging System Support for Speech-to-Text Service
US8879695B2 (en) 2010-08-06 2014-11-04 AT&T Intellectual Property I, L.P. System and method for selective voicemail transcription
US9992344B2 (en) 2010-08-06 2018-06-05 Nuance Communications, Inc. System and method for selective voicemail transcription
US9137375B2 (en) 2010-08-06 2015-09-15 AT&T Intellectual Property I, L.P. System and method for selective voicemail transcription
US20120054284A1 (en) * 2010-08-25 2012-03-01 International Business Machines Corporation Communication management method and system
US9455944B2 (en) 2010-08-25 2016-09-27 International Business Machines Corporation Reply email clarification
US8775530B2 (en) * 2010-08-25 2014-07-08 International Business Machines Corporation Communication management method and system
US10582033B2 (en) * 2011-06-09 2020-03-03 Samsung Electronics Co., Ltd. Method of providing information and mobile telecommunication terminal thereof
US20120316873A1 (en) * 2011-06-09 2012-12-13 Samsung Electronics Co. Ltd. Method of providing information and mobile telecommunication terminal thereof
US9626969B2 (en) * 2011-07-26 2017-04-18 Nuance Communications, Inc. Systems and methods for improving the accuracy of a transcription using auxiliary data such as personal data
US20150221306A1 (en) * 2011-07-26 2015-08-06 Nuance Communications, Inc. Systems and methods for improving the accuracy of a transcription using auxiliary data such as personal data
US20130030806A1 (en) * 2011-07-26 2013-01-31 Kabushiki Kaisha Toshiba Transcription support system and transcription support method
US9489946B2 (en) * 2011-07-26 2016-11-08 Kabushiki Kaisha Toshiba Transcription support system and transcription support method
US10192176B2 (en) 2011-10-11 2019-01-29 Microsoft Technology Licensing, LLC Motivation of task completion and personalization of tasks and lists
US20130132079A1 (en) * 2011-11-17 2013-05-23 Microsoft Corporation Interactive speech recognition
US20130317818A1 (en) * 2012-05-24 2013-11-28 University Of Rochester Systems and Methods for Captioning by Non-Experts
US9224387B1 (en) * 2012-12-04 2015-12-29 Amazon Technologies, Inc. Targeted detection of regions in speech processing data streams
US9916826B1 (en) * 2012-12-04 2018-03-13 Amazon Technologies, Inc. Targeted detection of regions in speech processing data streams
US9460718B2 (en) * 2013-04-03 2016-10-04 Kabushiki Kaisha Toshiba Text generator, text generating method, and computer program product
US20140303974A1 (en) * 2013-04-03 2014-10-09 Kabushiki Kaisha Toshiba Text generator, text generating method, and computer program product
US9888083B2 (en) * 2013-08-02 2018-02-06 Telefonaktiebolaget LM Ericsson (Publ) Transcription of communication sessions
US20160164979A1 (en) * 2013-08-02 2016-06-09 Telefonaktiebolaget LM Ericsson (Publ) Transcription of communication sessions
US10742805B2 (en) 2014-02-28 2020-08-11 Ultratec, Inc. Semiautomated relay method and apparatus
US10878721B2 (en) 2014-02-28 2020-12-29 Ultratec, Inc. Semiautomated relay method and apparatus
US10917519B2 (en) 2014-02-28 2021-02-09 Ultratec, Inc. Semiautomated relay method and apparatus
US10389876B2 (en) 2014-02-28 2019-08-20 Ultratec, Inc. Semiautomated relay method and apparatus
US10748523B2 (en) 2014-02-28 2020-08-18 Ultratec, Inc. Semiautomated relay method and apparatus
US10542141B2 (en) 2014-02-28 2020-01-21 Ultratec, Inc. Semiautomated relay method and apparatus
US11741963B2 (en) 2014-02-28 2023-08-29 Ultratec, Inc. Semiautomated relay method and apparatus
US11368581B2 (en) 2014-02-28 2022-06-21 Ultratec, Inc. Semiautomated relay method and apparatus
US11664029B2 (en) 2014-02-28 2023-05-30 Ultratec, Inc. Semiautomated relay method and apparatus
US11627221B2 (en) 2014-02-28 2023-04-11 Ultratec, Inc. Semiautomated relay method and apparatus
US9628603B2 (en) * 2014-07-23 2017-04-18 Lenovo (Singapore) Pte. Ltd. Voice mail transcription
US20170195019A1 (en) * 2014-09-19 2017-07-06 Huawei Technologies Co., Ltd. Multi-User Multiplexing Method, Base Station, and User Terminal
US9772816B1 (en) * 2014-12-22 2017-09-26 Google Inc. Transcription and tagging system
US9787819B2 (en) * 2015-09-18 2017-10-10 Microsoft Technology Licensing, LLC Transcription of spoken communications
US20210210094A1 (en) * 2016-12-27 2021-07-08 Amazon Technologies, Inc. Messaging from a shared device
US11881936B2 (en) * 2018-11-13 2024-01-23 Email On Acid, LLC E-mail testing and rendering platform
US11575791B1 (en) * 2018-12-12 2023-02-07 8X8, Inc. Interactive routing of data communications
US11430435B1 (en) 2018-12-13 2022-08-30 Amazon Technologies, Inc. Prompts for user feedback
US11367445B2 (en) * 2020-02-05 2022-06-21 Citrix Systems, Inc. Virtualized speech in a distributed network environment
US11539900B2 (en) 2020-02-21 2022-12-27 Ultratec, Inc. Caption modification and augmentation systems and methods for use by hearing assisted user
US11922113B2 (en) 2021-01-12 2024-03-05 Email On Acid, LLC Systems, methods, and devices for e-mail rendering

Also Published As

Publication number Publication date
US20140136199A1 (en) 2014-05-15
US9715876B2 (en) 2017-07-25
WO2009073768A1 (en) 2009-06-11

Similar Documents

Publication Publication Date Title
US11594211B2 (en) Methods and systems for correcting transcribed audio files
US9715876B2 (en) Correcting transcribed audio files with an email-client interface
US10853582B2 (en) Conversational agent
JP6640384B2 (en) Incorporating selectable application links into conversation threads
CN106471570B (en) Order single language input method more
KR101843604B1 (en) Electronic communications triage
US10063497B2 (en) Electronic reply message compositor and prioritization apparatus and method of operation
US20140207472A1 (en) Automated communication integrator
US8244544B1 (en) Editing voice input
US20140372115A1 (en) Self-Directed Machine-Generated Transcripts
KR102339296B1 (en) Incorporating selectable application links into conversations with personal assistant modules
JP2005528850A (en) Method and apparatus for controlling data provided to a mobile device
US20180131693A1 (en) Systems and methods for creating and displaying an electronic communication digest
US11924154B2 (en) System and method for deep message editing in a chat communication environment
US20230092334A1 (en) Systems and methods for linking notes and transcripts

Legal Events

Date Code Title Description
AS Assignment

Owner name: VOVISION, LLC, WISCONSIN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HAGER, PAUL M.;REEL/FRAME:025104/0371

Effective date: 20101006

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: III HOLDINGS 1, LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VOVISION, LLC;REEL/FRAME:033614/0044

Effective date: 20140813