US20140163974A1 - Distributed Speech Recognition Using One Way Communication - Google Patents
Distributed Speech Recognition Using One Way Communication
- Publication number
- US20140163974A1 (U.S. patent application Ser. No. 13/957,684)
- Authority
- US (United States)
- Prior art keywords
- speech
- server
- speech recognition
- client
- recognizer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
- G10L15/32—Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems
Definitions
- the server-side recognizer 120 pulls control messages from the control stream 112 in sequence, as soon as possible after they are received, and in parallel with processing the speech segments in the speech stream 110 (step 222 ).
- the server-side recognizer 120 executes the command in each control message in sequence (step 224 ).
- referring to FIG. 2B, a flow chart is shown of a method performed by the server-side recognizer 120 to execute a DecodeNext control message in the control stream 112.
- the recognizer 120 sends the next result(s) 122 in the queue 134 to the speech recognition client 140 over the network 116 (step 242). If more than one result is available in the queue 134 at the time step 242 is performed, then all available results in the queue 134 are transmitted in the results stream 122 to the speech recognition client 140. (Although the results 122 are shown in FIG. 1 as being provided directly to the speech recognition client 140, the results 122 may be transmitted by the HTTP server 132 over the network 116 and received by the HTTP client 130 at the client device 106.)
- the DecodeNext method then returns control to the application 108 (step 246 ), and terminates.
- if no result is yet available, the DecodeNext method blocks until at least one result (e.g., one word) is available in the output queue 134, or until the amount of time specified by the timeout value 404 c is reached (step 248). If a result appears in the output queue 134 before the timeout value 404 c is reached, then the DecodeNext method transmits that result to the speech recognition client 140 (step 242), returns control to the speech recognition client 140 (step 246), and terminates. If no results appear in the output queue 134 before the timeout value 404 c is reached, then the DecodeNext method informs the speech recognition client 140 that no results are available (step 244), returns control to the speech recognition client 140 (step 246), and terminates without returning any recognition results to the speech recognition client 140.
- the speech recognition client 140 may immediately send another DecodeNext message to the server 120 in an attempt to receive the next recognition result.
- the server 120 may process this DecodeNext message in the manner described above with respect to FIG. 2B . This process may repeat for subsequent recognition results.
- the control stream 112 may essentially always be blocking on the server side (in the loop represented by steps 240 and 248 in FIG. 2B ), waiting for recognition results and returning them to the client application 108 as they become available.
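Purely as an illustration of the blocking poll described above (not part of the patent text), the server-side handling of a DecodeNext call can be sketched as a bounded wait on the output queue 134. The names handle_decode_next and OUTPUT_QUEUE, and the use of Python's standard queue module, are assumptions made for this sketch.

```python
import queue

# Hypothetical stand-in for the server-side output queue 134.
OUTPUT_QUEUE: "queue.Queue[str]" = queue.Queue()

def handle_decode_next(timeout_s: float) -> list[str]:
    """Sketch of DecodeNext (steps 240-248): block until at least one
    recognition result is available or the timeout 404 c elapses, then
    drain and return every result currently queued."""
    try:
        # Step 248: wait at most timeout_s for a first result.
        first = OUTPUT_QUEUE.get(timeout=timeout_s)
    except queue.Empty:
        return []  # step 244: inform the client that no results are available
    results = [first]
    while True:
        try:
            # Step 242: if more results are already queued, return them all.
            results.append(OUTPUT_QUEUE.get_nowait())
        except queue.Empty:
            return results
```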
- the timeout value 404 c may be chosen to be shorter than the timeout value of the underlying communication protocol used between the client 140 and server 120 , such as the HTTP timeout value.
- the client 140 may draw the conclusion that the timeout was the result of the inability of the server 120 to produce any speech recognition results before the timeout value 404 c was reached, rather than as the result of a network communication problem. Regardless of the reason for the timeout, however, the client 140 may send another DecodeNext message to the server 120 after such a timeout.
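On the client side, the resulting long-poll loop might look like the following sketch. The endpoint, the JSON shape, and the use of the third-party requests library are assumptions; the essential point, per the passage above, is that the server-side timeout 404 c is chosen shorter than the HTTP timeout, and that retrying is safe in either case.

```python
import requests  # assumed HTTP client library

SERVER_TIMEOUT_S = 10   # timeout value 404 c sent to the server
HTTP_TIMEOUT_S = 15     # client-side HTTP timeout, deliberately longer

def poll_results(url: str) -> list[str]:
    """Sketch of the client's DecodeNext long-poll loop."""
    while True:
        try:
            reply = requests.post(
                url,
                json={"command": "DecodeNext", "timeout": SERVER_TIMEOUT_S},
                timeout=HTTP_TIMEOUT_S,
            )
            results = reply.json().get("results", [])
            if results:
                return results
            # Empty reply: the server timed out before producing results;
            # simply issue another DecodeNext.
        except requests.RequestException:
            # Network problem: DecodeNext is idempotent, so retry is safe.
            continue
```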
- the examples described above involve two fully unsynchronized data streams 110 and 112 . However, it may be desirable to perform certain kinds of synchronization on the two streams 110 and 112 .
- the speech recognition client 140 may use the textual context of the current cursor position in a text edit window to guide recognition for text that is to be inserted at that cursor position. Since the cursor position may change frequently due to mouse or other keyboard events, it may be useful for the application 108 to delay transmission of the text context to the server 120 until the user 102 presses the “start recording” button. In this case, the server-side recognizer 120 must be prevented from recognizing speech transmitted to the server 120 until the correct text context is received by the server 120 and the server 120 updates its configuration state 126 accordingly.
- some recognition results may trigger the need to change the configuration state 126 of the recognizer 120 .
- when the server-side recognizer 120 generates such a result, it should wait until it is reconfigured before generating the next result. For example, if the recognizer 120 produces the result, “delete all,” the application 108 may next attempt to verify the user's intent by prompting the user 102 as follows: “Do you really want to delete all? Say YES or NO.” In this case, the application 108 (through the speech recognition client 140) should reconfigure the recognizer 120 with a “YES or NO” grammar before recognition of subsequent speech continues.
- FIG. 2C illustrates a method which may be performed by the server-side recognizer 120 as part of performing speech recognition on the audio segments in the processing queue ( FIG. 2A , step 218 ).
- Each recognizer configuration state is assigned a unique configuration state identifier (ID).
- the speech recognition client 140 assigns integer values to configuration state IDs, such that if ID1 > ID2, then the configuration state associated with ID1 is more recent than the configuration state associated with ID2.
- the speech recognition client 140 also provides tags 304 d within each of the speech stream segments 302 a - e which indicate the minimum required configuration state ID number that is required before recognition of that segment can begin.
- the recognizer 120 compares the configuration state ID 136 of the recognizer's current configuration state 126 to the minimum required configuration ID specified by the retrieved audio segment's tag 304 d. If the current configuration ID 136 is at least as great as the minimum required configuration ID (step 264), then the server 120 begins recognizing the retrieved audio segment (step 266). Otherwise, the server 120 waits until its configuration ID 136 reaches the minimum required ID before it begins recognizing the current speech segment. Since the method of FIG. 2C may be performed in parallel with the method 200 of FIG. 2A, the configuration ID 136 of the server-side recognizer 120 may be updated by execution of control messages (step 224) even while the method of FIG. 2C blocks in the loop over step 264. Furthermore, note that even while the server 120 waits to process speech from the processing queue 124, the server 120 continues to receive additional segments from the speech stream 110 and queue those segments into the processing queue 124 (FIG. 2A, steps 214-216).
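The gating described in FIG. 2C amounts to blocking the recognition loop on a monotonically increasing configuration ID. A minimal sketch, with invented names, might use a condition variable:

```python
import threading

class ConfigGate:
    """Sketch of the FIG. 2C gating: a segment tagged with a minimum
    configuration state ID is not recognized until the recognizer's
    current configuration ID 136 has caught up."""

    def __init__(self) -> None:
        self._current_id = 0
        self._cond = threading.Condition()

    def apply_configuration(self, new_id: int) -> None:
        # Called when a control message updates configuration state 126.
        with self._cond:
            self._current_id = max(self._current_id, new_id)
            self._cond.notify_all()

    def wait_for(self, min_required_id: int) -> None:
        # Step 264: block until the configuration is recent enough;
        # incoming segments keep queueing while we wait (steps 214-216).
        with self._cond:
            self._cond.wait_for(lambda: self._current_id >= min_required_id)
```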
- the application 108 may instruct the recognizer 120 ahead of time to stop recognizing the speech stream 110 , or take some other action, upon producing any recognition result or upon producing a recognition result satisfying certain criteria.
- Such criteria may effectively serve as breakpoints which the application 108 , through the speech recognition client 140 , may use to proactively control how far ahead the recognizer 120 produces recognition results.
- a possible configuration which may be specified by the configuration update object 404 b would be: <delete, continue>, <next, continue>, <select all, continue>, <open file chooser, stop>.
- Such a configuration instructs the server-side recognizer 120 to continue recognizing the speech stream 110 after obtaining the recognition result “delete,” “next,” or “select all,” but to stop recognizing the speech stream 110 after obtaining the recognition result “open file chooser.”
- the reason for configuring the recognizer 120 in this way is that production of the results “delete,” “next,” or “select all” does not require the recognizer 120 to be reconfigured before producing the next result. Therefore, the recognizer 120 may be allowed to continue recognizing the speech stream 110 after producing any of the results “delete,” “next,” or “select all,” thereby enabling the recognizer 120 to continue recognizing the speech 104 at full speed (see FIG. 2D, step 272).
- In contrast, production of the result “open file chooser” requires the recognizer 120 to be reconfigured (e.g., to expect results such as “OK,” “select file1.xml,” or “New Folder”) before recognizing any subsequent segments in the speech stream 110 (see FIG. 2D, step 274). Therefore, if the application 108, through the speech recognition client 140, is informed by the recognizer 120 that the result “open file chooser” was produced, the application 108, through the speech recognition client 140, may reconfigure the recognizer 120 with a configuration state that is appropriate for control of a file chooser. Enabling the application 108 to pre-configure the recognizer 120 in this way strikes a balance between minimizing the recognizer's response time and ensuring that the recognizer 120 uses the proper configuration state to recognize different portions of the speech 104.
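The <result, action> pairs above behave like breakpoints. A sketch of how a recognition loop might honor them follows; the dict representation and function names are illustrative assumptions, not the patent's implementation.

```python
# Illustrative <result, action> table from the configuration update object.
BREAKPOINTS = {
    "delete": "continue",
    "next": "continue",
    "select all": "continue",
    "open file chooser": "stop",
}

def recognize_loop(pull_segment, recognize, emit_result):
    """Recognize at full speed, but halt when a result that requires
    reconfiguration (a 'stop' entry) is produced."""
    while True:
        result = recognize(pull_segment())
        emit_result(result)
        if BREAKPOINTS.get(result) == "stop":
            break  # wait for the client to reconfigure before resuming
```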
- the recognizer 120 may continue to receive speech segments from the speech stream 110 and to queue those segments into the processing queue 124 ( FIG. 2A , steps 214 , 216 ). As a result, additional segments of the speech stream 110 are ready to be processed as soon as the recognizer 120 resumes performing speech recognition.
- the techniques disclosed herein may be used in conjunction with one-way communication protocols, such as HTTPS.
- Such communication protocols are simple to set up on wide area networks, but offer little guarantee against failures. Failures may occur during a request between the client 130 and server 132 that may leave the application 108 in an ambiguous state. For example, a problem may occur when either party (client application 108 or server-side recognizer 120 ) fails while in the midst of a call. Other problems may occur, for example, due to lost messages to or from the server 118 , messages arriving at the client 106 or server 118 out of sequence, or messages mistakenly sent as duplicates. In general, in prior art systems it is the responsibility of the speech recognition client 140 to ensure the robustness of the overall system 100 , since the underlying communications protocol does not guarantee such robustness.
- Embodiments of the present invention are robust against such problems by making all messages and events exchanged between the speech recognition client 140 and server-side recognizer 120 idempotent.
- An event is idempotent if multiple occurrences of the same event have the same effect as a single occurrence of the event. Therefore, if the speech recognition client 140 detects a failure, such as failure to transmit a command to the server-side recognizer 120 , the speech recognition client 140 may re-transmit the command, either immediately or after a waiting period.
- the speech recognition client 140 and recognizer 120 may use a messaging application program interface (API) which guarantees that the retry will leave the system 100 in a coherent state.
- the API for the speech stream 110 forces the speech recognition client 140 to transmit the speech stream 110 in segments.
- Each segment may have a unique ID 304 e in addition to the start byte index 304 b (initially 0 for the first segment), and either an end byte index 304 c or a segment size.
- the server-side recognizer 120 may acknowledge that it has received a segment by transmitting back the end byte index of the segment, which should normally be equal to the start byte plus the segment size.
- the end byte index transmitted by the server may, however, be a lower value if the server could not read the entire audio segment.
- the speech recognition client 140 then transfers the next segment starting where the server-side recognizer 120 left off, so that the new start byte index is equal to the end byte index returned by the recognizer 120 . This process is repeated for the entire speech stream 110 . If a message is lost (on the way to or from the server 118 ), the speech recognition client 140 repeats the transfer. If the server-side recognizer 120 did not previously receive that speech segment, then the server-side recognizer 120 will simply process the new data. If, however, the recognizer 120 previously processed that segment (such as may occur if the results were lost on the way back to the client 106 ), then the recognizer 120 may, for example, acknowledge receipt of the segment and drop it without processing it again.
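The byte-indexed handshake just described can be sketched as follows. The send_segment callable, its IOError failure mode, and the segment size are assumptions for illustration; the essential property is that the client always resumes from the end byte index the server acknowledged.

```python
def upload_speech(audio: bytes, send_segment, segment_size: int = 4096) -> None:
    """Sketch of the idempotent segment transfer.

    send_segment(start, data) is assumed to return the end byte index
    acknowledged by the server and to raise IOError on a lost message.
    """
    start = 0
    while start < len(audio):
        data = audio[start:start + segment_size]
        try:
            acked_end = send_segment(start, data)
        except IOError:
            continue  # message lost in either direction: repeat the transfer
        # The server may have read less than the full segment; resume
        # exactly where it left off rather than assuming success.
        start = acked_end
```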
- all control messages 402 a - c may be resent to the server 118 , since each of the messages may contain an ID for the current session.
- the speech recognition client 140 may pass, as part of the DecodeNext method, a running unique identifier to identify the current method call.
- the server 118 keeps track of those identifiers to determine whether the current message being received in the control stream 112 is new or whether it has already been received and processed. If the current message is new, then the recognizer 120 processes the message normally, as described above. If the current message was previously processed, then the recognizer 120 may re-deliver the previously-returned results instead of generating them again.
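A sketch of that server-side duplicate check follows; the cache structure and names are assumptions. Replayed calls receive the previously returned results rather than being executed a second time.

```python
# Hypothetical cache of DecodeNext calls already processed this session.
_processed: dict[str, list[str]] = {}

def dispatch_decode_next(call_id: str, execute) -> list[str]:
    """Sketch: execute() performs the normal DecodeNext processing."""
    if call_id in _processed:
        # Already handled; the reply was probably lost on the way back,
        # so re-deliver the previously returned results.
        return _processed[call_id]
    results = execute()
    _processed[call_id] = results
    return results
```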
- if the server 118 does not acknowledge a control message, the client 140 may store the control message.
- the client 140 may send both the first (unacknowledged) control message and the second control message to the server 118 .
- the client 140 may alternatively achieve the same result by combining the state changes represented by the first and second control messages into a single control message, which the client 140 may then transmit to the server 118.
- the client 140 may combine any number of control messages together into a single control message in this way until such messages are acknowledged by the server 118 .
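If a configuration update is representable as a key-value mapping (an assumption made here for illustration), combining unacknowledged control messages reduces to a merge in which later changes win:

```python
def combine_updates(pending: list[dict]) -> dict:
    """Sketch of folding unacknowledged configuration updates into one
    control message; iterate oldest-first so later changes win."""
    merged: dict = {}
    for update in pending:
        merged.update(update)
    return merged

# Two unacknowledged updates collapse into a single control message.
combined = combine_updates([{"grammar": "commands"}, {"grammar": "yes_no"}])
assert combined == {"grammar": "yes_no"}
```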
- the server 118 may combine speech recognition results which have not been acknowledged by the client 140 into individual results in the results stream 122 until such results are acknowledged by the client.
- Embodiments of the present invention enable speech recognition to be distributed anywhere on the Internet, without requiring any special network.
- the techniques disclosed herein may operate over a one-way communication protocol, such as HTTP, thereby enabling operation even in restrictive environments in which clients are limited to engaging only in outbound (one-way) communications.
- embodiments of the present invention are broadly useful in conjunction with a wide variety of networks without requiring security to be sacrificed.
- the techniques disclosed herein may reuse existing web security mechanisms (such as SSL and, by extension, HTTPS) to provide secure communications between client 106 and server 118 .
- Embodiments of the present invention may be implemented in such systems by multiplexing the speech stream 110 and the control stream 112 into a single stream 114 that can be transmitted through a single port.
- outgoing communication may be required to be encrypted.
- clients often are allowed to use only the standard secure, encrypted HTTPS port (port 443 ).
- Embodiments of the present invention can work over either a standard (unsecured) HTTP port or a secured HTTPS port for all of its communication needs—both audio transfer 110 and control flow 112 .
- the techniques disclosed herein may be used in conjunction with systems which allow clients to communicate using unsecured HTTP and systems which require or allow clients to communicate using secured HTTPS.
- the techniques disclosed herein are also resilient to intermittent network failures because they employ a communications protocol in which messages are idempotent. This is particularly useful when embodiments of the present invention are used in conjunction with networks, such as WANs, in which network drops and spikes are common. Although such events may cause conventional server-side speech recognition systems to fail, they do not affect results produced by embodiments of the present invention (except possibly by increasing turnaround time).
- Embodiments of the present invention enable speech 104 to be transmitted from client 106 to server 118 as fast as the network 116 will allow, even if the server 118 cannot process that speech continuously. Furthermore, the server-side recognizer 120 may process speech from the processing queue 124 as quickly as possible even when the network 116 cannot transmit the results and/or the application 108 is not ready to receive the results. These and other features of embodiments of the present invention enable speech and speech recognition results to be transmitted and processed as quickly as individual components of the system 100 will allow, such that problems with individual components of the system 100 have minimum impact on the performance of the other components of the system 100 .
- embodiments of the present invention enable the server-side recognizer 120 to process speech as quickly as possible but without getting too far ahead of the client application 108 .
- the application 108 may use control messages in the control stream 112 to issue reconfiguration commands to the recognizer 120 which cause the recognizer 120 to reconfigure itself to recognize speech in the appropriate configuration state, and to temporarily halt recognition upon the occurrence of predetermined conditions so that the application 108 can reconfigure the state of the recognizer 120 appropriately.
- Such techniques enable speech recognition to be performed as quickly as possible without being performed using the wrong configuration state.
- the techniques described above may be implemented, for example, in hardware, software, firmware, or any combination thereof.
- the techniques described above may be implemented in one or more computer programs executing on a programmable computer including a processor, a storage medium readable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
- Program code may be applied to input entered using the input device to perform the functions described and to generate output.
- the output may be provided to one or more output devices.
- Each computer program within the scope of the claims below may be implemented in any programming language, such as assembly language, machine language, a high-level procedural programming language, or an object-oriented programming language.
- the programming language may, for example, be a compiled or interpreted programming language.
- Each such computer program may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor.
- Method steps of the invention may be performed by a computer processor executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output.
- Suitable processors include, by way of example, both general and special purpose microprocessors.
- the processor receives instructions and data from a read-only memory and/or a random access memory.
- Storage devices suitable for tangibly embodying computer program instructions include, for example, all forms of non-volatile memory, such as semiconductor memory devices, including EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROMs. Any of the foregoing may be supplemented by, or incorporated in, specially-designed ASICs (application-specific integrated circuits) or FPGAs (Field-Programmable Gate Arrays).
- a computer can generally also receive programs and data from a storage medium such as an internal disk (not shown) or a removable disk.
Abstract
Description
- This application is a continuation of U.S. patent application Ser. No. 13/563,998, filed on Aug. 1, 2012, entitled, “Distributed Speech Recognition Using One Way Communication”; which is a continuation of U.S. patent application Ser. No. 13/196,188, filed on Aug. 2, 2011, entitled, “Distributed Speech Recognition Using One Way Communication” (now U.S. Pat. No. 8,249,878, issued on Aug. 21, 2012); which is a continuation of U.S. patent application Ser. No. 12/550,381, filed on Aug. 30, 2009, entitled, “Distributed Speech Recognition Using One Way Communication” (now U.S. Pat. No. 8,019,608, issued on Sep. 13, 2011); which claims priority from U.S. Prov. Pat. App. Ser. No. 61/093,221, filed on Aug. 29, 2008, entitled, “Distributed Speech Recognition Using One Way Communication”; all of which are hereby incorporated by reference herein.
- A variety of automatic speech recognizers (ASRs) exist for performing functions such as converting speech into text and controlling the operations of a computer in response to speech. Some applications of automatic speech recognizers require shorter turnaround times (the amount of time between when the speech is spoken and when the speech recognizer produces output) than others in order to appear responsive to the end user. For example, a speech recognizer that is used for a “live” speech recognition application, such as controlling the movement of an on-screen cursor, may require a shorter turnaround time (also referred to as a “response time”) than a speech recognizer that is used to produce a transcript of a medical report.
- The desired turnaround time may depend, for example, on the content of the speech utterance that is processed by the speech recognizer. For example, for a short command-and-control utterance, such as “close window,” a turnaround time above 500 ms may appear sluggish to the end user. In contrast, for a long dictated sentence which the user desires to transcribe into text, response times of 1000 ms may be acceptable to the end user. In fact, in the latter case users may prefer longer response times because they may otherwise feel that their speech is being interrupted by the immediate display of text in response to their speech. For longer dictated passages, such as entire paragraphs, even longer response times of multiple seconds may be acceptable to the end user.
- In typical prior art speech recognition systems, increasing response time while maintaining recognition accuracy requires increasing the computing resources (processing cycles and/or memory) that are dedicated to performing speech recognition. As a result, many applications which require fast response times require the speech recognition system to execute on the same computer as that on which the applications themselves execute. Although such colocation may eliminate the delay that would otherwise be introduced by requiring the speech recognition results to be transmitted to the requesting application over a network, such colocation also has a variety of disadvantages.
- For example, colocation requires a speech recognition system to be installed on every end user device—such as every desktop computer, laptop computer, cellular telephone, and personal digital assistant (PDA)—which requires speech recognition functionality. Installing and maintaining such speech recognition systems on such a large number and wide variety of devices can be tedious and time-consuming for end users and system administrators. For example, such maintenance requires system binaries to be updated when a new release of the speech recognition system becomes available. User data, such as speech models, are created and accumulated over time on individual devices, taking up precious storage space, and need to be synchronized with multiple devices used by the same user. Such maintenance can grow particularly burdensome as users continue to use speech recognition systems on a wider number and variety of devices.
- Furthermore, locating a speech recognition system on the end user device causes the speech recognition system to consume precious computing resources, such as CPU processing cycles, main memory, and disk space. Such resources are particularly scarce on handheld mobile devices such as cellular telephones. Producing speech recognition results with fast turnaround times using such devices typically requires sacrificing recognition accuracy and reducing the resources available to other applications executing on the same device.
- One known technique for overcoming these resource constraints in the context of embedded devices is to delegate some or all of the speech recognition processing responsibility to a speech recognition server that is located remotely from the embedded device and which has significantly greater computing resources than the embedded device. When a user speaks into the embedded device in this situation, the embedded device does not attempt to recognize the speech using its own computing resources. Instead, the embedded device transmits the speech (or a processed form of it) over a network connection to the speech recognition server, which recognizes the speech using its greater computing resources and therefore produces recognition results more quickly than the embedded device could have produced with the same accuracy. The speech recognition server then transmits the results back over the network connection to the embedded device. Ideally this technique produces highly-accurate speech recognition results more quickly than would otherwise be possible using the embedded device alone.
- In practice, however, this “server-side speech recognition” technique has a variety of shortcomings. In particular, because server-side speech recognition relies on the availability of high-speed and reliable network connections, the technique breaks down if such connections are not available when needed. For example, the potential increases in speed made possible by server-side speech recognition may be negated by use of a network connection without sufficiently high bandwidth. As one example, the typical network latency of an HTTP call to a remote server can range from 100 ms to 500 ms. If spoken data arrives at a speech recognition server 500 ms after it is spoken, it will be impossible for that server to produce results quickly enough to satisfy the minimum turnaround time (500 ms) required by command-and-control applications. As a result, even the fastest speech recognition server will produce results that appear sluggish if used in combination with a slow network connection.
- Furthermore, conventional server-side speech recognition techniques assume that the network connection established between the client (e.g., embedded device) and speech recognition server is kept alive continuously during the entire recognition process. Although it may be possible to satisfy this condition in a Local Area Network (LAN) or when both client and server are managed by the same entity, this condition may be impossible or at least unreasonable to satisfy when the client and server are connected over a Wide Area Network (WAN) or the Internet, in which case interruptions to the network connection may be common and unavoidable.
- Furthermore, organizations often restrict the kinds of communications that their users can engage in over public networks such as the Internet. For example, organizations may only allow clients within their networks to engage in outbound communications. This means that a client can contact an external server on a certain port, but that the server cannot initiate contact with the client. This is an example of one-way communication.
- Another common restriction imposed on clients is that they may only use a limited range of outbound ports to communicate with external servers. Furthermore, outgoing communication on those ports may be required to be encrypted. For example, clients often are allowed to use only the standard HTTP port (port 80) or the standard secure, encrypted HTTPS port (port 443).
- What is needed, therefore, are improved techniques for producing speech recognition results with fast response times without overburdening the limited computing resources of client devices.
- A speech recognition client sends a speech stream and control stream in parallel to a server-side speech recognizer over a network. The network may be an unreliable, low-latency network. The server-side speech recognizer recognizes the speech stream continuously. The speech recognition client receives recognition results from the server-side recognizer in response to requests from the client. The client may remotely reconfigure the state of the server-side recognizer during recognition.
- Other features and advantages of various aspects and embodiments of the present invention will become apparent from the following description and from the claims.
- FIG. 1 is a dataflow diagram of a system for performing speech recognition over a low-latency network according to one embodiment of the present invention;
- FIG. 2A is a flowchart of a method performed by the system of FIG. 1 according to one embodiment of the present invention;
- FIG. 2B is a flowchart of a method performed by a server-side automatic speech recognizer to recognize a segment of speech according to one embodiment of the present invention;
- FIG. 2C is a flowchart of a method performed by a server-side automatic speech recognizer as part of performing speech recognition on segments of speech according to one embodiment of the present invention;
- FIG. 2D is a flowchart of a method performed by a server-side recognizer to ensure that the recognizer is reconfigured after certain recognition results are obtained and before further recognition is performed according to one embodiment of the present invention;
- FIG. 3 is a diagram of a speech stream according to one embodiment of the present invention; and
- FIG. 4 is a diagram of a command and control stream according to one embodiment of the present invention.
- Referring to FIG. 1, a dataflow diagram is shown of a speech recognition system 100 according to one embodiment of the present invention. Referring to FIG. 2A, a flowchart is shown of a method 200 performed by the system 100 of FIG. 1 according to one embodiment of the present invention.
- A user 102 of a client device 106 speaks and thereby provides speech 104 to the client device 106 (step 202). The client device 106 may be any device, such as a desktop or laptop computer, cellular telephone, personal digital assistant (PDA), or telephone. Embodiments of the present invention, however, are particularly useful in conjunction with resource-constrained clients, such as computers or mobile computing devices with slow processors or small amounts of memory, or computers running resource-intensive software. The device 106 may receive the speech 104 from the user 102 in any way, such as through a microphone connected to a sound card. The speech 104 may be embodied in an audio signal which is tangibly stored in a computer-readable medium and/or transmitted over a network connection or other channel. The speech 104 may, for example, include multiple audio streams, as in the case of “push to talk” applications, in which each push initiates a new audio stream.
- The client device 106 includes an application 108, such as a transcription application or other application which needs to recognize the speech 104. Although the application 108 may be any kind of application that uses speech recognition results, assume for purposes of the following discussion that the application 108 is a “live” recognition application for transcribing speech. Portions of the speech 104 provided by the user 102 in this context may fall into one of two basic categories: dictated speech to be transcribed (e.g., “The patient is a 35 year-old male”) or commands (such as “delete this” or “sign and submit”).
- The client device 106 also includes a speech recognition client 140. Although the speech recognition client 140 is shown in FIG. 1 as a separate module from the application 108, alternatively the speech recognition client 140 may be part of the application 108. The application 108 provides the speech 104 to the speech recognition client 140. Alternatively, the application 108 may process the speech 104 in some way and provide the processed version of the speech 104, or other data derived from the speech, to the speech recognition client 140. The speech recognition client 140 itself may process the speech 104 (in addition to or instead of any processing performed on the speech by the application 108) in preparation for transmitting the speech 104 for recognition.
- The speech recognition client 140 transmits the speech 104 over a network 116 to a server-side speech recognition engine 120 located on a server 118 (step 204). Although the client 140 may transmit the entire speech 104 to the server 118 using a single server configuration, doing so may produce suboptimal results. To improve recognition accuracy or change the context of the speech recognition engine 120, the client 140 may instead reconfigure the speech recognition engine 120 at various points during transmission of the speech 104, and therefore at various points during the speech recognition engine's recognition of the speech 104. In general, configuration commands transmitted by the client 140 to the speech recognition engine 120 set the expectations of the recognizer 120 regarding the context and/or content of the speech that is to follow. Various prior art systems perform this configuration function by configuring the server-side recognition engine with an initial configuration, then sending some of the speech to the server, then reconfiguring the server-side recognition engine, then sending more of the speech, and so on. This enables the server-side recognition engine to recognize different portions of the speech with configurations and in contexts that are designed to produce better results for later portions of the speech than would have been produced using the initial configuration.
- It is undesirable, however, to require the speech recognition client 140 to wait to receive an acknowledgement from the server 118 that the previous reconfiguration command has been processed by the server 118 before sending the next portion of the speech 104 to the server 118, because such a requirement could introduce a significant delay into the recognition of the speech 104, particularly if the network connection is slow and/or unreliable. It is also undesirable to stop server-side processing of the speech until the server receives instructions from the client-side application 108 on how to process subsequent speech. In prior art systems, however, the server needs to stop processing speech until it receives such instructions, such as reconfiguration commands, from the client.
- Embodiments of the present invention address these and other problems as follows. The speech recognition client 140 transmits the speech 104 to the server 118 in a speech stream 110 over the network 116 (FIG. 2, step 204). As shown in FIG. 3, the speech stream 110 may be divided into segments 302 a-e, each of which may represent a portion of the speech 104 (e.g., 150-250 ms of the speech 104). Sending the speech 104 in segments enables the speech recognition client 140 to transmit portions of the speech 104 to the server 118 relatively soon after those portions become available to the speech recognition client 140, thereby enabling the recognizer 120 to begin recognizing those portions with minimal delay. The application 108 may, for example, send the first segment 302 a immediately after it becomes available, even as the second segment 302 b is being generated. Furthermore, the client 140 may transmit individual portions in the speech stream 110 to the server 118 without using a standing connection (e.g., socket). As a result, a connectionless or stateless protocol, such as HTTP, may be used by the speech recognition client 140 to transmit the speech stream 110 to the server 118.
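As an illustration of this segment-by-segment transmission over a stateless protocol (not the patent's actual code), a client might POST each segment as soon as it is captured. The endpoint, the header names, and the use of the requests library are assumptions.

```python
import requests  # assumed HTTP client library

SEGMENT_MS = 200  # within the 150-250 ms range mentioned above

def stream_speech(segments, url: str) -> None:
    """POST each speech segment as soon as it is available, with no
    standing socket between requests."""
    for index, audio_bytes in enumerate(segments):
        requests.post(
            url,
            data=audio_bytes,
            headers={"Content-Type": "application/octet-stream",
                     "X-Segment-Index": str(index)},
            timeout=5,
        )
```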
FIG. 2A for ease of illustration, in practice thespeech stream 110 may contain any number of segments, which may grow as theuser 102 continues to speak. Theapplication 108 may use any procedure to divide thespeech 104 into segments, or to stream thespeech 104 to theserver 118 over, for example, an HTTP connection. - Each of the speech segments 302 a-e contains
- Each of the speech segments 302 a-e contains data 304 a representing a corresponding portion of the speech 104 of the user 102. Such speech data 304 a may be represented in any appropriate format. Each of the speech segments 302 a-e may contain other information, such as the start time 304 b and end time 304 c of the corresponding speech data 304 a, and a tag 304 d which will be described in more detail below. The particular fields 304 a-d illustrated in FIG. 3 are merely examples and do not constitute limitations of the present invention.
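For concreteness, the fields 304 a-d might be modeled as follows; the field names and types are assumptions made for this sketch.

```python
from dataclasses import dataclass

@dataclass
class SpeechSegment:
    """Sketch of the per-segment fields 304 a-d."""
    data: bytes         # 304 a: audio for this portion of the speech
    start_time_ms: int  # 304 b: start time of the speech data
    end_time_ms: int    # 304 c: end time of the speech data
    min_config_id: int  # 304 d: tag naming the minimum configuration
                        # state ID required before recognition may begin
```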
- In general, the server-side recognizer 120 queues segments from the speech stream 110 into a first-in first-out processing queue 124 at the server 118 (FIG. 2, step 216). With certain exceptions that will be described in more detail below, the server-side recognizer 120 pulls segments from the processing queue 124 as soon as possible after they become available and performs speech recognition on those segments to produce speech recognition results (step 218), which the server 120 queues into a first-in first-out output queue 134 (step 220).
- The application 108, through the speech recognition client 140, may also send a control stream 112 to the server-side recognizer 120 over the network 116 as part of step 204. As shown in FIG. 4, the control stream 112 may include control messages 402a-c, transmitted in sequence to the recognizer 120. Although only three representative control messages 402a-c are shown in FIG. 4 for ease of illustration, in practice the control stream 112 may contain any number of control messages. As will be described in more detail below, each of the control messages 402a-c may contain a plurality of fields, such as a command field 404a for specifying a command to be executed by the server-side recognizer 120, a configuration object field 404b for specifying a configuration object, and a timeout value field 404c for specifying a timeout value. The particular fields 404a-c illustrated in FIG. 4 are merely examples and do not constitute limitations of the present invention.
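- A control message could be modeled analogously; again this is an illustrative sketch, not the disclosed format.
```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ControlMessage:
    command: str     # command field 404a, e.g. "DecodeNext"
    config: Any      # configuration object field 404b
    timeout_ms: int  # timeout value field 404c
```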
- As shown in FIG. 1, the speech recognition client 140 may treat the speech stream 110 and control stream 112 as two different streams of data (steps 206 and 208), transmitted in parallel from the speech recognition client 140 to the engine 120. However, assuming that only one output port is available to the speech recognition client 140 for communicating with the server 118, the client 106 may multiplex the speech stream 110 and the control stream 112 into a single data stream 114 transmitted to the server 118 (step 210). The server 118 demultiplexes the signal 114 into its constituent speech stream 110 and control stream 112 on the server side (step 214).
- Any multiplexing scheme may be used. For example, if HTTP is used as a transport mechanism, then an HTTP client 130 and HTTP server 132 may transparently perform the multiplexing and demultiplexing functions, respectively, on behalf of the client 106 and server 118. In other words, the speech recognition client 140 may treat the speech stream 110 and control stream 112 as two separate streams even though they are transmitted as a single multiplexed stream 114, because the HTTP client 130 multiplexes these two streams together automatically and transparently on behalf of the speech recognition client 140. Similarly, the server-side recognizer 120 may treat the speech stream 110 and control stream 112 as two separate streams even though they are received by the server 118 as a single multiplexed stream 114, because the HTTP server 132 demultiplexes the combined stream 114 into two streams automatically and transparently on behalf of the server-side recognizer 120.
- As mentioned above, by default the server-side recognizer 120 pulls speech segments from the processing queue 124 in sequence, performs speech recognition on them, and queues the speech recognition results into the output queue 134. The speech recognition client 140 receives the speech recognition results as follows. The speech recognition client 140 sends, in the control stream 112, a control message whose command field 404a calls a method referred to herein as “DecodeNext.” This method takes as parameters a configuration update object 404b (which specifies how a configuration state 126 of the server-side recognizer 120 is to be updated), and a real-time timeout value 404c. Although the speech recognition client 140 may send other commands in the control stream 112, only the DecodeNext command will be described here for ease of explanation.
- The server-side recognizer 120 pulls control messages from the control stream 112 in sequence, as soon as possible after they are received, and in parallel with processing the speech segments in the speech stream 110 (step 222). The server-side recognizer 120 executes the command in each control message in sequence (step 224).
- Referring to FIG. 2B, a flow chart is shown of a method performed by the server-side recognizer 120 to execute a DecodeNext control message in the control stream 112. If at least one speech recognition result is in the output queue 134 (step 240), the recognizer 120 sends the next result(s) 122 in the queue 134 to the speech recognition client 140 over the network 116 (step 242). If more than one result is available in the queue 134 at the time step 242 is performed, then all available results in the queue 134 are transmitted in the results stream 122 to the speech recognition client 140. (Although the results 122 are shown in FIG. 1 as being transmitted directly from the recognizer 120 to the speech recognition client 140 for ease of illustration, the results 122 may be transmitted by the HTTP server 132 over the network 116 and received by the HTTP client 130 at the client device 106.) The DecodeNext method then returns control to the application 108 (step 246), and terminates.
- Recall that the recognizer 120 is continuously performing speech recognition on the speech segments in the processing queue 124. Therefore, if the output queue 134 is empty when the recognizer 120 begins to execute the DecodeNext method, the DecodeNext method blocks until at least one result (e.g., one word) is available in the output queue 134, or until the amount of time specified by the timeout value 404c is reached (step 248). If a result appears in the output queue 134 before the timeout value 404c is reached, then the DecodeNext method transmits that result to the speech recognition client 140 (step 242), returns control to the speech recognition client 140 (step 246), and terminates. If no results appear in the output queue 134 before the timeout value 404c is reached, then the DecodeNext method informs the speech recognition client 140 that no results are available (step 244), returns control to the speech recognition client 140 (step 246), and terminates without returning any recognition results to the speech recognition client 140.
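- The flow of FIG. 2B amounts to a bounded blocking read on the output queue 134. The following Python sketch is one illustrative reading, with queue.Queue standing in for the output queue; the reply format is an assumption.
```python
import queue

output_queue = queue.Queue()  # stand-in for the output queue 134

def decode_next(timeout_ms):
    """Sketch of the server-side DecodeNext flow of FIG. 2B."""
    try:
        # Block until a result appears or the timeout value 404c
        # elapses (steps 240/248).
        results = [output_queue.get(timeout=timeout_ms / 1000.0)]
    except queue.Empty:
        return {"status": "no_results"}  # step 244
    # Step 242: also transmit any further results already available.
    while True:
        try:
            results.append(output_queue.get_nowait())
        except queue.Empty:
            break
    return {"status": "ok", "results": results}  # step 246 returns control
```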
- Once control returns to the speech recognition client 140 (after the DecodeNext method either returns a recognition result to the speech recognition client 140 or informs the speech recognition client 140 that no such results are available), the speech recognition client 140 may immediately send another DecodeNext message to the server 120 in an attempt to receive the next recognition result. The server 120 may process this DecodeNext message in the manner described above with respect to FIG. 2B. This process may repeat for subsequent recognition results. As a result, the control stream 112 may essentially always be blocking on the server side (in the loop represented by steps 240 and 248 of FIG. 2B), waiting for recognition results and returning them to the client application 108 as they become available.
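- Viewed from the client, this is a long-polling pattern: one DecodeNext request is kept outstanding at essentially all times. A minimal sketch, assuming a hypothetical send_decode_next transport helper that returns replies in the format of the previous sketch:
```python
def poll_results(send_decode_next, handle_result, timeout_ms=5000):
    """Keep one DecodeNext outstanding so results arrive as soon as the
    server produces them; a real client would add a stop condition."""
    while True:
        reply = send_decode_next(timeout_ms)
        if reply["status"] == "ok":
            for result in reply["results"]:
                handle_result(result)
        # On "no_results" (a server-side timeout), simply ask again.
```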
- The timeout value 404c may be chosen to be shorter than the timeout value of the underlying communication protocol used between the client 140 and server 120, such as the HTTP timeout value. As a result, if the client 140 receives notification from the server that no speech recognition results were produced before the timeout value 404c was reached, the client 140 may conclude that the timeout resulted from the inability of the server 120 to produce any speech recognition results before the timeout value 404c was reached, rather than from a network communication problem. Regardless of the reason for the timeout, however, the client 140 may send another DecodeNext message to the server 120 after such a timeout.
- The examples described above involve two fully unsynchronized data streams 110 and 112. However, it may be desirable to perform certain kinds of synchronization on the two streams 110 and 112. For example, it may be desirable for the speech recognition client 140 to ensure that the recognizer 120 is in a certain configuration state before beginning to recognize the speech stream 110. For example, the recognizer 120 may use the textual context of the current cursor position in a text edit window to guide recognition for text that is to be inserted at that cursor position. Since the cursor position may change frequently due to mouse or keyboard events, it may be useful for the application 108 to delay transmission of the text context to the server 120 until the user 102 presses the “start recording” button. In this case, the server-side recognizer 120 must be prevented from recognizing speech transmitted to the server 120 until the correct text context is received by the server 120 and the server 120 updates its configuration state 126 accordingly.
- As another example, some recognition results may trigger the need to change the configuration state 126 of the recognizer 120. As a result, when the server-side recognizer 120 generates such a result, it should wait until it is reconfigured before generating the next result. For example, if the recognizer 120 produces the result, “delete all,” the application 108 may next attempt to verify the user's intent by prompting the user 102 as follows: “Do you really want to delete all? Say YES or NO.” In this case, the application 108 (through the speech recognition client 140) should reconfigure the recognizer 120 with a “YES|NO” grammar before the recognizer 120 attempts to recognize the next segment in the speech stream 110.
- Such results may be obtained as follows, as shown by the flowchart of FIG. 2C, which illustrates a method which may be performed by the server-side recognizer 120 as part of performing speech recognition on the audio segments in the processing queue (FIG. 2A, step 218). Each recognizer configuration state is assigned a unique configuration state identifier (ID). The speech recognition client 140 assigns integer values to configuration state IDs, such that if ID1 > ID2, then the configuration state associated with ID1 is more recent than the configuration state associated with ID2. As described above with respect to FIG. 3, the speech recognition client 140 also provides tags 304d within each of the speech stream segments 302a-e which indicate the minimum configuration state ID that is required before recognition of that segment can begin.
- When the server-side recognizer 120 retrieves the next audio segment from the processing queue 124 (step 262), the recognizer 120 compares the configuration state ID 136 of the recognizer's current configuration state 126 to the minimum required configuration ID specified by the retrieved audio segment's tag 304d. If the current configuration ID 136 is at least as great as the minimum required configuration ID (step 264), then the server 120 begins recognizing the retrieved audio segment (step 266). Otherwise, the server 120 waits until its configuration ID 136 reaches the minimum required ID before it begins recognizing the current speech segment. Since the method of FIG. 2C may be performed in parallel with the method 200 of FIG. 2A, the configuration ID 136 of the server-side recognizer 120 may be updated by execution of control messages (step 224) even while the method of FIG. 2C blocks in the loop over step 264. Furthermore, note that even while the server 120 waits to process speech from the processing queue 124, the server 120 continues to receive additional segments from the speech stream 110 and queue those segments into the processing queue 124 (FIG. 2A, steps 214-216).
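- One way to realize the gating test of steps 262-266 is with a condition variable, sketched below; the class and method names are hypothetical, and the decoder itself is omitted.
```python
import threading

class GatedRecognizer:
    """Recognize a segment only once the configuration state ID 136 has
    caught up with the segment's tag 304d (the FIG. 2C gating test)."""
    def __init__(self):
        self.config_id = 0
        self._cond = threading.Condition()

    def apply_config(self, new_id):
        # Run when a control message updates the configuration state
        # (step 224); wakes any decode blocked in recognize().
        with self._cond:
            self.config_id = new_id
            self._cond.notify_all()

    def recognize(self, segment):
        with self._cond:
            # Step 264: block until the current configuration ID is at
            # least the segment's minimum required ID.
            self._cond.wait_for(lambda: self.config_id >= segment.tag)
        return self._decode(segment)  # step 266

    def _decode(self, segment):
        ...  # actual acoustic/language decoding omitted from this sketch
```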
- As another example of ways in which the speech stream 110 and control stream 112 may be synchronized, the application 108, through the speech recognition client 140, may instruct the recognizer 120 ahead of time to stop recognizing the speech stream 110, or take some other action, upon producing any recognition result or upon producing a recognition result satisfying certain criteria. Such criteria may effectively serve as breakpoints which the application 108, through the speech recognition client 140, may use to proactively control how far ahead the recognizer 120 produces recognition results.
- For example, consider a context in which the user 102 may issue any of the following voice commands: “delete,” “next,” “select all,” and “open file chooser.” In this context, a possible configuration, which may be specified by the configuration update object 404b, would be: <delete, continue>, <next, continue>, <select all, continue>, <open file chooser, stop>. Such a configuration instructs the server-side recognizer 120 to continue recognizing the speech stream 110 after obtaining the recognition result “delete,” “next,” or “select all,” but to stop recognizing the speech stream 110 after obtaining the recognition result “open file chooser.” The reason for configuring the recognizer 120 in this way is that production of the results “delete,” “next,” or “select all” does not require the recognizer 120 to be reconfigured before producing the next result. Therefore, the recognizer 120 may be allowed to continue recognizing the speech stream 110 after producing any of the results “delete,” “next,” or “select all,” thereby enabling the recognizer 120 to continue recognizing the speech 104 at full speed (see FIG. 2D, step 272). In contrast, production of the result “open file chooser” requires the recognizer 120 to be reconfigured (e.g., to expect results such as “OK,” “select file1.xml,” or “New Folder”) before recognizing any subsequent segments in the speech stream 110 (see FIG. 2D, step 274). Therefore, if the application 108, through the speech recognition client 140, is informed by the recognizer 120 that the result “open file chooser” was produced, the application 108, through the speech recognition client 140, may reconfigure the recognizer 120 with a configuration state that is appropriate for control of a file chooser. Enabling the application 108 to pre-configure the recognizer 120 in this way strikes a balance between minimizing the recognizer's response time and ensuring that the recognizer 120 uses the proper configuration state to recognize different portions of the speech 104.
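- In code, such a <result, action> table might look like the following sketch; the encoding and the pause mechanism are assumptions, not the disclosed format.
```python
# Hypothetical encoding of the <result, action> pairs described above.
BREAKPOINTS = {
    "delete": "continue",
    "next": "continue",
    "select all": "continue",
    "open file chooser": "stop",
}

def after_result(recognizer, result_text):
    """Keep decoding at full speed (step 272) unless the result is one
    that requires reconfiguration before decoding resumes (step 274)."""
    if BREAKPOINTS.get(result_text, "continue") == "stop":
        recognizer.pause_until_reconfigured()  # hypothetical method
```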
- Note that even if the recognizer 120 stops recognizing speech from the processing queue 124 as the result of a configuration “stop” command (step 274), the recognizer 120 may continue to receive speech segments from the speech stream 110 and to queue those segments into the processing queue 124 (FIG. 2A, steps 214, 216). As a result, additional segments of the speech stream 110 are ready to be processed as soon as the recognizer 120 resumes performing speech recognition.
- As mentioned above, the techniques disclosed herein may be used in conjunction with one-way communication protocols, such as HTTPS. Such communication protocols are simple to set up on wide area networks, but offer little guarantee against failures. Failures may occur during a request between the client 130 and server 132 that may leave the application 108 in an ambiguous state. For example, a problem may occur when either party (client application 108 or server-side recognizer 120) fails while in the midst of a call. Other problems may occur, for example, due to lost messages to or from the server 118, messages arriving at the client 106 or server 118 out of sequence, or messages mistakenly sent as duplicates. In general, in prior art systems it is the responsibility of the speech recognition client 140 to ensure the robustness of the overall system 100, since the underlying communications protocol does not guarantee such robustness.
- Embodiments of the present invention are robust against such problems by making all messages and events exchanged between the speech recognition client 140 and server-side recognizer 120 idempotent. An event is idempotent if multiple occurrences of the same event have the same effect as a single occurrence of the event. Therefore, if the speech recognition client 140 detects a failure, such as failure to transmit a command to the server-side recognizer 120, the speech recognition client 140 may re-transmit the command, either immediately or after a waiting period. The speech recognition client 140 and recognizer 120 may use a messaging application program interface (API) which guarantees that the retry will leave the system 100 in a coherent state.
- In particular, the API for the speech stream 110 forces the speech recognition client 140 to transmit the speech stream 110 in segments. Each segment may have a unique ID 304e in addition to the start byte index 304b (initially 0 for the first segment), and either an end byte index 304c or a segment size. The server-side recognizer 120 may acknowledge that it has received a segment by transmitting back the end byte index of the segment, which should normally be equal to the start byte plus the segment size. The end byte index transmitted by the server may, however, be a lower value if the server could not read the entire audio segment.
- The speech recognition client 140 then transfers the next segment starting where the server-side recognizer 120 left off, so that the new start byte index is equal to the end byte index returned by the recognizer 120. This process is repeated for the entire speech stream 110. If a message is lost (on the way to or from the server 118), the speech recognition client 140 repeats the transfer. If the server-side recognizer 120 did not previously receive that speech segment, then the server-side recognizer 120 will simply process the new data. If, however, the recognizer 120 previously processed that segment (such as may occur if the results were lost on the way back to the client 106), then the recognizer 120 may, for example, acknowledge receipt of the segment and drop it without processing it again.
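- The byte-index handshake can be sketched as follows; post_segment is a hypothetical transport call that returns the end byte index acknowledged by the server.
```python
def send_stream(audio, post_segment, segment_size=4096):
    """Idempotent transfer: always resume from the server-acknowledged
    end byte index, so a repeated transfer cannot corrupt state."""
    start = 0
    while start < len(audio):
        chunk = audio[start:start + segment_size]
        try:
            acked_end = post_segment(start, chunk)
        except IOError:
            continue       # lost message: simply repeat the transfer
        start = acked_end  # the server may have consumed less than sent
```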
- For the control stream 112, all control messages 402a-c may be resent to the server 118, since each of the messages may contain an ID for the current session. In the case of the DecodeNext method, the speech recognition client 140 may pass, as part of the DecodeNext method, a running unique identifier to identify the current method call. The server 118 keeps track of those identifiers to determine whether the current message being received in the control stream 112 is new or whether it has already been received and processed. If the current message is new, then the recognizer 120 processes the message normally, as described above. If the current message was previously processed, then the recognizer 120 may re-deliver the previously-returned results instead of generating them again.
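- On the server side, the running call identifier enables a simple replay cache, sketched below under the same assumptions as the earlier decode_next sketch.
```python
class ReplayCache:
    """Re-deliver cached replies for control messages already
    processed, instead of generating the results again."""
    def __init__(self, recognizer):
        self.recognizer = recognizer
        self.replies = {}  # call identifier -> previously returned reply

    def handle_decode_next(self, call_id, timeout_ms):
        if call_id in self.replies:
            return self.replies[call_id]  # duplicate call: replay reply
        reply = self.recognizer.decode_next(timeout_ms)
        self.replies[call_id] = reply
        return reply
```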
- If one of the control messages 402a-c is sent to the server 118 and the server 118 does not acknowledge receipt of the control message, the client 140 may store the control message. When the client 140 has a second control message to send to the server 118, the client 140 may send both the first (unacknowledged) control message and the second control message to the server 118. The client 140 may alternatively achieve the same result by combining the state changes represented by the first and second control messages into a single control message, which the client 140 may then transmit to the server 118. The client 140 may combine any number of control messages together into a single control message in this way until such messages are acknowledged by the server 118. Similarly, the server 118 may combine speech recognition results which have not been acknowledged by the client 140 into individual results in the results stream 122 until such results are acknowledged by the client.
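- Combining unacknowledged state changes might be sketched as below, modeling each configuration update as a dict; this representation is an assumption, which the patent leaves open.
```python
def coalesce_controls(unacked_updates, new_update):
    """Fold unacknowledged configuration updates into the newest one so
    a single control message carries the combined state changes."""
    combined = {}
    for update in unacked_updates:  # oldest first
        combined.update(update)
    combined.update(new_update)     # newest values win
    return combined
```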
- Among the advantages of the invention are one or more of the following. Embodiments of the present invention enable speech recognition to be distributed anywhere on the Internet, without requiring any special network. In particular, the techniques disclosed herein may operate over a one-way communication protocol, such as HTTP, thereby enabling operation even in restrictive environments in which clients are limited to engaging only in outbound (one-way) communications. As a result, embodiments of the present invention are broadly useful in conjunction with a wide variety of networks without requiring security to be sacrificed. Furthermore, the techniques disclosed herein may reuse existing web security mechanisms (such as SSL and, by extension, HTTPS) to provide secure communications between client 106 and server 118.
- As mentioned above, one common restriction imposed on clients is that they may only use a limited range of outbound ports to communicate with external servers. Embodiments of the present invention may be implemented in such systems by multiplexing the speech stream 110 and the control stream 112 into a single stream 114 that can be transmitted through a single port.
- Furthermore, outgoing communication may be required to be encrypted. For example, clients often are allowed to use only the standard secure, encrypted HTTPS port (port 443). Embodiments of the present invention can work over either a standard (unsecured) HTTP port or a secured HTTPS port for all of their communication needs, both audio transfer 110 and control flow 112. As a result, the techniques disclosed herein may be used in conjunction with systems which allow clients to communicate using unsecured HTTP and systems which require or allow clients to communicate using secured HTTPS.
- The techniques disclosed herein are also resilient to intermittent network failures because they employ a communications protocol in which messages are idempotent. This is particularly useful when embodiments of the present invention are used in conjunction with networks, such as WANs, in which network drops and spikes are common. Although such events may cause conventional server-side speech recognition systems to fail, they do not affect results produced by embodiments of the present invention (except possibly by increasing turnaround time).
- Embodiments of the present invention enable
speech 104 to be transmitted from client 106 to server 118 as fast as the network 116 will allow, even if the server 118 cannot process that speech continuously. Furthermore, the server-side recognizer 120 may process speech from the processing queue 124 as quickly as possible even when the network 116 cannot transmit the results and/or the application 108 is not ready to receive the results. These and other features of embodiments of the present invention enable speech and speech recognition results to be transmitted and processed as quickly as individual components of the system 100 will allow, such that problems with individual components of the system 100 have minimal impact on the performance of the other components of the system 100.
- Furthermore, embodiments of the present invention enable the server-side recognizer 120 to process speech as quickly as possible but without getting too far ahead of the client application 108. As described above, the application 108 may use control messages in the control stream 112 to issue reconfiguration commands to the recognizer 120 which cause the recognizer 120 to reconfigure itself to recognize speech in the appropriate configuration state, and to temporarily halt recognition upon the occurrence of predetermined conditions so that the application 108 can reconfigure the state of the recognizer 120 appropriately. Such techniques enable speech recognition to be performed as quickly as possible without being performed using the wrong configuration state.
- It is to be understood that although the invention has been described above in terms of particular embodiments, the foregoing embodiments are provided as illustrative only, and do not limit or define the scope of the invention. Various other embodiments, including but not limited to the following, are also within the scope of the claims. For example, elements and components described herein may be further divided into additional components or joined together to form fewer components for performing the same functions.
- As described above, various methods performed by embodiments of the present invention may be performed in parallel with each other, in whole or in part. Those having ordinary skill in the art will appreciate how to perform particular portions of the methods disclosed herein to achieve the stated benefits, in various combinations.
- The techniques described above may be implemented, for example, in hardware, software, firmware, or any combination thereof. The techniques described above may be implemented in one or more computer programs executing on a programmable computer including a processor, a storage medium readable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code may be applied to input entered using the input device to perform the functions described and to generate output. The output may be provided to one or more output devices.
- Each computer program within the scope of the claims below may be implemented in any programming language, such as assembly language, machine language, a high-level procedural programming language, or an object-oriented programming language. The programming language may, for example, be a compiled or interpreted programming language.
- Each such computer program may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor. Method steps of the invention may be performed by a computer processor executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, the processor receives instructions and data from a read-only memory and/or a random access memory. Storage devices suitable for tangibly embodying computer program instructions include, for example, all forms of non-volatile memory, such as semiconductor memory devices, including EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROMs. Any of the foregoing may be supplemented by, or incorporated in, specially-designed ASICs (application-specific integrated circuits) or FPGAs (Field-Programmable Gate Arrays). A computer can generally also receive programs and data from a storage medium such as an internal disk (not shown) or a removable disk. These elements will also be found in a conventional desktop or workstation computer as well as other computers suitable for executing computer programs implementing the methods described herein, which may be used in conjunction with any digital print engine or marking engine, display monitor, or other raster output device capable of producing color or gray scale pixels on paper, film, display screen, or other output medium.
Claims (1)
- 2. A computer-implemented method comprising:
(A) at a speech recognition server:
(A)(1) receiving a speech stream and a control stream from a client;
(A)(2) using an automatic speech recognition engine in a first configuration state to recognize a first portion of the speech stream and thereby to produce a first speech recognition result;
(B) at the speech recognition server,
(B)(1) analyzing a first control message in the control stream to determine whether the first speech recognition result satisfies a first predetermined criterion specified by the control stream;
(B)(2) waiting until the automatic speech recognition engine has been reconfigured before continuing to (C); and
(C) at the speech recognition server, responsive to receiving a second control message, using the automatic speech recognition engine in a second configuration state to recognize a second portion of the speech stream and thereby to produce a second speech recognition result.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/957,684 US20140163974A1 (en) | 2008-08-29 | 2013-08-02 | Distributed Speech Recognition Using One Way Communication |
US14/627,560 US9502033B2 (en) | 2008-08-29 | 2015-02-20 | Distributed speech recognition using one way communication |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US9322108P | 2008-08-29 | 2008-08-29 | |
US12/550,381 US8019608B2 (en) | 2008-08-29 | 2009-08-30 | Distributed speech recognition using one way communication |
US13/196,188 US8249878B2 (en) | 2008-08-29 | 2011-08-02 | Distributed speech recognition using one way communication |
US13/563,998 US8504372B2 (en) | 2008-08-29 | 2012-08-01 | Distributed speech recognition using one way communication |
US13/957,684 US20140163974A1 (en) | 2008-08-29 | 2013-08-02 | Distributed Speech Recognition Using One Way Communication |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/563,998 Continuation US8504372B2 (en) | 2008-08-29 | 2012-08-01 | Distributed speech recognition using one way communication |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/627,560 Continuation US9502033B2 (en) | 2008-08-29 | 2015-02-20 | Distributed speech recognition using one way communication |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140163974A1 true US20140163974A1 (en) | 2014-06-12 |
Family
ID=41722339
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/550,381 Active 2029-12-14 US8019608B2 (en) | 2008-08-29 | 2009-08-30 | Distributed speech recognition using one way communication |
US13/196,188 Active US8249878B2 (en) | 2008-08-29 | 2011-08-02 | Distributed speech recognition using one way communication |
US13/563,998 Active US8504372B2 (en) | 2008-08-29 | 2012-08-01 | Distributed speech recognition using one way communication |
US13/957,684 Abandoned US20140163974A1 (en) | 2008-08-29 | 2013-08-02 | Distributed Speech Recognition Using One Way Communication |
US14/627,560 Active US9502033B2 (en) | 2008-08-29 | 2015-02-20 | Distributed speech recognition using one way communication |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/550,381 Active 2029-12-14 US8019608B2 (en) | 2008-08-29 | 2009-08-30 | Distributed speech recognition using one way communication |
US13/196,188 Active US8249878B2 (en) | 2008-08-29 | 2011-08-02 | Distributed speech recognition using one way communication |
US13/563,998 Active US8504372B2 (en) | 2008-08-29 | 2012-08-01 | Distributed speech recognition using one way communication |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/627,560 Active US9502033B2 (en) | 2008-08-29 | 2015-02-20 | Distributed speech recognition using one way communication |
Country Status (8)
Country | Link |
---|---|
US (5) | US8019608B2 (en) |
EP (1) | EP2321821B1 (en) |
JP (2) | JP5588986B2 (en) |
CA (1) | CA2732256C (en) |
DK (1) | DK2321821T3 (en) |
ES (1) | ES2446667T3 (en) |
PL (1) | PL2321821T3 (en) |
WO (1) | WO2010025441A2 (en) |
Families Citing this family (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7933777B2 (en) * | 2008-08-29 | 2011-04-26 | Multimodal Technologies, Inc. | Hybrid speech recognition |
US8019608B2 (en) * | 2008-08-29 | 2011-09-13 | Multimodal Technologies, Inc. | Distributed speech recognition using one way communication |
US9570078B2 (en) * | 2009-06-19 | 2017-02-14 | Microsoft Technology Licensing, Llc | Techniques to provide a standard interface to a speech recognition platform |
US20110320953A1 (en) * | 2009-12-18 | 2011-12-29 | Nokia Corporation | Method and apparatus for projecting a user interface via partition streaming |
US20110184740A1 (en) * | 2010-01-26 | 2011-07-28 | Google Inc. | Integration of Embedded and Network Speech Recognizers |
US9634855B2 (en) | 2010-05-13 | 2017-04-25 | Alexander Poltorak | Electronic personal interactive device that determines topics of interest using a conversational agent |
US8812321B2 (en) * | 2010-09-30 | 2014-08-19 | At&T Intellectual Property I, L.P. | System and method for combining speech recognition outputs from a plurality of domain-specific speech recognizers via machine learning |
US8959102B2 (en) | 2010-10-08 | 2015-02-17 | Mmodal Ip Llc | Structured searching of dynamic structured document corpuses |
KR101208166B1 (en) * | 2010-12-16 | 2012-12-04 | 엔에이치엔(주) | Speech recognition client system, speech recognition server system and speech recognition method for processing speech recognition in online |
US9009041B2 (en) | 2011-07-26 | 2015-04-14 | Nuance Communications, Inc. | Systems and methods for improving the accuracy of a transcription using auxiliary data such as personal data |
US8924219B1 (en) | 2011-09-30 | 2014-12-30 | Google Inc. | Multi hotword robust continuous voice command detection in mobile devices |
US8775175B1 (en) * | 2012-06-01 | 2014-07-08 | Google Inc. | Performing dictation correction |
US8996374B2 (en) * | 2012-06-06 | 2015-03-31 | Spansion Llc | Senone scoring for multiple input streams |
US9430465B2 (en) * | 2013-05-13 | 2016-08-30 | Facebook, Inc. | Hybrid, offline/online speech translation system |
US20140379334A1 (en) * | 2013-06-20 | 2014-12-25 | Qnx Software Systems Limited | Natural language understanding automatic speech recognition post processing |
US9747899B2 (en) | 2013-06-27 | 2017-08-29 | Amazon Technologies, Inc. | Detecting self-generated wake expressions |
EP2851896A1 (en) | 2013-09-19 | 2015-03-25 | Maluuba Inc. | Speech recognition using phoneme matching |
WO2015041892A1 (en) * | 2013-09-20 | 2015-03-26 | Rawles Llc | Local and remote speech processing |
EP2866153A1 (en) * | 2013-10-22 | 2015-04-29 | Agfa Healthcare | Speech recognition method and system with simultaneous text editing |
US9601108B2 (en) | 2014-01-17 | 2017-03-21 | Microsoft Technology Licensing, Llc | Incorporating an exogenous large-vocabulary model into rule-based speech recognition |
US20180270350A1 (en) | 2014-02-28 | 2018-09-20 | Ultratec, Inc. | Semiautomated relay method and apparatus |
US10748523B2 (en) | 2014-02-28 | 2020-08-18 | Ultratec, Inc. | Semiautomated relay method and apparatus |
US10389876B2 (en) | 2014-02-28 | 2019-08-20 | Ultratec, Inc. | Semiautomated relay method and apparatus |
US10878721B2 (en) | 2014-02-28 | 2020-12-29 | Ultratec, Inc. | Semiautomated relay method and apparatus |
US20180034961A1 (en) | 2014-02-28 | 2018-02-01 | Ultratec, Inc. | Semiautomated Relay Method and Apparatus |
US10749989B2 (en) | 2014-04-01 | 2020-08-18 | Microsoft Technology Licensing Llc | Hybrid client/server architecture for parallel processing |
JP6150077B2 (en) * | 2014-10-31 | 2017-06-21 | マツダ株式会社 | Spoken dialogue device for vehicles |
WO2016129188A1 (en) * | 2015-02-10 | 2016-08-18 | Necソリューションイノベータ株式会社 | Speech recognition processing device, speech recognition processing method, and program |
EP3089159B1 (en) | 2015-04-28 | 2019-08-28 | Google LLC | Correcting voice recognition using selective re-speak |
WO2017014721A1 (en) * | 2015-07-17 | 2017-01-26 | Nuance Communications, Inc. | Reduced latency speech recognition system using multiple recognizers |
US9715498B2 (en) | 2015-08-31 | 2017-07-25 | Microsoft Technology Licensing, Llc | Distributed server system for language understanding |
US9443519B1 (en) | 2015-09-09 | 2016-09-13 | Google Inc. | Reducing latency caused by switching input modalities |
CN107452383B (en) * | 2016-05-31 | 2021-10-26 | 华为终端有限公司 | Information processing method, server, terminal and information processing system |
US10971157B2 (en) | 2017-01-11 | 2021-04-06 | Nuance Communications, Inc. | Methods and apparatus for hybrid speech recognition processing |
US10410635B2 (en) * | 2017-06-09 | 2019-09-10 | Soundhound, Inc. | Dual mode speech recognition |
CN109285548A (en) * | 2017-07-19 | 2019-01-29 | 阿里巴巴集团控股有限公司 | Information processing method, system, electronic equipment and computer storage medium |
US10796687B2 (en) | 2017-09-06 | 2020-10-06 | Amazon Technologies, Inc. | Voice-activated selective memory for voice-capturing devices |
KR102552486B1 (en) * | 2017-11-02 | 2023-07-06 | 현대자동차주식회사 | Apparatus and method for recoginizing voice in vehicle |
US10388272B1 (en) | 2018-12-04 | 2019-08-20 | Sorenson Ip Holdings, Llc | Training speech recognition systems using word sequences |
US10573312B1 (en) | 2018-12-04 | 2020-02-25 | Sorenson Ip Holdings, Llc | Transcription generation from multiple speech recognition systems |
US11017778B1 (en) | 2018-12-04 | 2021-05-25 | Sorenson Ip Holdings, Llc | Switching between speech recognition systems |
US11170761B2 (en) | 2018-12-04 | 2021-11-09 | Sorenson Ip Holdings, Llc | Training of speech recognition systems |
US11398238B2 (en) * | 2019-06-07 | 2022-07-26 | Lg Electronics Inc. | Speech recognition method in edge computing device |
US11539900B2 (en) | 2020-02-21 | 2022-12-27 | Ultratec, Inc. | Caption modification and augmentation systems and methods for use by hearing assisted user |
US11488604B2 (en) | 2020-08-19 | 2022-11-01 | Sorenson Ip Holdings, Llc | Transcription of audio |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6327568B1 (en) * | 1997-11-14 | 2001-12-04 | U.S. Philips Corporation | Distributed hardware sharing for speech processing |
US6487534B1 (en) * | 1999-03-26 | 2002-11-26 | U.S. Philips Corporation | Distributed client-server speech recognition system |
US6801604B2 (en) * | 2001-06-25 | 2004-10-05 | International Business Machines Corporation | Universal IP-based and scalable architectures across conversational applications using web services for speech and audio processing resources |
US7035797B2 (en) * | 2001-12-14 | 2006-04-25 | Nokia Corporation | Data-driven filtering of cepstral time trajectories for robust speech recognition |
US7376556B2 (en) * | 1999-11-12 | 2008-05-20 | Phoenix Solutions, Inc. | Method for processing speech signal features for streaming transport |
US20080255848A1 (en) * | 2007-04-11 | 2008-10-16 | Huawei Technologies Co., Ltd. | Speech Recognition Method and System and Speech Recognition Server |
US20090204410A1 (en) * | 2008-02-13 | 2009-08-13 | Sensory, Incorporated | Voice interface and search for electronic devices including bluetooth headsets and remote systems |
US7647225B2 (en) * | 1999-11-12 | 2010-01-12 | Phoenix Solutions, Inc. | Adjustable resource based speech recognition system |
US7774204B2 (en) * | 2003-09-25 | 2010-08-10 | Sensory, Inc. | System and method for controlling the operation of a device by voice commands |
US8019608B2 (en) * | 2008-08-29 | 2011-09-13 | Multimodal Technologies, Inc. | Distributed speech recognition using one way communication |
Family Cites Families (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6078886A (en) * | 1997-04-14 | 2000-06-20 | At&T Corporation | System and method for providing remote automatic speech recognition services via a packet network |
US6298326B1 (en) | 1999-05-13 | 2001-10-02 | Alan Feller | Off-site data entry system |
US7330815B1 (en) * | 1999-10-04 | 2008-02-12 | Globalenglish Corporation | Method and system for network-based speech recognition |
US6963837B1 (en) | 1999-10-06 | 2005-11-08 | Multimodal Technologies, Inc. | Attribute-based word modeling |
JP3728177B2 (en) | 2000-05-24 | 2005-12-21 | キヤノン株式会社 | Audio processing system, apparatus, method, and storage medium |
JP3581648B2 (en) | 2000-11-27 | 2004-10-27 | キヤノン株式会社 | Speech recognition system, information processing device, control method thereof, and program |
KR20020049150A (en) | 2000-12-19 | 2002-06-26 | 이계철 | Protocol utilization to control speech recognition and speech synthesis |
US6728677B1 (en) * | 2001-01-31 | 2004-04-27 | Nuance Communications | Method and system for dynamically improving performance of speech recognition or other speech processing systems |
CN1266625C (en) * | 2001-05-04 | 2006-07-26 | 微软公司 | Server for identifying WEB invocation |
US6996525B2 (en) * | 2001-06-15 | 2006-02-07 | Intel Corporation | Selecting one of multiple speech recognizers in a system based on performance predictions resulting from experience |
JP2003140691A (en) * | 2001-11-07 | 2003-05-16 | Hitachi Ltd | Voice recognition device |
JP3826032B2 (en) * | 2001-12-28 | 2006-09-27 | 株式会社東芝 | Speech recognition apparatus, speech recognition method, and speech recognition program |
US7266127B2 (en) | 2002-02-08 | 2007-09-04 | Lucent Technologies Inc. | Method and system to compensate for the effects of packet delays on speech quality in a Voice-over IP system |
JP2004118325A (en) * | 2002-09-24 | 2004-04-15 | Sega Corp | Data communication method and data communication system |
US7092880B2 (en) * | 2002-09-25 | 2006-08-15 | Siemens Communications, Inc. | Apparatus and method for quantitative measurement of voice quality in packet network environments |
US7016844B2 (en) | 2002-09-26 | 2006-03-21 | Core Mobility, Inc. | System and method for online transcription services |
US7539086B2 (en) | 2002-10-23 | 2009-05-26 | J2 Global Communications, Inc. | System and method for the secure, real-time, high accuracy conversion of general-quality speech into text |
US7774694B2 (en) | 2002-12-06 | 2010-08-10 | 3M Innovative Properties Company | Method and system for server-based sequential insertion processing of speech recognition results |
US7444285B2 (en) | 2002-12-06 | 2008-10-28 | 3M Innovative Properties Company | Method and system for sequential insertion of speech recognition results to facilitate deferred transcription services |
TWI245259B (en) * | 2002-12-20 | 2005-12-11 | Ibm | Sensor based speech recognizer selection, adaptation and combination |
EP1493993A1 (en) * | 2003-06-30 | 2005-01-05 | Harman Becker Automotive Systems GmbH | Method and device for controlling a speech dialog system |
US20050102140A1 (en) | 2003-11-12 | 2005-05-12 | Joel Davne | Method and system for real-time transcription and correction using an electronic communication environment |
US7844464B2 (en) | 2005-07-22 | 2010-11-30 | Multimodal Technologies, Inc. | Content-based audio playback emphasis |
US8412521B2 (en) | 2004-08-20 | 2013-04-02 | Multimodal Technologies, Llc | Discriminative training of document transcription system |
US7584103B2 (en) | 2004-08-20 | 2009-09-01 | Multimodal Technologies, Inc. | Automated extraction of semantic content and generation of a structured document from speech |
US20130304453A9 (en) | 2004-08-20 | 2013-11-14 | Juergen Fritsch | Automated Extraction of Semantic Content and Generation of a Structured Document from Speech |
US20060095266A1 (en) * | 2004-11-01 | 2006-05-04 | Mca Nulty Megan | Roaming user profiles for speech recognition |
US7502741B2 (en) | 2005-02-23 | 2009-03-10 | Multimodal Technologies, Inc. | Audio signal de-identification |
US7640158B2 (en) | 2005-11-08 | 2009-12-29 | Multimodal Technologies, Inc. | Automatic detection and application of editing patterns in draft documents |
JP4882537B2 (en) * | 2006-06-20 | 2012-02-22 | 株式会社日立製作所 | Request control method by timer cooperation |
WO2007150005A2 (en) | 2006-06-22 | 2007-12-27 | Multimodal Technologies, Inc. | Automatic decision support |
WO2008064358A2 (en) | 2006-11-22 | 2008-05-29 | Multimodal Technologies, Inc. | Recognition of speech in editable audio streams |
JP2008145676A (en) * | 2006-12-08 | 2008-06-26 | Denso Corp | Speech recognition device and vehicle navigation device |
US7933777B2 (en) | 2008-08-29 | 2011-04-26 | Multimodal Technologies, Inc. | Hybrid speech recognition |
CA2680304C (en) | 2008-09-25 | 2017-08-22 | Multimodal Technologies, Inc. | Decoding-time prediction of non-verbalized tokens |
US9280541B2 (en) * | 2012-01-09 | 2016-03-08 | Five9, Inc. | QR data proxy and protocol gateway |
US8880398B1 (en) * | 2012-07-13 | 2014-11-04 | Google Inc. | Localized speech recognition with offload |
- 2009
- 2009-08-30 US US12/550,381 patent/US8019608B2/en active Active
- 2009-08-31 CA CA2732256A patent/CA2732256C/en active Active
- 2009-08-31 JP JP2011525266A patent/JP5588986B2/en active Active
- 2009-08-31 ES ES09810710.5T patent/ES2446667T3/en active Active
- 2009-08-31 EP EP09810710.5A patent/EP2321821B1/en active Active
- 2009-08-31 PL PL09810710T patent/PL2321821T3/en unknown
- 2009-08-31 WO PCT/US2009/055480 patent/WO2010025441A2/en active Application Filing
- 2009-08-31 DK DK09810710.5T patent/DK2321821T3/en active
- 2011
- 2011-08-02 US US13/196,188 patent/US8249878B2/en active Active
- 2012
- 2012-08-01 US US13/563,998 patent/US8504372B2/en active Active
- 2013
- 2013-08-02 US US13/957,684 patent/US20140163974A1/en not_active Abandoned
- 2013-11-05 JP JP2013229237A patent/JP5883841B2/en active Active
- 2015
- 2015-02-20 US US14/627,560 patent/US9502033B2/en active Active
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6327568B1 (en) * | 1997-11-14 | 2001-12-04 | U.S. Philips Corporation | Distributed hardware sharing for speech processing |
US6487534B1 (en) * | 1999-03-26 | 2002-11-26 | U.S. Philips Corporation | Distributed client-server speech recognition system |
US7647225B2 (en) * | 1999-11-12 | 2010-01-12 | Phoenix Solutions, Inc. | Adjustable resource based speech recognition system |
US7376556B2 (en) * | 1999-11-12 | 2008-05-20 | Phoenix Solutions, Inc. | Method for processing speech signal features for streaming transport |
US7729904B2 (en) * | 1999-11-12 | 2010-06-01 | Phoenix Solutions, Inc. | Partial speech processing device and method for use in distributed systems |
US7672841B2 (en) * | 1999-11-12 | 2010-03-02 | Phoenix Solutions, Inc. | Method for processing speech data for a distributed recognition system |
US6801604B2 (en) * | 2001-06-25 | 2004-10-05 | International Business Machines Corporation | Universal IP-based and scalable architectures across conversational applications using web services for speech and audio processing resources |
US7035797B2 (en) * | 2001-12-14 | 2006-04-25 | Nokia Corporation | Data-driven filtering of cepstral time trajectories for robust speech recognition |
US7774204B2 (en) * | 2003-09-25 | 2010-08-10 | Sensory, Inc. | System and method for controlling the operation of a device by voice commands |
US20080255848A1 (en) * | 2007-04-11 | 2008-10-16 | Huawei Technologies Co., Ltd. | Speech Recognition Method and System and Speech Recognition Server |
US20090204409A1 (en) * | 2008-02-13 | 2009-08-13 | Sensory, Incorporated | Voice Interface and Search for Electronic Devices including Bluetooth Headsets and Remote Systems |
US20090204410A1 (en) * | 2008-02-13 | 2009-08-13 | Sensory, Incorporated | Voice interface and search for electronic devices including bluetooth headsets and remote systems |
US8019608B2 (en) * | 2008-08-29 | 2011-09-13 | Multimodal Technologies, Inc. | Distributed speech recognition using one way communication |
US8249878B2 (en) * | 2008-08-29 | 2012-08-21 | Multimodal Technologies, Llc | Distributed speech recognition using one way communication |
US8504372B2 (en) * | 2008-08-29 | 2013-08-06 | Mmodal Ip Llc | Distributed speech recognition using one way communication |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9910840B2 (en) | 2015-04-03 | 2018-03-06 | Microsoft Technology Licensing, Llc | Annotating notes from passive recording with categories |
Also Published As
Publication number | Publication date |
---|---|
EP2321821B1 (en) | 2013-11-13 |
US20100057451A1 (en) | 2010-03-04 |
CA2732256A1 (en) | 2010-03-04 |
PL2321821T3 (en) | 2014-04-30 |
DK2321821T3 (en) | 2014-02-17 |
US9502033B2 (en) | 2016-11-22 |
JP2012501481A (en) | 2012-01-19 |
CA2732256C (en) | 2017-11-07 |
US8019608B2 (en) | 2011-09-13 |
ES2446667T3 (en) | 2014-03-10 |
EP2321821A4 (en) | 2012-11-28 |
US20110288857A1 (en) | 2011-11-24 |
US8504372B2 (en) | 2013-08-06 |
US8249878B2 (en) | 2012-08-21 |
EP2321821A2 (en) | 2011-05-18 |
JP5883841B2 (en) | 2016-03-15 |
WO2010025441A2 (en) | 2010-03-04 |
JP2014056258A (en) | 2014-03-27 |
US20150170647A1 (en) | 2015-06-18 |
US20120296645A1 (en) | 2012-11-22 |
JP5588986B2 (en) | 2014-09-10 |
WO2010025441A3 (en) | 2010-06-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9502033B2 (en) | Distributed speech recognition using one way communication | |
US10409550B2 (en) | Voice control of interactive whiteboard appliances | |
US9666190B2 (en) | Speech recognition using loosely coupled components | |
JP6113008B2 (en) | Hybrid speech recognition | |
US8874447B2 (en) | Inferring switching conditions for switching between modalities in a speech application environment extended for interactive text exchanges | |
US8150689B2 (en) | Distributed dictation/transcription system | |
US20180211668A1 (en) | Reduced latency speech recognition system using multiple recognizers | |
KR20110117086A (en) | Markup language-based selection and utilization of recognizers for utterance processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MMODAL IP LLC, TENNESSEE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MULTIMODAL TECHNOLOGIES, LLC;REEL/FRAME:030933/0403 Effective date: 20121026 |
|
AS | Assignment |
Owner name: WELLS FARGO BANK, NATIONAL ASSOCIATION, AS AGENT, NEW YORK Free format text: SECURITY AGREEMENT;ASSIGNOR:MMODAL IP LLC;REEL/FRAME:034047/0527 Effective date: 20140731 Owner name: WELLS FARGO BANK, NATIONAL ASSOCIATION, AS AGENT, Free format text: SECURITY AGREEMENT;ASSIGNOR:MMODAL IP LLC;REEL/FRAME:034047/0527 Effective date: 20140731 |
|
AS | Assignment |
Owner name: CORTLAND CAPITAL MARKET SERVICES LLC, ILLINOIS Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:MMODAL IP LLC;REEL/FRAME:033958/0729 Effective date: 20140731 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: MMODAL IP LLC, TENNESSEE Free format text: CHANGE OF ADDRESS;ASSIGNOR:MMODAL IP LLC;REEL/FRAME:042271/0858 Effective date: 20140805 |
|
AS | Assignment |
Owner name: MMODAL IP LLC, TENNESSEE Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CORTLAND CAPITAL MARKET SERVICES LLC, AS ADMINISTRATIVE AGENT;REEL/FRAME:048211/0799 Effective date: 20190201 |
|
AS | Assignment |
Owner name: MEDQUIST CM LLC, TENNESSEE Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS AGENT;REEL/FRAME:048411/0712 Effective date: 20190201 Owner name: MULTIMODAL TECHNOLOGIES, LLC, TENNESSEE Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS AGENT;REEL/FRAME:048411/0712 Effective date: 20190201 Owner name: MMODAL MQ INC., TENNESSEE Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS AGENT;REEL/FRAME:048411/0712 Effective date: 20190201 Owner name: MMODAL IP LLC, TENNESSEE Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS AGENT;REEL/FRAME:048411/0712 Effective date: 20190201 Owner name: MEDQUIST OF DELAWARE, INC., TENNESSEE Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS AGENT;REEL/FRAME:048411/0712 Effective date: 20190201 |