US8612229B2 - Method and system for conveying an example in a natural language understanding application - Google Patents

Method and system for conveying an example in a natural language understanding application

Info

Publication number
US8612229B2
Authority
US
United States
Prior art keywords
nlu
routing destination
expected user
entry
user entry
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US11/300,799
Other versions
US20070143099A1
Inventor
Rajesh Balchandran
Linda M. Boyer
James R. Lewis
Brent D. Metz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Nuance Communications Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nuance Communications Inc
Priority to US11/300,799, priority critical, patent US8612229B2
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LEWIS, JAMES R., METZ, BRENT D., BALCHANDRAN, RAJESH, BOYER, LINDA M.
Publication of US20070143099A1
Assigned to NUANCE COMMUNICATIONS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION
Priority to US14/088,858, patent US9384190B2
Application granted
Publication of US8612229B2
Priority to US15/151,277, patent US10192543B2
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: NUANCE COMMUNICATIONS, INC.

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/183 Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • G10L2015/0638 Interactive procedures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2203/00 Aspects of automatic or semi-automatic exchanges
    • H04M2203/20 Aspects of automatic or semi-automatic exchanges related to features of supplementary services
    • H04M2203/2061 Language aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/487 Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M3/493 Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H04M3/4936 Speech interaction details

Definitions

  • the present invention relates to the field of natural language understanding (NLU) and, more particularly, to a method and system to facilitate user interaction in an NLU application.
  • NLU natural language understanding
  • Natural Language Understanding (NLU) systems have been increasingly utilized for interfacing with software applications, customer support systems, embedded devices, and voice interactive based machines. Most NLU systems are employed to interact with a user, to receive text or voice, and to determine what the user desires the machine to accomplish. NLU systems can interpret a text or spoken utterance for performing a programmatic action. Typically, a user speaks into the device or machine and the NLU application performs a responsive action. For example, the NLU application can interpret a caller's request and identify the most appropriate destination to route the caller to by recognizing content within the spoken request.
  • the NLU system can employ domain specific vocabularies to process a caller request for routing a caller to a destination.
  • a different NLU system can be used for different applications such as airline reservations, car rentals, hotel reservations, and other service based inquiry systems.
  • the NLU system can recognize phrases particular to a certain terminology or field. Accordingly, the NLU system can be trained to interpret certain phrases to improve interpretation performance.
  • the NLU system can be trained for specific phrases and sentences that are more representative of the requests callers may typically have with the service offering.
  • a high level of substantive content within the example sentences can be required within an NLU system for the NLU application to correctly interpret spoken requests from the caller.
  • the developer generally decides what sentences should be entered into the NLU database prior to knowing the target NLU application.
  • the developer also typically provides many different examples in anticipation of generally spoken caller requests since the developer does not know what requests to expect.
  • the invention disclosed herein provides a method and system for interacting with a natural language understanding (NLU) system.
  • the method can include the steps of entering at least one example sentence during development of a natural language understanding (NLU) application, and conveying the example sentence.
  • the example sentence is interpreted by an NLU model.
  • the method can further include presenting the example sentence in a help message of the NLU application as an example of what to say to interact with the NLU application.
  • the method can include presenting a failure dialog to the developer for displaying at least one example that failed to be properly interpreted.
  • the system can include a system for creating a Natural Language Understanding (NLU) application.
  • the system can include an example planner that prompts a developer for at least one example sentence, and a validation unit connected to the example planner to increase a likelihood that an NLU model correctly interprets the example sentence.
  • the system can include a help message to present at least one example of what a caller can say to interact with the NLU application.
  • the system can further include a failure dialog to the developer for displaying at least one example that failed to be properly interpreted.
  • FIG. 1 is a schematic depiction of a system for creating and conveying an example prompt in accordance with an embodiment of the inventive arrangements disclosed herein;
  • FIG. 2 is a flowchart for creating and conveying an example prompt in accordance with an embodiment of the inventive arrangements disclosed herein;
  • FIG. 3 is a flowchart illustrating a method for creating and conveying an example prompt in accordance with an embodiment of the inventive arrangements disclosed herein.
  • FIG. 1 is a schematic diagram illustrating an NLU System 100 for creating and conveying an example within an NLU application development environment 127 in accordance with an embodiment of the inventive arrangements disclosed.
  • the NLU system itself would work in conjunction with, but is not limited to, a speech recognition system and an Interactive Voice Response (IVR) platform (not shown in the figure) for speech-based interaction. Alternatively, it could also be deployed to function in, but is not limited to, a text mode.
  • the system 100 can include an external source 102, an example planner 110, a validation unit 120, and a help message 147.
  • the help message 147 can be presented within the context of an NLU application dialogue 149. In one arrangement a failure dialogue 169 can be presented to the developer 101.
  • the failure dialogue 169 can be presented for displaying at least one example that failed to be properly interpreted, and also presenting those example sentences that failed in a ranked order.
  • the failure dialogue can help the developer recognize what examples the application is capable of correctly interpreting and help the developer introduce examples that achieve unambiguous and correct interpretation.
  • the operative aspects of the embodiments of the invention are further described herein primarily in the context of performing call routing.
  • the invention also has utility in many other contexts, such as handling financial and/or commercial transactions, providing public services, and other such voice-interactive functions.
  • a developer 101 can enter an example sentence from the external source 102 into the NLU application development environment 127 using the example planner 110 .
  • the external source 102 can be a text prompt for typing in example sentences, or it can be a database of available sentences.
  • the example planner 110 can be a software interface or program component within the NLU application development environment 127 .
  • the developer 101 can create or identify an example, such as a sentence or phrase using the external source 102 , for association with a routing destination supported by the NLU application 149 .
  • the developer 101 can select the example and enter the example into the example planner 110 under the corresponding routing destination.
  • the NLU application development environment 127 can accordingly form an association between the example and the routing destination for configuring the validation unit 120 to correctly interpret the examples.
  • the NLU application development environment 127 can present the help message 147 in the NLU application 149 for conveying the examples the developer submitted through the example planner 110 for presentation through the NLU application 149 .
  • the validation unit 120 can ensure that an NLU model within the NLU application can properly interpret the example, that is, associating the example with the (specified or expected) routing destination and thereby producing a correctly interpreted example.
  • the help message 147 can present the example to a caller of the NLU application for conveying to the caller an example phrase or sentence that the caller can speak to the NLU application 149 .
  • the purpose of the example is to provide a subset of sentences that are representative of caller statements which cause the NLU application 149 to interpret them correctly and to route the caller to an appropriate routing destination.
  • the caller can receive a list of spoken examples from the help message 147 for making a selection.
  • the caller can submit a spoken request not provided within the help message 147 which the NLU application can still properly interpret for routing the call to the correct destination.
  • the validation unit 120 ensures the examples presented in the help message 147 are correctly interpreted by the NLU models for connecting the user to the correct routing destination.
  • the example planner 110 can prompt a developer 101 to enter an example under a supported routing destination.
  • the NLU application can already support various routing destinations.
  • the developer 101 does not need to supply a routing destination and can enter the example under an already supported routing destination.
  • the developer 101 can enter the example into the example planner 110 via text or voice.
  • the validation unit 120 can process the example with the associated routing destination for forming an assignment between the example and the routing destination.
  • the developer 101 can provide an example for an already supported routing destination. For example, the developer can enter the example “I want to buy a car” under the routing destination “sales”. As another example, the developer can enter the example “I need credit” under the routing destination “finances”.
  • the developer enters examples that are generally related to the particular services that are within the scope of the NLU application being developed, typically choosing examples related to the more frequently requested services.
  • the examples can be presented to a user of the NLU application for conveying examples of what the user can say to the NLU application.
  • a help message 147 can present examples for the user to hear. The user can repeat an example back or they can submit their own utterance during the interaction.
  • the examples can be inserted into the help message 147 within an NLU application dialogue menu and presented to the caller during an NLU session.
  • the help message can present one or more examples in a sentence frame like, “For example, you could say, ‘I want to buy a car’, ‘I need an oil change’, or ‘I have a flat tire’.”
  • the NLU application can interpret the caller request and route the caller to the correct routing destination.
  • the example planner 110 can be a graphical user interface or information window that can interface with the developer 101 .
  • the example planner 110 can provide an interface to the developer to enter examples from the external source 102 .
  • the example planner 110 can prompt a developer 101 to enter an example.
  • the developer 101 can enter the example from the external source 102 by any suitable method which can include, without limitation, typing in the example or copying the example from a database.
  • the example planner 110 can add the example and associated routing destination as a marked “example” sentence to the NLU database 130 .
  • the marked sentence can indicate a consideration priority during NLU model training, and can be marked with a count. The count denotes the frequency of occurrence for entering the example into the training unit 122 .
  • the NLU database 130 can include a large number of marked examples that can be input to the NLU models within the NLU application 120 .
  • the validation unit 120 can take the marked examples provided by the developer 101 and ensure that the NLU models can properly interpret each example sentence. For example, within a call routing application, the proper interpretation is assignment to the correct routing destination. If the validation unit 120 cannot correctly interpret the routing destination from the example, the example sentence can be added to the NLU models and biased to ensure a high likelihood of correct interpretation after retraining the model.
  • the example planner 110 can also be cooperatively connected to the validation unit 120 for identifying the marked sentence within the NLU database 130 .
  • the validation unit 120 can perform the training of the NLU models, the testing of the NLU models, and the biasing of the NLU models.
  • the validation unit 120 can include a training unit 122 cooperatively connected to a testing unit 124 that is cooperatively connected to a biasing unit 126 .
  • the units 122, 124, and 126 can also be interconnected.
  • the training unit 122 can be coupled to the NLU sentence database 130 for updating an NLU model, where the NLU sentence database 130 can include the example and routing destination.
  • the testing unit 124 can evaluate the performance of NLU models with the marked sentences.
  • the biasing unit 126 can tune the NLU models during training to associate routing destinations for correctly interpreting the examples.
  • the NLU models can be, but are not limited to, statistical models, grammars, or language models.
  • the validation unit 120 can automatically search for sentences marked as examples.
  • the validation unit 120 can extract the sentence text and the routing destination from NLU database 130 and build NLU models with processing logic from the NLU application development environment 127 .
  • the testing unit 124 can test the example sentences, compare the output of the NLU models from the example sentences, and determine if the output is the correct routing destination. If the output of the NLU model is not the expected routing destination, the training unit can automatically correct errors using available techniques, including but not limited to biasing the NLU models to the example sentences that failed by adding them to the training data, and restarting the training process.
  • the validation unit 120 can further include an error resolution procedure that can repeat, in a looping fashion, until either all problems are resolved or a stop condition is reached.
  • a stop condition can be, for example, a maximum number of iterations or a threshold ratio of corrected sentences to newly broken sentences.
  • the system 100 can prompt the developer 101 for an example for a routing destination.
  • the developer 101 can enter an example from the external source 102 , which can be a sentence corpus.
  • the example can be a sentence or phrase that the developer 101 associates with the routing destination.
  • a developer creates an NLU application to route calls received by an auto dealership. The calls can be routed to departments such as “sales”, “service”, “roadside assistance”, or “finance”, which are the routing destinations in this example.
  • the developer provides an example of a sentence for each of the routing destinations by entering the examples into the example planner 110 .
  • the developer can enter the example sentences “I want to buy a car” (associated with sales), “I need an oil change” (associated with service), and “I have a flat tire” (associated with roadside assistance).
  • the developer may provide a few examples for the most frequently called destinations.
  • the examples can be added to the NLU database 130 .
  • the validation unit 120 can verify the association of the examples with their routing destinations.
  • the examples provided by the developer with the associated routing destinations can be added as marked example sentences to an NLU sentence database 130 which may already contain general corpora of sentences.
  • the validation unit 120 can take the examples provided by the developer and ensure that the NLU models can properly interpret each example sentence.
  • the example planner 110 submits examples that fail testing within the validation unit 120 . For example, within a call routing application, the proper interpretation is assignment to the correct routing destination. If the example fails to make the correct assignment, the example is added to the NLU database for learning an association for making a correct interpretation.
  • the NLU application is sufficiently capable of interpreting the example if it passes the validation unit 120 test.
  • the system 100 can build NLU models. For example, referring to FIG. 1, the training unit 122 builds an NLU model from all sentences in the NLU database, including examples that failed, where the process of building captures the statistics of the marked example sentences within the NLU sentence database.
  • the NLU model forms new rules during the building process to learn an association between the sentences in the NLU database and the routing destination. This enables the NLU model to learn to interpret the failed examples correctly.
  • the statistics associated with the identified NLU model interpret the caller's request and connect the caller to the correct routing destination.
  • the system 100 can validate NLU models as mentioned. For example, referring to FIG. 1, the developer 101 enters an example sentence into the example planner 110 which the validation unit 120 evaluates. If the NLU models within the validation unit do not produce a correct routing destination with respect to the example sentence, the example planner 110 adds the example sentence to the NLU database 130.
  • the developer can optionally specify a count to increase the number of presentations of the example during training. The developer increases the frequency of example occurrence to reduce ambiguities between a frequently used example and its associated routing destination. If the NLU models do produce a correct routing destination with respect to the example sentence, the example planner 110 does not add the example to the NLU database 130 .
  • the validation unit 120 trains a set of NLU models to form associations between the examples and associated routing destinations for example sentences that fail.
  • the training unit 122 teaches the NLU models to capture statistical information from sentence content of the marked examples, and biases the statistics to favor associations concerning the example or multiple occurrences of the example provided by the developer.
  • a criterion can be measured to reveal the degree to which an NLU model has learned an association between an example and a routing destination for correctly interpreting the example.
  • the testing unit 124 measures a discrepancy between an example and routing destination during testing.
  • the example and expected outcome constitute an example sentence which the example planner 110 adds to the NLU database 130 .
  • the testing unit 124 first tests the example to determine if the NLU models identify a correct routing destination. If the NLU models can identify a correct routing destination, the example planner 110 does not add the example to the NLU database 130, as there is no need to make any corrections for examples that are working properly. If the validation unit 120 cannot identify a correct routing destination, the example planner 110 adds the failing examples to the NLU database and trains the NLU models.
  • the training unit 122 trains the NLU models to ensure the NLU models can properly interpret the example sentence.
  • the validation unit 120 gathers statistics from the sentences and words in the examples to form associations within the NLU database 130 .
  • the NLU models produce a measure of similarity between the examples and associated routing destinations with regard to statistics within the NLU application 149 .
  • the measure of similarity provides a criterion for which correct routing destinations are measured.
  • the training unit 122 applies weights to the grammar rules to bias the NLU models in favor of frequently called destinations or the example sentences provided by the developer 101 during generation of the NLU application.
  • the testing unit 124 evaluates the interpretation of the NLU models to the trained example sentences, and if the criterion is not met, the training unit 122 continues to bias the NLU models until the criterion is satisfied.
  • the NLU models can be corrected if the criterion is not met.
  • the testing unit 124 evaluates a discrepancy for a developer example. A discrepancy can exist when the testing unit 124 incorrectly interprets the example sentence.
  • the testing unit 124 marks the example within the NLU database 130 for further biasing.
  • the biasing unit 126 evaluates the discrepancy and sends the discrepancy with the example and incorrect routing destination to the training unit 122 for further training.
  • the training unit 122 uses the discrepancy with the marked example sentence to further bias the NLU models to strengthen an association between the example and the correct routing destination.
  • the system can stop training the NLU models.
  • a developer 101 can enter an example that already has an associated routing destination.
  • the developer 101 can enter numerous routing examples each with their own set of routing destinations.
  • the example planner 110 enters this information into the NLU database 130 if the examples are not properly interpreted during testing.
  • the validation unit 120 continues to train the examples and test the examples to ensure a high likelihood of correct interpretation.
  • the biasing unit 126 controls the amount of biasing to prevent the NLU models from incorrectly interpreting sentences that were already correctly interpreted before the biasing.
  • the training unit 122 can over-train the NLU models, which can lessen the interpretation capabilities of the models.
  • the example planner 110 adds examples to the NLU database 130 to increase the statistical variance, thereby providing focused generalizations that are more content specific.
  • the training unit 122 stops training.
  • a method 300 for creating an example prompt is shown.
  • the steps of the method 300 are not limited to the particular order in which they are presented in FIG. 3 .
  • the inventive method can also have a greater number of steps or a fewer number of steps than those shown in FIG. 3 .
  • a context of the NLU application within which an example can be presented can be identified.
  • an auto dealership NLU system may receive calls that can be routed to various departments.
  • the system 100 can prompt the developer for an example.
  • the example planner 110 presents a prompt to the developer for entering an example.
  • the developer can enter the examples “I want to buy a car” (associated with sales), “I need an oil change” (associated with service), and “I have a flat tire” (associated with roadside assistance).
  • the NLU application development environment or NLU software programming environment 127 may contain drag and drop routing destinations which a developer may use to enter the examples.
  • the developer 101 can build the NLU application using this NLU application development environment.
  • the developer 101 can write an NLU program in a software programming environment.
  • the software programming environment can include application interfaces to information sources.
  • the software programming environment could link in example sentences from an external source 102 .
  • the developer can enter program commands to copy the example sentences, or the developer can use drag and drop features for entering the examples into the application.
  • the developer example can be added to an NLU sentence database if the NLU models fail to interpret the correct routing destination from the example.
  • the testing unit 124 evaluates an example provided by the developer 101. The testing unit 124 first determines if the NLU models can correctly interpret the example, where the test of interpretation is whether the NLU models can associate the example with the correct routing destination. If the NLU models fail to interpret the example, the example planner 110 adds the example to the NLU database 130. If the NLU models correctly interpret the example, and the correct routing destination is selected for the example, the example planner 110 does not add the example to the NLU database.
  • the developer can enter the example sentence “I need an oil change” that is associated with a ‘service’ routing destination into the example planner 110 .
  • the testing unit 124 then processes the sentence “I need an oil change” to evaluate what routing destination the NLU models associate with the sentence. If the NLU models produce a routing destination result such as ‘carwash’, the testing unit 124 determines that the NLU models did not correctly interpret the sentence. Accordingly, the example planner 110 adds the example sentence to the NLU database with an association to the correct routing destination.
  • the NLU models, during training, require that a target routing destination be associated with the example sentence. Having incorrectly interpreted the example, the NLU models need to know what the correct association should be.
  • the NLU models can be, but are not limited to, Hidden Markov Models or Neural Networks that learn associations by minimizing a mean square error or other criterion.
  • the mean square error or other criterion reveals the degree of association between the example and the expected outcome (e.g., the routing destination) which is provided by the validation unit 120 .
  • the example planner 110 marks the example and routing destination within the NLU sentence database to produce a marked example sentence.
  • the marking identifies the example as one that should be validated and potentially be used for training to improve the accuracy of the NLU model on these examples.
  • using these marked examples in the training improves the statistical modeling accuracy by effectively increasing their statistical weighting during training. Accordingly, this biasing increases the association of the example with the correct routing destination.
  • a correct outcome of the example can be validated.
  • One aspect of the present invention includes building the NLU models to validate a correct routing destination.
  • the training unit 122 trains the NLU models using the NLU sentence database, which includes the marked example sentences.
  • the testing unit 124 tests the NLU models with the marked sentences to produce a result and the biasing unit 126 biases the NLU models to reinforce the association of the example with the correct routing destination based on the result.
  • the validation unit 120 ensures that the NLU application learns an association between the example and associated routing destination created by the developer during the development of the NLU application.
  • validating a correct outcome can include testing the NLU application with the example during development.
  • the NLU models which include the marked example sentences can be trained.
  • the training unit 122 trains the NLU models using the NLU sentence database with the marked example sentences.
  • the NLU models can be tested to ascertain whether the model produces a correct outcome.
  • the testing unit 124 assesses performance of the NLU models using the example as an input.
  • the biasing unit 126 updates the NLU model to reinforce the association between the example and the correct outcome (i.e., routing destination).
  • the biasing unit 126 measures a degree of correlation between the example and the provided routing destination, and biases the NLU model.
  • the developer can enter many examples into the developer example prompt, each with an associated set of routing destinations during development.
  • the NLU models rank a list of routing destinations based on the degree of correlation from the training process.
  • the NLU application selects the routing destination with the highest correlation to the example.
  • the correlation value represents the level of confidence within the NLU models for producing the correct routing destination.
  • a discrepancy can be resolved between the result and the correct outcome, and the NLU models can be re-trained.
  • the biasing unit 126 resolves this discrepancy by requesting the training unit 122 to re-train the NLU model by including the marked example sentences.
  • the training unit 122 uses these marked example sentences to reinforce the association between the marked example sentences and the expected routing destination.
  • the biasing unit 126 concludes operation by a triggering of a stop condition which prevents further biasing of the trained NLU model.
  • the triggering can be initiated by a developer or a software process during generation of the NLU application.
  • the stop condition can also be a threshold of corrected sentences to newly broken sentences.
  • the biasing unit 126 corrects broken sentences generated by the testing unit 124 .
  • the biasing unit 126 passes the corrected sentences to the training unit 122 , which strengthens the association of the example sentence to the correct routing destination.
  • a display unit 112 shows a failure dialogue 169 coupled to the validation unit 120 and presents an error entry.
  • the failure dialogue 169 identifies the response with the correct outcome placed in a list of error entries to be resolved.
  • the NLU application development environment 127 presents the failure dialogue 169 to the developer 101 .
  • the step of validating the correct outcome can be repeated until all discrepancies are mitigated.
  • the testing unit 124 identifies a discrepancy during the development of the NLU application.
  • the training unit 122 includes corrected sentences in the training to reinforce the association between the example and the routing destination.
  • the biasing unit 126 sets a threshold and initiates the stop condition when the threshold of corrected sentences to newly broken sentences is reached.
  • the biasing unit 126 triggers a stop condition with a threshold number of maximum iterations, where the validation unit 120 performs at least one iteration which is considered the validating of at least one correct outcome.
  • a failure dialogue can be displayed that presents an error entry identifying the response and the correct outcome, placed in a list of error entries to be resolved.
  • the NLU application development environment 127 presents a failure dialogue 169 to disclose the outcome of the interpretation of the developer's examples.
  • the failure dialogue 169 provides feedback to the developer by presenting a list of entry errors where the NLU models could not properly interpret the developer's example or set of examples. For example, a developer may enter in a sentence which did not produce a correct routing destination. The NLU models may not be able to predict the correct routing destination even with sufficient training. Accordingly, the failure dialogue 169 presents the results of the validation to the developer 101 , to inform the developer of the NLU model's performance.
  • the developer 101 may elect to use another example instead of the example that received a poor performance rating in view of the failure dialogue 169 .
  • the failure dialogue 169 serves to assist the developer 101 in entering examples which the NLU models can correctly interpret for providing an unambiguous and correct interpretation.
  • the failure dialogue 169 can present errors in a ranked order of probability.
  • the validation unit 120 processes a batch of example sentences, with each sentence having a confidence score for each of the available routing destinations. For each sentence within a batch, the NLU model produces a confidence score describing the probability of the example being associated with the routing destination.
  • the failure dialogue 169 presents a confidence score for each routing destination in a ranked order. For example, the developer can enter six sentences, all under a common routing destination out of N total possible destinations. The failure dialogue 169 returns an N-best list for each of the six examples revealing the confidence scores, and the correct routing destination should appear at the top of each list if the examples are accurately interpreted. The validation unit 120 retrains examples which do not produce the correct association, and if it cannot produce the correct association, the confidence scores in the list of errors show the developer that the system considers the example one that the NLU model cannot interpret correctly. Accordingly, the developer 101 examines the list of errors to determine which examples resulted in an incorrect routing destination. The developer 101 removes these incorrect entries and/or substitutes in new entries to ensure that all examples provided to a caller within a help message 147 are interpreted correctly. A minimal sketch of such a ranked confidence list appears at the end of this definitions list.
  • the present invention may be realized in hardware, software, or a combination of hardware and software.
  • the present invention may be realized in a centralized fashion in one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited.
  • a typical combination of hardware and software may be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
  • the present invention also may be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.
  • Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
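As a closing illustration of the ranked failure dialogue described above (a per-example N-best list of routing destinations with confidence scores), here is a minimal sketch in Python. The confidence values, function names, and batch format are invented for illustration; the patent does not specify how the scores are computed.

```python
def n_best(confidence_scores, n=None):
    """Rank routing destinations for one example by confidence score."""
    ranked = sorted(confidence_scores.items(), key=lambda item: item[1],
                    reverse=True)
    return ranked[:n] if n else ranked


def failure_entries(batch, expected_destination):
    """Collect examples whose top-ranked destination is not the expected one,
    the way a failure dialogue would list them for the developer."""
    errors = []
    for sentence, scores in batch:
        ranking = n_best(scores)
        if ranking[0][0] != expected_destination:
            errors.append((sentence, ranking))
    return errors


# Hypothetical confidence scores for two example sentences entered under 'service'.
batch = [
    ("I need an oil change", {"service": 0.84, "sales": 0.09, "finance": 0.07}),
    ("my car makes a noise", {"sales": 0.48, "service": 0.41, "finance": 0.11}),
]
for sentence, ranking in failure_entries(batch, "service"):
    print(sentence, ranking)
# my car makes a noise [('sales', 0.48), ('service', 0.41), ('finance', 0.11)]
```

In this sketch only the second sentence would surface in the failure dialogue, since its top-ranked destination disagrees with the destination the developer intended.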

Abstract

A method (300) and system (100) are provided to add the creation of examples at a developer level in the generation of Natural Language Understanding (NLU) models, tying the examples into an NLU sentence database (130), automatically validating (310) a correct outcome of using the examples, and automatically resolving (316) problems the user has when using the examples. The method (300) can convey examples of what a caller can say to a Natural Language Understanding (NLU) application. The method includes entering at least one example associated with an existing routing destination, and ensuring an NLU model correctly interprets the example unambiguously for correctly routing a call to the routing destination. The method can include presenting the example sentence in a help message (126) within an NLU dialogue as an example of what a caller can say for connecting the caller to a desired routing destination. The method can also include presenting a failure dialogue for displaying at least one example that failed to be properly interpreted, to ensure that ambiguous or incorrect examples are not presented in a help message.

Description

BACKGROUND
1. Field of the Invention
The present invention relates to the field of natural language understanding (NLU) and, more particularly, to a method and system to facilitate user interaction in an NLU application.
2. Description of the Related Art
Natural Language Understanding (NLU) systems have been increasingly utilized for interfacing with software applications, customer support systems, embedded devices, and voice interactive based machines. Most NLU systems are employed to interact with a user, to receive text or voice, and to determine what the user desires the machine to accomplish. NLU systems can interpret a text or spoken utterance for performing a programmatic action. Typically, a user speaks into the device or machine and the NLU application performs a responsive action. For example, the NLU application can interpret a caller's request and identify the most appropriate destination to route the caller to by recognizing content within the spoken request.
Callers often have difficulty using an NLU application because the caller may not know what word should be spoken to the application. The caller can become frustrated when the NLU application misinterprets the caller's request and routes the caller to an incorrect destination, or simply does not process or respond to the caller's request. Accordingly, the user may not have a satisfactory experience with the NLU application. To improve understanding performance, the NLU system can employ domain specific vocabularies to process a caller request for routing a caller to a destination. A different NLU system can be used for different applications such as airline reservations, car rentals, hotel reservations, and other service based inquiry systems. The NLU system can recognize phrases particular to a certain terminology or field. Accordingly, the NLU system can be trained to interpret certain phrases to improve interpretation performance. The NLU system can be trained for specific phrases and sentences that are more representative of the requests callers may typically have with the service offering.
Currently, developing an NLU system is a mostly manual process in which statistical models are built from a corpus of user utterances that represent what a caller might say in response to the system prompts. As part of the development of these types of applications, developers may make decisions about examples of legitimate utterances that can be presented to callers in help prompt messages. However, the sentences selected by developers to present in the help prompts may be ambiguously interpreted or completely misinterpreted by the NLU statistical models, with the consequence that the NLU application may not associate the sentences with the correct response to a caller's question or statement. Accordingly, the NLU application's ability to correctly process a caller's request depends on the content and relevance of the sentences the developer entered into the NLU database during development. A high level of substantive content within the example sentences can be required within an NLU system for the NLU application to correctly interpret spoken requests from the caller. In practice, the developer generally decides what sentences should be entered into the NLU database prior to knowing the target NLU application. The developer also typically provides many different examples in anticipation of generally spoken caller requests since the developer does not know what requests to expect.
It is very effective for applications to offer assistance to callers by providing example phrases, through help prompts or otherwise, that the caller can use to interact with the system. Presently, NLU application development environments provide little support for determining what phrases or sentences the NLU application is capable of processing. Currently, designing and validating caller examples is a manual process that is poorly understood by developers and often overlooked in practice. Therefore, there is a need for developers to convey to callers of an NLU application examples of statements that facilitate a favorable and responsive interaction with the application, and a need to make it easy for developers to provide high-quality examples. By encouraging users to use examples provided by a developer, the developer can significantly improve the NLU system's usability.
SUMMARY OF THE INVENTION
The invention disclosed herein provides a method and system for interacting with a natural language understanding (NLU) system. The method can include the steps of entering at least one example sentence during development of a natural language understanding (NLU) application, and conveying the example sentence. In one aspect, the example sentence is interpreted by an NLU model. The method can further include presenting the example sentence in a help message of the NLU application as an example of what to say to interact with the NLU application. The method can include presenting a failure dialog to the developer for displaying at least one example that failed to be properly interpreted.
Another aspect of the invention can include a system for creating a Natural Language Understanding (NLU) application. The system can include an example planner that prompts a developer for at least one example sentence, and a validation unit connected to the example planner to increase a likelihood that an NLU model correctly interprets the example sentence. The system can include a help message to present at least one example of what a caller can say to interact with the NLU application. The system can further include a failure dialog to the developer for displaying at least one example that failed to be properly interpreted.
BRIEF DESCRIPTION OF THE DRAWINGS
There are shown in the drawings, embodiments that are presently preferred; it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.
FIG. 1 is a schematic depiction of a system for creating and conveying an example prompt in accordance with an embodiment of the inventive arrangements disclosed herein;
FIG. 2 is a flowchart for creating and conveying an example prompt in accordance with an embodiment of the inventive arrangements disclosed herein; and
FIG. 3 is a flowchart illustrating a method for creating and conveying an example prompt in accordance with an embodiment of the inventive arrangements disclosed herein.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 is a schematic diagram illustrating an NLU system 100 for creating and conveying an example within an NLU application development environment 127 in accordance with an embodiment of the inventive arrangements disclosed herein. The NLU system itself would work in conjunction with, but is not limited to, a speech recognition system and an Interactive Voice Response (IVR) platform (not shown in the figure) for speech-based interaction. Alternatively, it could also be deployed to function in, but is not limited to, a text mode. The system 100 can include an external source 102, an example planner 110, a validation unit 120, and a help message 147. The help message 147 can be presented within the context of an NLU application dialogue 149. In one arrangement, a failure dialogue 169 can be presented to the developer 101. The failure dialogue 169 can be presented for displaying at least one example that failed to be properly interpreted, and also for presenting those example sentences that failed in a ranked order. The failure dialogue can help the developer recognize what examples the application is capable of correctly interpreting and help the developer introduce examples that achieve unambiguous and correct interpretation. The operative aspects of the embodiments of the invention are further described herein primarily in the context of performing call routing. The invention also has utility in many other contexts, such as handling financial and/or commercial transactions, providing public services, and other such voice-interactive functions.
A developer 101 can enter an example sentence from the external source 102 into the NLU application development environment 127 using the example planner 110. The external source 102 can be a text prompt for typing in example sentences, or it can be a database of available sentences. The example planner 110 can be a software interface or program component within the NLU application development environment 127. For example, the developer 101 can create or identify an example, such as a sentence or phrase using the external source 102, for association with a routing destination supported by the NLU application 149.
The developer 101 can select the example and enter the example into the example planner 110 under the corresponding routing destination. The NLU application development environment 127 can accordingly form an association between the example and the routing destination for configuring the validation unit 120 to correctly interpret the examples. The NLU application development environment 127 can present the help message 147 in the NLU application 149 for conveying the examples the developer submitted through the example planner 110 for presentation through the NLU application 149. The validation unit 120 can ensure that an NLU model within the NLU application can properly interpret the example, that is, associating the example with the (specified or expected) routing destination and thereby producing a correctly interpreted example. The help message 147 can present the example to a caller of the NLU application for conveying to the caller an example phrase or sentence that the caller can speak to the NLU application 149. The purpose of the example is to provide a subset of sentences that are representative of caller statements which cause the NLU application 149 to interpret them correctly and to route the caller to an appropriate routing destination. For example, the caller can receive a list of spoken examples from the help message 147 for making a selection. Alternatively, the caller can submit a spoken request not provided within the help message 147 which the NLU application can still properly interpret for routing the call to the correct destination. The validation unit 120 ensures the examples presented in the help message 147 are correctly interpreted by the NLU models for connecting the user to the correct routing destination.
In one arrangement, the example planner 110 can prompt a developer 101 to enter an example under a supported routing destination. The NLU application can already support various routing destinations. The developer 101 does not need to supply a routing destination and can enter the example under an already supported routing destination. The developer 101 can enter the example into the example planner 110 via text or voice. The validation unit 120 can process the example with the associated routing destination for forming an assignment between the example and the routing destination. During the creation of the NLU models the developer 101 can provide an example for an already supported routing destination. For example, the developer can enter the example “I want to buy a car” under the routing destination “sales”. As another example, the developer can enter the example “I need credit” under the routing destination “finances”. The developer enters examples that are generally related to the particular services that are within the scope of the NLU application being developed, typically choosing examples related to the more frequently requested services.
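To make the pairing of developer examples with already supported routing destinations concrete, the following sketch shows how such entries might be collected. It is not code from the patent; the ExamplePlanner class, its method names, and the destination list are illustrative assumptions.

```python
# Illustrative sketch only: a minimal example planner that records developer
# examples under routing destinations the NLU application already supports.
class ExamplePlanner:
    def __init__(self, supported_destinations):
        self.supported_destinations = set(supported_destinations)
        self.examples = []  # list of (sentence, routing destination) pairs

    def add_example(self, sentence, destination):
        """Record an example sentence under an existing routing destination."""
        if destination not in self.supported_destinations:
            raise ValueError(f"unsupported routing destination: {destination}")
        self.examples.append((sentence, destination))


planner = ExamplePlanner(["sales", "service", "roadside assistance", "finance"])
planner.add_example("I want to buy a car", "sales")
planner.add_example("I need credit", "finance")
print(planner.examples)
```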
The examples can be presented to a user of the NLU application for conveying examples of what the user can say to the NLU application. A help message 147 can present examples for the user to hear. The user can repeat an example back or they can submit their own utterance during the interaction. The examples can be inserted into the help message 147 within an NLU application dialogue menu and presented to the caller during an NLU session. For instance, the help message can present one or more examples in a sentence frame like, “For example, you could say, ‘I want to buy a car’, ‘I need an oil change’, or ‘I have a flat tire’.” The NLU application can interpret the caller request and route the caller to the correct routing destination.
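The sentence frame quoted above lends itself to simple string assembly. The sketch below is an assumption about how a help prompt could be built from already validated examples; the function name and formatting are not prescribed by the patent.

```python
def build_help_message(examples):
    """Assemble a help prompt from example sentences that the validation
    unit has already confirmed are correctly interpreted."""
    quoted = [f"'{sentence}'" for sentence in examples]
    if len(quoted) > 1:
        listing = ", ".join(quoted[:-1]) + ", or " + quoted[-1]
    else:
        listing = quoted[0]
    return f"For example, you could say, {listing}."


print(build_help_message(["I want to buy a car",
                          "I need an oil change",
                          "I have a flat tire"]))
# For example, you could say, 'I want to buy a car',
# 'I need an oil change', or 'I have a flat tire'.
```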
In one arrangement, the example planner 110 can be a graphical user interface or information window that can interface with the developer 101. The example planner 110 can provide an interface to the developer to enter examples from the external source 102. The example planner 110 can prompt a developer 101 to enter an example. The developer 101 can enter the example from the external source 102 by any suitable method which can include, without limitation, typing in the example or copying the example from a database. The example planner 110 can add the example and associated routing destination as a marked “example” sentence to the NLU database 130. The marked sentence can indicate a consideration priority during NLU model training, and can be marked with a count. The count denotes the frequency of occurrence for entering the example into the training unit 122. For example, certain examples can be given a higher count to emphasize a more frequent use and to give more weighting for purposes of strengthening an interpretation. The NLU database 130 can include a large number of marked examples that can be input to the NLU models within the NLU application 120. The validation unit 120 can take the marked examples provided by the developer 101 and ensure that the NLU models can properly interpret each example sentence. For example, within a call routing application, the proper interpretation is assignment to the correct routing destination. If the validation unit 120 cannot correctly interpret the routing destination from the example, the example sentence can be added to the NLU models and biased to ensure a high likelihood of correct interpretation after retraining the model.
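One way to picture the marked "example" sentences and their counts is as simple database records that get replicated for the training unit. The record layout and helper below are illustrative assumptions, not the patent's data model.

```python
from dataclasses import dataclass

@dataclass
class MarkedExample:
    """A marked 'example' sentence in the NLU sentence database; the count
    denotes how often the sentence is presented to the training unit."""
    sentence: str
    routing_destination: str
    count: int = 1


def expand_for_training(marked_examples):
    """Replicate each marked example 'count' times, weighting it more
    heavily during NLU model training."""
    rows = []
    for record in marked_examples:
        rows.extend([(record.sentence, record.routing_destination)] * record.count)
    return rows


database = [
    MarkedExample("I want to buy a car", "sales", count=3),
    MarkedExample("I need an oil change", "service"),
]
print(len(expand_for_training(database)))  # 4: three copies of one, one of the other
```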
The example planner 110 can also be cooperatively connected to the validation unit 120 for identifying the marked sentence within the NLU database 130. The validation unit 120 can perform the training of the NLU models, the testing of the NLU models, and the biasing of the NLU models. In one arrangement, the validation unit 120 can include a training unit 122 cooperatively connected to a testing unit 124 that is cooperatively connected to a biasing unit 126. The units 122, 124, and 126 can also be interconnected. The training unit 122 can be coupled to the NLU sentence database 130 for updating an NLU model, where the NLU sentence database 130 can include the example and routing destination. The testing unit 124 can evaluate the performance of NLU models with the marked sentences. The biasing unit 126 can tune the NLU models during training to associate routing destinations for correctly interpreting the examples. The NLU models can be, but are not limited to, statistical models, grammars, or language models. During the training of the NLU models, the validation unit 120 can automatically search for sentences marked as examples. The validation unit 120 can extract the sentence text and the routing destination from the NLU database 130 and build NLU models with processing logic from the NLU application development environment 127.
The testing unit 124 can test the example sentences, compare the output of the NLU models from the example sentences, and determine if the output is the correct routing destination. If the output of the NLU model is not the expected routing destination, the training unit can automatically correct errors using available techniques, including but not limited to biasing the NLU models to the example sentences that failed by adding them to the training data, and restarting the training process. The validation unit 120 can further include an error resolution procedure that can repeat, in a looping fashion, until either all problems are resolved or a stop condition is reached. A stop condition can be, for example, a maximum number of iterations or a threshold ratio of corrected sentences to newly broken sentences.
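The looping error-resolution procedure and its stop conditions might look something like the sketch below. The train and interpret callables stand in for whatever training and testing machinery the validation unit actually uses, biasing is simulated simply by duplicating failed examples in the training data, and the default thresholds are arbitrary.

```python
def validate_examples(train, interpret, corpus, marked_examples,
                      max_iterations=5, min_corrected_per_broken=1.0):
    """Retrain until every marked example is interpreted correctly or a
    stop condition (iteration cap or corrected-to-broken ratio) is hit.

    train(pairs) -> model and interpret(model, sentence) -> destination
    are hypothetical stand-ins for the training and testing units.
    """
    training_data = list(corpus) + list(marked_examples)
    model = train(training_data)

    def failures(current_model):
        return {(s, d) for s, d in marked_examples
                if interpret(current_model, s) != d}

    failing = failures(model)
    for _ in range(max_iterations):
        if not failing:
            break                          # all problems resolved
        training_data += list(failing)     # bias the models toward failed examples
        model = train(training_data)       # restart the training process
        still_failing = failures(model)
        corrected = len(failing - still_failing)
        newly_broken = len(still_failing - failing)
        failing = still_failing
        # stop if retraining breaks more examples than it fixes
        if newly_broken and corrected / newly_broken < min_corrected_per_broken:
            break
    return model, sorted(failing)
```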
Referring to FIG. 2, a flowchart for creating an example prompt is shown. When describing the method 200, reference will be made to FIG. 1, although the method can be practiced in any other suitable device or system. At step 202, the system 100 can prompt the developer 101 for an example for a routing destination. The developer 101 can enter an example from the external source 102, which can be a sentence corpus. The example can be a sentence or phrase that the developer 101 associates with the routing destination. For example, a developer creates an NLU application to route calls received by an auto dealership. The calls can be routed to departments such as “sales”, “service”, “roadside assistance”, or “finance”, which are the routing destinations in this example. The developer provides an example of a sentence for each of the routing destinations by entering the examples into the example planner 110. For example, the developer can enter the example sentences “I want to buy a car” (associated with sales), “I need an oil change” (associated with service), and “I have a flat tire” (associated with roadside assistance). The developer may provide a few examples for the most frequently called destinations.
At step 204, the examples can be added to the NLU database 130. The validation unit 120 can verify the association of the examples with their routing destinations. The examples provided by the developer, with their associated routing destinations, can be added as marked example sentences to an NLU sentence database 130 which may already contain general corpora of sentences. The validation unit 120 can take the examples provided by the developer and ensure that the NLU models can properly interpret each example sentence. The example planner 110 submits examples that fail testing within the validation unit 120. For example, within a call routing application, the proper interpretation is assignment to the correct routing destination. If the example fails to produce the correct assignment, the example is added to the NLU database so that an association for making a correct interpretation can be learned. The NLU application is sufficiently capable of interpreting the example if it passes the validation unit 120 test.
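A minimal sketch of step 204, reusing the assumed NLUSentenceDatabase from the earlier snippet: only examples that the current model misroutes are added as marked sentences. The classify callable is a hypothetical wrapper around the NLU model's routing decision.

def validate_and_add(db, model, classify, examples):
    # examples: iterable of (text, destination) pairs, e.g. developer_examples above
    for text, destination in examples:
        if classify(model, text) != destination:    # interpretation failed
            db.add_example(text, destination)       # mark it for retraining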
At step 206, the system 100 can build NLU models. For example, referring to FIG. 1, the training unit 122 builds an NLU model from all sentences in the NLU database, including examples that failed, where the building process captures the statistics of the marked example sentences within the NLU sentence database. The NLU model forms new rules during the building process to learn an association between the sentences in the NLU database and the routing destination. This enables the NLU model to learn to interpret the failed examples correctly. The statistics associated with the trained NLU model are used to interpret the caller's request and connect the caller to the correct routing destination.
At step 208, the system 100 can validate NLU models as mentioned. For example, referring to FIG. 1, the developer 101 enters an example sentence into the example planner 110 which the validation unit 120 evaluates. If the NLU models within the validation unit do not produce a correct routing destination with respect to the example sentence, the example planner 110 adds the example sentence to the NLU database 130. The developer can optionally specify a count to increase the number of presentations of the example during training. The developer increases the frequency of example occurrence to reduce ambiguities between a frequently used example and its associated routing destination. If the NLU models do produce a correct routing destination with respect to the example sentence, the example planner 110 does not add the example to the NLU database 130. Notably, the validation unit 120 trains a set of NLU models to form associations between the examples and associated routing destinations for example sentences that fail. The training unit 122 teaches the NLU models to capture statistical information from sentence content of the marked examples, and biases the statistics to favor associations concerning the example or multiple occurrences of the example provided by the developer.
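One simple way to realize the optional count, under the same assumed schema, is to replicate a marked example that many times when the training corpus is assembled. This is an illustrative choice rather than a statement of the actual training procedure.

def build_training_corpus(db):
    # replicate each marked example 'count' times so it carries more statistical weight
    corpus = []
    for sentence in db.sentences:
        corpus.extend([(sentence.text, sentence.routing_destination)]
                      * max(1, sentence.count))
    return corpus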
At step 210, a criterion can be measured to reveal the degree to which an NLU model has learned an association between an example and a routing destination for correctly interpreting the example. For example, referring to FIG. 1, the testing unit 124 measures a discrepancy between an example and routing destination during testing. The example and expected outcome constitute an example sentence which the example planner 110 adds to the NLU database 130. The testing unit 124 first tests the example to determine whether the NLU models identify a correct routing destination. If the NLU models can identify a correct routing destination, the example planner 110 does not add the example to the NLU database 130, as there is no need to make any corrections for examples that are working properly. If the validation unit 120 cannot identify a correct routing destination, the example planner 110 adds the failing examples to the NLU database and trains the NLU models. The training unit 122 trains the NLU models to ensure the NLU models can properly interpret the example sentence.
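The criterion of step 210 could take a form as simple as the fraction of developer examples routed to their desired destinations; the sketch below assumes the same hypothetical classify function and shows only one possible criterion.

def criterion(classify, model, examples):
    # fraction of (text, destination) pairs the model routes to the desired destination
    correct = sum(1 for text, destination in examples
                  if classify(model, text) == destination)
    return correct / len(examples) if examples else 1.0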
During training, the validation unit 120 gathers statistics from the sentences and words in the examples to form associations within the NLU database 130. The NLU models produce a measure of similarity between the examples and associated routing destinations with regard to statistics within the NLU application 149. The measure of similarity provides a criterion against which correct routing destinations are measured. For example, the training unit 122 applies weights to the grammar rules to bias the NLU models in favor of frequently called destinations or the example sentences provided by the developer 101 during generation of the NLU application. The testing unit 124 evaluates the interpretation of the NLU models on the trained example sentences, and if the criterion is not met, the training unit 122 continues to bias the NLU models until the criterion is satisfied.
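As an illustration of biasing toward frequently called destinations and developer-provided examples, a trainer that accepts per-sample weights could be fed a corpus like the one below. The specific weight values and the marked_boost factor are arbitrary assumptions of this sketch.

destination_weights = {"sales": 2.0, "service": 1.5, "roadside assistance": 1.0}

def weighted_corpus(corpus, weights, marked, marked_boost=2.0):
    # corpus: list of (text, destination); marked: set of developer example texts
    weighted = []
    for text, destination in corpus:
        weight = weights.get(destination, 1.0)
        if text in marked:
            weight *= marked_boost    # extra emphasis for developer-provided examples
        weighted.append((text, destination, weight))
    return weighted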
At step 212, the NLU models can be corrected if the criterion is not met. For example, referring to FIG. 1, the testing unit 124 evaluates a discrepancy for a developer example. A discrepancy can exist when the testing unit 124 incorrectly interprets the example sentence. The testing unit 124 marks the example within the NLU database 130 for further biasing. The biasing unit 126 evaluates the discrepancy and sends the discrepancy with the example and incorrect routing destination to the training unit 122 for further training. The training unit 122 uses the discrepancy with the marked example sentence to further bias the NLU models to strengthen an association between the example and the correct routing destination.
At step 214, if the criterion is met, the system can stop training the NLU models. For example, a developer 101 can enter an example that already has an associated routing destination. The developer 101 can enter numerous routing examples, each with its own set of routing destinations. The example planner 110 enters this information into the NLU database 130 if the examples are not properly interpreted during testing. The validation unit 120 continues to train and test the examples to ensure a high likelihood of correct interpretation. The biasing unit 126 controls the amount of biasing to prevent the NLU models from incorrectly interpreting sentences that were already correctly interpreted before the biasing. For example, the training unit 122 can over-train the NLU models, which can lessen the interpretation capabilities of the models. With too much training, the NLU models can be over-biased, reducing the generalizations of the discriminant functions within the NLU models and forcing them to reduce their variance, which is an undesirable outcome. Accordingly, the example planner 110 adds examples to the NLU database 130 to increase the statistical variance, thereby providing focused generalizations that are more content specific. When the validation unit 120 determines that the NLU models can accurately interpret the example and provide the correct interpretation, the training unit 122 stops training.
Referring to FIG. 3, a method 300 for creating an example prompt is shown. When describing the method 300, reference will be made to FIG. 1, although the method can be practiced in any other suitable device or system. Moreover, the steps of the method 300 are not limited to the particular order in which they are presented in FIG. 3. The inventive method can also have a greater number of steps or a fewer number of steps than those shown in FIG. 3.
At step 302, a context of the NLU application within which an example can be presented can be identified. For this example, an auto dealership NLU system may receive calls that can be routed to various departments.
At step 304, the system 100 can prompt the developer for an example. Referring to FIG. 1, the example planner 110 presents a prompt to the developer for entering an example. For example, in the case of the auto dealership NLU routing application, the developer can enter the examples “I want to buy a car” (associated with sales), “I need an oil change” (associated with service), and “I have a flat tire” (associated with roadside assistance). Also, the NLU application development environment or NLU software programming environment 127 may contain drag and drop routing destinations which a developer may use to enter the examples. The developer 101 can build the NLU application using this NLU application development environment. The developer 101 can also write an NLU program in a software programming environment. The software programming environment can include application interfaces to information sources. For example, the software programming environment could link in example sentences from an external source 102. The developer can enter program commands to copy the example sentences, or the developer can use drag and drop features for entering the examples into the application.
The developer example can be added to an NLU sentence database if the NLU models fail to interpret the correct routing destination for it. For example, referring to FIG. 1, the testing unit 124 evaluates an example provided by the developer 101. The testing unit 124 first determines if the NLU models can correctly interpret the example, where the test of interpretation is whether the NLU models can associate the example with the correct routing destination. If the NLU models fail to interpret the example, the example planner 110 adds the example to the NLU database 130. If the NLU models correctly interpret the example, and the correct routing destination is selected for the example, the example planner 110 does not add the example to the NLU database.
For example, the developer can enter the example sentence “I need an oil change”, which is associated with a ‘service’ routing destination, into the example planner 110. The testing unit 124 then processes the sentence “I need an oil change” to evaluate what routing destination the NLU models associate with the sentence. If the NLU models produce a routing destination result such as ‘carwash’, the testing unit 124 determines that the NLU models did not correctly interpret the sentence. Accordingly, the example planner 110 adds the example sentence to the NLU database with an association to the correct routing destination. The NLU models, during training, require that a target routing destination be associated with the example sentence. Having incorrectly interpreted the example, the NLU models need to know what the correct association should be. For example, the NLU models can be, but are not limited to, Hidden Markov Models or Neural Networks that learn associations by minimizing a mean square error or other criterion. The mean square error or other criterion reveals the degree of association between the example and the expected outcome (e.g., the routing destination) which is provided by the validation unit 120.
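The description names Hidden Markov Models and Neural Networks as possible model families; as a deliberately simplified stand-in, the bag-of-words sketch below shows how association statistics learned from the marked example steer "I need an oil change" toward 'service' rather than a wrong destination such as 'carwash'.

from collections import defaultdict

def train_bow(sentences):
    # sentences: iterable of (text, destination, count) from the sentence database
    word_counts = defaultdict(lambda: defaultdict(float))
    for text, destination, count in sentences:
        for word in text.lower().split():
            word_counts[destination][word] += count
    return word_counts

def classify_bow(model, text):
    # score each destination by summed word associations and pick the best
    scores = {destination: sum(words.get(w, 0.0) for w in text.lower().split())
              for destination, words in model.items()}
    return max(scores, key=scores.get)

model = train_bow([("I need an oil change", "service", 3),
                   ("wash my car", "carwash", 1)])
assert classify_bow(model, "I need an oil change") == "service"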
Referring to FIG. 1, the example planner 110 marks the example and routing destination within the NLU sentence database to produce a marked example sentence. The marking identifies the example as one that should be validated and potentially be used for training to improve the accuracy of the NLU model on these examples. In particular, using these marked examples in the training improves the statistical modeling accuracy by effectively increasing their statistical weighting during training. Accordingly, this biasing increases the association of the example with the correct routing destination.
At step 310, a correct outcome of the example can be validated. One aspect of the present invention includes building the NLU models to validate a correct routing destination. Referring to FIG. 1, the training unit 122 trains the NLU models using the NLU sentence database, which includes the marked example sentences. The testing unit 124 tests the NLU models with the marked sentences to produce a result and the biasing unit 126 biases the NLU models to reinforce the association of the example with the correct routing destination based on the result. In particular, the validation unit 120 ensures that the NLU application learns an association between the example and associated routing destination created by the developer during the development of the NLU application. In one arrangement, validating a correct outcome can include testing the NLU application with the example during development.
At step 312, the NLU models can be trained with the marked example sentences included. For example, the training unit 122 trains the NLU models using the NLU sentence database with the marked example sentences. At step 314, the NLU models can be tested to ascertain whether the model produces a correct outcome. In one arrangement, the testing unit 124 assesses performance of the NLU models using the example as an input. The biasing unit 126 updates the NLU model to reinforce the association between the example and the correct outcome (i.e., routing destination). For example, the biasing unit 126 measures a degree of correlation between the example and the provided routing destination, and biases the NLU model. For instance, the developer can enter many examples into the developer example prompt, each with an associated set of routing destinations during development. During testing, the NLU models rank a list of routing destinations based on the degree of correlation from the training process. The NLU application selects the routing destination with the highest correlation to the example. The correlation value represents the level of confidence within the NLU models for producing the correct routing destination.
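The ranking and selection described above can be sketched as follows, assuming a hypothetical score(text, destination) function that returns the model's confidence for a given destination.

def rank_destinations(score, text, destinations):
    # N-best list of (confidence, destination) pairs, highest confidence first
    return sorted(((score(text, d), d) for d in destinations), reverse=True)

def route(score, text, destinations):
    # the application selects the destination with the highest correlation to the example
    return rank_destinations(score, text, destinations)[0][1]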
At step 318, a discrepancy can be resolved between the result and the correct outcome, and the NLU models can be re-trained. The biasing unit 126 resolves this discrepancy by requesting the training unit 122 to re-train the NLU model by including the marked example sentences. The training unit 122 uses these marked example sentences to reinforce the association between the marked example sentences and the expected routing destination.
In one arrangement, the biasing unit 126 concludes operation upon the triggering of a stop condition which prevents further biasing of the trained NLU model. For example, the triggering can be initiated by a developer or a software process during generation of the NLU application. The stop condition can also be a threshold ratio of corrected sentences to newly broken sentences. For example, referring to FIG. 1, the biasing unit 126 corrects broken sentences identified by the testing unit 124. The biasing unit 126 passes the corrected sentences to the training unit 122, which strengthens the association of the example sentence to the correct routing destination. In another arrangement, a display unit 112, coupled to the validation unit 120, shows a failure dialogue 169 and presents an error entry. The failure dialogue 169 identifies the response together with the correct outcome, placed in a list of error entries to be resolved. The NLU application development environment 127 presents the failure dialogue 169 to the developer 101.
At step 320, the step of validating the correct outcome (such as routing destination) can be repeated until all discrepancies are mitigated. Referring to FIG. 1, the testing unit 124 identifies a discrepancy during the development of the NLU application. The training unit 122 includes corrected sentences in the training to reinforce the association between the example and the routing destination. The biasing unit 126 sets a threshold and initiates the stop condition when the threshold of corrected sentences to newly broken sentences is reached. As another example, the biasing unit 126 triggers a stop condition with a threshold number of maximum iterations, where the validation unit 120 performs at least one iteration which is considered the validating of at least one correct outcome.
At step 322, a failure dialogue can be displayed, presenting an error entry that identifies the response and the correct outcome, placed in a list of error entries to be resolved. For example, referring to FIG. 1, the NLU application development environment 127 presents a failure dialogue 169 to disclose the outcome of the interpretation of the developer's examples. The failure dialogue 169 provides feedback to the developer by presenting a list of entry errors where the NLU models could not properly interpret the developer's example or set of examples. For example, a developer may enter a sentence which did not produce a correct routing destination. The NLU models may not be able to predict the correct routing destination even with sufficient training. Accordingly, the failure dialogue 169 presents the results of the validation to the developer 101, to inform the developer of the NLU model's performance.
The developer 101 may elect to use another example instead of the example that received a poor performance rating in view of the failure dialogue 169. The failure dialogue 169 serves to assist the developer 101 in entering examples which the NLU models can interpret correctly and unambiguously. The failure dialogue 169 can present errors in a ranked order of probability. For example, the validation unit 120 processes a batch of example sentences, with each sentence having a confidence score for each of the available routing destinations. For each sentence within a batch, the NLU model produces a confidence score describing the probability of the example being associated with the routing destination.
The failure dialogue 169 presents a confidence score for each routing destination in a ranked order. For example, the developer can enter six sentences, all under a common routing destination out of N total possible destinations. The failure dialogue 169 returns an N-best list for each of the six examples revealing the confidence scores; if the examples are accurately interpreted, the correct routing destination appears at the top of each list. The validation unit 120 retrains examples which do not produce the correct association, and if the correct association still cannot be produced, the confidence scores in the list of errors show the developer that the system considers the example one that the NLU model cannot interpret correctly. Accordingly, the developer 101 examines the list of errors to determine which examples resulted in an incorrect routing destination. The developer 101 removes these incorrect entries and/or substitutes new entries to ensure that all examples provided to a caller within a help message 147 are interpreted correctly.
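A sketch of what the failure dialogue's ranked error list could contain, under the same assumed score function: each failing example is reported with its desired destination and its full N-best list of confidence scores, so the developer can see which entries to remove or replace.

def failure_report(score, examples, destinations):
    # examples: list of (text, desired_destination) pairs entered by the developer
    report = []
    for text, desired in examples:
        n_best = sorted(((score(text, d), d) for d in destinations), reverse=True)
        if n_best[0][1] != desired:    # top hypothesis disagrees with the developer
            report.append({"example": text, "desired": desired, "n_best": n_best})
    return report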
The present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
The present invention also may be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
This invention may be embodied in other forms without departing from the spirit or essential attributes thereof. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the invention.

Claims (22)

What is claimed is:
1. A method for facilitating development of a natural language understanding (NLU) model associated with an NLU application executing on a computer system comprising a combination of hardware and software, the method comprising acts of:
receiving, from a developer of the NLU application, at least one expected user entry and a corresponding desired routing destination;
determining whether the NLU model associates the at least one expected user entry with the desired routing destination, the determining comprising:
interpreting the at least one expected user entry via the NLU model to determine an actual routing destination for the at least one expected user entry, and
comparing the actual routing destination to the desired routing destination;
if it is determined that the actual routing destination of the at least one expected user entry matches the desired routing destination, selecting the at least one expected user entry for presentation to a user during a help prompt of the NLU application as an example of a legitimate utterance the user could speak to be routed to the desired routing destination; and
if it is determined that the actual routing destination does not match the desired routing destination:
(i) adding the at least one expected user entry to an NLU entry data set associated with the NLU model, and
(ii) training the NLU model to associate the at least one expected user entry with the desired routing destination.
2. The method of claim 1, wherein the NLU entry data set is an NLU database.
3. The method of claim 1, further comprising:
presenting said at least one expected user entry in a help message of said NLU application as example input to the NLU application.
4. The method of claim 1, further comprising:
if it is determined that the actual routing destination does not match the desired routing destination, presenting a failure dialog to the developer of the NLU application indicating that the actual routing destination does not match the desired routing destination.
5. The method of claim 1, further comprising:
if it is determined that the actual routing destination matches the desired routing destination, using the at least one expected user entry to strengthen an association between the at least one expected user entry and the desired routing destination in the NLU model.
6. The method of claim 1, wherein training the NLU model to associate the at least one expected user entry with the desired routing destination causes the NLU model to form an association between the at least one expected user entry and the desired routing destination such that a user providing the at least one expected user entry during an interaction with the NLU application is connected to the desired routing destination.
7. The method of claim 3, wherein said help prompt comprises a preamble followed by the at least one expected user entry.
8. The method of claim 1, wherein adding the at least one expected user entry to an NLU entry data set associated with the NLU model comprises:
marking the at least one expected user entry within the NLU entry data set to set a training frequency and/or a statistical weighting associated with the at least one expected user entry.
9. The method of claim 8, wherein training the NLU model to associate the at least one expected user entry with the desired routing destination comprises:
training the NLU model using the NLU entry data set that includes the at least one marked entry;
testing the NLU model with the at least one marked entry; and
biasing the NLU model to reinforce an association between the at least one marked entry and the corresponding desired routing destination.
10. The method of claim 9, wherein said biasing is controlled to prevent the NLU model from incorrectly interpreting entries that were previously correctly interpreted before the biasing.
11. The method of claim 10, wherein said biasing comprises increasing the training frequency associated with the at least one marked entry.
12. The method of claim 9, wherein said biasing stops at a threshold number of maximum iterations.
13. The method of claim 1, wherein:
the at least one expected user entry is one of text or an utterance, and
the at least one expected user entry is one of a phrase or a sentence.
14. The method of claim 8, further comprising increasing the statistical weighting associated with the at least one marked entry relative to another entry in the NLU entry data set.
15. At least one non-transitory computer-readable medium encoded with instructions that, when executed by a computer system, cause the computer system to perform a method for facilitating development of a natural language understanding (NLU) model associated with an NLU application executing on a computer system comprising a combination of hardware and software, the method comprising acts of:
receiving, from a developer of the NLU application, at least one expected user entry and a corresponding desired routing destination;
determining whether the NLU model associates the at least one expected user entry with the desired routing destination, the determining comprising:
interpreting the at least one expected user entry via the NLU model to determine an actual routing destination for the at least one expected user entry, and
comparing the actual routing destination to the desired routing destination;
if it is determined that the actual routing destination of the at least one expected user entry matches the desired routing destination, selecting the at least one expected user entry for presentation to a user during a help prompt of the NLU application as an example of a legitimate utterance the user could speak to be routed to the desired routing destination; and
if it is determined that the actual routing destination does not match the desired routing destination:
(i) adding the at least one expected user entry to an NLU entry data set associated with the NLU model, and
(ii) training the NLU model to associate the at least one expected user entry with the desired routing destination.
16. The at least one computer-readable medium of claim 15, further comprising:
if it is determined that the actual routing destination matches the desired routing destination, using the at least one expected user entry to strengthen an association between the at least one expected user entry and the desired routing destination in the NLU model.
17. The at least one computer-readable medium of claim 15, wherein training the NLU model to associate the at least one expected user entry with the desired routing destination causes the NLU model to form an association between the at least one expected user entry and the desired routing destination such that a user providing the at least one expected user entry during an interaction with the NLU application is connected to the desired routing destination.
18. The at least one computer-readable medium of claim 15, wherein the method further comprises increasing a statistical weighting of the at least one expected user entry relative to another entry in the NLU entry data set.
19. An apparatus for facilitating development of a computer-implemented natural language understanding (NLU) model associated with an NLU application, the apparatus comprising:
at least one computer-readable medium encoded with instructions; and
at least one processing unit coupled to the at least one computer-readable medium, wherein upon execution of the instructions by the at least one processing unit, the at least one processing unit:
receives, from a developer of the NLU application, at least one expected user entry and a corresponding desired routing destination;
determines whether the NLU model associates the at least one expected user entry with the desired routing destination, the determining comprising:
interpreting the at least one expected user entry via the NLU model to determine an actual routing destination for the at least one expected user entry, and
comparing the actual routing destination to the desired routing destination;
if it is determined that the actual routing destination of the at least one expected user entry matches the desired routing destination, selects the at least one expected user entry for presentation to a user during a help prompt of the NLU application as an example of a legitimate utterance the user could speak to be routed to the desired routing destination; and
if it is determined that the actual routing destination does not match the desired routing destination:
adds the at least one expected user entry to an NLU entry data set associated with the NLU model, and
trains the NLU model to associate the at least one expected user entry with the desired routing destination.
20. The apparatus of claim 19, wherein if it is determined that the actual routing destination matches the desired routing destination, the at least one processing unit:
uses the at least one expected user entry to strengthen an association between the at least one expected user entry and the desired routing destination in the NLU model.
21. The apparatus of claim 19, wherein training the NLU model to associate the at least one expected user entry with the desired routing destination causes the NLU model to form an association between the at least one expected user entry and the desired routing destination such that a user providing the at least one expected user entry during an interaction with the NLU application is connected to the desired routing destination.
22. The apparatus of claim 19, wherein the at least one processing unit:
increases a statistical weighting of the at least one expected user entry relative to another entry in the NLU entry data set.
US11/300,799 2005-12-15 2005-12-15 Method and system for conveying an example in a natural language understanding application Active 2028-12-06 US8612229B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US11/300,799 US8612229B2 (en) 2005-12-15 2005-12-15 Method and system for conveying an example in a natural language understanding application
US14/088,858 US9384190B2 (en) 2005-12-15 2013-11-25 Method and system for conveying an example in a natural language understanding application
US15/151,277 US10192543B2 (en) 2005-12-15 2016-05-10 Method and system for conveying an example in a natural language understanding application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/300,799 US8612229B2 (en) 2005-12-15 2005-12-15 Method and system for conveying an example in a natural language understanding application

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/088,858 Continuation US9384190B2 (en) 2005-12-15 2013-11-25 Method and system for conveying an example in a natural language understanding application

Publications (2)

Publication Number Publication Date
US20070143099A1 US20070143099A1 (en) 2007-06-21
US8612229B2 true US8612229B2 (en) 2013-12-17

Family

ID=38174827

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/300,799 Active 2028-12-06 US8612229B2 (en) 2005-12-15 2005-12-15 Method and system for conveying an example in a natural language understanding application
US14/088,858 Active 2026-04-24 US9384190B2 (en) 2005-12-15 2013-11-25 Method and system for conveying an example in a natural language understanding application
US15/151,277 Active 2026-04-06 US10192543B2 (en) 2005-12-15 2016-05-10 Method and system for conveying an example in a natural language understanding application

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/088,858 Active 2026-04-24 US9384190B2 (en) 2005-12-15 2013-11-25 Method and system for conveying an example in a natural language understanding application
US15/151,277 Active 2026-04-06 US10192543B2 (en) 2005-12-15 2016-05-10 Method and system for conveying an example in a natural language understanding application

Country Status (1)

Country Link
US (3) US8612229B2 (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7574358B2 (en) 2005-02-28 2009-08-11 International Business Machines Corporation Natural language system and method based on unisolated performance metric
US9336193B2 (en) 2012-08-30 2016-05-10 Arria Data2Text Limited Method and apparatus for updating a previously generated text
US9355093B2 (en) 2012-08-30 2016-05-31 Arria Data2Text Limited Method and apparatus for referring expression generation
US9135244B2 (en) 2012-08-30 2015-09-15 Arria Data2Text Limited Method and apparatus for configurable microplanning
US9405448B2 (en) 2012-08-30 2016-08-02 Arria Data2Text Limited Method and apparatus for annotating a graphical output
US8762134B2 (en) 2012-08-30 2014-06-24 Arria Data2Text Limited Method and apparatus for situational analysis text generation
US8762133B2 (en) 2012-08-30 2014-06-24 Arria Data2Text Limited Method and apparatus for alert validation
US9600471B2 (en) 2012-11-02 2017-03-21 Arria Data2Text Limited Method and apparatus for aggregating with information generalization
WO2014076524A1 (en) 2012-11-16 2014-05-22 Data2Text Limited Method and apparatus for spatial descriptions in an output text
WO2014076525A1 (en) 2012-11-16 2014-05-22 Data2Text Limited Method and apparatus for expressing time in an output text
WO2014102568A1 (en) 2012-12-27 2014-07-03 Arria Data2Text Limited Method and apparatus for motion detection
US9990360B2 (en) 2012-12-27 2018-06-05 Arria Data2Text Limited Method and apparatus for motion description
GB2524934A (en) 2013-01-15 2015-10-07 Arria Data2Text Ltd Method and apparatus for document planning
WO2015028844A1 (en) 2013-08-29 2015-03-05 Arria Data2Text Limited Text generation from correlated alerts
US9396181B1 (en) 2013-09-16 2016-07-19 Arria Data2Text Limited Method, apparatus, and computer program product for user-directed reporting
US9244894B1 (en) 2013-09-16 2016-01-26 Arria Data2Text Limited Method and apparatus for interactive reports
US10664558B2 (en) 2014-04-18 2020-05-26 Arria Data2Text Limited Method and apparatus for document planning
US10445432B1 (en) 2016-08-31 2019-10-15 Arria Data2Text Limited Method and apparatus for lightweight multilingual natural language realizer
US10467347B1 (en) 2016-10-31 2019-11-05 Arria Data2Text Limited Method and apparatus for natural language document orchestrator
US10331791B2 (en) * 2016-11-23 2019-06-25 Amazon Technologies, Inc. Service for developing dialog-driven applications
US10957313B1 (en) 2017-09-22 2021-03-23 Amazon Technologies, Inc. System command processing
US10600419B1 (en) * 2017-09-22 2020-03-24 Amazon Technologies, Inc. System command processing
US10810994B2 (en) * 2018-07-19 2020-10-20 International Business Machines Corporation Conversational optimization of cognitive models
CN111199728A (en) * 2018-10-31 2020-05-26 阿里巴巴集团控股有限公司 Training data acquisition method and device, intelligent sound box and intelligent television
US11228682B2 (en) * 2019-12-30 2022-01-18 Genesys Telecommunications Laboratories, Inc. Technologies for incorporating an augmented voice communication into a communication routing configuration
US11341339B1 (en) * 2020-05-14 2022-05-24 Amazon Technologies, Inc. Confidence calibration for natural-language understanding models that provides optimal interpretability
US11393456B1 (en) * 2020-06-26 2022-07-19 Amazon Technologies, Inc. Spoken language understanding system
US11252149B1 (en) 2020-09-30 2022-02-15 Amazon Technologies, Inc. Resource management techniques for dialog-driven applications
US11817091B1 (en) 2020-09-30 2023-11-14 Amazon Technologies, Inc. Fault-tolerance techniques for dialog-driven applications

Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5386556A (en) 1989-03-06 1995-01-31 International Business Machines Corporation Natural language analyzing apparatus and method
US5633909A (en) * 1994-06-17 1997-05-27 Centigram Communications Corporation Apparatus and method for generating calls and testing telephone equipment
US5870464A (en) * 1995-11-13 1999-02-09 Answersoft, Inc. Intelligent information routing system and method
US5915011A (en) * 1997-02-10 1999-06-22 Genesys Telecommunications Laboratories, Inc. Statistically-predictive and agent-predictive call routing
JP2000250913A (en) 1999-02-25 2000-09-14 Nippon Telegr & Teleph Corp <Ntt> Example type natural language translation method, production method and device for list of bilingual examples and recording medium recording program of the production method and device
US6151575A (en) 1996-10-28 2000-11-21 Dragon Systems, Inc. Rapid adaptation of speech models
US6269153B1 (en) * 1998-07-29 2001-07-31 Lucent Technologies Inc. Methods and apparatus for automatic call routing including disambiguating routing decisions
US6285978B1 (en) 1998-09-24 2001-09-04 International Business Machines Corporation System and method for estimating accuracy of an automatic natural language translation
US6393388B1 (en) 1996-05-02 2002-05-21 Sony Corporation Example-based translation method and system employing multi-stage syntax dividing
US20020077823A1 (en) * 2000-10-13 2002-06-20 Andrew Fox Software development systems and methods
US20020116174A1 (en) * 2000-10-11 2002-08-22 Lee Chin-Hui Method and apparatus using discriminative training in natural language call routing and document retrieval
US20020181689A1 (en) * 2001-03-27 2002-12-05 Jason Rupe System and method for modeling resources for calls centered in a public switch telelphone network
US20030105634A1 (en) * 2001-10-15 2003-06-05 Alicia Abella Method for dialog management
US6606598B1 (en) * 1998-09-22 2003-08-12 Speechworks International, Inc. Statistical computing and reporting for interactive speech applications
US6629066B1 (en) 1995-07-18 2003-09-30 Nuance Communications Method and system for building and running natural language understanding systems
US20030225578A1 (en) * 1999-07-28 2003-12-04 Jonathan Kahn System and method for improving the accuracy of a speech recognition program
US20040030541A1 (en) * 2002-08-12 2004-02-12 Avaya Inc. Methods and apparatus for automatic training using natural language techniques for analysis of queries presented to a trainee and responses from the trainee
US20050135595A1 (en) * 2003-12-18 2005-06-23 Sbc Knowledge Ventures, L.P. Intelligently routing customer communications
US20060069569A1 (en) * 2004-09-16 2006-03-30 Sbc Knowledge Ventures, L.P. System and method for optimizing prompts for speech-enabled applications
US20060140357A1 (en) * 2004-12-27 2006-06-29 International Business Machines Corporation Graphical tool for creating a call routing application
US7092888B1 (en) * 2001-10-26 2006-08-15 Verizon Corporate Services Group Inc. Unsupervised training in natural language call routing
US20060195321A1 (en) * 2005-02-28 2006-08-31 International Business Machines Corporation Natural language system and method based on unisolated performance metric
US7117447B2 (en) * 2001-06-08 2006-10-03 Mci, Llc Graphical user interface (GUI) based call application system
US20070219798A1 (en) * 2006-03-16 2007-09-20 Microsoft Corporation Training system for a speech recognition application
US7295981B1 (en) * 2004-01-09 2007-11-13 At&T Corp. Method for building a natural language understanding model for a spoken dialog system
US7346507B1 (en) * 2002-06-05 2008-03-18 Bbn Technologies Corp. Method and apparatus for training an automated speech recognition-based system
US7643998B2 (en) * 2001-07-03 2010-01-05 Apptera, Inc. Method and apparatus for improving voice recognition performance in a voice application distribution system
US8073699B2 (en) * 2005-08-16 2011-12-06 Nuance Communications, Inc. Numeric weighting of error recovery prompts for transfer to a human agent from an automated speech response system

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5794189A (en) * 1995-11-13 1998-08-11 Dragon Systems, Inc. Continuous speech recognition
US6064957A (en) * 1997-08-15 2000-05-16 General Electric Company Improving speech recognition through text-based linguistic post-processing
US20020032564A1 (en) * 2000-04-19 2002-03-14 Farzad Ehsani Phrase-based dialogue modeling with particular application to creating a recognition grammar for a voice-controlled user interface
JP2003505778A (en) * 1999-05-28 2003-02-12 セーダ インコーポレイテッド Phrase-based dialogue modeling with specific use in creating recognition grammars for voice control user interfaces
US6735563B1 (en) * 2000-07-13 2004-05-11 Qualcomm, Inc. Method and apparatus for constructing voice templates for a speaker-independent voice recognition system
US6941266B1 (en) * 2000-11-15 2005-09-06 At&T Corp. Method and system for predicting problematic dialog situations in a task classification system
US6751591B1 (en) * 2001-01-22 2004-06-15 At&T Corp. Method and system for predicting understanding errors in a task classification system
US7269545B2 (en) * 2001-03-30 2007-09-11 Nec Laboratories America, Inc. Method for retrieving answers from an information retrieval system
US7505911B2 (en) * 2001-09-05 2009-03-17 Roth Daniel L Combined speech recognition and sound recording
US20030225719A1 (en) * 2002-05-31 2003-12-04 Lucent Technologies, Inc. Methods and apparatus for fast and robust model training for object classification
US7835910B1 (en) * 2003-05-29 2010-11-16 At&T Intellectual Property Ii, L.P. Exploiting unlabeled utterances for spoken language understanding
US7672908B2 (en) * 2005-04-15 2010-03-02 Carnegie Mellon University Intent-based information processing and updates in association with a service agent
US8612229B2 (en) 2005-12-15 2013-12-17 Nuance Communications, Inc. Method and system for conveying an example in a natural language understanding application
US8086549B2 (en) * 2007-11-09 2011-12-27 Microsoft Corporation Multi-label active learning
US9785891B2 (en) * 2014-12-09 2017-10-10 Conduent Business Services, Llc Multi-task conditional random field models for sequence labeling

Patent Citations (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5386556A (en) 1989-03-06 1995-01-31 International Business Machines Corporation Natural language analyzing apparatus and method
US5633909A (en) * 1994-06-17 1997-05-27 Centigram Communications Corporation Apparatus and method for generating calls and testing telephone equipment
US6629066B1 (en) 1995-07-18 2003-09-30 Nuance Communications Method and system for building and running natural language understanding systems
US5870464A (en) * 1995-11-13 1999-02-09 Answersoft, Inc. Intelligent information routing system and method
US6393388B1 (en) 1996-05-02 2002-05-21 Sony Corporation Example-based translation method and system employing multi-stage syntax dividing
US6151575A (en) 1996-10-28 2000-11-21 Dragon Systems, Inc. Rapid adaptation of speech models
US5915011A (en) * 1997-02-10 1999-06-22 Genesys Telecommunications Laboratories, Inc. Statistically-predictive and agent-predictive call routing
US6269153B1 (en) * 1998-07-29 2001-07-31 Lucent Technologies Inc. Methods and apparatus for automatic call routing including disambiguating routing decisions
US6606598B1 (en) * 1998-09-22 2003-08-12 Speechworks International, Inc. Statistical computing and reporting for interactive speech applications
US6285978B1 (en) 1998-09-24 2001-09-04 International Business Machines Corporation System and method for estimating accuracy of an automatic natural language translation
JP2000250913A (en) 1999-02-25 2000-09-14 Nippon Telegr & Teleph Corp <Ntt> Example type natural language translation method, production method and device for list of bilingual examples and recording medium recording program of the production method and device
US20030225578A1 (en) * 1999-07-28 2003-12-04 Jonathan Kahn System and method for improving the accuracy of a speech recognition program
US6925432B2 (en) * 2000-10-11 2005-08-02 Lucent Technologies Inc. Method and apparatus using discriminative training in natural language call routing and document retrieval
US20020116174A1 (en) * 2000-10-11 2002-08-22 Lee Chin-Hui Method and apparatus using discriminative training in natural language call routing and document retrieval
US20020077823A1 (en) * 2000-10-13 2002-06-20 Andrew Fox Software development systems and methods
US20020181689A1 (en) * 2001-03-27 2002-12-05 Jason Rupe System and method for modeling resources for calls centered in a public switch telelphone network
US6668056B2 (en) * 2001-03-27 2003-12-23 Qwest Communications International, Inc. System and method for modeling resources for calls centered in a public switch telephone network
US7117447B2 (en) * 2001-06-08 2006-10-03 Mci, Llc Graphical user interface (GUI) based call application system
US7643998B2 (en) * 2001-07-03 2010-01-05 Apptera, Inc. Method and apparatus for improving voice recognition performance in a voice application distribution system
US20030105634A1 (en) * 2001-10-15 2003-06-05 Alicia Abella Method for dialog management
US7092888B1 (en) * 2001-10-26 2006-08-15 Verizon Corporate Services Group Inc. Unsupervised training in natural language call routing
US7346507B1 (en) * 2002-06-05 2008-03-18 Bbn Technologies Corp. Method and apparatus for training an automated speech recognition-based system
US7249011B2 (en) * 2002-08-12 2007-07-24 Avaya Technology Corp. Methods and apparatus for automatic training using natural language techniques for analysis of queries presented to a trainee and responses from the trainee
US20040030541A1 (en) * 2002-08-12 2004-02-12 Avaya Inc. Methods and apparatus for automatic training using natural language techniques for analysis of queries presented to a trainee and responses from the trainee
US7751552B2 (en) * 2003-12-18 2010-07-06 At&T Intellectual Property I, L.P. Intelligently routing customer communications
US7027586B2 (en) * 2003-12-18 2006-04-11 Sbc Knowledge Ventures, L.P. Intelligently routing customer communications
US20060098803A1 (en) * 2003-12-18 2006-05-11 Sbc Knowledge Ventures, L.P. Intelligently routing customer communications
US20050135595A1 (en) * 2003-12-18 2005-06-23 Sbc Knowledge Ventures, L.P. Intelligently routing customer communications
US7295981B1 (en) * 2004-01-09 2007-11-13 At&T Corp. Method for building a natural language understanding model for a spoken dialog system
US7620550B1 (en) * 2004-01-09 2009-11-17 At&T Intellectual Property Ii, L.P. Method for building a natural language understanding model for a spoken dialog system
US20060143015A1 (en) * 2004-09-16 2006-06-29 Sbc Technology Resources, Inc. System and method for facilitating call routing using speech recognition
US20060069569A1 (en) * 2004-09-16 2006-03-30 Sbc Knowledge Ventures, L.P. System and method for optimizing prompts for speech-enabled applications
US20060140357A1 (en) * 2004-12-27 2006-06-29 International Business Machines Corporation Graphical tool for creating a call routing application
US20060195321A1 (en) * 2005-02-28 2006-08-31 International Business Machines Corporation Natural language system and method based on unisolated performance metric
US8073699B2 (en) * 2005-08-16 2011-12-06 Nuance Communications, Inc. Numeric weighting of error recovery prompts for transfer to a human agent from an automated speech response system
US20070219798A1 (en) * 2006-03-16 2007-09-20 Microsoft Corporation Training system for a speech recognition application

Non-Patent Citations (21)

* Cited by examiner, † Cited by third party
Title
A.Berger et al., "A maximum entropy approach to natural language processing," Association for Computational Linguistics, 1996, pp. 39-71, vol. 22, No. 1.
B. Roark et al., "Corrective language modeling for large vocabulary ASR with the perceptron algorithm" IEEE ICASSP 2004, pp. I-749-I-752.
B. Souvignier et al., "Online adaptation for language models in spoken dialogue system," in Proc. ICSLP 1998, Sydney, Australia, 4 pgs.
B.-H. Juang et al., "Minimum classification error rate methods for speech recognition," IEEE Transactions on Speech and Audio Processing, May 1997, pp. 257-265, vol. 5, No. 3.
Ballard, B., et al., "A Phrase-Structured Grammatical Framework for Transportable Natural Language Processing", Comp. Ling., vol. 10, No. 2, pp. 81-96, Apr.-Jun. 1984.
C. Chelba et al., "Discriminative training of N-gram classifiers for speech and text routing," Eurospeech 2003, Geneva, Switzerland, Sep. 2003, pp. 2777-2780.
G. Riccardi, "Language model adaptation for spoken language systems," in Proc. ICSLP 1998, Sydney, Australia, 4 pgs.
G. Tur et al., "Extending boosting for call classification using word confusion networks," IEEE, ICASSP 2004, Montreal, Canada, pp. I-437-I-440.
G. Tur et al., "Improving spoken language understanding using word confusion networks," in Proceedings ICSLP 2002, pp. 1137-1140.
H.-K. Kuo et al., "Discriminative training of language models for speech recognition," in Proc. ICASSP 2002, Orlando, Florida, pp. I-325-I-328.
H.-K. Kuo et al., "Discriminative training of natural language call routers," IEEE Transactions on Speech and Audio Processing, Jan. 2003, pp. 100-109, vol. 11, No. 1.
J.-L. Gauvain et al., "MAP estimation of continuous density HMM: theory and applications," Proc. DARPA Speech and Natural Language, Morgan Kaufmann, Feb. 1992, pp. 1-6.
J.-L. Gauvain et al., "Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains," IEEE Transactions on Speech and Audio Processing, Apr. 1994, pp. 291-298, vol. 2, No. 2.
L. Mangu et al., "Finding consensus in speech recognition: word error minimization and other application of confusion networks," Computer Speech and Language, Oct. 2000, pp. 373-400, vol. 14, No. 4.
McRoy, S.W., et al., "Creating Natural Language Output for Real-Time Applications", Intelligence, pp. 21-34, Summer 2001.
P. C. Woodland et al., "Large scale discriminative training for speech recognition," Computer Speech and Language, 2002, pp. 25-47, vol. 16.
P. S. Gopalakrishnan et al., "An inequality for rational functions with application to some statistical estimation problems," IEEE Transactions on Information Theory, Jan. 1991, pp. 107-113, vol. 37, No. 1.
S. F. Chen et al., "An empirical study of smoothing techniques for language modeling," in Tech. Rep. TR-10-98, Center for Research in Computing Technology, Harvard University, Cambridge, MA 1998, pp. 310-318.
S. Katagiri et al., "Pattern recognition using a family of design algorithms based upon the generalized probabilistic descent method," Proceedings of the IEEE, Nov. 1998, pp. 2345-2373, vol. 86, No. 11.
V. Goel, "Conditional maximum likelihood estimation for improving annotation performance of N-gram models incorporating stochastic finite state grammars," in Proc. ICSLP 2004, Jeju Island, Korea, 4 pgs.
Y.-Y. Wang et al., "Is word error rate a good indicator for spoken language understanding accuracy," IEEE ASRU 2003, pp. 577-582.

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10192543B2 (en) 2005-12-15 2019-01-29 Nuance Communications, Inc. Method and system for conveying an example in a natural language understanding application
US10482182B1 (en) * 2018-09-18 2019-11-19 CloudMinds Technology, Inc. Natural language understanding system and dialogue systems

Also Published As

Publication number Publication date
US20140156265A1 (en) 2014-06-05
US9384190B2 (en) 2016-07-05
US20160253991A1 (en) 2016-09-01
US20070143099A1 (en) 2007-06-21
US10192543B2 (en) 2019-01-29

Similar Documents

Publication Publication Date Title
US10192543B2 (en) Method and system for conveying an example in a natural language understanding application
US11238845B2 (en) Multi-dialect and multilingual speech recognition
US7702512B2 (en) Natural error handling in speech recognition
US7412387B2 (en) Automatic improvement of spoken language
US6839667B2 (en) Method of speech recognition by presenting N-best word candidates
US9666182B2 (en) Unsupervised and active learning in automatic speech recognition for call classification
US7016827B1 (en) Method and system for ensuring robustness in natural language understanding
JP5819924B2 (en) Recognition architecture for generating Asian characters
US8407050B2 (en) Method and system for automatic transcription prioritization
US20180308474A1 (en) Systems and methods for modeling l1-specific phonological errors in computer-assisted pronunciation training system
US7228275B1 (en) Speech recognition system having multiple speech recognizers
US9754586B2 (en) Methods and apparatus for use in speech recognition systems for identifying unknown words and for adding previously unknown words to vocabularies and grammars of speech recognition systems
KR101445904B1 (en) System and methods for maintaining speech-to-speech translation in the field
US8346553B2 (en) Speech recognition system and method for speech recognition
US7620550B1 (en) Method for building a natural language understanding model for a spoken dialog system
López-Cózar et al. Assessment of dialogue systems by means of a new simulation technique
US20080133245A1 (en) Methods for speech-to-speech translation
US20060229870A1 (en) Using a spoken utterance for disambiguation of spelling inputs into a speech recognition system
US20060287868A1 (en) Dialog system
Raux et al. Using task-oriented spoken dialogue systems for language learning: potential, practical applications and challenges
Lee et al. Hybrid approach to robust dialog management using agenda and dialog examples
CN110021293B (en) Voice recognition method and device and readable storage medium
US20210104235A1 (en) Arbitration of Natural Language Understanding Applications
US6963834B2 (en) Method of speech recognition using empirically determined word candidates
US11636853B2 (en) Natural language grammar improvement

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BALCHANDRAN, RAJESH;BOYER, LINDA M.;LEWIS, JAMES R.;AND OTHERS;REEL/FRAME:017088/0329;SIGNING DATES FROM 20051213 TO 20051215

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BALCHANDRAN, RAJESH;BOYER, LINDA M.;LEWIS, JAMES R.;AND OTHERS;SIGNING DATES FROM 20051213 TO 20051215;REEL/FRAME:017088/0329

AS Assignment

Owner name: NUANCE COMMUNICATIONS, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:022689/0317

Effective date: 20090331

Owner name: NUANCE COMMUNICATIONS, INC.,MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:022689/0317

Effective date: 20090331

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NUANCE COMMUNICATIONS, INC.;REEL/FRAME:065552/0934

Effective date: 20230920