US20050119892A1 - Method and arrangement for managing grammar options in a graphical callflow builder - Google Patents

Method and arrangement for managing grammar options in a graphical callflow builder

Info

Publication number
US20050119892A1
US20050119892A1 (application US10/726,102)
Authority
US
United States
Prior art keywords
grammar
built
user
option
prompt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/726,102
Inventor
Ciprian Agapi
Felipe Gomez
James Lewis
Vanessa Michelini
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nuance Communications Inc
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US10/726,102 (published as US20050119892A1)
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AGAPI, CIPRIAN, GOMEZ, FELIPE, LEWIS, JAMES R., MICHELINI, VANESSA V.
Publication of US20050119892A1
Assigned to NUANCE COMMUNICATIONS, INC. reassignment NUANCE COMMUNICATIONS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION
Priority to US13/344,193 (issued as US8355918B2)
Legal status: Abandoned

Classifications

    • G10L15/183: Speech recognition using natural language modelling with context dependencies, e.g. language models
    • G10L15/19: Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
    • G10L15/193: Formal grammars, e.g. finite state automata, context free grammars or word networks
    • G10L2015/228: Procedures used during a speech recognition process, e.g. man-machine dialogue, using non-speech characteristics of application context
    • H04M3/4936: Interactive information services, e.g. interactive voice response [IVR] systems or voice portals; speech interaction details
    • H04M2203/355: Interactive dialogue design tools, features or methods

Abstract

A method (10) in a speech recognition application callflow can include the steps of assigning (11) an individual option and a pre-built grammar to a same prompt, treating (15) the individual option as a valid output of the pre-built grammar if the individual option is a potential valid match to a recognition phrase (12) or an annotation (13) in the pre-built grammar, and treating (14) the individual option as an independent grammar from the pre-built grammar if the individual option fails to be a potential valid match to the recognition phrase or the annotation in the pre-built grammar.

Description

    BACKGROUND OF THE INVENTION
  • 1. Technical Field
  • This invention relates to the field of graphical user interfaces and more particularly to a graphical call flow builder.
  • 2. Description of the Related Art
  • Systems exist that allow callflow designers to write simple grammar options or to separately select prebuilt grammar files in graphical callflow builders; several such systems are described below. However, no existing system allows designers who have no technical knowledge of speech grammars to both select a pre-built grammar file and write individual options in the same element of a callflow. Furthermore, no other system lets a designer select a specific output of a prebuilt grammar for special treatment in a callflow. The system described below overcomes these problems.
  • One such system, as described in U.S. Pat. No. 6,510,411, discusses a simplification of the process of developing call or dialogue flows for use in an Interactive Voice Response system, where three principal aspects of the invention include a task-oriented dialogue model (or task model), a development tool, and a dialogue manager. The task model is a framework for describing the application-specific information needed to perform the task. The development tool is an object that interprets a user-specified task model and outputs information for a spoken dialogue system to perform according to the specified task model. The dialogue manager is a runtime system that uses output from the development tool in carrying out interactive dialogues to perform the task specified according to the task model. The dialogue manager conducts the dialogue using the task model and its built-in knowledge of dialogue management. In addition, generic knowledge of how to conduct a dialogue is separated from the specific information to be collected in a particular application. It is only necessary for the developer to provide the specific information about the structure of a task, leaving the specifics of dialogue management to the dialogue manager. This invention describes a form-based method for developing very simple speech applications and does not address the use of external grammar files at all.
  • Another system, U.S. Pat. No. 6,269,336, discusses a voice browser for interactive services. A markup language document, as described in U.S. Pat. No. 6,269,336, includes a dialogue element comprising a plurality of markup language elements. Each of the plurality of markup language elements is identifiable by at least one markup tag. A step element is contained within the dialogue element to define a state within the dialogue element. The step element includes a prompt element and an input element. The prompt element includes an announcement to be read to the user. The input element includes at least one input that corresponds to a user input. A method in accordance with that invention includes the steps of creating a markup language document having a plurality of elements, selecting a prompt element, and defining a voice communication in the prompt element to be read to the user. The method further includes the steps of selecting an input element and defining an input variable to store data inputted by the user. Although this invention describes a markup language similar, but not identical to, VoiceXML, and includes the capacity (like VoiceXML) to refer to either built-in or external grammars, it does not address the resolution of specific new options with the contents of existing grammars.
  • U.S. Pat. No. 6,173,266 discusses a dialogue module that includes computer readable instructions for accomplishing a predefined interactive dialogue task in an interactive speech application. In response to user input, a subset of the plurality of dialogue modules is selected to accomplish their respective interactive dialogue tasks in the interactive speech application; the selected modules are interconnected in an order defining the callflow of the application, and the application is generated. A graphical user interface represents the stored plurality of dialogue modules as icons in a graphical display, in which the icons for the subset of dialogue modules are selected. In response to user input, the icons for the subset of dialogue modules are graphically interconnected into a graphical representation of the call flow of the interactive speech application, and the interactive speech application is generated based upon the graphical representation. Using the graphical display, the method further includes associating configuration parameters with specific dialogue modules. Once again, this existing invention describes a graphical callflow builder using dialogue modules as elements, but does not address the resolution of specific new options with the contents of existing grammars.
  • SUMMARY OF THE INVENTION
  • Embodiments in accordance with the invention can enable callflow designers to work more efficiently with lists of variables in a graphical callflow builder, particularly where users can create their own variable names. Furthermore, embodiments disclosed herein overcome the problems described above through the automatic evaluation of options added to prompts in a graphical callflow when the prompt uses one or more existing grammars. This evaluation determines whether the added options are present in one or more of the existing grammars. If not present, the added options are used as external referents in the graphical callflow and become part of a newly generated grammar. If present, the added options are only used as external referents in the graphical callflow and do not become part of a newly generated grammar.
  • In a first aspect of the invention, a method for a speech recognition application callflow can include the steps of placing a prompt into a workspace for the speech recognition application callflow and attaching at least one among a pre-built grammar and a user-entered individual new option to the prompt. The pre-built grammars can be selected from a list. The method can further include the step of searching the list of pre-built grammars for matches to the user-entered individual new option. If a match exists between the pre-built grammar and the user-entered individual new option, then the user-entered individual new option can point to the equivalent pre-built grammar entry. If no match exists, then the user-entered individual new option can form part of a newly generated grammar that is added to the list of pre-built grammars (as shown in the sketch below).
  • In a second aspect of the invention, a method in a speech recognition application callflow can include the steps of assigning an individual option and a pre-built grammar to the same prompt, treating the individual option as a valid output of the pre-built grammar if the individual option is a potential valid match to a recognition phrase or an annotation in the pre-built grammar, and treating the individual option as an independent grammar from the pre-built grammar if the individual option fails to be a potential valid match to the recognition phrase or the annotation in the pre-built grammar.
  • In a third aspect of the invention, a system for managing grammar options in a graphical callflow builder can include a memory and a processor. The processor can be programmed to place a prompt into a workspace for the speech recognition application callflow and to attach at least one among a pre-built grammar and a user-entered individual new option to the prompt.
  • In a fourth aspect of the invention, a computer program has a plurality of code sections executable by a machine for causing the machine to perform certain steps as described in the method and systems above.
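  • For illustration only, the following minimal sketch shows one way the workspace, prompt, pre-built grammar, and user-entered option of the first and third aspects could be modeled in code. All class, function, and field names are hypothetical and not part of the claimed subject matter; the grammar contents mirror the time.jsgf example discussed later.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class PrebuiltGrammar:
        """A pre-built grammar: recognition phrases mapped to their annotations."""
        name: str
        entries: Dict[str, str]   # e.g. {"midnight": "1200AM", "noon": "noon"}

    @dataclass
    class Prompt:
        """A callflow prompt element with attached grammars and new options."""
        label: str
        text: str
        grammars: List[PrebuiltGrammar] = field(default_factory=list)
        new_options: List[str] = field(default_factory=list)

        def attach_grammar(self, grammar: PrebuiltGrammar) -> None:
            self.grammars.append(grammar)

        def attach_option(self, option: str) -> None:
            self.new_options.append(option)

    @dataclass
    class Workspace:
        """The graphical callflow builder workspace holding placed prompts."""
        prompts: List[Prompt] = field(default_factory=list)

        def place_prompt(self, prompt: Prompt) -> None:
            self.prompts.append(prompt)

    # Usage: place the Time prompt and attach time.jsgf plus the new option 'midnight'.
    time_grammar = PrebuiltGrammar("time.jsgf", {"midnight": "1200AM", "noon": "noon"})
    time_prompt = Prompt("12345", "For what time?")
    time_prompt.attach_grammar(time_grammar)
    time_prompt.attach_option("midnight")
    workspace = Workspace()
    workspace.place_prompt(time_prompt)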
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • There are shown in the drawings embodiments which are presently preferred, it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.
  • FIG. 1 is a flow diagram illustrating a method in a speech recognition application callflow in accordance with the present invention.
  • FIG. 2 is an exemplary instantiation of a callflow GUI with system and user-generated labels for callflow elements in accordance with the present invention.
  • FIGS. 3A and 3B illustrate a callflow element prompt and callflow element in accordance with the present invention.
  • FIG. 4 is a portion of an exemplary instantiation of a callflow GUI in accordance with the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • In our proposed system, designers can put a prompt into a workspace, then attach either prebuilt grammars from a list or attach individual new options, or both. To keep the system as parsimonious as possible, and to prevent potential conflicts between multiple grammars, if the user combines a prebuilt grammar and any new options, the system searches the prebuilt grammar for any matches to the new options, searching both valid utterances and associated annotations. If the new option exists in the grammar, the ‘new’ option simply points to the equivalent grammar entry. Otherwise, the new option becomes part of a grammar automatically built to hold it, with the entry in the new grammar having the text of the new option as both the recognition string and an associated annotation. Thus, without any deep understanding of the structure of a speech recognition grammar, callflow designers can create or work with grammars with a high degree of flexibility.
  • Referring to FIG. 1, a high-level flowchart of a method 10 in a speech recognition application callflow is shown. The method 10 can include the step 11 of assigning an individual option and a pre-built grammar to the same prompt. At decision block 12, if the individual option is a potential valid match to a recognition phrase, then the method treats the individual option as a valid output of the pre-built grammar at step 15. Likewise, at decision block 13, if the individual option is a potential valid match to an annotation in the pre-built grammar, then the method also treats the individual option as a valid output of the pre-built grammar at step 15. If the individual option fails to be a potential valid match to the recognition phrase or the annotation in the pre-built grammar, then the individual option can be treated as an independent grammar from the pre-built grammar at step 14.
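  • As a minimal sketch of this evaluation (assuming the pre-built grammar has already been parsed into a mapping from recognition phrases to annotations; the function and variable names below are illustrative only):
    from typing import Dict, Optional, Tuple

    def resolve_option(option: str,
                       grammar_entries: Dict[str, str]) -> Tuple[str, Optional[Dict[str, str]]]:
        """Decide how a user-entered option relates to a pre-built grammar.

        grammar_entries maps recognition phrases to annotations,
        e.g. {"midnight": "1200AM", "noon": "noon"}.  Returns
        ("valid_output", None) if the option matches a recognition phrase or
        an annotation (decision blocks 12 and 13, step 15), otherwise
        ("independent_grammar", new_grammar) where new_grammar holds the
        option as both recognition string and annotation (step 14).
        """
        wanted = option.strip().lower()
        for phrase, annotation in grammar_entries.items():
            if wanted == phrase.strip().lower():        # decision block 12
                return "valid_output", None
            if wanted == annotation.strip().lower():    # decision block 13
                return "valid_output", None
        # No match: build a one-entry grammar to hold the new option (step 14).
        return "independent_grammar", {option: option}

    # Example: 'midnight' is in time.jsgf; 'Yes' is not.
    time_entries = {"midnight": "1200AM", "noon": "noon"}
    print(resolve_option("midnight", time_entries))  # ('valid_output', None)
    print(resolve_option("Yes", time_entries))       # ('independent_grammar', {'Yes': 'Yes'})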
  • Referring to FIG. 2, a possible instantiation of a callflow GUI with system- and user-generated labels for callflow elements is shown in accordance with the present invention. In particular, the callflow GUI 20 illustrates a reminder system where callflow element 22 welcomes the user to the system. Callflow element 24 determines a particular date using the user-defined variable 'Date', the value of which will be an output of the grammar named date.jsgf. Callflow element 26 confirms the entry for the date. Callflow element 28 determines a time using the user-defined variable 'time', the value of which will be an output of the grammar named time.jsgf. Callflow element 30 then confirms the entry for the time. Callflow element 32 then prompts the user to record at the tone, and callflow element 34 prompts the user to determine if another reminder is desired. Note that the prompt in 34 can take as speech input any valid phrase in the date.jsgf grammar plus 'Yes' or 'No'. Without inspection, it is not possible to determine whether 'Yes' and/or 'No' are valid phrases in the date.jsgf grammar. For example, suppose the designer has created the callflow shown in FIG. 2, with 'Yes' and 'No' defined as valid responses to the prompt in 34 along with the date.jsgf grammar. The high-level flowchart of FIG. 1 would then illustrate the actions a system would take in evaluating these options, as previously described above. If no further reminders are to be set, then callflow element 36 provides a goodbye greeting.
  • Assume that a system exists for the graphical building of speech recognition callflows. A key component of such a system would be a prompt—a request for user input. The prompt could have a symbolic representation similar to the call flow element 29 shown in FIG. 3A. Note that the symbol contains an automatically-generated label (“12345”), prompt text (“This is a prompt”), and a placeholder for a grammar option. Through some well-known means (property sheet, drag-and-drop, etc.), the designer can select from a set of prebuilt grammars (such as the built-in types in VoiceXML, custom-built grammars from a library, etc.) a grammar for the prompt as shown in FIG. 3B. The designer can also select one or more additional options for recognition at that prompt.
  • For example, suppose the designer has created the callflow shown in FIG. 4, then determines that there is a need to disambiguate ‘midnight’ as a special case if spoken in response to the request for the reminder time. The callflow element 102 of FIG. 4 and the high-level flowchart of FIG. 1 would then illustrate the actions a system would take in evaluating this new option as previously described above.
  • While these techniques can be generalized to any code generated from the callflow, here is an example of a VoiceXML form capable of being automatically generated from the information provided in the graphical callflow for the Time prompt (assuming that 'midnight' was NOT a valid input or annotation in time.jsgf):
    <form id="Time">
      <field name="Time">
        <prompt>
          <audio src="Time.wav">
            For what time?
          </audio>
        </prompt>
        <grammar src="time.jsgf"/>
        <grammar>midnight {midnight}</grammar>
        <filled>
          <if cond="Time == 'midnight'">
            <goto next="#Midnight"/>
          </if>
          <goto next="#C0020"/>
        </filled>
      </field>
    </form>
  • Finally, here is an example of a VoiceXML form capable of being generated from the information provided in the graphical callflow for the Time prompt, assuming that 'midnight' IS a valid input for time.jsgf and that the annotation returned for 'midnight' is '1200AM'.
    <form id="Time">
      <field name="Time">
        <prompt>
          <audio src="Time.wav">
            For what time?
          </audio>
        </prompt>
        <grammar src="time.jsgf"/>
        <filled>
          <if cond="Time == '1200AM'">
            <goto next="#Midnight"/>
          </if>
          <goto next="#C0020"/>
        </filled>
      </field>
    </form>
  • Note that in searching the grammar (shown in the listing below, using a JSGF grammar as an example, though the approach would be workable for any type of grammar that includes recognition text and annotations, including bnf, srcl, SRGS XML, SRGS ABNF, etc.), it could be determined that 'midnight' was in the grammar and that the annotation for midnight was '1200AM', which enabled the automatic generation of the <if> statement in the form code above. All of this is capable of being done without any detailed knowledge of the content of the prebuilt grammars on the part of the callflow designer; a sketch of such a lookup follows the grammar listing.
    #JSGF V1.0 iso-8859-1;
    grammar time;
    public <time> = [<starter>] [<at>] <hour> [o clock] {needampm}
                  | [<starter>] [<at>] <hour> <minute> {needampm}
                  | [<starter>] [<at>] <hour> [o clock] <ampm>
                  | [<starter>] [<at>] <hour> <minute> <ampm>
                  | [<starter>] [<at>] half past <hour> {needampm}
                  | [<starter>] [<at>] half past <hour> <ampm>
                  | [<starter>] [<at>] [a] quarter till <hour> {needampm}
                  | [<starter>] [<at>] [a] quarter till <hour> <ampm>
                  | [<starter>] [<at>] <minute> till <hour> {needampm}
                  | [<starter>] [<at>] <minute> till <hour> <ampm>
                  | [<starter>] [<at>] <minute> after <hour> {needampm}
                  | [<starter>] [<at>] <minute> after <hour> <ampm>
                  | [<starter>] [<at>] noon {noon}
                  | [<starter>] [<at>] midnight {1200AM}
                  ;
    <starter> = set | set time | set remindertime;
    <at> = at | for;
    <hour> = one | two | three | four | five | six | seven | eight | nine | ten | eleven | twelve;
    <minute> = <units> | <teens> | <tens> | <tens> <units>;
    <units> = one | two | three | four | five | six | seven | eight | nine;
    <teens> = ten | eleven | twelve | thirteen | fourteen | fifteen | sixteen | seventeen | eighteen | nineteen;
    <tens> = twenty | thirty | forty | fifty;
    <ampm> = AM | PM;
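  • As a rough sketch only (not the actual tool's implementation), the lookup described above could be approximated by scanning the JSGF source for the option text and reading the annotation that follows it; the regular expression, the file name time.jsgf, and the generated condition string are assumptions for illustration.
    import re
    from typing import Optional

    def find_annotation(jsgf_source: str, option: str) -> Optional[str]:
        """Return the {annotation} that follows `option` in the JSGF text, if any."""
        # Matches e.g. "midnight {1200AM}"; whitespace before the brace is optional.
        pattern = re.compile(re.escape(option) + r"\s*\{([^}]*)\}", re.IGNORECASE)
        match = pattern.search(jsgf_source)
        return match.group(1).strip() if match else None

    def branch_condition(jsgf_source: str, option: str = "midnight") -> str:
        """Build the <if> condition used in the generated VoiceXML form."""
        annotation = find_annotation(jsgf_source, option)
        if annotation is not None:
            # Option is in the grammar: branch on its annotation (e.g. '1200AM').
            return "Time == '%s'" % annotation
        # Option is not in the grammar: branch on the option text itself.
        return "Time == '%s'" % option

    # Assumes the grammar above has been saved as time.jsgf.
    with open("time.jsgf", encoding="iso-8859-1") as f:
        print(branch_condition(f.read()))  # Time == '1200AM'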
  • It should be understood that the present invention can be realized in hardware, software, or a combination of hardware and software. The present invention can also be realized in a centralized fashion in one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software can be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
  • The present invention also can be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program or application in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
  • This invention can be embodied in other forms without departing from the spirit or essential attributes thereof. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the invention.

Claims (20)

1. A method in a speech recognition application callflow, comprising the steps of:
placing a prompt into a workspace for the speech recognition application workflow; and
attaching at least one among a pre-built grammar and a user-entered individual new option to the prompt.
2. The method of claim 1, wherein the step of attaching the pre-built grammar comprises the step of selecting the pre-built grammar from a list.
3. The method of claim 2, wherein the method further comprises the step of searching the list of pre-built grammars for matches to the user-entered individual new option.
4. The method of claim 3, wherein if a match exists between the pre-built grammar and the user-entered individual new option, then the user-entered individual new option points to an equivalent pre-built grammar.
5. The method of claim 3, wherein if a match exists between the pre-built grammar and the user-entered individual new option, then the user-entered individual new option forms a part of the list of pre-built grammars.
6. The method of claim 1, wherein the pre-built grammars are selected from the group comprising VoiceXML and custom-built grammars from a library.
7. The method of claim 1, wherein the method further comprises the step of enabling a customized user selective output of the pre-built grammar.
8. The method of claim 1, wherein the method supports prototyping without knowledge of a grammar structure by a user.
9. The method of claim 3, wherein the method further comprises the step of feeding the result of the step of searching to the pre-defined grammar instead of forming an auxiliary grammar.
10. A method in a speech recognition application callflow, comprising the steps of:
assigning an individual option and a pre-built grammar to a same prompt;
treating the individual option as a valid output of the pre-built grammar if the individual option is a potential valid match to a recognition phrase or an annotation in the pre-built grammar; and
treating the individual option as an independent grammar from the pre-built grammar if the individual option fails to be a potential valid match to the recognition phrase or the annotation in the pre-built grammar.
11. A system for managing grammar options in a graphical callflow builder, comprising:
a memory; and
a processor programmed to place a prompt into a workspace for the speech recognition application workflow and to attach at least one among a pre-built grammar and a user-entered individual new option to the prompt.
12. The system of claim 11, wherein the processor attaches the pre-built grammar by selecting the pre-built grammar from a list.
13. The system of claim 12, wherein the processor is further programmed to search the list of pre-built grammars for matches to the user-entered individual new option.
14. The system of claim 13, wherein if a match exists between the pre-built grammar and the user-entered individual new option, then the user-entered individual new option points to an equivalent pre-built grammar.
15. The system of claim 13, wherein if a match exists between the pre-built grammar and the user-entered individual new option, then the user-entered individual new option forms a part of the list of pre-built grammars.
16. The system of claim 11, wherein the pre-built grammars are selected from the group comprising VoiceXML and custom-built grammars from a library.
17. The system of claim 11, wherein the processor is further programmed to further enable a customized user selective output of the pre-built grammar.
18. The system of claim 13, wherein the processor is further programmed to feed the result of the search to the pre-defined grammar instead of forming an auxiliary grammar.
19. A machine-readable storage, having stored thereon a computer program having a plurality of code sections executable by a machine for causing the machine to perform the steps of placing a prompt into a workspace for the speech recognition application workflow and attaching at least one among a pre-built grammar and a user-entered individual new option to the prompt.
20. The machine-readable storage of claim 19, wherein the machine-readable storage is further programmed to select the pre-built grammar from a list.
US10/726,102 2003-12-02 2003-12-02 Method and arrangement for managing grammar options in a graphical callflow builder Abandoned US20050119892A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/726,102 US20050119892A1 (en) 2003-12-02 2003-12-02 Method and arrangement for managing grammar options in a graphical callflow builder
US13/344,193 US8355918B2 (en) 2003-12-02 2012-01-05 Method and arrangement for managing grammar options in a graphical callflow builder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/726,102 US20050119892A1 (en) 2003-12-02 2003-12-02 Method and arrangement for managing grammar options in a graphical callflow builder

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/344,193 Continuation US8355918B2 (en) 2003-12-02 2012-01-05 Method and arrangement for managing grammar options in a graphical callflow builder

Publications (1)

Publication Number Publication Date
US20050119892A1 true US20050119892A1 (en) 2005-06-02

Family

ID=34620433

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/726,102 Abandoned US20050119892A1 (en) 2003-12-02 2003-12-02 Method and arrangement for managing grammar options in a graphical callflow builder
US13/344,193 Expired - Lifetime US8355918B2 (en) 2003-12-02 2012-01-05 Method and arrangement for managing grammar options in a graphical callflow builder

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/344,193 Expired - Lifetime US8355918B2 (en) 2003-12-02 2012-01-05 Method and arrangement for managing grammar options in a graphical callflow builder

Country Status (1)

Country Link
US (2) US20050119892A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070129947A1 (en) * 2005-12-02 2007-06-07 International Business Machines Corporation Method and system for testing sections of large speech applications
US20070136351A1 (en) * 2005-12-09 2007-06-14 International Business Machines Corporation System and methods for previewing alternative compositions and arrangements when composing a strictly-structured flow diagram
US20080151886A1 (en) * 2002-09-30 2008-06-26 Avaya Technology Llc Packet prioritization and associated bandwidth and buffer management techniques for audio over ip
US20090198496A1 (en) * 2008-01-31 2009-08-06 Matthias Denecke Aspect oriented programmable dialogue manager and apparatus operated thereby
US20100036661A1 (en) * 2008-07-15 2010-02-11 Nu Echo Inc. Methods and Systems for Providing Grammar Services
US7978827B1 (en) 2004-06-30 2011-07-12 Avaya Inc. Automatic configuration of call handling based on end-user needs and characteristics
US8218751B2 (en) 2008-09-29 2012-07-10 Avaya Inc. Method and apparatus for identifying and eliminating the source of background noise in multi-party teleconferences
US8593959B2 (en) 2002-09-30 2013-11-26 Avaya Inc. VoIP endpoint call admission
US20150032442A1 (en) * 2013-07-26 2015-01-29 Nuance Communications, Inc. Method and apparatus for selecting among competing models in a tool for building natural language understanding models
CN110853676A (en) * 2019-11-18 2020-02-28 广州国音智能科技有限公司 Audio comparison method, device and equipment
US10636425B2 (en) 2018-06-05 2020-04-28 Voicify, LLC Voice application platform
US10803865B2 (en) 2018-06-05 2020-10-13 Voicify, LLC Voice application platform
US10943589B2 (en) 2018-06-05 2021-03-09 Voicify, LLC Voice application platform
US11437029B2 (en) * 2018-06-05 2022-09-06 Voicify, LLC Voice application platform

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9754585B2 (en) * 2012-04-03 2017-09-05 Microsoft Technology Licensing, Llc Crowdsourced, grounded language for intent modeling in conversational interfaces
US10229106B2 (en) * 2013-07-26 2019-03-12 Nuance Communications, Inc. Initializing a workspace for building a natural language understanding system
US9953646B2 (en) 2014-09-02 2018-04-24 Belleau Technologies Method and system for dynamic speech recognition and tracking of prewritten script
US11043206B2 (en) 2017-05-18 2021-06-22 Aiqudo, Inc. Systems and methods for crowdsourced actions and commands
US11520610B2 (en) 2017-05-18 2022-12-06 Peloton Interactive Inc. Crowdsourced on-boarding of digital assistant operations
US11340925B2 (en) 2017-05-18 2022-05-24 Peloton Interactive Inc. Action recipes for a crowdsourced digital assistant system
US11056105B2 (en) * 2017-05-18 2021-07-06 Aiqudo, Inc Talk back from actions in applications
US10838746B2 (en) 2017-05-18 2020-11-17 Aiqudo, Inc. Identifying parameter values and determining features for boosting rankings of relevant distributable digital assistant operations
WO2019152511A1 (en) 2018-01-30 2019-08-08 Aiqudo, Inc. Personalized digital assistant device and related methods

Citations (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US18476A (en) * 1857-10-20 Nathaniel thomas
US32564A (en) * 1861-06-18 Improvement in harrows
US41314A (en) * 1864-01-19 Improvement in cultivator and seeder
US83882A (en) * 1868-11-10 Improvement in hydrants
US184002A (en) * 1876-11-07 Improvement in water-hooks for harness
US4864501A (en) * 1987-10-07 1989-09-05 Houghton Mifflin Company Word annotation system
US5617578A (en) * 1990-06-26 1997-04-01 Spss Corp. Computer-based workstation for generation of logic diagrams from natural language text structured by the insertion of script symbols
US5704060A (en) * 1995-05-22 1997-12-30 Del Monte; Michael G. Text storage and retrieval system and method
US5799273A (en) * 1996-09-24 1998-08-25 Allvoice Computing Plc Automated proofreading using interface linking recognized words to their audio data while text is being changed
US5812977A (en) * 1996-08-13 1998-09-22 Applied Voice Recognition L.P. Voice control computer interface enabling implementation of common subroutines
US5903867A (en) * 1993-11-30 1999-05-11 Sony Corporation Information access system and recording system
US5940797A (en) * 1996-09-24 1999-08-17 Nippon Telegraph And Telephone Corporation Speech synthesis method utilizing auxiliary information, medium recorded thereon the method and apparatus utilizing the method
US5970460A (en) * 1997-12-05 1999-10-19 Lernout & Hauspie Speech Products N.V. Speech recognition and editing system
US6064961A (en) * 1998-09-02 2000-05-16 International Business Machines Corporation Display for proofreading text
US6100891A (en) * 1998-06-09 2000-08-08 Teledirect International, Inc. Call center agent interface and development tool
US6112174A (en) * 1996-11-13 2000-08-29 Hitachi, Ltd. Recognition dictionary system structure and changeover method of speech recognition system for car navigation
US6173266B1 (en) * 1997-05-06 2001-01-09 Speechworks International, Inc. System and method for developing interactive speech applications
US6246981B1 (en) * 1998-11-25 2001-06-12 International Business Machines Corporation Natural language task-oriented dialog manager and method
US6269336B1 (en) * 1998-07-24 2001-07-31 Motorola, Inc. Voice browser for interactive services and methods thereof
US6321198B1 (en) * 1999-02-23 2001-11-20 Unisys Corporation Apparatus for design and simulation of dialogue
US20020032564A1 (en) * 2000-04-19 2002-03-14 Farzad Ehsani Phrase-based dialogue modeling with particular application to creating a recognition grammar for a voice-controlled user interface
US6490564B1 (en) * 1999-09-03 2002-12-03 Cisco Technology, Inc. Arrangement for defining and processing voice enabled web applications using extensible markup language documents
US6510411B1 (en) * 1999-10-29 2003-01-21 Unisys Corporation Task oriented dialog model and manager
US6578000B1 (en) * 1999-09-03 2003-06-10 Cisco Technology, Inc. Browser-based arrangement for developing voice enabled web applications using extensible markup language documents
US20030144846A1 (en) * 2002-01-31 2003-07-31 Denenberg Lawrence A. Method and system for modifying the behavior of an application based upon the application's grammar
US20030195739A1 (en) * 2002-04-16 2003-10-16 Fujitsu Limited Grammar update system and method
US6714905B1 (en) * 2000-05-02 2004-03-30 Iphrase.Com, Inc. Parsing ambiguous grammar
US6961700B2 (en) * 1996-09-24 2005-11-01 Allvoice Computing Plc Method and apparatus for processing the output of a speech recognition engine
US7024348B1 (en) * 2000-09-28 2006-04-04 Unisys Corporation Dialogue flow interpreter development tool
US7099824B2 (en) * 2000-11-27 2006-08-29 Canon Kabushiki Kaisha Speech recognition system, speech recognition server, speech recognition client, their control method, and computer readable memory

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998036585A2 (en) * 1997-02-18 1998-08-20 Northern Telecom Inc. Sponsored call and cell service
US6463130B1 (en) * 1998-07-31 2002-10-08 Bellsouth Intellectual Property Corporation Method and system for creating automated voice response menus for telecommunications services
US6381323B1 (en) * 1999-02-16 2002-04-30 Ameritech Corporation Call programming apparatus and method
US6574595B1 (en) * 2000-07-11 2003-06-03 Lucent Technologies Inc. Method and apparatus for recognition-based barge-in detection in the context of subword-based automatic speech recognition
US20030083882A1 (en) 2001-05-14 2003-05-01 Schemers Iii Roland J. Method and apparatus for incorporating application logic into a voice responsive system
US20020184002A1 (en) 2001-05-30 2002-12-05 International Business Machines Corporation Method and apparatus for tailoring voice prompts of an interactive voice response system
US20030007609A1 (en) 2001-07-03 2003-01-09 Yuen Michael S. Method and apparatus for development, deployment, and maintenance of a voice software application for distribution to one or more consumers
US7065201B2 (en) * 2001-07-31 2006-06-20 Sbc Technology Resources, Inc. Telephone call processing in an interactive voice response call management system
US20030041314A1 (en) 2001-08-14 2003-02-27 Apac Customers Services, Inc. Call flow method and system using visual programming
US20040210443A1 (en) * 2003-04-17 2004-10-21 Roland Kuhn Interactive mechanism for retrieving information from audio and multimedia files containing speech
US20050086102A1 (en) * 2003-10-15 2005-04-21 International Business Machines Corporation Method and system for validation of service consumers

Patent Citations (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US32564A (en) * 1861-06-18 Improvement in harrows
US41314A (en) * 1864-01-19 Improvement in cultivator and seeder
US83882A (en) * 1868-11-10 Improvement in hydrants
US184002A (en) * 1876-11-07 Improvement in water-hooks for harness
US18476A (en) * 1857-10-20 Nathaniel thomas
US4864501A (en) * 1987-10-07 1989-09-05 Houghton Mifflin Company Word annotation system
US5617578A (en) * 1990-06-26 1997-04-01 Spss Corp. Computer-based workstation for generation of logic diagrams from natural language text structured by the insertion of script symbols
US5903867A (en) * 1993-11-30 1999-05-11 Sony Corporation Information access system and recording system
US5704060A (en) * 1995-05-22 1997-12-30 Del Monte; Michael G. Text storage and retrieval system and method
US5812977A (en) * 1996-08-13 1998-09-22 Applied Voice Recognition L.P. Voice control computer interface enabling implementation of common subroutines
US5799273A (en) * 1996-09-24 1998-08-25 Allvoice Computing Plc Automated proofreading using interface linking recognized words to their audio data while text is being changed
US5940797A (en) * 1996-09-24 1999-08-17 Nippon Telegraph And Telephone Corporation Speech synthesis method utilizing auxiliary information, medium recorded thereon the method and apparatus utilizing the method
US6961700B2 (en) * 1996-09-24 2005-11-01 Allvoice Computing Plc Method and apparatus for processing the output of a speech recognition engine
US6112174A (en) * 1996-11-13 2000-08-29 Hitachi, Ltd. Recognition dictionary system structure and changeover method of speech recognition system for car navigation
US6173266B1 (en) * 1997-05-06 2001-01-09 Speechworks International, Inc. System and method for developing interactive speech applications
US5970460A (en) * 1997-12-05 1999-10-19 Lernout & Hauspie Speech Products N.V. Speech recognition and editing system
US6100891A (en) * 1998-06-09 2000-08-08 Teledirect International, Inc. Call center agent interface and development tool
US6269336B1 (en) * 1998-07-24 2001-07-31 Motorola, Inc. Voice browser for interactive services and methods thereof
US6064961A (en) * 1998-09-02 2000-05-16 International Business Machines Corporation Display for proofreading text
US6246981B1 (en) * 1998-11-25 2001-06-12 International Business Machines Corporation Natural language task-oriented dialog manager and method
US6321198B1 (en) * 1999-02-23 2001-11-20 Unisys Corporation Apparatus for design and simulation of dialogue
US6490564B1 (en) * 1999-09-03 2002-12-03 Cisco Technology, Inc. Arrangement for defining and processing voice enabled web applications using extensible markup language documents
US6578000B1 (en) * 1999-09-03 2003-06-10 Cisco Technology, Inc. Browser-based arrangement for developing voice enabled web applications using extensible markup language documents
US6510411B1 (en) * 1999-10-29 2003-01-21 Unisys Corporation Task oriented dialog model and manager
US20020032564A1 (en) * 2000-04-19 2002-03-14 Farzad Ehsani Phrase-based dialogue modeling with particular application to creating a recognition grammar for a voice-controlled user interface
US6714905B1 (en) * 2000-05-02 2004-03-30 Iphrase.Com, Inc. Parsing ambiguous grammar
US7024348B1 (en) * 2000-09-28 2006-04-04 Unisys Corporation Dialogue flow interpreter development tool
US7099824B2 (en) * 2000-11-27 2006-08-29 Canon Kabushiki Kaisha Speech recognition system, speech recognition server, speech recognition client, their control method, and computer readable memory
US20030144846A1 (en) * 2002-01-31 2003-07-31 Denenberg Lawrence A. Method and system for modifying the behavior of an application based upon the application's grammar
US20030195739A1 (en) * 2002-04-16 2003-10-16 Fujitsu Limited Grammar update system and method

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8370515B2 (en) 2002-09-30 2013-02-05 Avaya Inc. Packet prioritization and associated bandwidth and buffer management techniques for audio over IP
US20080151886A1 (en) * 2002-09-30 2008-06-26 Avaya Technology Llc Packet prioritization and associated bandwidth and buffer management techniques for audio over ip
US8593959B2 (en) 2002-09-30 2013-11-26 Avaya Inc. VoIP endpoint call admission
US7877501B2 (en) 2002-09-30 2011-01-25 Avaya Inc. Packet prioritization and associated bandwidth and buffer management techniques for audio over IP
US7877500B2 (en) 2002-09-30 2011-01-25 Avaya Inc. Packet prioritization and associated bandwidth and buffer management techniques for audio over IP
US8015309B2 (en) 2002-09-30 2011-09-06 Avaya Inc. Packet prioritization and associated bandwidth and buffer management techniques for audio over IP
US7978827B1 (en) 2004-06-30 2011-07-12 Avaya Inc. Automatic configuration of call handling based on end-user needs and characteristics
US8661411B2 (en) * 2005-12-02 2014-02-25 Nuance Communications, Inc. Method and system for testing sections of large speech applications
US20070129947A1 (en) * 2005-12-02 2007-06-07 International Business Machines Corporation Method and system for testing sections of large speech applications
US8607147B2 (en) * 2005-12-09 2013-12-10 International Business Machines Corporation System and methods for previewing alternative compositions and arrangements when composing a strictly-structured flow diagram
US20070136351A1 (en) * 2005-12-09 2007-06-14 International Business Machines Corporation System and methods for previewing alternative compositions and arrangements when composing a strictly-structured flow diagram
US20090198496A1 (en) * 2008-01-31 2009-08-06 Matthias Denecke Aspect oriented programmable dialogue manager and apparatus operated thereby
US20100036661A1 (en) * 2008-07-15 2010-02-11 Nu Echo Inc. Methods and Systems for Providing Grammar Services
US8218751B2 (en) 2008-09-29 2012-07-10 Avaya Inc. Method and apparatus for identifying and eliminating the source of background noise in multi-party teleconferences
US20150032442A1 (en) * 2013-07-26 2015-01-29 Nuance Communications, Inc. Method and apparatus for selecting among competing models in a tool for building natural language understanding models
US10339216B2 (en) * 2013-07-26 2019-07-02 Nuance Communications, Inc. Method and apparatus for selecting among competing models in a tool for building natural language understanding models
US10943589B2 (en) 2018-06-05 2021-03-09 Voicify, LLC Voice application platform
US10636425B2 (en) 2018-06-05 2020-04-28 Voicify, LLC Voice application platform
US10803865B2 (en) 2018-06-05 2020-10-13 Voicify, LLC Voice application platform
US11437029B2 (en) * 2018-06-05 2022-09-06 Voicify, LLC Voice application platform
US11450321B2 (en) 2018-06-05 2022-09-20 Voicify, LLC Voice application platform
US11615791B2 (en) 2018-06-05 2023-03-28 Voicify, LLC Voice application platform
US11790904B2 (en) 2018-06-05 2023-10-17 Voicify, LLC Voice application platform
CN110853676A (en) * 2019-11-18 2020-02-28 广州国音智能科技有限公司 Audio comparison method, device and equipment

Also Published As

Publication number Publication date
US8355918B2 (en) 2013-01-15
US20120209613A1 (en) 2012-08-16

Similar Documents

Publication Publication Date Title
US8355918B2 (en) Method and arrangement for managing grammar options in a graphical callflow builder
US6311159B1 (en) Speech controlled computer user interface
US7930182B2 (en) Computer-implemented tool for creation of speech application code and associated functional specification
CA2497866C (en) A development system for a dialog system
US7774196B2 (en) System and method for modifying a language model and post-processor information
JP5142720B2 (en) Interactive conversational conversations of cognitively overloaded users of devices
US6871179B1 (en) Method and apparatus for executing voice commands having dictation as a parameter
US8229745B2 (en) Creating a mixed-initiative grammar from directed dialog grammars
US7024348B1 (en) Dialogue flow interpreter development tool
JP2008506156A (en) Multi-slot interaction system and method
US7461344B2 (en) Mixed initiative interface control
US20090292530A1 (en) Method and system for grammar relaxation
US8620668B2 (en) System and method for configuring voice synthesis
JP2007122747A (en) Dialogue flow interpreter
CN102246227A (en) Method and system for generating vocal user interface code from a data meta-model
US20060136195A1 (en) Text grouping for disambiguation in a speech application
Di Fabbrizio et al. AT&t help desk.
US7853451B1 (en) System and method of exploiting human-human data for spoken language understanding systems
US20110161927A1 (en) Generating voice extensible markup language (vxml) documents
Wang et al. Multi-modal and modality specific error handling in the Gemini Project
Cenek Hybrid dialogue management in frame-based dialogue system exploiting VoiceXML
Paternò et al. Deriving Vocal Interfaces from Logical Descriptions in Multi-device Authoring Environments
Gatius et al. A Multilingual Dialogue System for Accessing the Web.
Riccardi et al. Spoken dialog systems: From theory to technology
Gatius et al. Obtaining linguistic resources for dialogue systems from application specifications and domain ontologies

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AGAPI, CIPRIAN;GOMEZ, FELIPE;LEWIS, JAMES R.;AND OTHERS;REEL/FRAME:014762/0204

Effective date: 20031201

AS Assignment

Owner name: NUANCE COMMUNICATIONS, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:022689/0317

Effective date: 20090331

Owner name: NUANCE COMMUNICATIONS, INC.,MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:022689/0317

Effective date: 20090331

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION