Publication number: US 8762469 B2
Publication type: Grant
Application number: US 13/604,556
Publication date: June 24, 2014
Filing date: September 5, 2012
Priority date: October 2, 2008
Also published as: US8296383, US8676904, US8713119, US9412392, US20100088100, US20120232906, US20120330661, US20130006638, US20140244271, US20160336010
Inventor: Aram M. Lindahl
Original assignee: Apple Inc.
External links: USPTO, USPTO Assignment, Espacenet
Electronic devices with voice command and contextual data processing capabilities
US 8762469 B2
Abstract
An electronic device may capture a voice command from a user. The electronic device may store contextual information about the state of the electronic device when the voice command is received. The electronic device may transmit the voice command and the contextual information to computing equipment such as a desktop computer or a remote server. The computing equipment may perform a speech recognition operation on the voice command and may process the contextual information. The computing equipment may respond to the voice command. The computing equipment may also transmit information to the electronic device that allows the electronic device to respond to the voice command.
Images (8)
Claims (39)
What is claimed is:
1. A method for operating an automated assistant, comprising:
at a server computer system comprising a processor and memory storing instructions for execution by the processor:
receiving, from a speech recognition service operated separately from the server computer system, a text string corresponding to a voice command received at a portable electronic device;
receiving contextual information from the portable electronic device;
processing the text string and the contextual information; and
transmitting results associated with processing the text string and the contextual information to the portable electronic device.
2. The method of claim 1, further comprising:
prior to receiving the text string from the speech recognition service:
receiving the voice command from the portable electronic device; and
sending the voice command to the speech recognition service.
3. The method of claim 1, wherein the text string and the contextual information are received by the server computer system substantially simultaneously.
4. The method of claim 1, wherein the contextual information includes information from one or more sensors on the portable electronic device.
5. The method of claim 4, wherein the one or more sensors include a location sensor.
6. The method of claim 1, wherein processing the text string and the contextual information comprises:
sending at least one of the text string and the contextual information to an online service operated separately from the server computer system; and
receiving, from the online service, the results associated with processing the text string and the contextual information.
7. The method of claim 6, wherein the online service is selected from the group consisting of:
a search service;
an email service;
a media service;
a software update service; and
an online business service.
8. The method of claim 1, wherein processing the text string and the contextual information comprises:
identifying a search query in the text string;
identifying a geographical constraint in the text string; and
performing a search based at least in part on the search query and the geographical constraint;
wherein transmitting the results comprises transmitting results of the search to the portable electronic device.
9. The method of claim 1, wherein the contextual information is a geographical location of the portable electronic device.
10. The method of claim 1, wherein the contextual information is information associated with a current or a previous telephone call.
11. The method of claim 10, wherein the information associated with the current or the previous telephone call is at least one of a telephone number or contact information.
12. The method of claim 1, wherein the contextual information is information from a software application running on the portable electronic device.
13. The method of claim 12, wherein the software application is selected from the group consisting of:
a business productivity application;
an email application; and
a calendar application.
14. The method of claim 1, wherein the contextual information is information related to an operation occurring in the background of the portable electronic device.
15. The method of claim 1, wherein the results associated with processing the text string are displayed at the portable electronic device.
16. The method of claim 1, wherein the server computer system is provided by a first entity, and the speech recognition service is provided by a second entity different from the first entity.
17. The method of claim 1, wherein the speech recognition service comprises a software application executed by a second computer system remote from the server computer system.
18. A server computer system configured to communicate with a portable electronic device over a communications path in order to process a voice command received by the portable electronic device, the server computer system comprising:
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving, from a speech recognition service operated separately from the server computer, a text string corresponding to a voice command received at a portable electronic device;
receiving contextual information from the portable electronic device;
processing the text string and the contextual information; and
transmitting results associated with processing the text string and the contextual information to the portable electronic device.
19. A non-transitory computer readable storage medium storing instructions that, when executed by a server computer with one or more processors, cause the processors to perform operations comprising:
receiving, from a speech recognition service operated separately from the server computer, a text string corresponding to a voice command received at a portable electronic device;
receiving contextual information from the portable electronic device;
processing the text string and the contextual information; and
transmitting results associated with processing the text string and the contextual information to the portable electronic device.
20. A method for operating an automated assistant, comprising:
at a server computer system provided by a first entity, the server computer system comprising a processor and memory storing instructions for execution by the processor:
receiving a voice command and contextual information from the portable electronic device;
processing the voice command, using a speech recognition service provided by a second entity different from the first entity, to generate a text string from the voice command;
processing the text string and the contextual information; and
transmitting results associated with processing the text string and the contextual information to the portable electronic device.
21. The method of claim 20, wherein the results associated with processing the text string are displayed at the portable electronic device.
22. The method of claim 20, wherein the speech recognition service is a standalone software component that is executed by the server computer system.
23. The method of claim 20, wherein the text string and the contextual information are received by the server computer system substantially simultaneously.
24. The method of claim 20, wherein the contextual information includes information from one or more sensors on the portable electronic device.
25. The method of claim 24, wherein the one or more sensors include a location sensor.
26. The method of claim 20, wherein processing the text string and the contextual information comprises:
sending at least one of the text string and the contextual information to an online service operated separately from the server computer system; and
receiving, from the online service, the results associated with processing the text string and the contextual information.
27. The method of claim 26, wherein the online service is selected from the group consisting of:
a search service;
an email service;
a media service;
a software update service; and
an online business service.
28. The method of claim 20, wherein processing the text string and the contextual information comprises:
identifying a search query in the text string;
identifying a geographical constraint in the text string; and
performing a search based at least in part on the search query and the geographical constraint;
wherein transmitting the results comprises transmitting results of the search to the portable electronic device.
29. The method of claim 20, wherein the contextual information is a geographical location of the portable electronic device.
30. The method of claim 20, wherein the contextual information is information associated with a current or a previous telephone call.
31. The method of claim 30, wherein the information associated with the current or the previous telephone call is at least one of a telephone number or contact information.
32. The method of claim 20, wherein the contextual information is information from a software application running on the portable electronic device.
33. The method of claim 32, wherein the software application is selected from the group consisting of:
a business productivity application;
an email application; and
a calendar application.
34. The method of claim 20, wherein the contextual information is information related to an operation occurring in the background of the portable electronic device.
35. The method of claim 20, wherein the server computer system is provided by a first entity, and the speech recognition service is provided by a second entity different from the first entity.
36. A server computer system provided by a first entity and configured to communicate with a portable electronic device over a communications path in order to process a voice command received by the portable electronic device, the server computer system comprising:
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a voice command and contextual information from the portable electronic device;
processing the voice command, using a speech recognition service provided by a second entity different from the first entity, to generate a text string from the voice command;
processing the text string and the contextual information; and
transmitting results associated with processing the text string and the contextual information to the portable electronic device.
37. A non-transitory computer readable storage medium storing instructions that, when executed by a server computer provided by a first entity and having one or more processors, cause the processors to perform operations comprising:
receiving a voice command and contextual information from the portable electronic device;
processing the voice command, using a speech recognition service provided by a second entity different from the first entity, to generate a text string from the voice command;
processing the text string and the contextual information; and
transmitting results associated with processing the text string and the contextual information to the portable electronic device.
38. The method of claim 4, wherein the one or more sensors include an orientation sensor.
39. The method of claim 24, wherein the one or more sensors include an orientation sensor.
Description
RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 12/244,713, filed Oct. 2, 2008, which is hereby incorporated by reference in its entirety.

This application is related to U.S. patent application Ser. No. 13/480,422, filed May 24, 2012, which is hereby incorporated by reference in its entirety.

BACKGROUND

This invention relates generally to electronic devices, and more particularly, to electronic devices such as portable electronic devices that can capture voice commands and contextual information.

Electronic devices such as portable electronic devices are becoming increasingly popular. Examples of portable devices include handheld computers, cellular telephones, media players, and hybrid devices that include the functionality of multiple devices of this type. Popular portable electronic devices that are somewhat larger than traditional handheld electronic devices include laptop computers and tablet computers.

Portable electronic devices such as handheld electronic devices may have limited speech recognition capabilities. For example, a cellular telephone may have a microphone that can be used to receive and process cellular telephone voice commands that control the operation of the cellular telephone.

Portable electronic devices generally have limited processing power and are not always actively connected to remote databases and services of interest. Conventional devices are often not contextually aware. These shortcomings can make it difficult to use conventional portable electronic devices for sophisticated voice-based control functions.

It would therefore be desirable to be able to provide improved systems for electronic devices such as portable electronic devices that handle voice-based commands.

SUMMARY

A portable electronic device such as a handheld electronic device is provided. The electronic device may have a microphone that is used to receive voice commands. The electronic device may use the microphone to record a user's voice. The recording of the user's voice may be stored as a digital audio file in storage associated with the electronic device.

When the electronic device receives a voice command, the electronic device may store information about the current state of the electronic device and its operating environment as contextual information (metadata). With one suitable arrangement, stored contextual information may include information about the operational state of the electronic device such as which applications are running on the device and their status. The electronic device may determine which portions of the information on the state of the device are relevant to the voice command and may store only the relevant portions. If desired, the electronic device may determine which contextual information is most relevant by performing a speech recognition operation on the recorded voice command to look for specific keywords.

The electronic device may process voice commands locally or voice command processing may be performed remotely. For example, the electronic device may transmit one or more recorded voice commands and associated contextual information to computing equipment such as a desktop computer. Captured voice commands and contextual information may also be uploaded to server computing equipment over a network. The electronic device may transmit recorded voice commands and the associated contextual information at any suitable time such as when instructed by a user, as each voice command is received, immediately after each voice command is received, whenever the electronic device is synched with appropriate computing equipment, or at other suitable times.

After a recorded voice command and associated contextual information have been transferred to a desktop computer, remote server, or other computing equipment, the computing equipment may process the voice command using a speech recognition operation. The computing equipment may use the results of the speech recognition operation and any relevant contextual information together to respond to the voice command properly. For example, the computing equipment may respond to the voice command by displaying search results or performing other suitable actions. If desired, the computing equipment may convey information back to the electronic device in response to the voice command.

In a typical scenario, a user may make a voice command while directing the electronic device to record the voice command. The user may make the voice command while the electronic device is performing a particular operation with an application. For example, the user may be using the electronic device to play songs with a media application. While listening to a song, the user may press a record button on the electronic device to record the voice command “find more like this.” The voice command may be processed by the electronic device (e.g., to create a code representative of the spoken command) or may be stored in the form of an audio clip by the electronic device. At an appropriate time, such as when the electronic device is connected to a host computer or a remote server through a communications path, the code or the audio clip corresponding to the spoken command may be uploaded for further processing. Contextual information such as information on the song that was playing in the media application when the voice command was made may be uploaded with the voice command.
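
The upload described in this scenario can be pictured as a small structured payload. The following is a minimal Python sketch, with hypothetical field names not taken from the disclosure, of how a device might bundle a recorded audio clip with media-playback context for later transfer:

```python
import json
import time

def build_command_upload(audio_clip_path, now_playing):
    """Bundle a recorded voice command with contextual metadata for upload.

    `now_playing` is whatever track information the media application exposes
    at the moment the record button is pressed.
    """
    payload = {
        "captured_at": time.time(),          # when the command was recorded
        "audio_clip": audio_clip_path,       # e.g. a compressed recording of the user's voice
        "context": {
            "media_playback": now_playing,   # title, artist, album, genre, ...
        },
    }
    return json.dumps(payload)

# Example: the user says "find more like this" while a song is playing.
print(build_command_upload(
    "voice_command_0001.m4a",
    {"title": "Example Song", "artist": "Example Artist", "genre": "Jazz"},
))
```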

A media playback application on a computer such as the iTunes program of Apple Inc. may take an appropriate action in response to an uploaded voice command and associated contextual data. As an example, the media playback application may present a user with recommended songs for purchase. The songs that are recommended may be songs that are similar to the song that was playing on the electronic device when the user captured the audio clip voice command “find more like this.”

The computer to which the voice command audio clip is uploaded may have greater processing power available than that available on a handheld electronic device, so voice processing accuracy may be improved by offloading voice recognition operations to the computer from the handheld electronic device in this way. The computer to which the audio clip is uploaded may also have access to more extensive data than would be available on a handheld electronic device, such as the contents of a user's full home media library. The computer that receives the uploaded command may also have access to online resources such as an online server database. This database may have been difficult or impossible for the user to access from the handheld device when the voice command was captured.

If desired, the contextual information that is captured by the electronic device in association with a captured voice command may include audio information. For example, a user may record a spoken phrase. Part of the spoken phrase may represent a voice command and part of the spoken phrase may include associated contextual information. As an example, a user may be using a mapping application on a handheld electronic device. The device may be presenting the user with a map that indicates the user's current position. The user may press a button or may otherwise instruct the handheld electronic device to record the phrase “I like American restaurants in this neighborhood.” In response, the electronic device may record the spoken phrase. The recorded phrase (in this example) includes a command portion (“I like”) that instructs the mapping application to create a bookmark or other indicator of the user's preference. The recorded phrase also includes the modifier “American restaurants” to provide partial context for the voice command. Additional contextual information (i.e., the phrase “in this neighborhood”) and accompanying position data (e.g., geographic coordinates from global positioning system circuitry in the device) may also be supplied in conjunction with the recorded voice command. When uploaded, the audio clip voice command and the associated audio clip contextual information can be processed by speech recognition software and appropriate actions taken.

Further features of the invention, its nature and various advantages will be more apparent from the accompanying drawings and the following detailed description of the preferred embodiments.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram of an illustrative system environment in which a portable electronic device and computing equipment with speech recognition functionality may be used in accordance with an embodiment of the present invention.

FIG. 2 is a perspective view of an illustrative portable electronic device in accordance with an embodiment of the present invention.

FIG. 3 is a schematic diagram of an illustrative portable electronic device in accordance with an embodiment of the present invention.

FIG. 4 is a schematic diagram of illustrative computing equipment that may be used in processing voice commands from a portable electronic device in accordance with an embodiment of the present invention.

FIG. 5 is a flowchart of illustrative steps involved in using a portable electronic device to receive and process voice commands in accordance with an embodiment of the present invention.

FIG. 6 is a flowchart of illustrative steps involved in using a portable electronic device to receive and upload voice commands and using computing equipment to process the voice commands in accordance with an embodiment of the present invention.

FIG. 7 is a flowchart of illustrative steps involved in using a portable electronic device to receive, process, and upload voice commands and using computing equipment to process the voice commands in accordance with an embodiment of the present invention.

DETAILED DESCRIPTION

The present invention relates to using voice commands to control electronic systems.

Voice commands may be captured with an electronic device and uploaded to computing equipment for further processing. Electronic devices that may be used in this type of environment may be portable electronic devices such as laptop computers or small portable computers of the type that are sometimes referred to as ultraportables. Portable electronic devices may also be somewhat smaller devices. Examples of smaller portable electronic devices include wrist-watch devices, pendant devices, headphone and earpiece devices, and other wearable and miniature devices. With one suitable arrangement, the portable electronic devices may be wireless electronic devices.

The wireless electronic devices may be, for example, handheld wireless devices such as cellular telephones, media players with wireless communications capabilities, handheld computers (also sometimes called personal digital assistants), global positioning system (GPS) devices, and handheld gaming devices. The wireless electronic devices may also be hybrid devices that combine the functionality of multiple conventional devices. Examples of hybrid portable electronic devices include a cellular telephone that includes media player functionality, a gaming device that includes a wireless communications capability, a cellular telephone that includes game and email functions, and a portable device that receives email, supports mobile telephone calls, has music player functionality and supports web browsing. These are merely illustrative examples.

An illustrative environment in which a user may interact with system components using voice commands is shown in FIG. 1. A user in system 10 may have an electronic device such as user device 12. User device 12 may be used to receive voice commands (e.g., to record a user's voice). If device 12 has sufficient processing power, the voice commands may be partly or fully processed by user device 12 (e.g., using a speech recognition engine such as speech recognition engine 13). If desired, the voice commands may be transmitted by user device 12 to computing equipment 14 over communications path 20. Voice commands may also be conveyed to remote services 18 over network 16 (e.g., via path 21 or via path 20, equipment 14, and path 17).

When user device 12 transmits voice commands to computing equipment 14, the user device may include contextual information along with the voice commands. User device 12, computing equipment 14, and services 18 may be connected through a network such as communications network 16. Network 16 may be, for example, a local area network, a wide area network such as the Internet, a wired network, a wireless network, or a network formed from multiple networks of these types. User device 12 may connect to communications network 16 through a wired or wireless communications path such as path 21 or may connect to network 16 via equipment 14. In one embodiment of the invention, user device 12 may transmit voice commands and contextual information to computing equipment 14 through communications network 16. User device 12 may also transmit voice commands and contextual information to computing equipment 14 directly via communications path 20. Path 20 may be, for example, a universal serial bus (USB®) path or any other suitable wired or wireless path.

User device 12 may have any suitable form factor. For example, user device 12 may be provided in the form of a handheld device, desktop device, or even integrated as part of a larger structure such as a table or wall. With one particularly suitable arrangement, which is sometimes described herein as an example, user device 12 may be provided with a handheld form factor. For example, device 12 may be a handheld electronic device. Illustrative handheld electronic devices that may be provided with voice command recording capabilities include cellular telephones, media players, media players with wireless communications capabilities, handheld computers (also sometimes called personal digital assistants), global positioning system (GPS) devices, handheld gaming devices, and other handheld devices. If desired, user device 12 may be a hybrid device that combines the functionality of multiple conventional devices. Examples of hybrid handheld devices include a cellular telephone that includes media player functionality, a gaming device that includes a wireless communications capability, a cellular telephone that includes game and email functions, and a handheld device that receives email, supports mobile telephone calls, supports web browsing, and includes media player functionality. These are merely illustrative examples.

Computing equipment 14 may include any suitable computing equipment such as a personal desktop computer, a laptop computer, a server, etc. With one suitable arrangement, computing equipment 14 is a computer that establishes a wired or wireless connection with user device 12. The computing equipment may be a server (e.g., an internet server), a local area network computer with or without internet access, a user's own personal computer, a peer device (e.g., another user device 12), any other suitable computing equipment, and combinations of multiple pieces of computing equipment. Computing equipment 14 may be used to implement applications such as media playback applications (e.g., iTunes® from Apple Inc.), a web browser, a mapping application, an email application, a calendar application, etc.

Computing equipment 18 (e.g., one or more servers) may be associated with one or more online services.

Communications path 17 and the other paths in system 10 such as path 20 between device 12 and equipment 14, path 21 between device 12 and network 16, and the paths between network 16 and services 18 may be based on any suitable wired or wireless communications technology. For example, the communications paths in system 10 may be based on wired communications technology such as coaxial cable, copper wiring, fiber optic cable, universal serial bus (USB®), IEEE 1394 (FireWire®), paths using serial protocols, paths using parallel protocols, and Ethernet paths. Communications paths in system 10 may, if desired, be based on wireless communications technology such as satellite technology, radio-frequency (RF) technology, wireless universal serial bus technology, and Wi-Fi® (IEEE 802.11) or Bluetooth® wireless link technologies. Wireless communications paths in system 10 may also include cellular telephone bands such as those at 850 MHz, 900 MHz, 1800 MHz, and 1900 MHz (e.g., the main Global System for Mobile Communications or GSM cellular telephone bands), one or more proprietary radio-frequency links, and other local and remote wireless links. Communications paths in system 10 may also be based on wireless signals sent using light (e.g., using infrared communications) or sound (e.g., using acoustic communications).

Communications path 20 may be used for one-way or two-way transmissions between user device 12 and computing equipment 14. For example, user device 12 may transmit voice commands and contextual information to computing equipment 14. After receiving voice commands and contextual information from user device 12, computing equipment 14 may process the voice commands and contextual information using a speech recognition engine such as speech recognition engine 15. Engine 15 may be provided as a standalone software component or may be integrated into a media playback application or other application. If desired, computing equipment 14 may transmit data signals to user device 12. Equipment 14 may, for example, transmit information to device 12 in response to voice commands transmitted by device 12 to system 14. For example, when a voice command transmitted by device 12 includes a request to search for information, system 14 may transmit search results back to device 12.
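
As a rough illustration of this round trip, the sketch below shows how received audio and contextual information might be turned into results that are sent back to the device. The function and parameter names are assumptions; the recognizer (engine 15) and the back-end search are passed in rather than modeled:

```python
def handle_uploaded_command(audio_clip, context, recognize, search):
    """Server-side sketch: recognize the command, apply context, return results.

    `recognize` stands in for the speech recognition engine and `search` for
    whatever service fulfills the request; both are supplied by the caller.
    """
    text = recognize(audio_clip)                      # e.g. "find more music like this"
    if "music like this" in text and "media_playback" in context:
        genre = context["media_playback"].get("genre")
        results = search(kind="music", genre=genre)   # scope the search using context
    else:
        results = search(query=text)                  # fall back to a plain text search
    return {"command_text": text, "results": results} # transmitted back to device 12
```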

Communications network 16 may be based on any suitable communications network or networks such as a radio-frequency network, the Internet, an Ethernet network, a wireless network, a Wi-Fi® network, a Bluetooth® network, a cellular telephone network, or a combination of such networks.

Services 18 may include any suitable online services. Services 18 may include a speech recognition service (e.g., a speech recognition dictionary), a search service (e.g., a service that searches a particular database or that performs Internet searches), an email service, a media service, a software update service, an online business service, etc. Services 18 may communicate with computing equipment 14 and user device 12 through communications network 16.

In typical use, user device 12 may be used to capture voice commands from a user during the operation of user device 12. For example, user device 12 may receive one or more voice commands during a media playback operation (e.g., during playback of a music file or a video file). User device 12 may then store information about its current operational state as contextual information. User device 12 may record information related to the current media playback operation. Other contextual information may be stored when other applications are running on device 12. For example, user device 12 may store information related to a web-browsing application, the location of user device 12, or other appropriate information on the operating environment for device 12. Following the reception of a voice command, user device 12 may, if desired, perform a speech recognition operation on the voice command. User device 12 may utilize contextual information about the state of the user device at the time the voice command was received during the associated speech recognition operation.

In addition to or in lieu of performing a local speech recognition operation on the voice command using engine 13, user device 12 may forward the captured voice command audio clip and, if desired, contextual information to computing equipment 14 for processing. Computing equipment 14 may use engine 15 to implement speech recognition capabilities that allow computing equipment 14 to respond to voice commands that user device 12 might otherwise have difficulties in processing. For example, if user device 12 were to receive a voice command to “find Italian restaurants near me,” user device 12 might not be able to execute the voice command immediately for reasons such as an inability to perform adequate speech processing due to a lack of available processing power, an inability to perform a search requested by a voice command due to a lack of network connectivity, etc. In this type of situation, device 12 may save the voice command (e.g., as a recorded audio file of a user's voice) and relevant contextual information (e.g., the current location of user device 12) for transmission to computing equipment 14 for further processing of the voice command. Device 12 may transmit voice commands and contextual information to computing equipment 14 at any suitable time (e.g., when device 12 is synched with computing equipment 14, as the voice commands are received by device 12, whenever device 12 is connected to a communications network, etc.). These transmissions may take place simultaneously or as two separate but related transmissions.

With one suitable arrangement, device 12 may save all available contextual information. With another arrangement, device 12 may perform either a cursory or a full speech recognition operation on voice commands to determine what contextual information is relevant and then store only the relevant contextual information. As an example, user device 12 may search for the words “music” and “location” in a voice command to determine whether the contextual information stored in association with the voice command should include information related to a current media playback operation or should include the current location of user device 12 (e.g., which may be manually entered by a user or may be determined using a location sensor).
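
A minimal sketch of this kind of keyword-driven filtering follows. The keyword-to-category table and dictionary keys are illustrative assumptions, not part of the disclosure:

```python
# Illustrative mapping from spoken keywords to categories of contextual information.
CONTEXT_KEYWORDS = {
    "music": "media_playback",
    "location": "location",
    "near": "location",
}

def select_relevant_context(command_text, available_context):
    """Keep only the contextual information suggested by keywords in the command."""
    relevant = {}
    for keyword, category in CONTEXT_KEYWORDS.items():
        if keyword in command_text.lower() and category in available_context:
            relevant[category] = available_context[category]
    # If no keyword matches, fall back to storing everything that is available.
    return relevant or dict(available_context)
```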

An illustrative user device 12 in accordance with an embodiment of the present invention is shown in FIG. 2. User device 12 may be any suitable electronic device such as a portable or handheld electronic device.

User device 12 may handle communications over one or more wireless communications bands such as local area network bands and cellular telephone network bands.

Device 12 may have a housing 30. Display 34 may be attached to housing 30 using bezel 32. Display 34 may be a touch screen liquid crystal display (as an example).

Device 12 may have a microphone for receiving voice commands. Openings 42 and 40 may, if desired, form microphone and speaker ports. With one suitable arrangement, device 12 may have speech recognition capabilities (e.g., a speech recognition engine that can be used to receive and process voice commands from a user). Device 12 may also have audio capture and playback capabilities. Device 12 may be able to receive voice commands from a user and other audio through a microphone (e.g., formed as part of one or more ports such as openings 40 and 42). Port 41 may be, for example, a speaker port. If desired, device 12 may activate its audio recording and/or speech recognition capabilities (e.g., device 12 may begin recording audio signals associated with a user's voice with a microphone) in response to user input. For example, device 12 may present an on-screen selectable option to the user to activate speech recognition functionality. Device 12 may also have a user input device such as button 37 that is used to receive user input to activate speech recognition functionality.

User device 12 may have other input-output devices. For example, user device 12 may have other buttons. Input-output components such as port 38 and one or more input-output jacks (e.g., for audio and/or video) may be used to connect device 12 to computing equipment 14 and external accessories. Button 37 may be, for example, a menu button. Port 38 may contain a 30-pin data connector (as an example). Suitable user input interface devices for user device 12 may also include buttons such as alphanumeric keys, power on-off, power-on, power-off, voice memo, and other specialized buttons, a touch pad, pointing stick, or other cursor control device, or any other suitable interface for controlling user device 12. In the example of FIG. 2, display screen 34 is shown as being mounted on the front face of user device 12, but display screen 34 may, if desired, be mounted on the rear face of user device 12, on a side of user device 12, on a flip-up portion of user device 12 that is attached to a main body portion of user device 12 by a hinge (for example), or using any other suitable mounting arrangement. Display 34 may also be omitted.

Although shown schematically as being formed on the top face of user device 12 in the example of FIG. 2, buttons such as button 37 and other user input interface devices may generally be formed on any suitable portion of user device 12. For example, a button such as button 37 or other user interface control may be formed on the side of user device 12. Buttons and other user interface controls can also be located on the top face, rear face, or other portion of user device 12. If desired, user device 12 can be controlled remotely (e.g., using an infrared remote control, a radio-frequency remote control such as a Bluetooth® remote control, etc.). With one suitable arrangement, device 12 may receive voice commands and other audio through a wired or wireless headset or other accessory. Device 12 may also activate its speech recognition functionality in response to user input received through a wired or wireless headset (e.g., in response to a button press received on the headset).

Device 12 may use port 38 to perform a synchronization operation with computing equipment 14. With one suitable arrangement, device 12 may transmit voice commands and contextual information to computing equipment 14. For example, during a media playback operation, device 12 may receive a voice command to “find more music like this.” If desired, device 12 may upload the voice command and relevant contextual information (e.g., the title and artist of the media file that was playing when the voice command was received) to computing equipment 14. Computing equipment 14 may receive and process the voice command and relevant contextual information and may perform a search for music that is similar to the media file that was playing when the voice command was received. Computing equipment 14 may then respond by displaying search results, purchase recommendations, etc.

Device 12 may receive data signals from computing equipment 14 in response to uploading voice commands and contextual information. The data received by device 12 from equipment 14 in response to voice commands and contextual information may be used by device 12 to carry out requests associated with the voice commands. For example, after processing the voice command and contextual information, computing equipment 14 may transmit results associated with the voice command to user device 12 which may then display the results.

A schematic diagram of an embodiment of an illustrative user device 12 is shown in FIG. 3. User device 12 may be a mobile telephone, a mobile telephone with media player capabilities, a media player, a handheld computer, a game player, a global positioning system (GPS) device, a combination of such devices, or any other suitable electronic device such as a portable device.

As shown in FIG. 3, user device 12 may include storage 44. Storage 44 may include one or more different types of storage such as hard disk drive storage, nonvolatile memory (e.g., flash memory or other electrically-programmable-read-only memory), volatile memory (e.g., battery-based static or dynamic random-access-memory), etc. Storage 44 may be used to store voice commands and contextual information about the state of device 12 when voice commands are received.

Processing circuitry 46 may be used to control the operation of user device 12. Processing circuitry 46 may be based on a processor such as a microprocessor and other suitable integrated circuits. With one suitable arrangement, processing circuitry 46 and storage 44 are used to run software on user device 12, such as speech recognition applications, internet browsing applications, voice-over-internet-protocol (VOIP) telephone call applications, email applications, media playback applications, operating system functions (e.g., operating system functions supporting speech recognition capabilities), etc. Processing circuitry 46 and storage 44 may be used in implementing analog-to-digital conversion functions for capturing audio and may be used to implement speech recognition functions.

Input-output devices 48 may be used to allow data to be supplied to user device 12 and to allow data to be provided from user device 12 to external devices. Display screen 34, button 37, microphone port 42, speaker port 40, speaker port 41, and dock connector port 38 are examples of input-output devices 48.

Input-output devices 48 can include user input devices 50 such as buttons, touch screens, joysticks, click wheels, scrolling wheels, touch pads, key pads, keyboards, microphones, cameras, etc. A user can control the operation of user device 12 by supplying commands through user input devices 50. Display and audio devices 52 may include liquid-crystal display (LCD) screens or other screens, light-emitting diodes (LEDs), and other components that present visual information and status data. Display and audio devices 52 may also include audio equipment such as speakers and other devices for creating sound. Display and audio devices 52 may contain audio-video interface equipment such as jacks and other connectors for external headphones, microphones, and monitors.

Wireless communications devices 54 may include communications circuitry such as radio-frequency (RF) transceiver circuitry formed from one or more integrated circuits, power amplifier circuitry, passive RF components, one or more antennas, and other circuitry for handling RF wireless signals. Wireless signals can also be sent using light (e.g., using infrared communications circuitry in circuitry 54).

User device 12 can communicate with external devices such as accessories 56 and computing equipment 58, as shown by paths 60. Paths 60 may include wired and wireless paths (e.g., bidirectional wireless paths). Accessories 56 may include headphones (e.g., a wireless cellular headset or audio headphones) and audio-video equipment (e.g., wireless speakers, a game controller, or other equipment that receives and plays audio and video content).

Computing equipment 58 may be any suitable computer such as computing equipment 14 or computing equipment 18 of FIG. 1. With one suitable arrangement, computing equipment 58 is a computer that has an associated wireless access point (router) or an internal or external wireless card that establishes a wireless connection with user device 12. The computer may be a server (e.g., an internet server), a local area network computer with or without internet access, a user's own personal computer, a peer device (e.g., another user device 12), or any other suitable computing equipment. Computing equipment 58 may be associated with one or more online services. A link such as link 60 may be used to connect device 12 to computing equipment such as computing equipment 14 of FIG. 1.

Wireless communications devices 54 may be used to support local and remote wireless links. Examples of local wireless links include infrared communications, Wi-Fi® (IEEE 802.11), Bluetooth®, and wireless universal serial bus (USB) links.

If desired, wireless communications devices 54 may include circuitry for communicating over remote communications links. Typical remote link communications frequency bands include the cellular telephone bands at 850 MHz, 900 MHz, 1800 MHz, and 1900 MHz, the global positioning system (GPS) band at 1575 MHz, and data service bands such as the 3G data communications band at 2170 MHz (commonly referred to as UMTS or Universal Mobile Telecommunications System). In these illustrative remote communications links, data is transmitted over links 60 that are one or more miles long, whereas in short-range links 60, a wireless signal is typically used to convey data over tens or hundreds of feet.

A schematic diagram of an embodiment of illustrative computing equipment 140 is shown in FIG. 4. Computing equipment 140 may include any suitable computing equipment such as a personal desktop computer, a laptop computer, a server, etc. and may be used to implement computing equipment 14 and/or computing equipment 18 of FIG. 1. Computing equipment 140 may be a server (e.g., an internet server), a local area network computer with or without internet access, a user's own personal computer, a peer device (e.g., another user device 12), other suitable computing equipment, or combinations of multiple pieces of such computing equipment. Computing equipment 140 may be associated with one or more services such as services 18 of FIG. 1.

As shown in FIG. 4, computing equipment 140 may include storage 64 such as hard disk drive storage, nonvolatile memory, volatile memory, etc. Processing circuitry 62 may be used to control the operation of computing equipment 140. Processing circuitry 62 may be based on one or more processors such as microprocessors, microcontrollers, digital signal processors, application specific integrated circuits, and other suitable integrated circuits. Processing circuitry 62 and storage 64 may be used to run software on computing equipment 140 such as speech recognition applications, operating system functions, audio capture applications, other applications with voice recognition and/or audio capture functionality, and other software applications.

Input-output circuitry 66 may be used to gather user input and other input data and to allow data to be provided from computing equipment 140 to external devices. Input-output circuitry 66 can include devices such as mice, keyboards, touch screens, microphones, speakers, displays, televisions, wired communications circuitry, and wireless communications circuitry.

Illustrative steps involved in using an electronic device such as user device 12 to gather voice commands and contextual information are shown in FIG. 5.

At step 68, an electronic device such as user device 12 of FIG. 1 may receive a voice command. Voice commands may be received from a user using an integrated microphone such as a microphone in microphone port 42. If desired, voice commands may be received using an external microphone (e.g., a microphone in an accessory such as a wired or wireless headset).

Voice commands may be recorded (e.g., stored) in storage such as storage 44 of FIG. 3. Voice commands may be stored as a digital audio recording (e.g., an MP3 audio clip). With one suitable arrangement, voice commands may be stored in long-term storage (e.g., nonvolatile memory, hard disk drive storage, etc.) so that the voice commands may be processed at a later time. If desired, voice commands may be stored in short-term storage (e.g., volatile memory).

At step 70, user device 12 may store contextual information related to the current state of the user device. The contextual information may include any information that is available about the current state of the user device. For example, the contextual information may include information related to a current media playback operation (e.g., media attributes such as a track name, a title, an artist name, an album name, year, genre, etc.), a current web-browsing operation (e.g., a current web-address), the geographic location of the user device (e.g., a location determined using a location sensor, a location derived from information associated with communications paths 20 and 21 such as which cellular telephone network or other network the device is connected to, or location data manually entered by a user), the current date and time, a telephone operation (e.g., a telephone number or contact information associated with a current or previous telephone call), information from other software applications running on device 12 such as mapping applications, business productivity applications, email applications, calendar applications, games, etc. The contextual information may include contextual information related to operations occurring in the background of the operation of device 12. For example, contextual information may include media playback information in addition to web browsing information when user device 12 is being used to browse the Internet while listening to music in the background.
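
The stored metadata can be thought of as a snapshot keyed by category. The following sketch gathers only the pieces of state that happen to be available; the parameter and field names are assumptions made for illustration:

```python
import datetime

def snapshot_device_state(media_player=None, browser=None, location=None, call=None):
    """Collect a snapshot of device state to store alongside a voice command.

    Any argument may be None when the corresponding application or sensor is
    not active; only available information is recorded.
    """
    snapshot = {"timestamp": datetime.datetime.now().isoformat()}
    if media_player is not None:
        snapshot["media_playback"] = media_player   # track name, title, artist, album, genre
    if browser is not None:
        snapshot["web_browsing"] = browser          # current web address
    if location is not None:
        snapshot["location"] = location             # from a location sensor or manual entry
    if call is not None:
        snapshot["telephone"] = call                # number/contact for a current or previous call
    return snapshot
```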

With one suitable arrangement, user device 12 may store voice commands as audio clips without performing local voice recognition operations. If desired, user device 12 may perform a speech recognition operation on a voice command. The results of this operation may be used to convert the command into a code or may be used to determine which contextual information is most relevant. Device 12 may then store this most relevant contextual information. For example, user device 12 may perform a preliminary speech recognition operation to search for specific keywords such as “music,” “location,” “near,” and other suitable keywords to determine which contextual information would be most relevant. With this type of arrangement, keywords such as “location” and “near” may indicate that location information is relevant while keywords such as “music” may indicate that information associated with a current media playback operation is most likely to be relevant.

A voice command that has been recorded in step 68 may be processed at step 72. User device 12 may process the voice command using a speech recognition engine. When user device 12 processes the voice command, user device 12 may also process contextual information stored in step 70. With one suitable arrangement, user device 12 may process each voice command with a speech recognition application that runs on processing circuitry such as circuitry 46. If the speech recognition application is able to successfully recognize the speech in the voice command, user device 12 may attempt to perform the action or actions requested by the voice command using any relevant contextual information. For example, the voice command “find more music like this” may be interpreted by user device 12 to mean that the user device should perform a search for music that has the same genre as music that was playing when the voice command was received. User device 12 may therefore perform a search for music using the genre of the currently playing music as a search criterion.

With one suitable arrangement, voice commands may be associated with a list of available media files on user device 12 so that the list of media files serves as contextual information. Image captures and captured audio and/or video clips can also serve as contextual information. For example, user device 12 may have an integrated camera that can be used to take pictures. In this example, user device 12 may allow a user to supply a voice command and to associate the voice command with one or more pictures so that the pictures serve as contextual information. In one example of this type of arrangement, if user device 12 receives the voice command “identify this car” and receives information associating the voice command with a picture containing a car, user device 12 may transmit the picture to a service capable of identifying cars from pictures.
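
A voice command with an attached picture might be bundled much like the audio-plus-metadata payload sketched earlier. In this sketch the file names and the hand-off to an identification service are purely illustrative:

```python
def build_image_command(audio_clip_path, image_paths):
    """Associate captured pictures with a voice command so they serve as context."""
    return {
        "audio_clip": audio_clip_path,          # e.g. a recording of "identify this car"
        "context": {"images": list(image_paths)},
    }

# The resulting payload could then be forwarded to a car-identification service.
payload = build_image_command("identify_this_car.m4a", ["photo_0042.jpg"])
```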

Illustrative steps involved in using a portable electronic device such as user device 12 to receive and upload voice commands and in using computing equipment such as computing equipment 14 to process the uploaded voice commands are shown in FIG. 6.

At step 74, user device 12 may record a voice command. The voice command may be recorded as an audio clip when a user presses and releases a record button or supplies other user input directing device 12 to capture the voice command. The voice command may be digitized by device 12 and stored in storage associated with user device 12 such as storage 44.

At step 76, user device 12 may store contextual information in storage. If desired, user device 12 may store only the contextual information that is relevant to the captured voice command. As indicated by line 77, the operations of steps 74 and 76 may be repeated (e.g., user device 12 may record numerous voice commands each of which may be associated with corresponding contextual information).

If desired, user device 12 may present the user with an opportunity to record an audio clip that includes both a voice command and contextual information. An example of a possible audio clip that includes both a voice command and contextual information and that could be received by user device 12 is “create new event for Sunday, July 18th: James's Birthday.” In this example, the voice command corresponds to the user's desire for user device 12 to create a new calendar event and the relevant contextual information is included in the audio clip (e.g., the date of the new event “Sunday, July 18th” and the title of the new event “James's Birthday”).
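
Once such a clip has been transcribed, splitting it into the command portion and the contextual portion can be as simple as pattern matching on the transcript. The sketch below handles only the exact phrasing used in the example above; a real grammar would be far more forgiving:

```python
import re

def parse_calendar_command(transcript):
    """Split a combined command/context transcript into calendar event fields."""
    match = re.match(r"create new event for (.+?):\s*(.+)", transcript, re.IGNORECASE)
    if match is None:
        return None
    date_text, title = match.groups()
    return {"action": "create_event", "date": date_text.strip(), "title": title.strip()}

print(parse_calendar_command("create new event for Sunday, July 18th: James's Birthday"))
# -> {'action': 'create_event', 'date': 'Sunday, July 18th', 'title': "James's Birthday"}
```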

At step 78, user device 12 may upload recorded voice commands and stored contextual information to computing equipment such as equipment 14 or equipment 18. User device 12 may upload recorded voice commands and stored contextual information to computing equipment 14 or equipment 18 using any suitable communications path. For example, user device 12 may transmit voice commands and contextual information to equipment 14 directly over communications path 20, indirectly through communications network 16 over paths 17 and 21, or may upload them to equipment 18 over network 16.

The operations of step 78 may be performed at any suitable time. For example, user device 12 may upload stored voice commands and contextual information whenever user device 12 is coupled to the computing equipment directly (e.g., through a communications path such as path 20 which may be a Universal Serial Bus® communication path), whenever user device 12 is coupled to computing equipment indirectly (e.g., through communication network 16 and paths 17 and 21), whenever voice commands are recorded at step 74 and a communications link to the computing equipment is available, on demand (e.g., when user device 12 receives a command from a user to process voice commands by uploading them to the computing equipment), at regular intervals (e.g., every ten minutes, every half hour, every hour, etc.), and at combinations of these and other suitable times.
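
The timing policy described above amounts to a small predicate over the device's state. A sketch follows; the parameter names and the half-hour default are assumptions chosen to mirror the examples in the text:

```python
def should_upload(now, last_upload, synced, on_demand, link_available,
                  pending_commands, interval_seconds=1800):
    """Decide whether queued voice commands should be uploaded now.

    Mirrors the triggers listed above: an explicit user request, a sync with
    the computing equipment, or a periodic interval while a link is available.
    """
    if not pending_commands or not link_available:
        return False
    if on_demand or synced:
        return True
    return (now - last_upload) >= interval_seconds   # e.g. every half hour
```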

At step 80, computing equipment such as computing equipment 14 or 18 may process voice commands and contextual information from user device 12. Computing equipment 14 or 18 may process voice commands using speech recognition software (e.g., speech recognition engines) running on processing circuitry 62 of FIG. 4, as an example. Computing equipment 14 or 18 may utilize contextual information in processing the associated voice command. For example, when a voice command requests that more music be found that is similar to a given media file, computing equipment 14 or 18 may perform a search of music based on information about the given media file. In another example, the voice command “find nearby retail establishments” may be interpreted by user device 12, computing equipment 14, or equipment 18 to mean that a search should be performed for retail establishments that are within a given distance of user device 12. The given distance may be any suitable distance such as a pre-specified distance (e.g., walking distance, one-half mile, one mile, two miles, etc.) and a distance specified as part of the voice command. The voice command may also specify which types of retail establishments the search should include. For example, the voice command “find Italian restaurants within three blocks” specifies a type of retail establishment (restaurants), a particular style of restaurant (Italian), and the given distance over which the search should be performed (within three blocks of the geographical location of the user device that received the voice command).
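
To make the last example concrete, the following sketch pulls the establishment type and distance constraint out of a recognized command string. The vocabulary and units are illustrative and cover only phrasings like the one shown above:

```python
import re

WORD_NUMBERS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def parse_local_search(command_text):
    """Extract a search query and a distance constraint from a recognized command."""
    match = re.match(r"find (.+?) within (\w+) (blocks?|miles?)", command_text, re.IGNORECASE)
    if match is None:
        return {"query": command_text, "distance": None}
    query, amount, unit = match.groups()
    return {"query": query,
            "distance": {"amount": WORD_NUMBERS.get(amount.lower(), amount),
                         "unit": unit.rstrip("s")}}

print(parse_local_search("find Italian restaurants within three blocks"))
# -> {'query': 'Italian restaurants', 'distance': {'amount': 3, 'unit': 'block'}}
```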

If desired, computing equipment 14 or 18 may fulfill a voice command directly. For example, when user device 12 is connected to computing equipment 14 or 18 (e.g., when device 12 is synched with the equipment), the computing equipment may display results related to the voice command (e.g., a list of similar music) and may perform any appropriate action (e.g., transmit a picture to a car-identification service and then display any results returned by the car-identification service).

With another suitable arrangement, computing equipment 14 or 18 may transmit information related to processing and responding to the voice command to user device 12. In response, user device 12 may then respond to the voice command. This type of arrangement may be particularly beneficial when user device 12 and the computing equipment are not physically located near each other (e.g., when user device 12 is only connected to computing equipment 14 or 18 through long-range communications paths such as through a communications network such as the Internet).

Illustrative steps involved in using a portable electronic device such as user device 12 to receive, process, and upload voice commands and in using computing equipment such as computing equipment 14 or 18 to process the voice commands are shown in FIG. 7.

At step 82, user device 12 may record a voice command. The voice command may be stored in storage such as storage 44.

Following step 82, user device 12 may process the recorded voice command at step 84. User device 12 may process the voice command at any suitable time (e.g., as the voice command is received or at any later time). If desired, user device 12 may perform a preliminary speech recognition operation to determine which portions of the available contextual information are relevant to the voice command. Device 12 may search for specific keywords in the voice command to determine which portions of the available contextual information are relevant, as an example. With another suitable arrangement, device 12 may perform a more thorough speech recognition operation. In this type of arrangement, device 12 may determine that it is able to respond to the voice command immediately (e.g., by executing an operation or by retrieving appropriate information from an appropriate service 18).
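
One way to picture the keyword scan is sketched below; the keyword-to-context mapping is purely an assumption for illustration, not a mapping described in the patent.

```python
# Hypothetical sketch: a small, closed set of keywords maps spoken words to
# the pieces of contextual information worth storing alongside the command.
KEYWORD_CONTEXT = {
    "music": ["now_playing", "media_library_state"],
    "nearby": ["location", "orientation"],
    "event": ["current_date", "calendar_state"],
}

def relevant_context_keys(recognized_words):
    keys = []
    for word in recognized_words:
        for keyword, context_keys in KEYWORD_CONTEXT.items():
            if keyword in word.lower():
                keys.extend(k for k in context_keys if k not in keys)
    return keys

# relevant_context_keys("find more music like this".split())
# -> ['now_playing', 'media_library_state']
```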

If desired, user device 12 may be trained to one or more users' voices. For example, user device 12 may instruct each user to speak a specific set of sample words in order to train its speech recognition operations to be as accurate as possible for each particular user.

When device 12 is not able to fulfill the voice command at the time the voice command is received, device 12 may store contextual information related to the state of user device 12 at the time the voice command was received in storage (step 86).

As illustrated by line 87, the operations of steps 82, 84, and 86 may optionally be repeated as user device 12 receives numerous voice commands that it is not able to fulfill (e.g., respond to) without further processing by computing equipment 14 or 18.

At step 88, user device 12 may upload one or more voice commands and contextual information associated with each of the voice commands to computing equipment 14 or 18. User device 12 may upload the voice commands to computing equipment 14 or 18 at any suitable time.

At step 90, computing equipment 14 or 18 may process voice commands received from user device 12. Computing equipment 14 or 18 may utilize the contextual information associated with each voice command in processing each of the voice commands (e.g., in using a speech recognition engine to process each voice command and associated contextual information).

If desired, computing equipment 14 or 18 may be trained to one or more users' voices. For example, computing equipment 14 or 18 may instruct each user to speak a specific set of sample words in order to train its speech recognition operations to be as accurate as possible for each particular user. With one suitable arrangement, computing equipment 14 or 18 and user device 12 may share information related to training speech recognition operations to particular users.

The voice commands processed and stored by user device 12 and processed by computing equipment 14 or 18 may include any suitable voice commands. With one suitable arrangement, user device 12 and computing equipment 14 or 18 may each have a respective dictionary of voice commands that can be recognized using the speech recognition capabilities of user device 12 and computing equipment 14 or 18. Because computing equipment 14 or 18 may include any type of computing equipment, including desktop computers and computer servers, which generally have relatively large amounts of processing and storage capability compared to portable devices such as user device 12, computing equipment 14 or 18 will generally have a larger dictionary of voice commands that the equipment can recognize using speech recognition operations. By uploading voice commands and contextual information from user device 12 to computing equipment 14 or 18, the probability that a given voice command can be successfully processed and fulfilled will generally increase. With one suitable arrangement, user device 12 may have a closed dictionary (e.g., a dictionary containing only specific keywords and phrases) whereas computing equipment 14 or 18 may have an open dictionary (e.g., a dictionary that can include essentially any word or phrase and which may be provided by a service such as one of services 18).
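
The closed-versus-open dictionary split could be pictured as follows. This is a sketch under assumptions: the phrase list and the routing function are invented for illustration.

```python
# Hypothetical sketch: the device recognizes only a small, closed dictionary
# of phrases; anything else is queued for the computing equipment, which has
# the larger (effectively open) dictionary.
DEVICE_DICTIONARY = {"play", "pause", "next track", "find more music like this"}

def device_can_recognize(recognized_phrase):
    return recognized_phrase.lower() in DEVICE_DICTIONARY

def route_command(recognized_phrase, audio_clip, context, upload_queue):
    if device_can_recognize(recognized_phrase):
        return "handle_locally"
    # Defer to equipment 14 or 18 together with the stored contextual info.
    upload_queue.append({"audio": audio_clip, "context": context})
    return "deferred_to_equipment"
```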

When user device 12 is not connected to communications networks such as network 16 or to computing equipment 14 or 18 over path 20, user device 12 may not always have the capabilities required to satisfy (e.g., fulfill) a particular voice command at the time the voice command is received. For example, if user device 12 is not connected to a communications network and receives a voice command to “find more music like this,” user device 12 may be able to determine, using a speech recognition dictionary associated with device 12, that a user wants device 12 to perform a search for music that matches the profile of music currently playing through device 12. However, because user device 12 is not currently connected to a communications network, device 12 may not be able to perform the search immediately. In this situation, device 12 may store the voice command and perform the requested action later at an appropriate time (e.g., when device 12 is connected to computing equipment 14 or 18 or when device 12 connects to a service at equipment 18 through a communications network such as network 16).
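
A minimal sketch of this deferred handling is shown below, assuming a hypothetical `perform_search` callable that becomes usable once connectivity returns:

```python
# Hypothetical sketch: defer a recognized-but-unfulfillable command (e.g.,
# "find more music like this" while offline) and replay it once a connection
# to computing equipment or a network service becomes available.
class DeferredCommandQueue:
    def __init__(self):
        self._pending = []

    def defer(self, command, context):
        self._pending.append((command, context))

    def flush(self, connected, perform_search):
        if not connected:
            return
        while self._pending:
            command, context = self._pending.pop(0)
            perform_search(command, context)
```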

Because user device 12 can upload voice commands and contextual information to computing equipment 14 or 18, user device 12 may be able to support an increased number of voice commands and may be able to respond in a more complete manner than if user device 12 performed speech recognition operations without the assistance of equipment 14 or 18. For example, user device 12 can record voice commands that it is unable to comprehend using its own speech recognition capabilities and can transmit the voice commands and relevant contextual information to computing equipment 14 or 18, which may be more capable and therefore more able to comprehend and respond to the voice commands.

As the foregoing demonstrates, users can capture voice commands on device 12 for immediate processing in a device that includes a speech recognition (voice processing) engine. In the event that no speech recognition processing functions are implemented on device 12 or when it is desired to offload voice recognition functions to remote equipment, device 12 may be used to capture an audio clip that includes a voice command.

Any suitable user interface may be used to initiate voice command recording operations. For example, a dedicated button such as a record button may be pressed to initiate voice command capture operations and may be released to terminate voice command capture operations. The start and end of the voice command may also be marked using a touch screen and on-screen options. The end of the voice command clip may be determined by the expiration of a timer (e.g., all clips may be three seconds long), or device 12 may terminate recording when the ambient sound level at the microphone drops below a given threshold.
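
A small sketch of the timer-or-silence termination check follows; the threshold, frame length, and clip length are assumed values for illustration, not parameters given in the patent.

```python
# Hypothetical sketch: stop recording either when a fixed timer expires or
# when the ambient level at the microphone stays below a threshold.
SILENCE_THRESHOLD = 0.02   # normalized amplitude, assumed
MAX_CLIP_SECONDS = 3.0     # e.g., all clips three seconds long

def should_stop_recording(elapsed_seconds, recent_frame_levels):
    if elapsed_seconds >= MAX_CLIP_SECONDS:
        return True
    # Stop once the last few audio frames are all below the silence threshold.
    recent = recent_frame_levels[-5:]
    return len(recent) == 5 and all(level < SILENCE_THRESHOLD for level in recent)
```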

Recorded audio clips may be digitized in device 12 using any suitable circuitry. As an example, device 12 may have a microphone amplifier and associated analog-to-digital converter circuitry that digitizes audio clips. Audio clips may be compressed (e.g., using file formats such as the MP3 format).

Contextual information may be captured concurrently with the voice command. For example, information may be stored on the current operating state of device 12 when a user initiates a voice command capture operation. Stored contextual information may include information such as information on which applications are running on device 12 and their states, the geographic location of device 12 (e.g., geographic coordinates), the orientation of device 12 (e.g., from an orientation sensor in device 12), information from other sensors in device 12, etc.
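
The snapshot could take a form like the sketch below; the helper functions standing in for the device's sensor interfaces are assumptions made only to keep the example self-contained.

```python
import time

# Hypothetical sketch of the contextual snapshot stored when recording begins.
def capture_context_snapshot(running_apps, read_location, read_orientation):
    return {
        "timestamp": time.time(),
        "running_applications": list(running_apps),
        "location": read_location(),        # e.g., (latitude, longitude)
        "orientation": read_orientation(),  # e.g., "portrait" or "landscape"
    }
```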

Because voice command processing can be deferred until device 12 is connected to appropriate computing equipment, it is not necessary for device 12 to immediately communicate with the computing equipment. A user may, for example, capture voice commands while device 12 is offline (e.g., when a user is in an airplane without network connectivity). Device 12 may also be used to capture voice commands that are to be executed by the user's home computer, even when the user's home computer is not powered on.

Later, when device 12 is connected to the user's home computer and/or an online service, the captured voice commands can be uploaded and processed by this external computing equipment. The contextual information that was captured when the voice command was captured may help the external computing equipment (e.g., the user's computer or a remote server) properly process the voice command. The computing equipment to which the voice command is uploaded may be able to access data that was unavailable to device 12 when the command was captured, such as information on the contents of a user's media library or other database, information that is available from an online repository, etc. The computing equipment to which the voice command and contextual information were uploaded may also be able to take actions that are not possible when executing commands locally on device 12. These actions may include actions such as making adjustments to a database on the computing equipment, making online purchases, controlling equipment that is associated with or attached to the computing equipment, etc.
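
As a rough illustration of that server-side step, the sketch below matches an uploaded command and its contextual snapshot against data the device could not reach, here a user's media library. The function and field names are assumptions for illustration only.

```python
# Hypothetical sketch: process an uploaded command on the external computing
# equipment using the contextual snapshot plus data unavailable to the device.
def process_uploaded_command(command_text, context, media_library):
    if "more music like this" in command_text.lower():
        seed = context.get("now_playing", {})
        return [track for track in media_library
                if track.get("genre") == seed.get("genre")
                and track.get("title") != seed.get("title")]
    return []
```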

The foregoing is merely illustrative of the principles of this invention and various modifications can be made by those skilled in the art without departing from the scope and spirit of the invention.

US73765562. März 200420. Mai 2008Phoenix Solutions, Inc.Method for processing speech signal features for streaming transport
US737664524. Jan. 200520. Mai 2008The Intellection Group, Inc.Multimodal natural language query system and architecture for processing voice and proximity-based queries
US73798745. Dez. 200627. Mai 2008Microsoft CorporationMiddleware layer between speech related applications and engines
US738644911. Dez. 200310. Juni 2008Voice Enabling Systems Technology Inc.Knowledge-based flexible natural speech dialogue system
US738922423. Febr. 200017. Juni 2008Canon Kabushiki KaishaNatural language search method and apparatus, including linguistically-matching context data
US739218525. Juni 200324. Juni 2008Phoenix Solutions, Inc.Speech based learning/training system using semantic decoding
US73982093. Juni 20038. Juli 2008Voicebox Technologies, Inc.Systems and methods for responding to natural language speech utterance
US740393820. Sept. 200222. Juli 2008Iac Search & Media, Inc.Natural language query processing
US740933730. März 20045. Aug. 2008Microsoft CorporationNatural language processing interface
US74151004. Mai 200419. Aug. 2008Avaya Technology Corp.Personal virtual assistant
US741839210. Sept. 200426. Aug. 2008Sensory, Inc.System and method for controlling the operation of a device by voice commands
US742646723. Juli 200116. Sept. 2008Sony CorporationSystem and method for supporting interactive user interface operations and storage medium
US742702416. Dez. 200423. Sept. 2008Gazdzinski Mark JChattel management apparatus and methods
US744763519. Okt. 20004. Nov. 2008Sony CorporationNatural language interface control system
US745435126. Jan. 200518. Nov. 2008Harman Becker Automotive Systems GmbhSpeech dialogue system for dialogue interruption and continuation control
US746708710. Okt. 200316. Dez. 2008Gillick Laurence STraining and using pronunciation guessers in speech recognition
US74750102. Sept. 20046. Jan. 2009Lingospot, Inc.Adaptive and scalable method for resolving natural language ambiguities
US748389422. Mai 200727. Jan. 2009Platformation Technologies, IncMethods and apparatus for entity search
US748708920. März 20073. Febr. 2009Sensory, IncorporatedBiometric client-server security system and method
US749649824. März 200324. Febr. 2009Microsoft CorporationFront-end architecture for a multi-lingual text-to-speech system
US749651213. Apr. 200424. Febr. 2009Microsoft CorporationRefining of segmental boundaries in speech waveforms using contextual-dependent models
US750273811. Mai 200710. März 2009Voicebox Technologies, Inc.Systems and methods for responding to natural language speech utterance
US750837328. Jan. 200524. März 2009Microsoft CorporationForm factor and input method for language input
US75229279. Mai 200721. Apr. 2009Openwave Systems Inc.Interface for wireless location information
US752310822. Mai 200721. Apr. 2009Platformation, Inc.Methods and apparatus for searching with awareness of geography and languages
US752646615. Aug. 200628. Apr. 2009Qps Tech Limited Liability CompanyMethod and system for analysis of intended meaning of natural language
US75296714. März 20035. Mai 2009Microsoft CorporationBlock synchronous decoding
US75296766. Dez. 20045. Mai 2009Kabushikikaisha KenwoodAudio device control device, audio device control method, and program
US75396566. März 200126. Mai 2009Consona Crm Inc.System and method for providing an intelligent multi-step dialog with a user
US754638228. Mai 20029. Juni 2009International Business Machines CorporationMethods and systems for authoring of mixed-initiative multi-modal interactions and related browsing mechanisms
US754889530. Juni 200616. Juni 2009Microsoft CorporationCommunication-prompted user assistance
US755205510. Jan. 200423. Juni 2009Microsoft CorporationDialog component re-use in recognition systems
US75554312. März 200430. Juni 2009Phoenix Solutions, Inc.Method for processing speech using dynamic grammars
US75587303. Juli 20077. Juli 2009Advanced Voice Recognition Systems, Inc.Speech recognition and transcription among users having heterogeneous protocols
US75711068. Apr. 20084. Aug. 2009Platformation, Inc.Methods and apparatus for freshness and completeness of information
US759991829. Dez. 20056. Okt. 2009Microsoft CorporationDynamic search with implicit user intention mining
US762054910. Aug. 200517. Nov. 2009Voicebox Technologies, Inc.System and method of supporting adaptive misrecognition in conversational speech
US76240073. Dez. 200424. Nov. 2009Phoenix Solutions, Inc.System and method for natural language processing of sentence based queries
US763440931. Aug. 200615. Dez. 2009Voicebox Technologies, Inc.Dynamic speech sharpening
US76366579. Dez. 200422. Dez. 2009Microsoft CorporationMethod and apparatus for automatic grammar generation from data entries
US76401605. Aug. 200529. Dez. 2009Voicebox Technologies, Inc.Systems and methods for responding to natural language speech utterance
US764722520. Nov. 200612. Jan. 2010Phoenix Solutions, Inc.Adjustable resource based speech recognition system
US76574243. Dez. 20042. Febr. 2010Phoenix Solutions, Inc.System and method for processing sentence based queries
US766463828. Juli 200816. Febr. 2010Nuance Communications, Inc.Tracking time using portable recorders and speech recognition
US767284119. Mai 20082. März 2010Phoenix Solutions, Inc.Method for processing speech data for a distributed recognition system
US76760263. Mai 20059. März 2010Baxtech Asia Pte LtdDesktop telephony system
US768498510. Dez. 200323. März 2010Richard DominachTechniques for disambiguating speech input using multimodal interfaces
US769371510. März 20046. Apr. 2010Microsoft CorporationGenerating large units of graphonemes with mutual information criterion for letter to sound conversion
US769372015. Juli 20036. Apr. 2010Voicebox Technologies, Inc.Mobile systems and methods for responding to natural language speech utterance
US76981319. Apr. 200713. Apr. 2010Phoenix Solutions, Inc.Speech recognition system for client devices having differing computing capabilities
US770250024. Nov. 200420. Apr. 2010Blaedow Karen RMethod and apparatus for determining the meaning of natural language
US77025083. Dez. 200420. Apr. 2010Phoenix Solutions, Inc.System and method for natural language processing of query answers
US770702713. Apr. 200627. Apr. 2010Nuance Communications, Inc.Identification and rejection of meaningless input during natural language classification
US770703220. Okt. 200527. Apr. 2010National Cheng Kung UniversityMethod and system for matching speech data
US770726722. Dez. 200427. Apr. 2010Microsoft CorporationIntent based processing
US771156517. Aug. 20064. Mai 2010Gazdzinski Robert F“Smart” elevator system and method
US771167227. Dez. 20024. Mai 2010Lawrence AuSemantic network methods to disambiguate natural language meaning
US771605627. Sept. 200411. Mai 2010Robert Bosch CorporationMethod and system for interactive conversational dialogue for cognitively overloaded device users
US772067429. Juni 200418. Mai 2010Sap AgSystems and methods for processing natural language queries
US772068310. Juni 200418. Mai 2010Sensory, Inc.Method and apparatus of specifying and performing speech recognition operations
US77213012. Juni 200518. Mai 2010Microsoft CorporationProcessing files from a mobile device using voice commands
US772530729. Aug. 200325. Mai 2010Phoenix Solutions, Inc.Query engine for processing voice based queries including semantic decoding
US77253181. Aug. 200525. Mai 2010Nice Systems Inc.System and method for improving the accuracy of audio searching
US77253209. Apr. 200725. Mai 2010Phoenix Solutions, Inc.Internet based speech recognition system with dynamic grammars
US772532123. Juni 200825. Mai 2010Phoenix Solutions, Inc.Speech based query system using semantic decoding
US77299043. Dez. 20041. Juni 2010Phoenix Solutions, Inc.Partial speech processing device and method for use in distributed systems
US772991623. Okt. 20061. Juni 2010International Business Machines CorporationConversational computing via conversational virtual machine
US773446128. Aug. 20068. Juni 2010Samsung Electronics Co., LtdApparatus for providing voice dialogue service and method of operating the same
US774761630. Juni 200629. Juni 2010Fujitsu LimitedFile search method and system therefor
US775215217. März 20066. Juli 2010Microsoft CorporationUsing predictive user models for language modeling on a personal device with user behavior models based on statistical modeling
US775686825. Febr. 200513. Juli 2010Nhn CorporationMethod for providing search results list based on importance information and system thereof
US777420424. Juli 200810. Aug. 2010Sensory, Inc.System and method for controlling the operation of a device by voice commands
US778348624. Nov. 200324. Aug. 2010Roy Jonathan RosserResponse generator for mimicking human-computer natural language conversation
US780172913. März 200721. Sept. 2010Sensory, Inc.Using multiple attributes to create a voice search playlist
US78095707. Juli 20085. Okt. 2010Voicebox Technologies, Inc.Systems and methods for responding to natural language speech utterance
US780961021. Mai 20075. Okt. 2010Platformation, Inc.Methods and apparatus for freshness and completeness of information
US78181766. Febr. 200719. Okt. 2010Voicebox Technologies, Inc.System and method for selecting and presenting advertisements based on natural language processing of voice-based input
US782260827. Febr. 200726. Okt. 2010Nuance Communications, Inc.Disambiguating a speech recognition grammar in a multimodal application
US78269451. Juli 20052. Nov. 2010You ZhangAutomobile speech-recognition interface
US783142623. Juni 20069. Nov. 2010Phoenix Solutions, Inc.Network based interactive speech recognition system
US784040021. Nov. 200623. Nov. 2010Intelligate, Ltd.Dynamic natural language understanding
US784044730. Okt. 200823. Nov. 2010Leonard KleinrockPricing and auctioning of bundled items among multiple sellers and buyers
US785357426. Aug. 200414. Dez. 2010International Business Machines CorporationMethod of generating a context-inferenced search query and of sorting a result of the query
US785366427. Sept. 200014. Dez. 2010Landmark Digital Services LlcMethod and system for purchasing pre-recorded music
US787351931. Okt. 200718. Jan. 2011Phoenix Solutions, Inc.Natural language speech lattice containing semantic variants
US787365414. März 200818. Jan. 2011The Intellection Group, Inc.Multimodal natural language query system for processing and analyzing voice and proximity-based queries
US78819361. Juni 20051. Febr. 2011Tegic Communications, Inc.Multimodal disambiguation of speech recognition
US789065213. Jan. 200015. Febr. 2011Travelocity.Com LpInformation aggregation and synthesization system
US791270231. Okt. 200722. März 2011Phoenix Solutions, Inc.Statistical language model trained with semantic variants
US791736712. Nov. 200929. März 2011Voicebox Technologies, Inc.Systems and methods for responding to natural language speech utterance
US791749718. Apr. 200829. März 2011Iac Search & Media, Inc.Natural language query processing
US792067823. Sept. 20085. Apr. 2011Avaya Inc.Personal virtual assistant
US792552525. März 200512. Apr. 2011Microsoft CorporationSmart reminders
US79301684. Okt. 200519. Apr. 2011Robert Bosch GmbhNatural language processing of disfluent sentences
US794952929. Aug. 200524. Mai 2011Voicebox Technologies, Inc.Mobile systems and methods of supporting natural language human-machine interactions
US79495345. Juli 200924. Mai 2011Advanced Voice Recognition Systems, Inc.Speech recognition and transcription among users having heterogeneous protocols
US79748441. März 20075. Juli 2011Kabushiki Kaisha ToshibaApparatus, method and computer program product for recognizing speech
US797497212. März 20095. Juli 2011Platformation, Inc.Methods and apparatus for searching with awareness of geography and languages
US798391530. Apr. 200719. Juli 2011Sonic Foundry, Inc.Audio content search engine
US798391729. Okt. 200919. Juli 2011Voicebox Technologies, Inc.Dynamic speech sharpening
US79839972. Nov. 200719. Juli 2011Florida Institute For Human And Machine Cognition, Inc.Interactive complex task teaching system that allows for natural language input, recognizes a user's intent, and automatically performs tasks in document object model (DOM) nodes
US798643119. Sept. 200626. Juli 2011Ricoh Company, LimitedInformation processing apparatus, information processing method, and computer program product
US798715125. Febr. 200526. Juli 2011General Dynamics Advanced Info Systems, Inc.Apparatus and method for problem solving using intelligent agents
US799622822. Dez. 20059. Aug. 2011Microsoft CorporationVoice initiated network operations
US800045321. März 200816. Aug. 2011Avaya Inc.Personal virtual assistant
US800567931. Okt. 200723. Aug. 2011Promptu Systems CorporationGlobal speech user interface
US801500630. Mai 20086. Sept. 2011Voicebox Technologies, Inc.Systems and methods for processing natural language speech utterances with context-specific domain agents
US80241959. Okt. 200720. Sept. 2011Sensory, Inc.Systems and methods of performing speech recognition using historical information
US803238315. Juni 20074. Okt. 2011Foneweb, Inc.Speech controlled services and devices using internet
US80369015. Okt. 200711. Okt. 2011Sensory, IncorporatedSystems and methods of performing speech recognition using sensory inputs of human position
US804157031. Mai 200518. Okt. 2011Robert Bosch CorporationDialogue management using scripts
US804161118. Nov. 201018. Okt. 2011Platformation, Inc.Pricing and auctioning of bundled items among multiple sellers and buyers
US80557081. Juni 20078. Nov. 2011Microsoft CorporationMultimedia spaces
US806515510. Febr. 201022. Nov. 2011Gazdzinski Robert FAdaptive advertising apparatus and methods
US806515624. Febr. 201022. Nov. 2011Gazdzinski Robert FAdaptive information presentation apparatus and methods
US806904629. Okt. 200929. Nov. 2011Voicebox Technologies, Inc.Dynamic speech sharpening
US807368116. Okt. 20066. Dez. 2011Voicebox Technologies, Inc.System and method for a cooperative conversational voice user interface
US807847311. Febr. 201013. Dez. 2011Gazdzinski Robert FAdaptive advertising apparatus and methods
US808215320. Aug. 200920. Dez. 2011International Business Machines CorporationConversational computing via conversational virtual machine
US80953642. Juli 201010. Jan. 2012Tegic Communications, Inc.Multimodal disambiguation of speech recognition
US809928928. Mai 200817. Jan. 2012Sensory, Inc.Voice interface and search for electronic devices including bluetooth headsets and remote systems
US810740115. Nov. 200431. Jan. 2012Avaya Inc.Method and apparatus for providing a virtual assistant to a communication participant
US811227522. Apr. 20107. Febr. 2012Voicebox Technologies, Inc.System and method for user-specific speech recognition
US811228019. Nov. 20077. Febr. 2012Sensory, Inc.Systems and methods of performing speech recognition with barge-in for use in a bluetooth system
US811703724. Febr. 201014. Febr. 2012Gazdzinski Robert FAdaptive information presentation apparatus and methods
US813155720. Mai 20116. März 2012Advanced Voice Recognition Systems, Inc,Speech recognition and transcription among users having heterogeneous protocols
US814033511. Dez. 200720. März 2012Voicebox Technologies, Inc.System and method for providing a natural language voice user interface in an integrated voice navigation services environment
US816588629. Sept. 200824. Apr. 2012Great Northern Research LLCSpeech interface system and method for control and interaction with applications on a computing system
US816601921. Juli 200824. Apr. 2012Sprint Communications Company L.P.Providing suggested actions in response to textual communications
US819035930. Aug. 200829. Mai 2012Proxpro, Inc.Situation-aware personal information management for a mobile device
US819546710. Juli 20085. Juni 2012Sensory, IncorporatedVoice interface and search for electronic devices including bluetooth headsets and remote systems
US82042389. Juni 200819. Juni 2012Sensory, IncSystems and methods of sonic communication
US820578822. Sept. 200826. Juni 2012Gazdzinski Mark JChattel management apparatus and method
US821940730. Sept. 200810. Juli 2012Great Northern Research, LLCMethod for processing the output of a speech recognizer
US82855511. März 20129. Okt. 2012Gazdzinski Robert FNetwork apparatus and methods for user information delivery
US82855531. Febr. 20129. Okt. 2012Gazdzinski Robert FComputerized information presentation apparatus
US829077824. Febr. 201216. Okt. 2012Gazdzinski Robert FComputerized information presentation apparatus
US829078124. Febr. 201216. Okt. 2012Gazdzinski Robert FComputerized information presentation apparatus
US829614624. Febr. 201223. Okt. 2012Gazdzinski Robert FComputerized information presentation apparatus
US829615324. Febr. 201223. Okt. 2012Gazdzinski Robert FComputerized information presentation methods
US829638324. Mai 201223. Okt. 2012Apple Inc.Electronic devices with voice command and contextual data processing capabilities
US830145624. Jan. 201230. Okt. 2012Gazdzinski Robert FElectronic information access system and methods
US831183427. Febr. 201213. Nov. 2012Gazdzinski Robert FComputerized information selection and download apparatus and methods
US837015831. Jan. 20125. Febr. 2013Gazdzinski Robert FAdaptive information presentation apparatus
US837150315. März 201212. Febr. 2013Robert F. GazdzinskiPortable computerized wireless payment apparatus and methods
US837487111. März 200212. Febr. 2013Fluential, LlcMethods for creating a phrase thesaurus
US84476129. Febr. 201221. Mai 2013West View Research, LlcComputerized information presentation apparatus
US2001004726415. Febr. 200129. Nov. 2001Brian RoundtreeAutomated reservation and appointment system using interactive voice recognition
US2002001058422. Mai 200124. Jan. 2002Schultz Mitchell JayInteractive voice communication method and system for information and entertainment
US2002003256419. Apr. 200114. März 2002Farzad EhsaniPhrase-based dialogue modeling with particular application to creating a recognition grammar for a voice-controlled user interface
US20020035474 *26. März 200121. März 2002Ahmet AlpdemirVoice-interactive marketplace providing time and money saving benefits and real-time promotion publishing and feedback
US2002004602531. Aug. 200118. Apr. 2002Horst-Udo HainGrapheme-phoneme conversion
US2002006906319. Okt. 19986. Juni 2002Peter BuchnerSpeech recognition control of remotely controllable devices in a home network evironment
US200200778171. Nov. 200120. Juni 2002Atal Bishnu SaroopSystem and method of pattern recognition in very high-dimensional space
US2002010364113. Dez. 20011. Aug. 2002Kuo Jie YungStore speech, select vocabulary to recognize word
US20020116185 *16. Febr. 200122. Aug. 2002International Business Machines CorporationTracking time using portable recorders and speech recognition
US200201640001. Dez. 19987. Nov. 2002Michael H. CohenSystem for and method of creating and browsing a voice web
US2002019871426. Juni 200126. Dez. 2002Guojun ZhouStatistical spoken dialog system
US200401357015. Jan. 200415. Juli 2004Kei YasudaApparatus operating system
US2004019938726. Apr. 20047. Okt. 2004Wang Avery Li-ChunMethod and system for purchasing pre-recorded music
US200402367787. Juli 200425. Nov. 2004Matsushita Electric Industrial Co., Ltd.Mechanism for storing information about recorded television broadcasts
US2005005540325. Okt. 200210. März 2005Brittan Paul St. JohnAsynchronous access to synchronous voice services
US200500713323. Nov. 200431. März 2005Ortega Ruben ErnestoSearch query processing to identify related search terms and to correct misspellings of search terms
US2005008062510. Okt. 200314. Apr. 2005Bennett Ian M.Distributed real time speech recognition system
US2005009111810. Okt. 200128. Apr. 2005Accenture Properties (2) B.V.Location-Based filtering for a shopping agent in the physical world
US2005010261412. Nov. 200312. Mai 2005Microsoft CorporationSystem for identifying paraphrases using machine translation
US2005010800115. Nov. 200219. Mai 2005Aarskog Brit H.Method and apparatus for textual exploration discovery
US2005011412426. Nov. 200326. Mai 2005Microsoft CorporationMethod and apparatus for multi-sensory speech enhancement
US200501198977. Jan. 20052. Juni 2005Bennett Ian M.Multi-language speech recognition system
US2005014397224. Febr. 200530. Juni 2005Ponani GopalakrishnanSystem and methods for acoustic and language modeling for automatic speech recognition with large vocabularies
US2005016560722. Jan. 200428. Juli 2005At&T Corp.System and method to disambiguate and clarify user intention in a spoken dialog system
US2005018262918. Jan. 200518. Aug. 2005Geert CoormanCorpus-based speech synthesis based on segment recombination
US2005019673327. Apr. 20058. Sept. 2005Scientific Learning CorporationMethod and apparatus for automated training of language learning skills
US2005028893615. Aug. 200529. Dez. 2005Senis BusayapongchaiMulti-context conversational environment system and method
US2006001849213. Dez. 200426. Jan. 2006Inventec CorporationSound control system and method
US2006010659215. Nov. 200418. Mai 2006Microsoft CorporationUnsupervised learning of paraphrase/ translation alternations and selective application thereof
US2006010659415. Nov. 200418. Mai 2006Microsoft CorporationUnsupervised learning of paraphrase/translation alternations and selective application thereof
US2006010659515. Nov. 200418. Mai 2006Microsoft CorporationUnsupervised learning of paraphrase/translation alternations and selective application thereof
US200601170021. Nov. 20051. Juni 2006Bing SwenMethod for search result clustering
US200601228345. Dez. 20058. Juni 2006Bennett Ian MEmotion detection device & method for use in distributed systems
US2006014300731. Okt. 200529. Juni 2006Koh V EUser interaction with voice information services
US2006021796722. März 200428. Sept. 2006Doug GoertzenSystem and methods for storing and presenting personal information
US20060235700 *2. Juni 200519. Okt. 2006Microsoft CorporationProcessing files from a mobile device using voice commands
US2007004136115. Aug. 200522. Febr. 2007Nokia CorporationApparatus and methods for implementing an in-call voice user interface using context information
US2007005552931. Aug. 20058. März 2007International Business Machines CorporationHierarchical methods and apparatus for extracting user intent from spoken utterances
US200700588327. Aug. 200615. März 2007Realnetworks, Inc.Personal media device
US2007008855617. Okt. 200519. Apr. 2007Microsoft CorporationFlexible speech-activated command and control
US200701007908. Sept. 20063. Mai 2007Adam CheyerMethod and apparatus for building an intelligent automated assistant
US200701066748. Nov. 200610. Mai 2007Purusharth AgrawalField sales process facilitation systems and methods
US2007011837716. Dez. 200324. Mai 2007Leonardo BadinoText-to-speech method and system, computer program product therefor
US2007013594923. Febr. 200714. Juni 2007Microsoft CorporationAdministrative Tool Environment
US2007017418823. Jan. 200726. Juli 2007Fish Robert DElectronic marketplace that facilitates transactions between consolidated buyers and/or sellers
US2007018591728. Nov. 20069. Aug. 2007Anand PrahladSystems and methods for classifying and transferring information in a storage network
US200702825956. Juni 20066. Dez. 2007Microsoft CorporationNatural language personal information management
US2008001586416. Juli 200717. Jan. 2008Ross Steven IMethod and Apparatus for Managing Dialog Management in a Computer Conversation
US200800217081. Okt. 200724. Jan. 2008Bennett Ian MSpeech recognition system interactive agent
US2008003403212. Okt. 20077. Febr. 2008Healey Jennifer AMethods and Systems for Authoring of Mixed-Initiative Multi-Modal Interactions and Related Browsing Mechanisms
US2008005206331. Okt. 200728. Febr. 2008Bennett Ian MMulti-language speech recognition system
US2008008233228. Sept. 20063. Apr. 2008Jacqueline MallettMethod And System For Sharing Portable Voice Profiles
US2008012011231. Okt. 200722. Mai 2008Adam JordanGlobal speech user interface
US200801295201. Dez. 20065. Juni 2008Apple Computer, Inc.Electronic device with enhanced audio feedback
US20080140416 *22. Febr. 200712. Juni 2008Shostak Robert EVoice-controlled communications system and method using a badge application
US200801406572. Febr. 200612. Juni 2008Behnam AzvineDocument Searching Tool and Method
US2008022190322. Mai 200811. Sept. 2008International Business Machines CorporationHierarchical Methods and Apparatus for Extracting User Intent from Spoken Utterances
US2008022849615. März 200718. Sept. 2008Microsoft CorporationSpeech-centric multimodal user interface design in mobile technology
US2008024751917. Juni 20089. Okt. 2008At&T Corp.Method for dialog management
US2008024977020. Aug. 20079. Okt. 2008Samsung Electronics Co., Ltd.Method and apparatus for searching for music based on speech recognition
US2008030087819. Mai 20084. Dez. 2008Bennett Ian MMethod For Transporting Speech Data For A Distributed Recognition System
US2008031976329. Aug. 200825. Dez. 2008At&T Corp.System and dialog manager developed using modular spoken-dialog components
US2009000610029. Juni 20071. Jan. 2009Microsoft CorporationIdentification and selection of a software application via speech
US2009000634328. Juni 20071. Jan. 2009Microsoft CorporationMachine assisted query formulation
US2009001883523. Sept. 200815. Jan. 2009Cooper Robert SPersonal Virtual Assistant
US2009003080031. Jan. 200729. Jan. 2009Dan GroisMethod and System for Searching a Data Network by Using a Virtual Assistant and for Advertising by using the same
US2009005517915. Jan. 200826. Febr. 2009Samsung Electronics Co., Ltd.Method, medium and apparatus for providing mobile voice web service
US2009005882311. Febr. 20085. März 2009Apple Inc.Virtual Keyboards in Multi-Language Environment
US2009007679618. Sept. 200719. März 2009Ariadne Genomics, Inc.Natural language processing method
US200900771659. Juni 200819. März 2009Rhodes Bradley JWorkflow Manager For A Distributed System
US2009010004917. Dez. 200816. Apr. 2009Platformation Technologies, Inc.Methods and Apparatus for Entity Search
US2009011267721. Okt. 200830. Apr. 2009Rhett Randolph LMethod for automatically developing suggested optimal work schedules from unsorted group and individual task lists
US2009015015611. Dez. 200711. Juni 2009Kennewick Michael RSystem and method for providing a natural language voice user interface in an integrated voice navigation services environment
US2009015740123. Juni 200818. Juni 2009Bennett Ian MSemantic Decoding of User Queries
US2009016444122. Dez. 200825. Juni 2009Adam CheyerMethod and apparatus for searching using an active ontology
US200901716644. Febr. 20092. Juli 2009Kennewick Robert ASystems and methods for responding to natural language speech utterance
US2009020440928. Mai 200813. Aug. 2009Sensory, IncorporatedVoice Interface and Search for Electronic Devices including Bluetooth Headsets and Remote Systems
US2009028758331. März 200919. Nov. 2009Dell Products L.P.Digital media content location and purchasing system
US2009029071820. Mai 200926. Nov. 2009Philippe KahnMethod and Apparatus for Adjusting Audio for a User Environment
US2009029974527. Mai 20083. Dez. 2009Kennewick Robert ASystem and method for an integrated, multi-modal, multi-device natural language voice services environment
US200902998494. Aug. 20093. Dez. 2009Platformation, Inc.Methods and Apparatus for Freshness and Completeness of Information
US200903071621. Juni 200910. Dez. 2009Hung BuiMethod and apparatus for automated assistance with task management
US2010000508114. Sept. 20097. Jan. 2010Bennett Ian MSystems for natural language processing of sentence based queries
US201000233201. Okt. 200928. Jan. 2010Voicebox Technologies, Inc.System and method of supporting adaptive misrecognition in conversational speech
US2010003666014. Okt. 200911. Febr. 2010Phoenix Solutions, Inc.Emotion Detection Device and Method for Use in Distributed Systems
US201000424009. Nov. 200618. Febr. 2010Hans-Ulrich BlockMethod for Triggering at Least One First and Second Background Application via a Universal Language Dialog System
US201000880206. Okt. 20098. Apr. 2010Darrell SanoUser interface for predictive traffic
US201000881002. Okt. 20088. Apr. 2010Lindahl Aram MElectronic devices with voice command and contextual data processing capabilities
US201001382151. Dez. 20083. Juni 2010At&T Intellectual Property I, L.P.System and method for using alternate recognition hypotheses to improve whole-dialog understanding accuracy
US2010014570012. Febr. 201010. Juni 2010Voicebox Technologies, Inc.Mobile systems and methods for responding to natural language speech utterance
US2010020498622. Apr. 201012. Aug. 2010Voicebox Technologies, Inc.Systems and methods for responding to natural language speech utterance
US2010021760420. Febr. 200926. Aug. 2010Voicebox Technologies, Inc.System and method for processing multi-modal device interactions in a natural language voice services environment
US2010022854020. Mai 20109. Sept. 2010Phoenix Solutions, Inc.Methods and Systems for Query-Based Searching Using Spoken Input
US2010023534119. Mai 201016. Sept. 2010Phoenix Solutions, Inc.Methods and Systems for Searching Using Spoken Input and User Context Information
US201002571609. Apr. 20107. Okt. 2010Yu CaoMethods & apparatus for searching with awareness of different types of information
US2010026259914. Apr. 201014. Okt. 2010Sri InternationalContent processing systems and methods
US2010027757929. Apr. 20104. Nov. 2010Samsung Electronics Co., Ltd.Apparatus and method for detecting voice based on motion information
US2010028098329. Apr. 20104. Nov. 2010Samsung Electronics Co., Ltd.Apparatus and method for predicting user's intention based on multimodal information
US2010028698519. Juli 201011. Nov. 2010Voicebox Technologies, Inc.Systems and methods for responding to natural language speech utterance
US2010029914230. Juli 201025. Nov. 2010Voicebox Technologies, Inc.System and method for selecting and presenting advertisements based on natural language processing of voice-based input
US201003125475. Juni 20099. Dez. 2010Apple Inc.Contextual voice commands
US2010031857619. März 201016. Dez. 2010Samsung Electronics Co., Ltd.Apparatus and method for providing goal predictive interface
US2010033223529. Juni 200930. Dez. 2010Abraham Ben DavidIntelligent home automation
US201003323481. Sept. 201030. Dez. 2010Platformation, Inc.Methods and Apparatus for Freshness and Completeness of Information
US201100470725. Aug. 201024. Febr. 2011Visa U.S.A. Inc.Systems and Methods for Propensity Analysis and Validation
US2011006080710. Sept. 200910. März 2011John Jeffrey MartinSystem and method for tracking user location and associated activity and responsively providing mobile device updates
US2011008268830. Sept. 20107. Apr. 2011Samsung Electronics Co., Ltd.Apparatus and Method for Analyzing Intention
US201101128279. Febr. 201012. Mai 2011Kennewick Robert ASystem and method for hybrid processing in a natural language voice services environment
US2011011292110. Nov. 201012. Mai 2011Voicebox Technologies, Inc.System and method for providing a natural language content dedication service
US2011011904922. Okt. 201019. Mai 2011Tatu Ylonen Oy LtdSpecializing disambiguation of a natural language expression
US2011012554017. Nov. 201026. Mai 2011Samsung Electronics Co., Ltd.Schedule management system using interactive robot and method and computer-readable medium thereof
US2011013095830. Nov. 20092. Juni 2011Apple Inc.Dynamic alerts for calendar events
US201101310367. Febr. 20112. Juni 2011Voicebox Technologies, Inc.System and method of supporting adaptive misrecognition in conversational speech
US201101310452. Febr. 20112. Juni 2011Voicebox Technologies, Inc.Systems and methods for responding to natural language speech utterance
US2011014381113. Aug. 201016. Juni 2011Rodriguez Tony FMethods and Systems for Content Processing
US2011014499910. Dez. 201016. Juni 2011Samsung Electronics Co., Ltd.Dialogue system and dialogue method thereof
US201101610769. Juni 201030. Juni 2011Davis Bruce LIntuitive Computing Methods and Systems
US2011016130929. Dez. 200930. Juni 2011Lx1 Technology LimitedMethod Of Sorting The Result Set Of A Search Engine
US2011017581015. Jan. 201021. Juli 2011Microsoft CorporationRecognizing User Intent In Motion Capture System
US2011018473022. Jan. 201028. Juli 2011Google Inc.Multi-dimensional disambiguation of voice commands
US201102188551. März 20118. Sept. 2011Platformation, Inc.Offering Promotions Based on Query Analysis
US2011023118211. Apr. 201122. Sept. 2011Voicebox Technologies, Inc.Mobile systems and methods of supporting natural language human-machine interactions
US201102311881. Juni 201122. Sept. 2011Voicebox Technologies, Inc.System and method for providing an acoustic grammar to dynamically sharpen speech interpretation
US201102646435. Juli 201127. Okt. 2011Yu CaoMethods and Apparatus for Searching with Awareness of Geography and Languages
US2011027936812. Mai 201017. Nov. 2011Microsoft CorporationInferring user intent to engage a motion capture system
US2011030642610. Juni 201015. Dez. 2011Microsoft CorporationActivity Participation Based On User Intent
US2012000282030. Juni 20105. Jan. 2012GoogleRemoving Noise From Audio
US2012001667810. Jan. 201119. Jan. 2012Apple Inc.Intelligent Automated Assistant
US2012002049030. Sept. 201126. Jan. 2012Google Inc.Removing Noise From Audio
US2012002278730. Sept. 201126. Jan. 2012Google Inc.Navigation Queries
US201200228573. Okt. 201126. Jan. 2012Voicebox Technologies, Inc.System and method for a cooperative conversational voice user interface
US2012002286030. Sept. 201126. Jan. 2012Google Inc.Speech and Noise Models for Speech Recognition
US2012002286830. Sept. 201126. Jan. 2012Google Inc.Word-Level Correction of Speech Input
US2012002286930. Sept. 201126. Jan. 2012Google, Inc.Acoustic model adaptation using geographic information
US2012002287030. Sept. 201126. Jan. 2012Google, Inc.Geotagged environmental audio for enhanced speech recognition accuracy
US2012002287430. Sept. 201126. Jan. 2012Google Inc.Disambiguation of contact information using historical data
US2012002287630. Sept. 201126. Jan. 2012Google Inc.Voice Actions on Computing Devices
US2012002308830. Sept. 201126. Jan. 2012Google Inc.Location-Based Searching
US201200349046. Aug. 20109. Febr. 2012Google Inc.Automatically Monitoring for Voice Input Based on Context
US2012003590829. Sept. 20119. Febr. 2012Google Inc.Translating Languages
US2012003592420. Juli 20119. Febr. 2012Google Inc.Disambiguating input based on context
US2012003593129. Sept. 20119. Febr. 2012Google Inc.Automatically Monitoring for Voice Input Based on Context
US201200359326. Aug. 20109. Febr. 2012Google Inc.Disambiguating Input Based on Context
US2012004234329. Sept. 201116. Febr. 2012Google Inc.Television Remote Control Data Transfer
US201201373678. Nov. 201031. Mai 2012Cataphora, Inc.Continuous anomaly detection based on behavior modeling and heterogeneous information analysis
US201201734641. Sept. 20105. Juli 2012Gokhan TurMethod and apparatus for exploiting human feedback in an intelligent automated assistant
US2012026552830. Sept. 201118. Okt. 2012Apple Inc.Using Context Information To Facilitate Processing Of Commands In A Virtual Assistant
US2012027167624. Apr. 201225. Okt. 2012Murali AravamudanSystem and method for an intelligent personal timeline assistant
US2012031158330. Sept. 20116. Dez. 2012Apple Inc.Generating and processing task items that represent tasks to perform
US201203306615. Sept. 201227. Dez. 2012Lindahl Aram MElectronic Devices with Voice Command and Contextual Data Processing Capabilities
US2013000663813. Sept. 20123. Jan. 2013Lindahl Aram MElectronic Devices with Voice Command and Contextual Data Processing Capabilities
US2013011051821. Dez. 20122. Mai 2013Apple Inc.Active Input Elicitation by Intelligent Automated Assistant
US2013011052021. Dez. 20122. Mai 2013Apple Inc.Intent Deduction Based on Previous User Interactions with Voice Assistant
USRE3456216. Okt. 198715. März 1994Mitsubishi Denki Kabushiki KaishaAmplitude-adaptive vector quantization system
CH681573A5 Title not available
DE3837590A15. Nov. 198810. Mai 1990Ant NachrichtentechMethod for reducing the data rate of digital image data
DE19841541B411. Sept. 19986. Dez. 2007Püllen, RainerSubscriber unit for a multimedia service
EP0138061A112. Sept. 198424. Apr. 1985Siemens AktiengesellschaftMethod of determining speech spectra with an application to automatic speech recognition and speech coding
EP0138061B112. Sept. 198429. Juni 1988Siemens AktiengesellschaftMethod of determining speech spectra with an application to automatic speech recognition and speech coding
EP0218859A226. Aug. 198622. Apr. 1987International Business Machines CorporationSignal processor communication interface
EP0262938A129. Sept. 19876. Apr. 1988BRITISH TELECOMMUNICATIONS public limited companyLanguage translation system
EP0293259A227. Mai 198830. Nov. 1988Kabushiki Kaisha ToshibaVoice recognition system used in telephone apparatus
EP0299572A28. Juli 198818. Jan. 1989Philips Patentverwaltung GmbHMethod for connected word recognition
EP0313975A219. Okt. 19883. Mai 1989International Business Machines CorporationDesign and construction of a binary-tree system for language modelling
EP0314908A216. Sept. 198810. Mai 1989International Business Machines CorporationAutomatic determination of labels and markov word models in a speech recognition system
EP0327408A26. Febr. 19899. Aug. 1989ADVANCED PRODUCTS & TECHNOLOGIES, INC.Voice language translator
EP0389271A221. März 199026. Sept. 1990International Business Machines CorporationMatching sequences of labels representing input data and stored data utilising dynamic programming
EP0411675A210. Juni 19836. Febr. 1991Mitsubishi Denki Kabushiki KaishaInterframe coding apparatus
EP0559349A117. Febr. 19938. Sept. 1993AT&T Corp.Training method and apparatus for speech recognition
EP0559349B117. Febr. 19937. Jan. 1999AT&T Corp.Training method and apparatus for speech recognition
EP0570660A115. Jan. 199324. Nov. 1993International Business Machines CorporationSpeech recognition system for natural language translation
EP0863453A16. März 19989. Sept. 1998Xerox CorporationShared-data environment in which each file has independent security properties
EP1245023A110. Nov. 20002. Okt. 2002Phoenix solutions, Inc.Distributed real time speech recognition system
EP2109295A124. Okt. 200814. Okt. 2009LG Electronics Inc.Mobile terminal and menu control method thereof
GB2293667A Title not available
JP2001125896A Title not available
JP2002024212A Title not available
JP2003517158A Title not available
JP2009036999A Title not available
KR100776800B1 Title not available
KR100810500B1 Title not available
KR100920267B1 Title not available
KR102008109322A Title not available
KR102009086805A Title not available
KR1020110113414A Title not available
WO2002073603A126. März 200119. Sept. 2002Totally Voice, Inc.A method for integrating processes with a multi-faceted human centered interface
WO2006129967A130. Mai 20067. Dez. 2006Daumsoft, Inc.Conversation system and method using conversational agent
WO2008085742A227. Dez. 200717. Juli 2008Apple Inc.Portable multifunction device, method and graphical user interface for interacting with user input elements in displayed content
WO2008109835A27. März 200812. Sept. 2008Vlingo CorporationSpeech recognition of speech recorded by a mobile communication facility
WO2011088053A211. Jan. 201121. Juli 2011Apple Inc.Intelligent automated assistant
Non-Patent Citations
Reference
1Acero, A., et al., "Environmental Robustness in Automatic Speech Recognition," International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), Apr. 3-6, 1990, 4 pages.
2Acero, A., et al., "Robust Speech Recognition by Normalization of the Acoustic Space," International Conference on Acoustics, Speech, and Signal Processing, 1991, 4 pages.
3Agnas, Ms., et al., "Spoken Language Translator: First-Year Report," Jan. 1994, SICS (ISSN 0283-3638), SRI and Telia Research AB, 161 pages.
4Ahlbom, G., et al., "Modeling Spectral Speech Transitions Using Temporal Decomposition Techniques," IEEE International Conference of Acoustics, Speech, and Signal Processing (ICASSP'87), Apr. 1987, vol. 12, 4 pages.
5Aikawa, K., "Speech Recognition Using Time-Warping Neural Networks," Proceedings of the 1991 IEEE Workshop on Neural Networks for Signal Processing, Sep. 30 to Oct. 1, 1991, 10 pages.
6Alfred App, 2011, http://www.alfredapp.com/, 5 pages.
7Allen, J., "Natural Language Understanding," 2nd Edition, Copyright © 1995 by The Benjamin/Cummings Publishing Company, Inc., 671 pages.
8Alshawi H., et al., "Logical Forms In The Core Language Engine," 1989, Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, 8 pages.
9Alshawi, H., "Translation and Monotonic Interpretation/Generation," Jul. 1992, SRI International, Cambridge Computer Science Research Centre, Cambridge, 18 pages., http://www.cam.sri.com/tricrc024/paper.ps.Z 1992.
10Alshawi, H., et al., "Clare: A Contextual Reasoning and Cooperative Response Framework for the Core Language Engine," Dec. 1992, SRI International, Cambridge Computer Science Research Centre, Cambridge, 273 pages.
11Alshawi, H., et al., "Declarative Derivation of Database Queries from Meaning Representations," Oct. 1991, Proceedings of the BANKAI Workshop on Intelligent Information Access, 12 pages.
12Alshawi, H., et al., "Overview of the Core Language Engine," Sep. 1988, Proceedings of Future Generation Computing Systems, Tokyo, 13 pages.
13Ambite, JL., et al., "Design and Implementation of the CALO Query Manager," Copyright © 2006, American Association for Artificial Intelligence, (www.aaai.org), 8 pages.
14Ambite, JL., et al., "Integration of Heterogeneous Knowledge Sources in the CALO Query Manager," 2005, The 4th International Conference on Ontologies, DataBases, and Applications of Semantics (ODBASE), Agia Napa, Cyprus, http://www.isi.edu/people/ambite/publications/integration-heterogeneous-knowledge-sources-calo-query-manager, 18 pages.
15Ambite, JL., et al., "Integration of Heterogeneous Knowledge Sources in the CALO Query Manager," 2005, The 4th International Conference on Ontologies, DataBases, and Applications of Semantics (ODBASE), Agia Napa, Cyprus, http://www.isi.edu/people/ambite/publications/integration_heterogeneous_knowledge_sources_calo_query_manager, 18 pages.
16Anastasakos, A., et al., "Duration Modeling in Large Vocabulary Speech Recognition," International Conference on Acoustics, Speech, and Signal Processing (ICASSP'95), May 9-12, 1995, 4 pages.
17Anderson, R. H., "Syntax-Directed Recognition of Hand-Printed Two-Dimensional Mathematics," In Proceedings of Symposium on Interactive Systems for Experimental Applied Mathematics: Proceedings of the Association for Computing Machinery Inc. Symposium, ©1967, 12 pages.
18Ansari, R., et al., "Pitch Modification of Speech using a Low-Sensitivity Inverse Filter Approach," IEEE Signal Processing Letters, vol. 5, No. 3, Mar. 1998, 3 pages.
19Anthony, N. J., et al., "Supervised Adaption for Signature Verification System," Jun. 1, 1978, IBM Technical Disclosure, 3 pages.
20Appelt, D., et al., "Fastus: A Finite-state Processor for Information Extraction from Real-world Text," 1993, Proceedings of IJCAI, 8 pages.
21Appelt, D., et al., "SRI: Description of the JV-FASTUS System Used for MUC-5," 1993, SRI International, Artificial Intelligence Center, 19 pages.
22Appelt, D., et al., SRI International Fastus System MUC-6 Test Results and Analysis, 1995, SRI International, Menlo Park, California, 12 pages.
23Apple Computer, "Guide Maker User's Guide," © Apple Computer, Inc., Apr. 27, 1994, 8 pages.
24Apple Computer, "Introduction to Apple Guide," © Apple Computer, Inc., Apr. 28, 1994, 20 pages.
25Apple Computer, video entitled "Knowledge Navigator," published by Apple Computer no later than 2008, as depicted in "Exemplary Screenshots from video entitled ‘Knowledge Navigator,’" 7 pages.
27Archbold, A., et al., "A Team User's Guide," Dec. 21, 1981, SRI International, 70 pages.
28Asanović, K., et al., "Experimental Determination of Precision Requirements for BackPropagation Training of Artificial Neural Networks," In Proceedings of the 2nd International Conference of Microelectronics for Neural Networks, 1991, www.ICSI.Berkeley.EDU, 7 pages.
29Atal, B. S., "Efficient Coding of LPC Parameters by Temporal Decomposition," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'83), Apr. 1983, 4 pages.
30Bahl, L. R., et al, "Multonic Markov Word Models for Large Vocabulary Continuous Speech Recognition," IEEE Transactions on Speech and Audio Processing, vol. 1, No. 3, Jul. 1993, 11 pages.
31Bahl, L. R., et al., "A Maximum Likelihood Approach to Continuous Speech Recognition," IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 2, Mar. 1983, 13 pages.
32Bahl, L. R., et al., "A Tree-Based Statistical Language Model for Natural Language Speech Recognition," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, Issue 7, Jul. 1989, 8 pages.
33Bahl, L. R., et al., "Acoustic Markov Models Used in the Tangora Speech Recognition System," In Proceeding of International Conference on Acoustics, Speech, and Signal Processing (ICASSP'88), Apr. 11-14, 1988, vol. 1, 4 pages.
34Bahl, L. R., et al., "Large Vocabulary Natural Language Continuous Speech Recognition," In Proceedings of 1989 International Conference on Acoustics, Speech, and Signal Processing, May 23-26, 1989, vol. 1, 6 pages.
35Bahl, L. R., et al., "Speech Recognition with Continuous-Parameter Hidden Markov Models," In Proceeding of International Conference on Acoustics, Speech, and Signal Processing (ICASSP'88), Apr. 11-14, 1988, vol. 1, 8 pages.
36Banbrook, M., "Nonlinear Analysis of Speech from a Synthesis Perspective," A thesis submitted for the degree of Doctor of Philosophy, The University of Edinburgh, Oct. 15, 1996, 35 pages.
37Bear, J., et al., "A System for Labeling Self-Repairs in Speech," Feb. 22, 1993, SRI International, 9 pages.
38Bear, J., et al., "Detection and Correction of Repairs in Human-Computer Dialog," May 5, 1992, SRI International, 11 pages.
39Bear, J., et al., "Integrating Multiple Knowledge Sources for Detection and Correction of Repairs in Human-Computer Dialog," 1992, Proceedings of the 30th annual meeting on Association for Computational Linguistics (ACL), 8 pages.
40Bear, J., et al., "Using Information Extraction to Improve Document Retrieval," 1998, SRI International, Menlo Park, California, 11 pages.
41Belaid, A., et al., "A Syntactic Approach for Handwritten Mathematical Formula Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, No. 1, Jan. 1984, 7 pages.
42Bellegarda, E. J., et al., "On-Line Handwriting Recognition Using Statistical Mixtures," Advances in Handwriting and Drawings: A Multidisciplinary Approach, Europia, 6th International IGS Conference on Handwriting and Drawing, Paris, France, Jul. 1993, 11 pages.
43Bellegarda, J. R., "A Latent Semantic Analysis Framework for Large-Span Language Modeling," 5th European Conference on Speech, Communication and Technology, (EUROSPEECH'97), Sep. 22-25, 1997, 4 pages.
44Bellegarda, J. R., "A Multispan Language Modeling Framework for Large Vocabulary Speech Recognition," IEEE Transactions on Speech and Audio Processing, vol. 6, No. 5, Sep. 1998, 12 pages.
45Bellegarda, J. R., "Exploiting Both Local and Global Constraints for Multi-Span Statistical Language Modeling," Proceeding of the 1998 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'98), vol. 2, May 12-15, 1998, 5 pages.
46Bellegarda, J. R., "Exploiting Latent Semantic Information in Statistical Language Modeling," In Proceedings of the IEEE, Aug. 2000, vol. 88, No. 8, 18 pages.
47Bellegarda, J. R., "Interaction-Driven Speech Input—A Data-Driven Approach to the Capture of Both Local and Global Language Constraints," 1992, 7 pages, available at http://old.sigchi.org/bulletin/1998.2/bellegarda.html.
48Bellegarda, J. R., "Large Vocabulary Speech Recognition with Multispan Statistical Language Models," IEEE Transactions on Speech and Audio Processing, vol. 8, No. 1, Jan. 2000, 9 pages.
49Bellegarda, J. R., et al., "A Novel Word Clustering Algorithm Based on Latent Semantic Analysis," In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'96), vol. 1, 4 pages.
50Bellegarda, J. R., et al., "Experiments Using Data Augmentation for Speaker Adaptation," International Conference on Acoustics, Speech, and Signal Processing (ICASSP'95), May 9-12, 1995, 4 pages.
51Bellegarda, J. R., et al., "Performance of the IBM Large Vocabulary Continuous Speech Recognition System on the Arpa Wall Street Journal Task," Signal Processing VII: Theories and Applications, © 1994 European Association for Signal Processing, 4 pages.
52Bellegarda, J. R., et al., "The Metamorphic Algorithm: A Speaker Mapping Approach to Data Augmentation," IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, 8 pages.
53Belvin, R. et al., "Development of the HRL Route Navigation Dialogue System," 2001, In Proceedings of the First International Conference on Human Language Technology Research, Paper, Copyright © 2001 HRL Laboratories, LLC, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.6538, 5 pages.
54Berry, P. M., et al. "PTIME: Personalized Assistance for Calendaring," ACM Transactions on Intelligent Systems and Technology, vol. 2, No. 4, Article 40, Publication date: Jul. 2011, 40:1-22, 22 pages.
55Berry, P., et al., "Task Management under Change and Uncertainty Constraint Solving Experience with the CALO Project," 2005, Proceedings of CP'05 Workshop on Constraint Solving under Change, 5 pages.
56Black, A. W., et al., "Automatically Clustering Similar Units for Unit Selection in Speech Synthesis," In Proceedings of Eurospeech 1997, vol. 2, 4 pages.
57Blair, D. C., et al., "An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System," Communications of the ACM, vol. 28, No. 3, Mar. 1985, 11 pages.
58Bobrow, R. et al., "Knowledge Representation for Syntactic/Semantic Processing," From: AAAI-80 Proceedings. Copyright © 1980, AAAI, 8 pages.
59Bouchou, B., et al., "Using Transducers in Natural Language Database Query," Jun. 17-19, 1999, Proceedings of 4th International Conference on Applications of Natural Language to Information Systems, Austria, 17 pages.
60Bratt, H., et al., "The SRI Telephone-based ATIS System," 1995, Proceedings of ARPA Workshop on Spoken Language Technology, 3 pages.
61Briner, L. L., "Identifying Keywords in Text Data Processing," in Zelkowitz, Marvin V., ED, Directions and Challenges, 15th Annual Technical Symposium, Jun. 17, 1976, Gaithersbury, Maryland, 7 pages.
62Bulyko, I., et al., "Joint Prosody Prediction and Unit Selection for Concatenative Speech Synthesis," Electrical Engineering Department, University of Washington, Seattle, 2001, 4 pages.
63Burke, R., et al., "Question Answering from Frequently Asked Question Files," 1997, AI Magazine, vol. 18, No. 2, 10 pages.
64Burns, A., et al., "Development of a Web-Based Intelligent Agent for the Fashion Selection and Purchasing Process via Electronic Commerce," Dec. 31, 1998, Proceedings of the Americas Conference on Information Systems (AMCIS), 4 pages.
65Bussey, H. E., et al., "Service Architecture, Prototype Description, and Network Implications of a Personalized Information Grazing Service," INFOCOM'90, Ninth Annual Joint Conference of the IEEE Computer and Communication Societies, Jun. 3-7, 1990, http://slrohall.com/publications/, 8 pages.
66Bussler, C., et al., "Web Service Execution Environment (WSMX)," Jun. 3, 2005, W3C Member Submission, http://www.w3.org/Submission/WSMX, 29 pages.
67Butcher, M., "EVI arrives in town to go toe-to-toe with Siri," Jan. 23, 2012, http://techcrunch.com/2012/01/23/evi-arrives-in-town-to-go-toe-to-toe-with-siri/, 2 pages.
68Buzo, A., et al., "Speech Coding Based Upon Vector Quantization," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. Assp-28, No. 5, Oct. 1980, 13 pages.
69Caminero-Gil, J., et al., "Data-Driven Discourse Modeling for Semantic Interpretation," In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, May 7-10, 1996, 6 pages.
70Carter, D., "Lexical Acquisition in the Core Language Engine," 1989, Proceedings of the Fourth.
71Carter, D., et al., "The Speech-Language Interface in the Spoken Language Translator," Nov. 23, 1994, Sri International, 9 pages.
72Cawley, G. C., "The Application of Neural Networks to Phonetic Modelling," PhD Thesis, University of Essex, Mar. 1996, 13 pages.
73Chai, J., et al., "Comparative Evaluation of a Natural Language Dialog Based System and a Menu Driven System for Information Access: a Case Study," Apr. 2000, Proceedings of the International Conference on Multimedia Information Retrieval (RIAO), Paris, 11 pages.
74Chang, S., et al., "A Segment-based Speech Recognition System for Isolated Mandarin Syllables," Proceedings TENCON '93, IEEE Region 10 conference on Computer, Communication, Control and Power Engineering, Oct. 19-21, 1993, vol. 3, 6 pages.
75Chen, Y., "Multimedia Siri Finds and Plays Whatever You Ask for," Feb. 9, 2012, http://www.psfk.com/2012/02/multimedia-siri.html, 9 pages.
76Cheyer, A. et al., "Spoken Language and Multimodal Applications for Electronic Realities," © Springer-Verlag London Ltd, Virtual Reality 1999, 3:1-15, 15 pages.
77Cheyer, A., "A Perspective on AI & Agent Technologies for SCM," VerticalNet, 2001 presentation, 22 pages.
78Cheyer, A., "About Adam Cheyer," Sep. 17, 2012, http://www.adam.cheyer.com/about.html, 2 pages.
79Cheyer, A., et al., "Multimodal Maps: An Agent-based Approach," International Conference on Cooperative Multimodal Communication, 1995, 15 pages.
80Cheyer, A., et al., "The Open Agent Architecture," Autonomous Agents and Multi-Agent systems, vol. 4, Mar. 1, 2001, 6 pages.
81Cheyer, A., et al., "The Open Agent Architecture: Building communities of distributed software agents," Feb. 21, 1998, Artificial Intelligence Center SRI International, Power Point presentation, downloaded from http://www.ai.sri.com/˜oaa/, 25 pages.
82Cheyer, A., et al., video entitled "Demonstration Video of Multimodal Maps Using an Agent Architecture," published by SRI International no later than 1996, as depicted in "Exemplary Screenshots from video entitled ‘Demonstration Video of Multimodal Maps Using an Agent Architecture,’" 6 pages.
84Cheyer, A., et al., video entitled "Demonstration Video of Multimodal Maps Using an Open-Agent Architecture," published by SRI International no later than 1996, as depicted in "Exemplary Screenshots from video entitled ‘Demonstration Video of Multimodal Maps Using an Open-Agent Architecture,’" 6 pages.
86Cheyer, A., video entitled "Demonstration Video of Vanguard Mobile Portal," published by SRI International no later than 2004, as depicted in "Exemplary Screenshots from video entitled ‘Demonstration Video of Vanguard Mobile Portal,’" 10 pages.
88Codd, E. F., "Databases: Improving Usability and Responsiveness-'How About Recently'," Copyright © 1978, by Academic Press, Inc., 28 pages.
89Cohen, P.R., et al., "An Open Agent Architecture," 1994, 8 pages. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.480.
90Coles, L. S., "Techniques for Information Retrieval Using an Inferential Question-Answering System with Natural-Language Input," Nov. 1972, SRI International, 198 pages.
91Coles, L. S., "The Application of Theorem Proving to Information Retrieval," Jan. 1971, SRI International, 21 pages.
92Coles, L. S., et al., "Chemistry Question-Answering", Jun. 1969, SRI International, 15 pages.
93Conklin, J., "Hypertext: An Introduction and Survey," Computer Magazine, Sep. 1987, 25 pages.
94Connolly, F. T., et al., "Fast Algorithms for Complex Matrix Multiplication Using Surrogates," IEEE Transactions on Acoustics, Speech, and Signal Processing, Jun. 1989, vol. 37, No. 6, 13 pages.
95Constantinides, P., et al., "A Schema Based Approach to Dialog Control," 1998, Proceedings of the International Conference on Spoken Language Processing, 4 pages.
96Gorin, A. L., et al., "On Adaptive Acquisition of Language," International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), vol. 1, Apr. 3-6, 1990, 5 pages.
97Craig, J., et al., "Deacon: Direct English Access and Control," Nov. 7-10, 1966 AFIPS Conference Proceedings, vol. 19, San Francisco, 18 pages.
98Cutkosky, M. R. et al., "PACT: An Experiment in Integrating Concurrent Engineering Systems," Journal, Computer, vol. 26 Issue 1, Jan. 1993, IEEE Computer Society Press Los Alamitos, CA, USA, http://dl.acm.org/citation.cfm?id=165320, 14 pages.
99Dar, S., et al., "DTL's DataSpot: Database Exploration Using Plain Language," 1998 Proceedings of the 24th VLDB Conference, New York, 5 pages.
100Decker, K., et al., "Designing Behaviors for Information Agents," the Robotics Institute, Carnegie-Mellon University, paper, Jul. 6, 1996, 15 pages.
101Decker, K., et al., "Matchmaking and Brokering," The Robotics Institute, Carnegie-Mellon University, paper, May 16, 1996, 19 pages.
102Deerwester, S., et al., "Indexing by Latent Semantic Analysis," Journal of the American Society for Information Science, vol. 41, No. 6, Sep. 1990, 19 pages.
103Deller, Jr., J. R., et al., "Discrete-Time Processing of Speech Signals," © 1987 Prentice Hall, ISBN: 0-02-328301-7, 14 pages.
104Digital Equipment Corporation, "Open VMS Software Overview," Dec. 1995, software manual, 159 pages.
105Domingue, J., et al., "Web Service Modeling Ontology (WSMO)-An Ontology for Semantic Web Services," Jun. 9-10, 2005, position paper at the W3C Workshop on Frameworks for Semantics in Web Services, Innsbruck, Austria, 6 pages.
107Donovan, R. E., "A New Distance Measure for Costing Spectral Discontinuities in Concatenative Speech Synthesisers," 2001, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.6398, 4 pages.
108Dowding, J., et al., "Gemini: A Natural Language System for Spoken-Language Understanding," 1993, Proceedings of the Thirty-First Annual Meeting of the Association for Computational Linguistics, 8 pages.
109Dowding, J., et al., "Interleaving Syntax and Semantics in an Efficient Bottom-Up Parser," 1994, Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, 7 pages.
110Elio, R. et al., "On Abstract Task Models and Conversation Policies," http://webdocs.cs.ualberta.ca/~ree/publications/papers2/ATS.AA99.pdf, May 1999, 10 pages.
112Epstein, M., et al., "Natural Language Access to a Melanoma Data Base," Sep. 1978, SRI International, 7 pages.
113Ericsson, S. et al., "Software illustrating a unified approach to multimodality and multilinguality in the in-home domain," Dec. 22, 2006, Talk and Look: Tools for Ambient Linguistic Knowledge, http://www.talk-project.eurice.eu/fileadmin/talk/publications-public/deliverables-public/D1-6.pdf, 127 pages.
115Evi, "Meet Evi: the one mobile app that provides solutions for your everyday problems," Feb. 8, 2012, http://www.evi.com/, 3 pages.
116Exhibit 1, "Natural Language Interface Using Constrained Intermediate Dictionary of Results," Classes/Subclasses Manually Reviewed for the Search of US Patent No. 7,177,798, Mar. 22, 2013, 1 page.
117Exhibit 1, "Natural Language Interface Using Constrained Intermediate Dictionary of Results," List of Publications Manually reviewed for the Search of US Patent No. 7,177,798, Mar. 22, 2013, 1 page.
118Feigenbaum, E., et al., "Computer-assisted Semantic Annotation of Scientific Life Works," 2007, http://tomgruber.org/writing/stanford-cs300.pdf, 22 pages.
119Ferguson, G., et al., "TRIPS: An Integrated Intelligent Problem-Solving Assistant," 1998, Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98) and Tenth Conference on Innovative Applications of Artificial Intelligence (IAAI-98), 7 pages.
120Fikes, R., et al., "A Network-based knowledge Representation and its Natural Deduction System," Jul. 1977, SRI International, 43 pages.
121Frisse, M. E., "Searching for Information in a Hypertext Medical Handbook," Communications of the ACM, vol. 31, No. 7, Jul. 1988, 8 pages.
122Gamback, B., et al., "The Swedish Core Language Engine," 1992 Notex Conference, 17 pages.
123Gannes, L., "Alfred App Gives Personalized Restaurant Recommendations," allthingsd.com, Jul. 18, 2011, http://allthingsd.com/20110718/alfred-app-gives-personalized-restaurant-recommendations/, 3 pages.
124Gautier, P. O., et al. "Generating Explanations of Device Behavior Using Compositional Modeling and Causal Ordering," 1993, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.8394, 9 pages.
125Gervasio, M. T., et al., "Active Preference Learning for Personalized Calendar Scheduling Assistance," Copyright © 2005, http://www.ai.sri.com/~gervasio/pubs/gervasio-iui05.pdf, 8 pages.
127Glass, A., "Explaining Preference Learning," 2006, http://cs229.stanford.edu/proj2006/Glass-ExplainingPreferenceLearning.pdf, 5 pages.
128Glass, J., et al., "Multilingual Language Generation Across Multiple Domains," Sep. 18-22, 1994, International Conference on Spoken Language Processing, Japan, 5 pages.
129Glass, J., et al., "Multilingual Spoken-Language Understanding in the MIT Voyager System," Aug. 1995, http://groups.csail.mit.edu/sls/publications/1995/speechcomm95-voyager.pdf, 29 pages.
130Goddeau, D., et al., "A Form-Based Dialogue Manager for Spoken Language Applications," Oct. 1996, http://phasedance.com/pdf/icslp96.pdf, 4 pages.
131Goddeau, D., et al., "Galaxy: A Human-Language Interface to On-Line Travel Information," 1994 International Conference on Spoken Language Processing, Sep. 18-22, 1994, Pacific Convention Plaza Yokohama, Japan, 6 pages.
132Goldberg, D., et al., "Using Collaborative Filtering to Weave an Information Tapestry," Communications of the ACM, vol. 35, No. 12, Dec. 1992, 10 pages.
133Gotoh, Y., et al., "Document Space Models Using Latent Semantic Analysis," In Proceedings of Eurospeech, 1997, 4 pages.
134Gray, R. M., "Vector Quantization," IEEE ASSP Magazine, Apr. 1984, 26 pages.
135Green, C. "The Application of Theorem Proving to Question-Answering Systems," Jun. 1969, SRI Stanford Research Institute, Artificial Intelligence Group, 169 pages.
136Gregg, D. G., "DSS Access on the WWW: An Intelligent Agent Prototype," 1998 Proceedings of the Americas Conference on Information Systems-Association for Information Systems, 3 pages.
137Grishman, R., "Computational Linguistics: an Introduction," © Cambridge University Press 1986, 172 pages.
138Grosz, B. et al., "Dialogic: A Core Natural-Language Processing System," Nov. 9, 1982, SRI International, 17 pages.
139Grosz, B. et al., "Research on Natural-Language Processing at SRI," Nov. 1981, SRI International, 21 pages.
140Grosz, B., "Team: A Transportable Natural-Language Interface System," 1983, Proceedings of the First Conference on Applied Natural Language Processing, 7 pages.
141Grosz, B., et al., "Team: An Experiment in the Design of Transportable Natural-Language Interfaces," Artificial Intelligence, vol. 32, 1987, 71 pages.
142Gruber, T. R., "(Avoiding) the Travesty of the Commons," Presentation at NPUC 2006, New Paradigms for User Computing, IBM Almaden Research Center, Jul. 24, 2006. http://tomgruber.org/writing/avoiding-travestry.htm, 52 pages.
143Gruber, T. R., "2021: Mass Collaboration and the Really New Economy," TNTY Futures, The newsletter of the Next Twenty Years series, vol. 1, Issue 6, Aug. 2001, http://www.tnty.com/newsletter/futures/archive/v01-05business.html, 5 pages.
144Gruber, T. R., "A Translation Approach to Portable Ontology Specifications," Knowledge Systems Laboratory, Stanford University, Sep. 1992, Technical Report KSL 92-71, Revised Apr. 1993, 27 pages.
145Gruber, T. R., "Automated Knowledge Acquisition for Strategic Knowledge," Knowledge Systems Laboratory, Machine Learning, 4, 293-336 (1989), 44 pages.
146Gruber, T. R., "Big Think Small Screen: How semantic computing in the cloud will revolutionize the consumer experience on the phone," Keynote presentation at Web 3.0 conference, Jan. 27, 2010, http://tomgruber.org/writing/web30jan2010.htm, 41 pages.
147Gruber, T. R., "Collaborating around Shared Content on the WWW," W3C Workshop on WWW and Collaboration, Cambridge, MA, Sep. 11, 1995, http://www.w3.org/Collaboration/Workshop/Proceedings/P9.html, 1 page.
148Gruber, T. R., "Collective Knowledge Systems: Where the Social Web meets the Semantic Web," Web Semantics: Science, Services and Agents on the World Wide Web (2007), doi:10.1016/j.websem.2007.11.011, keynote presentation given at the 5th International Semantic Web Conference, Nov. 7, 2006, 19 pages.
149Gruber, T. R., "Despite our Best Efforts, Ontologies are not the Problem," AAAI Spring Symposium, Mar. 2008, http://tomgruber.org/writing/aaai-ss08.htm, 40 pages.
150Gruber, T. R., "Enterprise Collaboration Management with Intraspect," Intraspect Software, Inc., Instraspect Technical White Paper Jul. 2001, 24 pages.
151Gruber, T. R., "Every ontology is a treaty-a social agreement-among people with some common motive in sharing," Interview by Dr. Miltiadis D. Lytras, Official Quarterly Bulletin of AIS Special Interest Group on Semantic Web and Information Systems, vol. 1, Issue 3, 2004, http://www.sigsemis.org 1, 5 pages.
152Gruber, T. R., "Every ontology is a treaty—a social agreement—among people with some common motive in sharing," Interview by Dr. Miltiadis D. Lytras, Official Quarterly Bulletin of AIS Special Interest Group on Semantic Web and Information Systems, vol. 1, Issue 3, 2004, http://www.sigsemis.org 1, 5 pages.
153Gruber, T. R., "Helping Organizations Collaborate, Communicate, and Learn," Presentation to NASA Ames Research, Mountain View, CA, Mar. 2003, http://tomgruber.org/writing/organizational-intelligence-talk.htm, 30 pages.
154Gruber, T. R., "Intelligence at the Interface: Semantic Technology and the Consumer Internet Experience," Presentation at Semantic Technologies conference (SemTech08), May 20, 2008, http://tonigruber.org/writing.htm, 40 pages.
155Gruber, T. R., "It Is What It Does: The Pragmatics of Ontology for Knowledge Sharing," (c) 2000, 2003, http://www.cidoc-crm.org/docs/symposium-presentations/gruber-cidoc-ontology-2003.pdf, 21 pages.
156Gruber, T. R., "It Is What It Does: The Pragmatics of Ontology for Knowledge Sharing," (c) 2000, 2003, http://www.cidoc-crm.org/docs/symposium—presentations/gruber—cidoc-ontology-2003.pdf, 21 pages.
157Gruber, T. R., "Ontologies, Web 2.0 and Beyond," Apr. 24, 2007, Ontology Summit 2007, http://tomgruber.org/writing/ontolog-social-web-keynote.pdf, 17 pages.
158Gruber, T. R., "Ontology of Folksonomy: A Mash-up of Apples and Oranges," Originally published to the web in 2005, Int'l Journal on Semantic Web & Information Systems, 3(2), 2007, 7 pages.
159Gruber, T. R., "Siri, a Virtual Personal Assistant-Bringing Intelligence to the Interface," Jun. 16, 2009, Keynote presentation at Semantic Technologies conference, Jun. 2009. http://tomgruber.org/writing/semtech09.htm, 22 pages.
160Gruber, T. R., "Siri, a Virtual Personal Assistant—Bringing Intelligence to the Interface," Jun. 16, 2009, Keynote presentation at Semantic Technologies conference, Jun. 2009. http://tomgruber.org/writing/semtech09.htm, 22 pages.
161Gruber, T. R., "TagOntology," Presentation to Tag Camp, www.tagcamp.org, Oct. 29, 2005, 20 pages.
162Gruber, T. R., "Toward Principles for the Design of Ontologies Used for Knowledge Sharing," In International Journal Human-Computer Studies 43, p. 907-928, substantial revision of paper presented at the International Workshop on Formal Ontology, Mar. 1993, Padova, Italy, available as Technical Report KSL 93-04, Knowledge Systems Laboratory, Stanford University, further revised Aug. 23, 1993, 23 pages.
163Gruber, T. R., "Where the Social Web meets the Semantic Web," Presentation at the 5th International Semantic Web Conference, Nov. 7, 2006, 38 pages.
164Gruber, T. R., et al., "An Ontology for Engineering Mathematics," In Jon Doyle, Piero Torasso, & Erik Sandewall, Eds., Fourth International Conference on Principles of Knowledge Representation and Reasoning, Gustav Stresemann Institut, Bonn, Germany, Morgan Kaufmann, 1994, http://www-ksl.stanford.edu/knowledge-sharing/papers/engmath.html, 22 pages.
165Gruber, T. R., et al., "Generative Design Rationale: Beyond the Record and Replay Paradigm," Knowledge Systems Laboratory, Stanford University, Dec. 1991, Technical Report KSL 92-59, Updated Feb. 1993, 24 pages.
166Gruber, T. R., et al., "Machine-generated Explanations of Engineering Models: A Compositional Modeling Approach," (1993) In Proc. International Joint Conference on Artificial Intelligence, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.930, 7 pages.
167Gruber, T. R., et al., "Toward a Knowledge Medium for Collaborative Product Development," In Artificial Intelligence in Design 1992, from Proceedings of the Second International Conference on Artificial Intelligence in Design, Pittsburgh, USA, Jun. 22-25, 1992, 19 pages.
168Gruber, T. R., et al.,"NIKE: A National Infrastructure for Knowledge Exchange," Oct. 1994, http://www.eit.com/papers/nike/nike.html and nike.ps, 10 pages.
169Gruber, T. R., Interactive Acquisition of Justifications: Learning "Why" by Being Told "What" Knowledge Systems Laboratory, Stanford University, Oct. 1990, Technical Report KSL 91-17, Revised Feb. 1991, 24 pages.
170Guida, G., et al., "NLI: A Robust Interface for Natural Language Person-Machine Communication," Int. J. Man-Machine Studies, vol. 17, 1982, 17 pages.
171Guzzoni, D., "Active: A unified platform for building intelligent assistant applications," Oct. 25, 2007, 262 pages.
172Guzzoni, D., et al., "A Unified Platform for Building Intelligent Web Interaction Assistants," Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Computer Society, 4 pages.
173Guzzoni, D., et al., "Active, A Platform for Building Intelligent Operating Rooms," Surgetica 2007 Computer-Aided Medical Interventions: tools and applications, pp. 191-198, Paris, 2007, Sauramps Médical, http://lsro.epfl.ch/page-68384-en.html, 8 pages.
174Guzzoni, D., et al., "Active, A platform for Building Intelligent Software," Computational Intelligence 2006, 5 pages. http://www.informatik.uni-trier.de/˜ley/pers/hd/g/Guzzoni:Didier
175Guzzoni, D., et al., "Active, A Tool for Building Intelligent User Interfaces," ASC 2007, Palma de Mallorca, http://lsro.epfl.ch/page-34241.html, 6 pages.
176Guzzoni, D., et al., "Many Robots Make Short Work," 1996 AAAI Robot Contest, SRI International, 9 pages.
177Guzzoni, D., et al., "Modeling Human-Agent Interaction with Active Ontologies," 2007, AAAI Spring Symposium, Interaction Challenges for Intelligent Assistants, Stanford University, Palo Alto, California, 8 pages.
178Haas, N., et al., "An Approach to Acquiring and Applying Knowledge," Nov. 1980, SRI International, 22 pages.
179Hadidi, R., et al., "Students' Acceptance of Web-Based Course Offerings: An Empirical Assessment," 1998 Proceedings of the Americas Conference on Information Systems (AMCIS), 4 pages.
180Hardawar, D., "Driving app Waze builds its own Siri for hands-free voice control," Feb. 9, 2012, http://venturebeat.com/2012/02/09/driving-app-waze-builds-its-own-siri-for-hands-free-voice-control/, 4 pages.
181Harris, F. J., "On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform," In Proceedings of the IEEE, vol. 66, No. 1, Jan. 1978, 34 pages.
182Hawkins, J., et al., "Hierarchical Temporal Memory: Concepts, Theory, and Terminology," Mar. 27, 2007, Numenta, Inc., 20 pages.
183He, Q., et al., "Personal Security Agent: KQML-Based PKI," The Robotics Institute, Carnegie-Mellon University, paper, Oct. 1, 1997, 14 pages.
184Helm, R., et al., "Building Visual Language Parsers," In Proceedings of CHI'91 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 8 pages.
185Hendrix, G. et al., "Developing a Natural Language Interface to Complex Data," ACM Transactions on Database Systems, vol. 3, No. 2, Jun. 1978, 43 pages.
186Hendrix, G., "Human Engineering for Applied Natural Language Processing," Feb. 1977, SRI International, 27 pages.
187Hendrix, G., "Klaus: A System for Managing Information and Computational Resources," Oct. 1980, SRI International, 34 pages.
188Hendrix, G., "Lifer: A Natural Language Interface Facility," Dec. 1976, SRI Stanford Research Institute, Artificial Intelligence Center, 9 pages.
189Hendrix, G., "Natural-Language Interface," Apr.-Jun. 1982, American Journal of Computational Linguistics, vol. 8, No. 2, 7 pages. Best Copy Available.
190Hendrix, G., "The Lifer Manual: A Guide to Building Practical Natural Language Interfaces," Feb. 1977, SRI International, 76 pages.
191Hendrix, G., et al., "Transportable Natural-Language Interfaces to Databases," Apr. 30, 1981, SRI International, 18 pages.
192Hermansky, H., "Perceptual Linear Predictive (PLP) Analysis of Speech," Journal of the Acoustical Society of America, vol. 87, No. 4, Apr. 1990, 15 pages.
193Hermansky, H., "Recognition of Speech in Additive and Convolutional Noise Based on Rasta Spectral Processing," in proceedings of IEEE International Conference on Acoustics, speech, and Signal Processing (ICASSP'93), Apr. 27-30, 1993, 4 pages.
194Hirschman, L., et al., "Multi-Site Data Collection and Evaluation in Spoken Language Understanding," 1993, Proceedings of the workshop on Human Language Technology, 6 pages.
195Hobbs, J., "Sublanguage and Knowledge," Jun. 1984, SRI International, Artificial Intelligence Center, 30 pages.
196Hobbs, J., et al., "Fastus: A System for Extracting Information from Natural-Language Text," Nov. 19, 1992, SRIi International, Artificial Intelligence Center, 26 pages.
197Hobbs, J., et al.,"Fastus: Extracting Information from Natural-Language Texts," 1992, SRI International, Artificial Intelligence Center, 22 pages.
198Hodjat, B., et al., "Iterative Statistical Language Model Generation for Use with an Agent-Oriented Natural Language Interface," vol. 4 of the Proceedings of HCI International 2003, 7 pages.
199Hoehfeld, M., et al., "Learning with Limited Numerical Precision Using the Cascade-Correlation Algorithm," IEEE Transactions on Neural Networks, vol. 3, No. 4, Jul. 1992, 18 pages.
200Holmes, J. N., "Speech Synthesis and Recognition—Stochastic Models for Word Recognition," Speech Synthesis and Recognition, Published by Chapman & Hall, London, ISBN 0 412 53430 4, © 1998 J. N. Holmes, 7 pages.
201Hon, H.W., et al., "CMU Robust Vocabulary-Independent Speech Recognition System," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-91), Apr. 14-17, 1991, 4 pages.
202Huang, X., et al., "The Sphinx-II Speech Recognition System: An Overview," Jan. 15, 1992, Computer, Speech and Language, 14 pages.
203IBM Technical Disclosure Bulletin, "Integrated Audio-Graphics User Interface," vol. 33, No. 11, Apr. 1991, 4 pages.
204IBM Technical Disclosure Bulletin, "Speech Editor," vol. 29, No. 10, Mar. 10, 1987, 3 pages.
205IBM Technical Disclosure Bulletin, "Speech Recognition with Hidden Markov Models of Speech Waveforms," vol. 34, No. 1, Jun. 1991, 10 pages.
206International Preliminary Examination Report dated Apr. 10, 1995, received in International Application No. PCT/US1993/12637, which corresponds to U.S. Appl. No. 07/999,354, 7 pages. (Alejandro Acero).
207International Preliminary Examination Report dated Feb. 28, 1996, received in International Application No. PCT/US1994/11011, which corresponds to U.S. Appl. No. 08/129,679, 4 pages. (Yen-Lu Chow).
208International Preliminary Examination Report dated Mar. 1, 1995, received in International Application No. PCT/US1993/12666, which corresponds to U.S. Appl. No. 07/999,302, 5 pages. (Robert Don Strong).
209International Search Report and Written Opinion dated Nov. 29, 2011, received in International Application No. PCT/US2011/20861, which corresponds to U.S. Appl. No. 12/987,982, 15 pages. (Thomas Robert Gruber).
210International Search Report dated Feb. 8, 1995, received in International Application No. PCT/US1994/11011, which corresponds to U.S. Appl. No. 08/129,679, 7 pages. (Yen-Lu Chow).
211International Search Report dated Nov. 8, 1995, received in International Application No. PCT/US1994/11011, which corresponds to U.S. Appl. No. 08/129,679, 4 pages. (Yen-Lu Chow).
212International Search Report dated Nov. 8, 1995, received in International Application No. PCT/US1995/08369, which corresponds to U.S. Appl. No. 08/271,639, 6 pages. (Peter V. De Souza).
213International Search Report dated Nov. 9, 1994, received in International Application No. PCT/US1993/12666, which corresponds to U.S. Appl. No. 07/999,302, 8 pages. (Robert Don Strong).
214Intraspect Software, "The Intraspect Knowledge Management Solution: Technical Overview," http://tomgruber.org/writing/intraspect-whitepaper-1998.pdf, 18 pages.
215Issar, S., "Estimation of Language Models for New Spoken Language Applications," Oct. 3-6, 1996, Proceedings of 4th International Conference on Spoken language Processing, Philadelphia, 4 pages.
216Issar, S., et al., "CMU's Robust Spoken Language Understanding System," 1993, Proceedings of EUROSPEECH, 4 pages.
217Jacobs, P. S., et al., "Scisor: Extracting Information from On-Line News," Communications of the ACM, vol. 33, No. 11, Nov. 1990, 10 pages.
218Janas, J., "The Semantics-Based Natural Language Interface to Relational Databases," © Springer-Verlag Berlin Heidelberg 1986, Germany, 48 pages.
219Jelinek, F., "Self-Organized Language Modeling for Speech Recognition," Readings in Speech Recognition, edited by Alex Waibel and Kai-Fu Lee, May 15, 1990, © 1990 Morgan Kaufmann Publishers, Inc., ISBN: 1-55860-124-4, 63 pages.
220Jennings, A., et al., "A Personal News Service Based on a User Model Neural Network," IEICE Transactions on Information and Systems, vol. E75-D, No. 2, Mar. 1992, Tokyo, JP, 12 pages.
221Ji, T., et al., "A Method for Chinese Syllables Recognition based upon Sub-syllable Hidden Markov Model," 1994 International Symposium on Speech, Image Processing and Neural Networks, Apr. 13-16, 1994, Hong Kong, 4 pages.
222Johnson, J., "A Data Management Strategy for Transportable Natural Language Interfaces," Jun. 1989, doctoral thesis submitted to the Department of Computer Science, University of British Columbia, Canada, 285 pages.
223Jones, J., "Speech Recognition for Cyclone," Apple Computer, Inc., E.R.S., Revision 2.9, Sep. 10, 1992, 93 pages.
224Julia, L., et al., "Http://Www.Speech.Sri.Com/Demos/Atis.Html," 1997, Proceedings of AAAI, Spring Symposium, 5 pages.
225Julia, L., et al., Un éditeur interactif de tableaux dessinés à main levée (An Interactive Editor for Hand-Sketched Tables), Traitement du Signal 1995, vol. 12, No. 6, 8 pages. No English Translation Available.
226Kahn, M., et al., "CoABS Grid Scalability Experiments," 2003, Autonomous Agents and Multi-Agent Systems, vol. 7, 8 pages.
227Kamel, M., et al., "A Graph Based Knowledge Retrieval System," © 1990 IEEE, 7 pages.
228Karp, P. D., "A Generic Knowledge-Base Access Protocol," May 12, 1994, http://lecture.cs.buu.ac.th/~f50353/Document/gfp.pdf, 66 pages.
230Katz, B., et al., "Exploiting Lexical Regularities in Designing Natural Language Systems," 1988, Proceedings of the 12th International Conference on Computational Linguistics, Coling'88, Budapest, Hungary, 22 pages.
231Katz, B., "A Three-Step Procedure for Language Generation," Dec. 1980, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 42 pages.
232Katz, B., "Annotating the World Wide Web Using Natural Language," 1997, Proceedings of the 5th Riao Conference on Computer Assisted Information Searching on the Internet, 7 pages.
233Katz, B., "Using English for Indexing and Retrieving," 1988 Proceedings of the 1st Riao Conference on User-Oriented Content-Based Text and Image (RIAO'88), 19 pages.
234Katz, B., et al., "Rextor: A System for Generating Relations from Natural Language," In Proceedings of the ACL Oct. 2000 Workshop on Natural Language Processing and Information Retrieval (NLP&IR), 11 pages.
235Katz, S. M., "Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-35, No. 3, Mar. 1987, 3 pages.
236Kitano, H., "PhiDM-Dialog, An Experimental Speech-to-Speech Dialog Translation System," Jun. 1991 Computer, vol. 24, No. 6, 13 pages.
237Klabbers, E., et al., "Reducing Audible Spectral Discontinuities," IEEE Transactions on Speech and Audio Processing, vol. 9, No. 1, Jan. 2001, 13 pages.
238Klatt, D. H., "Linguistic Uses of Segmental Duration in English: Acoustic and Perceptual Evidence," Journal of the Acoustical Society of America, vol. 59, No. 5, May 1976, 16 pages.
239Kominek, J., et al., "Impact of Durational Outlier Removal from Unit Selection Catalogs," 5th ISCA Speech Synthesis Workshop, Jun. 14-16, 2004, 6 pages.
240Konolige, K., "A Framework for a Portable Natural-Language Interface to Large Data Bases," Oct. 12, 1979, SRI International, Artificial Intelligence Center, 54 pages.
241Kubala, F., et al., "Speaker Adaptation from a Speaker-Independent Training Corpus," International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), Apr. 3-6, 1990, 4 pages.
242Kubala, F., et al., "The Hub and Spoke Paradigm for CSR Evaluation," Proceedings of the Spoken Language Technology Workshop, Mar. 6-8, 1994, 9 pages.
243Laird, J., et al., "Soar: An Architecture for General Intelligence," 1987, Artificial Intelligence vol. 33, 64 pages.
244Langley, P., et al., "A Design for the Icarus Architecture," SIGART Bulletin, vol. 2, No. 4, 6 pages.
245Larks, "Intelligent Software Agents: Larks," 2006, downloaded on Mar. 15, 2013 from http://www.cs.cmu.edu/larks.html, 2 pages.
246Lee, K.F., "Large-Vocabulary Speaker-Independent Continuous Speech Recognition: The Sphinx System," Apr. 18, 1988, Partial fulfillment of the requirements for the degree of Doctor of Philosophy, Computer Science Department, Carnegie Mellon University, 195 pages.
247Lee, L., et al., "Golden Mandarin(II)-An Improved Single-Chip Real-Time Mandarin Dictation Machine for Chinese Language with Very Large Vocabulary," 0-7803-0946-4/93 © 1993 IEEE, 4 pages.
248Lee, L., et al., "Golden Mandarin(II)-An Intelligent Mandarin Dictation Machine for Chinese Character Input with Adaptation/Learning Functions," International Symposium on Speech, Image Processing and Neural Networks, Apr. 13-16, 1994, Hong Kong, 5 pages.
249Lee, L., et al., "A Real-Time Mandarin Dictation Machine for Chinese Language with Unlimited Texts and Very Large Vocabulary," International Conference on Acoustics, Speech and Signal Processing, vol. 1, Apr. 3-6, 1990, 5 pages.
250Lee, L., et al., "System Description of Golden Mandarin (I) Voice Input for Unlimited Chinese Characters," International Conference on Computer Processing of Chinese & Oriental Languages, vol. 5, Nos. 3 & 4, Nov. 1991, 16 pages.
251Lemon, O., et al., "Multithreaded Context for Robust Conversational Interfaces: Context-Sensitive Speech Recognition and Interpretation of Corrective Fragments," Sep. 2004, ACM Transactions on Computer-Human Interaction, vol. 11, No. 3, 27 pages.
252Leong, L., et al., "CASIS: A Context-Aware Speech Interface System," IUI'05, Jan. 9-12, 2005, Proceedings of the 10th international conference on Intelligent user interfaces, San Diego, California, USA, 8 pages.
253Lieberman, H., et al., "Out of context: Computer systems that adapt to, and learn from, context," 2000, IBM Systems Journal, vol. 39, Nos. 3/4, 2000, 16 pages.
254Lin, B., et al., "A Distributed Architecture for Cooperative Spoken Dialogue Agents with Coherent Dialogue State and History," 1999, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.272, 4 pages.
255Lin, C.H., et al., "A New Framework for Recognition of Mandarin Syllables With Tones Using Sub-syllabic Units," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-93), Apr. 27-30, 1993, 4 pages.
256Linde, Y., et al., "An Algorithm for Vector Quantizer Design," IEEE Transactions on Communications, vol. 28, No. 1, Jan. 1980, 12 pages.
257Liu, F.H., et al., "Efficient Joint Compensation of Speech for the Effects of Additive Noise and Linear Filtering," IEEE International Conference of Acoustics, Speech, and Signal Processing, ICASSP-92, Mar. 23-26, 1992, 4 pages.
258Logan, B., "Mel Frequency Cepstral Coefficients for Music Modeling," in International Symposium on Music Information Retrieval, 2000, 2 pages.
259Iowegian International, "FIR Filter Properties, dspGuru, Digital Signal Processing Central," http://www.dspguru.com/dsp/faqs/fir/properties, downloaded on Jul. 28, 2010, 6 pages.
260Lowerre, B. T., "The Harpy Speech Recognition System," Doctoral Dissertation, Department of Computer Science, Carnegie Mellon University, Apr. 1976, 20 pages.
261Maghbouleh, A., "An Empirical Comparison of Automatic Decision Tree and Linear Regression Models for Vowel Durations," Revised version of a paper presented at the Computational Phonology in Speech Technology workshop, 1996 annual meeting of the Association for Computational Linguistics in Santa Cruz, California, 7 pages.
262Markel, J. D., et al., "Linear Prediction of Speech," Springer-Verlag, Berlin Heidelberg New York 1976, 12 pages.
263Martin, D., et al., "Building Distributed Software Systems with the Open Agent Architecture," Mar. 23-25, 1998, Proceedings of the Third International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, 23 pages.
264Martin, D., et al., "Development Tools for the Open Agent Architecture," Apr., 1996, Proceedings of the International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, 17 pages.
265Martin, D., et al., "Information Brokering in an Agent Architecture," Apr., 1997, Proceedings of the second International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, 20 pages.
266Martin, D., et al., "PAAM '98 Tutorial: Building and Using Practical Agent Applications," 1998, SRI International, 78 pages.
267Martin, D., et al., "The Open Agent Architecture: A Framework for building distributed software systems," Jan.-Mar. 1999, Applied Artificial Intelligence: An International Journal, vol. 13, No. 1-2, http://adam.cheyer.com/papers/oaa.pdf, 38 pages.
268Martin, P., et al., "Transportability and Generality in a Natural-Language Interface System," Aug. 8-12, 1983, Proceedings of the Eighth International Joint Conference on Artificial Intelligence, West Germany, 21 pages.
269Matiasek, J., et al., "Tamic-P: A System for NL Access to Social Insurance Database," Jun. 17-19, 1999, Proceedings of the 4th International Conference on Applications of Natural Language to Information Systems, Austria, 7 pages.
270McGuire, J., et al., "SHADE: Technology for Knowledge-Based Collaborative Engineering," 1993, Journal of Concurrent Engineering: Applications and Research (CERA), 18 pages.
271Meng, H., et al., "Wheels: A Conversational System in the Automobile Classified Domain," Oct. 1996, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.3022, 4 pages.
272Michos, S.E., et al., "Towards an adaptive natural language interface to command languages," Natural Language Engineering 2 (3), © 1994 Cambridge University Press, 19 pages. Best Copy Available.
273Milstead, J., et al., "Metadata: Cataloging by Any Other Name . . ." Jan. 1999, ONLINE, Copyright © 1999 Information Today, Inc., 18 pages.
274Milward, D., et al., "D2.2: Dynamic Multimodal Interface Reconfiguration," Talk and Look: Tools for Ambient Linguistic Knowledge, Aug. 8, 2006, http://www.ihmc.us/users/nblaylock/Pubs/Files/talk-d2.2.pdf, 69 pages.
276Minker, W., et al., "Hidden Understanding Models for Machine Translation," 1999, Proceedings of ETRW on Interactive Dialogue in Multi-Modal Systems, 4 pages.
277Mitra, P., et al., "A Graph-Oriented Model for Articulation of Ontology Interdependencies," 2000, http://ilpubs.stanford.edu:8090/442/1/2000-20.pdf, 15 pages.
278Modi, P. J., et al., "CMRadar: A Personal Assistant Agent for Calendar Management," © 2004, American Association for Artificial Intelligence, Intelligent Systems Demonstrations, 2 pages.
279Moore, et al., "The Information Warfare Advisor: An Architecture for Interacting with Intelligent Agents Across the Web," Dec. 31, 1998 Proceedings of Americas Conference on Information Systems (AMCIS), 4 pages.
280Moore, R., "Handling Complex Queries in a Distributed Data Base," Oct. 8, 1979, SRI International, Artificial Intelligence Center, 38 pages.
281Moore, R., "Practical Natural-Language Processing by Computer," Oct. 1981, SRI International, Artificial Intelligence Center, 34 pages.
282Moore, R., "The Role of Logic in Knowledge Representation and Commonsense Reasoning," Jun. 1982, SRI International, Artificial Intelligence Center, 19 pages.
283Moore, R., "Using Natural-Language Knowledge Sources in Speech Recognition," Jan. 1999, SRI International, Artificial Intelligence Center, 24 pages.
284Moore, R., et al., "Combining Linguistic and Statistical Knowledge Sources in Natural-Language Processing for ATIS," 1995, SRI International, Artificial Intelligence Center, 4 pages.
285Moore, R., et al., "SRI's Experience with the ATIS Evaluation," Jun. 24-27, 1990, Proceedings of a workshop held at Hidden Valley, Pennsylvania, 4 pages. Best Copy Available.
286Moran, D. B., et al., "Multimodal User Interfaces in the Open Agent Architecture," Proc. of the 1997 International Conference on Intelligent User Interfaces (IUI97), 8 pages.
287Moran, D., "Quantifier Scoping in the SRI Core Language Engine," 1988, Proceedings of the 26th annual meeting on Association for Computational Linguistics, 8 pages.
288Moran, D., et al., "Intelligent Agent-based User Interfaces," Oct. 12-13, 1995, Proceedings of International Workshop on Human Interface Technology, University of Aizu, Japan, 4 pages. http://www.dougmoran.corin/dmoran/PAPERS/oaa-iwhit1995.pdf
289Morgan, B., "Business Objects," (Business Objects for Windows) Business Objects Inc., DBMS Sep. 1992, vol. 5, No. 10, 3 pages.
290Motro, A., "Flex: a Tolerant and Cooperative User Interface to Databases," IEEE Transactions on Knowledge and Data Engineering, vol. 2, No. 2, Jun. 1990, 16 pages.
291Mountford, S. J., et al., "Talking and Listening to Computers," The Art of Human-Computer Interface Design, Copyright © 1990 Apple Computer, Inc. Addison-Wesley Publishing Company, Inc., 17 pages.
292Mozer, M., "An Intelligent Environment Must be Adaptive," Mar./Apr. 1999, IEEE Intelligent Systems, 3 pages.
293Mühlhäuser, M., "Context Aware Voice User Interfaces for Workflow Support," Darmstadt 2007, http://tuprints.ulb.tu-darmstadt.de/876/1/PhD.pdf, 254 pages.
294Murty, K. S. R., et al., "Combining Evidence from Residual Phase and MFCC Features for Speaker Recognition," IEEE Signal Processing Letters, vol. 13, No. 1, Jan. 2006, 4 pages.
295Murveit, H., et al., "Integrating Natural Language Constraints into HMM-based Speech Recognition," 1990 International Conference on Acoustics, Speech, and Signal Processing, Apr. 3-6, 1990, 5 pages.
296Murveit, H., et al., "Speech Recognition in SRI's Resource Management and ATIS Systems," 1991, Proceedings of the workshop on Speech and Natural Language (HTL'91), 7 pages.
297Nakagawa, S., et al., "Speaker Recognition by Combining MFCC and Phase Information," IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), Mar. 14-19, 2010, 4 pages.
298Naone, E., "TR10: Intelligent Software Assistant," Mar.-Apr. 2009, Technology Review, http://www.technologyreview.com/printer-friendly-article.aspx?id=22117, 2 pages.
300Neches, R., "Enabling Technology for Knowledge Sharing," Fall 1991, AI Magazine, pp. 37-56, (21 pages).
301Niesler, T. R., et al., "A Variable-Length Category-Based N-Gram Language Model," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'96), vol. 1, May 7-10, 1996, 6 pages.
302Nöth, E., et al., "Verbmobil: The Use of Prosody in the Linguistic Components of a Speech Understanding System," IEEE Transactions on Speech and Audio Processing, vol. 8, No. 5, Sep. 2000, 14 pages.
303Notice of Allowance dated Apr. 2, 2013, received in U.S. Appl. No. 12/244,713, 13 pages. (Lindahl).
304Notice of Allowance dated Aug. 7, 2012, received in U.S. Appl. No. 12/244,713, 19 pages. (Lindahl).
305Notice of Allowance dated Dec. 5, 2012, received in U.S. Appl. No. 12/244,713, 13 pages. (Lindahl).
306Notice of Allowance dated Jul. 19, 2012, received in U.S. Appl. No. 13/480,422, 21 pages. (Lindahl).
307Notice of Allowance dated Jun. 12, 2013, received in U.S. Appl. No. 13/615,427, 9 pages. (Lindahl).
308Notice of Allowance dated May 10, 2012, received in U.S. Appl. No. 12/244,713, 13 pages. (Lindahl).
309OAA, "The Open Agent Architecture 1.0 Distribution Source Code," Copyright 1999, SRI International, 2 pages.
310Odubiyi, J., et al., "SAIRE—A scalable agent-based information retrieval engine," 1997 Proceedings of the First International Conference on Autonomous Agents, 12 pages.
311Office Action dated Dec. 23, 2011, received in U.S. Appl. No. 12/244,713, 13 pages. (Lindahl).
312Office Action dated Feb. 28, 2013, received in U.S. Appl. No. 13/615,427, 21 pages. (Lindahl).
313Office Action (Ex Parte Quayle) dated Sep. 10, 2012, received in U.S. Appl. No. 12/244,713, 13 pages. (Lindahl).
314Office Communication dated Dec. 7, 2012, received in U.S. Appl. No. 12/244,713, 11 pages. (Lindahl).
315Owei, V., et al., "Natural Language Query Filtration in the Conceptual Query Language," © 1997 IEEE, 11 pages.
316Pannu, A., et al., "A Learning Personal Agent for Text Filtering and Notification," 1996, The Robotics Institute School of Computer Science, Carnegie-Mellon University, 12 pages.
317Papadimitriou, C. H., et al., "Latent Semantic Indexing: A Probabilistic Analysis," Nov. 14, 1997, http://citeseerx.ist.psu.edu/messages/downloadsexceeded.html, 21 pages.
318Parsons, T. W., "Voice and Speech Processing," Linguistics and Technical Fundamentals, Articulatory Phonetics and Phonemics, © 1987 McGraw-Hill, Inc., ISBN: 0-07-0485541-0, 5 pages.
319Parsons, T. W., "Voice and Speech Processing," Pitch and Formant Estimation, © 1987 McGraw-Hill, Inc., ISBN: 0-07-0485541-0, 15 pages.
320Pereira, "Logic for Natural Language Analysis," Jan. 1983, SRI International, Artificial Intelligence Center, 194 pages.
321Perrault, C.R., et al., "Natural-Language Interfaces," Aug. 22, 1986, SRI International, 48 pages.
322Phoenix Solutions, Inc. v. West Interactive Corp., Document 40, Declaration of Christopher Schmandt Regarding the MIT Galaxy System dated Jul. 2, 2010, 162 pages.
323Picone, J., "Continuous Speech Recognition Using Hidden Markov Models," IEEE ASSP Magazine, vol. 7, No. 3, Jul. 1990, 16 pages.
324Pulman, S.G., et al., "Clare: A Combined Language and Reasoning Engine," 1993, Proceedings of JFIT Conference, 8 pages. URL: http://www.cam.sri.com/tr/crc042/paper.ps.Z.
325Rabiner, L. R., et al., "Fundamental of Speech Recognition," © 1993 AT&T, Published by Prentice-Hall, Inc., ISBN: 0-13-285826-6, 17 pages.
326Rabiner, L. R., et al., "Note on the Properties of a Vector Quantizer for LPC Coefficients," The Bell System Technical Journal, vol. 62, No. 8, Oct. 1983, 9 pages.
327Ratcliffe, M., "ClearAccess 2.0 allows SQL searches off-line," (Structured Query Language), ClearAccess Corp., MacWeek Nov. 16, 1992, vol. 6, No. 41, 2 pages.
328Ravishankar, "Efficient Algorithms for Speech Recognition," May 15, 1996, Doctoral Thesis submitted to School of Computer Science, Computer Science Division, Carnegie Mellon University, Pittsburgh, 146 pages.
329Rayner, M., "Abductive Equivalential Translation and its application to Natural Language Database Interfacing," Sep. 1993 Dissertation paper, SRI International, 163 pages.
330Rayner, M., "Linguistic Domain Theories: Natural-Language Database Interfacing from First Principles," 1993, SRI International, Cambridge, 11 pages.
331Rayner, M., et al., "Adapting the Core Language Engine to French and Spanish," May 10, 1996, Cornell University Library, 9 pages. http://arxiv.org/abs/cmp-Ig/9605015.
332Rayner, M., et al., "Deriving Database Queries from Logical Forms by Abductive Definition Expansion," 1992, Proceedings of the Third Conference on Applied Natural Language Processing, ANLC'92, 8 pages.
333Rayner, M., et al., "Spoken Language Translation With Mid-90's Technology: A Case Study," 1993, Eurospeech, ISCA, 4 pages. http://dblp.uni-trier.de/db/conf/interspeech/eurospeech1993.html#RaynerBCCDGKKLPPS93.
334Remde, J. R., et al., "SuperBook: An Automatic Tool for Information Exploration-Hypertext?," In Proceedings of Hypertext'87 papers, Nov. 13-15, 1987, 14 pages.
335Reynolds, C. F., "On-Line Reviews: A New Application of the HICOM Conferencing System," IEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems, Feb. 3, 1989, 4 pages.
336Rice, J., et al., "Monthly Program: Nov. 14, 1995," The San Francisco Bay Area Chapter of ACM SIGCHI, http://www.baychi.org/calendar/19951114/, 2 pages.
337Rice, J., et al., "Using the Web Instead of a Window System," Knowledge Systems Laboratory, Stanford University, (http://tomgruber.org/writing/ks1-95-69.pdf, Sep. 1995.) CHI '96 Proceedings: Conference on Human Factors in Computing Systems, Apr. 13-18, 1996, Vancouver, BC, Canada, 14 pages.
338Rigoll, G., "Speaker Adaptation for Large Vocabulary Speech Recognition Systems Using Speaker Markov Models," International Conference on Acoustics, Speech, and Signal Processing (ICASSP'89), May 23-26, 1989, 4 pages.
339Riley, M. D., "Tree-Based Modelling of Segmental Durations," Talking Machines: Theories, Models, and Designs, 1992 © Elsevier Science Publishers B.V., North-Holland, ISBN: 0-444-89115-3, 15 pages.
340Rivlin, Z., et al., "Maestro: Conductor of Multimedia Analysis Technologies," 1999 SRI International, Communications of the Association for Computing Machinery (CACM), 7 pages.
341Rivoira, S., et al., "Syntax and Semantics in a Word-Sequence Recognition System," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'79), Apr. 1979, 5 pages.
342Roddy, D., et al., "Communication and Collaboration in a Landscape of B2B eMarketplaces," VerticalNet Solutions, white paper, Jun. 15, 2000, 23 pages.
343Rosenfeld, R., "A Maximum Entropy Approach to Adaptive Statistical Language Modelling," Computer Speech and Language, vol. 10, No. 3, Jul. 1996, 25 pages.
344Roszkiewicz, A., "Extending your Apple," Back Talk—Lip Service, A+ Magazine, The Independent Guide for Apple Computing, vol. 2, No. 2, Feb. 1984, 5 pages.
345Rudnicky, A.I., et al. (1999). "Creating Natural Dialogs in the Carnegie Mellon Communicator System," Proceedings of Eurospeech, 4:1531-1534.
346Russell, S., et al., "Artificial Intelligence, A Modern Approach," © 1995 Prentice Hall, Inc., 121 pages.
347Sacerdoti, E., et al., "A Ladder User's Guide (Revised)," Mar. 1980, SRI International, Artificial Intelligence Center, 39 pages.
348Sagalowicz, D., "A D-Ladder User's Guide," Sep. 1980, SRI International, 42 pages.
349Sakoe, H., et al., "Dynamic Programming Algorithm Optimization for Spoken Word Recognition," IEEE Transactions on Acoustics, Speech, and Signal Processing, Feb. 1978, vol. ASSP-26 No. 1, 8 pages.
350Salton, G., et al., "On the Application of Syntactic Methodologies in Automatic Text Analysis," Information Processing and Management, vol. 26, No. 1, Great Britain 1990, 22 pages.
351Sameshima, Y., et al., "Authorization with security attributes and privilege delegation Access control beyond the ACL," Computer Communications, vol. 20, 1997, 9 pages.
352San-Segundo, R., et al., "Confidence Measures for Dialogue Management in the CU Communicator System," Jun. 5-9, 2000, Proceedings of Acoustics, Speech, and Signal Processing (ICASSP'00), 4 pages.
353Sato, H., "A Data Model, Knowledge Base, and Natural Language Processing for Sharing a Large Statistical Database," 1989, Statistical and Scientific Database Management, Lecture Notes in Computer Science, vol. 339, 20 pages.
354Savoy, J., "Searching Information in Hypertext Systems Using Multiple Sources of Evidence," International Journal of Man-Machine Studies, vol. 38, No. 6, Jun. 1993, 15 pages.
355Scagliola, C., "Language Models and Search Algorithms for Real-Time Speech Recognition," International Journal of Man-Machine Studies, vol. 22, No. 5, 1985, 25 pages.
356Schmandt, C., et al., "Augmenting a Window System with Speech Input," IEEE Computer Society, Computer Aug. 1990, vol. 23, No. 8, 8 pages.
357Schnelle, D., "Context Aware Voice User Interfaces for Workflow Support," Aug. 27, 2007, Dissertation paper, 254 pages.
358Schutze, H., "Dimensions of Meaning," Proceedings of Supercomputing'92 Conference, Nov. 16-20, 1992, 10 pages.
359Seneff, S., et al., "A New Restaurant Guide Conversational System: Issues in Rapid Prototyping for Specialized Domains," Oct. 1996, citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.16...rep..., 4 pages.
360Sharoff, S., et al., "Register-domain Separation as a Methodology for Development of Natural Language Interfaces to Databases," 1999, Proceedings of Human-Computer Interaction (INTERACT'99), 7 pages.
361Sheth, B., et al., "Evolving Agents for Personalized Information Filtering," In Proceedings of the Ninth Conference on Artificial Intelligence for Applications, Mar. 1-5, 1993, 9 pages.
362Sheth, A., et al., "Relationships at the Heart of Semantic Web: Modeling, Discovering, and Exploiting Complex Semantic Relationships," Oct. 13, 2002, Enhancing the Power of the Internet: Studies in Fuzziness and Soft Computing, Springer-Verlag, 38 pages.
363Shikano, K., et al., "Speaker Adaptation Through Vector Quantization," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'86), vol. 11, Apr. 1986, 4 pages.
364Shimazu, H., et al., "CAPIT: Natural Language Interface Design Tool with Keyword Analyzer and Case-Based Parser," NEC Research & Development, vol. 33, No. 4, Oct. 1992, 11 pages.
365Shinkle, L., "Team User's Guide," Nov. 1984, SRI International, Artificial Intelligence Center, 78 pages.
366Shklar, L., et al., "Info Harness: Use of Automatically Generated Metadata for Search and Retrieval of Heterogeneous Information," 1995 Proceedings of CAiSE'95, Finland.
367Sigurdsson, S., et al., "Mel Frequency Cepstral Coefficients: an Evaluation of Robustness of MP3 Encoded Music," in Proceedings of the 7th International Conference on Music Information Retrieval (ISMIR), 2006, 4 pages.
368Silverman, K. E. A., et al., "Using a Sigmoid Transformation for Improved Modeling of Phoneme Duration," Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Mar. 15-19, 1999, 5 pages.
369Simonite, T., "One Easy Way to Make Siri Smarter," Oct. 18, 2011, Technology Review, http:// www.technologyreview.com/printer-friendly-article.aspx?id=38915, 2 pages.
371Singh, N., "Unifying Heterogeneous Information Models," 1998 Communications of the ACM, 13 pages.
372SRI2009, "SRI Speech: Products: Software Development Kits: EduSpeak," 2009, 2 pages, available at http://web.archive.org/web/20090828084033/http://www.speechatsri.com/products/eduspeak.shtml.
373Starr, B., et al., "Knowledge-Intensive Query Processing," May 31, 1998, Proceedings of the 5th KRDB Workshop, Seattle, 6 pages.
374Stent, A., et al., "The CommandTalk Spoken Dialogue System," 1999, http://acl.ldc.upenn.edu/P/P99/P99-1024.pdf, 8 pages.
375Stern, R., et al. "Multiple Approaches to Robust Speech Recognition," 1992, Proceedings of Speech and Natural Language Workshop, 6 pages.
376Stickel, "A Nonclausal Connection-Graph Resolution Theorem-Proving Program," 1982, Proceedings of AAAI'82, 5 pages.
377Sugumaran, V., "A Distributed Intelligent Agent-Based Spatial Decision Support System," Dec. 31, 1998, Proceedings of the Americas Conference on Information systems (AMCIS), 4 pages.
378Sycara, K., et al., "Coordination of Multiple Intelligent Software Agents," International Journal of Cooperative Information Systems (IJCIS), vol. 5, Nos. 2 & 3, Jun. & Sep. 1996, 33 pages.
379Sycara, K., et al., "Distributed Intelligent Agents," IEEE Expert, vol. 11, No. 6, Dec. 1996, 32 pages.
380Sycara, K., et al., "Dynamic Service Matchmaking Among Agents in Open Information Environments ," 1999, SIGMOD Record, 7 pages.
381Sycara, K., et al., "The RETSINA MAS Infrastructure," 2003, Autonomous Agents and Multi-Agent Systems, vol. 7, 20 pages.
382Tenenbaum, A.M., et al., "Data Structures Using Pascal," 1981 Prentice-Hall, Inc., 34 pages.
383Tofel, K., et al., "Speaktoit: A personal assistant for older iPhones, iPads," Feb. 9, 2012, http://gigaom.com/apple/speaktoit-siri-for-older-iphones-ipads/, 7 pages.
384Tsai, W.H., et al., "Attributed Grammar-A Tool for Combining Syntactic and Statistical Approaches to Pattern Recognition," IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-10, No. 12, Dec. 1980, 13 pages.
385Tucker, J., "Too lazy to grab your TV remote? Use Siri instead," Nov. 30, 2011, http://www.engadget.com/2011/11/30/too-lazy-to-grab-your-tv-remote-use-siri-instead/, 8 pages.
386Tur, G., et al., "The CALO Meeting Speech Recognition and Understanding System," 2008, Proc. IEEE Spoken Language Technology Workshop, 4 pages.
387Tur, G., et al., "The-CALO-Meeting-Assistant System," IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, No. 6, Aug. 2010, 11 pages.
388Tyson, M., et al., "Domain-Independent Task Specification in the TACITUS Natural Language System," May 1990, SRI International, Artificial Intelligence Center, 16 pages.
389Udell, J., "Computer Telephony," BYTE, vol. 19, No. 7, Jul. 1, 1994, 9 pages.
390van Santen, J. P. H., "Contextual Effects on Vowel Duration," Journal Speech Communication, vol. 11, No. 6, Dec. 1992, 34 pages.
391Vepa, J., et al., "New Objective Distance Measures for Spectral Discontinuities in Concatenative Speech Synthesis," In Proceedings of the IEEE 2002 Workshop on Speech Synthesis, 4 pages.
392Verschelde, J., "Matlab Lecture 8. Special Matrices in MATLAB," Nov. 23, 2005, UIC Dept. of Math., Stat., & C.S., MCS 320, Introduction to Symbolic Computation, 4 pages.
393Vingron, M. "Near-Optimal Sequence Alignment," Deutsches Krebsforschungszentrum (DKFZ), Abteilung Theoretische Bioinformatik, Heidelberg, Germany, Jun. 1996, 20 pages.
394Vlingo, "Vlingo Launches Voice Enablement Application on Apple App Store," Vlingo press release dated Dec. 3, 2008, 2 pages.
395Wahlster, W., et al., "Smartkom: multimodal communication with a life-like character," 2001 Eurospeech - Scandinavia, 7th European Conference on Speech Communication and Technology, 5 pages.
396Waldinger, R., et al., "Deductive Question Answering from Multiple Resources," 2003, New Directions in Question Answering, published by AAAI, Menlo Park, 22 pages.
397Walker, D., et al., "Natural Language Access to Medical Text," Mar. 1981, SRI International, Artificial Intelligence Center, 23 pages.
398Waltz, D., "An English Language Question Answering System for a Large Relational Database," © 1978 ACM, vol. 21, No. 7, 14 pages.
399Ward, W., "The CMU Air Travel Information Service: Understanding Spontaneous Speech," 3 pages.
400Ward, W., et al., "A Class Based Language Model for Speech Recognition," © 1996 IEEE, 3 pages.
401Ward, W., et al., "Recent Improvements in the CMU Spoken Language Understanding System," 1994, ARPA Human Language Technology Workshop, 4 pages.
402Warren, D.H.D., et al., "An Efficient Easily Adaptable System for Interpreting Natural Language Queries," Jul.-Dec. 1982, American Journal of Computational Linguistics, vol. 8, No. 3-4, 11 pages. Best Copy Available.
403Weizenbaum, J., "Eliza—A Computer Program for the Study of Natural Language Communication Between Man and Machine," Communications of the ACM, vol. 9, No. 1, Jan. 1966, 10 pages.
404Werner, S., et al., "Prosodic Aspects of Speech," Universite de Lausanne, Switzerland, 1994, Fundamentals of Speech Synthesis and Speech Recognition: Basic Concepts, State of the Art, and Future Challenges, 18 pages.
405Wikipedia, "Mel Scale," Wikipedia, the free encyclopedia, last modified page date: Oct. 13, 2009, http://en.wikipedia.org/wiki/Mel_scale, 2 pages.
406Wikipedia, "Minimum Phase," Wikipedia, the free encyclopedia, last modified page date: Jan. 12, 2010, http://en.wikipedia.org/wiki/Minimum_phase, 8 pages.
407Winiwarter, W., "Adaptive Natural Language Interfaces to FAQ Knowledge Bases," Jun. 17-19, 1999, Proceedings of 4th International Conference on Applications of Natural Language to Information Systems, Austria, 22 pages.
408Wolf, M., "Poststructuralism and the ARTFUL Database: Some Theoretical Considerations," Information Technology and Libraries, vol. 13, No. 1, Mar. 1994, 10 pages.
409Written Opinion dated Aug. 21, 1995, received in International Application No. PCT/US1994/11011, which corresponds to U.S. Appl. No. 08/129,679, 4 pages. (Yen-Lu Chow).
410Wu, M., "Digital Speech Processing and Coding," ENEE408G Capstone-Multimedia Signal Processing, Spring 2003, Lecture-2 course presentation, University of Maryland, College Park, 8 pages.
411Wu, M., "Speech Recognition, Synthesis, and H.C.I.," ENEE408G Capstone-Multimedia Signal Processing, Spring 2003, Lecture-3 course presentation, University of Maryland, College Park, 11 pages.
412Wu, X., et al., "KDA: A Knowledge-based Database Assistant," Data Engineering, Feb. 6-10, 1989, Proceedings of the Fifth International Conference on Data Engineering (IEEE Cat. No. 89CH2695-5), 8 pages.
413Wyle, M. F., "A Wide Area Network Information Filter," In Proceedings of First International Conference on Artificial Intelligence on Wall Street, Oct. 9-11, 1991, 6 pages.
414Yang, J., et al., "Smart Sight: A Tourist Assistant System," 1999 Proceedings of Third International Symposium on Wearable Computers, 6 pages.
415Yankelovich, N., et al., "Intermedia: The Concept and the Construction of a Seamless Information Environment," Computer Magazine, Jan. 1988, © 1988 IEEE, 16 pages.
416Yoon, K., et al., "Letter-to-Sound Rules for Korean," Department of Linguistics, The Ohio State University, 2002, 4 pages.
417Zeng, D., et al., "Cooperative Intelligent Software Agents," The Robotics Institute, Carnegie Mellon University, Mar. 1995, 13 pages.
418Zhao, L., "Intelligent Agents for Flexible Workflow Systems," Oct. 31, 1998, Proceedings of the Americas Conference on Information Systems (AMCIS), 4 pages.
419Zhao, Y., "An Acoustic-Phonetic-Based Speaker Adaptation Technique for Improving Speaker-Independent Continuous Speech Recognition," IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, 15 pages.
420Zovato, E., et al., "Towards Emotional Speech Synthesis: A Rule Based Approach," 5th ISCA Speech Synthesis Workshop—Pittsburgh, Jun. 14-16, 2004, 2 pages.
421Zue, V. W., "Toward Systems that Understand Spoken Language," Feb. 1994, ARPA Strategic Computing Institute, © 1994 IEEE, 9 pages.
422Zue, V., "Conversational Interfaces: Advances and Challenges," Sep. 1997, http://www.cs.cmu.edu/~dod/papers/zue97.pdf, 10 pages.
424Zue, V., et al., "From Interface to Content: Translingual Access and Delivery of On-Line Information," 1997, Eurospeech, 4 pages.
425Zue, V., et al., "Jupiter: A Telephone-Based Conversational Interface for Weather Information," Jan. 2000, IEEE Transactions on Speech and Audio Processing, 13 pages.
426Zue, V., et al., "Pegasus: A Spoken Dialogue Interface for On-Line Air Travel Planning," 1994 Elsevier, Speech Communication 15 (1994), 10 pages.
427Zue, V., et al., "The Voyager Speech Understanding System: Preliminary Development and Evaluation," 1990, Proceedings of IEEE 1990 International Conference on Acoustics, Speech, and Signal Processing, 4 pages.
Referenced by
Citing Patent | Filed | Publication date | Applicant | Title
US9449275 | 2 Jul 2012 | 20 Sep 2016 | Siemens Aktiengesellschaft | Actuation of a technical system based on solutions of relaxed abduction
US9548050 | 9 Jun 2012 | 17 Jan 2017 | Apple Inc. | Intelligent automated assistant
US9582608 | 6 Jun 2014 | 28 Feb 2017 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9620104 | 6 Jun 2014 | 11 Apr 2017 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition
US9626955 | 4 Apr 2016 | 18 Apr 2017 | Apple Inc. | Intelligent text-to-speech conversion
US9633660 | 13 Nov 2015 | 25 Apr 2017 | Apple Inc. | User profiling for voice input processing
US9633674 | 5 Jun 2014 | 25 Apr 2017 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant
US9646614 | 21 Dec 2015 | 9 May 2017 | Apple Inc. | Fast, language-independent method for user authentication by voice
US9668024 | 30 Mar 2016 | 30 May 2017 | Apple Inc. | Intelligent automated assistant for TV user interactions
US9697820 | 7 Dec 2015 | 4 Jul 2017 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US9798393 | 25 Feb 2015 | 24 Oct 2017 | Apple Inc. | Text correction processing
Classifications
U.S. Classification: 709/206, 704/257, 709/227, 709/217, 704/270.1, 704/270, 709/204, 700/94, 704/275, 379/93.24
International Classification: G06F15/16
Cooperative Classification: G10L15/1822, G06F17/30023, G10L21/06, G10L2015/223, G10L15/22, G10L15/30, G06F3/167