WO2013102892A1 - A system and method for generating personalized sensor-based activation of software - Google Patents

A system and method for generating personalized sensor-based activation of software Download PDF

Info

Publication number
WO2013102892A1
Authority
WO
WIPO (PCT)
Prior art keywords
application
recognizer
user interface
action
natural
Prior art date
Application number
PCT/IB2013/050123
Other languages
French (fr)
Inventor
Danny Weissberg
Original Assignee
Technologies Of Voice Interface Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Technologies Of Voice Interface Ltd filed Critical Technologies Of Voice Interface Ltd
Publication of WO2013102892A1 publication Critical patent/WO2013102892A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/18 Speech classification or search using natural language modelling
    • G10L 15/183 Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/28 Constructional details of speech recognition systems
    • G10L 15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command

Definitions

  • the present invention relates to sensor-based activation of software.
  • graphical user interfaces in applications, enabling users to quickly provide input to applications.
  • a user uses a graphical user interface, a user operates an input device such as a mouse, keyboard or touchscreen to manipulate graphical objects on a computer display.
  • the graphical objects are often represented as icons, elements or widgets such as buttons, check boxes, toolbars, hyperlinks etc., and the user can operate an input device, such as a mouse, to activate the application through the graphical object.
  • the user can activate a required function in the application by positioning a mouse pointer over any form of user interface element such as a button and clicking the mouse or by using a touchpad.
  • Natural User Interface (NUI) (voice, motion etc.) is known in the art where input (activation) is provided in a more natural form such as voice and motion.
  • voice instructions may be given to a mobile phone while driving in order to remain "hands-free".
  • Such instructions may include playing music, dialing a specific phone number etc.
  • Other examples can include tapping or shaking a device to activate it.
  • NUI is becoming increasingly popular and may be used in diverse applications such as games, healthcare (doctors, paramedics) systems, social networks applications, inventory, field sales-force applications, computer support applications, interactive menu systems, games, educational software, etc.
  • a system for providing a natural user interface to an application includes a personalized natural user interface to at least partially override a user interface of the application and a control element to receive an action from the personalized natural user interface and to provide a control instruction to the application.
  • the system for providing a natural user interface to an application also includes an input device to receive natural input and where the input device is at least one of a microphone, a motion sensor, an accelerometer, a gyroscope, a magnetometer, a camera, a location sensor and a temperature sensor.
  • the personalized natural user interface includes at least one recognizer to recognize at least one natural input as a formatted output.
  • the recognizer is at least one of a speech recognizer, a voice recognizer, a motion recognizer, a location recognizer, a temperature recognizer and a facial expression recognizer.
  • the recognizer includes an external support analyzer to analyze if the recognizing is to be performed on at least one of a local server and a remote server.
  • the personalized natural user interface also includes a database to store associations of formatted output and associated actions.
  • the associations are one of: pre-determined and user generated.
  • the database also stores application context and properties of elements of the user interface of the application.
  • the personalized natural user interface includes a coordinator to receive the formatted output, to generate at least one action associated with the formatted output from the database and to provide at least one action to the control element.
  • the personalized natural user interface also includes a configurer to receive the natural input from a user and associate the natural input with at least one action.
  • the control element includes an agent to communicate with the application.
  • the agent is one of an emulator, a native user interface agent and a web browser add-on.
  • the control element is a software development kit (SDK).
  • the application is one of a web-based application and a non-web-based application.
  • the converting also includes receiving natural input via an input device and where the input device is at least one of a microphone, a motion sensor, an accelerometer, a gyroscope, a magnetometer, a camera, a location sensor and a temperature sensor.
  • the converting also includes recognizing via a recognizer said natural input as a formatted output.
  • the recognizer is at least one of a speech recognizer, a voice recognizer, a motion recognizer, a location recognizer, a temperature recognizer and a facial expression recognizer.
  • the converting also includes saving the associated action or chains of actions in a database.
  • the associations are one of: pre-determined and user generated.
  • the converting also includes generating the action or chain of actions associated with the formatted output.
  • the converting also includes configuring the natural input to be associated with the action or chain of actions.
  • the providing includes communicating with the application via an agent.
  • the providing also includes communicating with the application via an SDK.
  • the agent is one of an emulator, a native user interface agent and a web browser add-on.
  • the application is one of a web-based application and a non-web-based application.
  • Figs. 1A and 1B are schematic illustrations of a unit for providing a control instruction from a personalized natural user interface, constructed and operative in accordance with the present invention.
  • Fig. 2 is a block diagram of a personalized natural interface activation system in accordance with the present invention.
  • Fig. 3 is a flowchart of the runtime functionality of the personalized natural interface activation system of Fig. 2, in accordance with the present invention.
  • Fig. 4 is a schematic illustration of how more than one input or event may be configured to map to one or more actions for a particular application context in accordance with the present invention
  • FIG. 5 is a schematic illustration of how associations are held in the database in accordance with the present invention.
  • Fig. 6 is a flowchart of the lookup functionality of the coordinator of Fig. 2 in accordance with the present invention.
  • Fig. 7 is a flowchart of the action retrieval functionality of the coordinator of Fig. 2, in accordance with the present invention.
  • Fig. 8 is a flowchart of the configuration functionality of the configurer of Fig. 2, in accordance with the present invention.
  • FIG. 9 is a schematic illustration of the communication methods between the unit of Figs. 1 and 2 and an application, in accordance with the present invention.
  • FIG. 10 is a schematic illustration of a user interface wizard in accordance with the present invention.
  • FIG. 11 is a schematic illustration of a seamless UI in accordance with the present invention.
  • Fig. 12 is a schematic illustration of storage and sharing of the database of Fig. 2 on a remote server, in accordance with the present invention
  • Fig. 13 is a block diagram of the system of Fig. 2 together with an external recognizer in accordance with the present invention.
  • a soccer voice control game may be limited to the control word "kick" to kick a ball, whereas the user may wish to use the word "go" or a sound "boom" instead.
  • Applicants have also realized that there are other forms of computerized devices aside from those discussed in the background, such as large household appliances - air conditioner units, ovens, refrigerators etc. - as well as smaller appliances such as vacuum cleaners, televisions etc.
  • Such appliances may be considered computerized since they also have a user interface which is controlled by some form of control software. Input is usually provided to the control software of these appliances through the use of touch/switches, remote controls etc. to change appliance settings.
  • Applicants have further realized that an extra interface to provide a NUI may be added between a user and any control software or 3rd party application installed on a computing device and that this NUI may be configured and personalized by any user with or without extensive computer knowledge.
  • this personalized NUI may be used to override or partially override the pertinent user interface of an application to be controlled by providing an automatic user interface control instruction that may replace the current control.
  • the application in question may receive a pertinent control from the NUI instead of its own user interface.
  • Fig. 1A which illustrates a unit 105A for providing a control instruction from personalized natural user interface (PNUI) 100 via a UI controller 8 to an application 15.
  • a user 5 may say the word "JUMP" with the intention of making a character in an avatar game jump.
  • the speech input may be captured by PNUI 100 and matched to an associated action which may be provided to controller 8.
  • Controller 8 may then forward a control instruction to match the associated action to application 15 which may replace the physical activation of the 'jump' button 3 of application 15.
  • an alternative method of overriding the pertinent user interface of application 15 may be through the use of a software development kit (SDK) 9 in conjunction with PNUI 100 as is illustrated in Fig. 1B which illustrates a unit 105B to be used with application 15 and to which reference is now made.
  • Application 15 may seek a mapping match for a UI control or other application functionality from PNUI 100 via SDK 9 instead of from its own user interface.
  • PNUI 100 may override the UI of application 15 using its application programming interface (API).
  • PNUI 100 may comprise one or more input devices 10 (such as a microphone 10A, a motion sensor 10B, a heat sensor 10C and/or a location sensor 10D), one or more input recognizers 20 (such as a speech recognizer 20A, a motion recognizer 20B, a temperature recognizer 20C and a location recognizer 20D), a database 30, a coordinator 40 and a configurer 50.
  • PNUI 100 may have three main flows, at application start up, during user configuration and at run time. For the sake of clarity, the functionality of PNUI 100 is described with a run-time flow.
  • an input may be received by the relevant input device 10 (step 120) and sent to the relevant recognizer 20 (step 130).
  • a speech signal may be received by microphone 10A and may be forwarded to speech recognizer 20A.
  • motion may be detected using motion sensor 10B and may be forwarded to motion recognizer 20B etc.
  • Each recognizer 20 may interpret the pertinent input (step 140) (audio signal, motion, temperature etc.) and may convert it to a formatted output such as an alphanumeric or text format according to methods known in the art. It will be appreciated that for each type of recognizer 20, the formatted output may vary according to the type of recognizer. Examples include the speech recognizers implemented as Application Programming Interfaces (APIs) and commercially available from Google Inc. or from Nuance Communications, Inc. An exemplary motion detector is Kinect (commercially available from Microsoft Corporation Inc.). It will be appreciated that recognizers 20 may be potentially implemented by a third party API as described herein above or may have a proprietary implementation such as a voice tag recognizer.
  • Voice tags are known in the art and may be used in automated speech recognition in a voice control device to allow users to "speak" commands.
  • recognizers that provide a text output are used in this discussion.
  • the speech signal corresponding to the word "jump" may be picked up by microphone 10A and may be converted into a text output of the word "jump".
  • predefined voice tags may be recognized by speech recognizer 20A, thus providing a different form of output for association.
  • a user can record a voice tag such as "boo" to kick a ball in a soccer game.
  • PNUI 100 may be active for a particular application context or state only. It will be appreciated that some PNUI 100 configurations may be global, i.e. active for all applications in any state of the application, such as the functionality of up, down, right and left keys in a gaming application. Other configurations may be active for a specific application, but on any screen and state of the application; for example, the connection between the "LIKE" voice command and the like button may only be active in the context of the Facebook application and a Facebook screen showing the "like" button.
  • Another context may be the sub-state where the application context may be divided into sub-states. For example, state may be a particular screen, and every change may be identified as a sub-state. Therefore when the screen changes, a particular configuration may no longer be valid. It will be appreciated that context for a particular event may be set during configuration as is described herein below.
  • the current context of application 15 may be obtained during runtime via controller 8 or via SDK 9 which may receive from application 15 information regarding the active application itself and any current context including pertinent states and sub-states as described in more detail herein below.
  • the context of the application 15 may be obtained by coordinator 40 from controller 8 or via SDK 9 as is described hereinabove.
  • the text output of each recognizer 20 may then be passed to coordinator 40 (step 160), which may be connected to database 30.
  • database 30 may store associations between the text output of the appropriate recognizers 20 and desired actions, within a particular context. For example, the word “right” may be associated with the action “move right” for a character within the first three levels of a particular game.
  • Coordinator 40 may attempt to retrieve from database 30 an associated action or actions for the word "right" (step 170) within the correct context and if a suitable match is found (step 180), may provide action instructions to controller 8 (step 190). If no match is found, the process may be repeated.
  • microphone 10A may capture external sound.
  • the raw sound captured may be interpreted into an event using either a speech recognizer 20A (either proprietary or 3rd party or a hybrid of both as described in more detail herein below) or another form of voice recognizer 20 which may recognize voice tags.
  • another form of input device may be motion sensor 10B which may detect motion and may be an accelerometer, a gyroscope, a magnetometer etc. The motion captured may be analyzed by motion recognizer 20B.
  • Other forms of input may include facial expression, location and temperature which may be captured by optical and image sensors such as device cameras, global positioning systems, thermometers, compass sensors, RFID readers and clocks and which all may be analyzed by using a pertinent recognizer 20. It will be appreciated that some of the above mentioned input devices may be built into the operating system of the pertinent device or may be 3rd party software.
  • coordinator 40 may receive as input a single or chain of events from multiple recognizers 20.
  • PNUI 100 may be configured (as described in more detail herein below) to match the motion of a hand wave with the speech input of the word "BYE" to close a particular 3rd party application.
  • Fig. 4 illustrates how more than one input or event may be configured to map to one or more actions for a particular application context.
  • each chain of events may comprise multiple events (such as the hand wave and the "BYE") with the relationship between them set with a pre-defined order together with a pre-defined time delay. It will be appreciated that the events may also be configured to happen simultaneously. It will further be appreciated that one or many events may be mapped to one or many actions.
  • database 30 may hold the matches between "events” (input received via devices 10) and "actions" (instructions to controller 8) within a particular context.
  • a chain of events can be one event or more.
  • the relationship between different events in the same chain may also be stored in database 30, together with instructions as to whether multiple events should happen simultaneously or in a particular order with time delays between them etc. The same information may also be held for actions.
  • Database 30 may be configured by default values pre- configured by the system, by user configuration using configurer 50 or by sharing configurations as part of a social community (described in more detail herein below).
  • an association between an incoming event such as the word “Dad” and the outgoing action “instruct cellphone to call 123456” may be broadened.
  • the command “Brian” may also be associated with the same action “instruct cellphone to call 123456".
  • the words "Dad" and "Brian" may both be used as triggers for the action "instruct cellphone to call 123456", as is illustrated in Fig. 5, which shows examples of different associations that may be held in database 30 and to which reference is now made.
  • an incoming event or chain of events may be configured to map to a particular action or chain of actions.
  • Coordinator 40 may receive an event (step 220) from a recognizer 20. It may then add the event to a current chain of events (step 230). Once the complete chain of events has been received, each event is matched against database 30 (step 240). If a complete match is made of all the events in database 30 (step 250), then the pertinent action or chain of actions is activated (step 270). If no match is made, then a check is made to see if a particular event is missing from the current chain (step 260). If an event is missing, then the event is retrieved (step 220). An illustrative code sketch of this lookup, together with the action dispatch of Fig. 7, is given after this list.
  • each 'action' may be considered an instruction to controller 8.
  • each single event may be configured to match an individual action or a chain of actions.
  • a chain of events may be configured to match a single action or a chain of actions. There may be many permutations of matches between events and actions (two events -> three actions etc.)
  • Fig. 7 illustrates a flow chart of the activation of a chain of actions (or single action) once they have been activated as described herein above.
  • An action is retrieved (either first or subsequent) (step 310) and added to the current list of actions (step 320) together with any pre-defined relationship between the current action and the previous action (step 330). If the relationship has been defined such that both actions are to be performed simultaneously, then the next action is retrieved immediately and added to the list. It will be appreciated that the list may hold more than one action if they are to be performed simultaneously. It will also be appreciated that if the actions are not defined as simultaneous, then only one action will be held on the list at any time.
  • If the relationship between subsequent actions has been defined with a time delay between them, another action is not retrieved; the pertinent time delay is waited and the current action (or actions that are on the current list of actions) is sent to controller 8 (step 340). Once controller 8 has received the pertinent action, it is removed from the list (step 350) and the next action is sought. Steps 310-350 are repeated until controller 8 has been instructed of all the pertinent actions. Once there are no more actions to be retrieved (step 310), the process is stopped.
  • the first type action may be to send information to the pertinent device on which PNUI 100 is installed such as showing a particular picture on a screen, changing its brightness etc.
  • Another type of action in this category may be to prepare voice feedback to user 5 via the device speaker or to vibrate the device.
  • the second type of action may be to activate the device functionalities of the pertinent device. It will be appreciated that all device functionalities have a programmable API.
  • the built-in functionalities may include sending an SMS, turning Wi-Fi/Bluetooth etc. on and off, creating a call, browsing multimedia etc.
  • the third type of action may be instructing a 3rd party application, i.e. overriding a particular control such as pressing a button; in the case of using an SDK to control the 3rd party application (as described herein above), the activation may be through an SDK callback.
  • the fourth type of action may be to activate a secondary 3rd party application via a primary 3rd party application.
  • coordinator 40 may not receive input via input device 10 but from a different form of trigger such as an application based event.
  • an application based event may be the same application providing the input.
  • changes in application resources may be configured to be recognized as suitable input.
  • a significant change in CPU usage by application 15 may be configured to cause the pertinent device on which it is installed to vibrate when a threshold level is reached.
  • configurer 50 may enable user 5 to personalize his own PNUI 100.
  • Configurer 50 may set the above mentioned relationships between events and actions as well as between events in chains of events and actions in chains of actions.
  • Fig. 8 illustrates a flowchart showing the functionality of configurer 50.
  • Configurer 50 may be activated (step 500) and the pertinent application that is to be controlled may be selected (step 505).
  • the input type is selected (step 510), for example whether the pertinent event is triggered by voice, motion etc. It will be appreciated that a particular input type may be pre-defined as the default, such as all input being voice.
  • a value is set (step 520) for the particular event (as described in more detail herein below).
  • the pertinent recognizer 20 may convert the input from the device 10 in question into text output. It is this text output that is assigned a value (label or tag). For example, the incoming speech signal for the word "BYE" may be converted into a text output by speech recognizer 20A and then assigned the label "bye". It is this label that is held in database 30 for further associations with actions.
  • a relationship may be set between the current input and the next input (step 540), such as sequence or timing between them and then the next event is selected (step 510). This looping (step 530) may continue until all pertinent inputs have been received, assigned a label and had the relationships between them defined.
  • the action/command type is selected (step 550), such as voice output, screen output, control of application 15 etc. Once the output type has been determined, the appropriate command may be set (step 560). It will also be appreciated that a particular output type may be defined as a default. For example, for the event that has been labeled "bye", the matching command may be "close down" a particular 3rd party application.
  • If there are additional commands that need to be configured (step 570), the relationship is set between the current command and the next command (step 580), such as sequence or timing, and then the next command is selected (step 370). Once again, "looping" may occur until all the pertinent commands are configured. Once they have all been configured, the pertinent configuration may be stored in database 30.
  • controller 8 may comprise an agent 85 as a communication medium between controller 8 and the UI of application 15.
  • using agent 85, there may be four types of control instruction to application 15, including activating the device functionalities of the pertinent device on which application 15 is installed and instructing a 3rd party application by overriding a particular control such as pressing a button.
  • overriding or partially overriding the device functionalities of the pertinent device as well as 3rd party application controls may require identifying the UI elements from application 15 together with their properties as well as 'hooking' on to them in order to modify them or to emulate their functionality.
  • agent 85 may have the ability to do this during configuration. Agent 85 may also receive any updates as to changes in functionality and/or context from application 15 during runtime such as the creation of a new window in a voice control game. It will also be appreciated, that agent 85 may also instruct application 15 to load itself into a particular context of the application. It will be further appreciated that in the case of an SDK when no controller 8 is present, context information may be received by coordinator 40 via the API of application 15.
  • UI elements may come in various shapes and sizes and may range from buttons, check boxes, labels, progress bars etc. to avatars in gaming, hyperlinks and widgets. It will also be appreciated that all UI elements may have multiple properties such as type, caption, color, size etc. It will be further appreciated that the activation of a UI element may include pressing a button, checking a checkbox, selecting an item in a pull down menu etc. Modification of a UI element may include changing different parameters such as color or size or may include creating or destroying an element.
  • Fig. 9 illustrates how agent 85 may communicate with the UI elements of application 15 in order to identify, 'hook' and control them. It will be further appreciated that there may be different methods of implementation depending on the type of application 15 and its environment. If application 15 is a 3rd party web application (71), a standard method may be to extend the current capabilities and features of the web browser in use (Explorer, Safari etc.). Add-on software may be used that may sit inside the browser and search for UI elements to activate. An alternative method may be using a customized web browser (72) added to agent 85. This may be very useful in devices where there is no add-on support.
  • agent 85 may be required to run in the context of application 15 in order to communicate with its UI elements and to either extract property information for storage in database 30 or to modify their properties.
  • agent 85 may also run within the context of unit 105 and not within application 15.
  • Another embodiment may be through an emulator (74).
  • Such a program may be native to the pertinent device in use and may aid users to develop their own applications by emulating the pertinent operating system. This emulation capability may then be extended by using an add-on agent 85 in order to control application 15. It will be appreciated that all mobile platforms may have emulator software. In some cases, the emulator may be SDK 9.
  • agent 85 may understand the pertinent technology of the UI of application 15 such as Win32, .Net, Java, Objective-C etc. It will also be appreciated that agent 85 may be generic at the technology level and that once agent 85 supports a particular technology, all other applications running with the same technology may also be supported.
  • Once user 5 has selected and configured a UI element, its properties may be stored in database 30 in a manner that will enable the pertinent UI element to be uniquely identified for future use. For example, if user 5 says the word "hey", and the word "hey" has been associated with turning on a particular button, coordinator 40 may need to know which button to instruct controller 8 to activate. It will be appreciated that certain properties of UI elements may be obligatory and certain properties may be optional. For example, obligatory properties for a button may include type - button and caption - "hey" in order to provide a unique identification. Optional parameters may include its position (since the button may be moved) and its color (which may be changed).
  • when controller 8 seeks to identify to which UI element to send its control instruction, coordinator 40 may find a match in database 30 by first seeking obligatory properties. If the obligatory properties searched do not provide a unique element, then the optional properties are also searched. For example, if there are two buttons both stored as type button and with caption "hey", then coordinator 40 may seek an optional property such as the color red in order to identify the pertinent button to be used. It will be further appreciated that controller 8 may retrieve this information from database 30 via coordinator 40 together with the required action.
  • the main purpose of configurer 50 is to enable user 5 to personalize the natural user interface of his applications. It will be further appreciated that there may be different ways for a user 5 to configure system 100 through the use of an appropriate user interface. It will also be appreciated that event values as stored in database 30 may be configured by user 5 by recording his events, i.e. user 5 instructs the pertinent device to start recording and then performs the events (voice, motion etc.) in the desired order with the desired timings between them. It will also be appreciated that events may also be configured manually by selecting from a list of possible values and by manually typing the text of the required value.
  • PNUI 100 may provide a further interface, to aid user 5 configure his events.
  • One such interface may be through the use of an interface wizard, in which user 5 may be presented with different dialog boxes in sequence and may move between the dialog boxes, usually by clicking on a "next" button after he has entered data or configuration information into the current dialog box. If user 5 decides to go back and change any previously entered information, he may do so by clicking on a "previous" button. It will also be appreciated that any standard wizard may be used and may be adapted accordingly for the application in use.
  • Fig. 10 illustrates the stages of using an interface wizard to configure system 100 on a mobile communication device.
  • the wizard is activated, usually manually by selecting the pertinent wizard application.
  • the wizard may also be activated using a pre-configured PNUI 100 command. It will be appreciated that in order for the wizard to complete its task, pertinent devices 10 and recognizers 20 must also be activated.
  • User 5 may then be presented with a list of applications 15 for personalizing with a NUI interface (step 610). It will be appreciated that only the applications that may be supported by PNUI 100 may be presented.
  • the wizard may then present to user 5 a pre-defined list of functionalities available for the selected application (step 620).
  • each functionality may be an action or a chain of actions and may combine actions on UI elements, APIs etc. It will also be appreciated that functionalities may be nested within functionalities represented as menus and sub menus.
  • This may be executed either by recording in conjunction with recording module 55, selecting or defining the event value. For example, if user 5 desires to start application ABC on his mobile phone using the voice input of the word "ON”, he may first select application ABC from the list of available applications, he may then select the start functionality that appears in the list of available functionality for application ABC and then he may hit the record button and record his voice saying the word "ON”. When he has finished speaking, user 5 may stop the recorder and the resultant information may be stored in database 30 for later use.
  • a seamless UI may be used instead of a wizard to configure PNUI 100. It will be appreciated that the seamless UI may allow user 5 to configure an application NUI without leaving the application context and screen, thus eliminating the need for a wizard. It will be further appreciated that the seamless UI may be integrated into the application 15. In order for the seamless UI to function properly, pertinent devices 10 and recognizers 20 must also be activated. It will also be appreciated that the seamless UI may be used in conjunction with the mouse over functionality or touch provided by the host operating system and that agent 85 must be registered to receive 3rd party UI events. It will also be appreciated that the seamless UI may be created by agent 85 during the identification and hooking process of the UI elements of application 15 as described herein above.
  • When user 5 hovers over a UI element belonging to application 15, the element may be identified by agent 85 as configurable and be highlighted.
  • the highlight may be in the form of a pop-up screen or in the form of a change of color, shading etc. of the pertinent element as is illustrated in Fig. 11 to which reference is now made.
  • the 3rd party application user interface (710) may provide various UI elements (710A, 710B, 710C etc.).
  • User 5 may be presented with the same UI screen (720) with the configurable elements highlighted and then may proceed to configure the properties of any pertinent element via a configuration dialogue that may be triggered.
  • agent 85 may also highlight UI elements that have an existing configuration.
  • Configurer 50 may then be activated to configure PNUI 100 according to the configuration settings of user 5. It will be appreciated that a seamless UI may be generic to a particular UI technology (such as Win32, .Net, Java, Web etc.) and does not have to be configured to a particular application. It will be further appreciated that properties concerning the UI elements that have been selected and configured may be stored in database 30 for subsequent configuration at runtime.
  • the properties stored in database 30 for a pertinent UI element of application 15 must provide a unique identification for the element.
  • the properties for button 3 once configured may provide enough identification that button 3 is the jump button for application 15.
  • database 30 may be stored locally on the pertinent computing device or may be stored on a remote account server 70 as illustrated in Fig. 12 to which reference is now made.
  • database 30 may be used with any other computerized device to control the same application 15 such as on cars, tablets, personal computers and smart-TVs.
  • user 5 may configure a command for Facebook which may be also used on a mobile communication device, a tablet, a smart television screen, a personal computer etc. and on different platforms such as Android, iPhone, iMac, Windows etc. It will be appreciated that if user 5 decides that the configuration is only for a specific device or platform, the configuration may be stored locally, else the configuration may be stored on account server 70.
  • database 30 may allow for configuration to be performed by other users and may allow for other users to share the associations and configurations that are held. It will also be appreciated that database 30 may be shared amongst other users by uploading database 30 to a shared server 75 that may be accessed by all users in a particular social group using the standard group sharing rules. It will be further appreciated that synchronization with servers 70 and/or 75 may be done manually by user 5 whenever he has an update or automatically using standard timers.
  • recognizer 20 may be installed on a server 80 instead of or in addition to the local device as is illustrated in Fig. 13 to which reference is now made. It will be appreciated that the recognizer installed on server 80 may be either 3rd party or proprietary. It will also be appreciated that recognizer 20 installed locally on the pertinent device may also be 3rd party or proprietary.
  • Each recognizer 20 may include a sub-module comprising an external support analyzer 22 to decide if analysis of the input is to be performed locally or externally on server 80. For example, speech recognizer 20A may only have the ability to recognize a limited list of words. External support analyzer 22A may decide to activate an external recognizer situated on server 80 in order to recognize the other unsupported words.
  • both recognizers may work in parallel. It will be appreciated that performing recognition locally has the advantage of a fast real-time response, but may be limited to the CPU power of the local device and other resources such as battery life. It will be further appreciated that performing the recognition action on server 80 has the advantage of almost unlimited CPU and resources (cloud) but has the disadvantage of a slow reaction because of the round-trip delay.
  • a user 5 with no technical skills or background may configure a personalized user interface to override or partially override control software such as operating systems and 3rd party applications. It will be further appreciated that user 5 may have the ability to do this across different platforms and over different devices and with the ability to share his configurations and also have the ability to use other user configurations.
  • Embodiments of the present invention may include apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the desired purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk, including floppy disks, optical disks, magnetic-optical disks, read-only memories (ROMs), compact disc read-only memories (CD-ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, Flash memory, or any other type of media suitable for storing electronic instructions and capable of being coupled to a computer system bus.
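
The following Python sketch, referred to above, illustrates in simplified form the lookup of Fig. 6 and the timed action dispatch of Fig. 7. All names, the table contents and the use of a plain dictionary to stand in for database 30 are assumptions made for illustration only and are not part of the application.

    # A chain of recognized events is matched against database 30; the matched
    # chain of actions is then sent to controller 8, honouring any configured
    # time delay between consecutive actions.

    import time

    DATABASE_30 = {
        # chain of events -> list of (delay in seconds before the action, action)
        ("wave", "bye"): [(0.0, "close application")],
        ("right",): [(0.0, "move right")],
    }

    def lookup(event_chain):
        """Return the action chain for a fully matched chain of events, else None."""
        return DATABASE_30.get(tuple(event_chain))

    def dispatch(action_chain, controller):
        """Send each action to the controller, waiting out configured delays."""
        for delay, action in action_chain:
            time.sleep(delay)
            controller(action)

    events = ["wave", "bye"]                 # e.g. a hand wave followed by "BYE"
    actions = lookup(events)
    if actions is not None:
        dispatch(actions, controller=lambda a: print("controller 8 <-", a))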

Abstract

A system for providing a natural user interface to an application including a personalized natural user interface to at least partially override a user interface of said application and a control element to receive an action from the personalized natural user interface and to provide a control instruction to the application.

Description

TITLE OF THE INVENTION
A SYSTEM AND METHOD FOR GENERATING PERSONALIZED SENSOR- BASED ACTIVATION OF SOFTWARE
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims benefit from U.S. Provisional Patent Application No. 61/583,775, filed January 6, 2012, which is hereby incorporated in its entirety by reference.
FIELD OF THE INVENTION
[0002] The present invention relates to sensor-based activation of software.
BACKGROUND OF THE INVENTION
[0003] Conventional computerized devices, such as personal computers, laptop computers, smartphones, tablets and the like, utilize graphical user interfaces in applications, enabling users to quickly provide input to applications. In general, using a graphical user interface, a user operates an input device such as a mouse, keyboard or touchscreen to manipulate graphical objects on a computer display. The graphical objects are often represented as icons, elements or widgets such as buttons, check boxes, toolbars, hyperlinks etc., and the user can operate an input device, such as a mouse, to activate the application through the graphical object. For example, the user can activate a required function in the application by positioning a mouse pointer over any form of user interface element such as a button and clicking the mouse or by using a touchpad.
[0004] Today there is an increasing demand to use a more natural way of communicating with such computerized devices in order to keep up with our busy lives and schedules. Natural User Interface (NUI) (voice, motion etc.) is known in the art where input (activation) is provided in a more natural form such as voice and motion. For example voice instructions may be given to a mobile phone while driving in order to remain "hands-free". Such instructions may include playing music, dialing a specific phone number etc. Other examples can include tapping or shaking a device to activate it. NUI is becoming increasingly popular and may be used in diverse applications such as games, healthcare (doctors, paramedics) systems, social networks applications, inventory, field sales-force applications, computer support applications, interactive menu systems, games, educational software, etc.
SUMMARY OF THE PRESENT INVENTION
[0005] There is provided, in accordance with a preferred embodiment of the present invention a system for providing a natural user interface to an application. The system includes a personalized natural user interface to at least partially override a user interface of the application and a control element to receive an action from the personalized natural user interface and to provide a control instruction to the application.
[0006] Moreover, in accordance with a preferred embodiment of the present invention, the system for providing a natural user interface to an application also includes an input device to receive natural input and where the input device is at least one of a microphone, a motion sensor, an accelerometer, a gyroscope, a magnetometer, a camera, a location sensor and a temperature sensor.
[0007] Further, in accordance with a preferred embodiment of the present invention, the personalized natural user interface includes at least one recognizer to recognize at least one natural input as a formatted output.
[0008] Still further, in accordance with a preferred embodiment of the present invention, the recognizer is at least one of a speech recognizer, a voice recognizer, a motion recognizer, a location recognizer, a temperature recognizer and a facial expression recognizer.
[0009] Additionally, in accordance with a preferred embodiment of the present invention, the recognizer includes an external support analyzer to analyze if the recognizing is to be performed on at least one of a local server and a remote server.
[0010] Moreover, in accordance with a preferred embodiment of the present invention, the personalized natural user interface also includes a database to store associations of formatted output and associated actions. [0011] Further, in accordance with a preferred embodiment of the present invention, the associations are one of: pre-determined and user generated.
[0012] Still further, in accordance with a preferred embodiment of the present invention, the database also stores application context and properties of elements of the user interface of the application.
[0013] Additionally, in accordance with a preferred embodiment of the present invention, the personalized natural user interface includes a coordinator to receive the formatted output, to generate at least one action associated with the formatted output from the database and to provide at least one action to the control element.
[0014] Moreover, in accordance with a preferred embodiment of the present invention, the personalized natural user interface also includes a configurer to receive the natural input from a user and associate the natural input with at least one action.
[0015] Further, in accordance with a preferred embodiment of the present invention, the control element includes an agent to communicate with the application.
[0016] Still further, in accordance with a preferred embodiment of the present invention, the agent is one of an emulator, a native user interface agent and a web browser add-on.
[0017] Additionally, in accordance with a preferred embodiment of the present invention, the control element is a software development kit (SDK).
[0018] Moreover, in accordance with a preferred embodiment of the present invention, the application is one of a web-based application and a non-web-based application.
[0019] There is provided, in accordance with a preferred embodiment of the present invention a method for converting natural input into an associated action or chain of actions and providing the associated action or chain of actions as input to the application. [0020] Moreover, in accordance with a preferred embodiment of the present invention, the converting also includes receiving natural input via an input device and where the input device is at least one of a microphone, a motion sensor, an accelerometer, a gyroscope, a magnetometer, a camera, a location sensor and a temperature sensor.
[0021] Further, in accordance with a preferred embodiment of the present invention, the converting also includes recognizing via a recognizer said natural input as a formatted output.
[0022] Still further, in accordance with a preferred embodiment of the present invention, the recognizer is at least one of a speech recognizer, a voice recognizer, a motion recognizer, a location recognizer, a temperature recognizer and a facial expression recognizer.
[0023] Additionally, in accordance with a preferred embodiment of the present invention, the converting also includes saving the associated action or chains of actions in a database.
[0024] Moreover, in accordance with a preferred embodiment of the present invention, the associations are one of: pre-determined and user generated.
[0025] Further, in accordance with a preferred embodiment of the present invention, the converting also includes generating the action or chain of actions associated with the formatted output.
[0026] Still further, in accordance with a preferred embodiment of the present invention, the converting also includes configuring the natural input to be associated with the action or chain of actions.
[0027] Further, in accordance with a preferred embodiment of the present invention, the providing includes communicating with the application via an agent.
[0028] Additionally, in accordance with a preferred embodiment of the present invention, the providing also includes communicating with the application via an SDK. [0029] Moreover, in accordance with a preferred embodiment of the present invention, the agent is one of an emulator, a native user interface agent and a web browser add-on.
[0030] Further, in accordance with a preferred embodiment of the present invention, the application is one of a web-based application and a non-web-based application.
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
[0032] Figs. 1A and 1B are schematic illustrations of a unit for providing a control instruction from a personalized natural user interface, constructed and operative in accordance with the present invention;
[0033] Fig. 2 is a block diagram of a personalized natural interface activation system in accordance with the present invention;
[0034] Fig. 3 is a flowchart of the runtime functionality of the personalized natural interface activation system of Fig. 2, in accordance with the present invention;
[0035] Fig. 4 is a schematic illustration of how more than one input or event may be configured to map to one or more actions for a particular application context in accordance with the present invention;
[0036] Fig. 5 is a schematic illustration of how associations are held in the database in accordance with the present invention;
[0037] Fig. 6 is a flowchart of the lookup functionality of the coordinator of Fig. 2 in accordance with the present invention;
[0038] Fig. 7 is a flowchart of the action retrieval functionality of the coordinator of Fig. 2, in accordance with the present invention; [0039] Fig. 8 is a flowchart of the configuration functionality of the configurer of Fig. 2, in accordance with the present invention;
[0040] Fig. 9 is a schematic illustration of the communication methods between the unit of Figs. 1 and 2 and an application, in accordance with the present invention;
[0041] Fig. 10 is a schematic illustration of a user interface wizard in accordance with the present invention;
[0042] Fig. 11 is a schematic illustration of a seamless UI in accordance with the present invention;
[0043] Fig. 12 is a schematic illustration of storage and sharing of the database of Fig. 2 on a remote server, in accordance with the present invention;
[0044] Fig. 13 is a block diagram of the system of Fig. 2 together with an external recognizer in accordance with the present invention.
[0045] It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
[0046] In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.
[0047] Applicants have realized that current methods of natural user interface (NUI) and in particular the use of NUI to provide input to 3rd party applications require substantial time and effort in the development of applications and platforms in order to implement such functionality. In order to support NUI for an application, it is necessary to create the correct content and to develop code to support NUI for each application separately. It will further be appreciated that these forms of NUI may also be limiting. For example, a soccer voice control game may be limited to the control word "kick" to kick a ball, whereas the user may wish to use the word "go" or a sound "boom" instead.
[0048] Applicants have also realized that there are other forms of computerized devices aside from those discussed in the background, such as large household appliances - air conditioner units, ovens, refrigerators etc. - as well as smaller appliances such as vacuum cleaners, televisions etc. Such appliances may be considered computerized since they also have a user interface which is controlled by some form of control software. Input is usually provided to the control software of these appliances through the use of touch/switches, remote controls etc. to change appliance settings.
[0049] Applicants have further realized that an extra interface to provide a NUI may be added between a user and any control software or 3rd party application installed on a computing device and that this NUI may be configured and personalized by any user with or without extensive computer knowledge. It will be appreciated that this personalized NUI (PNUI) may be used to override or partially override the pertinent user interface of an application to be controlled by providing an automatic user interface control instruction that may replace the current control. The application in question may receive a pertinent control from the NUI instead of its own user interface. Reference is now made to Fig. 1A which illustrates a unit 105A for providing a control instruction from personalized natural user interface (PNUI) 100 via a UI controller 8 to an application 15. For example, a user 5 may say the word "JUMP" with the intention of making a character in an avatar game jump. The speech input may be captured by PNUI 100 and matched to an associated action which may be provided to controller 8. Controller 8 may then forward a control instruction to match the associated action to application 15 which may replace the physical activation of the 'jump' button 3 of application 15.
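By way of illustration only, the following minimal Python sketch shows one possible way the mapping just described could be realized. The class and method names (PNUI, Controller, press_button) are assumptions made for this sketch and do not appear in the application itself.

    # Sketch of the "JUMP" example: PNUI 100 matches a recognized phrase to an
    # associated action and hands it to controller 8, which replaces the
    # physical activation of the 'jump' button 3 of application 15.

    class AvatarGame:                       # stands in for application 15
        def press_button(self, name):
            print(f"button '{name}' activated")

    class Controller:                       # stands in for UI controller 8
        def __init__(self, application):
            self.application = application

        def send(self, action):
            # Forward a control instruction that emulates the mapped UI element.
            self.application.press_button(action["ui_element"])

    class PNUI:                             # stands in for personalized NUI 100
        def __init__(self, controller):
            self.controller = controller
            self.associations = {}          # formatted output -> action (database 30)

        def associate(self, phrase, action):
            self.associations[phrase] = action

        def on_recognized(self, phrase):
            action = self.associations.get(phrase)
            if action is not None:
                self.controller.send(action)

    game = AvatarGame()
    pnui = PNUI(Controller(game))
    pnui.associate("jump", {"ui_element": "jump"})
    pnui.on_recognized("jump")              # prints: button 'jump' activated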
[0050] It will also be appreciated that an alternative method of overriding the pertinent user interface of application 15 may be through the use of a software development kit (SDK) 9 in conjunction with PNUI 100, as is illustrated in Fig. 1B which illustrates a unit 105B to be used with application 15 and to which reference is now made. Application 15 may seek a mapping match for a UI control or other application functionality from PNUI 100 via SDK 9 instead of from its own user interface. In another embodiment, PNUI 100 may override the UI of application 15 using its application programming interface (API).
[0051] Reference is now made to Fig. 2 which illustrates a PNUI activation system 100 which may be installed on a computerized device in accordance with an embodiment of the present invention. Reference is also made to Fig. 3 which illustrates the runtime flow of events for PNUI 100. PNUI 100 may comprise one or more input devices 10 (such as a microphone 10A, a motion sensor 10B, a heat sensor 10C and/or a location sensor 10D), one or more input recognizers 20 (such as a speech recognizer 20A, a motion recognizer 20B, a temperature recognizer 20C and a location recognizer 20D), a database 30, a coordinator 40 and a configurer 50.
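The composition of Fig. 2 may be sketched, purely for illustration, as follows; the wiring shown (a single speech recognizer and a dictionary-backed coordinator) is a hypothetical assumption rather than a required implementation.

    class PNUISystem:
        """Illustrative composition of the elements of Fig. 2: devices, recognizers, coordinator, configurer."""
        def __init__(self, recognizers, coordinator, configurer):
            # recognizers: mapping of an input-device name to the recognizer that interprets its events,
            # e.g. {"microphone": speech_recognizer, "motion_sensor": motion_recognizer}
            self.recognizers = recognizers
            self.coordinator = coordinator
            self.configurer = configurer

        def on_device_event(self, device_name, raw_input):
            """Route a raw event from an input device 10 to its matching recognizer 20, then to coordinator 40."""
            recognizer = self.recognizers.get(device_name)
            if recognizer is None:
                return None
            formatted_output = recognizer(raw_input)
            return self.coordinator(formatted_output)

    # Hypothetical wiring: a speech recognizer that lower-cases its input and a coordinator that maps text to actions.
    pnui = PNUISystem(
        recognizers={"microphone": lambda signal: signal.lower()},
        coordinator=lambda text: {"jump": "press_jump_button"}.get(text),
        configurer=None)
    action = pnui.on_device_event("microphone", "JUMP")   # -> "press_jump_button"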
[0052] It will be appreciated that PNUI 100 may have three main flows: at application start-up, during user configuration and at run time. For the sake of clarity, the functionality of PNUI 100 is described below according to its run-time flow.
[0053] Once application 15 has been activated (step 110 of Fig. 3), an input (an event) may be received by the relevant input device 10 (step 120) and sent to the relevant recognizer 20 (step 130). It will be appreciated that for each input device 10 in use, there may be one or more matching recognizers 20. For example, a speech signal may be received by microphone 10A and may be forwarded to speech recognizer 20A. Likewise, motion may be detected using motion sensor 10B and may be forwarded to motion recognizer 20B etc.
[0054] Each recognizer 20 may interpret the pertinent input (step 140) (audio signal, motion, temperature etc.) and may convert it to a formatted output, such as an alphanumeric or text format, according to methods known in the art. It will be appreciated that for each type of recognizer 20, the formatted output may vary according to the type of recognizer. Examples of speech recognizers include those implemented as Application Programming Interfaces (APIs) and commercially available from Google Inc. or from Nuance Communications, Inc. An exemplary motion detector is Kinect (commercially available from Microsoft Corporation Inc.). It will be appreciated that recognizers 20 may potentially be implemented by a third party API as described hereinabove or may have a proprietary implementation such as a voice tag recognizer. Voice tags are known in the art and may be used in automated speech recognition in a voice control device to allow users to "speak" commands. For the sake of clarity, recognizers that provide a text output are used in this discussion. For example, for the above mentioned example, the speech signal corresponding to the word "jump" may be picked up by microphone 10A and may be converted into a text output of the word "jump". In an alternative embodiment, predefined voice tags may be recognized by speech recognizer 20A, thus providing a different form of output for association. For example, a user can record a voice tag such as "boo" to kick a ball in a soccer game.
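A minimal sketch of such recognizers, assuming a common text-output interface, is given below; VoiceTagRecognizer and ThirdPartySpeechRecognizer are hypothetical names, and the call to a commercial speech-recognition API is deliberately omitted.

    from abc import ABC, abstractmethod

    class Recognizer(ABC):
        """Each recognizer 20 converts raw input from its device 10 into a formatted output."""
        @abstractmethod
        def recognize(self, raw_input):
            """Return a formatted (here: text) output for the raw input, or None if not recognized."""

    class VoiceTagRecognizer(Recognizer):
        """Proprietary-style recognizer matching pre-recorded voice tags rather than free speech."""
        def __init__(self, voice_tags):
            # voice_tags: mapping of a stored audio signature to its label, e.g. {"sig_boo": "boo"}
            self.voice_tags = voice_tags

        def recognize(self, raw_input):
            return self.voice_tags.get(raw_input)

    class ThirdPartySpeechRecognizer(Recognizer):
        """Wrapper around an external speech-to-text API (the actual API call is omitted here)."""
        def recognize(self, raw_input):
            # In practice this would call a commercial speech-recognition API and return its transcript.
            return str(raw_input).strip().lower()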
[0055] It will also be appreciated that PNUI 100 may be active for a particular application context or state only. It will be appreciated that some PNUI 100 configurations may be global, i.e. active for all applications in any state of the application, such as the functionality of the up, down, right and left keys in a gaming application. Other configurations may be active for a specific application, but on any screen and state of the application; for example, the connection between the "LIKE" voice command and the like button may only be active in the context of the Facebook application and a Facebook screen showing the "like" button. Another context may be the sub-state, where the application context may be divided into sub-states. For example, a state may be a particular screen, and every change within it may be identified as a sub-state. Therefore when the screen changes, a particular configuration may no longer be valid. It will be appreciated that the context for a particular event may be set during configuration as is described hereinbelow.
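One possible, non-limiting representation of such contexts and sub-states is sketched below; the Context fields and the covers rule are assumptions made only for illustration.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class Context:
        """Scope in which an association is active: global, per application, per state or per sub-state."""
        application: Optional[str] = None   # None means active for all applications (global)
        state: Optional[str] = None         # e.g. a particular screen of the application
        sub_state: Optional[str] = None     # e.g. a change within that screen

        def covers(self, other):
            """A configured scope covers a runtime context if every field it constrains matches."""
            return all(mine is None or mine == theirs
                       for mine, theirs in [(self.application, other.application),
                                            (self.state, other.state),
                                            (self.sub_state, other.sub_state)])

    # A "LIKE" command configured for the Facebook application on the screen showing the like button:
    like_scope = Context(application="facebook", state="post_screen")
    runtime = Context(application="facebook", state="post_screen", sub_state="comments_open")
    assert like_scope.covers(runtime)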
[0056] It will be appreciated that the current context of application 15 may be obtained during runtime via controller 8 or via SDK 9, which may receive from application 15 information regarding the active application itself and any current context, including pertinent states and sub-states, as described in more detail hereinbelow.
[0057] The context of application 15 may be obtained by coordinator 40 from controller 8 or via SDK 9 as described hereinabove. The text output of each recognizer 20 may then be passed to coordinator 40 (step 160), which may be connected to database 30. It will be appreciated that database 30 may store associations between the text output of the appropriate recognizers 20 and desired actions, within a particular context. For example, the word "right" may be associated with the action "move right" for a character within the first three levels of a particular game. Coordinator 40 may attempt to retrieve from database 30 an associated action or actions for the word "right" (step 170) within the correct context and, if a suitable match is found (step 180), may provide action instructions to controller 8 (step 190). If no match is found, the process may be repeated.
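The lookup performed by coordinator 40 may be illustrated, under the assumption of a simple dictionary-backed database keyed by context and recognized text, as follows.

    class Coordinator:
        """Looks up the action(s) associated with a recognizer output within the current context."""
        def __init__(self, database):
            # database: dict keyed by (context, recognized_text) -> list of actions
            self.database = database

        def lookup(self, context, recognized_text):
            return self.database.get((context, recognized_text))

    database = {
        (("game_xyz", "levels_1_to_3"), "right"): ["move_character_right"],
    }
    coordinator = Coordinator(database)
    actions = coordinator.lookup(("game_xyz", "levels_1_to_3"), "right")
    # actions == ["move_character_right"]; if None, no instruction is sent and the flow repeats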
[0058] It will be appreciated that there may be multiple forms of input and that for each form of input there may be an appropriate input device 10. For example, microphone 10A may capture external sound. The raw sound captured may be interpreted into an event using either speech recognizer 20A (either proprietary or 3rd party or a hybrid of both, as described in more detail hereinbelow) or another form of voice recognizer 20 which may recognize voice tags. It will also be appreciated that another form of input device may be motion sensor 10B, which may detect motion and may be an accelerometer, a gyroscope, a magnetometer etc. The motion captured may be analyzed by motion recognizer 20B. Other forms of input may include facial expression, location and temperature, which may be captured by optical and image sensors such as device cameras, global positioning systems, thermometers, compass sensors, RFID readers and clocks, and which may all be analyzed by using a pertinent recognizer 20. It will be appreciated that some of the above mentioned input devices may be built into the operating system of the pertinent device or may be 3rd party software.
[0059] It will be appreciated that coordinator 40 may receive as input a single event or a chain of events from multiple recognizers 20. For example, PNUI 100 may be configured (as described in more detail hereinbelow) to match the motion of a hand wave together with the speech input of the word "BYE" to close a particular 3rd party application. Reference is now made to Fig. 4 which illustrates how more than one input or event may be configured to map to one or more actions for a particular application context. It will also be appreciated that each chain of events may comprise multiple events (such as the hand wave and the "BYE"), with the relationship between them set in a pre-defined order together with a pre-defined time delay. It will be appreciated that the events may also be configured to happen simultaneously. It will further be appreciated that one or many events may be mapped to one or many actions.
[0060] It will also be appreciated that database 30 may hold the matches between "events" (input received via devices 10) and "actions" (instructions to controller 8) within a particular context. As described hereinabove, a chain of events may comprise one event or more. It will be further appreciated that the relationship between different events in the same chain may also be stored in database 30, together with instructions as to whether multiple events should happen simultaneously or in a particular order with time delays between them etc. The same information may also be held for actions. Database 30 may be populated with default values pre-configured by the system, by user configuration using configurer 50 or by sharing configurations as part of a social community (described in more detail hereinbelow).
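A possible, illustrative shape for the records held in database 30 is sketched below; the field names (delay_after_previous, simultaneous_with_previous, source) are assumptions and not part of the disclosed schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ChainItem:
        value: str                          # label of the event or action, e.g. "hand_wave" or "close_application"
        delay_after_previous: float = 0.0   # seconds to wait after the previous item
        simultaneous_with_previous: bool = False

    @dataclass
    class Association:
        """One database 30 record: a chain of events mapped to a chain of actions within a context."""
        context: str
        events: List[ChainItem]
        actions: List[ChainItem]
        source: str = "user"                # "default", "user" or "shared" (social community)

    bye_record = Association(
        context="media_player",
        events=[ChainItem("hand_wave"), ChainItem("bye", delay_after_previous=1.0)],
        actions=[ChainItem("close_application")])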
[0061] It will be further appreciated that an association between an incoming event such as the word "Dad" and the outgoing action "instruct cellphone to call 123456" may be broadened. The command "Brian" may also be associated with the same action "instruct cellphone to call 123456". Thus the words "Dad" and "Brian" may both be used as triggers for the action "instruct cellphone to call 123456", as is illustrated in Fig. 5 which shows examples of different associations that may be held in database 30 and to which reference is now made. As discussed hereinabove, an incoming event or chain of events may be configured to map to a particular action or chain of actions.
[0062] Reference is now made to Fig. 6 which illustrates a flow chart of the lookup functionality of coordinator 40. Coordinator 40 may receive an event (step 220) from a recognizer 20. It may then add the event to the current chain of events (step 230). Once the complete chain of events has been received, each event is matched against database 30 (step 240). If a complete match is made of all the events in database 30 (step 250), then the pertinent action or chain of actions is activated (step 270). If no match is made, then a check is made to see if a particular event is missing from the current chain (step 260). If an event is missing, then the missing event is retrieved (step 220). If a partial match is made (some events match and others do not), then a second check may be made to see whether a similar chain of events is stored in database 30 (step 220). If no similar match is found, the current chain of events is cleared and the entire process may be repeated (step 280). It will also be appreciated that matches are also made according to the application context. It will be further appreciated that each 'action' may be considered an instruction to controller 8. It will be further appreciated that each single event may be configured to match an individual action or a chain of actions. Similarly, a chain of events may be configured to match a single action or a chain of actions. There may be many permutations of matches between events and actions (two events -> three actions etc.).
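A much-simplified sketch of this lookup flow, covering only the accumulation of events and the distinction between complete, partial and failed matches, is given below; ChainMatcher is a hypothetical name and does not form part of the disclosed embodiments.

    class ChainMatcher:
        """Simplified version of the lookup flow of Fig. 6: accumulate events and match complete chains."""
        def __init__(self, chains):
            # chains: dict mapping a tuple of event labels to the list of actions to activate
            self.chains = chains
            self.current = []

        def on_event(self, event):
            self.current.append(event)
            key = tuple(self.current)
            if key in self.chains:                        # complete match: activate the actions
                actions = self.chains[key]
                self.current = []
                return actions
            if any(k[:len(key)] == key for k in self.chains):
                return None                               # partial match: wait for the missing event(s)
            self.current = []                             # no similar chain: clear and start again
            return None

    matcher = ChainMatcher({("hand_wave", "bye"): ["close_application"]})
    assert matcher.on_event("hand_wave") is None          # partial match, keep waiting
    assert matcher.on_event("bye") == ["close_application"]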
[0063] Reference is now made to Fig. 7 which illustrates a flow chart of the activation of a chain of actions (or a single action) once it has been activated as described hereinabove. An action is retrieved (either first or subsequent) (step 310) and added to the current list of actions (step 320), together with any pre-defined relationship between the current action and the previous action (step 330). If the relationship has been defined such that both actions are to be performed simultaneously, then the next action is retrieved immediately and added to the list. It will be appreciated that the list may hold more than one action if they are to be performed simultaneously. It will also be appreciated that if the actions are not defined as simultaneous, then only one action will be held on the list at any time. If the relationship between subsequent actions has been defined with a time delay between them, another action is not retrieved; instead, the pertinent time delay is waited and the current action (or the actions that are on the current list of actions) is sent to controller 8 (step 340). Once controller 8 has received the pertinent action, it is removed from the list (step 350) and the next action is sought. Steps 310-350 are repeated until controller 8 has been instructed of all the pertinent actions. Once there are no more actions to be retrieved (step 310), the process is stopped.
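For illustration only, the following sketch dispatches a configured chain of actions to the controller while honouring simultaneous grouping and time delays; the (action, relationship) encoding is an assumption made for the example.

    import time

    def dispatch_actions(chain, controller_send, clock=time.sleep):
        """Send a configured chain of actions to the controller, honouring grouping and delays.

        chain is a list of (action, relationship) pairs where relationship is either
        ("simultaneous",) with the previous action, ("delay", seconds) after it, or None.
        """
        pending = []
        for action, relationship in chain:
            if relationship and relationship[0] == "simultaneous" and pending:
                pending.append(action)            # grouped with the previous action(s)
                continue
            if pending:
                controller_send(pending)          # flush whatever was accumulated so far
                pending = []
            if relationship and relationship[0] == "delay":
                clock(relationship[1])            # wait the configured delay before this action
            pending = [action]
        if pending:
            controller_send(pending)

    # Example: vibrate and show a picture together, then close the application two seconds later.
    dispatch_actions(
        [("vibrate", None), ("show_picture", ("simultaneous",)), ("close_app", ("delay", 2))],
        controller_send=print,
        clock=lambda s: None)   # replace the real wait with a no-op for the example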
[0064] It will be appreciated that there may be four types of actions that may be configured to be activated by an event or series of events. The first type of action may be to send information to the pertinent device on which PNUI 100 is installed, such as showing a particular picture on a screen, changing its brightness etc. Another action in this category may be to prepare voice feedback to user 5 via the device speaker or to vibrate the device. The second type of action may be to activate the device functionalities of the pertinent device. It will be appreciated that all device functionalities have a programmable API. The built-in functionalities may include sending an SMS, turning Wi-Fi/Bluetooth etc. on and off, creating a call, browsing multimedia etc.
[0065] The third type of action may be instructing a 3rd party application, i.e. overriding a particular control such as pressing a button or, in the case of using an SDK to control the 3rd party application (as described hereinabove), activating it through an SDK callback. The fourth type of action may be to activate a secondary 3rd party application via a primary 3rd party application.
[0066] In an alternative embodiment of the present invention, coordinator 40 may not receive input via input device 10 but from a different form of trigger, such as an application based event. It will also be appreciated that the application 15 to be controlled by such an application based event may be the same application providing the input. For example, changes in application resources may be configured to be recognized as suitable input. A significant change in CPU usage by application 15 may be configured to cause the pertinent device on which it is installed to vibrate when a threshold level is reached.
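Such an application-based trigger may be sketched, under hypothetical platform hooks for reading CPU usage and vibrating the device, as follows.

    def cpu_usage_trigger(read_cpu_percent, vibrate, threshold=90.0):
        """Poll an application-based event source and fire an action when the threshold is crossed.

        read_cpu_percent and vibrate are supplied by the host platform; they are placeholders here.
        """
        usage = read_cpu_percent()
        if usage >= threshold:
            vibrate()
            return True
        return False

    # With hypothetical platform hooks:
    fired = cpu_usage_trigger(read_cpu_percent=lambda: 95.0,
                              vibrate=lambda: print("device vibrates"),
                              threshold=90.0)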
[0067] It will be appreciated that configurer 50 may enable user 5 to personalize his own PNUI 100. Configurer 50 may set the above mentioned relationships between events and actions, as well as between events in chains of events and actions in chains of actions. Reference is now made to Fig. 8 which illustrates a flowchart showing the functionality of configurer 50. Configurer 50 may be activated (step 500) and the pertinent application that is to be controlled may be selected (step 505). Next, the input type is selected (step 510), for example whether the pertinent event is triggered by voice, motion etc. It will be appreciated that a particular input type may be pre-defined as the default, such as all input being voice. Once the input and context have been determined, a value is set (step 520) for the particular event (as described in more detail hereinbelow). As described hereinabove, the pertinent recognizer 20 may convert the input from the device 10 in question into text output. It is this text output that is assigned a value (label or tag). For example, the incoming speech signal for the word "BYE" may be converted into a text output by speech recognizer 20A and then assigned the label "bye". It is this label that is held in database 30 for further associations with actions.
[0068] If there is more than one event that requires configuring, a relationship may be set between the current input and the next input (step 540), such as the sequence or timing between them, and then the next event is selected (step 510). This looping (step 530) may continue until all pertinent inputs have been received, assigned a label and had the relationships between them defined. Next, the action/command type is selected (step 550), such as voice output, screen output, control of application 15 etc. Once the output type has been determined, the appropriate command may be set (step 560). It will also be appreciated that a particular output type may be defined as a default. For example, for the event that has been labeled "bye", the matching command may be to "close down" a particular 3rd party application. If there are additional commands that need to be configured (step 570), the relationship is set between the current command and the next command (step 580), such as the sequence or timing, and then the next command is selected (step 550). Once again, 'looping' may occur until all the pertinent commands are configured. Once they have all been configured, the pertinent configuration may be stored in database 30.
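The result of this configuration loop may be illustrated, in much-simplified form, as a single record written to the database; the dictionary layout shown is an assumption made only for the example.

    def configure_association(database, context, events, commands):
        """Store one user-defined association in the database (the step 500-580 flow, much simplified).

        events and commands are lists of (label, relationship) pairs in the order the user defined them,
        where relationship describes timing/sequence relative to the previous item (or None for the first).
        """
        database.setdefault(context, []).append({"events": events, "commands": commands})
        return database

    db = {}
    configure_association(
        db,
        context="application_abc",
        events=[("on", None)],                     # the recorded voice label "ON"
        commands=[("start_application", None)])    # the functionality selected from the list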
[0069] It will be appreciated that in order for unit 105 to fully control application 15, and in order to allow user 5 to configure PNUI 100, controller 8 may comprise an agent 85 as a communication medium between controller 8 and the UI of application 15. As discussed hereinabove, there may be four types of control instruction to application 15, including activating the device functionalities of the pertinent device on which application 15 is installed and instructing a 3rd party application by overriding a particular control such as pressing a button. It will be appreciated that overriding or partially overriding the device functionalities of the pertinent device, as well as 3rd party application controls, may require identifying the UI elements of application 15 together with their properties, as well as 'hooking' on to them in order to modify them or to emulate their functionality. It will also be appreciated that agent 85 may have the ability to do this during configuration. Agent 85 may also receive any updates as to changes in functionality and/or context from application 15 during runtime, such as the creation of a new window in a voice control game. It will also be appreciated that agent 85 may also instruct application 15 to load itself into a particular context of the application. It will be further appreciated that in the case of an SDK, when no controller 8 is present, context information may be received by coordinator 40 via the API of application 15.
[0070] It will also be appreciated that UI elements may come in various shapes and sizes and may range from buttons, check boxes, labels, progress bars etc. to avatars in gaming, hyperlinks and widgets. It will also be appreciated that all UI elements may have multiple properties such as type, caption, color, size etc. It will be further appreciated that the activation of a UI element may include pressing a button, checking a checkbox, selecting an item in a pull down menu etc. Modification of a UI element may include changing different parameters such as color or size, or may include creating or destroying an element.
[0071] Reference is now made to Fig. 9 which illustrates how agent 85 may communicate with the UI elements of application 15 in order to identify, 'hook' and control them. It will be further appreciated that there may be different methods of implementation depending on the type of application 15 and its environment. If application 15 is a 3rd party web application (71), a standard method may be to extend the current capabilities and features of the web browser in use (Explorer, Safari etc.). Add-on software may be used that sits inside the browser and searches for UI elements to activate. An alternative method may be to use a customized web browser (72) added to agent 85. This may be very useful on devices where there is no add-on support.
[0072] For certain non-web based applications 15 (73) using technologies such as Win32, agent 85 may be required to run in the context of application 15 in order to communicate with its UI elements and to either extract property information for storage in database 30 or to modify their properties. In another embodiment, agent 85 may also run within the context of unit 105 and not within application 15. Another embodiment may be through an emulator (74). Such a program may be native to the pertinent device in use and may aid users to develop their own applications by emulating the pertinent operating system. This emulation capability may then be extended by using an add-on agent 85 in order to control application 15. It will be appreciated that all mobile platforms may have emulator software. In some cases, the emulator may be SDK 9.
[0073] It will also be appreciated that agent 85 may understand the pertinent technology of the UI of application 15, such as Win32, .Net, Java, Objective-C etc. It will also be appreciated that agent 85 may be generic at the technology level and that once agent 85 supports a particular technology, all other applications running with the same technology may also be supported.
[0074] Once user 5 has selected and configured a UI element, its properties may be stored in database 30 in a manner that will enable the pertinent UI element to be uniquely identified for future use. For example, if user 5 says the word "hey", and the word "hey" has been associated with turning on a particular button, coordinator 40 may need to know which button to instruct controller 8 to activate. It will be appreciated that certain properties of UI elements may be obligatory and certain properties may be optional. For example, obligatory properties for a button may include its type (button) and its caption ("hey") in order to provide a unique identification. Optional properties may include its position (since the button may be moved) and its color (which may be changed). It will be appreciated that during configuration, all pertinent properties, both obligatory and optional, may be stored in database 30 for a particular UI element. It will be further appreciated that when controller 8 seeks to identify to which UI element to send its control instruction, coordinator 40 may find a match in database 30 by first searching the obligatory properties. If the obligatory properties searched do not identify a unique element, then the optional properties are also searched. For example, if there are two buttons, both stored as type button and with caption "hey", then coordinator 40 may seek an optional property such as the color red in order to identify the pertinent button to be used. It will be further appreciated that controller 8 may retrieve this information from database 30 via coordinator 40 together with the required action.
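This two-stage identification (obligatory properties first, optional properties only when needed) may be illustrated as follows; the property names shown are hypothetical.

    def find_ui_element(stored_elements, obligatory, optional=None):
        """Resolve a configured UI element: match obligatory properties first, then optional ones.

        stored_elements is the list of property dictionaries held in database 30 for the application.
        """
        matches = [e for e in stored_elements
                   if all(e.get(k) == v for k, v in obligatory.items())]
        if len(matches) > 1 and optional:
            matches = [e for e in matches
                       if all(e.get(k) == v for k, v in optional.items())]
        return matches[0] if len(matches) == 1 else None

    elements = [
        {"type": "button", "caption": "hey", "color": "red", "position": (10, 20)},
        {"type": "button", "caption": "hey", "color": "blue", "position": (50, 20)},
    ]
    # Two buttons share the obligatory properties, so the optional colour disambiguates them.
    target = find_ui_element(elements, {"type": "button", "caption": "hey"}, {"color": "red"})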
[0075] It will also be appreciated that the main purpose of configurer 50 is to enable user 5 to personalize the natural user interface of his applications. It will be further appreciated that there may be different ways for user 5 to configure system 100 through the use of an appropriate user interface. It will also be appreciated that event values, as stored in database 30, may be configured by user 5 by recording his events, i.e. user 5 instructs the pertinent device to start recording and then performs the events (voice, motion etc.) in the desired order with the desired timings between them. It will also be appreciated that events may also be configured manually, by selecting from a list of possible values or by manually typing the text of the required value.
[0076] In order to enable such configuration as described hereinabove, PNUI 100 may provide a further interface to aid user 5 in configuring his events. One such interface may be through the use of an interface wizard, in which user 5 may be presented with different dialog boxes in sequence and may move between the dialog boxes, usually by clicking on a "next" button after he has entered data or configuration information into the current dialog box. If user 5 decides to go back and change any previously entered information, he may do so by clicking on a "previous" button. It will also be appreciated that any standard wizard may be used and may be adapted accordingly for the application in use.
[0077] Reference is now made to Fig. 10 which illustrates the stages of using an interface wizard to configure system 100 on a mobile communication device. First, the wizard is activated, usually manually by selecting the pertinent wizard application. In an alternative embodiment, the wizard may also be activated using a pre-configured PNUI 100 command. It will be appreciated that in order for the wizard to complete its task, the pertinent devices 10 and recognizers 20 must also be activated. User 5 may then be presented with a list of applications 15 for personalizing with a NUI interface (step 610). It will be appreciated that only the applications that may be supported by PNUI 100 may be presented. Once a pertinent application 15 has been selected, the wizard may then present to user 5 a pre-defined list of functionalities available for the selected application (step 620). It will be appreciated that each functionality may be an action or a chain of actions and may combine actions on UI elements, APIs etc. It will also be appreciated that functionalities may be nested within functionalities, represented as menus and sub-menus. Once user 5 has selected his desired functionality, he needs to define the event that will be the trigger for the functionality selected (step 630). This may be executed either by recording in conjunction with recording module 55, or by selecting or defining the event value. For example, if user 5 desires to start application ABC on his mobile phone using the voice input of the word "ON", he may first select application ABC from the list of available applications, he may then select the start functionality that appears in the list of available functionalities for application ABC and then he may hit the record button and record his voice saying the word "ON". When he has finished speaking, user 5 may stop the recorder and the resultant information may be stored in database 30 for later use.
[0078] In an alternative embodiment, a seamless UI may be used instead of a wizard to configure PNUI 100. It will be appreciated that the seamless UI may allow user 5 to configure an application NUI without leaving the application context and screen, thus eliminating the need for a wizard. It will be further appreciated that the seamless UI may be integrated into application 15. In order for the seamless UI to function properly, the pertinent devices 10 and recognizers 20 must also be activated. It will also be appreciated that the seamless UI may be used in conjunction with the mouse-over or touch functionality provided by the host operating system and that agent 85 must be registered to receive 3rd party UI events. It will also be appreciated that the seamless UI may be created by agent 85 during the identification and hooking process of the UI elements of application 15 as described hereinabove. When user 5 hovers over a UI element belonging to application 15, the element may be identified by agent 85 as configurable and be highlighted. The highlight may be in the form of a pop-up screen or in the form of a change of color, shading etc. of the pertinent element, as is illustrated in Fig. 11 to which reference is now made. The 3rd party application user interface (710) may provide various UI elements (710A, 710B, 710C etc.). User 5 may be presented with the same UI screen (720) with the configurable elements highlighted and may then proceed to configure the properties of any pertinent element via a configuration dialogue that may be triggered. It will be appreciated that agent 85 may also highlight UI elements that have an existing configuration. Configurer 50 may then be activated to configure PNUI 100 according to the configuration settings of user 5. It will be appreciated that a seamless UI may be generic to a particular UI technology (such as Win32, .Net, Java, Web etc.) and does not have to be configured for a particular application. It will be further appreciated that properties concerning the UI elements that have been selected and configured may be stored in database 30 for subsequent configuration at runtime.
[0079] It will be appreciated that the properties stored in database 30 for a pertinent UI element of application 15 must provide a unique identification for the element. For example, as illustrated in Fig. 1A, the properties for button 3, once configured, may provide enough identification to establish that button 3 is the jump button for application 15.
[0080] It will be appreciated that database 30 may be stored locally on the pertinent computing device or may be stored on a remote account server 70, as illustrated in Fig. 12 to which reference is now made. It will also be appreciated that once configured for a particular application 15, database 30 may be used with any other computerized device to control the same application 15, such as cars, tablets, personal computers and smart TVs. For example, user 5 may configure a command for Facebook which may also be used on a mobile communication device, a tablet, a smart television screen, a personal computer etc. and on different platforms such as Android, iPhone, iMac, Windows etc. It will be appreciated that if user 5 decides that the configuration is only for a specific device or platform, the configuration may be stored locally; otherwise the configuration may be stored on account server 70. It will also be appreciated that the ability to access database 30 remotely may allow configuration to be performed by other users and may allow other users to share the associations and configurations that are held. It will also be appreciated that database 30 may be shared amongst other users by uploading database 30 to a shared server 75 that may be accessed by all users in a particular social group using the standard group sharing rules. It will be further appreciated that synchronization with servers 70 and/or 75 may be done manually by user 5 whenever he has an update, or automatically using standard timers.
[0081] As discussed hereinabove, recognizer 20 may be installed on a server 80 instead of, or in addition to, the local device, as is illustrated in Fig. 13 to which reference is now made. It will be appreciated that the recognizer installed on server 80 may be either 3rd party or proprietary. It will also be appreciated that recognizer 20 installed locally on the pertinent device may also be 3rd party or proprietary. Each recognizer 20 may include a sub-module comprising an external support analyzer 22 to decide if analysis of the input is to be performed locally or externally on server 80. For example, speech recognizer 20A may only have the ability to recognize a limited list of words. External support analyzer 22A may decide to activate an external recognizer situated on server 80 in order to recognize the other, unsupported words. Alternatively, both recognizers (speech recognizer 20A and the external recognizer on server 80) may work in parallel. It will be appreciated that performing recognition locally has the advantage of a fast real-time response, but may be limited by the CPU power of the local device and other resources such as battery life. It will be further appreciated that performing the recognition on server 80 has the advantage of almost unlimited CPU and resources (cloud) but has the disadvantage of a slow reaction because of the round-trip delay.
[0082] Thus it will be appreciated that a user 5 with no technical skills or background may configure a personalized user interface to override or partially override control software such as operating systems and 3rd party applications. It will be further appreciated that user 5 may have the ability to do this across different platforms and over different devices, with the ability to share his configurations and also the ability to use other users' configurations.
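In connection with external support analyzer 22 described in paragraph [0081] above, a non-limiting sketch of the local-versus-server decision is given below; the battery-level probe and the lambda recognizers are placeholders, not part of the disclosed embodiments.

    class ExternalSupportAnalyzer:
        """Decides whether analysis of an input is performed locally or escalated to server 80."""
        def __init__(self, local_recognizer, remote_recognizer, battery_level_fn):
            self.local = local_recognizer
            self.remote = remote_recognizer
            self.battery_level_fn = battery_level_fn      # placeholder for a real device-resource probe

        def recognize(self, raw_input):
            result = self.local(raw_input)
            if result is not None:
                return result                             # supported locally: fast real-time response
            if self.battery_level_fn() < 0.2:
                return None                               # too costly to escalate this time
            return self.remote(raw_input)                 # unsupported word: fall back to the server

    analyzer = ExternalSupportAnalyzer(
        local_recognizer=lambda x: x if x in {"jump", "right", "left"} else None,
        remote_recognizer=lambda x: x,                    # stands in for a cloud recognizer round-trip
        battery_level_fn=lambda: 0.8)
    assert analyzer.recognize("encyclopedia") == "encyclopedia"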
[0083] Unless specifically stated otherwise, as apparent from the preceding discussions, it is appreciated that, throughout the specification, discussions utilizing terms such as "processing," "computing," "calculating," "determining," or the like, refer to the action and/or processes of a computer, computing system, or similar electronic computing device that manipulates and/or transforms data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
[0084] Embodiments of the present invention may include apparatus for performing the operations herein. This apparatus may be specially constructed for the desired purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk, including floppy disks, optical disks, magnetic-optical disks, read-only memories (ROMs), compact disc read-only memories (CD-ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs), magnetic or optical cards, Flash memory, or any other type of media suitable for storing electronic instructions and capable of being coupled to a computer system bus.
[0085] The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the desired method. The desired structure for a variety of these systems will appear from the description below. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
[0086] While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.

Claims

What is claimed is:
1. A system for providing a natural user interface to an application, the system comprising:
a personalized natural user interface to at least partially override a user interface of said application; and a control element to receive an action from said personalized natural user interface and to provide a control instruction to said application.
2. The system according to claim 1 and comprising an input device to receive natural input and wherein said input device is at least one of a microphone, a motion sensor, an accelerometer, a gyroscope, a magnetometer, a camera, a location sensor and a temperature sensor.
3. The system according to claim 1 and wherein said personalized natural user interface comprises at least one recognizer to recognize at least one natural input as a formatted output.
4. The system according to claim 3 wherein said recognizer is at least one of a speech recognizer, a voice recognizer, a motion recognizer, a location recognizer, a temperature recognizer and a facial expression recognizer.
5. The system according to claim 3 and comprising an external support analyzer to analyze if said recognizing is to be performed on at least one of a local server and a remote server.
6. The system according to claim 1 and wherein said personalized natural user interface comprises a database to store associations of formatted output and associated actions.
7. The system according to claim 6 wherein said associations are one of pre-determined and user generated.
8. The system according to claim 6 and also storing application context and properties of elements of said user interface of said application.
9. The system according to claim 1 and wherein said personalized natural user interface comprises a coordinator to receive said formatted output, to generate at least one action associated with said formatted output from said database and to provide said at least one action to said control element.
10. The system according to claim 1 and wherein said personalized natural user interface comprises a configurer to receive said natural input from a user and associate said natural input with at least one action.
11. The system according to claim 1 and wherein said control element comprises an agent to communicate with said application.
12. The system according to claim 11 and wherein said agent is one of an emulator, a native user interface agent and a web browser add-on.
13. The system according to claim 1 and wherein said control element is a software development kit (SDK).
14. The system according to claim 1 and wherein said application is one of a web-based application and a non-web-based application.
15. A method for providing a natural user interface to an application, the method comprising:
converting natural input into an associated action or chain of actions; and providing said associated action or chain of actions as input to said application.
14. The method according to claim 15 and wherein said converting comprises receiving said natural input via an input device and wherein said input device is at least one of a microphone, a motion sensor, an accelerometer, a gyroscope, a magnetometer, a camera, a location sensor and a temperature sensor.
15. The method according to claim 13 and wherein said converting comprises recognizing via a recognizer said natural input as a formatted output.
16. The method according to claim 15 and wherein said recognizer is at least one of a speech recognizer, a voice recognizer, a motion recognizer, a location recognizer, a temperature recognizer and a facial expression recognizer.
17. The method according to claim 15 and wherein said converting comprises saving said associated action or chains of actions in a database.
18. The method according to claim 17 wherein said associations are one of: pre-determined and user generated.
19. The method according to claim 15 and wherein said converting comprises generating said action or chain of actions associated with said formatted output.
20. The method according to claim 13 and wherein said converting comprises configuring said natural input to be associated with said action or chain of actions.
21. The method according to claim 13 and wherein said providing comprises communicating with said application via an agent.
22. The method according to claim 13 and wherein said providing comprises communicating with said application via an SDK.
23. The method according to claim 21 and wherein said agent is one of: an emulator, a native user interface agent and a web browser add-on.
24. The method according to claim 15 and wherein said application is one of a web-based application and a non-web-based application.
PCT/IB2013/050123 2012-01-06 2013-01-07 A system and method for generating personalized sensor-based activation of software WO2013102892A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261583775P 2012-01-06 2012-01-06
US61/583,775 2012-01-06

Publications (1)

Publication Number Publication Date
WO2013102892A1 true WO2013102892A1 (en) 2013-07-11

Family

ID=48745013

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2013/050123 WO2013102892A1 (en) 2012-01-06 2013-01-07 A system and method for generating personalized sensor-based activation of software

Country Status (1)

Country Link
WO (1) WO2013102892A1 (en)

Cited By (90)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2846241A1 (en) * 2013-08-08 2015-03-11 Palantir Technologies Inc. Long click display of a context menu
US9043696B1 (en) 2014-01-03 2015-05-26 Palantir Technologies Inc. Systems and methods for visual definition of data associations
US9116975B2 (en) 2013-10-18 2015-08-25 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive simultaneous querying of multiple data stores
US9123086B1 (en) 2013-01-31 2015-09-01 Palantir Technologies, Inc. Automatically generating event objects from images
US9223773B2 (en) 2013-08-08 2015-12-29 Palatir Technologies Inc. Template system for custom document generation
US9256664B2 (en) 2014-07-03 2016-02-09 Palantir Technologies Inc. System and method for news events detection and visualization
CN105393302A (en) * 2013-07-17 2016-03-09 三星电子株式会社 Multi-level speech recognition
US9335911B1 (en) 2014-12-29 2016-05-10 Palantir Technologies Inc. Interactive user interface for dynamic data analysis exploration and query processing
US9367872B1 (en) 2014-12-22 2016-06-14 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation of bad actor behavior based on automatic clustering of related data in various data structures
US9383911B2 (en) 2008-09-15 2016-07-05 Palantir Technologies, Inc. Modal-less interface enhancements
US9449035B2 (en) 2014-05-02 2016-09-20 Palantir Technologies Inc. Systems and methods for active column filtering
US9454281B2 (en) 2014-09-03 2016-09-27 Palantir Technologies Inc. System for providing dynamic linked panels in user interface
US9454785B1 (en) 2015-07-30 2016-09-27 Palantir Technologies Inc. Systems and user interfaces for holistic, data-driven investigation of bad actor behavior based on clustering and scoring of related data
US9483162B2 (en) 2014-02-20 2016-11-01 Palantir Technologies Inc. Relationship visualizations
US9501851B2 (en) 2014-10-03 2016-11-22 Palantir Technologies Inc. Time-series analysis system
US9552615B2 (en) 2013-12-20 2017-01-24 Palantir Technologies Inc. Automated database analysis to detect malfeasance
US9558352B1 (en) 2014-11-06 2017-01-31 Palantir Technologies Inc. Malicious software detection in a computing system
US9557882B2 (en) 2013-08-09 2017-01-31 Palantir Technologies Inc. Context-sensitive views
US9619557B2 (en) 2014-06-30 2017-04-11 Palantir Technologies, Inc. Systems and methods for key phrase characterization of documents
US9646396B2 (en) 2013-03-15 2017-05-09 Palantir Technologies Inc. Generating object time series and data objects
US9727560B2 (en) 2015-02-25 2017-08-08 Palantir Technologies Inc. Systems and methods for organizing and identifying documents via hierarchies and dimensions of tags
US9727622B2 (en) 2013-12-16 2017-08-08 Palantir Technologies, Inc. Methods and systems for analyzing entity performance
US9767172B2 (en) 2014-10-03 2017-09-19 Palantir Technologies Inc. Data aggregation and analysis system
US9785317B2 (en) 2013-09-24 2017-10-10 Palantir Technologies Inc. Presentation and analysis of user interaction data
US9785773B2 (en) 2014-07-03 2017-10-10 Palantir Technologies Inc. Malware data item analysis
US9817563B1 (en) 2014-12-29 2017-11-14 Palantir Technologies Inc. System and method of generating data points from one or more data stores of data items for chart creation and manipulation
US9823818B1 (en) 2015-12-29 2017-11-21 Palantir Technologies Inc. Systems and interactive user interfaces for automatic generation of temporal representation of data objects
US9852195B2 (en) 2013-03-15 2017-12-26 Palantir Technologies Inc. System and method for generating event visualizations
US9852205B2 (en) 2013-03-15 2017-12-26 Palantir Technologies Inc. Time-sensitive cube
US9857958B2 (en) 2014-04-28 2018-01-02 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive access of, investigation of, and analysis of data objects stored in one or more databases
US9864493B2 (en) 2013-10-07 2018-01-09 Palantir Technologies Inc. Cohort-based presentation of user interaction data
US9870205B1 (en) 2014-12-29 2018-01-16 Palantir Technologies Inc. Storing logical units of program code generated using a dynamic programming notebook user interface
US9880987B2 (en) 2011-08-25 2018-01-30 Palantir Technologies, Inc. System and method for parameterizing documents for automatic workflow generation
US9886467B2 (en) 2015-03-19 2018-02-06 Plantir Technologies Inc. System and method for comparing and visualizing data entities and data entity series
US9891808B2 (en) 2015-03-16 2018-02-13 Palantir Technologies Inc. Interactive user interfaces for location-based data analysis
US9898509B2 (en) 2015-08-28 2018-02-20 Palantir Technologies Inc. Malicious activity detection system capable of efficiently processing data accessed from databases and generating alerts for display in interactive user interfaces
US9898528B2 (en) 2014-12-22 2018-02-20 Palantir Technologies Inc. Concept indexing among database of documents using machine learning techniques
US9898335B1 (en) 2012-10-22 2018-02-20 Palantir Technologies Inc. System and method for batch evaluation programs
US9946738B2 (en) 2014-11-05 2018-04-17 Palantir Technologies, Inc. Universal data pipeline
US9953445B2 (en) 2013-05-07 2018-04-24 Palantir Technologies Inc. Interactive data object map
US9965937B2 (en) 2013-03-15 2018-05-08 Palantir Technologies Inc. External malware data item clustering and analysis
US9965534B2 (en) 2015-09-09 2018-05-08 Palantir Technologies, Inc. Domain-specific language for dataset transformations
US9984133B2 (en) 2014-10-16 2018-05-29 Palantir Technologies Inc. Schematic and database linking system
US9998485B2 (en) 2014-07-03 2018-06-12 Palantir Technologies, Inc. Network intrusion data item clustering and analysis
US9996229B2 (en) 2013-10-03 2018-06-12 Palantir Technologies Inc. Systems and methods for analyzing performance of an entity
US9996595B2 (en) 2015-08-03 2018-06-12 Palantir Technologies, Inc. Providing full data provenance visualization for versioned datasets
US10037314B2 (en) 2013-03-14 2018-07-31 Palantir Technologies, Inc. Mobile reports
US10037383B2 (en) 2013-11-11 2018-07-31 Palantir Technologies, Inc. Simple web search
US10042524B2 (en) 2013-10-18 2018-08-07 Palantir Technologies Inc. Overview user interface of emergency call data of a law enforcement agency
US10102369B2 (en) 2015-08-19 2018-10-16 Palantir Technologies Inc. Checkout system executable code monitoring, and user account compromise determination system
US10180977B2 (en) 2014-03-18 2019-01-15 Palantir Technologies Inc. Determining and extracting changed data from a data source
US10180929B1 (en) 2014-06-30 2019-01-15 Palantir Technologies, Inc. Systems and methods for identifying key phrase clusters within documents
US10198515B1 (en) 2013-12-10 2019-02-05 Palantir Technologies Inc. System and method for aggregating data from a plurality of data sources
US10216801B2 (en) 2013-03-15 2019-02-26 Palantir Technologies Inc. Generating data clusters
US10230746B2 (en) 2014-01-03 2019-03-12 Palantir Technologies Inc. System and method for evaluating network threats and usage
US10229284B2 (en) 2007-02-21 2019-03-12 Palantir Technologies Inc. Providing unique views of data based on changes or rules
US10262047B1 (en) 2013-11-04 2019-04-16 Palantir Technologies Inc. Interactive vehicle information map
US10275778B1 (en) 2013-03-15 2019-04-30 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation based on automatic malfeasance clustering of related data in various data structures
US10296617B1 (en) 2015-10-05 2019-05-21 Palantir Technologies Inc. Searches of highly structured data
US10318630B1 (en) 2016-11-21 2019-06-11 Palantir Technologies Inc. Analysis of large bodies of textual data
US10324609B2 (en) 2016-07-21 2019-06-18 Palantir Technologies Inc. System for providing dynamic linked panels in user interface
US10356032B2 (en) 2013-12-26 2019-07-16 Palantir Technologies Inc. System and method for detecting confidential information emails
US10362133B1 (en) 2014-12-22 2019-07-23 Palantir Technologies Inc. Communication data processing architecture
US10372879B2 (en) 2014-12-31 2019-08-06 Palantir Technologies Inc. Medical claims lead summary report generation
US10403011B1 (en) 2017-07-18 2019-09-03 Palantir Technologies Inc. Passing system with an interactive user interface
US10423582B2 (en) 2011-06-23 2019-09-24 Palantir Technologies, Inc. System and method for investigating large amounts of data
US10437840B1 (en) 2016-08-19 2019-10-08 Palantir Technologies Inc. Focused probabilistic entity resolution from multiple data sources
US10437612B1 (en) 2015-12-30 2019-10-08 Palantir Technologies Inc. Composite graphical interface with shareable data-objects
US10444941B2 (en) 2015-08-17 2019-10-15 Palantir Technologies Inc. Interactive geospatial map
US10452678B2 (en) 2013-03-15 2019-10-22 Palantir Technologies Inc. Filter chains for exploring large data sets
US10460602B1 (en) 2016-12-28 2019-10-29 Palantir Technologies Inc. Interactive vehicle information mapping system
US10484407B2 (en) 2015-08-06 2019-11-19 Palantir Technologies Inc. Systems, methods, user interfaces, and computer-readable media for investigating potential malicious communications
US10489391B1 (en) 2015-08-17 2019-11-26 Palantir Technologies Inc. Systems and methods for grouping and enriching data items accessed from one or more databases for presentation in a user interface
US10552994B2 (en) 2014-12-22 2020-02-04 Palantir Technologies Inc. Systems and interactive user interfaces for dynamic retrieval, analysis, and triage of data items
US10572487B1 (en) 2015-10-30 2020-02-25 Palantir Technologies Inc. Periodic database search manager for multiple data sources
US10678860B1 (en) 2015-12-17 2020-06-09 Palantir Technologies, Inc. Automatic generation of composite datasets based on hierarchical fields
US10698938B2 (en) 2016-03-18 2020-06-30 Palantir Technologies Inc. Systems and methods for organizing and identifying documents via hierarchies and dimensions of tags
US10706434B1 (en) 2015-09-01 2020-07-07 Palantir Technologies Inc. Methods and systems for determining location information
US10719188B2 (en) 2016-07-21 2020-07-21 Palantir Technologies Inc. Cached database and synchronization system for providing dynamic linked panels in user interface
US10754822B1 (en) 2018-04-18 2020-08-25 Palantir Technologies Inc. Systems and methods for ontology migration
US10795723B2 (en) 2014-03-04 2020-10-06 Palantir Technologies Inc. Mobile tasks
US10817513B2 (en) 2013-03-14 2020-10-27 Palantir Technologies Inc. Fair scheduling for mixed-query loads
US10853378B1 (en) 2015-08-25 2020-12-01 Palantir Technologies Inc. Electronic note management via a connected entity graph
US10885021B1 (en) 2018-05-02 2021-01-05 Palantir Technologies Inc. Interactive interpreter and graphical user interface
US10956406B2 (en) 2017-06-12 2021-03-23 Palantir Technologies Inc. Propagated deletion of database records and derived data
US11119630B1 (en) 2018-06-19 2021-09-14 Palantir Technologies Inc. Artificial intelligence assisted evaluations and user interface for same
US11138180B2 (en) 2011-09-02 2021-10-05 Palantir Technologies Inc. Transaction protocol for reading database values
US11150917B2 (en) 2015-08-26 2021-10-19 Palantir Technologies Inc. System for data aggregation and analysis of data from a plurality of data sources
WO2021223178A1 (en) * 2020-05-07 2021-11-11 深圳市欢太科技有限公司 User interface processing method and related apparatus
US11599369B1 (en) 2018-03-08 2023-03-07 Palantir Technologies Inc. Graphical user interface configuration system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000011571A1 (en) * 1998-08-24 2000-03-02 Bcl Computers, Inc. Adaptive natural language interface
US20040249640A1 (en) * 1998-12-23 2004-12-09 Richard Grant Method for integrating processes with a multi-faceted human centered interface
WO2004114034A2 (en) * 2003-06-19 2004-12-29 Schneider Automation Inc. System and method for ocular input to an automation system
US20060136221A1 (en) * 2004-12-22 2006-06-22 Frances James Controlling user interfaces with contextual voice commands

Cited By (162)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10719621B2 (en) 2007-02-21 2020-07-21 Palantir Technologies Inc. Providing unique views of data based on changes or rules
US10229284B2 (en) 2007-02-21 2019-03-12 Palantir Technologies Inc. Providing unique views of data based on changes or rules
US9383911B2 (en) 2008-09-15 2016-07-05 Palantir Technologies, Inc. Modal-less interface enhancements
US10747952B2 (en) 2008-09-15 2020-08-18 Palantir Technologies, Inc. Automatic creation and server push of multiple distinct drafts
US10248294B2 (en) 2008-09-15 2019-04-02 Palantir Technologies, Inc. Modal-less interface enhancements
US10423582B2 (en) 2011-06-23 2019-09-24 Palantir Technologies, Inc. System and method for investigating large amounts of data
US11392550B2 (en) 2011-06-23 2022-07-19 Palantir Technologies Inc. System and method for investigating large amounts of data
US10706220B2 (en) 2011-08-25 2020-07-07 Palantir Technologies, Inc. System and method for parameterizing documents for automatic workflow generation
US9880987B2 (en) 2011-08-25 2018-01-30 Palantir Technologies, Inc. System and method for parameterizing documents for automatic workflow generation
US11138180B2 (en) 2011-09-02 2021-10-05 Palantir Technologies Inc. Transaction protocol for reading database values
US11182204B2 (en) 2012-10-22 2021-11-23 Palantir Technologies Inc. System and method for batch evaluation programs
US9898335B1 (en) 2012-10-22 2018-02-20 Palantir Technologies Inc. System and method for batch evaluation programs
US10743133B2 (en) 2013-01-31 2020-08-11 Palantir Technologies Inc. Populating property values of event objects of an object-centric data model using image metadata
US9380431B1 (en) 2013-01-31 2016-06-28 Palantir Technologies, Inc. Use of teams in a mobile application
US10313833B2 (en) 2013-01-31 2019-06-04 Palantir Technologies Inc. Populating property values of event objects of an object-centric data model using image metadata
US9123086B1 (en) 2013-01-31 2015-09-01 Palantir Technologies, Inc. Automatically generating event objects from images
US10037314B2 (en) 2013-03-14 2018-07-31 Palantir Technologies, Inc. Mobile reports
US10997363B2 (en) 2013-03-14 2021-05-04 Palantir Technologies Inc. Method of generating objects and links from mobile reports
US10817513B2 (en) 2013-03-14 2020-10-27 Palantir Technologies Inc. Fair scheduling for mixed-query loads
US10275778B1 (en) 2013-03-15 2019-04-30 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation based on automatic malfeasance clustering of related data in various data structures
US10977279B2 (en) 2013-03-15 2021-04-13 Palantir Technologies Inc. Time-sensitive cube
US10453229B2 (en) 2013-03-15 2019-10-22 Palantir Technologies Inc. Generating object time series from data objects
US10216801B2 (en) 2013-03-15 2019-02-26 Palantir Technologies Inc. Generating data clusters
US9965937B2 (en) 2013-03-15 2018-05-08 Palantir Technologies Inc. External malware data item clustering and analysis
US9646396B2 (en) 2013-03-15 2017-05-09 Palantir Technologies Inc. Generating object time series and data objects
US10452678B2 (en) 2013-03-15 2019-10-22 Palantir Technologies Inc. Filter chains for exploring large data sets
US9852195B2 (en) 2013-03-15 2017-12-26 Palantir Technologies Inc. System and method for generating event visualizations
US10264014B2 (en) 2013-03-15 2019-04-16 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation based on automatic clustering of related data in various data structures
US9852205B2 (en) 2013-03-15 2017-12-26 Palantir Technologies Inc. Time-sensitive cube
US10482097B2 (en) 2013-03-15 2019-11-19 Palantir Technologies Inc. System and method for generating event visualizations
US9779525B2 (en) 2013-03-15 2017-10-03 Palantir Technologies Inc. Generating object time series from data objects
US10360705B2 (en) 2013-05-07 2019-07-23 Palantir Technologies Inc. Interactive data object map
US9953445B2 (en) 2013-05-07 2018-04-24 Palantir Technologies Inc. Interactive data object map
CN105393302A (en) * 2013-07-17 2016-03-09 三星电子株式会社 Multi-level speech recognition
EP3022733A4 (en) * 2013-07-17 2017-06-14 Samsung Electronics Co., Ltd. Multi-level speech recognition
US9335897B2 (en) 2013-08-08 2016-05-10 Palantir Technologies Inc. Long click display of a context menu
US10699071B2 (en) 2013-08-08 2020-06-30 Palantir Technologies Inc. Systems and methods for template based custom document generation
US9223773B2 (en) 2013-08-08 2015-12-29 Palantir Technologies Inc. Template system for custom document generation
EP2846241A1 (en) * 2013-08-08 2015-03-11 Palantir Technologies Inc. Long click display of a context menu
US10976892B2 (en) 2013-08-08 2021-04-13 Palantir Technologies Inc. Long click display of a context menu
US9557882B2 (en) 2013-08-09 2017-01-31 Palantir Technologies Inc. Context-sensitive views
US9921734B2 (en) 2013-08-09 2018-03-20 Palantir Technologies Inc. Context-sensitive views
US10545655B2 (en) 2013-08-09 2020-01-28 Palantir Technologies Inc. Context-sensitive views
US9785317B2 (en) 2013-09-24 2017-10-10 Palantir Technologies Inc. Presentation and analysis of user interaction data
US10732803B2 (en) 2013-09-24 2020-08-04 Palantir Technologies Inc. Presentation and analysis of user interaction data
US9996229B2 (en) 2013-10-03 2018-06-12 Palantir Technologies Inc. Systems and methods for analyzing performance of an entity
US10635276B2 (en) 2013-10-07 2020-04-28 Palantir Technologies Inc. Cohort-based presentation of user interaction data
US9864493B2 (en) 2013-10-07 2018-01-09 Palantir Technologies Inc. Cohort-based presentation of user interaction data
US9116975B2 (en) 2013-10-18 2015-08-25 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive simultaneous querying of multiple data stores
US9514200B2 (en) 2013-10-18 2016-12-06 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive simultaneous querying of multiple data stores
US10877638B2 (en) 2013-10-18 2020-12-29 Palantir Technologies Inc. Overview user interface of emergency call data of a law enforcement agency
US10042524B2 (en) 2013-10-18 2018-08-07 Palantir Technologies Inc. Overview user interface of emergency call data of a law enforcement agency
US10719527B2 (en) 2013-10-18 2020-07-21 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive simultaneous querying of multiple data stores
US10262047B1 (en) 2013-11-04 2019-04-16 Palantir Technologies Inc. Interactive vehicle information map
US10037383B2 (en) 2013-11-11 2018-07-31 Palantir Technologies, Inc. Simple web search
US11100174B2 (en) 2013-11-11 2021-08-24 Palantir Technologies Inc. Simple web search
US10198515B1 (en) 2013-12-10 2019-02-05 Palantir Technologies Inc. System and method for aggregating data from a plurality of data sources
US11138279B1 (en) 2013-12-10 2021-10-05 Palantir Technologies Inc. System and method for aggregating data from a plurality of data sources
US9734217B2 (en) 2013-12-16 2017-08-15 Palantir Technologies Inc. Methods and systems for analyzing entity performance
US9727622B2 (en) 2013-12-16 2017-08-08 Palantir Technologies, Inc. Methods and systems for analyzing entity performance
US9552615B2 (en) 2013-12-20 2017-01-24 Palantir Technologies Inc. Automated database analysis to detect malfeasance
US10356032B2 (en) 2013-12-26 2019-07-16 Palantir Technologies Inc. System and method for detecting confidential information emails
US10901583B2 (en) 2014-01-03 2021-01-26 Palantir Technologies Inc. Systems and methods for visual definition of data associations
US10805321B2 (en) 2014-01-03 2020-10-13 Palantir Technologies Inc. System and method for evaluating network threats and usage
US10120545B2 (en) 2014-01-03 2018-11-06 Palantir Technologies Inc. Systems and methods for visual definition of data associations
US10230746B2 (en) 2014-01-03 2019-03-12 Palantir Technologies Inc. System and method for evaluating network threats and usage
US9043696B1 (en) 2014-01-03 2015-05-26 Palantir Technologies Inc. Systems and methods for visual definition of data associations
US9483162B2 (en) 2014-02-20 2016-11-01 Palantir Technologies Inc. Relationship visualizations
US10402054B2 (en) 2014-02-20 2019-09-03 Palantir Technologies Inc. Relationship visualizations
US10795723B2 (en) 2014-03-04 2020-10-06 Palantir Technologies Inc. Mobile tasks
US10180977B2 (en) 2014-03-18 2019-01-15 Palantir Technologies Inc. Determining and extracting changed data from a data source
US9857958B2 (en) 2014-04-28 2018-01-02 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive access of, investigation of, and analysis of data objects stored in one or more databases
US10871887B2 (en) 2014-04-28 2020-12-22 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive access of, investigation of, and analysis of data objects stored in one or more databases
US9449035B2 (en) 2014-05-02 2016-09-20 Palantir Technologies Inc. Systems and methods for active column filtering
US10180929B1 (en) 2014-06-30 2019-01-15 Palantir Technologies, Inc. Systems and methods for identifying key phrase clusters within documents
US10162887B2 (en) 2014-06-30 2018-12-25 Palantir Technologies Inc. Systems and methods for key phrase characterization of documents
US11341178B2 (en) 2014-06-30 2022-05-24 Palantir Technologies Inc. Systems and methods for key phrase characterization of documents
US9619557B2 (en) 2014-06-30 2017-04-11 Palantir Technologies, Inc. Systems and methods for key phrase characterization of documents
US9298678B2 (en) 2014-07-03 2016-03-29 Palantir Technologies Inc. System and method for news events detection and visualization
US9785773B2 (en) 2014-07-03 2017-10-10 Palantir Technologies Inc. Malware data item analysis
US9256664B2 (en) 2014-07-03 2016-02-09 Palantir Technologies Inc. System and method for news events detection and visualization
US10798116B2 (en) 2014-07-03 2020-10-06 Palantir Technologies Inc. External malware data item clustering and analysis
US9998485B2 (en) 2014-07-03 2018-06-12 Palantir Technologies, Inc. Network intrusion data item clustering and analysis
US10929436B2 (en) 2014-07-03 2021-02-23 Palantir Technologies Inc. System and method for news events detection and visualization
US9454281B2 (en) 2014-09-03 2016-09-27 Palantir Technologies Inc. System for providing dynamic linked panels in user interface
US9880696B2 (en) 2014-09-03 2018-01-30 Palantir Technologies Inc. System for providing dynamic linked panels in user interface
US10866685B2 (en) 2014-09-03 2020-12-15 Palantir Technologies Inc. System for providing dynamic linked panels in user interface
US10360702B2 (en) 2014-10-03 2019-07-23 Palantir Technologies Inc. Time-series analysis system
US9767172B2 (en) 2014-10-03 2017-09-19 Palantir Technologies Inc. Data aggregation and analysis system
US11004244B2 (en) 2014-10-03 2021-05-11 Palantir Technologies Inc. Time-series analysis system
US10664490B2 (en) 2014-10-03 2020-05-26 Palantir Technologies Inc. Data aggregation and analysis system
US9501851B2 (en) 2014-10-03 2016-11-22 Palantir Technologies Inc. Time-series analysis system
US11275753B2 (en) 2014-10-16 2022-03-15 Palantir Technologies Inc. Schematic and database linking system
US9984133B2 (en) 2014-10-16 2018-05-29 Palantir Technologies Inc. Schematic and database linking system
US9946738B2 (en) 2014-11-05 2018-04-17 Palantir Technologies, Inc. Universal data pipeline
US10191926B2 (en) 2014-11-05 2019-01-29 Palantir Technologies, Inc. Universal data pipeline
US10853338B2 (en) 2014-11-05 2020-12-01 Palantir Technologies Inc. Universal data pipeline
US10728277B2 (en) 2014-11-06 2020-07-28 Palantir Technologies Inc. Malicious software detection in a computing system
US10135863B2 (en) 2014-11-06 2018-11-20 Palantir Technologies Inc. Malicious software detection in a computing system
US9558352B1 (en) 2014-11-06 2017-01-31 Palantir Technologies Inc. Malicious software detection in a computing system
US9898528B2 (en) 2014-12-22 2018-02-20 Palantir Technologies Inc. Concept indexing among database of documents using machine learning techniques
US10447712B2 (en) 2014-12-22 2019-10-15 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation of bad actor behavior based on automatic clustering of related data in various data structures
US9367872B1 (en) 2014-12-22 2016-06-14 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation of bad actor behavior based on automatic clustering of related data in various data structures
US11252248B2 (en) 2014-12-22 2022-02-15 Palantir Technologies Inc. Communication data processing architecture
US10362133B1 (en) 2014-12-22 2019-07-23 Palantir Technologies Inc. Communication data processing architecture
US9589299B2 (en) 2014-12-22 2017-03-07 Palantir Technologies Inc. Systems and user interfaces for dynamic and interactive investigation of bad actor behavior based on automatic clustering of related data in various data structures
US10552994B2 (en) 2014-12-22 2020-02-04 Palantir Technologies Inc. Systems and interactive user interfaces for dynamic retrieval, analysis, and triage of data items
US9870205B1 (en) 2014-12-29 2018-01-16 Palantir Technologies Inc. Storing logical units of program code generated using a dynamic programming notebook user interface
US9817563B1 (en) 2014-12-29 2017-11-14 Palantir Technologies Inc. System and method of generating data points from one or more data stores of data items for chart creation and manipulation
US10552998B2 (en) 2014-12-29 2020-02-04 Palantir Technologies Inc. System and method of generating data points from one or more data stores of data items for chart creation and manipulation
US10127021B1 (en) 2014-12-29 2018-11-13 Palantir Technologies Inc. Storing logical units of program code generated using a dynamic programming notebook user interface
US9870389B2 (en) 2014-12-29 2018-01-16 Palantir Technologies Inc. Interactive user interface for dynamic data analysis exploration and query processing
US10157200B2 (en) 2014-12-29 2018-12-18 Palantir Technologies Inc. Interactive user interface for dynamic data analysis exploration and query processing
US9335911B1 (en) 2014-12-29 2016-05-10 Palantir Technologies Inc. Interactive user interface for dynamic data analysis exploration and query processing
US10838697B2 (en) 2014-12-29 2020-11-17 Palantir Technologies Inc. Storing logical units of program code generated using a dynamic programming notebook user interface
US10372879B2 (en) 2014-12-31 2019-08-06 Palantir Technologies Inc. Medical claims lead summary report generation
US11030581B2 (en) 2014-12-31 2021-06-08 Palantir Technologies Inc. Medical claims lead summary report generation
US10474326B2 (en) 2015-02-25 2019-11-12 Palantir Technologies Inc. Systems and methods for organizing and identifying documents via hierarchies and dimensions of tags
US9727560B2 (en) 2015-02-25 2017-08-08 Palantir Technologies Inc. Systems and methods for organizing and identifying documents via hierarchies and dimensions of tags
US10459619B2 (en) 2015-03-16 2019-10-29 Palantir Technologies Inc. Interactive user interfaces for location-based data analysis
US9891808B2 (en) 2015-03-16 2018-02-13 Palantir Technologies Inc. Interactive user interfaces for location-based data analysis
US9886467B2 (en) 2015-03-19 2018-02-06 Palantir Technologies Inc. System and method for comparing and visualizing data entities and data entity series
US10223748B2 (en) 2015-07-30 2019-03-05 Palantir Technologies Inc. Systems and user interfaces for holistic, data-driven investigation of bad actor behavior based on clustering and scoring of related data
US9454785B1 (en) 2015-07-30 2016-09-27 Palantir Technologies Inc. Systems and user interfaces for holistic, data-driven investigation of bad actor behavior based on clustering and scoring of related data
US11501369B2 (en) 2015-07-30 2022-11-15 Palantir Technologies Inc. Systems and user interfaces for holistic, data-driven investigation of bad actor behavior based on clustering and scoring of related data
US9996595B2 (en) 2015-08-03 2018-06-12 Palantir Technologies, Inc. Providing full data provenance visualization for versioned datasets
US10484407B2 (en) 2015-08-06 2019-11-19 Palantir Technologies Inc. Systems, methods, user interfaces, and computer-readable media for investigating potential malicious communications
US10444941B2 (en) 2015-08-17 2019-10-15 Palantir Technologies Inc. Interactive geospatial map
US10444940B2 (en) 2015-08-17 2019-10-15 Palantir Technologies Inc. Interactive geospatial map
US10489391B1 (en) 2015-08-17 2019-11-26 Palantir Technologies Inc. Systems and methods for grouping and enriching data items accessed from one or more databases for presentation in a user interface
US10102369B2 (en) 2015-08-19 2018-10-16 Palantir Technologies Inc. Checkout system executable code monitoring, and user account compromise determination system
US10922404B2 (en) 2015-08-19 2021-02-16 Palantir Technologies Inc. Checkout system executable code monitoring, and user account compromise determination system
US10853378B1 (en) 2015-08-25 2020-12-01 Palantir Technologies Inc. Electronic note management via a connected entity graph
US11150917B2 (en) 2015-08-26 2021-10-19 Palantir Technologies Inc. System for data aggregation and analysis of data from a plurality of data sources
US11934847B2 (en) 2015-08-26 2024-03-19 Palantir Technologies Inc. System for data aggregation and analysis of data from a plurality of data sources
US10346410B2 (en) 2015-08-28 2019-07-09 Palantir Technologies Inc. Malicious activity detection system capable of efficiently processing data accessed from databases and generating alerts for display in interactive user interfaces
US11048706B2 (en) 2015-08-28 2021-06-29 Palantir Technologies Inc. Malicious activity detection system capable of efficiently processing data accessed from databases and generating alerts for display in interactive user interfaces
US9898509B2 (en) 2015-08-28 2018-02-20 Palantir Technologies Inc. Malicious activity detection system capable of efficiently processing data accessed from databases and generating alerts for display in interactive user interfaces
US10706434B1 (en) 2015-09-01 2020-07-07 Palantir Technologies Inc. Methods and systems for determining location information
US11080296B2 (en) 2015-09-09 2021-08-03 Palantir Technologies Inc. Domain-specific language for dataset transformations
US9965534B2 (en) 2015-09-09 2018-05-08 Palantir Technologies, Inc. Domain-specific language for dataset transformations
US10296617B1 (en) 2015-10-05 2019-05-21 Palantir Technologies Inc. Searches of highly structured data
US10572487B1 (en) 2015-10-30 2020-02-25 Palantir Technologies Inc. Periodic database search manager for multiple data sources
US10678860B1 (en) 2015-12-17 2020-06-09 Palantir Technologies, Inc. Automatic generation of composite datasets based on hierarchical fields
US10540061B2 (en) 2015-12-29 2020-01-21 Palantir Technologies Inc. Systems and interactive user interfaces for automatic generation of temporal representation of data objects
US9823818B1 (en) 2015-12-29 2017-11-21 Palantir Technologies Inc. Systems and interactive user interfaces for automatic generation of temporal representation of data objects
US10437612B1 (en) 2015-12-30 2019-10-08 Palantir Technologies Inc. Composite graphical interface with shareable data-objects
US10698938B2 (en) 2016-03-18 2020-06-30 Palantir Technologies Inc. Systems and methods for organizing and identifying documents via hierarchies and dimensions of tags
US10719188B2 (en) 2016-07-21 2020-07-21 Palantir Technologies Inc. Cached database and synchronization system for providing dynamic linked panels in user interface
US10698594B2 (en) 2016-07-21 2020-06-30 Palantir Technologies Inc. System for providing dynamic linked panels in user interface
US10324609B2 (en) 2016-07-21 2019-06-18 Palantir Technologies Inc. System for providing dynamic linked panels in user interface
US10437840B1 (en) 2016-08-19 2019-10-08 Palantir Technologies Inc. Focused probabilistic entity resolution from multiple data sources
US10318630B1 (en) 2016-11-21 2019-06-11 Palantir Technologies Inc. Analysis of large bodies of textual data
US10460602B1 (en) 2016-12-28 2019-10-29 Palantir Technologies Inc. Interactive vehicle information mapping system
US10956406B2 (en) 2017-06-12 2021-03-23 Palantir Technologies Inc. Propagated deletion of database records and derived data
US10403011B1 (en) 2017-07-18 2019-09-03 Palantir Technologies Inc. Passing system with an interactive user interface
US11599369B1 (en) 2018-03-08 2023-03-07 Palantir Technologies Inc. Graphical user interface configuration system
US10754822B1 (en) 2018-04-18 2020-08-25 Palantir Technologies Inc. Systems and methods for ontology migration
US10885021B1 (en) 2018-05-02 2021-01-05 Palantir Technologies Inc. Interactive interpreter and graphical user interface
US11119630B1 (en) 2018-06-19 2021-09-14 Palantir Technologies Inc. Artificial intelligence assisted evaluations and user interface for same
WO2021223178A1 (en) * 2020-05-07 2021-11-11 深圳市欢太科技有限公司 User interface processing method and related apparatus
CN115136117A (en) * 2020-05-07 2022-09-30 深圳市欢太科技有限公司 User interface processing method and related device

Similar Documents

Publication Publication Date Title
WO2013102892A1 (en) A system and method for generating personalized sensor-based activation of software
US11237724B2 (en) Mobile terminal and method for split screen control thereof, and computer readable storage medium
US20190147879A1 (en) Method and apparatus for performing preset operation mode using voice recognition
US20190304448A1 (en) Audio playback device and voice control method thereof
US8762869B2 (en) Reduced complexity user interface
CN109979465B (en) Electronic device, server and control method thereof
TWI511125B (en) Voice control method, mobile terminal apparatus and voice controlsystem
AU2010307516B2 (en) Method for controlling portable device, display device, and video system
US10007396B2 (en) Method for executing program and electronic device thereof
US9218052B2 (en) Framework for voice controlling applications
AU2014327147B2 (en) Quick tasks for on-screen keyboards
CN111443971B (en) Operation guiding method, electronic device and medium
US11093715B2 (en) Method and system for learning and enabling commands via user demonstration
US11217244B2 (en) System for processing user voice utterance and method for operating same
CN110085222B (en) Interactive apparatus and method for supporting voice conversation service
CN101026832A (en) Method and device for providing option menus using graphical user interface
CN107885823B (en) Audio information playing method and device, storage medium and electronic equipment
KR102013329B1 (en) Method and apparatus for processing data using optical character reader
CN104184890A (en) Information processing method and electronic device
US10901719B2 (en) Approach for designing skills for cognitive agents across multiple vendor platforms
CN104461446B (en) Software running method and system based on interactive voice
CN108476339A (en) A kind of remote control method and terminal
KR20190115356A (en) Method for Executing Applications and The electronic device supporting the same
CN113784200A (en) Communication terminal, display device and screen projection connection method
CN112511874A (en) Game control method, smart television and storage medium

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 13733635
    Country of ref document: EP
    Kind code of ref document: A1

NENP Non-entry into the national phase
    Ref country code: DE

122 Ep: pct application non-entry in european phase
    Ref document number: 13733635
    Country of ref document: EP
    Kind code of ref document: A1