US20120272171A1 - Apparatus, Method and Computer-Implemented Program for Editable Categorization - Google Patents

Apparatus, Method and Computer-Implemented Program for Editable Categorization

Info

Publication number
US20120272171A1
US20120272171 A1 (application US 13/091,620)
Authority
US
United States
Prior art keywords
content
user
icons
related content
items
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/091,620
Inventor
Keiji Icho
Ryouichi Kawanishi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Corp
Original Assignee
Panasonic Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Corp filed Critical Panasonic Corp
Priority to US13/091,620 priority Critical patent/US20120272171A1/en
Assigned to PANASONIC CORPORATION reassignment PANASONIC CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ICHO, KEIJI, KAWANISHI, RYOUICHI
Priority to PCT/JP2012/002738 priority patent/WO2012144225A1/en
Priority to US13/806,100 priority patent/US9348500B2/en
Priority to CN201280001708.8A priority patent/CN102959549B/en
Priority to JP2013510895A priority patent/JP5982363B2/en
Publication of US20120272171A1 publication Critical patent/US20120272171A1/en
Abandoned legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0486 - Drag-and-drop
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 - Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43 - Querying
    • G06F 16/435 - Filtering based on additional data, e.g. user or group profiles
    • G06F 16/436 - Filtering based on additional data, e.g. user or group profiles using biological or physiological data of a human being, e.g. blood pressure, facial expression, gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 - Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43 - Querying
    • G06F 16/438 - Presentation of query results
    • G06F 16/4387 - Presentation of query results by the use of playlists
    • G06F 16/4393 - Multimedia presentations, e.g. slide shows, multimedia albums
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/60 - Editing figures and text; Combining figures or text

Definitions

  • the information appliance is preferably implemented using a computer architecture that includes a central processing unit or CPU 26 coupled to a bus 28 , to which random access memory 30 and storage memory 32 are also attached.
  • the computer architecture may also include an input/output (I/O) module attached to bus 28 to facilitate communication with external devices via any suitable means such as wired connection or wireless connection.
  • a display driver 36 is coupled to the bus 28 to support the touch display 22 .
  • the display driver 36 of FIG. 2 includes the necessary circuitry to drive the visual display and to receive the touch input commands produced when the user performs a touch gesture upon the touch display.
  • the user performs a touch and drag operation to effect selection of related content, as will be more fully discussed herein.
  • the information appliance organizes and displays image content, such as photographs within a user's personal photo collection.
  • the managed content is preferably organized into different classes or groups using automatic categorization technology.
  • the category groups are displayed graphically as thumbnail depictions or icons within the predefined region of the display screen at 40 .
  • the user can select one of the applicable categories by suitable touch gesture.
  • the category designated at 42 has been so selected.
  • the user interface displays individual thumbnail or icon representations of individual pieces of content belonging to that category. These are displayed in a grid as at 44 . The user can then select from that grid one or more individual pieces of content by suitable touch selection. By way of example, in FIG. 3 the user has selected the content at 46 . Once selected, the user interface displays an enlarged view of the selected contents within window 48 .
  • the displayed content may, itself, comprise identifiable sub-components.
  • the displayed content may include several individually identifiable objects, such as buildings, geographic features, animals, human faces and the like.
  • the displayed content within window 48 comprises a photograph featuring three persons' faces. If desired, these identifiable sub-components can be used to define a query by which the system searches for additional related content.
  • the system uses the selected person to initiate a query to retrieve other related content (e.g., other images in which that person appears).
  • the system then performs the query against all associated content, such as all content within the selected category group, to generate similarity scores associated with each element of the category group.
  • the images are each given their own similarity score. Images in which the selected person appears are given a high similarity score, whereas images lacking that person are given a low similarity score.
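  • The disclosure does not fix a particular similarity algorithm; the minimal sketch below assumes each image already carries face-embedding vectors (produced by whatever recognition front end is in use) and scores each image in the category group by its best cosine match to the face the user selected. All names and the 0-100 scaling are illustrative.

```python
# Hypothetical sketch: scoring each image in the selected category against the
# face the user picked in window 48. Each ImageItem is assumed to already hold
# face-embedding vectors; an image's score is its best cosine match.
from dataclasses import dataclass, field

@dataclass
class ImageItem:
    name: str
    face_embeddings: list = field(default_factory=list)  # list of float vectors

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def similarity_scores(selected_face, category_items):
    """Return {image name: relatedness score in 0..100} for the category group."""
    scores = {}
    for item in category_items:
        best = max((cosine(selected_face, emb) for emb in item.face_embeddings), default=0.0)
        scores[item.name] = round(max(best, 0.0) * 100)
    return scores

if __name__ == "__main__":
    selected = [0.9, 0.1, 0.3]                                  # embedding of the tapped face
    items = [ImageItem("img_001.jpg", [[0.88, 0.12, 0.29]]),    # same person -> high score
             ImageItem("img_002.jpg", [[0.05, 0.95, 0.10]])]    # different person -> low score
    print(similarity_scores(selected, items))
```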
  • the user interface further includes a family space region 50 into which the user can drag selected content that he or she desires to be associated into a subset or family of the category.
  • the user extracts content for inclusion in the family by a dragging operation whereby the user selects a target content from grid 44 (e.g., target 46 a ) and drags that content to location 46 b which lies outside the confines of grid 44 .
  • when the user selects a target content, such as target content 46 a in FIG. 4 a , those additional pieces of content that are related to content 46 a (by virtue of the automatic categorization technique being used) are highlighted as illustrated.
  • Related content which have a high similarity score or high relationship score are preferably depicted in a more prominent fashion, such as by highlighting those pieces of content in a visually perceptible manner and also by displaying connecting lines that also connote a strong connection or relationship.
  • the strongly related content are shown at 52 a , 54 a and 56 a . Additional content with a lower degree of relatedness are graphically depicted in a different way to connote the lower degree of relatedness.
  • This may include shading or highlighting the related content in a more subdued fashion and also by generating connecting lines that are less prominent than those used to convey strongly related content.
  • these lesser related content are shown at 58 a , 60 a and 62 a.
  • In FIG. 4 a , additional content at 64 a and 66 a is illustrated with light shading, to convey a degree of relationship with target 46 a that is less than any of the other related pieces of content.
  • the connecting lines may also be rendered using a lighter shade to convey a less prominent or less bold relationship.
  • the related content follows the motion trajectory 70 of the target content.
  • the target content is shown beyond the confines of grid 44 as at 46 b . Note how the related content have followed the target content.
  • When the target content is moved via the dragging gesture, the associated content generally follows the same trajectory 70 as the target content. In one preferred embodiment, the associated content becomes spatially reorganized while following the trajectory 70 , so that the associated content with a higher degree of relatedness becomes arranged closer to the target content 46 b than the related content having a weaker relationship. This has been illustrated in FIG. 4 b.
  • each related piece of content “follows” the target content as if it were attached by an invisible spring having a spring force that is proportional to the degree of the relationship.
  • closely related content such as content items 52 b , 54 b and 56 b are pulled toward the target content 46 b by an invisible spring force that is stronger than the spring force that pulls less related content, such as content items 58 b , 60 b , 62 b and so forth.
  • the individual pieces of content are reordered according to the degree of relationship (strength of relationship) as the target content is moved by the user as illustrated in FIG. 4 b.
  • the effect of the velocity-sensitive component is to make the movement of individual content items somewhat sluggish, so that the motion and response to the invisible spring force is not instantaneous.
  • An alternate way of expressing the relationship would be to think of the items of content as moving through a viscous medium, so that changes in the position of the target content 46 b are not instantaneously mimicked by a comparable instantaneous change in the position of all related content. Rather, the related content will continue to coast to their new positions for a short time after the target content has already stopped.
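  • A minimal sketch of this spring-plus-viscous-drag behavior is given below; the stiffness, damping and mass constants are assumptions chosen only to make the lag and coasting visible, not values taken from the disclosure.

```python
# Each related item is pulled toward the target with a spring stiffness
# proportional to its relatedness, while a velocity-dependent drag term makes
# it lag behind and then coast after the target stops moving.
def step(item_pos, item_vel, target_pos, relatedness, dt=0.016,
         k_max=8.0, damping=3.0, mass=1.0):
    """Advance one animation frame; relatedness is in 0..1."""
    k = k_max * relatedness                      # stronger spring for closer relationships
    force = [k * (t - p) - damping * v           # spring pull toward target + viscous drag
             for p, v, t in zip(item_pos, item_vel, target_pos)]
    vel = [v + (f / mass) * dt for v, f in zip(item_vel, force)]
    pos = [p + v * dt for p, v in zip(item_pos, vel)]
    return pos, vel

if __name__ == "__main__":
    pos, vel = [0.0, 0.0], [0.0, 0.0]
    target = [300.0, 200.0]                      # target content dragged to a new spot
    for frame in range(120):                     # item keeps coasting after the target stops
        pos, vel = step(pos, vel, target, relatedness=0.9)
    print([round(c, 1) for c in pos])
```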
  • the related content items follow the target content in a more complex kinematic relationship whereby the overall number of related items attracted during motion of the target content can be controlled by how quickly the user moves the target content.
  • If the user moves the target content 46 b slowly, then even weakly related content will follow the trajectory 70 .
  • If the user moves the target content 46 b quickly, then only related content above a certain relatedness threshold will follow. The effect is as if the weaker interconnecting links (carrying the invisible spring force) can be broken if the speed of movement of the target content exceeds a certain threshold.
  • the threshold may be velocity dependent, so that the user can actually control how many items of related content are pulled away from the grid 44 by simply controlling how quickly he or she moves the target content.
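  • The velocity-dependent capture described above can be sketched as follows; the mapping from relatedness score to the speed at which a link breaks is an assumed example, not taken from the text.

```python
# The faster the user drags the target, the fewer (only the more strongly
# related) items keep following, because weaker links "break" at lower speeds.
def items_that_follow(scores, drag_speed_px_per_s, base_break_speed=200.0):
    """scores: {item: relatedness 0..1}. A link survives while the drag speed
    stays below its break speed, which grows with relatedness."""
    followers = []
    for item, relatedness in scores.items():
        break_speed = base_break_speed * (1.0 + 4.0 * relatedness)   # weak link breaks sooner
        if drag_speed_px_per_s < break_speed:
            followers.append(item)
    return followers

if __name__ == "__main__":
    scores = {"52a": 0.95, "58a": 0.55, "64a": 0.20}
    print(items_that_follow(scores, drag_speed_px_per_s=150))   # slow drag: all follow
    print(items_that_follow(scores, drag_speed_px_per_s=700))   # fast drag: only the strong link holds
```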
  • FIG. 6 shows the software components and the manner of programming the CPU 26 ( FIG. 2 ) to effect the content categorizing, selecting and graphical display that produce the following motion trajectory discussed above.
  • the software components may be loaded into memory 30 ( FIG. 2 ) and are then acted upon by CPU 26 to produce the above-described behaviors when the computer program is run. If desired, these components can be incorporated into or associated with the operating system of the information appliance 20 of FIG. 1 .
  • certain ones of the provided software modules are specifically adapted for handling visual image processing, such as face recognition and object recognition.
  • Other components are more general in nature and are adapted for extracting features from any kind of content description, which can include not only features extracted from visual content (photographs, motion pictures, and the like) but also other data types as may be applicable to more general purpose data mining applications.
  • the computer program and software modules used to implement the functionality described above may comprise a functional block 100 that performs the content categorization and presentation through the graphical user interface of the information appliance.
  • This functional block 100 comprises a category reorganization user interface 102 that in turn employs several software components.
  • one of the basic functions of the category reorganization user interface is categorization of the content.
  • one of the illustrated functions of the category reorganization interface is the function of general categorizing 104 .
  • this general categorizing can involve certain additional sub-categorizing aspects. Illustrated here are four such aspects, namely face recognizing 106 , object recognizing 108 , feature extracting 110 and content tagging 112 .
  • These categorizing software modules work as follows.
  • the face recognizing module 106 identifies regions within the image that represent faces using a suitable face recognition algorithm that analyzes the image to detect features corresponding to a subject's face, such as eyes, nose, cheekbones, jaw, etc.
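  • The disclosure does not name a particular detection algorithm; as one conventional possibility, a module such as face recognizing 106 could locate candidate face regions with OpenCV's bundled Haar cascade. The opencv-python package and the sample file name below are assumptions of this sketch.

```python
# One conventional way to find face regions in an image (not the patent's own
# algorithm): OpenCV's frontal-face Haar cascade shipped with opencv-python.
import cv2

def detect_face_regions(image_path):
    """Return a list of (x, y, w, h) rectangles for detected faces."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(int(v) for v in rect) for rect in faces]

if __name__ == "__main__":
    print(detect_face_regions("family_photo.jpg"))  # hypothetical file name
```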
  • the user may have applied tags to certain content and also to certain features found within that content.
  • the content tagging module 112 administers this functionality.
  • the portions of the image corresponding to the daughter's face may be tagged with her name.
  • Whereas the feature extracting techniques operate upon elements that are inherent to the image itself, content tagging involves additional metadata that is added by the user or is added as a result of some query or grouping having been performed.
  • the general categorizing module 104 and its associated sub-modules 106 , 108 , 110 and 112 are called into action when needed to organize the content into different categories.
  • categories may be displayed in category groups as in 40 .
  • the category reorganization user interface module 102 further includes software modules that handle the user interaction of selecting a target content within grid 44 ( FIG. 3 ), defining relationships between the target content and other content as well as handling all of the connecting wire visualization and following motion processing as was described in connection with FIGS. 4 a , 4 b and 5 .
  • the category reorganization user interface includes a selected content position determining module 114 , which functions to interpret which target content has been selected by the user when the user touches one of the content items within grid 44 ( FIG. 3 ).
  • the content relationship analyzing module 116 works in conjunction with module 114 , as well as the general categorizing modules, to determine which additional pieces of content are related to the one the user has selected. This determination includes associating a relatedness score (or correlation metric) to each piece of content that is related to the target content selected. In this regard, a numerical score may be assigned to the relationship. For example, a relatedness score of 0-100% may be assigned. A 100% relationship would denote a very strong relationship to the target content, whereas a 0% score would denote the absence of a relationship. Thus, relationships between the target content and the remaining content can vary over a suitable range as required by the data being analyzed.
  • the connecting wire visualizing module 118 generates connecting wires or lines between the target content and the related content.
  • the connecting wires may be visually depicted using different boldness or intensity values to denote different degrees of relatedness. For example, content items having a relatedness score of 75%-100% would be given a strong bold appearance, scores between 50% and 74% would be given a less bold appearance, scores between 25% and 49% would be depicted using a light line or dotted line, and so forth.
  • content items having a similarity score below a certain threshold, such as below 25% may be given no connecting wire visualization and would thus be considered “not related”.
  • different colors may be used to indicate different levels of relatedness.
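  • A sketch of this wire-styling rule follows, using the score bands quoted above; the specific widths, dash pattern and colors are illustrative placeholders.

```python
# Map a 0-100 relatedness score to a connecting-wire style; anything below the
# "not related" threshold gets no wire at all.
def wire_style(score_percent, hide_below=25):
    """Return a drawing style for the connecting wire, or None for no wire."""
    if score_percent < hide_below:
        return None                                             # treated as "not related"
    if score_percent >= 75:
        return {"width": 4, "dash": None, "color": "#d32f2f"}   # strong, bold line
    if score_percent >= 50:
        return {"width": 2, "dash": None, "color": "#f57c00"}   # medium line
    return {"width": 1, "dash": (4, 4), "color": "#9e9e9e"}     # light dotted line

if __name__ == "__main__":
    for s in (90, 60, 30, 10):
        print(s, wire_style(s))
```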
  • the category reorganization user interface produces a user-friendly visualization whereby related content follow the trajectory of the target content as the user moves the target content from the grid region 44 to the family space region 50 ( FIG. 3 ).
  • the individual items of related content are treated as if they are connected by an invisible spring which produces a pulling force causing related content to follow the target content as the user moves it. This pulling force or tensile force is calculated in module 120 . Further details of this calculation will be discussed below.
  • the tensile force or spring force is used by the following motion processing module 122 , which associates a motion trajectory with each of the pieces of related content.
  • the following motion processing module 122 causes each of the related content to follow a trajectory generally in the direction of the target content whereby the pulling force acting on each item of content is equal to the tensile force associated with that piece of content.
  • a velocity-sensitive motion-resisting counterforce or dashpot may be associated with each piece of related content to give the effect that the related content moves toward the target content through a viscous medium so that the related content items reach their final destination after the target content has ceased to move.
  • the produced visual effect makes it appear that the related content are being pulled by elastic strings that stretch when the target content is moved and that continue to pull the associated content towards the target content, through a viscous medium, after the target content has come to rest.
  • the category reorganization user interface module 102 must therefore organize the relocated content items after motion is effected; otherwise, the related content may overlap and be difficult to visualize.
  • the related content position determining module 124 defines a boundary about each piece of content and applies a rule dictating that the related content will be positioned radially adjacent the target content based on relatedness score, with the further provision that the related content items shall be repositioned so that the individual content items do not overlap one another.
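  • One possible reading of this repositioning rule is sketched below: related items are sorted by relatedness and placed on successive rings around the target so that their bounding boxes do not collide. The ring spacing and capacity calculations are assumptions, not taken from the text.

```python
# Place related items radially around the target, most related closest, and
# spread them over successive rings so they do not overlap.
import math

def radial_layout(target_xy, scored_items, item_size=64, gap=12):
    """scored_items: [(name, score)] -> {name: (x, y)}, higher score = closer ring."""
    ordered = sorted(scored_items, key=lambda kv: kv[1], reverse=True)
    positions, placed, ring = {}, 0, 1
    while placed < len(ordered):
        radius = ring * (item_size + gap)
        capacity = max(1, int((2 * math.pi * radius) // (item_size + gap)))
        ring_items = ordered[placed:placed + capacity]
        for slot, (name, _score) in enumerate(ring_items):
            angle = 2 * math.pi * slot / len(ring_items)
            positions[name] = (target_xy[0] + radius * math.cos(angle),
                               target_xy[1] + radius * math.sin(angle))
        placed += len(ring_items)
        ring += 1
    return positions

if __name__ == "__main__":
    items = [("52b", 0.9), ("54b", 0.85), ("56b", 0.8), ("58b", 0.5), ("60b", 0.4)]
    for name, xy in radial_layout((400, 300), items).items():
        print(name, [round(c) for c in xy])
```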
  • the category reorganization user interface, through its made-category reorganizing module 126 , associates recognition information with the newly formed cluster.
  • This allows the cluster to be tagged and saved for recall at a later time and also to be used as a starting point for performing further content recognition.
  • the user might select a first category group, such as group 42 ( FIG. 3 ) and then perform the above-described selection operation to assemble a cluster of related content items. The user could then save that assembled cluster and use it as a basis for searching through a different category group selected from the category groups of FIG. 3 .
  • At step 150 , the process determines the position of the selected content. This step is performed by module 114 ( FIG. 6 ) when the user selects a target content.
  • module 124 identifies the related content at step 152 and then further ascertains the current position of the related content within the grid 44 (see, for example, FIG. 4 a ).
  • the individual pieces of selected content are processed sequentially. That is, the process depicted in FIG. 7 is implemented as a loop whereby each item of content is sequentially processed. However, due to the speed of the CPU, the user perceives the individual content items as moving simultaneously as the user drags the target content towards the family space region 50 .
  • the system calculates the tensile force (invisible spring force) sequentially for each item of content.
  • a motion calculation is performed to determine how the related content will move as the target content is moved by the user.
  • the motion process determines an acceleration value for each piece of related content. This acceleration value is then used to calculate the motion that the related content will exhibit.
  • Such motion is, of course, a vector quantity. That is, motion of the related content proceeds in a certain direction as dictated by the following motion model implemented by the module 122 .
  • the motion model is based on an analogy whereby each item of related content is attracted to the target content by an invisible spring force (the tensile force) between them.
  • the vector direction of motion of the related content is towards the center of the target content. Accordingly, as the user moves the target content, each item of related content will be attracted to and thus follow the trajectory of the target content, corresponding to step 158 .
  • the following motion calculation can include a velocity-sensitive, dashpot, term that tends to resist instantaneous changes in motion, thereby making the related content appear to move as if they were immersed in a viscous medium. While not required, this additional velocity-sensitive term makes movement of the related content lag behind movement of the target content. Thus, when the user stops moving the target content, the related content will continue to coast toward their final destinations, the final destinations being determined at the points where the tensile force returns to zero or until further movement of the related content is blocked because another piece of content already occupies the space.
  • In addition to computing the motion of each piece of related content, the process also generates the connecting wires or lines between each item of related content and the target content. This is performed at step 160 . Specifically, this step defines a line segment between the centers of the respective thumbnail images. As discussed above, the boldness or color of these line segments can be adjusted based on the degree of relatedness.
  • the collected set of content are then remapped into a new category at step 164 .
  • this step may include prompting the user to provide a category label that is then associated with the items of content.
  • the system permits more than one category tag or label to be associated with each item of content. Thus individual items of content can belong to more than one category, if desired.
  • the system includes a mechanism to allow the user to control how many items of related content are “attracted” to the target content during the selection and moving process. As described in connection with FIG. 5 , this can be accomplished by providing a touch-enabled control wheel that the user can manipulate to adjust the threshold of which content will be captured and which will not.
  • the control of FIG. 5 works as follows. The control 72 produces a numerical threshold value that changes over a range of values from a low value to a high value as the user rotates the control clockwise or counterclockwise. The value produced by control 72 is then used to set the threshold by which the system determines whether an item of content will be included or not.
  • If control 72 is manipulated to a high threshold, then only content having a relatedness score above 75% will be captured. Conversely, if the control is manipulated to a low value, then content having a relatedness score above 25% will be captured.
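  • The control-wheel behavior can be sketched as below, using the 25% and 75% endpoints quoted above; the linear mapping from wheel rotation to threshold is an assumption.

```python
# The wheel position is mapped to a cutoff value, and only content whose
# relatedness score clears the cutoff is captured.
def wheel_to_threshold(wheel_fraction, low=25.0, high=75.0):
    """wheel_fraction: 0.0 (fully counterclockwise) .. 1.0 (fully clockwise)."""
    return low + (high - low) * max(0.0, min(1.0, wheel_fraction))

def captured_items(scores, wheel_fraction):
    threshold = wheel_to_threshold(wheel_fraction)
    return [name for name, score in scores.items() if score > threshold]

if __name__ == "__main__":
    scores = {"a": 90, "b": 60, "c": 30}
    print(captured_items(scores, 0.0))   # low threshold (25%): a, b and c captured
    print(captured_items(scores, 1.0))   # high threshold (75%): only a captured
```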
  • the user is able to control how much content is captured by the speed at which the user moves the target content.
  • the embodiment models an object being pulled across a frictional surface so that the frictional force acts to oppose movement in the pulling direction.
  • the line or wire representing the tensile force is fragile and can stretch and break if the pulling force becomes too great. Weakly associated content is mapped using a more fragile connection, resulting in weakly associated content not being selected when its connection breaks. FIG. 8 illustrates how this may be accomplished.
  • the system defines an artificial affinity space (not shown) whereby all items of content are related based on how strongly a target content (or icon) is related to the other content (or icons).
  • the system then establishes a relationship between the affinity value and a physical parameter.
  • the relationship can be a kinematic parameter such as object weight, where content having a low affinity value (e.g., unrelated) is assigned a relatively heavier weight, whereas content having a high affinity value (e.g., highly related) is assigned a light weight.
  • This object weight is then mapped to the display space 202 , where displayed objects appear to move as the target content T D is pulled in a certain direction by the user.
  • Objects having a mapped heavy weight will move more slowly, or not at all, based on a predetermined threshold friction assigned to the “surface” upon which the displayed objects sit in display space. Conversely, objects with lighter weight will move more freely, following the general trajectory of the target content as it moves.
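  • A hedged sketch of this weight-and-friction mapping follows; the particular weight range and friction coefficient are illustrative only.

```python
# Low-affinity content is given a heavy weight; an item only follows the drag
# when the pulling force exceeds the friction threshold of the "surface".
def weight_from_affinity(affinity):
    """affinity in 0..1; unrelated content gets a heavy weight."""
    return 1.0 + 9.0 * (1.0 - affinity)          # 1 (light) .. 10 (heavy)

def follows_drag(affinity, pull_force, friction_coeff=0.6):
    return pull_force > friction_coeff * weight_from_affinity(affinity)

if __name__ == "__main__":
    for affinity in (0.9, 0.5, 0.1):
        print(affinity, follows_drag(affinity, pull_force=3.0))
```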
  • the affinity space can map the affinity value to a tensile force parameter.
  • Weakly related (e.g., unrelated) content objects are assigned a weak tensile force, whereas strongly related content objects are assigned a stronger tensile force.
  • This tensile force is then mapped to the display space 202 so that the more strongly related the content, the more strongly it is attracted to the target content as it is pulled by the user.
  • the affinity space can map the affinity value to a fragility value corresponding to how strong the affinity relationship is.
  • a fragility value corresponding to how strong the affinity relationship is.
  • objects are connected to the target object through a link (shown as a connecting line in display space) having a strength based on the fragility value. Links with low fragility value break as the target object is pulled.
  • unrelated or weakly related content severs its relationship with the target content and does not follow the target content as the target is pulled by the user.
  • the link represented by force F 4 may represent a comparatively fragile link, depicted by a thin connecting line. This fragile link will break based on how the target content is moved.
  • FIG. 8 is intended to illustrate a few examples of how kinematic motion behavior can be mapped onto the otherwise unrelated problem of how to select and display related content. Other models can be used instead.
  • the user selects an object representing a target content at 206 .
  • Selection of this object causes the system in affinity space 202 to identify related objects at 208 .
  • the user then moves the selected object at 210 in display space.
  • the speed at which the user moves the selected object is captured and used in affinity space at step 212 where the system determines which objects will follow for a given movement speed.
  • Step 212 thus corresponds to the setting of the escape velocity threshold in FIG. 8 .
  • the system then in step 214 assigns a tensile force for each object, based on the mapped parameter (e.g., object weight, tensile force, link material, etc.).
  • the assigned tensile forces F n are then supplied back to display space where they are used to cause related objects to move using their respective assigned tensile forces at 216 .
  • In the embodiments described thus far, each of the content elements was directly attracted to the target content. Variations of this basic concept are possible. Thus, as shown in FIGS. 14 a , 14 b and 15 , content elements may be organized according to a tree structure, whereby certain elements are directly attracted to the target content, whereas other elements are attracted, as children, grandchildren, etc., of the directly attracted content.
  • the user has selected element 46 ( 46 a , 46 b ) as the target, moving it along the trajectory 70 as illustrated in FIG. 14 b .
  • elements 52 a , 54 a and 56 a are directly linked as having a strong affinity with target content 46 .
  • These linked elements have affinities for other elements, thereby defining a parent-child-grandchild tree structure relationship.
  • element 52 a has an affinity with element 58 a , which, in turn, has an affinity with element 64 a .
  • the child content 58 b and grandchild content 64 b of element 52 b are attracted as well.
  • FIG. 15 illustrates how the embodiment of FIGS. 14 a and 14 b operates.
  • the items of content are capable of being attracted to one another (or pulled by one another) and thus captured by invisible tensile forces between one another.
  • content element A m is attracted to element A o by force F 4 ; and element A n is attracted to element A o by force F 5 .
  • the attractive force is between elements that are closest in proximity to one another in display space and not necessarily directly attracted to the target content Td.
  • Computation of each individual force may be performed in this embodiment using essentially the same computational process as used to calculate the force for the embodiment of FIG. 8 , with the exception that in FIG. 8 all forces are attracted to the same target T D , whereas in the present case of FIG. 15 , the target for each content element is the parent of that element.
  • the force calculation can be performed recursively, following the tree structure.
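  • A minimal sketch of the recursive, tree-structured following motion is given below; the node names mirror the figures, and the simple rule of closing a fixed fraction of the gap to the parent each frame stands in for whatever force law is chosen.

```python
# Each element follows its parent (rather than the target directly), so
# pulling the target propagates motion down through children and grandchildren.
def update_tree(positions, children, node, step_fraction=0.3):
    """Recursively pull each child a fraction of the way toward its parent."""
    for child in children.get(node, []):
        px, py = positions[node]
        cx, cy = positions[child]
        positions[child] = (cx + step_fraction * (px - cx),
                            cy + step_fraction * (py - cy))
        update_tree(positions, children, child, step_fraction)

if __name__ == "__main__":
    positions = {"T": (0, 0), "52": (-60, 0), "58": (-120, 0), "64": (-180, 0)}
    children = {"T": ["52"], "52": ["58"], "58": ["64"]}     # target -> child -> grandchild
    positions["T"] = (200, 100)                              # user drags the target
    for _ in range(5):                                       # a few animation frames
        update_tree(positions, children, "T")
    print({k: (round(x), round(y)) for k, (x, y) in positions.items()})
```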
  • a control mechanism 72 may be included with the embodiment of FIGS. 14 a and 14 b . This is illustrated in FIG. 16 .
  • FIGS. 10 a - 10 d there is shown a basic screen transition example whereby the user selects content ( FIG. 10 a ), drags the selected content thereby attracting related content ( FIG. 10 b ), maps the related content into the Family Space region ( FIG. 10 c ) and then associates a label [Fido] to the gathered content ( FIG. 10 d ).
  • the associated content having been labeled is now available for display as a new category group.
  • FIGS. 11 a - 11 d show a representative use case, similar to that of FIGS. 10 a - 10 d , but where the user specifically selects the portion of the photograph in FIG. 11 b depicting a dog.
  • the user selects a portion of the image, such as the dog, and that portion is used as a basis for a relatedness query to find other images depicting that dog.
  • FIGS. 12 a - 12 d illustrate a further use case, similar to that of FIGS. 11 a - 11 d , but illustrating that the user can create new categories based on different selected content within a given image.
  • In FIG. 12 a , the user selects Mt. Fuji in the image and then pulls related images containing Mt. Fuji into the Family Space at FIG. 12 b .
  • In FIG. 12 c , the user starts with the same photograph as FIG. 12 a , but this time selects the blooming cherry blossoms and uses that selected content to pull related images containing blooming cherry blossoms.
  • FIGS. 13 a - 13 d illustrate how it is possible to create more complex categories, based on previously created categories.
  • the user selects one of the images within a previously defined category.
  • the selected image is displayed as an enlarged image in the window to the left.
  • the user selects one of the persons in the photograph and pulls a new category containing the selected person into a family space. Note that the family space now contains both the original category cluster and the newly created one.
  • These two category clusters may then be joined to define a composite cluster if desired.
  • Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.

Abstract

The information appliance displays content in a visually perceptible grid from which the user selects a target content; upon selection, the processor automatically identifies related content, each with an associated relatedness score. Movement by the user of the target content causes the related content to move and follow the target content as if attracted by an invisible spring force or tensile force. The system thus presents the user with a graphical representation of moving items of content which are attracted to the target content based on the degree of relatedness. In this way the user quickly learns how to control the selection and organization of related content, because the items move like physical objects acting under natural kinematic forces.

Description

    FIELD
  • The present disclosure relates generally to organization, categorization and extraction of computerized content, including computerized images, data and icons. More particularly the present disclosure relates to computer-implemented technology to assist a user in extracting and reorganizing desired content using a graphical user interface that models content as computer-generated physical objects in display space having kinematic properties that map to content relatedness properties. The system associates relatedness between targeted content and other content with a computer-generated physical parameter.
  • BACKGROUND
  • Computerized content can take many forms. In photographic applications the content is typically stored as raw image data files or as compressed image files (e.g., jpg format). In video applications the content is typically stored as a collection of image frames encoded using a suitable CODEC (e.g., mpeg format). Text applications may store content as generic text files, as application-specific files (e.g., Microsoft Word doc or docx format), or as printable files (e.g., pdf). Some applications store content comprising both text and image data. Examples include presentation software applications (e.g. Microsoft PowerPoint). Database applications typically store text and numeric data, and sometimes also image data according to a predefined data structure that assigns meaning to the stored data. Icon organizing and editing applications store icons as image data, in some cases with additional metadata.
  • When a user wishes to organize, categorize and extract content from a software system, such as those identified above or others, the process has heretofore been tedious and far from intuitive. Typically the software system requires the user to interact with a complex system of menus, dialog boxes or commands to achieve a desired content selection. Where the content includes a lot of non-text content, such as photographs, images, movies and the like, interaction becomes even more difficult because text searching techniques are not highly effective and may not even be available.
  • Where the content data store is large, such as with a large collection of stored photographic images, the task of organizing, categorizing and extracting desired content can be quite daunting. There are some automated tools that can be used to categorize image content, based on image characteristic extraction, face/object recognition, and the like. However, these tools often retrieve too many hits, many of which the user must then manually reject.
  • SUMMARY
  • The disclosed system associates relatedness between targeted content and other content with a physical parameter. In this way, the disclosed system provides a user-friendly, natural way for a user to organize, categorize and extract content from a data store, such as a data store of digitized images or other visual content.
  • The system maps content relatedness (degree of relationship) onto computer-generated physical object properties. In the computer-generated display space, the items of content are depicted as moveable objects to which the physical object properties are associated. Using a suitable touch gesture, or pointing device selection operation, the user can select and move a desired item of content. In so doing, other related items of content move as if attracted to the selected content by an invisible attractive force (e.g., invisible spring force, invisible gravitational force, or other kind of force). Thus by dragging a selected content item, the items of related content will follow, exhibiting kinematic motion as if they were physical objects acted upon by the invisible attractive force, where the degree of relatedness defines the strength of that force. Thus strongly related contents are attracted by a stronger force than less related content. Thus by simply watching the content movement the user can tell how closely the content items relate to the selected content.
  • Because relatedness is mapped onto the computer-generated force, more strongly related items move more quickly towards the selected content, thereby causing the related content to naturally cluster with the most closely related content lying closer to the selected content than the less closely related content.
  • Although rendered in computer-generated display space, the selected item of content, and those related to it, move as if mimicking the behavior of physical objects. The user thus learns quite quickly and naturally how to organize, categorize and extract content, simply by touch-and-drag (or click-and-drag) movements.
  • A general concept is to use some kind of physical parameter to represent the relatedness between targeted content and other contents. Examples of such physical parameters include but are not limited to: (a) force acting between related content (or icons) and targeted content (or icons), such as tensile force and/or attractive force; (b) speed with which related content (or icons) come close to targeted content (or icons); (c) final relative position of related content (or icons) to final position of targeted content (or icons); and combinations thereof.
  • Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
  • DRAWINGS
  • The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
  • FIG. 1 is a plan view of an exemplary information appliance, illustrating how the user organizes, categorizes and extracts content;
  • FIG. 2 is a schematic block diagram of a computer hardware implementation of the information appliance of FIG. 1;
  • FIG. 3 is a user interface diagram, illustrating in greater detail the user interface components of the information appliance of FIG. 1;
  • FIG. 4 a is a detailed user interface diagram, showing the grid component of the user interface of FIG. 3 and illustrating how items of content are related prior to movement of a selected content by the user;
  • FIG. 4 b is a detailed user interface diagram, showing the grid component of the user interface of FIG. 3 and illustrating how items of content move according to a user-designated trajectory and further illustrating how items of content are rearranged during such movement according to their respective degrees of relatedness;
  • FIG. 5 illustrates an alternate embodiment that features a control mechanism that permits a user to adjust a relatedness threshold or correlation metric threshold which regulates how many items of content are attracted during movement along the user-designated trajectory;
  • FIG. 6 is a software block diagram, illustrating the manner of programming the computer hardware of FIG. 2, it being understood that the depicted software is stored in the computer memory and operated upon by the CPU;
  • FIG. 7 is a flowchart diagram depicting one preferred embodiment whereby selected content is analyzed and motion of that content is generated by the suitably programmed computer hardware;
  • FIG. 8 is a graphical representation of one embodiment of a computer-implemented model for controlling how motion of content is generated by the suitably programmed computer hardware, the model featuring a display space reflecting how items of content are positioned and move within the display space of the computer screen;
  • FIG. 9 is a flow chart diagram explaining the operation of the information appliance according to the model of FIG. 8;
  • FIGS. 10 a-10 d illustrate the information appliance in use, performing a basic screen transition according to user operation;
  • FIGS. 11 a-11 d illustrate the information appliance in use, creating a personal category for an individual user or the user's family;
  • FIGS. 12 a-12 d illustrate the information appliance in use, creating a different category from the same content;
  • FIGS. 13 a-13 d illustrate the information appliance in use, creating a compound cluster from plural previously created clusters;
  • FIG. 14 a is a detailed user interface diagram, showing an alternate embodiment of how items of content are related prior to movement of a selected content by the user according to a tree structure;
  • FIG. 14 b is a detailed user interface diagram, illustrating how items of content of FIG. 14 a move according to a user-designated trajectory and further illustrating how items of content are rearranged during such movement according to their respective degrees of relatedness and based on the tree structure;
  • FIG. 15 is a graphical representation of the alternate embodiment of FIGS. 14 a and 14 b, namely a computer-implemented model for controlling how motion of content is generated by the suitably programmed computer hardware, the model featuring a display space reflecting how items of content are positioned and move within the display space of the computer screen according to a tree structure; and
  • FIG. 16 illustrates a variation of the embodiment of FIGS. 14 a and 14 b that features a control mechanism that permits a user to adjust a relatedness threshold or correlation metric threshold which regulates how many items of content are attracted during movement along the user-designated trajectory.
  • Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
  • DETAILED DESCRIPTION
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • As noted above, the computer-implemented apparatus and method for organization, categorization and extraction of computerized content associates the relatedness between targeted content and other content with a predefined physical parameter. The present description will explain in detail how to implement different examples of such apparatus and methods, using different examples of predefined physical parameters. By way of non-limiting example, the predefined physical parameter can be a kinematic-related parameter, such as a force, a speed, a relative position, or the like.
  • In this regard, an exemplary physical parameter can be the force acting between related content (or icons) and targeted content (or icons), such as tensile force and attractive force. Such force is generated by computer according to the following relationships:

  • $\vec{F} = k_i(\vec{x}_i - \vec{x}_T)$
      • $\vec{F}$: force acting between related content/icon i and targeted content/icon T
      • $k_i$: parameter depending on the relatedness between related content/icon i and targeted content/icon T ($k_i > 0$)
      • $\vec{x}_i$: position of related content/icon i
      • $\vec{x}_T$: position of targeted content/icon T
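  • As a concrete illustration, the force relationship above can be evaluated directly as follows; the mapping from a 0..1 relatedness value to the constant $k_i$ is an assumed example.

```python
# Direct evaluation of F = k_i * (x_i - x_T): k_i grows with relatedness, so
# more closely related content/icons carry a larger tensile force for the same
# separation.
def spring_constant(relatedness, k_max=10.0):
    """relatedness in 0..1 -> k_i > 0 (assumed linear mapping)."""
    return max(1e-6, k_max * relatedness)

def tensile_force(x_i, x_T, relatedness):
    """F = k_i * (x_i - x_T), evaluated componentwise."""
    k_i = spring_constant(relatedness)
    return [k_i * (xi - xt) for xi, xt in zip(x_i, x_T)]

if __name__ == "__main__":
    print(tensile_force([120.0, 80.0], [100.0, 60.0], relatedness=0.9))  # strong link
    print(tensile_force([120.0, 80.0], [100.0, 60.0], relatedness=0.2))  # weak link
```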
  • Alternatively, the physical parameter can be a speed parameter, representing, for example, the speed with which related content (or icons) are attracted to the target content (or target icon). Such speed is generated by computer according to the following relationships:
  • $\vec{x}_i(t) = \vec{x}_i(t - \Delta t) + \frac{\vec{x}_i(t - \Delta t) - \vec{x}_T(t - \Delta t)}{l_i}$
      • $\vec{x}_i(t)$: position of related content/icon i at time t
      • $\vec{x}_T(t)$: position of targeted content/icon T at time t
      • $l_i$: parameter depending on the relatedness between related content/icon i and targeted content/icon T ($l_i > 1$)
  • Alternatively, the physical parameter can be a position parameter, representing, for example, the final position of related content (or icon) relative to the targeted content (or icon). Such relative position is generated by computer according to the following relationships:
  • Final relative position of related content/icon i, {right arrow over (r)}i={right arrow over (x)}i,FINAL−{right arrow over (x)}T,FINAL, is set depending on relatedness between related content/icon i and targeted content/icon T. For example, related contents are assigned at the final time in decreasing order of relatedness, most closely-related first, as below:
  • (Diagram: related content/icons arranged around the targeted content/icon in decreasing order of relatedness.)
  • Here, the speed with which the related content/icon approaches the position $\vec{x}_T + \vec{r}_i$ is given by the following:
  • $\vec{x}_i(t) = \vec{x}_i(t - \Delta t) + \frac{\{\vec{x}_T(t - \Delta t) + \vec{r}_i\} - \vec{x}_i(t - \Delta t)}{l}$
      • $\vec{x}_i(t)$: Position of related content/icon i at time t
      • $\vec{x}_T(t)$: Position of targeted content/icon T at time t
      • $l$: Constant parameter ($l > 1$)
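  • A minimal, hypothetical sketch of this position-parameter variant follows: final offsets $\vec{r}_i$ are assigned in decreasing order of relatedness, and every item then approaches its slot $\vec{x}_T + \vec{r}_i$ at the same rate $l$. The ring layout and all numeric constants are assumptions, not taken from the description above.

```python
# Illustrative sketch: assign each related item a final offset r_i around the
# target in decreasing order of relatedness (the most related item gets the
# nearest slot), then let every item approach x_T + r_i at a common rate l.
import math

def assign_final_offsets(relatedness_by_item, base_radius=60.0, ring_step=40.0, per_ring=6):
    """Return {item: (dx, dy)}; more closely related items sit on inner rings."""
    offsets = {}
    ordered = sorted(relatedness_by_item, key=relatedness_by_item.get, reverse=True)
    for rank, item in enumerate(ordered):
        ring, slot = divmod(rank, per_ring)
        radius = base_radius + ring * ring_step
        angle = 2 * math.pi * slot / per_ring
        offsets[item] = (radius * math.cos(angle), radius * math.sin(angle))
    return offsets

def approach(x_i, x_t, r_i, l=3.0):
    """One step toward the slot x_T + r_i, at a relatedness-independent rate l."""
    goal = (x_t[0] + r_i[0], x_t[1] + r_i[1])
    return (x_i[0] + (goal[0] - x_i[0]) / l,
            x_i[1] + (goal[1] - x_i[1]) / l)
```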
  • If desired, the physical parameter can comprise combinations of parameters, such as a combination of a force parameter (as described above) and a relative position parameter (as described above). In this regard, the following considerations would be applicable:
  • Related content/icon i approaches the position $\vec{x}_T + \vec{r}_i$ (where $\vec{r}_i$ is set depending on the relatedness between related content/icon i and targeted content/icon T), and the force acting between the related content/icon i and the position $\vec{x}_T + \vec{r}_i$ is given by the following:

  • $\vec{F} = k_i\left(\vec{x}_i - (\vec{x}_T + \vec{r}_i)\right)$
      • $k_i$: Parameter depending on relatedness between related content/icon i and targeted content/icon T ($k_i > 0$)
  • Alternatively, if desired, the physical parameter can comprise other combinations, such as a combination of a speed parameter and a relative position parameter. In this regard, the following considerations would be applicable:
  • Related content/icon i approaches the position $\vec{x}_T + \vec{r}_i$ (where $\vec{r}_i$ is set depending on the relatedness between related content/icon i and targeted content/icon T), and the speed with which the related content/icon approaches the position $\vec{x}_T + \vec{r}_i$ is given by the following:
  • $\vec{x}_i(t) = \vec{x}_i(t - \Delta t) + \frac{\{\vec{x}_T(t - \Delta t) + \vec{r}_i\} - \vec{x}_i(t - \Delta t)}{l_i}$
      • $\vec{x}_i(t)$: Position of related content/icon i at time t
      • $\vec{x}_T(t)$: Position of targeted content/icon T at time t
      • $l_i$: Parameter depending on relatedness between related content/icon i and targeted content/icon T ($l_i > 1$)
  • It will of course be understood that other implementations, using other physical parameters, may also be used. Thus, the above examples of physical parameters are not intended as limiting examples.
  • To understand how the apparatus and methods for associating relatedness with physical parameter(s) described herein may be used, an information appliance will be featured. Again, it will be understood that this information appliance is merely an example of a device that may use the teachings herein.
  • Referring to FIG. 1, an exemplary information appliance has been illustrated at 20. The information appliance has a touch-enabled display screen 22 upon which the touch-enabled user interface is displayed. In the illustrated example of FIG. 1, the displayed user interface comprises a plurality of different regions that the user can interact with using touch gestures. While a touch-enabled information appliance is shown here to illustrate the principles of the invention, it will be understood that other types of devices and other types of user interfaces, supporting other types of user interaction are possible. Thus, for example a computer device having a mouse or stylus driven interface could also be used.
  • For purposes of illustrating some of the principles of the invention, the depicted information appliance is adapted to manage image content, such as photo library content. It will be understood, however, that the principles of the invention can be applied to other types of content, such as video content, textual content, hypertext content, database content and the like. Thus where photographic images and thumbnail images or icons are described below, the reader will understand that the displayed images could represent a different type of content, such as video content, textual content, hypertext content, database content and the like.
  • As illustrated in FIG. 2, the information appliance is preferably implemented using a computer architecture that includes a central processing unit or CPU 26 coupled to a bus 28, to which random access memory 30 and storage memory 32 are also attached. The computer architecture may also include an input/output (I/O) module attached to bus 28 to facilitate communication with external devices via any suitable means such as wired connection or wireless connection. A display driver 36 is coupled to the bus 28 to support the touch display 22. To simplify the illustration, the display driver 36 of FIG. 2 includes the necessary circuitry to drive the visual display and to receive the touch input commands produced when the user performs a touch gesture upon the touch display. In this regard, as illustrated in FIG. 1, the user performs a touch and drag operation to effect selection of related content, as will be more fully discussed herein.
  • Referring to FIG. 3, the user interface displayed in FIG. 1 is now shown in greater detail. In the exemplary application illustrated in FIG. 1, the information appliance organizes and displays image content, such as photographs within a user's personal photo collection. The managed content is preferably organized into different classes or groups using automatic categorization technology. In the user interface of FIG. 3, the category groups are displayed graphically as thumbnail depictions or icons within the predefined region of the display screen at 40. The user can select one of the applicable categories by suitable touch gesture. In FIG. 3, the category designated at 42 has been so selected.
  • Once the user selects a category, the user interface then displays individual thumbnail or icon representations of individual pieces of content belonging to that category. These are displayed in a grid as at 44. The user can then select from that grid one or more individual pieces of content by suitable touch selection. By way of example, in FIG. 3 the user has selected the content at 46. Once selected, the user interface displays an enlarged view of the selected contents within window 48.
  • In some instances, the displayed content may, itself, comprise identifiable sub-components. For example, the displayed content may include several individually identifiable objects, such as buildings, geographic features, animals, human faces and the like. In FIG. 3, the displayed content within window 48 comprises a photograph featuring three persons' faces. If desired, these identifiable sub-components can be used to define a query by which the system searches for additional related content.
  • Thus, for example, by selecting one of the displayed persons' faces via touch gesture, the system uses the selected person to initiate a query to retrieve other related content (e.g., other images in which that person appears). The system performs the query against all associated content, such as all content within the selected category group, and, based on the results of the recognition algorithms, assigns each element of the category group its own similarity score. Images in which the selected person appears are given a high similarity score, whereas images lacking that person are given a low similarity score.
  • Of course, it will be understood that the specific similarity matching algorithms used will depend on the type of content being managed. In the case of image content such as photographic content and video content, face and object recognition algorithms as well as image characteristic extraction techniques would be used. In other applications, such as database applications, other database query techniques would be used.
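  • The following is a rough, hypothetical sketch of such per-image similarity scoring for the face-selection case: a descriptor extracted from the selected face is compared against descriptors precomputed for the faces in each image. The descriptor extraction step and all names here are assumptions, not details taken from the description above.

```python
# Hypothetical sketch of scoring each image in the selected category group
# against the face the user tapped. The descriptors are assumed to be numeric
# vectors produced by whatever face-recognition backend is in use.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def score_images(query_descriptor, descriptors_by_image):
    """Return {image_id: similarity in [0, 1]}, taking the best face per image."""
    scores = {}
    for image_id, face_descriptors in descriptors_by_image.items():
        best = max((cosine_similarity(query_descriptor, d) for d in face_descriptors),
                   default=0.0)
        scores[image_id] = max(0.0, best)   # images without that person score low
    return scores
```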
  • The user interface further includes a family space region 50 into which the user can drag selected content that he or she desires to be associated into a subset or family of the category. As will be more fully explained below, one aspect of the present technology is to provide an easy to use and intuitive way of extracting related content for placement into the family.
  • Referring to FIGS. 4 a and 4 b, the user extracts content for inclusion in the family by a dragging operation whereby the user selects a target content from grid 44 (e.g., target 46 a) and drags that content to location 46 b which lies outside the confines of grid 44.
  • More specifically, when the user selects a target content, such as target content 46 a in FIG. 4 a, those additional pieces of content that are related to content 46 a (by virtue of the automatic categorization technique being used) are highlighted as illustrated. Related content having a high similarity score or high relationship score is preferably depicted in a more prominent fashion, such as by highlighting those pieces of content in a visually perceptible manner and also by displaying connecting lines that connote a strong connection or relationship. In FIG. 4 a, the strongly related content are shown at 52 a, 54 a and 56 a. Additional content with a lower degree of relatedness is graphically depicted in a different way to connote the lower degree of relatedness. This may include shading or highlighting the related content in a more subdued fashion and also generating connecting lines that are less prominent than those used to convey strongly related content. In FIG. 4 a, these lesser related content are shown at 58 a, 60 a and 62 a.
  • The display of related information in this fashion can support multiple levels of relatedness. Thus in FIG. 4 a, additional content at 64 a and 66 a are illustrated with light shading, to convey a degree of relationship with target 46 a that is less than any of the other related pieces of content. In addition to using light shading or light highlighting, the connecting lines may also be rendered using a lighter shade to convey a less prominent or less bold relationship.
  • As illustrated in FIG. 4 b, when the user drags the target content away from its resting place as depicted in FIG. 4 a, the related content follows the motion trajectory 70 of the target content. Thus, in FIG. 4 b, the target content is shown beyond the confines of grid 44 as at 46 b. Note how the related content have followed the target content.
  • When the target content is moved via the dragging gesture, the associated content generally follows the same trajectory 70 as the target content. In one preferred embodiment, the associated content become spatially reorganized while following the trajectory 70, so that the associated content with a higher degree of relatedness becomes arranged closer to the target content 46 b than the related content having a weaker relationship. This has been illustrated in FIG. 4 b.
  • In one preferred embodiment, each related piece of content “follows” the target content as if it were attached by an invisible spring having a spring force that is proportional to the degree of the relationship. Thus, closely related content, such as content items 52 b, 54 b and 56 b are pulled toward the target content 46 b by an invisible spring force that is stronger than the spring force that pulls less related content, such as content items 58 b, 60 b, 62 b and so forth. Thus, whereas the initial positions of the pieces of content are distributed according to the grid 44, as illustrated in FIG. 4 a, the individual pieces of content are reordered according to the degree of relationship (strength of relationship) as the target content is moved by the user as illustrated in FIG. 4 b.
  • To enhance the visual effect, the attractive force (invisible spring force) may be buffered or tempered by introducing a velocity-sensitive component that resists the attractive spring force. This velocity-sensitive component may be modeled as if each interconnecting link between the target and the related content includes a velocity-sensitive "dashpot." Employing both a spring force and a retarding velocity-sensitive force, the force acting upon each item of related content may be expressed as F = k·x − c·(dx/dt), where x is the displacement between the related content and the target content and dx/dt is its rate of change.
  • The effect of the velocity-sensitive component is to make the movement of individual content items somewhat sluggish, so that the motion and response to the invisible spring force is not instantaneous. An alternate way of expressing the relationship would be to think of the items of content as moving through a viscous medium, so that changes in the position of the target content 46 b are not instantaneously mimicked by a comparable instantaneous change in the position of all related content. Rather, the related content will continue to coast to their new positions for a short time after the target content has already stopped.
  • The visual effect produced by the velocity-sensitive component is to slow down the motion of the content following the target content, so that the user is able to see the strongly related content outpace the less related content as each moves to its final clustered position. Because the invisible spring forces attracting each piece of content to the target content depend on the individual relationship strength, the more strongly related items are attracted more quickly and thus tend to situate themselves most closely to the target content when the target content finally comes to rest.
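  • A hedged sketch of such a spring-plus-dashpot model follows; the constants k_max and damping are illustrative assumptions.

```python
# Illustrative spring-plus-dashpot model: the pull on a related item grows with
# its displacement from the target (scaled by its relatedness) and is opposed
# by a velocity-dependent term, so motion lags the target instead of snapping
# to it instantaneously.

def follow_force(x_i, v_i, x_t, relatedness, k_max=0.5, damping=0.8):
    """Net 2-D force on related item i; k_max and damping are assumed constants."""
    k_i = k_max * relatedness
    fx = -k_i * (x_i[0] - x_t[0]) - damping * v_i[0]
    fy = -k_i * (x_i[1] - x_t[1]) - damping * v_i[1]
    return fx, fy
```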
  • In another embodiment, the related content items follow the target content in a more complex kinematic relationship whereby the overall number of related items attracted during motion of the target content can be controlled by how quickly the user moves the target content. In this embodiment, if the user moves the target content 46 b slowly, then even weakly related content will follow the trajectory 70. On the other hand, if the user moves the target content 46 b quickly, then only related content above a certain relatedness threshold will follow. The effect is as if the weaker interconnecting links (carrying the invisible spring force) can be broken if the speed of movement of the target content exceeds a certain threshold. As will be more fully explained, the threshold may be velocity dependent, so that the user can actually control how many items of related content are pulled away from the grid 44 by simply controlling how quickly he or she moves the target content.
  • In yet another embodiment, depicted in FIG. 5, a gesture-operated control 72 is provided to set the threshold and thus control to what extent the degrees of relationship or layers of linkages with the target content are attracted as the target is moved. The user rotates the control 72 in a clockwise direction to increase the number of related content items and counterclockwise to decrease the related number of items.
  • Referring now to FIG. 6, the computer programming used to implement embodiments of the disclosed system and method will now be discussed. Specifically, FIG. 6 shows the software components and manner of programming (CPU 26, FIG. 2) to effect the content categorizing, selecting and graphical display to effect the following motion trajectory as discussed above. The software components may be loaded into memory 30 (FIG. 2) and are then acted upon by CPU 26 to produce the above-described behaviors when the computer program is run. If desired, these components can be incorporated into or associated with the operating system of the information appliance 20 of FIG. 1.
  • For purposes of illustrating the principles of the invention, certain ones of the provided software modules are specifically adapted for handling visual image processing, such as face recognition and object recognition. Other components are more general in nature and are adapted for extracting features from content of any kind, which can include not only features extracted from visual content (photographs, motion pictures, and the like) but also other data types as may be applicable to more general-purpose data mining applications.
  • As diagrammatically depicted at 100, the computer program and software modules used to implement the functionality described above may comprise a functional block 100 that performs the content categorization and presentation through the graphical user interface of the information appliance. This functional block 100 comprises a category reorganization user interface 102 that in turn employs several software components. As illustrated, one of the basic functions of the category reorganization user interface is categorization of the content. Thus, one of the illustrated functions of the category reorganization interface is the function of general categorizing 104. Depending on the application involved, this general categorizing can involve certain additional sub-categorizing aspects. Illustrated here are four such aspects, namely face recognizing 106, object recognizing 108, feature extracting 110 and content tagging 112. These categorizing software modules work as follows.
  • When the content is a photograph, for example, the face recognizing module 106 identifies regions within the image that represent faces using a suitable face recognition algorithm that analyzes the image to detect features corresponding to a subject's face, such as eyes, nose, cheekbones, jaw, etc.
  • The object recognizing module 108, like the face recognizing module 106, performs feature identification. However, whereas the face recognizing module is specifically designed to recognize features found in the human face, the object recognizing module is more general and is thus able to recognize objects such as buildings, geographic features, furniture and the like. Both the face recognizing module and the object recognizing module may be implemented using a trained system that is capable of learning by extracting features from known faces and known objects. Both face recognizing module 106 and object recognizing module 108 thus rely upon the general feature extracting capabilities of the feature extracting module 110.
  • In some instances, the user may have applied tags to certain content and also to certain features found within that content. The content tagging module 112 administers this functionality. Thus, for example, if the person identifies a certain face as belonging to his or her daughter, the portions of the image corresponding to the daughter's face may be tagged with her name. Whereas, the feature extracting techniques operate upon elements that are inherent to the image itself, content tagging involves additional metadata that is added by the user or is added as a result of some query or grouping having been performed.
  • In use, when the user interacts with the information appliance through the touch display 22 (FIG. 2), the general categorizing module 104 and its associated sub-modules 106, 108, 110 and 112, are called into action when needed to organize the content into different categories. With reference to FIG. 3, such categories may be displayed in category groups as in 40.
  • The category reorganization user interface module 102 further includes software modules that handle the user interaction of selecting a target content within grid 44 (FIG. 3), defining relationships between the target content and other content as well as handling all of the connecting wire visualization and following motion processing as was described in connection with FIGS. 4 a, 4 b and 5.
  • Thus, the category reorganization user interface includes a selected content position determining module 114, which functions to interpret which target content has been selected by the user when the user touches one of the content items within grid 44 (FIG. 3). The content relationship analyzing module 116 works in conjunction with module 114, as well as the general categorizing modules, to determine which additional pieces of content are related to the one the user has selected. This determination includes associating a relatedness score (or correlation metric) to each piece of content that is related to the target content selected. In this regard, a numerical score may be assigned to the relationship. For example, a relatedness score of 0-100% may be assigned. A 100% relationship would denote a very strong relationship to the target content, whereas a 0% score would denote the absence of a relationship. Thus, relationships between the target content and the remaining content can vary over a suitable range as required by the data being analyzed.
  • Once the content relationship analyzing module 116 has performed its function, the connecting wire visualizing module 118 generates connecting wires or lines between the target content and the related content. As discussed and illustrated in connection with FIGS. 4 a, 4 b and 5, the connecting wires may be visually depicted using different boldness or intensity values to denote different degrees of relatedness. For example, content items having a relatedness score of 75%-100% would be given a strong bold appearance, scores between 50% and 74% would be given a less bold appearance, scores between 25% and 49% would be depicted using a light line or dotted line, and so forth. Depending on the application, content items having a similarity score below a certain threshold, such as below 25%, may be given no connecting wire visualization and would thus be considered “not related”. As an alternative to controlling the boldness or intensity of the connecting wires, different colors may be used to indicate different levels of relatedness.
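  • As a small, hypothetical illustration of the score-to-style mapping described above, the following sketch uses the example bucket boundaries; the pixel widths are assumed.

```python
# Illustrative mapping from a relatedness score to a connecting-line style,
# using the example buckets given in the text; width values are hypothetical.

def line_style(score_percent):
    """Return (width_px, pattern) for a relatedness score in 0-100, or None."""
    if score_percent >= 75:
        return (4, "solid")       # strong, bold line
    if score_percent >= 50:
        return (2, "solid")       # less bold
    if score_percent >= 25:
        return (1, "dotted")      # light or dotted line
    return None                   # below threshold: deemed "not related", no wire

print(line_style(82))   # -> (4, 'solid')
print(line_style(10))   # -> None
```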
  • As discussed in connection with FIGS. 4 a and 4 b, the category reorganization user interface produces a user-friendly visualization whereby related content follow the trajectory of the target content as the user moves the target content from the grid region 44 to the family space region 50 (FIG. 3). As discussed, the individual items of related content are treated as if they are connected by an invisible spring which produces a pulling force causing related content to follow the target content as the user moves it. This pulling force or tensile force is calculated in module 120. Further details of this calculation will be discussed below.
  • The tensile force or spring force is used by the following motion processing module 122, which associates a motion trajectory with each of the pieces of related content. To give the visual display a user-friendly, natural presentation, the following motion processing module 122 causes each of the related content to follow a trajectory generally in the direction of the target content whereby the pulling force acting on each item of content is equal to the tensile force associated with that piece of content.
  • If desired, a velocity-sensitive motion-resisting counterforce or dashpot may be associated with each piece of related content to give the effect that the related content moves toward the target content through a viscous medium so that the related content items reach their final destination after the target content has ceased to move. The produced visual effect makes it appear that the related content are being pulled by elastic strings that stretch when the target content is moved and that continue to pull the associated content towards the target content, through a viscous medium, after the target content has come to rest.
  • Because closely related content is attracted more strongly (stronger tensile force) to the target content, such associated content will naturally cluster more closely to the target content than less strongly related content. The category reorganization user interface module 102 must therefore organize the relocated content items after motion is effected; otherwise, the related content may overlap and be difficult to visualize. To handle this, the related content position determining module 124 defines a boundary about each piece of content and applies a rule dictating that the related content will be positioned radially adjacent the target content based on relatedness score, with the further provision that the related content items shall be repositioned so that the individual content items do not overlap one another.
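  • One way such a rule might be realized is sketched below; the radial, most-related-first placement with outward nudging on overlap is an assumed layout strategy, and the radii are illustrative.

```python
# Illustrative sketch of the post-motion cleanup rule: items are placed around
# the target radially, most related first, and nudged outward until their
# bounding circles no longer overlap. All numeric values are assumptions.
import math

def cluster_layout(target_pos, scores, item_radius=24.0):
    """scores: {item: relatedness}; returns a list of (item, x, y) placements."""
    placed = []
    ordered = sorted(scores, key=scores.get, reverse=True)
    for index, item in enumerate(ordered):
        angle = index * 2.39996        # golden-angle spacing spreads the items
        radius = 2 * item_radius       # start just outside the target's boundary
        while True:
            x = target_pos[0] + radius * math.cos(angle)
            y = target_pos[1] + radius * math.sin(angle)
            if all(math.hypot(x - px, y - py) >= 2 * item_radius for _, px, py in placed):
                placed.append((item, x, y))
                break
            radius += item_radius      # overlap detected: move one notch outward
    return placed
```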
  • Finally, having assembled a cluster of related content, the category reorganization user interface, through its category reorganizing module 126, associates recognition information with the newly formed cluster. This allows the cluster to be tagged and saved for recall at a later time and also to be used as a starting point for performing further content recognition. For example, the user might select a first category group, such as group 42 (FIG. 3) and then perform the above-described selection operation to assemble a cluster of related content items. The user could then save that assembled cluster and then use it as a basis for searching through a different category group selected from the category groups of FIG. 3.
  • For a better understanding of the software modules, refer to FIG. 7, where the process flow implemented by modules 118, 120, 122 and 124 has been illustrated. This process flow represents one presently preferred method for remapping the target content and associated content to a new category, as described above. Thus, at step 150 the process determines the selected content position sequentially. This step is performed by module 114 (FIG. 6) when the user selects a target content. Upon selection of a target, module 124 identifies the related content at step 152 and then further ascertains the current position of the related content within the grid 44 (see, for example, FIG. 4 a). In this presently preferred embodiment, the individual pieces of selected content are processed sequentially. That is, the process depicted in FIG. 7 is implemented as a loop whereby each item of content is sequentially processed. However, due to the speed of the CPU, the user perceives the individual content items as moving simultaneously as the user drags the target content towards the family space region 50.
  • At step 154, the system calculates the tensile force (invisible spring force) sequentially for each item of content. In this presently preferred embodiment, the tensile force can be modeled as a spring force according to the formula F=kx, where k is proportional to the degree of relationship between that content and the target content. While a linear relationship is presently preferred, non-linear relationships may be used instead to achieve a different attractive force profile between the target content and the related content.
  • In accordance with the linear relationship F=kx, when the displacement (x) between the target content and the related content changes upon movement of the target content by the user, the tensile force becomes non-zero and may be calculated by the stated formula. As noted, in this preferred embodiment, each item of content is treated individually and each item may have its own tensile force value, depending on the particular degree of relatedness.
  • Having calculated the tensile force for the given related content, at step 156 a motion calculation is performed to determine how the related content will move as the target content is moved by the user. Using a physical object analogy, motion of the related content can be calculated using the equation F=ma, where m is a standardized mass (which can be the same value for all pieces of content) and a is the acceleration produced by the force F. Because the mass of all content may be treated as equal, it is seen that the applied force (the tensile force for that piece of content) is proportional to the acceleration produced.
  • Thus, the motion process determines an acceleration value for each piece of related content. This acceleration value is then used to calculate the motion that the related content will exhibit. Such motion is, of course, a vector quantity. That is, motion of the related content proceeds in a certain direction as dictated by the following motion model implemented by the module 122. In this presently preferred embodiment, the motion model is based on an analogy whereby each item of related content is attracted to the target content by an invisible spring force (the tensile force) between them. Thus, the vector direction of motion of the related content is towards the center of the target content. Accordingly, as the user moves the target content, each item of related content will be attracted to and thus follow the trajectory of the target content, corresponding to step 158.
  • If desired, in order to give the visual appearance a more realistic “real world” feel, the following motion calculation can include a velocity-sensitive, dashpot, term that tends to resist instantaneous changes in motion, thereby making the related content appear to move as if they were immersed in a viscous medium. While not required, this additional velocity-sensitive term makes movement of the related content lag behind movement of the target content. Thus, when the user stops moving the target content, the related content will continue to coast toward their final destinations, the final destinations being determined at the points where the tensile force returns to zero or until further movement of the related content is blocked because another piece of content already occupies the space.
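  • A hedged sketch of the per-frame loop corresponding to steps 154 through 158 follows; the time step, unit mass, damping constant and force scale are all assumptions made for illustration.

```python
# Illustrative per-frame loop: compute each item's tensile force (F = k * x,
# with k proportional to relatedness), convert it to an acceleration with a
# shared unit mass (F = m * a), and integrate velocity and position. The
# damping term supplies the "coasting" lag described in the text.

def advance_related_items(items, target_pos, dt=0.016, mass=1.0, damping=0.8, k_max=0.5):
    """items: list of dicts with 'pos', 'vel' (tuples) and 'score' in [0, 1]."""
    for item in items:
        k_i = k_max * item["score"]
        fx = -k_i * (item["pos"][0] - target_pos[0]) - damping * item["vel"][0]
        fy = -k_i * (item["pos"][1] - target_pos[1]) - damping * item["vel"][1]
        ax, ay = fx / mass, fy / mass
        item["vel"] = (item["vel"][0] + ax * dt, item["vel"][1] + ay * dt)
        item["pos"] = (item["pos"][0] + item["vel"][0] * dt,
                       item["pos"][1] + item["vel"][1] * dt)
    return items
```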
  • In addition to computing the motion of each piece of related content, the process also generates the connecting wires or lines between each item of related content and the target content. This is performed at step 160. Specifically, this step defines a line segment between the centers of the respective thumbnail images. As discussed above, the boldness or color of these line segments can be adjusted based on the degree of relatedness.
  • After all of the related content have been selected and moved, the collected set of content are then remapped into a new category at step 164. If desired, this step may include prompting the user to provide a category label that is then associated with the items of content. The system permits more than one category tag or label to be associated with each item of content. Thus individual items of content can belong to more than one category, if desired.
  • In one embodiment, the system includes a mechanism to allow the user to control how many items of related content are “attracted” to the target content during the selection and moving process. As described in connection with FIG. 5, this can be accomplished by providing a touch-enabled control wheel that the user can manipulate to adjust the threshold of which content will be captured and which will not. The control of FIG. 5 works as follows. The control 72 produces a numerical threshold value that changes over a range of values from a low value to a high value as the user rotates the control clockwise or counterclockwise. The value produced by control 72 is then used to set the threshold by which the system determines whether an item of content will be included or not. For example, if the control 72 is manipulated to a high threshold, then only content having a relatedness score of above 75% will be captured. Conversely, if the control is manipulated to a low value, then content having a relatedness score of above 25% will be captured.
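  • A minimal sketch of this control, assuming the 25%-75% endpoints given above and treating the wheel position as a normalized value (how rotation direction maps onto that value is a user-interface choice), might look like the following:

```python
# Illustrative sketch of the rotary threshold control: the wheel position is
# mapped to a relatedness threshold, and only items whose score clears it are
# captured. The 25-75 endpoints mirror the example in the text.

def threshold_from_control(control_value, low=25.0, high=75.0):
    """control_value in [0, 1]: 0 captures only very related items (threshold 75),
    1 also captures loosely related items (threshold 25)."""
    control_value = max(0.0, min(1.0, control_value))
    return high - control_value * (high - low)

def captured_items(scores, threshold):
    """scores: {item: relatedness in 0-100}; items above the threshold follow."""
    return [item for item, score in scores.items() if score > threshold]

scores = {"a": 90.0, "b": 60.0, "c": 30.0}
print(captured_items(scores, threshold_from_control(0.0)))   # -> ['a']
print(captured_items(scores, threshold_from_control(1.0)))   # -> ['a', 'b', 'c']
```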
  • In yet another embodiment, the user is able to control how much content is captured by the speed at which the user moves the target content. The embodiment models an object being pulled across a frictional surface so that the frictional force acts to oppose movement in the pulling direction. The line or wire representing the tensile force is fragile and can stretch and break if the pulling force becomes too great. Weakly associated content is mapped using a more fragile connection, resulting in weakly associated content not being selected when its connection breaks. FIG. 8 illustrates how this may be accomplished.
  • Referring to FIG. 8, the system defines an artificial affinity space (not shown) whereby all items of content are related based on how strongly a target content (or icon) is related to the other content (or icons). The system then establishes a relationship between the affinity value and a physical parameter. For purposes of illustration, the relationship can be a kinematic parameter such as object weight, where content having a low affinity value (e.g., unrelated) is assigned a relatively heavier weight, whereas content having a high affinity value (e.g., highly related) is assigned a light weight. This object weight is then mapped to the display space 202, where displayed objects appear to move as the target content TD is pulled in a certain direction by the user. Objects having a mapped heavy weight will move more slowly, or not at all, based on a predetermined threshold friction assigned to the "surface" upon which the displayed objects sit in display space. Conversely, objects with lighter weight will move more freely, following the general trajectory of the target content as it moves.
  • Alternatively, the affinity space can map the affinity value to a tensile force parameter. Weakly related (e.g., unrelated) content objects are assigned a weak tensile force, whereas strongly related content objects are assigned a stronger tensile force. This tensile force is then mapped to the display space 202 so that the more strongly related the content, the more strongly it is attracted to the target content as it is pulled by the user.
  • As yet another alternative, the affinity space can map the affinity value to a fragility value corresponding to how strong the affinity relationship is. When the fragility value is mapped to display space, objects are connected to the target object through a link (shown as a connecting line in display space) having a strength based on the fragility value. Links with low fragility value break as the target object is pulled. In this way, unrelated or weakly related content severs its relationship with the target content and does not follow the target content as the target is pulled by the user. For example, the link represented by force F4 may represent a comparatively fragile link, depicted by a thin connecting line. This fragile link will break based on how the target content is moved.
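  • A hedged sketch of how such an affinity-space mapping might be parameterized is given below; the particular constants, and the idea of expressing fragility as a drag-speed breaking point, are illustrative assumptions rather than details taken from FIG. 8.

```python
# Illustrative sketch: translate each item's affinity to the target into
# hypothetical physical parameters (weight, tensile strength, breaking point),
# so that weakly related items ride on fragile links that snap when the target
# is dragged quickly.

def map_affinity(affinity):
    """affinity in [0, 1] -> dict of assumed physical parameters."""
    return {
        "weight": 1.0 + 4.0 * (1.0 - affinity),   # unrelated items are "heavier"
        "tensile_k": 0.1 + 0.9 * affinity,        # related items pull harder
        "break_speed": 200.0 * affinity,          # px/s the link survives before snapping
    }

def surviving_links(affinities, drag_speed):
    """Return the items whose links do not break at the given drag speed."""
    return [item for item, a in affinities.items()
            if drag_speed <= map_affinity(a)["break_speed"]]

affinities = {"close": 0.9, "loose": 0.3}
print(surviving_links(affinities, drag_speed=120.0))   # fast drag -> ['close']
print(surviving_links(affinities, drag_speed=40.0))    # slow drag -> ['close', 'loose']
```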
  • It should be understood that the embodiments depicted in FIG. 8 are intended to illustrate a few examples of how kinematic motion behavior can be mapped onto the otherwise unrelated problem of how to select and display related content. Other models can be used instead.
  • To further explain the relationship in FIG. 8, refer now to the flow diagram in FIG. 9. Beginning in display space 200, the user selects an object representing a target content at 206. Selection of this object causes the system in affinity space 202 to identify related objects at 208. The user then moves the selected object at 210 in display space. The speed at which the user moves the selected object is captured and used in affinity space at step 212 where the system determines which objects will follow for a given movement speed. Step 212 thus corresponds to the setting of the escape velocity threshold in FIG. 8. The system then in step 214 assigns a tensile force for each object, based on the mapped parameter (e.g., object weight, tensile force, link material, etc.). The assigned tensile forces Fn are then supplied back to display space where they are used to cause related objects to move using their respective assigned tensile forces at 216.
  • In the embodiment illustrated in FIGS. 4 a, 4 b and 8, each of the content elements was directly attracted to the target content. Variations of this basic concept are possible. Thus, as shown in FIGS. 14 a, 14 b and 15, content elements may be organized according to a tree structure. Thus certain elements are directly attracted to the target content, whereas other elements are attracted, as children, grandchildren, etc., of the directly attracted content.
  • Referring to FIGS. 14 a and 14 b, the user has selected element 46 (46 a, 46 b) as the target, moving it along the trajectory 70 as illustrated in FIG. 14 b. With reference to FIG. 14 a, note that elements 52 a, 54 a and 56 a are directly linked as having a strong affinity with target content 46. These linked elements, in turn, have affinities for other elements, thereby defining a parent-child-grandchild tree structure relationship. For example, element 52 a has an affinity with element 58 a, which, in turn, has an affinity with element 64 a. Thus, when the target content 46 b is moved along trajectory 70, as shown in FIG. 14 b, the child content 58 b and grandchild content 64 b of element 52 b are attracted as well.
  • FIG. 15 illustrates how the embodiment of FIGS. 14 a and 14 b operates. As shown in the display space 202 in FIG. 15, the items of content are capable of being attracted to one another (or pulled by one another) and thus captured by invisible tensile forces between one another. Thus, content element Am is attracted to element Ao by force F4; and element An is attracted to element Ao by force F5. In other words, the attractive force acts between elements that are closest in proximity to one another in display space; the elements are not necessarily directly attracted to the target content Td.
  • Computation of each individual force may be performed in this embodiment using essentially the same computational process as used to calculate the force for the embodiment of FIG. 8, with the exception that in FIG. 8 all elements are attracted to the same target TD, whereas in the present case of FIG. 15, the target for each content element is the parent of that element. Thus the force calculation can be performed recursively, following the tree structure.
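  • A hedged sketch of such a recursive, tree-following force calculation appears below; the tree encoding, the relatedness values and the scale factor are hypothetical.

```python
# Illustrative sketch of the tree-structured variant: each element is attracted
# to its parent rather than directly to the dragged target, so forces can be
# computed by walking the tree recursively from the target downward.

def tree_forces(node, positions, relatedness, k_max=0.5, forces=None):
    """node: {'id': ..., 'children': [...]}, rooted at the target content.

    positions    -- {id: (x, y)} current positions in display space
    relatedness  -- {id: score in [0, 1]} of each child toward its parent
    """
    if forces is None:
        forces = {}
    px, py = positions[node["id"]]
    for child in node.get("children", []):
        cx, cy = positions[child["id"]]
        k = k_max * relatedness[child["id"]]
        forces[child["id"]] = (-k * (cx - px), -k * (cy - py))   # pull toward parent
        tree_forces(child, positions, relatedness, k_max, forces)
    return forces

tree = {"id": "T", "children": [{"id": "A", "children": [{"id": "B", "children": []}]}]}
positions = {"T": (0.0, 0.0), "A": (50.0, 20.0), "B": (90.0, 35.0)}
relatedness = {"A": 0.8, "B": 0.6}
print(tree_forces(tree, positions, relatedness))   # -> {'A': (-20.0, -8.0), 'B': (-12.0, -4.5)}
```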
  • If desired, a control mechanism 72 may be included with the embodiment of FIGS. 14 a and 14 b. This is illustrated in FIG. 16.
  • Example Use Cases
  • Having now explained the basic principles of the technology, some examples of the technology in use will be presented with reference to FIGS. 10 a-10 d, 11 a-11 d, 12 a-12 d and 13 a-13 d. Referring first to FIGS. 10 a-10 d, there is shown a basic screen transition example whereby the user selects content (FIG. 10 a), drags the selected content thereby attracting related content (FIG. 10 b), maps the related content into the Family Space region (FIG. 10 c) and then associates a label [Fido] with the gathered content (FIG. 10 d). The associated content, having been labeled, is now available for display as a new category group.
  • FIGS. 11 a-11 d show a representative use case, similar to that of FIGS. 10 a-10 d, but where the user specifically selects the portion of the photograph in FIG. 11 b depicting a dog. In other words, the user selects a portion of the image, such as the dog, and that portion is used as a basis for a relatedness query to find other images depicting that dog.
  • FIGS. 12 a-12 d illustrate a further use case, similar to that of FIGS. 11 a-11 d, but illustrating that the user can create new categories based on different selected content within a given image. Thus in FIG. 12 a the user selects Mt. Fuji in the image and then pulls related images containing Mt. Fuji into the Family Space at FIG. 12 b. Similarly, in FIG. 12 c, the user starts with the same photograph as FIG. 12 a, but this time selects the blooming cherry blossoms and uses that selected content to pull related images containing blooming cherry blossoms.
  • FIGS. 13 a-13 d illustrate how it is possible to create more complex categories, based on previously created categories. Thus as illustrated in FIG. 13 b, the user selects one of the images within a previously defined category. The selected image is displayed as an enlarged image in the window to the left. Then, as depicted in FIG. 13 c, the user selects one of the persons in the photograph and pulls a new category containing the selected person into a family space. Note that the family space now contains both the original category cluster and the newly created one. These two category clusters may then be joined to define a composite cluster if desired.
  • The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
  • Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.

Claims (27)

1. A method comprising:
displaying items of content as individual graphical images organized according to a first spatial arrangement upon a display screen;
upon selection of a target content from the displayed items of content, employing a processor to identify items of related content in connection to the target content according to relatedness of each item of related content to the target content;
associating a physical parameter with each item of related content based on the relatedness thereof; and
upon movement of the target content, displaying the items of related content to move and follow the target content on the display screen, where each item of related content proceeds as if it were attracted to the target content by a force characterized by the physical parameter thereof.
2. The method of claim 1 wherein the step of displaying the items of related content employs a computer-generated following algorithm that simulates motion of an attractive force between an item of related content and the target content.
3. The method of claim 1 wherein the step of displaying the items of related content employs a computer-generated following algorithm that simulates a spring force that pulls the item of related content towards the target content.
4. The method of claim 3 wherein the step of causing items of related content to move employs a computer-generated following algorithm that further simulates a frictional force that retards movement effected by said spring force.
5. The method of claim 1 further comprising using the processor to generate and display connecting lines between the target content and each item of related content, where the lines are rendered upon the electronic display screen using visually perceptible features to denote different degrees of relatedness, based on the relatedness score associated with each item of related content.
6. The method of claim 1 further including employing user-controllable means for defining a threshold below which items of content are deemed not related.
7. The method of claim 6 wherein said user-controllable means is a graphically displayed, processor generated control that the user manipulates to manually set the threshold.
8. The method of claim 6 wherein said user-controllable means is a processor generated threshold based upon the speed by which the user moves the target content.
9. The method of claim 1 further comprising responding to user entry of a tag whereby the displayed items of related content are associated with said tag and organized as a category group.
10. A method using a processor with associated electronic display screen to gather and organize items of content comprising:
displaying the items of content as individual graphical images organized according to a first spatial arrangement upon the electronic display screen;
responding to user selection of a target content, selected from said displayed items of content, by employing a processor to identify items of related content according to a predefined relatedness metric and associating a relatedness score with each item of related content;
associating a tensile force with each item of related content based upon that item's relatedness score;
responding to user movement of the target content by causing items of related content to move and follow the target content, where movement of each item of related content proceeds as if attracted to the target content by a force equal to that item's associated tensile force;
displaying the items of related content upon the electronic display screen as a spatial grouping around the target content whereby items of related content having a greater tensile force are positioned generally closer to the target content than related content having a lesser tensile force.
11. The method of claim 10 wherein the step of causing items of related content to move employs a computer-generated following algorithm that simulates motion of an attractive force between an item of related content and the target content.
12. The method of claim 10 wherein the step of causing items of related content to move employs a computer-generated following algorithm that simulates a spring force that pulls the item of related content towards the target content.
13. The method of claim 12 wherein the step of causing items of related content to move employs a computer-generated following algorithm that further simulates a velocity-sensitive dashpot force that retards movement effected by said spring force.
14. The method of claim 10 further comprising using the processor to generate and display connecting lines between the target content and each item of related content, where the lines are rendered upon the electronic display screen using visually perceptible features to denote different degrees of relatedness, based on the relatedness score associated with each item of related content.
15. The method of claim 10 further including employing user-controllable means for defining a threshold below which items of content are deemed not related.
16. The method of claim 15 wherein said user-controllable means is a graphically displayed, processor generated control that the user manipulates to manually set the threshold.
17. The method of claim 15 wherein said user-controllable means is a processor generated threshold based upon the speed by which the user moves the target content.
18. The method of claim 10 further comprising responding to user entry of a tag whereby the displayed items of related content are associated with said tag and organized as a category group.
19. A method for categorizing content using a graphical user interface, comprising:
displaying a plurality of icons in a content selection area of a display, each icon representing a selectable content object;
visually depicting movement of a selected icon from the content selection area to a grouping area of the display that is spatially distinct from the content selection area, where the selected icon was selected by a user from the plurality of icons;
calculating a correlation metric for each non-selected icon in the plurality of icons, where the correlation metric quantifies a correlation between the selected icon and the non-selected icon;
selecting a subset of the plurality of the icons having a correlation metric that exceeds a threshold;
visually depicting movement of icons in the subset of icons from the content selection area to the grouping area of the display, where movement of the icons in the subset of icons is coordinated with movement of the selected icon; and
providing the user with a perceptible indicator of the correlation metric for each of the icons in the subset of icons while visually depicting movement of the icons in the subset of icons.
20. The method of claim 19 wherein calculating a correlation metric further comprises determining a spatial distance on the display between the selected icon and each of the non-selected icons and calculating the correlation metric for each non-selected icon using the distance between the selected icon and the non-selected icon.
21. The method of claim 19 further comprises moving the subset of icons based on a tensile force between each of the subset and the selected icon, where the tensile force is a function of the correlation metric.
22. The method of claim 19 further comprises displaying a visible connection between the selected icon and each of the icons in the subset of icons while visually depicting movement of the icons in the subset of icons.
23. The method of claim 19 further comprises displaying a line from the selected icon and to each of the icons in the subset of icons, where at least one of width or brightness of the line to a given icon in the subset of icons is based on the correlation metric for the given icon, thereby providing the user with a perceptible indicator.
24. The method of claim 19 further comprises visually depicting movement of icons in the subset of icons as following the selected icon from the content selection area to the grouping area.
25. The method of claim 24 further comprises setting velocity at which a given icon in the subset of icons moves based on the correlation metric for the given icon, thereby providing the user with a perceptible indicator.
26. The method of claim 19 further comprises adjusting a value of the threshold in accordance with input from the user.
27. The method of claim 19 further comprises displaying a given icon in the subset of icons spatially in relation to the selected icon in the grouping area in accordance with the correlation metric for the given icon.
US13/091,620 2011-04-21 2011-04-21 Apparatus, Method and Computer-Implemented Program for Editable Categorization Abandoned US20120272171A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US13/091,620 US20120272171A1 (en) 2011-04-21 2011-04-21 Apparatus, Method and Computer-Implemented Program for Editable Categorization
PCT/JP2012/002738 WO2012144225A1 (en) 2011-04-21 2012-04-20 Classification device and classification method
US13/806,100 US9348500B2 (en) 2011-04-21 2012-04-20 Categorizing apparatus and categorizing method
CN201280001708.8A CN102959549B (en) 2011-04-21 2012-04-20 Classification device and classification method
JP2013510895A JP5982363B2 (en) 2011-04-21 2012-04-20 Classification apparatus and classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/091,620 US20120272171A1 (en) 2011-04-21 2011-04-21 Apparatus, Method and Computer-Implemented Program for Editable Categorization

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/806,100 Continuation US9348500B2 (en) 2011-04-21 2012-04-20 Categorizing apparatus and categorizing method

Publications (1)

Publication Number Publication Date
US20120272171A1 true US20120272171A1 (en) 2012-10-25

Family

ID=47022238

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/091,620 Abandoned US20120272171A1 (en) 2011-04-21 2011-04-21 Apparatus, Method and Computer-Implemented Program for Editable Categorization
US13/806,100 Active 2032-01-28 US9348500B2 (en) 2011-04-21 2012-04-20 Categorizing apparatus and categorizing method

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/806,100 Active 2032-01-28 US9348500B2 (en) 2011-04-21 2012-04-20 Categorizing apparatus and categorizing method

Country Status (4)

Country Link
US (2) US20120272171A1 (en)
JP (1) JP5982363B2 (en)
CN (1) CN102959549B (en)
WO (1) WO2012144225A1 (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120304090A1 (en) * 2011-05-28 2012-11-29 Microsoft Corporation Insertion of picture content for use in a layout
US20120324380A1 (en) * 2011-06-16 2012-12-20 Nokia Corporation Method and apparatus for controlling a spatial relationship between at least two groups of content during movement of the content
US20130198631A1 (en) * 2012-02-01 2013-08-01 Michael Matas Spring Motions During Object Animation
US20130222299A1 (en) * 2012-02-24 2013-08-29 Samsung Electronics Co., Ltd. Method and apparatus for editing content view in a mobile device
US20130246956A1 (en) * 2012-03-15 2013-09-19 Fuji Xerox Co., Ltd. Information processing apparatus, non-transitory computer readable medium, and information processing method
US20130318479A1 (en) * 2012-05-24 2013-11-28 Autodesk, Inc. Stereoscopic user interface, view, and object manipulation
US20140082530A1 (en) * 2012-09-14 2014-03-20 Joao Batista S. De OLIVEIRA Document layout
US20140096010A1 (en) * 2012-09-28 2014-04-03 Interactive Memories, Inc. Methods for Motion Simulation of Digital Assets Presented in an Electronic Interface using Single Point or Multi-Point Inputs
EP2767953A1 (en) * 2013-02-13 2014-08-20 BlackBerry Limited Device with enhanced augmented reality functionality
US20150100453A1 (en) * 2013-10-09 2015-04-09 Ebay Inc. Color indication
US20150106387A1 (en) * 2013-10-11 2015-04-16 Humax Co., Ltd. Method and apparatus of representing content information using sectional notification method
US20150227531A1 (en) * 2014-02-10 2015-08-13 Microsoft Corporation Structured labeling to facilitate concept evolution in machine learning
US20150268825A1 (en) * 2014-03-18 2015-09-24 Here Global B.V. Rendering of a media item
US9208583B2 (en) 2013-02-13 2015-12-08 Blackberry Limited Device with enhanced augmented reality functionality
US20160134667A1 (en) * 2014-11-12 2016-05-12 Tata Consultancy Services Limited Content collaboration
US9557876B2 (en) 2012-02-01 2017-01-31 Facebook, Inc. Hierarchical user interface
US9645724B2 (en) 2012-02-01 2017-05-09 Facebook, Inc. Timeline based content organization
US20180182149A1 (en) * 2016-12-22 2018-06-28 Seerslab, Inc. Method and apparatus for creating user-created sticker and system for sharing user-created sticker
EP3399733A1 (en) * 2017-05-02 2018-11-07 OCE Holding B.V. A system and a method for dragging and dropping a digital object onto a digital receptive module on a pixel display screen
US20190180785A1 (en) * 2015-09-30 2019-06-13 Apple Inc. Audio Authoring and Compositing
US10726594B2 (en) 2015-09-30 2020-07-28 Apple Inc. Grouping media content for automatically generating a media presentation
US10976895B2 (en) * 2016-09-23 2021-04-13 Samsung Electronics Co., Ltd. Electronic apparatus and controlling method thereof

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011092793A1 (en) * 2010-01-29 2011-08-04 パナソニック株式会社 Data processing device
JP5645530B2 (en) * 2010-07-29 2014-12-24 キヤノン株式会社 Information processing apparatus and control method thereof
JP2012244526A (en) * 2011-05-23 2012-12-10 Sony Corp Information processing device, information processing method, and computer program
JP5502943B2 (en) * 2012-06-29 2014-05-28 楽天株式会社 Information processing apparatus, authentication apparatus, information processing method, and information processing program
JP5895828B2 (en) * 2012-11-27 2016-03-30 富士ゼロックス株式会社 Information processing apparatus and program
JP6188370B2 (en) * 2013-03-25 2017-08-30 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation Object classification method, apparatus and program.
AU352884S (en) * 2013-06-05 2013-12-11 Samsung Electronics Co Ltd Display screen with graphical user interface
US10747416B2 (en) 2014-02-13 2020-08-18 Samsung Electronics Co., Ltd. User terminal device and method for displaying thereof
US10866714B2 (en) * 2014-02-13 2020-12-15 Samsung Electronics Co., Ltd. User terminal device and method for displaying thereof
JP6476574B2 (en) * 2014-03-28 2019-03-06 富士通株式会社 Production plan preparation support program, production plan preparation support method and production plan preparation support device
CN105488067B (en) * 2014-09-19 2020-04-21 中兴通讯股份有限公司 Slide generation method and device
CN105528166A (en) * 2014-09-28 2016-04-27 联想(北京)有限公司 Control method and control apparatus
US10739939B2 (en) * 2015-04-28 2020-08-11 International Business Machines Corporation Control of icon movement on a graphical user interface
US20160364266A1 (en) * 2015-06-12 2016-12-15 International Business Machines Corporation Relationship management of application elements
DE102015212223B3 (en) * 2015-06-30 2016-08-11 Continental Automotive Gmbh Method for controlling a display device for a vehicle
JP6586857B2 (en) * 2015-10-26 2019-10-09 富士ゼロックス株式会社 Information processing apparatus and information processing program
WO2017120300A1 (en) * 2016-01-05 2017-07-13 Hillcrest Laboratories, Inc. Content delivery systems and methods
CN110062269A (en) 2018-01-18 2019-07-26 腾讯科技(深圳)有限公司 Extra objects display methods, device and computer equipment
JP7035662B2 (en) * 2018-03-15 2022-03-15 京セラドキュメントソリューションズ株式会社 Display control method for mobile terminal devices and mobile terminal devices
JP7329957B2 (en) * 2019-04-25 2023-08-21 東芝テック株式会社 Virtual object display device and program
JP7095002B2 (en) * 2020-02-19 2022-07-04 キヤノン株式会社 Image processing equipment, imaging equipment, image processing methods, computer programs and storage media

Family Cites Families (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0685144B2 (en) * 1990-11-15 1994-10-26 インターナショナル・ビジネス・マシーンズ・コーポレイション Selective controller for overlay and underlay
US5715416A (en) * 1994-09-30 1998-02-03 Baker; Michelle User definable pictorial interface for accessing information in an electronic file system
JPH09288556A (en) * 1996-04-23 1997-11-04 Atsushi Matsushita Visualized system for hyper media
US6263507B1 (en) * 1996-12-05 2001-07-17 Interval Research Corporation Browser for use in navigating a body of information, with particular application to browsing information represented by audiovisual data
US6188405B1 (en) * 1998-09-14 2001-02-13 Microsoft Corporation Methods, apparatus and data structures for providing a user interface, which exploits spatial memory, to objects
US6594673B1 (en) * 1998-09-15 2003-07-15 Microsoft Corporation Visualizations for collaborative information
GB9908631D0 (en) * 1999-04-15 1999-06-09 Canon Kk Search engine user interface
US7139421B1 (en) * 1999-06-29 2006-11-21 Cognex Corporation Methods and apparatuses for detecting similar features within an image
US7308140B2 (en) * 2000-05-31 2007-12-11 Samsung Electronics Co., Ltd. Method and device for measuring similarity between images
US6950989B2 (en) 2000-12-20 2005-09-27 Eastman Kodak Company Timeline-based graphical user interface for efficient image database browsing and retrieval
US6826316B2 (en) * 2001-01-24 2004-11-30 Eastman Kodak Company System and method for determining image similarity
JP4096541B2 (en) * 2001-10-01 2008-06-04 株式会社日立製作所 Screen display method
US20040201702A1 (en) 2001-10-23 2004-10-14 White Craig R. Automatic location identification and categorization of digital photographs
US20030160824A1 (en) * 2002-02-28 2003-08-28 Eastman Kodak Company Organizing and producing a display of images, labels and custom artwork on a receiver
US7043474B2 (en) * 2002-04-15 2006-05-09 International Business Machines Corporation System and method for measuring image similarity based on semantic meaning
AU2003252024A1 (en) * 2002-07-16 2004-02-02 Bruce L. Horn Computer system for automatic organization, indexing and viewing of information from multiple sources
US20040090460A1 (en) * 2002-11-12 2004-05-13 Hideya Kawahara Method and apparatus for updating a User Interface for a computer system based on a physics model
US8312049B2 (en) * 2003-06-24 2012-11-13 Microsoft Corporation News group clustering based on cross-post graph
US8600920B2 (en) * 2003-11-28 2013-12-03 World Assets Consulting Ag, Llc Affinity propagation in adaptive network-based systems
US10210159B2 (en) * 2005-04-21 2019-02-19 Oath Inc. Media object metadata association and ranking
US7925985B2 (en) * 2005-07-29 2011-04-12 Sap Ag Methods and apparatus for process thumbnail view
US20070100798A1 (en) * 2005-10-31 2007-05-03 Shyam Kapur Community built result sets and methods of using the same
US7664760B2 (en) * 2005-12-22 2010-02-16 Microsoft Corporation Inferred relationships from user tagged content
US7509588B2 (en) 2005-12-30 2009-03-24 Apple Inc. Portable electronic device with interface reconfiguration mode
US7907755B1 (en) * 2006-05-10 2011-03-15 Aol Inc. Detecting facial similarity based on human perception of facial similarity
US8031914B2 (en) * 2006-10-11 2011-10-04 Hewlett-Packard Development Company, L.P. Face-based image clustering
US20080089591A1 (en) 2006-10-11 2008-04-17 Hui Zhou Method And Apparatus For Automatic Image Categorization
US20080147488A1 (en) * 2006-10-20 2008-06-19 Tunick James A System and method for monitoring viewer attention with respect to a display and determining associated charges
US7992097B2 (en) * 2006-12-22 2011-08-02 Apple Inc. Select drag and drop operations on video thumbnails across clip boundaries
US7680882B2 (en) * 2007-03-06 2010-03-16 Friendster, Inc. Multimedia aggregation in an online social network
US7895533B2 (en) * 2007-03-13 2011-02-22 Apple Inc. Interactive image thumbnails
US20080307330A1 (en) * 2007-06-08 2008-12-11 Apple Inc. Visualization object divet
JP2009080580A (en) * 2007-09-25 2009-04-16 Toshiba Corp Image display device and display method
JP2009087057A (en) * 2007-09-28 2009-04-23 Sharp Corp Clustering device for clustering vector data, clustering method, program, and recording medium
US8254684B2 (en) * 2008-01-02 2012-08-28 Yahoo! Inc. Method and system for managing digital photos
US20090204915A1 (en) * 2008-02-08 2009-08-13 Sony Ericsson Mobile Communications Ab Method for Switching Desktop Panels in an Active Desktop
US20090228830A1 (en) * 2008-02-20 2009-09-10 Herz J C System and Method for Data Analysis and Presentation
JP4675995B2 (en) * 2008-08-28 2011-04-27 株式会社東芝 Display processing apparatus, program, and display processing method
US8683390B2 (en) * 2008-10-01 2014-03-25 Microsoft Corporation Manipulation of objects on multi-touch user interface
US8099419B2 (en) * 2008-12-19 2012-01-17 Sap Ag Inferring rules to classify objects in a file management system
US8774498B2 (en) * 2009-01-28 2014-07-08 Xerox Corporation Modeling images as sets of weighted features
US8175376B2 (en) * 2009-03-09 2012-05-08 Xerox Corporation Framework for image thumbnailing based on visual similarity
US20100333140A1 (en) * 2009-06-29 2010-12-30 Mieko Onodera Display processing apparatus, display processing method, and computer program product
US20110029904A1 (en) * 2009-07-30 2011-02-03 Adam Miles Smith Behavior and Appearance of Touch-Optimized User Interface Elements for Controlling Computer Function
US8577887B2 (en) * 2009-12-16 2013-11-05 Hewlett-Packard Development Company, L.P. Content grouping systems and methods
US8468465B2 (en) * 2010-08-09 2013-06-18 Apple Inc. Two-dimensional slider control

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020080180A1 (en) * 1992-04-30 2002-06-27 Richard Mander Method and apparatus for organizing information in a computer system
US5754179A (en) * 1995-06-07 1998-05-19 International Business Machines Corporation Selection facilitation on a graphical interface
US20020033848A1 (en) * 2000-04-21 2002-03-21 Sciammarella Eduardo Agusto System for managing data objects
US20030007017A1 (en) * 2001-07-05 2003-01-09 International Business Machines Corporation Temporarily moving adjacent or overlapping icons away from specific icons being approached by an on-screen pointer on user interactive display interfaces
US20060161867A1 (en) * 2003-01-21 2006-07-20 Microsoft Corporation Media frame object visualization system
US20040150664A1 (en) * 2003-02-03 2004-08-05 Microsoft Corporation System and method for accessing remote screen content
US20040189707A1 (en) * 2003-03-27 2004-09-30 Microsoft Corporation System and method for filtering and organizing items based on common elements
US20050044100A1 (en) * 2003-08-20 2005-02-24 Hooper David Sheldon Method and system for visualization and operation of multiple content filters
US20060190817A1 (en) * 2005-02-23 2006-08-24 Microsoft Corporation Filtering a collection of items
US20060242139A1 (en) * 2005-04-21 2006-10-26 Yahoo! Inc. Interestingness ranking of media objects
US20070027855A1 (en) * 2005-07-27 2007-02-01 Sony Corporation Information processing apparatus, information processing method, and program
US7542951B1 (en) * 2005-10-31 2009-06-02 Amazon Technologies, Inc. Strategies for providing diverse recommendations
US20090307623A1 (en) * 2006-04-21 2009-12-10 Anand Agarawala System for organizing and visualizing display objects
US20070271524A1 (en) * 2006-05-19 2007-11-22 Fuji Xerox Co., Ltd. Interactive techniques for organizing and retrieving thumbnails and notes on large displays
US20080077583A1 (en) * 2006-09-22 2008-03-27 Pluggd Inc. Visual interface for identifying positions of interest within a sequentially ordered information encoding
US20080077874A1 (en) * 2006-09-27 2008-03-27 Zachary Adam Garbow Emphasizing Drop Destinations for a Selected Entity Based Upon Prior Drop Destinations
US20080104536A1 (en) * 2006-10-27 2008-05-01 Canon Kabushiki Kaisha Information Processing Apparatus, Control Method For Same, Program, And Storage Medium
US20080235628A1 (en) * 2007-02-27 2008-09-25 Quotidian, Inc. 3-d display for time-based information
US20080229222A1 (en) * 2007-03-16 2008-09-18 Sony Computer Entertainment Inc. User interface for processing data by utilizing attribute information on data
US20080307359A1 (en) * 2007-06-08 2008-12-11 Apple Inc. Grouping Graphical Representations of Objects in a User Interface
US20080307335A1 (en) * 2007-06-08 2008-12-11 Apple Inc. Object stack
US8220022B1 (en) * 2007-12-12 2012-07-10 Google Inc. Traversing video recommendations
US20120041779A1 (en) * 2009-04-15 2012-02-16 Koninklijke Philips Electronics N.V. Clinical decision support systems and methods
US20100333025A1 (en) * 2009-06-30 2010-12-30 Verizon Patent And Licensing Inc. Media Content Instance Search Methods and Systems
US20110055773A1 (en) * 2009-08-25 2011-03-03 Google Inc. Direct manipulation gestures

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120304090A1 (en) * 2011-05-28 2012-11-29 Microsoft Corporation Insertion of picture content for use in a layout
US9600176B2 (en) * 2011-06-16 2017-03-21 Nokia Technologies Oy Method and apparatus for controlling a spatial relationship between at least two groups of content during movement of the content
US20120324380A1 (en) * 2011-06-16 2012-12-20 Nokia Corporation Method and apparatus for controlling a spatial relationship between at least two groups of content during movement of the content
US9606708B2 (en) 2012-02-01 2017-03-28 Facebook, Inc. User intent during object scrolling
US9645724B2 (en) 2012-02-01 2017-05-09 Facebook, Inc. Timeline based content organization
US11132118B2 (en) 2012-02-01 2021-09-28 Facebook, Inc. User interface editor
US9229613B2 (en) 2012-02-01 2016-01-05 Facebook, Inc. Transitions among hierarchical user interface components
US10775991B2 (en) 2012-02-01 2020-09-15 Facebook, Inc. Overlay images and texts in user interface
US20130227494A1 (en) * 2012-02-01 2013-08-29 Michael Matas Folding and Unfolding Images in a User Interface
US20130198631A1 (en) * 2012-02-01 2013-08-01 Michael Matas Spring Motions During Object Animation
US8976199B2 (en) 2012-02-01 2015-03-10 Facebook, Inc. Visual embellishment for objects
US9235318B2 (en) 2012-02-01 2016-01-12 Facebook, Inc. Transitions among hierarchical user-interface layers
US8990719B2 (en) 2012-02-01 2015-03-24 Facebook, Inc. Preview of objects arranged in a series
US8990691B2 (en) 2012-02-01 2015-03-24 Facebook, Inc. Video object behavior in a user interface
US9003305B2 (en) * 2012-02-01 2015-04-07 Facebook, Inc. Folding and unfolding images in a user interface
US9552147B2 (en) 2012-02-01 2017-01-24 Facebook, Inc. Hierarchical user interface
US9239662B2 (en) 2012-02-01 2016-01-19 Facebook, Inc. User interface editor
US9098168B2 (en) * 2012-02-01 2015-08-04 Facebook, Inc. Spring motions during object animation
US9235317B2 (en) 2012-02-01 2016-01-12 Facebook, Inc. Summary and navigation of hierarchical levels
US9557876B2 (en) 2012-02-01 2017-01-31 Facebook, Inc. Hierarchical user interface
US8984428B2 (en) 2012-02-01 2015-03-17 Facebook, Inc. Overlay images and texts in user interface
US20130222299A1 (en) * 2012-02-24 2013-08-29 Samsung Electronics Co., Ltd. Method and apparatus for editing content view in a mobile device
US9170725B2 (en) * 2012-03-15 2015-10-27 Fuji Xerox Co., Ltd. Information processing apparatus, non-transitory computer readable medium, and information processing method that detect associated documents based on distance between documents
US20130246956A1 (en) * 2012-03-15 2013-09-19 Fuji Xerox Co., Ltd. Information processing apparatus, non-transitory computer readable medium, and information processing method
US20130318479A1 (en) * 2012-05-24 2013-11-28 Autodesk, Inc. Stereoscopic user interface, view, and object manipulation
US20140082530A1 (en) * 2012-09-14 2014-03-20 Joao Batista S. De OLIVEIRA Document layout
US20140096010A1 (en) * 2012-09-28 2014-04-03 Interactive Memories, Inc. Methods for Motion Simulation of Digital Assets Presented in an Electronic Interface using Single Point or Multi-Point Inputs
US9208583B2 (en) 2013-02-13 2015-12-08 Blackberry Limited Device with enhanced augmented reality functionality
EP2767953A1 (en) * 2013-02-13 2014-08-20 BlackBerry Limited Device with enhanced augmented reality functionality
US20150100453A1 (en) * 2013-10-09 2015-04-09 Ebay Inc. Color indication
US20150106387A1 (en) * 2013-10-11 2015-04-16 Humax Co., Ltd. Method and apparatus of representing content information using sectional notification method
US10083212B2 (en) * 2013-10-11 2018-09-25 Humax Co., Ltd. Method and apparatus of representing content information using sectional notification method
US10318572B2 (en) * 2014-02-10 2019-06-11 Microsoft Technology Licensing, Llc Structured labeling to facilitate concept evolution in machine learning
US20150227531A1 (en) * 2014-02-10 2015-08-13 Microsoft Corporation Structured labeling to facilitate concept evolution in machine learning
US20150268825A1 (en) * 2014-03-18 2015-09-24 Here Global B.V. Rendering of a media item
US20160134667A1 (en) * 2014-11-12 2016-05-12 Tata Consultancy Services Limited Content collaboration
US20190180785A1 (en) * 2015-09-30 2019-06-13 Apple Inc. Audio Authoring and Compositing
US10726594B2 (en) 2015-09-30 2020-07-28 Apple Inc. Grouping media content for automatically generating a media presentation
US10976895B2 (en) * 2016-09-23 2021-04-13 Samsung Electronics Co., Ltd. Electronic apparatus and controlling method thereof
US20180182149A1 (en) * 2016-12-22 2018-06-28 Seerslab, Inc. Method and apparatus for creating user-created sticker and system for sharing user-created sticker
EP3399733A1 (en) * 2017-05-02 2018-11-07 OCE Holding B.V. A system and a method for dragging and dropping a digital object onto a digital receptive module on a pixel display screen

Also Published As

Publication number Publication date
CN102959549A (en) 2013-03-06
US9348500B2 (en) 2016-05-24
WO2012144225A1 (en) 2012-10-26
CN102959549B (en) 2017-02-15
JPWO2012144225A1 (en) 2014-07-28
JP5982363B2 (en) 2016-08-31
US20130097542A1 (en) 2013-04-18

Similar Documents

Publication Publication Date Title
US20120272171A1 (en) Apparatus, Method and Computer-Implemented Program for Editable Categorization
US11181985B2 (en) Dynamic user interactions for display control
US20220270509A1 (en) Predictive virtual training systems, apparatuses, interfaces, and methods for implementing same
US11526255B2 (en) Systems and methods for organizing and identifying documents via hierarchies and dimensions of tags
KR100246066B1 (en) Multi-lateral annotation and hot links in interactive 3d graphics
US20130125069A1 (en) System and Method for Interactive Labeling of a Collection of Images
US8010907B2 (en) Automatic 3D object generation and deformation for representation of data files based on taxonomy classification
CN109074372A (en) Metadata is applied using drag and drop
US9910835B2 (en) User interface for creation of content works
CN109716275A (en) Based on personalized theme with multi-dimensional model come the method that shows image
CN109074209A (en) The details pane of user interface
US11150783B1 (en) GUI based methods and systems for working with large numbers of interactive items
US20230342024A1 (en) Systems and Methods of Interacting with a Virtual Grid in a Three-dimensional (3D) Sensory Space
US11269419B2 (en) Virtual reality platform with haptic interface for interfacing with media items having metadata
CN116009751A (en) Interface element back display method and electronic equipment
CN117909524A (en) Visual search determination for text-to-image replacement
Steiner ScenARy: Scene Awareness in HMD-assisted Multi-Machine Scenarios
KR20230157877A (en) Object filtering and information display in an augmented-reality experience
Wang et al. Using the Split View Controller
WO2023069068A1 (en) Gui based methods and systems for working with large numbers of interactive items
Tudoreanu et al. Legends as a device for interacting with visualizations
CN115061760A (en) State perception element visualization method oriented to analysis process
Yoo et al. Development of metaphor-based interface design for VR manipulator
Appert How to Model, Evaluate and Generate Interaction Techniques?
TOOL JARtool 2.0

Legal Events

Date Code Title Description
AS Assignment

Owner name: PANASONIC CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ICHO, KEIJI;KAWANISHI, RYOUICHI;REEL/FRAME:026421/0322

Effective date: 20110421

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION