WO2008151148A1 - Method and system for searching for digital assets - Google Patents

Method and system for searching for digital assets

Info

Publication number
WO2008151148A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
keywords
keyword
digital assets
digital
Prior art date
Application number
PCT/US2008/065561
Other languages
French (fr)
Inventor
Nate Gandert
Chris Ziobro
Evan Cariss
Mary Forster
Mary Pat Gotschall
Joy Moffatt
Jeff Oberlander
Jenny Blackburn
Debbie Cargile
Aaron Kraemer
Original Assignee
Getty Images, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Getty Images, Inc. filed Critical Getty Images, Inc.
Priority to AU2008259833A priority Critical patent/AU2008259833B2/en
Priority to EP08756629A priority patent/EP2165279A4/en
Publication of WO2008151148A1 publication Critical patent/WO2008151148A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/3332Query translation
    • G06F16/3338Query expansion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3322Query formulation using system suggestions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3325Reformulation based on results of preceding query
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/3332Query translation
    • G06F16/3334Selection or weighting of terms from queries, including natural language queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/374Thesaurus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Definitions

  • a user specifies one or more search terms in order to locate images corresponding to the search terms.
  • An image typically has metadata associated with it.
  • An image search generally works by examining the associated metadata to determine if the metadata match a user-specified search term.
  • images for which the associated metadata match a user-specified search term are then returned to the user.
  • Figure 1 is a block diagram of a suitable computer that may employ aspects of the invention.
  • Figure 2 is a block diagram illustrating a suitable system in which aspects of the invention may operate in a networked computer environment.
  • Figure 3 is a flow diagram of a process for searching for images.
  • Figure 4 depicts a representative interface.
  • Figure 5A depicts an interface in which an initial search has been refined.
  • Figure 5B depicts the interface in which subsequent refinements to the initial search have been made.
  • Figure 6 is a flow diagram of a process for returning terms in response to a user search.
  • Figure 7 is a schematic view of the composition of sets of terms.
  • Figures 8A-8E depict an interface in accordance with some embodiments of the invention.
  • Figures 9A-9E depict an interface in accordance with some embodiments of the invention.
  • a software and/or hardware facility for searching for digital assets, such as images, is disclosed.
  • the facility provides a user with a visual and tactile method, or a visual method mixed with metadata, for searching for relevant images.
  • the facility allows a user to specify one or more search terms in an initial search request.
  • the facility may return images and terms or keywords in response to the initial search request that are intended to be both useful and inspirational to the user, to enable the user to brainstorm and find inspiration, and to visually locate relevant images.
  • the words "term” or "keyword” can include a phrase of one or more descriptive terms or keywords.
  • the facility locates images and/or terms that are classified in a structured vocabulary, such as the one described in U.S. Patent No. 6,735,583, assigned to Getty Images, Inc.
  • the structured vocabulary may also be referred to as the controlled vocabulary.
  • the facility allows the user to refine or expand the initial search by specifying that some of the returned images and/or terms shall or shall not become part of a subsequent search.
  • the facility allows the user to continue refining or expanding subsequent searches reiteratively or to start over with a new initial search.
  • the facility can also be used to search for other types of digital assets, such as video files, audio files, animation files, other multimedia files, text documents, text strings, keywords, or other types of documents or digital resources.
  • the facility provides the user with an interface that allows the user to search for images and view images and/or terms returned in response to the user's request.
  • the interface consists of a search region and three additional regions.
  • the facility permits the user to specify one or more search terms in the search region.
  • the facility places images and/or terms that are responsive to the search terms in the first additional region, which may be called the "Inspiration Palette.”
  • the facility allows the user to place some of the returned images and/or terms in the second additional region, which may be called the "Action Palette.”
  • the facility places images returned in response to the user's initial search and to subsequent searches in the third additional region, which may be called the "Reaction Palette.”
  • the facility may allow the user to specify a single search term or multiple search terms in the search region.
  • the facility may provide the user with the ability to select one or more categories in order to filter their initial and subsequent search results.
  • a filter may be selected by techniques well-known in the art, such as by a drop-down list or radio buttons.
  • the categories may be represented by labels corresponding to pre-defined groups of users to which use of the facility is directed, such as "Creative,” "Editorial,” or "Film.”
  • the facility may require the user to specify a filter before requesting a search or the facility may allow the user to specify a filter or change or remove a specified filter after requesting a search.
  • the facility may return unfiltered images and/or terms but place only filtered images and/or terms in the Inspiration Palette and/or Reaction Palette.
  • the facility may do so in order to avoid re-running a search with a different filter, thus allowing a user to change filters in real time. Or, the facility may return and place only filtered images.
  • the facility may choose one method or alternate between the two, depending upon user and system requirements.
  • the facility may also allow the user to choose to perform a new initial search or to refine or expand an initial search by specifying one or more additional search terms.
  • the facility may permit the user to specify that the facility return only images, or return only terms, or to return both images and/or terms.
  • the facility may suggest search terms in response to user input, especially ambiguous user input. For example, when the user enters a search term, the facility may show terms from the structured vocabulary that are associated with or that match that search term. The facility may suggest terms while the user is entering text, such as an "auto-complete" feature does, or the facility may suggest terms after the user has finished entering text. In such a way the facility may conform the user search term to a term in the structured vocabulary that it matches or is synonymous with. In some embodiments, the facility may use, e.g., a red star to indicate a direct match of the search term, and a grey star to indicate a more distant association, as indicated by the structured vocabulary.
  • the facility may provide other visual, audible or other indications to show direct matches of search terms or other associations as indicated by the structured vocabulary.
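The term-suggestion behavior described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the vocabulary entries, the prefix-matching heuristic, and the "direct"/"associated" labels (corresponding to the red-star and grey-star indications) are all assumptions made for demonstration.

```python
# Hypothetical structured-vocabulary entries; terms and associations are
# illustrative, not taken from the actual controlled vocabulary.
VOCABULARY = {
    "yoga": {"synonyms": ["yogic exercise"], "related": ["meditation", "pilates"]},
    "window": {"synonyms": [], "related": ["glass", "door"]},
}

def suggest_terms(user_input):
    """Return (term, match_kind) pairs: 'direct' for exact, synonym, or
    auto-complete prefix matches; 'associated' for more distant associations."""
    text = user_input.strip().lower()
    suggestions = []
    for term, entry in VOCABULARY.items():
        if text == term or text in entry["synonyms"]:
            suggestions.append((term, "direct"))       # e.g., shown with a red star
        elif any(text in r for r in entry["related"]):
            suggestions.append((term, "associated"))   # e.g., shown with a grey star
        elif term.startswith(text):
            suggestions.append((term, "direct"))       # auto-complete prefix match
    return suggestions
```

Calling `suggest_terms("yogic exercise")` conforms the user's input to the vocabulary term "yoga" as a direct match, while `suggest_terms("meditation")` surfaces "yoga" as a more distant association.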
  • the facility may maintain a history of search terms for each user. The facility may then use techniques well-known in the art, such as a drop-down list, to display the search terms to the user in the search region. This allows the user to easily re-run stored searches.
  • the facility places images and/or terms that are responsive to an initial search in the Inspiration Palette.
  • the facility returns images and/or terms that it locates based upon their classifications in the structured vocabulary hierarchy.
  • the facility returns images and/or terms that directly match the user search term or are synonymous with the user search term.
  • the facility may return images and/or terms related to the direct matches, based upon their classifications in the structured vocabulary. For example, the facility may return images and/or terms from the structured vocabulary hierarchy that are parents, children or siblings of the direct matches.
  • the facility may also return images and/or terms that represent other categories by which the user may further refine an image search. These other categories may represent effectively binary choices, such as black and white or color, people or no people, indoors or outdoors.
  • the facility may return the term "black and white.” If the user specifies that "black and white" shall become part of a subsequent search, the facility will only return images that are in black and white. In general the facility returns a broader set of images and/or terms than would be returned in a typical keyword image search that returns images having metadata/tags that match the keywords.
  • the facility may return and place in the Inspiration Palette all relevant images and/or terms or some subset of the relevant images and/or terms.
  • the facility may also return and place in the Inspiration Palette a random sampling of images and/or terms.
  • the facility may also return and place in the Inspiration Palette images and/or terms based upon their popularity, i.e., the frequency with which they are selected as matching the search term, their association with popular search terms, or, for images, the frequency with which terms match them. If the facility cannot display all of the images and/or terms in the Inspiration Palette, the facility may provide the user with the ability to move sequentially within the search results, such as by using a scrollbar or paginating through the search results. Alternatively, the facility may provide the user with the ability to either "shuffle" or "refresh" the search results so as to display different images and/or terms.
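The choice between popularity-based selection and a random "shuffle" might be sketched as below. The function name, the popularity map, and the fixed palette capacity are illustrative assumptions, not details from the specification.

```python
import random

def pick_for_palette(items, popularity, capacity, mode="popular"):
    """Choose which responsive items fill the Inspiration Palette.
    items: list of image/term ids; popularity: id -> selection frequency."""
    if mode == "popular":
        # most frequently selected items first
        ranked = sorted(items, key=lambda i: popularity.get(i, 0), reverse=True)
        return ranked[:capacity]
    if mode == "shuffle":
        # random sampling, as for a "shuffle"/"refresh" control
        return random.sample(items, min(capacity, len(items)))
    return items[:capacity]
```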
  • the facility returns images and/or terms from data pools formed from various data sources.
  • a data pool is a collection of images and/or terms.
  • the facility may form the following data pools from the structured or controlled vocabulary data source: identity (the term corresponding to the initial search term), parents, ancestors, siblings, children, and related terms.
  • the facility may form the following data pools from the refinement terms data source: concepts, subjects, locations, age and others, such as number of people in the image or the gender of image subjects.
  • the facility may form data pools from any metadata item that describes an image.
  • the facility may form from other data sources the following data pools: popular search terms, which may be based upon search terms most often entered by users; popular terms, which may be based upon terms most often applied to images; usual suspects, which may be images and/or terms that are often beneficial to the user in filtering a search; and reference images, which may be images that match terms in the structured or controlled vocabulary.
  • the facility may form other data pools from other data sources, such as database data, static data sources, data from data mining operations, web log data and data from search indexers.
  • the facility may form the data pools periodically according to a schedule or as instructed by an administrator of the facility.
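The structured-vocabulary data pools named above (identity, parents, ancestors, siblings, children) can be derived from a term hierarchy. The following is a hedged sketch under the assumption that the hierarchy is available as a child-to-parent mapping; the animal terms are placeholders, not entries from the actual controlled vocabulary.

```python
# Hypothetical child -> parent hierarchy; names are illustrative only.
PARENT = {
    "poodle": "dog",
    "beagle": "dog",
    "dog": "animal",
    "cat": "animal",
}

def form_data_pools(term):
    """Form the identity, parents, ancestors, siblings, and children data
    pools for a term from its position in the hierarchy."""
    pools = {"identity": {term}}
    parent = PARENT.get(term)
    pools["parents"] = {parent} if parent else set()
    # ancestors: walk the parent chain to the root
    ancestors, node = set(), parent
    while node is not None:
        ancestors.add(node)
        node = PARENT.get(node)
    pools["ancestors"] = ancestors
    pools["children"] = {c for c, p in PARENT.items() if p == term}
    pools["siblings"] = {c for c, p in PARENT.items() if p == parent} - {term}
    return pools
```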
  • the usual suspects data pool includes images and/or terms that are returned because they may be useful to the user in filtering search results.
  • the facility may use terms from the usual suspects data pool in conjunction with the refinement terms in order to provide more relevant images and/or terms that fall within the usual suspects data pool.
  • the facility may also maintain categories or lists of excluded images and/or terms that are not to be returned in response to a user search.
  • Excluded images and/or terms may include candidate images and/or terms; images and/or terms that are marked with a certain flag; terms that are not applied to any images; terms that fall within a certain node of the structured vocabulary hierarchy; and images and/or terms that are on a manually-populated or automatically-populated exclusion list.
  • the facility may also apply rules to exclude images and/or terms from being returned in response to a user search.
  • An example rule may be that if an image and/or term has a parent in a particular data pool then images and/or terms from certain other data pools are not to be returned.
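Several of the exclusion categories above can be expressed as a simple filter. This sketch assumes the flagged items and exclusion list arrive as sets of ids and that term usage counts are available; all of that is illustrative structure, not the patent's data model.

```python
def apply_exclusions(candidates, flagged, exclusion_list, term_usage):
    """Filter out images/terms that should not be returned: items marked
    with a certain flag, items on an exclusion list, and terms that are
    not applied to any images (usage count of zero)."""
    return [
        item for item in candidates
        if item not in flagged
        and item not in exclusion_list
        and term_usage.get(item, 0) > 0
    ]
```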
  • the facility places returned images and/or terms in the Inspiration Palette in a "tag cloud," a layout in which the size and position of the images and/or terms are determined by their location in the structured vocabulary, as shown in Figures 5A and 5B.
  • the facility may order images and/or terms in the Inspiration Palette based upon one or more factors, such as their location in the structured vocabulary, a result count, alphabetical order, or the "popularity" of images and/or terms, as determined by the facility.
  • the facility may allow the user to specify how the facility shall place returned images and/or terms, such as by specifying that images shall be ordered in a left column and terms in a right column.
  • the facility may increase or decrease the font size of returned terms in accordance with their proximity to matching terms in the structured vocabulary. For example, a returned term that is an immediate parent of the matching term may have a larger font size than a returned term that is a more distant parent. Or, the facility may determine the font size of a returned term based upon the number of associated returned images. The facility may vary the size of returned images in a like manner. Images and/or terms of varying sizes may be placed in the Inspiration Palette based upon their sizes, with the largest images and/or terms in the upper left corner and proceeding down to the smallest images and/or terms in the lower right corner. Alternatively, the images and font sizes may be randomly arrayed to facilitate creativity. In some embodiments, the facility apportions weights to the results returned by a particular image and/or term. The facility may then place the returned images and/or terms in the Inspiration Palette in accordance with their apportioned weights.
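The proximity-based font sizing described above (an immediate parent rendering larger than a distant ancestor) could be computed as below. The base size, step, and minimum are illustrative constants, not values from the specification.

```python
def font_size(distance, base=24, step=4, minimum=10):
    """Map a returned term's hierarchy distance from the matching term to a
    font size: distance 1 (immediate parent) gets the base size, and each
    further step shrinks the size down to a floor."""
    return max(minimum, base - step * (distance - 1))
```

A term two levels from the match thus renders at 20 points under these assumed constants, and very distant terms bottom out at the minimum rather than vanishing.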
  • the facility may associate contextual menus with returned images and/or terms in the Inspiration Palette.
  • the facility may display a contextual menu in response to a user indication such as a mouse click or rollover.
  • a contextual menu may display to the user various options, such as: 1) view greater detail about the image and/or term; 2) see the terms associated with the image and/or term; 3) add the image and/or term to the Action Palette; 4) find similar images and/or terms; 5) remove the image and/or term from the search results; and 6) add the image to a shopping cart or "light box" for possible purchase or license.
  • Other options may be displayed in the contextual menu.
  • the facility may display to the user an interface that contains greater detail about an image and/or term in response to a user indication such as a mouse click.
  • An image may be categorized, and the facility may display other attributes or actions based on or determined by the image's categorization. If the facility displays to the user the terms associated with an image and/or term, the facility may order the displayed associated terms in accordance with the initial search terms; that is, the facility may weight the associated terms so as to allow the user to locate relevant images.
  • the facility allows the user to place some of the returned images and/or terms in the Action Palette. By doing so, the user effectively changes their initial search results. This is because each image has a primary term associated with it. By placing an image in the Action Palette, the user specifies that the associated primary term shall become part of the subsequent search.
  • the facility may also form the subsequent search in other ways, such as by adding both primary and secondary terms, by allowing the user to choose which terms to add, or by adding terms according to a weighting algorithm.
  • the facility allows the user to place an image and/or term in the Action Palette using the technique known as "drag-and-drop."
  • the facility may allow the user to indicate by other means that an image and/or term is to be placed in the Action Palette, such as keyboard input or selection via an option in a contextual menu as described above.
  • the facility may allow the user to simultaneously place several images and/or terms in the Action Palette.
  • the Action Palette is further divided into two sub-regions.
  • the facility may add the associated terms to the initial search using a Boolean "AND.”
  • terms associated with images and/or terms placed in the second sub-region may be added to the initial search using a Boolean "NOT."
  • the Action Palette region contains a third sub-region that corresponds to a Boolean "OR,” and the associated terms of images and/or terms placed in this third sub-region may be added to the initial search using a Boolean "OR.”
  • the Boolean "OR" sub-region enables a user to expand instead of refine an image search.
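The mapping from Action Palette sub-regions to a Boolean query can be sketched as below. The string query syntax is an assumption for illustration; an actual facility would presumably build a query object for its search engine rather than a string.

```python
def build_query(and_terms, not_terms, or_terms=()):
    """Compose a Boolean query from the Action Palette sub-regions:
    terms from the first sub-region are ANDed together, an optional third
    sub-region contributes an ORed group (expanding the search), and terms
    from the second sub-region are excluded with NOT."""
    parts = [" AND ".join(sorted(and_terms))] if and_terms else []
    if or_terms:
        parts.append("(" + " OR ".join(sorted(or_terms)) + ")")
    query = " AND ".join(parts)
    for term in sorted(not_terms):
        query += f" NOT {term}"
    return query.strip()
```

For example, placing "yoga" and "window" in the first sub-region and "city" in the second yields `window AND yoga NOT city`.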
  • when the user places an image and/or term from the Inspiration Palette in the Action Palette, the facility does not replace the moved image and/or term in the Inspiration Palette but instead leaves a blank space. In some embodiments the facility replaces the moved image and/or term with another image and/or term.
  • the facility allows the user to indicate the importance of images and/or terms added to the Action Palette. This may be done by allowing the user to order images and/or terms within the Action Palette, or by some other method that allows the user to specify a ranking of images and/or terms, such as by positioning or sizing the images and/or terms or by providing a weighting or ranking value. In some embodiments, the facility allows the user to specify the terms associated with an image that shall become part of the subsequent search.
  • an image of a yoga mat in a windowed room may have the primary term "yoga” associated with it as well as the secondary term “window.”
  • the facility may allow the user to specify that "yoga,” "window,” or another associated term shall become part of the subsequent search.
  • the facility may also allow the user to specify additional terms by other than dragging and dropping images and/or terms in the Action Palette. This may be done, for example, by allowing the user to select the Action Palette and manually enter a term, such as by typing a term into a text box that is displayed proximate to the Action Palette.
  • the facility may perform a subsequent search that changes the user's initial search.
  • the facility may cache the result set returned in response to the initial search, and subsequent searches may search within or filter the cached result set, thereby allowing the facility to forego performing a new search. This allows the user to refine or narrow the initial search.
  • the facility may cache result sets for other purposes, such as to allow faster pagination within results or to more quickly display images and/or terms in the Inspiration Palette, Action Palette or Reaction Palette.
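Caching the initial result set and refining within it, rather than re-running the search, might look like the following sketch. The cache keying, the result-set shape (image id mapped to its associated terms), and the function names are illustrative assumptions.

```python
# Module-level cache of result sets keyed by the search terms.
_cache = {}

def search(query_terms, run_search):
    """Return cached results for a query if available; otherwise perform
    the full search via run_search and cache its result set.
    run_search returns {image_id: set_of_associated_terms}."""
    key = frozenset(query_terms)
    if key not in _cache:
        _cache[key] = run_search(query_terms)
    return _cache[key]

def refine(cached_results, extra_term):
    """Narrow a cached result set with an additional term, foregoing a
    new search against the full collection."""
    return {img: terms for img, terms in cached_results.items()
            if extra_term in terms}
```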
  • an entirely new search is performed in response to the subsequent search query.
  • the facility does not remove images and/or terms in the Action Palette and does not clear images in the Reaction Palette.
  • the facility may provide a button or other control that allows the user to direct the facility to clear the Action Palette, the Reaction Palette, and/or the Inspiration Palette.
  • the user may drag-and-drop images and/or terms out of the Action Palette to remove their associated terms from subsequent searches.
  • the facility may then place removed images and/or terms in the Inspiration Palette to make them available for future operations.
  • the facility may allow the user to drag-and-drop images and/or terms from any palette region to any other palette region, such as from the Reaction Palette to the Inspiration Palette or from the Inspiration Palette to the Reaction Palette.
  • Images returned in response to an initial search and to subsequent searches are displayed in the Reaction Palette.
  • the facility may place an image in the Reaction Palette as soon as it is returned in order to indicate to the user that a search is ongoing. Alternatively, the facility may wait until all images are returned before placing them in the Reaction Palette in one batch.
  • the user may be able to specify the size of the images displayed in the Reaction Palette, such as by specifying either a small, medium or large size. Alternatively, the size may be automatically varied among the images based on the estimated responsiveness to the search query.
  • the facility returns images with associated terms that match any of the user-specified search terms or any combination thereof.
  • the facility sorts returned images in order of most matching terms and places them in the Reaction Palette. For example, if a user specifies four search terms A, B, C, and D, the facility would return all images with associated terms that match either A, B, C, or D, or any combination thereof, such as A, B, and C; B and D; or individually A; B; C; or D.
  • the facility would place in the Reaction Palette first the images that match all four search terms, then the images that match any combination of three search terms, then the images that match any combination of two search terms, then the images that match any individual search term.
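The ordering described above (all four terms first, then any three, then any two, then one) amounts to ranking images by how many of the user's search terms their associated terms match. A sketch, assuming the result set maps each image id to its set of associated terms:

```python
def rank_images(images, search_terms):
    """Order images by the number of user search terms their associated
    terms match, most matches first. Images matching no term are dropped.
    images: {image_id: set_of_associated_terms} (illustrative shape)."""
    wanted = set(search_terms)
    matched = [(img, terms) for img, terms in images.items()
               if terms & wanted]
    matched.sort(key=lambda item: len(item[1] & wanted), reverse=True)
    return [img for img, _ in matched]
```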
  • the facility may sort returned images by other factors, such as the date of the image or the copyright, other rights holder information, or perceived responsiveness to the search query.
  • the facility may associate contextual menus with images in the Reaction Palette and display the contextual menus in response to a user indication.
  • Such contextual menus may offer options similar to those previously discussed as well as additional options, such as: 1) view the purchase or license price; 2) mark the image as a favorite; 3) view more images like the selected image or 4) view image metadata, such as the caption, photographer or copyright. Other options may be displayed in the contextual menu. If the user chooses the option to view more images like the selected image, the facility may display a different set of images taken from the larger set of returned images, or the facility may perform a new search incorporating the primary term associated with the selected image. Alternatively, the facility may permit the user to specify whether to display a different set of images or to perform a new search.
  • the facility may also zoom the image or otherwise allow the user to preview the image in response to a user indication, such as a mouse rollover.
  • the facility may also permit the user to drag-and-drop images to a "light box" or shopping cart region or icon for potential purchase or license.
  • the facility may allow the user to drag-and-drop an image from the Reaction Palette to the Action Palette so as to permit the user to request more images like the moved image.
  • the facility may also allow the user to drag-and-drop images and/or terms from the Inspiration Palette, the Action Palette or the Reaction Palette to the search region. This allows the facility to identify images more like the moved image by searching for images and/or terms with associated terms that are similar to the associated terms of the images and/or terms moved into the search region.
  • the search query corresponding to the images and/or terms specified by the user may be such that the facility returns no results.
  • the facility may present a message to the user indicating that there are no results and suggest to the user possible changes. The user may then drag-and-drop images and/or terms from the Action Palette to the Inspiration Palette, or within the different sub-regions of the Action Palette, in order to change the search query.
  • the facility provides only images and/or terms in the Inspiration Palette that will return results when added to the user's search query.
  • the facility may return a standard set of images and/or terms in response to a search query that would otherwise return no results.
  • the facility may ask the user questions during the search process, such as why the user has selected a particular image.
  • the facility could also request that the user rank selected images or ask the user to select a favorite image.
  • the facility may do so in an attempt to provide the user with more control over the search process and results, or in order to provide images and/or terms in the Inspiration Palette that better match the user's search terms.
  • the facility may also display in the search region the actual query corresponding to the user's search and subsequent refinements so as to provide the user with more control over the search process and results.
  • the facility supports digital assets, such as images, regardless of image content, classification, image size, type of image (e.g., JPEG, GIF, PNG, RAW or others) or other image attributes or characteristics.
  • the facility also supports other types of digital assets, such as video files, audio files, animation files, other multimedia files, text documents, or other types of documents or digital resources.
  • the facility displays only images that fall into one or more classifications, such as "Editorial,” "Creative,” or “Film.”
  • images that are classified as "Editorial" may be images of one or more individuals who are well-known to the public.
  • the facility may associate certain biographical information as terms with such images so as to permit the user to identify images of named individuals with varying career or personality aspects.
  • Figure 1 and the following discussion provide a brief, general description of a suitable computing environment in which the invention can be implemented.
  • aspects of the invention are described in the general context of computer-executable instructions, such as routines executed by a general-purpose computer, e.g., a server computer, wireless device or personal computer.
  • PDAs (personal digital assistants), wearable computers, all manner of cellular or mobile phones, multi-processor systems, microprocessor-based or programmable consumer electronics, set-top boxes, network PCs, mini-computers, mainframe computers, and the like.
  • the terms "computer,” “system,” and the like are generally used interchangeably herein, and refer to any of the above devices and systems, as well as any data processor.
  • aspects of the invention can be embodied in a special purpose computer or data processor that is specifically programmed, configured, or constructed to perform one or more of the computer-executable instructions explained in detail herein.
  • aspects of the invention can also be practiced in distributed computing environments where tasks or modules are performed by remote processing devices, which are linked through a communications network, such as a Local Area Network (LAN), Wide Area Network (WAN), or the Internet.
  • program modules may be located in both local and remote memory storage devices.
  • aspects of the invention may be stored or distributed on computer- readable media, including magnetically or optically readable computer discs, hardwired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, biological memory, or other data storage media.
  • computer implemented instructions, data structures, screen displays, and other data under aspects of the invention may be distributed over the Internet or over other networks (including wireless networks), on a propagated signal on a propagation medium (e.g., an electromagnetic wave(s), a sound wave, etc.) over a period of time, or they may be provided on any analog or digital network (packet switched, circuit switched, or other scheme).
  • Figure 1 depicts one embodiment of the invention that employs a computer 100, such as a personal computer or workstation, having one or more processors 101 coupled to one or more user input devices 102 and data storage devices 104.
  • the computer is also coupled to at least one output device such as a display device 106 and one or more optional additional output devices 108 (e.g., printer, plotter, speakers, tactile or olfactory output devices, etc.).
  • the computer may be coupled to external computers, such as via an optional network connection 110, a wireless transceiver 112, or both.
  • the input devices 102 may include a keyboard and/or a pointing device such as a mouse. Other input devices are possible such as a microphone, joystick, pen, game pad, scanner, digital camera, video camera, and the like.
  • the data storage devices 104 may include any type of computer-readable media that can store data accessible by the computer 100, such as magnetic hard and floppy disk drives, optical disk drives, magnetic cassettes, tape drives, flash memory cards, digital video disks (DVDs), Bernoulli cartridges, RAMs, ROMs, smart cards, etc. Indeed, any medium for storing or transmitting computer-readable instructions and data may be employed, including a connection port to or node on a network such as a local area network (LAN), wide area network (WAN) or the Internet (not shown in Figure 1).
  • a distributed computing environment with a web interface includes one or more user computers 202 in a system 200, each of which includes a browser program module 204 that permits the computer to access and exchange data with the Internet 206, including web sites within the World Wide Web ("Web") portion of the Internet.
  • the user computers may be substantially similar to the computer described above with respect to Figure 1.
  • User computers may include other program modules such as an operating system, one or more application programs (e.g., word processing or spread sheet applications), and the like.
  • the computers may be general-purpose devices that can be programmed to run various types of applications, or they may be single-purpose devices optimized or limited to a particular function or class of functions.
  • any application program for providing a graphical user interface to users may be employed, as described in detail below; the use of a web browser and web interface are only used as a familiar example here.
  • At least one server computer 208, coupled to the Internet or Web 206, performs much or all of the functions for receiving, routing, and storing electronic messages, such as web pages, audio signals, and electronic images. While the Internet is shown, a private network, such as an intranet, may be preferred in some applications.
  • the network may have a client-server architecture, in which a computer is dedicated to serving other client computers, or it may have other architectures, such as peer-to-peer, in which one or more computers serve simultaneously as servers and clients.
  • a database 210 or databases, coupled to the server computer(s), stores much of the web pages and content exchanged between the user computers.
  • the server computer(s), including the database(s) may employ security measures to inhibit malicious attacks on the system, and to preserve integrity of the messages and data stored therein (e.g., firewall systems, secure socket layers (SSL), password protection schemes, encryption, and the like).
  • the server computer 208 may include a server engine 212, a web page management component 214, a content management component 216 and a database management component 218.
  • the server engine performs basic processing and operating system level tasks.
  • the web page management component handles creation and display or routing of web pages. Users may access the server computer by means of a URL associated therewith.
  • the content management component handles most of the functions in the embodiments described herein.
  • the database management component includes storage and retrieval tasks with respect to the database, queries to the database, and storage of data such as video, graphics and audio signals.
  • Figure 3 is a flow diagram of a process 300 implemented by the facility to permit the user to submit an initial search for images and refine the initial search by specifying that some of the returned images and/or terms shall or shall not become part of a subsequent search.
  • the facility displays the interface to the user, with the interface including two or more of the search region, the Inspiration Palette, the Action Palette and the Reaction Palette.
  • the facility receives one or more search terms from the user. The facility disambiguates the one or more search terms by conforming or normalizing them to one or more terms in the structured vocabulary. The user then selects the appropriate structured vocabulary term. Alternatively, the user may start over and input one or more different search terms.
  • the facility receives the selected term.
  • the facility determines if any images and/or terms match the selected term or are synonymous with the selected term. Matching may be an exact match, a close match, a fuzzy match, or other match algorithm.
  • the facility returns matching images and/or terms and displays the images and/or terms as described above.
  • the facility receives an indication from the user of a refinement of the initial search.
  • the facility updates the initial search results based on the initial search and the user refinement of the initial search.
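The steps of process 300 can be sketched as follows. This is an illustrative sketch only, not the patented implementation; all function names, the keyword-list representation of assets, and the substring-based disambiguation rule are assumptions made for the example.

```python
# Hypothetical sketch of process 300: disambiguate the user's input
# against a structured vocabulary, match assets, then refine results.
# All names and data shapes here are illustrative assumptions.

def normalize_to_vocabulary(raw_terms, vocabulary):
    """Conform user-entered terms to structured-vocabulary terms (disambiguation)."""
    return [term for term in vocabulary
            if any(raw.lower() in term.lower() for raw in raw_terms)]

def find_matches(selected_term, assets):
    """Return assets whose keywords exactly match the selected term.
    (The patent also allows close, fuzzy, or other match algorithms.)"""
    return [a for a in assets if selected_term in a["keywords"]]

def refine(include_terms, exclude_terms, assets):
    """Update results: require all include terms (AND), drop exclude terms (NOT)."""
    return [a for a in assets
            if all(t in a["keywords"] for t in include_terms)
            and not any(t in a["keywords"] for t in exclude_terms)]
```

A refinement loop would repeatedly call `refine` with the terms the user has placed in the Action Palette, reusing the original asset set each time.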
  • Figure 4 depicts a representative interface 400, which includes the search region 455 and the three additional regions described above.
  • the user may specify one or more search terms in text box 405, located in search region 455.
  • the facility has returned images and terms that match the user search term "yoga,” and placed the returned images and terms in Inspiration Palette 410.
  • These images and terms include image 460 and term 415, which is displayed in a larger font size relative to that of surrounding terms.
  • Inspiration Palette 410 also contains an image 465 for which an associated term 420 is displayed.
  • Action Palette 425 is divided into two sub-regions 430 and 435.
  • the user may place images and/or terms in sub-region 430 to specify that the facility is to add the associated terms to the initial search using a Boolean "AND.”
  • term 440 is placed in sub-region 430.
  • the user may also place images and/or terms in sub-region 435 to specify that the facility is to add the associated terms to the initial search using a Boolean "NOT.”
  • the interface includes a third sub-region (not shown) corresponding to a Boolean "OR.”
  • Reaction Palette 445 contains images (not shown) returned in response to an initial search and subsequent searches. These include images 450a, 450b, ... through 450n.
  • Figure 5A depicts another interface 500.
  • the interface includes text box 505, in which the search term "yoga” has again been entered.
  • the Inspiration Palette 510 contains images and terms returned in response to the search request.
  • the Inspiration Palette has a scrollbar 535 that allows the user to scroll through all returned images and terms. In this interface the images and terms are ordered differently from the images and terms in Figure 4.
  • the Action Palette 525 contains the term "yoga” as well as an image, and the corresponding term 515 and image 520 in the Action Palette are highlighted to indicate to the user that they have been placed in the Action Palette.
  • the Reaction Palette 530 contains images returned in response to the refinement of the initial search.
  • the facility may enable the user to paginate between pages of images in the Reaction Palette, or to shuffle or refresh the images.
  • Figure 5B depicts the same interface 500 but with more images and terms in the Action Palette 525, indicating subsequent refinements of the user's initial search. Again, the terms and images in the Action Palette are highlighted in the Inspiration Palette 510.
  • the Reaction Palette 530 in Figure 5B contains a different set of images than the Reaction Palette depicted in Figure 5A, illustrating that subsequent refinements to a search produce different results.
  • Figure 6 is a flow diagram of a process 600 implemented by the facility for returning terms in response to a user search.
  • the facility may make certain assumptions in implementing the process.
  • the facility may assume that data pools are the sole sources of terms.
  • the facility may assume that each data pool has a desired number of terms.
  • the facility may assume that not all pools can supply enough terms to match the number of terms desired to be taken from those pools.
  • the facility may assume that demand is distributed to designated data pools or groups of data pools.
  • the facility may assume that term weights may be assigned globally by popularity or, in some cases, within the data pool.
  • in step 605, the facility retrieves the desired number of terms from each data pool.
  • the facility may pull the data from each of the desired data pools using appropriate filtering and sorting and/or a distribution model for data extraction. Data pools may be populated from the original data sources, data caches, or static lists.
  • in step 610, the facility retrieves additional terms from the existing data pools to accommodate for unfulfilled demand. Unfulfilled demand may occur when a data pool does not have enough data to satisfy the initial request. For example, if a term has no children, the children data pool would have no data and any desired number of terms would become unfulfilled demand.
  • in step 615, the facility determines the weight of each term, if the term is to be weighted. A term's weight may be dependent on its data pool.
  • Specific data pools may provide weighted values; refinement terms, for example, indicate how many times they occur. Other data pools may derive a constant weight; related terms, for example, have a constant weight since they are manually assigned. A term may be given a score based upon the number of times it is assigned to images and weighted accordingly. A term may also be given a score based upon being listed amongst the popular search terms and weighted accordingly.
  • the facility compiles a list of terms from all the data pools. The facility may combine the terms for use in the list. The facility may retain the original data pool and term weight for use in the display and organization of the terms.
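Under simplifying assumptions, process 600 can be sketched in a few lines. The representation below (ordered pools of `(term, weight)` pairs, with unfulfilled demand carried forward to later pools) is an assumption for illustration; the patent does not specify this data layout.

```python
# Illustrative sketch of process 600: take the desired number of terms
# from each data pool (step 605), pass any unfulfilled demand on to the
# remaining pools (step 610), and compile one list that retains each
# term's original pool and weight (steps 615-620).

def compile_terms(pools, desired):
    """pools: ordered mapping of pool name -> list of (term, weight);
    desired: mapping of pool name -> number of terms wanted."""
    taken, unfulfilled = [], 0
    for name, terms in pools.items():
        want = desired.get(name, 0) + unfulfilled   # own demand plus carried-over demand
        chunk = terms[:want]
        unfulfilled = want - len(chunk)             # demand this pool could not meet
        # keep pool name and weight for later display and organization
        taken.extend((name, term, weight) for term, weight in chunk)
    return taken
```

For example, an empty "children" pool pushes its demand onto the next pool in order, matching the unfulfilled-demand behavior described above.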
  • the facility includes a configuration component that allows administrators of the facility to specify various aspects of the process for returning terms.
  • the configuration component may allow administrators to fine-tune the process as well as enable administrators to evaluate the effectiveness of the process.
  • the configuration component may allow administrators to specify, among other aspects: named sections for term data pools; a data pool from which to derive terms (from an existing set of available data pools); the desired number of terms to return; the maximum number of terms to use from specific data pools; and where the facility may retrieve additional terms if a specific data pool cannot meet the demand for terms from that data pool.
  • the following depicts a sample configuration for the facility:
  • the "Pool #” refers to an identification number for the data pool.
  • “Name” refers to the name of the data pool and may describe the relationship of the terms in the data pool to the search term entered by the user.
  • “Priority” refers to the order in which the Inspiration Palette should be populated with terms from these data pools.
  • “Desired” refers to the minimum number of terms that should be taken from the data pool if there is an adequate supply of terms within the data pool.
  • a data pool's priority and desired settings may combine to specify that the facility is to return terms from lower priority data pools if the facility cannot obtain the desired number of terms from the data pool of that particular priority.
  • “Max” refers to the maximum number of terms that should be taken from the particular data pool.
  • "Floor" refers to the minimum percentage of the search results that a term must be associated with in order for the facility to return the term for display in the Inspiration Palette. "Ceiling" refers to the maximum percentage of the search results that a term can be associated with in order for the facility to return the term for display in the Inspiration Palette. In embodiments where the Action Palette region contains an "OR" sub-region, the configuration component may not use a ceiling.
  • “Configurable” refers to the ability of the values in the various columns to be set by administrators of the facility. There may be additional columns and/or values to the configuration component that allow administrators to further configure the facility.
  • the various data pools may have different requirements due to their characteristics and the configuration component allows for the different requirements to be taken into account. For example, the facility may select terms using a different process from different data pools, or may have different exclusions for different data pools.
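One hypothetical way to represent the per-pool settings described above (Pool #, Name, Priority, Desired, Max, Floor, Ceiling) is as a simple record type; the field types and the threshold check are assumptions, since the sample configuration table itself is not reproduced here.

```python
# Assumed representation of one row of the configuration component.
from dataclasses import dataclass

@dataclass
class PoolConfig:
    pool_id: int      # "Pool #": identification number for the data pool
    name: str         # relationship of the pool's terms to the search term
    priority: int     # order in which the Inspiration Palette is populated
    desired: int      # minimum terms to take if the pool has an adequate supply
    maximum: int      # "Max": cap on terms taken from this pool
    floor: float      # minimum fraction of results a term must appear in
    ceiling: float    # maximum fraction of results a term may appear in

def passes_thresholds(cfg: PoolConfig, term_frequency: float) -> bool:
    """Keep a term only if its share of the search results lies within [floor, ceiling]."""
    return cfg.floor <= term_frequency <= cfg.ceiling
```

An administrator would hold one `PoolConfig` per data pool, sorted by `priority`, spilling to lower-priority pools when a pool cannot meet its `desired` count.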
  • Figure 7 depicts a schematic view 700 of the composition of sets of terms according to some embodiments.
  • Data sources 705, such as term lookup 720 provide different data types 710, such as controlled vocabulary 725.
  • Data may be extracted from the data sources using selected distribution 735 and organized into different sets of data pools 715, such as parent 730.
  • a configuration 740 may be applied to the data pools individually and/or as sets of data pools to form the sets of terms 750.
  • the facility may cache sets of terms for specific periods of time. The facility may do so because the sets of terms may not change frequently. Caching sets of terms may enable the facility to quickly return images and/or terms in response to a user search.
  • the facility may also cache the data in the data pools and/or the data in data sources. The facility may do so because this data may be expensive to gather, cheap to store, and it may change infrequently.
  • the facility may also cache data at various other levels in order to improve system performance. For example, the facility may derive the data pools and/or the term sets prior to usage by performing data mining on the keyword data and other data sources. The data generated from this mining could then be formatted into a persistent data cache in order to provide a very fast access and an optimal user experience.
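The caching behavior described above can be sketched as a minimal time-to-live cache. The patent does not specify a cache policy; the TTL approach and class name below are assumptions for illustration.

```python
# Minimal TTL cache for sets of terms, sketching the idea that term
# sets change infrequently and can be held for a specific period.
import time

class TermSetCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # search term -> (expiry time, cached term set)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]          # still fresh
        self._store.pop(key, None)   # expired or missing
        return None

    def put(self, key, term_set):
        self._store[key] = (time.monotonic() + self.ttl, term_set)
```

The same pattern could sit at each level mentioned above (term sets, data pools, data sources), with longer TTLs for the more expensive, slower-changing layers.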
  • Figures 8A-8E depict an interface 800 in accordance with another embodiment of the invention.
  • the user may specify a search term in text box 805 and submit the search term by clicking the search button 810.
  • the facility has returned multiple terms for display in the Inspiration Palette 825.
  • the facility has not returned images for display in the Inspiration Palette 825.
  • the Action Palette 830 contains three sub-regions.
  • Sub-region 840 corresponds to a Boolean "AND” and terms dragged-and-dropped into sub-region 840 will be added to the user's initial search using a Boolean "AND,” which enables a user to refine an initial search.
  • Sub-region 845 corresponds to a Boolean "OR” and terms dragged-and-dropped into sub-region 845 will be added to the user's initial search using a Boolean "OR,” which enables a user to expand an initial search.
  • Sub-region 850 corresponds to a Boolean "NOT” and terms dragged-and-dropped into sub-region 850 will be added to the user's initial search using a Boolean "NOT,” which enables a user to exclude content.
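The three sub-regions combine into a single Boolean query. The sketch below shows one assumed way of composing it as a query string; the patent does not specify the query syntax, so the format here is illustrative.

```python
# Assumed composition of the three Action Palette sub-regions into one
# Boolean query: AND refines, OR expands, NOT excludes.

def build_query(base_term, and_terms, or_terms, not_terms):
    parts = [base_term]
    parts += [f'AND "{t}"' for t in and_terms]   # sub-region 840: refine
    parts += [f'OR "{t}"' for t in or_terms]     # sub-region 845: expand
    parts += [f'NOT "{t}"' for t in not_terms]   # sub-region 850: exclude
    return " ".join(parts)
```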
  • Images returned in response to the user's initial search and subsequent searches are displayed in the Reaction Palette 855.
  • the interface 800 also contains a grouping of multiple check boxes 815, which enable a user to apply or remove display filters.
  • the filters “RM,” “RR,” and “RF,” which stand for “Rights-Managed,” “Rights-Ready,” and “Royalty-Free,” respectively, as well as the filters “Photography,” and “Illustrations” are configured as Boolean “OR” filters in the embodiment depicted in Figure 8A.
  • the facility will return and display images that satisfy any one of the filters specified in the check boxes 815.
  • the interface 800 also contains a button 820 labeled "More Filters” which can enable a user to specify more ways to filter display results.
  • Figure 8B depicts additional filters 835 that a user may select.
  • the filters 835 include "Color,” which if selected displays color images, "Black & White,” which if selected displays black and white images, "Horizontal,” which if selected displays images that have a horizontal or landscape orientation, and "Vertical,” which if selected displays images that have a vertical or portrait orientation.
  • Other filters are, of course, possible.
  • the facility has suggested search terms in response to user input entered into text box 805.
  • the facility displays search terms in the region 860 that may be associated with the search term "yoga" 862 entered by the user.
  • the facility may use, e.g., a colored star to indicate a direct match in the structured vocabulary, or to indicate the best match to the user's input based on other factors.
  • the facility has suggested the term 865 labeled "Yoga (Relaxation Exercise)" and marked the term 865 with a red star in response to the user's input of "yoga.”
  • the facility has also marked several other terms 870 with a grey star to indicate a more distant association with the user's input, as indicated by the structured vocabulary or other factors.
  • Figure 8D depicts a region 875 that the facility displays in response to a user indication such as a mouse click within the sub-region 840 of the Action Palette 830.
  • the region 875 contains a text box 880 that allows a user to type an additional search term, such as the term "exercise" 885.
  • the user may wish to type in an additional search term, for example, if the user does not find appropriate terms in the Inspiration Palette 825 that the user desires to use to modify the user's initial search.
  • the user can similarly add search terms to the other sub-regions 845 and 850 of the Action Palette 830.
  • when the user clicks the search button 882, the facility adds the search term 885 to the user's initial search using the appropriate Boolean connector and returns and displays images and/or terms returned by the modified initial search.
  • in Figure 8E, the facility has suggested search terms in response to user input entered into text box 880.
  • the facility displays search terms in the region 897 that may be associated with the search term "exercise" 885 entered by the user.
  • the facility uses a red star to mark the terms 890 that optimally correspond to the user's search term "exercise” 885, as indicated by the structured vocabulary or other factors.
  • Additional search terms 895 may be marked with a grey star to indicate a more distant association with the user's input, as indicated by the structured vocabulary or other factors.
  • Figures 9A-9E depict interfaces in accordance with another embodiment of the invention.
  • Figure 9A depicts an interface 900 that enables a user to provide a keyword in search box 905 and select the suggested keyword that best matches the concept or idea for which the user intends to search for images.
  • Figure 9B depicts the interface 900 showing that the user has entered the words "rock climb" 962 into search box 905.
  • the facility has disambiguated the user-entered words 962 by conforming or normalizing them to a term in the structured vocabulary and has suggested four keywords in response: "Rock Climbing (Climbing)” 965, "Climbing Wall (Climbing Equipment),” “Climbing Equipment (Sports Equipment)” and “Rock Boot (Sports Footwear)” (the latter three keywords collectively labeled 970).
  • the star next to the keyword “Rock Climbing (Climbing)” 965 indicates that the facility has determined that this keyword optimally matches or corresponds to the user's search term "rock climb.”
  • Figure 9C depicts the interface 900 showing the facility's response to the user's selection of the keyword "Rock Climbing (Climbing)" 965.
  • the Inspiration Palette region 925 contains keywords suggested by the facility, such as keywords determined by the process 600 illustrated in Figure 6 (e.g., keywords from the structured vocabulary that have an ancestor or descendant relation to the keyword "Rock Climbing (Climbing)"), in addition to other keywords.
  • the Action Palette contains two sub-regions: one sub-region 945 into which the user may drag-and-drop keywords from the Inspiration Palette region 925 to take the user's search for images in a new direction; and one sub-region 950 into which the user may drag-and-drop keywords from the Inspiration Palette region 925 to exclude images having those associated keywords.
  • the Reaction Palette region 955 displays a set of images having the keyword "Rock Climbing (Climbing)" associated with them that the facility has retrieved (e.g., those images having the keyword "rock climbing” associated with them).
  • Figure 9D depicts the interface 900 showing the facility's response to the user having dragged-and-dropped the keyword "Determination” 965 into the first sub-region 945 in the Action Palette.
  • the facility has provided a new set of images in response to the selection of the keyword "Determination” 965 for display in the Reaction Palette region 955.
  • Figure 9E depicts the interface 900 presented in response to the user having dragged-and-dropped the keyword "Mountain” 970 into the second sub-region 950 in the Action Palette.
  • the facility has narrowed the set of images displayed in the Reaction Palette region 955 by excluding those images having the keyword "Mountain” 970 associated with them.

Abstract

A method of presenting digital assets in response to a search query by a user to locate at least one digital asset from a database of digital assets is described. Each digital asset has at least one keyword associated with it, and each associated keyword is part of a hierarchical organization of keywords. A first set of digital assets that have associated keywords equivalent to the search query is identified as well as suggested keywords that have e.g., an ancestor, descendant or sibling relation to the search query. The digital assets and the suggested keywords are presented to the user. The user selects a suggested keyword, and a second set of digital assets that have associated keywords equivalent to the suggested keyword is identified. The second set of digital assets is presented to the user.

Description

METHOD AND SYSTEM FOR SEARCHING FOR DIGITAL
ASSETS
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application claims the priority of U.S. Provisional Application No. 60/941,582, entitled METHOD AND SYSTEM FOR SEARCHING FOR IMAGES, filed June 1, 2007, the entirety of which is hereby incorporated by reference.
BACKGROUND
[0002] In a typical image search, a user specifies one or more search terms in order to locate images corresponding to the search terms. An image typically has metadata associated with it. An image search generally works by examining the associated metadata to determine if the metadata match a user-specified search term. Typically, images for which the associated metadata match a user-specified search term are then returned to the user.
[0003] One problem that a typical image search poses is that an image is unlikely to have associated metadata that adequately describe the image. It has been said that a picture is worth a thousand words. However, it is unlikely that an image has such a large amount of associated metadata. Even if an image has such a large amount of associated metadata, it is unlikely that the individual elements of the associated metadata would combine in a coherent manner to evoke the same response in a user that viewing the image may evoke.
[0004] Accordingly, a system that allows a user to search for images by a method that improves upon a typical image search would have significant utility.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] Figure 1 is a block diagram of a suitable computer that may employ aspects of the invention.
[0006] Figure 2 is a block diagram illustrating a suitable system in which aspects of the invention may operate in a networked computer environment.
[0007] Figure 3 is a flow diagram of a process for searching for images.
[0008] Figure 4 depicts a representative interface.
[0009] Figure 5A depicts an interface in which an initial search has been refined.
[0010] Figure 5B depicts the interface in which subsequent refinements to the initial search have been made.
[0011] Figure 6 is a flow diagram of a process for returning terms in response to a user search.
[0012] Figure 7 is a schematic view of the composition of sets of terms.
[0013] Figures 8A-8E depict an interface in accordance with some embodiments of the invention.
[0014] Figures 9A-9E depict an interface in accordance with some embodiments of the invention.
DETAILED DESCRIPTION
[0015] A software and/or hardware facility for searching for digital assets, such as images, is disclosed. The facility provides a user with a visual and tactile method, or a visual method with mixed metadata, of searching for relevant images. The facility allows a user to specify one or more search terms in an initial search request. The facility may return images and terms or keywords in response to the initial search request that are intended to be both useful and inspirational to the user, to enable the user to brainstorm and find inspiration, and to visually locate relevant images. For purposes of this description the words "term" or "keyword" can include a phrase of one or more descriptive terms or keywords. In some embodiments, the facility locates images and/or terms that are classified in a structured vocabulary, such as the one described in U.S. Patent No. 6,735,583, assigned to Getty Images, Inc. of Seattle, Washington, entitled METHOD AND SYSTEM FOR CLASSIFYING AND LOCATING MEDIA CONTENT, the entirety of which is hereby incorporated by reference. The structured vocabulary may also be referred to as the controlled vocabulary. The facility allows the user to refine or expand the initial search by specifying that some of the returned images and/or terms shall or shall not become part of a subsequent search. The facility allows the user to continue refining or expanding subsequent searches reiteratively or to start over with a new initial search. The facility can also be used to search for other types of digital assets, such as video files, audio files, animation files, other multimedia files, text documents, text strings, keywords, or other types of documents or digital resources.
[0016] In the following description, like numerals refer to like elements throughout. The terminology used in the description presented below is intended to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific embodiments of the invention, and is not intended to be interpreted in any limited or restrictive manner simply because of that context. Certain terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description section.
[0017] The following description provides specific details for a thorough understanding and enabling description of these embodiments. One skilled in the art will understand, however, that the invention may be practiced without many of these details. Additionally, some well-known structures or functions may not be shown or described in detail, so as to avoid unnecessarily obscuring the relevant description of the various embodiments. Furthermore, embodiments of the invention may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing aspects of the inventions herein described.
[0018] The facility provides the user with an interface that allows the user to search for images and view images and/or terms returned in response to the user's request. In some embodiments, the interface consists of a search region and three additional regions. The facility permits the user to specify one or more search terms in the search region. The facility places images and/or terms that are responsive to the search terms in the first additional region, which may be called the "Inspiration Palette." The facility allows the user to place some of the returned images and/or terms in the second additional region, which may be called the "Action Palette." The facility places images returned in response to the user's initial search and to subsequent searches in the third additional region, which may be called the "Reaction Palette."
[0019] The facility may allow the user to specify a single search term or multiple search terms in the search region. The facility may provide the user with the ability to select one or more categories in order to filter their initial and subsequent search results. A filter may be selected by techniques well-known in the art, such as by a drop-down list or radio buttons. The categories may be represented by labels corresponding to pre-defined groups of users to which use of the facility is directed, such as "Creative," "Editorial," or "Film." The facility may require the user to specify a filter before requesting a search or the facility may allow the user to specify a filter or change or remove a specified filter after requesting a search. If the user specifies a filter, the facility may return unfiltered images and/or terms but place only filtered images and/or terms in the Inspiration Palette and/or Reaction Palette. The facility may do so in order to avoid re-running a search with a different filter, thus allowing a user to change filters in real time. Or, the facility may return and place only filtered images. The facility may choose one method or alternate between the two, depending upon user and system requirements. The facility may also allow the user to choose to perform a new initial search or to refine or expand an initial search by specifying one or more additional search terms. In some embodiments the facility may permit the user to specify that the facility return only images, or return only terms, or to return both images and/or terms.
[0020] In some embodiments, the facility may suggest search terms in response to user input, especially ambiguous user input. For example, when the user enters a search term, the facility may show terms from the structured vocabulary that are associated with or that match that search term. The facility may suggest terms while the user is entering text, such as an "auto-complete" feature does, or the facility may suggest terms after the user has finished entering text. In such a way the facility may conform the user search term to a term in the structured vocabulary that it matches or is synonymous with. In some embodiments, the facility may use, e.g., a red star to indicate a direct match of the search term, and a grey star to indicate a more distant association, as indicated by the structured vocabulary. Or, the facility may provide other visual, audible or other indications to show direct matches of search terms or other associations as indicated by the structured vocabulary. In some embodiments the facility may maintain a history of search terms for each user. The facility may then use techniques well-known in the art, such as a drop-down list, to display the search terms to the user in the search region. This allows the user to easily re-run stored searches.
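The suggestion behavior above (direct matches starred one way, more distant associations another) can be sketched as follows. The matching rules, the `"Term (Qualifier)"` naming pattern, and the `direct`/`associated` labels are assumptions for this example, not the patented matching algorithm.

```python
# Hedged sketch of term suggestion: mark an exact vocabulary match as
# "direct" (e.g., red star) and substring matches as "associated"
# (e.g., grey star). The matching rules here are illustrative.

def suggest_terms(user_input, vocabulary):
    text = user_input.strip().lower()
    suggestions = []
    for term in vocabulary:
        base = term.split(" (")[0].lower()  # "Yoga (Relaxation Exercise)" -> "yoga"
        if base == text:
            suggestions.append((term, "direct"))       # direct match: red star
        elif text in term.lower():
            suggestions.append((term, "associated"))   # distant association: grey star
    return suggestions
```

An auto-complete variant would simply call this on every keystroke instead of after the user finishes typing.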
Inspiration Palette
[0021] The facility places images and/or terms that are responsive to an initial search in the Inspiration Palette. In some embodiments, the facility returns images and/or terms that it locates based upon their classifications in the structured vocabulary hierarchy. The facility returns images and/or terms that directly match the user search term or are synonymous with the user search term. In addition to those direct matches, the facility may return images and/or terms related to the direct matches, based upon their classifications in the structured vocabulary. For example, the facility may return images and/or terms from the structured vocabulary hierarchy that are parents, children or siblings of the direct matches. The facility may also return images and/or terms that represent other categories by which the user may further refine an image search. These other categories may represent effectively binary choices, such as black and white or color, people or no people, indoors or outdoors. For example, the facility may return the term "black and white." If the user specifies that "black and white" shall become part of a subsequent search, the facility will only return images that are in black and white. In general the facility returns a broader set of images and/or terms than would be returned in a typical keyword image search that returns images having metadata/tags that match the keywords. The facility may return and place in the Inspiration Palette all relevant images and/or terms or some subset of the relevant images and/or terms. The facility may also return and place in the Inspiration Palette a random sampling of images and/or terms. The facility may also return and place in the Inspiration Palette images and/or terms based upon their popularity, i.e., the frequency with which they are selected as matching the search term, their association with popular search terms, or, in the case of images, the frequency with which terms are matched to them.
If the facility cannot display all of the images and/or terms in the Inspiration Palette the facility may provide the user with the ability to move sequentially within the search results, such as by using a scrollbar or paginating through the search results. Alternatively, the facility may provide the user with the ability to either "shuffle" or "refresh" the search results, so as to have displayed different images and/or terms.
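The hierarchy expansion described above — returning parents, children, and siblings of a direct match in addition to the match itself — can be sketched with a simple child-to-parent map. The vocabulary entries are hypothetical sample data, not the actual structured vocabulary.

```python
# Sketch: expand a direct match into related vocabulary terms by
# walking a structured-vocabulary hierarchy represented as a
# hypothetical child -> parent map.

PARENT = {
    "hatha yoga": "yoga",
    "bikram yoga": "yoga",
    "yoga": "exercise",
    "pilates": "exercise",
}

def related_terms(term):
    """Return the parent, children, and siblings of a matched term."""
    parent = PARENT.get(term)
    children = sorted(c for c, p in PARENT.items() if p == term)
    siblings = sorted(s for s, p in PARENT.items()
                      if parent is not None and p == parent and s != term)
    return {"parent": parent, "children": children, "siblings": siblings}

expanded = related_terms("yoga")
```

A real structured vocabulary would be far larger and could also carry synonym links and refinement categories, but the parent/child/sibling relationships reduce to lookups of this kind.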
[0022] In some embodiments, the facility returns images and/or terms from data pools formed from various data sources. A data pool is a collection of images and/or terms. The facility may form the following data pools from the structured or controlled vocabulary data source: identity (the term corresponding to the initial search term), parents, ancestors, siblings, children, and related terms. The facility may form the following data pools from the refinement terms data source: concepts, subjects, locations, age and others, such as number of people in the image or the gender of image subjects. The facility may form data pools from any metadata item that describes an image. The facility may form from other data sources the following data pools: popular search terms, which may be based upon search terms most often entered by users; popular terms, which may be based upon terms most often applied to images; usual suspects, which may be images and/or terms that are often beneficial to the user in filtering a search; and reference images, which may be images that match terms in the structured or controlled vocabulary. The facility may form other data pools from other data sources, such as database data, static data sources, data from data mining operations, web log data and data from search indexers. The facility may form the data pools periodically according to a schedule or as instructed by an administrator of the facility.
[0023] In some embodiments, the usual suspects data pool includes images and/or terms that are returned because they may be useful to the user in filtering search results. The facility may use terms from the usual suspects data pool in conjunction with the refinement terms in order to provide more relevant images and/or terms that fall within the usual suspects data pool.
[0024] The facility may also maintain categories or lists of excluded images and/or terms that are not to be returned in response to a user search. Excluded images and/or terms may include candidate images and/or terms; images and/or terms that are marked with a certain flag; terms that are not applied to any images; terms that fall within a certain node of the structured vocabulary hierarchy; and images and/or terms that are on a manually-populated or automatically-populated exclusion list. The facility may also apply rules to exclude images and/or terms from being returned in response to a user search. An example rule may be that if an image and/or term has a parent in a particular data pool then images and/or terms from certain other data pools are not to be returned.
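The exclusion behavior above can be sketched as a predicate applied before results are returned: terms on an exclusion list, terms marked with a flag, and terms applied to no images are dropped. The field names and sample data are assumptions for illustration.

```python
# Sketch: drop excluded terms before returning search results.
# Exclusion criteria mirror those described above; field names
# ("flagged", "image_count") are hypothetical.

EXCLUSION_LIST = {"candidate term"}

def is_excluded(term):
    return (
        term["name"] in EXCLUSION_LIST          # manual exclusion list
        or term.get("flagged", False)           # marked with a flag
        or term.get("image_count", 0) == 0      # applied to no images
    )

terms = [
    {"name": "yoga", "image_count": 120},
    {"name": "candidate term", "image_count": 4},
    {"name": "unused term", "image_count": 0},
    {"name": "flagged term", "image_count": 9, "flagged": True},
]
returned = [t["name"] for t in terms if not is_excluded(t)]
```

Rule-based exclusions (e.g., suppressing pools when a parent appears in another pool) would extend the predicate with checks against the data-pool membership of related terms.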
[0025] In some embodiments, the facility places returned images and/or terms in the Inspiration Palette in a "tag cloud," a distribution in which the size and position of the images and/or terms are based upon their location in the structured vocabulary, as shown in Figures 5A and 5B. Or, the facility may order images and/or terms in the Inspiration Palette based upon one or more factors, such as their location in the structured vocabulary, a result count, alphabetical order, or the "popularity" of images and/or terms, as determined by the facility. The facility may allow the user to specify how the facility shall place returned images and/or terms, such as by specifying that images shall be ordered in a left column and terms in a right column. The facility may increase or decrease the font size of returned terms in accordance with their proximity to matching terms in the structured vocabulary. For example, a returned term that is an immediate parent of the matching term may have a larger font size than a returned term that is a more distant parent. Or, the facility may determine the font size of a returned term based upon the number of associated returned images. The facility may vary the size of returned images in a like manner. Images and/or terms of varying sizes may be placed in the Inspiration Palette based upon their sizes, with the largest images and/or terms in the upper left corner and proceeding down to the smallest images and/or terms in the lower right corner. Alternatively, the images and font sizes may be randomly arrayed to facilitate creativity. In some embodiments, the facility apportions weights to the results returned by a particular image and/or term. The facility may then place the returned images and/or terms in the Inspiration Palette in accordance with their apportioned weights.
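One of the sizing strategies described above — scaling a term's font size with the number of associated returned images — can be sketched as a simple linear mapping. The point-size range and scaling are arbitrary illustrative choices.

```python
# Sketch: size tag-cloud terms by their associated image counts.
# MIN_PT/MAX_PT and the linear scaling are illustrative assumptions.

MIN_PT, MAX_PT = 10, 32

def font_size(image_count, max_count):
    """Scale a term's font size linearly with its image count."""
    if max_count == 0:
        return MIN_PT
    return MIN_PT + (MAX_PT - MIN_PT) * image_count // max_count

counts = {"yoga": 200, "meditation": 100, "mat": 20}
largest = max(counts.values())
sizes = {term: font_size(n, largest) for term, n in counts.items()}
```

The same mapping could instead take hierarchy distance (immediate parent vs. distant ancestor) or an apportioned weight as its input, per the alternatives described above.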
[0026] The facility may associate contextual menus with returned images and/or terms in the Inspiration Palette. The facility may display a contextual menu in response to a user indication such as a mouse click or rollover. A contextual menu may display to the user various options, such as: 1) view greater detail about the image and/or term; 2) see the terms associated with the image and/or term; 3) add the image and/or term to the Action Palette; 4) find similar images and/or terms; 5) remove the image and/or term from the search results; and 6) add the image to a shopping cart or "light box" for possible purchase or license. Other options may be displayed in the contextual menu. Alternatively, the facility may display to the user an interface that contains greater detail about an image and/or term in response to a user indication such as a mouse click. An image may be categorized and the facility may display other attributes or actions based on or determined by the image's categorization. If the facility displays to the user the terms associated with an image and/or term, the facility may order the displayed associated terms in accordance with the initial search terms. I.e., the facility may weight the associated terms so as to allow the user to locate relevant images.
Action Palette
[0027] The facility allows the user to place some of the returned images and/or terms in the Action Palette. By doing so, the user effectively changes their initial search results. This is because each image has a primary term associated with it. By placing an image in the Action Palette, the user specifies that the associated primary term shall become part of the subsequent search. The facility may also form the subsequent search in other ways, such as by adding both primary and secondary terms, by allowing the user to choose which terms to add, or by adding terms according to a weighting algorithm. The facility allows the user to place an image and/or term in the Action Palette using the technique known as "drag-and-drop." The facility may allow the user to indicate by other means that an image and/or term is to be placed in the Action Palette, such as keyboard input or selection via an option in a contextual menu as described above. The facility may allow the user to simultaneously place several images and/or terms in the Action Palette. In some embodiments, the Action Palette is further divided into two sub-regions. For images and/or terms placed in the first sub-region, the facility may add the associated terms to the initial search using a Boolean "AND." Conversely, terms associated with images and/or terms placed in the second sub-region may be added to the initial search using a Boolean "NOT." In some embodiments, the Action Palette region contains a third sub-region that corresponds to a Boolean "OR," and the associated terms of images and/or terms placed in this third sub-region may be added to the initial search using a Boolean "OR." The Boolean "OR" sub-region enables a user to expand instead of refine an image search.
In some embodiments, when the user places an image and/or term from the Inspiration Palette in the Action Palette, the facility does not replace the moved image and/or term in the Inspiration Palette but instead leaves a blank space. In some embodiments the facility replaces the moved image and/or term with another image and/or term.
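The query composition described above — AND terms narrowing the search, NOT terms excluding results, and OR terms expanding it — can be sketched as a predicate over an image's associated terms. The matching semantics shown here (match all AND terms, or any OR term, and no NOT term) are one plausible reading, stated as an assumption.

```python
# Sketch: compose the subsequent search from the Action Palette
# sub-regions. An image matches if it satisfies every AND term (when
# any are given) or any OR term, and contains no NOT term. These
# semantics are an illustrative assumption.

def matches(image_terms, and_terms, not_terms, or_terms=()):
    terms = set(image_terms)
    if any(t in terms for t in not_terms):
        return False  # NOT terms always exclude
    satisfies_and = bool(and_terms) and all(t in terms for t in and_terms)
    satisfies_or = any(t in terms for t in or_terms)
    return satisfies_and or satisfies_or

# AND "yoga", NOT "city", OR "meditation":
narrow_hit = matches(["yoga", "mat"], ["yoga"], ["city"])
expand_hit = matches(["meditation"], ["yoga"], ["city"], ["meditation"])
excluded = matches(["yoga", "city"], ["yoga"], ["city"])
```

The OR branch is what lets the third sub-region broaden rather than narrow the result set: an image need not satisfy the AND terms if it matches an OR term.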
[0028] In some embodiments, the facility allows the user to indicate the importance of images and/or terms added to the Action Palette. This may be done by allowing the user to order images and/or terms within the Action Palette, or by some other method that allows the user to specify a ranking of images and/or terms, such as by positioning or sizing the images and/or terms or by providing a weighting or ranking value. In some embodiments, the facility allows the user to specify the terms associated with an image that shall become part of the subsequent search. For example, an image of a yoga mat in a windowed room may have the primary term "yoga" associated with it as well as the secondary term "window." The facility may allow the user to specify that "yoga," "window," or another associated term shall become part of the subsequent search. The facility may also allow the user to specify additional terms by other than dragging and dropping images and/or terms in the Action Palette. This may be done, for example, by allowing the user to select the Action Palette and manually enter a term, such as by typing a term into a text box that is displayed proximate to the Action Palette.
[0029] Once the user has placed an image and/or term in the Action Palette, the facility may perform a subsequent search that changes the user's initial search. In some embodiments the facility may cache the result set returned in response to the initial search, and subsequent searches may search within or filter the cached result set, thereby allowing the facility to forego performing a new search. This allows the user to refine or narrow the initial search. The facility may cache result sets for other purposes, such as to allow faster pagination within results or to more quickly display images and/or terms in the Inspiration Palette, Action Palette or Reaction Palette. In some embodiments, an entirely new search is performed in response to the subsequent search query.
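The caching described above — storing the initial result set and filtering it for subsequent refinements rather than running a new search — can be sketched as follows. The cache key and result shape are assumptions for illustration.

```python
# Sketch: cache the initial search's result set; refinements filter
# the cached set instead of running a new search. Cache key and
# result fields are hypothetical.

CACHE = {}

def initial_search(query, run_search):
    """Run the search once and cache its results under the query."""
    if query not in CACHE:
        CACHE[query] = run_search(query)
    return CACHE[query]

def refine(query, required_term):
    """Filter the cached result set without a new search."""
    return [img for img in CACHE[query] if required_term in img["terms"]]

def fake_search(query):
    # Stand-in for the real search back end.
    return [
        {"id": 1, "terms": ["yoga", "window"]},
        {"id": 2, "terms": ["yoga", "beach"]},
    ]

initial = initial_search("yoga", fake_search)
narrowed = refine("yoga", "window")
```

As the paragraph notes, the same cached set can also serve pagination and palette redraws; an expanding (OR) refinement, by contrast, would generally require a fresh search.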
[0030] In some embodiments, if the user specifies additional search terms in the search region the facility does not remove images and/or terms in the Action Palette and does not clear images in the Reaction Palette. The facility may provide a button or other control that allows the user to direct the facility to clear the Action Palette, the Reaction Palette, and/or the Inspiration Palette.
[0031] The user may drag-and-drop images and/or terms out of the Action Palette to remove their associated terms from subsequent searches. The facility may then place removed images and/or terms in the Inspiration Palette to make them available for future operations. The facility may allow the user to drag-and-drop images and/or terms from any palette region to any other palette region, such as from the Reaction Palette to the Inspiration Palette or from the Inspiration Palette to the Reaction Palette.
Reaction Palette
[0032] Images returned in response to an initial search and to subsequent searches are displayed in the Reaction Palette. The facility may place an image in the Reaction Palette as soon as it is returned in order to indicate to the user that a search is ongoing. Alternatively, the facility may wait until all images are returned before placing them in the Reaction Palette in one batch. The user may be able to specify the size of the images displayed in the Reaction Palette, such as by specifying either a small, medium or large size. Alternatively, the size may be automatically varied among the images based on the estimated responsiveness to the search query.
[0033] If the user specifies more than one search term in the initial search, then the facility returns images with associated terms that match any of the user-specified search terms or any combination thereof. The facility sorts returned images in order of most matching terms and places them in the Reaction Palette. For example, if a user specifies four search terms A, B, C, and D, the facility would return all images with associated terms that match either A, B, C, or D, or any combination thereof, such as A, B, and C; B and D; or individually A; B; C; or D. The facility would place in the Reaction Palette first the images that match all four search terms, then the images that match any combination of three search terms, then the images that match any combination of two search terms, then the images that match any individual search term. Alternatively, the facility may sort returned images by other factors, such as the date of the image, the copyright or other rights-holder information, or perceived responsiveness to the search query.
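The ordering described above — images matching more of the search terms placed first — reduces to sorting by match count. A minimal sketch, with illustrative sample images:

```python
# Sketch: with multiple search terms, order images by how many terms
# they match, most first, as described for the Reaction Palette.

def rank(images, search_terms):
    """Sort images by matching-term count, descending."""
    def match_count(image):
        return sum(1 for t in search_terms if t in image["terms"])
    return sorted(images, key=match_count, reverse=True)

images = [
    {"id": 1, "terms": ["a"]},
    {"id": 2, "terms": ["a", "b", "c", "d"]},
    {"id": 3, "terms": ["b", "d"]},
]
ordered = rank(images, ["a", "b", "c", "d"])
```

Because Python's sort is stable, images with equal match counts keep their original relative order; a secondary key (date, rights holder, estimated responsiveness) could break such ties, per the alternatives described above.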
[0034] The facility may associate contextual menus with images in the Reaction Palette and display the contextual menus in response to a user indication. Such contextual menus may offer options similar to those previously discussed as well as additional options, such as: 1) view the purchase or license price; 2) mark the image as a favorite; 3) view more images like the selected image; or 4) view image metadata, such as the caption, photographer or copyright. Other options may be displayed in the contextual menu. If the user chooses the option to view more images like the selected image, the facility may display a different set of images taken from the larger set of returned images, or the facility may perform a new search incorporating the primary term associated with the selected image. Alternatively, the facility may permit the user to specify whether to display a different set of images or to perform a new search.
[0035] The facility may also zoom the image or otherwise allow the user to preview the image in response to a user indication, such as a mouse rollover. The facility may also permit the user to drag-and-drop images to a "light box" or shopping cart region or icon for potential purchase or license.
[0036] In some embodiments, the facility may allow the user to drag-and-drop an image from the Reaction Palette to the Action Palette so as to permit the user to request more images like the moved image. The facility may also allow the user to drag-and-drop images and/or terms from the Inspiration Palette, the Action Palette or the Reaction Palette to the search region. This allows the facility to identify images more like the moved image by searching for images and/or terms with associated terms that are similar to the associated terms of the images and/or terms moved into the search region.
[0037] In certain situations, the search query corresponding to the images and/or terms specified by the user may be such that the facility returns no results. In some embodiments, the facility may present a message to the user indicating that there are no results and suggest to the user possible changes. The user may then drag-and-drop images and/or terms from the Action Palette to the Inspiration Palette, or within the different sub-regions of the Action Palette, in order to change the search query. In some embodiments, the facility provides only images and/or terms in the Inspiration Palette that will return results when added to the user's search query. In some embodiments, the facility may return a standard set of images and/or terms in response to a search query that would otherwise return no results.
[0038] In order to learn user preferences, the facility may ask the user questions during the search process, such as why the user has selected a particular image. The facility could also request that the user rank selected images or ask the user to select a favorite image. The facility may do so in an attempt to provide the user with more control over the search process and results, or in order to provide images and/or terms in the Inspiration Palette that better match the user's search terms. The facility may also display in the search region the actual query corresponding to the user's search and subsequent refinements so as to provide the user with more control over the search process and results.
[0039] The facility supports digital assets, such as images, regardless of image content, classification, image size, type of image (e.g., JPEG, GIF, PNG, RAW or others) or other image attributes or characteristics. The facility also supports other types of digital assets, such as video files, audio files, animation files, other multimedia files, text documents, or other types of documents or digital resources. In some embodiments the facility displays only images that fall into one or more classifications, such as "Editorial," "Creative," or "Film." For example, images that are classified as "Editorial" may be images of one or more individuals who are well-known to the public. The facility may associate certain biographical information as terms with such images so as to permit the user to identify images of named individuals with varying career or personality aspects.
Suitable System
[0040] Figure 1 and the following discussion provide a brief, general description of a suitable computing environment in which the invention can be implemented. Although not required, aspects of the invention are described in the general context of computer-executable instructions, such as routines executed by a general-purpose computer, e.g., a server computer, wireless device or personal computer. Those skilled in the relevant art will appreciate that the invention can be practiced with other communications, data processing, or computer system configurations, including: Internet appliances, hand-held devices (including personal digital assistants (PDAs)), wearable computers, all manner of cellular or mobile phones, multi-processor systems, microprocessor-based or programmable consumer electronics, set-top boxes, network PCs, mini-computers, mainframe computers, and the like. Indeed, the terms "computer," "system," and the like are generally used interchangeably herein, and refer to any of the above devices and systems, as well as any data processor.
[0041] Aspects of the invention can be embodied in a special purpose computer or data processor that is specifically programmed, configured, or constructed to perform one or more of the computer-executable instructions explained in detail herein. Aspects of the invention can also be practiced in distributed computing environments where tasks or modules are performed by remote processing devices, which are linked through a communications network, such as a Local Area Network (LAN), Wide Area Network (WAN), or the Internet. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
[0042] Aspects of the invention may be stored or distributed on computer-readable media, including magnetically or optically readable computer discs, hardwired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, biological memory, or other data storage media. Indeed, computer implemented instructions, data structures, screen displays, and other data under aspects of the invention may be distributed over the Internet or over other networks (including wireless networks), on a propagated signal on a propagation medium (e.g., an electromagnetic wave(s), a sound wave, etc.) over a period of time, or they may be provided on any analog or digital network (packet switched, circuit switched, or other scheme).
[0043] Turning to the figures, Figure 1 depicts one embodiment of the invention that employs a computer 100, such as a personal computer or workstation, having one or more processors 101 coupled to one or more user input devices 102 and data storage devices 104. The computer is also coupled to at least one output device such as a display device 106 and one or more optional additional output devices 108 (e.g., printer, plotter, speakers, tactile or olfactory output devices, etc.). The computer may be coupled to external computers, such as via an optional network connection 110, a wireless transceiver 112, or both.
[0044] The input devices 102 may include a keyboard and/or a pointing device such as a mouse. Other input devices are possible such as a microphone, joystick, pen, game pad, scanner, digital camera, video camera, and the like. The data storage devices 104 may include any type of computer-readable media that can store data accessible by the computer 100, such as magnetic hard and floppy disk drives, optical disk drives, magnetic cassettes, tape drives, flash memory cards, digital video disks (DVDs), Bernoulli cartridges, RAMs, ROMs, smart cards, etc. Indeed, any medium for storing or transmitting computer-readable instructions and data may be employed, including a connection port to or node on a network such as a local area network (LAN), wide area network (WAN) or the Internet (not shown in Figure 1).
[0045] Aspects of the invention may be practiced in a variety of other computing environments. For example, referring to Figure 2, a distributed computing environment with a web interface is shown in which a system 200 includes one or more user computers 202, each of which includes a browser program module 204 that permits the computer to access and exchange data with the Internet 206, including web sites within the World Wide Web ("Web") portion of the Internet. The user computers may be substantially similar to the computer described above with respect to Figure 1. User computers may include other program modules such as an operating system, one or more application programs (e.g., word processing or spread sheet applications), and the like. The computers may be general-purpose devices that can be programmed to run various types of applications, or they may be single-purpose devices optimized or limited to a particular function or class of functions. More importantly, while shown with web browsers, any application program for providing a graphical user interface to users may be employed, as described in detail below; the use of a web browser and web interface are only used as a familiar example here.
[0046] At least one server computer 208, coupled to the Internet or Web 206, performs much or all of the functions for receiving, routing and storing of electronic messages, such as web pages, audio signals, and electronic images. While the Internet is shown, a private network, such as an intranet may indeed be preferred in some applications. The network may have a client-server architecture, in which a computer is dedicated to serving other client computers, or it may have other architectures such as peer-to-peer, in which one or more computers serve simultaneously as servers and clients. A database 210 or databases, coupled to the server computer(s), stores many of the web pages and much of the content exchanged between the user computers. The server computer(s), including the database(s), may employ security measures to inhibit malicious attacks on the system, and to preserve integrity of the messages and data stored therein (e.g., firewall systems, secure socket layers (SSL), password protection schemes, encryption, and the like).
[0047] The server computer 208 may include a server engine 212, a web page management component 214, a content management component 216 and a database management component 218. The server engine performs basic processing and operating system level tasks. The web page management component handles creation and display or routing of web pages. Users may access the server computer by means of a URL associated therewith. The content management component handles most of the functions in the embodiments described herein. The database management component handles storage and retrieval tasks with respect to the database, queries to the database, and storage of data such as video, graphics and audio signals.
[0048] Figure 3 is a flow diagram of a process 300 implemented by the facility to permit the user to submit an initial search for images and refine the initial search by specifying that some of the returned images and/or terms shall or shall not become part of a subsequent search. In step 305 the facility displays the interface to the user, with the interface including two or more of the search region, the Inspiration Palette, the Action Palette and the Reaction Palette. In step 310 the facility receives one or more search terms from the user. The facility disambiguates the one or more search terms by conforming or normalizing them to one or more terms in the structured vocabulary. The user then selects the appropriate structured vocabulary term. Alternatively, the user may start over and input one or more different search terms. After the user has selected the appropriate term from the structured vocabulary, the facility receives the selected term. In step 315 the facility determines if any images and/or terms match the selected term or are synonymous with the selected term. Matching may be an exact match, a close match, a fuzzy match, or other match algorithm. In step 320 the facility returns matching images and/or terms and displays the images and/or terms as described above. In step 325 the facility receives an indication from the user of a refinement of the initial search. In step 330 the facility updates the initial search results based on the initial search and the user refinement of the initial search.
[0049] Figure 4 depicts a representative interface 400, which includes the search region 455 and the three additional regions described above. The user may specify one or more search terms in text box 405, located in search region 455. In this depiction the facility has returned images and terms that match the user search term "yoga," and placed the returned images and terms in Inspiration Palette 410. These images and terms include image 460 and term 415, which is displayed in a larger font size relative to that of surrounding terms. Inspiration Palette 410 also contains an image 465 for which an associated term 420 is displayed. Action Palette 425 is divided into two sub-regions 430 and 435. The user may place images and/or terms in sub-region 430 to specify that the facility is to add the associated terms to the initial search using a Boolean "AND." In this example term 440 is placed in sub-region 430. The user may also place images and/or terms in sub-region 435 to specify that the facility is to add the associated terms to the initial search using a Boolean "NOT." In some embodiments the interface includes a third sub-region (not shown) corresponding to a Boolean "OR." Reaction Palette 445 contains images (not shown) returned in response to an initial search and subsequent searches. These include images 450a, 450b ... through 450n.
[0050] Figure 5A depicts another interface 500. The interface includes text box 505, in which the search term "yoga" has again been entered. The Inspiration Palette 510 contains images and terms returned in response to the search request. The Inspiration Palette has a scrollbar 535 that allows the user to scroll through all returned images and terms. In this interface the images and terms are ordered differently from the images and terms in Figure 4. The Action Palette 525 contains the term "yoga" as well as an image, and the corresponding term 515 and image 520 in the Action Palette are highlighted to indicate to the user that they have been placed in the Action Palette. The Reaction Palette 530 contains images returned in response to the refinement of the initial search. There is also a scrollbar 540 that permits the user to scroll through images in the Reaction Palette. In some embodiments the facility may enable the user to paginate between pages of images in the Reaction Palette, or to shuffle or refresh the images.
[0051] Figure 5B depicts the same interface 500 but with more images and terms in the Action Palette 525, indicating subsequent refinements of the user's initial search. Again, the terms and images in the Action Palette are highlighted in the Inspiration Palette 510. The Reaction Palette 530 in Figure 5B contains a different set of images than the Reaction Palette depicted in Figure 5A, illustrating that subsequent refinements to a search produce different results.
[0052] Figure 6 is a flow diagram of a process 600 implemented by the facility for returning terms in response to a user search. The facility may make certain assumptions in implementing the process. The facility may assume that data pools are the sole sources of terms. The facility may assume that each data pool has a desired number of terms. The facility may assume that not all pools can supply enough terms to match the number of terms desired to be taken from those pools. The facility may assume that demand is distributed to designated data pools or groups of data pools. The facility may assume that term weights may be assigned globally by popularity or, in some cases, within the data pool.
[0053] In step 605 the facility retrieves the desired number of terms from each data pool. The facility may pull the data from each of the desired data pools using appropriate filtering and sorting and/or a distribution model for data extraction. Data pools may be populated from the original data sources, data caches, or static lists. In step 610 the facility retrieves additional terms from the existing data pools to accommodate for unfulfilled demand. Unfulfilled demand may occur when a data pool does not have enough data to satisfy the initial request. For example, if a term has no children, the children data pool would have no data and any desired number of terms would become unfulfilled demand. In step 615 the facility determines the weight of each term, if the term is to be weighted. A term's weight may be dependent on its data pool. Specific data pools may provide weighted values; refinement terms, for example, indicate how many times they occur. Specific data pools may derive a constant weight; related terms, for example, may have a constant weight since they are manually assigned. A term may be given a score based upon the number of times it is assigned to images and weighted accordingly. A term may be given a score based upon being listed amongst the popular search terms and weighted accordingly. In step 620 the facility compiles a list of terms from all the data pools. The facility may combine the terms for use in the list. The facility may retain the original data pool and term weight for use in the display and organization of the terms.
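Steps 605 and 610 — take the desired number of terms from each data pool, then redistribute any unfulfilled demand to pools with terms to spare — can be sketched as follows. The pool names, counts, and redistribution policy are illustrative assumptions.

```python
# Sketch of steps 605-610: pull the desired number of terms from each
# data pool; fill unfulfilled demand from leftovers in other pools.
# Pool contents and the simple leftover policy are hypothetical.

def gather(pools, desired):
    """pools: name -> list of terms; desired: name -> requested count."""
    taken, shortfall = [], 0
    for name, want in desired.items():
        available = pools.get(name, [])
        taken.extend(available[:want])
        shortfall += max(0, want - len(available))
    # Redistribute unfulfilled demand to pools that still have terms.
    leftovers = [t for name, want in desired.items()
                 for t in pools.get(name, [])[want:]]
    taken.extend(leftovers[:shortfall])
    return taken

# A term with no children: the "children" pool is empty, so its
# demand of 2 is filled from the "related" pool's leftovers.
pools = {"children": [], "related": ["r1", "r2", "r3"], "parents": ["p1"]}
desired = {"children": 2, "related": 1, "parents": 1}
terms = gather(pools, desired)
```

Step 615 would then attach a weight to each gathered term (occurrence count, constant, or popularity score depending on its pool) before the combined list is compiled in step 620.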
[0054] In some embodiments the facility includes a configuration component that allows administrators of the facility to specify various aspects of the process for returning terms. The configuration component may allow administrators to fine-tune the process as well as enable administrators to evaluate the effectiveness of the process. The configuration component may allow administrators to specify, among other aspects: named sections for term data pools; a data pool from which to derive terms (from an existing set of available data pools); the desired number of terms to return; the maximum number of terms to use from specific data pools; and where the facility may retrieve additional terms if a specific data pool cannot meet the demand for terms from that data pool. The following depicts a sample configuration for the facility:
Pool #  Name             Priority  Desired  Max  Floor  Ceiling
1       Term             1         1        1    n/a    n/a
6       Related Terms    2         15       15   25%    95%
5       Children         3         2        4    n/a    n/a
2       Parent           4         1        1    n/a    n/a
7       Refine: Subject  5         10       15   25%    95%
8       Refine: Concept  6         10       15   25%    95%
4       Siblings         7         2        4    n/a    n/a
3       Ancestors        8         1        3    n/a    n/a
Configurable             yes       yes      yes  yes    yes
[0055] In this sample configuration the "Pool #" refers to an identification number for the data pool. "Name" refers to the name of the data pool and may describe the relationship of the terms in the data pool to the search term entered by the user. "Priority" refers to the order in which the Inspiration Palette should be populated with terms from these data pools. "Desired" refers to the minimum number of terms that should be taken from the data pool if there is an adequate supply of terms within the data pool. A data pool's priority and desired settings may combine to specify that the facility is to return terms from lower priority data pools if the facility cannot obtain the desired number of terms from the data pool of that particular priority. "Max" refers to the maximum number of terms that should be taken from the particular data pool. "Floor" refers to the minimum percentage of the search results that a term must be associated with in order for the facility to return the term for display in the Inspiration Palette. "Ceiling" refers to the maximum percentage of the search results that the term can be associated with in order for the facility to return the term for display in the Inspiration Palette. In embodiments where the Action Palette region contains an "OR" sub-region the configuration component may not use a ceiling. "Configurable" refers to the ability of the values in the various columns to be set by administrators of the facility. There may be additional columns and/or values in the configuration component that allow administrators to further configure the facility. The various data pools may have different requirements due to their characteristics and the configuration component allows for the different requirements to be taken into account. For example, the facility may select terms using a different process from different data pools, or may have different exclusions for different data pools.
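Under the assumption that each candidate term carries the fraction of the current search results it is associated with, the floor/ceiling columns above reduce to a simple range check. The sketch below is illustrative, not the facility's implementation:

```python
# Sketch of the floor/ceiling check from the sample configuration: a term
# is returned only if it is associated with at least `floor` and at most
# `ceiling` of the current search results. Pools with no floor/ceiling
# ("n/a") pass every term. Candidate terms and fractions are assumptions.

def passes_floor_ceiling(fraction, floor=None, ceiling=None):
    if floor is not None and fraction < floor:
        return False    # term too rare among the results to help refine
    if ceiling is not None and fraction > ceiling:
        return False    # term matches nearly everything; refines little
    return True

# "Refine: Subject" pool from the sample configuration: floor 25%, ceiling 95%.
candidates = {"yoga": 0.60, "person": 0.99, "mudra": 0.02}
kept = [term for term, frac in candidates.items()
        if passes_floor_ceiling(frac, floor=0.25, ceiling=0.95)]
```

In this example "person" is excluded by the ceiling and "mudra" by the floor, leaving only "yoga" as a useful refinement term.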
[0056] Figure 7 depicts a schematic view 700 of the composition of sets of terms according to some embodiments. Data sources 705, such as term lookup 720, provide different data types 710, such as controlled vocabulary 725. Data may be extracted from the data sources using selected distribution 735 and organized into different sets of data pools 715, such as parent 730. A configuration 740 may be applied to the data pools individually and/or as sets of data pools to form the sets of terms 750.
[0057] In some embodiments, the facility may cache sets of terms for specific periods of time. The facility may do so because the sets of terms may not change frequently. Caching sets of terms may enable the facility to quickly return images and/or terms in response to a user search. The facility may also cache the data in the data pools and/or the data in data sources. The facility may do so because this data may be expensive to gather, cheap to store, and may change infrequently. The facility may also cache data at various other levels in order to improve system performance. For example, the facility may derive the data pools and/or the term sets prior to usage by performing data mining on the keyword data and other data sources. The data generated from this mining could then be formatted into a persistent data cache to provide very fast access and an optimal user experience.
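Caching term sets for a fixed period can be sketched as a simple time-to-live cache. The class, the compute function, and the TTL value below are illustrative assumptions, not the facility's actual caching layer:

```python
import time

# Minimal TTL cache sketch for term sets: recompute a term set only when
# its cached copy is older than `ttl` seconds, on the assumption that
# term sets change infrequently relative to how often they are requested.

class TermSetCache:
    def __init__(self, compute, ttl=3600.0):
        self.compute, self.ttl, self._store = compute, ttl, {}

    def get(self, query):
        entry = self._store.get(query)
        now = time.monotonic()
        if entry is None or now - entry[1] > self.ttl:
            # Miss or expired: derive the term set and record when.
            entry = (self.compute(query), now)
            self._store[query] = entry
        return entry[0]

calls = []   # track how many times the (expensive) derivation runs
cache = TermSetCache(lambda q: (calls.append(q) or [q + "-related"]), ttl=60)
first = cache.get("yoga")
second = cache.get("yoga")   # served from cache; derivation runs only once
```

The same shape applies at the other caching levels the paragraph mentions (data pools, data sources), with longer TTLs for data that is more expensive to gather.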
[0058] Figures 8A-8E depict an interface 800 in accordance with another embodiment of the invention. In Figure 8A, the user may specify a search term in text box 805 and submit the search term by clicking the search button 810. In response to the user's search term, the facility has returned multiple terms for display in the Inspiration Palette 825. In this embodiment the facility has not returned images for display in the Inspiration Palette 825. The Action Palette 830 contains three sub-regions. Sub-region 840 corresponds to a Boolean "AND" and terms dragged-and-dropped into sub-region 840 will be added to the user's initial search using a Boolean "AND," which enables a user to refine an initial search. Sub-region 845 corresponds to a Boolean "OR" and terms dragged-and-dropped into sub-region 845 will be added to the user's initial search using a Boolean "OR," which enables a user to expand an initial search. Sub-region 850 corresponds to a Boolean "NOT" and terms dragged-and-dropped into sub-region 850 will be added to the user's initial search using a Boolean "NOT," which enables a user to exclude content. Images returned in response to the user's initial search and subsequent searches are displayed in the Reaction Palette 855. The interface 800 also contains a grouping of multiple check boxes 815, which enable a user to apply or remove display filters. The filters "RM," "RR," and "RF," which stand for "Rights-Managed," "Rights-Ready," and "Royalty-Free," respectively, as well as the filters "Photography" and "Illustrations," are configured as Boolean "OR" filters in the embodiment depicted in Figure 8A. The facility will return and display images that satisfy any one of the filters specified in the check boxes 815. The interface 800 also contains a button 820 labeled "More Filters" which can enable a user to specify more ways to filter display results. For example, Figure 8B depicts additional filters 835 that a user may select.
The filters 835 include "Color," which if selected displays color images, "Black & White," which if selected displays black and white images, "Horizontal," which if selected displays images that have a horizontal or landscape orientation, and "Vertical," which if selected displays images that have a vertical or portrait orientation. Other filters are, of course, possible.
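The "OR" behavior of the check-box filters can be sketched as a simple membership test: an image is displayed if it satisfies any one of the selected filters. The filter codes and sample image records below are illustrative assumptions:

```python
# Sketch of the Boolean "OR" check-box filters: an image passes when it
# carries at least one of the selected filter attributes. The attribute
# codes (RM/RR/RF, Photography, Illustrations) mirror the check boxes
# described in the text; the image records are invented for illustration.

def matches_any(image, selected):
    """selected: set of filter codes, e.g. {"RM", "RF"}."""
    return bool(selected & image["attributes"])

images = [
    {"id": 1, "attributes": {"RM", "Photography"}},
    {"id": 2, "attributes": {"RF", "Illustrations"}},
    {"id": 3, "attributes": {"RR", "Photography"}},
]
# User checks "RM" and "RF": any image satisfying either filter is shown.
shown = [im["id"] for im in images if matches_any(im, {"RM", "RF"})]
```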
[0059] In Figure 8C, the facility has suggested search terms in response to user input entered into text box 805. The facility displays search terms in the region 860 that may be associated with the search term "yoga" 862 entered by the user. In some embodiments, the facility may use, e.g., a colored star to indicate a direct match in the structured vocabulary or to indicate the best match to the user's input, as determined by the structured vocabulary or other factors. For example, the facility has suggested the term 865 labeled "Yoga (Relaxation Exercise)" and marked the term 865 with a red star in response to the user's input of "yoga." The facility has also marked several other terms 870 with a grey star to indicate a more distant association with the user's input, as indicated by the structured vocabulary or other factors.
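One way to mark the "red star" best match against the user's input is a scoring pass over candidate vocabulary entries. The string-prefix rule and vocabulary entries below are illustrative assumptions; the facility's actual matching is driven by its structured vocabulary:

```python
# Sketch of suggestion marking: a vocabulary term whose base name equals
# the user's input gets the direct ("red" star) mark; other associated
# terms get the more distant ("grey" star) mark. The parenthesized
# disambiguation format follows the examples in the text.

def mark_suggestions(user_input, vocabulary):
    query = user_input.strip().lower()
    marked = []
    for term in vocabulary:
        base = term.split(" (")[0].lower()   # "Yoga (Relaxation Exercise)" -> "yoga"
        star = "red" if base == query else "grey"
        marked.append((term, star))
    return marked

vocab = ["Yoga (Relaxation Exercise)", "Yoga Pants (Clothing)", "Meditation (Relaxation)"]
suggested = mark_suggestions("yoga", vocab)
```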
[0060] Figure 8D depicts a region 875 that the facility displays in response to a user indication such as a mouse click within the sub-region 840 of the Action Palette 830. The region 875 contains a text box 880 that allows a user to type an additional search term, such as the term "exercise" 885. The user may wish to type in an additional search term, for example, if the user does not find appropriate terms in the Inspiration Palette 825 that the user desires to use to modify the user's initial search. The user can similarly add search terms to the other sub-regions 845 and 850 of the Action Palette 830. When the user clicks the search button 882 the facility adds the search term 885 to the user's initial search using the appropriate Boolean connector and returns and displays images and/or terms returned by the modified initial search.
[0061] In Figure 8E, the facility has suggested search terms in response to user input entered into text box 880. The facility displays search terms in the region 897 that may be associated with the search term "exercise" 885 entered by the user. Similar to the description of Figure 8C, the facility uses a red star to mark the terms 890 that optimally correspond to the user's search term "exercise" 885, as indicated by the structured vocabulary or other factors. Additional search terms 895 may be marked with a grey star to indicate a more distant association with the user's input, as indicated by the structured vocabulary or other factors.
[0062] Figures 9A-9E depict interfaces in accordance with another embodiment of the invention. Figure 9A depicts an interface 900 that enables a user to provide a keyword in search box 905 and select the suggested keyword that best matches the concept or idea for which the user intends to search for images. Figure 9B depicts the interface 900 showing that the user has entered the words "rock climb" 962 into search box 905. The facility has disambiguated the user-entered words 962 by conforming or normalizing them to a term in the structured vocabulary and has suggested four keywords in response: "Rock Climbing (Climbing)" 965, "Climbing Wall (Climbing Equipment)," "Climbing Equipment (Sports Equipment)" and "Rock Boot (Sports Footwear)" (the latter three keywords collectively labeled 970). The star next to the keyword "Rock Climbing (Climbing)" 965 indicates that the facility has determined that this keyword optimally matches or corresponds to the user's search term "rock climb." Figure 9C depicts the interface 900 showing the facility's response to the user's selection of the keyword "Rock Climbing (Climbing)" 962. The Inspiration Palette region 925 contains keywords suggested by the facility, such as keywords determined by the process 600 illustrated in Figure 6 (e.g., keywords from the structured vocabulary that have an ancestor or descendant relation to the keyword "Rock Climbing (Climbing)," in addition to other keywords). The Action Palette contains two sub-regions: one sub-region 945 into which the user may drag-and-drop keywords from the Inspiration Palette region 925 to take the user's search for images in a new direction; and one sub-region 950 into which the user may drag-and-drop keywords from the Inspiration Palette region 925 to exclude images having those associated keywords.
The Reaction Palette region 955 displays the set of images that the facility has retrieved because they have the keyword "Rock Climbing (Climbing)" associated with them.
[0063] Figure 9D depicts the interface 900 showing the facility's response to the user having dragged-and-dropped the keyword "Determination" 965 into the first sub-region 945 in the Action Palette. The facility has provided a new set of images in response to the selection of the keyword "Determination" 965 for display in the Reaction Palette region 955. Figure 9E depicts the interface 900 presented in response to the user having dragged-and-dropped the keyword "Mountain" 970 into the second sub-region 950 in the Action Palette. The facility has narrowed the set of images displayed in the Reaction Palette region 955 by excluding those images having the keyword "Mountain" 970 associated with them.
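The drag-and-drop behavior of the Action Palette maps naturally onto set algebra over keyword-to-image associations: "AND" narrows by intersection, "OR" expands by union, and "NOT" excludes by difference. The index and image IDs below are illustrative assumptions, not the facility's query engine:

```python
# Sketch of Action Palette semantics over a keyword -> image-ID index.
# The keywords mirror the Figure 9 walkthrough; the IDs are invented.

index = {
    "rock climbing": {1, 2, 3, 4},
    "determination": {2, 3, 5},
    "mountain":      {3, 4},
}

results = set(index["rock climbing"])   # initial search
results &= index["determination"]       # "AND" sub-region: refine the search
results -= index["mountain"]            # "NOT" sub-region: exclude content

# "OR" sub-region: expand the initial search instead of narrowing it.
expanded = index["rock climbing"] | index["determination"]
```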
Conclusion
[0064] The foregoing description details certain embodiments of the invention. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the invention can be practiced in many ways. For example, the user search pages are described in the context of a Web-based environment, with the search/results pages being accessed via a Web browser. The methods and processes could equally well be executed as a standalone system. Furthermore, although the user-desired features are described in the context of copy space requirements, other features in an image could also be searched, where a predetermined pixel variation can be identified. Although the subject matter has been described in language specific to structural features and/or methodological steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or steps described. Rather, the specific features and steps are disclosed as preferred forms of implementing the claimed subject matter.
[0065] Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to." As used herein, the terms "connected," "coupled," or any variant thereof, means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words "herein," "above," "below," and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number respectively. The word "or," in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
[0066] The above detailed description of embodiments of the invention is not intended to be exhaustive or to limit the invention to the precise form disclosed above. While specific embodiments of, and examples for, the invention are described above for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in a given order, alternative embodiments may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternatives or subcombinations. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed in parallel, or may be performed at different times. Further, any specific numbers noted herein are only examples; alternative implementations may employ differing values or ranges.
[0067] The teachings of the invention provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various embodiments described above can be combined to provide further embodiments. Any patents and applications and other references noted above, including any that may be listed in accompanying filing papers, are incorporated herein by reference. Aspects of the invention can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further embodiments of the invention.
[0068] These and other changes can be made to the invention in light of the above Detailed Description. While the above description describes certain embodiments of the invention, and describes the best mode contemplated, no matter how detailed the above appears in text, the invention can be practiced in many ways. Details of the system may vary considerably in implementation, while still being encompassed by the invention disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the invention should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the invention with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the invention encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the invention under the claims.

Claims

I/We claim:
1. A method of presenting digital images in response to a search query by a user, comprising:
    receiving a search query from a user to locate at least one digital image from a database of digital images, wherein:
        each digital image in the database of digital images has associated therewith at least one keyword, and
        each associated keyword is part of a structured vocabulary of hierarchically organized keywords;
    conforming the search query to one of the hierarchically organized keywords in the structured vocabulary;
    determining a first set of digital images related to the conformed search query, wherein each digital image in the first set of digital images has associated therewith a keyword equivalent to or synonymous with the conformed search query;
    determining suggested keywords from the structured vocabulary, wherein at least some of the suggested keywords have, under the hierarchically organized keywords, either an ancestor, descendant or sibling relation to the conformed search query;
    presenting the first set of digital images to the user;
    presenting the suggested keywords to the user;
    receiving a selection of a suggested keyword from the user;
    determining a second set of digital images related to the selected keyword, wherein each digital image in the second set of digital images has associated therewith a keyword equivalent to or synonymous with the selected keyword; and
    presenting the second set of digital images to the user.
2. The method of claim 1, wherein:
    the suggested keywords having either an ancestor, descendant or sibling relation to the conformed search query are weighted according to their nearness to the conformed search query in the structured vocabulary and are presented to the user in varying font sizes depending upon their weights;
    at least some of the suggested keywords have a concept or subject in common with the conformed search query; and
    at least some of the suggested keywords are popularly searched keywords.
3. The method of claim 1, further comprising:
    receiving a selection of a second suggested keyword from the user;
    determining a third set of digital images related to the second suggested keyword, wherein each digital image in the third set of digital images has associated therewith a keyword equivalent to or synonymous with the second suggested keyword; and
    presenting the third set of digital images to the user.
4. A computer-readable medium encoded with a computer program to provide digital assets in response to a search for digital assets, the computer program including instructions to perform a method comprising:
    receiving a search query from a user to locate at least one digital asset from a collection of digital assets, wherein:
        each digital asset in the collection of digital assets has associated therewith at least one keyword; and
        at least some of the associated keywords are included in a collection of keywords organized by their relationships to each other;
    determining a first set of digital assets related to the search query, wherein each digital asset has associated therewith a keyword that matches the search query;
    determining suggested keywords from the collection of keywords, wherein at least some of the suggested keywords are related to the search query;
    presenting the first set of digital assets to the user;
    presenting the suggested keywords to the user;
    receiving an indication of a keyword from the user;
    determining a second set of digital assets related to the indicated keyword, wherein each digital asset in the second set of digital assets has associated therewith a keyword that matches the indicated keyword; and
    presenting the second set of digital assets to the user.
5. The computer-readable medium of claim 4, wherein the method further comprises conforming the search query to a keyword in the collection of keywords.
6. The computer-readable medium of claim 4, wherein at least some of the suggested keywords have either an ancestor, descendant or sibling relationship to the search query.
7. The computer-readable medium of claim 4, wherein at least some of the suggested keywords have a concept or subject in common with the search query.
8. The computer-readable medium of claim 4, wherein at least some of the suggested keywords are popularly searched keywords.
9. The computer-readable medium of claim 4, wherein the suggested keywords are weighted in accordance with their relationships to the search query and the weighted suggested keywords are presented to the user in varying font sizes in accordance with their weights.
10. The computer-readable medium of claim 4, wherein the second set of digital assets is a subset of the first set of digital assets.
11. The computer-readable medium of claim 4, wherein the second set of digital assets includes digital assets that are not included in the first set of digital assets.
12. The computer-readable medium of claim 4, wherein the digital assets are digital images.
13. The computer-readable medium of claim 4, wherein receiving an indication of a keyword from the user includes receiving an indication of a keyword that is manually entered by the user.
14. A computer system for providing digital assets in response to a search for digital assets, comprising:
    means for storing a collection of digital assets and a collection of keywords, wherein:
        each digital asset is associated with at least one keyword, and
        at least some of the associated keywords are organized by their relationships to each other in the collection of keywords;
    means for receiving a search for a digital asset from a requestor, wherein the search includes a term;
    means for conforming the term to a keyword in the collection of keywords;
    means for determining first and second sets of digital assets related to the conformed term, wherein the first set of digital assets have associated therewith a keyword that is directly related to the conformed term, while the second set of digital assets are related, but not directly, to the first set of digital assets; and
    means for providing the sets of digital assets to the requestor.
15. The system of claim 14, further comprising:
    means for determining a set of suggested keywords; and
    means for providing the set of suggested keywords to the requestor.
16. The system of claim 15, wherein at least some of the suggested keywords have either an ancestor, descendant or sibling relationship to the conformed term.
17. The system of claim 15, wherein the suggested keywords are weighted in accordance with their relationships to the conformed search term and the suggested keywords are sized in accordance with their weights.
18. A method of displaying digital assets in response to a search query by a user, comprising:
    receiving a search query from a user for locating at least one digital asset from a database of digital assets stored on a server, wherein:
        each digital asset has associated with it one or more keywords;
        the keywords are from a database of keywords stored on the server, wherein at least some of the keywords are organized in hierarchical relationships; and
        at least some of the keywords in the database are related to other keywords in the database;
    providing the search query to the server;
    receiving a first set of digital assets from the server, wherein each digital asset in the first set of digital assets has an associated keyword that matches the search query;
    receiving from the server a set of additional keywords retrieved from the database of keywords, wherein at least some of the additional keywords are hierarchically related to the search query;
    displaying the set of digital assets in a first region to the user; and
    displaying the set of additional keywords in a second region to the user.
19. The method of claim 18, further comprising:
    receiving from the server a keyword from the database of keywords that is responsive to the search query;
    displaying the keyword to the user;
    receiving a selection of the keyword from the user; and
    providing the selected keyword to the server.
20. The method of claim 18, further comprising:
    displaying a third region configured to receive additional keywords moved into the third region by the user;
    receiving an additional keyword moved into the third region from the user;
    providing the additional keyword to the server;
    receiving a second set of digital assets from the server, wherein each digital asset in the second set of digital assets has an associated keyword that matches the additional keyword; and
    displaying the second set of digital assets in the first region to the user.
21. The method of claim 18, further comprising:
    displaying a third region configured to receive additional keywords moved into the third region by the user;
    receiving an additional keyword moved into the third region from the user;
    providing the additional keyword to the server;
    receiving a second set of digital assets from the server, wherein each digital asset in the second set of digital assets does not have any associated keywords that match the additional keyword, and wherein the second set of digital assets is a subset of the first set of digital assets; and
    displaying the second set of digital assets in the first region to the user.
22. The method of claim 18, further comprising:
    displaying a third region configured to receive an additional search query that is manually entered by the user;
    receiving from the user a manually-entered search query;
    providing the manually-entered search query to the server;
    receiving a second set of digital assets from the server, wherein each digital asset in the second set of digital assets has an associated keyword that matches the manually-entered search query; and
    displaying the second set of digital assets in the first region to the user.
23. The method of claim 22, further comprising:
    receiving from the server another keyword responsive to the manually-entered search query;
    displaying the other keyword to the user;
    receiving a selection of the other keyword from the user;
    providing the selected other keyword to the server;
24. The method of claim 18, further comprising:
    displaying a third region configured to receive an additional search query that is manually entered by the user;
    receiving from the user a manually-entered search query;
    providing the manually-entered search query to the server;
    receiving a second set of digital assets from the server, wherein each digital asset in the second set of digital assets does not have any associated keywords that match the manually-entered search query, and wherein the second set of digital assets is a subset of the first set of digital assets; and
    displaying the second set of digital assets in the first region to the user.
25. The method of claim 18, wherein:
    at least some of the additional keywords have either an ancestor, descendant or sibling relation to the selected keyword, are weighted according to their nearness to the selected keyword and are displayed in varying font sizes in accordance with their weights;
    at least some of the additional keywords have a concept or subject in common with the selected keyword; and
    at least some of the additional keywords are popularly searched keywords.
26. The method of claim 18, further comprising:
    displaying to the user a control configured to either apply or remove a filter, wherein the filter pertains to intellectual property rights of the digital assets;
    receiving from the user an indication to either apply or remove the filter;
    providing the indication to either apply or remove the filter to the server;
    receiving a second set of digital assets that are either filtered or unfiltered in accordance with the provided indication; and
    displaying the second set of digital assets in the first region to the user.
27. A computer system for providing digital assets in response to searches for digital assets, comprising:
    a database configured to:
        store multiple keywords, wherein at least some of the keywords are organized in a hierarchical structure; and
        store multiple digital assets, wherein at least some of the digital assets have associated with them one or more keywords; and
    a server computer configured to:
        receive a request for one or more digital assets from a client computer, wherein the request includes a search term;
        identify first and second sets of digital assets responsive to the search term from the multiple digital assets stored in the database, wherein the first set of digital assets has associated therewith a keyword that directly matches the search term, while the second set of digital assets is related, but not directly, to the first set of digital assets; and
        provide the first and second sets of digital assets to the client computer.
28. The computer system of claim 27, wherein the server computer is further configured to:
    identify a first set of suggested keywords from the multiple keywords stored in the database; and
    provide the first set of suggested keywords to the client computer.
29. The computer system of claim 28, wherein:
    the server computer is further configured to conform the search term to a keyword in the hierarchical structure;
    at least some of the suggested keywords have either an ancestor, descendant or sibling relation to the keyword in the hierarchical structure and are weighted according to their nearness to the conformed search term in the hierarchical structure;
    at least some of the suggested keywords have a concept or subject in common with the keyword in the hierarchical structure; and
    at least some of the suggested keywords are popular keywords.
30. The computer system of claim 28, wherein the server computer is further configured to:
    receive a suggested keyword from the client computer;
    identify a second set of digital assets responsive to the suggested keyword from the multiple digital assets stored in the database, wherein each one of the digital assets either has an associated keyword matching the suggested keyword or does not have any associated keywords that match the suggested keyword; and
    provide the second set of digital assets to the client computer.
PCT/US2008/065561 2007-06-01 2008-06-02 Method and system for searching for digital assets WO2008151148A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2008259833A AU2008259833B2 (en) 2007-06-01 2008-06-02 Method and system for searching for digital assets
EP08756629A EP2165279A4 (en) 2007-06-01 2008-06-02 Method and system for searching for digital assets

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US94158207P 2007-06-01 2007-06-01
US60/941,582 2007-06-01

Publications (1)

Publication Number Publication Date
WO2008151148A1 true WO2008151148A1 (en) 2008-12-11

Family

ID=40089425

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/065561 WO2008151148A1 (en) 2007-06-01 2008-06-02 Method and system for searching for digital assets

Country Status (4)

Country Link
US (2) US9251172B2 (en)
EP (1) EP2165279A4 (en)
AU (1) AU2008259833B2 (en)
WO (1) WO2008151148A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110457504A (en) * 2018-05-07 2019-11-15 苹果公司 Digital asset search technique
US11400021B2 (en) 2014-10-03 2022-08-02 Plas-Tech Engineering, Inc. Syringe assembly

Families Citing this family (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8732221B2 (en) * 2003-12-10 2014-05-20 Magix Software Gmbh System and method of multimedia content editing
US8144995B2 (en) * 2005-10-04 2012-03-27 Getty Images, Inc. System and method for searching digital images
US7844609B2 (en) 2007-03-16 2010-11-30 Expanse Networks, Inc. Attribute combination discovery
US20090043752A1 (en) 2007-08-08 2009-02-12 Expanse Networks, Inc. Predicting Side Effect Attributes
US8683378B2 (en) * 2007-09-04 2014-03-25 Apple Inc. Scrolling techniques for user interfaces
US9349109B2 (en) * 2008-02-29 2016-05-24 Adobe Systems Incorporated Media generation and management
IL199762A0 (en) * 2008-07-08 2010-04-15 Dan Atsmon Object search navigation method and system
KR101014554B1 (en) * 2008-07-31 2011-02-16 주식회사 메디슨 Ultrasound system and method of offering preview pages
US8818978B2 (en) * 2008-08-15 2014-08-26 Ebay Inc. Sharing item images using a similarity score
US7945683B1 (en) * 2008-09-04 2011-05-17 Sap Ag Method and system for multi-tiered search over a high latency network
US7917438B2 (en) 2008-09-10 2011-03-29 Expanse Networks, Inc. System for secure mobile healthcare selection
US8200509B2 (en) 2008-09-10 2012-06-12 Expanse Networks, Inc. Masked data record access
US20100125809A1 (en) * 2008-11-17 2010-05-20 Fujitsu Limited Facilitating Display Of An Interactive And Dynamic Cloud With Advertising And Domain Features
JP4735995B2 (en) * 2008-12-04 2011-07-27 ソニー株式会社 Image processing apparatus, image display method, and image display program
US20100169262A1 (en) * 2008-12-30 2010-07-01 Expanse Networks, Inc. Mobile Device for Pangenetic Web
US8386519B2 (en) 2008-12-30 2013-02-26 Expanse Networks, Inc. Pangenetic web item recommendation system
US8108406B2 (en) 2008-12-30 2012-01-31 Expanse Networks, Inc. Pangenetic web user behavior prediction system
US20100281038A1 (en) * 2009-04-30 2010-11-04 Nokia Corporation Handling and displaying of large file collections
US20100281425A1 (en) * 2009-04-30 2010-11-04 Nokia Corporation Handling and displaying of large file collections
US8233999B2 (en) * 2009-08-28 2012-07-31 Magix Ag System and method for interactive visualization of music properties
US8327268B2 (en) * 2009-11-10 2012-12-04 Magix Ag System and method for dynamic visual presentation of digital audio content
US9190109B2 (en) * 2010-03-23 2015-11-17 Disney Enterprises, Inc. System and method for video poetry using text based related media
US9792638B2 (en) 2010-03-29 2017-10-17 Ebay Inc. Using silhouette images to reduce product selection error in an e-commerce environment
US8861844B2 (en) 2010-03-29 2014-10-14 Ebay Inc. Pre-computing digests for image similarity searching of image-based listings in a network-based publication system
US20110258569A1 (en) * 2010-04-20 2011-10-20 Microsoft Corporation Display of filtered data via frequency distribution
US9104670B2 (en) * 2010-07-21 2015-08-11 Apple Inc. Customized search or acquisition of digital media assets
US8412594B2 (en) 2010-08-28 2013-04-02 Ebay Inc. Multilevel silhouettes in an online shopping environment
US8990146B2 (en) 2010-12-22 2015-03-24 Sap Se Systems and methods to provide server-side client based caching
US9721006B2 (en) * 2011-03-21 2017-08-01 Lexisnexis, A Division Of Reed Elsevier Inc. Systems and methods for enabling searches of a document corpus and generation of search queries
US20120254790A1 (en) * 2011-03-31 2012-10-04 Xerox Corporation Direct, feature-based and multi-touch dynamic search and manipulation of image sets
JP5679194B2 (en) * 2011-05-18 2015-03-04 ソニー株式会社 Information processing apparatus, information processing method, and program
US9031960B1 (en) 2011-06-10 2015-05-12 Google Inc. Query image search
US8463807B2 (en) * 2011-08-10 2013-06-11 Sap Ag Augmented search suggest
US20140115525A1 (en) * 2011-09-12 2014-04-24 Leap2, Llc Systems and methods for integrated query and navigation of an information resource
US9269243B2 (en) * 2011-10-07 2016-02-23 Siemens Aktiengesellschaft Method and user interface for forensic video search
US10496250B2 (en) 2011-12-19 2019-12-03 Bellevue Investments Gmbh & Co, Kgaa System and method for implementing an intelligent automatic music jam session
US20130167059A1 (en) * 2011-12-21 2013-06-27 New Commerce Solutions Inc. User interface for displaying and refining search results
US8924890B2 (en) * 2012-01-10 2014-12-30 At&T Intellectual Property I, L.P. Dynamic glyph-based search
US9519661B2 (en) * 2012-04-17 2016-12-13 Excalibur Ip, Llc Method and system for updating a background picture of a web search results page for different search queries
US20140123178A1 (en) * 2012-04-27 2014-05-01 Mixaroo, Inc. Self-learning methods, entity relations, remote control, and other features for real-time processing, storage, indexing, and delivery of segmented video
US20140075393A1 (en) * 2012-09-11 2014-03-13 Microsoft Corporation Gesture-Based Search Queries
US20140207758A1 (en) * 2013-01-24 2014-07-24 Huawei Technologies Co., Ltd. Thread Object-Based Search Method and Apparatus
US9449079B2 (en) * 2013-06-28 2016-09-20 Yandex Europe Ag Method of and system for displaying a plurality of user-selectable refinements to a search query
CN104598483A (en) * 2013-11-01 2015-05-06 索尼公司 Picture filtering method and device and electronic device
CN103714125A (en) * 2013-12-11 2014-04-09 广州亿码科技有限公司 Searching method
US9477748B2 (en) * 2013-12-20 2016-10-25 Adobe Systems Incorporated Filter selection in search environments
JP6435779B2 (en) * 2014-10-30 2018-12-12 富士ゼロックス株式会社 Information processing apparatus and information processing program
US20160179796A1 (en) * 2014-12-23 2016-06-23 Rovi Guides, Inc. Methods and systems for selecting identifiers for media content
CN105786858A (en) * 2014-12-24 2016-07-20 深圳富泰宏精密工业有限公司 Information search system and method
US9767483B2 (en) * 2015-07-22 2017-09-19 Adobe Systems Incorporated Enabling access to third-party digital assets for systems that market content to target audiences
US11392632B1 (en) * 2016-12-12 2022-07-19 SimpleC, LLC Systems and methods for locating media using a tag-based query
US11431769B2 (en) * 2018-04-26 2022-08-30 Slack Technologies, Llc Systems and methods for managing distributed client device membership within group-based communication channels
US11243996B2 (en) * 2018-05-07 2022-02-08 Apple Inc. Digital asset search user interface
WO2019233463A1 (en) 2018-06-07 2019-12-12 Huawei Technologies Co., Ltd. Quality-aware keyword query suggestion and evaluation
CN111368119A (en) * 2020-02-26 2020-07-03 维沃移动通信有限公司 Searching method and electronic equipment
CN115080602B (en) * 2022-03-21 2023-05-26 北京科杰科技有限公司 Method for realizing accurate search of data assets based on NLP algorithm

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1160686A2 (en) 2000-05-30 2001-12-05 Godado.Com Ltd. A method of searching the internet and an internet search engine
US20020091661A1 (en) 1999-08-06 2002-07-11 Peter Anick Method and apparatus for automatic construction of faceted terminological feedback for document retrieval
US6453315B1 (en) 1999-09-22 2002-09-17 Applied Semantics, Inc. Meaning-based information organization and retrieval
US20030023560A1 (en) * 2001-07-27 2003-01-30 Fujitsu Limited Design asset information search system
US20060015489A1 (en) * 2000-12-12 2006-01-19 Home Box Office, Inc. Digital asset data type definitions
US20060018506A1 (en) * 2000-01-13 2006-01-26 Rodriguez Tony F Digital asset management, targeted searching and desktop searching using digital watermarks
WO2007012120A1 (en) 2005-07-26 2007-02-01 Redfern International Enterprises Pty Ltd Enhanced searching using a thesaurus

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS61105671A (en) 1984-10-29 1986-05-23 Hitachi Ltd Natural language processing device
US4907188A (en) 1985-09-12 1990-03-06 Kabushiki Kaisha Toshiba Image information search network system
US4829453A (en) 1987-03-05 1989-05-09 Sharp Kabushiki Kaisha Apparatus for cataloging and retrieving image data
EP0400503B1 (en) 1989-05-31 1996-09-18 Kabushiki Kaisha Toshiba High-speed search system for image data storage
US5469354A (en) 1989-06-14 1995-11-21 Hitachi, Ltd. Document data processing method and apparatus for document retrieval
US5761655A (en) 1990-06-06 1998-06-02 Alphatronix, Inc. Image file storage and retrieval system
US5404507A (en) 1992-03-02 1995-04-04 At&T Corp. Apparatus and method for finding records in a database by formulating a query using equivalent terms which correspond to terms in the input query
US5337233A (en) 1992-04-13 1994-08-09 Sun Microsystems, Inc. Method and apparatus for mapping multiple-byte characters to unique strings of ASCII characters for use in text retrieval
US5553277A (en) 1992-12-29 1996-09-03 Fujitsu Limited Image search method for searching and retrieving desired image from memory device
JP2583386B2 (en) 1993-03-29 1997-02-19 日本電気株式会社 Keyword automatic extraction device
US5802361A (en) 1994-09-30 1998-09-01 Apple Computer, Inc. Method and system for searching graphic images and videos
US5963940A (en) 1995-08-16 1999-10-05 Syracuse University Natural language information retrieval system and method
US5778361A (en) 1995-09-29 1998-07-07 Microsoft Corporation Method and system for fast indexing and searching of text in compound-word languages
US5721897A (en) * 1996-04-09 1998-02-24 Rubinstein; Seymour I. Browse by prompted keyword phrases with an improved user interface
US5978804A (en) 1996-04-11 1999-11-02 Dietzman; Gregg R. Natural products information system
US5963893A (en) 1996-06-28 1999-10-05 Microsoft Corporation Identification of words in Japanese text by a computer system
US6144968A (en) * 1997-03-04 2000-11-07 Zellweger; Paul Method and apparatus for menu access to information objects indexed by hierarchically-coded keywords
US6247009B1 (en) * 1997-03-10 2001-06-12 Canon Kabushiki Kaisha Image processing with searching of image data
US6035269A (en) 1998-06-23 2000-03-07 Microsoft Corporation Method for detecting stylistic errors and generating replacement strings in a document containing Japanese text
US6175830B1 (en) 1999-05-20 2001-01-16 Evresearch, Ltd. Information management, retrieval and display system and associated method
US6442545B1 (en) 1999-06-01 2002-08-27 Clearforest Ltd. Term-level text mining with taxonomies
US6496830B1 (en) 1999-06-11 2002-12-17 Oracle Corp. Implementing descending indexes with a descend function
US6311194B1 (en) 2000-03-15 2001-10-30 Taalee, Inc. System and method for creating a semantic web and its applications in browsing, searching, profiling, personalization and advertising
US6662192B1 (en) 2000-03-29 2003-12-09 Bizrate.Com System and method for data collection, evaluation, information generation, and presentation
US7099860B1 (en) * 2000-10-30 2006-08-29 Microsoft Corporation Image retrieval systems and methods with semantic and feature based relevance feedback
US6735583B1 (en) 2000-11-01 2004-05-11 Getty Images, Inc. Method and system for classifying and locating media content
JP3376996B2 (en) 2000-12-11 2003-02-17 株式会社日立製作所 Full text search method
JP3303881B2 (en) 2001-03-08 2002-07-22 株式会社日立製作所 Document search method and apparatus
US7870279B2 (en) * 2002-12-09 2011-01-11 Hrl Laboratories, Llc Method and apparatus for scanning, personalizing, and casting multimedia data streams via a communication network and television
WO2006113506A2 (en) * 2005-04-15 2006-10-26 Perfect Market Technologies, Inc. Search engine with suggestion tool and method of using same
US7974976B2 (en) * 2006-11-09 2011-07-05 Yahoo! Inc. Deriving user intent from a user query

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020091661A1 (en) 1999-08-06 2002-07-11 Peter Anick Method and apparatus for automatic construction of faceted terminological feedback for document retrieval
US6453315B1 (en) 1999-09-22 2002-09-17 Applied Semantics, Inc. Meaning-based information organization and retrieval
US20060018506A1 (en) * 2000-01-13 2006-01-26 Rodriguez Tony F Digital asset management, targeted searching and desktop searching using digital watermarks
EP1160686A2 (en) 2000-05-30 2001-12-05 Godado.Com Ltd. A method of searching the internet and an internet search engine
US20060015489A1 (en) * 2000-12-12 2006-01-19 Home Box Office, Inc. Digital asset data type definitions
US20030023560A1 (en) * 2001-07-27 2003-01-30 Fujitsu Limited Design asset information search system
WO2007012120A1 (en) 2005-07-26 2007-02-01 Redfern International Enterprises Pty Ltd Enhanced searching using a thesaurus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GREENBERG J: "Optimal Query Expansion (QE) Processing Methods with Semantically Encoded Structured Thesauri Terminology", JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, vol. 52, no. 6, 1 April 2001 (2001-04-01), pages 487 - 498
See also references of EP2165279A4 *


Also Published As

Publication number Publication date
EP2165279A4 (en) 2012-01-18
US20160253410A1 (en) 2016-09-01
US9251172B2 (en) 2016-02-02
US20080301128A1 (en) 2008-12-04
AU2008259833B2 (en) 2012-11-08
EP2165279A1 (en) 2010-03-24
US10242089B2 (en) 2019-03-26
AU2008259833A1 (en) 2008-12-11

Similar Documents

Publication Publication Date Title
US10242089B2 (en) Method and system for searching for digital assets
US11513998B2 (en) Narrowing information search results for presentation to a user
US20230078155A1 (en) Narrowing information search results for presentation to a user
KR101475126B1 (en) System and method of inclusion of interactive elements on a search results page
JP5309155B2 (en) Interactive concept learning in image retrieval
US9053115B1 (en) Query image search
US7769771B2 (en) Searching a document using relevance feedback
JP4776894B2 (en) Information retrieval method
JP5571091B2 (en) Providing search results
US9619469B2 (en) Adaptive image browsing
US8229927B2 (en) Apparatus, system, and method for information search
US11907669B2 (en) Creation of component templates based on semantically similar content
US20150006505A1 (en) Method of and system for displaying a plurality of user-selectable refinements to a search query
US20130024448A1 (en) Ranking search results using feature score distributions
US20140188931A1 (en) Lexicon based systems and methods for intelligent media search
WO2008027367A2 (en) Search document generation and use to provide recommendations
WO2013173099A2 (en) Knowledge panel
CN112740202A (en) Performing image search using content tags
Nazemi et al. Visual trend analysis with digital libraries
RU2698405C2 (en) Method of search in database
US8131702B1 (en) Systems and methods for browsing historical content
Lihui et al. Using Web structure and summarisation techniques for Web content mining
Wang et al. Beyond concept detection: The potential of user intent for image retrieval
JP6800478B2 (en) Evaluation program for component keywords that make up a Web page
Sakthivelan et al. A new approach to classify and rank events based videos based on Event of Detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 08756629; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2008259833; Country of ref document: AU)
NENP Non-entry into the national phase (Ref country code: DE)
WWE Wipo information: entry into national phase (Ref document number: 2008756629; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2008259833; Country of ref document: AU; Date of ref document: 20080602; Kind code of ref document: A)