WO2017123746A1 - System and method for intuitive content browsing - Google Patents


Info

Publication number
WO2017123746A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
focal point
visual representation
browsed
identified
Prior art date
Application number
PCT/US2017/013175
Other languages
French (fr)
Inventor
Roi KLIPER
Original Assignee
The Joan And Irwin Jacobs Technion-Cornell Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Joan And Irwin Jacobs Technion-Cornell Institute
Publication of WO2017123746A1 publication Critical patent/WO2017123746A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/957 Browsing optimisation, e.g. caching or content distillation
    • G06F16/9577 Optimising the visualization of content, e.g. distillation of HTML documents
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04802 3D-info-object: information is displayed on the internal or external surface of a three dimensional manipulable object, e.g. on the faces of a cube that can be rotated by the user

Definitions

  • the present disclosure relates generally to displaying content, and more particularly to intuitively organizing content to allow for interactions in two-dimensional and three-dimensional space.
  • refinement wastes time and computing resources due to the submission of additional queries, even for users that are familiar with the idiosyncrasies of web-based content searches. Further, inexperienced users may be frustrated by the inability to properly refine their searches to obtain the desired results.
  • a user living in New York City seeking to purchase wine may submit a query of "wine.”
  • search results related to wine generally, the user may wish to refine his search to focus on red wine and, as a result, enters a refined query of "red wine.”
  • the user may wish to further refine his search to focus on red wine originating from France and, thus, enter a refined query of "red wine France.”
  • the results of this search may include content related to red wine being sold in France and/or to red wine originating from France being sold anywhere in the world.
  • the user may further need to refine his search on French red wine that can be bought locally and, therefore, enter a further refined query of "red wine France in New York.”
  • Each of the refinements requires the user to manually enter a refined query and submit the query for a new search, thereby wasting the user's time and unnecessarily using computing resources.
  • Existing solutions for refining content queries often involve offering predetermined potential refined queries and directing users to content upon user interactions with the potential refined queries.
  • the potential refined queries may be based on, e.g., queries submitted by previous users.
  • previous user queries do not always accurately capture a user's current needs, particularly when the user is not aware of his or her needs. For example, a user seeking to buy chocolate may initially enter the query "chocolate” before ultimately deciding that she would like to buy dark chocolate made in Zurich, Switzerland.
  • Potential refinements offered based on the initial query may include "dark chocolate,” “white chocolate,” “milk chocolate,” and "Swiss chocolate,” none of which entirely captures the user's ultimate needs. Thus, the user may need to perform several refinements and resend queries multiple times before arriving at the desired content.
  • the user when viewing search results or otherwise viewing content, the user is typically presented with display options such as organizing content in various organizational schemes (e.g., list form, grid form, and so on) and/or based on different ordering schemes (e.g., by date or time, relevancy to a query, alphabetical order, and so on).
  • users viewing content related to a particular book may wish to view content related to books by the same author, about the same subject, from the same genre or literary era, and so on.
  • users may be able to reorganize displayed content by, e.g., changing the organizational scheme, submitting refinement queries, changing the ordering scheme, and so on.
  • Certain embodiments disclosed herein include a method for intuitive content browsing.
  • the method includes determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identifying, based on the request and the determined initial focal point, the content to be browsed; generating, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
  • Certain embodiments disclosed herein also include a non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process, the process comprising: determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identifying, based on the request and the determined initial focal point, the content to be browsed; generating, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
  • Certain embodiments disclosed herein also include a system for intuitive content browsing.
  • the system comprises: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: determine, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identify, based on the request and the determined initial focal point, the content to be browsed; generate, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and send, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
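The claimed flow (determine an initial focal point, identify content to be browsed, generate a visual representation organized around the focal point, and send it for display) can be outlined as follows. This is an illustrative Python sketch only; every function name and data shape is an assumption, as the disclosure does not specify an implementation.

```python
# Illustrative sketch of the claimed browsing pipeline; all names
# (determine_focal_point, identify_content, etc.) are hypothetical.

def determine_focal_point(request):
    # The focal point represents a single content item; here we simply
    # take the first term of the query as a stand-in.
    return {"item": request["query"].split()[0]}

def identify_content(request, focal_point):
    # Placeholder: content would be retrieved from search engines or
    # other retrieval systems based on the query and focal point.
    return [f"{focal_point['item']}-result-{i}" for i in range(5)]

def generate_visual_representation(content, focal_point):
    # The identified content is organized with respect to the focal point.
    return {"focal_point": focal_point, "content": content}

def browse(request):
    focal = determine_focal_point(request)
    content = identify_content(request, focal)
    return generate_visual_representation(content, focal)

representation = browse({"query": "wine"})
print(representation["focal_point"]["item"])  # wine
```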
  • Figure 1 is a network diagram utilized to describe the various disclosed embodiments.
  • Figure 2 is a flowchart illustrating a method for organizing content according to an embodiment.
  • Figure 3 is a screenshot illustrating a spherical organization of content.
  • Figure 4 is a flowchart illustrating a method for displaying content that may be intuitively browsed according to an embodiment.
  • Figure 5 is a schematic diagram of a visual representation generator according to an embodiment.
  • Fig. 1 shows an example network diagram 100 utilized to describe the various disclosed embodiments.
  • the network diagram 100 includes a network 110, a user device 120, a visual representation generator 130, a plurality of content retrieval systems 140-1 through 140-n (hereinafter referred to individually as a content retrieval system 140 and collectively as content retrieval systems 140, merely for simplicity purposes), and an inventory management system 150.
  • the network 110 may be the Internet, the world-wide-web (WWW), a local area network (LAN), a wide area network (WAN), a metro area network (MAN), or any other network configured to enable communication between the elements of the network diagram 100.
  • the user device 120 may be a personal computer (PC), a personal digital assistant (PDA), a mobile phone, a smart phone, a tablet computer, a wearable computer device, an e-reader, a game console, or any other device equipped with browsing capabilities.
  • the content retrieval systems 140 may include, but are not limited to, search engines or other sources of content from which content may be retrieved. Alternatively or collectively, the content retrieval systems 140 may include or be communicatively connected to one or more data sources which can be queried or crawled for content.
  • the user device 120 may further include a browsing agent 125 installed therein.
  • the browsing agent 125 may be, but is not limited to, a mobile application, a virtual application, a web application, a native application, and the like. In certain configurations, the browsing agent 125 can be realized as an add-on or plug-in for a web browser. In other configurations, the browsing agent 125 is a web browser.
  • the user device 120 may receive a user query or otherwise receive a request to display content (e.g., via the browsing agent 125) and may send, to the visual representation generator 130, a request to generate a visual representation of the content to be browsed.
  • the request to generate a visual representation may include, but is not limited to, the user query, the content to be browsed, an identifier of the content to be browsed, or a combination thereof.
  • the user query may include a text query or a voice query.
  • the user query may be submitted through a user gesture, e.g., tapping on a certain image or key word.
  • the visual representation generator 130 is configured to receive the request to generate a visual representation and to determine an initial focal point based on the request.
  • the initial focal point includes content to be initially displayed prominently (e.g., before navigation) to the user.
  • Non-limiting examples of prominently displaying the initial focal point include displaying the initial focal point as larger than other content; displaying the initial focal point in a center, top, or other portion of a display; displaying the focal point with at least one prominence marker (e.g., a letter, a number, a symbol, a graphic, a color, etc.); displaying the focal point with a higher brightness or resolution than other content; displaying the focal point using one or more animations (e.g., displaying the focal point as moving up and down); a combination thereof; and the like.
  • a most recent image of a dog may be selected as the initial focal point such that, when the visual representation is initially displayed to the user, the image of the dog is the largest and centermost image appearing on a display (not shown) of the user device 120.
  • determining an initial focal point based on the request may further include pre-processing the user query.
  • Pre-processing the user query may include, but is not limited to, correcting typos, enriching the query with information related to the user (e.g., a browsing history, a current location, etc.), and so on.
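The pre-processing described above (typo correction and enrichment with user information) might be sketched as follows. The correction table, function name, and enrichment rule are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch of query pre-processing: typo correction plus
# enrichment with user context (here, the user's current location).

TYPO_FIXES = {"winee": "wine", "choclate": "chocolate"}  # toy correction table

def preprocess_query(query, user_context):
    # Correct known typos token by token.
    tokens = [TYPO_FIXES.get(t, t) for t in query.lower().split()]
    corrected = " ".join(tokens)
    # Enrich with the user's current location when the query lacks one.
    location = user_context.get("location")
    if location and location.lower() not in corrected:
        corrected = f"{corrected} in {location}"
    return corrected

print(preprocess_query("winee", {"location": "New York"}))
# wine in New York
```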
  • the initial focal point may include a web site utilized as a seed for a search.
  • the initial focal point for a search based on the user query "buy shoes" may be a web site featuring a large variety of shoes.
  • the visual representation generator 130 is configured to retrieve content from the retrieval systems 140 based on a focal point. The first time content is retrieved for the request, the initial focal point is used.
  • the retrieval systems 140 may search using the user query with respect to the focal point. Alternatively or collectively, the visual representation generator 130 may crawl through one or more of the retrieval systems 140 for the content.
  • the retrieved content may include, but is not limited to, search results, content to be displayed, or both.
  • the visual representation generator 130 may be configured to query an inventory management system 150 and to receive, from the inventory management system 150, a probability that one or more vendors have a sufficient inventory of a product based on the user query and the focal point.
  • An example implementation of an inventory management system for returning probabilities that vendors have sufficient inventories of products is described further in the above-referenced US Patent Application No. 14/940,396 filed on November 13, 2015, assigned to the common assignee, which is hereby incorporated by reference for all that it contains.
  • the visual representation generator 130 is further configured to organize the retrieved content.
  • the content may be organized around the focal point. Accordingly, the content, when initially organized, may be organized around the initial focal point.
  • the visual representation generator 130 may be configured to receive user interactions respective of the organized content and to determine a current focal point based on the user interactions.
  • the content may be organized as points on a sphere and displayed to the user.
  • the sphere may be displayed in a three-dimensional (3D) plane (i.e., using a stereoscopic display) or in a two-dimensional (2D) plane (i.e., such that the sphere appears to be 3D merely via optical illusion).
  • the visual representation generator 130 is configured to generate a visual representation including a browsing environment and a plurality of dynamic content elements.
  • Each dynamic content element includes content or representations of content to be browsed.
  • the focal point includes one of the dynamic content elements.
  • the browsing environment may include, but is not limited to, images, videos, or other visual illustrations of an environment in which the content is to be browsed.
  • the visual illustrations may be two-dimensional, three-dimensional, and the like.
  • the browsing environment may include visual illustrations of a real location (e.g., a store or other physical location), of a non-real location (e.g., a cartoon library, a virtual store, an imaginary combination of real stores, or any other virtual or fictional location), or any other visual illustrations (for example, a visual illustration showing text, objects, people, animals, solid colors, patterns, combinations thereof, and the like).
  • the browsing environment is rendered at the beginning of browsing and may be static (i.e., remaining the same as content is browsed) or dynamic (i.e., re-rendered or otherwise updated as content is browsed).
  • the browsing environment may include images showing a physical store in which products represented by the dynamic content elements are sold, where the images are updated to show different areas in the physical store as the user "moves" through the store by navigating among dynamic elements representing products sold in the store.
  • the browsing environment may include a static image illustrating a library, where the static image remains constant as the user navigates among dynamic elements representing books in the library.
  • the dynamic content elements may be updated and rendered in real-time as the user browses. Updating the dynamic content elements may include, but is not limited to, changing the content to be displayed, updating information related to each content item (e.g., updating a value for a number of items in stock when the content represents a purchasable product), dynamically organizing the dynamic content elements, a combination thereof, and the like. Dynamic organization of the dynamic content elements may be based on one or more dynamic organization rules.
  • Such dynamic organization rules may be based on, but are not limited to, amount of inventory in stock for store products (e.g., a current inventory or projected future inventory), popularity of content (e.g., content which is trending may be organized closer to the focal point), relevance to user interests (e.g., content that is more relevant to current user interests may be organized closer to the focal point), combinations thereof, and the like.
  • the visual representation may represent an online store, with the browsing environment showing a storefront image and the dynamic content elements including product listings.
  • the product listings may include, but are not limited to, text (e.g., product descriptions, product information such as inventory and price, etc.), images (e.g., images of the product), videos (e.g., videos demonstrating the product), sound (e.g., sound including customer reviews), combinations thereof, and the like.
  • the dynamic content elements are rendered in real-time as the user browses.
  • the rendered dynamic content elements include information related to the product listings, where the information includes at least a current or projected inventory.
  • the rendered dynamic content elements are organized in real-time based on dynamic organization rules.
  • the dynamic organization rules are based on inventory such that lower inventory items or items having minimal inventory (e.g., having an amount of inventory below a predetermined threshold) are organized closer to the focal point. Such organization may be useful for, e.g., incentivizing users to buy lower stock products. As the user browses, inventory information for the product listings is updated and the dynamic content elements are organized accordingly.
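The inventory-based organization rule described above (lower-inventory items placed closer to the focal point, with minimal-inventory items flagged against a threshold) can be sketched as follows. Field names, the sample data, and the threshold value are illustrative assumptions.

```python
# Sketch of a dynamic organization rule: items with lower inventory are
# placed closer to the focal point. Data and field names are illustrative.

def organize_by_inventory(items, low_stock_threshold=5):
    # Sort ascending by inventory so low-stock items come first
    # (i.e., closest to the focal point); flag minimal-inventory items.
    ordered = sorted(items, key=lambda it: it["inventory"])
    for it in ordered:
        it["low_stock"] = it["inventory"] < low_stock_threshold
    return ordered

listings = [
    {"name": "boots", "inventory": 12},
    {"name": "sandals", "inventory": 2},
    {"name": "sneakers", "inventory": 7},
]
print([it["name"] for it in organize_by_inventory(listings)])
# ['sandals', 'sneakers', 'boots']
```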
  • FIG. 3 shows an example screenshot illustrating a content sphere 300 which is a spherical visual representation of content.
  • the content sphere 300 is organized around a focal point 310.
  • a plurality of images 320 act as points on the sphere representing content. If the focal point changes, the sphere may be rotated to show a different icon as the focal point.
  • a horizontal axis 330 and a vertical axis 340 visually represent potential directions for changing the focal point to view additional content. For example, the user may gesture horizontally to view content oriented along the horizontal axis 330 and may gesture vertically to view content oriented along the vertical axis 340.
  • the images 320 may include icons, textual information, widgets, or any other representation or presentation of the displayed content.
  • the axes 330 and 340 may be adaptably changed as the user selects new content to be the focal point 310 (e.g., by providing user gestures with respect to one of the images 320), as the user rotates the content sphere 300 (e.g., by providing user gestures with respect to the axes 330 and 340), or both. That is, the visual representation generator 130 is configured to predict (through a learning process) the user's path as the user browses via the presented content sphere 300. As an example, if the focal point 310 includes content related to President Bill Clinton, then rotating the content sphere 300 down along the vertical axis 340 may return results related to the US in the 1990s. On the other hand, rotating the content sphere 300 to the right along the horizontal axis 330 may return results related to the Democratic party.
  • axes of interest may be initially predefined, and then adaptably modified.
  • the focal point 310 is changed to content related to President Obama by rotating the content sphere 300 along the horizontal axis 330
  • the content available by rotating along the vertical axis 340 may become content related to the US in the 2000s.
  • the user may be provided with an endless browsing experience.
  • each content item can be presented in different virtual settings. As an example, a lipstick may be presented in the context of cosmetics and then again as part of a Halloween costume, thus providing a continually new browsing experience.
  • the display may include a graphical user interface for receiving user interactions with respect to the spherically organized search results.
  • the search results for the query "alcoholic drinks" may be displayed as points on the content sphere 300 based on an initial focal point of a website featuring beer. Results from the initial focal point may be initially displayed as the focal point 310.
  • a new focal point 310 may be determined as a web site for another type of alcoholic beverage (e.g., wine, vodka, and so on).
  • a new focal point may be determined as a web site for a particular brand of beer.
  • the example content sphere 300 shown in Fig. 3 is merely an example of a visual representation and is not limiting on the disclosed embodiments.
  • the content sphere 300 is shown as having two axes merely for illustrative purposes. Different numbers of axes may be equally utilized, any of which may be, but are not necessarily, horizontal or vertical. For example, diagonal axes may be utilized in addition to or instead of horizontal and vertical axes. Further, the axes may be three-dimensional without departing from the scope of the disclosure.
  • the content sphere 300 may be navigated by moving closer or farther away from a center point of the sphere.
  • the content may be shown in a shape other than a spherical shape without departing from the scope of the disclosure. It should also be noted that the content sphere 300 is shown as having a solid black background surrounding the images 320 merely as an example illustration. Other browsing environments (e.g., other colors, patterns, static or dynamic images, videos, combinations thereof, etc.) may be equally utilized without departing from the scope of the disclosed environments. For example, the content may be rendered as three-dimensional representations of shelves and aisles of a real store, where the view of the shelves and aisles is updated as the user browses through images of products in the store.
  • Fig. 2 is an example flowchart 200 illustrating a method for refining search results according to an embodiment.
  • the method may be performed by a visual representation generator (e.g., the visual representation generator 130, Fig. 1).
  • the method may be utilized to adaptively update visual representations of search results (e.g., search results displayed as the content sphere 300, Fig. 3).
  • a query by a user of a user device is received.
  • the query may be received in the form of text, multimedia content, and so on.
  • the query may be a textual query or a voice query.
  • the query may be preprocessed by, e.g., correcting typos, enriching the query with user information, and so on.
  • a focal point is determined based on the received query.
  • the focal point may include a web site to be utilized as a seed for a search based on the query.
  • the determination may include identifying one or more web sites related to the user query.
  • the seed web site may be selected from among the identified web sites based on, e.g., relative validity of the sites (e.g., numbers of legitimate clicks or presence of malware). For example, a user query for "cheese" may result in identification of web sites related to grocery stores, restaurants, and so on.
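The seed-site selection described above (choosing among identified candidate web sites by relative validity, e.g., legitimate clicks and absence of malware) might be sketched as follows. The scoring rule and data fields are illustrative assumptions.

```python
# Sketch of selecting a seed web site from candidates by a simple
# validity score; malware presence disqualifies a site outright.

def select_seed(candidates):
    def validity(site):
        score = site["legitimate_clicks"]
        if site["has_malware"]:
            score = -1  # malware disqualifies the site
        return score
    return max(candidates, key=validity)

sites = [
    {"url": "grocer.example", "legitimate_clicks": 900, "has_malware": False},
    {"url": "cheese.example", "legitimate_clicks": 1500, "has_malware": False},
    {"url": "spam.example", "legitimate_clicks": 9000, "has_malware": True},
]
print(select_seed(sites)["url"])  # cheese.example
```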
  • the seed website may be utilized as the initial focal point for the query such that content related to the seed website is displayed as the focal point prior to user interactions with respect to the visual representation.
  • At S230, at least one retrieval system is queried with respect to the received user query to retrieve search results.
  • the focal point is further sent to the retrieval systems as a seed for the search.
  • the retrieval systems may include, but are not limited to, search engines, inventory management systems, and other systems capable of retrieving content respective of queries.
  • S230 may further include querying at least one inventory management system for probabilities that products indicated in the search results are in stock at particular merchants.
  • the probabilities may be utilized to, e.g., enrich or otherwise provide more information related to the search results.
  • the probability that a brand of shoe is in stock at a particular merchant may be provided in, e.g., a top left corner of an icon representing content related to the brand of shoe sold by the merchant.
  • a visual representation is generated based on the search results and the identified focal point.
  • the visual representation may include points representing particular search results (e.g., a particular web page or a portion thereof).
  • the visual representation may include a graphical user interface for receiving user interactions respective of the search results.
  • S240 includes organizing the search results respective of the focal point.
  • the search results may be organized graphically.
  • the organization may include assigning the search results to points on, e.g., a sphere or other geometrical organization of the search results.
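The disclosure does not specify how results are assigned to points on a sphere; one plausible sketch maps N results to evenly spaced points on a unit sphere using a Fibonacci (golden-angle) lattice. This placement algorithm is an assumption, not part of the disclosure.

```python
# Sketch of assigning n content items to evenly spaced points on a unit
# sphere using the Fibonacci lattice.

import math

def sphere_points(n):
    golden_angle = math.pi * (3 - math.sqrt(5))
    points = []
    for i in range(n):
        y = 1 - 2 * i / (n - 1) if n > 1 else 0.0  # y from 1 down to -1
        r = math.sqrt(max(0.0, 1 - y * y))         # ring radius at height y
        theta = golden_angle * i
        points.append((r * math.cos(theta), y, r * math.sin(theta)))
    return points

pts = sphere_points(50)
# Every point lies on the unit sphere (within floating-point error).
assert all(abs(x*x + y*y + z*z - 1) < 1e-9 for x, y, z in pts)
```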
  • the generated visual representation may include a browsing environment and a plurality of dynamic content elements.
  • the browsing environment may include, but is not limited to, images, videos, or other visual illustrations of an environment (e.g., a real location, a virtual location, background colors and patterns, etc.) in which content is to be browsed.
  • the browsing environment may be static, or may be dynamically updated in real-time as a user browses the content.
  • the dynamic content elements are updated in real-time as a user browses the content.
  • the dynamic content elements may be further reorganized in real-time based on the user browsing.
  • the generated visual representation may include a static storefront image and a plurality of dynamic elements updated in real-time to show product listings with current inventories, where the dynamic elements are organized such that lower inventory content items are closer to the focal point than higher inventory items.
  • the visual representation of the organized search results is caused to be displayed to the user.
  • S250 may include sending the visual representation to a user device (e.g., the user device 120, Fig. 1).
  • the visual representation may be displayed as a three-dimensional (3D) representation of the search results.
  • At S260, at least one user input is received with respect to the displayed visual representation.
  • the user inputs may include, but are not limited to, key strokes, mouse clicks, mouse movements, user gestures on a touch screen (e.g., tapping, swiping), movement of a user device (as detected by, e.g., an accelerometer, a global positioning system (GPS), a gyroscope, etc.), voice commands, and the like.
  • At S270, based on the received user inputs, the search results are refined.
  • S270 may include determining a current focal point based on the user inputs and the visual representation.
  • S270 includes updating the visual representation using a website of the new focal point as a seed for the search.
  • At S280, it is determined whether additional user inputs have been received and, if so, execution continues with S260; otherwise, execution terminates.
  • S280 includes determining if the additional user inputs include a new or modified query and, if so, execution may continue with S210.
  • Fig. 4 is an example flowchart 400 illustrating a method for displaying content for intuitive browsing according to an embodiment.
  • the method may be performed by a visual representation generator (e.g., the visual representation generator 130).
  • the visual representation generator may query, crawl, or otherwise obtain content from content retrieval systems (e.g., the content retrieval systems 140).
  • the method may be performed by a user device (e.g., the user device 120) based on locally available content, retrieved content, or both.
  • a request to display content is received.
  • the request may include, but is not limited to, a query, content to be displayed, an identifier of content to be displayed, an identifier of at least one source of content, a combination thereof, and so on.
  • a focal point is determined.
  • the focal point may be determined based on, but not limited to, the query, the content to be displayed, the designated content sources, information about a user (e.g., a user profile, a browsing history, demographic information, etc.), combinations thereof, and so on.
  • the focal point may be, but is not limited to, content, a source of content, a category or other grouping of content, a representation thereof, and so on.
  • the focal point may be related to a website to be used as a seed for a search with respect to the query (e.g., a web crawl).
  • the identified content may be related to the focal point.
  • the identified content may be stored locally, or may be retrieved from at least one data source (e.g., the content retrieval systems 140 or the inventory management system 150, Fig. 1).
  • the identified content may include, but is not limited to, content from the same or similar web sources, content that is contextually related to content of the focal point (e.g., belonging to a same category or otherwise sharing common or related information), and so on. Similarity of content may be based on matching the content. In an embodiment, content and sources may be similar if they match above a predefined threshold.
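The similarity test above (content is similar when it matches above a predefined threshold) might be sketched with a token-overlap (Jaccard) score. The scoring method and threshold value are illustrative assumptions; the disclosure does not specify a matching technique.

```python
# Sketch of threshold-based content similarity using token overlap
# (Jaccard index); the scoring choice is an illustrative assumption.

def match_score(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def is_similar(a, b, threshold=0.3):
    # Content items are "similar" when they match above the threshold.
    return match_score(a, b) > threshold

print(is_similar("french red wine", "red wine from france"))  # True
```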
  • the identified content is organized with respect to the focal point.
  • the organization may be based on one or more axes.
  • the axes may represent different facets of the determined content such as, but not limited to, creator (e.g., an artist, author, director, editor, publisher, etc.), geographic location, category of subject matter, type of content, genre, time of publication, and any other point of similarity among content.
  • content related to a focal point of a particular movie may be organized based on one or more axes such as, but not limited to, movies featuring the same actor(s), movies by the same director, movies by the same publisher, movies within the same genre, movies originating in the same country, movies from a particular decade or year, other media related to the movie (e.g., a television show tying into the movie), merchandise or other products related to the movie, and so on.
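The facet-axis organization above can be sketched as follows: for each facet (e.g., director, genre, year), one axis collects the items sharing that facet value with the focal-point item. The function name, facet fields, and sample records are illustrative assumptions.

```python
# Sketch of organizing content along facet axes relative to a
# focal-point item; facets and records are illustrative.

def build_axes(focal, items, facets):
    # For each facet, collect the items that share that facet value
    # with the focal point; these form one browsing axis.
    axes = {}
    for facet in facets:
        axes[facet] = [it["title"] for it in items
                       if it is not focal and it.get(facet) == focal.get(facet)]
    return axes

movies = [
    {"title": "A", "director": "d1", "genre": "noir", "year": 1950},
    {"title": "B", "director": "d1", "genre": "drama", "year": 1950},
    {"title": "C", "director": "d2", "genre": "noir", "year": 1960},
]
axes = build_axes(movies[0], movies, ["director", "genre", "year"])
print(axes["director"])  # ['B']
print(axes["genre"])     # ['C']
```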
  • a visual representation of the organized content is generated.
  • the visual representation may include points, each point representing at least a portion of the identified content.
  • the visual representation may include a graphical user interface for receiving user interactions respective of the search results.
  • the visual representation may be spherical, may allow a user to change axes by gesturing horizontally, and may allow a user to change content within an axis by gesturing vertically.
  • the visual representation may be three-dimensional.
  • the generated visual representation is caused to be displayed on a user device.
  • S460 includes sending the visual representation to the user device.
  • the visual representation may be updated when, e.g., an amount of content that has been displayed is above a predefined threshold, a number of user interactions is above a predefined threshold, a refined or new query is received, and the like.
  • a request to display content is received. The request includes the query "the thinker.” A focal point including an image of the sculpture "The Thinker” by Auguste Rodin is determined. Content related to "The Thinker," including various sculptural and artistic works, is determined using the website in which the image is shown as a seed for a search.
  • the content is organized spherically based on axes including other famous sculptures, sculptures by French sculptors, art by Auguste Rodin, works featuring "The Thinker,” sculptures created in the late 1800s, and versions of "The Thinker” made from different materials.
  • a visual representation of the spherically organized content is generated and caused to be displayed on a user device.
  • Fig. 5 is an example schematic diagram of the visual representation generator 130 according to an embodiment.
  • the visual representation generator 130 includes a processing circuitry 510 coupled to a memory 515, a storage 520, and a network interface 530.
  • the components of the visual representation generator 130 may be communicatively connected via a bus 540.
  • the processing circuitry 510 may be realized as one or more hardware logic components and circuits.
  • illustrative types of hardware logic components include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), and the like, or any other hardware logic components that can perform calculations or other manipulations of information.
  • the memory 515 may be volatile (e.g., RAM, etc.), non-volatile (e.g., ROM, flash memory, etc.), or a combination thereof.
  • computer readable instructions to implement one or more embodiments disclosed herein may be stored in the storage 520.
  • the memory 515 is configured to store software.
  • Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code).
  • the instructions, when executed by the processing circuitry 510, cause the processing circuitry 510 to perform the various processes described herein. Specifically, the instructions, when executed, cause the processing circuitry 510 to perform generation of visual representations of content for intuitive browsing, as discussed hereinabove.
  • the storage 520 may be magnetic storage, optical storage, and the like, and may be realized, for example, as flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs), or any other medium which can be used to store the desired information.
  • the network interface 530 allows the visual representation generator 130 to communicate with the user device 120, the content retrieval systems 140, the inventory management system 150, or a combination thereof, for the purpose of, for example, obtaining requests, obtaining content, obtaining probabilities, querying, sending visual representations, combinations thereof, and the like.
  • search results may be organized as points on different sides of a cube such that user interactions may cause the displayed cube side to change, thereby changing the search results being displayed.
  • the content may be organized based on the subject matter of the content. For example, the content may be organized differently for queries for restaurants than for requests to display documents on a user device.
  • any reference to an element herein using a designation such as "first,” “second,” and so forth does not generally limit the quantity or order of those elements. Rather, these designations are generally used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise, a set of elements comprises one or more elements.
  • the phrase "at least one of” followed by a listing of items means that any of the listed items can be utilized individually, or any combination of two or more of the listed items can be utilized. For example, if a system is described as including "at least one of A, B, and C," the system can include A alone; B alone; C alone; A and B in combination; B and C in combination; A and C in combination; or A, B, and C in combination.
  • the various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof.
  • the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs"), a memory, and input/output interfaces.
  • the computer platform may also include an operating system and microinstruction code.
  • a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.
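The axis-based organization outlined in the bullets above (content grouped around a focal point along facets such as creator, country, or genre) may be sketched as follows. This is a minimal illustrative sketch; the facet names and the dictionary-based data layout are assumptions for illustration, not part of the disclosed embodiments.

```python
from collections import defaultdict

def organize_by_axes(focal_item, items, facets):
    """Group content items into axes around a focal point.

    An item joins a facet axis when it shares that facet's value
    with the focal item (e.g., same artist, same country of origin).
    """
    axes = defaultdict(list)
    for item in items:
        if item is focal_item:
            continue
        for facet in facets:
            if item.get(facet) and item.get(facet) == focal_item.get(facet):
                axes[facet].append(item)
    return dict(axes)

# Mirrors "The Thinker" example above: Rodin works form one axis,
# French sculptures another.
focal = {"title": "The Thinker", "artist": "Auguste Rodin", "country": "France"}
catalog = [
    {"title": "The Kiss", "artist": "Auguste Rodin", "country": "France"},
    {"title": "The Gates of Hell", "artist": "Auguste Rodin", "country": "France"},
    {"title": "David", "artist": "Michelangelo", "country": "Italy"},
]
axes = organize_by_axes(focal, catalog, ["artist", "country"])
# the "artist" axis holds the two other Rodin works;
# the "country" axis holds the other French works
```

Each resulting axis corresponds to one navigable direction in the visual representation; an unrelated item (here, "David") joins no axis.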

Abstract

A method and system for intuitive content browsing. The method includes determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identifying, based on the request and the determined initial focal point, the content to be browsed; generating, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.

Description

SYSTEM AND METHOD FOR INTUITIVE CONTENT BROWSING
CROSS-REFERENCE TO RELATED APPLICATIONS
[001] This application claims the benefit of US Provisional Application No. 62/279,125 filed on January 15, 2016. The contents of the above-referenced applications are hereby incorporated by reference.
TECHNICAL FIELD
[002] The present disclosure relates generally to displaying content, and more particularly to intuitively organizing content to allow for interactions in two-dimensional and three-dimensional space.
BACKGROUND
[003] As the Internet becomes increasingly prevalent in modern society, the amount of information available to the average user has increased astronomically. Consequently, systems for retrieving and browsing web-based content are used much more frequently. Such systems often accept a user query for content to browse in the form of, for example, text, multimedia content (images, videos, audio), and so on. The widespread availability of new content has led to further developments of display mechanisms allowing users to consume and interact with data. For example, touch screens allowing users to intuitively interact with displayed content and three-dimensional virtual reality displays allowing users to view content in an immersive environment have become available to the average person. These evolving display mechanisms provide varied and improved user experiences as time goes on.
[004] To obtain relevant content, user queries must contain sufficient information to identify relevant material. Although algorithms used in tandem with, e.g., search engines, have been developed to provide a much greater likelihood of finding relevant content for even basic queries, users may nevertheless face challenges in accurately finding particularly relevant content due to the arcane rules utilized in accepting user queries. For example, users can find more accurate content using logical operators that may not be known or understood by the average person.
[005] The challenges in utilizing existing content retrieval systems cause further difficulties for users seeking to refine queries. Refinement may include submitting a refined query to the retrieval system and receiving new results respective of the refined query, thereby effectively submitting a new search. As a result, refinement wastes time and computing resources due to the submission of additional queries, even for users that are familiar with the idiosyncrasies of web-based content searches. Further, inexperienced users may be frustrated by the inability to properly refine their searches to obtain the desired results.
[006] As an example, a user living in New York City seeking to purchase wine may submit a query of "wine." Upon viewing search results related to wine generally, the user may wish to refine his search to focus on red wine and, as a result, enters a refined query of "red wine." The user may wish to further refine his search to focus on red wine originating from France and, thus, enter a refined query of "red wine France." The results of this search may include content related to red wine being sold in France and/or to red wine originating from France being sold anywhere in the world. The user may further need to refine his search on French red wine that can be bought locally and, therefore, enter a further refined query of "red wine France in New York." Each of the refinements requires the user to manually enter a refined query and submit the query for a new search, thereby wasting the user's time and unnecessarily using computing resources.
[007] Existing solutions for refining content queries often involve offering predetermined potential refined queries and directing users to content upon user interactions with the potential refined queries. The potential refined queries may be based on, e.g., queries submitted by previous users. However, previous user queries do not always accurately capture a user's current needs, particularly when the user is not aware of his or her needs. For example, a user seeking to buy chocolate may initially enter the query "chocolate" before ultimately deciding that she would like to buy dark chocolate made in Zurich, Switzerland. Potential refinements offered based on the initial query may include "dark chocolate," "white chocolate," "milk chocolate," and "Swiss chocolate," none of which entirely captures the user's ultimate needs. Thus, the user may need to perform several refinements and resend queries multiple times before arriving at the desired content.
[008] Moreover, for users seeking to purchase products, it may be difficult to determine in which stores the products are physically available. To find such information, a user may need to visit e-commerce websites of stores until he or she finds a store that lists the item as "in stock." Nevertheless, such listings may be outdated or otherwise inaccurate, thereby causing user frustration and a need to conduct further searches.
[009] Further, when viewing search results or otherwise viewing content, the user is typically presented with display options such as organizing content in various organizational schemes (e.g., list form, grid form, and so on) and/or based on different ordering schemes (e.g., by date or time, relevancy to a query, alphabetical order, and so on). For example, a user viewing content related to a particular book may wish to view content related to books by the same author, about the same subject, from the same genre or literary era, and so on. To view this additional content, users may be able to reorganize displayed content by, e.g., changing the organizational scheme, submitting refinement queries, changing the ordering scheme, and so on.
[0010] To this end, it is desirable to provide content in ways that are intuitive and therefore easily digestible by the average user. Intuitive content organization and navigation therefore serve an important role in improving the overall user experience by increasing user engagement and allowing for more efficient retrieval and/or viewing of content. Such improvements to user experience may be particularly important in the search engine context, as improved user experience may result in increased use of search engine services and/or purchases of products.
[0011] Additionally, some solutions exist for allowing users to browse stores remotely via remote-controlled on-premises cameras (i.e., disposed in the store) or preexisting (e.g., static) photos or images of the premises and inventory. These solutions allow users to intuitively engage with content as they would in a physical store, but are typically limited to displaying store contents as they were previously (based on previously captured images) or as they currently are (e.g., via live video feed or images otherwise captured in real-time), but not based on potential future configurations (e.g., based on predicted inventories and other future changes).
[0012] It would therefore be advantageous to provide a solution that would overcome the deficiencies of the prior art.
SUMMARY
[0013] A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term "some embodiments" may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.
[0014] Certain embodiments disclosed herein include a method for intuitive content browsing.
The method includes determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identifying, based on the request and the determined initial focal point, the content to be browsed; generating, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
[0015] Certain embodiments disclosed herein also include a non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process, the process comprising: determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identifying, based on the request and the determined initial focal point, the content to be browsed; generating, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
[0016] Certain embodiments disclosed herein also include a system for intuitive content browsing. The system comprises: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: determine, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identify, based on the request and the determined initial focal point, the content to be browsed; generate, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and send, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.
[0018] Figure 1 is a network diagram utilized to describe the various disclosed embodiments.
[0019] Figure 2 is a flowchart illustrating a method for organizing content according to an embodiment.
[0020] Figure 3 is a screenshot illustrating a spherical organization of content.
[0021] Figure 4 is a flowchart illustrating a method for displaying content that may be intuitively browsed according to an embodiment.
[0022] Figure 5 is a schematic diagram of a visual representation generator according to an embodiment.
DETAILED DESCRIPTION
[0023] It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts through several views.
[0024] Fig. 1 shows an example network diagram 100 utilized to describe the various disclosed embodiments. The network diagram 100 includes a network 110, a user device 120, a visual representation generator 130, a plurality of content retrieval systems 140-1 through 140-n (hereinafter referred to individually as a content retrieval system 140 and collectively as content retrieval systems 140, merely for simplicity purposes), and an inventory management system 150.
[0025] The network 110 may be the Internet, the world-wide-web (WWW), a local area network (LAN), a wide area network (WAN), a metro area network (MAN), and other networks configured to enable communication between the elements of the network diagram 100. The user device 120 may be a personal computer (PC), a personal digital assistant (PDA), a mobile phone, a smart phone, a tablet computer, a wearable computer device, an e-reader, a game console, or any other device equipped with browsing capabilities. The content retrieval systems 140 may include, but are not limited to, search engines or other sources of content from which content may be retrieved. Alternatively or collectively, the content retrieval systems 140 may include or be communicatively connected to one or more data sources which can be queried or crawled for content.
[0026] The user device 120 may further include a browsing agent 125 installed therein. The browsing agent 125 may be, but is not limited to, a mobile application, a virtual application, a web application, a native application, and the like. In certain configurations, the browsing agent 125 can be realized as an add-on or plug-in for a web browser. In other configurations, the browsing agent 125 is a web browser. The user device 120 may receive a user query or otherwise receive a request to display content (e.g., via the browsing agent 125) and send, to the visual representation generator 130, a request to generate a visual representation of the content to be browsed. The request to generate a visual representation may include, but is not limited to, the user query, the content to be browsed, an identifier of the content to be browsed, or a combination thereof. The user query may include a text query or a voice query. The user query may be submitted through a user gesture, e.g., tapping on a certain image or key word.
[0027] In an embodiment, the visual representation generator 130 is configured to receive the request to generate a visual representation and to determine an initial focal point based on the request. The initial focal point includes content to be initially displayed prominently (e.g., before navigation) to the user. Non-limiting examples of prominently displaying the initial focal point include displaying the initial focal point as larger than other content; displaying the initial focal point in a center, top, or other portion of a display; displaying the focal point with at least one prominence marker (e.g., a letter, a number, a symbol, a graphic, a color, etc.); displaying the focal point with a higher brightness or resolution than other content; displaying the focal point using one or more animations (e.g., displaying the focal point as moving up and down); a combination thereof; and the like.
For example, if the content to be browsed includes images of a dog, a most recent image of a dog may be selected as the initial focal point such that, when the visual representation is initially displayed to the user, the image of the dog is the largest and centermost image appearing on a display (not shown) of the user device 120.
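The selection of an initial focal point, such as the most recent image in the dog example above, may be sketched as follows. The item structure and timestamp field are illustrative assumptions only.

```python
def select_initial_focal_point(items):
    """Pick the content item to display most prominently: here, the
    most recently published item, as in the dog-image example above."""
    return max(items, key=lambda item: item["timestamp"])

images = [
    {"id": "dog-1", "timestamp": 1},
    {"id": "dog-2", "timestamp": 5},
    {"id": "dog-3", "timestamp": 3},
]
focal = select_initial_focal_point(images)
# focal["id"] == "dog-2", the most recent image
```

Other selection criteria described herein (relevance to the query, user profile information, and so on) could replace recency by substituting a different key function.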
[0028] In a further embodiment, determining an initial focal point based on the request may further include pre-processing the user query. Pre-processing the user query may include, but is not limited to, correcting typos, enriching the query with information related to the user (e.g., a browsing history, a current location, etc.), and so on. In another embodiment, the initial focal point may include a web site utilized as a seed for a search. As an example, the initial focal point for a search based on the user query "buy shoes" may be a web site featuring a large variety of shoes.
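The pre-processing step described in paragraph [0028] (typo correction and enrichment with user information) may be sketched as follows; the correction table and the `near:` enrichment format are hypothetical, not part of the disclosure.

```python
def preprocess_query(query, user_profile, corrections):
    """Correct known typos and enrich the query with user context,
    mirroring the pre-processing step described above."""
    tokens = [corrections.get(t, t) for t in query.lower().split()]
    if user_profile.get("location"):
        # enrich with the user's current location
        tokens.append(f'near:{user_profile["location"]}')
    return " ".join(tokens)

corrections = {"shoos": "shoes"}
q = preprocess_query("Buy Shoos", {"location": "New York"}, corrections)
# q == "buy shoes near:New York"
```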
[0029] In an embodiment, the visual representation generator 130 is configured to retrieve content from the retrieval systems 140 based on a focal point. For the first time content is retrieved for the request, the initial focal point is used. The retrieval systems 140 may search using the user query with respect to the focal point. Alternatively or collectively, the visual representation generator 130 may crawl through one or more of the retrieval systems 140 for the content. The retrieved content may include, but is not limited to, search results, content to be displayed, or both.
[0030] In another embodiment, the visual representation generator 130 may be configured to query an inventory management system 150 and to receive, from the inventory management system 150, a probability that one or more vendors have a sufficient inventory of a product based on the user query and the focal point. An example implementation of an inventory management system for returning probabilities that vendors have sufficient inventories of products is described further in the above-referenced US Patent Application No. 14/940,396 filed on November 13, 2015, assigned to the common assignee, which is hereby incorporated by reference for all that it contains.
[0031] In an embodiment, the visual representation generator 130 is further configured to organize the retrieved content. The content may be organized around the focal point. Accordingly, the content, when initially organized, may be organized around the initial focal point. The visual representation generator 130 may be configured to receive user interactions respective of the organized content and to determine a current focal point based on the user interactions. In an embodiment, the content may be organized as points on a sphere and displayed to the user. The sphere may be displayed in a three-dimensional (3D) plane (i.e., using a stereoscopic display) or in a two-dimensional (2D) plane (i.e., such that the sphere appears to be 3D merely via optical illusion).
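One way to place content items as points on a sphere, as described in paragraph [0031], is an even spherical distribution such as a Fibonacci lattice. This is only one possible placement strategy, offered as a sketch; the disclosure does not prescribe a particular layout algorithm.

```python
import math

def sphere_layout(n):
    """Distribute n content points roughly evenly on a unit sphere
    (Fibonacci lattice), one placement strategy for a content sphere."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden-angle increment
    points = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n           # latitude in (-1, 1)
        r = math.sqrt(max(0.0, 1.0 - y * y))    # circle radius at that latitude
        theta = golden * i                      # longitude
        points.append((r * math.cos(theta), y, r * math.sin(theta)))
    return points

pts = sphere_layout(50)
# every returned point lies on the unit sphere
```

Each 3D point can then be projected onto a 2D display to produce the optical-illusion sphere described above.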
[0032] In another embodiment, the visual representation generator 130 is configured to generate a visual representation including a browsing environment and a plurality of dynamic content elements. Each dynamic content element includes content or representations of content to be browsed. In an embodiment, the focal point includes one of the dynamic content elements.
[0033] The browsing environment may include, but is not limited to, images, videos, or other visual illustrations of an environment in which the content is to be browsed. The visual illustrations may be two-dimensional, three-dimensional, and the like. The browsing environment may include visual illustrations of a real location (e.g., a store or other physical location), of a non-real location (e.g., a cartoon library, a virtual store, an imaginary combination of real stores, or any other virtual or fictional location), or any other visual illustrations (for example, a visual illustration showing text, objects, people, animals, solid colors, patterns, combinations thereof, and the like).
[0034] In an embodiment, the browsing environment is rendered at the beginning of browsing and may be static (i.e., remaining the same as content is browsed) or dynamic (i.e., re-rendered or otherwise updated as content is browsed). As a non-limiting example, the browsing environment may include images showing a physical store in which products represented by the dynamic content elements are sold, where the images are updated to show different areas in the physical store as the user "moves" through the store by navigating among dynamic elements representing products sold in the store. As another non-limiting example, the browsing environment may include a static image illustrating a library, where the static image remains constant as the user navigates among dynamic elements representing books in the library.
[0035] The dynamic content elements may be updated and rendered in real-time as the user browses. Updating the dynamic content elements may include, but is not limited to, changing the content to be displayed, updating information related to each content (e.g., updating a value for a number of items in stock when the content represents a purchasable product), dynamically organizing the dynamic content elements, a combination thereof, and the like. Dynamic organization of the dynamic content elements may be based on one or more dynamic organization rules. Such dynamic organization rules may be based on, but are not limited to, amount of inventory in stock for store products (e.g., a current inventory or projected future inventory), popularity of content (e.g., content which is trending may be organized closer to the focal point), relevance to user interests (e.g., content that is more relevant to current user interests may be organized closer to the focal point), combinations thereof, and the like.
[0036] As a non-limiting example for a visual representation including a browsing environment and a plurality of dynamic content elements, the visual representation may represent an online store, with the browsing environment showing a storefront image and the dynamic content elements including product listings. The product listings may include, but are not limited to, text (e.g., product descriptions, product information such as inventory and price, etc.), images (e.g., images of the product), videos (e.g., videos demonstrating the product), sound (e.g., sound including customer reviews), combinations thereof, and the like. The dynamic content elements are rendered in real-time as the user browses. The rendered dynamic content elements include information related to the product listings, where the information includes at least a current or projected inventory. The rendered dynamic content elements are organized in real-time based on dynamic organization rules. The dynamic organization rules are based on inventory such that lower inventory items or items having minimal inventory (e.g., having an amount of inventory below a predetermined threshold) are organized closer to the focal point. Such organization may be useful for, e.g., incentivizing users to buy lower stock products. As the user browses, inventory information for the product listings is updated and the dynamic content elements are organized accordingly.
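The inventory-based dynamic organization rule from the storefront example above (lower-stock items placed closer to the focal point) may be sketched as a simple ordering; the listing structure is an illustrative assumption.

```python
def order_by_inventory(listings):
    """Order product listings so low-inventory items sit closest to
    the focal point, one possible dynamic organization rule."""
    return sorted(listings, key=lambda p: p["inventory"])

listings = [
    {"name": "mug", "inventory": 12},
    {"name": "poster", "inventory": 2},
    {"name": "shirt", "inventory": 7},
]
ordered = order_by_inventory(listings)
# ordered[0]["name"] == "poster"  (lowest stock nearest the focal point)
```

Re-running the ordering whenever inventory information is updated yields the real-time reorganization described above; other rules (popularity, relevance to user interests) would simply substitute a different sort key.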
[0037] Fig. 3 shows an example screenshot illustrating a content sphere 300 which is a spherical visual representation of content. The content sphere 300 is organized around a focal point 310. A plurality of images 320 act as points on the sphere representing content. If the focal point changes, the sphere may be rotated to show a different icon as the focal point. A horizontal axis 330 and a vertical axis 340 visually represent potential directions for changing the focal point to view additional content. For example, the user may gesture horizontally to view content oriented along the horizontal axis 330 and may gesture vertically to view content oriented along the vertical axis 340. It should be noted that the images 320 may include icons, textual information, widgets, or any other representation or presentation of the displayed content.
[0038] The axes 330 and 340 may be adaptably changed as the user selects new content to be the focal point 310 (e.g., by providing user gestures with respect to one of the images 320), as the user rotates the content sphere 300 (e.g., by providing user gestures with respect to the axes 330 and 340), or both. That is, the visual representation generator 130 is configured to predict (through a learning process) the user's path as the user browses via the presented content sphere 300. As an example, if the focal point 310 includes content related to President Bill Clinton, then rotating the content sphere 300 down along the vertical axis 340 may return results related to the US in the 1990's. On the other hand, rotating the content sphere 300 to the right along the horizontal axis 330 may return results related to the Democratic party.
[0039] Further, as the content of the focal point 310 changes, the related content available via the axes 330 and 340 may change. Thus, axes of interest may be initially predefined, and then adaptably modified. For example, when the focal point 310 is changed to content related to President Obama by rotating the content sphere 300 along the horizontal axis 330, the content available by rotating along the vertical axis 340 may become content related to the US in the 2000's.
[0040] It should be appreciated that, by adaptably changing the content along the axes of interest (e.g., the axes 330 and 340), the user may be provided with an endless browsing experience. Further, each content item can be presented in different virtual settings. As an example, a lipstick may be presented in the context of cosmetics and then again as part of a Halloween costume, thus providing a continually new browsing experience.
[0041] In a further embodiment, the display may include a graphical user interface for receiving user interactions with respect to the spherically organized search results. As a non-limiting example, the search results for the query "alcoholic drinks" may be displayed as points on the content sphere 300 based on an initial focal point of a website featuring beer. Results from the initial focal point may be initially displayed as the focal point 310. When a user rotates (e.g., swipes or moves) a mouse icon horizontally across the sphere 300, a new focal point 310 may be determined as a web site for another type of alcoholic beverage (e.g., wine, vodka, and so on). When a user swipes or moves a mouse icon vertically across the content sphere 300, a new focal point may be determined as a web site for a particular brand of beer.
[0042] It should be noted that the example content sphere 300 shown in Fig. 3 is merely an example of a visual representation and is not limiting on the disclosed embodiments. In particular, the content sphere 300 is shown as having two axes merely for illustrative purposes. Different numbers of axes may be equally utilized, any of which may be, but are not necessarily, horizontal or vertical. For example, diagonal axes may be utilized in addition to or instead of horizontal and vertical axes. Further, the axes may be three- dimensional without departing from the scope of the disclosure. For example, the content sphere 300 may be navigated by moving closer or farther away from a center point of the sphere.
[0043] It should further be noted that the content may be shown in a shape other than a spherical shape without departing from the scope of the disclosure. It should also be noted that the content sphere 300 is shown as having a solid black background surrounding the images 320 merely as an example illustration. Other browsing environments (e.g., other colors, patterns, static or dynamic images, videos, combinations thereof, etc.) may be equally utilized without departing from the scope of the disclosed environments. For example, the content may be rendered as three-dimensional representations of shelves and aisles of a real store, where the view of the shelves and aisles is updated as the user browses through images of products in the store.
[0044] Fig. 2 is an example flowchart 200 illustrating a method for refining search results according to an embodiment. In an embodiment, the method may be performed by a visual representation generator (e.g., the visual representation generator 130, Fig. 1 ). The method may be utilized to adaptively update visual representations of search results (e.g., search results displayed as the content sphere 300, Fig. 3).
[0045] At S210, a query by a user of a user device is received. The query may be received in the form of text, multimedia content, and so on. The query may be a textual query or a voice query.
[0046] At optional S215, the query may be preprocessed by, e.g., correcting typos, enriching the query with user information, and so on.
[0047] At S220, a focal point is determined based on the received query. The focal point may include a web site to be utilized as a seed for a search based on the query. The determination may include identifying one or more web sites related to the user query. The seed web site may be selected from among the identified web sites based on, e.g., relative validity of the sites (e.g., numbers of legitimate clicks or presence of malware). For example, a user query for "cheese" may result in identification of web sites related to grocery stores, restaurants, and so on. The seed website may be utilized as the initial focal point for the query such that content related to the seed website is displayed as the focal point prior to user interactions with respect to the visual representation.
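The seed-site selection in S220 can be sketched as picking the candidate with the highest validity, where validity is derived from signals such as legitimate click counts and the presence of malware. The scoring rule, field names, and example sites below are illustrative assumptions, not taken from the disclosure:

```python
def select_seed_site(candidate_sites):
    """Pick the most 'valid' candidate web site as the seed for the search.

    Each candidate is a dict with hypothetical validity signals:
    a legitimate click count, disqualified when malware is present.
    """
    def validity(site):
        if site.get("has_malware"):
            return 0  # malware disqualifies a site regardless of clicks
        return site["legitimate_clicks"]

    return max(candidate_sites, key=validity)["url"]


# Hypothetical candidates identified for the query "cheese"
candidates = [
    {"url": "grocery.example", "legitimate_clicks": 1200, "has_malware": False},
    {"url": "restaurant.example", "legitimate_clicks": 5000, "has_malware": True},
    {"url": "cheese-shop.example", "legitimate_clicks": 3400, "has_malware": False},
]
seed = select_seed_site(candidates)
```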
[0048] At S230, at least one retrieval system is queried with respect to the received user query to retrieve search results. The focal point is further sent to the retrieval systems as a seed for the search. The retrieval systems may include, but are not limited to, search engines, inventory management systems, and other systems capable of retrieving content respective of queries.
[0049] In an embodiment, S230 may further include querying at least one inventory management system for probabilities that products indicated in the search results are in stock at particular merchants. The probabilities may be utilized to, e.g., enrich or otherwise provide more information related to the search results. As a non-limiting example, the probability that a brand of shoe is in stock at a particular merchant may be provided in, e.g., a top left corner of an icon representing content related to the brand of shoe sold by the merchant.
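The enrichment step described above can be sketched as attaching each in-stock probability returned by the inventory management system to its matching search result. The dict shapes and key names here are illustrative assumptions:

```python
def enrich_with_stock(results, stock_probabilities):
    """Attach an in-stock probability to each search result.

    stock_probabilities maps (product, merchant) pairs to probabilities;
    results without inventory data are left with None.
    """
    enriched = []
    for result in results:
        item = dict(result)  # copy so the original results are untouched
        item["in_stock_probability"] = stock_probabilities.get(
            (result["product"], result["merchant"])
        )
        enriched.append(item)
    return enriched


results = [
    {"product": "running shoe", "merchant": "shoes.example"},
    {"product": "running shoe", "merchant": "mall.example"},
]
probabilities = {("running shoe", "shoes.example"): 0.92}
enriched = enrich_with_stock(results, probabilities)
```

The probability could then be rendered in, e.g., a corner of the result's icon, as the paragraph suggests.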
[0050] At S240, a visual representation is generated based on the search results and the identified focal point. The visual representation may include points representing particular search results (e.g., a particular web page or a portion thereof). The visual representation may include a graphical user interface for receiving user interactions respective of the search results. In an embodiment, S240 includes organizing the search results respective of the focal point. In a further embodiment, the search results may be organized graphically. In yet a further embodiment, the organization may include assigning the search results to points on, e.g., a sphere or other geometrical organization of the search results.
[0051] In another embodiment, the generated visual representation may include a browsing environment and a plurality of dynamic content elements. The browsing environment may include, but is not limited to, images, videos, or other visual illustrations of an environment (e.g., a real location, a virtual location, background colors and patterns, etc.) in which content is to be browsed. The browsing environment may be static, or may be dynamically updated in real-time as a user browses the content. The dynamic content elements are updated in real-time as a user browses the content. The dynamic content elements may be further reorganized in real-time based on the user browsing. As a non-limiting example, the generated visual representation may include a static storefront image and a plurality of dynamic elements updated in real-time to show product listings with current inventories, where the dynamic elements are organized such that lower inventory content items are closer to the focal point than higher inventory items.
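The scarcity-based ordering in the storefront example can be sketched as a sort on inventory, with the lowest-inventory items placed first (i.e., closest to the focal point). The element shape is an illustrative assumption:

```python
def order_by_scarcity(elements):
    """Lower-inventory items sort first, placing them closest to the focal point."""
    return sorted(elements, key=lambda e: e["inventory"])


elements = [
    {"product": "A", "inventory": 40},
    {"product": "B", "inventory": 3},
    {"product": "C", "inventory": 12},
]
ordered = order_by_scarcity(elements)
```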
[0052] At S250, the visual representation of the organized search results is caused to be displayed to the user. In an embodiment, S250 may include sending the visual representation to a user device (e.g., the user device 120, Fig. 1 ). In an embodiment, the visual representation may be displayed as a three-dimensional (3D) representation of the search results.
[0053] At S260, at least one user input is received with respect to the displayed visual representation. The user inputs may include, but are not limited to, key strokes, mouse clicks, mouse movements, user gestures on a touch screen (e.g., tapping, swiping), movement of a user device (as detected by, e.g., an accelerometer, a global positioning system (GPS), a gyroscope, etc.), voice commands, and the like.
[0054] At S270, based on the received user inputs, the search results are refined. In an embodiment, S270 may include determining a current focal point based on the user inputs and the visual representation. In a further embodiment, S270 includes updating the visual representation using a website of the new focal point as a seed for the search.
[0055] At S280, it is determined whether additional user inputs have been received and, if so, execution continues with S260; otherwise, execution terminates. In another embodiment, S280 includes determining if the additional user inputs include a new or modified query and, if so, execution may continue with S210.
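The S210–S280 flow can be sketched as a loop: determine an initial focal point (S220), search with it as a seed (S230), then, for each user input, refine the focal point and re-search (S260–S280). The helper functions `determine_focal_point` and `refine_focal_point` below are hypothetical placeholders, not the actual focal-point logic of the disclosure:

```python
def determine_focal_point(query):
    # Hypothetical S220: the first query word names the seed site.
    return query.split()[0] + ".example"


def refine_focal_point(focal, user_input):
    # Hypothetical S270: each user input directly names the new focal point.
    return user_input


def browse(query, search, user_inputs):
    """Sketch of the S210-S280 loop from Fig. 2."""
    focal = determine_focal_point(query)          # S220
    visited = [focal]
    results = search(query, seed=focal)           # S230
    for user_input in user_inputs:                # S260/S280: while inputs arrive
        focal = refine_focal_point(focal, user_input)  # S270: refine
        results = search(query, seed=focal)       # re-search with new seed
        visited.append(focal)
    return visited, results


def fake_search(query, seed):
    # Stand-in for the retrieval systems of S230.
    return [f"{seed}/result-{i}" for i in range(3)]


visited, results = browse("cheese recipes", fake_search, ["wine.example"])
```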
[0056] Fig. 4 is an example flowchart 400 illustrating a method for displaying content for intuitive browsing according to an embodiment. In an embodiment, the method may be performed by a visual representation generator (e.g., the visual representation generator 130). The visual representation generator may query, crawl, or otherwise obtain content from content retrieval systems (e.g., the content retrieval systems 140). In another embodiment, the method may be performed by a user device (e.g., the user device 120) based on locally available content, retrieved content, or both.
[0057] At S410, a request to display content is received. The request may include, but is not limited to, a query, content to be displayed, an identifier of content to be displayed, an identifier of at least one source of content, a combination thereof, and so on.
[0058] At S420, based on the request, a focal point is determined. The focal point may be determined based on, but not limited to, the query, the content to be displayed, the designated content sources, information about a user (e.g., a user profile, a browsing history, demographic information, etc.), combinations thereof, and so on. The focal point may be, but is not limited to, content, a source of content, a category or other grouping of content, a representation thereof, and so on. In an embodiment, the focal point may be related to a website to be used as a seed for a search with respect to the query (e.g., a web crawl).
[0059] At S430, content to be browsed with respect to the focal point is identified. The identified content may be related to the focal point. The identified content may be stored locally, or may be retrieved from at least one data source (e.g., the content retrieval systems 140 or the inventory management system 150, Fig. 1). As examples, the identified content may include, but is not limited to, content from the same or similar web sources, content that is contextually related to content of the focal point (e.g., belonging to a same category or otherwise sharing common or related information), and so on. Similarity of content may be determined by matching content items against one another. In an embodiment, content and sources may be considered similar if they match above a predefined threshold.
[0060] At S440, the identified content is organized with respect to the focal point. In an embodiment, the organization may be based on one or more axes. The axes may represent different facets of the determined content such as, but not limited to, creator (e.g., an artist, author, director, editor, publisher, etc.), geographic location, category of subject matter, type of content, genre, time of publication, and any other point of similarity among content. As a non-limiting example, content related to a focal point of a particular movie may be organized based on one or more axes such as, but not limited to, movies featuring the same actor(s), movies by the same director, movies by the same publisher, movies within the same genre, movies originating in the same country, movies from a particular decade or year, other media related to the movie (e.g., a television show tying into the movie), merchandise or other products related to the movie, and so on.
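The facet-axis organization of S440 can be sketched as grouping catalog items by the facets they share with the focal item. The facet names and example catalog are illustrative assumptions:

```python
def organize_by_axes(focal_item, catalog, axes):
    """Group catalog items into axes of facets shared with the focal item.

    Each axis name is a facet key (e.g., "director", "genre"); an item lands
    on an axis when its facet value matches the focal item's.
    """
    organized = {}
    for axis in axes:
        organized[axis] = [
            item for item in catalog
            if item is not focal_item and item.get(axis) == focal_item.get(axis)
        ]
    return organized


focal = {"title": "Movie A", "director": "Jane Doe", "genre": "sci-fi"}
catalog = [
    focal,
    {"title": "Movie B", "director": "Jane Doe", "genre": "drama"},
    {"title": "Movie C", "director": "John Roe", "genre": "sci-fi"},
]
by_axis = organize_by_axes(focal, catalog, ["director", "genre"])
```

Mapped onto the sphere of Fig. 3, each axis key would correspond to one navigable axis, and each grouped list to the items reachable along it.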
[0061] At S450, a visual representation of the organized content is generated. The visual representation may include points, each point representing at least a portion of the identified content. The visual representation may include a graphical user interface for receiving user interactions respective of the search results. In an embodiment, the visual representation may be spherical, may allow a user to change axes by gesturing horizontally, and may allow a user to change content within an axis by gesturing vertically. In another embodiment, the visual representation may be three-dimensional.
[0062] At S460, the generated visual representation is caused to be displayed on a user device. In an embodiment, S460 includes sending the visual representation to the user device. In an embodiment, the visual representation may be updated when, e.g., an amount of content that has been displayed is above a predefined threshold, a number of user interactions is above a predefined threshold, a refined or new query is received, and the like.

[0063] As a non-limiting example, a request to display content is received. The request includes the query "the thinker." A focal point including an image of the sculpture "The Thinker" by Auguste Rodin is determined. Content related to "The Thinker," including various sculptural and artistic works, is determined using the website in which the image is shown as a seed for a search. The content is organized spherically based on axes including other famous sculptures, sculptures by French sculptors, art by Auguste Rodin, works featuring "The Thinker," sculptures created in the late 1800s, and versions of "The Thinker" made from different materials. A visual representation of the spherically organized content is generated and caused to be displayed on a user device.
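The update conditions listed in S460 can be sketched as a simple predicate. The threshold values and parameter names below are illustrative assumptions, not values from the disclosure:

```python
def should_update(displayed_count, interaction_count, new_query_received,
                  display_threshold=50, interaction_threshold=10):
    """True when the visual representation should be regenerated, i.e. when
    enough content has been shown, enough interactions have occurred, or a
    refined/new query has arrived."""
    return bool(displayed_count > display_threshold
                or interaction_count > interaction_threshold
                or new_query_received)
```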
[0064] Fig. 5 is an example schematic diagram of the visual representation generator 130 according to an embodiment. The visual representation generator 130 includes a processing circuitry 510 coupled to a memory 515, a storage 520, and a network interface 530. In another embodiment, the components of the visual representation generator 130 may be communicatively connected via a bus 540.
[0065] The processing circuitry 510 may be realized as one or more hardware logic components and circuits. For example, and without limitation, illustrative types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), and the like, or any other hardware logic components that can perform calculations or other manipulations of information.
[0066] The memory 515 may be volatile (e.g., RAM, etc.), non-volatile (e.g., ROM, flash memory, etc.), or a combination thereof. In one configuration, computer readable instructions to implement one or more embodiments disclosed herein may be stored in the storage 520.
[0067] In another embodiment, the memory 515 is configured to store software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, cause the processing circuitry 510 to perform the various processes described herein. Specifically, the instructions, when executed, cause the processing circuitry 510 to perform generation of visual representations of content for intuitive browsing, as discussed hereinabove.
[0068] The storage 520 may be magnetic storage, optical storage, and the like, and may be realized, for example, as flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs), or any other medium which can be used to store the desired information.
[0069] The network interface 530 allows the visual representation generator 130 to communicate with the user device 120, the content retrieval systems 140, the inventory management system 150, or a combination thereof, for purposes of, for example, obtaining requests, obtaining content, obtaining probabilities, querying, sending visual representations, combinations thereof, and the like.
[0070] It should be understood that the embodiments described herein are not limited to the specific architecture illustrated in Fig. 5, and other architectures may be equally used without departing from the scope of the disclosed embodiments.
[0071] It should be noted that various embodiments described herein are discussed with respect to content from search results merely for simplicity purposes and without limitation on the disclosed embodiments. Other content, including content preexisting on a device, content available via a storage or data source, and the like, may be displayed and browsed without departing from the scope of the disclosure.
[0072] It should be further noted that the embodiments described herein are discussed with respect to a spherical representation of content merely for simplicity purposes and without limitation on the disclosed embodiments. Other geometrical representations may be utilized with points on the geometric figures representing search results without departing from the scope of the disclosure. For example, the search results may be organized as points on different sides of a cube such that user interactions may cause the displayed cube side to change, thereby changing the search results being displayed.
[0073] It should also be noted that various examples for changing content are provided merely for the sake of illustration and without limitation on the disclosed embodiments. Content may be organized in other ways without departing from the scope of the disclosure.
[0074] It should be further noted that the content may be organized based on the subject matter of the content. For example, the content may be organized differently for queries for restaurants than for requests to display documents on a user device.
[0075] It should be understood that any reference to an element herein using a designation such as "first," "second," and so forth does not generally limit the quantity or order of those elements. Rather, these designations are generally used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise, a set of elements comprises one or more elements.
[0076] As used herein, the phrase "at least one of" followed by a listing of items means that any of the listed items can be utilized individually, or any combination of two or more of the listed items can be utilized. For example, if a system is described as including "at least one of A, B, and C," the system can include A alone; B alone; C alone; A and B in combination; B and C in combination; A and C in combination; or A, B, and C in combination.
[0077] The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units ("CPUs"), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.
[0078] All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiments and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosed embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Claims

CLAIMS

What is claimed is:
1. A method for intuitive content browsing, comprising:
determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item;
identifying, based on the request and the determined initial focal point, the content to be browsed;
generating, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and
sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
2. The method of claim 1, wherein the generated visual representation includes at least one axis, wherein the generated visual representation is browsed along each of the at least one axis.
3. The method of claim 2, wherein the browsing of the displayed visual representation includes selecting, based on at least one user input, a new focal point, wherein the displayed visual representation is updated with respect to the new focal point.
4. The method of claim 1, wherein the request includes a query, wherein the content to be browsed includes at least search results.
5. The method of claim 4, wherein the determined focal point includes content of a web site, wherein identifying the content to be browsed further comprises:
searching, based on the query, in at least one content retrieval system for the content to be browsed, wherein the web site is utilized as a seed for the search.
6. The method of claim 4, further comprising:
querying at least one inventory management system for probabilities that products indicated in the search results are available from at least one merchant.
7. The method of claim 4, further comprising:
determining, based on at least one user input, a new focal point, the new focal point including content of a web site; and
updating the visual representation based on the new focal point.
8. The method of claim 1, wherein the generated visual representation includes a browsing environment and at least one dynamic content element, the browsing environment including at least one visual illustration, each dynamic content element including one of the content to be browsed, wherein the at least one dynamic content element is updated in real-time as the identified content is browsed.
9. The method of claim 8, wherein the browsing environment is updated in real-time as the identified content is browsed.
10. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process, the process comprising:
determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item;
identifying, based on the request and the determined initial focal point, the content to be browsed;
generating, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and
sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
11. A system for intuitive content browsing, comprising:
a processing circuitry; and
a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to:
determine, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item;
identify, based on the request and the determined initial focal point, the content to be browsed;
generate, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and
send, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
12. The system of claim 11, wherein the generated visual representation includes at least one axis, wherein the generated visual representation is browsed along each of the at least one axis.
13. The system of claim 12, wherein the browsing of the displayed visual representation includes selecting, based on at least one user input, a new focal point, wherein the displayed visual representation is updated with respect to the new focal point.
14. The system of claim 11, wherein the request includes a query, wherein the content to be browsed includes at least search results.
15. The system of claim 14, wherein the determined focal point includes content of a web site, wherein the system is further configured to:
search, based on the query, in at least one content retrieval system for the content to be browsed, wherein the web site is utilized as a seed for the search.
16. The system of claim 14, wherein the system is further configured to: query at least one inventory management system for probabilities that products indicated in the search results are available from at least one merchant.
17. The system of claim 14, wherein the system is further configured to:
determine, based on at least one user input, a new focal point, the new focal point including content of a web site; and
update the visual representation based on the new focal point.
18. The system of claim 11, wherein the generated visual representation includes a browsing environment and at least one dynamic content element, the browsing environment including at least one visual illustration, each dynamic content element including one of the content to be browsed, wherein the at least one dynamic content element is updated in real-time as the identified content is browsed.
19. The system of claim 18, wherein the browsing environment is updated in real-time as the identified content is browsed.
PCT/US2017/013175 2016-01-15 2017-01-12 System and method for intuitive content browsing WO2017123746A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662279125P 2016-01-15 2016-01-15
US62/279,125 2016-01-15

Publications (1)

Publication Number Publication Date
WO2017123746A1 true WO2017123746A1 (en) 2017-07-20

Family

ID=59311515

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/013175 WO2017123746A1 (en) 2016-01-15 2017-01-12 System and method for intuitive content browsing

Country Status (1)

Country Link
WO (1) WO2017123746A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000062172A1 (en) * 1999-04-14 2000-10-19 Verizon Laboratories Inc. Synchronized spatial-temporal image browsing for content assessment
US6326988B1 (en) * 1999-06-08 2001-12-04 Monkey Media, Inc. Method, apparatus and article of manufacture for displaying content in a multi-dimensional topic space
US6868525B1 (en) * 2000-02-01 2005-03-15 Alberti Anemometer Llc Computer graphic display visualization system and method
US20060004914A1 (en) * 2004-07-01 2006-01-05 Microsoft Corporation Sharing media objects in a network



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17738922

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29.10.2018)

122 Ep: pct application non-entry in european phase

Ref document number: 17738922

Country of ref document: EP

Kind code of ref document: A1