US20140254942A1 - Systems and methods for obtaining information based on an image - Google Patents

Systems and methods for obtaining information based on an image

Info

Publication number
US20140254942A1
Authority
US
United States
Prior art keywords
item
image
module
information
feature
Prior art date
Legal status
Abandoned
Application number
US13/990,791
Inventor
Hailong Liu
Bin Xiao
Wen Cha
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Assigned to TENCENT TECHNOLOGY (SHENZHEN) CO., LTD. (assignment of assignors interest). Assignors: CHA, Wen; LIU, Hailong; XIAO, Bin

Classifications

    • G06F17/30247
    • G06F16/24 Querying (information retrieval of structured data, e.g. relational data)
    • G06F16/434 Query formulation using image data, e.g. images, photos, pictures taken by a user (information retrieval of multimedia data)
    • G06F16/583 Retrieval characterised by using metadata automatically derived from the content (information retrieval of still image data)
    • G06F18/22 Matching criteria, e.g. proximity measures (pattern recognition; analysing)
    • G06K9/6215
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT] (extraction of image or video features)

Definitions

  • the server 102 can be any suitable computing device or devices capable of receiving image data from one or more devices 100 , identifying an item based on the image data, retrieving additional data regarding the identified item from internal and/or external sources such as the Internet 104 , and transmitting the retrieved data back to the requesting device(s).
  • Various methods can be employed by the server to extract information (e.g., features such as the color, brightness, and/or relative position of the pixels) from the image and identify, based on the extracted information, one or more items associated with the image (e.g., an item depicted in the image).
  • the information extracted from an image can identify features that, alone or in combination, can be used to identify the image among a collection of images.
  • this can be done by the server querying a database containing pre-stored feature IDs and corresponding item(s).
  • the feature IDs in the database can be generated from a collection of training images of various items.
  • a training image can be any existing image depicting one or more items.
  • One or more features can be identified from a training image by using a feature extracting mechanism, an example of which will be discussed in detail below.
  • a unique ID (e.g., a feature ID) can be generated to represent each feature identified from a training image.
  • the database can store pairings of various features and items.
  • the server can perform the same feature extracting process on the scanned image to generate one or more feature IDs from the scanned image. These feature IDs can then be used to query the database to find the matching item(s). The item with the best matching score can be identified as the item associated with the particular scanned image.
  • the server 102 can be network-enabled to communicate with one or more requesting devices.
  • the server 102 can also have access to internal and/or external information repositories to search and retrieve information relating to items identified to be associated with a scanned image.
  • the exemplary external repository can be the Internet 104 . After the server identifies the item associated with the scanned image received from the requesting device, the server can gather information relevant to the item from the Internet and forward some or all of the gathered information to the requesting device.
  • the repository can be one or more internal or external databases storing information regarding a collection of items.
  • the type of information stored in the database can be customized based on the categories (e.g., movie, book) of items to be covered by the image-based information look-up search function provided by the server.
  • in principle, the server can provide information relating to any item in the physical world.
  • FIG. 2 is a flow chart illustrating the exemplary steps in an image-based information retrieval process, according to an embodiment of the disclosure.
  • a requesting device can capture an image of an item (step 201 ). In one embodiment, this can be done by, for example, scanning a 2-dimensional item, such as a cover of a book, using the camera on a smartphone.
  • the requesting device can transmit the image to a server to identify the item depicted in or associated with the image (step 202 ).
  • the scanned book cover can be sent to the server to retrieve information regarding the particular book.
  • the image-transmitting step can take place automatically after the requesting device determines that the scanning operation completed successfully.
  • the device may perform one or more quality-assurance steps to ensure that the captured image meets certain criteria in terms of clarity, resolution, size, brightness, etc., so that it can be properly analyzed by the server. If the image does not meet one or more of the criteria, the device can prompt the user to scan the image again. In some embodiments, the user has to manually transmit the image to the server.
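  • By way of illustration, such quality-assurance checks might be implemented on the device along the following lines; this Python/OpenCV sketch is illustrative only, and the specific thresholds are assumptions rather than values given in the disclosure.

```python
import cv2

# Assumed thresholds for illustration; the disclosure does not specify values.
MIN_WIDTH, MIN_HEIGHT = 480, 480           # minimum resolution
MIN_BRIGHTNESS, MAX_BRIGHTNESS = 40, 220   # acceptable mean gray level
MIN_SHARPNESS = 100.0                      # variance of Laplacian (blur check)

def image_passes_quality_checks(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False                                   # unreadable file
    h, w = img.shape
    if w < MIN_WIDTH or h < MIN_HEIGHT:
        return False                                   # too small to analyze
    if not (MIN_BRIGHTNESS <= img.mean() <= MAX_BRIGHTNESS):
        return False                                   # too dark or washed out
    sharpness = cv2.Laplacian(img, cv2.CV_64F).var()   # low variance = blurry
    return sharpness >= MIN_SHARPNESS
```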
  • the user can also enter additional information, such as keywords specifying the type of information to be returned by the server, to be transmitted with the image to the server.
  • the scanned image of the book cover can be transmitted to the server.
  • the user may optionally enter keywords, such as “author” and/or “cover designer,” to be transmitted with the image. These keywords may direct the server to search for information specifically relating to the author and/or cover designer of this particular book after the server identifies the book from the scanned image.
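  • As an illustration of how the image and optional keywords might be transmitted together, consider the following Python sketch; the endpoint URL and field names are hypothetical, since the disclosure does not define a wire format.

```python
import requests

def send_scan(image_path, keywords=None):
    # Hypothetical endpoint; the disclosure does not specify a protocol.
    url = "https://example.com/api/identify"
    with open(image_path, "rb") as f:
        files = {"image": ("scan.jpg", f, "image/jpeg")}
        data = {"keywords": ",".join(keywords)} if keywords else {}
        resp = requests.post(url, files=files, data=data, timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g., title, author, edition, store links

# Example: direct the server toward author and cover-designer information.
# send_scan("book_cover.jpg", keywords=["author", "cover designer"])
```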
  • the server can identify one or more features of the image (step 203 ).
  • the one or more features can be represented by one or more feature IDs.
  • the image of the book cover may be processed by the server to extract certain features defined by, for example, color, brightness, and/or relative position of the pixels of the image. Each of these features can then be quantified as a unique feature ID.
  • the server can identify an item based on the feature IDs calculated from the image (step 204 ). This can involve looking up items corresponding to each of the feature IDs in a database and ranking these items based on the number of feature IDs to which they correspond. The item ranked the highest can be determined to be the best match for the scanned image (e.g., most likely to be the item depicted in the image). Steps 203 and 204 will be described in further detail below.
  • the server can then search on the Internet (or another data repository) for information relating to the item (step 205 ).
  • the server can determine that the cover image corresponds to the cover of one of the Harry Potter books.
  • the server can then run a search on the Internet for information, such as title and plot summary of all Harry Potter books, readers' reviews of the book, information relating to the graphic designer who designed the book cover, and online book stores offering this particular book for sale.
  • any information regarding the book that is available on the Internet can be found by the server and made available to the user device.
  • the search can incorporate these keywords to provide results tailored to the user's interest.
  • the server can then transmit the search results to the requesting device (step 206 ).
  • the results can be displayed on the screen of the device for user browsing. They can also include web links to other websites where additional information can be available to the user. For example, the user may follow one of the links to an online book store to buy a copy of the Harry Potter book. Additionally or alternatively, he may also purchase and download an electronic copy of the book onto his device so that he can start reading right away.
  • the disclosed methods and systems do not require any customized hardware, such as a barcode scanner. Any device with a camera can be used to scan an image of an item and receive information regarding the particular item from the server. Furthermore, the item of interest does not have to come with any barcode, QR code, or any other type of code to enable the information retrieval process. All it takes is for the user to scan, or capture in another way, a two-dimensional image of the item using the camera on his mobile phone to have access to potentially all kinds of information relating to the item. From the user's end, the process could hardly be more straightforward.
  • the backend server can have access to the whole Internet to find information relating to the item. This can overcome the limitations of existing systems where only a limited amount of information (e.g., the information available in a closed system) can be returned in response to an inquiry based on, for example, a QR code.
  • FIGS. 3 a - 3 c are exemplary screen shots on the requesting device illustrating user interfaces for retrieving information based on a scanned image, according to embodiments of the disclosure.
  • FIG. 3 a illustrates an exemplary interface 300 for initiating an image-scanning operation.
  • This interface can be an interface of an application (e.g., a book cover scanning application for retrieving information on a book by scanning the cover) installed on the device.
  • the interface can be launched from the home screen of the device directly and/or the camera application of the device. It can be launched by a softkey, hardkey, or voice control of the device. As illustrated in FIG.
  • a menu including one or more options 304 , 306 , 308 , 310 can be superimposed on top of the camera view, which may be darkened to indicate that no active scanning is taking place.
  • One of the options can be an “Image Scan” option 308 that can allow a user to scan an image of an item or object (e.g., a book cover) that is of interest to the user.
  • the interface 300 can include additional menu options, such as “2D Code Scan” for scanning QR codes and “Translate” for translating text captured by the camera.
  • the menu can also include a “Cancel” softkey to leave this interface 300 .
  • a frame 302 indicating the scanning area can be displayed to allow the user to visually control the subsequent image scanning operation.
  • the scanning operation can be initiated and a corresponding “Image Scan” interface, such as the one illustrated in FIG. 3 b , can be displayed on the user device.
  • the superimposed option menu is no longer displayed on the interface 312 of FIG. 3 b .
  • the image frame 302 ′ can expand to occupy most of the display area on the interface 312 .
  • the device can be positioned so that the cover of a Harry Potter book can be viewed within the frame 302 ′ through the camera lens of the device.
  • the user can move the device around to ensure that the item can be fully captured by the camera.
  • the device can automatically start scanning when it determines that the item is within the frame 302 ′.
  • the scan can produce a 2-dimensional image of the item.
  • the size 316 of the image file can also be displayed on the interface 312 .
  • the image can be automatically transmitted to the server to start a query to retrieve information relating to the item in the image.
  • one or more optional fields can be displayed either on the “Image Scan” interface 312 or a subsequent interface to allow the user to specify additional search and/or filtering criteria, such as keywords. For example, the user can input “author” and/or “price” to request specific information regarding the particular book.
  • FIG. 3 c illustrates an exemplary “Scan Result” screen 318 displaying the information returned from the server.
  • the “Scan Result” screen 318 can include a thumbnail of the original scanned image 320 , the title 322 , author 324 , and edition 326 of the book.
  • Each of the title 322 , author 324 , and edition 326 can also be a link to retrieve additional information regarding the respective field.
  • the screen 318 can also display information relating to at least one online book store (e.g., Amazon.com) 328 that has the book in stock. Clicking on the name of the store, for example, can allow the user to be redirected to the online store where he may purchase a copy of the book.
  • the “Scan Result” screen 318 can display any other information identified by the server as relating to the item in the scanned image. Depending on the type of item captured in the image, the “Scan Result” screen can be customized to display different kinds of information.
  • the scan results can include, for example, a clip of the preview of the movie, a list of other movies involving the same director or actors, list of theatres and show times, and/or a link to a movie ticket website where the user can purchase a ticket to see the movie.
  • embodiments of the disclosure may only require minimal user input, namely, pointing the camera of the user's device at an item of interest, to obtain potentially all sorts of information regarding the item. From the user's point of view, the whole process can be carried out in a seamless and extremely straightforward fashion with a minimal amount of user input.
  • the server can identify various features of the image.
  • the server can include a database of features extracted from a collection of known images (i.e., training images) of items.
  • the server can then match the features extracted from the scanned image with the features from the collection of training images to identify a best-matching training image from the collection of training images. Because each training image can be associated with at least one known item, the server can identify one or more items relating to the scanned image if features from the scanned image are found to match with features from multiple training images.
  • FIG. 4 is a flow chart illustrating the exemplary steps of such a process, according to an embodiment of the disclosure.
  • the server can collect images of various items (step 401 ). For example, if the server is to provide information relating to books and movies based on scanned book covers and movie posters received from the users, the server can have a collection of images of book covers, movie posters, and/or DVD covers. These images are referred to as training images and can be stored on the server as references.
  • the process of generating feature IDs representing the features of a training image is discussed in the following steps 402 - 404 . It should be understood that the process described in these steps is one of many different methods that can be used for generating a list of feature IDs from an image. Other suitable methods can also be employed to accomplish the same results.
  • the server can first identify a number of keypoints of the image (step 402 ).
  • in one embodiment, scale-invariant feature transform (SIFT) features can be extracted from the training image.
  • SIFT features can be invariant with respect to, for example, rotating, scaling, and illumination changes of the image. They can also be relatively stable with respect to, for example, the changing of the viewing angle, affine transformation, noise, and other factors that may affect an image.
  • SIFT feature extraction can be carried out as follows.
  • scale space extrema can be detected.
  • the training image can be convolved with Difference of Gaussian (DoG) filters that occur at multiple scales.
  • the image pyramids can be in P groups and each group can include S layers.
  • the layers of the first group can be generated by convolving the original image with DoGs that occur at multiple scales (adjacent layers can have a scale difference of factor k).
  • the next group can be generated by downsampling the previous group of images.
  • a DoG pyramid can be generated from the differences between the adjacent Gaussian image pyramids.
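  • As a rough sketch of the pyramid construction described above, the following Python/OpenCV code builds P groups of S Gaussian layers (adjacent layers differing in scale by a factor k) and forms DoG layers from differences of adjacent layers; the parameter values are common SIFT defaults assumed for illustration, and the subsequent search for extrema across adjacent DoG layers is omitted.

```python
import cv2
import numpy as np

def build_dog_pyramid(gray, P=4, S=5, sigma0=1.6):
    # k chosen so the last layer of each group doubles the base scale
    # (a simplified convention for this sketch).
    k = 2.0 ** (1.0 / (S - 1))
    gaussian_pyr, dog_pyr = [], []
    base = gray.astype(np.float32)
    for _ in range(P):
        group = [cv2.GaussianBlur(base, (0, 0), sigma0 * k ** s) for s in range(S)]
        gaussian_pyr.append(group)
        # DoG layers: differences between adjacent Gaussian layers
        dog_pyr.append([b - a for a, b in zip(group, group[1:])])
        # next group: downsample the previous group by a factor of 2
        h, w = base.shape
        base = cv2.resize(group[-1], (w // 2, h // 2),
                          interpolation=cv2.INTER_NEAREST)
    return gaussian_pyr, dog_pyr

# gray = cv2.imread("training_image.jpg", cv2.IMREAD_GRAYSCALE)
# gaussians, dogs = build_dog_pyramid(gray)
```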
  • an accurate location of each keypoint can be determined. Specifically, this can be done by fitting a 3-dimensional quadratic function to accurately determine the location and scale of each keypoint. At the same time, low contrast candidate points and edge response points along an edge can be discarded to improve consistency in the later feature-matching processes and also increase noise-rejection capability.
  • after a keypoint is accurately located, a main orientation of the keypoint can be determined and a descriptor of the keypoint can be generated.
  • the keypoint can be used as the center of the neighboring window for sampling.
  • An orientation histogram can be used for determining the gradient orientation of the neighboring pixels.
  • An orientation histogram with 36 bins can be formed, with each bin covering 10 degrees for a total range of 0-360 degrees. The peaks in this histogram can correspond to the dominant orientations of the neighboring gradient of the keypoint, and thus can be used as the dominant orientations of the keypoint.
  • the orientations corresponding to peaks that are within 80% of the highest peak can be used as supplemental directions of the keypoint.
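  • The 36-bin histogram described above might be computed as in the following sketch; the window radius is an assumed parameter, and the Gaussian weighting of gradient magnitudes used in standard SIFT implementations is omitted for brevity.

```python
import numpy as np

def dominant_orientations(gray, x, y, radius=8):
    # Gradients over a square window centered on the keypoint
    win = gray[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(float)
    dy, dx = np.gradient(win)
    magnitude = np.hypot(dx, dy)
    angle = np.degrees(np.arctan2(dy, dx)) % 360.0
    # 36 bins of 10 degrees each, weighted by gradient magnitude
    hist, _ = np.histogram(angle, bins=36, range=(0.0, 360.0), weights=magnitude)
    # keep the highest peak plus supplemental peaks within 80% of it
    peaks = np.nonzero(hist >= 0.8 * hist.max())[0]
    return [(b + 0.5) * 10.0 for b in peaks]  # bin centers, in degrees
```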
  • a descriptor for each keypoint can be generated (step 403 ).
  • the zero degree direction of the axes can be rotated to match the dominant orientation of the keypoint to achieve rotational invariance.
  • a set of orientation histograms can be created on 4×4 pixel neighborhoods, each with 8 bins.
  • the gradient magnitudes falling into each orientation bin can then be summed, yielding a 4×4×8 = 128-dimensional SIFT feature vector.
  • at this point, the SIFT vector has already had the effects of geometric distortion factors, such as scaling or rotation of the training image, removed. This vector can then be normalized to unit length in order to enhance invariance to affine changes in illumination.
  • the 128-dimensional SIFT feature vector can then be quantified as a feature ID (e.g., a number from 1-1,000,000) (step 404 ). That is, each SIFT feature vector representing a feature of the training image can have a corresponding numeric feature ID. Typically, more than one feature can be identified from a training image. Accordingly, each training image may be associated with multiple feature IDs. Because each training image can be associated with an item (e.g., the item depicted in the image), the item can also be associated with the multiple feature IDs.
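  • The disclosure does not specify how a descriptor is quantified into a numeric ID; one common realization, sketched below with OpenCV's SIFT implementation, is a k-means "visual word" codebook in which the index of the nearest cluster center serves as the feature ID. The codebook size is kept small here for illustration; at the scale of the 1-1,000,000 range mentioned above, approximate nearest-neighbor search would be used in practice.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()

def sift_descriptors(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = sift.detectAndCompute(gray, None)  # (num_keypoints, 128)
    return descriptors

def train_codebook(training_image_paths, num_feature_ids=1000):
    # Stack descriptors from all training images and cluster them; each
    # cluster center defines one feature ID.
    all_desc = np.vstack([d for p in training_image_paths
                          if (d := sift_descriptors(p)) is not None])
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(all_desc.astype(np.float32), num_feature_ids,
                               None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    return centers

def assign_feature_ids(descriptors, centers):
    # Squared Euclidean distance to every center via the ||a-b||^2 expansion
    d2 = ((descriptors ** 2).sum(axis=1)[:, None]
          - 2.0 * descriptors @ centers.T
          + (centers ** 2).sum(axis=1)[None, :])
    return d2.argmin(axis=1)  # nearest center index = feature ID
```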
  • each feature ID may be associated with multiple items.
  • the relationship between the features (as identified by their respective feature ID) and the items can be captured and stored in a database accessible to the server (step 405 ).
  • the database can be any suitable data storage program/format including, but not limited to, a list, text file, spreadsheet, relational database, and/or object-oriented database.
  • FIG. 5 illustrates an exemplary database table 500 for storing feature IDs 502 and their corresponding items 504 .
  • the format of the table and the structure of the database can vary and do not have to conform to the two-column format shown in FIG. 5 .
  • the feature IDs 502 can be unique incremental numbers from, for example, 1 to 1 million.
  • Each feature ID can be associated with a feature from one of the training images and generated using the process described in FIG. 4 .
  • the number of feature IDs can depend on the number of features ascertainable from the training images and does not have to be limited to 1 million as shown in table 500 .
  • each feature ID can correspond to one or more items. As previously discussed, when the same feature is found in two different images, the corresponding feature ID can be associated with two different items. For example, as shown in the table of FIG. 5 , both “Harry Potter and the Chamber of Secrets” and “Harry Potter and the Goblet of Fire” can be associated with the feature ID “1.” However, “Harry Potter and the Goblet of Fire” may also have a second feature (identified by feature ID “3”) that is not associated with “Harry Potter and the Chamber of Secrets.” In this example, feature ID “2” can correspond to another book, “Da Vinci Code.” Although book names are listed under the “Item” column 504 in table 500 , it should be understood that the book names can be replaced by item IDs in other embodiments, where each book can be associated with a unique item ID.
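  • Assuming a relational store such as SQLite (one of the formats the disclosure permits), the FIG. 5 table might be realized as follows; the schema is an illustrative assumption, and the sample rows mirror the example above.

```python
import sqlite3

conn = sqlite3.connect("features.db")
conn.execute("""CREATE TABLE IF NOT EXISTS feature_items (
                    feature_id INTEGER NOT NULL,
                    item       TEXT    NOT NULL)""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_fid ON feature_items(feature_id)")
conn.executemany(
    "INSERT INTO feature_items (feature_id, item) VALUES (?, ?)",
    [(1, "Harry Potter and the Chamber of Secrets"),
     (1, "Harry Potter and the Goblet of Fire"),
     (2, "Da Vinci Code"),
     (3, "Harry Potter and the Goblet of Fire")])
conn.commit()

def items_for_feature(feature_id):
    rows = conn.execute("SELECT item FROM feature_items WHERE feature_id = ?",
                        (feature_id,))
    return [item for (item,) in rows]
```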
  • the server can process the scanned image to extract features IDs representing various features of the scanned image and look up the corresponding item(s) from the database (e.g., the table of FIG. 5 ) to determine an item associated with the scanned image.
  • FIG. 6 illustrates exemplary steps in the process of determining an item based on a scanned image received from a user device, according to embodiments of the disclosure.
  • the server can receive a scanned image from a user device (step 601 ).
  • the scanned image can be transmitted over a network.
  • other information associated with the scanned image can also be received by the server.
  • Such information may include an ID of the user device from which the scanned image was transmitted and/or keywords specified by the user.
  • the scanned image can then be processed to generate one or more feature IDs that identify the various features of the image.
  • the same process discussed above for generating feature IDs from training images can be applied on the scanned image.
  • keypoints of the scanned image can be identified (step 602 ).
  • a descriptor for each identified keypoint can be generated (step 603 ).
  • the descriptors can then be quantified to generate feature IDs (step 604 ).
  • Steps 602 - 604 can correspond to steps 402 - 404 of FIG. 4 . As such, details of exemplary implementations of each of these steps are not repeated here.
  • corresponding item(s) for each feature ID can be looked up from a database (e.g., table of FIG. 5 ) listing all feature IDs generated from the training images and the items corresponding to each of these feature IDs. That is, the items corresponding to the feature IDs of the scanned image can be selected from the database (step 605 ).
  • the total number of hits for each of the selected items can be determined (step 606 ). For example, “Harry Potter and the Chamber of Secrets” can have a total of one hit while “Harry Potter and the Goblet of Fire” can have a total of two hits based on the information in the table of FIG. 5 . Typically, the higher the total number of hits is for an item, the better match it can be with regard to the scanned image.
  • the total number of hits for each item can be compared to a predetermined threshold value to eliminate items with a relatively low number of hits (step 607 ). In one embodiment, for example, the threshold can be 20. If the total number of hits for an item does not meet the threshold, the item can be eliminated from consideration. Among the items that exceed the threshold, the item with the most hits can be selected to be the corresponding item for the particular scanned image (step 608 ).
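  • Steps 605 - 608 amount to a simple voting scheme, sketched below; items_for_feature stands for the per-feature database lookup (e.g., the SQLite query shown earlier), and the threshold value follows the embodiment described above.

```python
from collections import Counter

HIT_THRESHOLD = 20  # example threshold from the embodiment above

def best_matching_item(scanned_feature_ids, items_for_feature):
    hits = Counter()
    for fid in scanned_feature_ids:
        for item in items_for_feature(fid):  # items sharing this feature
            hits[item] += 1                  # one hit per matching feature
    # eliminate items that do not exceed the threshold (step 607)
    survivors = {item: n for item, n in hits.items() if n > HIT_THRESHOLD}
    if not survivors:
        return None                           # no sufficiently good match
    return max(survivors, key=survivors.get)  # most hits wins (step 608)
```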
  • a geometric verification step can be performed to further verify that the scanned image matches with the training image associated with the candidate item (step 610 ) before an item is determined to be the best match for the scanned image.
  • geometric verification can involve matching the individual pixels or features from the scanned image with those from the training image of the item selected through the process described above. This can be done by, for example, measuring and comparing the relative distances between two or more pixels in each of the two images. Based on how well the relative distances between the pixels match in the two images, it can be determined whether the training image is a top match for the scanned image. If the geometric verification is successful, the item associated with the training image can be confirmed to be the item associated with the scanned image.
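  • One standard way to realize this check, consistent with the relative-distance comparison described above, is to fit a homography between matched keypoints with RANSAC and count the geometrically consistent inliers, as in the following sketch; the ratio-test and inlier thresholds are assumptions.

```python
import cv2
import numpy as np

MIN_INLIERS = 15  # assumed acceptance threshold

def geometric_verification(scanned_gray, training_gray):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(scanned_gray, None)
    kp2, des2 = sift.detectAndCompute(training_gray, None)
    if des1 is None or des2 is None:
        return False
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 4:                        # a homography needs 4+ points
        return False
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H is not None and int(mask.sum()) >= MIN_INLIERS
```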
  • the server can search for information relating to the item in one or more data repositories. For example, if “Harry Potter and the Goblet of Fire” is determined to be the item associated with the scanned image received from the user device, the server can conduct a search for information relating to this particular book. The results from the search can then be transmitted back to the user device for display, as shown in the screen shot of FIG. 3 c.
  • the same methods and systems can be applied to obtain information relating to any item, as long as an image of the item can be captured by scanning or other mechanisms and the item in the image can be recognized based on the information available (e.g., information extracted from the training images) to the server.
  • the item can also include, for example, the logo of a product, a screen shot from another device, a work of art such as a painting, or a 3-dimensional object such as a building. It should also be understood that the processes for extracting information such as feature IDs from an image are not limited to those described in the embodiments above.
  • the above-described exemplary processes including, but not limited to, generating a list of feature IDs from training images, storing these feature IDs with their corresponding items in the database, determining a best-matching item for a scanned image using the information stored in the database, and obtaining information relating to the best-matching item can be implemented using various combinations of software, firmware, and hardware technologies on the server (or a cluster of servers).
  • the server may include one or more modules for facilitating the various tasks of these processes.
  • FIGS. 7-9 illustrate exemplary modules in an exemplary server for performing these tasks. In some embodiments, these modules can be implemented mostly in software.
  • FIG. 7 illustrates exemplary modules of the server for providing information in response to receiving a scanned image from the user device, according to embodiments of the disclosure.
  • the server 700 can include, for example, a training-image processing module 702 , an image-receiving module 704 , an item-selection module 706 , an information retrieval module 708 , a data-transmission module 710 , and a database 712 .
  • the image-receiving module 704 can receive one or more scanned images from user devices.
  • the received scanned images can be processed by the item-selection module 706 to determine an item that is the best match to the scanned image.
  • the item-selection module 706 can perform one or more of steps illustrated in FIG. 6 .
  • the information retrieval module 708 can be connected to the Internet for retrieving information relating to the selected item. The information can then be transmitted back to the requesting device by the data-transmission module 710 .
  • the server can also include a training image processing module 702 for collecting and processing training images to identify various feature IDs and their corresponding items.
  • the training image processing module 702 can perform one or more steps illustrated in FIG. 4 .
  • the corresponding feature IDs and items identified by the training image processing module 702 can be stored in the database 712 , which can be accessed by the item-selection module 706 during the process of selecting an item based on a received scanned image.
  • FIG. 8 is a block diagram illustrating the exemplary modules of the training image processing module of FIG. 7 , according to embodiments of the disclosure.
  • the training image processing module 800 can include, for example, a training image obtaining module 801 , a keypoint-identifying module 802 , a descriptor-generating module 803 , a feature ID generating module 804 , and a database-access module 805 .
  • the training image obtaining module 801 can obtain training images (e.g., performing step 401 in FIG. 4 ) to be processed by the other modules of the training image processing module 800 .
  • the keypoint-identifying module 802 can identify keypoints of a training image (e.g., performing step 402 in FIG. 4 ).
  • the descriptor-generating module 803 can generate a descriptor for each identified keypoint (e.g., performing step 403 in FIG. 4 ).
  • the feature ID generating module 804 can quantify the descriptors to generate corresponding feature IDs (e.g., performing step 404 in FIG. 4 ).
  • the database-access module 805 can read and write to a database. In particular, the database-access module 805 can store feature IDs with the names of their corresponding items in the database (e.g., performing step 405 in FIG. 4 ).
  • FIG. 9 is a block diagram illustrating the exemplary modules of the item-selection module of FIG. 7 , according to embodiments of the disclosure.
  • the item-selection module 900 can include one or more of the following sub-modules: a scanned image receiving module 901 for receiving scanned images from one or more user devices (e.g., performing step 601 in FIG. 6 ); a keypoint-identifying module 902 for identifying keypoints of a scanned image (e.g., performing step 602 in FIG. 6 ); a descriptor-generating module 903 for generating a descriptor for each keypoint (e.g., performing step 603 in FIG. 6 ); a feature ID generating module 904 for quantifying a descriptor to generate a feature ID (e.g., performing step 604 in FIG. 6 ); an item-selecting module 905 for selecting items corresponding to one or more feature IDs (e.g., performing step 605 in FIG. 6 ); a hit-counting module 906 for determining a total number of hits for each of the selected items (e.g., performing step 606 in FIG. 6 ); a threshold module 907 for determining whether the number of hits for an item exceeds a predetermined threshold (e.g., performing step 607 in FIG. 6 ); a top item selection module 908 for selecting the item that best matches with the scanned image (e.g., performing step 608 in FIG. 6 ); an item-eliminating module 909 for eliminating an item if the number of hits for the item does not exceed a predetermined threshold (e.g., performing step 609 in FIG. 6 ); and a geometric verification module 910 for performing geometric verification on a scanned image based on the training image associated with the best-matching item (e.g., performing step 610 in FIG. 6 ).
  • the training image processing module 800 and the item-selection module 900 can share one or more of the keypoint-identifying module, descriptor-generating module, and feature ID generating module.
  • one or more of these modules on the server can be stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
  • a “non-transitory computer-readable storage medium” can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the non-transitory computer readable storage medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, a portable computer diskette (magnetic), a random access memory (RAM) (magnetic), a read-only memory (ROM) (magnetic), an erasable programmable read-only memory (EPROM) (magnetic), a portable optical disc such as a CD, CD-R, CD-RW, DVD, DVD-R, or DVD-RW, or flash memory such as compact flash cards, secure digital cards, USB memory devices, memory sticks, and the like.
  • the non-transitory computer readable storage medium can be part of a computing system serving as the server.
  • FIG. 10 illustrates exemplary common hardware components of one such computing system.
  • the system 1000 can include a central processing unit (CPU) 1002 , I/O components 1004 including, but not limited to, one or more of a display, keypad, touch screen, speaker, and microphone, storage medium 1006 such as the ones listed in the preceding paragraph, and network interface 1008 , all of which can be connected to each other via a system bus 1010 .
  • the storage medium 1006 can include one or more of the modules of FIGS. 7-9 .
  • modules illustrated in FIGS. 7-9 are described to be modules on the server, it should be understood that, in other embodiments, one or more of these modules can be part of the requesting device. That is, at least part of the above-described processes of, for example, generating feature IDs from training images and scanned images, identifying one or more items based on the feature IDs, and searching for information relating to the identified items can be performed by the requesting device without involving the server.
  • one or more of the image-receiving module 704 , training image processing module 702 , item-selection module 706 , information retrieval module 708 , data-transmission module 710 , and database 712 can be a part of the requesting device.
  • the image-receiving module 704 can be connected to the camera of the device for receiving images captured by the camera. It can also be connected to, for example, a communication module for receiving an image from an email program, messaging application, the Internet, and/or removable storage means such as SIM cards and USB drives.
  • the requesting device can also process these images locally using one or more of the other modules shown in FIG. 7 . For example, it can perform one or more steps illustrated in FIG. 4 to process training images.
  • the requesting device can also perform one or more steps illustrated in FIG. 6 to select an item based on a scanned image.
  • the requesting device can retrieve information relating to the selected item from the Internet or other internal or external data repositories.
  • one or more modules illustrated in FIGS. 8 and 9 can also reside on the requesting device.
  • one of the requesting devices can process image-based information retrieval requests from one or more other requesting devices.
  • in such embodiments, no dedicated server is necessary to carry out the processes and methods discussed above.
  • the various steps and tasks involved in the processes described in view of FIGS. 4-6 can be divided between the server and one or more requesting devices.
  • the server may handle the processing of training images and the requesting device can handle the processing of scanned images locally and transmit only the feature IDs generated from the scanned image to the server for identifying an item and retrieving information from the Internet.
  • the server can process the training images to obtain a list of feature IDs with corresponding items (e.g., the table of FIG. 5 ) and transmit the list to one or more requesting devices.
  • the requesting device(s) can store the list locally and, after processing a scanned image, identify an item based on information from the scanned image and the list.
  • the requesting device(s) can then search for information relating to the item by either directly connecting to the Internet and perform a search or passing along the item to the server and ask the server to perform the search and return the search results.
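  • In this split arrangement, the device-side lookup can reduce to a few lines once the feature-ID list has been downloaded, as in the following sketch; the in-memory mapping is an illustrative stand-in for whatever format the server actually ships.

```python
from collections import Counter

def identify_locally(scanned_feature_ids, local_table):
    # local_table: feature ID -> list of item names, downloaded from the server
    hits = Counter()
    for fid in scanned_feature_ids:
        hits.update(local_table.get(fid, []))
    return hits.most_common(1)[0][0] if hits else None
```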
  • embodiments of the disclosure provide methods and systems that can allow a user to scan a 2-dimensional image of any item of interest using, for example, a smartphone, and provide the user with all sorts of information relating to the item that can be ascertained from, for example, the Internet and/or other existing data repositories.
  • This can provide a simple, low-cost, user-friendly, and effective way of looking up information relating to anything that can be captured in an image.

Abstract

An information-providing system is disclosed. The information-providing system can include an image-receiving module that receives an image from a device, an item-selection module that identifies an item based on the received image, an information-retrieving module that retrieves information relating to the item, and a data transmitting module that transmits the retrieved information to the device, wherein the item is identified by matching one or more features of the received image with features identified from a training image associated with the item.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • This application claims the priority benefit of Chinese Patent Application No. 201210123853.5, filed on Apr. 25, 2012, the contents of which are incorporated by reference herein in their entirety for all purposes.
  • FIELD
  • The present disclosure generally relates to image-based information retrieval, and more particularly, to methods and systems for identifying one or more items from a scanned image and retrieving information relating to the identified one or more items.
  • BACKGROUND
  • People very often encounter items in everyday life that they would like to have more information about. For example, someone looking at a movie poster on a wall may want to find out more about the director and actors involved with the movie, such as their previous work. He may also want to see a preview of the movie and, if he likes the preview, find a nearby theater and buy a ticket to see the movie. Similarly, a person browsing in a book store may want to read reviews of a particular book or cross-shop the book at online book stores.
  • There are a number of existing ways to obtain information relating to items such as the movie poster or book. One way is to conduct a manual search using, for example, a browser or application-based search engine on a PC or mobile device. This process is usually tedious and slow because it requires the user to manually enter a descriptive search string. Also, it may only work well for text-based searches. It is usually difficult to run a search for an image without specialized software.
  • Another existing mechanism for retrieving information regarding an item is to scan a barcode (linear or matrix) associated with the item. The barcode can usually be found on or in close proximity to the item. It can be scanned using, for example, a dedicated scanner, such as a common barcode scanner, or a mobile device equipped with a camera and the required scanning application. However, there are certain limitations with scanning barcodes. For example, the amount of information retrievable from a barcode is usually limited. Scanning the barcode on a product in a supermarket may only provide the name and price of the product. More advanced barcodes, such as Quick Response (QR) codes, can provide a Web link, name, contact information such as an address, phone number, email address, and/or some other similar data type when scanned. Nevertheless, the information retrievable from these barcodes is typically limited to the information available in the corresponding backend system/database, such as an inventory management system of a supermarket. Such a system/database may not have all the information desired by the person interested in the item.
  • Radio Frequency Identification (RFID) technology is another mechanism for automatically identifying and tracking tags attached to an item. RFID technology relies on radio-frequency electromagnetic fields to transfer data in a non-contacting fashion. An RFID system typically requires RFID tags to be attached to the item and a reader for reading data associated with a particular item from the corresponding tag. The reader can transmit the data to a computer system to be further processed. Nevertheless, RFID technology has the same shortcomings as barcodes in that only a relatively limited amount of information can be retrieved from reading the RFID tags. Furthermore, the fact that it requires special tags and readers makes it a less desirable solution for retrieving information since most people do not carry an RFID reader on them.
  • Accordingly, information retrieval systems and methods that can provide a simpler and more user-friendly experience and have access to a large information repository for providing information relating to a wide range of items are highly desirable.
  • SUMMARY
  • This generally relates to systems and methods for retrieving information relating to an item based on a scanned image of the item. In particular, the systems and methods can involve using a device, such as a smartphone, to capture a 2-dimensional image of an item and transmit the captured image to a server. The server can analyze the image against pre-stored data to determine a corresponding item associated with the image and obtain information relating to the item from a data repository such as the Internet. The information can then be transmitted from the server to the device.
  • In one embodiment, an information-providing system is disclosed. The information-providing system can include an image-receiving module that receives an image from a device, an item-selection module that identifies an item based on the received image, an information-retrieving module that retrieves information relating to the item, and a data transmitting module that transmits the retrieved information to the device, wherein the item is identified by matching one or more features of the received image with features identified from a training image associated with the item.
  • In another embodiment, the system can also include a training image processing module that identifies one or more features from at least one training image. In another embodiment, the training image processing module can further include: a keypoint-identifying module that identifies at least one keypoint of the received image, a descriptor-generating module that generates a descriptor for each of the at least one keypoint, a feature ID generating module that quantifies a descriptor to generate at least one feature ID, and a database-access module that stores the at least one feature ID and at least one of its corresponding item in a database. In another embodiment, the system can also include a database for storing the features identified from the training image. In another embodiment, the database can store the features and one or more items associated with each of the features. In another embodiment, the information is retrieved from the Internet. In another embodiment, the identified item is a book and the received image includes a book cover of the book.
  • In yet another embodiment, the item-selection module can further include: a keypoint-identifying module that identifies at least one keypoint of the received image, a descriptor-generating module that generates a descriptor for each of the at least one keypoint, a feature ID generating module that quantifies a descriptor to generate at least one feature ID, an item-selecting module that selects at least one item corresponding to each of the at least one feature ID, a hit-counting module that determines a total number of hits for each of the selected items, and a top item selection module that selects one of the selected items that best matches with the received image. In yet another embodiment, the item-selection module includes: a threshold module that determines whether the number of hits for an item exceeds a predetermined threshold, and an item-eliminating module that eliminates an item if the number of hits for the item does not exceed the predetermined threshold. In yet another embodiment, the item-selection module includes a geometric verification module that performs geometric verification on the received image and the training image associated with the best-matching item.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram illustrating the exemplary components of an information-retrieval system, according to an embodiment of the disclosure.
  • FIG. 2 is a flow chart illustrating the exemplary steps in an image-based information retrieval process, according to an embodiment of the disclosure.
• FIGS. 3a-3c are screen shots on the requesting device illustrating exemplary user interfaces for retrieving information based on a scanned image, according to an embodiment of the disclosure.
  • FIG. 4 is a flow chart illustrating the exemplary steps of an image-based information retrieval process, according to an embodiment of the disclosure.
• FIG. 5 illustrates an exemplary database table for storing feature IDs and items, according to an embodiment of the disclosure.
  • FIG. 6 illustrates exemplary steps in the process of determining an item based on a scanned image received from a user device, according to an embodiment of the disclosure.
  • FIG. 7 is a block diagram illustrating exemplary modules of the server for providing information in response to receiving a scanned image from the user device, according to an embodiment of the disclosure.
  • FIG. 8 is a block diagram illustrating the exemplary modules of the training image processing module of FIG. 7, according to an embodiment of the disclosure.
  • FIG. 9 is a block diagram illustrating the exemplary modules of the item-selection module of FIG. 7, according to an embodiment of the disclosure.
  • FIG. 10 illustrates exemplary common hardware components of a server, according to an embodiment of the disclosure.
  • DETAILED DESCRIPTION
  • In the following description of preferred embodiments, reference is made to the accompanying drawings which form a part hereof, and in which it is shown by way of illustration specific embodiments in which the disclosure can be practiced. It is to be understood that other embodiments can be used and structural changes can be made without departing from the scope of the embodiments of this disclosure.
  • This generally relates to systems and methods for retrieving information relating to an item based on a scanned image of the item. In particular, the systems and methods can involve using a device, such as a smartphone, to capture a 2-dimensional image of an item and transmit the captured image to a server. The server can analyze the image against pre-stored data to determine a corresponding item associated with the image and obtain information relating to the item from a data repository such as the Internet. The information can then be transmitted from the server to the device.
• FIG. 1 is a block diagram illustrating the exemplary components of an information-retrieval system, according to an embodiment of the disclosure. As illustrated, a device 100 can be connected to a server 102, which in turn can be connected to the Internet 104. The device 100 can be any electronic device with image-capturing capability. In particular, the device 100 can include an image-capturing component such as a camera or a webcam. Although the device 100 in FIG. 1 is shown to be a smartphone, it can also be another device including, for example, a personal computer (PC), Mac, desktop computer, laptop computer, tablet PC, e-reader, camera, camcorder, in-car communication device, and other consumer electronic devices. The device 100 can use its camera to capture an image of an item of interest and send the captured image to the server 102. Accordingly, the device 100 can also include a communication component for communicating with other devices including the server 102. For example, the device 100 can be connected to the server via a wired or a wireless connection/network including, but not limited to, the Internet, local area network (LAN), wide area network (WAN), cellular network, Wi-Fi network, virtual private network (VPN), and Bluetooth connection. In this embodiment, the image can be transmitted as part of a request asking the server 102 to identify one or more items in the image and return additional information relating to the identified one or more items to the device 100. Accordingly, the device 100 can also be referred to as a "requesting device" hereinafter.
  • Although only one device 100 is shown to be connected to the server 102, it should be understood that additional devices can also be connected to the server 102 and request information in the same fashion. The server 102 can be any suitable computing device or devices capable of receiving image data from one or more devices 100, identifying an item based on the image data, retrieving additional data regarding the identified item from internal and/or external sources such as the Internet 104, and transmitting the retrieved data back to the requesting device(s). Various methods can be employed by the server to extract information (e.g., features such as the color, brightness, and/or relative position of the pixels) from the image and identify, based on the extracted information, one or more items associated with the image (e.g., an item depicted in the image). The information extracted from an image can identify features that, alone or in combination, can be used to identify the image among a collection of images.
• In some embodiments, this can be done by the server querying a database containing pre-stored feature IDs and corresponding item(s). The feature IDs in the database can be generated from a collection of training images of various items. As referred to hereinafter, a training image can be any existing image depicting one or more items. One or more features can be identified from a training image by using a feature-extracting mechanism, an example of which will be discussed in detail below. A unique ID (e.g., feature ID) for each of these features can be stored in the database along with the names or IDs of one or more items associated with the training image from which the features were identified. Essentially, the database can store pairings of various features and items.
  • When a request (or a scanned image) from a requesting device is received by the server, the server can perform the same feature extracting process on the scanned image to generate one or more feature IDs from the scanned image. These feature IDs can then be used to query the database to find the matching item(s). The item with the best matching score can be identified as the item associated with the particular scanned image.
• Referring again to FIG. 1, the server 102 can be network-enabled to communicate with one or more requesting devices. In addition, the server 102 can also have access to internal and/or external information repositories to search and retrieve information relating to items identified to be associated with a scanned image. In the embodiment illustrated in FIG. 1, the exemplary external repository can be the Internet 104. After the server identifies the item associated with the scanned image received from the requesting device, the server can gather information relevant to the item from the Internet and forward some or all of the gathered information to the requesting device. In other embodiments, the repository can be one or more internal or external databases storing information regarding a collection of items. The type of information stored in the database can be customized based on the categories (e.g., movie, book) of items to be covered by the image-based information look-up function provided by the server. In one embodiment, the repository can include information relating to any item in the physical world.
  • FIG. 2 is a flow chart illustrating the exemplary steps in an image-based information retrieval process, according to an embodiment of the disclosure. First, a requesting device can capture an image of an item (step 201). In one embodiment, this can be done by, for example, scanning a 2-dimensional item, such as a cover of a book, using the camera on a smartphone.
• After the image is captured, the requesting device can transmit the image to a server to identify the item depicted in or associated with the image (step 202). For example, the scanned book cover can be sent to the server to retrieve information regarding the particular book. The image-transmitting step can take place automatically after the requesting device determines that the scanning operation was successful. Additionally or alternatively, the device may perform one or more quality-assurance steps to ensure that the captured image meets certain criteria in terms of clarity, resolution, size, brightness, etc., so that it can be properly analyzed by the server. If the image does not meet one or more of the criteria, the device can prompt the user to scan the image again. In some embodiments, the user has to manually transmit the image to the server. In some embodiments, the user can also enter additional information, such as keywords specifying the type of information to be returned by the server, to be transmitted with the image to the server. In the book cover example, the scanned image of the book cover can be transmitted to the server. The user may optionally enter keywords, such as "author" and/or "cover designer," to be transmitted with the image. These keywords may direct the server to search for information specifically relating to the author and/or cover designer of this particular book after the server identifies the book from the scanned image.
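• The disclosure does not specify how such quality-assurance checks would be implemented. As one illustrative possibility, the Python sketch below estimates sharpness from the variance of the Laplacian and brightness from the mean pixel value; the function name and all threshold values are hypothetical placeholders, not values from the disclosure.

```python
import cv2

def image_is_usable(path, min_sharpness=100.0, min_brightness=40, max_brightness=220):
    """Illustrative pre-upload checks: reject blurry, too-dark, or washed-out scans.
    All thresholds here are arbitrary placeholders."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False  # file missing or unreadable
    sharpness = cv2.Laplacian(img, cv2.CV_64F).var()  # low variance suggests blur
    brightness = img.mean()
    return (sharpness >= min_sharpness
            and min_brightness <= brightness <= max_brightness)
```

• If a check fails, the device could prompt the user to rescan rather than transmitting an image the server is unlikely to analyze successfully.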
  • After receiving the image transmitted from the requesting device, the server can identify one or more features of the image (step 203). In one embodiment, the one or more features can be represented by one or more feature IDs. For example, the image of the book cover may be processed by the server to extract certain features defined by, for example, color, brightness, and/or relative position of the pixels of the image. Each of these features can then be quantified as a unique feature ID.
  • Next, the server can identify an item based on the feature IDs calculated from the image (step 204). This can involve looking up items corresponding to each of the feature IDs in a database and ranking these items based on the number of feature IDs to which they correspond. The item ranked the highest can be determined to be the best match for the scanned image (e.g., most likely to be the item depicted in the image). Steps 203 and 204 will be described in further detail below.
• After the item is determined, the server can then search on the Internet (or another data repository) for information relating to the item (step 205). In the book cover example, the server can determine that the cover image corresponds to the cover of one of the Harry Potter books. The server can then run a search on the Internet for information, such as the titles and plot summaries of all Harry Potter books, readers' reviews of the book, information relating to the graphic designer who designed the book cover, and online book stores offering this particular book for sale. Essentially, any information regarding the book that is available on the Internet can be found by the server and made available to the user device. In the embodiments where the scanned image is transmitted to the server along with keywords entered by the user, the search can incorporate these keywords to provide results tailored to the user's interest.
  • The server can then transmit the search results to the requesting device (step 206). The results can be displayed on the screen of the device for user browsing. They can also include web links to other websites where additional information can be available to the user. For example, the user may follow one of the links to an online book store to buy a copy of the Harry Potter book. Additionally or alternatively, he may also purchase and download an electronic copy of the book onto his device so that he can start reading right away.
• The exemplary embodiments discussed above can provide a much simpler and more effective way of retrieving information than existing mechanisms, including those described in the Background section. First, the disclosed methods and systems do not require any customized hardware, such as a barcode scanner. Any device with a camera can be used to scan an image of an item and receive information regarding the particular item from the server. Furthermore, the item of interest does not have to come with any barcode, QR code, or any other type of code to enable the information retrieval process. All it takes is for the user to scan, or capture in another way, a two-dimensional image of the item using the camera on his mobile phone to have access to potentially all kinds of information relating to the item. From the user's end, the process could hardly be more straightforward. Another advantage of the disclosed systems and methods is that the backend server can have access to the whole Internet to find information relating to the item. This can overcome the limitations of existing systems where only a limited amount of information (e.g., the information available in a closed system) can be returned in response to an inquiry based on, for example, a QR code.
• FIGS. 3a-3c are exemplary screen shots on the requesting device illustrating user interfaces for retrieving information based on a scanned image, according to embodiments of the disclosure. In particular, FIG. 3a illustrates an exemplary interface 300 for initiating an image-scanning operation. This interface can be an interface of an application (e.g., a book cover scanning application for retrieving information on a book by scanning the cover) installed on the device. In some embodiments, the interface can be launched directly from the home screen of the device and/or from the camera application of the device. It can be launched by a softkey, hardkey, or voice control of the device. As illustrated in FIG. 3a, a menu including one or more options 304, 306, 308, 310 can be superimposed on top of the camera view, which may be darkened to indicate that no active scanning is taking place. One of the options can be an "Image Scan" option 308 that can allow a user to scan an image of an item or object (e.g., a book cover) that is of interest to the user. The interface 300 can include additional menu options, such as "2D Code Scan" for scanning QR codes and "Translate" for translating text captured by the camera. Although three menu options are illustrated in the screen shot of FIG. 3a, it should be understood that the menu options can vary depending on the functions provided by the device and/or a particular application. In this embodiment, the menu can also include a "Cancel" softkey to leave this interface 300. Optionally, a frame 302 indicating the scanning area can be displayed to allow the user to visually control the subsequent image-scanning operation.
• If the user hits the "Image Scan" softkey on the interface 300 of FIG. 3a, the scanning operation can be initiated and a corresponding "Image Scan" interface, such as the one illustrated in FIG. 3b, can be displayed on the user device. As illustrated in FIG. 3b, the superimposed option menu is no longer displayed on the interface 312. Instead, the image frame 302′ can expand to occupy most of the display area on the interface 312. In this example, the device can be positioned so that the cover of a Harry Potter book can be viewed within the frame 302′ through the camera lens of the device. As with any other camera-based operation, the user can move the device around to ensure that the item can be fully captured by the camera. The device can automatically start scanning when it determines that the item is within the frame 302′. In this embodiment, the scan can produce a 2-dimensional image of the item. Optionally, as illustrated in FIG. 3b, the size 316 of the image file can also be displayed on the interface 312. The image can be automatically transmitted to the server to start a query to retrieve information relating to the item in the image. In some embodiments, one or more optional fields can be displayed either on the "Image Scan" interface 312 or on a subsequent interface to allow the user to specify additional search and/or filtering criteria, such as keywords. For example, the user can input "author" and/or "price" to request specific information regarding the particular book.
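• The disclosure does not define a wire format for this request. A minimal sketch of the client-side upload, assuming a hypothetical HTTP endpoint and field names (none of which come from the disclosure), might look like this:

```python
import requests

# Hypothetical endpoint and field names, for illustration only.
SERVER_URL = "https://example.com/image-query"

def send_scan(image_path, keywords=None):
    """Upload a scanned image, optionally with user-entered keywords such
    as "author" or "price", and return the server's parsed response."""
    with open(image_path, "rb") as f:
        files = {"image": f}
        data = {"keywords": ",".join(keywords)} if keywords else {}
        resp = requests.post(SERVER_URL, files=files, data=data, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g., title, author, links to online stores

# result = send_scan("book_cover.jpg", keywords=["author", "price"])
```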
• The processes performed by the server will be discussed in detail in later paragraphs. After the server finds relevant information regarding the item associated with the scanned image, it can transmit this information in a specific format to the user device. FIG. 3c illustrates an exemplary "Scan Result" screen 318 displaying the information returned from the server. As illustrated, the "Scan Result" screen 318 can include a thumbnail of the original scanned image 320, the title 322, author 324, and edition 326 of the book. Each of the title 322, author 324, and edition 326 can also be a link to retrieve additional information regarding the respective field. In this embodiment, the screen 318 can also display information relating to at least one online book store (e.g., Amazon.com) 328 that has the book in stock. Clicking on the name of the store, for example, can redirect the user to the online store, where he may purchase a copy of the book. Additionally or alternatively, the "Scan Result" screen 318 can display any other information identified by the server as relating to the item in the scanned image. Depending on the type of item captured in the image, the "Scan Result" screen can be customized to display different kinds of information. For example, if the scanned image includes a movie poster, the scan results can include, for example, a preview clip of the movie, a list of other movies involving the same director or actors, a list of theatres and show times, and/or a link to a movie ticket website where the user can purchase a ticket to see the movie.
• As apparent from the exemplary user interfaces 300, 312, 318 of FIGS. 3a-3c, embodiments of the disclosure may require only minimal user input, namely, pointing the camera of the user's device at an item of interest, to obtain potentially all sorts of information regarding the item. From the user's point of view, the whole process can be carried out in a seamless and straightforward fashion.
• As mentioned above, when the server receives the scanned image from the requesting device, the server can identify various features of the image. The server can include a database of features extracted from a collection of known images (i.e., training images) of items. The server can then match the features extracted from the scanned image with the features from the collection of training images to identify a best-matching training image. Because each training image can be associated with at least one known item, the server can identify one or more items relating to the scanned image; if features from the scanned image match features from multiple training images, multiple candidate items can be identified.
  • First, the processing of training images by the server to generate and store a list of features and their associated items is discussed. FIG. 4 is a flow chart illustrating the exemplary steps of such a process, according to an embodiment of the disclosure. First, the server can collect images of various items (step 401). For example, if the server is to provide information relating to books and movies based on scanned book covers and movie posters received from the users, the server can have a collection of images of book covers, movie posters, and/or DVD covers. These images are referred to as training images and can be stored on the server as references. The process of generating feature IDs representing the features of a training image is discussed in the following steps 402-404. It should be understood that the process described in these steps is one of many different methods that can be used for generating a list of feature IDs from an image. Other suitable methods can also be employed to accomplish the same results.
• In this embodiment, to extract one or more features from a training image, the server can first identify a number of keypoints of the image (step 402). First, scale-invariant feature transform (SIFT) features can be extracted from the training image. SIFT features can be invariant with respect to, for example, rotation, scaling, and illumination changes of the image. They can also be relatively stable with respect to, for example, changes in viewing angle, affine transformation, noise, and other factors that may affect an image. In one embodiment, SIFT feature extraction can be carried out as follows.
  • First, scale space extrema can be detected. To effectively extract stable keypoints, the training image can be convolved with Difference of Gaussians (DoGs) that occur at multiple scales.

• D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ), where L(x, y, σ) denotes the original image I(x, y) convolved with the Gaussian kernel G(x, y, σ)
• This can be achieved by generating Gaussian image pyramids. The image pyramids can be in P groups, and each group can include S layers. The layers of the first group can be generated by convolving the original image with Gaussian kernels at multiple scales (adjacent layers can have a scale difference of factor k). The next group can be generated by downsampling the previous group of images. A DoG pyramid can then be generated from the differences between adjacent layers of the Gaussian image pyramid.
• To locate the scale space extrema, each sampling point (e.g., pixel) in the DoG pyramid can be compared to its eight adjacent pixels at the same scale and to the nine neighboring pixels in each of the scales above and below (a total of 8+9×2=26 pixels). If the value of the pixel is less than or greater than the values of all 26 neighboring pixels, the pixel can be determined to be a local extremum (i.e., a keypoint).
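• A compact single-octave sketch of this extrema search is shown below; the use of OpenCV and SciPy is an assumption, as the disclosure does not prescribe an implementation. A 3×3×3 maximum/minimum filter over the stacked DoG layers compares each pixel to exactly the 26 neighbors described above.

```python
import cv2
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def dog_extrema(gray, sigma=1.6, k=2 ** 0.5, layers=5):
    """Single-octave sketch: blur at increasing scales, difference adjacent
    layers, then keep pixels that equal the max (or min) of their 3x3x3
    neighborhood in the stacked DoG volume."""
    gray = gray.astype(np.float32)
    blurred = [cv2.GaussianBlur(gray, (0, 0), sigma * k ** i) for i in range(layers)]
    dog = np.stack([blurred[i + 1] - blurred[i] for i in range(layers - 1)])
    extrema = (dog == maximum_filter(dog, size=3)) | (dog == minimum_filter(dog, size=3))
    # the first and last DoG layers lack a neighboring scale above or below
    scale, y, x = np.nonzero(extrema[1:-1])
    return list(zip(scale + 1, y, x))  # (layer, row, col) candidate keypoints
```

• A full implementation would repeat this per group after downsampling, as described above.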
• Next, an accurate location of each keypoint can be determined. Specifically, this can be done by fitting a 3-dimensional quadratic function to accurately determine the location and scale of each keypoint. At the same time, low-contrast candidate points and response points along an edge can be discarded to improve consistency in the later feature-matching processes and to increase noise-rejection capability. Accurately locating a keypoint can also include determining a main orientation of the keypoint and generating a descriptor of the keypoint.
• To determine the orientation of a keypoint, pixels in a neighboring window centered on the keypoint can be sampled, and an orientation histogram can be used to capture the gradient orientations of the neighboring pixels. The histogram can have 36 bins, with each bin covering 10 degrees for a total range of 0-360 degrees. The peaks in this histogram can correspond to the dominant orientations of the neighboring gradients of the keypoint, and thus can be used as the dominant orientations of the keypoint. In the gradient orientation histogram, the orientations corresponding to any peaks that are within 80% of the highest peak can be treated as supplemental orientations of the keypoint.
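• The following numpy sketch illustrates the 36-bin histogram just described, assuming a square grayscale patch sampled around the keypoint. Full SIFT also weights each gradient magnitude by a Gaussian window centered on the keypoint, which is omitted here for brevity.

```python
import numpy as np

def dominant_orientations(patch):
    """Return the bin-center angles (degrees) of the highest histogram peak
    and of any peak within 80% of it, per the scheme described above."""
    gy, gx = np.gradient(patch.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 360.0
    hist, _ = np.histogram(angle, bins=36, range=(0.0, 360.0), weights=magnitude)
    peak = hist.max()
    if peak == 0:  # flat patch: no meaningful gradient orientation
        return []
    return [i * 10 + 5 for i, v in enumerate(hist) if v >= 0.8 * peak]
```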
• Referring to FIG. 4, next, a descriptor for each keypoint can be generated (step 403). In this embodiment, first, the zero-degree direction of the axes can be rotated to match the dominant orientation of the keypoint to achieve rotational invariance. Then, in a 16×16 region around the keypoint, a set of orientation histograms can be created over 4×4 pixel neighborhoods, each histogram with 8 bins. The gradient magnitudes falling into each bin can then be summed. Since there are 4×4=16 histograms, each with 8 bins, 128 values can be generated for each keypoint to form a 128-dimensional SIFT feature vector. The SIFT vector thus has the effects of geometric distortion factors, such as scaling or rotation of the training image, largely removed. This vector can then be normalized to unit length in order to enhance invariance to affine changes in illumination.
  • The 128-dimensional SIFT feature vector can then be quantified as a feature ID (e.g., a number from 1-1,000,000) (step 404). That is, each SIFT feature vector representing a feature of the training image can have a corresponding numeric feature ID. Typically, more than one feature can be identified from a training image. Accordingly, each training image may be associated with multiple feature IDs. Because each training image can be associated with an item (e.g., the item depicted in the image), the item can also be associated with the multiple feature IDs.
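• The disclosure does not specify the quantization mechanism. One common approach, sketched below under that assumption, is vector quantization: cluster the training descriptors into a "visual vocabulary" with k-means and use each descriptor's cluster index as its feature ID. The vocabulary size and file names are illustrative, and OpenCV's SIFT and scikit-learn's MiniBatchKMeans stand in for whatever extractor and quantizer an implementation actually uses.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

sift = cv2.SIFT_create()

def descriptors_of(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)  # desc: N x 128 SIFT vectors
    return desc

# Learn a visual vocabulary from all training-image descriptors; each
# cluster index then serves as a numeric feature ID.
training_desc = np.vstack([descriptors_of(p) for p in ["cover1.jpg", "cover2.jpg"]])
vocab = MiniBatchKMeans(n_clusters=min(1000, len(training_desc))).fit(training_desc)

def feature_ids(path):
    """Map an image to its set of feature IDs (1-based cluster indices)."""
    return {int(i) + 1 for i in vocab.predict(descriptors_of(path))}
```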
  • Similarly, the same feature may be found in different training images. For example, the book cover of “Harry Potter and the Chamber of Secrets” may share some of the same features with that of “Harry Potter and the Goblet of Fire” (e.g., both covers may include an image of the text “Harry Potter”). Accordingly, each feature ID may be associated with multiple items. The relationship between the features (as identified by their respective feature ID) and the items can be captured and stored in a database accessible to the server (step 405).
• In the various embodiments, the database can be any suitable data storage program/format including, but not limited to, a list, text file, spreadsheet, relational database, and/or object-oriented database. FIG. 5 illustrates an exemplary database table 500 for storing feature IDs 502 and their corresponding items 504. It should be understood that the format of the table and the structure of the database can vary and do not have to conform to the two-column format shown in FIG. 5. In this example, the feature IDs 502 can be unique incremental numbers from, for example, 1 to 1 million. Each feature ID can be associated with a feature from one of the training images and generated using the process described in FIG. 4. The number of feature IDs can depend on the number of features ascertainable from the training images and does not have to be limited to 1 million as shown in table 500.
• As shown in the table 500, each feature ID can correspond to one or more items. As previously discussed, when the same feature is found in two different images, the corresponding feature ID can be associated with two different items. For example, as shown in the table of FIG. 5, both "Harry Potter and the Chamber of Secrets" and "Harry Potter and the Goblet of Fire" can be associated with the feature ID "1." However, "Harry Potter and the Goblet of Fire" may also have a second feature (identified by feature ID "3") that is not associated with "Harry Potter and the Chamber of Secrets." In this example, feature ID "2" can correspond to another book, "Da Vinci Code." Although book names are listed under the "Item" column 504 in table 500, it should be understood that the book names can be replaced by item IDs in other embodiments, where each book can be associated with a unique item ID.
• When the server receives a scanned image from a user device as a request for information relating to the item in the image, the server can process the scanned image to extract feature IDs representing various features of the scanned image and look up the corresponding item(s) from the database (e.g., the table of FIG. 5) to determine an item associated with the scanned image.
• FIG. 6 illustrates exemplary steps in the process of determining an item based on a scanned image received from a user device, according to embodiments of the disclosure. First, the server can receive a scanned image from a user device (step 601). The scanned image can be transmitted over a network. In some embodiments, other information associated with the scanned image can also be received by the server. Such information may include an ID of the user device from which the scanned image was transmitted and/or keywords specified by the user. The scanned image can then be processed to generate one or more feature IDs that identify the various features of the image. The same process discussed above for generating feature IDs from training images can be applied to the scanned image. In particular, keypoints of the scanned image can be identified (step 602). A descriptor for each identified keypoint can be generated (step 603). The descriptors can then be quantified to generate feature IDs (step 604). Steps 602-604 can correspond to steps 402-404 of FIG. 4. As such, details of exemplary implementations of each of these steps are not repeated here.
  • With the feature IDs determined, corresponding item(s) for each feature ID can be looked up from a database (e.g., table of FIG. 5) listing all feature IDs generated from the training images and the items corresponding to each of these feature IDs. That is, the items corresponding to the feature IDs of the scanned image can be selected from the database (step 605). Using the table of FIG. 5 as an example, if the feature IDs from the scanned image include “1” and “3,” both “Harry Potter and the Chamber of Secrets” and “Harry Potter and the Goblet of Fire” can be selected because each of these items corresponds to at least one of feature IDs “1” and “3.” In contrast, “Da Vinci Code” is not selected because the scanned image did not generate feature ID “2.”
• Next, the total number of hits for each of the selected items can be determined (step 606). For example, "Harry Potter and the Chamber of Secrets" can have a total of one hit while "Harry Potter and the Goblet of Fire" can have a total of two hits based on the information in the table of FIG. 5. Typically, the higher the total number of hits for an item, the better a match it can be with regard to the scanned image. Optionally, the total number of hits for each item can be compared to a predetermined threshold value to eliminate items with a relatively low number of hits (step 607). In one embodiment, for example, the threshold number can be 20. If the total hit count of an item does not meet the threshold, the item can be eliminated from consideration. Among the items that exceed the threshold, the item with the most hits can be selected as the corresponding item for the particular scanned image (step 608).
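• Steps 605-608 reduce to a few lines once the FIG. 5 table is available as a mapping. The sketch below uses the FIG. 5 example data; the toy threshold of 1 stands in for a realistic value such as the 20 mentioned above.

```python
from collections import Counter

# FIG. 5 in miniature: feature ID -> items whose training images produced it.
FEATURE_TO_ITEMS = {
    1: ["Harry Potter and the Chamber of Secrets",
        "Harry Potter and the Goblet of Fire"],
    2: ["Da Vinci Code"],
    3: ["Harry Potter and the Goblet of Fire"],
}

def select_item(scanned_feature_ids, threshold=1):
    """Look up candidate items (step 605), count hits (step 606), eliminate
    items that do not exceed the threshold (steps 607/609), and return the
    top item (step 608), or None if nothing survives."""
    hits = Counter()
    for fid in scanned_feature_ids:
        for item in FEATURE_TO_ITEMS.get(fid, []):
            hits[item] += 1
    survivors = {item: n for item, n in hits.items() if n > threshold}
    return max(survivors, key=survivors.get) if survivors else None

print(select_item([1, 3]))  # -> "Harry Potter and the Goblet of Fire"
```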
• In some embodiments, a geometric verification step can be performed to further verify that the scanned image matches the training image associated with the candidate item (step 610) before the item is determined to be the best match for the scanned image. In particular, geometric verification can involve matching the individual pixels or features from the scanned image with those from the training image of the item selected through the process described above. This can be done by, for example, measuring and comparing the relative distances between two or more pixels in each of the two images. Based on how well the relative distances between the pixels match in the two images, it can be determined whether the training image is a top match for the scanned image. If the geometric verification is successful, the item associated with the training image can be confirmed to be the item associated with the scanned image.
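• One standard way to perform such a geometric check, offered here as an assumption rather than the disclosure's prescribed method, is to match keypoints between the two images and fit a homography with RANSAC, accepting the candidate only if enough matches are geometrically consistent. The ratio-test constant and inlier threshold below are conventional but arbitrary.

```python
import cv2
import numpy as np

def geometric_verification(scan_gray, train_gray, min_inliers=15):
    """Accept the candidate item if a RANSAC homography between matched SIFT
    keypoints of the two grayscale images retains enough inliers."""
    sift = cv2.SIFT_create()
    kp1, d1 = sift.detectAndCompute(scan_gray, None)
    kp2, d2 = sift.detectAndCompute(train_gray, None)
    if d1 is None or d2 is None:
        return False
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]  # ratio test
    if len(good) < 4:  # a homography needs at least 4 correspondences
        return False
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return mask is not None and int(mask.sum()) >= min_inliers
```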
  • After an item is determined to be the item corresponding to the scanned image, the server can search for information relating to the item in one or more data repositories. For example, if “Harry Potter and the Goblet of Fire” is determined to be the item associated with the scanned image received from the user device, the server can conduct a search for information relating to this particular book. The results from the search can then be transmitted back to the user device for display, as shown in the screen shot of FIG. 3 c.
• Although the above embodiments describe identifying books and movies from images of book covers and movie posters, respectively, the same methods and systems can be applied to obtain information relating to any item, as long as an image of the item can be captured by scanning or other mechanisms and the item in the image can be recognized based on the information available (e.g., information extracted from the training images) to the server. In various embodiments, the item can also include, for example, the logo of a product, a screen shot from another device, a work of art such as a painting, or a 3-dimensional object such as a building. It should also be understood that the processes for extracting information such as feature IDs from an image are not limited to those described in the embodiments above. Without departing from the spirit of the disclosure, other suitable processes for recognizing text, graphics, facial expressions, geographic locations, 1D and 2D codes, etc. can also be used for identifying a particular item for the purpose of providing information relating to the item. Examples of other types of image processing systems and methods are described in, for example, Chinese Patent Application No. 201210123853.5, filed Apr. 26, 2012, the content of which is incorporated by reference herein in its entirety.
  • The above-described exemplary processes including, but not limited to, generating a list of feature IDs from training images, storing these feature IDs with their corresponding items in the database, determining a best-matching item for a scanned image using the information stored in the database, and obtaining information relating to the best-matching item can be implemented using various combinations of software, firmware, and hardware technologies on the server (or a cluster of servers). The server may include one or more modules for facilitating the various tasks of these processes. FIGS. 7-9 illustrate exemplary modules in an exemplary server for performing these tasks. In some embodiments, these modules can be implemented mostly in software.
• FIG. 7 illustrates exemplary modules of the server for providing information in response to receiving a scanned image from the user device, according to embodiments of the disclosure. The server 700 can include, for example, a training-image processing module 702, an image-receiving module 704, an item-selection module 706, an information retrieval module 708, a data-transmission module 710, and a database 712. The image-receiving module 704 can receive one or more scanned images from user devices. The received scanned images can be processed by the item-selection module 706 to determine an item that is the best match to the scanned image. In some embodiments, the item-selection module 706 can perform one or more of the steps illustrated in FIG. 6. The information retrieval module 708 can be connected to the Internet for retrieving information relating to the selected item. The information can then be transmitted back to the requesting device by the data-transmission module 710.
  • The server can also include a training image processing module 702 for collecting and processing training images to identify various feature IDs and their corresponding items. In some embodiments, the training image processing module 702 can perform one or more steps illustrated in FIG. 4. The corresponding feature IDs and items identified by the training image processing module 702 can be stored in the database 712, which can be accessed by the item-selection module 706 during the process of selecting an item based on a received scanned image.
• FIG. 8 is a block diagram illustrating the exemplary modules of the training image processing module of FIG. 7, according to embodiments of the disclosure. As illustrated, the training image processing module 800 can include, for example, a training image obtaining module 801, a keypoint-identifying module 802, a descriptor-generating module 803, a feature ID generating module 804, and a database-access module 805. The training image obtaining module 801 can obtain training images (e.g., performing step 401 in FIG. 4) to be processed by the other modules of the training image processing module 800. The keypoint-identifying module 802 can identify keypoints of a training image (e.g., performing step 402 in FIG. 4). The descriptor-generating module 803 can generate a descriptor for each identified keypoint (e.g., performing step 403 in FIG. 4). The feature ID generating module 804 can quantify the descriptors to generate corresponding feature IDs (e.g., performing step 404 in FIG. 4). The database-access module 805 can read from and write to a database. In particular, the database-access module 805 can store feature IDs with the names of their corresponding items in the database (e.g., performing step 405 in FIG. 4).
  • FIG. 9 is a block diagram illustrating the exemplary modules of the item-selection module of FIG. 7, according to embodiments of the disclosure. The item-selection module 900 can include one or more of the following sub-modules: a scanned image receiving module 901 for receiving scanned images from one or more user devices (e.g., performing step 601 in FIG. 6); a keypoint-identifying module 902 for identifying keypoints of a scanned image (e.g., performing step 602 in FIG. 6); a descriptor-generating module 903 for generating a descriptor for each keypoint (e.g., performing step 603 in FIG. 6); a feature ID generating module 904 for quantifying a descriptor to generate a feature ID (e.g., performing step 604 in FIG. 6); an item-selecting module 905 for selecting items corresponding to one or more feature IDs (e.g., performing step 605 in FIG. 6); a hit-counting module 906 for determining a total number of hits for each of the selected items (e.g., performing step 606 in FIG. 6); a threshold module 907 for determining whether the number of hits for an item exceeds a predetermined threshold (e.g., performing step 607 in FIG. 6); a top item selection module 908 for selecting the item as the item that best matches with the scanned image (e.g., performing step 608 in FIG. 6); an item-eliminating module 909 for eliminating an item if the number of hits for the item does not exceed a predetermined threshold (e.g., performing step 609 in FIG. 6); and a geometric verification module 910 for performing geometric verification on a scanned image based on the training image associated with the best-matching item (e.g., performing step 610 in FIG. 6).
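• As a structural illustration only, the sub-modules of FIG. 9 could be wired together roughly as follows; the class and method names are invented for this sketch, and the two stubs stand in for the keypoint/descriptor/feature ID and geometric-verification sketches shown earlier.

```python
class ItemSelectionModule:
    """Skeleton of FIG. 9: select() chains steps 601-610 in order."""

    def __init__(self, feature_db, threshold=20):
        self.feature_db = feature_db  # feature ID -> items, as in FIG. 5
        self.threshold = threshold

    def feature_ids_of(self, image):          # steps 602-604
        raise NotImplementedError             # e.g., SIFT + quantization

    def verify_geometry(self, image, item):   # step 610
        raise NotImplementedError             # e.g., RANSAC homography check

    def select(self, scanned_image):
        hits = {}
        for fid in self.feature_ids_of(scanned_image):        # step 605
            for item in self.feature_db.get(fid, []):
                hits[item] = hits.get(item, 0) + 1            # step 606
        hits = {i: n for i, n in hits.items() if n > self.threshold}  # steps 607/609
        if not hits:
            return None
        top = max(hits, key=hits.get)                         # step 608
        return top if self.verify_geometry(scanned_image, top) else None
```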
• In some embodiments, the training image processing module 800 and the item-selection module 900 can share one or more of the keypoint-identifying, descriptor-generating, and feature ID generating modules.
• In some embodiments, one or more of these modules on the server can be stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "non-transitory computer-readable storage medium" can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device. The non-transitory computer readable storage medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, a portable computer diskette (magnetic), a random access memory (RAM) (magnetic), a read-only memory (ROM) (magnetic), an erasable programmable read-only memory (EPROM) (magnetic), a portable optical disc such as a CD, CD-R, CD-RW, DVD, DVD-R, or DVD-RW, or flash memory such as compact flash cards, secure digital cards, USB memory devices, memory sticks, and the like.
• The non-transitory computer readable storage medium can be part of a computing system serving as the server. FIG. 10 illustrates exemplary common hardware components of one such computing system. As illustrated, the system 1000 can include a central processing unit (CPU) 1002; I/O components 1004 including, but not limited to, one or more of a display, keypad, touch screen, speaker, and microphone; a storage medium 1006, such as one of those listed in the preceding paragraph; and a network interface 1008, all of which can be connected to each other via a system bus 1010. The storage medium 1006 can include one or more of the modules of FIGS. 7-9.
  • Although the modules illustrated in FIGS. 7-9 are described to be modules on the server, it should be understood that, in other embodiments, one or more of these modules can be part of the requesting device. That is, at least part of the above-described processes of, for example, generating feature IDs from training images and scanned images, identifying one or more items based on the feature IDs, and searching for information relating to the identified items can be performed by the requesting device without involving the server.
• For example, referring again to FIG. 7, one or more of the image-receiving module 704, training image processing module 702, item-selection module 706, information retrieval module 708, data-transmission module 710, and database 712 can be a part of the requesting device. The image-receiving module 704 can be connected to the camera of the device for receiving images captured by the camera. It can also be connected to, for example, a communication module for receiving an image from an email program, messaging application, the Internet, and/or removable storage means such as SIM cards and USB drives. The requesting device can also process these images locally using one or more of the other modules shown in FIG. 7. For example, it can perform one or more steps illustrated in FIG. 4 to identify feature IDs from training images and store them in a local database. Alternatively or additionally, the requesting device can also perform one or more steps illustrated in FIG. 6 to select an item based on a scanned image. Alternatively or additionally, the requesting device can retrieve information relating to the selected item from the Internet or other internal or external data repositories. Depending on the tasks performed by the requesting device, one or more modules illustrated in FIGS. 8 and 9 can also reside on the requesting device.
• In one embodiment, one of the requesting devices can process image-based information retrieval requests from one or more other requesting devices. As such, no dedicated server is necessary to carry out the processes and methods discussed above. In other embodiments, the various steps and tasks involved in the processes described in view of FIGS. 4-6 can be divided between the server and one or more requesting devices. For example, the server may handle the processing of training images while the requesting device handles the processing of scanned images locally, transmitting only the feature IDs generated from the scanned image to the server for identifying an item and retrieving information from the Internet. In another example, the server can process the training images to obtain a list of feature IDs with corresponding items (e.g., the table of FIG. 5) and transmit the list to one or more requesting devices. The requesting device(s) can store the list locally and, after processing a scanned image, identify an item based on information from the scanned image and the list. The requesting device(s) can then search for information relating to the item either by connecting directly to the Internet and performing a search, or by passing the item along to the server and asking the server to perform the search and return the results.
• Essentially, embodiments of the disclosure provide methods and systems that can allow a user to scan a 2-dimensional image of any item of interest using, for example, his smartphone, and provide him with all sorts of information relating to the item that can be ascertained from, for example, the Internet and/or other existing data repositories. This can provide a simple, low-cost, yet user-friendly and effective way of looking up information relating to anything that can be captured in an image.
  • Although embodiments of this disclosure have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of embodiments of this disclosure as defined by the appended claims.

Claims (28)

1. An image analysis system comprising:
an image-receiving module that receives an image from an external source,
a keypoint-identifying module that identifies at least one keypoint from the image,
a descriptor-generating module that generates a descriptor for each of the at least one identified keypoint,
a feature ID-generating module that generates a feature ID for each descriptor,
an item-selecting module that selects at least one item based on at least one feature ID generated by the feature ID-generating module,
a top item selecting module that selects a best-matched item from the selected at least one item, and
an information retrieving module that retrieves information relating to the best-matched item.
2. The image analysis system of claim 1, wherein the external source comprises a network.
3. The image analysis system of claim 1, wherein the keypoint-identifying module identifying the at least one keypoint from the image comprises extracting at least one Scale-invariant feature transform (SIFT) feature from the image.
4. The image analysis system of claim 1, wherein the keypoint-identifying module identifying the at least one keypoint from the image comprises determining a location and orientation associated with the at least one identified keypoint.
5. The image analysis system of claim 1, wherein the descriptor-generating module generating a descriptor for each of the at least one identified keypoint comprises generating a SIFT vector for each of the at least one identified keypoint.
6. The image analysis system of claim 5, wherein the feature ID-generating module generating a feature ID to each descriptor comprises assigning a unique feature ID to each SIFT vector.
7. The image analysis system of claim 1, wherein the item-selecting module selecting at least one item based on at least one feature ID comprises:
querying a database containing multiple feature IDs and items corresponding to each of the feature IDs to obtain at least one item corresponding to the at least one feature ID, and
the top item selecting module selecting a best-matched item comprises:
determining an item with a most number of hits from the obtained at least one item as the best-matched item.
8-10. (canceled)
11. An information-providing system comprising:
an image-receiving module that receives an image from a device,
an item-selection module that identifies an item based on the received image,
an information-retrieving module that retrieves information relating to the item, and
a data transmitting module that transmits the retrieved information to the device,
wherein the item is identified by matching one or more features of the received image with features identified from a training image associated with the item.
12. The information-providing system of claim 11, comprising:
a training image processing module that identifies one or more features from at least one training image.
13. (canceled)
14. The information-providing system of claim 11, comprising:
a database for storing the features identified from the training image.
15. (canceled)
16. The information-providing system of claim 11, wherein the information is retrieved from the Internet.
17. The information-providing system of claim 11, wherein the identified item is a book and the received image comprises a book cover of the book.
18. The information-providing system of claim 11, wherein the item-selection module comprises:
a keypoint-identifying module that identifies at least one keypoint of the received image,
a descriptor-generating module that generates a descriptor for each of the at least one keypoint,
a feature ID generating module that quantifies a descriptor to generate at least one feature ID,
an item-selecting module that selects at least one item corresponding to each of the at least one feature ID,
a hit-counting module that determines a total number of hits for each of the at least one selected item, and
a top item selection module that selects one of the at least one selected item that best matches with the received image.
19-20. (canceled)
21. An information-providing method comprising:
receiving an image from a device,
identifying an item based on the received image,
retrieving information relating to the item, and
transmitting the retrieved information to the device,
wherein the item is identified by matching one or more features of the received image with features identified from a training image associated with the item.
22. The information-providing method of claim 21, comprising:
identifying one or more features from at least one training image.
23. (canceled)
24. The information-providing method of claim 21, comprising:
storing the features identified from the training image.
25. (canceled)
26. The information-providing method of claim 21, wherein retrieving the information relating to the item comprises retrieving the information related to the item from the Internet.
27. The information-providing method of claim 21, wherein the identified item is a book and the received image comprises a book cover of the book.
28. The information-providing method of claim 21, wherein identifying an item based on the received image comprises:
identifying at least one keypoint of the received image,
generating a descriptor for each of the at least one keypoint,
quantifying a descriptor to generate at least one feature ID,
selecting at least one item corresponding to each of the at least one feature ID,
determining a total number of hits for each of the at least one selected item, and
selecting one of the at least one selected item that best matches with the received image.
29-31. (canceled)
32. An information-requesting device comprising:
a camera that captures an image of an item,
a keypoint-identifying module that identifies at least one keypoint from the image,
a descriptor-generating module that generates a descriptor for each of the at least one identified keypoint,
a feature ID-generating module that generates a feature ID for each descriptor,
a data-receiving module that receives a list of corresponding feature IDs and items from an external source,
an item look up module that identifies an item from the received list based on the at least one generated feature ID, and
an information retrieving module that retrieves information relating to the identified item.
33-35. (canceled)

Applications Claiming Priority (3)

Application Number: CN2012101238535A (published as CN102682091A), priority date 2012-04-25, filed 2012-04-25, titled "Cloud-service-based visual search method and cloud-service-based visual search system"
Application Number: CN201210123853.5, priority date 2012-04-25
Application Number: PCT/CN2013/074731 (published as WO2013159722A1), priority date 2012-04-25, filed 2013-04-25, titled "Systems and methods for obtaining information based on an image"

Publications (1)

Publication Number Publication Date
US20140254942A1 true US20140254942A1 (en) 2014-09-11

Family

ID=46814016

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/241,863 Active US9411849B2 (en) 2012-04-25 2013-04-09 Method, system and computer storage medium for visual searching based on cloud service
US13/990,791 Abandoned US20140254942A1 (en) 2012-04-25 2013-04-25 Systems and methods for obtaining information based on an image

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/241,863 Active US9411849B2 (en) 2012-04-25 2013-04-09 Method, system and computer storage medium for visual searching based on cloud service

Country Status (4)

Country Link
US (2) US9411849B2 (en)
CN (2) CN102682091A (en)
SG (1) SG2014007280A (en)
WO (2) WO2014005451A1 (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150073919A1 (en) * 2013-09-11 2015-03-12 Cinsay, Inc. Dynamic binding of content transactional items
US20150138385A1 (en) * 2013-11-18 2015-05-21 Heekwan Kim Digital annotation-based visual recognition book pronunciation system and related method of operation
US9113214B2 (en) 2008-05-03 2015-08-18 Cinsay, Inc. Method and system for generation and playback of supplemented videos
US9332302B2 (en) 2008-01-30 2016-05-03 Cinsay, Inc. Interactive product placement system and method therefor
US9451010B2 (en) 2011-08-29 2016-09-20 Cinsay, Inc. Containerized software for virally copying from one endpoint to another
US20160292183A1 (en) * 2015-04-01 2016-10-06 Ck&B Co., Ltd. Server computing device and image search system based on contents recognition using the same
US20160371634A1 (en) * 2015-06-17 2016-12-22 Tata Consultancy Services Limited Computer implemented system and method for recognizing and counting products within images
US9576218B2 (en) * 2014-11-04 2017-02-21 Canon Kabushiki Kaisha Selecting features from image data
US9607330B2 (en) 2012-06-21 2017-03-28 Cinsay, Inc. Peer-assisted shopping
US9697504B2 (en) 2013-09-27 2017-07-04 Cinsay, Inc. N-level replication of supplemental content
US10055768B2 (en) 2008-01-30 2018-08-21 Cinsay, Inc. Interactive product placement system and method therefor
US20180294011A1 (en) * 2013-05-20 2018-10-11 Intel Corporation Elastic cloud video editing and multimedia search
CN109716286A (en) * 2016-08-16 2019-05-03 电子湾有限公司 Determine the item with confirmed feature
CN110324590A (en) * 2019-08-08 2019-10-11 北京中呈世纪科技有限公司 A kind of Information-based Railway system pattern recognition device and its recognition methods
CN111339744A (en) * 2015-07-31 2020-06-26 小米科技有限责任公司 Ticket information display method, device and storage medium
US10701127B2 (en) 2013-09-27 2020-06-30 Aibuy, Inc. Apparatus and method for supporting relationships associated with content provisioning
US10789631B2 (en) 2012-06-21 2020-09-29 Aibuy, Inc. Apparatus and method for peer-assisted e-commerce shopping
US11087282B2 (en) 2014-11-26 2021-08-10 Adobe Inc. Content creation, deployment collaboration, and channel dependent content selection
US20210256766A1 (en) * 2020-02-13 2021-08-19 Magic Leap, Inc. Cross reality system for large scale environments
CN113409920A (en) * 2021-08-18 2021-09-17 明品云(北京)数据科技有限公司 Data transmission management method and system
CN113720565A (en) * 2021-08-04 2021-11-30 宁波和邦检测研究有限公司 Handrail collision test method and system, storage medium and intelligent terminal
US11227315B2 (en) 2008-01-30 2022-01-18 Aibuy, Inc. Interactive product placement system and method therefor
US11436446B2 (en) * 2018-01-22 2022-09-06 International Business Machines Corporation Image analysis enhanced related item decision
US11562542B2 (en) 2019-12-09 2023-01-24 Magic Leap, Inc. Cross reality system with simplified programming of virtual content
US11632679B2 (en) 2019-10-15 2023-04-18 Magic Leap, Inc. Cross reality system with wireless fingerprints
US11789524B2 (en) 2018-10-05 2023-10-17 Magic Leap, Inc. Rendering location specific virtual content in any location
US11790619B2 (en) 2020-02-13 2023-10-17 Magic Leap, Inc. Cross reality system with accurate shared maps
US11830149B2 (en) 2020-02-13 2023-11-28 Magic Leap, Inc. Cross reality system with prioritization of geolocation information for localization

Families Citing this family (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682091A (en) * 2012-04-25 2012-09-19 腾讯科技(深圳)有限公司 Cloud-service-based visual search method and cloud-service-based visual search system
CN103841438B (en) * 2012-11-21 2016-08-03 腾讯科技(深圳)有限公司 Information-pushing method, information transmission system and receiving terminal for digital television
CN103020231B (en) * 2012-12-14 2018-06-08 北京百度网讯科技有限公司 The local feature of picture is quantified as to the method and apparatus of visual vocabulary
CN103064981A (en) * 2013-01-18 2013-04-24 浪潮电子信息产业股份有限公司 Method for searching images on basis of cloud computing
CN103177102A (en) * 2013-03-22 2013-06-26 北京小米科技有限责任公司 Method and device of image processing
CN104252618B (en) * 2013-06-28 2019-12-13 广州华多网络科技有限公司 method and system for improving photo return speed
RU2647696C2 (en) 2013-10-21 2018-03-16 МАЙКРОСОФТ ТЕКНОЛОДЖИ ЛАЙСЕНСИНГ, ЭлЭлСи Mobile video search
CN103646371A (en) * 2013-11-27 2014-03-19 深圳先进技术研究院 Network sharing-based crime forensics system and method
CN103824053B (en) * 2014-02-17 2018-02-02 北京旷视科技有限公司 The sex mask method and face gender detection method of a kind of facial image
CN103984942A (en) * 2014-05-28 2014-08-13 深圳市中兴移动通信有限公司 Object recognition method and mobile terminal
KR102340251B1 (en) * 2014-06-27 2021-12-16 삼성전자주식회사 Method for managing data and an electronic device thereof
CN104148301B (en) * 2014-07-09 2016-09-07 广州市数峰电子科技有限公司 Waste plastic bottle sorting equipment and method based on cloud computing and image recognition
CN105792010A (en) * 2014-12-22 2016-07-20 Tcl集团股份有限公司 Television shopping method and device based on image content analysis and picture index
WO2016101766A1 (en) * 2014-12-23 2016-06-30 北京奇虎科技有限公司 Method and device for obtaining similar face images and face image information
CN105989628A (en) * 2015-02-06 2016-10-05 北京网梯科技发展有限公司 Method and system for obtaining information through a mobile terminal
US10796196B2 (en) * 2015-03-05 2020-10-06 Nant Holdings Ip, Llc Large scale image recognition using global signatures and local feature information
US9721186B2 (en) 2015-03-05 2017-08-01 Nant Holdings Ip, Llc Global signatures for large-scale image recognition
WO2017000109A1 (en) * 2015-06-29 2017-01-05 北京旷视科技有限公司 Search method, search apparatus, user equipment, and computer program product
CN105095446A (en) * 2015-07-24 2015-11-25 百度在线网络技术(北京)有限公司 Medicine search processing method, server and terminal device
CN105354252A (en) * 2015-10-19 2016-02-24 联想(北京)有限公司 Information processing method and apparatus
US10216868B2 (en) * 2015-12-01 2019-02-26 International Business Machines Corporation Identifying combinations of artifacts matching characteristics of a model design
CN105868238A (en) * 2015-12-09 2016-08-17 乐视网信息技术(北京)股份有限公司 Information processing method and device
CN105515955A (en) * 2015-12-25 2016-04-20 北京奇虎科技有限公司 Chat information distribution method and device
CN106971134A (en) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 Image recognition device and method capable of error correction
CN105740378B (en) * 2016-01-27 2020-07-21 北京航空航天大学 Digital pathology full-section image retrieval method
CN107368826B (en) * 2016-05-13 2022-05-31 佳能株式会社 Method and apparatus for text detection
CN106096520A (en) * 2016-06-02 2016-11-09 乐视控股(北京)有限公司 Information-pushing method and device
CN106250906A (en) * 2016-07-08 2016-12-21 大连大学 Large-scale medical image clustering method based on over-sampling correction
CN106203449A (en) * 2016-07-08 2016-12-07 大连大学 Approximation space clustering system for mobile cloud environments
CN106203514B (en) * 2016-07-12 2019-02-12 腾讯科技(深圳)有限公司 Method and apparatus for image recognition callback notification
CN106203406A (en) * 2016-08-27 2016-12-07 李春华 Identification system based on cloud computing
CN107798358A (en) * 2016-08-29 2018-03-13 杭州海康威视数字技术股份有限公司 Harbour container management method, apparatus and system
CN106227216B (en) * 2016-08-31 2019-11-12 朱明 Home service robot for elderly people living at home
CN107995458B (en) * 2016-10-27 2020-10-27 江苏苏宁物流有限公司 Method and device for shooting packaging process
CN106599250A (en) * 2016-12-20 2017-04-26 北京小米移动软件有限公司 Webpage starting method and device
CN107066247B (en) * 2016-12-29 2020-08-18 世纪龙信息网络有限责任公司 Patch query method and device
CN106970996B (en) * 2017-04-05 2021-02-19 苏华巍 Data analysis system and method
CN107193981A (en) * 2017-05-26 2017-09-22 腾讯科技(深圳)有限公司 Collection file display and processing method and device, computer storage medium and equipment
CN107392238B (en) * 2017-07-12 2021-05-04 华中师范大学 Outdoor plant knowledge expansion learning system based on mobile visual search
CN108021986A (en) * 2017-10-27 2018-05-11 平安科技(深圳)有限公司 Electronic device, multi-model sample training method and computer-readable recording medium
CN107798115A (en) * 2017-11-03 2018-03-13 深圳天珑无线科技有限公司 Image recognition search method and system for a mobile terminal, and mobile terminal
RU2668717C1 (en) * 2017-12-13 2018-10-02 ABBYY Production LLC Generating markup of document images for a training sample
CN108428275A (en) * 2018-01-03 2018-08-21 平安科技(深圳)有限公司 Queuing method, server and storage medium based on face recognition
CN108573067A (en) * 2018-04-27 2018-09-25 福建江夏学院 Matching search system and method for merchandise information
CN109034115B (en) * 2018-08-22 2021-10-22 Oppo广东移动通信有限公司 Video image recognition method, device, terminal and storage medium
CN109166057B (en) * 2018-09-12 2020-05-26 厦门盈趣科技股份有限公司 Scenic spot tour guide method and device
CN111259698B (en) * 2018-11-30 2023-10-13 百度在线网络技术(北京)有限公司 Method and device for acquiring image
CN109766466A (en) * 2018-12-29 2019-05-17 广东益萃网络科技有限公司 Product information query method, device, computer equipment and storage medium
US11494884B2 (en) 2019-02-21 2022-11-08 Canon U.S.A., Inc. Method and system for evaluating image sharpness
CN110009798A (en) * 2019-03-18 2019-07-12 深兰科技(上海)有限公司 Incentive method, device, equipment and medium for article placement, and storage box
CN110374403A (en) * 2019-04-11 2019-10-25 上海济子医药科技有限公司 Stroke safety early-warning door and method therefor
CN110414518A (en) * 2019-06-26 2019-11-05 平安科技(深圳)有限公司 Network address recognition method, device, computer equipment and storage medium
CN110399921B (en) * 2019-07-25 2021-07-20 维沃移动通信有限公司 Picture processing method and terminal equipment
CN110362714B (en) * 2019-07-25 2023-05-02 腾讯科技(深圳)有限公司 Video content searching method and device
CN110532113B (en) * 2019-08-30 2023-03-24 北京地平线机器人技术研发有限公司 Information processing method and device, computer readable storage medium and electronic equipment
CN111782849B (en) * 2019-11-27 2024-03-01 北京沃东天骏信息技术有限公司 Image retrieval method and device
CN111191356A (en) * 2019-12-24 2020-05-22 乐软科技(北京)有限责任公司 Virtual reality-based dim environment detection simulation method
CN111223073A (en) * 2019-12-24 2020-06-02 乐软科技(北京)有限责任公司 Virtual detection system
CN114372835B (en) * 2022-03-22 2022-06-24 佰聆数据股份有限公司 Comprehensive energy service potential customer identification method, system and computer equipment

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5659673A (en) * 1988-12-16 1997-08-19 Canon Kabushiki Kaisha Image processing apparatus
JPH05328121A (en) * 1992-05-20 1993-12-10 Ricoh Co Ltd Method and device for picture processing
US6763148B1 (en) * 2000-11-13 2004-07-13 Visual Key, Inc. Image recognition methods
GB2403363A (en) * 2003-06-25 2004-12-29 Hewlett Packard Development Co Tags for automated image processing
JP2006018551A (en) * 2004-07-01 2006-01-19 Sony Corp Information processing apparatus and method, and program
US7657100B2 (en) * 2005-05-09 2010-02-02 Like.Com System and method for enabling image recognition and searching of images
US7760917B2 (en) * 2005-05-09 2010-07-20 Like.Com Computer-implemented method for performing similarity searches
US8732025B2 (en) * 2005-05-09 2014-05-20 Google Inc. System and method for enabling image recognition and searching of remote content on display
US20080177640A1 (en) * 2005-05-09 2008-07-24 Salih Burak Gokturk System and method for using image analysis and search in e-commerce
US7657126B2 (en) * 2005-05-09 2010-02-02 Like.Com System and method for search portions of objects in images and features thereof
US8849821B2 (en) * 2005-11-04 2014-09-30 Nokia Corporation Scalable visual search system simplifying access to network and device functionality
JP4682030B2 (en) * 2005-11-30 2011-05-11 富士通株式会社 Graphic search program, recording medium recording the program, graphic search device, and graphic search method
WO2008032203A2 (en) * 2006-09-17 2008-03-20 Nokia Corporation Method, apparatus and computer program product for a tag-based visual search user interface
US20080071770A1 (en) * 2006-09-18 2008-03-20 Nokia Corporation Method, Apparatus and Computer Program Product for Viewing a Virtual Database Using Portable Devices
US20080268876A1 (en) * 2007-04-24 2008-10-30 Natasha Gelfand Method, Device, Mobile Terminal, and Computer Program Product for a Point of Interest Based Scheme for Improving Mobile Visual Searching Functionalities
US20080267521A1 (en) * 2007-04-24 2008-10-30 Nokia Corporation Motion and image quality monitor
CN101178773B (en) * 2007-12-13 2010-08-11 北京中星微电子有限公司 Image recognition system and method based on feature extraction and a classifier
US20110225196A1 (en) * 2008-03-19 2011-09-15 National University Corporation Hokkaido University Moving image search device and moving image search program
EP2298176A4 (en) * 2008-06-03 2012-12-19 Hitachi Medical Corp Medical image processing device and method for processing medical image
CN101339601B (en) * 2008-08-15 2011-09-28 张擎宇 License plate Chinese character recognition method based on SIFT algorithm
JP5527503B2 (en) * 2009-02-13 2014-06-18 富士ゼロックス株式会社 Monitoring device, information processing system, and program
US8548231B2 (en) * 2009-04-02 2013-10-01 Siemens Corporation Predicate logic based image grammars for complex visual pattern recognition
US9195898B2 (en) * 2009-04-14 2015-11-24 Qualcomm Incorporated Systems and methods for image recognition using mobile devices
JP2011048438A (en) * 2009-08-25 2011-03-10 Olympus Corp Apparatus and system for processing pathological diagnosis
CN101697232B (en) * 2009-09-18 2012-03-07 浙江大学 SIFT feature reduction method for near-duplicate image matching
CN102063436A (en) 2009-11-18 2011-05-18 腾讯科技(深圳)有限公司 System and method for realizing merchandise information searching by using a terminal to acquire images
CN102110122B (en) * 2009-12-24 2013-04-03 阿里巴巴集团控股有限公司 Method and device for establishing sample picture index table, method and device for filtering pictures and method and device for searching pictures
US20120023131A1 (en) * 2010-07-26 2012-01-26 Invidi Technologies Corporation Universally interactive request for information
WO2012020927A1 (en) * 2010-08-09 2012-02-16 에스케이텔레콤 주식회사 Integrated image search system and a service method therewith
CN102411582B (en) * 2010-09-21 2016-04-27 腾讯科技(深圳)有限公司 Image searching method, device and client
CN101980250B (en) * 2010-10-15 2014-06-18 北京航空航天大学 Method for identifying target based on dimension reduction local feature descriptor and hidden conditional random field
US9122951B2 (en) * 2010-11-01 2015-09-01 Drvision Technologies Llc Teachable object contour mapping for biology image region partition
CN102214222B (en) * 2011-06-15 2013-08-21 中国电信股份有限公司 Presorting and interacting system and method for acquiring scene information through mobile phone
CN102685091B (en) 2011-11-28 2015-08-19 曙光信息产业(北京)有限公司 10-Gigabit Ethernet gearbox FIFO read-write control and fault-tolerance system
CN102682091A (en) * 2012-04-25 2012-09-19 腾讯科技(深圳)有限公司 Cloud-service-based visual search method and cloud-service-based visual search system

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8036497B2 (en) * 2005-03-01 2011-10-11 Osaka Prefecture University Public Corporation Method, program and apparatus for storing document and/or image using invariant values calculated from feature points and method, program and apparatus for retrieving document based on stored document and/or image
US7860317B2 (en) * 2006-04-04 2010-12-28 Microsoft Corporation Generating search results based on duplicate image detection
US20080222201A1 (en) * 2007-03-08 2008-09-11 Microsoft Corporation Digital media metadata management
US20080263015A1 (en) * 2007-04-23 2008-10-23 Weigen Qiu Generalized Language Independent Index Storage System And Searching Method
US7961986B1 (en) * 2008-06-30 2011-06-14 Google Inc. Ranking of images and image labels
US20100080469A1 (en) * 2008-10-01 2010-04-01 Fuji Xerox Co., Ltd. Novel descriptor for image corresponding point matching
US20100195914A1 (en) * 2009-02-02 2010-08-05 Michael Isard Scalable near duplicate image search with geometric constraints
US20100331041A1 (en) * 2009-06-26 2010-12-30 Fuji Xerox Co., Ltd. System and method for language-independent manipulations of digital copies of documents through a camera phone
US20110182515A1 (en) * 2010-01-27 2011-07-28 Sony Corporation Learning device, learning method, identifying device, identifying method, and program
US20120314959A1 (en) * 2011-06-10 2012-12-13 Steven White Image Scene Recognition
US20120327203A1 (en) * 2011-06-21 2012-12-27 Samsung Electronics Co., Ltd. Apparatus and method for providing guiding service in portable terminal

Cited By (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9351032B2 (en) 2008-01-30 2016-05-24 Cinsay, Inc. Interactive product placement system and method therefor
US10055768B2 (en) 2008-01-30 2018-08-21 Cinsay, Inc. Interactive product placement system and method therefor
US9986305B2 (en) 2008-01-30 2018-05-29 Cinsay, Inc. Interactive product placement system and method therefor
US9674584B2 (en) 2008-01-30 2017-06-06 Cinsay, Inc. Interactive product placement system and method therefor
US9332302B2 (en) 2008-01-30 2016-05-03 Cinsay, Inc. Interactive product placement system and method therefor
US9338499B2 (en) 2008-01-30 2016-05-10 Cinsay, Inc. Interactive product placement system and method therefor
US9338500B2 (en) 2008-01-30 2016-05-10 Cinsay, Inc. Interactive product placement system and method therefor
US9344754B2 (en) 2008-01-30 2016-05-17 Cinsay, Inc. Interactive product placement system and method therefor
US10425698B2 (en) 2008-01-30 2019-09-24 Aibuy, Inc. Interactive product placement system and method therefor
US11227315B2 (en) 2008-01-30 2022-01-18 Aibuy, Inc. Interactive product placement system and method therefor
US10438249B2 (en) 2008-01-30 2019-10-08 Aibuy, Inc. Interactive product system and method therefor
US10986412B2 (en) 2008-05-03 2021-04-20 Aibuy, Inc. Methods and system for generation and playback of supplemented videos
US9113214B2 (en) 2008-05-03 2015-08-18 Cinsay, Inc. Method and system for generation and playback of supplemented videos
US10225614B2 (en) 2008-05-03 2019-03-05 Cinsay, Inc. Method and system for generation and playback of supplemented videos
US9210472B2 (en) 2008-05-03 2015-12-08 Cinsay, Inc. Method and system for generation and playback of supplemented videos
US9813770B2 (en) 2008-05-03 2017-11-07 Cinsay, Inc. Method and system for generation and playback of supplemented videos
US11005917B2 (en) 2011-08-29 2021-05-11 Aibuy, Inc. Containerized software for virally copying from one endpoint to another
US10171555B2 (en) 2011-08-29 2019-01-01 Cinsay, Inc. Containerized software for virally copying from one endpoint to another
US9451010B2 (en) 2011-08-29 2016-09-20 Cinsay, Inc. Containerized software for virally copying from one endpoint to another
US9607330B2 (en) 2012-06-21 2017-03-28 Cinsay, Inc. Peer-assisted shopping
US10789631B2 (en) 2012-06-21 2020-09-29 Aibuy, Inc. Apparatus and method for peer-assisted e-commerce shopping
US10726458B2 (en) 2012-06-21 2020-07-28 Aibuy, Inc. Peer-assisted shopping
US11837260B2 (en) 2013-05-20 2023-12-05 Intel Corporation Elastic cloud video editing and multimedia search
US11056148B2 (en) * 2013-05-20 2021-07-06 Intel Corporation Elastic cloud video editing and multimedia search
US20180294011A1 (en) * 2013-05-20 2018-10-11 Intel Corporation Elastic cloud video editing and multimedia search
US20150073919A1 (en) * 2013-09-11 2015-03-12 Cinsay, Inc. Dynamic binding of content transactional items
US10559010B2 (en) 2013-09-11 2020-02-11 Aibuy, Inc. Dynamic binding of video content
US11074620B2 (en) * 2013-09-11 2021-07-27 Aibuy, Inc. Dynamic binding of content transactional items
US9875489B2 (en) 2013-09-11 2018-01-23 Cinsay, Inc. Dynamic binding of video content
US9953347B2 (en) 2013-09-11 2018-04-24 Cinsay, Inc. Dynamic binding of live video content
US11763348B2 (en) 2013-09-11 2023-09-19 Aibuy, Inc. Dynamic binding of video content
US10268994B2 (en) 2013-09-27 2019-04-23 Aibuy, Inc. N-level replication of supplemental content
US10701127B2 (en) 2013-09-27 2020-06-30 Aibuy, Inc. Apparatus and method for supporting relationships associated with content provisioning
US9697504B2 (en) 2013-09-27 2017-07-04 Cinsay, Inc. N-level replication of supplemental content
US11017362B2 (en) 2013-09-27 2021-05-25 Aibuy, Inc. N-level replication of supplemental content
US20150138385A1 (en) * 2013-11-18 2015-05-21 Heekwan Kim Digital annotation-based visual recognition book pronunciation system and related method of operation
US9462175B2 (en) * 2013-11-18 2016-10-04 Heekwan Kim Digital annotation-based visual recognition book pronunciation system and related method of operation
US9576218B2 (en) * 2014-11-04 2017-02-21 Canon Kabushiki Kaisha Selecting features from image data
US11087282B2 (en) 2014-11-26 2021-08-10 Adobe Inc. Content creation, deployment collaboration, and channel dependent content selection
US20160292183A1 (en) * 2015-04-01 2016-10-06 Ck&B Co., Ltd. Server computing device and image search system based on contents recognition using the same
US9934250B2 (en) * 2015-04-01 2018-04-03 Ck&B Co., Ltd. Server computing device and image search system based on contents recognition using the same
US20160371634A1 (en) * 2015-06-17 2016-12-22 Tata Consultancy Services Limited Computer implemented system and method for recognizing and counting products within images
US10510038B2 (en) * 2015-06-17 2019-12-17 Tata Consultancy Services Limited Computer implemented system and method for recognizing and counting products within images
CN111339744A (en) * 2015-07-31 2020-06-26 小米科技有限责任公司 Ticket information display method, device and storage medium
CN109716286A (en) * 2016-08-16 2019-05-03 电子湾有限公司 Determining an item with confirmed characteristics
US11436446B2 (en) * 2018-01-22 2022-09-06 International Business Machines Corporation Image analysis enhanced related item decision
US11789524B2 (en) 2018-10-05 2023-10-17 Magic Leap, Inc. Rendering location specific virtual content in any location
CN110324590A (en) * 2019-08-08 2019-10-11 北京中呈世纪科技有限公司 Image recognition device for railway informatization systems and recognition method therefor
US11632679B2 (en) 2019-10-15 2023-04-18 Magic Leap, Inc. Cross reality system with wireless fingerprints
US11562542B2 (en) 2019-12-09 2023-01-24 Magic Leap, Inc. Cross reality system with simplified programming of virtual content
US11748963B2 (en) 2019-12-09 2023-09-05 Magic Leap, Inc. Cross reality system with simplified programming of virtual content
US11790619B2 (en) 2020-02-13 2023-10-17 Magic Leap, Inc. Cross reality system with accurate shared maps
US11830149B2 (en) 2020-02-13 2023-11-28 Magic Leap, Inc. Cross reality system with prioritization of geolocation information for localization
US20210256766A1 (en) * 2020-02-13 2021-08-19 Magic Leap, Inc. Cross reality system for large scale environments
CN113720565A (en) * 2021-08-04 2021-11-30 宁波和邦检测研究有限公司 Handrail collision test method and system, storage medium and intelligent terminal
CN113409920A (en) * 2021-08-18 2021-09-17 明品云(北京)数据科技有限公司 Data transmission management method and system

Also Published As

Publication number Publication date
CN102682091A (en) 2012-09-19
CN103377287A (en) 2013-10-30
US20150046483A1 (en) 2015-02-12
SG2014007280A (en) 2014-03-28
WO2013159722A1 (en) 2013-10-31
WO2014005451A1 (en) 2014-01-09
US9411849B2 (en) 2016-08-09
CN103377287B (en) 2016-09-07

Similar Documents

Publication Publication Date Title
US20140254942A1 (en) Systems and methods for obtaining information based on an image
US20210397838A1 (en) Systems and methods for image-feature-based recognition
US9336459B2 (en) Interactive content generation
US9535930B2 (en) System and method for using an image to provide search results
CN109643318B (en) Content-based searching and retrieval of brand images
US20170024384A1 (en) System and method for analyzing and searching imagery
US9092458B1 (en) System and method for managing search results including graphics
JP5395920B2 (en) Search device, search method, search program, and computer-readable recording medium storing the program
CN108959586A (en) Identifying text vocabulary in response to a visual query
US9613059B2 (en) System and method for using an image to provide search results
WO2010071617A1 (en) Method and apparatus for performing image processing
US20230044463A1 (en) System and method for locating products
KR101910825B1 (en) Method, apparatus, system and computer program for providing an image retrieval model
Pal et al. Hybrid features of Tamura texture and shape-based image retrieval
KR20150096552A (en) System and method for providing online photo gallery service by using photo album or photo frame
Angeli et al. Making paper labels smart for augmented wine recognition
Patel Visual search application for Android
AU2013273790A1 (en) Heterogeneous feature filtering

Legal Events

Date Code Title Description
AS Assignment

Owner name: TENCENT TECHNOLOGY (SHENZHEN) CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIU, HAILONG;XIAO, BIN;CHA, WEN;REEL/FRAME:030560/0095

Effective date: 20130529

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION