US20100030578A1 - System and method for collaborative shopping, business and entertainment - Google Patents

System and method for collaborative shopping, business and entertainment

Info

Publication number
US20100030578A1
US20100030578A1 · US12/409,074 · US40907409A
Authority
US
United States
Prior art keywords
user
users
model
apparel
exemplary embodiment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/409,074
Inventor
M.A. Sami Siddique
Abida Raouf
Abdul Aziz Raouf
Jesse Smith
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dressbot Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US12/409,074 (US20100030578A1)
Assigned to DRESSBOT, INC. (ASSIGNMENT OF ASSIGNORS INTEREST; SEE DOCUMENT FOR DETAILS). Assignors: RAOUF, ABDUL AZIZ; RAOUF, ABIDA; SIDDIQUE, M. A. SAMI; SMITH, JESSE
Publication of US20100030578A1
Priority to US13/612,593 (US10002337B2)
Priority to US13/834,888 (US20130215116A1)
Priority to US15/087,323 (US10872322B2)
Priority to US17/128,657 (US11893558B2)
Current legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063: Operations research, analysis or management
    • G06Q 10/0637: Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
    • G06Q 20/00: Payment architectures, schemes or protocols
    • G06Q 20/08: Payment architectures
    • G06Q 20/12: Payment architectures specially adapted for electronic shopping systems
    • G06Q 20/20: Point-of-sale [POS] network systems
    • G06Q 20/204: Point-of-sale [POS] network systems comprising interface for record bearing medium or carrier for electronic funds transfer or payment credit
    • G06Q 20/38: Payment protocols; Details thereof
    • G06Q 20/40: Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q 30/00: Commerce
    • G06Q 30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q 30/06: Buying, selling or leasing transactions
    • G06Q 30/0601: Electronic shopping [e-shopping]
    • G06Q 40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/12: Accounting
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/02: Non-photorealistic rendering
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 10/00: ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H 10/60: ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor

Definitions

  • the embodiments described herein relate generally to immersive online shopping, entertainment, business, travel and product modeling, in particular to a method and system for modeling of apparel items online in a collaborative environment.
  • the methods and systems described herein relate to online methods of collaboration in community environments.
  • the methods and systems are related to an online apparel modeling system that allows users to have three-dimensional models of their physical profile created. Users may purchase various goods and/or services and collaborate with other users in the online environment.
  • FIG. 1 is a block diagram of the components of a shopping, entertainment, and business system
  • FIG. 2 is a block diagram of the components of a computing device
  • FIG. 3 is a block diagram of the components of a server application
  • FIG. 4 is a block diagram of the components of a data store
  • FIG. 5 is a flowchart diagram of an access method
  • FIG. 6A-J illustrate the model generation method
  • FIG. 7A-D illustrate the modes of operation in a collaborative environment
  • FIG. 8 is an image of a sample main page screen for shopping
  • FIG. 9 is an image of a sample upload window for data for model generation
  • FIG. 10 is an image of a sample local application window and a sample browser window
  • FIG. 11 is an image of a sample facial synthesis window
  • FIG. 12A is an image of a sample measurement window
  • FIG. 12B is an image of a sample constructed photorealistic model
  • FIG. 12C is another image of a sample constructed photorealistic model
  • FIG. 13A is an image of a set of non-photorealistic renderings of the user model shown from different viewpoints
  • FIG. 13B is an image showing a sample mechanism that allows users to make body modifications directly on the user model using hotspot regions
  • FIG. 13C is an image showing a sample ruler for taking measurements of the user model
  • FIG. 14 is an image of a sample environment manager
  • FIG. 15A is an image of a sample user model environment
  • FIG. 15B is an image illustrating sample features of collaborative shopping
  • FIG. 16 is a sample image of a component of a Shopping Trip management panel
  • FIG. 17 is an image of a sample friends manager window
  • FIG. 18 is an image of a sample friendship management window
  • FIG. 19 is an image of a sample chat window
  • FIG. 20 is an image of a sample collaborative environment
  • FIG. 21A-G are images illustrating Split-Bill features
  • FIG. 22 is an image of a sample apparel display window
  • FIG. 23 is an image of a shared item window
  • FIG. 24 is an image of a sample fitting room window in a browser window
  • FIG. 25 is an image of a sample wardrobe item
  • FIG. 26 is an image of a sample wardrobe consultant window
  • FIG. 27 is an image describing a sample instance of user interaction with the wardrobe and fitting room
  • FIG. 28 is an image of a sample 3D realization of a virtual wardrobe
  • FIG. 29A is an image showing sample visual sequences displayed to a user while the apparel and hair are being modeled and fitted on the user model.
  • FIG. 29B is an image illustrating sample mechanisms available to the user for making body adjustments to their user model
  • FIG. 29C is an image showing sample product catalogue views available to the user and a sample mechanism for trying on a product in the catalogue on the user model;
  • FIG. 30 is an image showing sample visualization schemes for fit information with respect to the body surface
  • FIG. 31 is an image of a sample browser main page screen and a sample local application screen, showing sample features
  • FIG. 32 is an image of a sample user model environment
  • FIG. 33 is an image of a sample user model environment with sample virtual components
  • FIG. 34 is an image where a sample user model music video is shown
  • FIG. 35 is an image showing sample manipulations of a user model's expressions and looks
  • FIG. 36 is an image of a sample virtual store window showing virtual interaction between a user and a sales service representative
  • FIG. 37 is an outline of a sample ADF file in XML format
  • FIG. 38 is a flowchart diagram that provides an overview of ADF file creation and use
  • FIG. 39A is an image of a sample procedure for a user to gain access to friends on system 10 from the user's account on a social networking site such as Facebook;
  • FIG. 39B is an image of a sample user account page on system 10 before a user has logged into Facebook;
  • FIG. 39C is an image of a sample page for accessing a social networking site (Facebook) through system 10 ;
  • FIG. 39D is an image of a sample user account page on system 10 after a user has logged into Facebook;
  • FIG. 40 is a sample image of a Shopping Trip management panel
  • FIG. 41A-F are snapshots of a sample realization of the system discussed with reference to FIG. 20 ;
  • FIG. 42 illustrates a sample interaction between various parties using system 10 ;
  • FIG. 43 is an image illustrating sample features of the hangout zone
  • FIG. 44 is an image of a sample main page in the hangout zone
  • FIG. 45 is an image of a sample style browser display window
  • FIG. 46A is an image of another sample main page for shopping
  • FIG. 46B is an image of a sample store window
  • FIG. 46C is an image of another sample store window
  • FIG. 46D is an image of a sample shopping trip window
  • FIG. 46E is an image of a user's sample personalized looks window
  • FIG. 46F is an image of a sample fitting room window
  • FIG. 46G is an image of another sample fitting room window
  • FIG. 46H is an image of a sample shopping diary window
  • FIG. 46I is an image of a sample directory page
  • FIG. 47A-B are sample images illustrating a feature that allows users to customize the look and feel of the browser application
  • FIGS. 48A-F are images illustrating sample layout designs and select features of system 10 ;
  • FIGS. 49A-O are images illustrating sample features of the AFMS/VOS
  • FIG. 49L is an image of the sample storage structure of the AFMS/VOS
  • FIG. 49M is an image of a sample user accounts management structure within the AFMS/VOS
  • FIG. 49N is an image that shows sample abstraction of a search query that is fed into the search engine that is a part of the AFMS/VOS;
  • FIG. 49O is an image of a sample implementation of the AFMS/VOS as a website
  • FIG. 49P is an image of a sample application management structure within the AFMS/VOS
  • FIG. 49Q is an image of an exemplary embodiment of file tagging, sharing, and searching features in the VOS/AFMS;
  • FIG. 49R is a sample image of a user interface for filtering search data
  • FIG. 49S is a sample image of an interface to the object oriented file system
  • FIG. 50 illustrates a sample mobile communication system when a user is in a store
  • FIG. 51A illustrates a sample communication network demonstrating external connections to system 10 ;
  • FIG. 51B illustrates a sample flowchart showing the operation of the VS
  • FIG. 52A illustrates an image/video/audio analysis module for generic scene analysis
  • FIG. 52B illustrates a method for detecting surprise
  • FIG. 53 illustrates a sample interface for broadcasting and collaborative communication
  • FIG. 54A-F illustrate novel devices for human-computer interaction
  • FIG. 55 illustrates an exemplary embodiment of a method for audio/video/text summarization
  • FIG. 56 illustrates a sample usage of a collaborative VS application
  • the embodiments of the systems and methods described herein may be implemented in hardware or software, or a combination of both. However, preferably, these embodiments are implemented in computer programs executing on programmable computers, each comprising at least one processor, a data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
  • the programmable computer may be a mainframe computer, server, personal computer, laptop, personal data assistant, or cellular telephone.
  • a program code is applied to input data to perform the functions described herein and generate output information.
  • the output information is applied to one or more output devices, in known fashion.
  • Each program is preferably implemented in a high level procedural or object-oriented programming and/or scripting language to communicate with a computer system.
  • the programs can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language.
  • Each such computer program is preferably stored on a storage media or a device (e.g. ROM or magnetic diskette), readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer to perform the procedures described herein.
  • the inventive system may also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
  • system, processes and methods of the described embodiments are capable of being distributed in a computer program product comprising a computer-readable medium that bears computer-usable instructions for one or more processors.
  • the medium may be provided in various forms, including one or more diskettes, compact disks, tapes, chips, wireline transmissions, satellite transmissions, internet transmissions or downloadings, magnetic and electronic storage media, digital and analog signals, and the like.
  • the computer-usable instructions may also be in various forms, including compiled and non-compiled code.
  • Referring to FIG. 1, a block diagram illustrating the components of an online apparel modeling and collaboration system 10 is shown in an exemplary embodiment.
  • the modeling system 10 allows users to have three-dimensional models created that are representative of their physical profile.
  • the three-dimensional models are herein referred to as user models or character models, and are created based on information provided by the user. This information includes, but is not limited to, any combination of: images; movies; measurements; outlines of feet, hands, and other body parts; moulds/imprints including those of feet, hands, ears, and other body parts; scans such as laser scans; skin tone, race, gender, weight, hair type etc.; high resolution scans and images of the eyes; motion capture data (mocap).
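  • To make the range of inputs above concrete, the following is a minimal, illustrative sketch of how such a submission might be grouped into a single record; the field names are hypothetical and not taken from the specification.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelInput:
    """Illustrative container for the kinds of data a user might supply
    for model generation (field names are hypothetical)."""
    images: List[str] = field(default_factory=list)         # full-body and head photos
    measurements: dict = field(default_factory=dict)        # e.g. {"height_cm": 172}
    body_outlines: List[str] = field(default_factory=list)  # outlines of feet, hands, etc.
    laser_scans: List[str] = field(default_factory=list)
    mocap_files: List[str] = field(default_factory=list)    # motion capture data
    skin_tone: Optional[str] = None
    gender: Optional[str] = None
    weight_kg: Optional[float] = None

# Example: a sparse submission containing only photos and a few measurements.
submission = ModelInput(
    images=["front.jpg", "side.jpg", "face.jpg"],
    measurements={"height_cm": 172, "waist_cm": 80},
    gender="female",
)
print(submission.measurements)
```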
  • the users may then edit and manipulate the user models that are created.
  • the user models may then be used to model items of apparel.
  • the virtual modeling of apparel provides the user with an indication regarding the suitability of the apparel for the user.
  • the items of apparel may include, but are not limited to, items of clothing, jewelry, footwear, accessories, hair items, watches, and any other item that a user may adorn.
  • the user is provided with various respective functionalities when using the system 10 .
  • the functionalities include, but are not limited to, generating, viewing and editing three-dimensional models of users, viewing various apparel items placed on the three-dimensional models, purchasing apparel items, interacting with other members of online communities, sharing the three-dimensional models and sharing the apparel views with other members of the online communities.
  • the online modeling system 10 in an exemplary embodiment comprises one or more users 12 who interact with a respective computing device 14 .
  • the computing devices 14 have resident upon them or associated with them a client application 16 that may be used in the model generation process as described below.
  • the respective computing devices 14 communicate with a portal server 20 .
  • the portal server 20 is implemented on a computing device and is used to control the operation of the system 10 and the user's interaction with other members of the system 10 in an exemplary embodiment.
  • the portal server 20 has resident upon it or has associated with it a server application 22 .
  • the portal server 20 interacts with other servers that may be administered by third parties to provide various functionalities to the user.
  • the online modeling system 10 interacts with retail servers 24 , community servers 26 , entertainment servers 23 , media agency servers 25 , and financial institution servers 27 in a manner that is described below.
  • the portal server 20 has resident upon it or associated with it an API (Application Programming Interface) 21 that would allow external applications from external vendors, retailers and other agencies not present in any of the servers associated with system 10 , to install their software/web applications. Validation procedures may be enforced by the portal server to grant appropriate permissions to external applications to connect to system 10 .
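  • By way of illustration only, a permission check of the kind described above might look like the following sketch; the token scheme, application registry and function names are assumptions rather than details from the specification.

```python
import hashlib
import hmac

# Hypothetical registry of external applications approved by the portal server.
REGISTERED_APPS = {
    "acme-retail": {"secret": b"s3cret-key", "permissions": {"catalog:read", "orders:create"}},
}

def validate_request(app_id: str, signature: str, payload: bytes, permission: str) -> bool:
    """Grant access only if the app is registered, its request signature verifies,
    and it has been granted the requested permission."""
    app = REGISTERED_APPS.get(app_id)
    if app is None:
        return False
    expected = hmac.new(app["secret"], payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    return permission in app["permissions"]

# Example: an external retailer asking to read the product catalogue.
payload = b'{"action": "list_items"}'
sig = hmac.new(b"s3cret-key", payload, hashlib.sha256).hexdigest()
print(validate_request("acme-retail", sig, payload, "catalog:read"))  # True
```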
  • the users 12 of the system 10 may be any individual that has access to a computing device 14 .
  • the computing device 14 is any computer type device, and may include a personal computer, laptop computer, handheld computer, phone, wearable computer, server type computer and any other such computing devices.
  • the components of the computing device 14 in an exemplary embodiment are described in greater detail with regard to FIGS. 2 to 56 .
  • the computing application 16 is a software application that is resident upon or associated with the computing device 14 .
  • the computing application 16 allows the user to access the system and to communicate with the respective servers.
  • the computing application aids in the rendering process that generates the three-dimensional user model as is described below.
  • the user accesses the system through a web browser, as the system is available on the Internet. Details on the web browser and computing application interaction are described with reference to FIG. 10 .
  • the communication network 18 is any network that provides for connectivity between respective computing devices.
  • the communication network 18 may include, but is not limited to, local area networks (LAN), wide area networks (WAN), an Intranet or the Internet.
  • the communication network 18 is the Internet.
  • the network may include portions or elements of telephone lines, Ethernet connections, ISDN lines, optical-data transport links, wireless data links, wireless cellular links and/or any suitable combination of the same and/or similar elements.
  • the portal server 20 is a server-type computing device that has associated with it a server application 22 .
  • the server application 22 is a software application that is resident upon the portal server 20 and manages the system 10 as described in detail below.
  • the components of the software application 22 are described in further detail below with regard to FIG. 3 .
  • the retail server 24 is a server-type computing device that may be maintained by a retailer that has an online presence.
  • the retail server 24 in an exemplary embodiment has access to information regarding various items of apparel that may be viewed upon the three-dimensional model.
  • the retail server 24 may be managed by an independent third party that is independent of the system 10 .
  • the retail server 24 may be managed by the portal server 20 and server application 22 .
  • the community server 26 may be a server that implements community networking sites with which the system 10 may interact. Such sites may include sites where users interact with one another on a social and community level. Through interacting with community server 26 , the system 10 allows for members of other online communities to be invited to be users of the system 10 .
  • the entertainment server 23 in an exemplary embodiment, may be a server that provides gaming facilities and services; functions as a database of movies and music (new and old releases); contains movie related media (video, images, audio, simulations) and music videos; provides up-to-date information on movie showtimes, ticket availability etc. on movies released in theatres as well as on music videos, new audio/video releases; houses entertainment related advertisement content etc.
  • the media agency server 25 may be linked with media stations, networks as well as advertising agencies.
  • the financial institution server 27 in an exemplary embodiment may be linked with financial institutions and provides service offerings available at financial institutions and other financial management tools and services relevant to online and electronic commerce transactions. These include facilities for split-bill transactions, which will be described later. Services also include providing financial accounts and keeping track of financial transactions, especially those related with the purchase of products and services associated with system 10 .
  • FIG. 2 a block diagram illustrating the components of a computing device in an exemplary embodiment is shown.
  • the computing device 14 in an exemplary embodiment, has associated with it a network interface 30 , a memory store 32 , a display 34 , a central processing unit 36 , an input means 38 , and one or more peripheral devices 40 .
  • the network interface 30 enables the respective device to communicate with the communication network 18 .
  • the network interface 30 may be a conventional network card, such as an Ethernet card, wireless card, or any other means that allows for communication with the communication network 18 .
  • the memory store 32 is used to store executable programs and other information and may include storage means such as conventional disk drives, hard drives, CD ROMS, or any other non-volatile memory means.
  • the display 34 allows the user to interact with the system 10 and may be a monitor-type, projection-type, multi-touch display or tablet device.
  • the CPU 36 is used to execute instructions and commands that are loaded from the memory store 32 .
  • the input devices 38 allow users to enter commands and information into the respective device 14 .
  • the input devices 38 may include, but are not limited to, any combinations of keyboards, a pointing device such as a mouse, or other devices such as microphones and multi-touch devices.
  • the peripheral devices 40 may include, but are not limited to, devices such as printers, scanners, and cameras.
  • Referring to FIG. 3, a block diagram illustrating the components of a server application is shown in an exemplary embodiment.
  • the modules that are described herein are described for purposes of example as separate modules to illustrate functionalities that are provided by the respective server application 22 .
  • the server application 22 in an exemplary embodiment has associated with it a modeling module 50 , a community module 52 , a management module 54 , an environment module 56 , a retailer module 58 , a shopping module 60 , a wardrobe module 62 , an advertising module 64 , an entertainment module 66 , and a financial services module 68 .
  • the server application 22 interacts with a data store 70 that is described in further detail with regard to FIG. 4 .
  • the data store 70 is resident upon the server in an exemplary embodiment and is used to store data related to the system 10 as described below. Each of these modules may have a corresponding module on the computing device 14 and/or in the client application 16 . Computational load (and/or storage data) may be shared across these modules or exclusively handled by one. In an exemplary embodiment, the cloth modeling and rendering can be handled by the local application.
  • the modeling module 50 is used to generate a three-dimensional model of a user.
  • the user model as described below is generated based on a user's physical profile as provided through information of the user including, but not limited to images, movies, outlines of feet, hands, and other body parts; moulds/imprints including those of feet, hands, ears, and other body parts; scans such as laser scans; skin tone, race, gender, weight, hair type, high resolution scans and images of the eyes; motion capture data, submitted measurements, and modifications made to the generated model.
  • the three-dimensional image may first be created based on one or more two-dimensional images that are provided by the user (these include full body images and images of the head from one or more perspectives).
  • These images are passed on to a reconstruction engine to generate a preliminary three-dimensional model.
  • physical characteristics of the user are extracted.
  • the physical characteristics are used to generate a preliminary three-dimensional model of the user.
  • This preliminary model is then optimized.
  • the 3D surface of the preliminary model may be modified to better match the user's physical surface.
  • the modification to the mesh is made using Finite Element Modeling (FEM) analysis by setting reasonable material properties (e.g., stiffness) for different regions of the face surface and growing/shrinking regions based on extracted features of the face. Further, user-specified optimization is also performed.
  • This process involves user specifications regarding the generated model, and further techniques described below.
  • the modeling module 50 combines the generated three-dimensional profile from the user's electronic image, with the user-specified features and the user modifications to form a three-dimensional profile as is described in detail below. Users can update/re-build their model at a later point in time as well. This is to allow the user to create a model that reflects changes in their physique such as growth, aging, weight loss/gain etc. with the passage of time. Additionally, the system 10 may incorporate prediction algorithms that apply appropriate changes brought about by the growth and aging process to a given user model. Prediction algorithms that display changes in the user model after weight loss would also be accommodated by system 10 . A sketch of the overall model-generation flow is given below.
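```python
def extract_characteristics(images):
    """Stand-in for feature extraction from the user's photos
    (e.g. silhouette proportions, face landmarks)."""
    return {"height_ratio": 7.5, "shoulder_to_hip": 1.4}

def build_preliminary_model(characteristics):
    """Stand-in for the reconstruction engine: returns a coarse parametric body."""
    return {"mesh": "template_body", "params": dict(characteristics)}

def refine_model(model, measurements, user_edits):
    """Stand-in for FEM-style surface refinement plus user-specified adjustments."""
    model["params"].update(measurements)
    model["params"].update(user_edits)
    return model

def generate_user_model(images, measurements, user_edits):
    characteristics = extract_characteristics(images)
    preliminary = build_preliminary_model(characteristics)
    return refine_model(preliminary, measurements, user_edits)

# The stages mirror the description above: images in, characteristics extracted,
# preliminary model built, then refined with measurements and user edits.
model = generate_user_model(
    images=["front.jpg", "side.jpg"],
    measurements={"waist_cm": 80},
    user_edits={"arm_length_cm": 60},
)
print(model["params"])
```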
  • the user model can be incorporated with the personality or style aspects of the user or of another person that the user chooses.
  • system 10 can learn the walking style of the user and apply it to the virtual model.
  • the accent of a celebrity may be learned and applied to the speech/dialogues of the model. In an exemplary embodiment, this can be accomplished using bilinear models as discussed in papers 1 and 2 .
  • the modeling module 50 also allows the user to view items of apparel that have been displayed upon the user model that has been generated. The user is able to see how items of apparel appear on their respective model, and how such items fit.
  • the module enables photorealistic modeling of apparel permitting life-like simulation (in terms of texture, movement, color, shape, fit etc.) of the apparel.
  • the modeling module 50 is able to determine where certain items of apparel may not fit appropriately, and where alterations may be required. Such a determination is indicated to the user in exemplary embodiment through visual indicators such as, but not limited to, arrows on screen, varying colors, digital effects including transparency/x-ray vision effect where the apparel turns transparent and the user is able to examine fit in the particular region.
  • the modeling module 50 also provides the user with the functionality to try on various items of apparel and for the simulated use of cosmetic products, dental products and various hair and optical accessories. Users are able to employ virtual make-up applicators to apply cosmetic products to user models.
  • Virtual make-up applicators act as virtual brushes that simulate real cosmetic brushes and can be used to select product(s) from a catalogue (drag product) and apply them (drop product) onto a user model's face. This is accomplished, in an exemplary embodiment, by warping or overlaying the predefined texture map corresponding to the product onto the face using a technique similar to that used in [1].
  • the texture map could be parameterized as a function of user characteristics such as skin tone, shape of face.
  • the user is also presented with the option of letting the system apply selected product(s) to the user model's face.
  • the face texture map is processed (using digital signal processing techniques, in an exemplary embodiment) to create the effect of a given cosmetic product.
  • an additional texture layer is applied with the desired effect on top of the existing face texture map.
  • a correspondence between a cosmetic product and its effect on the user model allows users to visualize the effect of applying a given cosmetic product (This also applies to hair, dental and optical products).
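  • One plausible realization of the 'additional texture layer' idea is straightforward alpha blending of a product texture over the face texture map; the sketch below assumes both maps are already aligned in the same UV space and is not drawn from the specification.

```python
import numpy as np

def apply_cosmetic_layer(face_texture, product_layer, alpha_mask):
    """Blend a cosmetic product layer onto a face texture map.

    face_texture:  HxWx3 float array in [0, 1] (the model's face texture)
    product_layer: HxWx3 float array in [0, 1] (e.g. a blush or eye-shadow map)
    alpha_mask:    HxW   float array in [0, 1] giving per-pixel opacity,
                   restricting the product to cheeks, eyelids, etc.
    """
    alpha = alpha_mask[..., np.newaxis]
    return (1.0 - alpha) * face_texture + alpha * product_layer

# Tiny synthetic example: a uniform skin tone with a soft red layer on one patch.
face = np.zeros((64, 64, 3)) + [0.80, 0.65, 0.55]
blush = np.zeros((64, 64, 3)) + [0.85, 0.35, 0.40]
mask = np.zeros((64, 64))
mask[30:50, 10:30] = 0.4          # partially opaque region standing in for a cheek
result = apply_cosmetic_layer(face, blush, mask)
print(result.shape, float(result.min()), float(result.max()))
```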
  • the module suggests the most suitable choice of cosmetic products as well as the procedure and tools of application to enhance/flatter a user's look. Suggestions will also be provided along similar lines for dental, hair and optical products. Additionally, real-time assistance is provided to the user for application of cosmetic products.
  • the user can visualize themselves on their monitor or other available display device while applying make-up (as in a mirror) and, at the same time, interact with a real-time process that is pre-programmed to act as a fashion consultant, which guides the user toward optimal looks and gives feedback on their look as they apply make-up.
  • the application collects real-time video, image and other data from the webcam.
  • the application provides text, audio, visual and/or other type of information to guide the user through the optimal make-up application procedure given the specific parameters.
  • the user can also specify other objective and subjective criteria regarding the look they want to achieve such as the occasion for the look, the type of look, the cosmetic product brands, time needed for application etc.
  • the application provides specific feedback related to the existing make-up that the user has already put on.
  • the application may advise the user to use a matte foundation based on their skin type (program computes metrics involving illumination and reflection components based on the face image to assess the oiliness of the skin) or to use upward strokes while applying blush based on their cheek configuration (algorithms that localize contouring regions and/or assess concavities on face regions are used).
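  • The illumination/reflection metric for oiliness is not spelled out in the specification; one simple proxy, sketched here under that assumption, is the fraction of bright, low-saturation (specular-looking) pixels in the face region.

```python
import numpy as np

def specular_fraction(face_rgb, brightness_thresh=0.85, saturation_thresh=0.15):
    """Rough proxy for skin oiliness: proportion of pixels that are very bright
    and nearly unsaturated, i.e. likely specular highlights.

    face_rgb: HxWx3 float array in [0, 1], cropped to the face region.
    """
    max_c = face_rgb.max(axis=2)
    min_c = face_rgb.min(axis=2)
    saturation = np.where(max_c > 0, (max_c - min_c) / np.maximum(max_c, 1e-6), 0.0)
    specular = (max_c > brightness_thresh) & (saturation < saturation_thresh)
    return float(specular.mean())

# A face image with more highlight pixels scores higher, which could trigger
# a suggestion such as "use a matte foundation".
face = np.random.default_rng(0).uniform(0.3, 0.7, size=(128, 128, 3))
face[:20, :20] = 0.95             # synthetic highlight patch
print(round(specular_fraction(face), 3))
```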
  • the automatic make-up applicator/advisor can present a virtual palette of cosmetic products on the monitor or display device and allow the users to select the colours/products of their choice.
  • the program can perform a virtual ‘make-over’ of the user.
  • the application uses the real-time video of the user available through the webcam or other forms of video/images captured by other forms of video/image capture devices; identifies the different facial features and applies the appropriate cosmetic products (cheeks with blush, eyelids with eye shadow) to the video/image of the user and presents it on the display. If it involves streaming video content of the user, as in the case of a webcam, the user can visualize the cosmetic application process in real-time as it is carried out by the application on the user's face on the display.
  • a real fashion consultant is also able to assist the user in a similar manner in achieving the desired looks with cosmetic products, using the webcam and/or other video or image capture feature.
  • the effect of applying cosmetic products can be achieved by moving the face texture map corresponding to the user model, or an image of the user, closer to an average face. This can be accomplished by applying PCA (Principal Components Analysis [2]) and removing the higher order components, or by computing the Fourier transform of the user model's texture map or the user's image and removing the higher frequency components.
  • a similar technique can also be used to identify a user's beauty by looking at the weights of the higher order principal components. Effect of applying beauty products can be more realistically simulated by looking at the principal components before and after the application of a cosmetic product on a number of users and then applying the same change to the given user's texture model or the user's image. The user can thus get assistance in applying cosmetic products not simply on a 2D or 3D virtual effigy of their self but also on their actual face. This increases the interactivity and precision of the cosmetic application process for the user.
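  • A minimal sketch of the 'move toward the average face' idea, using a plain NumPy SVD in place of a full PCA over a real face-texture dataset (the data below is synthetic):

```python
import numpy as np

def smooth_toward_average(faces, target, k=5):
    """Project a face (flattened texture map or image) onto the first k principal
    components of a face dataset, discarding the higher-order components.

    faces:  (n_samples, n_pixels) matrix of flattened faces
    target: (n_pixels,) face to be smoothed toward the average
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # principal directions
    components = vt[:k]
    coeffs = components @ (target - mean)      # project the target face
    return mean + components.T @ coeffs        # reconstruct without fine detail

# Synthetic stand-in data: 50 "faces" of 32x32 grayscale pixels.
rng = np.random.default_rng(1)
faces = rng.normal(0.5, 0.1, size=(50, 32 * 32))
target = rng.normal(0.5, 0.1, size=32 * 32)
smoothed = smooth_toward_average(faces, target, k=5)
# The smoothed face lies closer to the dataset mean than the original did.
print(np.linalg.norm(smoothed - faces.mean(axis=0)) < np.linalg.norm(target - faces.mean(axis=0)))
```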
  • the user is also able to choose from various hairstyles that are available for selection.
  • the modeling module 50 then causes the user model to be displayed with the hairstyle that has been selected by the user.
  • the user may change the hairstyle of the model, and apply hair products that affect the appearance of hair.
  • the selections of hair styles and other products by the user may be made based on hair styles that are featured from various respective hair salons.
  • the module enables photorealistic modeling of hair permitting life-like simulation (in terms of texture, movement, color, shape etc.) of the model's hair.
  • the modeling module 50 also allows the user to specify various actions and activities that the user model is to undertake.
  • the model may be made to move in a variety of environments with various patterns of movement to provide to the user a better idea of how the model appears in different settings or environments.
  • the user is able to perform various manipulations of the various parts of the user model in an exemplary embodiment.
  • the user is presented in an exemplary embodiment with specified activity choices that the user may wish the model to engage in. Examples of such activities include, but are not limited to singing, speech and dancing.
  • the users in an exemplary embodiment join a network upon which their models are placed into a common 3D environment. Any information related to interaction between the user models such as location of the model in the environment, occlusion, model apparel, motion/activity information related to the model is transmitted to each computing application either directly or via a server.
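  • The interaction state described above (location, occlusion, apparel, motion/activity) could be serialized as a small update message exchanged between clients or relayed via a server; the field names in the sketch below are assumptions.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelStateUpdate:
    """Illustrative per-model update exchanged between clients (directly or via a server)."""
    user_id: str
    position: tuple          # (x, y, z) location in the shared 3D environment
    orientation_deg: float   # heading of the model
    apparel_ids: list        # items currently worn by the model
    activity: str            # e.g. "walking", "dancing", "idle"

def encode_update(update: ModelStateUpdate) -> str:
    return json.dumps(asdict(update))

def decode_update(message: str) -> ModelStateUpdate:
    return ModelStateUpdate(**json.loads(message))

msg = encode_update(
    ModelStateUpdate("user-42", (1.5, 0.0, -3.2), 90.0, ["dress-7", "shoes-3"], "walking")
)
print(decode_update(msg).activity)  # walking
```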
  • the community module 52 allows the user to interact with other users of the system 10 or with members of other community networks.
  • the community module 52 allows users to interact with other users through real-time communication. Messages can also be exchanged offline.
  • the user can interact with other users through their virtual character model.
  • the model can be dressed up in apparel, make-up and hairstyles as desired by the user and involved in interaction with other users.
  • the user can animate character expressions, movements and actions as it communicates. This is done via a set of commands (appearing in a menu or other display options) to which the model has been pre-programmed to respond.
  • a menu of mood emoticons (happy, angry, surprised, sad etc.) and action icons (wave, side-kick, laugh, salsa move, pace etc.) is presented to the user to enact on their virtual model while using it to communicate/interact with other users.
  • the expressions/movements/actions of the character model can be synchronized with the user's intentions which are communicated to the model in the form of text, speech, or other information.
  • the user may type or say the word laugh and the model will respond by laughing.
  • Another technique used for animating the model's expressions/movements/actions includes tracking the user's expressions/movements/actions through the use of a webcam, video camera, still camera and/or other video or image capture device and applying the same expressions/movements/actions to the character model (synchronized application or after a delay).
  • the character may be programmed to respond to visual cues and/or expressions and/or tone and/or mood of the user by putting on the appropriate expressions, acting accordingly and delivering the effect of the user input.
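  • In its simplest form, the command-driven animation described above is a lookup from a typed or spoken cue to a pre-programmed animation; the mapping and animation names below are illustrative only.

```python
# Hypothetical mapping from user commands/cues to pre-programmed model animations.
COMMAND_TO_ANIMATION = {
    "laugh": "anim_laugh",
    "wave": "anim_wave",
    "happy": "anim_smile",
    "angry": "anim_frown",
    "salsa": "anim_salsa_move",
}

def animate_from_input(text: str, default: str = "anim_idle") -> str:
    """Return the animation to play for the user's typed or spoken input."""
    for word in text.lower().split():
        word = word.strip("!,.?")
        if word in COMMAND_TO_ANIMATION:
            return COMMAND_TO_ANIMATION[word]
    return default

print(animate_from_input("that is hilarious, laugh!"))  # anim_laugh
```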
  • speech or text input to a user model may also be provided through a mobile phone.
  • the community interaction features of the system 10 allow the user to share views of the user model with other users. By sharing the user model with other users, the user is able to request and receive comments, ratings and general feedback regarding the respective apparel items and style choices made by the user. Receiving feedback and comments from other users enhances the user's experience with the system by simulating a real world shopping experience.
  • When interacting with other users of the system 10 , the community module 52 allows users to interact with one another through use of their respective models.
  • the community module 52 further includes chat functionality that allows users to participate in text, video or voice communication with other users of the system 10 .
  • the chat application may allow automatic translation to enable users who speak different languages to communicate.
  • users may interact with other users through engaging in collaborative virtual shopping trips as described in detail herein. Users can share their models with other users or build models of other people and shop for items for other people too. This feature would prove useful in the case of gift-giving.
  • Another feature in this module includes a ‘hangout’ zone—a social networking, events planning and information area. This is a feature which assists users in organizing and coordinating social events, conferences, meetings, social gatherings and other activities.
  • Users can initiate new events or activities in the hangout zone and send virtual invites to people in their network and other users as well. The users can then accept or decline invites and confirm if they can make it to the event.
  • Event/activity/occasion information and description including, but not limited to, details such as the theme, location, venue, participants, attendees, news and other articles related to the event, photos, videos and other event related media, user feedback and comments etc can be posted and viewed in the hangout zone. Suggestions on what to wear and/or bring to the event and where to buy it are also featured.
  • This zone will also feature upcoming events and shows, music bands/groups and celebrities coming to town.
  • a map feature will be integrated to help users locate the venue of the event and get assistance with directions.
  • the zone will also feature information on the area surrounding the venue of the event such as nearby restaurants, shopping plazas, other events in proximity of the venue etc.
  • groups of users can coordinate excursion to movies. Users can start a new thread (i.e., create a new item page) in the hangout zone regarding visiting the theatre on a particular date. Invitees can then vote for the movie they want to watch, post news, ratings and other media items related to the movies; share views in celebrity or movie apparel on the page; discuss and chat with other users regarding their plans.
  • Information provided by the entertainment servers 23 and media agency servers 25 will be used to keep content relating to movies, shows, and other entertainment venues updated in the hangout zone.
  • special events such as weddings and sports events may be planned in the hangout zone
  • sample bridal outfits may be displayed in the zone for members of the group organizing the wedding, in the form of images, or on the virtual model of the bride or on mannequins etc.
  • Apparel suggestions may be provided to the bride and groom, for example, based on the season, time of day the wedding is held, whether the event is indoor/outdoor, the budget allocated for the outfits, etc.
  • Suggestions on bridesmaids' dresses and other outfits may be provided based on what the bride and groom are wearing and other factors such as the ones taken into account while suggesting bride and groom outfits.
  • a digital calendar may be featured in the hangout zone indicating important timing information regarding the event such as number of days left for the event, other important days surrounding the events etc. To-do and/or itemized lists which may be sorted according to days preceding the event may also be featured in the hangout zone.
  • a facility may be provided for incorporating information from other calendars such as Google™ Calendar™ or Microsoft™ Outlook™ etc. and/or for linking these calendars within the hangout zone.
  • a virtual assistant, which is a 3D simulation of a real or fictional character, may be present in the hangout zone for purposes of providing information, help, and suggestions. The virtual assistant would be present to make interaction in the hangout zone more 'human'.
  • an event profile page in the hangout zone is shown in FIG.
  • An image/video/simulation 726 describing/related to the event can be uploaded on the page.
  • the event title and brief information 727 regarding the time, location, venue and other information related to the event is displayed.
  • a digital calendar is available to the moderators of the event for marking important dates and noting associated tasks.
  • An example note 729 is shown that lists the important dates for the month and appears when the user clicks on the name of the month in the calendar, in an exemplary embodiment. The note shows the number of days left before the event and the important dates and tasks associated with the event as marked by the user.
  • a facility is also available for members to join the event profile page to view the progress of preparation of the event, take part in discussions and other activities surrounding the event using the features and facilities available in the hangout zone.
  • the member profile images/videos/simulations and/or name and/or other information would be displayed in a panel 730 on the event page, in an exemplary embodiment. The viewer may scroll the panel using the left/right control 731 , shown in an exemplary embodiment to browse all members of the event. These members would also include the invitees for the event. Invitations for the event can be sent to the invitees via the hangout zone. These members will be asked questions related to the status of their attendance such as if they plan to attend the event or not, whether they are unsure or undecided and similar questions.
  • Invitees may send the host or event planner (i.e., the source of invitation) an RSVP confirming attendance via real-time notification, email, SMS, phone, voice message, and similar communication means.
  • the RSVP may contain other information such as accompanying guests, outfit the invitee plans to wear, whether they need transportation assistance in order to get to the event, tips for event planning and other such information related to the invitee with respect to the event.
  • the system processes payments from the user.
  • the system processes the documents.
  • a window/dialog/pop-up 734 may appear with a larger image view of the member and details on member event status including fields such as attendance, member's event outfit, guest accompanying the invitee to the event etc.; and/or member profile information.
  • Icon 735 in this dialog/pop-up window allows the member viewing the invitee's profile and event status 734 to invite him/her on a shopping trip via a real-time notification, email, SMS, phone call or message and other means of messaging, while icon 736 indicates whether the invitee is online and allows the member viewing the invitee's profile to invite the invitee to chat or to send the invitee a message.
  • Members on the event page can also get details of the venue and the area where the event is being held by clicking on the ‘area info’ section 737 as shown in an exemplary embodiment.
  • a pop-up/dialog/window 738 opens up showing location and venue information on a map; places of interest in the vicinity of the event such as eateries, hangouts, and other scheduled public events.
  • a discussion forum facility 739 allows members of the event to start topic threads and discuss various event related topics. Members can view all the discussion topics and categories, active members of the discussion forum and view online members for engaging in discussions/chats/real-time interaction with. Members in the hangout zone can take advantage of the shopping and virtual modeling facility available via system 10 to shop online for apparel and other needs for the event. Invitees may shop for gifts via the electronic gift registry available as part of the event planning services.
  • Shopping assistance panels 741 and 742 provide tips, relevant event shopping and assistance categories, display relevant advertisement and other information, and provide other shopping help. Specific examples include event outfit, and gift ideas; listings, reviews and assistance in seeking event venue, organizers, decorators, fashion boutiques, car rentals etc.
  • FIG. 44 depicts some of the facilities in a browser window 745 , that users can navigate to in the hangout zone
  • the left and right panel menus, 746 and 747 respectively, indicate some of the different online venues that the user can visit on system 10 .
  • These include museums, studios, movies, parks, tours and other venues as well as stores, which will take the user to the shopping module 60 on system 10 .
  • These facilities may be simulated environments which users can visit or virtual events which users may participate in via their virtual characters or directly. Alternatively, these facilities can be mapped to real physical venues which may be equipped with cameras and other visual equipment to facilitate real-time browsing and access to the facility via system 10 .
  • users may participate in a virtual tour of a real museum or a historical site. Users may watch a live video feed (or hear live audio feed) of a graduation ceremony or a musical concert or a hockey match or weddings and other community, social, business, entertainment, education events. Translation of video feeds in multiple languages is also available to members. Users can choose to view the event in the original language or in the translated version. Translations may be provided by other members of the system in real-time (during live transmission) or after the event. Users can choose which member's translation to listen to during the event. Ratings of member translators may be available to guide this decision.
  • Translations can be provided either as subtitles or audio dubbing in an exemplary embodiment.
  • Translations may be computer-generated. This may be done, in an exemplary embodiment, by converting speech to text, the text to translated text, and the translated text to speech in the new language, as sketched below.
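```python
def speech_to_text(audio_bytes: bytes, language: str) -> str:
    """Placeholder for an automatic speech recognition service."""
    return "hello everyone"                              # stubbed transcript

def translate_text(text: str, source: str, target: str) -> str:
    """Placeholder for a machine translation service."""
    return {"fr": "bonjour à tous"}.get(target, text)    # stubbed translation

def text_to_speech(text: str, language: str) -> bytes:
    """Placeholder for a speech synthesis service."""
    return text.encode("utf-8")                          # stand-in for synthesized audio

def translate_feed(audio_bytes: bytes, source: str, target: str) -> bytes:
    """Chain the three stages: speech -> text -> translated text -> speech."""
    transcript = speech_to_text(audio_bytes, source)
    translated = translate_text(transcript, source, target)
    return text_to_speech(translated, target)

print(translate_feed(b"...", source="en", target="fr"))
```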
  • users can obtain information and details regarding specific real events and/or places and/or facilities of interest to them such as music festivals, concerts, fairs and exhibitions, movie studios, games, historical sites etc in the hangout zone. For details on these facilities, refer to the environment module 56 and its descriptions in this document.
  • the facilities mentioned in FIG. 44 may manifest themselves as the different types of environments described with reference to the environment module 56 .
  • a map facility 748 is available which provides digital/animated representations of a virtual world containing virtual facilities in the hangout zone and/or fictional mappings of real facilities in virtual worlds.
  • Real location and area maps and venue information of the real places and events as well as driving directions to events and venues are provided to assist users.
  • the hangout zone may be linked to other websites that provide map, location and area information.
  • Users can obtain assistance 749 , which may be real-time/live, on what places they can visit, on what's new, special attractions, upcoming events, on activities in the hangout zone etc.
  • Users may send event invitations 750 to friends, as mentioned previously. These can be invitations for real events or events that users can participate in through system 10 such as games, virtual tours, virtual fashion shows and other events and activities.
  • Users may examine 751 other invitees to a particular event and see who else is participating in an event or activity or has confirmed attendance. Users may also obtain the latest weather and traffic updates 752 as well as all traffic and weather information relevant to a given event/venue/activity. Users may attend and participate in live virtual events in real time where they can meet celebrities and get their autographs signed digitally.
  • the events described in the hangout zone are not meant to be limited to the hangout zone or any specific space but are described as such in order to illustrate activities that can be carried out in a social networking space.
  • the event management module may be used in conjunction or integrated with a guest validation system.
  • a guest validation system would assist in ascertaining if guests arriving at an event are confirmed attendees or invitees to the event.
  • guests can enter their name and password (which may be issued with the electronic invitation sent by the system, upon payment of event registration fees where required) either at a terminal or using their handheld.
  • invitees can have a print out of an entry or invitation card with a bar code (issued with the electronic invitation) which can be swiped at the event for entry. This would be most useful in cases where an event requires registration and a fee to register.
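  • A sketch of how such a check-in might work, assuming the electronic invitation carries a barcode value and an optional password recorded when the invitation was issued (the storage scheme below is illustrative):

```python
import hashlib

# Hypothetical guest registry built when electronic invitations are issued.
GUEST_LIST = {
    "INV-0001": {
        "name": "A. Guest",
        "password_hash": hashlib.sha256(b"tulip42").hexdigest(),
        "registration_paid": True,
    },
}

def validate_by_barcode(barcode: str) -> bool:
    """Entry-terminal check for the barcode on a printed invitation card."""
    guest = GUEST_LIST.get(barcode)
    return guest is not None and guest["registration_paid"]

def validate_by_credentials(barcode: str, password: str) -> bool:
    """Check a name/password pair entered at a terminal or on a handheld."""
    guest = GUEST_LIST.get(barcode)
    if guest is None:
        return False
    return hashlib.sha256(password.encode()).hexdigest() == guest["password_hash"]

print(validate_by_barcode("INV-0001"))                 # True
print(validate_by_credentials("INV-0001", "tulip42"))  # True
```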
  • This invention incorporates additional collaborative features such as collaborative viewing of videos or photos or television and other synchronized forms of multimedia sharing.
  • Users may select and customize their viewing environments, and/or background themes and skins for their viewer. They may select and invite other users to participate in synchronized sessions for sharing videos, and other multimedia.
  • immersive features are provided by system 10 to further facilitate collaboration between users and to make their experience increasingly real and life-like as well as functional and entertaining.
  • users may mark objects in the videos, and write or scribble over the video content as it plays. This feature can be likened to a TV screen that acts as a transparent whiteboard under which a video is playing and on top of which markings can be made or writing is possible.
  • users can further interact by expressing emotions through their character models which may be engaged in the same environment or through emoticons and other animated objects.
  • the user can make their user model smile via a control key for their user model which may be pre-programmed to respond with a smile when the given control key is pressed.
  • Pointing to objects, writing, expressing emotions through emoticons, SMS/text to invite for a shopping trip are actions as part of synchronized collaboration in an exemplary embodiment.
  • the whiteboard feature which permits freehand writing and drawing may be available to users during shopping trips or events and/or for any collaborative interaction and/or real time interaction and/or for enabling users to take electronic notes and/or draft shopping lists and uses described with reference to FIG. 20 in this document.
  • Related content (for example, advertisements) may be placed in the proximity of the drawing screen and/or communicated via audio/speech and/or graphics/images/videos.
  • a ‘virtual showcase’ will allow users to showcase and share their talent and/or hand-made items (handiwork) and/or hobbies with online users.
  • users can upload digital versions of their art work which may include any form of art work such as paintings or handicrafts such as knit and embroidered pieces of work; handmade products such as wood-work, origami, floral arrangements; culinary creations and associated recipes; and any form of outcome or product or result of a hobby or sport. All the above are meant to be exemplary embodiments of items that can be displayed in the virtual showcase.
  • users can post/showcase videos demonstrating feats of skateboarding or instructional videos or animations for cooking, and other talents.
  • the virtual showcase may contain virtual art galleries, in an exemplary embodiment, featuring art-work of users. Members may be able to browse the virtual art gallery and the gallery environment may be simulated such that it gives the users the illusion of walking in a real art gallery.
  • the art galleries may be simulated 2D or 3D environments, videos, images or any combination thereof and/or may include components of augmented reality. Users can also adorn their virtual rooms and other 2D or 3D spaces with their virtual artwork.
  • the management module 54 allows the user to control and manage their account and settings associated with their account.
  • the user may reset his/her password and enter and edit other profile and preference information that is associated with the user.
  • the profile and preference information that is provided by the user may be used to tailor apparel items, or combinations of apparel items for the user.
  • the environment module 56 allows the user to choose the virtual environment in which to place their user model.
  • As the system 10 allows users to visualize how various apparel items will appear when they are wearing them, the ability to choose respective virtual environments further aids the user in this visualization process. For example, where a user's 3-D model is used to determine the suitability of evening wear or formal wear, the user is better able to appreciate the modeling where a formal background is provided.
  • the virtual environments may be static image or dynamic backgrounds or three-dimensional or multi-dimensional environments, or any suitable combination of the above.
  • a dynamic background could include an animated sequence or a video or a virtual reality experience.
  • Images or animations or video or other multimedia that are represented by the respective environments may include, but are not limited to, vacation destinations, tourist destinations, historical sites, natural scenery, period themes (the 60s, 70s, Contemporary era etc.), entertainment venues, athletic facilities, runways for modeling, etc.
  • the environments that are provided by the system 10 may be customized and tailored by the users. Specifically, users may be provided the option of removing or adding components associated with the environment and to alter backgrounds in the environments. For example, with respect to adding and/or removing physical components, where a living room environment is being used and is provided to the system 10, various components associated with the living room may be added, deleted or modified. With respect to the addition of components, components such as furniture and fixtures may be added through functionality provided to the user.
  • the user in an exemplary embodiment is provided with drag and drop functionality that allows the user to drag the various components into an environment, and out of an environment.
  • the drag-and-drop functionality may incorporate physics based animation to enhance realism.
  • the users may specify where things are placed in an environment.
  • the users are able to choose from a listing of components that they wish to add.
  • the respective components that are chosen and placed in the virtual environments may be associated with respective companies that are attempting to promote their products. For example, where a user has placed a sofa in their virtual environment, the user may view the selections of sofas that may be placed in the virtual environment and each sofa that may be selected will have information pertaining to it that will help the user decide whether to place it in their virtual environment.
  • Advertisements may be displayed in these environments and thus, these environments would serve as an advertising medium.
  • a billboard in the background may exhibit a product ad or people in the environment may wear apparel displaying logos of the brand being advertised.
  • Virtual environments may also represent or incorporate part or whole of a music video or movie or game scene or animation or video.
  • User models would have the ability to interact with virtual embodiments of movie characters and celebrities. As an example, the user model may be placed in a fight scene from a movie.
  • Another feature that would be supported by the entertainment environments is to allow users to purchase apparel and other items shown in the particular movie. For example, the user could purchase apparel worn by the characters in the movie or the cars driven in the movie or the mobile phones used in the movie. Additionally, users could replace the characters in the movie or music video with their user models.
  • the model would be able to reproduce the exact behaviour (dialogue, movements, actions, expressions) of the original character.
  • the system 10 may provide the user with pre-rendered scenes/environments where the music and environment cannot be manipulated to a great degree by the user but where rendering of the character model can occur so that it can be inserted into the scene, its expressions/actions can be manipulated and it can be viewed from different camera angles/viewpoints within the environment.
  • Users can save or share with other users the various manifestations of their user model after manipulating/modifying it and the animation/video sequence containing the model in various file formats.
  • the modified user model or the animation/video sequence can then be exported to other locations including content sharing sites or displayed on the profile page.
  • the user may indicate their display status through the use of their character model with the appropriate backdrop and other digital components. For instance, users may indicate that they are reading a given book by displaying their model on their profile page reading a book against a backdrop that reflects the theme of the book or their model may be engaged with other models in an act from the book or a play or a movie that they are watching.
  • a feature encompassing a virtual space/environment where virtual fashion shows are held is available through system 10 .
  • Professional and amateur designers can display their collections on virtual models in virtual fashion shows.
  • the virtual models and virtual environments can be custom made to suit the designer's needs and/or virtual models of real users and celebrities may be employed.
  • Auctions and bidding can take place in these virtual spaces for apparel modeled in the fashion shows.
  • Groups of users can also participate in virtual fashion shows in a shared environment using their 3D models to showcase apparel.
  • the whole or part of a virtual environment may incorporate physics based animation effects to enhance realism of the environment, its contents and interaction with the user.
  • an environment representing a basketball court could be integrated with physics based animation effects.
  • the motion dynamics of the basketball players, the ball, the basket etc. would be based on the physics of real motion and thus, the game sequence would appear realistic.
  • Users are also able to select their own environment, and may upload their own environment to be used in the system 10 .
  • the system 10 also includes simulated shopping environments. An animated navigation menu is provided so that the user may locate stores/stalls of interest.
  • the shopping environment may be represented by components of a virtual mall which may contain simulations of components of real stores, or it may be a simulated representation of a real mall which may contain other animated virtual components.
  • the environment may be presented as a virtual reality animation/simulation which may contain video/simulations/images of actual/real stores and components; or it may be presented as a real-time or streaming video or a video/series of images of a real mall with animated stores and components; or as a virtual reality simulation of a real store.
  • System 10 recommends stores to visit based on specific user information such as profession, gender, size, likes/dislikes etc. For instance, for a short female, the system can recommend browsing petite fashion stores.
  • the system can point out to the user if a product is available in the user's size as the user is browsing products or selecting products to view.
  • the system may also point out the appropriate size of the user in a different sizing scheme, for example, in the sizing scheme of a different country (US, EUR, UK etc.).
  • the system also takes into account user fit preferences. For instance, a user may want clothes to be a few inches looser than his/her actual fit size.
  • the system would add the leeway margin, as specified by the user, to the user's exact fit apparel size in order to find the desired fit for the user.
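  • A minimal sketch of applying such a leeway margin (the measurement names and inch-based units are assumptions for illustration only):

        def apply_fit_preference(exact_fit: dict, leeway_inches: float) -> dict:
            """Add the user-specified leeway to each exact-fit body measurement (inches)."""
            return {measure: value + leeway_inches for measure, value in exact_fit.items()}

        # e.g. a user who wants clothes two inches looser than their exact fit
        desired = apply_fit_preference({"chest": 38.0, "waist": 32.0}, leeway_inches=2.0)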
  • a user who wishes to view and/or model apparel items may select from the various items of apparel through a shopping environment such as a store or a mall.
  • the models are allowed to browse the virtual store environment by selecting and inspecting items that are taken from the respective racks and shelves associated with the virtual environment.
  • physics based animation can be incorporated to make the shopping environment, its contents and user interaction with the environment realistic.
  • the clothes in the shelves and racks can be made to appear realistic by simulating real texture and movement of cloth.
  • a live feed can be provided to users from real stores regarding the quantity of a particular item.
  • This information can be conveyed numerically, for example, or an animation of a shelf/rack containing the actual number of items in inventory can be displayed, or a video of the real store with the items on the shelf can be displayed to the user.
  • the live feed feature can be used by the source supplying the apparel to convey other information such as store/brand promotions, special offers, sales, featured items etc. (not restricted to real-time inventory information).
  • the shopping environment can include other stores and fixtures and other items found in a real shopping mall to simulate/replicate real shopping environments as closely as possible.
  • food stores and stalls may be augmented in the virtual shopping environment. These ‘virtual food stores’ could represent simulations or images/videos of fictional or non-fictional stores.
  • These virtual stores would serve as an advertising medium for food brands and products as well as superstores, restaurants, corner stores or any other place providing a food service, manufacturing or serving as the retail outlet for a food brand.
  • Virtual characters acting as store personnel offer virtual samples of ‘featured food products’, just as in a real mall setting.
  • Other items found in real shopping environments that are incorporated include fountains, in an exemplary embodiment. These virtual fountains can be incorporated with physics based animation techniques to simulate water movement as in a real fountain.
  • Store personnel such as sales representatives and customer service representatives are represented by virtual characters that provide online assistance to the user while shopping, speak and orchestrate movements in a manner similar to real store personnel and interact with the user model.
  • An ‘augmented reality display table’ is featured by system 10 where vendors can display their products to the customer and interact with the customer. For example, a jewelry store salesperson may pick out a ring from the glass display to show the user. A salesperson in a mobile phone store may pick out a given phone and demonstrate specific features. At the same time, specifications related to the object may be displayed and compared with other products. Users also have the ability to interact with the object in 2D, 3D or higher dimensions. The salesperson and customer may interact simultaneously with the object. Physics based modeling may also be supported.
  • This display table may be mapped to a real store and the objects virtually overlaid.
  • indoor game facilities such as ice-skating rinks, golf parks, basketball courts etc. may also be included; environments that simulate these facilities virtually will be available.
  • Users can engage their models in these activities and participate in a game with other users.
  • the user can see other ‘people’ in a virtual mall.
  • These may represent real users or fictional virtual characters. The user will have the option to set their user model as invisible or visible so that their model can be viewed by other users browsing the mall.
  • this collaborative environment works as follows:
  • the local application 271 provides a visualization engine.
  • Webcam content from the customers and the sales personnel may be integrated into or used in conjunction with the engine.
  • Where 3D product models are available, they can be used interactively via the common mode or other modes of operation, as discussed with reference to FIG. 7 , for example.
  • webcam views may be used either directly or converted to models based on webcam images (using techniques similar to those discussed in [3] for going from sketch to model in exemplary embodiment). These models/images can then be used in the visualization engine.
  • Interaction with the engine can take place using conventional input/output (I/O) devices such as a keyboard and a mouse, or using I/O devices discussed with reference to FIG. 54 .
  • Video capturing devices may be used to capture the view of a counter or a product display in the store, for example. This content may be transmitted both to the salesperson and the customer. Either party can then augment this content with their own input. The customer may also bring in objects into this augmented world, for example, for colour or style matching. Augmentation may be accomplished using techniques similar to those in [4].
  • the collaborative environment described here with reference to FIG. 36 may be thought of as a 3D version of the collaborative environment described with reference to FIG. 20 . All of the tools available in the collaborative environment discussed with reference to FIG. 20 may be available in the collaborative environment of FIG. 36 .
  • the various respective virtual environments that are used may all have associated with them various multimedia files that may be linked to the respective environments.
  • music, or video files may be linked or embedded into the respective environments.
  • the system 10 may also allow for downloading of music (and other audio files) from a repository of music, in an exemplary embodiment, that may then be played while the user is navigating and/or interacting with their respective environment.
  • the user will have the option of selecting music from the repository and downloading tracks or directly playing the music from a media player within the browser.
  • audio files can also run seamlessly in the environment. These can be set by the sponsor of an environment. For example, in a virtual music store environment, the store sponsor can play tracks of new releases or specials being advertised.
  • the soundtrack of the movie could play within the environment.
  • These tracks can be customized according to the sponsor or user.
  • the sponsor of the environment and the music or media files sponsor do not necessarily have to be the same.
  • the user may be given control over the type of media files that are played within or linked with an environment.
  • the medium may also be an online radio. The radio may be mapped to real radio stations. Users have the option to share media files (name, description and other information associated with the file and/or actual content) with their social network or send links of the source of the media files. Users can also order and purchase media files that they are listening to online.
  • a ‘buy now’ link would be associated with the media file that would take the user to the transaction processing page to process the purchase of the media file online.
  • Users may create their own 3D or 2D virtual spaces by adding virtual components from catalogues.
  • a user may rent or buy virtual rooms (2D or 3D) from a catalogue and add virtual furniture, virtual artwork, virtual home electronics such as a TV, refrigerator, oven, washing machine, home entertainment system etc. and other components.
  • the user may add rooms to create a home with outdoor extensions such as a patio and backyard to which components may also be added.
  • Users may visit other users' virtual spaces and environments.
  • Users may also buy virtual food products, which may be stored in virtual refrigerators or stores. These virtual food products may be designed such that they decrease over time and eventually finish or become spoilt if unused ‘virtually’.
  • purchasing a bag of virtual rice may be equivalent to donating a bag of virtual rice as food aid to developing countries.
  • Users may furnish their rooms with objects that change or grow with time such as plants.
  • the user may buy a virtual seed and over time, the seed would grow into a full-size virtual plant.
  • the virtual plant may be designed such that it grows automatically or upon proper caretaking by the user such as providing virtual water, nutrients, sunlight and other necessities to the plant. This would help users to become more empathic and acquire useful skills such as gardening or caretaking.
  • Florists and greenhouses may also find this feature useful. They may design virtual plants and flowers such that their requirements are mapped to the real plants or flowers they represent. For instance, roses may require specific nutrients, soil types, sunlight duration etc. for their proper growth.
  • virtual rose plants may be designed to grow only if provided with the necessities (virtual) that real roses require. Thus, these virtual plants would prove useful as instructional or training tools for people who would like to learn how to cultivate specific plants properly before purchasing real plants.
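  • A minimal sketch of how such requirement-mapped growth could be modelled (the class, its fields and the thresholds below are illustrative assumptions, not the system's stated implementation): growth advances only on days when the virtual necessities mapped from the real plant are met.

        from dataclasses import dataclass

        @dataclass
        class VirtualPlant:
            """Virtual plant whose care requirements mirror the real plant it represents."""
            species: str
            daily_water_ml: float          # requirement mapped from the real plant
            daily_sunlight_hours: float
            growth: float = 0.0            # 0.0 = seed, 1.0 = fully grown
            days_to_maturity: int = 90

            def update_day(self, water_ml: float, sunlight_hours: float) -> None:
                # Growth advances only when the day's care meets the mapped requirements.
                if water_ml >= self.daily_water_ml and sunlight_hours >= self.daily_sunlight_hours:
                    self.growth = min(1.0, self.growth + 1.0 / self.days_to_maturity)

        rose = VirtualPlant("rose", daily_water_ml=250, daily_sunlight_hours=6)
        rose.update_day(water_ml=300, sunlight_hours=7)   # grows
        rose.update_day(water_ml=50, sunlight_hours=2)    # no growth that day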
  • users may be given scores reflecting how well they care for their virtual plants. Users would also be able to purchase the real plants from florists, greenhouses and other stores subscribing to system 10 , whose information would be available to users. Furthermore, users may buy virtual pets.
  • These virtual pets may be designed to grow on their own or upon proper caretaking by their owners just as in the case of virtual plants. This feature could help users to become better pet caretakers before they buy real pets.
  • the concept of virtual pets can be taken further. Proceeds that are collected from the purchase of virtual pets may be used to support animal shelters or humane societies or animal relief or wildlife conservation efforts.
  • a virtual pet may be mapped to an animal that has been saved as a result of the proceeds collected from the purchase of virtual pets. Users may directly sponsor an animal whose virtual representation they would own upon sponsoring the animal. Users would also receive updates about the welfare of the animal they sponsored (if they are not able to directly own the real animal such as in the case of a wild animal) and about related relief, rescue or conservation efforts associated with similar animals.
  • the retailer module 58 allows the system 10 to interact with the various respective retailers with which the system 10 is associated. Specifically, the retailer module 58 tracks the respective items that may be purchased through use of the system 10 . The retailer module 58 interacts with the retail servers 26 of retailers with respect to product offerings that may be available through the system 10 . Information from the retailer module 58 pertaining to items that can be purchased is acquired by system 10 . This information may be encapsulated in a CAD (Computer Aided Design) file for example.
  • the shopping module 60 allows for users to purchase items that may be viewed and/or modeled.
  • Each retailer in the retailer module 58 may have a customizable store page or virtual store available in the shopping module 60 . Users can administer their page or virtual/online store as discussed with reference to FIG. 42 . Each store can be customized according to the retailer's needs.
  • Retailers may add web and software components to their store available through system 10 . These components include those that would allow the retailer to add featured items, special offers, top picks, holiday deals and other categories of items to their virtual store.
  • the retailer can make available their products for sale through these stores/pages.
  • the users of the system 10 as mentioned above have access to various online product catalogues from virtual stores and/or virtual malls.
  • catalogues may be mapped from virtual stores and/or virtual malls or real stores and/or malls.
  • the user will be asked specific information relating to the shopping interests and style preferences.
  • the shopping module 60 based on the user-specified preferences and information may also make recommendations regarding items of apparel that are based on the user's interests, preference and style that have been determined from previous purchases. This can be accomplished using a variety of machine learning algorithms such as neural networks or support vector machines. Current implementation includes the use of collaborative filtering [5]. Alternatively, Gaussian process methodologies [6] may also be used.
  • the recommendations are made to the user based on information collected on the variables in the user's profile (example: preferences, style, interests) as well as based on the user's purchasing and browsing history.
  • the uncertainty that is computed in closed form using Gaussian process classification is used to express the degree of confidence in the recommendation that is made. This can be expressed using statements like ‘you may like this’ or ‘you will definitely love this’ etc.
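  • As an illustrative sketch only (the item-based collaborative filtering step and the probability-to-phrase thresholds below are assumptions; the document names collaborative filtering [5] and Gaussian process classification [6] without fixing an implementation), recommendation scores and a confidence phrase might be produced as follows:

        import numpy as np

        def item_similarity(purchases: np.ndarray) -> np.ndarray:
            """Cosine similarity between items, from a users x items purchase matrix."""
            norms = np.linalg.norm(purchases, axis=0, keepdims=True) + 1e-9
            unit = purchases / norms
            return unit.T @ unit

        def recommend_scores(purchases: np.ndarray, user: int) -> np.ndarray:
            """Score unpurchased items for one user by similarity to items already bought."""
            sim = item_similarity(purchases)
            scores = sim @ purchases[user]
            scores[purchases[user] > 0] = -np.inf   # do not re-recommend owned items
            return scores

        def confidence_phrase(p_like: float) -> str:
            """Map a predictive probability (e.g. from GP classification) to wording."""
            if p_like > 0.9:
                return "you will definitely love this"
            if p_like > 0.6:
                return "you may like this"
            return "you might want to take a look at this"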
  • the interests of the user may be specified by the user, and alternatively may be profiled by the system 10 based on the user's demographics.
  • the shopping module 60 also provides the user with various search functionalities.
  • the user may perform a search to retrieve apparel items based on criteria that may include, but are not limited to, a description of the apparel including size, price, brand, season, style, occasion, discounts, and retailer.
  • Users can search and shop for apparel based on the look they want to achieve. For example, this could include ‘sporty’, ‘professional’, ‘celebrity’ and other types of looks. Users may also search and shop for apparel belonging to special categories including, but not limited to, maternity wear, uniforms, laboratory apparel etc.
  • Apparel may be presented to the user on virtual mannequins by the shopping module 60 .
  • Other forms of display include a ‘revolving virtual display’ or a ‘conveyor belt display’ etc.
  • a revolving display may assume the form of a glass-like cube or some other shape with a mannequin on each face of the cube/shape showcasing different apparel and/or jewelry.
  • a conveyor belt display may feature virtual mannequins in a window, donning different apparel and/or jewelry. The mannequins may move in the window in a conveyor belt fashion, with a sequence of mannequin displays appearing in the window periodically. The speed of the conveyor belt or the revolving display may be modified. Other displays may be used and other manifestations of the conveyor and revolving display may be used.
  • the mannequins may be replaced by user models or by simply product images and/or other visual/virtual manifestations of the product.
  • Reference is now made to FIG. 45 , where another display scheme—the ‘Style browser’ 755—is shown in an exemplary embodiment.
  • the style browser display operates directly on the user model 650 in that the apparel items in an electronic catalogue are displayed on the user model as the user browses the product catalogue.
  • the user can browse tops in a catalogue in the window section 756 by using the left 757 and right 758 arrow icons.
  • the tops are modeled and displayed directly on the user model 650 .
  • the user is able to examine fit and look information while browsing the catalogue itself.
  • Displayed apparel may be in 2D or 3D format. Users can also view detailed information regarding apparel. For example, this information includes material properties of the apparel such as composition, texture, etc; cloth care instructions; source information (country, manufacturer/retailer); images describing apparel such as micro-level images that reveal texture; etc. Other information assisting the user in making purchasing decisions may also be displayed.
  • the display information for each apparel will also include the return policy for that item.
  • This policy may include terms that are different in the case that an item is returned via postal mail versus if the item is taken to a physical store location for return by the customer.
  • the return policy may be mapped to the terms and conditions of the physical store itself. This would allow a user to purchase something online and still be able to return it at a physical store location.
  • the retailer may specify a different return policy for the apparel when it is bought online as opposed to when it is bought at the physical store.
  • the return policy may also incorporate separate terms and conditions that take into account the requirements of system 10 for returning any given item.
  • matching/coordinating items that go with the items the users are looking at, or items that are in the user's fitting room, shopping cart, or wardrobe, and that fit the user's body and their taste, may be presented to the users.
  • Suggestions on coordinating/matching items may also be made across users. For example, if a bride and a bridegroom go on a shopping trip, a wedding dress for the bride and a corresponding/matching tuxedo for the bridegroom that fit them respectively may be presented.
  • a virtual fitting room is available to the user.
  • the virtual fitting room includes items that the user has selected to try on or fit on their user model and that the user may or may not decide to purchase.
  • the fitting room provides the user with a graphical, simulated representation of a fitting room environment and the apparel items selected for fitting on the user's model. The user can add an item to their fitting room by clicking on an icon next to the item they wish to virtually try on. Once an item has been added to the fitting room, that item will become available to the user in the local application for fitting on their model.
  • An example of user interaction with the fitting room is illustrated in FIG. 27 .
  • the user may choose to add an item to the fitting room for trial fit with their user model. Once the item has been added to the fitting room, the user may try on the item on their user model, and/or decide to purchase the item, in which case the apparel item can be added to the virtual wardrobe described later. Alternately, the user may decide not to purchase the item in which case the item will stay in the fitting room until the user chooses to delete it from their fitting room. Users may make the contents of their fitting room publicly accessible or restrict access to members of their social network or provide limited access to anyone they choose. This option will allow users to identify items of interest that other users have in their fitting room and browse and shop for the same or similar items on system 10 .
  • Physics based animation can be incorporated to make the fitting room, its contents and user interaction with the fitting room as realistic as possible.
  • the clothes in the fitting room can be made to appear realistic by simulating real texture and movement of cloth.
  • users may be able to drag and drop clothes, optical accessories, hairstyles, other apparel, accessories, and digitized components and their manifestations onto their character model.
  • they will be able to drag components placed in the fitting room or wardrobe or from an electronic catalogue onto their model.
  • the drag-and-drop functionality may incorporate physics based animation to enhance realism.
  • the users may specify where things are placed on their character model.
  • the user may choose to order and purchase the real apparel online.
  • the user may also submit fit information (visual as well as text) including information on where alterations may be needed, as provided by the modeling module 50 , as well as any additional information associated with an apparel item that the user is purchasing online to a ‘tailoring’ service.
  • This service would be able to make the requisite alterations for the user for a fee.
  • a facility would also be available to the user to custom order clothes online from a designer or supplier of apparel if they (designer, supplier) choose to provide the service.
  • the user may build a model for the person for whom the gift is intended and fit apparel on to this third party model to test goodness of fit before purchasing the apparel.
  • the user for whom the gift is being purchased already has a user account/profile available in system 10 then their user model may be accessed by the gift-giver upon receiving permission from the user for purposes of testing goodness of fit. If a user wishes to access fit or other information or the user model of a friend, the friend would receive a notification that the specific information has been requested by the user. The friend would have the option to grant or deny access to any or all of their information or their user model. If the friend denies access, the user may still be able to purchase a gift for the friend as the system will be able to access the friend's information and inform the user if a particular apparel is available in their friend's size.
  • the system would, thus, provide subjective information regarding the fit of an apparel with respect to another user without directly revealing any fit or other information of the user for whom the item is being purchased. If an apparel item is available in the friend's size, the user may order it upon which the system would deliver the appropriate sized apparel (based on the sizing and fit information in the friend's profile) to the friend. A confirmation request may be sent to the friend for confirming the size of the apparel before the purchase order is finalized. (This method can be used for other products such as prescription eyewear). Users have the option to display icons on their profile and/or home page that indicate gifts received from other people (items purchased on the site for respective user by other users). A ‘Mix and Match’ section will allow users to view items from different vendors.
  • Users may coordinate items and visualize their appearance on the user model. This visualization would assist users in the mix and match process. Items on sale may also be presented from different vendors in the mix and match section. Items on sale/discounted items may also be presented in other areas of the site. Furthermore, there may be other sections on the site featuring special items available for purchase. In exemplary embodiment, these may include autographed apparel and other goods by celebrities. Not only is the user able to purchase real apparel from the site (described later on), but the user can also buy virtual manifestations of apparel, hairstyles, makeup etc.
  • Users may be interested in purchasing these virtual items for use in external sites, gaming environments, for use with virtual characters in other environments etc. Users can also search for and buy items on other users' shopping lists, registries and/or wishlists. Users may also set-up gift registries accessible on their member pages for occasions such as weddings, anniversaries, birthdays etc.
  • the shopping module 60 also determines for each user a preferred or featured style that would be suitable for the respective user.
  • the determination of a preferred or featured style may be based on various inputs. Inputs may include the preferences and picks of a fashion consultant of which the system 10 keeps track.
  • the one or more fashion consultant's choices for featured styles may be updated into the system 10 , and the system 10 then provides respective users with updated style choices based on the selections of the fashion consultants.
  • styles and/or apparel items may be presented to the user based on information the system 10 has collected regarding their shopping preferences, stores, brands, styles and types of apparel that are purchased, along with personal information related to their physical profile and age.
  • the user model may be used to make apparel suggestions by the system.
  • the convex hull of the user model is used to determine apparel that would best fit/suit the user.
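  • A minimal sketch of one way the convex hull of the user model could be used for a coarse fit check (the vertex-cloud slicing, the chest height, the z-up orientation and the circumference comparison are all illustrative assumptions, not the system's stated method):

        import numpy as np
        from scipy.spatial import ConvexHull

        def girth_at_height(vertices: np.ndarray, height: float, tol: float = 0.02) -> float:
            """Approximate body girth near a given height (metres) from the 2-D convex hull
            of a thin horizontal slice of the user model's vertex cloud (vertices: N x 3)."""
            band = vertices[np.abs(vertices[:, 2] - height) < tol][:, :2]
            hull = ConvexHull(band)                  # hull of the horizontal slice
            pts = band[hull.vertices]                # hull vertices in order
            return float(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1).sum())

        def garment_accommodates(vertices: np.ndarray, chest_circumference_m: float) -> bool:
            """Coarse check: the garment's chest circumference should exceed the body girth."""
            return chest_circumference_m >= girth_at_height(vertices, height=1.35)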
  • the various featured looks that are selected by the system 10 may be presented to the user upon request of the user, and the selected featured looks may also be presented to the user upon login to the system.
  • various selected styles with a user's model may be presented to the user upon request or upon login where the user model is modeling apparel that is similar to what celebrities or other notable personalities may be wearing.
  • Fashion consultants, stylists and designers may be available on site for providing users with fashion tips, news, recommendations and other fashion related advice. Live assistance may be provided through a chat feature, video and other means. Additionally, it may be possible for users to book appointments with fashion consultants of their choice.
  • Animated virtual characters representing fashion consultants, stylists and designers may also be used for the purpose of providing fashion related advice, tips news and recommendations.
  • Virtual fashion consultants may make suggestions based on the user's wardrobe and fitting room contents. It would also be possible for users interested in giving fashion advice to other users to do so on the site. In an exemplary embodiment, this may be accomplished by joining a ‘fashion amateurs’ network where members may provide fashion advice to other users or even display their own fashion apparel designs. Consultants may be available to provide assistance with other services such as technical, legal, financial etc.
  • the wardrobe module 62 provides the user with a graphical, simulated representation of the contents of their real and/or virtual wardrobe.
  • the virtual wardrobe comprises the respective items of apparel that are associated with the user in the system 10 .
  • the virtual wardrobe will store all of the items that the user has purchased.
  • FIG. 27 describes an instance of user interaction with the virtual wardrobe 440 and fitting room 420 .
  • the user may browse apparel 400 displayed by the system, an instance of which is described with reference to FIG. 22 . Once the user decides to purchase an item, it will be added to the virtual wardrobe. The user may then choose to keep the item in their wardrobe or delete it. If the user decides to return an item, that item will be transferred from the user's wardrobe to the fitting room.
  • the virtual wardrobe may also comprise representations of apparel items that the user owns that are not associated with the system 10 .
  • the user may upload respective images, animation, video and other multimedia formats or any combination thereof of various real apparel items to the system 10 . Once uploaded, the users are then able to interact with their respective physical wardrobe contents through use of the system 10 .
  • Identification (ID) tags on the virtual wardrobe items may assist the user in mapping items from the real to virtual wardrobe.
  • An ID tag can have standard or user defined fields in order to identify a given item. Standard fields, for instance, can include, but are not limited to, ID number, colour, apparel type, occasion, care instructions, price, make and manufacturer, store item was purchased from, return policy etc.
  • User defined fields may include, for example, comments such as ‘Item was gifted to me by this person on this date’, and other fields. Users are able to browse the contents of their wardrobe online. This allows the user the ability to determine which apparel items they may need to purchase based on their need and/or desire. Users may make the contents of their wardrobe publicly accessible or restrict access to members of their social network or provide limited access to anyone they choose. This option will allow users to identify items of interest that other users have in their wardrobe and browse and shop for the same and/or similar items on the system 10 . An icon may appear on the profile/home page of the user—‘buy what this user has bought’ to view recent purchases of the user and buy the same and/or similar items via system 10 .
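  • A sketch of how such an ID tag might be represented as a data structure (the standard fields follow the examples above; the structure itself and the field types are assumptions):

        from dataclasses import dataclass, field
        from typing import Dict

        @dataclass
        class WardrobeIDTag:
            """ID tag mapping a real wardrobe item to its virtual representation."""
            id_number: str
            colour: str
            apparel_type: str
            occasion: str = ""
            care_instructions: str = ""
            price: float = 0.0
            make_and_manufacturer: str = ""
            store_purchased_from: str = ""
            return_policy: str = ""
            user_defined: Dict[str, str] = field(default_factory=dict)  # e.g. gifting notes

        tag = WardrobeIDTag("W-042", "navy", "blazer",
                            user_defined={"note": "Item was gifted to me by this person on this date"})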
  • the user may also decide to conduct an auction of some or all of the real items in their wardrobe.
  • the user will be able to mark or tag the virtual representations of these items in their virtual wardrobe and other users with access to the wardrobe can view and purchase auction items of interest to them.
  • an icon may appear on the profile page of the user indicating that they are conducting an auction to notify other users. It may be possible for users to mark items in their virtual wardrobe for dry-cleaning. This information may be used to notify dry-cleaning services in the area about items for pick-up and delivery from respective users in an exemplary embodiment.
  • Physics based animation can be incorporated to make the wardrobe, its contents and user interaction with the wardrobe as realistic as possible.
  • the clothes in the wardrobe can be made to appear realistic by simulating real texture and movement of cloth.
  • the wardrobe classification criteria may include, but are not limited to, colour, style, occasion, designer, season, size/fit, clothing type, fabric type, date of purchase etc.
  • the virtual wardrobe may also have associated with it multimedia files such as music, which provide a more enjoyable experience when perusing the contents of the virtual wardrobe.
  • a virtual/real style consultant and/or other users may be available to advise on the contents of the wardrobe.
  • the advertising module 64 in an exemplary embodiment coordinates the display and use of various apparel items and non-apparel items. Advertisers associated with the system 10 wish for their particular product offering to be displayed to the user in an attempt to increase the product's exposure.
  • the advertising module determines which offering associated with an advertiser is to be displayed to the user.
  • Some components related to the advertising module 64 are linked to the environment module, the details of which were discussed in the section describing the environment module 56 . These include, in exemplary embodiments, environments based on a theme reflecting the product being advertised; components associated with environments such as advertisement banners and logos; actual products being advertised furnishing/occupying the environments. Music advertisers can link environments with their playlists/soundtracks/radio players.
  • Movie advertisers can supply theme based environments which may feature music/apparel/effigies and other products related to the movie. Users will be able to display character models on their profile page wearing sponsored apparel (digitized versions) that sponsors can make available to users through the advertising module 64 ; or users can display images or videos of themselves in their profile wearing real sponsored apparel. In a similar manner, users supporting a cause may buy real or digital apparel sponsoring the cause (for example, a political or charitable cause) and display their character model in such apparel or put up videos or images of themselves in real versions of the apparel. Advertisers belonging to the tourism industry may use specific environments that showcase tourist spots, cultural events, exhibitions, amusement parks, natural and historical sites and other places of interest to the tourist. The above examples have been mentioned as exemplary embodiments to demonstrate how advertisers can take advantage of the environment module 56 for brand/product advertising purposes.
  • the entertainment module 66 encompasses activities that include the user being able to interact and manipulate their model by animating it to perform different activities such as singing, dancing, etc and using it to participate in gaming and augmented reality environments and other activities. Some features associated with the entertainment module 66 have already been discussed in the context of the environment module 56 . These include the ability of the user to animate the virtual model's movements, actions, expressions and dialogue; the facility to use the model in creating music videos, movies, portraits; interacting via the model with different users in chat sessions, games, shopping trips etc.; and other means by which the user may interact with the virtual model or engage it in virtual activities.
  • the entertainment module 66 features the user model or another virtual character on the user's profile page as an ‘information avatar’ to provide news updates, fashion updates, information in the form of RSS feeds, news and other feeds and other information that is of interest to the user or that the user has subscribed to.
  • the character model may supply this information in various ways, either through speech, or by directing to the appropriate content on the page or by displaying appropriate content at the request of the user, all of which are given as exemplary embodiments.
  • the main purpose of using the virtual model to provide information feeds and updates of interest to the user is to make the process more ‘human’, interactive and to provide an alternative to simple text and image information and feed content.
  • the ‘information avatar’ or ‘personal assistant’ can incorporate weather information and latest fashion news and trends, as an exemplary embodiment, to suggest apparel to wear to the user.
  • Information from the media agency servers 25 and entertainment servers 23 is used to keep the content reported and used by the ‘information avatar’ updated.
  • Users will be able to interact with each other using creative virtual tools.
  • An example includes interactive virtual gifts. These gifts may embody virtual manifestations of real gifts and cards. Users may have the option to virtually wrap their presents using containers, wrapping and decoration of their choice. They may also set the time that the virtual gift automatically opens or is allowed to be opened by the gift-receiver.
  • Exemplary embodiments of gifts include pop-up cards and gifts; gifts with text/voice/audio/video/animated messages or coupons and other surprises; gifts that grow or change over time.
  • An example of a gift that changes over time constitutes a tree or a plant that is still a seedling or a baby plant when it is gifted and is displayed on the gift-receiver's home page for example. Over fixed time intervals, this plant/tree animation would change to reflect virtual ‘growth’ until the plant/tree is fully grown at a specified endpoint.
  • the type of plant/tree may be a surprise and may be revealed when the plant/tree is fully grown at the end of the specified period. There may be a surprise message or another virtual surprise/gift that is displayed/revealed to the user when the plant/tree reaches the endpoint of the growth/change interval.
  • Gifts that change over time may include other objects and are not necessarily restricted to the examples above.
  • the server application 22 also has associated with it a data store 70 .
  • the server application 22 has access to the data store 70 that is resident upon the portal server 20 or associated with the portal server 20 .
  • the data store 70 is a static storage medium that is used to record information associated with the system 10 .
  • the data store 70 is illustrated in further detail with respect to FIG. 4 .
  • Reference is now made to FIG. 4 , where the components of the data store 70 are shown in a block diagram in an exemplary embodiment.
  • the components of the data store 70 shown here are shown for purposes of example, as the data store 70 may have associated with it one or more databases.
  • the databases that are described herein as associated with the data store are described for purposes of example, as the various databases that have been described may be further partitioned into one or more databases, or may be combined with the data records associated with other databases.
  • the data store 70 in an exemplary embodiment comprises a user database 80 , an apparel database 82 , a 3-D model database 84 , and an environment database 86 .
  • the user database 80 in an exemplary embodiment is used to record and store information regarding a user of the system 10 . Such information includes, but is not limited to a user's access login and password that is associated with the system 10 .
  • a user's profile information is also stored in the user database 80 which includes, age, profession, personal information, and user's physical measurements that have been specified by the user, images provided by the user, a user's history, information associated with a user's use of the system.
  • a user's history information may include, but is not limited to, the frequency of their use of the system, the time and season they make purchases, the items they have purchased, the retailers from whom the items were purchased, and information regarding the various items.
  • Information regarding the various items may include, but is not limited to, the colour, style and description of the items.
  • the apparel database 82 stores information regarding the various items of apparel that are available through the system 10 .
  • the 3-D model database 84 stores predetermined 3-D models and parts of various 3-D models that are representative of various body types. The 3-D models are used to specify the user model that is associated with the user.
  • the environment database 86 stores the various environments that are provided by the system 10 and that may be uploaded by users as described below.
  • Access method 100 is engaged by the user when the user first logs into the system 10 .
  • the access method 100 describes the various options that are available to the user upon first accessing the system.
  • Method 100 begins at step 101 , where the user accesses the system 10 by logging into the system 10 . Users can also browse the system without authentication as a guest. Guests have access to limited content.
  • the system 10 is accessible through the Internet. As the system 10 is accessible through the Internet, the user accesses the system by entering the URL associated with the system 10 . Each user of the system 10 has a login and password that is used to access the system 10 .
  • Upon successful validation as an authorized user, method 100 proceeds to step 102 , where the user is presented with their respective homepage. The user may be shown their user model (if they have previously accessed the system) displaying featured items of apparel when they log in. The user is presented with a variety of options upon logging into the system 10 .
  • Method 100 proceeds to step 103 if the user has selected to modify their respective environments associated with the user. At step 103 , the user as described in detail below has the ability to modify and alter the respective virtual environments that are associated with the user.
  • Method 100 proceeds to step 104 when the user chooses to manage their friends. Users may add other users from within the system 10 , and from external community sites as their friends, and may manage the interaction with their friends. The management of friends in the system 10 is explained in further detail below.
  • Method 100 proceeds to step 105 when the user wishes to generate or interact with their user model.
  • Method 100 proceeds to step 106 when the user wishes to view items that may be purchased.
  • Method 100 proceeds to step 107 where the user may engage in different collaborative and entertainment activities as described in this document.
  • the steps that have been described herein, have been provided for purposes of example, as various additional and alternative steps may be associated with a user's accessing of their respective home page.
  • the model generation method 110 outlines the steps involved in generating the 3-D user model.
  • Method 110 begins at step 111 , at which the user provides data to the system 10 .
  • the data can be provided all at once or incrementally.
  • the data can be provided by the user or by his/her friends. Friends may grant or deny access to data request and have control over what data is shared.
  • the data provided may include but is not limited to image(s) and/or video(s) of the face 113 and/or body 114 ; measurements 115 of the body size including the head as described below; apparel size commonly worn by the user and the preferred apparel size(s) and preferences 116 for style of clothing (such as fitted, baggy, preferred placement of pants (above, below, or on waist), color, European, trendy, sophisticated etc.), brands, etc.; laser scan data (obtained, for example, from a booth at a store equipped with a laser scanner), meshes (for instance, those corresponding to impressions of the ear or foot), outlines of body parts (for instance, those of the hands and feet), mould scans, mocap data, magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), and computed tomography (CT) data 117 ; and other data 118 such as correspondence between feature points on the 3D model's surface and the 2D images supplied by the user (for example, the location of the feature points on the face); references to anatomical landmarks on the user supplied data; and user specific info such as the age or age group, gender, ethnicity, size, skin tone, and weight of the user.
  • User data may be imported from other sources such as social-networking sites or the virtual operating system described later in this document. (Such importing of data also applies to the other portals discussed in this document).
  • the input to the method 110 includes prior information 112 including, but not limited to, annotated 3D surface models of humans that include information such as anatomical landmarks, age, gender, ethnicity, size, etc.; anatomical information, for instance, probability densities of face and body proportions across gender, age groups, ethnic backgrounds, etc.; prior knowledge on the nature of the input data such as shape-space priors (SSPs) (described below), priors on measurements, priors on acceptable apparel sizes, priors on feature point correspondence; sequencing of steps for various action factors (described below), etc.
  • the prior information 112 includes data stored in the data store 70 .
  • the prior information 112 is also used to determine “surprise” as described later in this document.
  • system 10 makes recommendations to the user on stores, brands, apparel as well as provides fit information, as described previously.
  • the system informs the user about how well an apparel fits, if the apparel is available in a given user's size and the specific size in the apparel that best fits the user.
  • the system takes into account user fit preferences, for example a user's preference for loose fit clothing.
  • the system may suggest whether apparel suits a particular user based on the user's style preferences.
  • the system may recommend a list of items to the user ordered according to user preferences.
  • a user may prefer collar shirts over V-necks. Furthermore, the user may not like turtlenecks at all.
  • the system may present the shirt styles to the user in an ordered list such that the collar shirts are placed above the V-neck shirts and the turtlenecks are placed towards the bottom of the ordered list, so that the user has an easier time sorting out and choosing styles that suit their taste and preferences from the store collection.
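  • A minimal sketch of this preference-based ordering (the rank table and field names are illustrative assumptions):

        PREFERENCE_RANK = {"collar": 0, "v-neck": 1, "turtleneck": 2}   # lower = more preferred

        def order_by_preference(shirts):
            """Order the store collection so styles the user prefers appear first."""
            return sorted(shirts,
                          key=lambda s: PREFERENCE_RANK.get(s["style"], len(PREFERENCE_RANK)))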
  • the system may combine style preferences as specified by the user, and/or user style based on buying patterns of the user and/or other users' ratings of apparel, and/or fashion consultant ratings and/or apparel popularity (assessed according to the number of the particular apparel item purchased, for example). Any combination of the above information may be used to calculate the “style score” or “style factor” or “style quotient” of a particular item (the algorithm providing the score is referred to as the “style calculator”).
  • a user may select the information that the system should use in calculating the style factor of a particular item. The user may inquire about the style score of any particular item in order to guide their shopping decision. The system may use the scores calculated by the style calculator in order to provide apparel recommendations; style ratings of products and apparel items; user-customized catalogues and lists of products that are ordered and sorted according to an individual's preferences and/or popularity of apparel items.
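  • A minimal sketch of a ‘style calculator’ of the kind described (the linear weighting and the particular signal names are assumptions; the document leaves the combination method open, and the user may zero the weight of any signal to exclude it):

        def style_score(item: dict, weights: dict) -> float:
            """Weighted combination of the signals named above, each normalised to [0, 1]."""
            signals = {
                "user_preference_match": item.get("user_preference_match", 0.0),
                "buying_pattern_match":  item.get("buying_pattern_match", 0.0),
                "other_user_rating":     item.get("other_user_rating", 0.0),
                "consultant_rating":     item.get("consultant_rating", 0.0),
                "popularity":            item.get("popularity", 0.0),
            }
            return sum(weights.get(name, 0.0) * value for name, value in signals.items())

        score = style_score({"user_preference_match": 0.8, "popularity": 0.6},
                            weights={"user_preference_match": 0.5, "popularity": 0.2})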
  • the system can inform a user of the body measurements/dimensions required to fit apparel of the specified size. Alternatively, given a user's body measurements, the system can inform the user of the apparel size that would fit in a given brand or make/manufacturer. Further, the system can suggest sizes to the user in related apparel. In exemplary embodiment, if a user is browsing jackets in a store and the system has information about the shirt size of the user, then based on the user's shirt size, the system can suggest the appropriate jacket sizes for the user. In an exemplary embodiment, the system can provide fit information to the user using a referencing system that involves using as reference a database containing apparel of each type and in each size (based on the standardized sizing system).
  • Body measurements specified by a user are used by the system to estimate and suggest apparel size that best meets the user's fit needs (‘fit’ information incorporates user preferences as well such as preference for comfort, loose or exact fit etc.).
  • the reference apparel size database is also used to suggest size in any of the different types of apparel such as jackets or coats or jeans or dress pants etc.
  • a user may be looking for dress pants, for instance, and the system may only know the user's apparel size in jeans and not the user's body measurements.
  • the system compares jeans in the user's size from the reference apparel database with dress pants the user is interested in trying/buying, and by incorporating any additional user fit preferences, the system suggests dress pants that would best fit the user i.e., are compatible with the user's fit requirements.
  • Fit information may specify an uncertainty along with fit information in order to account for, in exemplary embodiment, any differences that may arise in size/fit as a result of brand differences and/or apparel material properties and/or non-standardized apparel size and/or subjectivity in user preferences and/or inherent system uncertainty, if any exists.
  • the system informs a user, who prefers exact fit in shirts, that a shirt the user is interested in purchasing, and which is a new polyester material with a different composition of materials and that stretches more as a result, fits with ±5% uncertainty. This is due to the fact that the stretch may or may not result in an exact fit and may be slightly loose or may be exact. Since the material is new and the system may not have information on its material properties and how such a material would fit, it cannot provide an absolutely accurate assessment of the fit. It instead uses material information that is close to the new material in order to assess fit, and expresses the uncertainty in fit information. Fit information is communicated to the user, in an exemplary embodiment, via text, speech or visually (images, video, animation for example) or any combination thereof.
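  • A minimal sketch of fit inference through such a reference apparel database, with a crude uncertainty attached (the measurement fields, the waist-based matching and the uncertainty proxy are all illustrative assumptions):

        # Reference database: representative garment measurements per apparel type and size.
        REFERENCE = {
            ("jeans", "32x32"):      {"waist": 32.0, "inseam": 32.0},
            ("dress_pants", "32R"):  {"waist": 32.5, "inseam": 32.0},
            ("dress_pants", "34R"):  {"waist": 34.0, "inseam": 32.0},
        }

        def suggest_size(known_type: str, known_size: str, target_type: str,
                         leeway: float = 0.0) -> tuple:
            """Suggest a size in the target apparel type from a size the user already wears,
            returning (best size, uncertainty), where uncertainty grows with the mismatch."""
            base = REFERENCE[(known_type, known_size)]
            best, best_gap = None, float("inf")
            for (atype, size), meas in REFERENCE.items():
                if atype != target_type:
                    continue
                gap = abs(meas["waist"] - (base["waist"] + leeway))
                if gap < best_gap:
                    best, best_gap = size, gap
            uncertainty = min(1.0, best_gap / base["waist"])   # crude proxy, e.g. ~0.05 = ±5%
            return best, uncertainty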
  • An API (Application Programming Interface) may be provided so that vendors and third-party developers can build applications that use this fit information.
  • These applications may include, in an exemplary embodiment, widgets/applications that provide fit information specific to their brands and products to users; store locator applications, etc.
  • an application that lets vendors provide fit information works simply by looking up a database or by using a classifier such as Naïve Bayes [7-9] or k-nearest neighbours (KNN) [9, 10]. For example, an application may state whether a garment that a user is browsing from a catalog fits the user.
  • (1) Database.
  • the application can look up the user's size and the manufacturer of the clothing in a database to find the size(s) corresponding to the given manufacturer that fits the user. If the item currently being viewed is available in the user's size, the item is marked as such.
  • the database can be populated with such information a priori and the application can add to the database as more information becomes available.
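A minimal sketch of the database lookup just described; the table contents, field names and brand labels are illustrative assumptions.

```python
# Minimal sketch of the database-lookup approach: given a manufacturer and the
# user's known size, report whether the viewed item is available in a size that
# fits.  Table contents and field names are illustrative assumptions.

FIT_TABLE = {
    # (manufacturer, user_size) -> set of apparel sizes that fit
    ("BrandA", "M"): {"M"},
    ("BrandB", "M"): {"M", "L"},   # assume BrandB runs small, so L also fits
}

def fits(manufacturer, user_size, item_size):
    """Return True if the viewed item's size is recorded as fitting the user."""
    return item_size in FIT_TABLE.get((manufacturer, user_size), set())

def record_fit(manufacturer, user_size, item_size):
    """Add newly learned fit information as it becomes available."""
    FIT_TABLE.setdefault((manufacturer, user_size), set()).add(item_size)

print(fits("BrandB", "M", "L"))   # True: mark the item "fits me"
```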
  • the a posteriori probability of an apparel size (as) fitting a user, given the user's body size (us) and the manufacturer of the apparel (m), can be computed using Bayes' rule. It can be expressed as the product of the probability of the user's size given the apparel size and the manufacturer, and the prior probability of the apparel size given the manufacturer, divided by the probability of the user's size given the manufacturer, i.e. p(as|us,m) = p(us|as,m) p(as|m) / p(us|m).
  • the prior probabilities can be learnt by building histograms from sufficiently large data and normalizing them so that the probability density sums to one.
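A minimal sketch of the Bayes-rule computation above, with the likelihood, prior and evidence estimated from counts (normalized histograms) over a small fit database; the records and size labels are illustrative assumptions.

```python
# Sketch of p(as | us, m) = p(us | as, m) * p(as | m) / p(us | m), with each
# probability estimated from counts over an illustrative fit database.

# Each record: (manufacturer, user_body_size, apparel_size_that_fit)
records = [
    ("BrandA", "40R", "M"), ("BrandA", "40R", "M"), ("BrandA", "40R", "L"),
    ("BrandA", "42R", "L"), ("BrandA", "42R", "L"),
]

def p_as_given_us_m(as_, us, m):
    """Posterior probability that apparel size `as_` fits a user with body
    size `us` for manufacturer `m`."""
    n_m       = sum(1 for r in records if r[0] == m)
    n_as_m    = sum(1 for r in records if r[0] == m and r[2] == as_)
    n_us_m    = sum(1 for r in records if r[0] == m and r[1] == us)
    n_us_as_m = sum(1 for r in records if r[0] == m and r[1] == us and r[2] == as_)
    if n_m == 0 or n_as_m == 0 or n_us_m == 0:
        return 0.0
    p_us_given_as_m = n_us_as_m / n_as_m     # likelihood
    p_as_given_m    = n_as_m / n_m           # prior
    p_us_given_m    = n_us_m / n_m           # evidence
    return p_us_given_as_m * p_as_given_m / p_us_given_m

print(round(p_as_given_us_m("M", "40R", "BrandA"), 3))   # -> 0.667
```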
  • the user may be presented with items that fit the user, or the apparel sizes that fit the user may be compared with the item that the user is currently viewing and if the item that is being viewed belongs to the apparel sizes that fit the user, a check mark or a “fits me” indication may be made next to the item.
  • (2) KNN: Information on the body size (for example, measurements of various parts of the body), apparel size for different manufacturers for both males and females, and (optionally) other factors such as age are stored in a database for a sufficiently large number of people. Each of these pieces of information (i.e. body size, apparel size) is multiplied by a weight (to avoid biases).
  • the closest exemplar is found by computing the Euclidean distance between the given body size (multiplied by the associated weights for each measurement) and those in the database.
  • the majority vote of the output values of the nearest exemplars, i.e. the corresponding field of interest in the database (for example, the apparel size corresponding to the body measurements), is taken.
  • the output value is then divided by the corresponding weight (the weight can also take the value 1).
  • the input could be the apparel size for a given manufacturer and the output could be the body sizes that fit this apparel.
  • the apparel sizes that fit the user may be computed and the user may be presented with the available sizes for the user.
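A simplified sketch of the weighted KNN lookup described above. It folds the per-measurement weights into the distance computation rather than dividing the output by a weight, and the exemplar data, measurements and weights are illustrative assumptions.

```python
# Sketch of weighted k-nearest-neighbours size lookup: body measurements are
# scaled by per-measurement weights, the nearest exemplars are found by
# Euclidean distance, and the majority vote of their apparel sizes is returned.
import math
from collections import Counter

# Each exemplar: (body measurements in cm [chest, waist, hip], apparel size)
exemplars = [
    ([96, 81, 99], "M"), ([98, 83, 100], "M"),
    ([104, 90, 106], "L"), ([106, 92, 108], "L"),
]
weights = [1.0, 1.0, 0.5]   # down-weight the hip measurement to avoid bias

def knn_apparel_size(measurements, k=3):
    def dist(a, b):
        return math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))
    nearest = sorted(exemplars, key=lambda ex: dist(measurements, ex[0]))[:k]
    votes = Counter(size for _, size in nearest)
    return votes.most_common(1)[0][0]

print(knn_apparel_size([97, 82, 100]))   # -> "M"
```

As noted above, the same table can be queried in the other direction: given an apparel size for a manufacturer, the nearest exemplars yield the body sizes that the apparel fits.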
  • the user can also filter catalogs to show only items that fit the user or correspond to the user's preferences.
  • the system can point out to the user if a product is available in the user's size as the user is browsing products or selecting products to view.
  • the system may also point out the appropriate size of the user in a different sizing scheme, for example, in the sizing scheme of a different country (US, EUR, UK etc.).
  • the system also takes into account user fit preferences. For instance, a user may want clothes to be a few inches looser than his/her actual fit size.
  • the system would add the leeway margin, as specified by the user, to the user's exact fit apparel size in order to find the desired fit for the user.
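A minimal sketch of applying the user-specified leeway margin to an exact-fit measurement before choosing a size; the size-chart values are illustrative assumptions.

```python
# Sketch of adding a user-specified leeway (in inches) to the exact-fit
# measurement before looking up a size.  Chart values are illustrative.

SIZE_CHART_WAIST_IN = {"S": 30, "M": 32, "L": 34, "XL": 36}   # assumed chart

def size_with_leeway(exact_waist_in, leeway_in=0.0):
    """Return the smallest size whose waist is >= the exact fit plus leeway."""
    target = exact_waist_in + leeway_in
    for size, waist in sorted(SIZE_CHART_WAIST_IN.items(), key=lambda kv: kv[1]):
        if waist >= target:
            return size
    return max(SIZE_CHART_WAIST_IN, key=SIZE_CHART_WAIST_IN.get)

print(size_with_leeway(31))        # exact fit        -> "M"
print(size_with_leeway(31, 2.0))   # ~2 inches looser -> "L"
```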
  • Method 110 begins at the preprocessing step 119 at which it preprocesses the user data 111 using prior knowledge 112 to determine the appropriate combination of modules 120 , 123 , 124 , 125 , and 126 to invoke. Method 110 then invokes and passes the appropriate user data and prior knowledge to an appropriate combination of the following modules: image/video analysis module 120 , measurements analysis module 123 , apparel size analysis module 124 , mesh analysis module 125 , and a generic module 126 as described in detail below. These modules 120 , 123 , 124 , and 125 attempt to construct the relevant regions of the user model based on the input provided. At the information fusion step 127 , the data produced by the modules 120 , 123 , 124 , 125 and 126 is fused.
  • Method 110 then instantiates a preliminary model at step 128 , optimizes it at the model optimization step 129 , and details it at step 130 .
  • Method 110 then presents the user with a constructed model at step 131 for user modifications, if any.
  • the constructed model and the user changes are passed on to a learning module 132 , the output of which is used to update the prior knowledge in order to improve the model construction method 110 .
  • its intermediary progress is shown to the user.
  • the user is allowed to correct the method. In an exemplary embodiment, this is done by displaying the model at the intermediate steps along with the parameters involved and allowing the user to set the values of these parameters through an intuitive interface.
  • a user model is generated. Each of the steps of method 110 is described in further detail below.
  • Measurements 115 provided as input to the method 110 include, in an exemplary embodiment, measurements with respect to anatomical landmarks, for example, the circumference of the head and neck, distance from trichion to tip of nose, distance from the tip of the nose to the mental protuberance, width of an eye, length of the region between the lateral clavicle region to anterior superior iliac spine, circumference of the thorax, waist, wrist circumference, thigh circumference, shin length, circumference of digits on right and left hands, thoracic muscle content, abdominal fat content, measurements of the pelvis, measurements of the feet, weight, height, default posture (involving measurements such as elevation of right and left shoulders, stance (upper and lower limbs, neck, seat, waist, etc.), humping, etc.).
  • Apparel size/preferences 116 include, in an exemplary embodiment, clothing size such as dress size (eg. 14, 8, etc.), hat size, shoe size, collar size, length of jacket, trouser inseam, skirt length etc., including an indication of whether measurements represent an exact size or include a preferred margin or are taken over clothes.
  • the specific measurements differ for males and females reflecting the anatomical difference between the genders and differences in clothing. For instance, in the case of females, measurements may include a more elaborate measurement of the upper thorax involving measurements such as those of the largest circumference of the thorax covering the bust, shoulder to bust length, bust to bust length etc.
  • Image(s) and/or video(s) of the face 113 and/or body 114 provided to the system can also be imported from other sources and can also be exported to other destinations.
  • the method 110 may use images that the user has uploaded to social networking sites such as Facebook or Myspace or image sharing sites such as Flickr.
  • the method 110 can work with any subset of the data provided in 111 , exemplary embodiments of which are described below.
  • the method 110 is robust to incomplete data and missing information. All or part of the information requested may be provided by the user i.e. the information provided by the user is optional. In the absence of information, prior knowledge in the form of symmetry, interpolation and other fill-in methods, etc are used as described below.
  • the method 110 instantiates, in an exemplary embodiment, a generic model which could be based on an average model or a celebrity model.
  • the method 110 proceeds accordingly as described below.
  • action factors (e.g. whether a photorealistic model or a version of non-photorealistic rendering (NPR) is desired) determine the complexity of the model that is constructed.
  • a 3D model of appropriate complexity is developed.
  • a highly complex (a higher order approximation with a higher poly count) model is generated, a downsampled version (a lower poly count model) is also created and stored. This lower poly count model is then used for physical simulations in order to reduce the processing time while the higher poly count model is used for visualization. This allows plausible motion and an appealing visualization. Goodness of fit information for apparel is computed using the higher poly count model unless limited by the action factors.
  • At the preprocessing step 119, method 110 preprocesses the user input data using prior knowledge to determine which of the modules 120 , 123 , 124 , 125 and 126 to invoke; depending on the input provided and the action factors, an appropriate combination of modules 120 , 123 , 124 , 125 and 126 is invoked.
  • the method 110 attempts to construct the most accurate model based on the data for the given action factors.
  • the accuracy of a model constructed using each of the modules 120 , 123 , 124 , 125 and 126 is available as prior knowledge 112 , and is used to determine the appropriate combination of modules 120 , 123 , 124 , 125 and 126 to invoke.
  • in an exemplary embodiment, if the client platform is computationally advanced (modern hardware, latest browser version, shader support, etc.) and image(s) and/or video(s) are provided, the image/video analysis module 120 is invoked; if only body measurements are provided, only the measurements analysis module 123 is invoked; if only apparel size information is provided, only the apparel size analysis module 124 is invoked; if only a full body laser scan is provided, only the mesh analysis module is invoked; if only apparel size information and an image of the face are provided, only the apparel size analysis module 124 and the images/videos analysis module, more specifically the head analysis module 121 , are invoked; if only an image of the face is provided, only the generic module 126 and the images/videos analysis module, more specifically the head analysis module 121 , are invoked; if an image of the face, body measurements and a laser scan of the foot are provided, the images/videos analysis module (more specifically the head analysis module 121 ), the measurements analysis module and the mesh analysis module are invoked; and so on.
  • For regions of the body for which information is unavailable, the generic module 126 is invoked. In the extreme case of no user information or very limited computational resources, only the generic module 126 is invoked. Other data 118 such as age and gender, if provided, and prior knowledge are available to each of the modules 120 , 123 , 124 , 125 and 126 to assist in the model construction process. Parameters may be shared between the modules 120 , 123 , 124 , 125 and 126 . Each of the modules 120 , 123 , 124 , 125 and 126 is described in detail next.
  • This module consists of a head analysis module 121 and a body analysis module 122 , in an exemplary embodiment.
  • the head analysis module 121 and the body analysis module 122 construct a 3-D model of the user's head and body, respectively, based on the image(s) and video(s) provided.
  • the head analysis module 121 and the body analysis module 122 may work in parallel and influence each other.
  • the head analysis module 121 and the body analysis module 122 are described in detail below.
  • After receiving image and/or video file(s), this module extracts information on the user's physical attributes at step 137 and generates a three-dimensional model at step 138 . A detailed description of this process is provided below.
  • Reference is made to FIG. 6C , where it is shown, in an exemplary embodiment, that the steps of the model construction process in the image/video analysis module are handled separately for the user's face and body.
  • the head analysis module 121 produces a model of the user's head while the body analysis module 122 produces a model of the user's body. These models are then merged at the head-body fusion step. A detailed description of this process is provided below.
  • Reference is made to FIG. 6D , wherein a detailed description of the model generation process of the images/videos analysis module 120 for steps 121 and 122 is provided in an exemplary embodiment.
  • the steps of the model construction are first described in the context of the head analysis module 121 .
  • the body analysis module 122 proceeds in a similar fashion.
  • after receiving image(s) and/or video(s) and prior knowledge, the module 120 first sorts the data into images and videos at step 139 , based on the file extension, file header, or user tag in an exemplary embodiment. If only image(s) are present, the method proceeds to the preprocessing step 141 .
  • if video(s) are present, the method first extracts images from the video that approximately represent a front view of the face and/or a side view of the face, if available, and proceeds to the preprocessing step 141 . This is done in an exemplary embodiment using a technique similar to that used in [11]. In another exemplary embodiment, a 3D model of the face is constructed using a technique similar to that in [12]. If a combination of videos and images is present and the resolution of the image(s) is higher than that of the video, the method proceeds to the preprocessing step 141 using the higher resolution images. If a low resolution video is present, for example a video captured using a cell phone, high resolution images are first generated and then the method proceeds to the preprocessing step 141 . This can be done, in an exemplary embodiment, using a technique similar to that used in [13]. Stereo images and/or videos can also be processed. In an exemplary embodiment, this can be done using a technique similar to [14].
  • Reference is made to the preprocessing step 141 in FIG. 6D of the image/video analysis module 120 , wherein the image(s) are preprocessed.
  • An approximate region containing the face region in the images is identified at this step. This is done, in an exemplary embodiment, using a rotationally invariant neural network. In another exemplary embodiment, this can be done using support vector machines (SVMs) in a manner similar to that described in [15].
  • facial pose is defined as the 3D orientation of a person's face in 3D space. It can be parameterized, in an exemplary embodiment, by the orientation of the line joining the eyes and the two angles between the facial triangle (formed by the eyes and nose) and the image plane.
  • the scale of the image is computed, in an exemplary embodiment, using (i) the measurement of a reference region as marked by the user, if available, or (ii) the size of a common object (eg. a highlighter) in the image at approximately the same depth as the person in the image, if available, or (iii) the measured size of a known object (eg. a checkered pattern) held by the user in the image. If multiple faces are detected in a single image, the user may be asked which face the user would like a model created for, or a model may be created for each face in the image, allowing the user to decide which ones to store and which ones to delete.
  • the method 110 then proceeds to step 148 , where the global appearance is analyzed, and step 142 , where the local features of the head are analyzed.
  • the global appearance analysis step 148 involves, in an exemplary embodiment, projecting the foreground on a manifold constructed, for example, using principal component analysis (PCA), probabilistic principal component analysis (PPCA), 2D PCA, Gaussian Process Latent Variable Models GPLVM, or independent component analysis (ICA).
  • This manifold may be parameterized by global factors such as age, gender, pose, illumination, ethnicity, mood, weight, expression, etc.
  • the coefficients corresponding to the projection are used to produce a likelihood of observing the images given a face model. In an exemplary embodiment, this is given by a Gaussian distribution centered at the coefficients corresponding to the projection.
  • the estimated parameters from the previous step are updated using Bayes rule and the likelihood determined at this step.
  • the posterior global parameters thus computed serve as priors at step 142 .
  • the method 110 segments the face into various anatomical regions (steps 143 - 146 ), projects these regions onto local manifolds (at steps 149 and 150 ) to generate local 3D surfaces, fuses these local 3D surfaces and post processes the resulting head surface (steps 151 and 152 ), optimizes the model 153 and adds detail to the model 154 . These steps are described in detail below.
  • the method 110 at step 142 identifies various anatomical regions of the face in the image and uses this information to construct a 3D surface of the head. This is done, in an exemplary embodiment, using shape space priors (SSPs). SSPs are defined here as a probability distribution on the shape of the regions of an object (in this context a face), the relative positions of the different regions of the object, the texture of each of these regions, etc. SSPs define a prior on where to expect the different regions of the object. SSPs are constructed here based on anatomical data. In an exemplary embodiment, an SSP is constructed that defines the relative locations, orientations, and shapes of the eyes, nose, mouth, ears, chin and hair in the images.
  • the method 110 at step 143 extracts basic primitives from the images such as intensity, color, texture, etc.
  • the method 110 , at step 144 , to aid in segmentation of facial features, extracts more complex primitives such as the outlines of various parts of the face and proportions of various parts of the face using morphological filters, active contours, level sets, Active Shape Models (ASMs) (for example, [16]), or a Snakes approach [17], in an exemplary embodiment.
  • the active contours algorithm deforms a contour to lock onto objects or boundaries of interest within an image using energy minimization as the principle of operation.
  • the contour points iteratively approach the object boundary in order to reach a minima in energy levels.
  • the ‘internal’ energy component is dependent on the shape of the contour. This component represents the forces acting on the contour surface that constrain it to be smooth.
  • the ‘external’ energy component is dependent on the image properties such as the gradient, properties that draw the contour surface to the target boundary/object.
  • the outputs of steps 143 and 144 which define likelihood functions are used together with SSPs, in an exemplary embodiment using Bayes rule, to segment the regions of the head, helmet, eyes, eyebrows, nose, mouth, etc. in the image(s).
  • a helmet is defined here as the outer 3D surface of the head including the chin, and cheeks but excluding the eyes, nose, mouth and hair.
  • the result is a set of hypotheses that provide a segmentation of various parts of the head along with a confidence measure for each segmentation.
  • Segmentation refers to the sectioning out of specific objects from other objects within an image or video frame (i.e. an outline that conforms to the object perimeter is generated to localize the object of interest and segregate it from other objects in the same frame).
  • the confidence measure in an exemplary embodiment, is defined as the maximum value of the probability density function, at the segmented part's location. If the confidence measure is not above a certain threshold (in certain challenging cases eg. partial occlusion, bad lighting, etc.), other methods are invoked at the advanced primitive extraction step 145 .
  • this is done by selecting a method in a probabilistic fashion by sampling for a method from a proposal density (such as the one shown in FIG. 6I ). For example, if the face of the user is in a shadow region, a proposal density is selected that gives the probability of successfully segmenting the parts of a face under such lighting conditions for each method available. From this density a method is sampled and used to segment the facial features and provide a confidence measure of the resulting segmentation. If the updated confidence is still below the acceptable threshold, the probability density is sampled for another method and the process is repeated until either the confidence measure is over the threshold or the maximum number of iterations is reached at which point the method asks for user assistance in identifying the facial features.
  • a graphical model is built that predicts the location of the other remaining features or parts of the face. This is done using SSPs to build a graphical model (for eg. a Bayes Net).
  • Reference is made to FIG. 6E , where a graphical model is shown in an exemplary embodiment, and to FIG. 6F , where the corresponding predicted densities are shown in image coordinates.
  • the connections between the nodes can be built in parallel.
  • the prior on the location from the previous time step is used together with the observation from the image (result of applying a segmentation method mentioned above), to update the probability of the part that is being segmented and the parts that have been segmented, and to predict the locations of the remaining parts using sequential Bayesian estimation.
  • This is done simultaneously for more than one part. For example, if the location of the second eye is observed and updated, it can be used to predict the location of the nose, mouth and the eyebrow over the second eye as shown in FIG. 6E .
  • a simplified walkthrough of the sequential Bayesian estimation for segmenting the regions of the face is shown in FIG. 6F .
  • the pose of the face is determined.
  • an isosceles triangle connecting these features is identified.
  • the angle of facial orientation is then determined by computing the angle between this isosceles triangle and the image plane.
  • the pose thus computed also serves as a parameter at the classification step 151 .
  • the segmentation methods used are designed to segment the parts of the head at smooth boundaries. Next, parameters corresponding to these parts such as pose, lighting, gender, age, race, height, weight, mood, face proportions, texture etc. are computed.
  • this is done as follows: once a majority of the parts of the head are identified, they are projected onto a corresponding manifold in feature space (eg. edge space).
  • a manifold exists for each part of the face. These manifolds are built by projecting the 3D surface corresponding to a part of the face onto an image plane (perspective projection) for a large number of parts (corresponding to different poses, lighting conditions, gender, age, race, height, weight, mood, face proportions, etc.), applying a feature filter (eg. a Canny edge detector) at step 149 to convert to a feature space (eg. edge space), and constructing the manifold using, in an exemplary embodiment, principal component analysis (PCA), probabilistic principal component analysis (PPCA), 2D PCA, Gaussian Process Latent Variable Models (GPLVM), or independent component analysis (ICA).
  • a classifier, in an exemplary embodiment a Naïve Bayes classifier, a support vector machine (SVM), or a Gaussian Process classifier, is used to output the most plausible 3D surface given the parameters.
  • if a particular parameter is already supplied as part of 118 , for eg. the gender of the user, then it is used directly with the classifier and the corresponding computation (eg. estimation of gender) is skipped.
  • Teeth reconstruction is also handled similarly.
  • the teeth that are constructed are representative of those in the image provided, including the color and orientation of the teeth. This is needed later for animation and other purposes, such as virtually showing the results of dental corrections, whitening products, braces, invisible aligners, etc. Hair is also handled similarly.
  • the manifold is additionally parameterized by the 3D curvature, length, specularity, color, 3D arrangement, etc.
  • a helical model is used as the underlying representation for a hair strand.
  • hair can be modeled from image(s) using techniques similar to [24-26]. If, however, the action factors do not allow a representation of the teeth, ears and hair exactly as in the image, less complex precomputed models are used.
  • 3D surface exemplars for various parts of the head (for example, the helmet defined above, eyes, nose, mouth, etc.) are available and used with the classifier.
  • a new model is instantiated by instantiating a copy of the identified exemplar surfaces.
  • the instantiated surfaces are parametric by construction; these parametric models are modified slightly (within allowed limits), if necessary, to represent parameters as extracted from the image(s) wherever possible at the optimization step 153 .
  • the exemplars that are used with the classifier are rigged models and thus enable easy modifications.
  • the size of the skeletal structures and the weight of the nodes are modified to match the extracted parameters.
  • the rigged models also allow user modifications (as described with reference to FIG. 29B ) and facilitate animations.
  • the 3D surfaces generated at step 153 are merged.
  • the boundaries of the 3D surfaces corresponding to the parts of the face are merged and smoothed using techniques similar to those used at the head-body fusion step 155 .
  • Symmetry is used to complete occluded or hidden parts. For example, if the user's hair is partially occluding one side of the face, symmetry is used to complete the missing part.
  • for parts that cannot be completed using symmetry, the most likely surface and texture are substituted. For example, if the user's teeth are not visible owing to the mouth being closed, the most likely set of teeth, given the parameters corresponding to the user, is substituted.
  • the most likely surface and texture are computed using a classifier such as Naïve Bayes, while the placement is computed using SSPs and Bayesian inference.
  • 3D surfaces of the entire head for different combinations of constituent part parameters are maintained and an appropriate model is instantiated at step 152 based on the output of the classification step 151 .
  • a preliminary 3D model of the user's head is available which is passed onto the head-body fusion step 155 .
  • the body analysis module 122 proceeds similar to the head analysis module 121 , where instead of extracting parameters of parts of the face, parameters of the various body parts (excluding the head) are extracted from the image(s) and/or videos.
  • the local feature analysis step 142 for the body analysis module 122 involves individually analyzing the upper limbs, the lower limbs, the thorax, the abdomen, and the pelvis.
  • the location of the body in the image and its pose is identified at the preprocessing step 141 using a technique similar to that used in [27].
  • a preliminary 3D model of the user's body is generated which is passed onto the head-body fusion step 155 .
  • the head model estimate and the body model estimate are merged using smoothness assumptions at the boundaries, if necessary.
  • this is accomplished by treating the regions at the boundaries as B-splines and introducing a new set of B-splines to interconnect the two regions to be merged (analogous to using sutures) and shrinking the introduced links until the boundary points are sufficiently close.
  • a 1-D example is shown in FIG. 6G .
  • the boundaries at the neck region may be approximated as being pseudo-circular and the radii of the body model's neck region and the head model's neck region can be matched. This may involve introducing a small neck region with interpolated radius values.
  • the choice of the method used for fusion depends, in an exemplary embodiment, on the action factors. For instance, if limited data is provided by the user, leading to a relatively coarse approximation of the user, the pseudo-circular approximation method mentioned above is used. As another example, a particular version of an NPR model desired by the user may not require a sophisticated model, in which case the pseudo-circular approximation method mentioned above is also used.
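A minimal sketch of the pseudo-circular approximation mentioned above: the neck boundaries of the head and body models are treated as circles, and a short connecting neck region with linearly interpolated radii is generated between them. The radii, ring counts and neck height below are illustrative assumptions.

```python
# Sketch of the pseudo-circular head-body fusion: generate rings of points
# whose radius blends linearly from the head model's neck radius to the body
# model's neck radius, forming a small interpolated neck region.
import numpy as np

def interpolated_neck(r_head, r_body, n_rings=5, n_points=32, height=2.0):
    """Return (n_rings * n_points, 3) points forming a short connecting neck."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    rings = []
    for i in range(n_rings):
        t = i / (n_rings - 1)                  # 0 at head boundary, 1 at body
        r = (1.0 - t) * r_head + t * r_body    # interpolated radius
        z = -t * height                        # neck extends downward
        ring = np.stack([r * np.cos(angles), r * np.sin(angles),
                         np.full(n_points, z)], axis=1)
        rings.append(ring)
    return np.vstack(rings)

neck = interpolated_neck(r_head=5.5, r_body=6.2)
print(neck.shape)   # (160, 3)
```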
  • the output of the head-body fusion step 155 is passed onto the information fusion step 127 .
  • the measurements analysis module 123 processes the measurements provided by the user in order to construct a user model or part thereof.
  • These measurements include the various head and body measurements 115 provided by the user.
  • the measurements 115 provided are used to estimate any missing measurements based on anatomical and anthropometric data, and data on plastic surgery available as part of the prior knowledge 112 .
  • the proportions of the remaining parts of the head are generated based on anthropometric data as follows: the diameter of the head along the eyes and the ears is taken to be 5x, and the distance from the trichion to the menton is taken to be 6x.
  • the shape is appropriately adjusted based on anthropometric data. For example, the shape of an average Asian head as seen from above is circular while that of an average Caucasian is elliptical. This information is then passed to a classifier to output the most plausible 3D surface of the head given the parameters. Measurements of the body are used to instantiate a model corresponding to these measurements from a generative model.
  • a generative model is available as part of the prior knowledge 112 and is constructed, in an exemplary embodiment, using anthropometric data. In an exemplary embodiment, this is done using techniques similar to those used in [29, 30].
  • if a very limited number of measurements are available in addition to images, they are passed onto the classifier at step 151 and the extraction of the corresponding measurements from the image(s) or video(s) is skipped, in an exemplary embodiment.
  • the output of the measurements analysis module is passed onto the information fusion step 127 .
  • Prior knowledge 112 includes an association of an average 3D model with size data for shirts, dresses, trousers, skirts, etc. For example, there is an average 3D model of the upper body of a male associated with a men's shirt collar size of 42 and similarly a model of the lower body for a trouser waist size of 32 and a length of 32, or a hat size of 40 cm, or a shoe size of 11.
  • the generative models learnt from anthropometric data, for example as in [29] may have size parameters mapped to apparel size, thereby giving a generative model that is parameterized by apparel size. These models are also rigged, in an exemplary embodiment using a technique similar to that used in [31], to allow animation.
  • a user model can be created from apparel size data by (i) instantiating the corresponding average 3D model for the various body parts for which an apparel size is specified, or instantiating the part of the body corresponding to the apparel using a generative model parameterized by apparel size, and (ii) merging the 3D surfaces for the various body parts using merging techniques similar to those used at step 155 using most probable generic models for body parts (available from the generic module 126 ) for which apparel size is not provided.
  • the output of the apparel size analysis module is passed onto the information fusion step 127 .
  • After receiving user data 111 and prior knowledge 112 , once invoked, this module first sorts 156 the data [such as laser scan data, meshes (for instance, those corresponding to impressions of the ear or foot), outlines of body parts (for instance, those of the hands and feet), mocap (motion capture) data, magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), and computed tomography (CT) data] to determine the most accurate choice of data to use for model construction.
  • the module 125 then proceeds as follows: it filters the data at step 157 to remove any noise and to correct any holes in the data. This is done, in an exemplary embodiment, using template-based parameterization and hole-filling techniques similar to those used in [29]. At this step, unnecessary information such as meshes corresponding to background points is also removed.
  • regions with missing data can be identified, in an exemplary embodiment, by asking the user to mark such regions through an intuitive user interface. This is followed by the fill-in step 158 , at which symmetry is used to complete missing regions such as an arm, if any. If mesh or volume data is not available for the missing regions, the corresponding regions are generated by the generic module 126 and fused at the information fusion step 127 . The model is then rigged at the rigging step 159 . Rigging provides a control skeleton for animations and also for easily modifying the body parts of the user's model. The mesh output from step 158 is used with a generic human skeleton and an identification of the orientation of the mesh to automatically rig the mesh.
  • Generic male and female skeleton versions, one for each of the age groups 0-8, 8-12, 13-20, 21-30, 31-60 and 60+ in an exemplary embodiment, are available as part of the prior knowledge 112 .
  • the orientation of the mesh (i.e. which side is up) is obtained by asking the user through an intuitive user interface. Rigging is done automatically, in an exemplary embodiment, using a technique similar to that used in [31]. It can also be done using techniques similar to those used in [32, 33].
  • a mesh is first constructed, in an exemplary embodiment, using a technique similar to that used in [34]. This mesh is then passed on to the fill-in step 158 and the rigging step 159 described above.
  • a model is generated using shape completion techniques such as that used in [35], in an exemplary embodiment. The model thus generated is rigged automatically, in an exemplary embodiment, using a technique similar to that used in [31]. For outlines, this module extracts constraints from the outlines and morphs the mesh to satisfy the constraints.
  • this is done as follows: (i) Feature points on the outline corresponding to labeled feature points on the mesh (for example, points over the ends of eyebrows, over the ears, and the occipital lobe) are identified by the user through a guided interface such as the one shown in FIG. 11 .
  • This can also be automated using perceptual grouping and anatomical knowledge. For example, consider a scenario where a user prints out a sheet that has a reference marker from the website and draws an outline of his/her foot, or takes an image of his/her foot with a penny next to the foot. Given such an image, the image is first scaled to match the units of the coordinate system of the 3D mesh using scale information from the reference markers in the image.
  • the image is searched for commonly known objects such as a highlighter or a penny using template matching, and the known size of such objects is used to set the scale of the foot outline.
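A minimal sketch of setting the scale of a traced outline from a reference object of known physical size (here a US penny). The detected pixel span of the reference would in practice come from the template matching mentioned above; here it is passed in directly, and the example numbers are illustrative.

```python
# Sketch of converting outline measurements from pixels to millimetres using a
# reference object of known size detected in the same image plane.

PENNY_DIAMETER_MM = 19.05   # known physical size of a US penny

def mm_per_pixel(reference_size_px, reference_size_mm=PENNY_DIAMETER_MM):
    """Millimetres represented by one pixel at the plane of the outline."""
    return reference_size_mm / reference_size_px

def foot_length_mm(foot_length_px, reference_size_px):
    return foot_length_px * mm_per_pixel(reference_size_px)

# Example: the penny spans 60 px and the foot outline spans 820 px.
print(foot_length_mm(820, 60))   # ~260 mm
```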
  • the user may be asked to identify at least one measurement on the foot.
  • the orientation of the foot is then identified. This is done by applying a Canny edge detector to get the edge locations and the orientations, connecting or grouping edgels (a pixel at which an edge has been identified) that have an orientation within a certain threshold, and finding the longest pair of connected edges. This gives the orientation of the foot.
  • Both ends of the foot are searched to identify the region of higher frequency content (using a Fourier transform, or simply projecting the region at each end onto a slice along the foot and looking at the resulting histogram) corresponding to the toes.
  • the big toe is then identified by comparing the widths of the edges defining the toes and picking the one corresponding to the greatest width.
  • the little toe and the region corresponding to the heel are identified and reference points on these regions corresponding to those on the 3D meshes are marked which now define a set of constraints.
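The orientation-finding step above groups Canny edgels and finds the longest pair of connected edges. The following simplified stand-in instead fits a minimum-area rectangle to the Canny edge points with OpenCV and uses its long axis as the foot orientation; it is a sketch under that substitution, not the procedure described.

```python
# Simplified sketch: estimate the dominant orientation of a foot outline by
# fitting a minimum-area rectangle to its Canny edge points.
import cv2
import numpy as np

def foot_orientation_degrees(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)   # assumes a valid image
    edges = cv2.Canny(gray, 50, 150)                       # edge map of the outline
    ys, xs = np.nonzero(edges)                             # edge pixel coordinates
    points = np.column_stack([xs, ys]).astype(np.float32)
    (cx, cy), (w, h), angle = cv2.minAreaRect(points)
    # minAreaRect's angle refers to the first side; align it with the long axis.
    if w < h:
        angle += 90.0
    return angle % 180.0

# Example usage (requires an outline image on disk):
# print(foot_orientation_degrees("foot_outline.png"))
```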
  • the corresponding reference points on the mesh are then displaced towards the identified reference points from the image using finite element method (FEM) techniques such as those used in [36], [37], or as in [38].
  • the extracted constraints are also passed onto the other modules 120 , 123 , 124 and 126 and a similar method is applied to ensure that the generated model conforms to the constraints.
  • morphing of the mesh to conform to constraints is particularly used, if action factors allow, for parts of the body that cannot be easily approximated by a cylinder such as the head.
  • Such morphing of the mesh based on constraints provided by the user such as an outline or an image of their foot or fingers are useful for computing goodness of fit information for apparel such as shoes and rings. (For the case of rings, it is also possible to simply measure the circumference of the ring and let the measurement analysis module construct the appropriate model).
  • two roughly orthogonal images of the fingers with a reference material in the background or an outline of the fingers on a printable sheet containing a reference marker could be used and analyzed as above.
  • a user's hand can be placed in front of a webcam with a reference marker on paper in the background, or with a computer screen containing a reference marker in the background.
  • the advantage of such image-based constraint extraction is that it allows multiple fingers to be captured at once. This is particularly useful when buying, say, mittens or gloves or a ring for a friend as a surprise gift.
  • the user simply needs to take image(s) of the appropriate region of his/her friend's body and mark the size of some known object in the image, for example, the width of the user's face.
  • Imprints and moulds, such as those of the foot and ears, can be converted to meshes either by laser scanning, or by taking multiple images of the imprints and moulds and constructing the mesh using structure from focus, structure from motion, structure from shading, specularity, etc., with techniques similar to those used in [18] and [22]. Medical images and volumes such as MRI and CT volumes can also be used, if available, to create the user model or part thereof. This can be done using techniques similar to those used in [39, 40].
  • a volume is first created as follows and processed as described above for the case of laser scan data.
  • a transform is applied producing a feature space image.
  • a silhouette transform is applied which produces an image with a silhouette of the object(s) of interest. This can be done in an exemplary embodiment using a technique similar to that used in [41].
  • the silhouette is then backprojected. This can be done, in an exemplary embodiment, by summing the contributions from each of the silhouettes taking into account the geometry provided as shown in FIG. 6J .
  • rays are traced from pixels on the feature space transformed images to voxels (3D pixels) of a volume (a 3D image).
  • for each voxel that a ray passes through, the value of the pixel in the feature-space-transformed image is added. This added value may be corrected for a 1/r² effect (the inverse square law of light and electromagnetic radiation).
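A minimal sketch of accumulating back-projected silhouettes into a voxel volume. Camera geometry is reduced to orthographic projections along two axes purely for illustration, and the 1/r² correction is omitted; the grid size and silhouette shapes are illustrative assumptions.

```python
# Sketch of silhouette back-projection: each voxel accumulates the silhouette
# values along the rays that pass through it (orthographic, two views only).
import numpy as np

def backproject(silhouettes, size=64):
    """silhouettes: dict with 'front' (y-z plane) and 'side' (x-z plane) binary
    images of shape (size, size).  Returns the accumulated volume."""
    volume = np.zeros((size, size, size), dtype=np.float32)
    volume += silhouettes["front"][np.newaxis, :, :]   # rays along the x axis
    volume += silhouettes["side"][:, np.newaxis, :]    # rays along the y axis
    return volume

size = 64
front = np.zeros((size, size), dtype=np.float32); front[16:48, 8:56] = 1.0
side  = np.zeros((size, size), dtype=np.float32); side[20:44, 8:56] = 1.0
vol = backproject({"front": front, "side": side}, size)
occupied = vol >= 2.0    # voxels supported by both views
print(int(occupied.sum()))
```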
  • any other feature space transform can be used.
  • the images are processed as described above with geometry information extracted from the images as follows:
  • the eyes, nose and mouth can be identified using techniques similar to those used at step 121 .
  • (ii) Form triangles by connecting the salient features.
  • the eyes, nose, and mouth of a person in an image may be connected to form a triangle.
  • This module processes other data 118 , if available, together with prior knowledge 112 in order to produce a generic model or part thereof.
  • This module is invoked when there is insufficient information for constructing a user model or part thereof via the other modules 120 , 123 , 124 , and 125 , or if the action factors do not allow the generation of a more accurate model that is conformal to the user through modules 120 , 123 , 124 , and 125 .
  • the information in other data 118 or that provided by the modules 120 , 123 , 124 , and 125 is passed onto a classifier similar to that used at step 151 .
  • in an exemplary embodiment, a Naïve Bayes classifier, a support vector machine (SVM), or a Gaussian Process classifier is used to output the most plausible 3D surface given the information. If only a part of the model (such as a limb) is required by the other modules 120 , 123 , 124 , and 125 , then only the required part is generated using the classifier. If the whole model is required, then the entire user model is generated using the classifier. In an exemplary embodiment, the classifier outputs an exemplar that is a rigged model. The rigged exemplar is then modified, if necessary, to better match the user.
  • the classifier is built using labeled training data. In an exemplary embodiment, this is done using rigged 3D surfaces or meshes that have associated with them labels identifying the age, gender, weight, height, ethnicity, color, apparel size etc. of the corresponding 3D surface or mesh. The labeling can be done manually as it only needs to be done once when building the classifier.
  • the classifier is stored and available as part of prior knowledge 112 . As more and more data becomes available, the classifier is updated at the learning step 132 . In essence, the method 110 is constantly learning and improving its model construction process.
  • the processed information from the modules 120 , 123 , 124 , 125 , and 126 is then fused at the information fusion step 127 .
  • at this step, merging of the outputs of the components of 120 , 123 , 124 , 125 , and 126 takes place.
  • Parts of the skeleton are also joined at the joint locations. For example, in the above example, the full body skeleton is joined with the foot skeleton at the ankle joint. For regions of the body for which data is unavailable, the output of the generic module is used. For regions of the body for which multiple models of similar accuracy exist, the corresponding models are merged in a probabilistic framework. For example, the expected value of the 3D model's surface is computed over all pieces of data available as outputs of 120 , 123 , 124 , 125 , and 126 to produce an estimate of the 3D model of the user's head. In an exemplary embodiment, this is done using Bayesian model averaging; committees, boosting, and other techniques for combining models may also be used.
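A minimal sketch of fusing multiple estimates of the same region by taking a confidence-weighted average of corresponding vertex positions, in the spirit of the model averaging described above; the weights, vertex count and correspondence assumption are illustrative.

```python
# Sketch of probability-weighted fusion of two surface estimates that are
# assumed to be in vertex correspondence (same N vertices, same ordering).
import numpy as np

def fuse_surfaces(surfaces, weights):
    """surfaces: list of (N, 3) vertex arrays in correspondence.
    weights: per-model confidences (e.g. posterior model probabilities)."""
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()                # normalize to sum to one
    stacked = np.stack(surfaces, axis=0)             # (num_models, N, 3)
    return np.einsum("m,mnk->nk", weights, stacked)  # expected vertex positions

head_from_images = np.random.rand(100, 3)
head_from_scan   = np.random.rand(100, 3)
fused = fuse_surfaces([head_from_images, head_from_scan], weights=[0.4, 0.6])
print(fused.shape)   # (100, 3)
```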
  • a preliminary 3D model is instantiated using the output of the information fusion step.
  • the model is named and all the appropriate data structures are updated.
  • the model is also textured at this step. This is done by setting up a constrained boundary value problem (BVP) with constraints defined by the feature point correspondence and using texture from the image(s) provided by the user. In an exemplary embodiment, this is done using a technique similar to that presented in [45] for the face.
  • the feature point correspondence between points on the 3D model and those in the images is obtained using the segmentation results from step 146 . Alternatively, this correspondence data may be obtained through a user interface. An exemplary embodiment of such a user interface is discussed in reference to FIG. 11 .
  • a texture map for the face is obtained by unwrapping a texture map from the input video sequence or input images using a technique similar to the texture mapping technique described in [46].
  • the images may be processed to complete missing or occluded regions (such as occlusion by hair, glasses, etc.) using shape space priors and symmetry.
  • Skin tone is also identified at this step.
  • regions representing skin can be identified by converting the image to a representation in the HSV (Hue, Saturation, Value) color space or RGB (Red, Green, Blue) color space. Skin pixels have characteristic HSV and RGB values. By setting the appropriate thresholds for the HSV or RGB parameters, the skin regions may be identified.
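A minimal sketch of identifying skin regions by thresholding in HSV space with OpenCV. The threshold values are common illustrative choices, not values from the specification, and would need tuning for the lighting and skin-tone variation discussed here.

```python
# Sketch of skin-region identification by HSV thresholding.
import cv2
import numpy as np

def skin_mask(bgr_image):
    """Return a binary mask (255 where the pixel is classified as skin)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)      # assumed lower HSV bound
    upper = np.array([25, 255, 255], dtype=np.uint8)   # assumed upper HSV bound
    return cv2.inRange(hsv, lower, upper)

# Example usage on an image loaded from disk:
# image = cv2.imread("face.jpg")
# mask = skin_mask(image)
# skin_only = cv2.bitwise_and(image, image, mask=mask)
```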
  • the skin reflectance model may incorporate diffuse and specular components to better identify the skin.
  • the variation of the pixel values (and higher order statistics) for example in RGB space can be used to estimate the skin texture.
  • This texture is then used to fill in skin surfaces with unspecified texture values, for example, ears that are hidden behind hair.
  • skin texture is extracted from the face and used wherever necessary on the head and the body since the face of a user is usually visible in the image or video.
  • texture is computed and mapped for teeth, hair, and the iris and pupil of the eyes. If image or video data is unavailable, a generic texture is used. The choice of a generic texture is based on other information provided by the user as part of other data 118 (eg. age, race, gender, etc.), if available.
  • the model is then optimized at step 129 .
  • Optimization involves improving the model to better match the user. Optimization procedures similar to those employed at step 125 and 153 are used at a global scale, if necessary or possible, again depending on user data and the action factors. Consistency checks are also made to ensure that scale and orientation of the different regions of the model are plausible and appropriate corrections are made if necessary. Textures on the model are also optimized at this step if the action factors allow. This involves optimizations such as reilluminating the model so that the illumination is globally consistent and so that the model can be placed in new illumination contexts. This is done in an exemplary embodiment using techniques similar to those used in [19, 20, 47].
  • Forward and backward projection may be applied in a stochastic fashion to ensure consistency with the 2D input image, if provided, and to make finer modifications to the model, if necessary depending on action factors.
  • the comparison of the projected 3D model and the 2D image may be done in one or more feature space(s), for example in edge space. All of the actions performed are taken depending on the action factors as described earlier.
  • the method 110 then proceeds to step 130 at which the model is detailed.
  • the photorealism of the model is enhanced and any special effects that are required for NPR are added based on the action factors.
  • the photorealism is enhanced, for example, by using bump maps for, say, wrinkles and incorporating subsurface scattering for skin. Facial hair, facial accessories and finer detail are also added to the model.
  • Method 110 then proceeds to the user modification step 131 at which the user is allowed to make changes to the model if desired.
  • changes include, in an exemplary embodiment, changes to the skin tone, proportions of various body parts, textures (for example, the user may add scars, birthmarks, henna, etc.), etc.
  • An easy to use user interface allows the user to make such changes as described later in this document.
  • Users are also allowed to set default preferences for their model at this point. For instance, they may choose to have a photorealistic model or a nonphotorealistic (NPR) model as their default model (NPR models may be multi-dimensional: 1-D, 2-D, 2.5-D, 3-D, 4-D or higher). Users can also create several versions of their NPR model based on their specific taste.
  • NPR models can be constructed by simply applying a new texture or using algorithms such as those described in [48-50].
  • the method may ask the user for assistance.
  • the user is allowed to make changes to the model at any time.
  • the model can be updated accordingly.
  • newer versions of the software are released, newer, more accurate versions of the model may be created using the information already supplied by the user or prompting the user to provide more (optional) information.
  • All the models created by the user are stored and the user is allowed to use any or all of them at any time.
  • the models created by the user are stored in the user database 80 and are also cached on the client side 14 and 16 for performance purposes.
  • the model generated before user modifications as well as the user modifications and user data 111 are passed onto the learning step 132 , the output of which is used to update the prior knowledge 112 in order to improve the model construction method 110 over time.
  • This can be done using reinforcement learning and supervised learning techniques such as Gaussian process regression.
  • the manifolds and the classifier used in the model construction process are updated.
  • a model that is created is significantly further away in distance from the existing exemplars of the classifier and has been found frequently, it is added as a new exemplar.
  • a user model is created.
  • the method assesses the quality of the data, for example, the resolution of the images, the poly count of the meshes, etc., in order to determine if the newer data can improve the model. If it is determined that the new data can improve the model, the method 110 processes the data to improve the quality of the user model and a new version of the model is created and stored.
  • the measurements of various body parts can be updated at any time as the user ages, gains/loses weight, goes through maternity etc.
  • the method 110 described above can be used for building models of other objects.
  • 3D objects for use in the virtual world.
  • the user can identify the class of the object (such as a pen, a laptop, etc.) for which a model is being created.
  • the class of the object for which a model is being created is useful for selecting the appropriate priors for model construction for the given object from the prior knowledge 112 .
  • the class of the object being considered can be automatically determined as discussed with reference to FIG. 49Q .
  • a generative model for motion is used.
  • users are allowed to tune various parameters corresponding to a walking style such as a masculine/feminine walking style, a heavy/light person walking style, a happy/sad walking style etc.
  • Such generative models are learnt, in an exemplary embodiment, using Gaussian process models with style and content separation using a technique similar to that used in [51].
  • When the action factors are very limiting, for example, on limited platforms such as a cell phone or a limited web browser, several approximations may be used to display a 3D model.
  • on rotating a user model, the user is presented with a 3D model of the user from a quantized set of views, i.e. if a user rotates his/her viewpoint, the viewpoint nearest to this user-selected viewpoint from the set of allowed viewpoints is chosen and displayed to the user. In this way, an entire 3D scene can be represented using only as many viewpoints as the system permits, thereby allowing a more compact and responsive user experience.
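A minimal sketch of snapping a requested rotation to the nearest allowed viewpoint; representing viewpoints as azimuth angles in degrees is an illustrative simplification of the quantized-view scheme described above.

```python
# Sketch of choosing the nearest precomputed viewpoint when rendering is
# limited (e.g. on a phone).  Viewpoints are azimuth angles in degrees.

ALLOWED_VIEWPOINTS = [0, 45, 90, 135, 180, 225, 270, 315]   # precomputed views

def nearest_viewpoint(requested_azimuth):
    """Return the allowed viewpoint closest to the user's requested rotation."""
    def angular_distance(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min(ALLOWED_VIEWPOINTS, key=lambda v: angular_distance(v, requested_azimuth))

print(nearest_viewpoint(100))   # -> 90: display the precomputed 90-degree view
print(nearest_viewpoint(350))   # -> 0 (wraps around)
```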
  • precomputed views of the model corresponding to different viewpoints are used.
  • the apparel on a generic user model of a given size and the corresponding fit info is precomputed for various parameters (for example, for different apparel sizes) and the appropriate view is displayed to the user.
  • the view may be an image or an animation such as one showing the user walking in a dress.
  • static backgrounds may be used instead of dynamic ones.
  • a quantized version of the environment may be displayed i.e. as with the case of the user model, when the user chooses to navigate to a certain viewpoint, the closest available viewpoint from a set of allowed viewpoints for the environment is chosen and displayed to the user.
  • Users can also choose to create a strictly 2D user model and try out apparel in 2D. This is one of the several options available for NPR models. In an exemplary embodiment, this is done by invoking the generic module 126 with a 2D option for the classifier i.e. the output of the classifier is a 2D rigged mesh.
  • the 2D classifier is built using the same technique as described for the 3D models but using 2D rigged models instead. Users can also draw a model of themselves. This can then be either manually rigged through a user-interface or automatically using a 2D form of the technique used in [31], in an exemplary embodiment. Users also have the option of creating their own 3D models, and using them for trying out apparel and for various entertainment purposes such as playing games and creating music videos containing their user model.
  • an application programming interface may be available for developers to build applications using this data.
  • an application could use this data to determine items that fit a user as a user browses a catalog, as described later.
  • a mobile device or cell phone application could allow users to scan a bar code or an RFID (radio frequency identification) tag on an apparel item in a real store and see if the apparel fits the user. (Such scanning of bar codes or RFIDs and looking up of repositories can have other applications, such as scanning a food item to check if it is consumable by the user, i.e. its ingredients satisfy the dietary restrictions of the user.)
  • FIGS. 7A-D illustrate protocols for collaborative interaction in exemplary embodiments. These protocols can be used for a number of applications. These protocols are described next for the modes of operation of a Shopping TripTM. Other applications based on these protocols are described later in this document.
  • a user may initiate a shopping trip at any time. There are four modes of operation of a shopping trip: regular, asynchronous, synchronous and common. In the regular mode, a user can shop for products in the standard way—browse catalogues, select items for review and purchase desired items. Whereas the regular mode of shopping involves a single user, the asynchronous, synchronous and common modes are different options for collaborative shopping available to users. In the asynchronous mode, the user can collaborate with other shoppers in an asynchronous fashion.
  • the asynchronous mode does not require that the other shoppers with whom the user wishes to collaboratively shop be online.
  • the user can share images, videos, reviews and other links (of products and stores for instance) they wish to show other users (by dragging and dropping content into a share folder in an exemplary embodiment). They can send them offline messages, and itemized lists of products sorted according to ratings, price or some other criteria.
  • Any share or communication or other electronic collaborative operation can be performed without requiring other collaborators to be online, in the asynchronous mode at the time of browsing.
  • the synchronous and common modes require all collaborating members to be online and permit synchronized share, communication and other electronic collaborative operations. In these modes, the users can chat and exchange messages synchronously in real-time. In the synchronous mode, ‘synchronized content sharing’ occurs.
  • Reference is made to FIG. 20 to describe synchronized content sharing in an exemplary embodiment.
  • Users involved in synchronized collaboration can browse products and stores on their own.
  • ‘Synchronized content sharing’ permits the user to display the products/store view and other content being explored by other users who are part of the shopping trip by selecting the specific user whose browsing content is desired, from a list 244 as shown in FIG. 20 .
  • consider a shopping trip session involving two users—user 1 and user 2 , browsing from their respective computing devices and browsers.
  • user 1 and user 2 are browsing products by selecting “My view” from 244 .
  • user 1 now selects user 2 from the view list 244 .
  • the same content is displayed on user 1 's display screen thereby synchronizing the content on the display screens of users 1 and 2 .
  • User 1 may switch back to her view whenever she wants and continue browsing on her own.
  • user 2 can view the content of user 1 by selecting user 1 from the switch view list.
  • the common mode can assume two forms. In the first form, a user is appointed as the ‘head’ from among the members of the same shopping trip. This head navigates/browses products and stores on their display screen and the same view is broadcast and displayed on the screens of all users of the same shopping trip.
  • in the second form, all users can navigate/browse through product, store or other catalogues and virtual environments; the information/content is delivered in the sequence in which it is requested (to resolve user conflicts), and the same content is displayed on all user screens simultaneously using the protocol that is described below.
  • the system in FIG. 20 involving synchronous collaboration between users may be integrated with a ‘One Switch View’ (OSV) button that allows users to switch between user views just by pressing one button/switch, which may be a hardware button or a software icon/button.
  • the user whose view is displayed on pressing the switch is the next one on the list after the user whose view is currently being displayed, in an exemplary embodiment.
  • This OSV button may be integrated with any of the collaborative environments discussed in this document.
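  • As an illustrative aid, the OSV behaviour described above could cycle through the view list as in the following hypothetical sketch, wrapping back to the first entry after the last:

```python
# Hypothetical sketch of the One Switch View (OSV) behaviour: each press of the
# switch advances to the next member in the view list 244, wrapping around.
def next_view(view_list: list[str], current: str) -> str:
    i = view_list.index(current)
    return view_list[(i + 1) % len(view_list)]


views = ["My view", "user2", "user3"]
assert next_view(views, "My view") == "user2"
assert next_view(views, "user3") == "My view"  # wraps back to the first entry
```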
  • in FIG. 7A, the regular mode of operation of a shopping trip is shown.
  • An instance of a client 201 in the regular mode of operation makes a request to the server application 22 to view a product or a store or other data.
  • the request can be made using an HTTP request, RMI (remote method invocation) or RPC (remote procedure call).
  • the client instance then receives a response from the server.
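  • The regular mode therefore reduces to a single request/response exchange, as in the following hypothetical sketch; the portal URL and product endpoint are illustrative assumptions only.

```python
# Hypothetical sketch of the regular mode: a single client issues one request
# and renders the response; no other shopping trip members are involved.
import json
import urllib.request

SERVER = "https://portal.example.com"  # illustrative portal server URL


def view_product(product_id: str) -> dict:
    with urllib.request.urlopen(f"{SERVER}/products/{product_id}") as resp:
        return json.loads(resp.read())
```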
  • in FIG. 7B, an asynchronous mode of operation is shown in an exemplary embodiment.
  • the user instance 201 makes a request to the server.
  • a list 203 of shopping trip members and their information is maintained on the server for any given user.
  • the list 203 is a list of users that have been selected by the client C 6111 to participate in the shopping trip.
  • the server then sends a response to the client 201 with the requested content. If the item is tagged for sharing, the server adds it to a list of shared items for that user.
  • Other users on the shopping trip may request to view the shared items, upon which the server sends the requisite response to this request. For instance, a user (C 6111 ) may view a product while browsing and may tag it as shared or add it to a share bin/folder. Other users (C 6742 , C 5353 ) may then view the items in that bin.
  • the shopping trip members list 203 may also be stored locally on the client's side in an alternative exemplary embodiment.
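  • A hypothetical server-side sketch of the asynchronous mode follows; the members list 203 and the share bins are kept as simple in-memory structures here, purely for illustration.

```python
# Hypothetical server-side sketch of the asynchronous mode: the members list
# 203 and per-user share bins are simple in-memory structures.
shopping_trip_members: dict[str, list[str]] = {"C6111": ["C6742", "C5353"]}
shared_bins: dict[str, list[str]] = {}


def handle_view_request(client: str, item_id: str, tag_for_sharing: bool) -> str:
    if tag_for_sharing:
        # The viewed item is added to the requesting client's share bin.
        shared_bins.setdefault(client, []).append(item_id)
    return f"content for {item_id}"  # normal response to the requester


def handle_shared_items_request(client: str, owner: str) -> list[str]:
    # Other trip members may fetch the bin at any later time; the owner
    # does not need to be online.
    if client in shopping_trip_members.get(owner, []):
        return shared_bins.get(owner, [])
    return []
```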
  • Reference is next made to FIG. 7C, where the synchronous mode of shopping is shown in an exemplary embodiment.
  • when a client instance 201 makes a request to the server to view a product, for example, an appropriate response is sent not only to the client requesting the information but also to all members on the shopping trip list who have selected that client's browsing contents (refer to FIG. 20 ).
  • the synchronous mode works as follows: (1) A user, say USER 1 , visits a product page. (2) The product is registered in a database as USER 1 's last viewed page. (3) Any member who has selected USER 1 's view (refer to FIG. 20 ) is then served that page, so that the same content appears on their display screen.
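  • The bookkeeping behind the synchronous mode can be sketched as follows; the structures and names are illustrative assumptions rather than the actual implementation.

```python
# Hypothetical sketch of the synchronous mode bookkeeping: the server records
# each user's last viewed page and serves that page to any member who has
# selected that user's view from list 244.
last_viewed: dict[str, str] = {}    # user -> last viewed page
selected_view: dict[str, str] = {}  # user -> whose view they are watching


def on_visit(user: str, page: str) -> None:
    last_viewed[user] = page        # step (2): register the page in the database


def render_for(user: str) -> str:
    target = selected_view.get(user, user)  # "My view" by default
    return last_viewed.get(target, "<empty>")


on_visit("user1", "/products/sku-1234")
selected_view["user2"] = "user1"            # user2 selects user1 from list 244
assert render_for("user2") == "/products/sku-1234"
```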
  • Reference is next made to FIG. 7D, where the common mode of a shopping trip is shown in an exemplary embodiment.
  • in FIG. 7D, it is shown that several clients can simultaneously make a request and simultaneously receive a response. At any given time, any of the clients can send a request to the server to view an item, to explore an item (as discussed in reference to FIG. 36 ), etc., in an exemplary embodiment.
  • the following is a description of the communication protocol for the common mode of operation of a shopping trip.
  • When a client sends a request to the server, it also monitors a channel on the server (which could be a bit, a byte or any other data segment on the server, in an exemplary embodiment) to see if there are any simultaneous requests made by other users. If no simultaneous requests are detected, the client completes the request and the server responds to all clients in the shopping trip with the appropriate information requested. For instance, if a catalogue item is viewed by one of the users, all other clients see that item. As another example, if a client turns over a 3D item, then all other clients see the item turned over from their respective views. If, however, a simultaneous request is detected at the channel, then the client aborts its request and waits for a random amount of time before sending the request again.
  • the random wait time increases with the number of unsuccessful attempts. If the response duration is lengthy, then requests are suspended until the response is completed by the server, in an exemplary embodiment.
  • a conflict management scheme may be implemented wherein the client also monitors the server's response for a possible conflict and sends the request when there are no conflicts.
  • the server may respond to requests if there are no conflicts and may simply pause if there is a conflict.
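  • The common-mode request protocol described above resembles a contention scheme with random backoff. The following hypothetical sketch assumes an illustrative channel and server interface, not taken from the actual system.

```python
# Hypothetical sketch of the common-mode request protocol: the client checks a
# contention channel on the server, backs off for a random and growing interval
# when a simultaneous request is detected, and retries.
import random
import time


def send_common_request(channel, server, request, max_attempts: int = 8) -> bool:
    for attempt in range(max_attempts):
        if not channel.busy():         # no simultaneous request detected
            server.broadcast(request)  # all trip members receive the response
            return True
        # Conflict detected: abort and wait a random time that grows with the
        # number of unsuccessful attempts before retrying.
        time.sleep(random.uniform(0, 0.1 * (2 ** attempt)))
    return False
```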
  • the user may tag an item for sharing and add it to a bin along with a video, audio and/or text message. When other users request to see items in this bin, they are shown the product along with the audio, video or text message.
  • the audio channels for all the users are added up and the video channel for whichever user's view is selected ( FIG. 20 ) is shown.
  • the audio channels from the users on the shopping trip are added up and presented to all the users while the video stream may correspond to the user who has just completed sending a request successfully through the common mode communication protocol described above.
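  • The audio/video handling described above can be sketched as a simple mix-and-select rule; the per-sample summation below is an illustrative simplification.

```python
# Hypothetical sketch of the mixing rule: audio samples from all trip members
# are summed (and clipped), while the video shown is that of the selected user.
def mix_audio(frames: list[list[float]]) -> list[float]:
    # frames: one list of samples per user, all of equal length
    mixed = [sum(samples) for samples in zip(*frames)]
    return [max(-1.0, min(1.0, s)) for s in mixed]  # clip to [-1, 1]


def pick_video(streams: dict[str, object], selected_user: str):
    return streams[selected_user]
```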
  • Sessions may be saved as described before.
  • the views and the timeline during any session can be annotated. These pieces of information are cross-referenced to enable the user to browse by any of the pieces of information and view the corresponding information.
  • the clients may also interact in a peer to peer fashion as opposed to going through a server.
  • in the synchronized mode, if a client makes a request for a webpage to the server, then that information can be passed on to the other clients on the shopping trip via a peer-to-peer protocol.
  • a user may also be engaged in multiple shopping trips (in multiple shopping trip modes) with different sets of users. Additionally, sub-groups within a shopping trip may interact separately from the rest of the group and/or disjoin from the rest of the members of the shopping trip and then later resume activities with the group.
  • while operating in any of these modes, the user has the option to turn on an ‘automatic’ mode feature whereby the system engages the user in a guided shopping experience.
  • the user may select items or categories of items that the user is interested in and specify product criteria, preferences and other parameters.
  • the user may also specify stores that the user is interested in browsing. Once this is done, the system walks the user through relevant products and stores automatically for a simulated guided shopping experience.
  • the automated mode may be guided by a virtual character or a simulated effigy or a real person.
  • the user can indicate at any time if she wishes to switch to the manual mode of shopping.
  • the modes of operation presented here for shopping can be applied to other collaborative applications. For instance, going on a field trip, or virtual treasure hunt, or sharing applications as discussed with reference to FIG. 49 O.
  • the operation of the system 10 is now described with examples provided through sample screen shots of the use of the system 10 .
  • Reference is made to FIG. 8 and FIG. 31, where a sample main page screen 250 is shown, in an exemplary embodiment.
  • the sample main screen 250 is used for purposes of example.
  • the main screen 250 in an exemplary embodiment presents the user with various options.
  • the options in an exemplary embodiment include the menu options 252 .
  • the options menu 252 allows a user to select from the various options associated with the system 10 that are available to them.
  • the options menu allows a user to select tabs where they can specify further options related to their respective environment 620 , friends 622 and wardrobe 624 as has been described in FIG. 5 .
  • Users can search the site for appropriate content and for shopping items using the search bar 632 ; they can browse for items and add them to their shopping trolley 628 which dynamically updates as items are added and removed from it; and complete purchase transactions on the checkout page 626 .
  • the options that have been provided here have been provided for purposes of example, and other options may be provided to the user upon the main page screen 250 .
  • users can choose and set the theme, layout, look and feel, colours, and other design and functional elements of the main and other pages associated with their account on system 10 , in the preferences section 630 .
  • users can choose the colour scheme associated with the menu options 252 and the background of the main and other pages.
  • the local application described further below is launched on clicking the button 254 .
  • the status bar 256 displays the command dressbot: start which appears as the local application is started.
  • Button 258 starts the model creation process.
  • a notification 634 is displayed inside the browser window 250 .
  • users can engage, with their virtual model and with other users, in collaborative activities which include, in an exemplary embodiment, participating in virtual tours and visiting virtual destinations 636 ; taking part in virtual events 638 such as fashion shows, conferences and meetings, etc., all or some of which may support elements of augmented reality.
  • a media player or radio 640 may be available or linked in the browser in an exemplary embodiment. Featured apparel items 642 and other current offers, news or events may also appear on the main page 250 in an exemplary embodiment.
  • Reference is made to FIGS. 9 to 13 to better illustrate the process by which a 3D user model is created.
  • the 3-D user model is created by first receiving user input, where the user supplies respective images of themselves as requested by the system 10 .
  • Reference is made to FIG. 9, where a sample image upload window is shown in an exemplary embodiment.
  • the image upload window is accessible to the user through accessing the system 10 .
  • the system 10 is accessed through the Internet.
  • the sample upload window 260 is used to upload images of the user that are then used by the system 10 to generate the user model.
  • the user is requested to upload various images of themselves.
  • the user in an exemplary embodiment uploads images of the facial profile, side perspective and a front perspective.
  • the user is able to upload the images from their respective computing device or other storage media that may be accessed from their respective device.
  • the client application 16, resident on or associated with the computing device, causes a client application window 270 to be displayed to the user when the user model is being created.
  • the client application can request and submit data back to the server.
  • the protocol for communication between the application 16 and server 20 is the HTTP protocol in an exemplary embodiment.
  • the application 16 in an exemplary embodiment initiates authenticated post requests to a PHP script that resides on the portal server and that script relays the requested information back to the application 16 from the server 20 . People are comfortable with shopping on the internet using a browser and with monetary transactions through a browser.
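  • As an illustration of the request/relay pattern described above, the following hypothetical sketch issues an authenticated POST to an assumed relay script on the portal server; the script name, URL and token handling are assumptions, not taken from the actual system.

```python
# Hypothetical sketch of the relay pattern: the application issues an
# authenticated POST to an assumed relay script on the portal server.
import json
import urllib.parse
import urllib.request

RELAY_URL = "https://portal.example.com/relay.php"  # illustrative script location


def fetch_from_portal(session_token: str, resource: str) -> dict:
    data = urllib.parse.urlencode({"token": session_token, "resource": resource}).encode()
    req = urllib.request.Request(RELAY_URL, data=data, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```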
  • a rich 2D and/or 3D environment is desired.
  • Such an environment can be a computational burden on the portal server.
  • the computationally intensive rendering aspects have been pushed to the client side as an example.
  • this computational efficiency can be achieved through the use of a local stand-alone application or a browser plug-in, or run within a browser, or a local application that interacts with the browser and portal server 20 .
  • the current implementation, in an exemplary embodiment, involves a local application 271 that interacts with the browser and the portal server and is a component of the client application 270 .
  • the local application and the browser interact with each other and also with the portal server 20 , which in turn interacts with other components of the internet.
  • Each of the modules of the portal server 20 may have a corresponding module on the client application.
  • This may be a part of the local application 271 , the browser, or a combination of the two.
  • the browser and the local application interact in an exemplary embodiment, via protocols like HTTP and this communication may take place via the portal server 20 or directly.
  • the purpose of the local application 271 is to enable computationally intensive tasks to be carried out locally such as computations required for 3D renderings of the apparel, the user's model and the environments. This gives the appearance of running 3D graphics in a browser.
  • a callback function is implemented within the local application that listens for such notifications.
  • the appropriate callback function is invoked.
  • the gathering of information from the server is done using HTTP.
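  • The callback mechanism described above could be organized as in the following hypothetical sketch, where callbacks are registered by notification type and invoked when a matching notification is gathered from the server.

```python
# Hypothetical sketch of the callback mechanism: callbacks are registered by
# notification type and invoked when a matching notification arrives.
from typing import Callable

callbacks: dict[str, Callable[[dict], None]] = {}


def register_callback(kind: str, fn: Callable[[dict], None]) -> None:
    callbacks[kind] = fn


def dispatch(notification: dict) -> None:
    fn = callbacks.get(notification.get("kind", ""))
    if fn is not None:
        fn(notification)  # e.g. trigger a re-render of the 3D user model


register_callback("model_updated", lambda n: print("re-render model", n["model_id"]))
dispatch({"kind": "model_updated", "model_id": 42})
```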
  • the application window 270 displays to the user the current state of the model, and allows the user to perform various modifications to the user model, as detailed below.
  • the user is able to modify the respective measurements that are associated with a preliminary user model that has been generated.
  • the measurements specified by the user may be specific measurements that more closely resemble the user's physical profile.
  • the measurements that are specified may also be prospective measurements, where the user may wish to specify other measurements.
  • the user may specify measurements that are larger than their current measurements, if for example, they wish to model maternity clothes.
  • the user may specify measurements that are smaller than their current measurements, thereby providing prospective looks with regards to what a user may look like if they were to lose weight.
  • the head and face region of the user's model is simulated by the modeling module 50 utilizing images of the user's face taken from different angles.
  • the face generation process may be completely automated so that the modeling module 50 synthesizes the model's face by extracting the appropriate content from the user's images without any additional input from the user or it may be semi-automated requiring additional user input for the model face generation process.
  • Reference is made to FIG. 11, where a sample facial synthesis display window 280 is shown illustrating a semi-automated facial synthesis procedure.
  • the reference image 282 shows the user where to apply markers on the face i.e., points on the face to highlight.
  • the sample image 284 in an exemplary embodiment shows points highlighting regions of the user's face corresponding to the markers in the reference image 282 .
  • the modeling module 50 may require additional inputs from the user to further assist the face generation process. This input may include information on facial configuration such as the shape or type of face and/or facial features; subjective and/or objective input on facial feature dimensions and relative positions and other information.
  • the type of input acquired by the modeling module 50 may be in the form of text, speech or visual input. Additionally, the modeling module 50 may provide options to the user in order to specify various areas/points upon the respective area of the model that they wish to make further modifications/refinements/improvements to.
  • To better illustrate how the user may make modifications to the user model in an exemplary embodiment, reference is now made to FIGS. 12 to 13 .
  • in FIG. 12A, a sample measurement window 290 is shown, in an exemplary embodiment.
  • the measurement window 290 allows the user to specify empirical data that is used to generate or modify the user model.
  • the user is able to specify the measurements through aid of a graphical representation that displays to the user the area or region for which a measurement is being requested.
  • videos and/or audio may be used to assist the user in making measurements.
  • Measurements associated with a user's waist have been shown here for purposes of example as the user may specify measurements associated with other areas of their body as described above.
  • the user may specify various modifications of the user model that are not limited to body size measurements. Such modifications may include, but are not limited to, apparel size, body size, muscle/fat content, facial hair, hair style, hair colours, curliness of hair, eye shape, eye color, eyebrow shape, eyebrow color, facial textures including wrinkles and skin tone.
  • Reference is made to FIGS. 12B and 12C, where sample images of a constructed model, 300 and 302 , are shown, respectively.
  • the model image window allows the user to inspect the created user model, by analyzing various views of the created model.
  • Various features are provided to the user to allow the user to interact with the created model, and to be able to better view various profiles associated with the model.
  • Features 303 , 304 , 305 and 306 are depicted as examples.
  • Pressing button 306 presents the user with options to animate the user model or the environment.
  • the user may be presented with animation options on the same page or directed to a different page.
  • the user may be presented with specific preset expressions/actions in a menu, for example, to apply on their user model.
  • the user may animate their model through text/speech commands or commands expressed via other means.
  • the user may also choose to synchronize their model to their own expressions/actions which are captured via a video capture device such as a webcam for example.
  • the user is also provided with environments to embed the character in as it is animated.
  • Icon 306 allows the user to capture images of the model, or to record video sequences of model animation, which may then be shared by the user with other users.
  • the facial icon 303 when engaged causes the face of the generated model to be zoomed in on.
  • the body icon 304 when engaged causes the entire user model to be displayed on the screen.
  • non photorealistic renderings 310 A, 310 B, and 310 C are shown.
  • the non photorealistic renderings display a series of images, illustrating various views that may be seen of a user model.
  • the respective non-photorealistic renderings illustrate the various rotations of the user model that the user may view and interact with.
  • non photorealistic renderings 310 A and 310 B illustrate how the user may modify the wrist dimensions of the model.
  • the user may select areas on the user model where they wish to modify a respective dimension.
  • FIG. 13A shows the wrist being localized via a highlighted coloured (hotspot) region 312 as an example.
  • the dialog box 313 containing slider controls can be used by the user to adjust measurements of the selected body part and is shown as an exemplary embodiment.
  • FIG. 13B shows more sample images of how users can make body modifications directly on the user model using hotspot regions 312 .
  • FIG. 13C shows a sample ruler for taking measurements of the user model which may be displayed by clicking on a ruler display icon 316 .
  • This ruler allows the user to take physical measurements of the user model and to quickly check measurements visually.
  • the ruler may also prove useful to the user in cases where they wish to check how a given apparel or product affects original measurements.
  • the user may try on different pairs of shoes on the user model and check how much the height changes in each case.
  • FIG. 14 where a sample environment manager window 330 is shown in an exemplary embodiment.
  • the environment module as described above, allows a user to choose respective environment backgrounds.
  • the system 10 has default backgrounds that the user may select from.
  • the user is provided with functionality that allows them to add a new environment. By uploading an image and providing it with a name, the user is able to add an environment to the list that they may select from.
  • Various types of environments may be added, including static environments, panoramic environments, multidimensional environments and 3-D environments.
  • a 3D environment can be constructed from image(s) using techniques similar to those presented in [44].
  • Reference is made to FIG. 15A, where a sample user model environment image 340 is shown containing a photorealistic user model.
  • the image 340 is shown for purposes of example, and as explained, various background environments may be used.
  • the user model that is shown in FIG. 15A has been customized in a variety of areas. Along with the apparel that the user has selected for their respective user model, the user is able to perform different customizations of the model and environment, examples of which are shown here.
  • as indicated by label 342 , the user has customized the hair of the user model. The customization of a user model's hair may include the hair style and colour.
  • the environment may be customized, including the waves that are shown in the respective beach environment that is illustrated herein.
  • Reference is made to FIG. 15B, where some aspects of collaborative shopping are illustrated.
  • User model views may be shared between users. Users may also interact via their model in a shared environment.
  • window 354 shows two user models in a shared window between users.
  • Product catalogue views 355 may also be shared between users. For example, views of mannequins displaying apparel in product display window 355 may be shared with other users using the share menu 358 .
  • views of shopping malls 356 may be shared with other users as the user is browsing a virtual mall or store.
  • FIG. 32 depicts an environment where a fashion show is taking place and where one or more users can participate with their virtual models 650 .
  • the environment settings, theme and its components 652 can be changed and customized by the user. This is a feature that designers, professional or amateur, and other representatives of the fashion industry can take advantage of to showcase their products and lines. They may also be able to rent/lease/buy rights to use the virtual models of users whom they would like to have model their products. Users may also be able to purchase/obtain tickets and attend live virtual fashion shows with digital models featuring digital apparel whose real and digital versions could be bought by users.
  • FIG. 33 shows a living room scene which can be furnished by the user with furniture 654 and other components from an electronic catalogue in an exemplary embodiment.
  • Users may use their model 650 to pose or perform other activities to examine the look and feel of the room, the setting and furnishing, which they may replicate in their own real rooms.
  • This feature is further representative of ‘interactive’ catalogues where users are not just limited to examining different views of a product before purchasing it from an electronic catalogue but are able to examine it in a setting of their choice, interact with it via their virtual model or directly, acquire different perspectives of the product in 3D, and get acquainted with enhanced depictions of the look and feel of the product. Environments will also be available to users that change with time or other properties.
  • an environment that represents the time of day may change accordingly, showing a daytime scene (possibly with the sun and other daytime environment components) during daylight hours, changing to represent the way the light dims during the evening, and subsequently changing into a night scene with the appropriate lighting, other environmental conditions and components, in an exemplary embodiment.
  • Environments that reflect the weather would also be available.
  • Retailers would have the opportunity to make available their apparel digitally with the appropriate environments.
  • galoshes, raincoats, umbrellas and water-resistant watches and jewelry may be featured in a rainy scene.
  • Users may also customize/program scenes to change after a certain period of time, in an exemplary embodiment. For instance, they can program a given scene or scene components to change after a fixed period of time.
  • User models may also be programmed to reflect changes over time such as ageing, weight loss/gain etc.
  • Reference is made to FIG. 34, where a sample virtual model is shown in a customized music video that the user has generated.
  • This figure is shown in an exemplary embodiment and illustrates the different activities the user can engage their virtual model in, the different environments they can choose to put their model in, as well as the expression/action animation control they have over their virtual character model.
  • Display window 672 shows the virtual model singing in a recording studio;
  • display window 674 shows the model driving in a sports car while
  • display window 676 shows the model waving and smiling.
  • the user can choose to combine the different scenes/animations/frames to form a music video as depicted in FIG. 34 .
  • Another feature is a voice/text/image/video to song/music video conversion.
  • Users can upload audio/video/text to the system and the system generates a song or a music video of the genre that the user selects.
  • a user can enter text and specify a song style such as ‘country’ or ‘rock’ and other styles.
  • the system generates a voice that sings the written text in the specified style.
  • the voice may also be selected (based on samples provided by the system) by the user or picked by the computer. (Given some content, the system can find related words to make rhymes while adhering to the provided content. In an exemplary embodiment, this can be done by analyzing phonemes and looking up a thesaurus to find rhyming words where necessary.)
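  • As a toy illustration of the phoneme-based rhyming idea mentioned above, the sketch below compares word-final phonemes using a tiny hand-written table that stands in for a real pronunciation dictionary and thesaurus lookup; all entries are assumptions for illustration.

```python
# Toy illustration only: compare word-final phonemes to decide whether two
# words rhyme; the tiny table stands in for a real pronunciation dictionary,
# and the candidate list stands in for a thesaurus lookup.
PHONEMES = {
    "light": ["L", "AY", "T"],
    "night": ["N", "AY", "T"],
    "bright": ["B", "R", "AY", "T"],
    "moon": ["M", "UW", "N"],
}


def rhymes(a: str, b: str, tail: int = 2) -> bool:
    return PHONEMES[a][-tail:] == PHONEMES[b][-tail:]


def pick_rhyme(word: str, candidates: list[str]):
    for c in candidates:
        if c != word and rhymes(word, c):
            return c
    return None


assert pick_rhyme("light", ["moon", "night"]) == "night"
```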
  • the system 10 may provide the user with pre-rendered scenes/environments where the music and environment cannot be manipulated to a great degree by the user but where rendering of the character model can occur so that it can be inserted into the scene, its expressions/actions can be manipulated and it can be viewed from different camera angles/viewpoints within the environment.
  • Users can save and/or share with other users the various manifestations of their user model after manipulating/modifying it and the animation/video sequence containing the model in various file formats.
  • the modified user model or the animation/video sequence can then be exported to other locations including content sharing sites or displayed on the profile or other pages.
  • users may want to share their vacation experiences with other users.
  • users can show their character model engaged in different activities (that they were involved in during their vacation), against different backdrops representing the places they visited. This could also serve as an advertising avenue for the tourism industry.
  • the model may be animated to reflect the status of the user and then displayed on the profile page to indicate to other members the status of the user. For instance, the character model may reflect the mood of the user—happy, excited, curious, surprised etc.
  • the model may be shown running (image/simulation/video) in a jogging suit to indicate that the user is out running or exercising, in one exemplary embodiment.
  • the brand of the digital apparel may appear on the apparel in which case featuring the model on the profile page with the apparel on would serve as brand advertisement for that apparel.
  • Skin color can be changed by changing HSV or RGB and skin texture parameters as discussed with reference to step 128 in FIG. 6A .
  • Skin embellishments such as henna or natural skin pigmentation such as birthmarks etc. can be added by using an image of the respective object and warping it onto the user model where placed by the user.
  • Color palettes (a colour wheel for example) may be provided with different variations of skin tones for users to pick a skin tone. Similar palettes may exist for makeup application.
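  • The HSV-based skin tone adjustment mentioned above (with reference to step 128 in FIG. 6A ) can be sketched as follows; the sample colour and offsets are illustrative assumptions only.

```python
# Hypothetical sketch of an HSV-based skin tone adjustment: convert the skin
# colour to HSV, shift hue/saturation/value, and convert back.
import colorsys


def adjust_skin_tone(rgb, dh=0.0, ds=0.0, dv=0.0):
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + dh) % 1.0
    s = min(max(s + ds, 0.0), 1.0)
    v = min(max(v + dv, 0.0), 1.0)
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))


print(adjust_skin_tone((224, 172, 105), dv=-0.1))  # a slightly darker tone
```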
  • the community module allows the respective user to interact with other users of the system 10 .
  • users are also able to invite other members to be users of the system 10 .
  • the system 10 allows for multiple methods of interaction between the respective users of the system.
  • the various methods of interaction are described herein.
  • One such method of interaction is the concept of a collaborative shopping trip that is described in further detail herein.
  • users of the system 10 may interact with one another with respect to items of apparel or other products, each other's models, messages, and pictures or images.
  • the real-world concept of inviting friends, shopping, and receiving their respective feedback on purchased items is emulated through the system 10 .
  • the shopping trip management panel 360 allows users to manage existing shopping trips that they have created, or to create new shopping trips. Once the user has created a new shopping trip, the user may then invite other users to become members of their shopping trip as described with reference to FIG. 40 .
  • the user may send invites for shopping trips and other synchronized collaboration via the messaging service provided through system 10 and through other online or offline modes of messaging including email, SMS or text, chat and other means. Notifications can also be sent to users on social networking sites inviting them for collaborative activities. Users can also access past sessions that they were on through the panel 360 .
  • the friends manager window 370 allows users to invite other users to join them in their shopping trips.
  • the system 10 allows for friends that are associated with the system 10 , and those that may be associated with one or more other community networking sites to be invited.
  • Community networking sites include sites such as Facebook, or My Space, and others that allow their API to be used by external applications.
  • a user's list of friends from social networking sites may be displayed within the system 10 .
  • a procedure for accessing friends on a user's Facebook account is presented in FIGS. 39 to 42 .
  • FIG. 39A presents the sequence of events leading to the availability of one's Facebook friends on their account in system 10 .
  • FIGS. 39B to 39D display magnified views of each of the windows shown in FIG. 39A .
  • the user can view his account information 716 as shown in FIGS. 39A and 39B .
  • a provision 719 exists on the account page 716 for signing into Facebook, an external social networking site, which will facilitate access to Facebook account resources (other social networking sites may be present and accessed through system 10 ). As illustrated in FIGS. 39A-B , this will take the user to their login page 717 on Facebook, upon which the user may log in to his Facebook account 720 .
  • Users are able to invite friends from the community network sites to interact with. Upon requesting that a friend from a community networking site join in a shopping expedition, the friend, when accessing their account on the community network site, receives a notification that a request has been made. The friend may choose to accept or reject the request.
  • Reference is made to FIG. 18, where a sample system friendship management window 380 is shown in an exemplary embodiment.
  • the system friendship manager is used to manage a user's relationship with other users of the system 10 .
  • the manager window 380 lists a user's friends, along with friend requests that are still pending. Search functionality is also provided for, where a user may search for other users by entering their names.
  • the chat window in an exemplary embodiment may be created for every shopping trip that is associated with the user.
  • users are able to engage in an interactive chat session with one or more other users.
  • the shopping trip feature allows two or more users to collaborate while shopping online. This may entail limited or full sharing of account resources for the duration of the shopping trip.
  • users can view the contents of each other's shopping carts, shopping lists, wishlists, fitting rooms, user models, and share audio play lists and other resources. They can set and view shared ratings, feedback, comments and other user-specified information regarding a product. They can mark items with user tags that can be shared between members of the shopping trip.
  • in FIG. 20, a collaboration interface for a shopping trip 240 is shown in an exemplary embodiment.
  • Members of the shopping trip are shown by clicking on button 241 .
  • a list of stores that the users can browse is presented in panel 242 .
  • This panel may show all the stores subscribing to system 10 .
  • the members of the shopping trip may add stores of interest to them or remove stores from the panel.
  • the store names may be presented as a list or on a map of a virtual or real mall in an exemplary embodiment. In this example, the stores appear in a list 242 .
  • the shopping environments may be animated and/or video/image representations of fictional malls or real malls, or other manifestations as described previously with reference to the environment module 56 , the shopping module 60 , and the entertainment module 66 .
  • the shopping environments may incorporate a mode with augmented reality features, which were described previously with reference to the shopping module 60 .
  • Users can engage in an interactive session within a store environment in 243 , as in FIG. 46 , when operating via this mode. Users can also view product catalogues and individual products in 243 . Users can also view stores in 243 that are available on the retail server 24 . Users can acquire different product views, and examine products in 3D in 243 .
  • a mode with physics based effects may be incorporated to simulate product look and feel as well as simulate realistic interaction with the product virtually via display 243 .
  • information on a specific mall may be provided in the form of audio and visual (video/image sequences and/or text) feeds via 243 when a user selects a particular mall. This way, users would be able to shop remotely in malls or stores located in other countries, such as Paris, Milan, New York and other cities and shopping hubs. Individual stores in the mall may also transmit live feeds via webcams (and/or other image and video capture devices), in an exemplary embodiment, which users can view in 243 .
  • This feed content may incorporate information on the latest stock, new arrivals, promotions, sales, window displays, shelf contents, inventory, salespeople, store arrangements, live reviews and other information relevant to the store. Miscellaneous information such as job openings in the store may also be included.
  • Feed information would be uploaded via a web page onto the portal server 20 . This information would be broadcast in 243 to clients requesting the feeds. Tools may be available to vendors to edit feed information. For instance, video feed information may be edited, image information may be enhanced through photorealistic effects etc. Feed information would provide a mode of advertising to stores.
  • the facility to publish feed content may be available through an independent plug-in or software application to stores. The feed information does not necessarily have to be generated from physical store locations. This information may be provided by the brand or store head office.
  • Feed content may be hyperlinked.
  • as customers browse store feeds, they may click on a product item to browse its details such as those described with reference to 22 .
  • Other details may be included such as inventory details of a particular item; product ratings (maybe assigned by customers or style consultants); style information; links to other products that can be worn with it and/or other similar styles in the store.
  • the hyperlinks may be represented by icons such as animated tags.
  • Other hyperlinks that may be present in the store feeds include links to electronic fashion magazines or videos containing information or demos or reviews about specific store products, styles, brands, etc.
  • shopping trip members may choose to shop collaboratively.
  • There are several ways to engage in a collaborative shopping trip, as described previously in this document.
  • a user may browse the chosen environment and/or products, and at any given time, the video, animation or image sequence information that is displayed on the user's screen while the user is browsing the environment and products is considered the specific user's ‘view’.
  • Users can choose to display the views of all members, which will appear on a split-window screen in an exemplary embodiment. Alternatively, they can choose to display a specific member's view on their screen or return to their own view.
  • Members on a shopping trip can switch between views 244 of individual members browsing the common environment or product 243 .
  • users can choose to browse different digital manifestations 245 of the environment and/or product such as streaming video, image sequences, virtual simulation, augmented reality, other media content or any combination thereof.
  • users can drag-and-drop and/or add items and products that they wish to share with other users from display screen 243 to a sharing folder, the contents of which can be viewed by the members of the shopping trip at any time.
  • Users may view and examine their own account resources such as their virtual/digital model, wardrobe and fitting room contents, shopping cart, wishlist, image and other features during the shopping trip.
  • the user may view his resources in the window 246 , by selecting from the menu 247 .
  • the user model is displayed in 246 .
  • FIG. 20 shows a chat window 390 in another exemplary embodiment, within the shopping trip scenario.
  • a user and their friends can collaboratively view information on restaurants in 243 .
  • Visual 3D menus may be available for viewing restaurant meal choices, for receiving feed information on specials, promotions, reviews and other relevant restaurant information. Users would also be able to collaboratively order a meal for take-out and review restaurant menus and other information online in order to decide where they would like to go for dining.
  • in FIG. 40, an exemplary embodiment of the process of joining a shopping trip through a user interface is shown.
  • this process proceeds as follows: When a user clicks on a “Go Shopping” button, he/she is presented with a screen with three columns—left, middle, right. The column on the left lists all existing shopping trips that the user's friends are currently engaged in. The user can choose to join any of these shopping trips by clicking on a “join” button. The user also has the option of searching for a shopping trip of interest. When a keyword is searched for, the related shopping trips are presented in the left column.
  • the keyword could be the name of a shopping trip or an item of interest that is being shopped for, or an occasion, as examples.
  • when the user clicks on the name of a shopping trip in the left column, the members of that shopping trip are shown in the middle column.
  • the user can also invite other friends by clicking on the name of a friend from the right column and then clicking on the “invite” button.
  • the right column includes a list of all the user's friends. These friends include friends from our shopping site, social networking sites such as Facebook, or friends from the virtual operating system/immersive system described in this document.
  • the user can also search for the name of a friend to add to the shopping trip. If the friend is found, the name appears in the right column and the user can invite the friend by clicking on the invite button.
  • the friend then receives an invitation via a notification on a social networking site, a phone call, an SMS, an email or other means as described before.
  • the friend's name appears in the middle column in red until the friend accepts the invitation. If the user's friend accepts the invitation, that friend's name appears in the middle column in blue. An orange color indicates that the friend will be joining later. Other cues may also be used to display the status of the friend.
  • the user can also initiate a new shopping trip by specifying a name and clicking on the “new” button.
  • the user also has the option of removing friends from a shopping trip that the user has initiated by clicking on the remove button under the middle column. The user can start the shopping trip or resume a shopping trip by clicking on the “GO” button.
  • the next screen presented on clicking “GO” is a screen listing cities, malls, and stores.
  • the users can pick any city, mall, or store to go to and shop via any of the modes of interaction of a shopping trip described earlier with reference to FIG. 7 .
  • the user can be engaged in multiple shopping trips and can switch between any of the trips or add/remove friends by coming back to this interface.
  • the name of the shopping trip that the user is currently viewing appears on top as the user shops.
  • Such an interface is also used for going to events such as those described with respect to the “hand and chill” feature (For example, as described with reference to FIG. 44 ).
  • the main shopping page includes two buttons—“Browse” and “Shopping Trip”. Clicking on “Browse” lets the user shop in the regular mode of shopping. Clicking on “Shopping Trip” loads the screen shown in FIG. 40 .
  • in FIGS. 41A-F, snapshots of a realization of the system discussed with reference to FIG. 20 are shown in an exemplary embodiment.
  • Upon visiting the site (in a browser in this case), the user is presented with the option of logging in or browsing in regular mode (as shown in FIG. 41A ). After logging in, the user can click on the “Shopping Trip” icon from the top menu. As shown in FIG. 41B , this brings up the shopping trip screen discussed with reference to FIG. 40 . Shown in the middle column are the friends that are on the selected shopping trip. Friends that have not yet accepted the invitation to join the shopping trip are highlighted in red. Trip requests show up in the panel on the right and/or as a Facebook notification and/or as an SMS, etc.
  • a sliding chat window 390 can be used at any time.
  • shown in FIG. 41C is one instance of the synchronous mode of operation of a shopping trip in use.
  • users are presented with a list of stores that they can go to.
  • the user is presented with a menu (menu on the left in FIG. 41C ) for browsing through products.
  • This menu may be customized for each store, for example, by providing the vendors with an application programming interface (API) or by letting the vendors customize the menu and navigation options through the store portal discussed with reference to FIG. 42 .
  • Item-dependent views are also provided. Based on the content that is being viewed, an appropriate viewing method is used.
  • the chat window enables the user to chat with a selected user (who could be on our website or on any other social networking site like Facebook or on a chat application such as msn or via email or on a cell phone communicating via text such as through SMS or via voice by employing text to speech conversion, in exemplary embodiments) or with all members of a selected shopping trip.
  • the panel on the right in FIG. 41C (but to the left of the chat window 390 ) provides various options and controls to the user as described earlier.
  • the “My Friends Views” box in the panel is similar to 244 described earlier.
  • the user can select a view, which could be the user's own view or any of the user's friends' views, and interact with friends in the modes of operation discussed with reference to FIGS. 7A-D , and described next in an exemplary embodiment.
  • clicking on a friend's name in the “My Friends Views” displays the view 243 as seen by that friend in the current user's view 243 .
  • in the common mode, which can be initiated by clicking on a ‘common’ icon next to the friend's name, the view of the current user, including navigation options, becomes interactable/controllable by all the friends who have been marked as ‘common’.
  • the view 243 is undockable/dockable/movable/draggable to allow multiple views simultaneously and can also be minimized/maximized/resized. One way to do this is to drag out the view 243 , which opens it in a new window that can be placed elsewhere. Multiple views may be opened at any given time. As shown in FIG. 41C in an exemplary embodiment, the multiple views are shown by numbers next to “My View”, or the user's friends' names in 244 . This is particularly useful when viewing multiple items collaboratively.
  • friends may find a skirt that they like and may need to search for a top to go with it.
  • An interface similar to that described with reference to FIG. 45 can also be used here for mixing and matching.
  • the panel is also undockable/dockable and can be moved/dragged around and also be minimized/maximized/resized based on the users' preference.
  • under “My Friends Views”, users can also see which of the user's friends are online or are actively browsing. This is indicated by the color of a ‘person’ icon next to each name.
  • a shortcut is also located next to each of the friends' names to quickly slide out the chat box 390 and chat with the friend. Users can also click on a phone icon that lets the user talk to a friend or all members of a shopping trip.
  • this is done either over VoIP (Voice over Internet Protocol) or by dialing out via a telephone/cellular line through a modem. Users can also engage in a video chat with their friends. Clicking on the radio on the left brings up options for the radio (such as a title to play, a playlist, volume, play individually, play the same music for all members of the shopping trip, etc.) in the view 243 . These options can be set using the various modes of interaction as described above. Clicking on the “shared items” icon on the top menu brings up the “My Shared Items” and “My Friends Shared Items” boxes in the panel as shown in FIG. 41D in an exemplary embodiment.
  • These boxes list the items that are posted by the user or by the user's friends for sharing with others asynchronously.
  • Clicking on the “My Wardrobe” icon on the top menu brings up a “My Wardrobe” box in the panel as shown in FIG. 41E in an exemplary embodiment.
  • This box lists the items that the user has in his/her wardrobe. In an exemplary embodiment, items get added to the wardrobe once the corresponding real items are purchased. Users can drag and drop items from the “My Wardrobe” box to the view 243 or can mark the items in “My Wardrobe” for sharing.
  • Clicking on the “Consultant” icon brings up a “Chat with a consultant” box in the panel as shown in FIG. 41F in an exemplary embodiment. Users can add consultants from a list.
  • Recommendations on style consultants by friends are also displayed. Users can share views and engage in an audio/video/text chat with consultants similar to the way they interact with their friends as described above. Consultants can also participate in collaborative decision making through votes described as described in this document.
  • on clicking the “Check Out” icon, users are presented with the SPLIT-BILL screen as discussed with reference to FIG. 21 . Clicking on the “Logout” icon logs the user out of the system. The user's friends can see that the user has logged out as the colour of the icon next to the name of the user under “My Friends Views” changes. The user may join the shopping trip later and continue shopping. The user can exit from a shopping trip by clicking on the shopping trip icon, which brings up the screen shown in FIG. 40 or 41 B, and then clicking on the “exit” icon next to the name of the shopping trip.
  • the interface and system described here can also be used to browse external websites and even purchase items.
  • Store feeds (which could be videos on the latest items in the store or the items on sale in a store, or could also be streaming videos from live webcams in stores displaying items on sale) as described in this document are also viewable in the screen 243 .
  • Users of the shopping trip can not only access products offered by various stores but also services.
  • a movie ticket purchase service is offered that works as follows in an exemplary embodiment: Suppose a group of friends wants to go out to watch a movie. These friends can go on our site. On selecting the name of a cinema from a services menu, the users are presented with a screen that displays the available locations for the cinema. Users can choose the location they want to go to, or assign a head to decide on the location, or let the system propose a location to go to.
  • the system proposes alternatives. If any of the users assigns a head, the choice of the head is taken as the choice of the user too.
  • the system can also propose locations. For example, it may calculate the location of a theater that minimizes the travel for all the users on a shopping trip such as a location that falls close to all the users. The system may also identify locations where there is a special promotion or a sale or something to do in the proximity.
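  • The location proposal described above can be sketched as picking the candidate cinema that minimizes the total travel for the members of the trip; the coordinates and names below are hypothetical and straight-line distance is an illustrative simplification.

```python
# Hypothetical sketch: propose the cinema location that minimizes the total
# straight-line travel for all members of the trip. Coordinates are made up.
import math


def propose_location(users: dict[str, tuple[float, float]],
                     cinemas: dict[str, tuple[float, float]]) -> str:
    def total_travel(loc):
        return sum(math.dist(loc, u) for u in users.values())
    return min(cinemas, key=lambda name: total_travel(cinemas[name]))


users = {"a": (0.0, 0.0), "b": (4.0, 0.0), "c": (2.0, 3.0)}
cinemas = {"Downtown": (2.0, 1.0), "Uptown": (8.0, 8.0)}
print(propose_location(users, cinemas))  # "Downtown"
```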
  • Users of the shopping trip can also collaboratively pick and choose designs, styles, colours, and other aspects of apparel, and share their user model or user data 111 to build customized apparel.
  • users can design a room and purchase furniture, or design, build and buy furniture or other items.
  • Collaboration during shopping can be used not only for product or catalog or mall browsing but with any shopping facility or shopping tool such as the shopping cart, fitting room, wardrobe, user model, consultant, etc.
  • Tools present in toolbar 239 , such as editing, zooming, panning, tilting, manipulating the view, undo, etc., as described with reference to FIG. 20 , can also be used during a shopping trip.
  • in FIG. 42, one form of interaction between various parties and system 10 is shown in an exemplary embodiment.
  • Consumers can interact via their various computing devices 14 , 16 (not shown in the image).
  • Other users may include shipping and handling users, administrative staff, technical support, etc.
  • Consumers browse products, interact together and shop.
  • when a purchase is made, the vendors selling the product are notified. They then approve the purchase order, upon which the payment received from the customer is deposited in the corresponding vendor's account.
  • the shipment order is placed through shipping and handling users.
  • the customer may pick up the order at a store branch using a ‘pick up ID’ and/or other pieces of identification.
  • the store the customer is interested in picking up the order at can be specified through the system.
  • the system may find the vendor store closest in proximity to the customer's location (customer's home, office etc.).
  • An interface exists for interaction between any type of user and system 10 , and between different groups of users via system 10 .
  • customers may interact with each other and with store personnel/vendors, and with fashion consultants via a webpage interface.
  • Vendors may interact with customers, consultants and other businesses via a ‘MyStore’ page available to vendors. Vendors can upload store feeds (in audio, video, text formats etc.), product information and updates via this page, as well as interact with customers. Vendors can see (limited information on) who is entering their store in real time and also offline.
  • Fashion consultants can upload relevant information through pages customized to their need. They can upload the latest fashion tips, magazines, brochures, style information etc. They can easily pull up and display to the user product information, dress ‘how-tos’, style magazines and related information as appropriate. They can also interact via various forms of interaction (such as audio/video/text chat etc.) described in this document.
  • Split-Bill is a feature that enables users to share the cost of a purchase or the amount of a transaction by allocating some or all of the cost or amount to be paid by each of the users. Optionally, a subset of users that are party to the transaction may be allocated the entire cost or amount of the transaction. This feature also calculates the portion of taxes paid by each individual in a transaction and can be used in conjunction with the receipt management system discussed with reference to FIG. 48D .
  • Split-Bill also enables users to claim their portion of an expense when claiming reimbursement for expenses (for example, expenses incurred on the part of an employee for the purposes of work). There are many options for the ways of operation of the Split-Bill feature.
  • FIG. 21A demonstrates an exemplary embodiment of Split-Bill 261 .
  • Different payment schemes are available to the users of a shopping trip.
  • a member of the shopping trip may pay for the entire bill using option 262 , or each member may pay for his/her individual purchases using option 263 .
  • the bill may be split between members by amount or percentage (as illustrated in FIG. 21A ) or other means of division using option 264 .
  • Such a service would also be applicable to electronic gift cards available through system 10 .
  • More than one user may contribute to an electronic gift card and the gift card may be sent to another user via system 10 .
  • the recipient of the gift card would be notified by an email message or a notification alert on his/her profile page or other means.
  • the senders of the gift card may specify the number of people contributing to the gift card and the exact amount that each sender would like to put in the gift card or the percentage of the total value of the gift card that they would like to contribute.
  • the Split-Bill method works as follows: When a user decides to split a bill on a supported website or application, they choose the friends that they wish to split the bill with and the portions of the bill that each friend, including themselves, will pay. After that, they confirm their order as usual and are sent to a payment processing gateway to make payment.
  • the other participants are notified of the split bill payment. These other users accept the split bill notification and are sent to the confirmation page for an order where they confirm their portion of the bill and are sent to the payment processing gateway. Once each member of the split bill group has made their payment, the order's status is changed to paid and becomes ready for fulfillment. A hold may be placed on authenticated payment until all other participants' payments have been authenticated at which point all the authenticated payments are processed. If a participant declines to accept a payment, then the payments of all other participants may be refunded. Users can also split a bill with a friend (or friends) who is offline.
  • a user or users come to the Split-Bill screen and indicate the name of the user(s) that they would like to split a portion or all of the bill with. That user(s) is then sent a notification (in exemplary embodiments, on our website, on another social networking site such as Facebook, on a chat application such as MSN, via email, on a cell phone communicating via text such as through SMS, or via voice by employing text-to-speech conversion). That user(s) can then decide to accept it, in which case the transaction is approved and the payment is processed, or deny it, in which case the transaction is disapproved and the payment is denied.
  • This mode of operation is similar to the asynchronous mode of operation as discussed with reference to FIG. 7B .
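The hold-then-capture behaviour of the asynchronous Split-Bill flow described above can be sketched as a small state machine. The following Python sketch is illustrative only and assumes a hypothetical payment gateway; the class name SplitBillOrder and the gateway_capture/gateway_void stubs are not part of any real API.

    # Illustrative hold-then-capture flow for an asynchronous split bill.
    # gateway_capture/gateway_void are stand-ins for real payment gateway calls.

    def gateway_capture(token, amount):
        print("captured", amount, "for hold", token)

    def gateway_void(token):
        print("released hold", token)

    class SplitBillOrder:
        def __init__(self, amounts):
            # amounts: mapping of participant name -> share of the bill
            self.amounts = amounts
            self.authorized = {}             # participant -> authorization token
            self.status = "awaiting_payment"

        def authorize(self, participant, token):
            """Record an authenticated (held, not yet captured) payment."""
            self.authorized[participant] = token
            if set(self.authorized) == set(self.amounts):
                self._capture_all()

        def decline(self, participant):
            """If any participant declines, release every hold (refund)."""
            for token in self.authorized.values():
                gateway_void(token)
            self.status = "cancelled"

        def _capture_all(self):
            for participant, token in self.authorized.items():
                gateway_capture(token, self.amounts[participant])
            self.status = "paid"             # order becomes ready for fulfillment

    order = SplitBillOrder({"Alisha": 40.0, "Robin": 60.0})
    order.authorize("Alisha", "auth-001")
    order.authorize("Robin", "auth-002")     # all holds present -> captured, status "paid"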
  • the Split-Bill method works as follows: When members of a shopping trip decide to split a bill on a supported website or application, each of them is presented with a screen such as the one shown in FIG. 21B in an exemplary embodiment.
  • the user enters the amount that he/she would like to pay (top row) of the total amount.
  • Other users are shown similar screens. As the user enters this amount, it is “flooded” (viewable) to the other users' screens.
  • the user can also enter the amount that he/she would like other members to pay in the first column.
  • the other columns indicate the amounts that others have entered. For example, in FIG. 21B it is shown that Alisha has entered “50” as the amount that she would like to pay.
  • each column is for entering the amount that a member of the trip would like the members of the trip to pay.
  • a user can optionally override the amount that another user (user B) should pay in their (user A's) column in the row that corresponds to the user's (user B) name. If the amounts entered by all the members for any given row are consistent, a check mark appears.
  • a user must enter the value in at least their field and column to indicate approval. The user cannot override the values in the grayed out boxes as these boxes represent the values entered by other users. If there is inconsistency in the values entered in any row, a cross appears next to the row to indicate that the values entered by the users don't match.
  • an “Adds up to box” indicates the sum of the amounts that the users' contributions add up to.
  • the amounts along the diagonal are added up in the “Adds up to box”.
  • Another field indicates the required total for a purchase.
  • Yet another field shows how much more money is needed to meet the required total amount. If all rows are consistent, the users are allowed to proceed with the transaction by clicking on the “continue” button.
  • the amounts entered can be the amounts in a currency or percentages of the total.
  • users can also view a total of the amounts that each of the users is entering, as shown in FIG. 21C in an exemplary embodiment.
  • Users can also select a radio button or a check box below the column corresponding to a user to indicate that they would like that user's allocation of amounts across friends. For example, as shown in FIG. 21C the user has chosen Alisha's way of splitting the bill. If all members chose Alisha's way of splitting the bill, then a check mark appears below Alisha's column and the users are allowed to proceed by clicking on the “continue” button. The user whom other members are choosing for splitting the bill may also be communicated for example using colours. This mode of operation is similar to the synchronous mode of operation as discussed with reference to FIG. 7C .
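A minimal Python sketch of the consistency logic behind the synchronous Split-Bill grid of FIGS. 21B-21C follows, assuming each member's column is a mapping of member names to proposed amounts. Function and variable names are illustrative assumptions, not the actual interface of system 10.

    # Minimal sketch of the consistency check behind the Split-Bill grid of FIG. 21B:
    # proposals[payer][member] is the amount that 'payer' proposes 'member' should pay.
    def check_split(proposals, required_total):
        members = list(proposals)
        # A row gets a check mark if every column proposes the same amount for it.
        row_ok = {m: len({proposals[p][m] for p in members}) == 1 for m in members}
        # The "Adds up to" box sums the diagonal (each member's own entry).
        adds_up_to = sum(proposals[m][m] for m in members)
        remaining = required_total - adds_up_to
        can_continue = all(row_ok.values()) and remaining == 0
        return row_ok, adds_up_to, remaining, can_continue

    # Example: three shoppers splitting a 100-unit bill, all entries consistent.
    proposals = {
        "Alisha": {"Alisha": 50, "Robin": 25, "You": 25},
        "Robin":  {"Alisha": 50, "Robin": 25, "You": 25},
        "You":    {"Alisha": 50, "Robin": 25, "You": 25},
    }
    print(check_split(proposals, 100))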
  • the Split-Bill method works as follows: When members of a shopping trip decide to split a bill on a supported website or application, each of them is presented with a screen such as the one shown in FIG. 21D in an exemplary embodiment. Users can enter the amount that they would like to pay in a field next to their name. If the amount adds up to the required total, the users are allowed to continue with the purchase.
  • the Split-Bill method works as follows: When members of a shopping trip decide to split a bill on a supported website or application, each of them is presented with a screen such as the one shown in FIG. 21D in an exemplary embodiment. Users can enter the amount that they would like to pay in a field next to their name. In this case, the users can enter an amount in any of the fields next to the members names simultaneously using the communication protocol described with reference to FIG. 7D . The users also share the same view. Each user also gets to approve his/her amount by checking a box next to their name. If the amount adds up to the required total and each of the users has approved his/her amount, the users are allowed to continue with the purchase. This mode of operation is similar to the common mode of operation as discussed with reference to FIG. 7D .
  • An exemplary embodiment of such a method is illustrated in FIG. 21E . As shown in this figure, a user has chosen to pay for his “Red Jersey”, Alisha's sweater, and Robin's socks and tuque. The user's total is also shown.
  • FIG. 21F where another exemplary embodiment of Split-Bill is shown. Users can drag and drop items from a shared shopping cart into a list under their name. The list indicates the items that the user would like to pay for. At the bottom of the list the total of each user is also shown.
  • FIG. 21G where another exemplary embodiment of Split-Bill is shown. Users can drag and drop items from a shared shopping list into a list under their name and indicate the amount of the total bill that they would like to pay. This could be an amount in a currency or a percentage of the bill. In another exemplary embodiment, users can state an amount or a maximum amount (which could even be zero) that they can afford to pay. Other users can make payments on behalf of this user.
  • the Split-Bill feature can also work in any combination of the methods described above.
  • options are also available to split a bill evenly between users or to split the outstanding or remaining amount evenly between users.
  • the above embodiments of Split-Bill can also be used in conjunction with multiple shopping trips.
  • a trip leader may also be assigned to decide on how the bill is split.
  • Recurring or monthly payments may also be shared between friends using the above methods. This can also take place in a round-robin fashion where one user pays the first month, a second user the second month and so on.
  • the Split-Bill feature allows processing of credit, debit, points cards and/or other supported payment options. Payments can be made using any combination of these options.
  • a product that is about to be purchased may be paid for partially from a debit/bank account, partially via a credit card, partially using a gift card, and partially using points or store credits.
  • Points or credits may come from stores or from a user's friends.
  • the Split-Bill feature enables currency conversion. Users in different countries can view the amount to be shared in their local currency or other currencies of their choice.
  • the Split-Bill feature also enables users to request money or points from their friends (including those on social networks such as Facebook) or other users. This can be done when the user from whom money is being requested is online or offline similar to the method described above.
  • the Split-Bill method is also available as an independent component on a website for people to share the amount of a transaction. Users can collaboratively buy products/services and send them as a gift to other users. Users can also ship gifts to users based on their location as specified in social networking sites or on our site or based on their mobile device location. This allows users to send gifts to an up-to-date address of the users' friends.
  • Investments may be made through Split-Bill.
  • Other financial transactions may be conducted in a collaborative manner, including currency exchange.
  • Currency may be exchanged, in exemplary embodiment, with a friend or someone in a friend's network so that the user may ensure that the transaction is being carried out through a trusted reference.
  • a person traveling to another country may exchange money with a relative or friend in that country.
  • shares and stocks may be traded collaboratively, for example through a split bill interface. Tools may be available for investors to collaboratively make investments and assist them in making decisions.
  • In FIG. 35 , a virtual model is shown in display windows illustrating examples of how a user can animate their character model's expressions/movements/actions and/or change their model's look.
  • the expressions/actions/dialogue/movements of the character model can be synchronized with the user's own expressions/actions/dialogue/movements as tracked in the image/video (in an exemplary embodiment using a method similar to [52]) of the user or these can be dictated by the user through text/speech and/or other command modes or through pre-programmed model expression/action control options provided through system 10 .
  • the display window 682 shows the virtual model ‘raising an eyebrow’; display window 684 shows the model with a surprised expression sporting a different hairstyle; display window 686 shows the virtual model under different lighting conditions with a different hair colour.
  • the exemplary embodiments in the figure are not restrictive and are meant to illustrate the flexibility of the virtual models and how a user can animate and/or control their virtual model's looks, expressions, actions, background/foreground conditions etc. Facial expressions may be identified or classified using techniques similar to those used in [53]. The virtual model can be thus manipulated even when the user uses it to communicate and interact with other users, for example, as in a virtual chat session.
  • stylists and friends of the user can apply makeup to the user model's face to illustrate make up tips and procedures.
  • the makeup may be applied to a transparent overlay on top of the content (user model's face) being displayed.
  • the system allows the user to save the animation and collaboration sessions involving the user model.
  • FIG. 36 shows a sample virtual store window 690 involving virtual interaction between the user and a sales service representative in a real jewelry store, and incorporating augmented reality elements as described next.
  • a sales representative 691 interacts with the user in real-time via streaming video (acquired by a webcam or some other real-time video capture device).
  • the user in this instance interacts with the sales personnel via the user model 650 which is lip-syncing to the user's text and speech input.
  • Panoramic views of the displays 692 in the real jewelry store appear in the store window 690 .
  • An ‘augmented reality display table’ 693 is present on which the sales representative can display jewelry items of interest to the user.
  • Virtual interaction takes place via plug and play devices (for example I/O devices such as a keyboard, mouse, game controllers) that control the movement of simulated hands (of the user 694 and sales personnel 695 ).
  • a device that functions as an ‘articulated’ control i.e., not restricted in movement and whose motion can be articulated as in the case of a real hand, can be used to augment reality in the virtual interaction.
  • Store personnel such as sales representatives and customer service representatives are represented by virtual characters that provide online assistance to the user while shopping, speak and orchestrate movements in a manner similar to real store personnel and interact with the user model.
  • the augmented reality display table is featured by system 10 so that vendors can display their products to the customer and interact with the customer.
  • a jewelry store personnel may pick out a ring from the glass display for showing the user.
  • a salesperson in a mobile phone store may pick out a given phone and demonstrate specific features.
  • specifications related to the object may be displayed and compared with other products.
  • Users also have the ability to interact with the object 696 in 2D, 3D or higher dimensions.
  • the salesperson and customer may interact simultaneously with the object 696 .
  • Physics-based modeling, accomplished using techniques similar to those described in [54], is incorporated (these techniques may be utilized elsewhere in the document where physics-based modeling is mentioned).
  • This display table can be mapped to the display table in a real store and the objects virtually overlaid.
  • a detailed description 697 of the object the user is interested in is provided on the display while the user browses the store and interacts with the store personnel.
  • a menu providing options to change settings and controls is available in the virtual store window, by clicking icon 540 in an exemplary embodiment.
  • the above example of a virtual store illustrates features that make the virtual store environment more realistic and interaction more life-like and is described as an exemplary embodiment. Other manifestations of this virtual store may be possible and additional features to enhance a virtual store environment including adding elements of augmented reality can be incorporated.
  • the display windows provide visual representations of the apparel items that are available to model/purchase to the user.
  • the display window 400 comprises a visual representation 402 of the apparel item.
  • a visual representation of a skirt is provided. Further information regarding pricing and ordering, should the user desire to purchase this item, is also available.
  • the user is able to view reviews of this apparel item that have been submitted by other users by engaging the review icon 404 in an exemplary embodiment.
  • the user is able to further share this particular apparel item with friends by engaging the share icon 406 in an exemplary embodiment.
  • clicking on this icon presents the user with a screen to select a mode of operation. If the synchronous mode or the common mode of interaction are chosen, the user is presented with a shopping trip window as described with reference to FIG. 40 . If the user chooses the asynchronous mode of operation, the item gets added to the “shared items” list. The user can manage shared items through an interface as described with reference to FIG. 23 . If the user is engaged in the synchronous or common modes of interaction, clicking on the icon 406 , adds the item to the “shared items” list. The user can also send this item or a link to the item to users of social networking sites.
  • the user is able to try on the apparel items on their respective user model by engaging the fitting room icon 408 in an exemplary embodiment.
  • the method by which a user may try on various apparel items has been described here for purposes of providing one example of such a method.
  • Suitability of fit information may be displayed next to each catalog item. In an exemplary embodiment, this is done by stating that the item fits (‘fits me’) 410 and/or placing an icon that conveys the fit info (e.g., icon 550 ). Further details of displaying the goodness of fit information are described with reference to FIG. 30 .
  • a 2D or 3D silhouette 554 may also be placed next to catalog items to visually show goodness of fit.
  • Information on how the apparel feels is also communicated to the user. This is done in an exemplary embodiment, by displaying a zoomed in image of the apparel 412 (“Feels Like”) illustrating the texture of the apparel. The sound that the apparel makes on rubbing it may also be made available.
  • Models of products for use in catalogs may also be constructed by using images submitted by users. Images contributed by several users may be stitched together to create models of products. Similarly, images from several users may also be used to create a user model for the users' friend. Holes or missing regions, if any, present in the constructed models may be filled with texture information that corresponds to the most likely texture for a given region. The most likely texture for any given region can be estimated, in an exemplary embodiment, using Naïve Bayes or KNN. This can be done as described earlier, using statistics drawn from regions in images surrounding the holes as the input and the texture in the missing region as the output.
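A hedged sketch of the KNN hole-filling idea mentioned above follows: the texture of a missing region is predicted from simple statistics of the pixels surrounding it. The feature choice (mean and standard deviation of the border ring), the texture class labels, and the use of scikit-learn are assumptions made for illustration; the training data below is random placeholder data.

    # Hedged sketch: predict the texture of a missing patch from statistics of
    # its surrounding pixels using KNN. Feature choice and labels are assumptions.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def border_stats(border_pixels):
        """Summary statistics of the ring of pixels surrounding a hole."""
        return [border_pixels.mean(), border_pixels.std()]

    rng = np.random.default_rng(0)
    X_train = np.array([border_stats(rng.random(32)) for _ in range(100)])   # placeholder data
    y_train = rng.choice(["denim", "knit", "leather"], size=100)             # placeholder labels

    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

    # At fill time: summarize the ring of pixels around the hole and predict the
    # most likely texture to synthesize inside it.
    hole_border = rng.random(32)
    predicted_texture = knn.predict([border_stats(hole_border)])[0]
    print(predicted_texture)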
  • FIG. 24 where a sample fitting room window 420 is shown in an exemplary embodiment.
  • the fitting room window 420 lists the various apparel items that the user has selected to try on.
  • Each apparel item has an identification number assigned to it by system 10 for purposes of identification.
  • the user requests that the system 10 fit and display the apparel item on the user model.
  • An item of apparel is comprised of patterns (in tailoring, stitch-and-sew terminology). All items of apparel that are associated with the system 10 have an apparel description file (ADF) associated with them.
  • the ADF file can be in XML format and the CAD file provided to system 10 by the retailer module 58 can be encapsulated within this ADF file.
  • the apparel description file contains all information regarding the apparel including information necessary to model and display the apparel and to determine its fit on a model.
  • Any and all information related to the actual apparel and any and all information needed by system 10 to create the virtual apparel, display and fit it on a model is contained within the ADF file.
  • An ADF file in XML format is presented in FIG. 37 in an exemplary embodiment.
  • the ADF file 700 contains header information 701 followed by information describing a specific apparel.
  • the apparel tags 702 indicate the start (<apparel>) and end (</apparel>) of apparel description. Specific tags are provided within this region for describing different aspects of the apparel.
  • the manufacturer description 703 includes the name of the manufacturer, the country source, the composition and size information in this file.
  • the care information 704 provides details on whether the apparel can be washed or dry-cleaned; the pattern tags 705 enclose the CAD filename containing the details on apparel pattern data; the fitting information that describes how a virtual manifestation of the apparel fits on a virtual human model is encapsulated by the fitting tags 706 ; the media tags 707 enclose filenames that provide visual, audio and other sense (such as feel) information about the apparel, as well as the files and other data containing display information about the specific apparel (the 3D display data for the apparel model lies within the <render> tag in this example). Further store information 708 such as the unique store ID in the system 10 , the name of the store and other details relating to a specific store such as the return policy is provided in the ADF file.
  • the ADF file 700 in FIG. 37 is presented for purposes of illustration and is not meant to be restricted to the XML format or the tags given in the file. Other manifestations of the ADF are possible and other tags (descriptors) may be included to describe a given apparel.
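For illustration only, the Python sketch below builds and parses an ADF-like XML document using the tag names mentioned above (apparel, manufacturer, care, pattern, fitting, media/render, store). The exact schema, attribute names and file names are assumptions and do not reproduce the ADF of FIG. 37.

    # Illustrative parse of an ADF-like XML file; element and attribute names are assumed.
    import xml.etree.ElementTree as ET

    adf_text = """<adf version="1.0">
      <apparel>
        <manufacturer name="ExampleCo" country="CA" composition="100% cotton" size="M"/>
        <care wash="machine" dryclean="no"/>
        <pattern cad="skirt_patterns.dxf"/>
        <fitting landmarks="waist,hip,hem"/>
        <media>
          <render>skirt_model.obj</render>
          <image>skirt_front.jpg</image>
        </media>
        <store id="1042" name="Example Store" returnPolicy="30 days"/>
      </apparel>
    </adf>"""

    root = ET.fromstring(adf_text)
    apparel = root.find("apparel")
    cad_file = apparel.find("pattern").get("cad")      # CAD pattern data used for meshing
    render_file = apparel.find("media/render").text    # 3D display data for the apparel model
    print(cad_file, render_file)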
  • Much of the information describing the apparel is contained in the CAD file obtained from the retailer 58 , while the information necessary to model, display and fit the apparel is augmented with the CAD file to form the ADF.
  • FIG. 38 where a quick overview is provided of ADF file creation and use, in an exemplary embodiment. Apparel information 711 described previously, as well as information associated with the specific apparel in its CAD file is packaged by the ADF creation software 712 to form the ADF file 700 .
  • This ADF file information is then subsequently used in modeling the apparel digitally for purposes of display in electronic catalogues and displays 713 ; for fitting on 3D user models 714 ; for displaying and listing in the virtual wardrobe and fitting room 715 as well as other forms of digital apparel viewing and interaction.
  • Pattern information comprising the apparel is extracted. This information is contained in the CAD and/or ADF files and is parsed to form the geometric and physics models of the apparel.
  • a mesh is generated by tessellating 3D apparel pattern data into polygons.
  • This geometric model captures the 3D geometry of the apparel and enables 3D visualization of apparel.
  • the physics model is formed by approximating the apparel to a deformable surface composed of a network of point masses connected by springs.
  • the properties of the springs are adjusted to reflect the properties of the material comprising the apparel.
  • the movement of the cloth and other motion dynamics of the apparel are simulated using fundamental laws of dynamics involving spring masses.
  • Cloth dynamics are specified by a system of PDEs (Partial Differential Equations) governing the springs whose properties are characterized by the apparel material properties.
  • the physics model enables accurate physical modeling of the apparel and its dynamics.
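A minimal sketch of such a mass-spring cloth model is shown below: point masses on a small strip are connected by structural springs and integrated with semi-implicit (symplectic) Euler steps. The constants, the one-dimensional strip topology and the integrator are illustrative assumptions rather than the solver used by system 10.

    # Minimal mass-spring sketch of a cloth strip under gravity.
    import numpy as np

    n = 10                                    # point masses along a small cloth strip
    pos = np.zeros((n, 3))
    pos[:, 0] = np.arange(n) * 0.1            # start at the springs' rest length
    vel = np.zeros((n, 3))
    mass, k, damping, rest, dt = 0.01, 50.0, 0.02, 0.1, 0.001
    gravity = np.array([0.0, -9.81, 0.0])

    def step(pos, vel):
        force = np.tile(mass * gravity, (n, 1))
        for i in range(n - 1):                # structural springs between neighbours
            d = pos[i + 1] - pos[i]
            length = np.linalg.norm(d)
            f = k * (length - rest) * (d / length)   # Hooke's law along the spring
            force[i] += f
            force[i + 1] -= f
        force -= damping * vel                # simple velocity damping
        new_vel = vel + dt * force / mass     # semi-implicit Euler: velocity first,
        new_pos = pos + dt * new_vel          # then position with the new velocity
        new_pos[0], new_vel[0] = pos[0], 0.0  # pin the first mass (attachment point)
        return new_pos, new_vel

    for _ in range(1000):                     # let the strip swing under gravity
        pos, vel = step(pos, vel)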
  • Reference points on the apparel specify regions on the apparel corresponding to specific anatomical landmarks on the human body. The information concerning these points and their corresponding landmarks on the body will be contained in the CAD and ADF files.
  • FIG. 29A illustrates an example of the visual sequences 460 , from left to right, displayed to the user in a window while the apparel is being fitted on a non photorealistic rendering of the user model.
  • An example of the visual sequences 462 from left to right, presented to the user in a window during hair modeling on the non photorealistic rendered user model is also shown in FIG. 29A .
  • the hair 464 on the user model is animated using physics-based techniques which permit realistic simulation of hair look and feel, movement and behavior.
  • FIG. 29B where a user model adjustments interface 470 is shown in an exemplary embodiment, containing a non photorealistic rendering of a user model.
  • Options to make body adjustments are displayed upon clicking the menu display icon 476 .
  • a sample mechanism is shown for making adjustments to the body.
  • Slider controls 475 and 477 can be used to make skeleton and/or weight related adjustments to the user model. Skeleton adjustments allow modifications to be made to the generative model of the skeletal structure of the user model. This renders anatomically accurate changes to be made to the user model.
  • a taller user model (with elongated bones) 472 is obtained whereas, by moving some of the skeleton adjustment controls 475 to the left, a petite user model 471 is obtained.
  • weight adjustment controls 477 can be used to obtain a heavier user model 474 or a slimmer user model 473 .
  • manipulating the skeletal adjustment controls increases or decreases the distance between a joint and its parent joint. For example increasing the value of the length of a shin increases the distance between the ankle joint and its parent joint, the knee joint.
  • manipulating the weight adjustment controls increases or decreases the weight assigned to the corresponding vertices and moves them closer or farther from the skeleton. For example, increasing the weight of a selected portion of the shin places the vertices corresponding to that region further from the skeleton.
  • Continuity constraints (a sigmoid function in an exemplary embodiment) are imposed at the joints to ensure plausible modifications to the user model. Users can also deform the user model by nudging the vertices corresponding to the user model. Users can also specify the body muscle/fat content which sets the appropriate physical properties. This is used, for example, to produce physically plausible animation corresponding to the user.
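The effect of the skeleton and weight sliders described above can be sketched as follows, assuming a joint hierarchy and a set of surface vertices each paired with its nearest skeleton point. The function names, the sigmoid blend and the example joints are illustrative assumptions.

    # Hedged sketch of the slider adjustments: lengthening a bone moves a joint away
    # from its parent joint; increasing a region's weight pushes its vertices away
    # from the skeleton, with a sigmoid falloff so the surface stays continuous.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def adjust_bone_length(joint, parent, scale):
        """Scale the parent->joint distance (e.g. lengthen the shin with scale=1.1)."""
        return parent + scale * (joint - parent)

    def adjust_weight(vertices, nearest_skeleton_points, delta, dist_from_joint):
        """Move vertices away from (delta > 0) or toward (delta < 0) the skeleton."""
        blend = sigmoid(dist_from_joint)[:, None]
        outward = vertices - nearest_skeleton_points
        outward /= (np.linalg.norm(outward, axis=1, keepdims=True) + 1e-9)
        return vertices + delta * blend * outward

    # Example: a taller model by lengthening the shin 10% (the ankle moves away from the knee).
    knee, ankle = np.array([0.0, 0.5, 0.0]), np.array([0.0, 0.0, 0.0])
    taller_ankle = adjust_bone_length(ankle, knee, 1.1)
    print(taller_ankle)            # [ 0.   -0.05  0.  ] -> the shin is now 0.55 long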
  • FIG. 29C where a sample window is shown demonstrating product catalogue views available to the user from which apparel may be selected for fitting onto their user model.
  • a product catalogue 480 may be displayed by clicking a menu display icon 482 . The user may then select a given outfit/apparel/product from the catalogue upon which it will be fit and displayed on the user model.
  • product catalogues are available in the local application 271 or within the browser or a combination of both as described with reference to FIG. 10 and FIG. 31 .
  • By clothing the user's model with apparel chosen by the user, the user is able to visualize and examine the appearance of the apparel on their body from an external perspective and also get an approximate idea of how the apparel fits.
  • metrics are used that define the suitability of apparel not just based on size information but also as a function of body type and fit preferences.
  • the system will relay suitability of fit information to the user using aspects that include, but are not limited to, those quantitative and qualitative in nature.
  • goodness of fit is a quantitative metric.
  • the convex hull of the model is compared with the volume occupied by a given piece of clothing.
  • apparel can be modeled as springs by system 10 .
  • regions of tight fit in this case, physical stress and strain on the apparel and/or model can be computed using the spring constant of the apparel material.
  • Regions of loose fit may be determined by evaluating normals from the surface. The distance between the body surface and the apparel surface can be ascertained by computing the norm of the vector defined by the intersection of the surface normal to the model's surface with the cloth surface. This process can be made computationally efficient by sampling surface normals non-uniformly. For instance, regions of high curvature and greater importance may have many more normals evaluated than regions of low curvature.
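A hedged sketch of this normal-based clearance measurement is given below. It assumes a ray/mesh intersection helper (intersect_ray) supplied by whatever mesh library is in use, and uses curvature-weighted sampling to spend more of the sample budget on regions of high curvature.

    # Illustrative sketch of the loose-fit measurement: cast the outward surface
    # normal from a sampled body point and measure the distance to the cloth.
    import numpy as np

    def clearance(body_point, body_normal, cloth_mesh, intersect_ray):
        """Distance from the body surface to the apparel along the outward normal."""
        hit = intersect_ray(cloth_mesh, origin=body_point, direction=body_normal)
        return np.linalg.norm(hit - body_point) if hit is not None else np.inf

    def sample_indices(curvatures, budget):
        """Non-uniform sampling: more samples in high-curvature (fit-critical) regions."""
        weights = curvatures / curvatures.sum()
        return np.random.choice(len(curvatures), size=budget, replace=False, p=weights)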
  • qualitative aspects are also incorporated by system 10 . These include, but are not limited to, user preferences. An example of this is the user preference for loose fitting clothes.
  • regions of different fit on the apparel may be colored differently.
  • Visual indicators include, but are not limited to, arrows on screen, varying colors, digital effects including transparency/x-ray vision effect where the apparel turns transparent and the user is able to examine fit in the particular region.
  • Some examples are illustrated in FIG. 30 .
  • the visualization options are provided to the user via a menu available by clicking the icon 540 , in exemplary embodiment.
  • different fit regions are depicted using coloured arrows 542 , highlighted regions 544 as well as transparency/x-ray effects 546 .
  • Transparency/x-ray effects 546 allow fit information to be visualized with respect to body surface.
  • the apparel on the 3D body model is made transparent in order for the user to visually examine overall apparel fit information—regions of tight/proper/loose fit.
  • regions of tight fit are shown using red coloured highlight regions (armpit region).
  • Loose fitting regions are shown via green arrows (upper leg) and green highlight (hips).
  • Comfortable/snug fitting is depicted using orange arrows (waist) and yellow highlight (lower leg).
  • Users may also define the numerical margins that they consider ‘tight’, ‘loose’ and so on for different apparel. For example, the user may consider a shirt to be proper fitting around the arms if the sleeves envelop the arm leaving a 1-2 cm margin. The user may specify these margins and other settings using the options menu 540 available to the user (see the sketch following this item).
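Assuming per-region clearances such as those computed in the earlier sketch, mapping them to the colour-coded fit categories of FIG. 30 with user-configurable margins might look like this; the default thresholds are illustrative, not prescribed by the system.

    # Sketch: map a region's clearance to a fit category and display colour using
    # user-configurable margins (defaults are assumptions).
    def classify_fit(clearance_cm, tight_below=1.0, loose_above=2.0):
        if clearance_cm < tight_below:
            return "tight", "red"
        if clearance_cm > loose_above:
            return "loose", "green"
        return "snug", "orange"

    print(classify_fit(0.5))    # ('tight', 'red')
    print(classify_fit(1.5))    # ('snug', 'orange') -- within the user's 1-2 cm margin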
  • the transparency/x-ray effect also provides visual information with regards to layers of clothing.
  • the users may wish to select particular items for visualization on the model.
  • they may select from the itemized list 552 which lists all of the apparel items the user has selected to fit on the user model as part of an ensemble for instance.
  • the items that are not selected may disappear or become transparent/light in colour (i.e., recede or fade) in order to make more prominent the selected items of apparel.
  • the transparency effect emphasizes certain items visually while still preserving other layers of clothing so that the highlighted apparel may be examined with respect to other items it will be worn in combination with.
  • the layers worn by the model in FIG. 30 may be examined from different perspectives of the model (cross-sectional view for example).
  • This page also provides the user with the menu (available by clicking icon 540 ) described previously for setting/manipulating the model and environment as well as setting view options, share options (for example, sharing model views with friends in specific apparel).
  • Other purposes for which visual indicators may be applied include, but are not limited to, relaying to the user information regarding the quality or make of an apparel. For example, different colours may be used to outline or highlight a shoe sole in order to convey whether the given shoe is hard-soled or soft-soled. Separate icons may also be provided, such as icon 548 , to interact with and/or manipulate the model as shown in FIG. 30 . Additionally, an icon summarizing suitability of fit may be provided ( 550 ).
  • the ‘summary’ icon may be programmed by default, for example, to give a ‘thumbs up’ if two qualitative and quantitative aspects are satisfied. This default setting may be changed to suit the user's suitability of fit requirements. More details on the fit are available to the user by clicking on or hovering over the icon 550 . The user can also choose to display portions of these details next to the icon through the preferences page. In an exemplary embodiment, the user can see the fit information by taking an item to the fitting room (e.g., by dragging and dropping a catalog item into the fitting room).
  • the user can see all the items that the user is browsing with the fit information without the need to place the item in the fitting room. All instances of features shown in FIG. 30 are illustrative examples and are not meant to be restricted to these and can embody and encompass other forms, illustrations and techniques.
  • the shared item window 430 displays the various items that the user has shared, in a shared list 432 , and a list of items that friends have shared in a friend shared list 434 .
  • the snapshots lists 436 allow a user to share various images that they have captured of their user model with other users. When viewing and interacting with the user model, the user is provided the ability to capture an image or snapshot of the image, and share the respective snapshot or image with other users.
  • Wardrobe images 440 are used in an exemplary embodiment to display to the user the apparel items that a user has added to their wardrobe.
  • a user may browse all of the items that are in their virtual wardrobe, and may request that they receive comments regarding items in their wardrobe from a consultant.
  • the user is presented with options as in the tabbed menu 442 shown in exemplary embodiment, so that they can quickly navigate and browse the apparel in their wardrobe and fitting room; try on apparel on their model as well as get feedback regarding apparel and dressing style options from the style consultant.
  • the icons 444 available to the user in their wardrobe include: (1) the icon that displays to the user apparel information such as the make and manufacturer details, care instructions, store it was bought from, return policy etc. as well as user tagged information such as who gifted the apparel, the occasion to wear it for, etc.; (2) the icon to fit selected apparel on the user model; (3) the icon to share selected apparel with other users.
  • the icons shown have been presented as examples and may include icons that perform other functions. The icons shown may be represented with different symbols/pictures in other manifestations. Reference is made to FIG. 28 where a drawing of a 3D realization of a virtual wardrobe is shown.
  • This wardrobe can be incorporated with physics based animation functionality so that users can drag around objects; arrange and place them as desired in the wardrobe; move them into boxes or bins or hangers or racks etc. Users will be able to visualize articles of clothing and other apparel in their wardrobe; tag each item with a virtual label that may contain apparel specific information as well as user specified information such as the date the apparel was bought; the person who gifted the apparel; upcoming events on which it can be worn as well as links to other items in the wardrobe and/or fitting room with which that item can be coordinated or accessorized with etc.
  • FIG. 26 where a sample style consultant window 450 is shown in an exemplary embodiment. The style consultant 452 is able to comment on the user's items in the wardrobe, upon request of the user.
  • the icons 454 shown from left to right include: (1) the icon to obtain information on the specific style consultant; (2) the icon to add/remove style consultants from the user's personal list.
  • Icon 456 provides the user with options to engage in communication with the style consultant either via email or chat which may be text/voice/video based or may involve augmented reality, in exemplary embodiments.
  • In FIG. 27 , a sample diagram is presented illustrating the actions involving the fitting room 420 and wardrobe 440 that the user may engage in while browsing for apparel.
  • the user can add an item to their fitting room by clicking on an icon 424 next to the item they wish to virtually try on.
  • Once an item has been added to the fitting room 420 , that item will become available to the user in the local application 271 for fitting on their model.
  • the user may model the apparel item on their user model, and/or decide to purchase the item, in which case the apparel item can be added to the virtual wardrobe 440 .
  • the user may decide not to purchase the item in which case the item will stay in the fitting room until the user chooses to delete it from their fitting room.
  • the user may choose to keep a purchased item in their wardrobe 440 or delete it. If the user decides to return an item, that item will be transferred from the user's wardrobe 440 to the fitting room 420 .
  • the user may also decide to conduct an auction or a garage sale of some or all of the real items in their wardrobe. Users with access to the virtual wardrobe can then view and purchase items on sale of interest to them via system 10 .
  • the virtual items in the fitting room and wardrobe can also be purchased for use in other sites that employ virtual characters/models.
  • the virtual apparel items in the fitting room and wardrobe may be exported to external sites or software involving virtual characters/models such as gaming sites, ‘virtual worlds’ sites and software.
  • FIG. 46A shows a profile or home page of a user registered with system 10 .
  • the user can grant access to this page to other users by setting permissions.
  • Icon 801 displays the logo of system 10 and provides the user with a menu containing certain options such as home page access and help with features available to the user on system 10 .
  • Display box 802 represents the information card providing profile details of the user.
  • Display box 804 contains hyperlinks to all stores subscribing to system 10 or just the favourite/most frequently visited stores by the user. Additionally, users may engage display box 805 for adding friends they would like to collaborate with. In an exemplary embodiment, users may add friends they normally like to acquire feedback from or go out with for shopping. The user may also add other users registered with system 10 whose fashion/style sense they like and follow (the user would be that person's ‘style fan’ in that case).
  • Another menu 803 is provided in FIG. 46A as an exemplary embodiment which permits the user to access more features available on system 10 .
  • FIG. 46B where a store page 806 is shown.
  • the products available in the store 808 may be categorized according to different fields such as department, category, size etc. Users may also be able to search for products in the store. Stores have the option of personalizing their store pages.
  • the season's collection may be displayed in a product display window 809 . Items featured by the store and other item collections may also be displayed in another window 810 .
  • FIG. 46B also displays a collaborative shopping trip window 807 on the same page. The shopping trip window may be launched by clicking on icon 815 .
  • the shopping trip dialog 807 containing collaborative shopping features may open up in a separate window or in the same window/page being viewed by the user.
  • a synchronized product viewer 811 enables collaborative shopping between members of that shopping trip displayed in window 814 .
  • Products being browsed by other users of the shopping trip may be viewed in the product viewer 811 via menu 812 .
  • the user can browse the shopping cart, shopping list, wishlist, wardrobe, and other personalized shopping features shown in 814 of the selected user, if that user has granted permission, by clicking on the ‘GO’ button in window 814 .
  • a chat window 813 and/or other synchronous or asynchronous means of communication may be available to enable communication with other users while shopping.
  • FIG. 46C illustrates another layout in exemplary embodiment.
  • This layout combines some store page features with collaborative shopping trip features on the same page.
  • a regular store page 806 shown in FIG. 46B may convert to a page as in FIG. 46C upon activating the shopping trip.
  • FIG. 46D where a sample shopping trip manager window/page is shown. Users can create new shopping trips 816 ; categorize trips by labeling them and invite friends on shopping trips. Users can view and sort shopping trips 817 according to labels.
  • a ‘look’ in this context is defined as a collection of products put together by the user from different product catalogues to create a complete ensemble or attire defining a suggested ‘look’.
  • Other users may gauge a user's fashion sense or style by browsing through the given user's looks page.
  • a browser window 818 allows the user to browse looks they created.
  • Each look 819 is composed of several items put together by the user.
  • a look 819 may contain a blazer, a blouse, a skirt, a pair of shoes, a handbag and other accessories to complement the given look.
  • a user may obtain expanded views of products comprising a given look by highlighting a look 819 , upon which another dialog or window 820 is launched containing expanded views 821 of items composing 819 .
  • a product options menu 822 appears which is comprised mainly of the four option boxes outlined in red.
  • the other sub-menus 823 - 826 appear upon clicking the respective main product menu options besides which they appear.
  • the product options menu 822 is shown in exemplary embodiment and it enables tasks such as product purchase 824 , product sharing with other users 823 , rating the product according to different criteria 825 and addition of the product to various personalized user lists 826 .
  • FIG. 46F shows some features comprising the fitting room 827 . These may include the shopping cart 828 , or items that the user has selected but is undecided about purchasing 829 , and the product viewer 830 which provides product views of the item selected from the shopping cart or the ‘decide later’ cart.
  • FIG. 46G shows another version of the fitting room which incorporates the product viewer 830 , the shopping cart, ‘decide later’ items as well as other customized user lists such as shared items, top picks, my looks and others.
  • the shopping diary is comprised of personalized user lists such as shopping lists, wishlists, gift registries, multimedia lists and others. Additionally it may incorporate a shopping blog and other features.
  • In FIG. 46I , a layout or directory of the mall comprising stores subscribing to system 10 is shown in an exemplary embodiment.
  • This can be customized to form a user-specific directory that lists businesses and people that a user is associated with in a community. Stores are listed on the left and categorized by gender and age group.
  • a map or layout 1106 of the virtual mall is presented to the user where the stores on system 10 may additionally be shown graphically or using icons.
  • a store image 1104 may be displayed.
  • a ‘window shopping’ feature permits users to get live feed from the store including information 1105 such as other users browsing the store.
  • the user may be able to identify contacts in their friends list who are browsing the store via this feature and also identify the contact's category (i.e., work—W, personal—P etc.). Additionally, other services 1102 may be listed such as dental and other clinics. Users may be able to book appointments online via a clinic appointment system available through system 10 . Users may also make use of a ‘smart check’ feature that checks the user's calendar for available slots and suggests potential dates to the user for booking appointments and/or proceeds to book the appointment for the user by providing the clinic with the user's availability dates. Once the clinic confirms a booking, the smart check calendar feature informs the user of the confirmed date via SMS/email/voicemail/phone call. Users may set their preferred method of communication.
  • the smart check feature may additionally suggest to the clinic the best dates for scheduling an appointment by cross-referencing both the patient/client's schedule and the clinic's schedule.
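A minimal sketch of the ‘smart check’ cross-referencing idea follows: intersect the user's free calendar slots with the clinic's open slots and suggest the earliest matches. The slot representation, example dates and function name are illustrative assumptions.

    # Sketch of 'smart check': suggest the earliest slots free for both parties.
    from datetime import datetime

    def suggest_appointments(user_free, clinic_open, max_suggestions=3):
        """Both arguments are sets of candidate datetime slots; return the earliest
        slots that are free for both the patient and the clinic."""
        return sorted(user_free & clinic_open)[:max_suggestions]

    user_free = {datetime(2009, 4, 6, 10, 0), datetime(2009, 4, 7, 15, 0)}
    clinic_open = {datetime(2009, 4, 7, 15, 0), datetime(2009, 4, 9, 9, 0)}
    print(suggest_appointments(user_free, clinic_open))   # -> [2009-04-07 15:00]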
  • Users may mark other appointments in their digital calendar.
  • the calendar may send appointment reminders via SMS, email, phone call to the user depending on user preferences and the user will be presented with options to confirm, cancel or postpone the appointment upon receiving the appointment reminder.
  • the calendar would notify the user of the duration after which the appointment is scheduled, for example—‘your dentist appointment is in 15 minutes’.
  • the smart-check feature could also cross-reference the dentist clinic's electronic schedule in real time and inform the user whether their appointment is delayed or postponed because the clinic is running late or for some other reason.
  • Other services such as food/catering 1103 may be available permitting the user to order online.
  • Another feature available on system 10 is an ‘electronic receipt manager’. This feature allows the user to keep track of all receipts of products purchased through system 10 and other receipts that the user may want to keep track of. This may prove useful to users for purposes such as exchanging or returning merchandise, tax filing, corporate reimbursements and others. Users would be able to categorize receipts (for example, business, personal etc.); import and export receipts to other places such as the user's local computer or a tax filing software; and conduct calculations involving amounts on those receipts. Stores on system 10 may also find it useful to have and store these electronic receipts in order to validate product purchases during a product return or exchange. (Receipts for purchases made at the physical stores can also be uploaded to the electronic receipt manager.)
  • the store and services layout 1106 , and store and services listing may also be customized by the user to comprise favourite stores and services of the user i.e., stores and services such as the dentist, mechanic, family physician, hair salon, eateries etc. most frequently visited by the user (may be entitled ‘My Business’ section in exemplary embodiment). This would permit the user to create their own virtual mall or virtual community providing quick and easy access to stores and services most beneficial to the user as well as their contact and other information. (Users can search for businesses and add them to their ‘community’ or contacts list.
  • a list of businesses with that name or similar names may be shown and may be displayed in ascending order of the distance from the user's home, office, city, or current location).
  • a user can also visit other users' virtual malls and communities.
  • a virtual mall may be mapped to a real mall and contain stores and services that are present in the real mall.
  • the ‘My Business’ concept described above may be integrated with social networking sites. Tools may be available to businesses to communicate with the user clients and customers, such as via the clinic appointment system described above. Tools may be available to customers to manage receipts, product information and also to split bills.
  • the system described with reference to FIG. 46I may be integrated with the VOS and/or VS described in this document.
  • FIGS. 47 A-B illustrate features that allow the user to customize pages on system 10 ; to set the theme and other features that allow the user to personalize the browser application's and/or local application's look and feel.
  • FIG. 47A shows a theme options menu 1108 where a user can choose and set the colour theme of the browser pages that they will be viewing during their session on system 10 .
  • the user has chosen ‘pink’. Accordingly, the theme changes as shown via the windows in FIGS. 47A-B .
  • FIG. 47B also shows features available to the user for specifying the delivery information 1112 of a product upon purchase. Users may specify a friend from their address book or friends' list and also specify the delivery location type (i.e., work, home etc.). The system would then directly access the latest address information of that friend from their user profile. This address would subsequently be used as the delivery address.
  • FIGS. 48A-F where some features and layout designs of system 10 are illustrated in exemplary embodiment. These features and designs can be used with the local application or a web browser or a website in exemplary embodiments. The description of these figures is provided with respect to the local application but it also holds in the case of a browser implementation or a website implementation of the same.
  • the display screen 1130 is encased by an outer shell 1131 , henceforth referred to as the ‘faceplate’ of the local application.
  • the faceplate can be changed by a user by selecting from a catalogue of faceplates with different designs and configurations, which will be available under menu options.
  • On the faceplate are navigation links represented by buttons with icons 1132 , in an exemplary embodiment.
  • the lifesaver icon 1133 serves as a link for the help menu.
  • Button 1134 represents the user account navigation link which directs the user to their account or profile space/section on the local application, consisting of the user's personal information, account and other information; settings and options available to the user to configure their local application or browser application; information and links to tools and applications that the user may add to their local or browser application.
  • Navigation link 1135 on the faceplate is discussed with reference to FIG. 48A . Other navigation links on the faceplate will be discussed with reference to the figures that follow.
  • Button 1135 directs the user to the user model space/section of the local application (button 1135 is highlighted with a red glow here to show that it is the active link in this figure i.e., the screen 1130 displays the user model space).
  • Menu options 1137 for viewing, modifying and using the 3D model are provided on this page.
  • Other features may be present in this space that can be utilized in conjunction with the 3D model.
  • the fitting room icon 1138 is provided as an exemplary embodiment. Upon activating this icon (by clicking it for example), the fitting room contents are displayed 1139 (in the form of images here) enabling the user easy access to the apparel they would like to fit on their user model 1136 .
  • navigation link 1145 which represents ‘shopping tools’ is shown as being active.
  • the display screen 1130 displays the shopping tools space of the local application.
  • This space provides the user with applications and options that assist in shopping online and/or electronically via the local application software.
  • Icon 1146 when activated (by hovering over icon with mouse or by clicking icon, as examples) displays a menu of user lists 1147 (shopping list, wishlist, registries etc.), which may be used to document shopping needs.
  • This menu 1147 subsides/is hidden when the icon is deactivated (i.e., by moving the mouse away from the icon or by clicking the icon after activating it, as examples).
  • Icons 1148 - 1152 in FIG. 48B function in a similar way in terms of activation and deactivation.
  • Icon 1148 provides a menu with features to assist in shopping and in making the shopping experience immersive. As shown in the figure, these features include the collaborative shopping trip feature, consultation (online or offline) with a style or fashion expert among others.
  • Feature 1149 provides the user with access to gift catalogues, gift cards/certificates, as well as information on gifts received and sent.
  • Icon 1150 provides the shopping cart menu listing items that the user has chosen for purchase, as well as items that the user has selected in order to decide later whether or not to purchase them. It also directs the user to the checkout page. Feature 1151 assists the user in making shopping related searches and also in seeking out products in specific categories such as ‘top bargains’, ‘most selling’, ‘highest rated’ etc. Icon 1152 provides features customizable by the user and/or user specific tools such as item ratings, product tags or labels etc.
  • Navigation link 1160 which represents the ‘connect’ feature is shown as being active. This link directs the user to the social networking space of the local application.
  • the list box 1161 provides the user with a listing of the user's friends and other contacts. It may contain contact names, contact images, web pages, personal and other information relating to each contact.
  • Feature 1162 provides the user with the facility to select multiple contacts (in this case, feature 1162 appears in the form of checkboxes as an exemplary embodiment).
  • social networking features are provided i.e., applications that provide the facility to shop, communicate, interact online, virtually and/or electronically and perform other activities electronically with contacts. Some of these features are illustrated in FIG. 48C .
  • Icons 1163 , 1165 , 1167 can be activated and deactivated in a fashion similar to icons 1146 , 1148 - 1152 in FIG. 48B .
  • a shopping trip invite menu 1164 appears, providing the user with options to send an automated or user-customized shopping trip invitation message to all or selected contacts from the list 1161 . These options are symbolized by the icons in the menu 1164 . From left to right, these icons allow the user to send invitations via ‘instant notification’, ‘phone’, ‘email’, ‘SMS’ or ‘text message’, and ‘chat’.
  • Feature 1165 provides a menu with options to communicate with all or selected users in 1161 .
  • Feature 1166 provides the user with gift giving options available on system 10 . Users can select friends in 1161 via 1162 and choose from the gift options available in menu 1167 . From left to right in menu 1167 , these icons represent the following gift options: ‘gift cards’, ‘shop for gifts’, ‘donate with friends’, ‘virtual gifts’. This list can contain other gift options such as the ones provided by 1149 in FIG. 48B .
  • the arrow 1168 allows the user to navigate to other applications in this space that are not shown here but may be added later.
  • In FIG. 48D , the ‘financial tools’ link 1175 is shown as active and the corresponding space that the user is directed to is shown in the display screen 1130 . Some of the features accessible by the user in this space are described next. Feature 1176 and other icons in this space can be activated and deactivated in a manner similar to icons in other spaces of the local application, as explained previously. Upon activating icon 1176 , options menu 1177 appears displaying options that can be used to view, manage and perform other activities related to purchase receipts, refunds and similar transactions.
  • ‘billing history’ allows the user to view the complete listing of financial transactions conducted through system 10 ; ‘pay bills’ allows the user to pay for purchases made through system 10 via a credit card provided for making purchases at stores on system 10 ; ‘refunds’ assists in making and tracking refunds; ‘manage receipts’ allows the user to organize and label electronic receipts, perform other housekeeping functions involving their receipts, and perform calculations on receipts; ‘edit tags’ allows users to create, modify, and delete receipt/bill tags or labels. These could include ‘business’, ‘personal’ and other tags provided by the system or created by the user.
  • the accounts feature 1178 provides options that allow the user to view and manage accounts—balances, transfers and other account related activities, account statistics and other account specific information.
  • Feature 1179 provides other tools that assist the user in managing financial transactions conducted on system 10 , as well as financial accounts, and other personal and business finances. Some of these are shown in the figure and include ‘expense tracker’, ‘split bill’ (which was described previously in this document), ‘currency converter’, ‘tax manager’ etc. Since this is a space requiring stringent security measures, icon 1180 informs the user of the security measures taken by system 10 to protect information in this space.
  • the electronic receipts may be linked with warranty information for products from the manufacturer/retailer, so that users may track remaining and applicable warranty on their products over time.
  • warranty information on a user's account may serve useful for authenticating product purchase and for warranty application terms. Since the receipt is proof of product purchase, it may also be used to link a user's account containing the receipt for a product, with the user manual, product support information and other exclusive information only available to customers purchasing the product. Other information such as accessories compatible with a product purchased may be linked/sent to the user account containing the product's receipt.
  • FIG. 48E where the ‘share manager’ space ( 1185 ) on the local application is described.
  • User files on a local machine or in the user account on system 10 can be shared by activating a share icon similar to 1186 .
  • Items may be shared in other spaces as well but this space provides a comprehensive list of features for sharing items, managing shared items, users and activities involving shared items. Users can keep track of items they have shared with other users ( 1187 , 1188 ). Users may change share settings and options, view their sharing activity history, tag shared items, add/remove files/folders and perform other actions to manage their sharing activity and items ( 1189 , 1190 ). Users may maintain lists of other users they share items with, subscribe to and send updates to sharing network on items shared, and maintain groups/forums for facilitating discussion, moderating activities on shared items ( 1191 ).
  • Style tools are available to assist users in making better fashion choices while shopping for clothes and apparel ( 1214 ). These tools include consulting or acquiring fashion tips/advice from a fashion consultant, constructing a style profile which other users or fashion experts may view and provide appropriate fashion related feedback.
  • a ‘my look’ section is also present in this space where users can create their own ensembles/looks by putting together items from electronic clothing and apparel catalogues (available from online stores for example). Further, users may manage, browse or search for outfits of a particular style in store catalogues using style tools provided in this space ( 1214 ).
  • a virtual fitting room ( 1216 ) is present to manage apparel items temporarily as the user browses clothing stores. Apparel in the fitting room may be stored for trying on/fitting on the user model.
  • a virtual wardrobe space ( 1218 ) also exists for managing purchased apparel or apparel that already exists in the user's physical wardrobe. The simulations/images/descriptions of apparel in the wardrobe may be coordinated or tagged using the wardrobe tools ( 1218 ).
  • the fitting room and wardrobe feature and embodiment descriptions provided earlier also apply here.
  • the application has been referred to as a ‘local application’. However, this application may also be run as a whole or part of a web application or a website or a web browser or as an application located on a remote server.
  • FIGS. 49A-O where an immersive Application and File Management System (AFMS) or Virtual Operating System (VOS) and its features are described.
  • The AFMS/VOS system or a subset of its features may be packaged as a separate application that can be installed and run on the local or network machine. It can also be implemented as a web browser or as part of a web browser and/or as part of an application that is run from a web server and can be accessed through a website. It can also be packaged as part of specialized or reconfigurable hardware, or as a piece of software, or as an operating system.
  • This application may be platform independent. It may also take the form of a virtual embodiment of a computing device shown in FIG. 2 .
  • FIG. 49A is a login window that provides a layer of security which may or may not be present when an application using this system is accessed depending on the security level selected.
  • Default file categories may be provided with the system; some are shown in the figure in an exemplary embodiment. These are folders to store web links ( 1250 ), shopping related content ( 1252 ), multimedia related content ( 1254 ) and data files ( 1256 ). Users may create their own folders or remove any of the default folders provided, if they wish. In this figure, the shopping related folder is selected. It contains the categories or tags 1258 , which are shown in exemplary embodiment. The user can create new tags, remove tags, create sub-level tags/categories and so on. The user can also conduct tag-keyword specific file searches within the system.
  • the user can go to the product tag and access the sub-tags ( 1260 ) within this category.
  • the user can select the keyword Canon200P (highlighted in orange in the figure).
  • Other tags/sub-tags ( 1264 ) can be similarly selected to be used in combination in the keyword specific search.
  • An operator menu 1262 is provided so that the user can combine the tags using either an ‘OR’ or ‘AND’ operator in order to conduct their search, the results of which can be obtained by clicking the search operator 1266 .
  • the user may also choose to filter certain results out using the ‘filter’ function 1268 which allows the user to set filter criteria such as tag keywords and/or filename and/or subject, content or context specific words, and other criteria.
  • the user may also choose to filter out tags and/or sub-tags by using a feature that allows the user to mark the tag as shown (in this case with an ‘x’ sign 1270 as shown in exemplary embodiment).
  • User can create multiple levels of tags and sub-tags as shown by 1272 .
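A minimal sketch of the tag-combination search and exclusion filter described in the preceding items, assuming a simple in-memory tag index; the index layout, file names and function name are illustrative assumptions, not the patented implementation:

```python
# Combine tag keywords with AND/OR operators and exclude marked tags,
# mirroring the operator menu (1262), search (1266) and filter (1268/1270)
# features described above. Data structures here are invented for illustration.

def search_by_tags(tag_index, include_tags, operator="AND", exclude_tags=()):
    """tag_index maps a tag keyword to the set of file paths carrying it."""
    sets = [tag_index.get(tag, set()) for tag in include_tags]
    if not sets:
        return set()
    result = set.intersection(*sets) if operator == "AND" else set.union(*sets)
    for tag in exclude_tags:             # drop files carrying an excluded tag
        result -= tag_index.get(tag, set())
    return result

tag_index = {
    "Products": {"canon200p_specs.pdf", "hp_printer.pdf"},
    "Canon200P": {"canon200p_specs.pdf", "canon200p_review.html"},
    "Reviews": {"canon200p_review.html", "hp_printer.pdf"},
}
print(search_by_tags(tag_index, ["Products", "Canon200P"], operator="AND"))
```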
  • a file categorizing system has been defined in terms of tags that can be created and linked/associated with files and folders. Users can view tags, as shown in FIG. 48B , instead of filenames and folder names as in a standard file system.
  • the tagging method can also be used to tag websites while browsing. Tags can be used with documents, images, applications, and any other type of data. Files and folders can be searched and the appropriate content retrieved by looking up one or a combination of tags associated with the files and folders. Users may also simply specify tags and the AFMS would identify the appropriate location to store/save/backup the file. In an exemplary embodiment, suppose a user is trying to save an image with the tag ‘Ireland’.
  • the AFMS would identify the file as an image file and the tag ‘Ireland’ as a place/destination that it identifies as not being in the user's vicinity (i.e., not in the same city or country as the user). Then, the AFMS would proceed to store the file in an image space/section/file space in the subspace/subsection entitled or tagged as ‘My Places’ or ‘Travel’. If a subspace does not exist that already contains pictures of Ireland, it would create a new folder with the name/tag ‘Ireland’ and save the image in the newly created subspace, else it would save the image to the existing folder containing pictures of ‘Ireland’.
  • the user may want to save a project file tagged as ‘Project X requirements’.
  • the AFMS determines that there are associate accounts, as described later, that share files related to Project X on the owner user's account.
  • the AFMS proceeds to save the file in the space tagged as ‘Project X’ and sets file permissions allowing associate accounts that share Project X's space on the owner user's account to access the newly saved file (Project X requirements).
  • the AFMS/VOS not only determines the appropriate load/save location for files, but also the permissions to set for any new file on the system. Additionally, the file and folder content may be searched to retrieve relevant files in a keyword search.
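A hedged sketch of this tag-driven routing, covering both examples above (an image tagged with a far-away place, and a document tagged with a shared project). The folder names, the "place near the user" test and the permissions model are assumptions made for the example only:

```python
import os

def route_file(filename, tags, user_city, known_places, shared_projects):
    """Pick a destination folder and a list of associate accounts to grant
    access, based on file type and tags (illustrative logic only)."""
    ext = os.path.splitext(filename)[1].lower()
    is_image = ext in {".jpg", ".jpeg", ".png", ".gif"}

    for tag in tags:
        # A place tag outside the user's vicinity goes under 'My Places'.
        if is_image and tag in known_places and known_places[tag] != user_city:
            return os.path.join("Images", "My Places", tag), []
        # A project tag inherits that project's associate-account permissions.
        if tag in shared_projects:
            return os.path.join("Projects", tag), shared_projects[tag]
    return "Unsorted", []

places = {"Ireland": "Dublin"}
projects = {"Project X": ["associate_alice", "associate_bob"]}
print(route_file("cliffs.jpg", ["Ireland"], "Toronto", places, projects))
print(route_file("requirements.doc", ["Project X"], "Toronto", places, projects))
```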
  • a user may tag a photo showing the user as a child with his mom on the beach, with the term ‘childhood memories’.
  • the user may tag the same photo with the phrase ‘My mommy and me’ and ‘beach’. Anytime the user searches for any of the tags, the photo is included in the collection of photos (or album) with the given tag.
  • a single photo can belong to multiple albums if it is tagged with multiple keywords/phrases.
  • one such application is a photo mixer/slideshow/display program that takes as input a tag name(s), retrieves all photos with the specified tag, and dynamically creates and displays the slideshow/photo album containing those photos.
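A small sketch of such a tag-driven display program, assuming the photo metadata is a mapping from filename to tags; the "display" step is a placeholder print rather than actual rendering:

```python
import time

def photos_with_tags(photo_tags, wanted):
    """photo_tags maps a filename to the set of tags attached to it."""
    wanted = set(wanted)
    return [name for name, tags in photo_tags.items() if wanted & tags]

def run_slideshow(photo_tags, wanted, delay=0.1):
    for name in photos_with_tags(photo_tags, wanted):
        print("showing", name)      # stand-in for handing off to an image viewer
        time.sleep(delay)

photo_tags = {
    "beach1.jpg": {"childhood memories", "beach"},
    "beach2.jpg": {"My mommy and me", "beach"},
    "office.jpg": {"work"},
}
run_slideshow(photo_tags, ["beach"])   # both beach photos fall in this album
```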
  • Applications 1280 may be provided by the AFMS/VOS system. Alternatively, external applications may be added to it. In the following figure, examples of two applications are shown in context in order to describe the immersive features of this system.
  • the first application is a blog 1282 .
  • This application can be instantiated (i.e., opens up) within the AFMS itself, in an exemplary embodiment. If the blog exists on a website, then the user would navigate to that site and edit its contents from within AFMS. Users can then add multimedia content to their blog with ease.
  • the AFMS provides an interface 1284 for viewing and using files that may be located either on the user's local machine or in the AFMS or on a remote machine connected to the web.
  • the file viewer/manager may open up in a sidebar 1284 as shown in exemplary embodiment, or in a new dialog window or take some other form which allows concurrent viewing of both the application 1282 and files. Snapshots of files can be seen within this file manager as shown by 1284 . The user can then simply drag and drop files for use in application 1282 . Examples of this are shown in FIG. 49C . The user can drag and drop images or videos 1286 for use with the blog application 1282 . The following figure FIG. 49D shows the resulting effect. Further, the complete file repository may be accessed by using a navigation scheme 1288 within the manager to view contents. Here a cursor scheme 1288 is used to navigate within the file manager.
  • FIG. 49D where the blog application 1282 is shown with the image and video files 1290 that were uploaded by dragging and dropping from their respective file locations using the file manager window 1284 .
  • the file manager window 1284 in FIG. 49D shows files that include the tags ‘Products: HP’ and ‘Reviews: CNET’. Web links are shown sorted by date. The figure shows that hyperlinked content can also be embedded within applications via the file manager. Here the link is dragged and dropped 1292 demonstrating ease of use even in such cases.
  • FIG. 49E where the result is shown.
  • the hyperlinked content appears with the title, source and a summary of the content. The way this content appears can be modified by hovering with the mouse over this content, in an exemplary embodiment. This causes a window 1296 to appear which shows options that the user can select to show/hide entire hyperlinked article content, or summary and/or the source of the content.
  • FIGS. 49F-G where an example of immersive file features comprising the AFMS/VOS is given with reference to another application.
  • it is a notebook/scrapbook application 1300 as shown in FIG. 49F .
  • Options 1302 for customizing applications and/or changing application settings will be present in the AFMS.
  • the file manager window 1304 from which files under the relevant tags can be dragged and dropped 1306 to the appropriate location in the application 1300 .
  • FIG. 49G shows the results 1310 where the selected multimedia files have been uploaded to the application by a simple move of the mouse from the file space to the application space within the AFMS.
  • Content 1312 in the application may be edited or uploaded from the file space right within the AFMS where the users have readily available their file space, applications, the web and other resources.
  • FIG. 49H presents the example at the top in terms of a user need.
  • a user may want to create an exclusive file space (also called ‘smart file spaces’) for books where they can store and manage a variety of file types and content.
  • the AFMS/VOS allows the user to create such a section. The procedure starts off by creating and naming the section and picking an icon for it which will be provided in a catalogue 1320 to users. Users may also add their own icons to this catalogue. The result is the user's very own book space 1326 which can be referenced by the iconic section caption 1322 . The user may decide to add folders or tags in this space.
  • FIG. 49H shows the user dragging and dropping images of books that the user is interested in, into the books section 1326 .
  • the image content thus gets uploaded into the user's customized file space. Images and other content uploaded/copied from a site in this manner into a user's file space may be hyperlinked to the source and/or be associated with other information relating to the source. Users can add tags to describe the data uploaded into the file space.
  • the AFMS/VOS may automatically scan an uploaded object for relevant keywords that describe the object for tagging purposes.
  • the system may use computer vision techniques to identify objects within the image and tag the image with appropriate keywords. This is equivalent to establishing correspondence between images and words. This can be accomplished using probabilistic latent semantic analysis [55]. This can also be done in the case of establishing correspondence between words (sentences, phonemes) and audio.
  • FIG. 49I illustrates that textual content/data may also be copied/uploaded into the user's customized file space by selecting and copying the content in the space. This content may be stored as a data file or it may be ‘linked’ to other objects that the user drags the content over to, in the file space. For instance, in FIG. 49I ,
  • the user drags the selected content 1328 from the webspace 1324 over the image 1330 .
  • the copied content gets linked to this image object 1330 .
  • the linked content may be retrieved in a separate file or it may appear alongside the object, or in a separate dialog or pop-up or window when the user selects the particular object, for instance, by clicking on it.
  • FIG. 49J shows the file space 1340 after content from the website has been uploaded.
  • the image objects 1342 along with their source information are present.
  • the content 1344 (corresponding to the selected text 1328 in FIG. 49I ) can be viewed alongside the linked image in the file space 1340 .
  • the AFMS/VOS allows for creation and management of ‘context specific file spaces’ where the user can easily load content of different types and organize information that appears to go together best, from a variety of sources and locations, in a flexible way, and without worrying about lower layer details.
  • An object in a file space can be cross-referenced with information or data from other applications that is of relevance or related to that object.
  • the book object or information unit 1346 can be cross referenced with web links, related emails and calendar entries as shown in 1348 and categorized using relevant tags.
  • the user has added web links of stores that sell the book, emails and calendar entries related to the subject matter and events involving the book.
  • the information in any given smart file space can be used by the AFMS/VOS to answer user queries related to objects in the file spaces.
  • the user may query the AFMS for the date of the ‘blink’ book signing event in the ‘My Books’ file space 1340 in FIG. 49J .
  • the AFMS identifies the ‘blink’ object 1346 in the file space and looks up appropriate information linked to or associated with 1346 .
  • the AFMS searches for linked calendar entries and emails associated with 1346 related to ‘book signing’, by parsing their subject, tags and content.
  • the AFMS would identify and parse the email entry on book signing in 1348 in FIG. 49J and answer the query with the relevant date information.
  • each file space may be associated with an XML file.
  • the code underlying the content is parsed and the appropriate information and properties are identified.
  • This information includes type of content or data, content source, location, link information (for example, this is a link to an image of a house), content description/subject.
  • Other information that the AFMS/VOS determines includes the application needed to view or run the object being saved into the file space. For instance, when an image is dragged and dropped into a file space from a web page, the HTML code for the web page is parsed by the AFMS in order to identify the object type (image) and its properties. Parsing the image source attribute (the src of the <img> tag) in the HTML file for the web page provides the source information for the image, in an exemplary embodiment.
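As an illustration of the parsing step above, the following sketch uses Python's standard-library HTML parser to pull the source of each <img> element out of a page; a real drag-and-drop handler would start from the specific element the user dragged rather than scanning the whole document:

```python
from html.parser import HTMLParser

class ImageSourceParser(HTMLParser):
    """Collect the src attribute of every <img> tag encountered."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "src" in attrs:
                self.sources.append(attrs["src"])

page = '<html><body><img src="http://example.com/book.jpg" alt="book"></body></html>'
parser = ImageSourceParser()
parser.feed(page)
print(parser.sources)   # ['http://example.com/book.jpg']
```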
  • FIG. 49K collaborative features of the AFMS/VOS and its associated file management features are described.
  • Users can maintain a list of friends 1360 and their information in the AFMS/VOS. These friends can have limited access accounts on this system (called ‘associate’ accounts described later) so that they can access and share the primary user's resources or interact with the primary user.
  • Users can set options to share information units/objects in their file spaces, such as book object 1362 in the ‘My Books’ section 1326 in FIG. 49K , with their friends. Users can drag and drop objects directly onto a friend's image/name in order to share those objects with the friend.
  • Another feature in this file system is that when an object 1362 in the file space 1326 and friends 1364 from the friends list 1360 are selected concurrently, a special options window 1366 pops up that presents features relevant to the ‘sharing’ scenario.
  • the AFMS/VOS recognizes that selections from both the friends list and file space have been made and presents users with options/features 1366 that are activated only when such a simultaneous selection occurs and not when either friends or file space objects are exclusively selected.
  • Some of these options are shown in 1366 in exemplary embodiment. For instance, users can set group tasks for themselves and their friends involving the selected object, such as attending the author signing event for the book 1362 . Other options include turning on updates, such as the addition of objects, for a section to the selected friends; going on a shopping trip for the object with selected friends.
  • Owners may be able to keep track of physical items they lend to or borrow from their friends.
  • An object in a file space may be a virtual representation of the physical item. Users can set due dates or reminders on items so that items borrowed or lent can be tracked and returned on time.
  • a timestamp may be associated with a borrowed item to indicate the duration for which the item has been borrowed.
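A hedged sketch of this lending tracker: each borrowed item is a virtual object carrying a borrow timestamp and a due date, and a simple check produces a reminder. The field names and reminder policy are illustrative assumptions, not the system's actual data model:

```python
from datetime import datetime, timedelta

class BorrowedItem:
    def __init__(self, title, lender, borrower, days_allowed=14):
        self.title = title
        self.lender = lender
        self.borrower = borrower
        self.borrowed_at = datetime.now()                       # loan timestamp
        self.due_date = self.borrowed_at + timedelta(days=days_allowed)

    def reminder(self, now=None):
        now = now or datetime.now()
        if now > self.due_date:
            return f"'{self.title}' is overdue; return it to {self.lender}."
        days_left = (self.due_date - now).days
        return f"'{self.title}' is due back to {self.lender} in {days_left} days."

item = BorrowedItem("Blink", lender="Alice", borrower="Bob", days_allowed=7)
print(item.reminder())
```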
  • This method(s) to keep track of items can serve as a Contract Management System. This service can be used to set up contracts (and other legal documents) between users using timestamps, reminders and other features as described.
  • witnesses and members bound to a contract may establish their presence during contract formation and attestation via a webcam or live video transmission and/or other electronic means for live video capture and transmission.
  • Members bound to a contract and witnesses may attest documents digitally (i.e., use digital signatures captured by electronic handwriting capture devices for example). Users may also create their WILL through this system. User authenticity may be established based on unique pieces of identification such as their Social Insurance Number (SIN), driver's license, passport, electronic birth certificate, retinal scans, fingerprints, health cards, etc. and/or any combination of the above. Once the authenticity of the user has been verified by the system, the system registers the user as an authentic user. Lawyers and witnesses with established credibility and authenticity on the system may be sought by other users of the system who are seeking a lawyer or witness for a legal document signing/creation for example.
  • the credibility of lawyers, witnesses and other people involved in authenticating/witnessing/creating a legal document may further be established by users who have made use of their services. Based on their reliability and service, users may rate them in order to increase their credibility/reliability score through the system.
  • group options involving data objects and users are a unique file management feature of the AFMS/VOS that allows for shared activities and takes electronic collaboration to a higher level.
  • the Contract Management System may be used/distributed as a standalone system.
  • FIG. 49K shows options/features 1370 that are presented for managing an information unit upon selecting the particular object or information unit 1368 in a file space. These options allow users to send an email or set tasks/reminders related to the object; tag the object, link other objects; receive news feeds related to that object; add it to another file space; and perform other tasks as given in 1370 .
  • a user may want to look up information on the last client meeting for a specific project.
  • the file space for the project created by the user, would contain the calendar entry for the last meeting, the email link containing the meeting minutes as an attachment, and other related objects and files.
  • the user may also share the project file space with other users involved in the project by adding them as ‘friends’ and sharing the file space content, in exemplary embodiment.
  • the smart file space saves the user time and effort as the user no longer has to perform tedious tasks in order to consolidate items that may ‘belong together’ according to a user's specific needs.
  • the user does not need to save the meeting minutes or the email content separately; just dragging and dropping the appropriate email from the email application to the project's file space suffices and the email and attachment are automatically linked to/associated with the project.
  • the user does not have to open the calendar application and tediously browse for the last calendar entry pertaining to the meeting.
  • sharing the project space with colleagues is easy so that project members can keep track of all files and information related to a project without worrying about who has or doesn't have a particular file.
  • Other information may be available to users sharing a file space such as the date and time a particular file was accessed by a user, comments posted by shared users etc.
  • tools to ease file sharing and collaboration may be available via the VOS as described below with reference to FIG. 20 .
  • FIG. 49L represents an exemplary embodiment of the storage structure of the AFMS/VOS.
  • Data stored on a user's local machine, on remote sites or servers such as a user's work machine or online storage, and data of the user's friends on the system is managed by the file management layer.
  • the file management layer handles conflict analysis, file synchronization, tagging, indexing, searching, version control, backups, virus scanning and removal, security and fault protection and other administrative tasks.
  • Data (modification, updates, creation, backup) in all user and shared accounts on local or remote machines, on web servers, web sites, mobile device storage and other places can be synchronized by this layer.
  • a property of the file system is that it caches files and other user data locally when network resources are limited or unavailable and synchronizes data as network resources become available, to ensure smooth operation even during network disruptions. Backups of data conducted by the AFMS may be on distributed machines.
  • An abstract layer operates on top of the file management system and provides a unified framework for access by abstracting out the lower layers. The advantage of this is that the VOS offers location transparency to the user. The user may log in anywhere and see a consistent organization of files via the VOS interface, independent of where the files/data may be located or where the user may be accessing them. The VOS allows users to search for data across all of the user's resources independent of the location of the data.
  • FIG. 49P demonstrates an exemplary embodiment of an application execution protocol run by the Application Resource Manager ARM (which is a part of the virtual operating system).
  • the ARM checks to see whether this application is available on the portal server 1402 . If so, then the application is run from the portal server 1404 . If not, then the application plug-in is sought 1406 . If the plug-in exists, the application is run from the local machine 1412 .
  • a check for the application on the local machine is conducted 1410 . If available, the application is executed from the client's local machine 1412 . If not, the application is run from a remote server on which the user has been authenticated (i.e., has access permission) 1414 , 1416 . If all the decision steps in the algorithm in FIG. 49P yield a negative response, the ARM suggests installation paths and alternate sources for the application to the user 1418 . The user's data generated from running the application is saved using the distributed storage model.
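The decision flow described for FIG. 49P can be summarized in a compact sketch; the boolean checks below are stand-ins for the real availability and authentication tests, and only the ordering of decisions is taken from the description above:

```python
def choose_execution_site(on_portal_server, plugin_available,
                          on_local_machine, authenticated_remote):
    if on_portal_server:
        return "run from portal server"                    # steps 1402/1404
    if plugin_available:
        return "run via plug-in on local machine"          # steps 1406/1412
    if on_local_machine:
        return "run from client's local machine"           # steps 1410/1412
    if authenticated_remote:
        return "run from authenticated remote server"      # steps 1414/1416
    return "suggest installation paths / alternate sources"  # step 1418

print(choose_execution_site(on_portal_server=False, plugin_available=False,
                            on_local_machine=True, authenticated_remote=True))
```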
  • Another feature of the AFMS is that the user may store files in a “redirect” folder i.e., files moved/saved to this folder are redirected by the AFMS to the appropriate destination folder based on the file's tags and/or content. The user may then be notified of where the file has been stored (i.e., destination folder) via a note or comment or link in the “redirect” folder that directs the user to the appropriate destination.
  • An index file may automatically be generated for folders based on titles/keywords/tags in the documents and/or the filename. This index may display titles/keywords/tags along with snapshots of the corresponding files.
  • FIG. 49M where a user accounts management structure is shown.
  • a user management layer that manages a given ‘owner’ user's accounts as well as ‘associate’ accounts, which would include accounts of all other friends, users and groups that the owner would like to associate with.
  • Associate accounts would be created to give access to the owner account resources and data.
  • the owner account would have all administrative rights and privileges (read, write, execute, for example) and can set permissions on associate accounts to grant or restrict access to the owner's account and resources.
  • An associate account may be viewed as ‘the set of all owner resources that the associate user has access to, and the set of all activities that the associate user can engage in with the owner user’.
  • An associate account would be linked to and accessible from the associate user's primary/owner account.
  • the owner account may be accessible to and from the owner user's computer, to and from a machine at remote locations such as the office, to and from accounts at social networking sites, and through a web browser/web sites.
  • Account information such as usernames and passwords for the user's accounts on websites and other servers that the user accesses from the VOS may be stored on the system so that the user bypasses the need to enter this information every time the user accesses their external account.
  • the owner may set group policies for the associate accounts so that they have access to specific resources and applications for specific time periods on the owner's account. Owner users have the option of classifying associate users into categories such as acquaintances from work, school, family, strangers etc.
  • Another feature of the VOS is that over time it allows the user to specify automatic changes in access privileges/permissions of associate accounts on the user's network.
  • a user may want to let associate accounts, starting out with limited access/privileges, have access to more resources over time.
  • the user is able to specify the resources that associate accounts may automatically access after a certain period of time has elapsed since their account was created or since their access privileges were last changed.
  • the user may also be able to grant greater access privileges automatically to associate accounts after they demonstrate a certain level of activity.
  • the VOS automatically changes the access privileges of the associate users who have been granted access to increased/decreased resources as pre-specified by the user through options provided by the VOS.
  • This is the ‘Growing Relations’ feature of the VOS where access privileges rules of associate accounts are specified by a user and are changed accordingly by the system, as and when specified by the user.
  • the VOS is able to regulate resource use and change access privileges automatically in the absence of user specified access privilege rules, in another exemplary embodiment.
  • the VOS may monitor activity levels of associate accounts and interactivity between user and associate users and automatically determine which associate users may be allowed greater access privileges.
  • the system may deem this associate user as a ‘trusted associate’. It may also use other means of determining the ‘trustworthiness’ of an associate user.
  • the system may seek permission of the user before changing access privileges of the trusted associate user.
  • the ‘trust score’ is the method used by the system to keep track of the activity levels of an associate account
  • the system would promote the status of the associate account progressively by assigning status levels such as: Stranger, Acquaintance, Friend, Family—in that order from first to last. The higher the status of an account, the more access privileges are granted to that account.
  • if the VOS detects that there is little interactivity of an associate account over time, or determines lower resource needs of an associate account, or assesses that an associate account is less ‘trustworthy’ based on usage patterns of associate account users, then the VOS would regress the status of the account and grant fewer privileges accordingly.
  • the system may again seek the permission of the user before modifying access privileges of any associate account.
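A sketch of this 'Growing Relations' ladder, mapping an accumulated trust/activity score onto the Stranger → Acquaintance → Friend → Family statuses with an optional owner confirmation step. The numeric thresholds and privilege sets are invented purely for illustration:

```python
STATUS_LADDER = [
    (0,  "Stranger",     {"view public profile"}),
    (10, "Acquaintance", {"view public profile", "view shared folders"}),
    (25, "Friend",       {"view shared folders", "add comments", "share files"}),
    (50, "Family",       {"share files", "edit shared files", "view calendar"}),
]

def status_for_score(trust_score):
    """Return the highest status whose threshold the score has reached."""
    level = STATUS_LADDER[0]
    for threshold, name, privileges in STATUS_LADDER:
        if trust_score >= threshold:
            level = (threshold, name, privileges)
    return level

def maybe_change_status(account, trust_score, ask_owner):
    _, status, privileges = status_for_score(trust_score)
    if ask_owner(account, status):        # optional owner confirmation step
        return status, privileges
    return None, set()

score = 27   # e.g. accumulated from logins, comments and shared activity
print(status_for_score(score)[1])         # -> Friend
```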
  • the VOS allows password synchronization across websites, networks and machines. For example, if a user changes a password for logging onto a local machine, say a home computer, the password change is synchronized with a password the user may use to login to their account on a webpage.
  • Various levels of access privileges may be granted by the VOS to users, including but not limited to that of a root user, administrator, regular user, super user, guest, limited user, etc., in exemplary embodiment.
  • the VOS also allows execution of shell commands.
  • VOS also provides a software development kit for users to write applications for the VOS.
  • the system may also contain an immersive search engine application that performs searches on queries presented to it.
  • the search engine may be available as a standalone feature for use with browsers and/or network machine(s) or local machine browsing applications. It may be available as part of a VOS browser, containing one or more of the VOS's features. Some of the features unique to this immersive search engine are described next. Reference is made to FIG. 49N where abstraction of a search query is demonstrated in exemplary embodiment.
  • the input is not limited to typing text and using a keyboard. Instead a new approach is proposed, where the input could be speech to text, or mouse gestures or other data.
  • a user may be able to drag and drop content from a newsfeed into the search query field.
  • Context level searches may be performed by the search engine.
  • with the search engine, when a user comes across an image while browsing the web, the user may be able to simply drag and drop the image into the search field and the browser would retrieve search results that pertain to the image's objects, theme or subject.
  • the user may quote a sentence and the search engine would retrieve searches related to the underpinning of that statement in a context related search, in another exemplary embodiment.
  • This method effectively provides a layer of abstraction for the conventional search.
  • the search engine can also retrieve search results in the form of lists where each list contains the results that fall under a specific category or context. Categories and sort criteria may be user specified.
  • the user may want to search for cars of a particular year and want them categorized according to color, most selling, safety rating and other criteria.
  • the search engine then retrieves search results of cars of that year sorted according to the specified criteria in different lists. It also keeps track of user information so that it can provide contextual information specific or relevant to the user's life. For example, if a user's friend has a car with the specifications that the user is searching for, then the search engine indicates to the user that the user's friend has a car with the same or similar specifications.
  • the search engine mines the information units present in a user's directory in order to present relevant contextual information along with search results.
  • the user may be interested in six cylinder engine cars as inferred by the system based on information objects in the user's directory.
  • the search engine then indicates to the user as to which of the search results pertain to six cylinder engine cars.
  • This type of contextual data mining can be done as discussed in reference to FIG. 6E .
  • this search engine can present to the user information in a variety of formats, not necessarily restricting the search output to text. For instance, the results may be converted from text to speech.
  • tags can then be used by web crawlers to rank pages for use in search engines.
  • web crawlers used by search engines rely primarily on the keywords provided by authors of websites, as well as content on web pages.
  • the method described here also utilizes tags provided by ordinary users browsing websites. This method also allows sites to be searched which are not registered with the search engine.
  • FIG. 49O an exemplary embodiment of the VOS is shown running as a website. The user may be presented with this screen upon logging in.
  • An API is also available for developers to build applications for the VOS. Any of the applications such as text editors, spreadsheet applications, multimedia applications (audio/video, photo and image editing), white board can be used collaboratively with other users through an intuitive interface. Collaborative application sharing may be accomplished using techniques discussed with reference to FIG. 7A , B, C, D.
  • Shared users may include friends/family members/other associates from social networking sites or work or home computer accounts. Any changes made to data or applications and other resources can be viewed by all users engaged in the collaboration of these resources and accounts.
  • the VOS may also provide an interface that allows for text, video and audio overlay.
  • the calendar feature in FIG. 49O cross-checks calendars of all users for scheduling an event or an appointment or a meeting and suggests dates convenient for all users involved.
  • a time-stamping feature is also available that lets users timestamp documents.
  • This feature also has an encryption option that allows users to encrypt documents before uploading, acquire a timestamp for the document and retrieve it for future use, keeping the document confidential all the while. This might prove useful where time-stamping documents serves as proof of ownership of an invention, for example. Encryption may be accomplished using two encryption keys in an exemplary embodiment.
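A hedged sketch of the timestamp-with-encryption idea: the document is encrypted before upload and a timestamp is stored together with a digest of the original content. It assumes the third-party `cryptography` package and uses a single symmetric key for simplicity rather than the two-key arrangement mentioned above:

```python
import hashlib
import time
from cryptography.fernet import Fernet   # assumes the 'cryptography' package is installed

def timestamp_document(plaintext: bytes):
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(plaintext)       # content stays confidential
    record = {
        "sha256": hashlib.sha256(plaintext).hexdigest(),  # digest of the original
        "timestamp": time.time(),                         # acquisition time
    }
    return key, ciphertext, record

key, blob, record = timestamp_document(b"design notes for invention X")
print(record["sha256"][:16], record["timestamp"])
print(Fernet(key).decrypt(blob))                      # retrieval for future use
```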
  • FIG. 49O also incorporates advanced search (described previously with reference to FIG. 49N ), distributed data access ( FIG. 49L ), advanced user management ( FIG. 49M ), safety deposit box, media room, launch pad, library, TV/radio and other features as shown in FIG. 49O.
  • the ‘safety deposit box’ would contain sensitive materials such as medical records, legal documents, etc. These contents are encrypted and password protected.
  • data is encrypted at the source before backing it up on machines.
  • the files may also be accessible or linked to commercial and other public or private large-scale repositories.
  • a ‘calendar alert’ application may remind the user of pending actions. For instance, based on their medical record, the application would alert the user that a vaccination is due, or a dentist appointment is due. In another instance, the application would alert the user based on financial records that their taxes are due. Similar scenarios may exist for legal documents.
  • the ‘media room’ would include all files and folders and content that the user wishes to publish or make public such as web pages, videos (such as YouTube videos) etc.
  • the launch pad is a feature that allows users to place objects in a region and take appropriate actions with those objects. It provides an interface for programming actions that can be taken with respect to objects in a variety of formats.
  • the launch pad includes community users who can contribute their applications and other software for use.
  • a user may move 2D images onto a “3D-fy” application widget in the launch pad section in order to transform them into their corresponding 3D versions.
  • a user may add an application in the launch pad area that allows document sharing and editing through a webcam.
  • the library section may include e-documents such as e-books, electronic articles, papers, journals, magazines etc. This section will be equipped with the facility whereby electronic magazines, e-papers etc.
  • the TV/radio feature allows users to browse and view channels in a more traditional sense online.
  • the channels may be browsed using the keyboard or mouse. It may also be combined with the user interface discussed with reference to FIG. 54D .
  • the output of cable TV could also be viewed via this facility. In exemplary embodiment, this can be done by redirecting output from the user's TV or cable source to the user's current machine via the internet or network.
  • the channels can be changed remotely, for example via the interface provided by the VOS or a web interface independent of the VOS.
  • this may be done by connecting a universal TV/radio/cable remote to a home computer and pointing the device towards the object being controlled via the remote, if necessary (if it's an infrared or other line-of-sight communication device).
  • software on the computer communicates with the remote to allow changing of channels and other controls.
  • the audio/video (A/V) output of the TV or cable is connected to the computer.
  • the computer then communicates with the remote device over the Internet, for display/control purposes in exemplary embodiment.
  • the TV/radio content may include files, and other media content on the user's local or remote machine(s), and/or other user accounts and/or shared resources.
  • the radio may play live content from real radio stations.
  • the system may also allow recording of TV/radio shows. On logging off the VOS, the state of the VOS including any open applications may be saved to allow the user to continue from where the user left upon logging in again. Any active sessions may also persist, if desired.
  • FIG. 49Q provides an additional exemplary embodiment of file tagging, sharing and searching features in the VOS/AFMS.
  • a web browser 1440 which may be the VOS browser
  • the user may choose to save web page content such as an image 1442 .
  • the user would be able to choose the format to save it in, and also edit and save different versions of the image.
  • the image 1444 is shown with a border around it.
  • the user can tag images to be saved using keywords 1446 . Parts of the image can also be labeled as 1448 .
  • the user can specify friends and associate users to share the image with 1450 .
  • the location 1454 of the image/file can be specified in abstract terms.
  • the user can specify the location where the file is saved such as the home or office machine, or ‘mom's computer’. Owing to the distributed file storage nature of the VOS, the lower layers can be abstracted out if the user chooses to hide them.
  • the VOS is based on a language processing algorithm. It can recognize keywords and sort them according to grammatical categories such as nouns, verbs, adjectives, etc., by looking them up in a dictionary in an exemplary embodiment. It can learn the characteristics of an associated word based on the image. More specifically, the user may be able to train the algorithm by selecting a keyword and highlighting an object or section of the image to create the association between the keyword and its description.
  • the user may select the keyword ‘horse’ and draw a box around the horse in the image, or the user may select ‘white’ and click on a white area in the image.
  • the system can be ‘contextually’ trained. Similar training and associative learning can occur in the case of audio and video content.
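The dictionary-based keyword sorting and the keyword-to-region training described above could look like the following minimal sketch; the tiny dictionary, the bounding-box region format and the function names are illustrative assumptions only:

```python
DICTIONARY = {
    "horse": "noun", "purse": "noun", "beach": "noun",
    "white": "adjective", "black": "adjective", "leather": "adjective",
    "run": "verb", "buy": "verb",
}

def sort_keywords(keywords):
    """Group keywords by grammatical category via dictionary lookup."""
    grouped = {}
    for word in keywords:
        grouped.setdefault(DICTIONARY.get(word.lower(), "unknown"), []).append(word)
    return grouped

associations = []   # learned keyword <-> image-region pairs

def train_association(keyword, image_name, box):
    """User selects a keyword and draws a box around the matching object."""
    associations.append({"keyword": keyword, "image": image_name, "box": box})

print(sort_keywords(["white", "horse", "run"]))
train_association("horse", "field.jpg", (120, 40, 320, 260))
print(associations)
```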
  • the system would be able to make contextual suggestions to the user.
  • the user may search for a ‘black leather purse’.
  • the VOS would remember search terms for a period of time and make suggestions.
  • the system would notify the user of this fact and the source/store/brand of the purse and check the store catalogue from which the purse was bought, for similar or different purse in ‘black’ and/or ‘leather’.
  • the system would inform a user ‘A’ of photos that an associate user ‘B’ has added containing user A's friend whom the user A wishes to receive updates on.
  • the VOS presents search results in a ‘user-friendly’ manner to the user.
  • Some aspects may be pre-programmed, some aspects may be learned over time by the VOS with regard to what constitutes a user-friendly presentation, whether it involves displaying images, videos, audio, text, and any other file or data in any other format to the user.
  • a user may search for a friend's photos and the VOS would display images found of the user's friend after properly orienting them, by applying affine/perspective transformations for example, before displaying them to the user.
  • the user's friend may also be highlighted by using markings or by zooming in, as examples in order to make it easier for the user to identify their friend in a group, for instance.
  • VOS searches for relevant information matching these search terms/filters based on tags associated with files and objects.
  • computer vision techniques can be used to characterize whole images/video sequences, and objects and components within images/videos.
  • the system can make comments, based on user's mined data, such as ‘it's your friend's favourite music track’. It can analyze the soundtrack and find tunes/music similar to the one the user is listening to. It can identify other soundtracks that have been remixed by other users with the track the user is listening to or find soundtracks compatible with the user's taste etc. Extraction of familiar content can be done by the system in exemplary embodiment using a mixture of Gaussians [56] or techniques similar to those in [57]. The user would be able to specify subjective criteria and ask the system to play music accordingly.
  • the user can specify the mood of the music to listen to, for instance—sad, happy, melodramatic, comical, soothing, etc.
  • Mood recognition of music can be performed via techniques specified in [58].
  • the system can also monitor user activities or judge user mood through a video or image capture device such as a webcam and play music accordingly or make comments such as ‘hey, you seem a little down today’ and play happy music or suggest an activity that would make the user happy or show links that are compatible with the user's interests to cheer the user up.
  • the tracks can be played either from the user's local machine or from online stores and other repositories or from the user's friends' shared resources. The mood underlying a soundtrack and content similar to a soundtrack can be detected using techniques specified in [59].
  • the VOS can make recommendations to users in other areas by incorporating user preferences and combining them with friend's preferences, as in the case of a group decision or consult i.e., ‘collaborative decision-making or consulting’.
  • users may specify their movie preferences such as ‘action’, ‘thriller’, ‘drama’, ‘science fiction’, ‘real life’, etc. They may specify other criteria such as day and time of day they prefer to watch a movie, preferred ticket price range, preferred theatre location, etc.
  • users may consult with each other or plan together. For example, a group of friends may want to go and watch a movie together.
  • Every user has their own movie preference, which the system may incorporate to suggest the best option and other associated information, in this case the movie name, genre, show time etc.
  • Other tools and features to facilitate group decisions include taking votes and polls in favour or against the various options available to the users. The system would then tally the votes and give the answer/option/decision that received the maximum votes.
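The vote-tallying step can be sketched as follows; the tie-breaking policy (first of the tied options) and data layout are assumptions made for the example:

```python
from collections import Counter

def tally_votes(votes):
    """votes maps a user name to the option that user voted for; return the
    option with the most votes along with the full tally."""
    counts = Counter(votes.values())
    option, _ = counts.most_common(1)[0]
    return option, counts

votes = {"Ann": "8 pm action movie", "Bob": "8 pm action movie", "Cam": "6 pm drama"}
decision, counts = tally_votes(votes)
print(decision, dict(counts))
```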
  • the system may also incorporate general information about the subject of decision in order to make recommendations. For instance, in the movie example, the system may take into account the popularity of a movie in theatres (using box office information for example), ticket deals for a movie, etc. in order to make recommendations. Users can also use the modes of operation described with reference to FIG. 7 for collaborative applications on the VOS. For example, when editing a file collaboratively, as a user edits a document, he/she can see the additions/modifications that are being made by other users.
  • FIG. 49R where an example of a user interface for filtering search data is shown. Users can filter files on the basis of location, file types or by file author(s).
  • FIG. 49S where an exemplary embodiment of an object oriented file system is shown.
  • Users can specify the structure of a folder (used for storing files on a computer).
  • a user can create a folder of type “company” in which the user specifies a structure by creating entries for subfolders of type “HR”, “R&D”, “Legal”, and “IT”.
  • Regular folders may also be created.
  • Each of the created folders can have its own structure.
  • the user can have a folder listing all the folders of type “company” as shown in the box on the left in the top row of FIG. 49S .
  • the content of a selected folder is shown in a box on the right in the top row.
  • the user has options to view by “company” or by the structures that constitute that folder, say by “HR”.
  • the top row of FIG. 49S shows an example of viewing by “company”. If the user chooses to view by “HR”, the view on the right (as shown in the bottom row of FIG. 49S ) displays all the HR folders organized by “company”.
  • Other filters are also available to the users that search according to the desired fields of a folder. Arrows are available on the right and left of the views to go higher up or deeper into folders.
  • the folders and files can have tags that describe the folder and the files.
  • the proposed object oriented file system simplifies browsing and provides the advantages of both a traditional file system and a full-fledged database.
  • FIG. 20 The collaborative interface shown in FIG. 20 for a shopping trip may be used in the case of other collaborative activities such as application, file, document and data sharing.
  • a generic version of the interface in FIG. 20 is now described in an exemplary embodiment to illustrate this extension.
  • Panel 241 lists friends involved in the collaboration.
  • An application panel replaces the store panel 242 and displays shared applications of users involved in the collaboration.
  • Panel 247 lists the user's documents, data files and other resources.
  • Panel 248 lists the user's friends' documents, data files and other resources.
  • Window 243 would facilitate collaborative sharing of applications, documents, data, other files and resources between users of a collaboration.
  • Users can direct any signal to 243 —video, audio, speech, text, image, including screen capture, i.e., they may specify a region of the screen that they wish to share in 243 , which could include the entire desktop screen.
  • a perspective correction may be applied to documents that are being shared. For example, if a video of a talk is being shared and the video of the slides of the presentation is being shot from an angle (as opposed to the camera being orthogonal to the screen), a perspective transform may be applied so that lines of text on the screen appear horizontal to ease viewing. Users may be able to drag and drop applications, files, documents, data, or screenshots as well as contents/files captured by the screenshots and other resources into window 243 during collaborative sharing.
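A sketch of such a correction, assuming OpenCV is available: given the four corners of the projected slide as seen in the frame, the slide is warped to an upright rectangle so text lines read horizontally. The corner coordinates here are placeholders; a real system would detect them or let the user mark them:

```python
import numpy as np
import cv2   # assumes OpenCV; corner detection is not shown here

def rectify_slide(frame, corners, out_w=1280, out_h=960):
    """Warp the quadrilateral 'corners' (TL, TR, BR, BL) to a fronto-parallel view."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, M, (out_w, out_h))

frame = np.zeros((720, 1280, 3), dtype=np.uint8)        # synthetic video frame
corners = [(300, 120), (1050, 180), (1000, 640), (260, 580)]  # hypothetical slide corners
upright = rectify_slide(frame, corners)
print(upright.shape)   # (960, 1280, 3)
```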
  • Window 243 has a visual overlay for users to write or draw over to permit increased interactivity during collaborative discussions. This is analogous to whiteboard discussions except that here the overlay may be transparent to permit writing, scribbling, markings, highlighting over content being shared in 243 . All this content may be undone or reversed.
  • the overlay information can be saved without affecting the original content in 243 if the user chooses to do so. Overlay information can be saved in association with the original content.
  • the system also allows a ‘snap to object’ feature which allows users to select and modify objects in the view.
  • the toolbar 239 provides overlay tools and application and/or document and file specific tools for use with the specific application and/or file or document or data being shared in 243 .
  • View 243 also supports multiple layers of content. These layers could be hidden or viewed.
  • the screen size of 243 is resizable, movable, dockable, undockable. All sessions and content (viewed, edited, text, speech, image, video, etc.), including collaborative content and information may be saved including all environmental variables.
  • collaborative environments such as these can be specialized to cater to occupation, age group, hobby, tasks, and similar criteria.
  • a shared environment with features described above may exist for students where they can collaborate on homework assignments and group projects as well as extracurricular activities such as student council meetings, organization of school events etc. Specialized tools to assist students in collaborating on school-related activities are provided with toolbar 239 .
  • This environment would also contain applications specific to the context. For instance, in the students' collaborative environment, students would be able to provide reviews on courses or teachers using the application provided for this purpose.
  • the whiteboard may be integrated with a ‘convert to physical model’ feature that transforms a sketch or other illustration or animation on the whiteboard to an accurate physical model animation or video sequence. This may be accomplished via techniques similar to those described in [3].
  • a user may draw a ball rolling on a floor which then falls off a ledge.
  • the physics feature may convert the sketch to an animation sequence where the floor has a friction coefficient, and the ball follows Newton's Laws of Motion and the Laws of Gravitation while rolling on the floor or free-falling.
  • voice to model conversion may occur where the semantics underlying speech is analyzed and used to convert to a physical model.
  • This may be accomplished by converting speech to text and then text to picture [60] and then going from picture to model [3]. Objects seen in a webcam may be converted to a model [3]. Users can then be allowed to manipulate this object virtually. The virtual object's behaviour may be modeled to be physically plausible. Based on the content of the whiteboard deciphered through OCR (optical character recognition) techniques or sketch to model recognition [3] or speech to model recognition, related content (for example advertisements) may be placed in the proximity of the drawing screen and/or related content may be communicated via audio/speech, and/or graphics/images/videos.
  • the interface shown in FIG. 20 may be used for exhibitions, where different vendors can show their product offerings.
  • FIG. 51A shows devices, systems and networks that system 10 can be connected to, in exemplary embodiment.
  • System 10 is connected to the Public Switched Telephone Network (PSTN), to cellular networks such as the Global System for Mobile Communications (GSM) and/or CDMA networks, WiFi networks.
  • the figure also shows connections of system 10 to exemplary embodiments of computing applications 16 , and exemplary embodiments of computing devices 14 , such as a home computing device, a work computing device, a mobile communication device which could include a cell phone, a handheld device or a car phone as examples.
  • the AFMS/VOS may be connected to external devices, systems and networks in a similar manner as system 10 .
  • the AFMS may additionally be connected to system 10 itself to facilitate shopping, entertainment, and other services and features available through system 10 .
  • This service makes use of the data, and applications connected to the network shown in FIG. 51A .
  • This service may be available on the portal server 20 as part of system 10 , or it may be implemented as part of the virtual operating system, or it may be available as an application on a home server or any of the computing devices shown in FIG. 51A and/or as a wearable device and/or as a mobile device.
  • the Human Responder Service or Virtual Secretary is a system that can respond to queries posed by the user regarding user data, applications or services. The system mines user data and application data, as well as information on the Internet in order to answer a given query.
  • An exemplary embodiment of a query that a user can pose to the system through a mobile communication device includes “What is the time and location of the meeting with Steve?” or “What is the shortest route to the mall at Eglinton and Crawford road?” or “Where is the nearest coffee shop?” Further refinements in the search can be made by specifying filters.
  • An exemplary embodiment of such a filter includes a time filter in which the period restriction for the query may be specified such as “limit search to this week” or “limit search to this month”.
  • the filters may also be as generic as the query and may not necessarily be restricted to time periods.
  • the input query may be specified in text, voice/audio, image and graphics and/or other formats.
  • the user can send a query SMS via their mobile device to the Virtual Secretary (VS) inquiring about the location of the party the user is attending that evening.
  • the VS looks up the requested information on social networking sites such as Facebook of which the user is a member, the user's calendar and email. After determining the requested information, the VS then responds to the user by sending a reply SMS with the appropriate answer. If multiple pieces of information are found, the VS may ask the user which piece of information the user would like to acquire further details on. The user may also dictate notes or reminders to the VS, which it may write down or post on animated sticky notes for the user.
  • the VS may be implemented as an application 16 on a home computing device 14 that is also connected to the home phone line. Calls by the VS can be made or received through VoIP (Voice-over-Internet-Protocol) or the home phone line.
  • the VS can also be connected to appliances, security monitoring units, cameras, GPS (Global Positioning Systems) units. This allows the user to ask the VS questions such as “Is Bob home?” or “Who's at home?”
  • the VS can monitor the activity of kids in the house and keep an eye out for anomalies as described with reference to FIG. 52B . Prior belief on the location of the kids can come from their schedules which may be updated at any time.
  • Other services available to the user include picking up the home phone and asking the VS to dial a contact's number, which the VS would look up in the user's address book on the user's home computer or on a social networking site or any of the resources available through the VOS.
  • the user may click on an image of a user and ask the VS to dial the number of that user.
  • the user may point to a friend through a webcam connected to the VS and ask the VS to bring up a particular file related to the friend or query the VS for a piece of information related to the friend.
  • the VS may also monitor the local weather for anomalies, and other issues and matters of concern or interest to the user.
  • For instance, if a user is outside and the VS system is aware of a snowstorm approaching, it sends a warning notification to the user on their mobile phone such as, “There is a snow-storm warning in the area, John. It would be best if you return home soon.”
  • Other issues that the VS may monitor include currency rates, gas prices, sales at stores etc. This information may be available to or acquired by the VS via feeds from the information sources or via websites that dynamically update the required information.
  • the system waits for external user commands. Commands can come via SMS, voice/audio/speech through a phone call, video, or images. These commands are first pre-processed to form instructions. This can be accomplished using text parsing for SMS, speech-to-text conversion for voice/audio/speech, parsing gestures in videos, and processing images using methods described with reference to FIG. 52A . These instructions are then buffered into memory. The system polls memory to see if an instruction is available. If an instruction is available, the system fetches the instruction, decodes and executes it, and writes the response back to memory. The response in memory is then processed and communicated to the external world.
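  • As an illustration of the fetch-decode-execute cycle described above, the following is a minimal sketch in Python, assuming a simple in-memory instruction buffer; the function names and the text-only pre-processing are hypothetical stand-ins for the speech, gesture and image pipelines.

```python
from collections import deque

# Minimal sketch of the command-handling loop described above (names are
# hypothetical). External commands are pre-processed into instructions,
# buffered in memory, then fetched, decoded, executed, and the response
# is written back for delivery to the user.

instruction_buffer = deque()   # stands in for the "memory" buffer
response_buffer = deque()

def preprocess_command(raw_command, kind):
    """Convert a raw SMS/voice/video/image command into a text instruction.
    Only text is handled here; speech-to-text or gesture parsing would plug in."""
    if kind == "text":
        return raw_command.strip().lower()
    raise NotImplementedError("speech/video/image pre-processing not sketched")

def execute(instruction):
    """Decode and execute an instruction; return a response string."""
    if instruction.startswith("where is"):
        return "Looking up location for: " + instruction[len("where is"):].strip()
    return "Sorry, I did not understand: " + instruction

def poll_once():
    """One iteration of the poll-fetch-decode-execute cycle."""
    if instruction_buffer:                           # poll memory
        instruction = instruction_buffer.popleft()   # fetch
        response_buffer.append(execute(instruction)) # decode/execute, write back

# Example usage
instruction_buffer.append(preprocess_command("Where is the nearest coffee shop?", "text"))
poll_once()
print(response_buffer.popleft())
```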
  • the VS answers queries by looking up local information—user, application data on the local machine, and then proceeds to look up information in other networks to which the user has access, such as web-based social networks, and the internet. It may also automatically mine and present information where applicable.
  • the VS searches address books on the local machine and/or the Internet and/or social networks such as web-based, home or office networks to look up a person's name, phone and other information, including pictures, and display the appropriate information during an incoming phone call. If the information is not on any of the user's networks, the VS may look up public directories and other public information to identify caller and source.
  • the VS may also look up/search cached information that was previously looked up or that is available on the local machine. Additionally, the VS gives information about the type of caller and relation between caller and user. For instance, the VS informs the user whether the call is from a telemarketing agency or from the dentist or from Aunt May in San Francisco etc. The VS may also specify location of the caller at the time of the call using GPS and positioning and location techniques. The VS may make use of the colloquial language to communicate with the user.
  • the call display feature can be used as a standalone feature with cell phones and landlines and VoIP phones. A user may query the VS with a generic query such as ‘What is an Oscilloscope?’ The VS conducts a semantic analysis to determine the nature of the query.
  • the VS determines that the query is related to a definition for a term.
  • it would look up a source for definitions such as an encyclopaedia, based on its popularity and reliability as a source of information on the internet, or as specified by the user. As an example, it may look up Wikipedia to answer the user's query in this case.
  • the VS may also be linked to, accessible to/by mobile phones or handheld devices of members in the user's friends' network, businesses in the user's network and other users and institutions.
  • Location can be computed/determined using mobile position location technologies such as the GPS (Global Positioning System) or triangulation data of base stations, or a built in GPS unit on a cell phone in exemplary embodiment.
  • the VS can inform the user if friends of the users are in the location or vicinity in which the user is located at present; and/or indicate the position of the user's friend relative to the user and/or the precise location of a given friend.
  • the VS may point this out to the user saying, “Hey, George is at the baked goods aisle in the store.”
  • the VS may establish a correspondence between the GPS location coordinates and the store map available via the retail server 24 .
  • the VS may additionally overlay the location coordinates on a map of the store and display the information on the user's handheld device.
  • the VS may display a ‘GPS trail’ that highlights the location of a user over time (GPS location coordinates in the recent past of a user being tracked). The trail may be designed to reflect age of data.
  • the colour of a trail may vary from dark to light red where the darker the colour, the more recent the data.
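  • A minimal sketch of one way the trail colouring could be implemented, assuming GPS samples carry UNIX timestamps; the dark-to-light red interpolation and the 10-minute fade window are illustrative choices, not values from the specification.

```python
import time

def trail_colour(sample_time, now=None, max_age=600.0):
    """Map the age of a GPS sample to an RGB shade: darker red = more recent.
    Samples older than max_age seconds get the lightest shade."""
    now = time.time() if now is None else now
    age = min(max(now - sample_time, 0.0), max_age)
    t = age / max_age                                # 0 = newest, 1 = oldest
    dark, light = (139, 0, 0), (255, 182, 193)       # dark red -> light red
    return tuple(int(d + (l - d) * t) for d, l in zip(dark, light))

now = time.time()
print(trail_colour(now - 30, now))    # recent sample -> darker red
print(trail_colour(now - 600, now))   # old sample -> light red
```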
  • the users may communicate via voice and/or text and/or video, and/or any combination of the above.
  • the content of the conversation may be displayed in chat boxes and/or other displays and/or graphics overlaid on the respective positions of the users on the map.
  • the user can query the VS to identify the current geographic location of a friend at any given time. Therefore, identification of a friend's location is not necessarily restricted to when a friend is in the user's vicinity.
  • Users may watch live video content of their friend on their mobile device from their location. They may interact with each other via an overlaid whiteboard display and its accompanying collaborative tools as described with reference to FIG. 20 .
  • ‘User A’ may be lost and he may phone his friend, ‘User B’ who can recognize the current location of User A based on the landmarks and video information User A transmits via his mobile.
  • User B may also receive GPS coordinates on her mobile via the VS.
  • User B can then provide directions to User A to go left or right based on the visual information (images/video) that is transmitted to User B's mobile via User A's mobile.
  • User B may also scribble arrows on the transparent overlay on the video, to show directions with reference to User A's location in the video, which would be viewable by User A.
  • related content (for example, advertisements) may be associated with the transmitted images/video, which may be identified using OCR (optical character recognition) techniques, sketch-to-model recognition [3], or speech-to-model recognition.
  • Businesses may use location data for delivery purposes in exemplary embodiment. For instance, pizza stores may deliver an order made via the VS to the user placing the order, based on their GPS coordinates. Users can request to be provided with exact ‘path to product’ in a store (using the communication network and method described with reference to FIG. 50 previously), upon which the VS provides the user with exact coordinates of the product in the store and directions to get there. The product location and directions may be overlaid on a store/mall map.
  • users may request ‘path to products’, and they will be provided with product location information and directions overlaid on a map of the virtual world. Alternatively, they may be directed to their destination by a virtual assistant or they may directly arrive at their virtual destination/product location in the virtual world.
  • Order placements and business transactions can also be conducted via a user's mobile device.
  • a user may view a list of products and services on their mobile device.
  • the user may place an order for a product or service via their mobile device via SMS or other services using the WAP protocol or through a cell phone based browser in exemplary embodiment.
  • the vendor is informed of the order placed through a web portal and keeps the item ready for pick-up or delivers the item to the address specified by the user or to the current location of the user, which may be determined using a cell phone location technique such as GPS or cell-phone triangulation.
  • Users may pre-pay for services or make reservations for services such as those provided in a salon via their mobile device and save waiting time at the salon.
  • Vendors may have access to ‘MyStore’ pages, as described in exemplary embodiment previously with reference to FIG.
  • Electronic receipts may be sent to the user on their cell phone via email, SMS, web mail, or any other messaging protocol compatible with cell phones. Other information can be linked to the cell phone based on electronic receipts such as warranty and other information as described previously with reference to electronic receipts.
  • a user ‘Ann’ may be a tourist visiting Italy for the first time, and would like to find out which restaurants have good ratings and where they are located.
  • the user can query the system to determine which restaurants ‘Jim’ (a friend who visited Italy recently) ate at, their locations, and the menu items he recommends.
  • the system looks up Ann's friend's network on a social networking site, in exemplary embodiment, to access and query Jim's account and acquire the appropriate information.
  • Jim has a virtual map application where he has marked the location of the restaurants he visited when he was in Italy.
  • the restaurants each have a digitized menu available (hyperlinked to the restaurant location on the map) where items can be rated by a given user.
  • Jim's travel information may be available from a travel itinerary that is in document or other format.
  • the restaurant location information may be overlaid onto a virtual map and presented to Ann.
  • the menu items that Jim recommended, along with their ratings may be hyperlinked to the restaurant information on the map in document, graphics, video or other format.
  • Other files such as photos taken by Jim at the restaurants, may be hyperlinked to the respective restaurant location on the map.
  • the VS utilized information on a friend's account that may be located on a user's machine or other machine on the local network, or on the community server 26 or on a remote machine on the internet; a map application that may be present on the local machine, or on the portal server 20 or other remote machine; and restaurant information on the retail server 24 or other machine.
  • the VS can combine information and data and/or services from one or more storage devices and/or from one or more servers in the communication network in FIG. 51A .
  • Users may utilize the VS for sharing content ‘on the fly’.
  • a website or space on a web server may exist where users can create their ‘sharing networks’.
  • sharing networks may be created via a local application software that can be installed on a computing machine.
  • a sharing network comprises member users whom the user would like to share content with.
  • a user may create more than one sharing network based on the type of content he/she would like to share with members of each network.
  • Members may approve/decline a request to be added to a sharing network.
  • a space is provided to each sharing network where the members in the sharing network may upload content via their mobile communication device or a computing machine by logging into their sharing network. Once the user uploads content into the sharing space, all members of that particular sharing space are notified of the update.
  • Sharing network members will be informed immediately via an SMS/text message notification broadcast, as an example. Members may change the notification timing. They may also alternatively or additionally opt to receive notification messages via email and/or phone call.
  • a user may upload videos to a sharing space. Once the video has been uploaded, all the other members of the sharing network are notified of the update. Members of the network may then choose to send comments ‘on the fly’ i.e., members respond to the video update by posting their comments, for which notifications are in turn broadcast to all members of the sharing network.
  • the VS may directly broadcast the uploaded content or a summary/preview/teaser of the uploaded content to all members of the sharing network. Real-time communication is also facilitated between members of a sharing network. Chat messages and live video content such as that from a webcam can be broadcast to members of a sharing network in real-time.
  • the sharing network feature may be available as a standalone feature and not necessarily as part of the VS.
  • the tourism industry can make use of the VS to provide users with guided tours as the user is touring the site.
  • Instructions such as ‘on your right is the old Heritage building’, and ‘in front of you are the Green Gardens’, may be provided as the user browses a site and transmits visual and/or text and/or speech information via their mobile and/or other computing device to the VS.
  • a user may transmit site information in the form of images/videos to the VS, as he browses the site on foot.
  • the VS can provide tour guide information based on the GPS coordinates of a user.
  • Instructions may be provided live as the user is touring a site. The user may transmit their views via a webcam to the tour application, which is part of the VS.
  • the tour application then processes the images/videos in real-time and transmits information on what is being viewed by the user (i.e., ‘guided tour’ information).
  • Users may ask the VS/tour application queries such as ‘What is this’ and point to a landmark in the image or ask ‘What is this white structure with black trimmings to my left?’.
  • the VS tour application may decipher speech information and combine the query with image/video and any visual information provided to answer the user.
  • the tour instructions/information can be integrated with whiteboard features so that landmarks can be highlighted with markings, labels etc., as the user is touring the site.
  • the VS may alternately or additionally transmit site information/tour instructions based on the GPS coordinates and orientation of the user.
  • Orientation information helps to ascertain the direction in which the user is facing so that appropriate landmark referencing may be provided such as ‘to your left is . . . ’, ‘turn right to enter this 14th century monument’ etc.
  • Orientation may be determined by observing two consecutive coordinates and computing the displacement vector.
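  • As an illustration, the orientation (heading) implied by two consecutive GPS fixes can be computed as below; this is a standard initial-bearing calculation, and the coordinates in the usage example are arbitrary.

```python
import math

def heading_between(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees (0 = north, 90 = east) from the previous
    GPS fix (lat1, lon1) to the current fix (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

# Two consecutive fixes a short distance apart approximate the direction of travel.
print(round(heading_between(43.6532, -79.3832, 43.6533, -79.3832)))  # ~0 (north)
print(round(heading_between(43.6532, -79.3832, 43.6532, -79.3820)))  # ~90 (east)
```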
  • Tour information/instructions may be registered with existing map applications and information and/or street view applications and information (for example Google Street View).
  • Computationally intensive tasks such as registration of the user's view with maps or other views in a database, may be transmitted to a remote server and the results may be transmitted back to the user's mobile device.
  • Advertisement information may be overlaid/linked to relevant sites on user views on a mobile in exemplary embodiment.
  • Data from the user's mobile device may be used to reconstruct a 3D model of the scene, and may be available for viewing remotely.
  • the reconstruction, if too intensive, may occur on a remote machine.
  • Instructions may also be tailored to users on foot (instead of in a vehicle, for example), via the handheld. These include instructions specific to a person on foot, such as ‘turn around’, ‘look up’, in exemplary embodiment. In the case of directions to a location as well, users may be provided alternate instructions to arrive at a destination when traveling by foot (thus, directions are not limited to driving directions).
  • the VS may be integrated with a map application where users can directly mark recommended places to visit. These marked places may be hyperlinked with to-do lists that specify the activities or events the user can engage in at those places; or blogs that catalogue user experiences. Photos, videos and other graphics and multimedia content may be linked to a place on the map describing the place, its significance and its attractions. These may also be pictures/videos taken by friends, virtual tours etc. A user may add or request to see specific feeds for a given place. In exemplary embodiment, the local news headlines corresponding to a selected place on the map may be displayed. Areas of interest such as general news, weather, science or entertainment, may be selected by the user to filter and display news and other information of interest.
  • Event feeds that display events or activities on a particular month or week or day of the year at a place may be requested.
  • Generic user videos capturing user experience or travel content at a place may be displayed. These may be videos that are extracted from a video uploading site such as YouTube, based on keywords such as place or other default keywords or keywords specified by the user.
  • Local shopping feeds containing information about the places with the most popular or cheap and other categories of shopping items may be linked or associated with the places on the map. Most popular local music and where to buy information may be associated with a place. Other local information such as car rentals, local transit, restaurants, fitness clubs and other information can be requested by the user. Thus, local information is made easily available on any computing or mobile or display device.
  • map overlays and hyperlinks to appropriate sources/places are used in order to make information presentation as user-friendly as possible.
  • the user can also request the VS to display itineraries that include cities, places, events, attractions, hotels that the user chooses.
  • the user may specify filters such as price range and time period to include in forming the itinerary.
  • the VS would scan the appropriate databases detailing places, events, attractions and hotels and their associated information such as prices, availability, ticket information etc. in order to draw up a suggested itinerary accommodating user requirements as best as possible.
  • the user may make all reservations and purchases of tickets online.
  • the VS would direct the user to the appropriate reservation, purchasing and ticketing agents.
  • the VS may be equipped with a facility to make hotel, event bookings and ticket purchases (for events, attractions etc.) online.
  • the VS may be used to connect to the services in a local community as well. Users can request an appointment at the dentist's office, upon which the system will connect to a scheduling software at the dentist's end (service's end), in exemplary embodiment.
  • the scheduling software would check for available slots on the day and time requested by the user, schedule an appointment if the slot is available and send a confirmation to the VS.
  • the VS then informs the user of the confirmation. If the available date and time is already taken or not available, the scheduler sends the user a list of available slots around the day and time the user has requested.
  • the VS provides this information to the user in a user-friendly format and responds to the scheduler with the option the user has selected.
  • a facility is a ‘Centralized Communication Portal’ (CCP) which provides users with access to all emails (work, home, web based, local application based), voice messages, text messages, VoIP messages, chat messages, phone calls, faxes and any other messages/calls available through electronic messaging services.
  • the CCP may take the form of a web based software or a mobile device software and/or both and/or local application for use on a computing machine or a mobile device or a landline phone.
  • the CCP is equipped with text-to-speech and speech-to-text conversion so that it is possible for users to access emails in the form of voice messages, and voice messages in text format, in exemplary embodiment.
  • the user can set the display name and number or email address of outgoing phone calls, emails, SMS or the system can determine these automatically based on factors such as who the message is for or what the context of the message is, etc.
  • the system only lets the users set the phone number or email address of outgoing messages if the user owns these phone numbers and email addresses.
  • the ownership of a phone number or email address is established by posing a challenge question to the user, the answer to which is sent to the phone number or email address.
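  • A minimal sketch of such a challenge-based ownership check, assuming a one-time code is delivered to the claimed phone number or email address and echoed back by the user; the delivery callback and helper names are hypothetical.

```python
import secrets

# Hypothetical sketch: a one-time code is sent to the claimed address, and
# the address is accepted only once the user echoes the code back.

_pending = {}   # address -> expected code

def send_challenge(address, deliver):
    """Generate a one-time code and hand it to a delivery callback
    (e.g. an SMS or email gateway, not shown here)."""
    code = secrets.token_hex(3)          # 6 hex characters
    _pending[address] = code
    deliver(address, code)
    return code

def confirm_ownership(address, answer):
    """The address is treated as owned by the user only if the echoed code matches."""
    expected = _pending.get(address)
    return expected is not None and secrets.compare_digest(expected, answer)

# Example usage with a stand-in delivery function
sent = send_challenge("user@example.com", lambda addr, code: print("send", code, "to", addr))
print(confirm_ownership("user@example.com", sent))       # True
print(confirm_ownership("user@example.com", "000000"))   # False
```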
  • the CCP can simultaneously make a recording of the conversation, if access is granted by the participants of the call; convert the call recording into text; reformat the message if necessary and provide the user with options to do something with the recording such as email or save call recording, in an exemplary embodiment.
  • the CCP can keep track of a call or message duration and/or size. This may be useful in case of professional services that charge per call or message for their services provided via phone or email or other messaging service(s).
  • the CCP allows users to program features. In an exemplary embodiment, users can program the CCP to respond in a certain way to an incoming call.
  • the user may program the CCP to ignore a call or forward the call to an answering machine, if the incoming call is from a specific number or person, for instance.
  • a user may program the CCP to respond to calls by automatically receiving the call after two rings, for example, and playing a message such as ‘please state your name’, or ‘please wait until Ann picks up’, or playing audio tracks from a certain folder available on the user's local machine or a remote machine or through a web page.
  • the caller user may be able to view videos that the receiver user (i.e., the user receiving the call) has programmed the CCP to play before they pick up the call (the video may play via a visual interface provided by the CCP).
  • users may be able to set forwarding options for incoming calls and emails. For example, the user may program the CCP to forward all incoming emails (chat or text messages) or incoming emails (chat or text messages) from specific users to a mobile handheld/phone; forward incoming calls to a mobile phone to an email address or to another cell phone(s), in exemplary embodiments.
  • Images in emails/text/chat messages may be converted to text using computer vision techniques such as those described with reference to FIG. 52 and FIG. 6 . Text to speech conversion may then be carried out and, thus image information in text/email/chat messages can also be made available via voice messages or voice chat.
  • PBX (Private Branch Exchange)
  • An easy-to-use visual interface may be provided by the CCP.
  • the interface may display the status of the receiver user.
  • the status of a user may be: busy, back in 10 minutes, not in, hold/wait, leave message, attending another call, call another number: #####, etc.
  • a virtual character may greet the caller via the visual interface and inform the caller of the receiver's status, and instruct the caller to leave a message or direct the caller to another phone number or provide alternate directions.
  • a video recording of the receiver user may greet the caller user and provide status information and/or instructions to leave a message, call another number, hold/wait etc.
  • Image to text conversions may also be useful to convey visual elements of a conversation (in addition to the audio/speech elements), in the case that users would like to view webcam/video conversations in text message form or in audio/voice format.
  • Text to image conversion can be carried out using techniques similar to those described in [60]. This conversion may be utilized when a user opts to see email/chat/text/SMS messages via the visual interface. In this case, in addition to displaying text information, image information obtained via text-to-image conversion may be displayed. Alternatively, this converted image information can be displayed as a summary or as a supplement to the actual messages.
  • Users may additionally connect to each other during a call or chat or email communication via webcam(s) whose output is available via the CCP's visual interface. Any or all of the collaborative tools, and methods of interaction discussed with reference to FIG. 20 may be made available to users by the CCP for collaborative interaction between participants during a call or chat or email communication via the CCP's visual interface.
  • Users may be able to organize their messages, call information and history in an environment that allows flexibility.
  • users may be able to create folders and move, add, delete information to and from folders. They may tag messages and calls received/sent. They may organize calls and messages according to tags provided by the system (such as sender, date) or custom tags that they can create. Call and message content and tags are searchable.
  • Spam detection for phone calls, chat, text and voice messages is integrated with the CCP, in addition to spam detection for email.
  • this is accomplished using a classifier such as a Naïve Bayes classifier [7, 61].
  • spam feature lists may be created using input from several users as well as dummy accounts.
  • if a user's friend who receives the same or similar email, phone call, SMS, etc. marks it as spam, then the probability of that message being spam is increased.
  • Dummy accounts may be setup and posted on various sources such as on the internet and messages collected on these accounts are also marked with a high probability of being spam. Users also have the option to unmark these sources/numbers as spam.
  • a signature may be used by the CCP to confirm the authenticity of the source of the message. In an exemplary embodiment, this signature is produced when the user's friend logs into the system. In another exemplary embodiment, this signature may be produced based on the knowledge of the user's friend available to the CCP. Additionally, the CCP may inform the user that a particular number appears to be spam and if the user would like to pick up the phone and/or mark the caller as spam. The CCP may additionally provide the user with options regarding spam calls such as: mute the volume for a spam call (so that rings are not heard), direct to answering machine, respond to spam call with an automated message, or end call, block caller etc. Users may arrange meetings via the CCP.
  • a user may specify meeting information such as the date, time and location options, members of the meeting, topic, agenda.
  • the CCP then arranges the meeting on behalf of the user by contacting the members of the meeting and confirming their attendance and/or acquiring alternate date, time, location and other options pertaining to the meeting that may be more convenient for a particular member. If any of the users is not able to attend, the CCP tries to arrange an alternate meeting using the date/time/location information as specified by the user that is not able to attend and/or seeks an alternate meeting date/time/location from the user wishing to arrange the meeting. The CCP repeats the process until all users confirm that they can attend or until it synchronizes alternate date, time and location parameters specified by all members of the meeting.
  • the spam detector may provide more levels of spam detection; it may provide several levels of classification. If desired by the user, it can automatically sort emails, phone calls, SMS, etc. based on various criteria such as importance, nature (e.g., social, work related, info, confirmation, etc.) etc. This may be done in an exemplary embodiment by learning from labels specified by users and/or attributes extracted from the content of the email, phone call, SMS etc. using Naïve Bayes. In an exemplary embodiment, a technique similar to that used in [62] is used for ranking.
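  • The following is a minimal sketch of Naïve Bayes classification of incoming messages, using scikit-learn as one possible implementation; the training messages, labels and category names are illustrative only and not part of the specification.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data standing in for labelled emails/SMS/transcribed calls;
# a deployed system would learn from user-provided labels and dummy accounts.
messages = [
    "win a free prize call now",              # spam
    "cheap loans lowest rates apply today",   # spam
    "meeting with steve moved to 3pm",        # work
    "are we still on for dinner tonight",     # social
    "your dentist appointment is confirmed",  # info/confirmation
]
labels = ["spam", "spam", "work", "social", "info"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

incoming = "free prize waiting claim now"
print(model.predict([incoming])[0])   # most likely category, e.g. 'spam'
print(dict(zip(model.classes_, model.predict_proba([incoming])[0].round(2))))
```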
  • the CCP may assign users a unique ID similar to a unique phone number or email address, which may consist of alphanumeric characters and symbols. In exemplary embodiment, it may assume the form ‘username#company’. It may be tied to existing top-level domains (TLDs), for example, the ‘.com’ domain. When someone dials or types this ID, a look up table is used to resolve the intended address which could be a phone number or email/chat address or VoIP ID/address or SMS ID. Users may specify whether they would like to use the CCP ID as the primary address to communicate with any user on their contact list. Users may also use the CCP ID as an alias.
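  • A minimal sketch of the look-up-table resolution of a CCP ID, assuming an in-memory dictionary; a deployed system would back this with a directory service, and the example ID and addresses below are hypothetical.

```python
# Hypothetical look-up table mapping a CCP ID of the form 'username#company'
# to the addresses it resolves to for each channel.
ccp_directory = {
    "asmith#acme.com": {
        "phone": "+1-416-555-0100",
        "email": "a.smith@acme.com",
        "voip":  "sip:asmith@acme.com",
        "sms":   "+1-416-555-0100",
    },
}

def resolve(ccp_id, channel):
    """Resolve a dialled/typed CCP ID to the address for the requested channel."""
    entry = ccp_directory.get(ccp_id.lower())
    if entry is None:
        raise KeyError(f"unknown CCP ID: {ccp_id}")
    return entry[channel]

print(resolve("ASmith#acme.com", "email"))   # a.smith@acme.com
```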
  • the CCP may be integrated with the VS and/or incorporates one or more features of the VS, and vice versa.
  • Job Application and Resume Management Service (JARMS)
  • This application may be available on the portal server 20 .
  • Users can create their “Job Profile” via this service.
  • Forms and fields will be available for users to document their background and qualifications including their personal history, education, work and voluntary experience, extra-curriculars, affiliations, publications, awards and accomplishments, and other information of relevance to their careers.
  • This service would provide questionnaires that may be useful to record or test skill subsets of the user. Hiring managers may find this additional information useful to assess a given job applicant's skills.
  • the skill and HR (Human Resources) questions may be posted in text, audio, video and any other multimedia format.
  • the user responses to those questions may also be posted in text, audio, video and any other multimedia format.
  • a “Portfolio” section is available that assists the user in developing, preparing and uploading documents and other files of relevance to their career, for example, resumes, posters, publications, bibliographies, references, transcripts, reports, manuals, websites etc. This service will make it convenient for the user to upload documents in a variety of formats. Also, the user can design different resumes for application to different types of jobs.
  • a tools suite assists the user in document uploading, manipulation and conversion. In exemplary embodiment, a PDF (Portable Document Format) conversion tool, document mark-up, and other tools are provided to the user.
  • Users may upload transcripts directly from their University Registrar/Transcript offices, or websites, through this service.
  • the transcripts may be authenticated by the Universities or certified electronically. In this manner, the employers can be assured of the validity of the transcript uploaded through this service. References and their contact information are provided by the user via this service. Links to the accounts of the referees on JARMS or social networking sites such as Linkedin may also be provided on the user's profile. Videos from YouTube or other sources that document user accomplishments or work such as a conference presentation or an online seminar or a product demonstration and other examples may be uploaded.
  • JARMS is equipped with additional security features so that information is not easily viewed or captured by third party individuals or software etc.
  • Employers to which users are interested in submitting their applications may be provided with access to the user's job profile. Users may also select the account resources they would like to make accessible to the employer.
  • An “Interview Room” facility is available through JARMS which is an online space where real time interviews can be conducted.
  • Visual information along with audio and other content from a webcam, camcorder, phone etc. may be broadcast and displayed in windows that assume a configuration as shown in FIG. 53 , so that all users in an interview session can be seen simultaneously.
  • the interview room may be moderated by personnel from the institution or company that is conducting the interview. This session moderator can allow or disallow individuals from joining the session.
  • the interviewee and interviewers can view each other simultaneously during the interview session in the display windows in FIG. 53 , by using video capture devices at each end and broadcasting the captured content.
  • the interview may involve video and audio content only or it may be aided by speech to text devices that convert audio content to text and display content as in the ‘Transcript’ display box FIG. 53 .
  • text input devices such as a keyboard/mouse may be used to enter text.
  • JARMS sessions may be private or public. These sessions may be saved or loaded or continued or restored. The session content including video content may be played, paused, rewound, or forwarded.
  • the collaborative broadcasting and viewing of content in windows arranged as in the configuration given in FIG. 53 may occur during an online shopping session or during a news coverage session online or a technical support session and during other collaborative communication and broadcast sessions online.
  • questions posed by viewers of the news story will appear in a ‘Live Viewer Feed’ (Feedback) box.
  • Live Image Retrieval looks up/searches for images corresponding to the words relayed in the broadcast in real-time, either on the local machine or the internet or a file or folder specified by one or more of the users involved in the collaborative session, and displays the appropriate images during the session to the viewers in another display window.
  • the system may look up image tags or filename or other fields characterizing or associated with the image in order to perform the image search and retrieval corresponding to words in the collaborative conversation or broadcast. In exemplary embodiment, this can be accomplished as shown in [60].
  • the Live Image Retrieval (LIR) application can be used with other applications and in other scenarios.
  • a user may specify an object in text or voice or other audio format, during online shopping.
  • the LIR would retrieve images corresponding to the specified word from the retail server 24 .
  • the user can then select the retrieved image that best matches the user's notion of that object.
  • the user may specify black purse and the LIR would retrieve images of many different types of black purses from different sources such as a black leather purse, brand name/regular black purses, black purses in stores in proximity of the user's location, fancy/everyday use black purses, etc.
  • system 10 or the VS directs the user to the source of that purse, which may be an online store.
  • Social Bug (SB)
  • users upload content conveying information of interest to the general public such as activities, restaurants, shopping, news etc.
  • These topics may be linked to specific geographical areas, so that users can look up information that pertains to a specific region of interest, such as the local community they reside in.
  • users may look up or search content related to activities and events in their local community.
  • the content may be uploaded by common users or business owners.
  • Such video content will provide more information related to a topic in the form of reviews, user experiences, recommendations etc.
  • the content is as dynamic, and the topics as wide-ranging, as the users' interests.
  • the uploaded content may assume the format of videos in exemplary embodiment. Moderators for each region may filter the content uploaded by users and choose the most relevant videos.
  • the content may be organized or categorized according to fields such as ‘activities’, ‘events’, ‘businesses’, ‘shopping item/store’, ‘news area’ etc. Users can also specify the kind of information they would like to receive more information on via feeds, in an exemplary embodiment. Users may opt to receive feeds on a particular tag/keyword or user or event or business or subject.
  • the user can indicate specific filters like ‘video author’, ‘reviewer’, ‘subject’, ‘region/locality’, ‘date created’, ‘event date’, ‘price range’, and videos, video feeds and related content will be presented grouped according to the filters and/or filter combinations and keywords specified. Users can also specify objects in videos they are looking for, for example, ‘Italian pasta’, or a particular chef, in videos about restaurants. Video tags and other information describing a video (such as title, author, description, location etc.) may be used in order to find and filter videos based on criteria specified by the user. Additionally, video content (for instance, image frames, music and speech content) is mined in order to filter or find videos according to the user specified criteria.
  • This application allows users to indicate whether they liked a given video. Users can specify what they like about a video using keywords. Users may specify what kind of content they would like to see more of. A section/field titled ‘More of . . . ’ would assist users in specifying preferences, suggestions about content they like or would like to see more of.
  • Links and applications would be provided to users via this service depending on the content being viewed.
  • links would be provided allowing users to send a query to the restaurant, call up the restaurant, or book reservations via SMS, phone, email or chat.
  • news feed items and polls related to the content the user is viewing will be provided in the form of summaries or links.
  • Top rated or most viewed response videos posted by viewers to news stories may also be posted on the same page. Videos may be pre-filtered by moderators.
  • organizations working for social causes can post response videos to news stories covering issues such as poverty or human rights. They may conduct campaigns or provide information online through the use of videos.
  • Such response videos will help to target specific audiences interested in the issues the organization is working/campaigning for. Since news videos are more popular, traffic can be directed to other videos relaying similar content but which may not necessarily belong to the same genre (for instance, two videos may both talk about poverty, but one may be a news story and the other an advertisement or documentary produced by an NGO). These videos may be posted as response videos to more popular videos, which may not necessarily be news videos.
  • Objects in videos and/or frames may be hyperlinked and/or tagged.
  • a user may click or hover or select an item of interest (a necklace, for example) and be provided with details on the make, model, materials of the necklace, pricing information etc. on the same or different frame/page.
  • tags/comments/links may appear automatically.
  • Users may also be provided with additional information such as deals available at the store; other users browsing the video and user's friends, if any, that are browsing/have browsed the same video or shopped at the store; where similar products or items may be found; store/business ratings/comments/reviews; how the store compares with other stores with reference to specific criteria such as bargains, quality, service, availability of items, location accessibility. Additional features such as those discussed with reference to FIG. 36 may be available.
  • tagged/hyperlinked objects within videos/images/simulations (which may be live or not) may be used for providing guided tours.
  • videos/image frames may be tagged/hyperlinked. As a video plays and a tagged frame appears, the corresponding tag is displayed to the user.
  • the tags/hyperlinks/comments described above are searchable. On searching for a tag or browsing through tags the corresponding videos are shown.
  • Users can also make use of the translation feature that enables translation of videos in different languages either in real-time or offline.
  • Text, audio and/or video content is translated and presented as audio/speech, text (subtitles for example).
  • Shared viewing of videos between friends is possible. When shared viewing or broadcasting occurs, the same video may be simultaneously viewed by users sharing it, in different languages.
  • the same feature is available in any/all of the chat applications mentioned in this document i.e., text typed in a certain language in a chat application may be translated to multiple languages and made available in real-time or offline to the different users of a chat session in audio/speech, text (subtitles for example).
  • the video presentation/content may be interactive i.e., users watching the videos may interact with each other via the collaborative tools described with reference to FIG. 20 and modes referenced in FIG. 7 .
  • the video may be a live broadcast where the presenter or video author(s) or video participants may interact with the audience watching the broadcast via the collaborative tools described with reference to FIG. 20 and modes referenced in FIG. 7 .
  • Video summarization (VSumm) techniques may involve tracking of most popular keywords. These include most commonly used search terms, and tags of most viewed videos in exemplary embodiment.
  • VSumm may also keep track of important keywords via phrases implicitly referencing them such as ‘important point to be noted is . . . ’ in a video, in order to identify important regions/content in videos (i.e., these regions are namely those audio/video signal sequences in a video in which important keywords are embedded).
  • users may specify summarization parameters, such as the length of the summarized video and/or filters.
  • Users can employ filters to specify scenes (video, audio, text content/clips) to include in the summaries. These filters may include keywords or person or object name contained in the video clip to be included in the summary.
  • a user may specify an actor's name whose scenes are to be contained in the summary of a movie.
  • Other filters may include the kind of content the user would like to pre-filter in the video such as ‘obscene language’ in exemplary embodiment.
  • the sequence can be summarized according to the procedure illustrated in FIG. 55 and described next, in exemplary embodiment.
  • given an audio-visual (A/V) sequence (or an audio, image, video, or text sequence, or any combination thereof), it may be broken down (split) into audio, video, image and text streams, while maintaining association.
  • if a PowerPoint presentation is the input, then the audio-video-image-text content on any given slide is associated.
  • audio and video signals at any given time are associated.
  • Different processing techniques are then applied in different stages as shown in FIG. 55 to carry out the input sequence summarization.
  • pre-processing is carried out using digital signal processing techniques.
  • a transformation is applied to an image sequence to convert it into the corresponding signal in some pre-defined feature space.
  • a Canny Edge detector may be applied to the frames of an image sequence to obtain an edge space version of the image.
  • Multiple filters may be applied at this step. Subsequences can be identified not just over time, but also over frequency and space.
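  • As an illustration of the pre-processing stage, the sketch below converts the frames of an image sequence into an edge-space representation with a Canny detector (via OpenCV); the input file name and threshold values are placeholders, and other filters could be substituted.

```python
import cv2

def to_edge_space(frames, low=100, high=200):
    """Pre-processing step: map each frame of an image sequence into an
    edge-space representation using the Canny detector (one of several
    possible transformations/filters)."""
    edges = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges.append(cv2.Canny(gray, low, high))
    return edges

# Example: read a short clip (hypothetical file name) and convert to edge space.
cap = cv2.VideoCapture("input.mp4")
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()
edge_frames = to_edge_space(frames)
```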
  • the resulting pre-processed data sequences are passed on to the Grouping stage.
  • subsequences are identified and grouped based on their similarity.
  • Distance metrics such as Kullback-Leibler divergence, relative entropy, mutual information, Hellinger distance, L1 or L2 distance are used to provide a measure of similarity between consecutive images, in exemplary embodiment. For instance, when mutual information is computed for consecutive data frames, and a high value is obtained, the data frames are placed in the same group; if a low value is obtained, the frame is placed in a new group.
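  • A minimal sketch of the grouping stage, using mutual information between consecutive frames as the similarity measure; the bin count and grouping threshold are illustrative choices.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two equally sized grayscale frames,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def group_frames(frames, threshold=0.5):
    """Place consecutive frames in the same group while their mutual
    information stays above the threshold; start a new group otherwise."""
    groups = [[0]]
    for i in range(1, len(frames)):
        if mutual_information(frames[i - 1], frames[i]) >= threshold:
            groups[-1].append(i)
        else:
            groups.append([i])
    return groups

# Toy example: two nearly identical frames, then an unrelated one.
rng = np.random.default_rng(0)
f1 = rng.integers(0, 256, (64, 64)).astype(float)
f2 = np.clip(f1 + rng.normal(0, 2, f1.shape), 0, 255)
f3 = rng.integers(0, 256, (64, 64)).astype(float)
print(group_frames([f1, f2, f3]))   # e.g. [[0, 1], [2]]
```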
  • Motion information is also extracted from an image sequence using optical flow for example. Subsequences exhibiting similar motion are grouped together. Frequencies corresponding to different sources, for example different speakers are identified and may be used during synopsis formation.
  • a script may be composed based on users identified and their spoken words.
  • frequencies corresponding to different sources are identified using expectation-maximization (EM) with Mixture of Gaussians (MoG). This method may also be used in the context of interviews (as described with reference to FIG. 53 ), live broadcasts, and other video and data sequence summaries.
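  • As an illustration, EM with a Mixture of Gaussians (here scikit-learn's GaussianMixture) can separate observations drawn from two frequency sources; the synthetic "speaker" frequencies below are invented for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy example: spectral peaks drawn from two "speakers" centred around
# different fundamental frequencies. EM with a Mixture of Gaussians recovers
# one component per source, so each observed frequency can be assigned to a speaker.
rng = np.random.default_rng(1)
speaker_a = rng.normal(120, 10, 200)    # lower-pitched source (Hz)
speaker_b = rng.normal(220, 15, 200)    # higher-pitched source (Hz)
frequencies = np.concatenate([speaker_a, speaker_b]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(frequencies)
print(np.sort(gmm.means_.ravel()).round(1))   # roughly [120. 220.]
labels = gmm.predict(frequencies)             # per-observation source labels
```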
  • Semantic analysis is then carried out on the data sequence to identify and localize important pieces of information within a subsequence. For text information, for instance, large-font or bold/italicized/highlighted/underlined and other specially formatted text, which generally indicates highlighted/important points, is identified. Significant objects and scenes within an image or video sequence, may be identified using object recognition and computer vision techniques. Significant speech or audio components may be identified by analyzing tone, mood, expression and other characteristics in the signal. Using expectation-maximization (EM) with Mixture of Gaussians (MoG) for example, the speech signal can be separated from background music or the speech of a celebrity can be separated from background noise.
  • tags may be analyzed to identify important components.
  • the associated tagged file describing the text may contain tags indicating bold/italicized points i.e., important content in the file. From subsequences determined to be significant, exemplars may be extracted. Exemplars may be a portion of the subsequence.
  • in the case of text, the exemplar could be a word or a sentence; for an image sequence it could be a frame or a portion of the frame or a set of frames or a composite of frames/frame portions in the subsequence; for an audio signal it could be a syllable(s), or a word, or a music note(s) or a sentence. (This system also enables music-to-text conversion: notes corresponding to the music may be output as a text file; in exemplary embodiment, it may contain C-sharp, A-minor.)
  • the subsequences may additionally be compressed (lossless or lossy compression may occur) using Wavelet transform (for example), composited, shortened, decimated, excised or discarded. This summarization procedure is also useful for mobile applications where bandwidth, graphics and memory resources are limiting.
  • an image can be divided in space into different regions and the most significant components can be extracted based on an evaluation of the significance of the information in these regions.
  • significant components can be extracted from a sequence of images, and these significant portions can then be composited together within a single image or a sequence of images, similar to a collage or mosaic.
  • the sequence represents an input data sequence (each square represents a single frame or data unit in the input information sequence).
  • the sequence may consist of different scenes.
  • a given scene could be one that represents the inside of a car; another could be an office scene shot from a particular viewpoint; another could be a lecture slide.
  • subsequences are identified based on similarity measures described before. The different subsequences that are identified by the algorithm are shown with different symbols in this figure. Subsequences can be of variable length as illustrated in FIG. 55 .
  • the Semantic analysis step then extracts exemplars from each group (in this case +, O). In this case, the algorithm picks out a + frame from the subsequence it labeled as ‘+’, and a portion (O, O) of the subsequence it identified as ‘O’.
  • the associated data—audio, video sequence data are reformatted.
  • reformatting is based on significance. For instance, if an image is larger, it may occupy a larger portion of the screen. Audio content may be renormalized if necessary.
  • the audio, video and text channels may be merged to produce a new sequence or they may be provided to the user separately without merging.
  • the AFMS, VS, LIR, JARMS, SB systems may be used within a local area network such as a home or office network. Users who wish to share each other's data may be added to the network permitting sharing of applications within the network and restricting access to the data of the shared network users.
  • the AFMS, VS, LIR, JARMS, SB systems and/or the features and methods described with reference to system 10 and/or a combination of any of the above may be used in conjunction with each other or independently.
  • One or more features and methods of the AFMS, VS, LIR, JARMS, SB systems and/or the features and methods described with reference to system 10 and/or any combination of the above may be used as standalone features, as part of independent systems, or as part of other systems not described in this document.
  • the shopping trip feature may be incorporated as a feature that is part of a browser or that may be installed as a browser plug-in. This would allow activation of the shopping trip upon visiting almost any site accessible by the browser. All of the features described as part of this invention can also be incorporated as such, i.e., as part of a browser or as a browser plug-in, making it possible to use these features on any site.
  • This invention further illustrates the 3D browser concept.
  • This browser would incorporate web pages and websites with the depth component in addition to 2D elements. Users will be able to get a sense of 3D space as opposed to 2D space while browsing web pages and websites via the 3D browser.
  • This invention incorporates additional features available on a mobile device such as a mobile phone or a personal digital assistant (PDA) to assist the user while shopping in a physical store.
  • when users enter a store, the mobile device will detect and identify the store by receiving and processing wireless signals that may be sent by a transmitter in the store, and will greet users with the appropriate welcome message. For example, if the store is called ‘ABC’, the user will be greeted with the message ‘welcome to ABC’ on their wireless device.
  • the user may be uniquely identified by the store based on their mobile phone number for example.
  • the store may have a unique ID that will be identified by the cell phone and also used to keep track of stores/places visited by the user.
  • store specials and offers and other information may be presented to the user on their mobile device (in the form of visual or audio or other forms of relaying digital input on a mobile device).
  • the mobile may instead accept user input (text, speech and other forms) for identifying store and then present relevant store information to the user.
  • Users will be able to search for items in the store using their mobile device and will be able to identify the location (such as the department, aisle, counter location etc.) of the product they wish to buy. They will receive an indication of whether they are approaching the location of or are in the vicinity of the product in the store and/or if they have reached or identified the correct location. The user may see a ‘path to product’ as described elsewhere in this document.
  • the mobile device is equipped with a barcode scanner and can be used for checking inventory, price and product information by scanning the barcode on a product.
  • the mobile device may also process the user's shopping list available on the mobile device and automatically generate availability, inventory, location, discounts, product description, reviews and other relevant information pertaining to the product and display it to the user. In an exemplary embodiment, this may be accomplished as follows with reference to FIG. 50 .
  • the mobile device 901 may transmit appropriate information request/query signals to a wireless SAP (service access point) in the store which in turn, will transmit relevant store and product information which is received and displayed by the mobile device. Depending on the specific area of the store that the user is in, the products in that area may be displayed on their mobile device.
  • Users may also access their model on their mobile device and try-on apparel on the model, via a local application 271 version for mobile devices.
  • a user may also go on a shopping trip (as discussed with reference to FIG. 20 ) using their mobile phone 901 .
  • Other members of the shopping trip may be using a mobile device 902 as well or a computer. Users will also be able to see whether their friends are in the store using their mobile device 901 .
  • the image/video/audio/text analysis module 1550 outlines the steps of interaction or engagement with the outside world, i.e., external to the computer.
  • the module 1550 may be used for generic image/audio/video/text scene analysis.
  • this module works as follows: The module is preloaded with a basic language that is stored in a “memory” database 1554 . This language contains a dictionary which in turn contains words and their meanings, grammar (syntax, lexis, semantics, pragmatics, etc.), pronunciation, relation between words, and an appearance library 1556 .
  • the appearance library 1556 consists of an appearance based representation of all or a subset of the words in the dictionary. Such a correspondence between words or phrases, their pronunciation including phonemes and audio information, and appearances is established in an exemplary embodiment using Probabilistic Latent Semantic Analysis (PLSA) [55].
  • graphs (a set of vertices and edges) such as cladograms are used to represent the relation between words. Words are represented by vertices in the graph. Words that are related are connected by edges. Edges encode similarity and differences between the attached words.
  • a visual representation of the similarity could be made by making the length of the edges linking words proportional to the degree of similarity. Vertices converge and diverge as more and more information becomes available.
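  • A minimal sketch of such a word-relation graph using networkx; the words and similarity scores are illustrative, and an edge length for drawing could be derived from the stored similarity as the text above suggests.

```python
import networkx as nx

# Vertices are words; edges carry a similarity score between the attached words.
g = nx.Graph()
g.add_edge("purse", "handbag", similarity=0.9)
g.add_edge("purse", "wallet", similarity=0.7)
g.add_edge("purse", "shoe", similarity=0.2)

def related_words(word, min_similarity=0.5):
    """Return neighbours of a word whose edge similarity exceeds a threshold."""
    return [n for n in g.neighbors(word)
            if g[word][n]["similarity"] >= min_similarity]

print(related_words("purse"))   # ['handbag', 'wallet']

# For a layout, an edge length could be derived from each stored similarity score:
lengths = {(u, v): d["similarity"] for u, v, d in g.edges(data=True)}
```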
  • This system also enables conversion from speech to image, image to speech, text to image, image to text, text to speech, speech to text, image to text to speech, speech to text to image or any combination thereof.
  • the memory database 1554 and the appearance library 1556 are analogous to “experience”. The appearance library 1556 and the memory database 1554 may be used during the primitive extraction, fusion, hypothesis formation, scene interpretation, innovation, communication, and other steps to assist the process by providing prior knowledge.
  • the stimuli can be images, video, or audio in an exemplary embodiment. It could also include temperature, a representation of taste, atmospheric conditions, etc.
  • From these stimuli basic primitives are extracted. More complex primitives are then extracted from these basic primitives. This may be based on an analysis of intra-primitive and inter-primitive relations. This may trigger the extraction of other basic primitives or complex filters in a “focus shifting” loop where focus of the system shifts from one region or aspect of a stimulus to another aspect or region of the stimulus. Associations between the complex primitives are formed and these primitives are then fused. (The primitive extraction and fusion method described here is similar to that described in reference to FIG.
  • the prior knowledge 112 is available as part of the appearance library 1556 and the memory database 1554 .
  • the method is also applicable for audio stimuli).
  • Hypotheses are then formed and are verified.
  • the output of this step is a set of hypotheses (if multiple hypotheses are found) that are ranked by the degree of certainty or uncertainty.
  • the output of analysis on an image of a scene containing people may be a probability density on the location of people in the scene.
  • the modes or the “humps” in this density may be used to define hypotheses on the location of people in the image.
  • the probability of each mode may be used to define the certainty of the existence of an instance of a person at the specified location.
  • the variance of each mode may be used to define the spatial uncertainty with which a person can be localized.
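  • As an illustration, the modes of such a density can be obtained by fitting a Mixture of Gaussians to candidate detections: each component's mean gives a hypothesized location, its weight a certainty, and its covariance the spatial uncertainty. The detection coordinates below are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Candidate (x, y) locations of people detected in an image, modelled with a
# Mixture of Gaussians whose modes become the hypotheses.
rng = np.random.default_rng(2)
person_1 = rng.normal([100, 200], 5, (60, 2))   # detections around one person
person_2 = rng.normal([400, 220], 8, (30, 2))   # detections around another
detections = np.vstack([person_1, person_2])

density = GaussianMixture(n_components=2, random_state=0).fit(detections)
for mean, weight, cov in zip(density.means_, density.weights_, density.covariances_):
    print("hypothesis at", mean.round(1),
          "certainty", round(float(weight), 2),
          "spatial std", np.sqrt(np.diag(cov)).round(1))
```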
  • the output of the hypothesis formation and verification step is passed on to a scene interpretation step, at which the system makes interpretations of the scene. For example, if the system identifies a cow, some chickens, and a horse in a video, and identifies the sound of crows, it may identify the scene as a farm scene. This may be done using a classifier as described before.
  • the output of the scene analysis step is passed on to an innovation step.
  • the system adds innovative remarks to the analyzed stimuli.
  • the system looks for things it has seen in the recent past, surprising things, things of interest for example gadgets and makes comments such as—“Hey, I saw this guy last week”, “That's the new gadget that came out yesterday”, or “That's a pleasant surprise”. Surprise is detected using the method described with reference to FIG. 52B .
  • the system also filters out things that it does not want to communicate to the outside world. This could include information that is obvious or confidential.
  • the output of the innovation model is communicated to the external world. This can be done via text, audio (using text to speech techniques), images [60] or video.
  • the text/audio output may include expressions such as “I am looking at a farm scene.”
  • the module 1550 may be driven by an intention. The intention can be based on the user's interest. For example, if the user likes hockey, it may pay more attention to things that are related to hockey in the stimuli.
  • the module may perform a search on the “winstick” and extract pricing and availability information and some technical details on how the “winstick” is made to be a better hockey stick.
  • the method 1650 operates as follows: The method constantly predicts the state of the system and observes the state of the system. (Alternatively, the method may predict and observe the state only as necessary).
  • the state of the system includes variables that are of interest.
  • the state may include the state of the user, which may involve the location of the user in a given camera view, the mood of the user extracted from an image or based on the music the user is listening to, the location of the user extracted from a Global Positioning System (GPS), the mood of the user's friends, etc.
  • the state of the environment may include the weather, the day of the week, the location where the user is, the number of people at the user's home, etc.
  • One stage of the predict-update cycle is shown in FIG. 52B .
  • the system uses the output of the (i-1)th stage, i.e. the previous stage's output, and predicts the state of the system at the prediction step 1652 . This can be done, in an exemplary embodiment, using a prediction algorithm such as Gaussian process regression, for example as used in [51], or other statistical approaches such as those used in [63].
  • the output of the prediction stage includes a predicted probability density of the state of the system. This is passed on to an observation step 1654 together with an observation of the system.
  • the output of the observation step 1654 includes an updated probability density called an observed density.
  • An observation of the system in an exemplary embodiment could be an analysis of an image taken through a webcam (e.g. image-based extraction of the pose of the user), a measurement of the temperature of the room using a thermal sensor, or any other measurement appropriate for the system.
  • an observed probability density is computed from the observation and the predicted density by computing the a posteriori density using Bayes rule.
  • In another exemplary embodiment, the observed density is computed based on the observation alone. The difference between the predicted probability density and the observed probability density is then measured at the measurement step 1656 .
  • a test is made to determine if the distance is significant. In an exemplary embodiment, this is done based on a threshold: if the distance is over the threshold, the distance is considered significant, and if it is below the threshold, the distance is considered insignificant.
  • the threshold could be assigned or could be determined automatically.
  • the threshold is chosen to be a statistic of the predicted or observed density. In another exemplary embodiment, the threshold is chosen to be a function of the degree of certainty or uncertainty in the estimate of the predicted or observed densities. In yet another exemplary embodiment, the threshold is learnt from training data. If the distance is significant, the system enters a “surprised” state. Otherwise it remains in an “unsurprised” state. The “surprised” and “unsurprised” states are handled by their respective handlers. The degree of surprise may be dependent on the distance between the predicted and observed probability densities. This allows the system to express the degree of surprise. For example, the system may state that it is “a little surprised”, “very surprised”, or even “shocked”.
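A minimal sketch of one such predict-and-test stage is given below; the use of discrete densities, the total-variation distance, and the particular thresholds are assumptions made for illustration and are not prescribed by the method.

```python
# Minimal sketch of one predict/observe stage of the surprise test (method 1650).
# The densities are discrete arrays over the same state grid; the distance
# metric and thresholds are illustrative assumptions only.
import numpy as np

def distance(predicted, observed):
    """Total-variation distance between two discrete probability densities."""
    return 0.5 * np.abs(predicted - observed).sum()

def surprise_level(predicted, observed, thresholds=(0.2, 0.5, 0.8)):
    d = distance(predicted, observed)
    if d < thresholds[0]:
        return "unsurprised", d
    if d < thresholds[1]:
        return "a little surprised", d
    if d < thresholds[2]:
        return "very surprised", d
    return "shocked", d

predicted = np.array([0.7, 0.2, 0.1])   # prediction from the (i-1)th stage
observed  = np.array([0.1, 0.2, 0.7])   # density updated with the new observation
print(surprise_level(predicted, observed))   # ('very surprised', 0.6)
```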
  • the system may incorporate the nature of the event at the prediction step, thus leading to a predicted density that is closer to the observed density and essentially getting used to the event.
  • a system is used, for example, for detecting anomalies.
  • the system may monitor the locations of the children of a household using signals from their cell phones (for example, text messages from their cell phones indicating the GPS coordinates) and a particle filter. If a surprise is observed (for example, if a child's location is outside the predicted range for the given time), the surprise handler may send a text notification to the child's parents.
  • the system may also be used in surveillance applications to detect anomalies.
  • the system may monitor a user's location while he/she is driving a vehicle on the highway. If the user slows down on the highway, the system may look up weather and traffic conditions and suggest alternative routes to the user's destinations. If the user's vehicle stops when the system did not expect it to, the system's surprise handler may ask the user things such as “Do you need a tow truck?”, “Is everything ok?”, or “Do you want to call home for help?”. If a response is not heard, the system's surprise handler may notify the user's family or friends. Such a system may also be used to predict the state of the user, for example, the mood of the user.
  • the surprise handler may play a comedy video or play a joke to the user to cheer him up. If the user is on a video sharing site or in the TV room for extended hours and the system sees that an assignment is due in a couple of days, the system may suggest to the user to start working on the assignment and may complain to others (such as the user's parents) if the user does not comply.
  • Such a system is also useful for anomaly detection at a plant. Various parameters may be monitored and the state of the system may be predicted. If the distance between the predicted and observed states is high, an anomaly may be reported to the operator. Images and inputs from various sensors monitoring an inpatient may be analyzed by the system and anomalies may be reported when necessary.
  • Another application of method 1650 would be as a form of interaction with the user.
  • the method may be used to monitor the activities of the user, which may be used to build a model of the user's activities. This model can then be used to predict the activities of the user. If a surprise is found, the surprise handler could inform the user accordingly.
  • the surprise handler may state that the user is supposed to be at the doctor's office and is getting late. The surprise handler may make similar comments on the activities of the user's friends.
  • the surprise handler may also take actions such as make a phone call, turn off the room's light if the user falls asleep and wake up the user when it's time to go to school.
  • Method 1650 also enables a system to make comments based on visually observing the user. For example, the system may make comments such as, “Wow! Your eye color is the same as the dress you are wearing”, or “You look pretty today”, based on the user's dressing patterns, method 1650, heuristics that define aesthetics and/or the method used to determine beauty described earlier in this document.
  • the probability densities referred to above can be discrete, continuous, or a sampled version of a continuous density or could even be arbitrary functions or simply scalars that are representative of the belief of the state in exemplary embodiments.
  • the system may express that it is not surprised and explain why. For example, if a tennis player loses, the system may say that it is not surprised because the wind was blowing against her direction during the match, or if a football team loses, the system may express to the users that it is not surprised because the team's players were consistently out of position.
  • the system may parse news and, if it is found that a famous person has died, it may express that it is “shocked” to hear the news.
  • This expression by the system can be made through a number of ways, for example through the use of text to speech conversion.
  • the concept of surprise can also be used for outlier rejection.
  • a system may employ the method described here during training to identify outliers and either not use them or assign lower weights to them so that the outliers do not corrupt the true patterns that are sought from the data.
  • a session is a lasting connection, typically between a client (e.g. 14) and a server (e.g. 20), that is typically initiated when a user is authenticated on the server and ends when the user chooses to exit the session or the session times out.
  • a clique session is one in which multiple users are authenticated and share the same session.
  • a clique session may be initiated by any subset of the set of users who have agreed to collaborate or it may require authentication of all the users. Similarly, a clique session can be terminated if any subset or all the users of the clique session exit. The order of authentication may or may not be important.
  • all users of a clique session may have the same unique clique session ID under which the clique session data is stored.
  • Clique sessions are useful for online collaboration applications.
  • Clique session IDs can also be used for accessing resources that require high security.
  • users of a joint account online may choose to have access to the online resource only if both users are authenticated and log in.
  • a user of a bank account may have a question for a bank teller about his account. In order for the teller to view the user's account, the teller would first have to log in and then the user would have to log in to the same account to allow the teller to view the user's account and answer his question.
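The following Python sketch illustrates one possible clique-session structure in which the session becomes active only when every required user has authenticated, all members share a single clique session ID, and the session ends when a member exits or it times out; the class name, fields, and termination rule are assumptions of this sketch rather than requirements.

```python
# Minimal sketch of a clique session (illustrative names): active only once all
# required users are authenticated, with one shared clique session ID.
import time, uuid

class CliqueSession:
    def __init__(self, required_users, timeout_s=1800):
        self.required = set(required_users)
        self.authenticated = set()
        self.session_id = uuid.uuid4().hex   # shared clique session ID
        self.started = None
        self.timeout_s = timeout_s

    def authenticate(self, user):
        if user in self.required:
            self.authenticated.add(user)
            if self.is_active() and self.started is None:
                self.started = time.time()

    def is_active(self):
        return self.authenticated == self.required

    def timed_out(self):
        return self.started is not None and time.time() - self.started > self.timeout_s

    def exit(self, user):
        """Any member exiting terminates the clique session in this variant."""
        self.authenticated.discard(user)
        self.started = None

# Joint-account example: the teller and the account holder must both log in.
s = CliqueSession({"teller", "account_holder"})
s.authenticate("teller");          print(s.is_active())   # False
s.authenticate("account_holder");  print(s.is_active())   # True
```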
  • Clique sessions may also be used for peer-to-peer connections.
  • Reference is now made to FIG. 54A-F , where novel devices for interaction are shown in exemplary embodiments. These devices allow another way for users to communicate with computing devices 14 .
  • Reference is now made to FIG. 54A , where novel pointing devices are shown in exemplary embodiments. These could take a 1D form 1700 , a 2D form 1710 , or a 3D form 1720 .
  • the 1D form 1700 works as follows: A source or a transmitter bank 1712 is located on one side of the device and a sink or sensor or a receiver bank is located on the opposite side 1714 .
  • the source may emit lasers or other optical signals, or any other directional electromagnetic radiation or even fluids.
  • When an object interrupts a signal, the corresponding sensor on the receiver bank is blocked from receiving that signal. This is used to define the location of the object. If lasers are used, a laser frequency different from that of typical background lighting is used.
  • In another exemplary embodiment, the interrupting unit emits the signal instead of the source or transmitter bank. The unit also allows the use of multiple interrupting units. In this case, multiple sensors would be blocked and this would be used to define the locations of the interrupting units.
  • a transmitter and receiver may be used in an alternating fashion so that each side has both transmitters and receivers. In the 2D form 1710 , a second set of receivers and transmitters are placed orthogonal to the first one.
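A minimal sketch of how the blocked receivers could be mapped to a position is shown below for the 1D and 2D forms; the sensor layout and the centroid rule are assumptions made for illustration.

```python
# Minimal sketch (assumed sensor layout): each receiver reports whether its beam
# is blocked; the blocked indices give the object position along that axis, and
# an orthogonal second bank gives the other axis for the 2D form 1710.
def locate_1d(blocked):
    """blocked: list of flags, one per receiver; returns centre of the blocked run."""
    hits = [i for i, b in enumerate(blocked) if b]
    return sum(hits) / len(hits) if hits else None

def locate_2d(blocked_x, blocked_y):
    return (locate_1d(blocked_x), locate_1d(blocked_y))

# A finger covering receivers 4-5 horizontally and receiver 2 vertically.
print(locate_2d([0, 0, 0, 0, 1, 1, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0]))   # (4.5, 2.0)
```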
  • In another exemplary embodiment, the device shown in FIG. 54A is composed of a set of holes.
  • a transmitter and a receiver are located in each of these holes.
  • Each of these transmitters may employ lasers or other optical signals, or any other directional electromagnetic radiation or even fluids.
  • the transmitter and the receiver are both oriented such that they point out of the device in the direction of the hole.
  • When an interrupting unit, such as a pen or a finger, is present, the signal bounces off the interrupting unit and is sensed by the receiver. This signal is then used to define the location of the interrupting unit.
  • Reference is now made to FIG. 54B , where an illustration 1732 of the use of the 2D form 1710 is shown.
  • the user can simply drag a finger on the unit and use that to point to objects or for free form drawing.
  • the unit may also be placed over a computer screen and used as a mouse.
  • an illustration 1734 of the use of the 3D form 1720 is shown. This can be used to manipulate objects in 3D. For example, this can be used with the technology described with reference to FIG. 36 .
  • This device may be used with a hologram for visual feedback or it may be used with any conventional visualizing unit such as a monitor.
  • the device 1720 can also be used with multiple hands as shown in the illustration 1734 .
  • In FIG. 54C , another illustration of the use of the device 1710 is shown in an exemplary embodiment.
  • the device 1710 may be placed on paper and the user may use a pen to write as usual on the paper. As the user writes, the device 1710 also captures the position of the pen. This is then used to create a digital version of the writing and may be stored on the unit 1710 or transferred to a computing device.
  • the device 1710 is also portable. The corners of the device 1710 can be pushed inwards and the unit folded as shown in FIG. 54C . The compact form of this device takes the form of a pen as shown in FIG. 54C .
  • the device 1710 can also include a palette that includes drawing tools such as polygons, selection tools, an eraser, etc.
  • the user can also slide the device 1710 as he/she writes to create a larger document than the size of the device. This movement of the device 1710 is captured and a map is built accordingly.
  • the motion may be captured using motion sensors or using optical flow [64] if the unit is equipped with optical sensors.
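The sketch below illustrates, under the assumption that a per-frame displacement of the device is available from motion sensors or optical flow, how strokes sensed in device coordinates could be accumulated into a larger page map; the data format is an assumption for illustration.

```python
# Minimal sketch (assumed motion input): per-frame device displacement is
# accumulated so pen strokes sensed in device coordinates are placed in a
# larger page coordinate frame.
def stitch(strokes_with_motion):
    """strokes_with_motion: list of (device_xy, frame_displacement_xy)."""
    page_points, offset = [], (0.0, 0.0)
    for (x, y), (dx, dy) in strokes_with_motion:
        offset = (offset[0] + dx, offset[1] + dy)
        page_points.append((x + offset[0], y + offset[1]))
    return page_points

# The device slid 5 units right between two strokes sensed at the same local spot.
print(stitch([((1.0, 2.0), (0.0, 0.0)), ((1.0, 2.0), (5.0, 0.0))]))
# [(1.0, 2.0), (6.0, 2.0)]
```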
  • the device 1710 may also be moved arbitrarily in 3D and the motion may be captured along with location of the interrupting device to create art or writing in 3D using the 2D form 1710 .
  • the device 1710 can also be used as a regular mouse.
  • the apparatus presented in FIG. 54A-C may also be used as a virtual keyboard. Regions in the grid may be mapped to keyboard keys.
  • a user can place the apparatus on a printout of a keyboard (or a virtual keyboard may be projected using for example lasers) and use it for typing.
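The following sketch shows one way sensed grid cells could be mapped to keys for such a virtual keyboard; the layout and cell dimensions are illustrative assumptions rather than part of the apparatus.

```python
# Minimal sketch (illustrative layout): grid cells reported by the sensing
# apparatus of FIG. 54A-C are mapped to keyboard keys.
KEY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_for_cell(col, row, cell_width=1.0, cell_height=1.0):
    """Translate a sensed (x, y) grid position into the key printed there."""
    r = min(int(row // cell_height), len(KEY_ROWS) - 1)
    c = min(int(col // cell_width), len(KEY_ROWS[r]) - 1)
    return KEY_ROWS[r][c]

print(key_for_cell(0.4, 1.2))   # 'a' (second row, first column)
```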
  • the device 1740 includes a QWERTY keyboard or any other keyboard 1748 that allows users to enter text or alphanumerics, a mouse 1746 , controls for changing the volume or channels 1744 , other controls for switching between and controlling computing devices and entertainment devices such as a DVD player, a TV tuner, a cable TV box, a video player, a gaming device.
  • the device may be used as a regular universal TV remote and/or to control a computer.
  • the mouse may be used by rocking the pad 1746 to a preferred direction or sliding a finger over the pad.
  • the device 1740 communicates with other devices via infrared, Bluetooth, WiFi, USB and/or other means.
  • the device 1740 allows users to control the content being viewed and to manipulate content.
  • the device 1740 allows users to watch videos on a video sharing site. Users can use the keyboard 1748 to enter text in a browser to go to a site of their choice and enter text into a search box to bring up the relevant videos to watch. They can then use the mouse 1746 to click on the video to watch.
  • the keyboard 1748 and the mouse 1746 can be used as a regular keyboard and mouse for use with any other application as well.
  • the keyboard may also be used to switch TV/cable channels by typing the name of the channel.
  • a numeric keypad may be present above the keypad, or number keys may be a part of the alpha (alphabets) keyboard and can be accessed by pressing a function key, in an exemplary embodiment.
  • the device 1740 may also include an LCD screen or a touch screen.
  • the device 1740 may also be used with a stylus.
  • the functionality of the device may be reprogrammable.
  • the device could also be integrated with a phone.
  • the device may be used with one hand or two hands as shown in FIG. 54E in an exemplary embodiment.
  • the device allows easy text entry when watching videos.
  • the device facilitates interactive television.
  • the content of the television may be changed using this remote.
  • the device 1740 may also include motion sensors.
  • the motion of this device may be used to change channels, change the volume, or control characters on a screen.
  • the device may be used to search a video for tags and jump to tags of interest.
  • the device may also feature a numeric keypad that allows easy placement of phone calls.
  • Reference is now made to FIG. 54F , where a novel human-computer interface system is illustrated in an exemplary embodiment.
  • This system makes use of a line of sight that includes two or more objects.
  • the location of the user's finger and an eye are used to determine the location where the user is pointing.
  • the location of the user's finger(s) or hand(s) and that of one or both of the user's eyes can be used to determine where the user is pointing on the screen.
  • the user may point to a screen 1760 using one or more finger(s)/hand(s) 1762 .
  • One or more cameras may monitor the location of 1762 and the user's right eye 1764 and/or left eye 1766 .
  • the cameras may be on top of the screen, on the sides, at the bottom or may even be behind the screen 1760 .
  • a side view and a top view of the setup are also shown in FIG. 54F .
  • the system may make use of motion parallax to precisely determine the location pointed at by the user.
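A minimal sketch of the underlying geometry is given below: assuming the cameras provide 3D positions for an eye and a fingertip, the pointed-at location is the intersection of that line of sight with the screen plane, taken here as z = 0; the coordinate convention is an assumption of the sketch.

```python
# Minimal sketch (assumed camera-derived 3D coordinates): the point the user is
# pointing at is where the line through an eye and a fingertip meets the screen.
import numpy as np

def pointed_location(eye, fingertip):
    """eye, fingertip: 3D points (x, y, z) with z measured away from the screen."""
    eye, fingertip = np.asarray(eye, float), np.asarray(fingertip, float)
    direction = fingertip - eye
    if direction[2] == 0:
        return None                      # line of sight parallel to the screen
    t = -eye[2] / direction[2]           # parameter where the ray reaches z = 0
    return tuple((eye + t * direction)[:2])

# Eye 60 cm from the screen, fingertip 35 cm from it, both slightly off-centre.
print(pointed_location((5.0, 10.0, 60.0), (8.0, 12.0, 35.0)))   # (12.2, 14.8)
```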
  • documents may be uniquely identifiable. This may be done by assigning a unique identification number to each document that is registered in a database. Documents can be indexed based on tags such as the chapter number and the line number. The tags may be inferred, extracted, or present in the underlying document. Users can embed quotes from documents. For example, a webpage may contain an embedded quote to a line from a chapter of a book. In an exemplary embodiment, hovering over or clicking on the embedded quote may display the corresponding quotation.
  • embedding a quotation tag with an identification number may display the quotation in the document in which the quotation is embedded.
  • Quotations can be used for text, audio, video, or other media.
  • a version number may be used for related documents.
  • the system enables the user to find related quotes or verses. “Quotation chains” may also be supported. Quotation chains enable the user to quote a document that in turn quotes another document so that the source of the information can be traced.
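The following sketch illustrates one possible registry in which documents carry unique identifiers, quotations reference a document and a tag, and quotation chains are resolved back to the original source; the registry layout and tag format are assumptions made for illustration.

```python
# Minimal sketch (hypothetical registry): documents carry unique IDs, quotation
# tags point at a source document and tag, and chains are followed to the source.
REGISTRY = {}   # document_id -> {"text": {tag: line}, "quotes": {tag: (doc_id, tag)}}

def register(doc_id, text=None, quotes=None):
    REGISTRY[doc_id] = {"text": text or {}, "quotes": quotes or {}}

def resolve(doc_id, tag):
    """Follow a quotation chain until the document that holds the quoted text."""
    doc = REGISTRY[doc_id]
    if tag in doc["text"]:
        return doc["text"][tag]
    source_doc, source_tag = doc["quotes"][tag]
    return resolve(source_doc, source_tag)

register("book-001", text={"ch2:line14": "To thine own self be true."})
register("blog-042", quotes={"q1": ("book-001", "ch2:line14")})
register("page-777", quotes={"q9": ("blog-042", "q1")})
print(resolve("page-777", "q9"))   # traced back through the chain to book-001
```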
  • the system 10 has been described herein with regards to being accessible only through the Internet, where a server application is resident upon a server 20 .
  • the respective applications that provide the functionalities that have been described above may be installed on localized stand-alone devices in alternative embodiments.
  • the respective apparel items and other products that the user may view and/or select may then be downloaded to the respective device upon connecting to an Internet server.
  • the stand-alone devices in alternative embodiments may communicate with the server, where the server has access to various databases and repositories wherein items and offerings may be stored.
  • These stand-alone devices may be available as terminals or stations at a store, which may be linked to store inventories. Using these terminals, it may be possible to search via keywords, voice, image, barcode and specify filters like price range.

Abstract

The methods and systems described herein relate to online methods of collaboration in community environments. The methods and systems are related to an online apparel modeling system that allows users to have three-dimensional models of their physical profile created. Users may purchase various goods and/or services and collaborate with other users in the online environment.

Description

  • This application claims the benefit of Provisional Application No. 61/064,716, filed Mar. 21, 2008, which is hereby incorporated herein by reference.
  • FIELD
  • The embodiments described herein relate generally to immersive online shopping, entertainment, business, travel and product modeling, in particular to a method and system for modeling of apparel items online in a collaborative environment.
  • BACKGROUND
  • Times have changed. There has been a dramatic rise in nuclear families and this coupled with increasing globalization is affecting the way we live, work, and interact. But humans will continue to remain human; the human instinct to form communities, stay connected, interact and collaborate still exists. There is a need to facilitate and ease these processes in a new era of ever-growing population and information where time is precious. The experience of real face-to-face interaction is often missing. Technology has to emulate components of real experiences and human factors in order for users to be fully satisfied.
  • An ever growing segment of the population is relying on the Internet to purchase various products and services. Offerings such as those related to travel have become ever more popular with respect to online purchasing. As users are generally familiar with their travel requirements, and adequate information is provided online for users to make their travel decisions, many users make all of their travel bookings online.
  • While there has been an increase in the percentage of people purchasing items of apparel online, it has not mirrored the percentages of people that purchase goods and services such as travel packages online. One of the main reasons for the different rates of adoption is because of the requirements associated with purchasing items of apparel. One of the main requirements when purchasing apparel whether purchased online or through a conventional establishment is to ensure that the item fits. The determination of whether an item fits often cannot be made with regards to just the displayed or stated size of the item. Items from different manufacturers though of the same size, often fit differently. Therefore, people often wish to be able to try on the items before purchasing to determine the suitability of fit, and how it appears.
  • Further, when shopping for items of apparel, people generally enjoy the social components of shopping. Many people will often take others to stores when purchasing apparel for the feedback or even company. As a result of the limitations associated with current models for online apparel shopping, the public has not been as ready to adopt such shopping methods. Methods are needed to facilitate collaboration and decision making, and for emulating reality through technology in all facets of the user's life including work, business, study, research, travel, legal affairs, family life, entertainment, and shopping.
  • SUMMARY
  • The methods and systems described herein relate to online methods of collaboration in community environments. The methods and systems are related to an online apparel modeling system that allows users to have three-dimensional models of their physical profile created. Users may purchase various goods and/or services and collaborate with other users in the online environment.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a better understanding of the embodiments described herein and to show more clearly how they may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings which show at least one exemplary embodiment, and in which:
  • FIG. 1 is a block diagram of the components of a shopping, entertainment, and business system;
  • FIG. 2 is a block diagram of the components of a computing device;
  • FIG. 3 is a block diagram of the components of a server application;
  • FIG. 4 is a block diagram of the components of a data store;
  • FIG. 5 is a flowchart diagram of an access method;
  • FIG. 6A-J illustrate the model generation method;
  • FIG. 7A-D illustrate the modes of operation in a collaborative environment;
  • FIG. 8 is an image of a sample main page screen for shopping;
  • FIG. 9 is an image of a sample upload window for data for model generation;
  • FIG. 10 is an image of a sample local application window and a sample browser window;
  • FIG. 11 is an image of a sample facial synthesis window;
  • FIG. 12A is an image of a sample measurement window;
  • FIG. 12B is an image of a sample constructed photorealistic model;
  • FIG. 12C is another image of a sample constructed photorealistic model;
  • FIG. 13A is an image of a set of non photorealistic renderings of the user model shown from different viewpoints;
  • FIG. 13B is an image showing a sample mechanism that allows users to make body modifications directly on the user model using hotspot regions;
  • FIG. 13C is an image showing a sample ruler for taking measurements of the user model;
  • FIG. 14 is an image of a sample environment manager;
  • FIG. 15A is an image of a sample user model environment;
  • FIG. 15B is an image illustrating sample features of collaborative shopping;
  • FIG. 16 is a sample image of a component of a Shopping Trip management panel;
  • FIG. 17 is an image of a sample friends manager window;
  • FIG. 18 is an image of a sample friendship management window;
  • FIG. 19 is an image of a sample chat window;
  • FIG. 20 is an image of a sample collaborative environment;
  • FIG. 21A-G are images illustrating Split-Bill features;
  • FIG. 22 is an image of a sample apparel display window;
  • FIG. 23 is an image of a shared item window;
  • FIG. 24 is an image of a sample fitting room window in a browser window;
  • FIG. 25 is an image of a sample wardrobe item;
  • FIG. 26 is an image of a sample wardrobe consultant window;
  • FIG. 27 is an image describing a sample instance of user interaction with the wardrobe and fitting room;
  • FIG. 28 is an image of a sample 3D realization of a virtual wardrobe;
  • FIG. 29A is an image showing sample visual sequences displayed to a user while the apparel and hair is being modeled and fitted on the user model.
  • FIG. 29B is an image illustrating sample mechanisms available to the user for making body adjustments to their user model;
  • FIG. 29C is an image showing sample product catalogue views available to the user and a sample mechanism for trying on a product in the catalogue on the user model;
  • FIG. 30 is an image showing sample visualization schemes for fit information with respect to the body surface;
  • FIG. 31 is an image of a sample browser main page screen and a sample local application screen, showing sample features;
  • FIG. 32 is an image of a sample user model environment;
  • FIG. 33 is an image of a sample user model environment with sample virtual components;
  • FIG. 34 is an image where a sample user model music video is shown;
  • FIG. 35 is an image showing sample manipulations of a user model's expressions and looks;
  • FIG. 36 is an image of a sample virtual store window showing virtual interaction between a user and a sales service representative;
  • FIG. 37 is an outline of a sample ADF file in XML format;
  • FIG. 38 is a flowchart diagram that provides an overview of ADF file creation and use;
  • FIG. 39A is an image of a sample procedure for a user to gain access to friends on system 10 from the user's account on a social networking site such as Facebook;
  • FIG. 39B is an image of a sample user account page on system 10 before a user has logged into Facebook;
  • FIG. 39C is an image of a sample page for accessing a social networking site (Facebook) through system 10;
  • FIG. 39D is an image of a sample user account page on system 10 after a user has logged into Facebook;
  • FIG. 40 is a sample image of a Shopping Trip management panel;
  • FIG. 41A-F are snapshots of a sample realization of the system discussed with reference to FIG. 20;
  • FIG. 42 illustrates a sample interaction between various parties using system 10;
  • FIG. 43 is an image illustrating sample features of the hangout zone;
  • FIG. 44 is an image of a sample main page in the hangout zone;
  • FIG. 45 is an image of a sample style browser display window;
  • FIG. 46A is an image of another sample main page for shopping;
  • FIG. 46B is an image of a sample store window;
  • FIG. 46C is an image of another sample store window;
  • FIG. 46D is an image of sample shopping trip window;
  • FIG. 46E is an image of a user's sample personalized looks window;
  • FIG. 46F is an image of a sample fitting room window;
  • FIG. 46G is an image of another sample fitting room window;
  • FIG. 46H is an image of a sample shopping diary window;
  • FIG. 46I is an image of a sample directory page;
  • FIG. 47A-B are sample images illustrating a feature that allows users to customize the look and feel of the browser application;
  • FIGS. 48A-F are images illustrating sample layout designs and select features of system 10;
  • FIGS. 49A-O are images illustrating sample features of the AFMS/VOS;
  • FIG. 49L is an image of the sample storage structure of the AFMS/VOS;
  • FIG. 49M is an image of a sample user accounts management structure within the AFMS/VOS;
  • FIG. 49N is an image that shows sample abstraction of a search query that is fed into the search engine that is a part of the AFMS/VOS;
  • FIG. 49O is an image of a sample implementation of the AFMS/VOS as a website;
  • FIG. 49P is an image of a sample application management structure within the AFMS/VOS;
  • FIG. 49Q is an image of an exemplary embodiment of file tagging, sharing, and searching features in the VOS/AFMS;
  • FIG. 49R is a sample image of a user interface for filtering search data;
  • FIG. 49S is a sample image of an interface to the object oriented file system;
  • FIG. 50 illustrates a sample mobile communication system when a user is in a store;
  • FIG. 51A illustrates a sample communication network demonstrating external connections to system 10;
  • FIG. 51B illustrates a sample flowchart showing the operation of the VS;
  • FIG. 52A illustrates an image/video/audio analysis module for generic scene analysis;
  • FIG. 52B illustrates a method for detecting surprise;
  • FIG. 53 illustrates a sample interface for broadcasting and collaborative communication;
  • FIG. 54A-F illustrate novel devices for human-computer interaction;
  • FIG. 55 illustrates an exemplary embodiment of a method for audio/video/text summarization; and
  • FIG. 56 illustrates a sample usage of a collaborative VS application;
  • It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
  • DETAILED DESCRIPTION
  • It will be appreciated that, for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements or steps. In addition, numerous specific details are set forth in order to provide a thorough understanding of the exemplary embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Furthermore, this description is not to be considered as limiting the scope of the embodiments described herein in any way, but rather as merely describing the implementation of the various embodiments described herein.
  • The embodiments of the systems and methods described herein may be implemented in hardware or software, or a combination of both. However, preferably, these embodiments are implemented in computer programs executing on programmable computers, each comprising at least one processor, a data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. For example, and without limitation, the programmable computer may be a mainframe computer, server, personal computer, laptop, personal data assistant, or cellular telephone. A program code is applied to input data to perform the functions described herein and generate output information. The output information is applied to one or more output devices, in known fashion.
  • Each program is preferably implemented in a high level procedural or object-oriented programming and/or scripting language to communicate with a computer system. However, the programs can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Each such computer program is preferably stored on a storage media or a device (e.g. ROM or magnetic diskette), readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
  • Furthermore, the system, processes and methods of the described embodiments are capable of being distributed in a computer program product comprising a computer-readable medium that bears computer-usable instructions for one or more processors. The medium may be provided in various forms, including one or more diskettes, compact disks, tapes, chips, wireline transmissions, satellite transmissions, internet transmissions or downloadings, magnetic and electronic storage media, digital and analog signals, and the like. The computer-usable instructions may also be in various forms, including compiled and non-compiled code.
  • Reference is now made to FIG. 1, wherein a block diagram illustrating components of an online apparel modeling and collaboration system 10 are shown in an exemplary embodiment. The modeling system 10 allows users to have three-dimensional models that are representative of their physical profile created. The three-dimensional models are herein referred to as user models or character models, and are created based on information provided by the user. This information includes, but is not limited to, any combination of: images; movies; measurements; outlines of feet, hands, and other body parts; moulds/imprints including those of feet, hands, ears, and other body parts; scans such as laser scans; skin tone, race, gender, weight, hair type etc.; high resolution scans and images of the eyes; motion capture data (mocap). The users may then edit and manipulate the user models that are created. The user models may then be used to model items of apparel. The virtual modeling of apparel provides the user with an indication regarding the suitability of the apparel for the user. The items of apparel may include, but are not limited to, items of clothing, jewelry, footwear, accessories, hair items, watches, and any other item that a user may adorn. The user is provided with various respective functionalities when using the system 10. The functionalities include, but are not limited to, generating, viewing and editing three-dimensional models of users, viewing various apparel items placed on the three-dimensional models, purchasing apparel items, interacting with other members of online communities, sharing the three-dimensional models and sharing the apparel views with other members of the online communities. These features are representative of ‘interactive shopping’ where users are not just limited to examining different views of a product before purchasing it from an electronic catalogue but are able to examine 3D product simulations by putting them on their 3D virtual embodiments, interacting with products via their virtual model or directly, acquiring different perspectives of the product in 3D, getting acquainted with enhanced depictions of the look and feel of the product as well as sharing all of these experiences and product manifestations with their social network. Media content that captures the user model engaged in virtual activities such as game-play, singing, dancing, and other activities may also be shared. The user models may be exported to gaming environments including third party games. The respective functionalities are described in further detail with reference to FIGS. 2 to 50 Such a system may be generalized to include items other than apparel. In an exemplary embodiment, the user may be presented with options for the color of a car that best matches the user's hair-color.
  • The online modeling system 10 in an exemplary embodiment comprises one or more users 12 who interact with a respective computing device 14. The computing devices 14 have resident upon them or associated with them a client application 16 that may be used on the model generation process as described below. The respective computing devices 14 communicate with a portal server 20. The portal server 20 is implemented on a computing device and is used to control the operation of the system 10 and the user's interaction with other members of the system 10 in an exemplary embodiment. The portal server 20 has resident upon it or has associated with it a server application 22. The portal server 20 interacts with other servers that may be administered by third parties to provide various functionalities to the user. In an exemplary embodiment, the online modeling system 10 interacts with retail servers 24, community servers 26, entertainment servers 23, media agency servers 25, financial institution servers 27 in a manner that is described below. Further, the portal server 20 has resident upon it or associated with it an API (Application Programming Interface) 21 that would allow external applications from external vendors, retailers and other agencies not present in any of the servers associated with system 10, to install their software/web applications. Validation procedures may be enforced by the portal server to grant appropriate permissions to external applications to connect to system 10.
  • The users 12 of the system 10 may be any individual that has access to a computing device 14. The computing device 14 is any computer type device, and may include a personal computer, laptop computer, handheld computer, phone, wearable computer, server type computer and any other such computing devices. The components of the computing device 14 in an exemplary embodiment are described in greater detail with regards to FIG. 2 to 56. The computing application 16 is a software application that is resident upon or associated with the computing device 14. The computing application 16 allows the user to access the system and to communicate with the respective servers. In an exemplary embodiment, the computing application aids in the rendering process that generates the three-dimensional user model as is described below. In an exemplary embodiment, the user accesses the system through a web browser, as the system is available on the Internet. Details on the web browser and computing application interaction are described with reference to FIG. 10.
  • The communication network 18 is any network that provides for connectivity between respective computing devices. The communication network 18 may include, but is not limited to, local area networks (LAN), wide area networks (WAN), an Intranet or the Internet. In an exemplary embodiment, the communication network 18 is the Internet. The network may include portions or elements of telephone lines, Ethernet connections, ISDN lines, optical-data transport links, wireless data links, wireless cellular links and/or any suitable combination of the same and/or similar elements.
  • The portal server 20 is a server-type computing device that has associated with it a server application 22. The server application 22 is a software application that is resident upon the portal server 20 and manages the system 10 as described in detail below. The components of the software application 22 are described in further detail below with regard to FIG. 3. The retail server 24 is a server-type computing device that may be maintained by a retailer that has an online presence. The retail server 24 in an exemplary embodiment has access to information regarding various items of apparel that may be viewed upon the three-dimensional model. The retail server 24 may be managed by an independent third party that is independent of the system 10. The retails server 24 may be managed by the portal server 20 and server application 22. The community server 26 may be a server that implements community networking sites with which the system 10 may interact. Such sites may include sites where users interact with one another on a social and community level. Through interacting with community server 26, the system 10 allows for members of other online communities to be invited to be users of the system 10. The entertainment server 23 in an exemplary embodiment, may be a server that provides gaming facilities and services; functions as a database of movies and music (new and old releases); contains movie related media (video, images, audio, simulations) and music videos; provides up-to-date information on movie showtimes, ticket availability etc. on movies released in theatres as well as on music videos, new audio/video releases; houses entertainment related advertisement content etc. The media server agency 25 may be linked with media stations, networks as well as advertising agencies. It includes, but is not limited to news information, content and updates as relates to events, weather, fashion, in an exemplary embodiment. The financial institution server 27 in an exemplary embodiment may be linked with financial institutions and provides service offerings available at financial institutions and other financial management tools and services relevant to online and electronic commerce transactions. These include facilities for split-bill transactions, which will be described later. Services also include providing financial accounts and keeping track of financial transactions, especially those related with the purchase of products and services associated with system 10.
  • Reference is now made to FIG. 2, where a block diagram illustrating the components of a computing device in an exemplary embodiment is shown. The computing device 14, in an exemplary embodiment, has associated with it a network interface 30, a memory store 32, a display 34, a central processing unit 36, an input means 38, and one or more peripheral devices 40.
  • The network interface 30 enables the respective device to communicate with the communication network 18. The network interface 30 may be a conventional network card, such as an Ethernet card, wireless card, or any other means that allows for communication with the communication network 16. The memory store 32 is used to store executable programs and other information and may include storage means such as conventional disk drives, hard drives, CD ROMS, or any other non-volatile memory means. The display 34 allows the user to interact with the system 10 with a monitor-type/projection-type/multi-touch display/tablet device. The CPU 36 is used to execute instructions and commands that are loaded from the memory store 32. The input devices 38 allow users to enter commands and information into the respective device 14. The input devices 38 may include, but are not limited to, any combinations of keyboards, a pointing device such as a mouse, or other devices such as microphones and multi-touch devices. The Peripheral devices 40 may include, but are not limited to, devices such as printers, scanners, and cameras.
  • Reference is now made to FIG. 3, where a block diagram illustrating the components of a server application is shown in an exemplary embodiment. The modules that are described herein are described for purposes of example as separate modules to illustrate functionalities that are provided by the respective server application 22. The server application 22 in an exemplary embodiment has associated with it a modeling module 50, a community module 52, a management module 54, an environment module 56, a retailer module 58, a shopping module 60, a wardrobe module 62 an advertising module 64, entertainment module 66, and a financial services module 68. The server application 22 interacts with a data store 70 that is described in further detail with regard to FIG. 4. The data store 70 is resident upon the server in an exemplary embodiment and is used to store data related to the system 10 as described below. Each of these modules may have a corresponding module on 14 and/or 16. Computational load (and/or storage data) may be shared across these modules or exclusively handled by one. In an exemplary embodiment, the cloth modeling and rendering can be handled by the local application.
  • The modeling module 50, is used to generate a three-dimensional model of a user. The user model as described below is generated based on a user's physical profile as provided through information of the user including, but not limited to images, movies, outlines of feet, hands, and other body parts; moulds/imprints including those of feet, hands, ears, and other body parts; scans such as laser scans; skin tone, race, gender, weight, hair type, high resolution scans and images of the eyes; motion capture data, submitted measurements, and modifications made to the generated model. In an exemplary embodiment, the three-dimensional image may first be created based on one or more two-dimensional images that are provided by the user (these include full body images and images of the head from one of more perspectives). These images are passed on to a reconstruction engine to generate a preliminary three-dimensional model. In an exemplary embodiment, based on the respective images that are provided, physical characteristics of the user are extracted. The physical characteristics are used to generate a preliminary three-dimensional model of the user. This preliminary model is then optimized. In an exemplary embodiment of the optimization process, the 3D surface of the preliminary model may be modified to better match the user's physical surface. The modification to the mesh is made using Finite Element Modeling (FEM) analysis by setting reasonable material properties (example stiffness) for different regions of the face surface and growing/shrinking regions based on extracted features of the face, Further, user-specified optimization is also performed. This process, in an exemplary embodiment, involves user specifications regarding the generated model, and further techniques described below. Users in an exemplary embodiment are asked for specific information relating to their physical profile that is described in detail below. In exemplary embodiment, the modeling module 50 combines the generated three-dimensional profile from the user's electronic image, with the user-specified features and the user modifications to form a three-dimensional profile as is described in detail below. Users can update/re-build their model at a later point in time as well. This is to allow the user to create a model that reflects changes in their physique such as growth, aging, weight loss/gain etc. with the passage of time. Additionally, the system 10 may be incorporated with prediction algorithms that incorporate appropriate changes brought about by the growth and aging process in a given user model. Prediction algorithms that display changes in the user model after weight loss would also be accommodated by system 10. These could be used by weight loss retailers to advertise their weight loss/health products. The user model can be incorporated with the personality or style aspects of the user or of another person that the user chooses. In an exemplary embodiment, using content from a video that shows the user walking, system 10 can learn the walking style of the user and apply it to the virtual model. In another exemplary embodiment, from an audio or video file of a conversation or a dialogue that a celebrity is engaged in, the accent of the celebrity may be learnt and applied to the speech/dialogues of the model. In an exemplary embodiment, this can be accomplished using bilinear models as discussed in paper 1 and 2.
  • The modeling module 50 also allows the user to view items of apparel that have been displayed upon the user model that has been generated. The user is able to see how items of apparel appear on their respective model, and how such items fit. The module enables photorealistic modeling of apparel permitting life-like simulation (in terms of texture, movement, color, shape, fit etc.) of the apparel. The modeling module 50 is able to determine where certain items of apparel may not fit appropriately, and where alterations may be required. Such a determination is indicated to the user in exemplary embodiment through visual indicators such as, but not limited to, arrows on screen, varying colors, digital effects including transparency/x-ray vision effect where the apparel turns transparent and the user is able to examine fit in the particular region.
  • The modeling module 50 also provides the user with the functionality to try on various items of apparel and for the simulated use of cosmetic products, dental products and various hair and optical accessories. Users are able to employ virtual make-up applicators to apply cosmetic products to user models. Virtual make up applicators act as virtual brushes that simulate real cosmetic brushes can be used to select product(s) from a catalogue (drag product) and apply (drop product) onto a user model's face. This is accomplished, in exemplary embodiment, by warping or overlaying the predefined texture map corresponding to the product on to the face using a technique similar to that used in [1]. The texture map could be parameterized as a function of user characteristics such as skin tone, shape of face. The user is also presented with the option of letting the system apply selected product(s) to the user model's face. In this case, the face texture map is processed (using digital signal processing techniques as exemplary embodiment) to create the effect of a given cosmetic product. Or, an additional texture layer is applied with the desired effect on top of the existing face texture map. A correspondence between a cosmetic product and its effect on the user model allows users to visualize the effect of applying a given cosmetic product (This also applies to hair, dental and optical products). Additionally, the module suggests the most suitable choice of cosmetic products as well as the procedure and tools of application to enhance/flatter a user's look. Suggestions will also be provided along similar lines for dental, hair and optical products. Additionally, real-time assistance is provided to the user for application of cosmetic products. By connecting a webcam to system 10, the user can visualize themselves on their monitor or other display device available while applying make-up (as in a mirror) and at the same time interact with a real-time process that will be pre-programmed to act as a fashion consultant and will guide the user in achieving optimal looks and get feedback on their look as well while they apply make-up. In an exemplary embodiment, the application collects real-time video, image and other data from the webcam. Then, based on an assessment of user parameters such as face configuration, skin tone and type, facial feature (eyes, nose, cheeks, chin etc.) configuration and type, their relative position and other parameters as well as based on the availability of cosmetic products, the application provides text, audio, visual and/or other type of information to guide the user through the optimal make-up application procedure given the specific parameters. The user can also specify other objective and subjective criteria regarding the look they want to achieve such as the occasion for the look, the type of look, the cosmetic product brands, time needed for application etc. The application provides specific feedback related to the existing make-up that the user has already put on. For example, the application may advise the user to use a matte foundation based on their skin type (program computes metrics involving illumination and reflection components based on the face image to assess the oiliness of the skin) or to use upward strokes while applying blush based on their cheek configuration (algorithms that localize contouring regions and/or assess concavities on face regions are used). 
Additionally, the automatic make-up applicator/advisor can present a virtual palette of cosmetic products on the monitor or display device and allow the users to select the colours/products of their choice. The program can perform a virtual ‘make-over’ of the user. In an exemplary embodiment, the application uses the real-time video of the user available through the webcam or other forms of video/images captured by other forms of video/image capture devices; identifies the different facial features and applies the appropriate cosmetic products (cheeks with blush, eyelids with eye shadow) to the video/image of the user and presents it on the display. If it involves streaming video content of the user, as in the case of a webcam, the user can visualize the cosmetic application process in real-time as it is carried out by the application on the user's face on the display. Instead of a pre-programmed application, a real fashion consultant is also able to assist the user in a similar manner in achieving the desired looks with cosmetic products, using the webcam and/or other video or image capture feature. In an exemplary embodiment, the effect of applying cosmetic products can be achieved by moving the face texture map corresponding to the user model, or an image of the user closer towards an average face. This can be accomplished by applying PCA (Principal Components Analysis [2]) and removing the higher order components, or it can also be done by computing the Fourier transform of the user model's texture map or the user's image and removing the higher frequency components. A similar technique can also be used to identify a user's beauty by looking at the weights of the higher order principal components. Effect of applying beauty products can be more realistically simulated by looking at the principal components before and after the application of a cosmetic product on a number of users and then applying the same change to the given user's texture model or the user's image. The user can thus get assistance in applying cosmetic products not simply on a 2D or 3D virtual effigy of their self but also on their actual face. This increases the interactivity and precision of the cosmetic application process for the user.
  • The user is also able to choose from various hairstyles that are available for selection. The modeling module 50 then causes the user model to be displayed with the hairstyle that has been selected by the user. The user may change their hair style of the model, and apply hair products that affect the appearance of hair. The selections of hair styles and other products by the user may be made based on hair styles that are featured from various respective hair salons. The module enables photorealistic modeling of hair permitting life-like simulation (in terms of texture, movement, color, shape etc.) of the model's hair. The modeling module 50 also allows the user to specify various actions and activities that the user model is to undertake. The model may be made to move in a variety of environments with various patterns of movement to provide to the user a better idea of how the model appears in different settings or environments. The user is able to perform various manipulations of the various parts of the user model in an exemplary embodiment. The user is presented in an exemplary embodiment with specified activity choices that the user may wish the model to engage in. Examples of such activities include, but are not limited to singing, speech and dancing. Where users wish to participate in activities in shared environments where user models are allowed to interact, the users in an exemplary embodiment join a network upon which their models are placed into a common 3D environment. Any information related to interaction between the user models such as location of the model in the environment, occlusion, model apparel, motion/activity information related to the model is transmitted to each computing application either directly or via a server.
  • The community module 52 allows the user to interact with other users of the system 10 or with members of other community networks. The community module 52 allows users to interact with other users through real-time communication. Messages can also be exchanged offline. The user can interact with other users through their virtual character model. The model can be dressed up in apparel, make-up and hairstyles as desired by the user and involved in interaction with other users. The user can animate character expressions, movements and actions as it communicates. This is done via a set of commands (appearing in a menu or other display options) to which the model has been pre-programmed to respond. In an exemplary embodiment, a menu of mood emoticons (happy, angry, surprised, sad etc.) and action icons (wave, side-kick, laugh, salsa move, pace etc.) are presented to the user to enact on their virtual model while using it to communicate/interact with other users. Alternatively, the expressions/movements/actions of the character model can be synchronized with the user's intentions which are communicated to the model in the form of text, speech, or other information. As an exemplary embodiment, the user may type or say the word ‘laugh’ and the model will respond by laughing. Another technique used for animating the model's expressions/movements/actions includes tracking the user's expressions/movements/actions through the use of a webcam, video camera, still camera and/or other video or image capture device and applying the same expressions/movements/actions to the character model (synchronized application or after a delay). The character may be programmed to respond to visual cues and/or expressions and/or tone and/or mood of the user by putting on the appropriate expressions, acting accordingly and delivering the effect of the user input. Further, speech or text input to a user model may also be provided through a mobile phone.
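A minimal sketch of mapping typed or spoken commands and menu emoticons to pre-programmed character animations, as described above, might look as follows. The command table, animation names and play_animation hook are hypothetical stand-ins for the rendering engine's actual interface.

```python
# Minimal sketch: dispatch user text/speech commands or emoticon selections to
# pre-programmed model animations.
COMMAND_TO_ANIMATION = {
    "laugh": "anim_laugh",
    "wave": "anim_wave",
    "side-kick": "anim_side_kick",
    "salsa move": "anim_salsa",
    ":)": "anim_smile",      # mood emoticons map to expressions
    ":(": "anim_sad",
}

def play_animation(model, animation_name):
    # Placeholder for the rendering engine's animation call.
    print(f"{model} plays {animation_name}")

def handle_user_input(model, text):
    """Trigger the pre-programmed response matching the user's text/speech."""
    key = text.strip().lower()
    animation = COMMAND_TO_ANIMATION.get(key)
    if animation:
        play_animation(model, animation)
    # Unrecognized input could instead fall back to webcam-based expression
    # tracking, as the description above suggests.

handle_user_input("user_model", "Laugh")   # -> user_model plays anim_laugh
```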
  • The community interaction features of the system 10 allow the user to share views of the user model with other users. By sharing the user model with other users, the user is able to request and receive comments, ratings and general feedback regarding the respective apparel items and style choices made by the user. Receiving feedback and comments from other users enhances the user's experience with the system by simulating a real world shopping experience.
  • When interacting with other users of the system 10, the community module 52 allows users to interact with one another through use of their respective models. The community module 52 further includes chat functionality that allows users to participate in text, video or voice communication with other users of the system 10. (The chat application may allow automatic translation so that users who speak different languages can communicate). Further, users may interact with other users through engaging in collaborative virtual shopping trips as described in detail herein. Users can share their models with other users or build models of other people and shop for items for other people too. This feature would prove useful in the case of gift-giving. Another feature in this module includes a ‘hangout’ zone—a social networking, events planning and information area. This is a feature which assists users in organizing and coordinating social events, conferences, meetings, social gatherings and other activities. Users can initiate new events or activities in the hangout zone and send virtual invites to people in their network and other users as well. The users can then accept or decline invites and confirm if they can make it to the event. Event/activity/occasion information and description including, but not limited to, details such as the theme, location, venue, participants, attendees, news and other articles related to the event, photos, videos and other event related media, user feedback and comments, etc. can be posted and viewed in the hangout zone. Suggestions on what to wear and/or bring to the event and where to buy it are also featured. This zone will also feature upcoming events and shows, music bands/groups and celebrities coming to town. A map feature will be integrated to help users locate the venue of the event and get assistance with directions. The zone will also feature information on the area surrounding the venue of the event such as nearby restaurants, shopping plazas, other events in proximity of the venue etc. In another exemplary embodiment, groups of users can coordinate excursions to the movies. Users can start a new thread (i.e., create a new item page) in the hangout zone regarding visiting the theatre on a particular date. Invitees can then vote for the movie they want to watch; post news, ratings and other media items related to the movies; share views in celebrity or movie apparel on the page; and discuss and chat with other users regarding their plans. Information provided by the entertainment servers 23 and media agency servers 25 will be used to keep content relating to movies, shows, and other entertainment venues updated in the hangout zone. In another exemplary embodiment, special events such as weddings and sports events may be planned in the hangout zone. As an example, sample bridal outfits may be displayed in the zone for members of the group organizing the wedding, in the form of images, or on the virtual model of the bride or on mannequins etc. Apparel suggestions may be provided to the bride and groom, for example, based on the season, time of day the wedding is held, whether the event is indoor/outdoor, the budget allocated for the outfits, etc. Suggestions on bridesmaids' dresses and other outfits may be provided based on what the bride and groom are wearing and other factors such as the ones taken into account while suggesting bride and groom outfits.
A digital calendar may be featured in the hangout zone indicating important timing information regarding the event, such as the number of days left for the event, other important days surrounding the event, etc. To-do and/or itemized lists, which may be sorted according to days preceding the event, may also be featured in the hangout zone. A facility may be provided for incorporating information from other calendars such as the Google™ Calendar™ or Microsoft™ Outlook™, etc., and/or for linking these calendars within the hangout zone. A virtual assistant, which is a 3D simulation of a real or fictional character, may be present in the hangout zone for purposes of providing information, help, and suggestions. The virtual assistant would be present to make interaction more ‘human’ in the hangout zone. In an exemplary embodiment, an event profile page in the hangout zone is shown in FIG. 43 displaying some of the features in the hangout zone. An image/video/simulation 726 describing/related to the event can be uploaded on the page. The event title and brief information 727 regarding the time, location, venue and other information related to the event is displayed. A digital calendar is available to the moderators of the event for marking important dates and noting associated tasks. An example note 729 is shown that lists the important dates for the month and which appears when the user clicks on the name of the month in the calendar, in an exemplary embodiment. The note shows the number of days left for the event and the important dates and tasks associated with the event as marked by the user. A facility is also available for members to join the event profile page to view the progress of preparation of the event, take part in discussions and other activities surrounding the event using the features and facilities available in the hangout zone. The member profile images/videos/simulations and/or name and/or other information would be displayed in a panel 730 on the event page, in an exemplary embodiment. The viewer may scroll the panel using the left/right control 731, shown in an exemplary embodiment, to browse all members of the event. These members would also include the invitees for the event. Invitations for the event can be sent to the invitees via the hangout zone. These members will be asked questions related to the status of their attendance, such as whether they plan to attend the event, whether they are unsure or undecided, and similar questions. The responses to these questions will be tallied and the total of each response displayed as 732 in an exemplary embodiment. These responses can also be used by the system to estimate costs incurred for the event based on attendance. Invitees may send the host or event planner (i.e., the source of invitation) an RSVP confirming attendance via real-time notification, email, SMS, phone, voice message, and similar communication means. The RSVP may contain other information such as accompanying guests, the outfit the invitee plans to wear, whether they need transportation assistance in order to get to the event, tips for event planning and other such information related to the invitee with respect to the event. In the case of events where a registration fee is required, the system processes payments from the user. In cases where documents are required for eligibility for attending the event (for instance, a scientific conference), the system processes the documents.
Upon selecting a member 733 from the event member panel 730, another window/dialog/pop-up 734 may appear with a larger image view of the member and details on member event status, including fields such as attendance, the member's event outfit, guests accompanying the invitee to the event, etc.; and/or member profile information. Icon 735 in this dialog/pop-up window allows the member viewing the invitee's profile and event status 734 to invite him/her on a shopping trip, via a real-time notification, email, SMS, phone call or message and other means of messaging, while the icon 736 indicates if the invitee is online and allows the member viewing the invitee's profile to invite the invitee to chat or to send them a message. Members on the event page can also get details of the venue and the area where the event is being held by clicking on the ‘area info’ section 737 as shown in an exemplary embodiment. Upon doing so, a pop-up/dialog/window 738 opens up showing location and venue information on a map, and places of interest in the vicinity of the event such as eateries, hangouts, and other scheduled public events. Further details on each of these different aspects may be obtained. A discussion forum facility 739 allows members of the event to start topic threads and discuss various event related topics. Members can view all the discussion topics and categories, see the active members of the discussion forum, and view online members with whom to engage in discussions/chats/real-time interaction. Members in the hangout zone can take advantage of the shopping and virtual modeling facility available via system 10 to shop online for apparel and other needs for the event. Invitees may shop for gifts via the electronic gift registry available as part of the event planning services. Shopping assistance panels 741 and 742 provide tips, relevant event shopping and assistance categories, display relevant advertisements and other information, and provide other shopping help. Specific examples include event outfit and gift ideas; listings, reviews and assistance in seeking event venues, organizers, decorators, fashion boutiques, car rentals etc. Reference is now made to FIG. 44 which depicts some of the facilities in a browser window 745 that users can navigate to in the hangout zone, in an exemplary embodiment. The left and right panel menus, 746 and 747 respectively, indicate some of the different online venues that the user can visit on system 10. These include museums, studios, movies, parks, tours and other venues as well as stores, which will take the user to the shopping module 60 on system 10. These facilities may be simulated environments which users can visit or virtual events which users may participate in via their virtual characters or directly. Alternatively, these facilities can be mapped to real physical venues which may be equipped with cameras and other visual equipment to facilitate real-time browsing and access to the facility via system 10. This would enable virtual tourism and participation in real events in real-time from remote locations, either collaboratively with other users or on one's own. In an exemplary embodiment, users may participate in a virtual tour of a real museum or a historical site. Users may watch a live video feed (or hear a live audio feed) of a graduation ceremony or a musical concert or a hockey match or weddings and other community, social, business, entertainment and education events. Translation of video feeds in multiple languages is also available to members.
Users can choose to view the event in the original language or in the translated version. Translations may be provided by other members of the system in real-time (during live transmission) or after the event. Users can choose which member's translation to listen to during the event. Ratings of member translators may be available to guide this decision. Translations can be provided either as subtitles or audio dubbing in an exemplary embodiment. Translations may be computer-generated. This may be done, in an exemplary embodiment, by converting speech to text, text to translated text, and finally translated text to speech in the new language (a sketch of such a pipeline follows this item). Furthermore, users can obtain information and details regarding specific real events and/or places and/or facilities of interest to them such as music festivals, concerts, fairs and exhibitions, movie studios, games, historical sites, etc. in the hangout zone. For details on these facilities, refer to the environment module 56 and its descriptions in this document. The facilities mentioned in FIG. 44 may manifest themselves as the different types of environments described with reference to the environment module 56. A map facility 748 is available which provides digital/animated representations of a virtual world containing virtual facilities in the hangout zone and/or fictional mappings of real facilities in virtual worlds. Real location and area maps and venue information of the real places and events, as well as driving directions to events and venues, are provided to assist users. The hangout zone may be linked to other websites that provide map, location and area information. Users can obtain assistance 749, which may be real-time/live, on what places they can visit, on what's new, special attractions, upcoming events, on activities in the hangout zone etc. Users may send event invitations 750 to friends, as mentioned previously. These can be invitations for real events or events that users can participate in through system 10 such as games, virtual tours, virtual fashion shows and other events and activities. Users may examine 751 other invitees to a particular event and see who else is participating in an event or activity or has confirmed attendance. Users may also obtain the latest weather and traffic updates 752 as well as all traffic and weather information relevant to a given event/venue/activity. Users may attend and participate in live virtual events in real time where they can meet celebrities and get their autographs signed digitally. The events described in the hangout zone are not meant to be limited to the hangout zone or any specific space but are described as such in order to illustrate activities that can be carried out in a social networking space. The features described above with respect to the ‘hangout zone’ may be used as part of an event management module in the server application 22 whose services are available through a website or as part of a local application. In addition, the event management module may be used in conjunction with, or integrated with, a guest validation system. A guest validation system would assist in ascertaining if guests arriving at an event are confirmed attendees or invitees to the event. Upon arriving at the event venue, guests can enter their name and password (which may be issued with the electronic invitation sent by the system, upon payment of event registration fees where required) either at a terminal or using their handheld.
Alternatively, invitees can have a print out of an entry or invitation card with a bar code (issued with the electronic invitation) which can be swiped at the event for entry. This would be most useful in cases where an event requires registration and a fee to register.
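As referenced above, a computer-generated translation of an event feed can be built as a three-stage pipeline. The sketch below outlines that flow under the assumption that recognition, translation and text-to-speech services are supplied separately; the helper functions are placeholders, not specific products.

```python
# Minimal sketch of the speech -> text -> translated text -> speech pipeline
# for computer-generated event translations.
def speech_to_text(audio_chunk, language):
    raise NotImplementedError("plug in a speech-recognition service")

def translate_text(text, source_language, target_language):
    raise NotImplementedError("plug in a machine-translation service")

def text_to_speech(text, language):
    raise NotImplementedError("plug in a text-to-speech engine")

def translate_feed(audio_chunk, source_language, target_language,
                   as_subtitles=False):
    """Translate one chunk of a live event feed.

    Returns subtitle text when as_subtitles is True, otherwise synthesized
    audio for dubbing, matching the two delivery options described above.
    """
    transcript = speech_to_text(audio_chunk, source_language)
    translated = translate_text(transcript, source_language, target_language)
    return translated if as_subtitles else text_to_speech(translated,
                                                          target_language)
```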
  • This invention incorporates additional collaborative features such as collaborative viewing of videos or photos or television and other synchronized forms of multimedia sharing. Users may select and customize their viewing environments, and/or background themes and skins for their viewer. They may select and invite other users to participate in synchronized sessions for sharing videos, and other multimedia. In addition to synchronized sharing, immersive features are provided by system 10 to further facilitate collaboration between users and to make their experience increasingly real and life-like as well as functional and entertaining. During synchronized video sharing, for example, users may mark objects in the videos, and write or scribble over the video content as it plays. This feature can be likened to a TV screen that acts as a transparent whiteboard under which a video is playing and on top of which markings can be made or writing is possible. During synchronized multimedia sharing, users can further interact by expressing emotions through their character models which may be engaged in the same environment or through emoticons and other animated objects. In an exemplary embodiment, if a funny scene is playing in a video, the user can make their user model smile via a control key for their user model which may be pre-programmed to respond with a smile when the given control key is pressed. Pointing to objects, writing, expressing emotions through emoticons, and sending an SMS/text to invite others on a shopping trip are examples of actions that form part of synchronized collaboration in an exemplary embodiment. The whiteboard feature which permits freehand writing and drawing may be available to users during shopping trips or events and/or for any collaborative interaction and/or real time interaction and/or for enabling users to take electronic notes and/or draft shopping lists and for the uses described with reference to FIG. 20 in this document. Based on the content of the whiteboard deciphered through OCR (optical character recognition) techniques or sketch to model recognition [3] or speech to model recognition, related content (for example advertisements) may be placed in the proximity of the drawing screen and/or related content may be communicated via audio/speech, and/or graphics/images/videos.
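One possible way to realize the ‘decipher the whiteboard and place related content nearby’ idea is sketched below. It assumes a snapshot of the whiteboard is available as an image file and uses pytesseract as one example OCR backend; the keyword-to-content table is an illustrative assumption.

```python
# Minimal sketch: OCR a whiteboard snapshot and surface related content.
from PIL import Image
import pytesseract

RELATED_CONTENT = {
    "shoes": "ad: spring footwear sale",
    "dress": "ad: evening-wear boutique",
    "movie": "listing: showtimes near you",
}

def related_content_for_whiteboard(snapshot_path):
    """OCR a whiteboard snapshot and return content related to what was written."""
    text = pytesseract.image_to_string(Image.open(snapshot_path)).lower()
    return [content for keyword, content in RELATED_CONTENT.items()
            if keyword in text]

# e.g. related_content_for_whiteboard("whiteboard_frame.png")
# -> ["ad: spring footwear sale"] if someone scribbled "shoes" on the board
```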
  • A ‘virtual showcase’ will allow users to showcase and share their talent and/or hand-made items (handiwork) and/or hobbies with online users. In an exemplary embodiment, users can upload digital versions of their art work which may include any form of art work such as paintings or handicrafts such as knit and embroidered pieces of work; handmade products such as wood-work, origami, floral arrangements; culinary creations and associated recipes; and any form of outcome or product or result of a hobby or sport. All the above are meant to be exemplary embodiments of items that can be displayed in the virtual showcase. As further exemplary embodiments, users can post/showcase videos demonstrating feats of skateboarding or instructional videos or animations for cooking, and other talents. The virtual showcase may contain virtual art galleries, in an exemplary embodiment, featuring art-work of users. Members may be able to browse the virtual art gallery and the gallery environment may be simulated such that it gives the users the illusion of walking in a real art gallery. The art galleries may be simulated 2D or 3D environments, videos, images or any combination thereof and/or may include components of augmented reality. Users can also adorn their virtual rooms and other 2D or 3D spaces with their virtual artwork.
  • The management module 54 allows the user to control and manage their account and settings associated with their account. The user may reset his/her password and enter and edit other profile and preference information that is associated with the user. The profile and preference information that is provided by the user may be used to tailor apparel items, or combinations of apparel items for the user.
  • The environment module 56 allows the user to choose the virtual environment in which to place their user model. As the system 10 allows users to visualize how various apparel items will appear when they are wearing them, the ability to choose respective virtual environments further aids the user in this visualization process. For example, where a user's 3-D model is used to determine the suitability of evening wear or formal wear, the user is better able to appreciate the modeling where a formal background is provided. The virtual environments may be static image backgrounds or dynamic backgrounds or three-dimensional or multi-dimensional environments, or any suitable combination of the above. In an exemplary embodiment, a dynamic background could include an animated sequence or a video or a virtual reality experience. Images or animations or video or other multimedia that are represented by the respective environments may include, but are not limited to, vacation destinations, tourist destinations, historical sites, natural scenery, period themes (the 60s, 70s, Victorian era etc.), entertainment venues, athletic facilities, runways for modeling, etc. The environments that are provided by the system 10 may be customized and tailored by the users. Specifically, users may be provided the option of removing or adding components associated with the environment and of altering backgrounds in the environments. For example, with respect to adding and/or removing physical components, where a living room environment is being used and is provided to the system 10, various components associated with the living room may be added, deleted or modified. With respect to the addition of components, components such as furniture and fixtures may be added through functionality provided to the user. The user in an exemplary embodiment is provided with drag and drop functionality that allows the user to drag the various components into an environment, and out of an environment. The drag-and-drop functionality may incorporate physics based animation to enhance realism. Optionally, the users may specify where things are placed in an environment. In an exemplary embodiment, the users are able to choose from a listing of components that they wish to add. As described below, the respective components that are chosen and placed in the virtual environments may be associated with respective companies that are attempting to promote their products. For example, where a user has placed a sofa in their virtual environment, the user may view the selections of sofas that may be placed in the virtual environment and each sofa that may be selected will have information pertaining to it that will help the user decide whether to place it in their virtual environment. Through partnering with the system 10, retailers of non-apparel items can increase exposure to their product offerings. Advertisements may be displayed in these environments and thus, these environments would serve as an advertising medium. For example, a billboard in the background may exhibit a product ad or people in the environment may wear apparel displaying logos of the brand being advertised. There could also be theme-based environments to reflect the nature of the advertising campaign. For example, a company selling a television with a new-age look may find the use of an environment with a futuristic theme useful for advertising.
  • Virtual environments may also represent or incorporate part or whole of a music video or movie or game scene or animation or video. User models would have the ability to interact with virtual embodiments of movie characters and celebrities. As an example, the user model may be placed in a fight scene from a movie. Another feature that would be supported by the entertainment environments is to allow users to purchase apparel and other items shown in the particular movie. For example, the user could purchase apparel worn by the characters in the movie or the cars driven in the movie or the mobile phones used in the movie. Additionally, users could replace the characters in the movie or music video with their user models. The model would be able to orchestrate the exact behaviour (dialogue, movements, actions, expressions) of the original character. This would involve facial animation and lip syncing of the user model to replicate expressions and facial movements of the original character. Furthermore, the movements of the original character can be extracted, in an exemplary embodiment, either manually or using machine learning algorithms (for example, pose tracking and pose recovery techniques) and then applied to the user model. For purposes of increasing computational efficiency, the system 10 may provide the user with pre-rendered scenes/environments where the music and environment cannot be manipulated to a great degree by the user but where rendering of the character model can occur so that it can be inserted into the scene, its expressions/actions can be manipulated and it can be viewed from different camera angles/viewpoints within the environment. Users can save or share with other users the various manifestations of their user model after manipulating/modifying it and the animation/video sequence containing the model in various file formats. The modified user model or the animation/video sequence can then be exported to other locations including content sharing sites or displayed on the profile page. In an exemplary embodiment, the user may indicate their display status through the use of their character model with the appropriate backdrop and other digital components. For instance, users may indicate that they are reading a given book by displaying their model on their profile page reading a book against a backdrop that reflects the theme of the book, or their model may be engaged with other models in an act from the book or a play or a movie that they are watching. Another feature that the virtual environments along with the user models afford to the user is the ability to take studio portraits of their respective user models with the different environments serving as backdrops. Users can also invite friends and family for group portraits with their models. Features will also be present to add effects and/or enhance the portrait photos or apply various artistic styles (for example, antique look, watercolour effect etc.) and perform various other non-photorealistic renderings.
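A minimal sketch of retargeting movements extracted from an original character onto the user model, as described above, is shown below. The per-frame joint-rotation format and the set_joint_rotation hook are assumptions standing in for whatever skeletal representation the rendering engine uses.

```python
# Minimal sketch: apply per-frame joint rotations, extracted from an original
# character (e.g. via pose tracking), onto the user model's skeleton.
def set_joint_rotation(model, joint, rotation):
    # Placeholder for the engine's skeletal-animation call.
    model.setdefault(joint, []).append(rotation)

def retarget_clip(extracted_clip, user_model, joint_map=None):
    """Apply extracted per-frame joint rotations to the user model.

    extracted_clip -- list of frames; each frame maps joint name -> rotation
    joint_map      -- optional renaming when the two skeletons differ
    """
    joint_map = joint_map or {}
    for frame in extracted_clip:
        for joint, rotation in frame.items():
            set_joint_rotation(user_model, joint_map.get(joint, joint), rotation)

# Example: two frames of a tracked arm wave applied to a user model.
clip = [{"right_arm": (0, 0, 45)}, {"right_arm": (0, 0, 90)}]
user_model = {}
retarget_clip(clip, user_model)
```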
  • A feature encompassing a virtual space/environment where virtual fashion shows are held is available through system 10. Professional and amateur designers can display their collections on virtual models in virtual fashion shows. The virtual models and virtual environments can be custom made to suit the designer's needs and/or virtual models of real users and celebrities may be employed. Auctions and bidding can take place in these virtual spaces for apparel modeled in the fashion shows. Groups of users can also participate in virtual fashion shows in a shared environment using their 3D models to showcase apparel.
  • The whole or part of a virtual environment may incorporate physics based animation effects to enhance realism of the environment, its contents and interaction with the user. In an exemplary embodiment, an environment representing a basketball court could be integrated with physics based animation effects. In this case, the motion dynamics of the basketball players, the ball, the basket etc. would be based on the physics of real motion and thus, the game sequence would appear realistic. Users are also able to select their own environment, and may upload their own environment to be used in the system 10. Furthermore, the system 10 also includes simulated shopping environments. An animated navigation menu is provided so that the user may locate stores/stalls of interest. The shopping environment, in an exemplary embodiment, may be represented by components of a virtual mall which may contain simulations of components of real stores, or it may be a simulated representation of a real mall which may contain other animated virtual components. As the user browses the shopping environment, the environment may be presented as a virtual reality animation/simulation which may contain video/simulations/images of actual/real stores and components; or it may be presented as a real-time or streaming video or a video/series of images of a real mall with animated stores and components; or as a virtual reality simulation of a real store. System 10 recommends stores to visit based on specific user information such as profession, gender, size, likes/dislikes etc. For instance, for a short female, the system can recommend browsing petite fashion stores. Based on a user's apparel size, the system can point out to the user if a product is available in the user's size as the user is browsing products or selecting products to view. The system may also point out the appropriate size of the user in a different sizing scheme, for example, in the sizing scheme of a different country (US, EUR, UK etc.). In suggesting appropriate sizes to users for products that may vary according to brand, country, and other criteria, the system also takes into account user fit preferences. For instance, a user may want clothes to be a few inches looser than his/her actual fit size. In an exemplary embodiment, the system would add the leeway margin, as specified by the user, to the user's exact fit apparel size in order to find the desired fit for the user (a sketch of this size-matching logic follows this item). As described below, a user who wishes to view and/or model apparel items may select from the various items of apparel through a shopping environment such as a store or a mall. In these respective environments, the models are allowed to browse the virtual store environment by selecting and inspecting items that are taken from the respective racks and shelves associated with the virtual environment. In the shopping environment, physics based animation can be incorporated to make the shopping environment, its contents and user interaction with the environment realistic. In an exemplary embodiment, the clothes in the shelves and racks can be made to appear realistic by simulating real texture and movement of cloth. Additionally, a live feed can be provided to users from real stores regarding the quantity of a particular item. This information can be conveyed, for example, either numerically, or an animation of a shelf/rack containing the actual number of items in inventory can be displayed, or a video of the real store with the items on the shelf can be displayed to the user.
The live feed feature can be used by the source supplying the apparel to convey other information such as store/brand promotions, special offers, sales, featured items etc. (not restricted to real-time inventory information). Furthermore, the shopping environment can include other stores and fixtures and other items found in a real shopping mall to simulate/replicate real shopping environments as closely as possible. In an exemplary embodiment, food stores and stalls may be augmented in the virtual shopping environment. These ‘virtual food stores’ could represent simulations or images/videos of fictional or non-fictional stores. These virtual stores would serve as an advertising medium for food brands and products as well as superstores, restaurants, corner stores or any other place providing a food service, manufacturing or serving as the retail outlet for a food brand. There could be virtual ads, products and promotions being housed in these virtual stores. Additionally, these could be linked to actual product and store sites. Virtual characters acting as store personnel offer virtual samples of ‘featured food products’, just as in a real mall setting. Other items found in real shopping environments that are incorporated include fountains, in an exemplary embodiment. These virtual fountains can be incorporated with physics based animation techniques to simulate water movement as in a real fountain. Store personnel such as sales representatives and customer service representatives are represented by virtual characters that provide online assistance to the user while shopping, speak and orchestrate movements in a manner similar to real store personnel and interact with the user model. An ‘augmented reality display table’ is featured by system 10 where vendors can display their products to the customer and interact with the customer. For example, a jewelry store salesperson may pick out a ring from the glass display to show the user. A salesperson in a mobile phone store may pick out a given phone and demonstrate specific features. At the same time, specifications related to the object may be displayed and compared with other products. Users also have the ability to interact with the object in 2D, 3D or higher dimensions. The salesperson and customer may interact simultaneously with the object. Physics based modeling may also be supported. This display table may be mapped to a real store and the objects virtually overlaid. In some real malls, one can also find indoor game facilities such as ice-skating rinks, golf parks, basketball courts etc. Environments that simulate these facilities virtually will be available. Users can engage their models in these activities and participate in a game with other users. As in a real mall, the user can see other ‘people’ in a virtual mall. These may represent real users or fictional virtual characters. The user will have the option to set their user model as invisible or visible so that their model can be viewed by other users browsing the mall.
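The size-matching logic referenced in the preceding passage can be sketched as follows: the user's stated leeway is added to their exact-fit measurement, the result is matched against a brand's size chart, and the matched size can be reported in another regional scheme. All chart values are illustrative placeholders rather than real brand data.

```python
# Minimal sketch: suggest a size that honours the user's fit preference and
# report it in the requested regional sizing scheme.
BRAND_SIZE_CHART = {          # size label -> garment chest measurement (inches)
    "S": 36, "M": 38, "L": 40, "XL": 42,
}
REGIONAL_LABELS = {           # same sizes expressed in other schemes (placeholders)
    "S": {"US": "S", "EUR": "46", "UK": "36"},
    "M": {"US": "M", "EUR": "48", "UK": "38"},
    "L": {"US": "L", "EUR": "50", "UK": "40"},
    "XL": {"US": "XL", "EUR": "52", "UK": "42"},
}

def suggest_size(exact_fit_chest, leeway=0.0, scheme="US"):
    """Return the smallest size that accommodates the user's fit preference."""
    target = exact_fit_chest + leeway          # e.g. "a few inches looser"
    for label, chest in sorted(BRAND_SIZE_CHART.items(), key=lambda kv: kv[1]):
        if chest >= target:
            return REGIONAL_LABELS[label][scheme]
    return None                                 # not available in this brand

# A user with a 37-inch exact fit who wants 2 inches of leeway:
# suggest_size(37, leeway=2, scheme="EUR") -> "50"
```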
  • In an exemplary embodiment, this collaborative environment works as follows: The local application 271 provides a visualization engine. Webcam content from the customers and the sales personnel may be integrated into or used in conjunction with the engine. If 3D product models are available, they can be used interactively via the common mode or other modes of operation, as discussed with reference to FIG. 7, for example. If product models are unavailable, then webcam views may be used either directly or converted to models based on webcam images (using techniques similar to those discussed in [3] for going from sketch to model, in an exemplary embodiment). These models/images can then be used in the visualization engine. Interaction with the engine can take place using conventional input/output (I/O) devices such as a keyboard and a mouse, or using I/O devices discussed with reference to FIG. 54. Video capturing devices may be used to capture the view of a counter or a product display in the store, for example. This content may be transmitted both to the salesperson and the customer. Either party can then augment this content with their own input. The customer may also bring in objects into this augmented world, for example, for colour or style matching. Augmentation may be accomplished using techniques similar to those in [4]. The collaborative environment described here with reference to FIG. 36 may be thought of as a 3D version of the collaborative environment described with reference to FIG. 20. All of the tools available in the collaborative environment discussed with reference to FIG. 20 may be available in the collaborative environment of FIG. 36.
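In the spirit of the augmentation step described above, the sketch below composites a product cutout (with an alpha channel) onto a captured webcam frame. It assumes the frame and cutout are already available as numpy arrays; it is not the augmentation technique of [4], only a simple overlay for illustration.

```python
# Minimal sketch: alpha-blend a virtual product cutout onto a webcam frame.
import numpy as np

def overlay_product(frame, product_rgba, top, left):
    """Alpha-blend an RGBA product cutout onto an RGB webcam frame in place."""
    h, w = product_rgba.shape[:2]
    region = frame[top:top + h, left:left + w].astype(float)
    rgb = product_rgba[..., :3].astype(float)
    alpha = product_rgba[..., 3:4].astype(float) / 255.0
    frame[top:top + h, left:left + w] = (alpha * rgb +
                                         (1 - alpha) * region).astype(frame.dtype)
    return frame

# Example with synthetic data: a 480x640 frame and a 100x80 product cutout.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cutout = np.full((100, 80, 4), 255, dtype=np.uint8)   # opaque white rectangle
overlay_product(frame, cutout, top=50, left=200)
```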
  • The various respective virtual environments that are used may all have associated with them various multimedia files that may be linked to the respective environments. For example, music or video files may be linked or embedded into the respective environments. The system 10 may also allow for downloading of music (and other audio files) from a repository of music, in an exemplary embodiment, that may then be played while the user is navigating and/or interacting with their respective environment. The user will have the option of selecting music from the repository and downloading tracks or directly playing the music from a media player within the browser. Additionally, audio files can also run seamlessly in the environment. These can be set by the sponsor of an environment. For example, in a virtual music store environment, the store sponsor can play tracks of new releases or specials being advertised. In another exemplary embodiment, in a movie scene environment, the soundtrack of the movie could play within the environment. These tracks (playlist content, order of tracks, length etc.) can be customized according to the sponsor or user. The sponsor of the environment and the music or media files sponsor do not necessarily have to be the same. Additionally, the user may be given control over the type of media files that are played within or linked with an environment. Instead of a repository of audio files, the medium may also be an online radio. The radio may be mapped to real radio stations. Users have the option to share media files (name, description and other information associated with the file and/or actual content) with their social network or send links of the source of the media files. Users can also order and purchase media files that they are listening to online. In an exemplary embodiment, a ‘buy now’ link would be associated with the media file that would take the user to the transaction processing page to process the purchase of the media file online.
  • Users may create their own 3D or 2D virtual spaces by adding virtual components from catalogues. In an exemplary embodiment, a user may rent or buy virtual rooms (2D or 3D) from a catalogue and add virtual furniture, virtual artwork, virtual home electronics such as a TV, refrigerator, oven, washing machine, home entertainment system etc. and other components. The user may add rooms to create a home with outdoor extensions such as a patio and backyard to which components may also be added. Users may visit other users' virtual spaces and environments. Users may also buy virtual food products, which may be stored in virtual refrigerators or stores. These virtual food products may be designed such that they decrease over time and eventually run out or become spoilt if unused ‘virtually’. This would help kids or teenagers, for example, to understand the value of food, its lifecycle, handling and storage and other facts. Furthermore, the proceeds from the purchase of virtual food could be used to sponsor aid in developing countries. In an exemplary embodiment, purchasing a bag of virtual rice may be equivalent to donating a bag of virtual rice as food aid to developing countries. Users may furnish their rooms with objects that change or grow with time such as plants. The user may buy a virtual seed and over time, the seed would grow into a full-size virtual plant. The virtual plant may be designed such that it grows automatically or upon proper caretaking by the user such as providing virtual water, nutrients, sunlight and other necessities to the plant. This would help users to become more empathic and acquire useful skills such as gardening or caretaking. Florists and greenhouses may also find this feature useful. They may design virtual plants and flowers such that their requirements are mapped to the real plants or flowers they represent. For instance, roses may require specific nutrients, soil types, sunlight duration etc. for their proper growth. In an exemplary embodiment, virtual rose plants may be designed to grow only if provided with the necessities (virtual) that real roses require (a sketch of such caretaking-gated growth follows this item). Thus, these virtual plants would prove useful as instructional or training tools for people who would like to learn how to cultivate specific plants properly before purchasing real plants. Depending on how they raise their virtual plants, users may be given scores. Users would also be able to purchase the real plants from florists, greenhouses and other stores subscribing to system 10, whose information would be available to users. Furthermore, users may buy virtual pets. These virtual pets may be designed to grow on their own or upon proper caretaking by their owners just as in the case of virtual plants. This feature could help users to become better pet caretakers before they buy real pets. The concept of virtual pets can be taken further. Proceeds that are collected from the purchase of virtual pets may be used to support animal shelters or humane societies or animal relief or wildlife conservation efforts. A virtual pet may be mapped to an animal that has been saved as a result of the proceeds collected from the purchase of virtual pets. Users may directly sponsor an animal whose virtual representation they would own upon sponsoring the animal.
Users would also receive updates about the welfare of the animal they sponsored (if they are not able to directly own the real animal such as in the case of a wild animal) and about related relief, rescue or conservation efforts associated with similar animals.
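The caretaking-gated growth mentioned above could be modeled along the following lines; the stages and thresholds are illustrative, and a florist could instead map them to the needs of the real plant being represented.

```python
# Minimal sketch: a virtual plant that advances a growth stage only when its
# caretaking requirements (water, sunlight) have been met.
class VirtualPlant:
    STAGES = ["seed", "sprout", "young plant", "full-size plant"]

    def __init__(self, water_needed=3, sun_hours_needed=6):
        self.stage = 0
        self.water_needed = water_needed          # waterings per growth step
        self.sun_hours_needed = sun_hours_needed  # hours of light per step
        self.water = 0
        self.sun_hours = 0

    def give_water(self):
        self.water += 1
        self._maybe_grow()

    def give_sunlight(self, hours):
        self.sun_hours += hours
        self._maybe_grow()

    def _maybe_grow(self):
        # Advance a stage only once both requirements for the step are met.
        if (self.water >= self.water_needed and
                self.sun_hours >= self.sun_hours_needed and
                self.stage < len(self.STAGES) - 1):
            self.stage += 1
            self.water = 0
            self.sun_hours = 0

    @property
    def description(self):
        return self.STAGES[self.stage]
```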
  • The retailer module 58 allows the system 10 to interact with the various respective retailers with which the system 10 is associated. Specifically, the retailer module 58 tracks the respective items that may be purchased through use of the system 10. The retailer module 58 interacts with the retail servers 26 of retailers with respect to product offerings that may be available through the system 10. Information from the retailer module 58 pertaining to items that can be purchased is acquired by system 10. This information may be encapsulated in a CAD (Computer Aided Design) file for example.
  • The shopping module 60 allows for users to purchase items that may be viewed and/or modeled. Each retailer in the retailer module 58 may have a customizable store page or virtual store available in the shopping module 60. Users can administer their page or virtual/online store as discussed with reference to FIG. 42. Each store can be customized according to the retailer's needs. Retailers may add web and software components to their store available through system 10. These components include those that would allow the retailer to add featured items, special offers, top picks, holiday deals and other categories of items to their virtual store. The retailer can make available their products for sale through these stores/pages. The users of the system 10, as mentioned above, have access to various online product catalogues from virtual stores and/or virtual malls. These catalogues may be mapped from virtual stores and/or virtual malls or real stores and/or malls. The user will be asked specific information relating to their shopping interests and style preferences. The shopping module 60, based on the user-specified preferences and information, may also make recommendations regarding items of apparel based on the user's interests, preferences and style as determined from previous purchases. This can be accomplished using a variety of machine learning algorithms such as neural networks or support vector machines. The current implementation includes the use of collaborative filtering [5]. Alternatively, Gaussian process methodologies [6] may also be used. In an exemplary embodiment, using Gaussian process classification, recommendations are made to the user based on information collected on the variables in the user's profile (example: preferences, style, interests) as well as based on the user's purchasing and browsing history. Moreover, the uncertainty that is computed in closed form using Gaussian process classification is used to express the degree of confidence in the recommendation that is made. This can be expressed using statements like ‘you may like this’ or ‘you will definitely love this’ etc. (a sketch of such confidence-weighted recommendation follows this item). The interests of the user may be specified by the user, and alternatively may be profiled by the system 10 based on the user's demographics. The shopping module 60 also provides the user with various search functionalities. The user may perform a search to retrieve apparel items based on criteria that may include, but are not limited to, a description of the apparel including size, price, brand, season, style, occasion, discounts, and retailer. Users can search and shop for apparel based on the look they want to achieve. For example, this could include ‘sporty’, ‘professional’, ‘celebrity’ and other types of looks. Users may also search and shop for apparel belonging to special categories including, but not limited to, maternity wear, uniforms, laboratory apparel etc. Apparel may be presented to the user on virtual mannequins by the shopping module 60. Other forms of display include a ‘revolving virtual display’ or a ‘conveyor belt display’ etc. In an exemplary embodiment, a revolving display may assume the form of a glass-like cube or some other shape with a mannequin on each face of the cube/shape showcasing different apparel and/or jewelry. In another exemplary embodiment, a conveyor belt display may feature virtual mannequins in a window, donning different apparel and/or jewelry.
The mannequins may move in the window in a conveyor belt fashion, with a sequence of mannequin displays appearing in the window periodically. The speed of the conveyor belt or the revolving display may be modified. Other displays may be used and other manifestations of the conveyor and revolving display may be used. For instance, the mannequins may be replaced by user models or by simply product images and/or other visual/virtual manifestations of the product. Reference is now made to FIG. 45 where another display scheme—the ‘Style browser’ 755—is shown in an exemplary embodiment. The style browser display operates directly on the user model 650 in that the apparel items in an electronic catalogue are displayed on the user model as the user browses the product catalogue. For example, in the display window 755, the user can browse tops in a catalogue in the window section 756 by using the left 757 and right 758 arrow icons. As the user browses the catalogue, the tops are modeled and displayed directly on the user model 650. Thus, the user is able to examine fit and look information while browsing the catalogue itself. In a similar fashion, the user can browse skirts and pants in the display section 759; shoes in section 760; accessories like earrings, cosmetics and hairstyles in section 760. Right-clicking on a given display section would make available to the user the categories of apparel that the user can browse in that section, in an exemplary embodiment. Displayed apparel (whether in shopping environments, stores or electronic catalogues) may be in 2D or 3D format. Users can also view detailed information regarding apparel. For example, this information includes material properties of the apparel such as composition, texture, etc.; cloth care instructions; source information (country, manufacturer/retailer); images describing apparel such as micro-level images that reveal texture; etc. Other information assisting the user in making purchasing decisions may also be displayed, for example, user and customer reviews, ratings, manufacturer's/retailer's/designer's/stylist's notes etc. The display information for each apparel item will also include the return policy for that item. This policy may include terms that are different in the case that an item is returned via postal mail versus if the item is taken to a physical store location for return by the customer. In an exemplary embodiment, for the latter case, the return policy may be mapped to the terms and conditions of the physical store itself. This would allow a user to purchase something online and still be able to return it at a physical store location. Alternatively, the retailer may specify a different return policy for the apparel when it is bought online as opposed to when it is bought at the physical store. The return policy may also incorporate separate terms and conditions that take into account the requirements of system 10 for returning any given item. As users are shopping, matching/coordinating items that go with the items the users are looking at, or items that are in the users' fitting room, shopping cart, or wardrobe, and that fit the users' bodies and their taste, may be presented to the users. Suggestions on coordinating/matching items may also be made across users. For example, if a bride and a bridegroom go on a shopping trip, a wedding dress for the bride and a corresponding/matching tuxedo for the bridegroom that fit them respectively may be presented.
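The Gaussian-process recommendation idea referenced in this item can be illustrated with the sketch below: a classifier trained on encoded profile and history features produces a probability for a candidate item, which is then mapped onto hedged recommendation wording. The feature encoding, thresholds and use of scikit-learn's GaussianProcessClassifier are assumptions for illustration; collaborative filtering [5] could serve equally as the underlying recommender.

```python
# Minimal sketch: confidence-weighted apparel recommendation phrasing driven
# by a Gaussian process classifier.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

def recommendation_phrase(probability):
    if probability > 0.9:
        return "you will definitely love this"
    if probability > 0.6:
        return "you may like this"
    return None   # not confident enough to recommend

# Toy training data: rows are numeric encodings of profile + browsing features,
# labels indicate whether the user liked the corresponding item.
X = np.array([[0.9, 0.1, 1.0], [0.8, 0.2, 1.0], [0.1, 0.9, 0.0], [0.2, 0.7, 0.0]])
y = np.array([1, 1, 0, 0])

model = GaussianProcessClassifier().fit(X, y)
candidate_item = np.array([[0.85, 0.15, 1.0]])
p_like = model.predict_proba(candidate_item)[0, 1]
print(recommendation_phrase(p_like))
```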
  • At any time while browsing or viewing products, the user may choose to try on apparel of interest on their user model to test the fit of apparel. In order to facilitate this process, a virtual fitting room is available to the user. The virtual fitting room includes items that the user has selected to try on or fit on their user model and that the user may or may not decide to purchase. In exemplary embodiment, the fitting room provides the user with a graphical, simulated representation of a fitting room environment and the apparel items selected for fitting on the user's model. The user can add an item to their fitting room by clicking on an icon next to the item they wish to virtually try on. Once an item has been added to the fitting room, that item will become available to the user in the local application for fitting on their model. An example of user interaction with the fitting room is illustrated in FIG. 27. While browsing apparel catalogues or viewing suggested apparel items by system 10, the user may choose to add an item to the fitting room for trial fit with their user model. Once the item has been added to the fitting room, the user may try on the item on their user model, and/or decide to purchase the item, in which case the apparel item can be added to the virtual wardrobe described later. Alternately, the user may decide not to purchase the item in which case the item will stay in the fitting room until the user chooses to delete it from their fitting room. Users may make the contents of their fitting room publicly accessible or restrict access to members of their social network or provide limited access to anyone they choose. This option will allow users to identify items of interest that other users have in their fitting room and browse and shop for the same or similar items on system 10. Physics based animation can be incorporated to make the fitting room, its contents and user interaction with the fitting room as realistic as possible. In exemplary embodiment, the clothes in the fitting room can be made to appear realistic by simulating real texture and movement of cloth. With regards to interaction with the digital apparel, accessories and other components, users may be able to drag and drop clothes, optical accessories, hairstyles, other apparel, accessories, and digitized components and their manifestations onto their character model. In one exemplary embodiment, they will be able to drag components placed in the fitting room or wardrobe or from an electronic catalogue onto their model. The drag-and-drop functionality may incorporate physics based animation to enhance realism. Optionally, the users may specify where things are placed on their character model. At any time while browsing or viewing products or trying apparel on their user model, the user may choose to order and purchase the real apparel online. The user may also submit fit information (visual as well as text) including information on where alterations may be needed, as provided by the modeling module 50, as well as any additional information associated with an apparel item that the user is purchasing online to a ‘tailoring’ service. This service would be able to make the requisite alterations for the user for a fee. A facility would also be available to the user to custom order clothes online from a designer or supplier of apparel if they (designer, supplier) choose to provide the service. 
In the case of purchasing gifts for other people, the user may build a model for the person for whom the gift is intended and fit apparel on to this third party model to test goodness of fit before purchasing the apparel. If the user for whom the gift is being purchased already has a user account/profile available in system 10, then their user model may be accessed by the gift-giver upon receiving permission from the user for purposes of testing goodness of fit. If a user wishes to access fit or other information or the user model of a friend, the friend would receive a notification that the specific information has been requested by the user. The friend would have the option to grant or deny access to any or all of their information or their user model. If the friend denies access, the user may still be able to purchase a gift for the friend as the system will be able to access the friend's information and inform the user if a particular apparel is available in their friend's size. The system would, thus, provide subjective information regarding the fit of an apparel with respect to another user without directly revealing any fit or other information of the user for whom the item is being purchased. If an apparel item is available in the friend's size, the user may order it upon which the system would deliver the appropriate sized apparel (based on the sizing and fit information in the friend's profile) to the friend. A confirmation request may be sent to the friend for confirming the size of the apparel before the purchase order is finalized. (This method can be used for other products such as prescription eyewear). Users have the option to display icons on their profile and/or home page that indicate gifts received from other people (items purchased on the site for respective user by other users). A ‘Mix and Match’ section will allow users to view items from different vendors. This could be, for instance, for purposes of coordinating different pieces of apparel (for example tops, bottoms, jewelry, bags). Users may coordinate items and visualize their appearance on the user model. This visualization would assist users in the mix and match process. Items on sale may also be presented from different vendors in the mix and match section. Items on sale/discounted items may also be presented in other areas of the site. Furthermore, there may be other sections on the site featuring special items available for purchase. In exemplary embodiment, these may include autographed apparel and other goods by celebrities. Not only is the user able to purchase real apparel from the site (described later on), but the user can also buy virtual manifestations of apparel, hairstyles, makeup etc. Users may be interested in purchasing these virtual items for use in external sites, gaming environments, for use with virtual characters in other environments etc. Users can also search for and buy items on other users' shopping lists, registries and/or wishlists. Users may also set-up gift registries accessible on their member pages for occasions such as weddings, anniversaries, birthdays etc.
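The gift-fit check described above, in which the system answers only whether an item is available in the friend's size without revealing the size itself, might be organized as in the following sketch. The data structures are illustrative; in practice the friend's profile would be held server-side behind access controls.

```python
# Minimal sketch: report availability in a friend's size without disclosing
# the friend's fit information to the gift-giver.
FRIEND_SIZE_PROFILES = {       # private to the system, never shown to the giver
    "friend-77": {"tops": "M", "shoes": "9"},
}

def can_gift_fit(friend_id, category, available_sizes):
    """Return only a yes/no answer about availability in the friend's size."""
    profile = FRIEND_SIZE_PROFILES.get(friend_id, {})
    size = profile.get(category)
    return size is not None and size in available_sizes

# The gift-giver only learns that the sweater can (or cannot) be ordered:
# can_gift_fit("friend-77", "tops", {"S", "M", "L"}) -> True
```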
  • The shopping module 60 also determines for each user a preferred or featured style that would be suitable for the respective user. The determination of a preferred or featured style may be based on various inputs. Inputs may include the preferences and picks of a fashion consultant of which the system 10 keeps track. The one or more fashion consultants' choices for featured styles may be updated into the system 10, and the system 10 then provides respective users with updated style choices based on the selections of the fashion consultants. Also, styles and/or apparel items may be presented to the user based on information the system 10 has collected regarding their shopping preferences, stores, brands, styles and types of apparel that are purchased, along with personal information related to their physical profile and age. In addition, the user model may be used to make apparel suggestions by the system. In an exemplary embodiment, the convex hull of the user model is used to determine apparel that would best fit/suit the user. The various featured looks that are selected by the system 10 may be presented to the user upon request of the user, and the selected featured looks may also be presented to the user upon login to the system. Also, various selected styles with a user's model may be presented to the user upon request or upon login where the user model is modeling apparel that is similar to what celebrities or other notable personalities may be wearing. Fashion consultants, stylists and designers may be available on site for providing users with fashion tips, news, recommendations and other fashion related advice. Live assistance may be provided through a chat feature, video and other means. Additionally, it may be possible for users to book appointments with fashion consultants of their choice. Animated virtual characters representing fashion consultants, stylists and designers may also be used for the purpose of providing fashion related advice, tips, news and recommendations. Virtual fashion consultants may make suggestions based on the user's wardrobe and fitting room contents. It would also be possible for users interested in giving fashion advice to other users to do so on the site. In an exemplary embodiment, this may be accomplished by joining a ‘fashion amateurs’ network where members may provide fashion advice to other users or even display their own fashion apparel designs. Consultants may be available to provide assistance with other services such as technical, legal, financial etc.
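One way the convex hull of the user model could feed a fit suggestion, as mentioned above, is sketched below: vertices in a horizontal band (for example around the chest) are projected to two dimensions and the perimeter of their convex hull serves as an estimated girth to compare against a garment measurement. The band limits, ease value and mesh format (an N x 3 vertex array with height on the third axis) are assumptions.

```python
# Minimal sketch: estimate a body girth from the user model's convex hull and
# compare it against a garment measurement.
import numpy as np
from scipy.spatial import ConvexHull

def estimated_girth(vertices, band_low, band_high):
    """Perimeter of the 2-D convex hull of vertices whose height lies in the band."""
    band = vertices[(vertices[:, 2] >= band_low) & (vertices[:, 2] <= band_high)]
    hull = ConvexHull(band[:, :2])     # project to the horizontal plane
    return hull.area                   # for 2-D input, .area is the perimeter

def fits(vertices, band_low, band_high, garment_girth, ease=2.0):
    """True when the garment girth covers the body girth plus the desired ease."""
    return garment_girth >= estimated_girth(vertices, band_low, band_high) + ease
```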
  • The wardrobe module 62 provides the user with a graphical, simulated representation of the contents of their real and/or virtual wardrobe. The virtual wardrobe comprises the respective items of apparel that are associated with the user in the system 10. For example, the virtual wardrobe will store all of the items that the user has purchased. FIG. 27 describes an instance of user interaction with the virtual wardrobe 440 and fitting room 420. The user may browse apparel 400 displayed by the system, an instance of which is described with reference to FIG. 22. Once the user decides to purchase an item, it will be added to the virtual wardrobe. The user may then choose to keep the item in their wardrobe or delete it. If the user decides to return an item, that item will be transferred from the user's wardrobe to the fitting room. The virtual wardrobe may also comprise representations of apparel items that the user owns that are not associated with the system 10. For example, the user may upload respective images, animation, video and other multimedia formats or any combination thereof of various real apparel items to the system 10. Once uploaded, the users are then able to interact with their respective physical wardrobe contents through use of the system 10. Identification (ID) tags on the virtual wardrobe items may assist the user in mapping items from the real to virtual wardrobe. An ID tag can have standard or user defined fields in order to identify a given item. Standard fields, for instance, can include, but are not limited to, ID number, colour, apparel type, occasion, care instructions, price, make and manufacturer, store item was purchased from, return policy etc. User defined fields may include, for example, comments such as ‘Item was gifted to me by this person on this date’, and other fields. Users are able to browse the contents of their wardrobe online. This allows the user the ability to determine which apparel items they may need to purchase based on their need and/or desire. Users may make the contents of their wardrobe publicly accessible or restrict access to members of their social network or provide limited access to anyone they choose. This option will allow users to identify items of interest that other users have in their wardrobe and browse and shop for the same and/or similar items on the system 10. An icon may appear on the profile/home page of the user—‘buy what this user has bought’ to view recent purchases of the user and buy the same and/or similar items via system 10. The user may also decide to conduct an auction of some or all of the real items in their wardrobe. In such a case, the user will be able to mark or tag the virtual representations of these items in their virtual wardrobe and other users with access to the wardrobe can view and purchase auction items of interest to them. In exemplary embodiment, an icon may appear on the profile page of the user indicating that they are conducting an auction to notify other users. It may be possible for users to mark items in their virtual wardrobe for dry-cleaning. This information may be used to notify dry-cleaning services in the area about items for pick-up and delivery from respective users in an exemplary embodiment. Physics based animation can be incorporated to make the wardrobe, its contents and user interaction with the wardrobe as realistic as possible. In exemplary embodiment, the clothes in the wardrobe can be made to appear realistic by simulating real texture and movement of cloth.
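As a rough illustration only, the ID tag described above could be represented as a simple record with the standard fields listed in the text plus free-form user-defined fields. The following Python sketch is an assumption about one possible representation; the field and item names are invented for the example:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class WardrobeItemTag:
    """Illustrative ID tag for a virtual wardrobe item (standard plus user-defined fields)."""
    item_id: str
    colour: str = ""
    apparel_type: str = ""        # e.g. "shirt", "skirt"
    occasion: str = ""            # e.g. "formal", "casual"
    care_instructions: str = ""
    price: float = 0.0
    manufacturer: str = ""
    store: str = ""
    return_policy: str = ""
    user_fields: Dict[str, str] = field(default_factory=dict)  # free-form user-defined fields

# Example: tagging a gifted item with a user-defined note
tag = WardrobeItemTag(item_id="W-0042", colour="navy", apparel_type="blazer",
                      store="Example Outfitters", price=120.0)
tag.user_fields["gift_note"] = "Gifted to me by Alex on 2009-03-01"
```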
  • Users may organize their virtual wardrobe contents according to various criteria. The wardrobe classification criteria may include, but are not limited to, colour, style, occasion, designer, season, size/fit, clothing type, fabric type, date of purchase etc. By indexing the apparel items that belong to the user according to various criteria, the user may then be able to determine through various search criteria what items of apparel to wear. The virtual wardrobe may also have associated with it multimedia files such as music, which provide a more enjoyable experience when perusing the contents of the virtual wardrobe. A virtual/real style consultant and/or other users may be available to advise on the contents of the wardrobe.
  • The advertising module 64 in an exemplary embodiment coordinates the display and use of various apparel items and non-apparel items. Advertisers associated with the system 10 wish for their particular product offering to be displayed to the user in an attempt to increase the product's exposure. The advertising module determines which offering associated with an advertiser is to be displayed to the user. Some components related to the advertising module 64 are linked to the environment module, the details of which were discussed in the section describing the environment module 56. These include, in exemplary embodiments, environments based on a theme reflecting the product being advertised; components associated with environments such as advertisement banners and logos; actual products being advertised furnishing/occupying the environments. Music advertisers can link environments with their playlists/soundtracks/radio players. Movie advertisers can supply theme based environments which may feature music/apparel/effigies and other products related to the movie. Users will be able to display character models on their profile page wearing sponsored apparel (digitized versions) that sponsors can make available to users through the advertising module 64; or users can display images or videos of themselves in their profile wearing real sponsored apparel. In a similar manner, users supporting a cause may buy real or digital apparel sponsoring the cause (for example, a political or charitable cause) and display their character model in such apparel or put up videos or images of themselves in real versions of the apparel. Advertisers belonging to the tourism industry may use specific environments that showcase tourist spots, cultural events, exhibitions, amusement parks, natural and historical sites and other places of interest to the tourist. The above examples have been mentioned as exemplary embodiments to demonstrate how advertisers can take advantage of the environment module 56 for brand/product advertising purposes.
  • The entertainment module 66 encompasses activities that include the user being able to interact and manipulate their model by animating it to perform different activities such as singing, dancing, etc and using it to participate in gaming and augmented reality environments and other activities. Some features associated with the entertainment module 66 have already been discussed in the context of the environment module 56. These include the ability of the user to animate the virtual model's movements, actions, expressions and dialogue; the facility to use the model in creating music videos, movies, portraits; interacting via the model with different users in chat sessions, games, shopping trips etc.; and other means by which the user may interact with the virtual model or engage it in virtual activities. Additionally, the entertainment module 66 features the user model or another virtual character on the user's profile page as an ‘information avatar’ to provide news updates, fashion updates, information in the form of RSS feeds, news and other feeds and other information that is of interest to the user or that the user has subscribed to. The character model may supply this information in various ways, either through speech, or by directing to the appropriate content on the page or by displaying appropriate content at the request of the user, all of which are given as exemplary embodiments. The main purpose of using the virtual model to provide information feeds and updates of interest to the user is to make the process more ‘human’, interactive and to provide an alternative to simple text and image information and feed content. Further to this, the ‘information avatar’ or ‘personal assistant’ can incorporate weather information and latest fashion news and trends, as an exemplary embodiment, to suggest apparel to wear to the user. Information from the media agency servers 25 and entertainment servers 23 is used to keep the content reported and used by the ‘information avatar’ updated. Users will be able to interact with each other using creative virtual tools. An example includes interactive virtual gifts. These gifts may embody virtual manifestations of real gifts and cards. Users may have the option to virtually wrap their presents using containers, wrapping and decoration of their choice. They may also set the time that the virtual gift automatically opens or is allowed to be opened by the gift-receiver. Exemplary embodiments of gifts include pop-up cards and gifts; gifts with text/voice/audio/video/animated messages or coupons and other surprises; gifts that grow or change over time. An example of a gift that changes over time constitutes a tree or a plant that is still a seedling or a baby plant when it is gifted and is displayed on the gift-receiver's home page for example. Over fixed time intervals, this plant/tree animation would change to reflect virtual ‘growth’ until the plant/tree is fully grown at a specified endpoint. The type of plant/tree may be a surprise and may be revealed when the plant/tree is fully grown at the end of the specified period. There may be a surprise message or another virtual surprise/gift that is displayed/revealed to the user when the plant/tree reaches the endpoint of the growth/change interval. Gifts that change over time may include other objects and are not necessarily restricted to the examples above.
  • The server application 22 also has associated with it a data store 70. The server application 22 has access to the data store 70 that is resident upon the portal server 20 or associated with the portal server 20. The data store 70 is a static storage medium that is used to record information associated with the system 10. The data store 70 is illustrated in further detail with respect to FIG. 4.
  • Reference is now made to FIG. 4 where the components of the data store 70 are shown in a block diagram in an exemplary embodiment. The components of the data store 70 shown here are shown for purposes of example, as the data store 70 may have associated with it one or more databases. The databases that are described herein as associated with the data store are described for purposes of example, as the various databases that have been described may be further partitioned into one or more databases, or may be combined with the data records associated with other databases.
  • The data store 70 in an exemplary embodiment comprises a user database 80, an apparel database 82, a 3-D model database 84, and an environment database 86. The user database 80 in an exemplary embodiment is used to record and store information regarding a user of the system 10. Such information includes, but is not limited to, a user's access login and password that is associated with the system 10. A user's profile information is also stored in the user database 80, which includes age, profession, personal information, and the user's physical measurements that have been specified by the user, images provided by the user, a user's history, and information associated with a user's use of the system. A user's history information may include, but is not limited to, the frequency of their use of the system, the time and season they make purchases, the items they have purchased, the retailers from whom the items were purchased, and information regarding the various items. Information regarding the various items may include, but is not limited to, the colour, style and description of the items. The apparel database 82 stores information regarding the various items of apparel that are available through the system 10. The 3-D model database 84 stores predetermined 3-D models and parts of various 3-D models that are representative of various body types. The 3-D models are used to specify the user model that is associated with the user. The environment database 86 stores the various environments that are provided by the system 10 and that may be uploaded by users as described below.
  • Reference is now made to FIG. 5, where a flowchart illustrating the steps of an access method 100 is shown in an exemplary embodiment. Access method 100 is engaged by the user when the user first logs into the system 10. The access method 100 describes the various options that are available to the user upon first accessing the system. Method 100 begins at step 101, where the user accesses the system 10 by logging into the system 10. Users can also browse the system without authentication as a guest. Guests have access to limited content. As described above in an exemplary embodiment, the system 10 is accessible through the Internet. As the system 10 is accessible through the Internet, the user accesses the system by entering the URL associated with the system 10. Each user of the system 10 has a login and password that is used to access the system 10. Upon successful validation as an authorized user, method 100 proceeds to step 102, where the user is presented with their respective homepage. The user may be shown their user model (if they have previously accessed the system) displaying featured items of apparel when they log in. The user is presented with a variety of options upon logging into the system 10. Method 100 proceeds to step 103 if the user has selected to modify the respective environments associated with the user. At step 103, the user, as described in detail below, has the ability to modify and alter the respective virtual environments that are associated with the user. Method 100 proceeds to step 104 when the user chooses to manage their friends. Users may add other users from within the system 10, and from external community sites, as their friends, and may manage the interaction with their friends. The management of friends in the system 10 is explained in further detail below. Method 100 proceeds to step 105 when the user wishes to generate or interact with their user model. Method 100 proceeds to step 106 when the user wishes to view items that may be purchased. Method 100 proceeds to step 107 when the user wishes to engage in different collaborative and entertainment activities as described in this document. The steps that have been described herein have been provided for purposes of example, as various additional and alternative steps may be associated with a user's accessing of their respective home page.
  • Reference is now made to FIG. 6A, where the steps of a detailed model generation method 110 are shown in an exemplary embodiment. The model generation method 110 outlines the steps involved in generating the 3-D user model. Method 110 begins at step 111, at which the user provides data to the system 10. The data can be provided all at once or incrementally. The data can be provided by the user or by his/her friends. Friends may grant or deny access to data request and have control over what data is shared. The data provided may include but is not limited to image(s) and/or video(s) of the face 113 and/or body 114; measurements 115 of the body size including the head as described below; apparel size commonly worn by the user and the preferred apparel size(s) and preferences 116 for style of clothing (such as fitted, baggy, preferred placement of pants (above, below, or on waist), color, European, trendy, sophisticated etc.), brands, etc.; laser scan data (obtained, for example, from a booth at a store equipped with a laser scanner), meshes (for instance, those corresponding to impressions of the ear or foot), outlines of body parts (for instance, those of the hands and feet), mould scans, mocap data, magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), and computed tomography (CT) data 117; and other data 118 such as correspondence between feature points on the 3D model's surface and the 2D images supplied by the user (for example the location of the feature points on the face as shown in FIG. 11), references to anatomical landmarks on the user supplied data, and user specific info such as the age or age group, gender, ethnicity, size, skin tone, weight of the user. User data may be imported from other sources such as social-networking sites or the virtual operating system described later in this document. (Such importing of data also applies to the other portals discussed in this document).
  • The input to the method 110 includes prior information 112 including, but not limited to, annotated 3D surface models of humans that include information such as anatomical landmarks, age, gender, ethnicity, size, etc.; anatomical information, for instance, probability densities of face and body proportions across gender, age groups, ethnic backgrounds, etc.; prior knowledge on the nature of the input data such as shape-space priors (SSPs) (described below), priors on measurements, priors on acceptable apparel sizes, priors on feature point correspondence; sequencing of steps for various action factors (described below), etc. The prior information 112 includes data stored in the data store 70. The prior information 112 is also used to determine “surprise” as described later in this document.
  • Based on the information provided at step 111 or data from 113-118, system 10 makes recommendations to the user on stores, brands, apparel as well as provides fit information, as described previously. As users browse apparel, the system informs the user about how well an apparel fits, if the apparel is available in a given user's size and the specific size in the apparel that best fits the user. In suggesting fit information, the system takes into account user fit preferences, for example a user's preference for loose fit clothing. The system may suggest whether apparel suits a particular user based on the user's style preferences. In exemplary embodiment, there may be a “your style” field that gives an apparel a score in terms of style preferred by the user. In another exemplary embodiment, the system may recommend a list of items to the user ordered according to user preferences. For instance, a user may prefer collar shirts over V-necks. Furthermore, the user may not like turtlenecks at all. When this user browses a store collection with different shirt styles, the system may present the shirt styles to the user in an ordered list such that the collar shirts are placed above the V-neck shirts and the turtlenecks are placed towards the bottom of the ordered list, so that the user has an easier time sorting out and choosing styles that suit their taste and preferences from the store collection.
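As a rough illustration of the preference-ordered presentation just described, the following Python sketch sorts a store collection by a hypothetical per-user style weight (collar shirts preferred, turtlenecks disliked). The preference values and item fields are invented for the example and are not part of the described system:

```python
# Hypothetical per-user style preferences: higher weight = stronger preference,
# negative weight = disliked style.
user_style_prefs = {"collar": 2.0, "v-neck": 1.0, "turtleneck": -1.0}

store_collection = [
    {"name": "Oxford shirt", "style": "collar"},
    {"name": "Slim V-neck tee", "style": "v-neck"},
    {"name": "Ribbed turtleneck", "style": "turtleneck"},
    {"name": "Button-down check shirt", "style": "collar"},
]

# Sort so that preferred styles appear first and disliked styles sink to the bottom.
ordered = sorted(store_collection,
                 key=lambda item: user_style_prefs.get(item["style"], 0.0),
                 reverse=True)

for item in ordered:
    print(item["name"])
```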
  • In another exemplary embodiment, the system may combine style preferences as specified by the user, and/or user style based on buying patterns of the user, and/or other users' ratings of apparel, and/or fashion consultant ratings and/or apparel popularity (assessed according to the number of the particular apparel item purchased, for example). Any combination of the above information may be used to calculate the “style score” or “style factor” or “style quotient” of a particular item (the algorithm providing the score is referred to as the “style calculator”). In an exemplary embodiment, a user may select the information that the system should use in calculating the style factor of a particular item. The user may inquire about the style score of any particular item in order to guide their shopping decision. The system may use the scores calculated by the style calculator in order to provide apparel recommendations; style ratings of products and apparel items; and user-customized catalogues and lists of products that are ordered and sorted according to an individual's preferences and/or the popularity of apparel items.
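One hedged reading of the "style calculator" above is a weighted combination of whichever signals are available for an item. The weights, signal names and scales in this Python sketch are assumptions for illustration only:

```python
def style_score(item, weights=None):
    """Combine optional signals into a single style score for an item.

    `item` may carry any subset of: user_pref_match (0..1), buying_pattern_match (0..1),
    avg_user_rating (0..5), consultant_rating (0..5), popularity (0..1).
    Missing signals are skipped, mirroring the user-selectable inputs described above.
    """
    weights = weights or {"user_pref_match": 0.35, "buying_pattern_match": 0.15,
                          "avg_user_rating": 0.2, "consultant_rating": 0.2,
                          "popularity": 0.1}
    # Normalize ratings given on a 0..5 scale to 0..1 before weighting.
    scale = {"avg_user_rating": 5.0, "consultant_rating": 5.0}
    total, weight_used = 0.0, 0.0
    for signal, w in weights.items():
        if signal in item:
            total += w * item[signal] / scale.get(signal, 1.0)
            weight_used += w
    return total / weight_used if weight_used else 0.0

print(style_score({"user_pref_match": 0.9, "avg_user_rating": 4.2, "popularity": 0.7}))
```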
  • Given apparel size, the system can inform a user of the body measurements/dimensions required to fit apparel of the specified size. Alternatively, given a user's body measurements, the system can inform the user of the apparel size that would fit in a given brand or make/manufacturer. Further, the system can suggest sizes to the user in related apparel. In exemplary embodiment, if a user is browsing jackets in a store and the system has information about the shirt size of the user, then based on the user's shirt size, the system can suggest the appropriate jacket sizes for the user. In an exemplary embodiment, the system can provide fit information to the user using a referencing system that involves using as reference a database containing apparel of each type and in each size (based on the standardized sizing system). Body measurements specified by a user are used by the system to estimate and suggest apparel size that best meets the user's fit needs (‘fit’ information incorporates user preferences as well such as preference for comfort, loose or exact fit etc.). The reference apparel size database is also used to suggest size in any of the different types of apparel such as jackets or coats or jeans or dress pants etc. In another exemplary embodiment of providing fit information using the reference apparel database, a user may be looking for dress pants, for instance, and the system may only know the user's apparel size in jeans and not the user's body measurements. In this case, in exemplary embodiment, the system compares jeans in the user's size from the reference apparel database with dress pants the user is interested in trying/buying, and by incorporating any additional user fit preferences, the system suggests dress pants that would best fit the user i.e., are compatible with the user's fit requirements. Fit information may specify an uncertainty along with fit information in order to account for, in exemplary embodiment, any differences that may arise in size/fit as a result of brand differences and/or apparel material properties and/or non-standardized apparel size and/or subjectivity in user preferences and/or inherent system uncertainty, if any exists. In exemplary embodiment, the system informs a user, who prefers exact fit in shirts, that a shirt the user is interested in purchasing, and which is a new polyester material with a different composition of materials and that stretches more as a result, fits with ±5% uncertainty. This is due to the fact that the stretch may or may not result in an exact fit and may be slightly loose or may be exact. Since the material is new and the system may not have information on its material properties and how such a material would fit, it cannot provide an absolute accurate assessment of the fit. It instead uses material information that is close to the new material in order to assess fit, and expresses the uncertainty in fit information. Fit information is communicated to the user, in exemplary embodiment, via text, speech or visually (images, video, animation for example) or any combination thereof. An API (Application Programming Interface) would be open to vendors on the retail server or portal server on system 10 so that vendors can design and make available applications to users of system 10. These applications may include, in exemplary embodiment, widgets/applications that provide fit information specific to their brands and products to users; store locater applications etc. 
In an exemplary embodiment, an application that lets vendors provide fit information works simply by a lookup in a database or by using a classifier such as Naïve Bayes [7-9] or k-nearest neighbours (KNN) [9, 10]. For example, an application may state whether a garment that a user is browsing from a catalog fits the user. In exemplary embodiments: (1) Database. The application can look up the user's size and the manufacturer of the clothing in a database to find the size(s) corresponding to the given manufacturer that fit the user. If the item currently being viewed is available in the user's size, the item is marked as such. The database can be populated with such information a priori and the application can add to the database as more information becomes available. (2) Naïve Bayes. The a posteriori probability of an apparel size (as) fitting a user, given the user's body size (us) information and the manufacturer of the apparel (m), can be computed using Bayes' rule. This can be expressed as the product of the probability of the user's size (us) given the apparel size (as) and the manufacturer (m) of the apparel, and the prior probability of the apparel size given the manufacturer, divided by the probability of the user's size given the manufacturer (i.e., p(as|us,m)=p(us|as,m)p(as|m)/p(us|m)). The prior probabilities can be learnt by building histograms from sufficiently large data and normalizing them so that the probability density sums to one. The user may be presented with items that fit the user, or the apparel sizes that fit the user may be compared with the item that the user is currently viewing, and if the item that is being viewed belongs to the apparel sizes that fit the user, a check mark or a “fits me” indication may be made next to the item. (3) KNN. Information on the body size (for example, measurements of various parts of the body), apparel size for different manufacturers for both males and females, and (optionally) other factors such as age are stored in a database for a sufficiently large number of people. Each of these pieces of information (i.e. body size, apparel size) is multiplied by a weight (to avoid biases). Given a new body size, the closest exemplars are found by computing the Euclidean distance between the given body size (multiplied by the associated weights for each measurement) and those in the database. The majority vote of the output value (i.e. the corresponding field of interest in the database, for example, the apparel size corresponding to the body measurements) of the k-nearest neighbours (where k is typically taken to be an odd number) is taken to be the most reasonable output. This output value is then divided by the corresponding weight (the weight can also take the value 1). This could also be used with any other combination of inputs and outputs. For example, the input could be the apparel size for a given manufacturer and the output could be the body sizes that fit this apparel. In an exemplary embodiment, when browsing for products, given the user's body size (which may be stored in a repository) and the manufacturer whose items the user is currently looking at, the apparel sizes that fit the user may be computed and the user may be presented with the available sizes for the user. The user can also filter catalogs to show only items that fit the user or correspond to the user's preferences.
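The Bayes-rule variant above can be sketched roughly as follows, assuming the conditional and prior probability tables have already been estimated from normalized histograms as the text suggests. The manufacturer name, sizes and probability values are placeholders, not data from the specification:

```python
# Placeholder probability tables, assumed to have been learnt from histograms:
# p_us_given_as_m[(us, as_, m)] = p(user body size | apparel size, manufacturer)
# p_as_given_m[(as_, m)]        = p(apparel size | manufacturer)
p_us_given_as_m = {
    ("M-body", "M", "AcmeWear"): 0.7, ("M-body", "L", "AcmeWear"): 0.25,
    ("M-body", "S", "AcmeWear"): 0.05,
}
p_as_given_m = {("S", "AcmeWear"): 0.3, ("M", "AcmeWear"): 0.4, ("L", "AcmeWear"): 0.3}

def fit_posterior(user_size, manufacturer, sizes=("S", "M", "L")):
    """Posterior p(apparel size | user size, manufacturer) via Bayes' rule,
    normalized over the candidate apparel sizes."""
    unnormalized = {}
    for as_ in sizes:
        likelihood = p_us_given_as_m.get((user_size, as_, manufacturer), 1e-6)
        prior = p_as_given_m.get((as_, manufacturer), 1e-6)
        unnormalized[as_] = likelihood * prior
    total = sum(unnormalized.values())
    return {as_: v / total for as_, v in unnormalized.items()}

posterior = fit_posterior("M-body", "AcmeWear")
best_size = max(posterior, key=posterior.get)
print(posterior, "-> mark the item 'fits me' if it is offered in size", best_size)
```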
  • Based on a user's apparel size, the system can point out to the user if a product is available in the user's size as the user is browsing products or selecting products to view. The system may also point out the appropriate size of the user in a different sizing scheme, for example, in the sizing scheme of a different country (US, EUR, UK etc.). In suggesting appropriate sizes to user in products that may vary according to brand, country, and other criteria, the system also takes into account user fit preferences. For instance, a user may want clothes to be a few inches looser than his/her actual fit size. In an exemplary embodiment, the system would add the leeway margin, as specified by the user, to the user's exact fit apparel size in order to find the desired fit for the user.
  • Method 110 proceeds to the preprocessing step 119, at which it preprocesses the user data 111 using prior knowledge 112 to determine the appropriate combination of modules 120, 123, 124, 125, and 126 to invoke. Method 110 then invokes and passes the appropriate user data and prior knowledge to an appropriate combination of the following modules: image/video analysis module 120, measurements analysis module 123, apparel size analysis module 124, mesh analysis module 125, and a generic module 126, as described in detail below. These modules 120, 123, 124, and 125 attempt to construct the relevant regions of the user model based on the input provided. At the information fusion step 127, the data produced by the modules 120, 123, 124, 125 and 126 is fused. Method 110 then instantiates a preliminary model at step 128, optimizes it at the model optimization step 129, and details it at step 130. Method 110 then presents the user with a constructed model at step 131 for user modifications, if any. The constructed model and the user changes are passed on to a learning module 132, the output of which is used to update the prior knowledge in order to improve the model construction method 110. As method 110 proceeds, its intermediary progress is shown to the user. At any point during the model construction method 110, the user is allowed to correct the method. In an exemplary embodiment, this is done by displaying the model at the intermediate steps along with the parameters involved and allowing the user to set the values of these parameters through an intuitive interface. At the conclusion of method 110, a user model is generated. Each of the steps of method 110 is described in further detail below.
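A minimal sketch of the dispatch decision at the preprocessing step 119, assuming each kind of user data is tagged with a key. The module names follow the text, but the dispatch table itself and the data keys are assumptions:

```python
def select_modules(user_data):
    """Decide which analysis modules to invoke given the data the user supplied.

    `user_data` is a dict whose keys may include 'images', 'measurements',
    'apparel_size' and 'meshes'.  Regions with no supporting data fall back
    to the generic module, as described in the text.
    """
    modules = set()
    if user_data.get("images"):
        modules.add("image_video_analysis")     # module 120
    if user_data.get("measurements"):
        modules.add("measurements_analysis")    # module 123
    if user_data.get("apparel_size"):
        modules.add("apparel_size_analysis")    # module 124
    if user_data.get("meshes"):
        modules.add("mesh_analysis")            # module 125
    if not modules:
        modules.add("generic")                  # module 126 only, in the extreme case
    return modules

print(select_modules({"images": ["face.jpg"], "apparel_size": {"collar": 42}}))
```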
  • Measurements 115 provided as input to the method 110 include, in an exemplary embodiment, measurements with respect to anatomical landmarks, for example, the circumference of the head and neck, distance from trichion to tip of nose, distance from the tip of the nose to the mental protuberance, width of an eye, length of the region between the lateral clavicle region to anterior superior iliac spine, circumference of the thorax, waist, wrist circumference, thigh circumference, shin length, circumference of digits on right and left hands, thoracic muscle content, abdominal fat content, measurements of the pelvis, measurements of the feet, weight, height, default posture (involving measurements such as elevation of right and left shoulders, stance (upper and lower limbs, neck, seat, waist, etc.), humping, etc.). Apparel size/preferences 116 include, in an exemplary embodiment, clothing size such as dress size (eg. 14, 8, etc.), hat size, shoe size, collar size, length of jacket, trouser inseam, skirt length etc., including an indication of whether measurements represent an exact size or include a preferred margin or are taken over clothes. The specific measurements differ for males and females reflecting the anatomical difference between the genders and differences in clothing. For instance, in the case of females, measurements may include a more elaborate measurement of the upper thorax involving measurements such as those of the largest circumference of the thorax covering the bust, shoulder to bust length, bust to bust length etc. On the other hand, in the case of males, owing to lower curvature, fewer measurements of the chest may be required. Similarly, for the case of clothing, women may provide, for instance, the length of a skirt, while men may provide a tie size. Similarly, children and infants are measured accordingly. The availability of information on anatomical landmarks makes it possible to derive anatomically accurate models and communicate fit information to the user as described below. Strict anatomical accuracy is not guaranteed when not desired by the user or not possible, for example, under stringent computational resources. A printable tape measure is provided to the user as a download to ease the process of measuring. Image(s) and/or video(s) of the face 113 and/or body 114 provided to the system can also be imported from other sources and can also be exported to other destinations. In an exemplary embodiment, the method 110 may use images that the user has uploaded to social networking sites such as Facebook or Myspace or image sharing sites such as Flickr.
  • The method 110 can work with any subset of the data provided in 111, exemplary embodiments of which are described below. The method 110 is robust to incomplete data and missing information. All or part of the information requested may be provided by the user i.e. the information provided by the user is optional. In the absence of information, prior knowledge in the form of symmetry, interpolation and other fill-in methods, etc are used as described below. In the extreme case of limited user data, the method 110 instantiates, in an exemplary embodiment, a generic model which could be based on an average model or a celebrity model. Depending on factors such as the information provided by the user(s), computational power of the client platform, shader support on client machine, browser version, platform information, plugins installed, server load, bandwidth, storage, user's preferences (eg. photorealistic model or a version of nonphotorealistic rendering (NPR)) etc., the method 110 proceeds accordingly as described below. These factors are herein referred to as action factors. Depending on the action factors, a 3D model of appropriate complexity is developed. When a highly complex (a higher order approximation with a higher poly count) model is generated, a downsampled version (a lower poly count model) is also created and stored. This lower poly count model is then used for physical simulations in order to reduce the processing time while the higher poly count model is used for visualization. This allows plausible motion and an appealing visualization. Goodness of fit information for apparel is computed using the higher poly count model unless limited by the action factors.
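One rough reading of how the action factors might drive model complexity, with a high poly count budget for visualization and a downsampled copy for physical simulation as described above. The thresholds, factor names and polygon counts below are invented for illustration:

```python
def choose_poly_counts(action_factors):
    """Pick a visualization (high) and simulation (low) polygon budget
    from a few illustrative action factors."""
    gpu_ok = action_factors.get("shader_support", False)
    bandwidth = action_factors.get("bandwidth_mbps", 1.0)
    wants_npr = action_factors.get("prefers_npr", False)

    if wants_npr or not gpu_ok or bandwidth < 1.0:
        high_poly = 10_000      # modest model for constrained clients or NPR requests
    elif bandwidth < 10.0:
        high_poly = 50_000
    else:
        high_poly = 200_000     # photorealistic-quality budget

    low_poly = max(2_000, high_poly // 10)   # downsampled copy used for physics
    return {"visualization": high_poly, "simulation": low_poly}

print(choose_poly_counts({"shader_support": True, "bandwidth_mbps": 25.0}))
```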
  • Method 110, at the preprocessing step 119, preprocesses the user input data using prior knowledge to determine which of the modules 120, 123, 124, 125 and 126 to invoke; depending on the input provided and the action factors, an appropriate combination of modules 120, 123, 124, 125 and 126 is invoked. The method 110 attempts to construct the most accurate model based on the data for the given action factors. The accuracy of a model constructed using each of the modules 120, 123, 124, 125 and 126 is available as prior knowledge 112, and is used to determine the appropriate combination of modules 120, 123, 124, 125 and 126 to invoke. In an exemplary embodiment where the client platform is computationally advanced (modern hardware, latest browser version, shader support, etc.), if only images of the face and body are provided by the user, only the image/video analysis module 120 is invoked; if only body measurements are provided, only the measurements analysis module 123 is invoked; if only apparel size information is provided, only the apparel size analysis module 124 is invoked; if only a full body laser scan is provided, only the mesh analysis module is invoked; if only apparel size information and an image of the face are provided, only the apparel size analysis module 124 and the images/videos analysis module, more specifically the head analysis module 121, are invoked; if only an image of the face is provided, only the generic module 126 and the images/videos analysis module, more specifically the head analysis module 121, are invoked; if an image of the face, body measurements and a laser scan of the foot are provided, the images/videos analysis module, more specifically the head analysis module 121, the measurements analysis module and the mesh analysis module are invoked; and so on. For regions of the body for which information is unavailable, the generic module is invoked. In the extreme case of no user information or very limited computational resources, only the generic module 126 is invoked. Other data 118, such as age and gender, if provided, and prior knowledge are available to each of the modules 120, 123, 124, 125 and 126 to assist in the model construction process. Parameters may be shared between the modules 120, 123, 124, 125 and 126. Each of the modules 120, 123, 124, 125 and 126 is described in detail next.
  • Reference is now made to the images/videos analysis module 120 in FIG. 6A. This module consists of a head analysis module 121 and a body analysis module 122, in an exemplary embodiment. The head analysis module 121 and the body analysis module 122 construct a 3-D model of the user's head and body, respectively, based on the image(s) and video(s) provided. The head analysis module 121 and the body analysis module 122 may work in parallel and influence each other. The head analysis module 121 and the body analysis module 122 are described in detail below.
  • Reference is now made to FIG. 6B where the steps of the model construction process of the images/videos analysis module 120 are outlined in an exemplary embodiment. After receiving image and/or video file(s), this module extracts information on the user's physical attributes at step 137 and generates a three-dimensional model at step 138. A detailed description of this process is provided below.
  • Reference is now made to FIG. 6C where it is shown, in an exemplary embodiment, that the steps of the model construction process in the image/video analysis module are handled separately for the user's face and the body. The head analysis module 121 produces a model of the user's head while the body analysis module 122 produces a model of the user's body. These models are then merged at the head-body fusion step. A detailed description of this process is provided below.
  • Reference is now made to FIG. 6D, wherein a detailed description of the model generation process of the images/videos analysis module 120 for steps 121 and 122 is provided in an exemplary embodiment. The steps of the model construction are first described in the context of the head analysis module 121. The body analysis module 122 proceeds in a similar fashion. Once invoked by method 110, the module 120, after receiving image(s) and/or videos and prior knowledge, first sorts the data into images and videos at step 139, based on the file extension, file header, or user tag in an exemplary embodiment. If only image(s) are present, the method proceeds to the preprocessing step 141. If only video(s) are present, the method first extracts images from the video that approximately represent a front view of the face and/or a side view of the face, if available, and proceeds to the preprocessing step 141. This is done in an exemplary embodiment using a technique similar to that used in [11]. In another exemplary embodiment, a 3D model of the face is constructed using a technique similar to that in [12]. If a combination of videos and images is present and the resolution of the image(s) is higher than that of the video, the method proceeds to the preprocessing step 141 using the higher resolution images. If a low resolution video is present, for example a video captured using a cell phone, high resolution images are first generated and then the method proceeds to the preprocessing step 141. This can be done, in an exemplary embodiment, using a technique similar to that used in [13]. Stereo images and/or videos can also be processed. In an exemplary embodiment, this can be done using a technique similar to [14].
  • Reference is now made to the preprocessing step 141 in FIG. 6D of the image/video analysis module 120, wherein the image(s) are preprocessed. This involves, in an exemplary embodiment, resizing, scaling, de-noising, etc., if necessary, to bring the images to a canonical form. An approximate region containing the face in the images is identified at this step. This is done, in an exemplary embodiment, using a rotationally invariant neural network. In another exemplary embodiment, this can be done using support vector machines (SVMs) in a manner similar to that described in [15]. The location(s) of the face(s) in the image(s) and associated parameters (eg. approximate facial pose, scale, etc.), and a probability density over the image space identifying the foreground (face regions) and the background, are then passed to the next step. In an exemplary embodiment, this density is defined as a Gaussian about the location of the face. Facial pose is defined as the 3D orientation of a person's face in 3D space. It can be parameterized, in an exemplary embodiment, by the orientation of the line joining the eyes and the two angles between the facial triangle (formed by the eyes and nose) and the image plane. The scale of the image is computed, in an exemplary embodiment, using (i) the measurement of a reference region as marked by the user, if available, or (ii) the size of a common object (eg. a highlighter) in the image at approximately the same depth as the person in the image, if available, or (iii) the measured size of a known object (eg. a checkered pattern) held by the user in the image. If multiple faces are detected in a single image, the user may be asked which face the user would like a model created for, or a model may be created for each face in the image, allowing the user to decide which ones to store and which ones to delete. The method 110 then proceeds to step 148, where the global appearance is analyzed, and to step 142, where the local features of the head are analyzed. The global appearance analysis step 148 involves, in an exemplary embodiment, projecting the foreground onto a manifold constructed, for example, using principal component analysis (PCA), probabilistic principal component analysis (PPCA), 2D PCA, Gaussian Process Latent Variable Models (GPLVM), or independent component analysis (ICA). This manifold may be parameterized by global factors such as age, gender, pose, illumination, ethnicity, mood, weight, expression, etc. The coefficients corresponding to the projection are used to produce a likelihood of observing the images given a face model. In an exemplary embodiment, this is given by a Gaussian distribution centered at the coefficients corresponding to the projection. The estimated parameters from the previous step are updated using Bayes' rule and the likelihood determined at this step. The posterior global parameters thus computed serve as priors at step 142. Depending on the action factors, the method 110 segments the face into various anatomical regions (steps 143-146), projects these regions onto local manifolds (at steps 149 and 150) to generate local 3D surfaces, fuses these local 3D surfaces and post-processes the resulting head surface (steps 151 and 152), optimizes the model (step 153) and adds detail to the model (step 154). These steps are described in detail below.
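A compact sketch of the global appearance analysis at step 148, assuming a PCA basis has already been learnt from aligned face images. It uses NumPy and models the likelihood as a spherical Gaussian centred on the projection coefficients, as suggested above; the dimensions and random stand-ins are illustrative only:

```python
import numpy as np

def global_appearance_likelihood(face_pixels, mean_face, pca_basis, sigma=1.0):
    """Project a flattened face region onto a PCA manifold and return the
    projection coefficients plus a Gaussian likelihood function centred on them.

    face_pixels : (d,) flattened grayscale face region
    mean_face   : (d,) mean of the training faces
    pca_basis   : (k, d) top-k principal components (rows)
    """
    coeffs = pca_basis @ (face_pixels - mean_face)        # projection onto the manifold

    def likelihood(candidate_coeffs):
        diff = candidate_coeffs - coeffs
        return np.exp(-0.5 * np.dot(diff, diff) / sigma**2)

    return coeffs, likelihood

# Toy usage with random stand-ins for a learnt basis:
d, k = 64 * 64, 8
rng = np.random.default_rng(0)
basis = rng.standard_normal((k, d))
basis /= np.linalg.norm(basis, axis=1, keepdims=True)
coeffs, lik = global_appearance_likelihood(rng.random(d), rng.random(d), basis)
print(coeffs.shape, lik(coeffs))  # the likelihood is maximal at the projection itself
```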
  • The method 110 at step 142 identifies various anatomical regions of the face in the image and uses this information to construct a 3D surface of the head. This is done, in an exemplary embodiment, using shape space priors (SSPs). SSPs are defined here as a probability distribution on the shape of the regions of an object (in this context a face), the relative positions of the different regions of the object, the texture of each of these regions, etc. SSPs define a prior on where to expect the different regions of the object. SSPs are constructed here based on anatomical data. In an exemplary embodiment, an SSP is constructed that defines the relative locations, orientations, and shapes of the eyes, nose, mouth, ears, chin and hair in the images. Using priors from step 148 and SSPs on the face, the method 110 at step 143 extracts basic primitives from the images such as intensity, color, texture, etc. The method 110 at step 144, to aid in segmentation of facial features, extracts more complex primitives such as the outlines of various parts of the face and proportions of various parts of the face using morphological filters, active contours, level sets, Active Shape Models (ASMs) (for example, [16]), or a Snakes approach [17], in an exemplary embodiment. As an example, the active contours algorithm deforms a contour to lock onto objects or boundaries of interest within an image using energy minimization as the principle of operation. The contour points iteratively approach the object boundary in order to reach a minimum in energy levels. There are two energy components to the overall energy equation of an active surface. The ‘internal’ energy component is dependent on the shape of the contour. This component represents the forces acting on the contour surface and constraining it to be smooth. The ‘external’ energy component is dependent on image properties such as the gradient, properties that draw the contour surface to the target boundary/object. At step 146, the outputs of steps 143 and 144, which define likelihood functions, are used together with SSPs, in an exemplary embodiment using Bayes' rule, to segment the regions of the head, helmet, eyes, eyebrows, nose, mouth, etc. in the image(s). A helmet is defined here as the outer 3D surface of the head including the chin and cheeks but excluding the eyes, nose, mouth and hair. The result is a set of hypotheses that provide a segmentation of various parts of the head along with a confidence measure for each segmentation. (Segmentation refers to the sectioning out of specific objects from other objects within an image or video frame. In an exemplary embodiment, an outline that conforms to the object perimeter is generated to localize the object of interest and segregate it from other objects in the same frame.) The confidence measure, in an exemplary embodiment, is defined as the maximum value of the probability density function at the segmented part's location. If the confidence measure is not above a certain threshold (in certain challenging cases, eg. partial occlusion, bad lighting, etc.), other methods are invoked at the advanced primitive extraction step 145 (for example, methods based on depth from focus, structure from motion, structure from shading, specularity, silhouette, etc.; techniques similar to [18], [19], [20], [21] and [22]). In an exemplary embodiment, this is done by selecting a method in a probabilistic fashion by sampling for a method from a proposal density (such as the one shown in FIG. 6I).
For example, if the face of the user is in a shadow region, a proposal density is selected that gives the probability of successfully segmenting the parts of a face under such lighting conditions for each method available. From this density a method is sampled and used to segment the facial features and provide a confidence measure of the resulting segmentation. If the updated confidence is still below the acceptable threshold, the probability density is sampled for another method and the process is repeated until either the confidence measure is over the threshold or the maximum number of iterations is reached at which point the method asks for user assistance in identifying the facial features.
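The fallback loop just described, in which segmentation methods are sampled from a proposal density until the confidence exceeds a threshold (or user assistance is requested), can be sketched roughly as follows. The method names, confidences and proposal probabilities are placeholders:

```python
import random

def segment_with_fallback(image_region, methods, proposal_probs,
                          threshold=0.8, max_iters=5):
    """Sample segmentation methods from a proposal density until one yields a
    confident segmentation, or give up after max_iters tries (at which point
    the caller would ask the user to identify the features manually).

    methods        : dict name -> callable(image_region) -> (segmentation, confidence)
    proposal_probs : dict name -> probability of success under current conditions
    """
    names = list(proposal_probs)
    weights = [proposal_probs[n] for n in names]
    best = (None, 0.0)
    for _ in range(max_iters):
        name = random.choices(names, weights=weights, k=1)[0]
        segmentation, confidence = methods[name](image_region)
        if confidence > best[1]:
            best = (segmentation, confidence)
        if confidence >= threshold:
            return segmentation, confidence
    return best  # below threshold: caller falls back to user assistance

# Toy stand-ins for two segmentation techniques under shadowed lighting:
methods = {"active_contours": lambda img: ("contour-mask", 0.6),
           "shape_from_shading": lambda img: ("shading-mask", 0.85)}
print(segment_with_fallback("face-crop", methods,
                            {"active_contours": 0.3, "shape_from_shading": 0.7}))
```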
  • As each of the features or parts of the face is successfully segmented, a graphical model is built that predicts the location of the other remaining features or parts of the face. This is done using SSPs to build a graphical model (for eg. a Bayes Net). Reference is made to FIG. 6E, where a graphical model is shown in an exemplary embodiment, and to FIG. 6F, where the corresponding predicted densities are shown in image coordinates. The connections between the nodes can be built in parallel. As the method progresses, the prior on the location from the previous time step is used together with the observation from the image (result of applying a segmentation method mentioned above), to update the probability of the part that is being segmented and the parts that have been segmented, and to predict the locations of the remaining parts using sequential Bayesian estimation. This is done simultaneously for more than one part. For example, if the location of the second eye is observed and updated, it can be used to predict the location of the nose, mouth and the eyebrow over the second eye as shown in FIG. 6E. A simplified walkthrough of the sequential Bayesian estimation for segmenting the regions of the face is shown in FIG. 6F.
  • Simultaneously with steps 143-145, the pose of the face is determined. In an exemplary embodiment, on identification of specific facial features such as the eyes and mouth, an isosceles triangle connecting these features is identified. The angle of facial orientation is then determined by computing the angle between this isosceles triangle and the image plane. The pose thus computed also serves as a parameter at the classification step 151. The segmentation methods used are designed to segment the parts of the head at smooth boundaries. Next, parameters corresponding to these parts such as pose, lighting, gender, age, race, height, weight, mood, face proportions, texture etc. are computed. In an exemplary embodiment, this is done as follows: once a majority of the parts of the head are identified, they are projected onto a corresponding manifold in feature space (eg. edge space). In an exemplary embodiment, a manifold exists for each part of the face. These manifolds are built by projecting the 3D surface corresponding to a part of the face onto an image plane (perspective projection) for a large number of parts (corresponding to different poses, lighting conditions, gender, age, race, height, weight, mood, face proportions, etc.), applying a feature filter (eg. a Canny edge detector) at step 149 to convert to a feature space (eg. edge space, color space, texture space, etc.), and then applying a dimensionality reduction technique such as principal component analysis (PCA), probabilistic principal component analysis (PPCA), 2D PCA, Gaussian Process Latent Variable Models (GPLVM), or independent component analysis (ICA). Since the manifolds are parameterized by pose, lighting, gender, age, race, height, weight, mood, face proportions, texture etc., projecting a given segmented part of the head onto the manifold allows recovery of these parameters (for example [23]). These parameters are then passed onto a classifier (at step 151), in an exemplary embodiment a Naïve Bayes classifier, a support vector machine (SVM), or a Gaussian Process classifier, to output the most plausible 3D surface given the parameters. In an exemplary embodiment, if a particular parameter is already supplied as part of 118, eg. the gender of the user, then it is used directly with the classifier and the corresponding computation is skipped (eg. estimation of gender). Teeth reconstruction is also handled similarly. The teeth that are constructed are representative of those in the image provided, including the color and orientation of the teeth. This is needed later for animation and other purposes, such as to virtually show the results of dental corrections, whitening products, braces, Invisalign, etc. Hair is also handled similarly. In this case, the manifold is additionally parameterized by the 3D curvature, length, specularity, color, 3D arrangement, etc. In an exemplary embodiment, a helical model is used as the underlying representation for a hair strand. In an exemplary embodiment, hair can be modeled from image(s) using techniques similar to [24-26]. If, however, the action factors do not allow a representation of the teeth, ears and hair exactly as in the image, less complex precomputed models are used. Once 3D surface exemplars for various parts of the head (for example, a helmet as defined above, eyes, nose, mouth, etc.) are identified as outputs of the classifier, at step 152 a new model is instantiated by instantiating a copy of the identified exemplar surfaces.
Since the instantiated surfaces are parametric by construction, these parametric models are modified slightly (within allowed limits), if necessary, to represent parameters as extracted from the image(s) wherever possible at the optimization step 153. The exemplars that are used with the classifier are rigged models and thus enable easy modifications. In an exemplary embodiment, the size of the skeletal structures and the weight of the nodes are modified to match the extracted parameters. The rigged models also allow user modifications (as described with reference to FIG. 29B) and facilitate animations. At the postprocessing step 154, the 3D surfaces generated at step 153 are merged. The boundaries of the 3D surfaces corresponding to the parts of the face are merged and smoothed using techniques similar to those used at the head-body fusion step 155 (FIG. 6C). Symmetry is used to complete occluded or hidden parts. For example, if the user's hair is partially occluding one side of the face, symmetry is used to complete the missing part. If not enough information is available, the most likely surface and texture are substituted. For example, if the user's teeth are not visible owing to the mouth being closed, the most likely set of teeth given the parameters corresponding to the user is substituted. In an exemplary embodiment, the most likely surface and texture are computed using a classifier such as Naïve Bayes, while the placement is computed using SSPs and Bayesian inference. As an alternate embodiment, 3D surfaces of the entire head for different combinations of constituent part parameters are maintained and an appropriate model is instantiated at step 152 based on the output of the classification step 151. At the conclusion of the postprocessing step 154, a preliminary 3D model of the user's head is available, which is passed onto the head-body fusion step 155. As mentioned earlier, the body analysis module 122 proceeds similarly to the head analysis module 121, where instead of extracting parameters of parts of the face, parameters of the various body parts (excluding the head) are extracted from the image(s) and/or videos. In an exemplary embodiment, the local feature analysis step 142 for the body analysis module 122 involves individually analyzing the upper limbs, the lower limbs, the thorax, the abdomen, and the pelvis. In an exemplary embodiment, the location of the body in the image and its pose is identified at the preprocessing step 141 using a technique similar to that used in [27]. At the conclusion of the postprocessing step 154 of the body analysis module 122, a preliminary 3D model of the user's body is generated, which is passed onto the head-body fusion step 155.
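A minimal sketch of the symmetry-based completion just described, assuming the head mesh is roughly aligned so that its plane of symmetry is x = 0 and that a left/right vertex correspondence has been precomputed. Both assumptions, and the toy data, are for illustration only:

```python
import numpy as np

def complete_by_symmetry(vertices, known_mask, mirror_index):
    """Fill in unknown vertex positions by reflecting their mirror partners
    across the sagittal plane (assumed here to be x = 0).

    vertices     : (n, 3) array; rows for unknown vertices may hold garbage
    known_mask   : (n,) boolean array, True where the position is trusted
    mirror_index : (n,) array; mirror_index[i] is the vertex paired with i
                   across the plane of symmetry (a precomputed correspondence)
    """
    completed = vertices.copy()
    reflect = np.array([-1.0, 1.0, 1.0])
    for i in np.where(~known_mask)[0]:
        j = mirror_index[i]
        if known_mask[j]:
            completed[i] = vertices[j] * reflect   # mirror the known partner
    return completed

# Toy example: 4 vertices, vertex 1 is occluded; its mirror partner is vertex 0.
verts = np.array([[0.5, 1.0, 0.2], [0.0, 0.0, 0.0], [0.1, 2.0, 0.0], [-0.1, 2.0, 0.0]])
known = np.array([True, False, True, True])
print(complete_by_symmetry(verts, known, np.array([1, 0, 3, 2])))
```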
  • At the head-body fusion step 155, the head model estimate and the body model estimate are merged using smoothness assumptions at the boundaries, if necessary. In an exemplary embodiment, this is accomplished by treating the regions at the boundaries as B-splines and introducing a new set of B-splines to interconnect the two regions to be merged (analogous to using sutures) and shrinking the introduced links until the boundary points are sufficiently close. A 1-D example is shown in FIG. 6G. Alternatively, the boundaries at the neck region may be approximated as being pseudo-circular and the radii of the body model's neck region and the head model's neck region can be matched. This may involve introducing a small neck region with interpolated radius values. Other methods such as the one proposed in [28] could also be used. The choice of the method used for fusion depends, in an exemplary embodiment, on the action factors. For instance, if limited data is provided by the user, leading to a relatively coarse approximation to the user, the pseudo-circular approximation method mentioned above is used. As another example, a particular version of an NPR model desired by the user may not require a sophisticated model, in which case the pseudo-circular approximation method mentioned above is used. The output of the head-body fusion step 155 is passed onto the information fusion step 127.
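One hedged reading of the pseudo-circular approximation above: blend the head model's neck radius into the body model's neck radius by inserting a short connecting region of interpolated, circular cross-sections. The radii, ring counts and lengths in this sketch are illustrative:

```python
import numpy as np

def interpolated_neck_radii(head_neck_radius, body_neck_radius, n_rings=5):
    """Radii for a small connecting neck region that blends the head model's
    neck boundary into the body model's neck boundary."""
    return np.linspace(head_neck_radius, body_neck_radius, n_rings)

def neck_ring(radius, z, n_points=16):
    """Points of one pseudo-circular cross-section of the connecting neck."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    return np.stack([radius * np.cos(angles), radius * np.sin(angles),
                     np.full(n_points, z)], axis=1)

# Blend a head neck radius of 5.6 cm into a body neck radius of 6.1 cm
# over a 2 cm connecting region made of 5 rings.
radii = interpolated_neck_radii(5.6, 6.1, n_rings=5)
rings = [neck_ring(r, z) for r, z in zip(radii, np.linspace(0.0, 2.0, 5))]
print(radii)
```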
  • Reference is now made to the measurements analysis module 123 that processes the measurements provided by the user in order to construct a user model or part thereof. These measurements include the various head and body measurements 115 provided by the user. The measurements 115 provided are used to estimate any missing measurements based on anatomical and anthropometric data, and data on plastic surgery available as part of the prior knowledge 112. As an example of the construction of a head model, given the width, x, of one of the user's eyes, the proportions of the remaining parts of the head are generated based on anthropometric data as follows: the diameter of the head, along the eyes and the ears is taken to be 5×, the distance from the trichion to the menton is taken to be 6×. If the user's ethnicity is known, then the shape is appropriately adjusted based on anthropometric data. For example, the shape of an average Asian head as seen from above is circular while that of an average Caucasian is elliptical. This information is then passed to a classifier to output the most plausible 3D surface of the head given the parameters. Measurements of the body are used to instantiate a model corresponding to these measurements from a generative model. A generative model is available as part of the prior knowledge 112 and is constructed, in an exemplary embodiment, using anthropometric data. In an exemplary embodiment, this is done using techniques similar to those used in [29, 30]. If a very limited number of measurements are available in addition to images, they are passed onto the classifier at step 151 and the extraction of the corresponding measurement from the image(s) or video(s) is skipped, in an exemplary embodiment. The output of the measurements analysis module is passed onto the information fusion step 127.
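A minimal sketch of the proportion fill-in described above, using only the two ratios quoted in the text (head breadth of about five eye widths, trichion-to-menton distance of about six eye widths); any further ratios and the ethnicity adjustment would come from the anthropometric tables held as prior knowledge, and the shape labels below are simplifications:

```python
def head_proportions_from_eye_width(eye_width_cm, ethnicity=None):
    """Estimate coarse head dimensions from one measured eye width, using the
    simple anthropometric ratios cited in the text."""
    proportions = {
        "head_breadth_cm": 5.0 * eye_width_cm,        # across the eyes and the ears
        "trichion_to_menton_cm": 6.0 * eye_width_cm,  # hairline to chin
    }
    # The top-view head outline is adjusted when the ethnicity is known
    # (e.g. closer to circular vs. elliptical), per the anthropometric data.
    proportions["top_view_shape"] = {"asian": "circular",
                                     "caucasian": "elliptical"}.get(ethnicity, "unknown")
    return proportions

print(head_proportions_from_eye_width(3.1, ethnicity="caucasian"))
```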
  • Reference is now made to the apparel size analysis module 124 in FIG. 6A that processes the apparel size/preferences 116 provided by the user in order to construct a user model or part thereof. Prior knowledge 112 includes an association of an average 3D model with size data for shirts, dresses, trousers, skirts, etc. For example, there is an average 3D model of the upper body of a male associated with a men's shirt collar size of 42, and similarly a model of the lower body for a trouser waist size of 32 and a length of 32, or a hat size of 40 cm, or a shoe size of 11. This can be done, in an exemplary embodiment, by computing the average of the upper body 3D surfaces of several models (obtained from range scans after filtering noise and rigging) of men who have identified a collar size of 42 as their preferred shirt size. In another exemplary embodiment, the generative models learnt from anthropometric data, for example as in [29], may have size parameters mapped to apparel size, thereby giving a generative model that is parameterized by apparel size. These models are also rigged, in an exemplary embodiment using a technique similar to that used in [31], to allow animation. Thus, in an exemplary embodiment, a user model can be created from apparel size data by (i) instantiating the corresponding average 3D model for the various body parts for which an apparel size is specified, or instantiating the part of the body corresponding to the apparel using a generative model parameterized by apparel size, and (ii) merging the 3D surfaces for the various body parts using merging techniques similar to those used at step 155, using the most probable generic models for body parts (available from the generic module 126) for which apparel size is not provided. The output of the apparel size analysis module is passed on to the information fusion step 127.
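A minimal sketch of steps (i) and (ii) above is given below, assuming a registry that maps (garment type, size) pairs to stored average rigged meshes; the registry contents and the helper callables are hypothetical stand-ins for the real mesh store, generic module, and merging routine.

```python
# Hypothetical registry mapping (garment type, size) to a stored average rigged mesh.
AVERAGE_MODELS = {
    ("shirt_collar", 42): "avg_male_upper_body_collar42.mesh",
    ("trouser_waist_length", (32, 32)): "avg_male_lower_body_32x32.mesh",
    ("shoe", 11): "avg_foot_size11.mesh",
}

def instantiate_from_apparel_sizes(sizes, load_mesh, generic_part, merge):
    """Build body-part models from apparel sizes, falling back to generic parts.
    `load_mesh`, `generic_part`, and `merge` are supplied by the pipeline."""
    parts = {}
    for key, size in sizes.items():
        entry = AVERAGE_MODELS.get((key, size))
        parts[key] = load_mesh(entry) if entry else generic_part(key)
    # Boundary merging as at the head-body fusion step 155.
    return merge(parts.values())

# Usage with trivial stand-ins (the real pipeline supplies mesh loading and merging):
result = instantiate_from_apparel_sizes(
    {"shirt_collar": 42, "shoe": 11},
    load_mesh=lambda path: f"mesh<{path}>",
    generic_part=lambda part: f"generic<{part}>",
    merge=lambda meshes: list(meshes),
)
print(result)
```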
  • Reference is now made to the mesh analysis module 125 in FIG. 6A that processes the laser scan data/meshes/outlines 117 provided by the user in order to construct a user model or part thereof. The steps of the mesh analysis module are shown in FIG. 6H in an exemplary embodiment. After receiving user data 111 and prior knowledge 112, once invoked, this module first sorts 156 the data [such as laser scan data, meshes (for instance, those corresponding to impressions of the ear or foot), outlines of body parts (for instance, those of the hands and feet), mocap (motion capture) data, magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), and computed tomography (CT) data] to determine the most accurate choice of data to use for model construction. This is done using knowledge of the accuracy of a model constructed using each of the pieces of data above, available as part of prior knowledge 112, and the quality of the data provided, such as the poly count of a mesh. The user is also allowed to force the use of a preferred data source, for example mocap data as opposed to a laser scan, for model construction by specifying the reliability of the data manually. For meshes, the module 125 then proceeds as follows: The module 125 filters the data at step 157 to remove any noise and to correct any holes in the data. This is done, in an exemplary embodiment, using template-based parameterization and hole-filling techniques similar to those used in [29]. At this step, unnecessary information such as meshes corresponding to background points is also removed. This can be done, in an exemplary embodiment, by asking the user to mark such regions through an intuitive user interface. This is followed by the fill-in step 158, at which symmetry is used to complete missing regions such as an arm, if any. If mesh or volume data is not available for the missing regions, the corresponding regions are generated by the generic module 126 and fused at the information fusion step 127. The model is then rigged at the rigging step 159. Rigging provides a control skeleton for animations and also for easily modifying the body parts of the user's model. The mesh output from step 158 is used with a generic human skeleton and an identification of the orientation of the mesh to automatically rig the mesh. Generic male and female versions, one for each of the age groups 0-8, 8-12, 13-20, 21-30, 31-60, and 60+ in an exemplary embodiment, are available as part of the prior knowledge 112. The orientation of the mesh (i.e., which side is up) is obtained from the mesh file's header. If unavailable in the header, the orientation of the mesh is obtained by asking the user through an intuitive user interface. Rigging is done automatically, in an exemplary embodiment, using a technique similar to that used in [31]. It can also be done using techniques similar to those used in [32, 33].
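The sorting step 156 can be illustrated as follows. This sketch assumes per-data-type accuracy priors (standing in for prior knowledge 112) and a per-item quality score such as a normalized poly count; all numeric values are placeholders.

```python
# Assumed accuracy priors for each data type; the values are illustrative only.
SOURCE_ACCURACY = {"laser_scan": 0.95, "mesh": 0.90, "mri": 0.88, "ct": 0.88,
                   "mocap": 0.70, "outline": 0.50}

def rank_sources(provided, user_override=None):
    """Order the user-supplied data items by expected model accuracy, weighting the
    prior accuracy by a per-item quality score (e.g. normalized poly count).
    A manual override forces a preferred source to the front of the list."""
    def score(item):
        kind, quality = item["kind"], item.get("quality", 1.0)
        if user_override and kind == user_override:
            return float("inf")
        return SOURCE_ACCURACY.get(kind, 0.1) * quality
    return sorted(provided, key=score, reverse=True)

data = [{"kind": "outline", "quality": 1.0},
        {"kind": "mesh", "quality": 0.6},      # low poly count
        {"kind": "mocap", "quality": 0.9}]
print([d["kind"] for d in rank_sources(data)])
print([d["kind"] for d in rank_sources(data, user_override="mocap")])
```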
  • For laser scan data, a mesh is first constructed, in an exemplary embodiment, using a technique similar to that used in [34]. This mesh is then passed on to the fill-in step 158 and the rigging step 159 described above. For mocap data, a model is generated using shape completion techniques such as that used in [35], in an exemplary embodiment. The model thus generated is rigged automatically, in an exemplary embodiment, using a technique similar to that used in [31]. For outlines, this module extracts constraints from the outlines and morphs the mesh to satisfy the constraints. In an exemplary embodiment, this is done as follows: (i) Feature points on the outline corresponding to labeled feature points on the mesh (for example, points over the ends of the eyebrows, over the ears, and the occipital lobe) are identified by the user through a guided interface such as the one shown in FIG. 11. This can also be automated using perceptual grouping and anatomical knowledge. For example, consider a scenario where a user prints out a sheet that has a reference marker from the website and draws an outline of his/her foot, or takes an image of his/her foot with a penny next to the foot. Given such an image, the image is first scaled to match the units of the coordinate system of the 3D mesh using scale information from the reference markers in the image. If a reference marker is not present, the image is searched for commonly known objects such as a highlighter or a penny using template matching, and the known size of such objects is used to set the scale of the foot outline. Or, the user may be asked to identify at least one measurement on the foot. The orientation of the foot is then identified. This is done by applying a Canny edge detector to get the edge locations and the orientations, connecting or grouping edgels (a pixel at which an edge has been identified) that have an orientation within a certain threshold, and finding the longest pair of connected edges. This gives the orientation of the foot. Both ends of the foot are searched to identify the region of higher frequency content (using a Fourier transform or simply projecting the region at each end onto a slice along the foot and looking at the resulting histogram) corresponding to the toes. The big toe is then identified by comparing the widths of the edges defining the toes and picking the one corresponding to the greatest width. Similarly, the little toe and the region corresponding to the heel are identified, and reference points on these regions corresponding to those on the 3D meshes are marked, which now define a set of constraints. (ii) The corresponding reference points are then displaced towards the identified reference points from the image using finite element method (FEM) techniques such as those used in [36], [37], or as in [38]. The extracted constraints are also passed on to the other modules 120, 123, 124 and 126 and a similar method is applied to ensure that the generated model conforms to the constraints. Such morphing of the mesh to conform to constraints is particularly used, if the action factors allow, for parts of the body that cannot be easily approximated by a cylinder, such as the head. Such morphing of the mesh based on constraints provided by the user, such as an outline or an image of their foot or fingers, is useful for computing goodness-of-fit information for apparel such as shoes and rings.
(For the case of rings, it is also possible to simply measure the circumference of the ring and let the measurements analysis module construct the appropriate model). For rings, two roughly orthogonal images of the fingers with a reference material in the background, or an outline of the fingers on a printable sheet containing a reference marker, could be used and analyzed as above. Or, a user's hand can be placed in front of a webcam with a reference marker on paper in the background, or with a computer screen containing a reference marker in the background. The advantage of such image-based constraint extraction is that it allows multiple fingers to be captured at once. This is particularly useful when buying, say, mittens or gloves or a ring for a friend as a surprise gift. The user simply needs to take an image(s) of the appropriate region of his/her friend's body and mark the size of some known object in the image, for example, the width of the user's face. The more information is provided, the more accurate the user's model becomes. For example, for some people, the ring size for the right index finger is different from that of the left hand; images of both hands ensure a more accurate goodness-of-fit. Imprints and moulds, such as those of the foot and ears, can be converted to meshes by laser scanning. It can also be done by taking multiple images of the imprints and moulds and constructing the mesh using structure from focus, structure from motion, structure from shading, specularity, etc., using techniques similar to those used in [18] and [22]. Medical images and volumes such as MRI and CT volumes can also be used, if available, to create the user model or part thereof. This can be done using techniques similar to those used in [39, 40].
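A simplified sketch of the scale and orientation steps for a foot outline is given below. It assumes the edge locations and per-edgel orientations have already been computed (for example, with a Canny detector), and it replaces the edgel-grouping/longest-pair step with a dominant-orientation histogram for brevity; the thresholds and reference-marker length are placeholders.

```python
import numpy as np

def set_scale(pixels_per_marker, marker_length_cm):
    """Centimetres per pixel from a reference marker of known printed length."""
    return marker_length_cm / pixels_per_marker

def dominant_orientation(edge_orientations, bin_width_deg=5.0):
    """Simplified stand-in for grouping edgels by orientation: histogram the
    edgel orientations and return the dominant direction (the long axis of the foot)."""
    bins = np.arange(0.0, 180.0 + bin_width_deg, bin_width_deg)
    hist, _ = np.histogram(np.degrees(edge_orientations) % 180.0, bins=bins)
    peak = np.argmax(hist)
    return np.radians(bins[peak] + bin_width_deg / 2.0)

# Toy example: edgels lying roughly along a 30-degree axis.
rng = np.random.default_rng(0)
orientations = np.radians(30.0 + rng.normal(0.0, 2.0, size=500))
print("cm/px:", set_scale(pixels_per_marker=200.0, marker_length_cm=5.0))
print("foot axis (deg):", np.degrees(dominant_orientation(orientations)))
```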
  • For images from multiple views of a user with known image acquisition geometry, a volume is first created as follows and processed as described above for the case of laser scan data. (i) Each image is preprocessed and a transform is applied, producing a feature space image. For example, a silhouette transform is applied, which produces an image with a silhouette of the object(s) of interest. This can be done, in an exemplary embodiment, using a technique similar to that used in [41]. (ii) The silhouette is then backprojected. This can be done, in an exemplary embodiment, by summing the contributions from each of the silhouettes taking into account the geometry provided, as shown in FIG. 6J. Using the geometry of the image capture (this is usually a perspective projection or can be approximated with an orthographic projection), rays are traced from pixels on the feature space transformed images to voxels (3D pixels) of a volume (a 3D image). To each of the voxels along the path of a ray, the value of the pixel in the feature space transformed image is added. This added value may be corrected for a 1/r² effect (inverse square law of light and electromagnetic radiation). Once a mesh is created, knowledge of the silhouette is used to extract the texture of the object of interest and, using the image acquisition geometry, the model is textured as described at the primary model instantiation step 128. Backprojection can also be done in the frequency domain using a technique similar to that described in [42]. Instead of using the silhouette above, any other feature space transform can be used. For images from multiple views of an object(s) with unknown or limited geometry information, the images are processed as described above with geometry information extracted from the images as follows: (i) Detect salient features. This is done, in an exemplary embodiment, by using statistics on regions that are interesting to humans, extracted by tracking eye movements. In another exemplary embodiment, it can be done using prior knowledge of the parts of the object of interest. For example, the eyes, nose and mouth can be identified using techniques similar to those used at step 121. (ii) Form triangles by connecting the salient features. For example, the eyes, nose, and mouth of a person in an image may be connected to form a triangle. (iii) Determine image-to-image transformations of the corresponding triangles. This can be done, in an exemplary embodiment, using a technique similar to that used in [43]. These transformations define the image acquisition geometry, which is then processed along with the images to construct a model as described above. Instead of using triangles, other structures or networks of structures may be used above. The method described above allows construction of a model from arbitrary views of an object or person taken using an ordinary camera. Planes in the image can also be identified by detecting lines diminishing towards a vanishing point. This can be used to construct a model of the environment, if desired. It can also be used to aid in background subtraction. A technique similar to the one presented in [44] can also be used for the environment. The output of the mesh analysis module is passed on to the information fusion step 127.
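The silhouette backprojection of step (ii) can be sketched, under an orthographic approximation and without the optional 1/r² correction, as an accumulation of silhouette values into a voxel grid; the grid size, extent, and toy silhouettes below are purely illustrative.

```python
import numpy as np

def backproject_silhouettes(silhouettes, rotations, grid_size=64, extent=1.0):
    """Sum silhouette contributions into a voxel volume under an orthographic
    approximation: each voxel centre is rotated into a view, projected onto the
    image plane, and the silhouette value at that pixel is accumulated."""
    lin = np.linspace(-extent, extent, grid_size)
    xs, ys, zs = np.meshgrid(lin, lin, lin, indexing="ij")
    voxels = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    volume = np.zeros(len(voxels))
    for sil, R in zip(silhouettes, rotations):
        h, w = sil.shape
        cam = voxels @ R.T                      # rotate voxel centres into the view
        u = ((cam[:, 0] + extent) / (2 * extent) * (w - 1)).round().astype(int)
        v = ((cam[:, 1] + extent) / (2 * extent) * (h - 1)).round().astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        volume[inside] += sil[v[inside], u[inside]]
    return volume.reshape(grid_size, grid_size, grid_size)

# Toy example: a centred square silhouette seen from two orthogonal views.
sil = np.zeros((32, 32)); sil[8:24, 8:24] = 1.0
front = np.eye(3)
side = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
vol = backproject_silhouettes([sil, sil], [front, side])
print("voxels hit by both views:", int((vol >= 2.0).sum()))
```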
  • Reference is now made to the generic module 126 in FIG. 6A, which is used to construct a user model or part thereof. This module processes other data 118, if available, together with prior knowledge 112 in order to produce a generic model or part thereof. This module is invoked when there is insufficient information for constructing a user model or part thereof via the other modules 120, 123, 124, and 125, or if the action factors do not allow the generation of a more accurate model that is conformal to the user through modules 120, 123, 124, and 125. When invoked, the information in other data 118 or that provided by the modules 120, 123, 124, and 125 is passed on to a classifier similar to that used at step 151. In an exemplary embodiment, a Naïve Bayes classifier, a support vector machine (SVM), or a Gaussian process classifier is used to output the most plausible 3D surface given the information. If only a part of the model (such as a limb) is required by the other modules 120, 123, 124, and 125, then only the required part is generated using the classifier. If the whole model is required, then the entire user model is generated using the classifier. In an exemplary embodiment, the classifier outputs an exemplar that is a rigged model. The rigged exemplar is then modified, if necessary, to better match the user. For example, if other data 118 specifies an age of five years and a height of five feet, and the closest exemplar is a user model corresponding to a five year old that is four and a half feet tall, the height of this exemplar is changed from four and a half to five feet by setting the parameters of the rigged user model accordingly. The classifier is built using labeled training data. In an exemplary embodiment, this is done using rigged 3D surfaces or meshes that have associated with them labels identifying the age, gender, weight, height, ethnicity, color, apparel size, etc. of the corresponding 3D surface or mesh. The labeling can be done manually as it only needs to be done once when building the classifier. The classifier is stored and available as part of prior knowledge 112. As more and more data becomes available, the classifier is updated at the learning step 132. In essence, the method 110 is constantly learning and improving its model construction process.
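A nearest-exemplar stand-in for the classifier used by the generic module is sketched below (the specification mentions Naïve Bayes, SVM, or Gaussian process classifiers; this simpler distance-based version is only meant to show exemplar selection and the subsequent adjustment). The exemplar attributes and normalization constants are made up.

```python
import numpy as np

# Hypothetical exemplar library: each rigged exemplar summarized by labeled attributes.
EXEMPLARS = [
    {"name": "child_5y", "age": 5, "height_cm": 137, "weight_kg": 28},
    {"name": "adult_f",  "age": 30, "height_cm": 165, "weight_kg": 60},
    {"name": "adult_m",  "age": 35, "height_cm": 180, "weight_kg": 80},
]
FEATURES = ["age", "height_cm", "weight_kg"]
SCALE = {"age": 30.0, "height_cm": 50.0, "weight_kg": 40.0}   # rough normalization

def closest_exemplar(other_data):
    """Pick the labeled rigged model whose attributes best match the supplied data,
    ignoring attributes the user did not provide."""
    def distance(ex):
        diffs = [((other_data[f] - ex[f]) / SCALE[f]) ** 2
                 for f in FEATURES if f in other_data]
        return np.sqrt(sum(diffs)) if diffs else np.inf
    best = min(EXEMPLARS, key=distance)
    # The rig parameters of `best` would then be adjusted (e.g. height rescaled)
    # to match the provided values exactly, as described in the text.
    return best

# A five-year-old who is five feet (152 cm) tall maps to the child exemplar.
print(closest_exemplar({"age": 5, "height_cm": 152})["name"])
```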
  • The processed information from the modules 120, 123, 124, 125, and 126, if available, is then fused at the information fusion step 127. At this step, merging of the outputs of components of 120, 123, 124, 125, and 126 takes place. There is an accuracy associated with the output of the modules 120, 123, 124, 125, and 126, available as part of prior knowledge 112. Based on this accuracy, components of various parts of the user's model are merged. For example, the full body output of the generic module 126 may be merged with a high resolution model of the user's foot available as an output of the mesh analysis module 125. This can be done, in an exemplary embodiment, using techniques similar to those used at the head-body fusion step 155. Parts of the skeleton are also joined at the joint locations. For example, for the above example, the full body skeleton is joined with the foot skeleton at the ankle joint. For regions of the body for which data is unavailable, the output of the generic module is used. For regions of the body for which multiple models of similar accuracy exist, the corresponding models are merged in a probabilistic framework. For example, the expected value of the 3D model's surface is computed over all pieces of data available as outputs of 120, 123, 124, 125, and 126 to produce an estimate of the 3D model of the user's head. In an exemplary embodiment, this is done using Bayesian model averaging; committees, boosting, and other techniques for combining models may also be used.
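For the probabilistic merging of overlapping part models, a simple accuracy-weighted expectation over corresponding vertices is sketched below as a stand-in for full Bayesian model averaging; the accuracy weights and toy surfaces are illustrative.

```python
import numpy as np

def fuse_surfaces(surfaces, accuracies):
    """Combine competing estimates of the same body-part surface (arrays of
    corresponding vertex positions) as an accuracy-weighted expectation."""
    surfaces = np.asarray(surfaces, dtype=float)          # (n_models, n_vertices, 3)
    weights = np.asarray(accuracies, dtype=float)
    weights = weights / weights.sum()                     # normalize to a distribution
    return np.tensordot(weights, surfaces, axes=1)        # expected surface

# Toy example: two head estimates of differing accuracy.
head_from_images = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
head_from_generic = np.array([[0.2, 0.0, 1.1], [1.1, 0.0, 0.9]])
print(fuse_surfaces([head_from_images, head_from_generic], accuracies=[0.8, 0.2]))
```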
  • At step 128, a preliminary 3D model is instantiated using the output of the information fusion step. The model is named and all the appropriate data structures are updated. The model is also textured at this step. This is done by setting up a constrained boundary value problem (BVP) with constraints defined by the feature point correspondence and using texture from the image(s) provided by the user. In an exemplary embodiment, this is done using a technique similar to that presented in [45] for the face. The feature point correspondence between points on the 3D model and those in the images is obtained using the segmentation results from step 146. Alternatively, this correspondence data may be obtained through a user interface. An exemplary embodiment of such a user interface is discussed in reference to FIG. 11. A texture map for the face is obtained by unwrapping a texture map from the input video sequence or input images using a technique similar to the texture mapping technique described in [46]. Before unwrapping the texture, the images may be processed to complete missing or occluded regions (such as occlusion by hair, glasses, etc.) using shape space priors and symmetry. Skin tone is also identified at this step. In an exemplary embodiment, regions representing skin can be identified by converting the image to a representation in the HSV (Hue, Saturation, Value) color space or RGB (Red, Green, Blue) color space. Skin pixels have characteristic HSV and RGB values. By setting the appropriate thresholds for the HSV or RGB parameters, the skin regions may be identified. The skin reflectance model may incorporate diffuse and specular components to better identify the skin. The variation of the pixel values (and higher order statistics), for example in RGB space, can be used to estimate the skin texture. This texture is then used to fill in skin surfaces with unspecified texture values, for example, ears that are hidden behind hair. In an exemplary embodiment, skin texture is extracted from the face and used wherever necessary on the head and the body, since the face of a user is usually visible in the image or video. Similarly, texture is computed and mapped for teeth, hair, and the iris and pupil of the eyes. If image or video data is unavailable, a generic texture is used. The choice of a generic texture is based on other information provided by the user as part of other data 118 (e.g., age, race, gender, etc.), if available.
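The HSV thresholding mentioned for skin identification can be sketched as follows; the threshold ranges are illustrative placeholders rather than calibrated values, and a production version would also use the reflectance model and higher-order statistics mentioned above.

```python
import numpy as np
import colorsys

def skin_mask(rgb_image, h_range=(0.0, 0.14), s_range=(0.15, 0.70), v_range=(0.35, 1.0)):
    """Threshold an RGB image (floats in [0, 1]) in HSV space to flag likely skin
    pixels. The threshold ranges here are placeholders, not calibrated values."""
    h, w, _ = rgb_image.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            hh, ss, vv = colorsys.rgb_to_hsv(*rgb_image[i, j])
            mask[i, j] = (h_range[0] <= hh <= h_range[1] and
                          s_range[0] <= ss <= s_range[1] and
                          v_range[0] <= vv <= v_range[1])
    return mask

# Toy example: one skin-toned pixel and one blue pixel.
img = np.array([[[0.86, 0.62, 0.51], [0.10, 0.20, 0.90]]])
print(skin_mask(img))   # expected: [[ True False]]
```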
  • The model is then optimized at step 129. Optimization involves improving the model to better match the user. Optimization procedures similar to those employed at steps 125 and 153 are used at a global scale, if necessary or possible, again depending on the user data and the action factors. Consistency checks are also made to ensure that the scale and orientation of the different regions of the model are plausible, and appropriate corrections are made if necessary. Textures on the model are also optimized at this step if the action factors allow. This involves optimizations such as re-illuminating the model so that the illumination is globally consistent and so that the model can be placed in new illumination contexts. This is done, in an exemplary embodiment, using techniques similar to those used in [19, 20, 47]. Forward and backward projection (from the 3D model to the 2D image and vice versa) may be applied in a stochastic fashion to ensure consistency with the 2D input image, if provided, and to make finer modifications to the model, if necessary, depending on the action factors. The comparison of the projected 3D model and the 2D image may be done in one or more feature space(s), for example in edge space. All of the actions performed are taken depending on the action factors as described earlier.
  • The method 110 then proceeds to step 130 at which the model is detailed. The photorealism of the model is enhanced and any special effects that are required for NPR are added based on the action factors. The photorealism is enhanced, for example, by using bump maps for, say, wrinkles and incorporating subsurface scattering for skin. Facial hair, facial accessories and finer detail are also added to the model.
  • Method 110 then proceeds to the user modification step 131 at which the user is allowed to make changes to the model if desired. These changes include, in an exemplary embodiment, changes to the skin tone, proportions of various body parts, textures (for example, the user may add scars, birthmarks, henna, etc.), etc. An easy-to-use user interface allows the user to make such changes as described later in this document. Users are also allowed to set default preferences for their model at this point. For instance, they may choose to have a photorealistic model or a nonphotorealistic (NPR) model as their default model (NPR models may be multi-dimensional: 1-D, 2-D, 2.5-D, 3-D, 4-D or higher). Users can also create several versions of their NPR model based on their specific taste. Such NPR models can be constructed by simply applying a new texture or using algorithms such as those described in [48-50]. At any point during model construction, the method may ask the user for assistance. The user is allowed to make changes to the model at any time. As the user ages, loses or gains weight, or goes through maternity, the model can be updated accordingly. As newer versions of the software are released, newer, more accurate versions of the model may be created using the information already supplied by the user or by prompting the user to provide more (optional) information. All the models created by the user are stored and the user is allowed to use any or all of them at any time. The models created by the user are stored in the user database 80 and are also cached on the client side 14 and 16 for performance purposes.
  • The model generated before user modifications, as well as the user modifications and user data 111, are passed on to the learning step 132, the output of which is used to update the prior knowledge 112 in order to improve the model construction method 110 over time. This can be done using reinforcement learning and supervised learning techniques such as Gaussian process regression. In an exemplary embodiment, the manifolds and the classifier used in the model construction process are updated. In an exemplary embodiment, if a model that is created is significantly distant from the existing exemplars of the classifier and has been encountered frequently, it is added as a new exemplar. At the conclusion of the user modification step 131, a user model is created.
  • If the user provides more data 111, the method assesses the quality of the data, for example, the resolution of the images, the poly count of the meshes, etc., in order to determine if the newer data can improve the model. If it is determined that the new data can improve the model, the method 110 processes the data to improve the quality of the user model and a new version of the model is created and stored. The measurements of various body parts can be updated at any time as the user ages, gains/loses weight, goes through maternity, etc.
  • The method 110 described above can be used for building models of other objects, for example, 3D objects for use in the virtual world. In an exemplary embodiment, the user can identify the class of the object (such as a pen, a laptop, etc.) for which a model is being created. The class of the object for which a model is being created is useful for selecting the appropriate priors for model construction for the given object from the prior knowledge 112. In an alternative embodiment, the class of the object being considered can be automatically determined as discussed with reference to FIG. 49Q.
  • In an exemplary embodiment, a generative model for motion is used. For example, for the case of walking, users are allowed to tune various parameters corresponding to a walking style, such as a masculine/feminine walking style, a heavy/light person walking style, a happy/sad walking style, etc. Such generative models are learnt, in an exemplary embodiment, using Gaussian process models with style and content separation using a technique similar to that used in [51].
  • When the action factors are very limiting, for example, on limited platforms such as a cell phone or a limited web browser, several approximations may be used to display a 3D model. In an exemplary embodiment, on rotating a user model, the user is presented with a 3D model of the user from a quantized set of views, i.e., if a user rotates his/her viewpoint, the viewpoint nearest to this user-selected viewpoint from a set of allowed viewpoints is chosen and displayed to the user. In this way, an entire 3D scene can be represented using only as many viewpoints as the system permits, thereby allowing a more compact and responsive user experience. In an exemplary embodiment, if a generic user model is used, precomputed views of the model corresponding to different viewpoints are used. In an exemplary embodiment, the apparel on a generic user model of a given size and the corresponding fit information is precomputed for various parameters (for example, for different apparel sizes) and the appropriate view is displayed to the user. In an exemplary embodiment, the view may be an image or an animation such as one showing the user walking in a dress. As an exemplary embodiment of how a 3D environment can be displayed when the action factors are limiting, static backgrounds may be used instead of dynamic ones. Moreover, instead of displaying a fully 3D environment, a quantized version of the environment may be displayed, i.e., as with the case of the user model, when the user chooses to navigate to a certain viewpoint, the closest available viewpoint from a set of allowed viewpoints for the environment is chosen and displayed to the user.
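Snapping a requested rotation to the nearest allowed viewpoint, as described above for limited platforms, reduces to a nearest-neighbour lookup over the permitted view angles; the sketch below uses a fixed 45-degree azimuth spacing purely for illustration.

```python
import numpy as np

# Allowed azimuth viewpoints (degrees) for a limited platform; the spacing is illustrative.
ALLOWED_AZIMUTHS = np.arange(0, 360, 45)

def nearest_allowed_view(requested_azimuth_deg):
    """Snap a user-requested rotation to the closest precomputed viewpoint,
    handling wrap-around at 360 degrees."""
    diffs = np.abs((ALLOWED_AZIMUTHS - requested_azimuth_deg + 180) % 360 - 180)
    return int(ALLOWED_AZIMUTHS[np.argmin(diffs)])

print(nearest_allowed_view(100))   # -> 90
print(nearest_allowed_view(350))   # -> 0  (wraps around)
```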
  • Users can also choose to create a strictly 2D user model and try out apparel in 2D. This is one of the several options available for NPR models. In an exemplary embodiment, this is done by invoking the generic module 126 with a 2D option for the classifier i.e. the output of the classifier is a 2D rigged mesh. The 2D classifier is built using the same technique as described for the 3D models but using 2D rigged models instead. Users can also draw a model of themselves. This can then be either manually rigged through a user-interface or automatically using a 2D form of the technique used in [31], in an exemplary embodiment. Users also have the option of creating their own 3D models, and using them for trying out apparel and for various entertainment purposes such as playing games and creating music videos containing their user model.
  • All data provided by the users and the models constructed are saved in a repository. In an exemplary embodiment, an application programming interface (API) may be available for developers to build applications using this data. In an exemplary embodiment, an application could use this data to determine items that fit a user as the user browses a catalog, as described later. In another exemplary embodiment, a mobile device or cell phone application could allow users to scan a bar code or an RFID (radio frequency identification) tag on an apparel item in a real store and see if the apparel fits the user. (Such scanning of bar codes or RFIDs and looking up of repositories can have other applications, such as scanning a food item to check if it is consumable by the user, i.e., whether its ingredients satisfy the dietary restrictions of a user).
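As an illustration only, a barcode-scanning client might query such a repository roughly as follows; the endpoint, parameters, and response fields are assumptions, since the specification does not define the API.

```python
import json
import urllib.request

# Hypothetical endpoint; the actual API referred to in the text is unspecified.
API_BASE = "https://example.com/api/v1"

def check_fit(user_id, barcode):
    """Look up a scanned apparel item by barcode and ask the repository whether
    it fits the stored user model. Endpoint and fields are illustrative assumptions."""
    url = f"{API_BASE}/fit?user={user_id}&barcode={barcode}"
    with urllib.request.urlopen(url) as response:
        result = json.load(response)
    return result.get("fits", False), result.get("recommended_size")

# Usage (would require the hypothetical service to be running):
# fits, size = check_fit(user_id="C6111", barcode="0123456789012")
```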
  • Reference is now made to FIGS. 7A-D which illustrate protocols for collaborative interaction in exemplary embodiments. These protocols can be used for a number of applications. These protocols are described next for the modes of operation of a Shopping Trip™. Other applications based on these protocols are described later in this document. A user may initiate a shopping trip at any time. There are four modes of operation of a shopping trip: regular, asynchronous, synchronous and common. In the regular mode, a user can shop for products in the standard way—browse catalogues, select items for review and purchase desired items. Whereas the regular mode of shopping involves a single user, the asynchronous, synchronous and common modes are different options for collaborative shopping available to users. In the asynchronous mode, the user can collaborate with other shoppers in an asynchronous fashion. The asynchronous mode does not require that the other shoppers the user wishes to collaboratively shop with be online. The user can share images, videos, reviews and other links (of products and stores, for instance) they wish to show other users (by dragging and dropping content into a share folder, in an exemplary embodiment). They can send them offline messages and itemized lists of products sorted according to ratings, price or some other criteria. In the asynchronous mode, any share, communication or other electronic collaborative operation can be performed without requiring other collaborators to be online at the time of browsing. The synchronous and common modes require all collaborating members to be online and permit synchronized share, communication and other electronic collaborative operations. In these modes, the users can chat and exchange messages synchronously in real time. In the synchronous mode, ‘synchronized content sharing’ occurs. Reference is made to FIG. 20 to describe this operation in an exemplary embodiment. Users involved in synchronized collaboration can browse products and stores on their own. ‘Synchronized content sharing’ permits the user to display the products/store view and other content being explored by other users who are part of the shopping trip by selecting the specific user whose browsing content is desired from a list 244 as shown in FIG. 20. For example, consider a shopping trip session involving two users—user 1 and user 2—browsing from their respective computing devices and browsers. Suppose user 1 and user 2 are browsing products by selecting “My view” from 244. Suppose user 1 now selects user 2 from the view list 244. As the selected user (user 2) browses through products/stores, the same content is displayed on user 1's display screen, thereby synchronizing the content on the display screens of users 1 and 2. User 1 may switch back to her view whenever she wants and continue browsing on her own. Similarly, user 2 can view the content of user 1 by selecting user 1 from the switch view list. In the common mode, users involved in the collaborative shopping trip are simultaneously engaged in browsing products or stores on their display screens. This mode can assume two forms. In the first form, a user is appointed as the ‘head’ from among the members of the same shopping trip. This head navigates/browses products and stores on their display screen and the same view is broadcast and displayed on the screens of all users of the same shopping trip.
In the second form, all users can navigate/browse through product, store or other catalogues and virtual environments, and the information/content is delivered in the sequence in which it is requested (to resolve user conflicts) and the same content is displayed on all user screens simultaneously using the protocol that is described below. In the common mode, all the users are engaged in a shopping trip in a common environment. This environment may be browsed independently by different members of the shopping trip, leading to different views of the same environment. The system in FIG. 20 involving synchronous collaboration between users may be integrated with a ‘One Switch View’ (OSV) button that allows users to switch between user views just by pressing one button/switch, which may be a hardware button or a software icon/button. The user whose view is displayed on pressing the switch is the one on the list following the user whose view is currently being displayed, in an exemplary embodiment. This OSV button may be integrated with any of the collaborative environments discussed in this document.
  • The techniques for accomplishing each of the four modes of operation are described next in an exemplary embodiment. Reference is now made to FIG. 7A where the regular mode of operation of a shopping trip is shown. An instance of a client 201 in the regular mode of operation makes a request to the server application 22 to view a product or a store or other data. In an exemplary embodiment, the request can be made using an HTTP request, RMI (remote method invocation), or RPC (remote procedure call). The client instance then receives a response from the server. Reference is now made to FIG. 7B where the asynchronous mode of operation is shown in an exemplary embodiment. In this case, the user instance 201 makes a request to the server. A list 203 of shopping trip members and their information is maintained on the server for any given user. The list 203 is a list of users that have been selected by the client C6111 to participate in the shopping trip. In response to the client's request, the server then sends a response to the client 201 with the requested content. If the item is tagged for sharing, the server adds it to a list of shared items for that user. Other users on the shopping trip may request to view the shared items, upon which the server sends the requisite response to this request. For instance, a user may view a product while browsing and may tag it as shared or add it to a share bin/folder. For instance, a user (C6111) may view a product and add it to a share bin. Other users (C6742, C5353) may then view the items in that bin. The shopping trip members list 203 may also be stored locally on the client's side in an alternative exemplary embodiment. Reference is now made to FIG. 7C where the synchronous mode of shopping is shown in an exemplary embodiment. When a client instance 201 makes a request to the server to view a product, for example, an appropriate response is sent not only to the client requesting the information but also to all members on the shopping trip list who have selected that client's browsing contents (refer to FIG. 20). In another exemplary embodiment, the synchronous mode works as follows: (1) A user, say USER1, visits a product page. (2) The product is registered in a database as USER1's last viewed page. (3) If another user, say USER2, has selected the option to show USER1's view, their view is updated with USER1's last viewed product. (4) When USER2 selects USER1's view, the view is updated every 3 seconds. (If there is no activity on the part of USER2 for a given period of time, USER2's client application may pause polling the database to save bandwidth and other computational resources. Upon reactivation by USER2, view updating may resume). Thus, updating of the views may be server driven or client driven. Users can specify user access privileges to content that belongs to them. For example, they can set access privileges to various apparel items in their wardrobe, allowing other users to access certain items and denying access to certain others. An icon notifies the user if the current view is being broadcast. The history of a trip is also available to the users. In an exemplary embodiment, this is done by showing the user the items that were registered in the database in step (2) above. This history can also be downloaded and saved by the users and can be viewed later. Reference is now made to FIG. 7D where the common mode of a shopping trip is shown in an exemplary embodiment.
In this figure, it is shown that several clients can simultaneously make a request and simultaneously receive a response. At any given time, any of the clients can send a request to the server to view an item, to explore an item (as discussed in reference to FIG. 36), etc., in an exemplary embodiment. The following is a description of the communication protocol for the common mode of operation of a shopping trip. When a client sends a request to the server, it also monitors a channel on the server (which could be a bit or a byte or any other data segment on the server, in an exemplary embodiment) to see if there are any simultaneous requests made by other users. If no simultaneous requests are detected, the client completes the request and the server responds to all clients in the shopping trip with the appropriate information requested. For instance, if a catalogue item is viewed by one of the users, all other clients see that item. As another example, if a client turns over a 3D item, then all other clients see the item turned over from their respective views. If, however, a simultaneous request is detected at the channel, then the client aborts its request and waits for a random amount of time before sending the request again. The random wait time increases with the number of unsuccessful attempts. If the response duration is lengthy, then requests are suspended until the response is completed by the server, in an exemplary embodiment. Alternatively, a conflict management scheme may be implemented wherein the client also monitors the server's response for a possible conflict and sends the request when there are no conflicts. In yet another exemplary embodiment, the server may respond to requests if there are no conflicts and may simply pause if there is a conflict. These protocols also apply to peer-to-peer environments, with the source of the data being the server and the requesting party being the client.
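The common-mode channel-monitoring protocol with a growing random wait resembles a randomized backoff scheme. The sketch below is a minimal illustration, assuming a client object that exposes channel_busy() and complete_request(); both names and the timing constants are hypothetical.

```python
import random
import time

def send_common_mode_request(client, request, max_attempts=8, base_wait=0.05):
    """Before completing a request, check the shared channel for simultaneous
    requests; on a collision, back off for a random interval that grows with the
    number of failed attempts, then retry."""
    for attempt in range(1, max_attempts + 1):
        if not client.channel_busy():
            # The server would then broadcast the response to all trip members.
            return client.complete_request(request)
        wait = random.uniform(0, base_wait * (2 ** attempt))   # grows with attempts
        time.sleep(wait)
    raise TimeoutError("could not acquire the shared channel")

# Minimal stand-in client for illustration:
class FakeClient:
    def __init__(self):
        self.busy_checks = iter([True, True, False])
    def channel_busy(self):
        return next(self.busy_checks, False)
    def complete_request(self, request):
        return f"broadcast:{request}"

print(send_common_mode_request(FakeClient(), "view item 5"))
```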
  • While viewing products, the content from the audio and video channels of users on the shopping trip, and also the output of common (collaborative) applications (such as a whiteboard-like overlay that users can use to mark items on the web page or in the environment, or write and draw on), can also be shared simultaneously. In an exemplary embodiment, for the asynchronous mode, the user may tag an item for sharing and add it to a bin along with a video, audio and/or text message. When other users request to see items in this bin, they are shown the product along with the audio, video or text message. In an exemplary embodiment, for the synchronous mode, the audio channels for all the users are added up and the video channel of whichever user's view is selected (FIG. 20) is shown. For the common mode of operation, in an exemplary embodiment, the audio channels from the users on the shopping trip are added up and presented to all the users, while the video stream may correspond to the user who has just completed sending a request successfully through the common mode communication protocol described above. Sessions may be saved as described before. The views and the timeline during any session can be annotated. These pieces of information are cross-referenced to enable the user to browse by any of the pieces of information and view the corresponding information.
  • For each of the above modes, the clients may also interact in a peer-to-peer fashion as opposed to going through a server. In an exemplary embodiment, in the synchronized mode, if the user makes a request for a webpage to the server, then that information can be passed on to the other clients on the shopping trip via a peer-to-peer protocol. A user may also be engaged in multiple shopping trips (in multiple shopping trip modes) with different sets of users. Additionally, sub-groups within a shopping trip may interact separately from the rest of the group and/or disjoin from the rest of the members of the shopping trip and then later resume activities with the group.
  • While operating in any of these modes, the user has the option to turn on an ‘automatic’ mode feature whereby the system engages the user in a guided shopping experience. In an exemplary embodiment, the user may select items or categories of items that the user is interested in and specify product criteria, preferences and other parameters. The user may also specify stores that the user is interested in browsing. Once this is done, the system walks the user through relevant products and stores automatically for a simulated guided shopping experience. The automated mode may be guided by a virtual character or a simulated effigy or a real person. The user can indicate at any time if she wishes to switch to the manual mode of shopping. The modes of operation presented here for shopping can be applied to other collaborative applications. For instance, going on a field trip, or virtual treasure hunt, or sharing applications as discussed with reference to FIG. 49 O.
  • Reference is now made to figures that describe the system 10 in greater detail, through sample images that are taken from the system 10. The sample images describe the operation of the system 10 with examples that are provided through sample screen shots of the use of the system 10.
  • Reference is now made to FIG. 8 and FIG. 31, where a sample main page screen 250 is shown, in an exemplary embodiment. The sample main screen 250 is used for purposes of example. The main screen 250, in an exemplary embodiment, presents the user with various options. The options in an exemplary embodiment include the menu options 252. The options menu 252 allows a user to select from the various options associated with the system 10 that are available to them. In an exemplary embodiment, the options menu allows a user to select tabs where they can specify further options related to their respective environment 620, friends 622 and wardrobe 624 as has been described in FIG. 5. Users can search the site for appropriate content and for shopping items using the search bar 632; they can browse for items and add them to their shopping trolley 628, which dynamically updates as items are added and removed from it; and they can complete purchase transactions on the checkout page 626. The options that have been provided here have been provided for purposes of example, and other options may be provided to the user upon the main page screen 250. Furthermore, users can choose and set the theme, layout, look and feel, colours, and other design and functional elements of the main and other pages associated with their account on system 10, in the preferences section 630. In an exemplary embodiment, users can choose the colour scheme associated with the menu options 252 and the background of the main and other pages. The local application described further below is launched on clicking the button 254. The status bar 256 displays the command dressbot: start, which appears as the local application is started. Button 258 starts the model creation process. When the local application 271 is running on the local machine, a notification 634 is displayed inside the browser window 250. Along with apparel shopping and modeling, users can engage, with their virtual model and with other users, in collaborative activities which include, in an exemplary embodiment, participating in virtual tours and visiting virtual destinations 636, and taking part in virtual events 638 such as fashion shows, conferences and meetings, etc., all or some of which may support elements of augmented reality. A media player or radio may be available/linked in the browser in an exemplary embodiment 640. Featured apparel items 642 and other current offers or news or events may also appear on the main page 250 in an exemplary embodiment.
  • Reference is now made to FIGS. 9 to 13, to better illustrate the process by which a 3D user model is created. As described above, the 3-D user model is created by first receiving user input, where the user supplies respective images of themselves as requested by the system 10. Reference is now made to FIG. 9, where a sample image upload window is shown in an exemplary embodiment. The image upload window is accessible to the user through accessing the system 10. As described above, in an exemplary embodiment, the system 10 is accessed through the Internet. The sample upload window 260 is used to upload images of the user that are then used by the system 10 to generate the user model. As shown in FIG. 9, the user is requested to upload various images of themselves. The user in an exemplary embodiment uploads images of the facial profile, side perspective and a front perspective. In an exemplary embodiment, the user is able to upload the images from their respective computing device or other storage media that may be accessed from their respective device.
  • Reference is now made to FIG. 10, where a sample image of a client application window 270 is shown. In an exemplary embodiment, the client application 16, resident on or associated with the computing device, causes a client application window 270 to be displayed to the user when the user model is being created. The client application can request and submit data back to the server. The protocol for communication between the application 16 and the server 20 is the HTTP protocol in an exemplary embodiment. The application 16, in an exemplary embodiment, initiates authenticated post requests to a PHP script that resides on the portal server, and that script relays the requested information back to the application 16 from the server 20. People are comfortable with shopping on the internet using a browser and with monetary transactions through a browser. In order to provide the user with a rich experience, a rich 2D and/or 3D environment is desired. Such an environment can be a computational burden on the portal server. To reduce the computational load on the portal server, the computationally intensive rendering aspects have been pushed to the client side, as an example. In an exemplary embodiment, this computational efficiency can be achieved through the use of a local stand-alone application, a browser plug-in, code run within a browser, or a local application that interacts with the browser and portal server 20. The current implementation, in an exemplary embodiment, involves a local application 271 that interacts with the browser and the portal server and is a component of the client application 270. In a typical setting, the local application and the browser interact with each other and also with the portal server 20, which in turn interacts with other components of the internet. Each of the modules of the portal server 20 may have a corresponding module on the client application. This may be a part of the local application 271, the browser, or a combination of the two. The browser and the local application interact, in an exemplary embodiment, via protocols like HTTP, and this communication may take place via the portal server 20 or directly. The purpose of the local application 271 is to enable computationally intensive tasks to be carried out locally, such as the computations required for 3D renderings of the apparel, the user's model and the environments. This gives the appearance of running 3D graphics in a browser. This permits online transactions within the browser (buying apparel) and at the same time gives the user a rich experience by using the power of the local machine and not overburdening the server. For those users who are not comfortable with downloading the local application 271, a 2D, 2.5D or less sophisticated 3D rendering of the graphics is displayed within the browser. Details of the browser-local application interaction are described next. In an exemplary embodiment, on a Windows® platform, registering the protocol associates a keyword with the local application 271 on the user's system in the registry. Thus, when the start application button 254 is pressed, the local application 271 is launched. When a user clicks on the ‘try on’ button from the fitting room or wardrobe, a notification is sent to the local application indicating that the user wants to try an apparel item. A callback function is implemented within the local application that listens for such notifications. When a notification is received, the appropriate callback function is invoked.
This callback function then queries the portal server or browser for the appropriate parameters and renders the scene. For example, clicking on an apparel item in the fitting room prompts the browser to send the command “dressbot:tryon=5” to the local application, which then places the item with ID=5 on the user model. The gathering of information from the server is done using HTTP. Such a framework leverages the advantages of both the familiar experience of a browser and the computational power of a local application. The above procedure and details have been described as an exemplary embodiment and may be implemented with other techniques. In an alternative embodiment, local application features may be implemented as part of a web browser.
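The notification/callback hand-off can be sketched as below, using the "dressbot:tryon=5" command quoted above; the dispatch table, server URL, and JSON response format are illustrative assumptions rather than the actual implementation.

```python
import json
import urllib.request

PORTAL = "https://example.com"   # placeholder for the portal server address

def handle_tryon(item_id):
    """Fetch the apparel parameters for item_id and hand them to the renderer."""
    with urllib.request.urlopen(f"{PORTAL}/apparel/{item_id}") as resp:
        params = json.load(resp)
    print("rendering item", item_id, "on the user model with", params)

# Registered callbacks keyed by command name; only 'tryon' is sketched here.
CALLBACKS = {"tryon": lambda value: handle_tryon(int(value))}

def on_notification(command):
    """Parse commands such as 'dressbot:tryon=5' and invoke the registered callback."""
    if not command.startswith("dressbot:"):
        return
    action, _, value = command[len("dressbot:"):].partition("=")
    callback = CALLBACKS.get(action)
    if callback:
        callback(value)

# on_notification("dressbot:tryon=5")   # would fetch item 5 and render it
```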
  • By accessing the user model creation functionalities on the user's local computing device, the speed at which the model is generated and then modified (through the user's commands) is increased. The application window 270 displays to the user the current state of the model, and allows the user to perform various modifications to the user model, as detailed below.
  • As described above, the user is able to modify the respective measurements that are associated with a preliminary user model that has been generated. The measurements specified by the user may be specific measurements that more closely resemble the user's physical profile. However, the measurements that are specified may also be prospective measurements, where the user may wish to specify other measurements. For example, the user may specify measurements that are larger than their current measurements, if for example, they wish to model maternity clothes. Also, the user may specify measurements that are smaller than their current measurements, thereby providing prospective looks with regards to what a user may look like if they were to lose weight.
  • The head and face region of the user's model is simulated by the modeling module 50 utilizing images of the user's face taken from different angles. The face generation process may be completely automated, so that the modeling module 50 synthesizes the model's face by extracting the appropriate content from the user's images without any additional input from the user, or it may be semi-automated, requiring additional user input for the model face generation process. Reference is now made to FIG. 11, where a sample facial synthesis display window 280 is shown illustrating a semi-automated facial synthesis procedure. The reference image 282 shows the user where to apply markers on the face, i.e., points on the face to highlight. The sample image 284, in an exemplary embodiment, shows points highlighting regions of the user's face corresponding to the markers in the reference image 282. The modeling module 50 may require additional inputs from the user to further assist the face generation process. This input may include information on facial configuration such as the shape or type of face and/or facial features; subjective and/or objective input on facial feature dimensions and relative positions; and other information. The type of input acquired by the modeling module 50 may be in the form of text, speech or visual input. Additionally, the modeling module 50 may provide options to the user in order to specify various areas/points upon the respective area of the model that they wish to make further modifications/refinements/improvements to. It may then be possible to tweak or adjust certain facial features using adjustment controls, as in the case of the slider control feature for tweaking body measurements described later in an exemplary embodiment. To better illustrate how the user may make modifications to the user model in an exemplary embodiment, reference is made now to FIGS. 12 to 13. Reference is now made to FIG. 12A, where a sample measurement window 290 is shown, in an exemplary embodiment. The measurement window 290 allows the user to specify empirical data that is used to generate or modify the user model. The user is able to specify the measurements with the aid of a graphical representation that displays to the user the area or region for which a measurement is being requested. In addition, videos and/or audio may be used to assist the user in making measurements. When a user does not specify the measurements that are to be used, default values are used based on data that is computed from the respective images that the user has provided. Measurements associated with a user's waist have been shown here for purposes of example, as the user may specify measurements associated with other areas of their body as described above. The user may specify various modifications of the user model that are not limited to body size measurements. Such modifications may include, but are not limited to, apparel size, body size, muscle/fat content, facial hair, hair style, hair colours, curliness of hair, eye shape, eye color, eyebrow shape, eyebrow color, facial textures including wrinkles, and skin tone.
  • Reference is now made to FIGS. 12B and 12C, where a sample image of a constructed model image 300 and 302 are shown, respectively. The model image window allows the user to inspect the created user model, by analyzing various views of the created model. Various features are provided to the user to allow the user to interact with the created model, and to be able to better view various profiles associated with the model. Features 303, 304, 305 and 306 are depicted as examples. Pressing button 306 presents the user with options to animate the user model or the environment. In an exemplary embodiment, the user may be presented with animation options on the same page or directed to a different page. The user may be presented with specific preset expressions/actions in a menu, for example, to apply on their user model. In an alternate exemplary embodiment, the user may animate their model through text/speech commands or commands expressed via other means. The user may also choose to synchronize their model to their own expressions/actions which are captured via a video capture device such as a webcam for example. The user is also provided with environments to embed the character in as it is animated. Icon 306 allows the user to capture images of the model, or to record video sequences of model animation, which may then be shared by the user with other users. The facial icon 303 when engaged causes the face of the generated model to be zoomed in on. The body icon 304 when engaged causes the entire user model to be displayed on the screen.
  • Reference is now made to FIG. 13A, where a set of sample non photorealistic renderings are shown. Specifically, exemplary embodiments of non photorealistic renderings 310A, 310B, and 310C are shown. The non photorealistic renderings display a series of images, illustrating various views that may be seen of a user model. The respective non-photorealistic renderings illustrate the various rotations of the user model that the user may view and interact with. Further, non photorealistic renderings 310A and 310B illustrate how the user may modify the wrist dimensions of the model. In an exemplary embodiment, the user may select areas on the user model where they wish to modify a respective dimension. For example, by engaging the user's model at pre-selected areas or ‘hotspot’ regions, a window will be displayed to the user where they may specify alternative dimensions. FIG. 13A shows the wrist being localized via a highlighted coloured (hotspot) region 312 as an example. The dialog box 313 containing slider controls can be used by the user to adjust measurements of the selected body part and is shown as an exemplary embodiment. FIG. 13B shows more sample images of how users can make body modifications directly on the user model using hotspot regions 312.
  • Reference is now made to FIG. 13C, which shows a sample ruler for taking measurements of the user model, which may be displayed by clicking on a ruler display icon 316. This ruler allows the user to take physical measurements of the user model and to quickly check measurements visually. The ruler may also prove useful to the user in cases where they wish to check how a given apparel or product affects original measurements. In an exemplary embodiment, the user may try on different pairs of shoes on the user model and check how much the height changes in each case.
  • Reference is now made to FIG. 14, where a sample environment manager window 330 is shown in an exemplary embodiment. The environment module, as described above, allows a user to choose respective environment backgrounds. The system 10 has default backgrounds that the user may select from. Also, the user is provided with functionality that allows them to add a new environment. By uploading an image and providing it with a name, the user is able to add an environment to the list that they may select from. Various types of environments may be added, including static environments, panoramic environments, multidimensional environments and 3-D environments. A 3D environment can be constructed from image(s) using techniques similar to those presented in [44].
  • Reference is now made to FIG. 15A, where a sample user model environment image 340 is shown containing a photorealistic user model. The image 340 is shown for purposes of example, and as explained, various background environments may be used. Further, the user model that is shown in FIG. 15A has been customized in a variety of areas. Along with the apparel that the user has selected for their respective user model, the user is able to perform different customizations of the model and environment, examples of which are shown here for purposes of illustration. With reference to label 342, the user has customized the hair of the user model. The customization of a user model's hair may include the style and colour. With reference to label 344, the environment may be customized, including the waves that are shown in the respective beach environment that is illustrated herein. With reference to label 346, one example of the types of accessories that the user can adorn their respective model with is shown. In this example image, a bracelet has been placed upon the user model's wrist. As a further example of the various accessories that may adorn the model, reference is made to label 348, wherein shoes are shown upon the respective user model. Reference is now made to FIG. 15B where some aspects of collaborative shopping are illustrated. User model views may be shared between users. Users may also interact via their model in a shared environment. In an exemplary embodiment, window 354 shows two user models in a shared window between users. Product catalogue views 355 may also be shared between users. For example, views of mannequins displaying apparel in product display window 355 may be shared with other users using the share menu 358. In another exemplary embodiment of a collaborative shopping feature, views of shopping malls 356 may be shared with other users as the user is browsing a virtual mall or store.
  • Reference is now made to FIG. 32 and FIG. 33, where more sample environments and the types of activities the user can engage in with their virtual models are shown in an exemplary embodiment. FIG. 32 depicts an environment where a fashion show is taking place and where one or more users can participate with their virtual models 650. The environment settings, theme and its components 652 can be changed and customized by the user. This is a feature that designers, professional or amateur, and other representatives of the fashion industry can take advantage of to showcase their products and lines. They may also be able to rent/lease/buy rights to use the virtual models of users whom they would like to have model their products. Users may also be able to purchase/obtain tickets and attend live virtual fashion shows with digital models featuring digital apparel whose real and digital versions could be bought by users. FIG. 33 shows a living room scene which can be furnished by the user with furniture 654 and other components from an electronic catalogue in an exemplary embodiment. Users may use their model 650 to pose or perform other activities to examine the look and feel of the room, the setting and furnishing, which they may replicate in their own real rooms. This feature is further representative of ‘interactive’ catalogues, where users are not just limited to examining different views of a product before purchasing it from an electronic catalogue but are able to examine it in a setting of their choice, interact with it via their virtual model or directly, acquire different perspectives of the product in 3D, and get acquainted with enhanced depictions of the look and feel of the product. Environments will also be available to users that change with time or other properties. For instance, an environment that represents the time of day may change accordingly and show a daytime scene (with the sun possibly and other daytime environment components) during daylight hours, which changes to represent the way the light changes and dims during the evening, and subsequently changes into a night scene with the appropriate lighting, other environmental conditions and components, in an exemplary embodiment. Environments that reflect the weather would also be available. Retailers would have the opportunity to make available their apparel digitally with the appropriate environments. For instance, galoshes, raincoats, umbrellas and water-resistant watches and jewelry may be featured in a rainy scene. Users may also customize/program scenes to change after a certain period of time, in an exemplary embodiment. For instance, they can program a given scene or scene components to change after a fixed period of time. User models may also be programmed to reflect changes over time such as ageing, weight loss/gain etc.
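One way such time-varying environments and programmed scene changes could be driven is sketched below; the variant names, hour boundaries and scene record layout are assumptions made for illustration only.

```python
# Sketch: pick an environment variant from the current hour and record a
# user-programmed scene change to apply after a fixed delay. Variant names
# and hour boundaries are illustrative assumptions.
from datetime import datetime

DAY_VARIANTS = [(6, 18, "daytime_beach"), (18, 21, "evening_beach"), (21, 30, "night_beach")]

def environment_for(hour):
    for start, end, name in DAY_VARIANTS:
        if start <= hour < end or start <= hour + 24 < end:
            return name
    return "daytime_beach"

def schedule_scene_change(scene, delay_seconds, new_component):
    """Record a programmed change to apply once delay_seconds have elapsed."""
    scene.setdefault("pending_changes", []).append(
        {"after_seconds": delay_seconds, "component": new_component})
    return scene

print(environment_for(datetime.now().hour))
print(schedule_scene_change({"name": "beach"}, 3600, "rainy_sky"))
```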
  • Reference is now made to FIG. 34, where a sample virtual model is shown in a customized music video that the user has generated. This figure is shown in an exemplary embodiment and it illustrates the different activities the user can engage their virtual model in, the different environments they can choose to put their model in, as well as the expression/action animation control they have over their virtual character model. Display window 672 shows the virtual model singing in a recording studio; display window 674 shows the model driving in a sports car; while display window 676 shows the model waving and smiling. The user can choose to combine the different scenes/animations/frames to form a music video as depicted in FIG. 34. Another feature is a voice/text/image/video to song/music video conversion. Users can upload audio/video/text to the system and the system generates a song or a music video of the genre that the user selects. As an example, a user can enter text and specify a song style such as ‘country’ or ‘rock’ and other styles. Based on this, the system generates a voice that sings the written text in the specified style. The voice may also be selected (based on samples provided by the system) by the user or picked by the computer. (Given some content, the system can find related words to make rhymes while adhering to the provided content. In an exemplary embodiment, this can be done by analyzing phonemes and looking up in a thesaurus to find rhyming words where necessary.) For purposes of increasing computational efficiency, the system 10 may provide the user with pre-rendered scenes/environments where the music and environment cannot be manipulated to a great degree by the user but where rendering of the character model can occur so that it can be inserted into the scene, its expressions/actions can be manipulated and it can be viewed from different camera angles/viewpoints within the environment. Users can save and/or share with other users the various manifestations of their user model after manipulating/modifying it and the animation/video sequence containing the model in various file formats. The modified user model or the animation/video sequence can then be exported to other locations including content sharing sites or displayed on the profile or other pages. In an exemplary embodiment, users may want to share their vacation experiences with other users. In such a case, users can show their character model engaged in different activities (that they were involved in during their vacation), against different backdrops representing the places they visited. This could also serve as an advertising avenue for the tourism industry. The model may be animated to reflect the status of the user and then displayed on the profile page to indicate to other members the status of the user. For instance, the character model may reflect the mood of the user—happy, excited, curious, surprised etc. The model may be shown running (image/simulation/video) in a jogging suit to indicate that the user is out running or exercising, in one exemplary embodiment. The brand of the digital apparel may appear on the apparel, in which case featuring the model on the profile page with the apparel on would serve as brand advertisement for that apparel.
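The phoneme-based rhyme lookup mentioned above can be illustrated with the small sketch below. The tiny hand-written phoneme lexicon stands in for a full pronunciation dictionary and thesaurus, and the "match the trailing phonemes" rule is an assumption for illustration, not the system's actual lyric generator.

```python
# Sketch: find candidate rhyming words by comparing trailing phonemes, as the
# text suggests for generating lyrics from user-provided content. The tiny
# lexicon below is an illustrative stand-in for a real pronunciation
# dictionary and thesaurus lookup.
PHONEMES = {
    "night": ["N", "AY", "T"],
    "light": ["L", "AY", "T"],
    "bright": ["B", "R", "AY", "T"],
    "moon": ["M", "UW", "N"],
    "soon": ["S", "UW", "N"],
    "road": ["R", "OW", "D"],
}

def rhymes(word, tail=2):
    """Return words whose last `tail` phonemes match the given word's."""
    target = PHONEMES.get(word)
    if not target:
        return []
    suffix = target[-tail:]
    return [w for w, ph in PHONEMES.items() if w != word and ph[-tail:] == suffix]

print(rhymes("night"))   # ['light', 'bright']
print(rhymes("moon"))    # ['soon']
```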
  • Along with the specification of accessories, the users as explained below, are able to modify textures associated with the user model. With reference to label 350, an example of the texture modification of a user model is illustrated. Skin color can be changed by changing HSV or RGB and skin texture parameters as discussed with reference to step 128 in FIG. 6A. Skin embellishments such as henna or natural skin pigmentation such as birthmarks etc. can be added by using an image of the respective object and warping it onto the user model where placed by the user. Color palettes (a colour wheel for example) may be provided with different variations of skin tones for users to pick a skin tone. Similar palettes may exist for makeup application.
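A minimal sketch of the HSV-based skin tone adjustment mentioned above follows; the base colour and the offsets are illustrative values only, and a real palette would expose many such variations to the user.

```python
# Sketch: adjust a skin tone by shifting its HSV components, as described for
# the texture modification step. Base colour and offsets are illustrative.
import colorsys

def adjust_skin_tone(rgb, dh=0.0, ds=0.0, dv=0.0):
    """Shift hue/saturation/value of an (r, g, b) colour given in the 0-255 range."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + dh) % 1.0
    s = min(1.0, max(0.0, s + ds))
    v = min(1.0, max(0.0, v + dv))
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

base_skin = (224, 172, 105)                 # hypothetical sampled skin colour
print(adjust_skin_tone(base_skin, dv=-0.1)) # slightly darker variant for the palette
```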
  • As described above, the community module allows the respective user to interact with other users of the system 10. Along with interacting with other users of the system 10, users are also able to invite others to become users of the system 10.
  • The system 10 allows for multiple methods of interaction between the respective users of the system. The various methods of interaction are described herein. One such method of interaction is the concept of a collaborative shopping trip that is described in further detail herein. Multiple users may participate in a shopping trip, where users of the system 10 may interact with one another with respect to items of apparel or other products, each other's models, messages, and pictures or images. By creating and participating in a shopping trip as described herein, the real-world concept of inviting friends, shopping, and receiving their respective feedback on purchased items is emulated through the system 10.
  • Reference is now made to FIG. 16, where a sample image of a shopping trip management panel 360 is shown in an exemplary embodiment. The shopping trip management panel 360 allows users to manage existing shopping trips that they have created, or to create new shopping trips. Once the user has created a new shopping trip, the user may then invite other users to become members of their shopping trip as described with reference to FIG. 40. The user may send invites for shopping trips and other synchronized collaboration via the messaging service provided through system 10 and through other online or offline modes of messaging including email, SMS or text, chat and other means. Notifications can also be sent to users on social networking sites inviting them for collaborative activities. Users can also access past sessions that they were on through the panel 360.
  • Reference is now made to FIG. 17, where a sample friends manager window 370 is shown in an exemplary embodiment. The friends manager window 370 allows users to invite other users to join them in their shopping trips. As illustrated with reference to FIGS. 17 and 18, the system 10 allows for friends that are associated with the system 10, and those that may be associated with one or more other community networking sites, to be invited. Community networking sites include sites such as Facebook, or My Space, and others that allow their API to be used by external applications. In an exemplary embodiment, a user's list of friends from social networking sites may be displayed within the system 10. In an exemplary embodiment, a procedure for accessing friends on a user's Facebook account is presented in FIGS. 39 to 42. FIG. 39A presents the sequence of events leading to the availability of one's Facebook friends on their account in system 10. FIGS. 39B to 39D display magnified views of each of the windows shown in FIG. 39A. Upon logging into system 10, the user can view his account information 716 as shown in FIGS. 39A and 39B. A provision 719 exists on the account page 716 for signing into Facebook, an external social networking site, which will facilitate access to Facebook account resources (other social networking sites may be present and accessed through system 10). As illustrated in FIGS. 39A-B, this will take the user to their login page 717 on Facebook, upon which the user may log in to his Facebook account 720. This will take the user back to their account 718 on system 10, this time with access to the user's Facebook friends 721 and other information available through their account on system 10 as shown in FIGS. 39C and 39D. When the user decides to log off from their account on system 10, the user is asked if he/she wishes to log off from Facebook as well. Users are also able to import data from external sites. For example, contact information or images may be imported from social networking sites such as Facebook. Personal data such as measurements of the user's body may be imported from a repository containing information on the user's measurements 115 described with reference to FIG. 6A. Pictures may be uploaded to the user's account on system 10 from a photo sharing site.
  • Users are able to invite friends from the community network sites to interact with. Upon requesting that a friend from a community networking site join in a shopping expedition, the friend, when accessing their account in the community network site, receives a notification that a request has been made. The friend may then choose to accept or reject the request.
  • Reference is now made to FIG. 18, where a sample system friendship management window 380 is shown in an exemplary embodiment. The system friendship manager is used to manage a user's relationship with other users of the system 10. The manager window 380 lists a user's friends, along with friend requests that are still pending. Search functionality is also provided for, where a user may search for other users by entering their names.
  • Reference is now made to FIG. 19, where a sample chat window 390 is shown in an exemplary embodiment. The chat window in an exemplary embodiment may be created for every shopping trip that is associated with the user. Through the chat window 390, users are able to engage in an interactive chat session with one or more other users. The shopping trip feature allows two or more users to collaborate while shopping online. This may entail limited or full sharing of account resources for the duration of the shopping trip. In an exemplary embodiment, users can view the contents of each other's shopping carts, shopping lists, wishlists, fitting rooms, user models, and share audio play lists and other resources. They can set and view shared ratings, feedback, comments and other user-specified information regarding a product. They can mark items with user tags that can be shared between members of the shopping trip. Additionally, users can shop in collaborative environments wherein, in an exemplary embodiment, users can agree on a selected virtual mall environment and browse virtual stores and items concurrently. Reference is now made to FIG. 20 where a collaboration interface for a shopping trip 240 is shown in exemplary embodiment. Members of the shopping trip are shown by clicking on button 241. Here a list of stores that the users can browse is presented in panel 242. This panel may show all the stores subscribing to system 10. Alternately, the members of the shopping trip may add stores of interest to them or remove stores from the panel. The store names may be presented as a list or on a map of a virtual or real mall in an exemplary embodiment. In this example, the stores appear in a list 242. Users can select the shopping environment 243 for a shopping trip session. The shopping environments may be animated and/or video/image representations of fictional malls or real malls, or other manifestations as described previously with reference to the environment module 56, the shopping module 60, and the entertainment module 66. The shopping environments may incorporate a mode with augmented reality features, which were described previously with reference to the shopping module 60. Users can engage in an interactive session within a store environment in 243, as in FIG. 46, when operating via this mode. Users can also view product catalogues and individual products in 243. Users can also view stores in 243 that are available on the retail server 24. Users can acquire different product views, and examine products in 3D in 243. Additionally, a mode with physics based effects may be incorporated to simulate product look and feel as well as simulate realistic interaction with the product virtually via display 243. In an exemplary embodiment, information of a specific mall may be provided in the form of audio and visual (video/image sequences and/or text) feeds via 243 when a user selects a particular mall. This way, users would be able to shop remotely in malls or stores located in other countries such as Paris, Milan, New York and other cities and shopping hubs. Individual stores in the mall may also transmit live feeds via webcams, in exemplary embodiment, (and/or other image, video capture devices) which users can view in 243. This feed content may incorporate information on the latest stock, new arrivals, promotions, sales, window displays, shelf contents, inventory, salespeople, store arrangements, live reviews and other information relevant to the store. 
Miscellaneous information such as job openings in the store may also be included. Feed information would be uploaded via a web page onto the portal server 20. This information would be broadcast in 243 to clients requesting the feeds. Tools may be available to vendors to edit feed information. For instance, video feed information may be edited, image information may be enhanced through photorealistic effects etc. Feed information would provide a mode of advertising to stores. The facility to publish feed content may be available through an independent plug-in or software application to stores. The feed information does not necessarily have to be generated from physical store locations. This information may be provided by the brand or store head office. In the case that a customer browses a mall, an association file would assist in linking stores and/or brands to malls in which they have physical presence. Feed content may be hyperlinked. In an exemplary embodiment, as customers browse store feeds, they may click on a product item to browse its details such as those described with reference to FIG. 22. Other details may be included such as inventory details of a particular item; product ratings (which may be assigned by customers or style consultants); style information; links to other products that can be worn with it and/or other similar styles in the store. The hyperlinks may be represented by icons such as animated tags. Other hyperlinks that may be present in the store feeds include links to electronic fashion magazines or videos containing information or demos or reviews about specific store products, styles, brands, etc.
  • On a shopping trip that involves more than one user, shopping trip members may choose to shop collaboratively. There are several ways to engage in a collaborative shopping trip, as described previously in this document. A user may browse the chosen environment and/or products, and at any given time, the video, animation or image sequence information that is displayed on the user's screen while the user is browsing the environment and products is considered the specific user's ‘view’. Users can choose to display the views of all members, which will appear on a split-window screen in an exemplary embodiment. Alternatively, they can choose to display a specific member's view on their screen or return to their own view. Members on a shopping trip can switch between views 244 of individual members browsing the common environment or product 243. Furthermore, users can choose to browse different digital manifestations 245 of the environment and/or product such as streaming video, image sequences, virtual simulation, augmented reality, other media content or any combination thereof. In the asynchronous mode, users can drag-and-drop and/or add items and products that they wish to share with other users from display screen 243 to a sharing folder, the contents of which can be viewed by the members of the shopping trip at any time. Users may view and examine their own account resources such as their virtual/digital model, wardrobe and fitting room contents, shopping cart, wishlist, image and other features during the shopping trip. In an exemplary embodiment, the user may view his resources in the window 246, by selecting from the menu 247. Currently, the user model is displayed in 246. Users can share their account resources such as their profile images, shopping cart contents, character model and fitting room content with other members of the shopping trip. Shared information by other users is viewable in display window 248. By selecting from the tabbed menu 249, shown here in an exemplary embodiment, a user can view the particular resource of the members of the shopping trip in 248. Users can add their virtual models to the environment which can be viewed by the members on the shopping trip who have the required access and permissions. Users on a shopping trip will be able to communicate with each other via multiple-way conferencing, chat (which may include text and/or speech communication; 3D visualization and/or augmented reality viewing and interaction). FIG. 20 shows a chat window 390 in another exemplary embodiment, within the shopping trip scenario. FIG. 20 could also be used in other scenarios as well such as choosing a restaurant to visit for dining. A user and their friends can collaboratively view information on restaurants in 243. Visual 3D menus may be available for viewing restaurant meal choices, for receiving feed information on specials, promotions, reviews and other relevant restaurant information. Users would also be able to collaboratively order a meal for take-out and review restaurant menus and other information online in order to decide where they would like to go for dining.
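A minimal sketch of the per-member views, view switching and asynchronous sharing folder described above is given below; the data layout, method names and the members used in the example are illustrative assumptions rather than the system's actual implementation.

```python
# Sketch: a shopping trip keeps one "view" per member plus a shared folder for
# the asynchronous mode; a member can display their own view, a friend's view,
# or all views at once (split window). The data layout is an assumption.
class ShoppingTrip:
    def __init__(self, members):
        self.views = {m: {"environment": None, "product": None} for m in members}
        self.shared_items = []              # asynchronous-mode sharing folder

    def update_view(self, member, environment=None, product=None):
        view = self.views[member]
        if environment is not None:
            view["environment"] = environment
        if product is not None:
            view["product"] = product

    def display(self, viewer, target=None):
        """Return the view to render: a friend's, the viewer's own, or all views."""
        if target == "all":
            return dict(self.views)
        return self.views[target or viewer]

    def share_item(self, member, item):
        self.shared_items.append({"by": member, "item": item})

trip = ShoppingTrip(["Alisha", "Robin", "Sam"])
trip.update_view("Alisha", environment="virtual_mall", product="red skirt")
print(trip.display("Sam", target="Alisha"))   # Sam switches to Alisha's view
trip.share_item("Robin", "blue sweater")
print(trip.shared_items)
```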
  • Reference is now made to FIG. 40 where an exemplary embodiment of the process of joining a shopping trip through a user interface is shown. In an exemplary embodiment, this process proceeds as follows: When a user clicks on a "Go Shopping" button, he/she is presented with a screen with three columns—left, middle, right. The column on the left lists all existing shopping trips that the user's friends are currently engaged in. The user can choose to join any of these shopping trips by clicking on a "join" button. The user also has the option of searching for a shopping trip of interest. When a key word is searched for, the related shopping trips are presented in the left column. The keyword could be the name of a shopping trip or an item of interest that is being shopped for, or an occasion, as examples. When the user clicks on the name of a shopping trip in the left column, the members of that shopping trip are shown in the middle column. The user can also invite other friends by clicking on the name of a friend from the right column and then clicking on the "invite" button. (The right column includes a list of all the user's friends. These friends include friends from our shopping site, social networking sites such as Facebook, or friends from the virtual operating system/immersive system described in this document. The user can also search for the name of a friend to add to the shopping trip. If the friend is found, the name appears in the right column and the user can invite the friend by clicking on the invite button.) The friend then receives an invitation via a notification on a social networking site, a phone call, an SMS, an email or other means as described before. The friend's name appears in the middle column in red until the friend accepts the invitation. If the user's friend accepts the invitation, that friend's name appears in the middle column in blue. An orange color indicates that the friend will be joining later. Other cues may also be used to display the status of the friend, as sketched below. The user can also initiate a new shopping trip by specifying a name and clicking on the "new" button. The user also has the option of removing friends from a shopping trip that the user has initiated by clicking on the remove button under the middle column. The user can start the shopping trip or resume a shopping trip by clicking on the "GO" button. The next screen presented on clicking "GO" is a screen listing cities, malls, and stores. The users can pick any city, mall, or store to go to and shop via any of the modes of interaction of a shopping trip described earlier with reference to FIG. 7. At any given time, the user can be engaged in multiple shopping trips and can switch between any of the trips or add/remove friends by coming back to this interface. The name of the shopping trip that the user is currently viewing appears on top as the user shops. Such an interface is also used for going to events such as those described with respect to the "hand and chill" feature (For example, as described with reference to FIG. 44). In an exemplary embodiment, the main shopping page includes two buttons—"Browse" and "Shopping Trip". Clicking on "Browse" lets the user shop in the regular mode of shopping. Clicking on "Shopping Trip" loads the screen shown in FIG. 40.
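The invitation-status colour cues described above (red for an outstanding invitation, blue for accepted, orange for joining later) can be expressed as a simple lookup, sketched below; the fallback colour and member names are assumptions added for the example.

```python
# Sketch: map a friend's invitation status to the colour used in the middle
# column (red = invited but not yet accepted, blue = accepted, orange =
# joining later). The grey fallback is an assumption for unknown statuses.
STATUS_COLOURS = {"invited": "red", "accepted": "blue", "joining_later": "orange"}

def display_colour(status):
    return STATUS_COLOURS.get(status, "grey")

members = {"Alisha": "accepted", "Robin": "invited", "Sam": "joining_later"}
for name, status in members.items():
    print(name, display_colour(status))
```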
  • Reference is now made to FIG. 41A-F where snapshots of a realization of the system discussed with reference to FIG. 20 are shown in an exemplary embodiment. Upon visiting the site (in a browser in this case), the user is presented with the option of logging in or browsing in regular mode (as shown in FIG. 41A). After logging in, the user can click on the “Shopping Trip” icon from the top menu. As shown in FIG. 41B, this brings up the shopping trip screen discussed with reference to FIG. 40. Shown in the middle column are the friends that are on the selected shopping trip. Friends that have not yet accepted the invitation to join the shopping trip are highlighted in red. Trip requests show up in the panel on the right and/or as a Facebook notification and/or as an SMS, etc. depending on preferences specified by the user. A sliding chat window 390 can be used at any time. In an exemplary embodiment, shown in FIG. 41C is one instance of the synchronous mode of operation of a shopping trip in use. In an exemplary embodiment, after starting a shopping trip, users are presented with a list of stores that they can go to. On selecting a store to visit, the user is presented with a menu (menu on the left in FIG. 41C) for browsing through products. This menu may be customized for each store, for example, by providing the vendors with an application programming interface (API) or by letting the vendors customize the menu and navigation options through the store portal discussed with reference to FIG. 42. Item-dependent views are also provided. Based on the content that is being viewed, an appropriate viewing method is used. For example, the method of displaying cosmetics may be different from that of displaying clothes. The chat window enables the user to chat with a selected user (who could be on our website or on any other social networking site like Facebook or on a chat application such as msn or via email or on a cell phone communicating via text such as through SMS or via voice by employing text to speech conversion, in exemplary embodiments) or with all members of a selected shopping trip. The panel on the right in FIG. 41C (but to the left of the chat window 390) provides various options and controls to the user as described earlier. The “My Friends Views” box in the panel is similar to 244 described earlier. It enables the user to select a view which could be the user's own view or any of the user's friend's views and interact with friends in the modes of operation discussed with reference to FIG. 7A-D, and described next in an exemplary embodiment. In the synchronous mode (which is the default mode), clicking on a friend's name in the “My Friends Views” displays the view 243 as seen by that friend in the current user's view 243. In the common mode (which can be initiated by clicking on a ‘common’ icon next to the friend's name), the view of the current user including navigation options becomes interactable/controllable by all the friends who have been marked as ‘common’. In the asynchronous mode, (which can be entered by clicking on the “shared items” icon on the top menu as described below with reference to FIG. 41D), clicking on a friend's name lists items that are being shared asynchronously by that friend. The view 243 is undockable/dockable/movable/dragable to allow multiple views simultaneously and can also be minimized/maximized/resized. One way to do this is to drag out the view 243 which opens it in a new window that can be placed elsewhere. 
Multiple views may be opened at any given time. As shown in FIG. 41C in an exemplary embodiment, the multiple views are shown by numbers next to “My View”, or the user's friends' names in 244. This is particularly useful when viewing multiple items collaboratively. For example for mixing and matching; friends may find a skirt that they like and may need to search for a top to go with it. An interface similar to that described with reference to FIG. 45 can also be used here for mixing and matching. The panel is also undockable/dockable and can be moved/dragged around and also be minimized/maximized/resized based on the users' preference. Under “My Friends Views”, users can also see which of the user's friends are online or are actively browsing. This is indicated by the color of a ‘person’ icon next to each name. A shortcut is also located next to each of the friends' names to quickly slide out the chat box 390 and chat with the friend. Users can also click on a phone icon that lets the user talk to a friend or all members of a shopping trip. In an exemplary embodiment this is done either over VoIP (Voice over Internet Protocol) or by dialing out via a telephone/cellular line through a modem. Users can also engage in a video chat with their friends. Clicking on the radio on the left, brings up options for the radio (such as a title to play, a playlist, volume, play individually, play the same music for all members of the shopping trip, etc.) in the view 243. These options can be set using the various modes of interaction as described above, Clicking on the “shared items” icon on the top menu brings the “My Shared Items” and “My Friends Shared Items” boxes in the panel as shown in FIG. 41D in an exemplary embodiment. These boxes list the items that are posted by the user or by the user's friends for sharing with others asynchronously. Clicking on the “My Wardrobe” icon on the top menu brings up a “My Wardrobe” box in the panel as shown in FIG. 41E in an exemplary embodiment. This box lists the items that the user has in his/her wardrobe. In an exemplary embodiment, items get added to the wardrobe once the corresponding real items are purchased. Users can drag and drop items from the “My Wardrobe” box to the view 243 or can mark the items in “My Wardrobe” for sharing. Clicking on the “Consultant” icon brings up a “Chat with a consultant” box in the panel as shown in FIG. 41F in an exemplary embodiment. Users can add consultants from a list. Recommendations on style consultants by friends are also displayed. Users can share views and engage in an audio/video/text chat with consultants similar to the way they interact with their friends as described above. Consultants can also participate in collaborative decision making through votes described as described in this document. Upon clicking on the “Check Out” icon, users are presented with the SPLIT-BILL screen as discussed with reference to FIG. 21. Clicking on the “Logout” icon logs the user out of the system. The user's friends can see that the user has logged out as the colour of the icon next to the name of the user under “My Friends Views” changes. The user may join the shopping trip later and continue shopping. The user can exit from a shopping trip by clicking on the shopping trip icon, which brings up the screen shown in FIG. 40 or 41B, and then clicking on the “exit” icon next to the name of the shopping trip. The interface and system described here can also be used to browse external websites and even purchase items.
  • Store feeds (which could be videos on the latest items in the store or the items on sale in a store, or could also be streaming videos from live webcams in stores displaying items on sale) as described in this document are also viewable in the screen 243. Users of the shopping trip can not only access products offered by various stores but also services. For example, a movie ticket purchase service is offered that works as follows in an exemplary embodiment: Suppose a bunch of friends want to go out to watch a movie. These friends can go on our site. On selecting the name of a cinema from a services menu, the users are presented with a screen that displays the available locations for the cinema. Users can choose the location they want to go to, or assign a head to decide on the location, or let the system propose a location to go to. If they choose a location themselves, a majority vote is taken and the location corresponding to this majority is proposed as the location that they should go to. If all the users agree to go to the voted location, they can proceed to checkout/booking. Otherwise, the system proposes alternatives. If any of the users assigns a head, the choice of the head is taken as the choice of the user too. The system can also propose locations. For example, it may calculate the location of a theater that minimizes the travel for all the users on a shopping trip, such as a location that falls close to all the users. The system may also identify locations where there is a special promotion or a sale or something to do in the proximity. It can make statements such as, "You can go to Blah Theater and then go for dinner at DinnerTime Restaurant which is only five minutes away and food there is at half price today". In an exemplary embodiment, this can be done by evaluating conditional probabilities that are constructed based on data from several users. After selecting the location, the users are presented with another screen that lets them choose the movie that they would like to watch and the show time. Trailers for each of the movies currently playing may be shown on this page to the users. The selection of movie titles and show time proceeds in a similar manner to that of the location of a theater. Upon selection of a location, movie and time, the users proceed to checkout, at which point they have the option of using Split-Bill features if desired. (Users may simply state a movie they would like to watch and the system may propose the nearest location that plays the movie and that works with all the members of the shopping trip). This method works with any of the modes of operation of the shopping trip. In an exemplary embodiment, users can also watch the movie for which tickets have been purchased online collaboratively. Further details are discussed with reference to FIG. 44. Shopping trips can also work on mobile devices.
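The two selection rules described above, a majority vote among the members and a system-proposed theater that minimizes total travel, are sketched below; the coordinates, cinema names and Euclidean distance are illustrative assumptions standing in for real geocoded locations and travel estimates.

```python
# Sketch: propose a cinema location either by majority vote among members or
# by minimising total travel for all members of the trip. Coordinates and
# names are illustrative.
from collections import Counter
import math

def majority_vote(votes):
    """Return the voted location if it has a strict majority, else None."""
    winner, count = Counter(votes).most_common(1)[0]
    return winner if count > len(votes) / 2 else None

def minimise_travel(user_locations, cinema_locations):
    """Pick the cinema with the smallest total distance to all users."""
    def total(coords):
        return sum(math.dist(coords, u) for u in user_locations)
    return min(cinema_locations, key=lambda name_xy: total(name_xy[1]))[0]

users = [(0, 0), (4, 0), (2, 3)]
cinemas = [("Downtown", (2, 1)), ("Uptown", (8, 8))]
print(majority_vote(["Downtown", "Downtown", "Uptown"]))  # Downtown
print(minimise_travel(users, cinemas))                    # Downtown
```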
  • Users of the shopping trip can also collaboratively pick and choose designs, styles, colours, and other aspects of apparel, and share their user model or user data 111 to build customized apparel. Similarly, users can design a room and purchase furniture, or design, build and buy furniture or other items. Collaboration during shopping (using the modes of operation of a shopping trip) can be used not only for product or catalog or mall browsing but with any shopping facility or shopping tool such as the shopping cart, fitting room, wardrobe, user model, consultant, etc. Tools present in toolbar 239, such as editing, zooming, panning, tilting, manipulating the view, undo, etc., as described with reference to FIG. 20, can also be used during a shopping trip.
  • Reference is now made to FIG. 42 where one form of interaction between various parties and system 10 is shown in an exemplary embodiment. Consumers can interact through their various computing devices 14, 16 (not shown in the image). Other users may include shipping and handling users, administrative staff, technical support, etc. Consumers browse products, interact together and shop. When a purchase order is received at the portal server 20, vendors selling the product are notified. They then approve the purchase order, upon which the payment received from the customer is deposited in the corresponding vendor's account. The shipment order is placed through shipping and handling users. Alternatively, the customer may pick up the order at a store branch using a ‘pick up ID’ and/or other pieces of identification. The store the customer is interested in picking up the order at can be specified through the system. The system may find the vendor store closest in proximity to the customer's location (customer's home, office etc.). An interface exists for interaction between any type of user and system 10, and between different groups of users via system 10. For instance, customers may interact with each other and with store personnel/vendors, and with fashion consultants via a webpage interface. Vendors may interact with customers, consultants and other businesses via a ‘MyStore’ page available to vendors. Vendors can upload store feeds (in audio, video, text formats etc.), product information and updates via this page, as well as interact with customers. Vendors can see (limited information on) who is entering their store in real time and also offline. For example, they can see if a set of users entering their store are on the same shopping trip, the age group of users (arbitrary noise may be added to the age), and the gender of the user. This allows the vendor to make comments like, "Hello boys, can I help you with anything?". Users can set the privacy level they are comfortable with through the preferences panel. Fashion consultants can upload relevant information through pages customized to their need. They can upload the latest fashion tips, magazines, brochures, style information etc. They can easily pull up and display to the user product information, dress ‘how-tos’, style magazines and related information as appropriate. They can also interact via various forms of interaction (such as audio/video/text chat etc.) described in this document.
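The order flow described above (order received, vendor notified and approving, payment deposited to the vendor, then shipment or store pickup with a pick-up ID) can be summarised as a simple state machine; the state names and pick-up ID format below are assumptions made for the sketch.

```python
# Sketch: the purchase order flow as a state machine -- received, vendor
# notified, vendor approves, payment deposited, then shipped or held for
# pickup with a pick-up ID. State names are illustrative assumptions.
import uuid

class PurchaseOrder:
    def __init__(self, customer, vendor, delivery="ship"):
        self.customer, self.vendor, self.delivery = customer, vendor, delivery
        self.state = "received"
        self.pickup_id = None

    def notify_vendor(self):
        assert self.state == "received"
        self.state = "awaiting_vendor_approval"

    def vendor_approves(self):
        assert self.state == "awaiting_vendor_approval"
        self.state = "payment_deposited_to_vendor"

    def fulfil(self):
        assert self.state == "payment_deposited_to_vendor"
        if self.delivery == "pickup":
            self.pickup_id = uuid.uuid4().hex[:8]   # quoted at the store branch
            self.state = "ready_for_pickup"
        else:
            self.state = "shipped"

order = PurchaseOrder("customer_1", "vendor_42", delivery="pickup")
order.notify_vendor(); order.vendor_approves(); order.fulfil()
print(order.state, order.pickup_id)
```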
  • Users on a shopping trip have the opportunity to use the Split-Bill™ feature to make payments for purchases. Split-Bill is a feature that enables users to share the cost of a purchase or the amount of a transaction by allocating some or all of the cost or amount to be paid by each of the users. Optionally, a subset of users that are party to the transaction may be allocated the entire cost or amount of the transaction. This feature also calculates the portion of taxes paid by each individual in a transaction and can be used in conjunction with the receipt management system discussed with reference to FIG. 48D. Split-Bill also enables users to claim their portion of an expense when claiming reimbursement for expenses (for example, expenses incurred on the part of an employee for the purposes of work). There are many options for the ways in which the Split-Bill feature can operate. Most of these ways can be thought of as similar to the modes of operation of a shopping trip as described with reference to FIG. 7A-D. Some of these methods are described next in exemplary embodiments: FIG. 21A demonstrates an exemplary embodiment of Split-Bill 261. Different payment schemes are available to the users of a shopping trip. A member of the shopping trip may pay for the entire bill using option 262, or each member may pay for his/her individual purchases using option 263. Alternately, the bill may be split between members by amount or percentage (as illustrated in FIG. 21A) or other means of division using option 264. Such a service would also be applicable to electronic gift cards available through system 10. More than one user may contribute to an electronic gift card and the gift card may be sent to another user via system 10. The recipient of the gift card would be notified by an email message or a notification alert on his/her profile page or other means. The senders of the gift card may specify the number of people contributing to the gift card and the exact amount that each sender would like to put in the gift card or the percentage of the total value of the gift card that they would like to contribute. In one exemplary embodiment, the Split-Bill method works as follows: When a user decides to split a bill on a supported website or application, they choose the friends that they wish to split the bill with and the portions of the bill that each friend including themselves will pay. After that, they confirm their order as usual and get sent to a payment processing gateway to make payment. Once they have paid their portion of the bill, the other participants are notified of the split bill payment. These other users accept the split bill notification and are sent to the confirmation page for an order where they confirm their portion of the bill and are sent to the payment processing gateway. Once each member of the split bill group has made their payment, the order's status is changed to paid and becomes ready for fulfillment. A hold may be placed on authenticated payment until all other participants' payments have been authenticated, at which point all the authenticated payments are processed. If a participant declines to accept a payment, then the payments of all other participants may be refunded. Users can also split a bill with a friend (or friends) who is offline. In this case, a user or users come to the Split-Bill screen and indicate the name of the user(s) that they would like to split a portion or all of the bill with.
That user(s) is then sent a notification (on our website or on any other social networking site like Facebook or on a chat application such as msn or via email or on a cell phone communicating via text such as through SMS or via voice by employing text to speech conversion, in exemplary embodiments). That user(s) can then decide to accept it in which case the transaction is approved and the payment is processed, or deny it in which case the transaction is disapproved and the payment is denied. This mode of operation is similar to the asynchronous mode of operation as discussed with reference to FIG. 7B.
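The hold-and-settle behaviour described above, where payments are held until every participant has accepted and paid, and refunded if anyone declines, can be illustrated with the small sketch below; the class layout, amounts and names are assumptions for the example and not the described payment infrastructure.

```python
# Sketch: asynchronous Split-Bill settlement -- payments are authorised and
# held until every participant has accepted and paid their portion; if anyone
# declines, the held payments are refunded. Amounts and names are illustrative.
class SplitBill:
    def __init__(self, total, portions):
        assert abs(sum(portions.values()) - total) < 1e-9
        self.portions = portions
        self.status = {name: "notified" for name in portions}   # notified/held/declined

    def accept_and_pay(self, name):
        self.status[name] = "held"            # payment authorised and held
        return self._settle()

    def decline(self, name):
        self.status[name] = "declined"
        return self._settle()

    def _settle(self):
        if any(s == "declined" for s in self.status.values()):
            return "refund_all_held_payments"
        if all(s == "held" for s in self.status.values()):
            return "process_all_payments_and_mark_order_paid"
        return "waiting_for_other_participants"

bill = SplitBill(90.0, {"Alisha": 50.0, "Robin": 25.0, "Sam": 15.0})
print(bill.accept_and_pay("Alisha"))   # waiting_for_other_participants
print(bill.accept_and_pay("Robin"))    # waiting_for_other_participants
print(bill.accept_and_pay("Sam"))      # process_all_payments_and_mark_order_paid
```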
  • In another exemplary embodiment, the Split-Bill method works as follows: When members of a shopping trip decide to split a bill on a supported website or application, each of them is presented with a screen such as the one shown in FIG. 21B in an exemplary embodiment. In the first (leftmost) column, the user enters the amount that he/she would like to pay (top row) of the total amount. Other users are shown similar screens. As the user enters this amount, it is "flooded" (viewable) to the other users' screens. The user can also enter the amount that he/she would like other members to pay in the first column. The other columns indicate the amounts that others have entered. For example, in FIG. 21B it is shown that Alisha has entered "50" as the amount that she would like to pay. In the 3-by-3 matrix shown, each column is for entering the amount that a member of the trip would like the members of the trip to pay. A user (user A) can optionally override the amount that another user (user B) should pay in their (user A's) column in the row that corresponds to the user's (user B) name. If the amounts entered by all the members for any given row are consistent, a check mark appears. In an exemplary embodiment, a user must enter the value in at least their field and column to indicate approval. The user cannot override the values in the grayed out boxes as these boxes represent the values entered by other users. If there is inconsistency in the values entered in any row, a cross appears next to the row to indicate that the values entered by the users don't match. As the users enter their amounts, an "Adds up to" box indicates the sum of the amounts that the users' contributions add up to. In an exemplary embodiment, the amounts along the diagonal are added up in the "Adds up to" box. Another field indicates the required total for a purchase. Yet another field shows how much more money is needed to meet the required total amount. If all rows are consistent, the users are allowed to proceed with the transaction by clicking on the "continue" button. The amounts entered can be the amounts in a currency or percentages of the total. In an exemplary embodiment, users can also view a total of the amounts that each of the users is entering, as shown in FIG. 21C in an exemplary embodiment. Users can also select a radio button or a check box below the column corresponding to a user to indicate that they would like that user's allocation of amounts across friends. For example, as shown in FIG. 21C the user has chosen Alisha's way of splitting the bill. If all members choose Alisha's way of splitting the bill, then a check mark appears below Alisha's column and the users are allowed to proceed by clicking on the "continue" button. The user whom other members are choosing for splitting the bill may also be communicated, for example, using colours. This mode of operation is similar to the synchronous mode of operation as discussed with reference to FIG. 7C.
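The row-consistency and "Adds up to" checks of the matrix screen can be sketched as follows. The exact validation rules (how blank boxes are treated, the rounding tolerance) are simplifying assumptions made for the sketch, and the amounts are illustrative.

```python
# Sketch: the matrix-style Split-Bill screen -- each member fills in a column
# proposing what every member should pay; a row is consistent when all entered
# values in it agree, and the diagonal entries are summed into the "Adds up
# to" figure. None marks a box a member left blank. Values are illustrative.
def check_matrix(names, matrix, required_total):
    """matrix[i][j]: amount that member j proposes member i should pay (or None)."""
    row_consistent = []
    for i, _ in enumerate(names):
        entered = [v for v in matrix[i] if v is not None]
        row_consistent.append(len(entered) > 0 and len(set(entered)) == 1)
    adds_up_to = sum(matrix[i][i] or 0 for i in range(len(names)))
    still_needed = max(0.0, required_total - adds_up_to)
    can_continue = all(row_consistent) and abs(adds_up_to - required_total) < 0.01
    return row_consistent, adds_up_to, still_needed, can_continue

names = ["Alisha", "Robin", "Sam"]
matrix = [
    [50.0, 50.0, None],    # what each member proposes Alisha should pay
    [None, 25.0, 25.0],    # ... Robin should pay
    [15.0, None, 15.0],    # ... Sam should pay
]
print(check_matrix(names, matrix, required_total=90.0))
# ([True, True, True], 90.0, 0.0, True)
```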
  • In another exemplary embodiment, the Split-Bill method works as follows: When members of a shopping trip decide to split a bill on a supported website or application, each of them is presented with a screen such as the one shown in FIG. 21D in an exemplary embodiment. Users can enter the amount that they would like to pay in a field next to their name. If the amount adds up to the required total, the users are allowed to continue with the purchase.
  • In another exemplary embodiment, the Split-Bill method works as follows: When members of a shopping trip decide to split a bill on a supported website or application, each of them is presented with a screen such as the one shown in FIG. 21D in an exemplary embodiment. Users can enter the amount that they would like to pay in a field next to their name. In this case, the users can enter an amount in any of the fields next to the members' names simultaneously using the communication protocol described with reference to FIG. 7D. The users also share the same view. Each user also gets to approve his/her amount by checking a box next to their name. If the amount adds up to the required total and each of the users has approved his/her amount, the users are allowed to continue with the purchase. This mode of operation is similar to the common mode of operation as discussed with reference to FIG. 7D.
  • During a shopping session, individual shopping carts as well as shared shopping carts are available. In an exemplary embodiment, changes made by a user of the shared shopping cart are synchronized across all users of the shared shopping cart. An alternative option would be to make the shopping cart only viewable to others (read-only). Split-Bill also enables product-wise division. Users can also pick and choose which items from each of the members' shopping carts they would like to pay for. An exemplary embodiment of such a method is illustrated in FIG. 21E. As shown in this figure, a user has chosen to pay for his "Red Jersey", Alisha's sweater, and Robin's socks and tuque. The user's total is also shown. Items that are paid for are shipped to the respective users (shopping cart owners) or can be shipped to a common address (common to all users). Reference is now made to FIG. 21F where another exemplary embodiment of Split-Bill is shown. Users can drag and drop items from a shared shopping cart into a list under their name. The list indicates the items that the user would like to pay for. At the bottom of the list the total of each user is also shown. Reference is now made to FIG. 21G where another exemplary embodiment of Split-Bill is shown. Users can drag and drop items from a shared shopping list into a list under their name and indicate the amount of the total bill that they would like to pay. This could be an amount in a currency or a percentage of the bill. In another exemplary embodiment, users can state an amount or a maximum amount (which could even be zero) that they can afford to pay. Other users can make payments on behalf of this user.
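The product-wise division of FIG. 21E can be illustrated by assigning each cart item to a payer and summing per-payer totals, as sketched below; the item names, prices and assignments are illustrative values loosely following the figure's example.

```python
# Sketch: product-wise Split-Bill division -- each cart item is assigned to
# the member who will pay for it, and a running total per payer is shown.
# Item names, prices and owners are illustrative assumptions.
from collections import defaultdict

cart = [
    {"item": "Red Jersey", "owner": "User",   "price": 40.0, "payer": "User"},
    {"item": "Sweater",    "owner": "Alisha", "price": 35.0, "payer": "User"},
    {"item": "Socks",      "owner": "Robin",  "price": 8.0,  "payer": "User"},
    {"item": "Tuque",      "owner": "Robin",  "price": 12.0, "payer": "User"},
    {"item": "Scarf",      "owner": "Alisha", "price": 20.0, "payer": "Alisha"},
]

def totals_by_payer(items):
    totals = defaultdict(float)
    for entry in items:
        totals[entry["payer"]] += entry["price"]
    return dict(totals)

print(totals_by_payer(cart))   # {'User': 95.0, 'Alisha': 20.0}
```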
  • The Split-Bill feature can also work in any combination of the methods described above. In the above embodiments of Split-Bill, options are also available to split a bill evenly between users or to split the outstanding or remaining amount evenly between users. The above embodiments of Split-Bill can also be used in conjunction with multiple shopping trips. A trip leader may also be assigned to decide on how the bill is split. Recurring or monthly payments may also be shared between friends using the above methods. This can also take place in a round-robin fashion where one user pays the first month, a second user the second month and so on. The Split-Bill feature allows processing of credit, debit, points cards and/or other supported payment options. Payments can be made using any combination of these options. For example, a product that is about to be purchased may be paid for partially from a debit/bank account, partially via a credit card, partially using a gift card, and partially using points or store credits. Points or credits may come from stores or from a user's friends. Also supported is the borrowing/lending of money and points between friends. This can be used in conjunction with a contract management system. The Split-Bill feature enables currency conversion. Users in different countries can view the amount to be shared in their local currency or other currencies of their choice. The Split-Bill feature also enables users to request money or points from their friends (including those on social networks such as Facebook) or other users. This can be done when the user from whom money is being requested is online or offline, similar to the method described above. Upon approval, money or points get transferred to the account of the user who requests funds. This can then be transferred to the user's debit account, credit account, points account, etc. The amount of a transaction may also be split between companies and other groups. For sites that do not support the Split-Bill feature, two or more parties can deposit to an account using the Split-Bill service on a supported site, upon which a debit or a credit or a points card or an electronic money voucher is created. This account can then be used on a third party site for a shared purchase. In an exemplary embodiment, the Split-Bill method is also available as an independent component on a website for people to share the amount of a transaction. Users can collaboratively buy products/services and send them as a gift to other users. Users can also ship gifts to users based on their location as specified in social networking sites or on our site or based on their mobile device location. This allows users to send gifts to an up-to-date address of the users' friends.
  • Investments may be made through Split-Bill. Other financial transactions may be conducted in a collaborative manner, including currency exchange. Currency may be exchanged, in an exemplary embodiment, with a friend or someone in a friend's network so that the user may ensure that the transaction is being carried out through a trusted reference. A person traveling to another country may exchange money with a relative or friend in that country. In another exemplary embodiment, shares and stocks may be traded collaboratively, for example through a split bill interface. Tools may be available for investors to collaboratively make investments and assist them in making decisions.
  • Reference is now made to FIG. 35 where a virtual model is shown in display windows illustrating examples of how a user can animate their character model's expressions/movements/actions and/or change their model's look. The expressions/actions/dialogue/movements of the character model can be synchronized with the user's own expressions/actions/dialogue/movements as tracked in the image/video (in an exemplary embodiment using a method similar to [52]) of the user, or these can be dictated by the user through text/speech and/or other command modes or through pre-programmed model expression/action control options provided through system 10. The display window 682 shows the virtual model ‘raising an eyebrow’; display window 684 shows the model with a surprised expression sporting a different hairstyle; display window 686 shows the virtual model under different lighting conditions with a different hair colour. The exemplary embodiments in the figure are not restrictive and are meant to illustrate the flexibility of the virtual models and how a user can animate and/or control their virtual model's looks, expressions, actions, background/foreground conditions etc. Facial expressions may be identified or classified using techniques similar to those used in [53]. The virtual model can thus be manipulated even when the user uses it to communicate and interact with other users, for example, as in a virtual chat session. In another exemplary embodiment of collaborative interaction involving a user's model, stylists and friends of the user can apply makeup to the user model's face to illustrate makeup tips and procedures. The makeup may be applied to a transparent overlay on top of the content (user model's face) being displayed. The system allows the user to save the animation and collaboration sessions involving the user model.
  • Reference is now made to FIG. 36. This figure, in an exemplary embodiment, shows a sample virtual store window 690 involving virtual interaction between the user and a sales service representative in a real jewelry store, and incorporating augmented reality elements as described next. In this example, a sales representative 691 interacts with the user in real-time via streaming video (acquired by a webcam or some other real-time video capture device). The user in this instance interacts with the sales personnel via the user model 650 which is lip-syncing to the user's text and speech input. Panoramic views of the displays 692 in the real jewelry store appear in the store window 690. An ‘augmented reality display table’ 693 is present on which the sales representative can display jewelry items of interest to the user. Virtual interaction takes place via plug and play devices (for example I/O devices such as a keyboard, mouse, game controllers) that control the movement of simulated hands (of the user 694 and sales personnel 695). Additionally, a device that functions as an ‘articulated’ control i.e., not restricted in movement and whose motion can be articulated as in the case of a real hand, can be used to augment reality in the virtual interaction. Store personnel such as sales representatives and customer service representatives are represented by virtual characters that provide online assistance to the user while shopping, speak and orchestrate movements in a manner similar to real store personnel and interact with the user model. The augmented reality display table is featured by system 10 so that vendors can display their products to the customer and interact with the customer. For example, a jewelry store personnel may pick out a ring from the glass display for showing the user. A salesperson in a mobile phone store may pick out a given phone and demonstrate specific features. At the same time, specifications related to the object may be displayed and compared with other products. Users also have the ability to interact with the object 696 in 2D, 3D or higher dimensions. The salesperson and customer may interact simultaneously with the object 696. Physics based modeling, accomplished using techniques similar to those described in [54], is incorporated (these techniques may be utilized elsewhere in the document where physics based modeling is mentioned). This display table can be mapped to the display table in a real store and the objects virtually overlaid. A detailed description 697 of the object the user is interested in is provided on the display while the user browses the store and interacts with the store personnel. A menu providing options to change settings and controls is available in the virtual store window, by clicking icon 540 in an exemplary embodiment. The above example of a virtual store illustrates features that make the virtual store environment more realistic and interaction more life-like and is described as an exemplary embodiment. Other manifestations of this virtual store may be possible and additional features to enhance a virtual store environment including adding elements of augmented reality can be incorporated.
  • Reference is now made to FIG. 22, where an apparel display window 400 is shown in an exemplary embodiment. The display windows provide visual representations of the apparel items that are available for the user to model/purchase. The display window 400 comprises a visual representation 402 of the apparel item. In the example provided herein, a visual representation of a skirt is provided. Further information regarding pricing and ordering is available should the user desire to purchase this item. The user is able to view reviews of this apparel item that have been submitted by other users by engaging the review icon 404 in an exemplary embodiment. The user is able to further share this particular apparel item with friends by engaging the share icon 406 in an exemplary embodiment. If the user is browsing in the regular mode of operation (not on a shopping trip with friends), clicking on this icon presents the user with a screen to select a mode of operation. If the synchronous mode or the common mode of interaction is chosen, the user is presented with a shopping trip window as described with reference to FIG. 40. If the user chooses the asynchronous mode of operation, the item gets added to the “shared items” list. The user can manage shared items through an interface as described with reference to FIG. 23. If the user is engaged in the synchronous or common modes of interaction, clicking on the icon 406 adds the item to the “shared items” list. The user can also send this item or a link to the item to users of social networking sites. The user is able to try on the apparel items on their respective user model by engaging the fitting room icon 408 in an exemplary embodiment. The method by which a user may try on various apparel items has been described here for purposes of providing one example of such a method. Suitability of fit information may be displayed next to each catalog item. In an exemplary embodiment, this is done by stating that the item fits (‘fits me’) 410 and/or placing an icon that conveys the fit information (e.g., icon 550). Further details of displaying the goodness of fit information are described with reference to FIG. 30. A 2D or 3D silhouette 554 may also be placed next to catalog items to visually show goodness of fit. Information on how the apparel feels is also communicated to the user. This is done, in an exemplary embodiment, by displaying a zoomed-in image of the apparel 412 (“Feels Like”) illustrating the texture of the apparel. The sound that the apparel makes when rubbed may also be made available.
  • Models of products (photorealistic 3D models or NPR models) for use in catalogs may also be constructed by using images submitted by users. Images contributed by several users may be stitched together to create models of products. Similarly, images from several users may also be used to create a user model of those users' friend. Holes or missing regions, if any, present in the constructed models may be filled with texture information that corresponds to the most likely texture for a given region. The most likely texture for any given region can be estimated, in an exemplary embodiment, using Naïve Bayes or KNN. This can be done as described earlier, using statistics drawn from regions in images surrounding the holes as the input and the texture in the missing region as the output.
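The hole-filling step just described can be illustrated with a much-simplified sketch. It assumes a single-channel texture stored as a NumPy array with NaN marking missing texels, and uses a KNN regressor over simple window statistics, in the spirit of the KNN option mentioned above; none of the parameter values come from the system.

```python
# Minimal sketch (not the actual system): fill masked pixels of a texture by
# regressing each missing pixel on statistics of its surrounding window (KNN).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def window_stats(img, y, x, r=3):
    """Mean and std of the known pixels in a (2r+1)x(2r+1) window around (y, x)."""
    patch = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
    known = patch[~np.isnan(patch)]
    if known.size == 0:
        return np.array([0.0, 0.0])
    return np.array([known.mean(), known.std()])

def fill_holes_knn(img, k=5):
    """img: 2-D float array with NaN marking missing texels; returns a filled copy."""
    filled = img.copy()
    ys, xs = np.where(~np.isnan(img))          # known pixels: training data
    hy, hx = np.where(np.isnan(img))           # holes: to be predicted
    X_train = np.array([window_stats(img, y, x) for y, x in zip(ys, xs)])
    y_train = img[ys, xs]
    model = KNeighborsRegressor(n_neighbors=k).fit(X_train, y_train)
    X_holes = np.array([window_stats(img, y, x) for y, x in zip(hy, hx)])
    filled[hy, hx] = model.predict(X_holes)
    return filled

texture = np.random.rand(64, 64)
texture[20:28, 30:38] = np.nan                 # simulate a hole in the model's texture
repaired = fill_holes_knn(texture)
```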
  • When a user has chosen to try on apparel items, the user is presented with a list of the various apparel items that they have selected to try on, in an exemplary embodiment. Reference is now made to FIG. 24, where a sample fitting room window 420 is shown in an exemplary embodiment. The fitting room window 420 lists the various apparel items that the user has selected to try on. Each apparel item has an identification number assigned to it by system 10 for purposes of identification. By selecting one of the items from the selection window 422, and clicking on icon 424, the user requests that the system 10 fit and display the apparel item on the user model. The status bar 426 displays the command that is executed—“dressbot:tryon=30” indicating that the item with ID (identification number) equal to 30 is being fitted on the user model.
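As a small illustration, the status-bar command shown in FIG. 24 could be parsed as follows. The general ‘dressbot:action=id’ syntax assumed here extrapolates from the single example in the figure.

```python
# Sketch: parse a hypothetical fitting-room command such as "dressbot:tryon=30".
import re

COMMAND_PATTERN = re.compile(r"^dressbot:(?P<action>\w+)=(?P<item_id>\d+)$")

def parse_command(command):
    """Return (action, item_id) for a well-formed command, else raise ValueError."""
    match = COMMAND_PATTERN.match(command.strip())
    if not match:
        raise ValueError(f"unrecognized command: {command!r}")
    return match.group("action"), int(match.group("item_id"))

action, item_id = parse_command("dressbot:tryon=30")
assert action == "tryon" and item_id == 30   # item 30 is fitted on the user model
```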
  • An item of apparel is composed of patterns (in tailoring, stitch-and-sew terminology). All items of apparel that are associated with the system 10 have an apparel description file (ADF) associated with them. In an exemplary embodiment, the ADF file can be in XML format and the CAD file provided to system 10 by the retailer module 58 can be encapsulated within this ADF file. The apparel description file contains all information regarding the apparel including information necessary to model and display the apparel and to determine its fit on a model. This includes, for example, the pattern information for a given apparel; how the individual components of the apparel are stitched together; material properties such as composition, texture, etc; cloth care instructions; source information (country, manufacturer/retailer); optical properties including the BRDF (Bidirectional Reflectance Distribution Function), bump map etc; microscopic images to reveal texture; location of where each piece goes with respect to anatomical landmarks on models. Any and all information related to the actual apparel and any and all information needed by system 10 to create the virtual apparel, display and fit it on a model is contained within the ADF file. An ADF file in XML format is presented in FIG. 37 in an exemplary embodiment. The ADF file 700 contains header information 701 followed by information describing a specific apparel. The apparel tags 702 indicate the start (<apparel>) and end (</apparel>) of the apparel description. Specific tags are provided within this region for describing different aspects of the apparel. For instance, the manufacturer description 703 includes the name of the manufacturer, the country source, the composition and size information in this file. The care information 704 provides details on whether the apparel can be washed or dry-cleaned; the pattern tags 705 enclose the CAD filename containing the details on apparel pattern data; the fitting information 706 that describes how a virtual manifestation of the apparel fits on a virtual human model is encapsulated by the fitting tags 706; the media tags 707 enclose filenames that provide visual, audio and other sense (such as feel) information about the apparel, as well as the files and other data containing display information about the specific apparel (the 3D display data for the apparel model lies within the <render> tag in this example). Further store information 708 such as the unique store ID in the system 10, the name of the store and other details relating to a specific store such as the return policy is provided in the ADF file. The ADF file 700 in FIG. 37 is presented for purposes of illustration and is not meant to be restricted to the XML format or the tags given in the file. Other manifestations of the ADF are possible and other tags (descriptors) may be included to describe a given apparel. Much of the information describing the apparel is contained in the CAD file obtained from the retailer 58, while the information necessary to model, display and fit the apparel is augmented with the CAD file to form the ADF. Reference is now made to FIG. 38 where a quick overview is provided of ADF file creation and use, in an exemplary embodiment. Apparel information 711 described previously, as well as information associated with the specific apparel in its CAD file, is packaged by the ADF creation software 712 to form the ADF file 700.
This ADF file information is then subsequently used in modeling the apparel digitally for purposes of display in electronic catalogues and displays 713; for fitting on 3D user models 714; for displaying and listing in the virtual wardrobe and fitting room 715 as well as other forms of digital apparel viewing and interaction. Pattern information comprising the apparel is extracted. This information is contained in the CAD and/or ADF files and is parsed to form the geometric and physics models of the apparel. In forming the geometric model, a mesh is generated by tessellating 3D apparel pattern data into polygons. This geometric model captures the 3D geometry of the apparel and enables 3D visualization of apparel. The physics model is formed by approximating the apparel to a deformable surface composed of a network of point masses connected by springs. The properties of the springs (stiffness, elongation, compressibility etc.) are adjusted to reflect the properties of the material comprising the apparel. The movement of the cloth and other motion dynamics of the apparel are simulated using fundamental laws of dynamics involving spring masses. Cloth dynamics are specified by a system of PDEs (Partial Differential Equations) governing the springs whose properties are characterized by the apparel material properties. The physics model enables accurate physical modeling of the apparel and its dynamics. Reference points on the apparel specify regions on the apparel corresponding to specific anatomical landmarks on the human body. The information concerning these points and their corresponding landmarks on the body will be contained in the CAD and ADF files. The reference points on the geometric and physics based models of the apparel are then instantiated in 3D space in the neighbourhood of the corresponding anatomical landmarks of the character model. From these initial positions, the reference positions are pushed towards the target anatomical positions. At the same time, springs interconnecting seams are activated to pull together the simulated apparel at the seams. FIG. 29A illustrates an example of the visual sequences 460, from left to right, displayed to the user in a window while the apparel is being fitted on a non photorealistic rendering of the user model. An example of the visual sequences 462, from left to right, presented to the user in a window during hair modeling on the non photorealistic rendered user model is also shown in FIG. 29A. The hair 464 on the user model is animated using physics-based techniques which permit realistic simulation of hair look and feel, movement and behavior.
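As an illustration of the pipeline just described, the following sketch reads material parameters from a hypothetical ADF-style XML fragment and then advances a tiny point-mass/spring cloth patch by one explicit-Euler step. The tag names, attribute names, numeric values and the integration scheme are assumptions made for this sketch; they are not taken from the system's actual ADF format or cloth solver.

```python
# Sketch (assumed tags/values): read spring parameters from an ADF-like XML
# fragment and run one explicit-Euler step of a point-mass/spring cloth model.
import numpy as np
import xml.etree.ElementTree as ET

ADF_SAMPLE = """<adf>
  <apparel id="30">
    <manufacturer country="IT" name="ExampleCo"/>
    <material stiffness="80.0" damping="0.6" mass_per_vertex="0.002"/>
    <pattern cad="skirt_30.cad"/>
  </apparel>
</adf>"""

def load_material(adf_xml):
    """Extract spring parameters from the (hypothetical) <material> tag."""
    root = ET.fromstring(adf_xml)
    m = root.find("./apparel/material")
    return (float(m.get("stiffness")),
            float(m.get("damping")),
            float(m.get("mass_per_vertex")))

def euler_step(pos, vel, springs, rest_len, k, c, mass, dt=1e-3, g=-9.81):
    """One explicit-Euler update of a point-mass/spring cloth approximation."""
    force = np.zeros_like(pos)
    force[:, 2] += mass * g                      # gravity on every point mass
    for (i, j), L0 in zip(springs, rest_len):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d) + 1e-12
        f = k * (length - L0) * d / length       # Hooke spring force along the edge
        force[i] += f
        force[j] -= f
    vel = (vel + dt * force / mass) * (1.0 - c * dt)   # damped velocity update
    return pos + dt * vel, vel

k, c, mass = load_material(ADF_SAMPLE)
# a 2x2 patch of cloth: 4 point masses connected by 4 edge springs
pos = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
vel = np.zeros_like(pos)
springs = [(0, 1), (0, 2), (1, 3), (2, 3)]
rest_len = [np.linalg.norm(pos[j] - pos[i]) for i, j in springs]
pos, vel = euler_step(pos, vel, springs, rest_len, k, c, mass)
```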
  • Reference is now made to FIG. 29B where a user model adjustments interface 470 is shown in an exemplary embodiment, containing a non photorealistic rendering of a user model. Options to make body adjustments are displayed upon clicking the menu display icon 476. A sample mechanism is shown for making adjustments to the body. Slider controls 475 and 477 can be used to make skeleton and/or weight related adjustments to the user model. Skeleton adjustments allow modifications to be made to the generative model of the skeletal structure of the user model. This allows anatomically accurate changes to be made to the user model. In an exemplary embodiment, upon moving some of the skeleton adjustment controls 475 to the right, a taller user model (with elongated bones) 472 is obtained whereas, by moving some of the skeleton adjustment controls 475 to the left, a petite user model 471 is obtained. In another similar exemplary embodiment, weight adjustment controls 477 can be used to obtain a heavier user model 474 or a slimmer user model 473. In an exemplary embodiment, manipulating the skeletal adjustment controls increases or decreases the distance between a joint and its parent joint. For example, increasing the length of the shin increases the distance between the ankle joint and its parent joint, the knee joint. In an exemplary embodiment, manipulating the weight adjustment controls increases or decreases the weight assigned to the corresponding vertices and moves them closer or farther from the skeleton. For example, increasing the weight of a selected portion of the shin places the vertices corresponding to that region further from the skeleton. Continuity constraints (a sigmoid function in an exemplary embodiment) are imposed at the joints to ensure plausible modifications to the user model. Users can also deform the user model by nudging the vertices corresponding to the user model. Users can also specify the body muscle/fat content which sets the appropriate physical properties. This is used, for example, to produce physically plausible animation corresponding to the user.
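The bone-lengthening and falloff behaviour described above can be sketched as follows. The joint coordinates, stand-in vertex data, skinning-weight interpretation and sigmoid constant are illustrative assumptions rather than values used by the system.

```python
# Sketch: lengthen the shin (knee -> ankle bone) and move skinned vertices with it,
# using a sigmoid of the per-vertex skinning weight as a smooth continuity falloff.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adjust_bone_length(parent_joint, child_joint, vertices, weights, scale):
    """Scale the bone from parent to child; drag vertices by their (smoothed) weights."""
    new_child = parent_joint + scale * (child_joint - parent_joint)
    offset = new_child - child_joint
    falloff = sigmoid(8.0 * (weights - 0.5))     # smooth blend near the parent joint
    return new_child, vertices + falloff[:, None] * offset

knee = np.array([0.0, 0.0, 0.45])
ankle = np.array([0.0, 0.0, 0.0])
shin_vertices = np.random.rand(200, 3) * np.array([0.1, 0.1, 0.45])  # stand-in mesh
shin_weights = 1.0 - shin_vertices[:, 2] / 0.45   # vertices near the ankle move most
new_ankle, new_vertices = adjust_bone_length(knee, ankle, shin_vertices, shin_weights, 1.1)
```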
  • Reference is now made to FIG. 29C where a sample window is shown demonstrating product catalogue views available to the user from which apparel may be selected for fitting onto their user model. A product catalogue 480 may be displayed by clicking a menu display icon 482. The user may then select a given outfit/apparel/product from the catalogue upon which it will be fit and displayed on the user model. In exemplary embodiments, product catalogues are available in the local application 271 or within the browser or a combination of both as described with reference to FIG. 10 and FIG. 31.
  • By clothing the user's model with apparel chosen by the user, the user is able to visualize and examine the appearance of the apparel on their body from an external perspective and also get an approximate idea of how the apparel fits. In order to communicate fit information to the user in more exact terms, metrics are used that define the suitability of apparel not just based on size information but also as a function of body type and fit preferences. The system relays suitability of fit information to the user using aspects that include, but are not limited to, quantitative and qualitative measures. For example, goodness of fit is a quantitative metric. In an exemplary embodiment, for determining apparel goodness of fit on a user model, the convex hull of the model is compared with the volume occupied by a given piece of clothing. As mentioned previously, apparel can be modeled as springs by system 10. In order to determine regions of tight fit in this case, in an exemplary embodiment, physical stress and strain on the apparel and/or model can be computed using the spring constant of the apparel material. Regions of loose fit may be determined by evaluating normals from the surface. The distance between the body surface and the apparel surface can be ascertained by computing the norm of the vector defined by the intersection of the surface normal to the model's surface with the cloth surface. This process can be made computationally efficient by sampling surface normals non-uniformly. For instance, regions of high curvature and greater importance may have many more normals evaluated than regions of low curvature. A simplified sketch of this gap computation is provided at the end of this passage. In assessing suitability of fit, qualitative aspects are also incorporated by system 10. These include, but are not limited to, user preferences. An example of this is the user preference for loose fitting clothes. On their user model, users can visualize suitability of fit through various visualization schemes provided by system 10. In an exemplary embodiment, regions of different fit on the apparel may be colored differently. Visual indicators include, but are not limited to, arrows on screen, varying colors, and digital effects including a transparency/x-ray vision effect where the apparel turns transparent and the user is able to examine fit in the particular region. Some examples are illustrated in FIG. 30. The visualization options are provided to the user via a menu available by clicking the icon 540, in an exemplary embodiment. In this figure, different fit regions are depicted using coloured arrows 542, highlighted regions 544 as well as transparency/x-ray effects 546. Transparency/x-ray effects 546 allow fit information to be visualized with respect to the body surface. In FIG. 30, the apparel on the 3D body model is made transparent in order for the user to visually examine overall apparel fit information—regions of tight/proper/loose fit. With reference to FIG. 30, regions of tight fit are shown using red coloured highlight regions (armpit region). Loose fitting regions are shown via green arrows (upper leg) and green highlight (hips). Comfort/snug fitting is depicted using orange arrows (waist) and yellow highlight (lower leg). Users may also define the numerical margins that they consider ‘tight’, ‘loose’ and so on for different apparel. For example, the user may consider a shirt to be proper fitting around the arms if the sleeves envelop the arm leaving a 1-2 cm margin. The user may specify these margins and other settings using the options menu 540 available to the user.
The transparency/x-ray effect also provides visual information with regards to layers of clothing. The users may wish to select particular items for visualization on the model. In one exemplary embodiment, they may select from the itemized list 552 which lists all of the apparel items the user has selected to fit on the user model as part of an ensemble for instance. Accordingly, the items that are not selected may disappear or become transparent/light in colour (i.e., recede or fade) in order to make the selected items of apparel more prominent. Thus, the transparency effect emphasizes certain items visually while still preserving other layers of clothing so that the highlighted apparel may be examined with respect to other items it will be worn in combination with. The layers worn by the model in FIG. 30 may be examined from different perspectives of the model (cross-sectional view for example). This page also provides the user with the menu (available by clicking icon 540) described previously for setting/manipulating the model and environment as well as setting view options and share options (for example, sharing model views with friends in specific apparel). Other purposes for which visual indicators may be applied include, but are not limited to, relaying information to the user regarding the quality or make of an apparel item. For example, different colours may be used to outline or highlight a shoe sole in order to convey whether the given shoe is hard-soled or soft-soled. Separate icons, such as 548, may also be provided to interact with and/or manipulate the model as shown in FIG. 30. Additionally, an icon summarizing suitability of fit may be provided 550. This will incorporate all the quantitative and/or qualitative aspects assessing goodness of fit and give the overall consensus on whether the apparel will fit the user (thumbs up) or not (thumbs down) in an exemplary embodiment. The ‘summary’ icon may be programmed by default, for example, to give a ‘thumbs up’ if two qualitative and quantitative aspects are satisfied. This default setting may be changed to suit the user's suitability of fit requirements. More details on the fit are available to the user by clicking on or hovering over the icon 550. The user can also choose to display portions of these details next to the icon through the preferences page. In an exemplary embodiment, the user can see the fit information by taking an item to the fitting room (e.g., by dragging and dropping a catalog item into the fitting room). In another exemplary embodiment, the user can see all the items that the user is browsing with the fit information without the need to place the item in the fitting room. All instances of features shown in FIG. 30 are illustrative examples; they are not meant to be restrictive and can embody and encompass other forms, illustrations and techniques.
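Following up on the gap computation mentioned in this passage, the sketch below estimates the distance between sampled body-surface points and the apparel surface along each body normal, then maps the gaps to the user-configurable fit labels of FIG. 30. The point sets, the near-ray test and the centimetre thresholds are assumptions for illustration only.

```python
# Sketch: estimate body-to-apparel gaps along body-surface normals and classify
# fit regions with user-configurable margins (values in metres are assumptions).
import numpy as np

def fit_gap(body_point, body_normal, apparel_points, ray_radius=0.02):
    """Smallest positive distance along the normal to an apparel point near the ray."""
    d = apparel_points - body_point
    along = d @ body_normal                                  # signed distance along normal
    lateral = np.linalg.norm(d - along[:, None] * body_normal, axis=1)
    candidates = along[(along > 0.0) & (lateral < ray_radius)]
    return candidates.min() if candidates.size else np.inf

def classify_fit(gap, tight=0.005, loose=0.03):
    """Map a gap to a label; `tight`/`loose` are user-set margins (e.g. 0.5 cm / 3 cm)."""
    if gap < tight:
        return "tight"
    if gap > loose:
        return "loose"
    return "proper"

body_points = np.random.rand(50, 3)
body_normals = np.tile(np.array([0.0, 0.0, 1.0]), (50, 1))   # stand-in outward normals
apparel_points = body_points + np.array([0.0, 0.0, 0.012])   # cloth ~1.2 cm above surface
labels = [classify_fit(fit_gap(p, n, apparel_points))
          for p, n in zip(body_points, body_normals)]
```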
  • Reference is now made to FIG. 23, where a sample shared item window 430 is shown in an exemplary embodiment. The shared item window 430 displays the various items that the user has shared, in a shared list 432, and a list of items that friends have shared in a friend shared list 434. The snapshots list 436 allows a user to share various images that they have captured of their user model with other users. When viewing and interacting with the user model, the user is provided the ability to capture an image or snapshot of the model, and share the respective snapshot or image with other users. These features illustrate one exemplary embodiment of the asynchronous mode of operation of a shopping trip.
  • Reference is now made to FIG. 25, where a sample wardrobe image 440 is shown in an exemplary embodiment. Wardrobe images 440 are used in an exemplary embodiment to display to the user the apparel items that a user has added to their wardrobe. A user may browse all of the items that are in their virtual wardrobe, and may request that they receive comments regarding items in their wardrobe from a consultant. The user is presented with options as in the tabbed menu 442, shown in an exemplary embodiment, so that they can quickly navigate and browse the apparel in their wardrobe and fitting room; try on apparel on their model; as well as get feedback regarding apparel and dressing style options from the style consultant. From left to right, the icons 444 available to the user in their wardrobe include: (1) the icon that displays to the user apparel information such as the make and manufacturer details, care instructions, store it was bought from, return policy etc. as well as user tagged information such as who gifted the apparel, the occasion to wear it for, etc.; (2) the icon to fit selected apparel on the user model; (3) the icon to share selected apparel with other users. The icons shown have been presented as examples and may include icons that perform other functions. The icons shown may be represented with different symbols/pictures in other manifestations. Reference is made to FIG. 28 where a drawing of a 3D realization of a virtual wardrobe is shown. This wardrobe can be incorporated with physics based animation functionality so that users can drag around objects; arrange and place them as desired in the wardrobe; move them into boxes or bins or hangers or racks etc. Users will be able to visualize articles of clothing and other apparel in their wardrobe; tag each item with a virtual label that may contain apparel specific information as well as user specified information such as the date the apparel was bought; the person who gifted the apparel; upcoming events on which it can be worn; as well as links to other items in the wardrobe and/or fitting room with which that item can be coordinated or accessorized, etc. Reference is made to FIG. 26, where a sample style consultant window 450 is shown in an exemplary embodiment. The style consultant 452 is able to comment on the user's items in the wardrobe, upon request of the user. The icons 454 shown from left to right include: (1) the icon to obtain information on the specific style consultant; (2) the icon to add/remove style consultants from the user's personal list. Icon 456 provides the user with options to engage in communication with the style consultant either via email or chat, which may be text/voice/video based or may involve augmented reality, in exemplary embodiments.
  • Reference is now made to FIG. 27 where a sample diagram is presented illustrating the actions involving the fitting room 420 and wardrobe 440 that the user may engage in while browsing for apparel. While browsing for apparel displayed as in example window 400, the user can add an item to their fitting room by clicking on an icon 424 next to the item they wish to virtually try on. Once an item has been added to the fitting room 420, that item will become available to the user in the local application 271 for fitting on their model. Once the item has been added to the fitting room, the user may model the apparel item on their user model, and/or decide to purchase the item, in which case the apparel item can be added to the virtual wardrobe 440. Alternatively, the user may decide not to purchase the item, in which case the item will stay in the fitting room until the user chooses to delete it from their fitting room. The user may choose to keep a purchased item in their wardrobe 440 or delete it. If the user decides to return an item, that item will be transferred from the user's wardrobe 440 to the fitting room 420. The user may also decide to conduct an auction or a garage sale of some or all of the real items in their wardrobe. Users with access to the virtual wardrobe can then view and purchase items of interest that are on sale via system 10. The virtual items in the fitting room and wardrobe can also be purchased for use in other sites that employ virtual characters/models. The virtual apparel items in the fitting room and wardrobe may be exported to external sites or software involving virtual characters/models such as gaming sites, ‘virtual worlds’ sites and software.
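The browse/fitting-room/wardrobe flow just described can be summarized as a small state machine. The state names and allowed transitions below paraphrase the description and are assumptions, not identifiers from system 10.

```python
# Sketch: item lifecycle across catalogue, fitting room and wardrobe as a state machine.
TRANSITIONS = {
    ("catalogue", "add_to_fitting_room"): "fitting_room",
    ("fitting_room", "purchase"): "wardrobe",
    ("fitting_room", "delete"): "removed",
    ("wardrobe", "return_item"): "fitting_room",   # returned items go back to the fitting room
    ("wardrobe", "delete"): "removed",
    ("wardrobe", "sell_or_auction"): "listed_for_sale",
}

def next_state(state, action):
    """Return the item's next state, or raise if the transition is not allowed."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"cannot {action!r} while item is in {state!r}")

state = "catalogue"
for action in ("add_to_fitting_room", "purchase", "return_item"):
    state = next_state(state, action)
assert state == "fitting_room"
```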
  • Reference is now made to FIGS. 46A to 46H where other exemplary embodiments of the features described in this patent have been presented. FIG. 46A shows a profile or home page of a user registered with system 10. The user can grant access to this page to other users by setting permissions. A master menu 800 with option tabs—‘profile’, ‘browse’, ‘shopping trip’, ‘cart’, ‘shopping diary’ is shown at the top of the page. These tabs navigate to pages which allow the user to respectively, access their profile page; browse stores and products; manage collaborative shopping trips; view and manage items in cart; access personalized shopping and other miscellaneous features. Icon 801 displays the logo of system 10 and provides the user with a menu containing certain options such as home page access and help with features available to the user on system 10. Display box 802 represents the information card providing profile details of the user. Display box 804 contains hyperlinks to all stores subscribing to system 10 or just the favourite/most frequently visited stores by the user. Additionally, users may engage display box 805 for adding friends they would like to collaborate with. In an exemplary embodiment, users may add friends they normally like to acquire feedback from or go out with for shopping. The user may also add other users registered with system 10 whose fashion/style sense they like and follow (the user would be that person's ‘style fan’ in that case). Another menu 803 is provided in FIG. 46A as an exemplary embodiment which permits the user to access more features available on system 10.
  • Reference is now made to FIG. 46B where a store page 806 is shown. The products available in the store 808 may be categorized according to different fields such as department, category, size etc. Users may also be able to search for products in the store. Stores have the option of personalizing their store pages. In an exemplary embodiment, the season's collection may be displayed in a product display window 809. Items featured by the store and other item collections may also be displayed in another window 810. FIG. 46B also displays a collaborative shopping trip window 807 on the same page. The shopping trip window may be launched by clicking on icon 815. The shopping trip dialog 807 containing collaborative shopping features may open up in a separate window or in the same window/page being viewed by the user. Some collaborative shopping features are illustrated in the shopping trip dialog 807 as exemplary embodiments. A synchronized product viewer 811 enables collaborative shopping between members of that shopping trip displayed in window 814. Products being browsed by other users of the shopping trip may be viewed in the product viewer 811 via menu 812. By selecting a given user in window 814, the user can browse the shopping cart, shopping list, wishlist, wardrobe, and other personalized shopping features shown in 814 of the selected user, if that user has granted permission, by clicking on the ‘GO’ button in window 814. A chat window 813 and/or other synchronous or asynchronous means of communication may be available to enable communication with other users while shopping. Reference is now made to FIG. 46C which illustrates another layout in exemplary embodiment. This layout combines some store page features with collaborative shopping trip features on the same page. A regular store page 806 shown in FIG. 46B may convert to a page as in FIG. 46C upon activating the shopping trip. Reference is now made to FIG. 46D where a sample shopping trip manager window/page is shown. Users can create new shopping trips 816; categorize trips by labeling them and invite friends on shopping trips. Users can view and sort shopping trips 817 according to labels.
  • Reference is now made to FIG. 46E where a user's personalized ‘looks’ window/page is shown in exemplary embodiment. A ‘look’ in this context is defined as a collection of products put together by the user from different product catalogues to create a complete ensemble or attire defining a suggested ‘look’. Other users may gauge a user's fashion sense or style by browsing through the given user's looks page. A browser window 818 allows the user to browse looks they created. Each look 819 is composed of several items put together by the user. In an exemplary embodiment, a look 819 may contain a blazer, a blouse, a skirt, a pair of shoes, a handbag and other accessories to complement the given look. A user may obtain expanded views of products comprising a given look by highlighting a look 819, upon which another dialog or window 820 is launched containing expanded views 821 of items composing 819. Upon selecting an item in the expanded view 820, a product options menu 822 appears which is comprised mainly of the four option boxes outlined in red. The other sub-menus 823-826 appear upon clicking the respective main product menu options besides which they appear. The product options menu 822 is shown in exemplary embodiment and it enables tasks such as product purchase 824, product sharing with other users 823, rating the product according to different criteria 825 and addition of the product to various personalized user lists 826.
  • Reference is now made to FIGS. 46F-G where other exemplary embodiments of the fitting room window have been shown. FIG. 46F shows some features comprising the fitting room 827. These may include the shopping cart 828, or items that the user has selected but is undecided about purchasing 829, and the product viewer 830 which provides product views of the item selected from the shopping cart or the ‘decide later’ cart. Another version of the fitting room is shown in FIG. 46G which incorporates the product viewer 830, the shopping cart, ‘decide later’ items as well as other customized user lists such as shared items, top picks, my looks and others.
  • Reference is now made to FIG. 46H where a shopping diary window/page and its features are shown in an exemplary embodiment. The shopping diary is comprised of personalized user lists such as shopping lists, wishlists, gift registries, multimedia lists and others. Additionally it may incorporate a shopping blog and other features.
  • Reference is now made to FIG. 46I where a layout or directory of the mall comprising stores subscribing to system 10 is shown in an exemplary embodiment. This can be customized to form a user-specific directory that lists businesses and people that a user is associated with in a community. Stores are listed on the left and categorized by gender and age group. A map or layout 1106 of the virtual mall is presented to the user where the stores on system 10 may additionally be shown graphically or using icons. Upon selecting a store 1100 from the list, a store image 1104 may be displayed. A ‘window shopping’ feature permits users to get a live feed from the store including information 1105 such as other users browsing the store. The user may be able to identify contacts in their friends list who are browsing the store via this feature and also identify the contact's category (i.e., work—W, personal—P etc.). Additionally, other services 1102 may be listed such as dental and other clinics. Users may be able to book appointments online via a clinic appointment system available through system 10. Users may also make use of a ‘smart check’ feature that checks the user's calendar for available slots and suggests potential dates to the user for booking appointments and/or proceeds to book the appointment for the user by providing the clinic with the user's availability dates. Once the clinic confirms a booking, the smart check calendar feature informs the user of the confirmed date via SMS/email/voicemail/phone call. Users may set their preferred method of communication. It may additionally suggest to the clinic the best dates for scheduling an appointment by cross-referencing both the patient/client's schedule and the clinic's schedule. Users may mark other appointments in their digital calendar. The calendar may send appointment reminders via SMS, email, or phone call to the user depending on user preferences, and the user will be presented with options to confirm, cancel or postpone the appointment upon receiving the appointment reminder. The calendar would notify the user of the duration after which the appointment is scheduled, for example—‘your dentist appointment is in 15 minutes’. Furthermore, the smart-check feature could also cross-reference the dentist clinic's electronic schedule in real time and inform the user whether their appointment is delayed or postponed because the clinic is running late or for some other reason. A simplified sketch of this slot cross-referencing is provided following this passage. Other services such as food/catering 1103 may be available permitting the user to order online. Another feature available on system 10 is an ‘electronic receipt manager’. This feature allows the user to keep track of all receipts of products purchased through system 10 and other receipts that the user may want to keep track of. This may prove useful to users for purposes such as exchanging or returning merchandise, tax filing, corporate reimbursements and others. Users would be able to categorize receipts (for example, business, personal etc.); import and export receipts to other places such as the user's local computer or tax filing software; and conduct calculations involving amounts on those receipts. Stores on system 10 may also find it useful to have and store these electronic receipts in order to validate product purchases during a product return or exchange. (Receipts for purchases made at the physical stores can also be uploaded to the electronic receipt manager. This can also be done at the point of sale (POS).)
An interface for the Electronic Receipt Manager and further details are described with reference to FIG. 48D. The store and services layout 1106, and store and services listing may also be customized by the user to comprise favourite stores and services of the user, i.e., stores and services such as the dentist, mechanic, family physician, hair salon, eateries etc. most frequently visited by the user (this section may be entitled ‘My Business’ in an exemplary embodiment). This would permit the user to create their own virtual mall or virtual community providing quick and easy access to stores and services most beneficial to the user as well as their contact and other information. (Users can search for businesses and add them to their ‘community’ or contacts list. On searching for a business using a name, a list of businesses with that name or similar names may be shown and may be displayed in ascending order of the distance from the user's home, office, city, or current location). A user can also visit other users' virtual malls and communities. Alternatively, a virtual mall may be mapped to a real mall and contain stores and services that are present in the real mall. In an exemplary embodiment, the ‘My Business’ concept described above may be integrated with social networking sites. Tools may be available to businesses to communicate with their user clients and customers, such as via the clinic appointment system described above. Tools may be available to customers to manage receipts, product information and also to split bills. The system described with reference to FIG. 46I may be integrated with the VOS and/or VS described in this document.
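The ‘smart check’ slot cross-referencing mentioned with reference to FIG. 46I can be sketched as an intersection of the user's free slots with the clinic's open slots. The slot representation and the example dates are assumptions for illustration.

```python
# Sketch: suggest appointment slots by intersecting the user's free slots with the
# clinic's open slots (both represented here as datetimes, an assumed format).
from datetime import datetime

user_free = {
    datetime(2009, 4, 6, 10, 0),
    datetime(2009, 4, 6, 15, 0),
    datetime(2009, 4, 7, 9, 0),
}
clinic_open = {
    datetime(2009, 4, 6, 15, 0),
    datetime(2009, 4, 7, 9, 0),
    datetime(2009, 4, 8, 11, 0),
}

def suggest_slots(user_slots, clinic_slots, limit=3):
    """Earliest slots that work for both the user and the clinic."""
    return sorted(user_slots & clinic_slots)[:limit]

for slot in suggest_slots(user_free, clinic_open):
    print("proposed appointment:", slot.isoformat())   # the user then confirms or declines
```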
  • Reference is now made to FIGS. 47A-B which illustrate features that allow the user to customize pages on system 10; to set the theme and other features that allow the user to personalize the browser application's and/or local application's look and feel. FIG. 47A shows a theme options menu 1108 where a user can choose and set the colour theme of the browser pages that they will be viewing during their session on system 10. In the instance shown in FIGS. 47A and 47B, the user has chosen ‘pink’. Accordingly, the theme changes as shown via the windows in FIGS. 47A-B. FIG. 47B also shows features available to the user for specifying the delivery information 1112 of a product upon purchase. Users may specify a friend from their address book or friends' list and also specify the delivery location type (i.e., work, home etc.). The system would then directly access the latest address information of that friend from their user profile. This address would subsequently be used as the delivery address.
  • Reference is made to FIGS. 48A-F, where some features and layout designs of system 10 are illustrated in an exemplary embodiment. These features and designs can be used with the local application or a web browser or a website in exemplary embodiments. The description of these figures is provided with respect to the local application but it also holds in the case of a browser implementation or a website implementation of the same.
  • Reference is now made to FIG. 48A. The display screen 1130 is encased by an outer shell 1131, henceforth referred to as the ‘faceplate’ of the local application. The faceplate can be changed by a user by selecting from a catalogue of faceplates with different designs and configurations, which will be available under menu options.
  • On the faceplate are navigation links represented by buttons with icons 1132, in an exemplary embodiment. The lifesaver icon 1133 serves as a link for the help menu. Button 1134 represents the user account navigation link which directs the user to their account or profile space/section on the local application, consisting of the user's personal information, account and other information; settings and options available to the user to configure their local application or browser application; information and links to tools and applications that the user may add to their local or browser application. Navigation link 1135 on the faceplate is discussed with reference to FIG. 48A. Other navigation links on the faceplate will be discussed with reference to the figures that follow. Button 1135 directs the user to the user model space/section of the local application (button 1135 is highlighted with a red glow here to show that it is the active link in this figure i.e., the screen 1130 displays the user model space). In this space, users can access their 3D model 1136. Menu options 1137 for viewing, modifying and using the 3D model are provided on this page. Other features may be present in this space that can be utilized in conjunction with the 3D model. For instance, the fitting room icon 1138 is provided as an exemplary embodiment. Upon activating this icon (by clicking it for example), the fitting room contents are displayed 1139 (in the form of images here) enabling the user easy access to the apparel they would like to fit on their user model 1136.
  • Reference is now made to FIG. 48B. In this figure, navigation link 1145, which represents ‘shopping tools’, is shown as being active. Hence, in this figure, the display screen 1130 displays the shopping tools space of the local application. This space provides the user with applications and options that assist in shopping online and/or electronically via the local application software. Most of these features have been described previously in this document and are discussed here mainly to illustrate an exemplary embodiment of the local application's layout. Icon 1146, when activated (by hovering over the icon with the mouse or by clicking the icon, as examples) displays a menu of user lists 1147 (shopping list, wishlist, registries etc.), which may be used to document shopping needs. This menu 1147 subsides/is hidden when the icon is deactivated (i.e., by moving the mouse away from the icon or by clicking the icon after activating it, as examples). Icons 1148-1152 in FIG. 48B function in a similar way in terms of activation and deactivation. Icon 1148 provides a menu with features to assist in shopping and in making the shopping experience immersive. As shown in the figure, these features include the collaborative shopping trip feature and consultation (online or offline) with a style or fashion expert, among others. Feature 1149 provides the user with access to gift catalogues, gift cards/certificates, as well as information on gifts received and sent. Icon 1150 provides the shopping cart menu listing items that the user has chosen to purchase, as well as items that the user has set aside to make a purchase decision on at a later date. It also directs the user to the checkout page. Feature 1151 assists the user in making shopping related searches and also in seeking out products in specific categories such as ‘top bargains’, ‘most selling’, ‘highest rated’ etc. Icon 1152 provides features customizable by the user and/or user specific tools such as item ratings, product tags or labels etc.
  • Reference is now made to FIG. 48C. Navigation link 1160, which represents the ‘connect’ feature, is shown as being active. This link directs the user to the social networking space of the local application. The list box 1161 provides the user with a listing of the user's friends and other contacts. It may contain contact names, contact images, web pages, personal and other information relating to each contact. Feature 1162 provides the user with the facility to select multiple contacts (in this case, feature 1162 appears in the form of checkboxes as an exemplary embodiment). On the right side of the display screen 1130, social networking features are provided, i.e., applications that provide the facility to shop, communicate, interact online, virtually and/or electronically and perform other activities electronically with contacts. Some of these features are illustrated in FIG. 48C as an exemplary embodiment. Icons 1163, 1165, 1167 can be activated and deactivated in a fashion similar to icons 1146, 1148-1152 in FIG. 48B. Upon activating icon 1163, a shopping trip invite menu 1164 appears, providing the user with options to send an automated or user-customized shopping trip invitation message to all or selected contacts from the list 1161. These options are symbolized by the icons in the menu 1164. From left to right, these icons allow the user to send invitations via ‘instant notification’, ‘phone’, ‘email’, ‘SMS’ or ‘text message’, and ‘chat’. Feature 1165 provides a menu with options to communicate with all or selected users in 1161. These options are similar to the ones in menu 1164. Feature 1166 provides the user with gift giving options available on system 10. Users can select friends in 1161 via 1162 and choose from the gift options available in menu 1167. From left to right in menu 1167, these icons represent the following gift options: ‘gift cards’, ‘shop for gifts’, ‘donate with friends’, ‘virtual gifts’. This list can contain other gift options such as the ones provided by 1149 in FIG. 48B. The arrow 1168 allows the user to navigate to other applications in this space that are not shown here but may be added later.
  • Reference is now made to FIG. 48D. In this figure, the ‘financial tools’ link 1175 is shown as active and the corresponding space that the user is directed to is shown in the display screen 1130. Some of the features accessible by the user in this space are described next. Feature 1176 and other icons in this space can be activated and deactivated in a manner similar to icons in other spaces of the local application, as explained previously. Upon activating icon 1176, options menu 1177 appears displaying options that can be used to view, manage and perform other activities related to purchase receipts, refunds and similar transactions. Some of these are shown in 1177—‘billing history’ allows the user to view the complete listing of financial transactions conducted through system 10; ‘pay bills’ allows the user to pay for purchases made through system 10 via a credit card provided for making purchases at stores on system 10; ‘refunds’ assists in making and tracking refunds; ‘manage receipts’ allows the user to organize and label electronic receipts, perform calculations on receipts, and carry out other housekeeping functions involving their receipts; ‘edit tags’ allows users to create, modify and delete receipt/bill tags or labels. These could include ‘business’, ‘personal’ and other tags provided by the system or created by the user. The accounts feature 1178 provides options that allow the user to view and manage accounts—balances, transfers and other account related activities, account statistics and other account specific information. These accounts can be mapped to a user's banking accounts, which may be at multiple financial institutions; these could include credit/debit card accounts; accounts for credit cards provided for conducting financial transactions on system 10; and gift card accounts. Feature 1179 provides other tools that assist the user in managing financial transactions conducted on system 10, as well as financial accounts, and other personal and business finances. Some of these are shown in the figure and include—‘expense tracker’, ‘split bill’ which was described previously in this document, ‘currency converter’, ‘tax manager’ etc. Since this is a space requiring stringent security measures, icon 1180 informs the user of the security measures taken by system 10 to protect information in this space. The electronic receipts may be linked with warranty information for products from the manufacturer/retailer, so that users may track remaining and applicable warranty on their products over time. For the manufacturer and retailer, the electronic receipt information on a user's account may serve useful for authenticating product purchase and for warranty application terms. Since the receipt is proof of product purchase, it may also be used to link a user's account containing the receipt for a product, with the user manual, product support information and other exclusive information only available to customers purchasing the product. Other information such as accessories compatible with a product purchased may be linked/sent to the user account containing the product's receipt.
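The receipt-to-warranty linkage described above can be sketched as a simple computation over an electronic receipt record. The field names and the 365-day warranty term are assumptions, not the system's actual schema.

```python
# Sketch: compute remaining warranty from an electronic receipt's purchase date
# (field names and the 365-day term are assumptions, not system data).
from datetime import date, timedelta

def warranty_remaining(receipt, warranty_days=365, today=None):
    """Days of warranty left for the product on this receipt (0 if expired)."""
    today = today or date.today()
    expiry = receipt["purchase_date"] + timedelta(days=warranty_days)
    return max((expiry - today).days, 0)

receipt = {"store": "Example Camera Store", "item": "Canon200P",
           "purchase_date": date(2009, 1, 15)}
days_left = warranty_remaining(receipt, today=date(2009, 6, 1))
print(f"{receipt['item']}: {days_left} days of warranty remaining")
```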
  • Reference is now made to FIG. 48E where the ‘share manager’ space (1185) on the local application is described. User files on a local machine or in the user account on system 10 can be shared by activating a share icon similar to 1186. Items may be shared in other spaces as well but this space provides a comprehensive list of features for sharing items, managing shared items, users and activities involving shared items. Users can keep track of items they have shared with other users (1187, 1188). Users may change share settings and options, view their sharing activity history, tag shared items, add/remove files/folders and perform other actions to manage their sharing activity and items (1189, 1190). Users may maintain lists of other users they share items with, subscribe to and send updates to their sharing network on items shared, and maintain groups/forums for facilitating discussion and moderating activities on shared items (1191).
  • Reference is now made to FIG. 48F where the ‘user model tools’ space is described. Here the user can access, make changes to and manage their 3D simulated user model and model profile information (1212). Style tools are available to assist users in making better fashion choices while shopping for clothes and apparel (1214). These tools include consulting or acquiring fashion tips/advice from a fashion consultant, and constructing a style profile which other users or fashion experts may view in order to provide appropriate fashion related feedback. A ‘my look’ section is also present in this space where users can create their own ensembles/looks by putting together items from electronic clothing and apparel catalogues (available from online stores for example). Further, users may browse or search for outfits of a particular style in store catalogues using style tools provided in this space (1214). A virtual fitting room (1216) is present to manage apparel items temporarily as the user browses clothing stores. Apparel in the fitting room may be stored for trying on/fitting on the user model. A virtual wardrobe space (1218) also exists for managing purchased apparel or apparel that already exists in the user's physical wardrobe. The simulations/images/descriptions of apparel in the wardrobe may be coordinated or tagged using the wardrobe tools (1218). The fitting room and wardrobe feature and embodiment descriptions provided earlier also apply here.
  • Throughout the FIGS. 48A-F, the application has been referred to as a ‘local application’. However, this application may also be run as a whole or part of a web application or a website or a web browser or as an application located on a remote server.
  • Operating systems need to be redefined to incorporate collaborative environments and functions. Reference is now made to FIGS. 49A-O where an immersive Application and File Management System (AFMS) or Virtual Operating System (VOS) and its features are described. This AFMS/VOS system or a subset of its features may be packaged as a separate application that can be installed and run on the local or network machine. It can also be implemented as a web browser or as part of a web browser and/or as part of an application that is run from a web server and can be accessed through a website. It can also be packaged as a part of a specialized or reconfigurable hardware or as a piece of software or as an operating system. This application may be platform independent. It may also take the form of a virtual embodiment of a computing device shown in FIG. 2.
  • An exemplary embodiment of the AFMS system and its features is described in FIGS. 49A-L. FIG. 49A is a login window that provides a layer of security which may or may not be present when an application using this system is accessed, depending on the security level selected.
  • Reference is now made to FIG. 49B where some file category and search level features are demonstrated, in an exemplary embodiment. Default file categories may be provided with the system and some are shown in the figure in an exemplary embodiment. These are folders to store web links (1250), shopping related content (1252), multimedia related content (1254) and data files (1256). Users may create their own folders or remove any of the default folders provided, if they wish. In this figure, the shopping related folder is selected. It contains the categories or tags 1258, which are shown in an exemplary embodiment. The user can create new tags, remove tags, create sub-level tags/categories and so on. The user can also conduct tag-keyword specific file searches within the system. For instance, the user can go to the product tag and access the sub-tags (1260) within this category. The user can select the keyword Canon200P (highlighted in orange in the figure). Other tags/sub-tags (1264) can be similarly selected to be used in combination in the keyword specific search. An operator menu 1262 is provided so that the user can combine the tags using either an ‘OR’ or ‘AND’ operator in order to conduct their search, the results of which can be obtained by clicking the search operator 1266. The user may also choose to filter certain results out using the ‘filter’ function 1268 which allows the user to set filter criteria such as tag keywords and/or filename and/or subject, content or context specific words and other criteria. The user may also choose to filter out tags and/or sub-tags by using a feature that allows the user to mark the tag as shown (in this case with an ‘x’ sign 1270 as shown in an exemplary embodiment). Users can create multiple levels of tags and sub-tags as shown by 1272.
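The AND/OR tag search and the ‘x’ exclusion filter described above can be sketched with set operations over an inverted tag index. The index structure and the example tags (reusing the Canon200P keyword from the figure) are assumptions for illustration.

```python
# Sketch: combine tag keywords with AND/OR and apply an exclusion filter, using a
# simple inverted index from tag to file names (structure assumed for illustration).
from functools import reduce

tag_index = {
    "Products:Canon200P": {"review1.html", "manual.pdf", "sample_shot.jpg"},
    "Reviews:CNET":       {"review1.html", "review2.html"},
    "Receipts":           {"receipt_canon.pdf"},
}

def search(tags, operator="AND", exclude=()):
    """Files matching the tags under the chosen operator, minus excluded tags."""
    sets = [tag_index.get(t, set()) for t in tags]
    combine = set.intersection if operator == "AND" else set.union
    result = reduce(combine, sets) if sets else set()
    for tag in exclude:                         # the 'x' filter marks on tags
        result -= tag_index.get(tag, set())
    return result

hits = search(["Products:Canon200P", "Reviews:CNET"], operator="AND")
assert hits == {"review1.html"}
```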
  • In the description above, a file categorizing system has been defined in terms of tags that can be created and linked/associated with files and folders. Users can view tags, as shown in FIG. 49B, instead of filenames and folder names as in a standard file system. The tagging method can also be used to tag websites while browsing. Tags can be used with documents, images, applications, and any other type of data. Files and folders can be searched and the appropriate content retrieved by looking up one or a combination of tags associated with the files and folders. Users may also simply specify tags and the AFMS would identify the appropriate location to store/save/backup the file. In an exemplary embodiment, suppose a user is trying to save an image with the tag ‘Ireland’. The AFMS would identify the file as an image file and the tag ‘Ireland’ as a place/destination that it identifies as not being in the user's vicinity (i.e., not in the same city or country as the user). Then, the AFMS would proceed to store the file in an image space/section/file space in the subspace/subsection entitled or tagged as ‘My Places’ or ‘Travel’. If a subspace does not already exist that contains pictures of Ireland, it would create a new folder with the name/tag ‘Ireland’ and save the image in the newly created subspace, else it would save the image to the existing folder containing pictures of ‘Ireland’. In another exemplary embodiment, the user may want to save a project file tagged as ‘Project X requirements’. The AFMS determines that there are associate accounts, as described later, that share files related to Project X on the owner user's account. The AFMS proceeds to save the file in the space tagged as ‘Project X’ and sets file permissions allowing associate accounts that share Project X's space on the owner user's account to access the newly saved file (Project X requirements). Thus, the AFMS/VOS not only determines the appropriate load/save location for files, but also the permissions to set for any new file on the system. Additionally, the file and folder content may be searched to retrieve relevant files in a keyword search. Users may be provided with the choice of viewing and searching for files according to the standard mode, i.e., file and folder names, or they may opt for using tagged content. This would offer greater control to users in terms of visualizing, managing and using their files and data. Data and files that are tagged provide the user with more flexibility in terms of organizing and accessing data. In an exemplary embodiment, a user may tag a photo showing the user as a child with his mom on the beach with the term ‘childhood memories’. The user may tag the same photo with the phrase ‘My mommy and me’ and ‘beach’. Anytime the user searches for any of the tags, the photo is included in the collection of photos (or album) with the given tag. Thus, a single photo can belong to multiple albums if it is tagged with multiple keywords/phrases.
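The ‘Ireland’ save-by-tag example can be sketched as a small rule-based router that picks a folder from the file type and tags. The rule set, folder names and the notion of the user's home city are illustrative assumptions, not the AFMS's actual logic.

```python
# Sketch: choose a save location from a file's type and tags, loosely following the
# 'Ireland' example above (rules, folder names and the home city are assumptions).
IMAGE_EXTENSIONS = {".jpg", ".png", ".gif"}
HOME_CITY = "Toronto"                      # assumed; used to decide 'travel' vs local
KNOWN_PLACES = {"Ireland", "Paris", "Tokyo"}

def choose_location(filename, tags, existing_folders):
    """Return the folder path where an AFMS-style router would save the file."""
    is_image = any(filename.lower().endswith(ext) for ext in IMAGE_EXTENSIONS)
    places = [t for t in tags if t in KNOWN_PLACES and t != HOME_CITY]
    if is_image and places:
        folder = f"Images/Travel/{places[0]}"
    elif is_image:
        folder = "Images/Local"
    else:
        folder = "Documents"
    if folder not in existing_folders:
        existing_folders.add(folder)        # create the subspace if it does not exist
    return folder

folders = {"Images/Travel/Paris"}
print(choose_location("cliffs.jpg", ["Ireland"], folders))   # -> Images/Travel/Ireland
```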
  • Applications can be designed that make use of the tag concept. In an exemplary embodiment, one such application is a photo mixer/slideshow/display program that takes one or more tag names as input, retrieves all photos with the specified tags, and dynamically creates and displays the slideshow/photo album containing those photos.
  • Reference is now made to FIG. 49C. Applications 1280 may be provided by the AFMS/VOS system. Alternatively, external applications may be added to it. In the following figures, examples of two applications are shown in context in order to describe the immersive features of this system. The first application is a blog 1282. This application can be instantiated (i.e., opens up) within the AFMS itself, in an exemplary embodiment. If the blog exists on a website, then the user would navigate to that site and edit its contents from within the AFMS. Users can then add multimedia content to their blog with ease. The AFMS provides an interface 1284 for viewing and using files that may be located either on the user's local machine or in the AFMS or on a remote machine connected to the web. The file viewer/manager may open up in a sidebar 1284 as shown in an exemplary embodiment, or in a new dialog window, or take some other form which allows concurrent viewing of both the application 1282 and files. Snapshots of files can be seen within this file manager as shown by 1284. The user can then simply drag and drop files for use in application 1282. Examples of this are shown in FIG. 49C. The user can drag and drop images or videos 1286 for use with the blog application 1282. The following figure, FIG. 49D, shows the resulting effect. Further, the complete file repository may be accessed by using a navigation scheme 1288 within the manager to view contents. Here a cursor scheme 1288 is used to navigate within the file manager.
  • Reference is now made to FIG. 49D where the blog application 1282 is shown with the image and video files 1290 that were uploaded by dragging and dropping from their respective file locations using the file manager window 1284. The file manager window 1284 in FIG. 49D shows files that include the tags ‘Products: HP’ and ‘Reviews: CNET’. Web links are shown sorted by date. The figure shows that hyperlinked content can also be embedded within applications via the file manager. Here the link is dragged and dropped 1292 demonstrating ease of use even in such cases. Reference is made to FIG. 49E where the result is shown. The hyperlinked content appears with the title, source and a summary of the content. The way this content appears can be modified by hovering with the mouse over this content, in an exemplary embodiment. This causes a window 1296 to appear which shows options that the user can select to show/hide entire hyperlinked article content, or summary and/or the source of the content.
  • Reference is now made to FIGS. 49F-G where an example of immersive file features comprising the AFMS/VOS is given with reference to another application. In this case, it is a notebook/scrapbook application 1300 as shown in FIG. 49F. Options 1302 for customizing applications and/or changing application settings will be present in the AFMS. Here too is shown the file manager window 1304 from which files under the relevant tags can be dragged and dropped 1306 to the appropriate location in the application 1300. FIG. 49G shows the results 1310 where the selected multimedia files have been uploaded to the application by a simple move of the mouse from the file space to the application space within the AFMS. Content 1312 in the application may be edited or uploaded from the file space right within the AFMS where the users have readily available their file space, applications, the web and other resources.
  • Reference is now made to FIGS. 49H-L where the flexibility of file spaces and their content within the AFMS/VOS is described with reference to an example. FIG. 49H presents the example at the top in terms of a user need. A user may want to create an exclusive file space (also called a 'smart file space') for books where they can store and manage a variety of file types and content. The AFMS/VOS allows the user to create such a section. The procedure starts off by creating and naming the section and picking an icon for it from a catalogue 1320 that is provided to users. Users may also add their own icons to this catalogue. The result is the user's very own book space 1326 which can be referenced by the iconic section caption 1322. The user may decide to add folders or tags in this space. One such tag/category is shown in 1326 as: 'Business decision-making'. As the user browses websites in the webspace 1324 provided by the AFMS/VOS, the user can easily upload/copy appropriate content from the website/URL location into their custom-built file section 1326. FIG. 49H shows the user dragging and dropping images of books that the user is interested in into the books section 1326. The image content thus gets uploaded into the user's customized file space. Images and other content uploaded/copied from a site in this manner into a user's file space may be hyperlinked to the source and/or be associated with other information relating to the source. Users can add tags to describe the data uploaded into the file space. The AFMS/VOS may automatically scan uploaded objects for relevant keywords that describe the object for tagging purposes. In the case of images, the system may use computer vision techniques to identify objects within the image and tag the image with appropriate keywords. This is equivalent to establishing correspondence between images and words, which can be accomplished using probabilistic latent semantic analysis [55]. The same can be done to establish correspondence between words (sentences, phonemes) and audio. FIG. 49I illustrates that textual content/data may also be copied/uploaded into the user's customized file space by selecting and copying the content in the space. This content may be stored as a data file or it may be 'linked' to other objects that the user drags the content over to in the file space. For instance, in FIG. 49I, the user drags the selected content 1328 from the webspace 1324 over the image 1330. Hence the copied content gets linked to this image object 1330. The linked content may be retrieved in a separate file, or it may appear alongside the object, or in a separate dialog or pop-up or window when the user selects the particular object, for instance by clicking on it.
  • FIG. 49J shows the file space 1340 after content from the website has been uploaded. The image objects 1342 along with their source information are present. The content 1344 (corresponding to the selected text 1328 in FIG. 49I) can be viewed alongside the linked image in the file space 1340. Thus, the AFMS/VOS allows for creation and management of ‘context specific file spaces’ where the user can easily load content of different types and organize information that appears to go together best, from a variety of sources and locations, in a flexible way, and without worrying about lower layer details.
  • Organization of information in these file spaces is not tied to data type or file format or application being used; instead, all objects that appear to the user as belonging together can be tied together as a single 'information unit'. These information units can then be organized into bigger information units, and so on. In the current example, the image of the book, its source and the content it is linked with together comprise one information unit. The book objects together (1342) comprise a higher information unit comprising the 'My Books' section. These information units stand apart from standard data files and folders because they contain data of multiple types that is linked or associated together, and hence are flexible. Further, data and content of different types from multiple sources can be assimilated together by the system, which handles the lower layer functionality to create these information units in a manner that is easy to access, view and manage, thus enhancing the value of the information to the user.
  • In FIG. 49J, additional examples are given to demonstrate other ways of combining data with information units. An object in a file space can be cross-referenced with information or data from other applications that is of relevance or related to that object. For instance, the book object or information unit 1346 can be cross referenced with web links, related emails and calendar entries as shown in 1348 and categorized using relevant tags. In this example, the user has added web links of stores that sell the book, emails and calendar entries related to the subject matter and events involving the book. Thus, the user can easily reference different types of files and objects that are related to the same subject matter or object using the features of this file system. The information in any given smart file space can be used by the AFMS/VOS to answer user queries related to objects in the file spaces. For instance, in the present example, the user may query the AFMS for the date of the ‘blink’ book signing event in the ‘My Books’ file space 1340 in FIG. 49J. The AFMS identifies the ‘blink’ object 1346 in the file space and looks up appropriate information linked to or associated with 1346. In this case since the query deals with ‘date’, the AFMS searches for linked calendar entries and emails associated with 1346 related to ‘book signing’, by parsing their subject, tags and content. In this case, the AFMS would identify and parse the email entry on book signing in 1348 in FIG. 49J and answer the query with the relevant date information.
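  • The following is a minimal sketch of how a query such as the 'blink' book signing date might be answered from the entries linked to an information unit; the data structure and function names are hypothetical and the data is made up for illustration.

```python
# Minimal sketch: answer a date query by scanning calendar/email entries
# linked to an information unit for matching keywords.
from datetime import date

blink_unit = {
    "name": "blink",
    "linked": [
        {"kind": "email", "subject": "Book signing event", "date": date(2009, 4, 2)},
        {"kind": "weblink", "subject": "Stores selling blink", "date": None},
    ],
}


def answer_date_query(unit, keywords):
    """Return the date of the first linked calendar/email entry matching all keywords."""
    for entry in unit["linked"]:
        if entry["kind"] in ("email", "calendar") and entry["date"]:
            text = entry["subject"].lower()
            if all(k.lower() in text for k in keywords):
                return entry["date"]
    return None


print(answer_date_query(blink_unit, ["book", "signing"]))   # 2009-04-02
```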
  • In an exemplary embodiment of the smart file space implementation, each file space may be associated with an XML file. When an object or content (image, text, etc.) is dragged and dropped, the code underlying the content is parsed and the appropriate information and properties are identified. This information includes the type of content or data, content source, location, link information (for example, that this is a link to an image of a house), and content description/subject. Other information that the AFMS/VOS determines includes the application needed to view or run the object being saved into the file space. For instance, when an image is dragged and dropped into a file space from a web page, the HTML code for the web page is parsed by the AFMS in order to identify the object type (image) and its properties. Parsing the source (src) attribute of the image (<img>) tag in the HTML for the web page provides the source information for the image, in an exemplary embodiment.
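  • The following is a minimal sketch of that parsing step using Python's standard html.parser module; the record fields and the sample snippet are illustrative assumptions, and a real implementation would write the record into the file space's XML file.

```python
# Minimal sketch: extract object type, source and description when an image
# is dragged into a smart file space from a web page.
from html.parser import HTMLParser


class ImgInfoParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images.append(dict(attrs))   # src, alt, and any other attributes


snippet = '<p><img src="http://example.com/book.jpg" alt="blink cover"></p>'
parser = ImgInfoParser()
parser.feed(snippet)
for img in parser.images:
    record = {"type": "image", "source": img.get("src"), "description": img.get("alt", "")}
    print(record)   # this record could then be appended to the file space's XML metadata
```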
  • In FIG. 49K, collaborative features of the AFMS/VOS and its associated file management features are described. Users can maintain a list of friends 1360 and their information in the AFMS/VOS. These friends can have limited access accounts on this system (called 'associate' accounts, described later) so that they can access and share the primary user's resources or interact with the primary user. Users can set options to share information units/objects in their file spaces, such as book object 1362 in the 'My Books' section 1326 in FIG. 49K, with their friends. Users can drag and drop objects directly onto a friend's image/name in order to share those objects with the friend. Another feature in this file system is that when an object 1362 in the file space 1326 and friends 1364 from the friends list 1360 are selected concurrently, a special options window 1366 pops up that presents features relevant to the 'sharing' scenario. The AFMS/VOS recognizes that selections from both the friends list and the file space have been made and presents users with options/features 1366 that are activated only when such a simultaneous selection occurs and not when either friends or file space objects are exclusively selected. Some of these options are shown in 1366 in an exemplary embodiment. For instance, users can set group tasks for themselves and their friends involving the selected object, such as attending the author signing event for the book 1362. Other options include turning on updates for a section, such as the addition of objects, to the selected friends, or going on a shopping trip for the object with selected friends.
  • Owners may be able to keep track of physical items they lend to or borrow from their friends. An object in a file space may be a virtual representation of the physical item. Users can set due dates or reminders on items so that items borrowed or lent can be tracked and returned on time. A timestamp may be associated with a borrowed item to indicate the duration for which the item has been borrowed. This method of keeping track of items can serve as a Contract Management System; a minimal sketch of the item-tracking portion is given below. This service can be used to set up contracts (and other legal documents) between users using timestamps, reminders and other features as described. Witnesses and members bound to a contract may establish their presence during contract formation and attestation via a webcam or live video transmission and/or other electronic means for live video capture and transmission. Members bound to a contract and witnesses may attest documents digitally (i.e., use digital signatures captured by electronic handwriting capture devices, for example). Users may also create their will through this system. User authenticity may be established based on unique pieces of identification such as their Social Insurance Number (SIN), driver's license, passport, electronic birth certificate, retinal scans, fingerprints, health cards, etc., and/or any combination of the above. Once the authenticity of the user has been verified by the system, the system registers the user as an authentic user. Lawyers and witnesses with established credibility and authenticity on the system may be sought by other users of the system who are seeking a lawyer or witness for a legal document signing/creation, for example. The credibility of lawyers, witnesses and other people involved in authenticating/witnessing/creating a legal document may further be established by users who have made use of their services. Based on their reliability and service, users may rate them in order to increase their credibility/reliability score through the system. Thus, group options involving data objects and users are a unique file management feature of the AFMS/VOS that allows for shared activities and takes electronic collaboration to a higher level. The Contract Management System may be used/distributed as a standalone system.
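  • A minimal sketch of lent-item tracking with timestamps and due dates follows; the Loan class, default loan period and helper names are assumptions made for illustration only.

```python
# Minimal sketch of tracking lent items with a borrow timestamp and a due date.
from datetime import datetime, timedelta


class Loan:
    def __init__(self, item, borrower, days=14):
        self.item = item
        self.borrower = borrower
        self.borrowed_at = datetime.now()                 # timestamp of the loan
        self.due = self.borrowed_at + timedelta(days=days)

    def is_overdue(self):
        return datetime.now() > self.due


loans = [Loan("blink (hardcover)", "George", days=7)]
for loan in loans:
    status = "overdue" if loan.is_overdue() else "due " + loan.due.strftime("%Y-%m-%d")
    print(f"{loan.item} -> {loan.borrower}: {status}")
```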
  • FIG. 49K shows options/features 1370 that are presented for managing an information unit upon selecting the particular object or information unit 1368 in a file space. These options allow users to send an email or set tasks/reminders related to the object; tag the object, link other objects; receive news feeds related to that object; add it to another file space; and perform other tasks as given in 1370.
  • In another exemplary embodiment of AFMS/VOS usage for information lookup, a user may want to look up information on the last client meeting for a specific project. The file space for the project, created by the user, would contain the calendar entry for the last meeting, the email link containing the meeting minutes as an attachment, and other related objects and files. The user may also share the project file space with other users involved in the project by adding them as ‘friends’ and sharing the file space content, in exemplary embodiment. Thus, the smart file space saves the user time and effort as the user no longer has to perform tedious tasks in order to consolidate items that may ‘belong together’ according to a user's specific needs. For instance, in this case the user does not need to save the meeting minutes or the email content separately; just dragging and dropping the appropriate email from the email application to the project's file space suffices and the email and attachment are automatically linked to/associated with the project. The user does not have to open the calendar application and tediously browse for the last calendar entry pertaining to the meeting. Also, sharing the project space with colleagues is easy so that project members can keep track of all files and information related to a project without worrying about who has or doesn't have a particular file. Other information may be available to users sharing a file space such as the date and time a particular file was accessed by a user, comments posted by shared users etc. Additionally, tools to ease file sharing and collaboration may be available via the VOS as described below with reference to FIG. 20.
  • FIG. 49L represents an exemplary embodiment of the storage structure of the AFMS/VOS. Data stored on a user's local machine or on remote sites or servers, such as a user's work machine or online storage, and data of the user's friends on the system, is managed by the file management layer. The file management layer handles conflict analysis, file synchronization, tagging, indexing, searching, version control, backups, virus scanning and removal, security and fault protection, and other administrative tasks. Data (modification, updates, creation, backup) in all user and shared accounts on local or remote machines, on web servers, web sites, mobile device storage and other places can be synchronized by this layer. A property of the file system is that it caches files and other user data locally when network resources are limited or unavailable and synchronizes data as network resources become available, to ensure smooth operation even during network disruptions. Backups of data conducted by the AFMS may be on distributed machines. An abstract layer operates on top of the file management system and provides a unified framework for access by abstracting out the lower layers. The advantage of this is that the VOS offers location transparency to the user. The user may log in anywhere and see a consistent organization of files via the VOS interface, independent of where the files/data may be located or where the user may be accessing them. The VOS allows users to search for data across all of the user's resources independent of the location of the data. Another feature of this file system is the option of storing a user's information, profile and other account resources on the user's own resources (for example, the user's home or work computer) instead of a web server, to ensure privacy of a user's data and account information. FIG. 49P demonstrates an exemplary embodiment of an application execution protocol run by the Application Resource Manager (ARM), which is a part of the virtual operating system. Once a user requests an application 1400, the ARM checks to see whether this application is available on the portal server 1402. If so, then the application is run from the portal server 1404. If not, then the application plug-in is sought 1406. If the plug-in exists, the application is run from the local machine 1412. If a plug-in for the application does not exist, a check for the application on the local machine is conducted 1410. If available, the application is executed from the client's local machine 1412. If not, the application is run from a remote server on which the user has been authenticated (i.e., has access permission) 1414, 1416. If all the decision steps in the algorithm in FIG. 49P yield a negative response, the ARM suggests installation paths and alternate sources for the application to the user 1418. The user's data generated from running the application is saved using the distributed storage model.
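  • The decision sequence of FIG. 49P can be summarized in the following minimal sketch; the environment dictionary and its keys are hypothetical stand-ins for the ARM's actual lookups.

```python
# Minimal sketch of the application execution protocol of FIG. 49P.
def resolve_application(app, env):
    """Return a description of where the requested application will run."""
    if app in env["portal_server_apps"]:
        return "run from portal server"                       # steps 1402-1404
    if app in env["plugins"]:
        return "run plug-in on local machine"                  # steps 1406, 1412
    if app in env["local_apps"]:
        return "run from local machine"                        # steps 1410-1412
    for server in env["authenticated_remote_servers"]:
        if app in server["apps"]:
            return f"run from remote server {server['name']}"  # steps 1414-1416
    return "suggest installation paths and alternate sources"  # step 1418


env = {
    "portal_server_apps": {"blog"},
    "plugins": set(),
    "local_apps": {"photo_mixer"},
    "authenticated_remote_servers": [{"name": "work-pc", "apps": {"cad"}}],
}
print(resolve_application("cad", env))   # run from remote server work-pc
```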
  • Another feature of the AFMS is that the user may store files in a “redirect” folder i.e., files moved/saved to this folder are redirected by the AFMS to the appropriate destination folder based on the file's tags and/or content. The user may then be notified of where the file has been stored (i.e., destination folder) via a note or comment or link in the “redirect” folder that directs the user to the appropriate destination. An index file may automatically be generated for folders based on titles/keywords/tags in the documents and/or the filename. This index may display titles/keywords/tags along with snapshots of the corresponding files.
  • Reference is now made to FIG. 49M where a user accounts management structure is shown. Central to this system is a user management layer that manages a given 'owner' user's accounts as well as 'associate' accounts, which would include accounts of all other friends, users and groups that the owner would like to associate with. Associate accounts would be created to give access to the owner account resources and data. The owner account would have all administrative rights and privileges (read, write, execute, for example) and can set permissions on associate accounts to grant or restrict access to the owner's account and resources. An associate account may be viewed as 'the set of all owner resources that the associate user has access to, and the set of all activities that the associate user can engage in with the owner user'. An associate account would be linked to and accessible from the associate user's primary/owner account. The owner account may be accessible to and from the owner user's computer, to and from a machine at remote locations such as the office, to and from accounts at social networking sites, and through a web browser/web sites. Account information such as usernames and passwords for the user's accounts on websites and other servers that the user accesses from the VOS may be stored on the system so that the user bypasses the need to enter this information every time the user accesses their external account. The owner may set group policies for the associate accounts so that they have access to specific resources and applications for specific time periods on the owner's account. Owner users have the option of classifying associate users into categories such as acquaintances from work, school, family, strangers, etc. As described before, users may share information and files with their friends, and also engage in shared activities such as games, editing documents collaboratively, etc., through the VOS. Another feature of the VOS is that it allows the user to specify automatic changes over time in the access privileges/permissions of associate accounts on the user's network. In an exemplary embodiment, a user may want to let associate accounts, starting out with limited access/privileges, have access to more resources over time. Through the VOS, the user is able to specify the resources that associate accounts may automatically access after a certain period of time has elapsed since their account was created or since their access privileges were last changed. The user may also be able to grant greater access privileges automatically to associate accounts after they demonstrate a certain level of activity. After the specified period of time elapses or the level of activity of an associate account increases/decreases or is maintained, the VOS automatically changes the access privileges of the associate users who have been granted access to increased/decreased resources as pre-specified by the user through options provided by the VOS. This is the 'Growing Relations' feature of the VOS, where access privilege rules for associate accounts are specified by a user and are applied accordingly by the system, as and when specified by the user. The VOS is also able to regulate resource use and change access privileges automatically in the absence of user-specified access privilege rules, in another exemplary embodiment.
The VOS may monitor activity levels of associate accounts and interactivity between the user and associate users and automatically determine which associate users may be allowed greater access privileges. If there is greater interactivity over time between the user and a certain associate user, then the system may deem this associate user a 'trusted associate'. It may also use other means of determining the 'trustworthiness' of an associate user. The system may seek the permission of the user before changing access privileges of the trusted associate user. As the 'trust score' (the measure used by the system to keep track of the activity levels of an associate account) of an associate user increases, the system would promote the status of the associate account progressively by assigning status levels such as: Stranger, Acquaintance, Friend, Family, in that order from first to last. The higher the status of an account, the more access privileges are granted to that account. In a similar manner, if the VOS detects that there is little interactivity of an associate account over time, or determines lower resource needs of an associate account, or assesses that an associate account is less 'trustworthy' based on usage patterns of associate account users, then the VOS would regress the status of the account and grant fewer privileges accordingly. The system may again seek the permission of the user before modifying access privileges of any associate account. A minimal sketch of such a promotion policy is given below.
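  • The sketch below illustrates the 'Growing Relations' promotion idea; the numeric thresholds, the scoring of interactions and the function names are assumptions chosen for illustration, not values defined by the system.

```python
# Minimal sketch of trust-score-based status promotion for associate accounts.
ASSOC_LEVELS = ["Stranger", "Acquaintance", "Friend", "Family"]
THRESHOLDS = [0, 10, 50, 200]          # assumed trust score needed for each level


def status_for_score(trust_score):
    level = "Stranger"
    for name, needed in zip(ASSOC_LEVELS, THRESHOLDS):
        if trust_score >= needed:
            level = name
    return level


def update_account(account, interactions_this_period):
    account["trust_score"] += interactions_this_period
    new_status = status_for_score(account["trust_score"])
    if new_status != account["status"]:
        account["status"] = new_status     # access privileges change with status
    return account


acct = {"user": "George", "trust_score": 45, "status": "Acquaintance"}
print(update_account(acct, 10))   # promoted to Friend at score 55
```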
  • The VOS allows password synchronization across websites, networks and machines. For example, if a user changes a password for logging onto a local machine, say a home computer, the password change is synchronized with a password the user may use to log in to their account on a webpage. Various levels of access privileges may be granted by the VOS to users, including but not limited to those of a root user, administrator, regular user, super user, guest, limited user, etc., in an exemplary embodiment. The VOS also allows execution of shell commands and provides a software development kit for users to write applications for the VOS.
  • The system may also contain an immersive search engine application that performs searches on queries presented to it. The search engine may be available as a standalone feature for use with browsers and/or network machine(s) or local machine browsing applications. It may be available as part of a VOS browser, containing one or more of the VOS's features. Some of the features unique to this immersive search engine are described next. Reference is made to FIG. 49N where abstraction of a search query is demonstrated in an exemplary embodiment. When a user performs a search, the input is not limited to typing text with a keyboard. Instead a new approach is proposed, where the input could be speech converted to text, or mouse gestures or other data. In another example, a user may be able to drag and drop content from a newsfeed into the search query field. Context level searches may be performed by the search engine. In an exemplary embodiment, when a user comes across an image while browsing the web, the user may be able to simply drag and drop the image into the search field and the browser would retrieve search results that pertain to the image's objects, theme or subject. The user may quote a sentence and the search engine would retrieve searches related to the underpinning of that statement in a context related search, in another exemplary embodiment. Thus, this method effectively provides a layer of abstraction over the conventional search. The search engine can also retrieve search results in the form of lists where each list contains the results that fall under a specific category or context. Categories and sort criteria may be user specified. In an exemplary embodiment, the user may want to search for cars of a particular year and want them categorized according to color, best-selling status, safety rating and other criteria. The search engine then retrieves search results of cars of that year sorted according to the specified criteria in different lists. It also keeps track of user information so that it can provide contextual information specific or relevant to the user's life. For example, if a user's friend has a car with the specifications that the user is searching for, then the search engine indicates to the user that the user's friend has a car with the same or similar specifications. The search engine mines the information units present in a user's directory in order to present relevant contextual information along with search results. For instance, the user may be interested in six cylinder engine cars, as inferred by the system based on information objects in the user's directory. The search engine then indicates to the user which of the search results pertain to six cylinder engine cars. This type of contextual data mining can be done as discussed in reference to FIG. 6E. Additionally, this search engine can present information to the user in a variety of formats, not necessarily restricting the search output to text. For instance, the results may be converted from text to speech.
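  • The following is a minimal sketch of returning results as per-category lists together with a contextual note drawn from the user's own data; the data, the "friend_cars" key and the function name are illustrative assumptions.

```python
# Minimal sketch: group search results by a user-chosen criterion and flag
# results that match something known about the user's friends.
def categorized_search(results, category_key, user_context):
    lists = {}
    for item in results:
        lists.setdefault(item[category_key], []).append(item)
    notes = [
        f"Your friend {friend} has a car with similar specifications."
        for friend, spec in user_context.get("friend_cars", {}).items()
        if any(spec == item["model"] for item in results)
    ]
    return lists, notes


cars = [
    {"model": "Civic", "year": 2008, "color": "red"},
    {"model": "Corolla", "year": 2008, "color": "blue"},
]
context = {"friend_cars": {"Ann": "Civic"}}
print(categorized_search(cars, "color", context))
```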
  • Users can save and bookmark sites using tags while browsing web pages. In an exemplary embodiment, this can be done using the VOS browser or any other browser that supports tagging and saving. The tags can then be used by web crawlers to rank pages for use in search engines. Conventionally, web crawlers used by search engines rely primarily on the keywords provided by authors of websites, as well as content on web pages. The method described here also utilizes tags provided by ordinary users browsing websites. This method also allows sites that are not registered with the search engine to be searched.
  • Reference is now made to FIG. 49O where an exemplary embodiment of the VOS is shown running as a website. The user may be presented with this screen upon logging in. There are many applications available for use in the VOS. An API is also available for developers to build applications for the VOS. Any of the applications, such as text editors, spreadsheet applications, multimedia applications (audio/video, photo and image editing) and the whiteboard, can be used collaboratively with other users through an intuitive interface. Collaborative application sharing may be accomplished using techniques discussed with reference to FIGS. 7A-D. Shared users may include friends/family members/other associates from social networking sites or work or home computer accounts. Any changes made to data or applications and other resources can be viewed by all users engaged in the collaboration of these resources and accounts. Users can customize, i.e., change, the look and feel of the VOS including the background, theme, etc. The VOS may also provide an interface that allows for text, video and audio overlay. The calendar feature in FIG. 49O cross-checks calendars of all users for scheduling an event or an appointment or a meeting and suggests dates convenient for all users involved. A time-stamping feature is also available that lets users timestamp documents. This feature also has an encryption option that allows users to encrypt documents before uploading, acquire a timestamp for the document and retrieve it for future use, keeping the document confidential all the while. This might prove useful where time-stamping documents serves as proof of ownership of an invention, for example. Encryption may be accomplished using two encryption keys in an exemplary embodiment. One key would be available only to the user and the system would maintain the other key. Remote technical assistance is also provided for this interface. FIG. 49O also incorporates advanced search (described previously with reference to FIG. 49N), distributed data access (FIG. 49L), advanced user management (FIG. 49M), a safety deposit box, a media room, a launch pad, a library, TV/radio and other features as shown in FIG. 49O. The 'safety deposit box' would contain sensitive materials such as medical records, legal documents, etc. These contents are encrypted and password protected. In an exemplary embodiment, data is encrypted at the source before backing it up on machines. The files may also be accessible or linked to commercial and other public or private large-scale repositories. For instance, medical records of a user could be linked to or accessible from hospital repositories to which a user has been granted limited access. Application layers may be added that mine the contents of the safety deposit box in order to present information to the user in a functional and relevant manner. In an exemplary embodiment, a 'calendar alert' application may remind the user of pending actions. For instance, based on their medical record, the application would alert the user that a vaccination is due, or a dentist appointment is due. In another instance, the application would alert the user based on financial records that their taxes are due. Similar scenarios may exist for legal documents. The 'media room' would include all files and folders and content that the user wishes to publish or make public such as web pages, videos (such as YouTube videos), etc.
The launch pad is a feature that allows users to place objects in a region and take appropriate actions with those objects. It provides an interface for programming actions that can be taken with respect to objects in a variety of formats. The launch pad includes community users who can contribute their applications and other software for use. In an exemplary embodiment, a user may move 2D images onto a "3D-fy" application widget in the launch pad section in order to transform the 2D images into their corresponding 3D versions. In another exemplary embodiment, a user may add an application in the launch pad area that allows document sharing and editing through a webcam. The library section may include e-documents such as e-books, electronic articles, papers, journals, magazines, etc. This section will be equipped with a facility whereby electronic magazines, e-papers, etc., to which the user has subscriptions are 'delivered' by the subscribed service and made available in this section. The TV/radio feature allows users to browse and view channels online in a more traditional sense. The channels may be browsed using the keyboard or mouse. It may also be combined with the user interface discussed with reference to FIG. 54D. The output of cable TV could also be viewed via this facility. In an exemplary embodiment, this can be done by redirecting output from the user's TV or cable source to the user's current machine via the internet or network. The channels can be changed remotely, for example via the interface provided by the VOS or a web interface independent of the VOS. In an exemplary embodiment, this may be done by connecting a universal TV/radio/cable remote to a home computer and pointing the device towards the object being controlled via the remote, if necessary (if it is an infrared or other line-of-sight communication device). Software on the computer communicates with the remote to allow changing of channels and other controls. The audio/video (A/V) output of the TV or cable is connected to the computer. The computer then communicates with the remote device over the Internet, for display/control purposes in an exemplary embodiment. The TV/radio content may include files and other media content on the user's local or remote machine(s), and/or other user accounts and/or shared resources. The radio may play live content from real radio stations. The system may also allow recording of TV/radio shows. On logging off the VOS, the state of the VOS, including any open applications, may be saved to allow the user to continue from where they left off upon logging in again. Any active sessions may also persist, if desired.
  • FIG. 49Q provides an additional exemplary embodiment of file tagging, sharing and searching features in the VOS/AFMS. As a user browses web pages in a web browser 1440, which may be the VOS browser, the user may choose to save web page content such as an image 1442. The user would be able to choose the format to save it in, and also edit and save different versions of the image. Here the image 1444 is shown with a border around it. The user can tag images to be saved using keywords 1446. Parts of the image can also be labeled, as shown at 1448. The user can specify friends and associate users to share the image with 1450. The location 1454 of the image/file can be specified in abstract terms. For instance, the user can specify the location where the file is saved, such as the home or office machine, or 'mom's computer'. Owing to the distributed file storage nature of the VOS, the lower layers can be abstracted out if the user chooses to hide them. The VOS is based on a language processing algorithm. It can recognize keywords and sort them according to grammatical categories such as nouns, verbs, adjectives, etc., by looking up a dictionary, in an exemplary embodiment. It can learn the characteristics of the associated word based on the image. More specifically, the user may be able to train the algorithm by selecting a keyword and highlighting an object or section of the image to create the association between the keyword and its description. For instance, the user may select the keyword 'horse' and draw a box around the horse in the image, or the user may select 'white' and click on a white area in the image. In this way, the system can be 'contextually' trained. Similar training and associative learning can occur in the case of audio and video content. Based on the image keywords, labels and associated characteristics learnt, the system would be able to make contextual suggestions to the user. In an exemplary embodiment, the user may search for a 'black leather purse'. The VOS would remember search terms for a period of time and make suggestions. So, for instance, if an associate user or someone on the user's friend list bought a leather purse, the system would notify the user of this fact and of the source/store/brand of the purse, and check the store catalogue from which the purse was bought for similar or different purses in 'black' and/or 'leather'. In another exemplary embodiment, the system would inform a user 'A' of photos that an associate user 'B' has added containing user A's friend whom user A wishes to receive updates on. The VOS presents search results in a 'user-friendly' manner to the user. Some aspects of what constitutes a user-friendly presentation may be pre-programmed, and some aspects may be learned over time by the VOS, whether it involves displaying images, videos, audio, text, or any other file or data in any other format to the user. In an exemplary embodiment, a user may search for a friend's photos and the VOS would properly orient the images found of the user's friend, by applying affine/perspective transformations for example, before displaying them to the user. The user's friend may also be highlighted by using markings or by zooming in, as examples, in order to make it easier for the user to identify their friend in a group, for instance. Users may conduct searches using filters or terms that are adjectives, such as 'dark', 'purple', 'thick', 'lonely', etc., as well as any class of words that can be used to describe or characterize a noun/object.
The VOS searches for relevant information matching these search terms/filters based on tags associated with files and objects. Additionally, computer vision techniques can be used to characterize whole images/video sequences, as well as objects and components within images/videos. A minimal sketch of the keyword-to-region training described above is given below.
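  • In the sketch below, an association between a selected keyword and a user-highlighted image region is recorded and later used to produce tags for the image; the bounding-box representation and all names are assumptions, and a real system would feed these associations into a learning algorithm rather than a simple list.

```python
# Minimal sketch of contextual training: keyword <-> highlighted image region.
associations = []          # learned (keyword, example region) pairs


def train_association(keyword, image_id, box):
    """box = (x, y, width, height) drawn by the user around the object."""
    associations.append({"keyword": keyword.lower(), "image": image_id, "box": box})


def tags_for_image(image_id):
    return sorted({a["keyword"] for a in associations if a["image"] == image_id})


train_association("horse", "photo_17.jpg", (120, 40, 300, 260))
train_association("white", "photo_17.jpg", (150, 60, 40, 40))
print(tags_for_image("photo_17.jpg"))   # ['horse', 'white']
```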
  • If the user is listening to a soundtrack, the system can make comments based on the user's mined data, such as 'it's your friend's favourite music track'. It can analyze the soundtrack and find tunes/music similar to the one the user is listening to. It can identify other soundtracks that have been remixed by other users with the track the user is listening to, or find soundtracks compatible with the user's taste, etc. Extraction of familiar content can be done by the system, in an exemplary embodiment, using a mixture of Gaussians [56] or techniques similar to those in [57]. The user would be able to specify subjective criteria and ask the system to play music accordingly. In an exemplary embodiment, the user can specify the mood of the music to listen to, for instance sad, happy, melodramatic, comical, soothing, etc. Mood recognition of music can be performed via techniques specified in [58]. The system can also monitor user activities or judge user mood through a video or image capture device such as a webcam and play music accordingly, or make comments such as 'hey, you seem a little down today' and play happy music, or suggest an activity that would make the user happy, or show links that are compatible with the user's interests to cheer the user up. The tracks can be played either from the user's local machine or from online stores and other repositories or from the user's friends' shared resources. The mood underlying a soundtrack, and content similar to a soundtrack, can be detected using techniques specified in [59].
  • The VOS can make recommendations to users in other areas by incorporating user preferences and combining them with friends' preferences, as in the case of a group decision or consult, i.e., 'collaborative decision-making or consulting'. In an exemplary embodiment, users may specify their movie preferences such as 'action', 'thriller', 'drama', 'science fiction', 'real life', etc. They may specify other criteria such as the day and time of day they prefer to watch a movie, preferred ticket price range, preferred theatre location, etc. In an online collaborative environment, such as that shown in FIG. 20 in an exemplary embodiment, users may consult with each other or plan together. For example, a group of friends may want to go and watch a movie together. Every user has their own movie preference, which the system may incorporate to suggest the best option and other associated information, in this case the movie name, genre, show time, etc. Other tools and features to facilitate group decisions include taking votes and polls in favour of or against the various options available to the users. The system would then tally the votes and give the answer/option/decision that received the most votes. The system may also incorporate general information about the subject of the decision in order to make recommendations. For instance, in the movie example, the system may take into account the popularity of a movie in theatres (using box office information, for example), ticket deals for a movie, etc., in order to make recommendations. Users can also use the modes of operation described with reference to FIG. 7 for collaborative applications on the VOS. For example, when editing a file collaboratively, as a user edits a document, he/she can see the additions/modifications that are being made by other users.
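  • A minimal sketch of the vote-tallying step follows; using popularity (e.g., box office figures) as a tiebreaker is an assumption made for illustration, as are the sample votes.

```python
# Minimal sketch of group decision-making: tally votes and pick the option
# with the most support, breaking ties with an optional popularity score.
from collections import Counter


def decide(votes, popularity=None):
    """votes: {user: option}; popularity: optional {option: score} tiebreaker."""
    tally = Counter(votes.values())
    best = max(tally, key=lambda o: (tally[o], (popularity or {}).get(o, 0)))
    return best, dict(tally)


votes = {"Ann": "thriller", "Jim": "action", "Sam": "thriller", "Lee": "action"}
box_office = {"action": 8.1, "thriller": 7.4}
# Two-vote tie between 'action' and 'thriller'; 'action' wins on popularity.
print(decide(votes, box_office))
```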
  • Reference is made to FIG. 49R where an example of a user interface for filtering search data is shown. Users can filter files on the basis of location, file type or file author(s).
  • Reference is now made to FIG. 49S where an exemplary embodiment of an object oriented file system is shown. Users can specify the structure of a folder (used for storing files on a computer). For example, as shown in the figure, a user can create a folder of type "company" in which the user specifies a structure by creating entries for subfolders of type "HR", "R&D", "Legal", and "IT". Regular folders may also be created. Each of the created folders can have its own structure. The user can have a folder listing all the folders of type "company", as shown in the box on the left in the top row of FIG. 49S. The content of a selected folder is shown in the box on the right in the top row. The user has options to view by "company" or by the structures that constitute that folder, say by "HR". In FIG. 49S, the top row shows an example of viewing by "company". If the user chooses to view by "HR", the view on the right (as shown in the bottom row of FIG. 49S) displays all the HR folders organized by "company". Other filters that search according to the desired fields of a folder are also available to users. Arrows are available on the right and left of the views to go higher up or deeper into folders. In another exemplary embodiment, instead of having a structure, the folders and files can have tags that describe the folder and the files. The proposed object oriented file system simplifies browsing and provides the advantages of both a traditional file system and a fully fledged database.
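  • The following is a minimal sketch of folders that carry a user-defined type and structure and of switching between the "view by company" and "view by HR" presentations; the class and function names, and the sample company names, are illustrative assumptions.

```python
# Minimal sketch of typed folders with a user-specified structure.
class TypedFolder:
    def __init__(self, name, ftype, structure=()):
        self.name = name
        self.ftype = ftype                                   # e.g., "company"
        self.children = {s: TypedFolder(s, s) for s in structure}


companies = [
    TypedFolder("Acme", "company", ("HR", "R&D", "Legal", "IT")),
    TypedFolder("Globex", "company", ("HR", "R&D", "Legal", "IT")),
]


def view_by(folders, subtype):
    """Return the given subfolder of every folder, organized by parent name."""
    return {f.name: f.children[subtype].name for f in folders if subtype in f.children}


print(view_by(companies, "HR"))   # {'Acme': 'HR', 'Globex': 'HR'}
```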
  • Reference is now made to FIG. 20. The collaborative interface shown in FIG. 20 for a shopping trip may be used in the case of other collaborative activities such as application, file, document and data sharing. A generic version of the interface of FIG. 20 is now described in an exemplary embodiment to illustrate this extension. Panel 241 lists friends involved in the collaboration. An application panel replaces the store panel 242 and displays shared applications of users involved in the collaboration. Panel 247 lists the user's documents, data files and other resources. Panel 248 lists the user's friends' documents, data files and other resources. Window 243 would facilitate collaborative sharing of applications, documents, data, other files and resources between users of a collaboration. Users can direct any signal to 243 (video, audio, speech, text, image, including screen capture); i.e., they may specify a region of the screen that they wish to share in 243, which could include the entire desktop screen. (A perspective correction may be applied to documents that are being shared. For example, if a video of a talk is being shared and the video of the slides of the presentation is being shot from an angle, as opposed to the camera being orthogonal to the screen, a perspective transform may be applied so that lines of text on the screen appear horizontal to ease viewing.) Users may be able to drag and drop applications, files, documents, data, or screenshots, as well as contents/files captured by the screenshots, and other resources into window 243 during collaborative sharing. Instances of collaboration include shared use of applications and viewing, creating, editing and saving documents or image files, etc. Window 243 has a visual overlay for users to write or draw over to permit increased interactivity during collaborative discussions. This is analogous to whiteboard discussions except that here the overlay may be transparent to permit writing, scribbling, markings and highlighting over content being shared in 243. All this content may be undone or reversed. The overlay information can be saved without affecting the original content in 243 if the user chooses to do so. Overlay information can be saved in association with the original content. The system also allows a 'snap to object' feature which allows users to select and modify objects in the view. The toolbar 239 provides overlay tools and application and/or document and file specific tools for use with the specific application and/or file or document or data being shared in 243. View 243 also supports multiple layers of content. These layers could be hidden or viewed. The screen 243 is resizable, movable, dockable and undockable. All sessions and content (viewed, edited, text, speech, image, video, etc.), including collaborative content and information, may be saved, including all environmental variables. When editing a file collaboratively, as a user edits a document, he/she can see the additions/modifications that are being made by other users. Collaborative environments such as these can be specialized to cater to occupation, age group, hobby, tasks, and similar criteria. In an exemplary embodiment, a shared environment with features described above may exist for students where they can collaborate on homework assignments and group projects as well as extracurricular activities such as student council meetings, organization of school events, etc.
Specialized tools to help students collaborate on school-related activities are provided with toolbar 239. This environment would also contain applications specific to the context. For instance, in the students' collaborative environment, students would be able to provide reviews of courses or teachers using the application provided for this purpose.
  • Furthermore, the whiteboard may be integrated with a 'convert to physical model' feature that transforms a sketch or other illustration or animation on the whiteboard into an accurate physical model animation or video sequence. This may be accomplished via techniques similar to those described in [3]. In an exemplary embodiment, a user may draw a ball rolling on a floor which then falls off a ledge. The physics feature may convert the sketch to an animation sequence where the floor has a friction coefficient, and the ball follows Newton's Laws of Motion and the Laws of Gravitation while rolling on the floor or free-falling. In addition, voice to model conversion may occur where the semantics underlying speech are analyzed and used to convert to a physical model. This may be accomplished by converting speech to text, then text to picture [60], and then going from picture to model [3]. Objects seen in a webcam may be converted to a model [3]. Users can then be allowed to manipulate this object virtually. The virtual object's behaviour may be modeled to be physically plausible. Based on the content of the whiteboard deciphered through OCR (optical character recognition) techniques or sketch to model recognition [3] or speech to model recognition, related content (for example advertisements) may be placed in the proximity of the drawing screen and/or related content may be communicated via audio/speech, and/or graphics/images/videos.
  • The interface shown in FIG. 20 may be used for exhibitions, where different vendors can show their product offerings.
  • Reference is now made to FIG. 51A where a communication network demonstrating external connections to system 10 is shown in an exemplary embodiment. FIG. 51A shows devices, systems and networks that system 10 can be connected to, in an exemplary embodiment. System 10 is connected to the Public Switched Telephone Network (PSTN), to cellular networks such as the Global System for Mobile Communications (GSM) and/or CDMA networks, and to WiFi networks. The figure also shows connections of system 10 to exemplary embodiments of computing applications 16, and exemplary embodiments of computing devices 14, such as a home computing device, a work computing device, and a mobile communication device, which could include a cell phone, a handheld device or a car phone as examples. The AFMS/VOS may be connected to external devices, systems and networks in a similar manner as system 10. The AFMS may additionally be connected to system 10 itself to facilitate shopping, entertainment, and other services and features available through system 10.
  • In the following discussion, a 'Human Responder Service', its functionality and its application are described. This service makes use of the data and applications connected to the network shown in FIG. 51A. This service may be available on the portal server 20 as part of system 10, or it may be implemented as part of the virtual operating system, or it may be available as an application on a home server or any of the computing devices shown in FIG. 51A and/or as a wearable device and/or as a mobile device. The Human Responder Service or Virtual Secretary is a system that can respond to queries posed by the user regarding user data, applications or services. The system mines user data and application data, as well as information on the Internet, in order to answer a given query. Exemplary embodiments of queries that a user can pose to the system through a mobile communication device (a cell phone or handheld, in an exemplary embodiment) include "What is the time and location of the meeting with Steve?" or "What is the shortest route to the mall at Eglinton and Crawford road?" or "Where is the nearest coffee shop?" Further refinements in the search can be made by specifying filters. An exemplary embodiment of such a filter includes a time filter in which the period restriction for the query may be specified, such as "limit search to this week" or "limit search to this month". The filters may also be as generic as the query and may not necessarily be restricted to time periods. The input query may be specified in text, voice/audio, image and graphics and/or other formats. In an exemplary embodiment, the user can send a query SMS via their mobile device to the Virtual Secretary (VS) inquiring about the location of the party the user is attending that evening. On receiving the SMS request, the VS looks up the requested information on social networking sites such as Facebook of which the user is a member, the user's calendar and email. After determining the requested information, the VS then responds to the user by sending a reply SMS with the appropriate answer. If multiple pieces of information are found, the VS may ask the user which piece of information the user would like to acquire further details on. The user may also dictate notes or reminders to the VS, which it may write down or post on animated sticky notes for the user.
  • In an exemplary embodiment, the VS may be implemented as an application 16 on a home computing device 14 that is also connected to the home phone line. Calls by the VS can be made or received through VoIP (Voice-over-Internet-Protocol) or the home phone line. The VS can also be connected to appliances, security monitoring units, cameras, and GPS (Global Positioning System) units. This allows the user to ask the VS questions such as "Is Bob home?" or "Who's at home?" The VS can monitor the activity of kids in the house and keep an eye out for anomalies as described with reference to FIG. 52B. Prior belief on the location of the kids can come from their schedules, which may be updated at any time. Other services available to the user include picking up the home phone and asking the VS to dial a contact's number, which the VS would look up in the user's address book on the user's home computer or on a social networking site or any of the resources available through the VOS. The user may click on an image of a person and ask the VS to dial that person's number. The user may point to a friend through a webcam connected to the VS and ask the VS to bring up a particular file related to the friend or query the VS for a piece of information related to the friend. The VS may also monitor the local weather for anomalies, and other issues and matters of concern or interest to the user. For instance, if a user is outside and the VS system is aware of a snowstorm approaching, it sends a warning notification to the user on their mobile phone such as, "There is a snow-storm warning in the area John. It would be best if you return home soon." Other issues that the VS may monitor include currency rates, gas prices, sales at stores, etc. This information may be available to or acquired by the VS via feeds from the information sources or via websites that dynamically update the required information.
  • One exemplary embodiment of the VS is described in FIG. 51B. The system waits for external user commands. Commands can come via SMS, voice/audio/speech through a phone call, video, or images. These commands are first pre-processed to form instructions. This can be accomplished by parsing SMS text, using speech-to-text conversion for voice/audio/speech, parsing gestures in videos, and processing images using methods described with reference to FIG. 52A. These instructions are then buffered into memory. The system polls memory to see if an instruction is available. If an instruction is available, the system fetches the instruction, decodes and executes it, and writes the response back to memory. The response in memory is then processed and communicated to the external world.
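  • A minimal sketch of this wait/buffer/fetch/execute cycle follows; the handlers are stand-ins (a real VS would mine calendar, email and web data as described), and the instruction formats are assumed for illustration.

```python
# Minimal sketch of the Virtual Secretary loop of FIG. 51B: commands are
# pre-processed into instructions, buffered, then fetched, decoded, executed,
# and the responses are communicated back.
from collections import deque

instruction_buffer = deque()


def preprocess(command):
    # SMS is already text; speech would be converted with speech-to-text here.
    return command.strip().lower()


def execute(instruction):
    if instruction.startswith("where is"):
        return "Looked up location in calendar/email: Hart House, 7 pm."
    return "Sorry, I could not decode that instruction."


def run_once(incoming_commands):
    for cmd in incoming_commands:
        instruction_buffer.append(preprocess(cmd))     # buffer instructions in memory
    responses = []
    while instruction_buffer:                          # poll memory for instructions
        responses.append(execute(instruction_buffer.popleft()))
    return responses


print(run_once(["Where is the party tonight?"]))
```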
  • The VS answers queries by looking up local information (user and application data on the local machine), and then proceeds to look up information in other networks to which the user has access, such as web-based social networks and the internet. It may also automatically mine and present information where applicable. In an exemplary embodiment, when a user receives a phone call, the VS searches address books on the local machine and/or the Internet and/or social networks, such as web-based, home or office networks, to look up the caller's name, phone number and other information, including pictures, and displays the appropriate information during the incoming phone call. If the information is not on any of the user's networks, the VS may look up public directories and other public information to identify the caller and source. The VS may also look up/search cached information that was previously looked up or that is available on the local machine. Additionally, the VS gives information about the type of caller and the relation between the caller and the user. For instance, the VS informs the user whether the call is from a telemarketing agency or from the dentist or from Aunt May in San Francisco, etc. The VS may also specify the location of the caller at the time of the call using GPS and other positioning and location techniques. The VS may make use of colloquial language to communicate with the user. The call display feature can be used as a standalone feature with cell phones, landlines and VoIP phones. A user may query the VS with a generic query such as 'What is an Oscilloscope?' The VS conducts a semantic analysis to determine the nature of the query. In this case, it determines that the query is related to a definition for a term. It would then look up a source for definitions such as an encyclopaedia, based on its popularity and reliability as a source of information on the internet, or as specified by the user. As an example, it may look up Wikipedia to answer the user's query in this case.
  • Services based on identifying users' locations are available through the VS. The VS may also be linked to, or accessible to/by, mobile phones or handheld devices of members in the user's friends' network, businesses in the user's network, and other users and institutions. Location can be computed/determined using mobile position location technologies such as GPS (Global Positioning System) or triangulation data of base stations, or a built-in GPS unit on a cell phone in an exemplary embodiment. The VS can inform the user if friends of the user are in the location or vicinity in which the user is located at present, and/or indicate the position of the user's friend relative to the user and/or the precise location of a given friend. In an exemplary embodiment, if the user is at a grocery store, and the VS detects that a friend (George) of the user is around the corner, then the VS may point this out to the user saying, "Hey, George is at the baked goods aisle in the store." In order to establish location in the store, the VS may establish a correspondence between the GPS location coordinates and the store map available via the retail server 24. The VS may additionally overlay the location coordinates on a map of the store and display the information on the user's handheld device. The VS may display a 'GPS trail' that highlights the location of a user over time (GPS location coordinates in the recent past of a user being tracked). The trail may be designed to reflect the age of the data. For example, the colour of a trail may vary from dark to light red where the darker the colour, the more recent the data; a minimal sketch of this colouring scheme is given below. The users may communicate via voice and/or text and/or video, and/or any combination of the above. The content of the conversation may be displayed in chat boxes and/or other displays and/or graphics overlaid on the respective positions of the users on the map. Also, the user can query the VS to identify the current geographic location of a friend at any given time. Therefore, identification of a friend's location is not necessarily restricted to when a friend is in the user's vicinity. Users may watch live video content from their friend's location on their mobile device. They may interact with each other via an overlaid whiteboard display and its accompanying collaborative tools as described with reference to FIG. 20. In an exemplary embodiment, with reference to FIG. 56, 'User A' may be lost and he may phone his friend, 'User B', who can recognize the current location of User A based on the landmarks and video information User A transmits via his mobile. User B may also receive GPS coordinates on her mobile via the VS. User B can then provide directions to User A to go left or right based on the visual information (images/video) that is transmitted to User B's mobile via User A's mobile. User B may also scribble arrows on the transparent overlay on the video, to show directions with reference to User A's location in the video, which would be viewable by User A. Based on the content of the whiteboard deciphered through OCR (optical character recognition) techniques or sketch to model recognition [3] or speech to model recognition, related content (for example advertisements) may be placed in the proximity of the drawing screen or anywhere else on the mobile's screen and/or related content may be communicated via audio/speech, and/or graphics/images/videos.
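  • The following is a minimal sketch of the 'GPS trail' colouring, fading from dark red for recent fixes to lighter red for older ones; the fade horizon, the RGB interpolation and the sample coordinates are assumptions for illustration.

```python
# Minimal sketch of a GPS trail whose colour reflects the age of each fix.
from datetime import datetime, timedelta


def trail_colours(fixes, now=None, horizon_minutes=30):
    """fixes: list of (lat, lon, timestamp); returns (lat, lon, rgb) tuples."""
    now = now or datetime.now()
    out = []
    for lat, lon, ts in fixes:
        age = (now - ts).total_seconds() / 60.0
        fade = min(age / horizon_minutes, 1.0)          # 0 = newest, 1 = oldest
        red = int(139 + fade * (255 - 139))             # dark red -> light red
        out.append((lat, lon, (red, int(fade * 180), int(fade * 180))))
    return out


now = datetime(2009, 3, 23, 12, 0)
fixes = [(43.65, -79.38, now - timedelta(minutes=20)), (43.66, -79.39, now)]
for point in trail_colours(fixes, now):
    print(point)
```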
  • Users may request to appear invisible or visible to friends and/or other users, so that they cannot be located by the user they appear invisible to. Businesses may use location data for delivery purposes in exemplary embodiment. For instance, pizza stores may deliver an order made via the VS to the user placing the order, based on their GPS coordinates. Users can request to be provided with exact ‘path to product’ in a store (using the communication network and method described with reference to FIG. 50 previously), upon which the VS provides the user with exact coordinates of the product in the store and directions to get there. The product location and directions may be overlaid on a store/mall map. Also, in virtual stores/malls/tourist locations and other virtual places described in this document, users may request ‘path to products’, and they will be provided with product location information and directions overlaid on a map of the virtual world. Alternatively, they may be directed to their destination by a virtual assistant or they may directly arrive at their virtual destination/product location in the virtual world.
  • Order placements and business transactions can also be conducted via a user's mobile device. A user may view a list of products and services on their mobile device. The user may place an order for a product or service via their mobile device via SMS or other services using the WAP protocol, or through a cell-phone-based browser in an exemplary embodiment. The vendor is informed of the order placed through a web portal and keeps the item ready for pick-up, or delivers the item to the address specified by the user or to the current location of the user, which may be determined using a cell phone location technique such as GPS or cell-phone triangulation. Users may pre-pay for services or make reservations for services such as those provided in a salon via their mobile device and save waiting time at the salon. Vendors may have access to ‘MyStore’ pages, as described in an exemplary embodiment previously with reference to FIG. 42. Once the order and transaction are approved, a confirmation is sent to the user. Electronic receipts may be sent to the user on their cell phone via email, SMS, web mail, or any other messaging protocol compatible with cell phones. Other information, such as warranty details, can be linked to the cell phone based on electronic receipts, as described previously with reference to electronic receipts.
  • In an exemplary embodiment, a user ‘Ann’ may be a tourist visiting Italy for the first time, and would like to find out which restaurants have good ratings and where they are located. The user can query the system to determine which restaurants ‘Jim’ (a friend who visited Italy recently) ate at, their locations, and the menu items he recommends. The system, in this case, looks up Ann's friend's network on a social networking site, in exemplary embodiment, to access and query Jim's account and acquire the appropriate information. Jim has a virtual map application where he has marked the location of the restaurants he visited when he was in Italy. The restaurants each have a digitized menu available (hyperlinked to the restaurant location on the map) where items can be rated by a given user. Given that Ann has permission to access Jim's information, the information pertaining to location of the restaurants that Jim visited and liked and the ratings of menu items of those restaurants will be made available to Ann on her mobile device. Alternatively, Jim's travel information may be available from a travel itinerary that is in document or other format. In this case, the restaurant location information may be overlaid onto a virtual map and presented to Ann. The menu items that Jim recommended, along with their ratings may be hyperlinked to the restaurant information on the map in document, graphics, video or other format. Other files such as photos taken by Jim at the restaurants, may be hyperlinked to the respective restaurant location on the map. Thus, in this example, the VS utilized information on a friend's account that may be located on a user's machine or other machine on the local network, or on the community server 26 or on a remote machine on the internet; a map application that may be present on the local machine, or on the portal server 20 or other remote machine; and restaurant information on the retail server 24 or other machine. In this manner, the VS can combine information and data and/or services from one or more storage devices and/or from one or more servers in the communication network in FIG. 51A.
  • Users may utilize the VS for sharing content ‘on the fly’. A website or space on a web server may exist where users can create their ‘sharing networks’. Alternatively, sharing networks may be created via a local application software that can be installed on a computing machine. A sharing network comprises member users whom the user would like to share content with. A user may create more than one sharing network based on the type of content he/she would like to share with members of each network. Members may approve/decline request to be added to a sharing network. A space is provided to each sharing network where the members in the sharing network may upload content via their mobile communication device or a computing machine by logging into their sharing network. Once the user uploads content into the sharing space, all members of that particular sharing space are notified of the update. Sharing network members will be informed immediately via an SMS/text message notification broadcast, as an example. Members may change the notification timing. They may also alternatively or additionally opt to receive notification messages via email and/or phone call. In exemplary embodiment, a user may upload videos to a sharing space. Once the video has been uploaded, all the other members of the sharing network are notified of the update. Members of the network may then choose to send comments ‘on the fly’ i.e., members respond to the video update by posting their comments, for which notifications are in turn broadcast to all members of the sharing network. In another exemplary embodiment, the VS may directly broadcast the uploaded content or a summary/preview/teaser of the uploaded content to all members of the sharing network. Real-time communication is also facilitated between members of a sharing network. Chat messages and live video content such as that from a webcam can be broadcast to members of a sharing network in real-time. The sharing network feature may be available as a standalone feature and not necessarily as part of the VS.
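A minimal sketch of the sharing-network upload and notification broadcast described above is shown below in Python; the SharingNetwork class, the send_text() stand-in and the phone numbers are hypothetical placeholders for whatever storage and messaging gateway an actual deployment would use.

```python
def send_text(phone, message):
    """Hypothetical stand-in for an SMS/text-message gateway."""
    print(f"SMS to {phone}: {message}")

class SharingNetwork:
    """A named group of members who share a common upload space."""
    def __init__(self, name, members):
        self.name = name
        self.members = dict(members)   # username -> phone number
        self.space = []                # uploaded content items

    def upload(self, username, item):
        """Add content to the shared space and notify every other member."""
        self.space.append((username, item))
        for member, phone in self.members.items():
            if member != username:
                send_text(phone, f"{username} added '{item}' to {self.name}")

net = SharingNetwork("hiking-videos", {"ann": "555-0101", "jim": "555-0102"})
net.upload("ann", "trail_day1.mp4")   # jim receives a text notification
```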
  • The tourism industry can make use of the VS to provide users with guided tours as the user is touring a site. Instructions such as ‘on your right is the old Heritage building’, and ‘in front of you are the Green Gardens’, may be provided as the user browses a site and transmits visual and/or text and/or speech information via their mobile and/or other computing device to the VS. In an exemplary embodiment, a user may transmit site information in the form of images/videos to the VS as he browses the site on foot. Alternatively or additionally, the VS can provide tour guide information based on the GPS coordinates of a user. Instructions may be provided live as the user is touring a site. The user may transmit their views via a webcam to the tour application, which is part of the VS. The tour application then processes the images/videos in real-time and transmits information on what is being viewed by the user (i.e., ‘guided tour’ information). Users may ask the VS/tour application queries such as ‘What is this?’ and point to a landmark in the image, or ask ‘What is this white structure with black trimmings to my left?’. Thus, the VS tour application may decipher speech information and combine the query with image/video and any visual information provided to answer the user. The tour instructions/information can be integrated with whiteboard features so that landmarks can be highlighted with markings, labels etc., as the user is touring the site. The VS may alternately or additionally transmit site information/tour instructions based on the GPS coordinates and orientation of the user. Orientation information helps to ascertain the direction in which the user is facing so that appropriate landmark referencing may be provided, such as ‘to your left is . . . ’, ‘turn right to enter this 14th century monument’ etc. Orientation may be determined by observing two consecutive coordinates and computing the displacement vector. Tour information/instructions may be registered with existing map applications and information and/or street view applications and information (for example Google Street View). Computationally intensive tasks, such as registration of the user's view with maps or other views in a database, may be transmitted to a remote server and the results may be transmitted back to the user's mobile device. Advertisement information may be overlaid on/linked to relevant sites on user views on a mobile in an exemplary embodiment.
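The orientation computation mentioned above (two consecutive GPS fixes giving a displacement vector) can be sketched as follows in Python; the coordinates and the eight-way quantization are illustrative assumptions.

```python
import math

def heading_degrees(lat1, lon1, lat2, lon2):
    """Initial compass bearing (0 deg = north, clockwise) from the previous
    GPS fix to the current fix, computed on a spherical Earth."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def facing(bearing):
    """Quantize a bearing into a coarse facing direction for landmark cues
    such as 'to your left is ...'."""
    dirs = ["north", "north-east", "east", "south-east",
            "south", "south-west", "west", "north-west"]
    return dirs[int((bearing + 22.5) // 45) % 8]

# Two consecutive fixes a few metres apart (illustrative values):
b = heading_degrees(45.4640, 9.1895, 45.4642, 9.1900)
print(f"user is facing roughly {facing(b)} ({b:.0f} degrees)")
```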
  • Data from the user's mobile device may be used to reconstruct a 3D model of the scene, which may be available for viewing remotely. The reconstruction, if too computationally intensive, may occur on a remote machine.
  • Instructions may also be tailored to users on foot (instead of in a vehicle, for example), via the handheld. These include instructions specific to a person on foot, such as ‘turn around’ or ‘look up’, in an exemplary embodiment. In the case of directions to a location as well, users may be provided alternate instructions to arrive at a destination when traveling by foot (thus, directions are not limited to driving directions).
  • The VS may be integrated with a map application where users can directly mark or recommend places to visit. These marked places may be hyperlinked with to-do lists that specify the activities or events the user can engage in at those places, or blogs that catalogue user experiences. Photos, videos and other graphics and multimedia content may be linked to a place on the map describing the place, its significance and its attractions. These may also be pictures/videos taken by friends, virtual tours etc. A user may add or request to see specific feeds for a given place. In an exemplary embodiment, the local news headlines corresponding to a selected place on the map may be displayed. Areas of interest such as general news, weather, science or entertainment may be selected by the user to filter and display news and other information of interest. Event feeds that display events or activities on a particular month or week or day of the year at a place may be requested. Generic user videos capturing user experience or travel content at a place may be displayed. These may be videos that are extracted from a video uploading site such as YouTube, based on keywords such as the place name, other default keywords, or keywords specified by the user. Local shopping feeds containing information about the places with the most popular or cheapest and other categories of shopping items may be linked or associated with the places on the map. The most popular local music, and where to buy it, may be associated with a place. Other local information such as car rentals, local transit, restaurants, fitness clubs and other information can be requested by the user. Thus, local information is made easily available on any computing or mobile or display device. In addition, map overlays and hyperlinks to appropriate sources/places are used in order to make information presentation as user-friendly as possible. The user can also request the VS to display itineraries that include cities, places, events, attractions and hotels that the user chooses. In addition, the user may specify filters such as price range and time period to include in forming the itinerary. The VS would scan the appropriate databases detailing places, events, attractions and hotels and their associated information such as prices, availability, ticket information etc. in order to draw up a suggested itinerary accommodating user requirements as well as possible. The user may make all reservations and purchases of tickets online. The VS would direct the user to the appropriate reservation, purchasing and ticketing agents. Alternatively, the VS may be equipped with a facility to make hotel and event bookings and ticket purchases (for events, attractions etc.) online.
  • The VS may be used to connect to services in a local community as well. Users can request an appointment at the dentist's office, upon which the system will connect to scheduling software at the dentist's end (the service's end), in an exemplary embodiment. The scheduling software would check for available slots on the day and time requested by the user, schedule an appointment if the slot is available, and send a confirmation to the VS. The VS then informs the user of the confirmation. If the requested date and time is already taken or not available, the scheduler sends the user a list of available slots around the day and time the user has requested. The VS provides this information to the user in a user-friendly format and responds to the scheduler with the option the user has selected.
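A minimal sketch of the slot-negotiation exchange between the VS and the service's scheduling software follows; the in-memory slot table, opening hours and three-hour search window are illustrative assumptions standing in for the service's own database and business rules.

```python
from datetime import datetime, timedelta

# Hypothetical slot table kept at the service's (dentist's) end.
booked = {datetime(2009, 3, 23, 10, 0), datetime(2009, 3, 23, 11, 0)}
OPEN_HOURS = range(9, 17)   # appointment start times 9:00 .. 16:00

def request_appointment(requested, window_hours=3):
    """Confirm the requested slot if free; otherwise return nearby free
    slots around the requested day and time."""
    if requested not in booked and requested.hour in OPEN_HOURS:
        booked.add(requested)
        return {"status": "confirmed", "slot": requested}
    alternatives = []
    for offset in range(1, window_hours + 1):
        for cand in (requested + timedelta(hours=offset),
                     requested - timedelta(hours=offset)):
            if cand.hour in OPEN_HOURS and cand not in booked:
                alternatives.append(cand)
    return {"status": "unavailable", "alternatives": sorted(alternatives)}

print(request_appointment(datetime(2009, 3, 23, 10, 0)))
```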
  • Another facility is a ‘Centralized Communication Portal’ (CCP), which provides users with access to all emails (work, home, web-based, local application based), voice messages, text messages, VoIP messages, chat messages, phone calls, faxes and any other messages/calls available through electronic messaging services. The CCP may take the form of web-based software and/or mobile device software and/or a local application for use on a computing machine or a mobile device or a landline phone. The CCP is equipped with text-to-speech and speech-to-text conversion so that it is possible for users to access emails in the form of voice messages, and voice messages in text format, in an exemplary embodiment. The user can set the display name and number or email address of outgoing phone calls, emails or SMS, or the system can determine these automatically based on factors such as who the message is for or what the context of the message is, etc. The system only lets users set the phone number or email address of outgoing messages if the user owns these phone numbers and email addresses. In an exemplary embodiment, the ownership of a phone number or email address is established by posing a challenge question to the user, the answer to which is sent to the phone number or email address.
  • While a person is on a call, the CCP can simultaneously make a recording of the conversation, if access is granted by the participants of the call; convert the call recording into text; reformat the message if necessary and provide the user with options to do something with the recording such as email or save call recording, in an exemplary embodiment. The CCP can keep track of a call or message duration and/or size. This may be useful in case of professional services that charge per call or message for their services provided via phone or email or other messaging service(s). The CCP allows users to program features. In an exemplary embodiment, users can program the CCP to respond in a certain way to an incoming call. For example, the user may program the CCP to ignore call or forward the call to an answering machine, if the incoming call is from a specific number or person, for instance. In another exemplary embodiment, a user (Ann, for example) may program the CCP to respond to calls by automatically receiving the call after two rings, for example, and playing a message such as ‘please state your name’, or ‘please wait until Ann picks up’, or playing audio tracks from a certain folder available on the user's local machine or a remote machine or through a web page. If the caller user is logged into their CCP account, available through a web page or a local application on their computer or mobile device, then they may be able to view videos that the receiver user (i.e., the user receiving the call) has programmed the CCP to play before they pick up the call (the video may play via a visual interface provided by the CCP). In another exemplary embodiment of programming options, users may be able to set forwarding options for incoming calls and emails. For example, the user may program the CCP to forward all incoming emails (chat or text messages) or incoming emails (chat or text messages) from specific users to a mobile handheld/phone; forward incoming calls to a mobile phone to an email address or to another cell phone(s), in exemplary embodiments. Images in emails/text/chat messages may be converted to text using computer vision techniques such as those described with reference to FIG. 52 and FIG. 6. Text to speech conversion may then be carried out and, thus image information in text/email/chat messages can also be made available via voice messages or voice chat. PBX (Private Branch eXchange) systems may be integrated with the CCP.
  • An easy-to-use visual interface may be provided by the CCP. When a call is made, the interface may display the status of the receiver user. In an exemplary embodiment, the status of a user may be: busy, back in 10 minutes, not in, hold/wait, leave message, attending another call, call another number: #####, etc. In another exemplary embodiment, a virtual character may greet the caller via the visual interface and inform the caller of the receiver's status, and instruct the caller to leave a message or direct the caller to another phone number or provide alternate directions. In another exemplary embodiment, a video recording of the receiver user may greet the caller user and provide status information and/or instructions to leave a message, call another number, hold/wait etc. Image-to-text conversions may also be useful to convey visual elements of a conversation (in addition to the audio/speech elements), in the case that users would like to view webcam/video conversations in text message form or in audio/voice format. Text-to-image conversion can be carried out using techniques similar to those described in [60]. This conversion may be utilized when a user opts to see email/chat/text/SMS messages via the visual interface. In this case, in addition to displaying text information, image information obtained via text-to-image conversion may be displayed. Alternatively, this converted image information can be displayed as a summary or as a supplement to the actual messages.
  • Users may additionally connect to each other during a call or chat or email communication via webcam (s) whose output is available via the CCP's visual interface. Any or all of the collaborative tools, and methods of interaction discussed with reference to FIG. 20 may be made available to users by the CCP for collaborative interaction between participants during a call or chat or email communication via the CCP's visual interface. Users may be able to organize their messages, call information and history in an environment that allows flexibility. In exemplary embodiment, users may be able to create folders and move, add, delete information to and from folders. They may tag messages and calls received/sent. They may organize calls and messages according to tags provided by the system (such as sender, date) or custom tags that they can create. Call and message content and tags are searchable. Spam detection for phone calls, chat, text and voice messages (including VoIP) is integrated with the CCP, in addition to spam detection for email. In an exemplary embodiment, this is accomplished using a classifier such as a Naïve Bayes classifier [7, 61]. In addition, spam feature lists may be created using input from several users as well as dummy accounts. In an exemplary embodiment, if a user's friend who receives the same or similar email, phone call, SMS, etc. marks it as spam then the probability of that message being spam is increased. Dummy accounts may be setup and posted on various sources such as on the internet and messages collected on these accounts are also marked with a high probability of being spam. Users also have the option to unmark these sources/numbers as spam. A signature may be used by the CCP to confirm the authenticity of the source of the message. In an exemplary embodiment, this signature is produced when the user's friend logs into the system. In another exemplary embodiment, this signature may be produced based on the knowledge of the user's friend available to the CCP. Additionally, the CCP may inform the user that a particular number appears to be spam and if the user would like to pick up the phone and/or mark the caller as spam. The CCP may additionally provide the user with options regarding spam calls such as: mute the volume for a spam call (so that rings are not heard), direct to answering machine, respond to spam call with an automated message, or end call, block caller etc. Users may arrange meetings via the CCP. A user may specify meeting information such as the date, time and location options, members of the meeting, topic, agenda. The CCP then arranges the meeting on behalf of the user by contacting the members of the meeting and confirming their attendance and/or acquiring alternate date, time, location and other options pertaining to the meeting that may be more convenient for a particular member. If any of the users is not able to attend, the CCP tries to arrange an alternate meeting using the date/time/location information as specified by the user that is not able to attend and/or seeks an alternate meeting date/time/location from the user wishing to arrange the meeting. The CCP repeats the process until all users confirm that they can attend or until it synchronizes alternate date, time and location parameters specified by all members of the meeting. Users may specify the best mode such as email, phone, fax, voice, chat, text message via which the CCP may contact them to arrange a meeting. 
Users can also confirm whether they would be attending a meeting in person or via video/phone conferencing etc. Instead of providing only a binary classification (“spam” or “not spam”), the spam detector may provide several levels of spam classification. If desired by the user, it can automatically sort emails, phone calls, SMS, etc. based on various criteria such as importance or nature (e.g., social, work related, informational, confirmation, etc.). This may be done, in an exemplary embodiment, by learning from labels specified by users and/or attributes extracted from the content of the email, phone call, SMS etc. using Naïve Bayes (a minimal sketch of such multi-level spam scoring appears below). In an exemplary embodiment, a technique similar to that used in [62] is used for ranking.
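The multi-level spam scoring mentioned above may be sketched as follows; this is a minimal Python example using a Naïve Bayes text classifier, where the tiny training set, the 0.1 per-friend adjustment and the level cut-offs are illustrative assumptions rather than tuned values.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny illustrative training set; real labels would come from users and
# from dummy "honeypot" accounts as described above.
messages = ["win a free prize now", "lunch tomorrow?", "cheap meds online",
            "meeting moved to 3pm", "free prize claim now", "report attached"]
labels = [1, 0, 1, 0, 1, 0]            # 1 = spam, 0 = not spam

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(messages), labels)

def spam_level(text, friend_flags=0):
    """Map the spam posterior to several levels rather than a binary label;
    each friend who marked the same message as spam nudges the score up."""
    p = clf.predict_proba(vec.transform([text]))[0, 1]
    p = min(1.0, p + 0.1 * friend_flags)     # illustrative adjustment
    if p > 0.9:
        return "almost certainly spam"
    if p > 0.6:
        return "probably spam"
    if p > 0.3:
        return "possibly spam"
    return "not spam"

print(spam_level("claim your free prize", friend_flags=2))
```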
  • The CCP may assign users a unique ID similar to a unique phone number or email address, which may consist of alphanumeric characters and symbols. In exemplary embodiment, it may assume the form ‘username#company’. It may be tied to existing top-level domains (TLDs), for example, the ‘.com’ domain. When someone dials or types this ID, a look up table is used to resolve the intended address which could be a phone number or email/chat address or VoIP ID/address or SMS ID. Users may specify whether they would like to use the CCP ID as the primary address to communicate with any user on their contact list. Users may also use the CCP ID as an alias.
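A minimal sketch of resolving a dialled or typed CCP ID through a look-up table is given below; the directory contents and the ‘username#company’ entries are hypothetical, and a real deployment would resolve these through a directory service rather than an in-memory table.

```python
# Hypothetical look-up table mapping a CCP ID of the form 'username#company'
# to the user's underlying addresses on each channel.
ccp_directory = {
    "asmith#acme": {"phone": "+1-416-555-0100",
                    "email": "asmith@acme.example",
                    "voip":  "sip:asmith@acme.example"},
}

def resolve(ccp_id, channel="phone"):
    """Resolve a CCP ID to the address for the requested channel
    (phone, email, voip, sms, ...); returns None if unknown."""
    entry = ccp_directory.get(ccp_id.lower())
    return entry.get(channel) if entry else None

print(resolve("asmith#acme", "email"))   # -> asmith@acme.example
```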
  • The CCP may be integrated with the VS and/or incorporates one or more features of the VS, and vice versa.
  • An example of a “Job Application and Resume Management Service” (JARMS) is described next. This application may be available on the portal server 20. Users can create their “Job Profile” via this service. Forms and fields will be available for users to document their background and qualifications, including their personal history, education, work and voluntary experience, extra-curriculars, affiliations, publications, awards and accomplishments, and other information of relevance to their careers. This service would provide questionnaires that may be useful to record or test skill subsets of the user. Hiring managers may find this additional information useful to assess a given job applicant's skills. Furthermore, questions commonly asked by Human Resources (HR) personnel may be made available for users to post answers to. This would assist the employers in further reducing application processing times. The skill and HR questions may be posted in text, audio, video and any other multimedia format. The user responses to those questions may also be posted in text, audio, video and any other multimedia format. A “Portfolio” section is available that assists the user in developing, preparing and uploading documents and other files of relevance to their career, for example, resumes, posters, publications, bibliographies, references, transcripts, reports, manuals, websites etc. This service will make it convenient for the user to upload documents in a variety of formats. Also, the user can design different resumes for application to different types of jobs. A tools suite assists the user in document uploading, manipulation and conversion. In an exemplary embodiment, a PDF (Portable Document Format) conversion tool, document mark-up, and other tools are provided to the user. Users can upload transcripts directly from their University Registrar/Transcript offices, or websites, through this service. The transcripts may be authenticated by the Universities or certified electronically. In this manner, the employers can be assured of the validity of the transcript uploaded through this service. References and their contact information are provided by the user via this service. Links to the accounts of the referees on JARMS or social networking sites such as LinkedIn may also be provided on the user's profile. Videos from YouTube or other sources that document user accomplishments or work, such as a conference presentation or an online seminar or a product demonstration and other examples, may be uploaded.
  • JARMS is equipped with additional security features so that information is not easily viewed or captured by third party individuals or software etc. Employers to which users are interested in submitting their application to may be provided with access to the user's job profile. Users may also select the account resources they would like to make accessible to the employer.
  • An “Interview Room” facility is available through JARMS, which is an online space where real-time interviews can be conducted. Visual information along with audio and other content from a webcam, camcorder, phone etc. may be broadcast and displayed in windows that assume a configuration as shown in FIG. 53, so that all users in an interview session can be seen simultaneously. The interview room may be moderated by personnel from the institution or company that is conducting the interview. This session moderator can allow or disallow individuals from joining the session. The interviewee and interviewers can view each other simultaneously during the interview session in the display windows in FIG. 53, by using video capture devices at each end and broadcasting the captured content. The interview may involve video and audio content only, or it may be aided by speech-to-text devices that convert audio content to text and display the content as in the ‘Transcript’ display box in FIG. 53. Alternately, text input devices such as a keyboard/mouse may be used to enter text. JARMS sessions may be private or public. These sessions may be saved or loaded or continued or restored. The session content, including video content, may be played, paused, rewound or forwarded.
  • The collaborative broadcasting and viewing of content in windows arranged as in the configuration given in FIG. 53 may occur during an online shopping session or during a news coverage session online or a technical support session and during other collaborative communication and broadcast sessions online. In an exemplary embodiment, during a news broadcast session, questions posed by viewers of the news story will appear in a ‘Live Viewer Feed’ (Feedback) box. Another feature, “Live Image Retrieval”, looks up/searches for images corresponding to the words relayed in the broadcast in real-time, either on the local machine or the internet or a file or folder specified by one or more of the users involved in the collaborative session, and displays the appropriate images during the session to the viewers in another display window. The system may look up image tags or filenames or other fields characterizing or associated with the image in order to perform the image search and retrieval corresponding to words in the collaborative conversation or broadcast. In an exemplary embodiment, this can be accomplished as shown in [60]. The Live Image Retrieval (LIR) application can be used with other applications and in other scenarios. In an exemplary embodiment, a user may specify an object in text or voice or other audio format during online shopping. The LIR would retrieve images corresponding to the specified word from the retail server 24. The user can then select the retrieved image that best matches the user's notion of that object. For instance, the user may specify a black purse and the LIR would retrieve images of many different types of black purses from different sources, such as a black leather purse, brand name/regular black purses, black purses in stores in proximity of the user's location, fancy/everyday use black purses, etc. When the user selects the purse meeting the characteristics that the user is looking for, system 10 or the VS directs the user to the source of that purse, which may be an online store.
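A minimal sketch of the tag/filename matching step of Live Image Retrieval is given below in Python; the catalogue records and URLs are hypothetical stand-ins for image metadata that would actually be served by the retail server 24 or found on the local machine or the internet.

```python
# Hypothetical product-image records with tags, a file name and a source.
catalogue = [
    {"file": "purse_black_leather.jpg", "tags": {"purse", "black", "leather"},
     "source": "http://store.example/item/101"},
    {"file": "purse_red_evening.jpg",   "tags": {"purse", "red", "evening"},
     "source": "http://store.example/item/102"},
    {"file": "wallet_black.jpg",        "tags": {"wallet", "black"},
     "source": "http://store.example/item/103"},
]

def live_image_retrieval(query):
    """Rank catalogue images by how many query words match their tags or
    file name, highest overlap first."""
    words = set(query.lower().split())
    scored = []
    for rec in catalogue:
        hits = len(words & rec["tags"]) + sum(w in rec["file"] for w in words)
        if hits:
            scored.append((hits, rec))
    return [rec for hits, rec in sorted(scored, key=lambda s: -s[0])]

for rec in live_image_retrieval("black purse"):
    print(rec["file"], "->", rec["source"])
```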
  • Another application (‘Social Bug’—SB) in the portal server 20 is described next that lets users upload content conveying information of interest to the general public such as activities, restaurants, shopping, news etc. These topics may be linked to specific geographical areas, so that users can look up information that pertains to a specific region of interest, such as the local community they reside in. So, in exemplary embodiment, users may look up or search content related to activities and events in their local community. The content may be uploaded by common users or business owners. Such video content will provide more information related to a topic in the form of reviews, user experiences, recommendations etc. The content is as dynamic and topics as wide-ranging as the users' interests. The uploaded content may assume the format of videos in exemplary embodiment. Moderators for each region may filter the content uploaded by users and choose the most relevant videos. The content may be organized or categorized according to fields such as ‘activities’, ‘events’, ‘businesses’, ‘shopping item/store’, ‘news area’ etc. Users can also specify the kind of information they would like to receive more information on via feeds, in an exemplary embodiment. Users may opt to receive feeds on a particular tag/keyword or user or event or business or subject.
  • The user can indicate specific filters like ‘video author’, ‘reviewer’, ‘subject’, ‘region/locality’, ‘date created’, ‘event date’, ‘price range’, and videos, video feeds and related content will be presented grouped according to the filters and/or filter combinations and keywords specified. Users can also specify objects in videos they are looking for, for example, ‘Italian pasta’, or a particular chef, in videos about restaurants. Video tags and other information describing a video (such as title, author, description, location etc.) may be used in order to find and filter videos based on criteria specified by the user. Additionally, video content (for instance, image frames, music and speech content) is mined in order to filter or find videos according to the user specified criteria.
  • This application allows users to indicate whether they liked a given video. Users can specify what they like about a video using keywords. Users may specify what kind of content they would like to see more of. A section/field titled ‘More of . . . ’ would assist users in specifying preferences and suggestions about content they like or would like to see more of.
  • Relevant links and applications would be provided to users via this service depending on the content being viewed. In exemplary embodiment, if users are viewing restaurant related content, links would be provided allowing users to send a query to the restaurant, call up the restaurant, or book reservations via SMS, phone, email or chat. Similarly, if the user is viewing news items, news feed items and polls related to the content the user is viewing will be provided in the form of summaries or links. Top rated or most viewed response videos posted by viewers to news stories may also be posted on the same page. Videos may be pre-filtered by moderators. In exemplary embodiment, organizations working for social causes can post response videos to news stories covering issues such as poverty or human rights. They may conduct campaigns or provide information online through the use of videos. Such response videos will help to target specific audiences interested in the issues the organization is working/campaigning for. Since news videos are more popular, traffic can be directed to other videos relaying similar content but which may not necessarily belong to the same genre (for instance, two videos may both talk about poverty, but one may be a news story and the other an advertisement or documentary produced by an NGO). These videos may be posted as response videos to more popular videos, which may not necessarily be news videos.
  • Objects in videos and/or frames may be hyperlinked and/or tagged. In an exemplary embodiment, while browsing a jewelry store advertisement or infomercial, a user may click or hover over or select an item of interest (a necklace, for example) and be provided with details on the make, model, materials of the necklace, pricing information etc. on the same or a different frame/page. Alternatively/additionally, while a user is browsing the video, tags/comments/links may appear automatically. Users may also be provided with additional information such as deals available at the store; other users browsing the video and the user's friends, if any, that are browsing/have browsed the same video or shopped at the store; where similar products or items may be found; store/business ratings/comments/reviews; how the store compares with other stores with reference to specific criteria such as bargains, quality, service, availability of items, location accessibility. Additional features such as those discussed with reference to FIG. 36 may be available. In another exemplary embodiment, tagged/hyperlinked objects within videos/images/simulations (which may be live or not) may be used for providing guided tours. In another exemplary embodiment, videos/image frames may be tagged/hyperlinked. As a video plays and a tagged frame appears, the corresponding tag is displayed to the user. The tags/hyperlinks/comments described above are searchable. On searching for a tag or browsing through tags, the corresponding videos are shown.
  • Users can also avail of the translation feature that enables translation of videos in different languages either in real-time or offline. Text, audio and/or video content is translated and presented as audio/speech, text (subtitles for example). Shared viewing of videos between friends is possible. When shared viewing or broadcasting occurs, the same video may be simultaneously viewed by users sharing it, in different languages. The same feature is available in any/all of the chat applications mentioned in this document i.e., text typed in a certain language in a chat application may be translated to multiple languages and made available in real-time or offline to the different users of a chat session in audio/speech, text (subtitles for example). The video presentation/content may be interactive i.e., users watching the videos may interact with each other via the collaborative tools described with reference to FIG. 20 and modes referenced in FIG. 7. Additionally the video may be a live broadcast where the presenter or video author(s) or video participants may interact with the audience watching the broadcast via the collaborative tools described with reference to FIG. 20 and modes referenced in FIG. 7.
  • Summaries of video content may be provided in addition to video lists. Conferences or seminars or news stories or documentaries or movies may be summarized and provided to users. Users may be able to obtain a real-time summary of a given video before choosing to view the complete video. Composite summaries of related videos or videos grouped by subject, tags or title or author or keyword or any other criteria may be provided to users. This involves providing a summary of all videos in the group in one video. As the composite video plays, individual links to the corresponding video being shown in the summary at any given moment, are displayed. Video summarization (VSumm) techniques may involve tracking of most popular keywords. These include most commonly used search terms, and tags of most viewed videos in exemplary embodiment. VSumm may also keep track of important keywords via phrases implicitly referencing them such as ‘important point to be noted is . . . ’ in a video, in order to identify important regions/content in videos (i.e., these regions are namely those audio/video signal sequences in a video in which important keywords are embedded).
  • Additionally, users may specify summarization parameters, such as the length of the summarized video and/or filters. Users can employ filters to specify scenes (video, audio, text content/clips) to include in the summaries. These filters may include keywords or person or object name contained in the video clip to be included in the summary. In exemplary embodiment, a user may specify an actor's name whose scenes are to be contained in the summary of a movie. Other filters may include the kind of content the user would like to pre-filter in the video such as ‘obscene language’ in exemplary embodiment.
  • Given a video/audio/text sequence, the sequence can be summarized according to the procedure illustrated in FIG. 55 and described next, in exemplary embodiment. Given an audio-visual A/V (or audio, or image or video, or text or any combination thereof) sequence, it may be broken down (split) into audio, video, image and text streams, while maintaining association. In exemplary embodiment, if a PowerPoint presentation is the input, then the audio-video-image-text content on any given slide is associated. If an audio-video sequence is being analyzed, then audio and video signals at any given time are associated. Different processing techniques are then applied in different stages as shown in FIG. 55 to carry out the input sequence summarization.
  • At the optional Filtering step, pre-processing is carried out using digital signal processing techniques. In an exemplary embodiment, a transformation is applied to an image sequence to convert it into the corresponding signal in some pre-defined feature space. For example, a Canny Edge detector may be applied to the frames of an image sequence to obtain an edge space version of the image. Multiple filters may be applied at this step. Subsequences can be identified not just over time, but also over frequency and space. The resulting pre-processed data sequences are passed on to the Grouping stage.
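In an exemplary embodiment, the edge-space pre-processing mentioned above may be sketched as follows using OpenCV's Canny detector; the video path and the Canny thresholds are illustrative assumptions.

```python
import cv2

def edge_space_frames(video_path, lo=100, hi=200):
    """Yield a Canny edge-space version of each frame of a video, as one
    possible pre-processing filter applied before the Grouping stage."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        yield cv2.Canny(gray, lo, hi)
    cap.release()

# Usage (path is illustrative):
# for edges in edge_space_frames("lecture.mp4"):
#     ...pass the edge frames on to the Grouping stage...
```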
  • At the Grouping stage, subsequences are identified and grouped based on their similarity. Distance metrics such as Kullback-Leibler divergence, relative entropy, mutual information, Hellinger distance, L1 or L2 distance are used to provide a measure of similarity between consecutive images, in exemplary embodiment. For instance, when mutual information is computed for consecutive data frames, and a high value is obtained, the data frames are placed in the same group; if a low value is obtained, the frame is placed in a new group. Motion information is also extracted from an image sequence using optical flow for example. Subsequences exhibiting similar motion are grouped together. Frequencies corresponding to different sources, for example different speakers are identified and may be used during synopsis formation. For instance, a script may be composed based on users identified and their spoken words. In exemplary embodiment, frequencies corresponding to different sources are identified using expectation-maximization (EM) with Mixture of Gaussians (MoG). This method may also be used in the context of interviews (as described with reference to FIG. 53), live broadcasts, and other video and data sequence summaries.
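A minimal sketch of grouping consecutive frames by mutual information follows; the 32-bin joint histogram and the grouping threshold are illustrative assumptions, and any of the other distance metrics listed above could be substituted.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two grayscale frames, computed from
    their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] /
                                         (px[:, None] * py[None, :])[nz])))

def group_frames(frames, threshold=1.0):
    """Keep consecutive frames in the same group while their mutual
    information stays high; start a new group when it drops."""
    if not frames:
        return []
    groups, current = [], [0]
    for i in range(1, len(frames)):
        if mutual_information(frames[i - 1], frames[i]) >= threshold:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups   # one list of frame indices per subsequence
```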
  • Semantic analysis is then carried out on the data sequence to identify and localize important pieces of information within a subsequence. For text information, for instance, large-font or bold/italicized/highlighted/underlined and other specially formatted text, which generally indicates highlighted/important points, is identified. Significant objects and scenes within an image or video sequence, may be identified using object recognition and computer vision techniques. Significant speech or audio components may be identified by analyzing tone, mood, expression and other characteristics in the signal. Using expectation-maximization (EM) with Mixture of Gaussians (MoG) for example, the speech signal can be separated from background music or the speech of a celebrity can be separated from background noise.
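The EM/Mixture-of-Gaussians separation mentioned above can be sketched, in a much simplified form, as clustering short-time spectral feature frames into sources; the synthetic features, the diagonal covariance and the two-source assumption below are illustrative only and do not constitute a full speech/music separator.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def assign_frames_to_sources(feature_frames, n_sources=2):
    """Fit a Mixture of Gaussians with EM to per-frame spectral features
    (one row per frame) and assign each frame to its most likely source."""
    gmm = GaussianMixture(n_components=n_sources, covariance_type="diag",
                          max_iter=200, random_state=0)
    gmm.fit(feature_frames)
    return gmm.predict(feature_frames), gmm

# Illustrative input: 200 frames of 13-dimensional features drawn from two
# synthetic "sources"; real input would be MFCCs or magnitude spectra.
rng = np.random.default_rng(0)
frames = np.vstack([rng.normal(0, 1, (100, 13)), rng.normal(4, 1, (100, 13))])
labels, _ = assign_frames_to_sources(frames, n_sources=2)
print(labels[:5], labels[-5:])   # the two halves receive different labels
```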
  • If the input information is associated with a tagged file, such as an XML file for example or the file shown with reference to FIG. 37, then tags may be analyzed to identify important components. In exemplary embodiment, in the case of a text file, the associated tagged file describing the text may contain tags indicating bold/italicized points i.e., important content in the file. From subsequences determined to be significant, exemplars may be extracted. Exemplars may be a portion of the subsequence. For example, in the case of text, it could be a word or a sentence; for an image sequence it could be a frame or a portion of the frame or a set of frames or a composite of frames/frame portions in the subsequence; for an audio signal it could be a syllable(s), or a word, or a music note(s) or a sentence (this system also enables music to text conversion. Notes corresponding to the music may be output as a text file. For example, it may contain C-sharp, A-minor). The subsequences may additionally be compressed (lossless or lossy compression may occur) using Wavelet transform (for example), composited, shortened, decimated, excised or discarded. This summarization procedure is also useful for mobile applications where bandwidth, graphics and memory resources are limiting.
  • In another exemplary embodiment, an image can be divided in space into different regions and the most significant components can be extracted based on an evaluation of the significance of the information in these regions. In yet another exemplary embodiment, significant components can be extracted from a sequence of images, and these significant portions can then be composited together within a single image or a sequence of images, similar to a collage or mosaic.
  • In exemplary embodiment, in FIG. 55 the sequence represents an input data sequence (each square represents a single frame or data unit in the input information sequence). The sequence may consist of different scenes. For example, a given scene could be one that represents the inside of a car; another could be an office scene shot from a particular viewpoint; another could be a lecture slide. At the Grouping step, subsequences are identified based on similarity measures described before. The different subsequences that are identified by the algorithm are shown with different symbols in this figure. Subsequences can be of variable length as illustrated in FIG. 55. The Semantic analysis step then extracts exemplars from each group (in this case +, O). In this case, the algorithm picks out a + frame from the subsequence it labeled as ‘+’, and a portion (O, O) of the subsequence it identified as ‘O’.
  • The associated data (audio, video and text sequence data) are then reformatted. In an exemplary embodiment, reformatting is based on significance. For instance, if an image is larger, it may occupy a larger portion of the screen. Audio content may be renormalized if necessary. The audio, video and text channels may be merged to produce a new sequence or they may be provided to the user separately without merging.
  • The AFMS, VS, LIR, JARMS, SB systems may be used within a local area network such as a home or office network. Users who wish to share each other's data may be added to the network, permitting sharing of applications within the network and restricting access to the data of the shared network users. The AFMS, VS, LIR, JARMS, SB systems and/or the features and methods described with reference to system 10 and/or a combination of any of the above may be used in conjunction with each other or independently. One or more features and methods of the AFMS, VS, LIR, JARMS, SB systems and/or the features and methods described with reference to system 10 and/or any combination of the above may be used as standalone features, as part of independent systems, or as part of other systems not described in this document.
  • The shopping trip feature may be incorporated as a feature that is part of a browser or that may be installed as a browser plug-in. This would allow activation of the shopping trip upon visiting almost any site accessible by the browser. All of the features described as part of this invention can also be incorporated in this manner, i.e., as part of a browser or as a browser plug-in, making it possible to use these features on any site.
  • This invention further illustrates the 3D browser concept. This browser would incorporate web pages and websites with the depth component in addition to 2D elements. Users will be able to get a sense of 3D space as opposed to 2D space while browsing web pages and websites via the 3D browser.
  • This invention incorporates additional features available on a mobile device such as a mobile phone or a personal digital assistant (PDA) to assist the user while shopping in a physical store. When users enter a store, the mobile device will detect and identify the store by receiving and processing wireless signals that may be sent by a transmitter in the store, and will greet users with the appropriate welcome message. For example, if the store is called ‘ABC’, the user will be greeted with the message ‘welcome to ABC’ on their wireless device. The user may be uniquely identified by the store based on their mobile phone number, for example. The store may have a unique ID that will be identified by the cell phone and also used to keep track of stores/places visited by the user. Additionally, store specials and offers and other information may be presented to the user on their mobile device (in the form of visual or audio or other forms of relaying digital input on a mobile device). Instead of automatic store identification, the mobile may instead accept user input (text, speech and other forms) for identifying the store and then present relevant store information to the user. Users will be able to search for items in the store using their mobile device and will be able to identify the location (such as the department, aisle, counter location etc.) of the product they wish to buy. They will receive an indication of whether they are approaching the location of, or are in the vicinity of, the product in the store and/or if they have reached or identified the correct location. The user may see a ‘path to product’ as described elsewhere in this document. The mobile device is equipped with a barcode scanner and can be used for checking inventory, price and product information by scanning the barcode on a product. The mobile device may also process the user's shopping list available on the mobile device and automatically generate availability, inventory, location, discounts, product description, reviews and other relevant information pertaining to the product and display it to the user. In an exemplary embodiment, this may be accomplished as follows with reference to FIG. 50. The mobile device 901 may transmit appropriate information request/query signals to a wireless SAP (service access point) in the store, which in turn will transmit relevant store and product information, which is received and displayed by the mobile device. Depending on the specific area of the store that the user is in, the products in that area may be displayed on their mobile device. Users may also access their model on their mobile device and try on apparel on the model, via a local application 271 version for mobile devices. A user may also go on a shopping trip (as discussed with reference to FIG. 20) using their mobile phone 901. Other members of the shopping trip may be using a mobile device 902 as well or a computer. Users will also be able to see whether their friends are in the store using their mobile device 901.
  • Reference is now made to FIG. 52A where an image/video/audio/text analysis module 1550 is shown in an exemplary embodiment. The image/video/audio/text analysis module 1550 outlines the steps of interaction or engagement with the outside world, i.e. external to the computer. The module 1550 may be used for generic image/audio/video/text scene analysis. In an exemplary embodiment, this module works as follows: The module is preloaded with a basic language that is stored in a “memory” database 1554. This language contains a dictionary which in turn contains words and their meanings, grammar (syntax, lexis, semantics, pragmatics, etc.), pronunciation, relation between words, and an appearance library 1556. The appearance library 1556 consists of an appearance-based representation of all or a subset of the words in the dictionary. Such a correspondence between words or phrases, their pronunciation including phonemes and audio information, and appearances is established in an exemplary embodiment using Probabilistic Latent Semantic Analysis (PLSA) [55]. In an exemplary embodiment, graphs (a set of vertices and edges) or cladograms are used to represent the relation between words. Words are represented by vertices in the graph. Words that are related are connected by edges. Edges encode similarity and differences between the attached words. A visual representation of the similarity could be made by making the length of the edges linking words proportional to the degree of similarity (a minimal sketch of such a similarity graph appears after this item). Vertices converge and diverge as more and more information becomes available. (For example, if the system is only aware of shoes as something that is worn on feet, and it later comes across the word or a picture of sneakers, it may group it with shoes. As it learns more related words such as slippers or sandals, it groups them together but may later create separate groups for each on learning the differences between these items of apparel.) This system also enables conversion from speech to image, image to speech, text to image, image to text, text to speech, speech to text, image to text to speech, speech to text to image or any combination thereof. The memory database 1554 and the appearance library 1556 are analogous to “experience”. The appearance library 1556 and the memory database 1554 may be used during the primitive extraction, fusion, hypothesis formation, scene interpretation, innovation, communication, and other steps to assist the process by providing prior knowledge. Shown on the right in FIG. 52A are the steps of analysis of stimuli from the external world. The stimuli can be images, video, or audio in an exemplary embodiment. They could also include temperature, a representation of taste, atmospheric conditions, etc. From these stimuli basic primitives are extracted. More complex primitives are then extracted from these basic primitives. This may be based on an analysis of intra-primitive and inter-primitive relations. This may trigger the extraction of other basic primitives or complex primitives in a “focus shifting” loop where the focus of the system shifts from one region or aspect of a stimulus to another aspect or region of the stimulus. Associations between the complex primitives are formed and these primitives are then fused. (The primitive extraction and fusion method described here is similar to that described in reference to FIG. 6D for the case of images and video. The prior knowledge 112 is available as part of the appearance library 1556 and the memory database 1554.
The method is also applicable to audio stimuli.) Hypotheses are then formed and verified. The output of this step is a set of hypotheses (if multiple hypotheses are found) that are ranked by the degree of certainty or uncertainty. For example, the output of analysis on an image of a scene containing people may be a probability density on the location of people in the scene. The modes or the “humps” in this density may be used to define hypotheses on the location of people in the image. The probability of each mode (obtained for example by computing the maximum value corresponding to the mode or the mean of the mode) may be used to define the certainty of the existence of an instance of a person at the specified location. The variance of each mode may be used to define the spatial uncertainty with which a person can be localized. The output of the hypothesis formation and verification step is passed on to a scene interpretation step, at which the system makes interpretations of the scene. For example, if the system identifies a cow, some chickens, and a horse in a video, and identifies the sound of crows, it may identify the scene as a farm scene. This may be done using a classifier as described before. The output of the scene interpretation step is passed on to an innovation step. At this step the system adds innovative remarks about the analyzed stimuli. In an exemplary embodiment, the system looks for things it has seen in the recent past, surprising things, and things of interest (for example gadgets), and makes comments such as—“Hey, I saw this guy last week”, “That's the new gadget that came out yesterday”, or “That's a pleasant surprise”. Surprise is detected using the method described with reference to FIG. 52B. At the innovation step, the system also filters out things that it does not want to communicate to the outside world. This could include information that is obvious or that which is confidential. The output of the innovation step is communicated to the external world. This can be done via text, audio (using text-to-speech techniques), images [60] or video. The text/audio output may include expressions such as, “I am looking at a farm scene. There are many farm animals here. I am looking at the cow. It is a very tiny cow. The crows are trying to eat the corn. The dog is barking . . . ”, and so on. If the system has the capacity to perform physical activities, it may communicate by interacting physically with the environment. For example, it may pick up an object it likes and view it from other angles and weigh it. The module 1550 may be driven by an intention. The intention can be based on the user's interest. For example, if the user likes hockey, it may pay more attention to things that are related to hockey in the stimuli. If the stimulus is a news article that mentions that a new hockey stick by the name “winstick” is out in the market, the module may perform a search on the “winstick” and extract pricing and availability information and some technical details on how the “winstick” is made to be a better hockey stick.
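A minimal sketch of the word-relation graph referred to above (vertices for words, edges weighted by similarity) follows; the apparel terms and similarity values are illustrative assumptions, and a drawn layout could make edge length reflect these weights.

```python
import networkx as nx

# Tiny illustrative word-relation graph: vertices are dictionary words and
# edge weights encode similarity between the attached words.
g = nx.Graph()
for u, v, sim in [("shoe", "sneaker", 0.9), ("shoe", "sandal", 0.7),
                  ("sneaker", "sandal", 0.6), ("shoe", "slipper", 0.65)]:
    g.add_edge(u, v, similarity=sim)

def most_similar(word, k=2):
    """Return the k neighbours most similar to a word, mimicking how the
    module groups a newly learnt term with terms it already knows."""
    nbrs = sorted(g[word].items(), key=lambda kv: -kv[1]["similarity"])
    return [(n, attrs["similarity"]) for n, attrs in nbrs[:k]]

print(most_similar("shoe"))   # e.g. [('sneaker', 0.9), ('sandal', 0.7)]
```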
  • Reference is now made to FIG. 52B where a method 1650 for detecting surprise is shown in an exemplary embodiment. In an exemplary embodiment, the method 1650 operates as follows: The method constantly predicts the state of the system and observes the state of the system. (Alternatively, the method may predict and observe the state only as necessary.) The state of the system includes variables that are of interest. For example, the state may include the state of the user, which may involve the location of the user in a given camera view, or the mood of the user extracted from an image or based on the music the user is listening to, or the location of the user extracted from a Global Positioning System (GPS), the mood of the user's friends, etc. Similarly, the state of the environment may include the weather, the day of the week, the location where the user is, the number of people at the user's home, etc. One stage of the predict-update cycle is shown in FIG. 52B. At the ith stage, the system uses the output of the (i−1)th stage, i.e., the previous stage's output, and predicts the state of the system at the prediction step 1652. This can be done, in an exemplary embodiment, using a prediction algorithm such as Gaussian process regression, for example as used in [51], or other statistical approaches such as those used in [63]. The output of the prediction stage includes a predicted probability density of the state of the system. This is passed on to an observation step 1654 together with an observation of the system. The output of the observation step 1654 includes an updated probability density called an observed density. An observation of the system, in an exemplary embodiment, could be an analysis of an image taken through a webcam (e.g., image-based extraction of the pose of the user) or a measurement of the temperature of the room using a thermal sensor, or any other measurement appropriate for the system. In an exemplary embodiment, an observed probability density is computed from the observation and the predicted density by computing the a posteriori density using Bayes rule. In another exemplary embodiment, the observed density is computed based on the observation alone. The difference between the predicted probability density and the observed probability density is then measured at the measurement step 1656. This is done, in an exemplary embodiment, using a distance metric such as the Kullback-Leibler divergence or relative entropy, mutual information, the Hellinger distance, or the L1 or L2 distance. Other statistics or functions drawn from the predicted and observed (or updated) probability densities (or distributions) could also be used. At step 1658, a test is made to determine if the distance is significant. In an exemplary embodiment, this is done based on a threshold—if the distance is over a threshold, the distance is considered significant, and if it is below the threshold the distance is considered insignificant. The threshold could be assigned or could be determined automatically. In an exemplary embodiment, the threshold is chosen to be a statistic of the predicted or observed density. In another exemplary embodiment, the threshold is chosen to be a function of the degree of certainty or uncertainty in the estimate of the predicted or observed densities. In yet another exemplary embodiment, the threshold is learnt from training data. If the distance is significant, the system enters a “surprised” state. Otherwise it remains in an “unsurprised” state. (A minimal sketch of this predict-observe-compare loop appears after this item.)
The "surprised" state and the "unsurprised" state are handled by their respective handlers. The degree of surprise may be dependent on the distance between the predicted and observed probability densities. This allows the system to express the degree of surprise. For example, the system may state that it is "a little surprised" or "very surprised" or even "shocked". (Over time, if an event becomes common or occurs frequently, the system may incorporate the nature of the event at the prediction step, thus leading to a predicted density that is closer to the observed density and essentially getting used to the event.) Such a system is used, for example, for detecting anomalies. As discussed with reference to FIG. 51A, the system may monitor the locations of kids in a home by using signals from their cell phones (for example, text messages from their cell phones indicating the GPS coordinates) using a particle filter. If a surprise is observed (for example, if the location of the kid is outside the predicted range for the given time), the surprise handler may send a text notification to the kid's parents. The system may also be used in surveillance applications to detect anomalies. As another example, the system may monitor a user's location while he/she is driving a vehicle on the highway. If the user slows down on the highway, the system may look up weather and traffic conditions and suggest alternative routes to the user's destination. If the user's vehicle stops when the system didn't expect it to, the system's surprise handler may say to the user things such as "Do you need a tow truck?", "Is everything ok?", "Do you want to call home for help?", etc. If a response is not heard, the system's surprise handler may notify the user's family or friends. Such a system may also be used to predict the state of the user, for example, the mood of the user. If the system notices that the user is depressed, the surprise handler may play a comedy video or tell a joke to cheer the user up. If the user is on a video sharing site or in the TV room for extended hours and the system sees that an assignment is due in a couple of days, the system may suggest to the user to start working on the assignment and may complain to others (such as the user's parents) if the user does not comply. Such a system is also useful for anomaly detection at a plant. Various parameters may be monitored and the state of the system may be predicted. If the distance between the predicted and observed states is high, an anomaly may be reported to the operator. Images and inputs from various sensors monitoring an inpatient may be analyzed by the system and anomalies may be reported when necessary. Another application of method 1650 would be as a form of interaction with the user. The method may be used to monitor the activities of the user, which may be used to build a model of the user's activities. This model can then be used to predict the activities of the user. If a surprise is found, the surprise handler could inform the user accordingly.
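A minimal sketch of how the surprised/unsurprised handlers and the verbal degrees of surprise mentioned above might be wired together is given below; the cut points and handler actions are illustrative assumptions, not prescribed by the disclosure.

```python
def degree_of_surprise(distance, thresholds=(1.0, 2.5, 5.0)):
    """Map the predicted-vs-observed distance to a verbal degree of surprise.
    The cut points are illustrative; they could equally be learnt from data."""
    if distance < thresholds[0]:
        return None                       # unsurprised
    if distance < thresholds[1]:
        return "a little surprised"
    if distance < thresholds[2]:
        return "very surprised"
    return "shocked"

def handle(distance, notify_parents, say):
    """Dispatch between the unsurprised and surprised handlers."""
    degree = degree_of_surprise(distance)
    if degree is None:
        return                            # unsurprised handler: nothing to report
    say(f"I am {degree}.")
    if degree == "shocked":
        notify_parents("Unexpected event detected, please check on the kids.")

# toy usage with print standing in for speech output and text notification
handle(3.1, notify_parents=print, say=print)
```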
For example, if the user's calendar says that the user has an appointment with his/her doctor and the user typically goes to the doctor on time, but on one instance is not on his/her way to the office (the system may have access to the user's GPS location and time of arrival from the current location to the doctor's office, or may gather this data from indirect sources such as a chat session with the user's friends indicating that the user is going to be at a friend's party), the surprise handler may state that the user is supposed to be at the doctor's office and is getting late. The surprise handler may make similar comments on the user's friends' activities. The surprise handler may also take actions such as make a phone call, turn off the room's light if the user falls asleep, and wake up the user when it's time to go to school. Method 1650 also enables a system to make comments based on visually observing the user. For example, the system may make comments such as, "Wow! Your eye color is the same as the dress you are wearing", or "You look pretty today", based on the user's dressing patterns, method 1650, heuristics that define aesthetics and/or the method used to determine beauty described earlier in this document. The probability densities referred to above can be discrete, continuous, or a sampled version of a continuous density, or could even be arbitrary functions or simply scalars that are representative of the belief of the state in exemplary embodiments. There may be cases where the system may expect a surprise, but a surprise is not found. In such situations the system may express that it is not surprised and explain why. For example, if a tennis player loses, the system may say that it is not surprised because the wind was blowing against her direction during the match, or if a football team loses, the system may express to the users that it is not surprised because the team players were consistently ill-positioned. As another example, the system may parse news and, if it is found that a famous person is dead, it may express that it is "shocked" to hear the news. This expression by the system can be made in a number of ways, for example through the use of text to speech conversion. The concept of surprise can also be used for outlier rejection. For example, a system may employ the method described here during training to identify outliers and either not use them or assign lower weights to them so that the outliers do not corrupt the true patterns that are sought from the data.
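As a hedged illustration of the outlier-rejection use of surprise described above, the short Python sketch below down-weights training samples in proportion to their surprise score; the exponential weighting rule and the scale parameter are assumptions of the sketch.

```python
import numpy as np

def outlier_weights(surprises, scale=1.0):
    """Down-weight training samples in proportion to how surprising they are,
    so that outliers do not corrupt the patterns learnt from the data.
    'surprises' are per-sample distances between predicted and observed
    densities; 'scale' controls how aggressively outliers are suppressed."""
    surprises = np.asarray(surprises, dtype=float)
    return np.exp(-surprises / scale)

# samples 0-3 behave roughly as predicted, sample 4 is a clear outlier
print(outlier_weights([0.1, 0.2, 0.15, 0.12, 6.0]))
```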
  • The concept of a clique session is introduced here. A session is a lasting connection, typically between a client (e.g. 14) and a server (e.g. 20), that is typically initiated when a user is authenticated on the server and ends when the user chooses to exit the session or the session times out. On the other hand, a clique session is one in which multiple users are authenticated and share the same session. A clique session may be initiated by any subset of the set of users who have agreed to collaborate, or it may require authentication of all the users. Similarly, a clique session can be terminated if any subset or all of the users of the clique session exit. The order of authentication may or may not be important. In an exemplary embodiment, all users of a clique session may have the same unique clique session ID under which the clique session data is stored. Clique sessions are useful for online collaboration applications. Clique session IDs can also be used for accessing resources that require high security. For example, users of a joint account online may choose to have access to the online resource only if both users are authenticated and log in. As another example, a user of a bank account may have a question for a bank teller about his account. In order for the teller to view the user's account, the teller would first have to log in and then the user would have to log in to the same account to allow the teller to view the user's account and answer his question. Clique sessions may also be used for peer-to-peer connections.
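The following Python sketch illustrates one possible realization of a clique session in which a shared session ID is issued only once every required user has authenticated; the class, the verification callback and the joint-account example are illustrative assumptions rather than a prescribed implementation.

```python
import uuid

class CliqueSession:
    """A session shared by several users: it opens only once every required
    user has authenticated, and all of them share one clique session ID."""

    def __init__(self, required_users):
        self.required = set(required_users)
        self.authenticated = set()
        self.session_id = None

    def authenticate(self, user, credentials, verify):
        if not verify(user, credentials):
            return False
        self.authenticated.add(user)
        if self.required <= self.authenticated and self.session_id is None:
            self.session_id = uuid.uuid4().hex   # one ID for the whole clique
        return True

    @property
    def active(self):
        return self.session_id is not None

    def exit(self, user):
        """Terminate when any member leaves; a variant could instead require
        that every member exit before the clique session ends."""
        self.authenticated.discard(user)
        self.session_id = None

# joint bank account: both holders must log in before the resource unlocks
verify = lambda user, pwd: pwd == "secret"       # stand-in for real authentication
session = CliqueSession({"alice", "bob"})
session.authenticate("alice", "secret", verify)
print(session.active)                            # False: bob not yet authenticated
session.authenticate("bob", "secret", verify)
print(session.active, session.session_id)        # True, shared clique session ID
```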
  • Reference is now made to FIG. 54A-F where novel devices for interaction are shown in exemplary embodiments. These devices allow another way for users to communicate with computing devices 14. Reference is now made to FIG. 54A where novel pointing devices are shown in exemplary embodiments. These could take a 1D form 1700, a 2D form 1710, or a 3D form 1720. In an exemplary embodiment, the 1D form 1700 works as follows: A source or a transmitter bank 1712 is located on one side of the device and a sink or sensor or a receiver bank is located on the opposite side 1714. The source may emit lasers or other optical signals, or any other directional electromagnetic radiation, or even fluids. When the beam is interrupted by an interrupting unit such as a finger or a pen, the corresponding sensor on the receiver bank is blocked from receiving the signal. This is used to define the location of the object. If lasers are used, a laser frequency different from that of typical background lighting is used. In an alternative embodiment, the interrupting unit emits the signal instead of the source or transmitter bank. The unit also allows the use of multiple interrupting units. In this case, multiple sensors would be blocked and this would be used to define the locations of the interrupting units. In an alternative embodiment, along each side of the device, transmitters and receivers may be used in an alternating fashion so that each side has both transmitters and receivers. In the 2D form 1710, a second set of receivers and transmitters is placed orthogonal to the first one. Similarly, in the 3D form 1720, three sets of transmitter and receiver banks are used. Reference is now made to another pointing device 1730 in FIG. 54A that is composed of a set of holes. In each of these holes, a transmitter and a receiver are located. Each of these transmitters may employ lasers or other optical signals, or any other directional electromagnetic radiation, or even fluids. The transmitter and the receiver are both oriented such that they point out of the device in the direction of the hole. When a hole is covered by an interrupting unit such as a pen or a finger, the signal bounces off the interrupting unit and is sensed by the receiver. This signal is then used to define the location of the interrupting unit. In all cases 1700, 1710, 1720, 1730, a sequence of blocked sensors over time can be used to define the direction of motion. Reference is now made to FIG. 54B where an illustration 1732 of the use of the 2D form 1710 is shown. The user can simply drag a finger on the unit and use that to point to objects or for free form drawing. The unit may also be placed over a computer screen and used as a mouse. Also shown in FIG. 54B is an illustration 1734 of the use of the 3D form 1720. This can be used to manipulate objects in 3D. For example, this can be used with the technology described with reference to FIG. 36. This device may be used with a hologram for visual feedback or it may be used with any conventional visualizing unit such as a monitor. The device 1720 can also be used with multiple hands as shown in the illustration 1734. Reference is now made to FIG. 54C where another illustration of the use of the device 1710 is shown in an exemplary embodiment. The device 1710 may be placed on paper and the user may use a pen to write as usual on the paper. As the user writes, the device 1710 also captures the position of the pen.
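A minimal sketch of how the location of one or more interrupting units might be inferred from the indices of blocked receivers on the 2D form 1710 is given below; the run-grouping heuristic and the treatment of multiple units as a set of candidate (x, y) hypotheses are assumptions of the sketch.

```python
def locate_interruptions(blocked_x, blocked_y):
    """Infer interrupting-unit positions on the 2D form from which receivers
    are blocked.  blocked_x / blocked_y are indices of blocked sensors along
    the two orthogonal receiver banks; contiguous runs of blocked sensors are
    treated as one finger or pen."""
    def runs(indices):
        groups, current = [], []
        for i in sorted(indices):
            if current and i != current[-1] + 1:
                groups.append(current)
                current = []
            current.append(i)
        if current:
            groups.append(current)
        # centre of each contiguous run is the estimated coordinate
        return [sum(g) / len(g) for g in groups]

    xs, ys = runs(blocked_x), runs(blocked_y)
    # with a single interrupting unit the pairing is unambiguous; with several,
    # all candidate (x, y) pairs are returned as hypotheses to be disambiguated
    # over time from the sequence of blocked sensors
    return [(x, y) for x in xs for y in ys]

print(locate_interruptions(blocked_x=[11, 12], blocked_y=[30]))       # one finger
print(locate_interruptions(blocked_x=[5, 6, 40], blocked_y=[2, 33]))  # two units
```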
The captured pen positions are then used to create a digital version of the writing, which may be stored on the unit 1710 or transferred to a computing device. The device 1710 is also portable. The corners of the device 1710 can be pushed inwards and the unit folded as shown in FIG. 54C. The compact form of this device takes the form of a pen as shown in FIG. 54C. The device 1710 can also include a palette that includes drawing tools such as polygons, selection tools, an eraser, etc. The user can also slide the device 1710 as he/she writes to create a larger document than the size of the device. This movement of the device 1710 is captured and a map is built accordingly. The motion may be captured using motion sensors or using optical flow [64] if the unit is equipped with optical sensors. The device 1710 may also be moved arbitrarily in 3D and the motion may be captured along with the location of the interrupting device to create art or writing in 3D using the 2D form 1710. The device 1710 can also be used as a regular mouse. The apparatus presented in FIG. 54A-C may also be used as a virtual keyboard. Regions in the grid may be mapped to keyboard keys. In one exemplary embodiment, a user can place the apparatus on a printout of a keyboard (or a virtual keyboard may be projected using, for example, lasers) and use it for typing.
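As a hedged illustration of the virtual-keyboard use just described, the sketch below maps grid coordinates reported by the apparatus to keys of a printed keyboard layout; the row layout and cell sizes are illustrative assumptions.

```python
def make_key_lookup(rows, cell_width, cell_height):
    """Map grid coordinates on the sensing surface to keyboard keys when the
    apparatus is placed over a printout (or projection) of a keyboard.
    'rows' lists the keys in each printed row; cell sizes are in sensor units."""
    def lookup(x, y):
        row = int(y // cell_height)
        col = int(x // cell_width)
        if 0 <= row < len(rows) and 0 <= col < len(rows[row]):
            return rows[row][col]
        return None            # position falls outside the printed keys
    return lookup

qwerty = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
key_at = make_key_lookup(qwerty, cell_width=4, cell_height=6)
print(key_at(0, 0))    # 'q'
print(key_at(18, 13))  # 'b'
```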
  • Reference is now made to FIG. 54D where a novel device 1740 for interacting with a computing device or a television is shown in an exemplary embodiment. The device 1740 includes a QWERTY keyboard or any other keyboard 1748 that allows users to enter text or alphanumerics, a mouse 1746, controls for changing the volume or channels 1744, and other controls for switching between and controlling computing devices and entertainment devices such as a DVD player, a TV tuner, a cable TV box, a video player, or a gaming device. The device may be used as a regular universal TV remote and/or to control a computer. The mouse may be used by rocking the pad 1746 in a preferred direction or sliding a finger over the pad. The device 1740 communicates with other devices via infrared, Bluetooth, WiFi, USB and/or other means. The device 1740 allows users to control the content being viewed and to manipulate content. For example, the device 1740 allows users to watch videos on a video sharing site. Users can use the keyboard 1748 to enter text in a browser to go to a site of their choice and enter text into a search box to bring up the relevant videos to watch. They can then use the mouse 1746 to click on the video to watch. The keyboard 1748 and the mouse 1746 can be used as a regular keyboard and mouse for use with any other application as well. The keyboard may also be used to switch TV/cable channels by typing the name of the channel. A numeric keypad may be present above the keyboard, or number keys may be part of the alphabetic keyboard and can be accessed by pressing a function key, in an exemplary embodiment. The device 1740 may also include an LCD screen or a touch screen. The device 1740 may also be used with a stylus. The functionality of the device may be reprogrammable. The device could also be integrated with a phone. The device may be used with one hand or two hands as shown in FIG. 54E in an exemplary embodiment. The device allows easy text entry when watching videos. The device facilitates interactive television. The content of the television may be changed using this remote. The device 1740 may also include motion sensors. The motion of the device may be used to change channels or volume, or to control characters on a screen. The device may be used to search a video for tags and jump to tags of interest. The device may also feature a numeric keypad that allows easy placement of phone calls.
  • Reference is now made to FIG. 54F where a novel human computer interface system is illustrated in an exemplary embodiment. This system makes use of a line of sight that includes two or more objects. In an exemplary embodiment, the location of the user's finger and an eye are used to determine the location where the user is pointing. The location of the user's finger(s) or hand(s) and that of one or both of the user's eyes can be used to determine where the user is pointing on the screen. The user may point to a screen 1760 using one or more finger(s)/hand(s) 1762. One or more cameras may monitor the location of 1762 and the user's right eye 1764 and/or left eye 1766. The cameras may be on top of the screen, on the sides, at the bottom, or may even be behind the screen 1760. A side view and a top view of the setup are also shown in FIG. 54F. The system may make use of motion parallax to precisely determine the location pointed at by the user.
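One plausible way to compute the pointed-at screen location from the line of sight through the eye and fingertip is to intersect that ray with the screen plane, as in the sketch below; the coordinate frame, plane parameters, and the specific positions are assumptions for illustration only.

```python
import numpy as np

def pointed_screen_location(eye, fingertip, screen_origin, screen_normal):
    """Estimate where on the screen the user is pointing by intersecting the
    eye-to-fingertip line of sight with the screen plane.  All positions are
    3D coordinates in a common frame (e.g. estimated from the cameras)."""
    eye = np.asarray(eye, dtype=float)
    fingertip = np.asarray(fingertip, dtype=float)
    direction = fingertip - eye
    denom = np.dot(screen_normal, direction)
    if abs(denom) < 1e-9:
        return None                      # line of sight parallel to the screen
    t = np.dot(screen_normal, np.asarray(screen_origin, dtype=float) - eye) / denom
    if t < 0:
        return None                      # screen plane is behind the user
    return eye + t * direction

# screen in the z = 0 plane, user about half a metre in front of it
hit = pointed_screen_location(eye=[0.10, 0.35, 0.55],
                              fingertip=[0.12, 0.30, 0.40],
                              screen_origin=[0, 0, 0],
                              screen_normal=[0, 0, 1])
print(hit)   # 3D point on the screen plane along the eye-fingertip ray
```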
  • A feature to enhance user experience with documents (for example on the Internet) is described below. This feature is referred to as a "quotation system". This feature allows users to quote from documents. In an exemplary embodiment, documents may be uniquely identifiable. This may be done by assigning a unique identification number to each document that is registered in a database. Documents can be indexed based on tags such as the chapter number and the line number. The tags may be inferred, extracted, or present in the underlying document. Users can embed quotes from documents. For example, a webpage may contain an embedded quote to a line from a chapter of a book. In an exemplary embodiment, hovering over or clicking on an embedded quote may display the corresponding quoted text. In an exemplary embodiment, embedding a quotation tag with an identification number may display the quoted text in the document in which the quotation is embedded. Quotations can be used for text, audio, video, or other media. A version number may be used for related documents. The system enables the user to find related quotes or verses. "Quotation chains" may also be supported. Quotation chains enable the user to quote a document that in turn quotes another document so that the source of the information can be traced.
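The sketch below illustrates, under assumed data structures, how an embedded quotation tag might be resolved to its text and how a quotation chain could be followed back to its source; the registry layout and the "@quote" tag syntax are hypothetical and only stand in for a real document database.

```python
# a toy registry of uniquely identified documents, indexed by chapter and line
DOCUMENTS = {
    "doc-0001": {
        "title": "Sample Book",
        "version": 2,
        "chapters": {1: {1: "First line of chapter one.",
                         2: "Second line of chapter one."}},
    },
}

def resolve_quote(doc_id, chapter, line, registry=DOCUMENTS, depth=0):
    """Resolve an embedded quotation tag to its text.  If the quoted line is
    itself a quotation tag (written here as '@quote doc chapter line'), follow
    the chain so that the original source can always be traced."""
    text = registry[doc_id]["chapters"][chapter][line]
    if text.startswith("@quote ") and depth < 10:        # quotation chain
        ref_doc, ref_ch, ref_ln = text.split()[1:4]
        return resolve_quote(ref_doc, int(ref_ch), int(ref_ln), registry, depth + 1)
    return text

# a page embedding a tag for document "doc-0001", chapter 1, line 2 would show:
print(resolve_quote("doc-0001", 1, 2))
```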
  • The system 10 has been described herein with regard to being accessible only through the Internet, where a server application is resident upon a server 20. The respective applications that provide the functionalities described above may, in alternative embodiments, be installed on localized stand-alone devices. The respective apparel items and other products that the user may view and/or select may then be downloaded to the respective device upon connecting to an Internet server. The stand-alone devices in alternative embodiments may communicate with the server, where the server has access to various databases and repositories wherein items and offerings may be stored. These stand-alone devices may be available as terminals or stations at a store, which may be linked to store inventories. Using these terminals, it may be possible to search via keywords, voice, image, or barcode, and to specify filters such as price range.
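A small Python sketch of the kind of keyword and price-range filtering such an in-store terminal might perform is given below; the inventory record layout and field names are assumptions for illustration.

```python
def search_inventory(inventory, keywords=(), price_range=None):
    """Filter a store terminal's inventory by keywords and an optional
    (min, max) price range."""
    words = [w.lower() for w in keywords]
    results = []
    for item in inventory:
        text = (item["name"] + " " + item["description"]).lower()
        if words and not all(w in text for w in words):
            continue
        if price_range and not (price_range[0] <= item["price"] <= price_range[1]):
            continue
        results.append(item)
    return results

inventory = [
    {"name": "Blue denim jacket", "description": "slim fit", "price": 79.99},
    {"name": "Red wool scarf", "description": "winter accessory", "price": 24.50},
]
print(search_inventory(inventory, keywords=["jacket"], price_range=(50, 100)))
```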
  • While the above description provides examples of the embodiments, it will be appreciated that some features and/or functions of the described embodiments are susceptible to modification without departing from the spirit and principles of operation of the described embodiments. Furthermore, the systems, methods, features and/or functions described above may be used independently or in conjunction with other systems and/or methods, and may be applied or used in contexts other than those mentioned in this document. Accordingly, what has been described above is intended to be illustrative of the invention and non-limiting, and it will be understood by persons skilled in the art that other variants and modifications may be made without departing from the scope of the invention as defined in the claims appended hereto.
  • REFERENCES
    • [1] M. Desbrun, M. Meyer and P. Alliez, “Intrinsic Parameterizations of Surface Meshes,” Comput. Graphics Forum, vol. 21, pp. 209-218, 2002.
    • [2] H. Yu and M. Bennamoun, "1D-PCA, 2D-PCA to nD-PCA," Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, vol. 4, pp. 181-184, 2006.
    • [3] R. Davis, “Magic Paper: Sketch-Understanding Research,” Computer, vol. 40, pp. 34-41, 2007.
    • [4] O. Bimber and R. Raskar, Spatial Augmented Reality: Merging Real and Virtual Worlds. A K Peters, Ltd., 2005.
    • [5] G. L. Congdong and Li, “Collaborative Filtering Recommendation Model Through Active Bayesian Classifier,” Information Acquisition, 2006 IEEE International Conference on, pp. 572-577, August 2006.
    • [6] T. Yoshioka and S. Ishii, "Fast Gaussian process regression using representative data," Neural Networks, 2001. Proceedings. IJCNN '01. International Joint Conference on, vol. 1, pp. 132-137, 2001.
    • [7] D. J. Hand and K. Yu, “Idiot's Bayes: Not So Stupid After All?” International Statistical Review, vol. 69, pp. 385-398, 2001.
    • [8] P. A. Flach and N. Lachiche, "Naive Bayesian Classification of Structured Data," Machine Learning, vol. 57, pp. 233-269, December 2004.
    • [9] T. Hastie, R. Tibshirani and J. H. Friedman, The Elements of Statistical Learning. Springer, 2001.
    • [10] G. Shakhnarovich, T. Darrell and P. Indyk, Nearest-Neighbor Methods in Learning and Vision: Theory and Practice (Neural Information Processing). The MIT Press, 2006.
    • [11] B. Froba and C. Kubibeck, “Robust face detection at video frame rate based on edge orientation features,” Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on, pp. 327-332, 20-21 May 2002.
    • [12] A. R. Chowdhury, R. Chellappa, S. Krishnamurthy and T. Vo, "3D face reconstruction from video using a generic model," Multimedia and Expo, 2002. ICME '02. Proceedings. 2002 IEEE International Conference on, vol. 1, pp. 449-452, 2002.
    • [13] L. D. Alvarez, J. Mateos, R. Molina and A. K. Katsaggelos, "High-resolution images from compressed low-resolution video: Motion estimation and observable pixels," Int. J. Imaging Syst. Technol., vol. 14, pp. 58-66, 2004.
    • [14] U. Park and A. K. Jain, "3D Face Reconstruction from Stereo Video," Computer and Robot Vision, 2006. The 3rd Canadian Conference on, pp. 41-41, 7-9 Jun. 2006.
    • [15] H. Zhang, A. C. Berg, M. Maire and J. Malik, “SVM-KNN: Discriminative nearest neighbor classification for visual category recognition,” in CVPR '06: Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006, pp. 2126-2136.
    • [16] V. Perlibakas, "Automatical detection of face features and exact face contour," Pattern Recognition Letters, vol. 24, pp. 2977-2985, December 2003.
    • [17] P. Kuo and J. Hannah, “Improved Chin Fitting Algorithm Based on An Adaptive Snake,” Image Processing, 2006 IEEE International Conference on, pp. 205-208, 8-11 Oct. 2006.
    • [18] M. Castel and E. R. Hancock, “Acquiring height data from a single image of a face using local shape indicators,” Comput. Vis. Image Underst., vol. 103, pp. 64-79, 2006.
    • [19] P. L. Worthington, “Reillumination-driven shape from shading,” Comput. Vis. Image Underst., vol. 98, pp. 326-344, 2005.
    • [20] G. Vogiatzis, P. Favaro and R. Cipolla, "Using frontier points to recover shape, reflectance and illumination," Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, vol. 1, pp. 228-235, 17-21 Oct. 2005.
    • [21] C. H. Esteban and F. Schmitt, “Silhouette and stereo fusion for 3D object modeling,” Comput. Vis. Image Underst., vol. 96, pp. 367-392, 2004.
    • [22] R. Dovgard and R. Basri, “Statistical symmetric shape from shading for 3D structure recovery of faces,” Lecture Notes in Computer Science, vol. 3022, pp. 99-113, 2004.
    • [23] H. Murase and S. K. Nayar, “Visual learning and recognition of 3-D objects from appearance,” Int. J. Comput. Vision, vol. 14, pp. 5-24, 1995.
    • [24] Y. Iwasaki, T. Kaneko and S. Kuriyama, “3D hair modeling based on CT data and photographs.” in Computer Graphics and Imaging, 2003, pp. 123-128.
    • [25] Y. G. Zhiyong and Huang, “A method of human short hair modeling and real time animation,” Computer Graphics and Applications, 2002. Proceedings. 10th Pacific Conference on, pp. 435-438, 2002.
    • [26] K. Ward, F. Bertails, TaeYong Kim, S. R. Marschner, M. P. Cani and M. C. Lin, “A Survey on Hair Modeling: Styling, Simulation, and Rendering,” Transactions on Visualization and Computer Graphics, vol. 13, pp. 213-234, March-April 2007.
    • [27] A. S. Micilotta, E. Ong and R. Bowden, “Real-time upper body detection and 3D pose estimation in monoscopic images,” in Proceedings of the European Conference on Computer Vision (ECCV'06)- Volume 3; Lecture Notes in Computer Science, 2006, pp. 139-150.
    • [28] L. Tong-Yee and H. Po-Hua, "Fast and intuitive metamorphosis of 3D polyhedral models using SMCC mesh merging scheme," Visualization and Computer Graphics, IEEE Transactions on, vol. 9, pp. 85-98, 2003.
    • [29] B. Allen, B. Curless and Z. Popovic, “The space of human body shapes: Reconstruction and parameterization from range scans,” in SIGGRAPH '03: ACM SIGGRAPH 2003 Papers, 2003, pp. 587-594.
    • [30] V. Blanz and T. Vetter, “A morphable model for the synthesis of 3D faces,” in SIGGRAPH '99: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 1999, pp. 187-194.
    • [31] I. Baran and J. Popovic, “Automatic rigging and animation of 3D characters,” ACM Trans. Graph., vol. 26, pp. 72, 2007.
    • [32] A. Hilton, J. Starck and G. Collins, “From 3D Shape Capture to Animated Models,” International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT), pp. 246-257, 2002.
    • [33] X. Yang, A. Somasekharan and J. J. Zhang, “Curve skeleton skinning for human and creature characters: Research Articles,” Comput. Animat. Virtual Worlds, vol. 17, pp. 281-292, 2006.
    • [34] W. T. Tang and ChiKeung, “Multiresolution Mesh Reconstruction from Noisy 3D Point Sets,” Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, vol. 1, pp. 5-8, 2006.
    • [35] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers and J. Davis, “SCAPE: shape completion and animation of people,” ACM Trans. Graph., vol. 24, pp. 408-416, 2005.
    • [36] F. L. Matthews and J. B. West, "Finite element displacement analysis of a lung," Journal of Biomechanics, vol. 5, pp. 591-600, November 1972.
    • [37] S. H. Sundaram and C. C. Feng, "Finite element analysis of the human thorax," Journal of Biomechanics, vol. 10, pp. 505-516, 1977.
    • [38] Y. Zhang, Y. Qiu, D. B. Goldgof, S. Sarkar and L. Li, "3D finite element modeling of nonrigid breast deformation for feature registration in X-ray and MR images," in WACV '07: Proceedings of the Eighth IEEE Workshop on Applications of Computer Vision, 2007, pp. 38.
    • [39] G. Sakas, L. Schreyer and M. Grimm, “Preprocessing and Volume Rendering of 3D Ultrasonic Data,” IEEE Comput. Graph. Appl., vol. 15, pp. 47-54, 1995.
    • [40] Y. Ito, P. Corey Shum, A. M. Shih, B. K. Soni and K. Nakahashi, "Robust generation of high-quality unstructured meshes on realistic biomedical geometry," Int. J. Numer. Meth. Engng, vol. 65, pp. 943-973, 5 Feb. 2006.
    • [41] S. Zhao and H. Lee, “Human silhouette extraction based on HMM,” in ICPR '06: Proceedings of the 18th International Conference on Pattern Recognition, 2006, pp. 994-997.
    • [42] A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging. Philadelphia, Pa., USA: Society for Industrial and Applied Mathematics, 2001.
    • [43] S. Linnainmaa, D. Harwood and L. S. Davis, "Pose determination of a three-dimensional object using triangle pairs," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 10, pp. 634-647, September 1988.
    • [44] D. Hoiem, A. A. Efros and M. Hebert, “Automatic photo pop-up,” in SIGGRAPH '05: ACM SIGGRAPH 2005 Papers, 2005, pp. 577-584.
    • [45] M. Desbrun, M. Meyer and P. Alliez, "Intrinsic Parameterizations of Surface Meshes," Computer Graphics Forum, vol. 21, pp. 209-218, September 2002.
    • [46] M. C. Lincoln and A. F. Clark, “Pose-independent face identification from video sequences,” in AVBPA '01: Proceedings of the Third International Conference on Audio-and Video-Based Biometric Person Authentication, 2001, pp. 14-19.
    • [47] I. Sato, Y. Sato and K. Ikeuchi, “Illumination from shadows,” Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 290-300, March 2003.
    • [48] P. Rheingans and D. Ebert, "Volume illustration: nonphotorealistic rendering of volume models," Visualization and Computer Graphics, IEEE Transactions on, vol. 7, pp. 253-264, July 2001.
    • [49] A. Finkelstein and L. Markosian, "Nonphotorealistic rendering," IEEE Computer Graphics and Applications, vol. 23, pp. 26-27, July 2003.
    • [50] J. Lansdown and S. Schofield, "Expressive rendering: a review of nonphotorealistic techniques," IEEE Computer Graphics and Applications, vol. 15, pp. 29-37, May 1995.
    • [51] J. M. Wang, D. J. Fleet and A. Hertzmann, “Multifactor Gaussian process models for style-content separation,” in ICML '07: Proceedings of the 24th International Conference on Machine Learning, 2007, pp. 975-982.
    • [52] M. D. Cordea, E. M. Petriu and D. C. Petriu, “3D Head Tracking and Facial Expression Recovery using an Anthropometric Muscle-based Active Appearance Model,” Instrumentation and Measurement Technology Conference Proceedings, 2007 IEEE, pp. 1-6, 1-3 May 2007.
    • [53] Z. Hammal, L. Couvreur, A. Caplier and M. Rombaut, "Facial expression classification: An approach based on the fusion of facial deformations using the transferable belief model," International Journal of Approximate Reasoning, vol. 46, pp. 542-567, December 2007.
    • [54] J. X. Chen, Y. Yang and X. Wang, “Physics-Based Modeling and Real-Time Simulation,” Computing in Science and Engg., vol. 3, pp. 98-102, 2001.
    • [55] A. Shahhosseini and G. M. Knapp, “Semantic image retrieval based on probabilistic latent semantic analysis,” in MULTIMEDIA '06: Proceedings of the 14th Annual ACM International Conference on Multimedia, 2006, pp. 703-706.
    • [56] H. Snoussi and A. MohammadDjafari, “Bayesian Unsupervised Learning for Source Separation with Mixture of Gaussians Prior,” J. VLSI Signal Process. Syst., vol. 37, pp. 263-279, 2004.
    • [57] Y. Zhang and C. Zhang, “Separation of music signals by harmonic structure modeling,” in Advances in Neural Information Processing Systems 18 Y. Weiss, B. Scholkopf and J. Platt, Eds. Cambridge, Mass.: MIT Press, 2006, pp. 1617-1624.
    • [58] Y. Yang, Y. Su, Y. Lin and H. H. Chen, “Music emotion recognition: The role of individuality,” in HCM '07: Proceedings of the International Workshop on Human-Centered Multimedia, 2007, pp. 13-22.
    • [59] T. Li and M. Ogihara, "Content-based music similarity search and emotion detection," Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP '04). IEEE International Conference on, vol. 5, pp. 705-708, 17-21 May 2004.
    • [60] X. Zhu, A. B. Goldberg, M. Eldawy, C. R. Dyer and B. Strock, “A Text-to-Picture Synthesis System for Augmenting Communication.” pp. 1590, 2007.
    • [61] D. D. Lewis, “Naive (Bayes) at forty: The independence assumption in information retrieval,” in ECML '98: Proceedings of the 10th European Conference on Machine Learning, 1998, pp. 4-15.
    • [62] H. Zhang and J. Su, “Naive bayesian classifiers for ranking,” in ECML, 2004, pp. 501-512.
    • [63] D. A. Vasquez Govea and T. Fraichard, "Motion Prediction for Moving Objects: a Statistical Approach," pp. 3931-3936, April 2004.
    • [64] B. K. P. Horn and B. G. Schunck, “Determining optical flow,” Massachusetts Institute of Technology, Cambridge, Mass., USA, 1980.

Claims (20)

1. A method of sharing the amount of an online purchase transaction comprising the steps of:
a) initiating a transaction between one or more users from among a plurality of users;
b) selecting one or more of the users participating in the transaction;
c) allocating the amount of the transaction between some or all of the selected users; and
d) completing the transaction if each of the selected users pays the portion of the amount distributed to the selected user and if the total of amounts paid by the users matches the amount of the transaction.
2. A method as in claim 1 wherein if any of the selected users does not pay the amount of the transaction allocated to that user within a specified time, the transaction is declined, and:
a) releasing any hold placed on amounts authorized for payment by a selected user; and
b) refunding any amount actually paid by a selected user.
3. A method as in claim 1 wherein information may be shown to the users as they propose an allocation of the amount between users, including the portion of the amount remaining to be allocated and optionally the taxes and tip corresponding to each user.
4. A method as in claim 1 wherein the users can choose to apply any arbitrary allocation between users including allocating the amount evenly between users; each user paying for his/her items; or one or more users paying for all of the users.
5. A method as in claim 1 wherein the users can be online or offline. If the user(s) is offline, then that user(s) is sent a notification to share the transaction via one or more means including a notification on the hosting website or on any other social networking websites like Facebook, MySpace, and Friendster, or on a chat application such as MSN chat or via email or on a cell phone communicating via text such as through SMS or via voice by employing text to speech conversion. Users can also schedule a time to be online via these means and via a shared calendar.
6. A method as in claim 1 wherein users can categorize portions of the amount for various purposes including claiming reimbursement for expenses incurred on the part of an employee for the purposes of work.
7. A method as in claim 1 wherein a copy of the transaction is saved as an electronic receipt for purposes including returns or exchanges.
8. A method as in claim 1 that is further extended to include point of sale terminals.
9. A method of collaborative online shopping comprising:
a) browsing and shopping with other users in shared collaborative environments including web browsers and applications simulating virtual mall and store environments;
b) selective sharing of account resources with selected users, where account resources may include users' current views (the content currently being viewed by a user), shopping carts, shopping lists, wishlists, fitting rooms, user models, products of interest to the user, messages, bills, audio and other multimedia files and play lists, images, and other multimedia content, users' ratings, feedback, user-specified content regarding a product including comments and tags; and
c) communication and interaction between users via means including voice, chat, text and other online and electronic communication means, while shopping.
10. A method as in claim 8 wherein the mode of interaction is asynchronous, in which collaboration including browsing, shopping, sharing, communication and interaction can be performed without requiring other collaborators to be online.
11. A method as in claim 8 wherein the mode of interaction is synchronous, in which collaborators are online and synchronized collaboration including browsing, shopping, sharing, communication and interaction is performed.
12. A method as in claim 8 wherein the mode of interaction is common, in which collaborators are simultaneously engaged in synchronized collaboration, including browsing, shopping, sharing, communication and interaction, in a common environment.
13. A method as in claim 8 wherein users can collaborate with friends on social networks.
14. A method as in claim 8 where tools and assistance are provided by the system to facilitate collaborative activities between users that take into account group preferences and needs. Instances of this include:
a) A tool for scheduling a time to go on a collaborative trip online.
b) The system can also propose locations for group activities including a location of a place of interest that minimizes the travel for all the users in the collaborative session.
c) Facility for users to organize event, activity, or occasion information and descriptions for any activity or event which may involve a group, including, but not limited to, details such as the theme, location, venue, map information, participants, attendees, dress code, news, feeds and articles related to the event, photos, videos and other event related media, and user feedback, ratings and comments, which can be posted and viewed. Users can share views of themselves (either their image, photo, video, other media or their 3D character model) in celebrity or movie apparel, or the apparel they plan to wear to a particular event, activity, or occasion to share the spirit of the occasion which they plan to attend.
d) Suggestions on what to wear for a particular occasion, event, or activity, and what to bring to an event, activity, or occasion and where to buy it can be provided by the system taking into account and processing user preferences and event, activity, or occasion details. Apparel and venue decorations suggestions for the event, activity, or occasion are provided based on the season, time of day the event or activity is held, whether the event is indoor or outdoor, and budget allocated. Other event-specific factors may be taken into account to assist in coordinating apparel to be worn by collaborating users who are going to an event.
e) Information on restaurants, shopping plazas, movie studios, games, historical sites, museums and other venues; upcoming events and shows, festivals, concerts, and exhibitions, and music bands/groups, celebrities coming to town is made available and suggestions on where to go are provided by the system. The system may also incorporate users' preferences, and/or proximity of the event and other user-specific or system default criteria to make suggestions. Users may also obtain the latest weather and traffic updates as well as all traffic and weather information relevant to a given event, venue, or activity.
f) Users can collaboratively design a room or any space virtually and purchase virtual furniture, or design, build and buy furniture or other items and the corresponding real furnishings and decorations to furnish the corresponding real space.
15. A method of product recommendation comprising:
a) collecting personal user data including profession, gender, size, preferences, user's apparel size, user's address, who the user's friends are, user's friends' information, and users' medical records including eyeglass and contact lens prescription information;
b) collecting vendor data including product size, product description, product location, price; and
c) recommending vendor products that best match the users' personal data.
16. A method as in claim 15 wherein the said user and vendor data are stored for future reference and recommendation.
17. A method as in claim 15 wherein users can shop for and buy products for their friends that are compatible with their friend's personal information including apparel that fits them, without compromising their friend's privacy.
18. A method as in claim 15 in which accurate 3D body models representing the user are generated, comprising:
a) acquisition of multimedia from the user for extraction of data pertaining to physical attributes;
b) controls for dynamically adjusting dimensions of various body parts of the model;
c) use of feedback provided by the user on body information;
d) combining of 2D user images and anthropometric data to construct a 3D body and face model of the user;
e) applying optimization techniques to the generated model to increase precision of match with the user's real face and body;
f) refining the 3D model using texture maps, pattern, color, shape and other information pertaining to the make and material of the apparel to provide photorealism;
g) creating custom looks on the 3D model by selecting apparel, cosmetic, hair and dental products from catalogues or by performing a virtual makeover.
19. A method as in claim 17 wherein real-time goodness of fit information is communicated to the user as the user browses through apparel. The goodness of fit information includes information in the form of:
a) Flashing arrows or varied color regions and/or other graphic or visual indicator, for instance, to indicate type of fit (tight, loose and other degrees of fit) in a region and where adjustments need to be made for proper fitting;
b) Providing the user with a close up view, zooming onto a specific area of interest on the 3D model to view and examine fit in that region;
c) Using a tape measure animation to indicate the dimensions of a particular body segment or region;
d) Digital effects such as a transparency/x-ray vision effect where the apparel's transparency can be changed in order to enable the user to examine fit in the particular region;
e) Specifying numeric measurements to indicate fit information including the gap or margin between apparel and body in different regions, after apparel is worn; an overall goodness of fit rating.
20. A method as in claim 17 wherein products that are relevant to the user's personal data are shown to the user as the user browses through products, including apparel that fits the user and products that match the user's medical records, including eyeglasses or contact lenses that match the user's prescription.
US12/409,074 2008-03-21 2009-03-23 System and method for collaborative shopping, business and entertainment Abandoned US20100030578A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US12/409,074 US20100030578A1 (en) 2008-03-21 2009-03-23 System and method for collaborative shopping, business and entertainment
US13/612,593 US10002337B2 (en) 2008-03-21 2012-09-12 Method for collaborative shopping
US13/834,888 US20130215116A1 (en) 2008-03-21 2013-03-15 System and Method for Collaborative Shopping, Business and Entertainment
US15/087,323 US10872322B2 (en) 2008-03-21 2016-03-31 System and method for collaborative shopping, business and entertainment
US17/128,657 US11893558B2 (en) 2008-03-21 2020-12-21 System and method for collaborative shopping, business and entertainment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US6471608P 2008-03-21 2008-03-21
US12/409,074 US20100030578A1 (en) 2008-03-21 2009-03-23 System and method for collaborative shopping, business and entertainment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/612,593 Continuation US10002337B2 (en) 2008-03-21 2012-09-12 Method for collaborative shopping

Publications (1)

Publication Number Publication Date
US20100030578A1 true US20100030578A1 (en) 2010-02-04

Family

ID=40639960

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/409,074 Abandoned US20100030578A1 (en) 2008-03-21 2009-03-23 System and method for collaborative shopping, business and entertainment
US13/612,593 Active 2031-02-06 US10002337B2 (en) 2008-03-21 2012-09-12 Method for collaborative shopping

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/612,593 Active 2031-02-06 US10002337B2 (en) 2008-03-21 2012-09-12 Method for collaborative shopping

Country Status (3)

Country Link
US (2) US20100030578A1 (en)
CA (1) CA2659698C (en)
GB (1) GB2458388A (en)

Cited By (1087)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070005584A1 (en) * 2005-06-30 2007-01-04 At&T Corp. Automated call router for business directory using the world wide web
US20080028302A1 (en) * 2006-07-31 2008-01-31 Steffen Meschkat Method and apparatus for incrementally updating a web page
US20090024641A1 (en) * 2007-07-20 2009-01-22 Thomas Quigley Method and system for utilizing context data tags to catalog data in wireless system
US20090106104A1 (en) * 2007-10-17 2009-04-23 Yahoo! Inc. System and method for implementing an ad management system for an extensible media player
US20090106639A1 (en) * 2007-10-17 2009-04-23 Yahoo! Inc. System and Method for an Extensible Media Player
US20090106315A1 (en) * 2007-10-17 2009-04-23 Yahoo! Inc. Extensions for system and method for an extensible media player
US20090125812A1 (en) * 2007-10-17 2009-05-14 Yahoo! Inc. System and method for an extensible media player
US20090150254A1 (en) * 2007-11-30 2009-06-11 Mark Dickelman Systems, devices and methods for computer automated assistance for disparate networks and internet interfaces
US20090257636A1 (en) * 2008-04-14 2009-10-15 Optovue, Inc. Method of eye registration for optical coherence tomography
US20090259937A1 (en) * 2008-04-11 2009-10-15 Rohall Steven L Brainstorming Tool in a 3D Virtual Environment
US20090276805A1 (en) * 2008-05-03 2009-11-05 Andrews Ii James K Method and system for generation and playback of supplemented videos
US20090287765A1 (en) * 2008-05-15 2009-11-19 Hamilton Ii Rick A Virtual universe desktop exploration for resource acquisition
US20090300639A1 (en) * 2008-06-02 2009-12-03 Hamilton Ii Rick A Resource acquisition and manipulation from within a virtual universe
US20090307110A1 (en) * 2008-06-09 2009-12-10 Boas Betzler Management of virtual universe item returns
US20090306998A1 (en) * 2008-06-06 2009-12-10 Hamilton Ii Rick A Desktop access from within a virtual universe
US20100005007A1 (en) * 2008-07-07 2010-01-07 Aaron Roger Cox Methods of associating real world items with virtual world representations
US20100063854A1 (en) * 2008-07-18 2010-03-11 Disney Enterprises, Inc. System and method for providing location-based data on a wireless portable device
US20100076862A1 (en) * 2008-09-10 2010-03-25 Vegas.Com System and method for reserving and purchasing events
US20100076870A1 (en) * 2008-03-13 2010-03-25 Fuhu. Inc Widgetized avatar and a method and system of virtual commerce including same
US20100082576A1 (en) * 2008-09-25 2010-04-01 Walker Hubert M Associating objects in databases by rate-based tagging
US20100082575A1 (en) * 2008-09-25 2010-04-01 Walker Hubert M Automated tagging of objects in databases
US20100082454A1 (en) * 2008-10-01 2010-04-01 International Business Machines Corporation System and method for generating a view of and interacting with a purchase history
US20100083320A1 (en) * 2008-10-01 2010-04-01 At&T Intellectual Property I, L.P. System and method for a communication exchange with an avatar in a media communication system
US20100080364A1 (en) * 2008-09-29 2010-04-01 Yahoo! Inc. System for determining active copresence of users during interactions
US20100088616A1 (en) * 2008-10-06 2010-04-08 Samsung Electronics Co., Ltd. Text entry method and display apparatus using the same
US20100088187A1 (en) * 2008-09-24 2010-04-08 Chris Courtney System and method for localized and/or topic-driven content distribution for mobile devices
US20100094696A1 (en) * 2008-10-14 2010-04-15 Noel Rita Molinelli Personal style server
US20100095298A1 (en) * 2008-09-18 2010-04-15 Manoj Seshadrinathan System and method for adding context to the creation and revision of artifacts
US20100094714A1 (en) * 2008-10-15 2010-04-15 Eli Varon Method of Facilitating a Sale of a Product and/or a Service
US20100100416A1 (en) * 2008-10-17 2010-04-22 Microsoft Corporation Recommender System
US20100100744A1 (en) * 2008-10-17 2010-04-22 Arijit Dutta Virtual image management
US20100134516A1 (en) * 2008-11-28 2010-06-03 Sony Corporation Image processing system
US20100199200A1 (en) * 2008-03-13 2010-08-05 Robb Fujioka Virtual Marketplace Accessible To Widgetized Avatars
US20100198608A1 (en) * 2005-10-24 2010-08-05 CellTrak Technologies, Inc. Home health point-of-care and administration system
US20100205062A1 (en) * 2008-10-09 2010-08-12 Invenstar, Llc Touchscreen Computer System, Software, and Method for Small Business Management and Payment Transactions, Including a Method, a Device, and System for Crediting and Refunding to and from Multiple Merchant Accounts in a Single Transaction and a Method, a Device, and System for Scheduling Appointments
US20100211891A1 (en) * 2009-02-17 2010-08-19 Fuhu, Inc. Widgetized avatar and a method and system of creating and using same including storefronts
US20100226546A1 (en) * 2009-03-06 2010-09-09 Brother Kogyo Kabushiki Kaisha Communication terminal, display control method, and computer-readable medium storing display control program
US20100239121A1 (en) * 2007-07-18 2010-09-23 Metaio Gmbh Method and system for ascertaining the position and orientation of a camera relative to a real object
US20100250714A1 (en) * 2009-03-25 2010-09-30 Digital River, Inc. On-Site Dynamic Personalization System and Method
US20100250398A1 (en) * 2009-03-27 2010-09-30 Ebay, Inc. Systems and methods for facilitating user selection events over a network
US20100250290A1 (en) * 2009-03-27 2010-09-30 Vegas.Com System and method for token-based transactions
US20100257463A1 (en) * 2009-04-03 2010-10-07 Palo Alto Research Center Incorporated System for creating collaborative content
US20100281104A1 (en) * 2009-04-30 2010-11-04 Yahoo! Inc. Creating secure social applications with extensible types
US20100293234A1 (en) * 2009-05-18 2010-11-18 Cbs Interactive, Inc. System and method for incorporating user input into filter-based navigation of an electronic catalog
US20100325016A1 (en) * 2009-06-22 2010-12-23 Vistaprint Technologies Limited Method and system for dynamically generating a gallery of available designs for kit configuration
US20110004852A1 (en) * 2009-07-01 2011-01-06 Jonathon David Baugh Electronic Medical Record System For Dermatology
US20110004501A1 (en) * 2009-07-02 2011-01-06 Pradhan Shekhar S Methods and Apparatus for Automatically Generating Social Events
US20110004508A1 (en) * 2009-07-02 2011-01-06 Shen Huang Method and system of generating guidance information
US20110010087A1 (en) * 2005-10-24 2011-01-13 CellTrak Technologies, Inc. Home Health Point-of-Care and Administration System
US20110022536A1 (en) * 2009-02-24 2011-01-27 Doxo, Inc. Provider relationship management system that facilitates interaction between an individual and organizations
US20110022565A1 (en) * 2009-07-27 2011-01-27 International Business Machines Corporation Coherency of related objects
US20110040539A1 (en) * 2009-08-12 2011-02-17 Szymczyk Matthew Providing a simulation of wearing items such as garments and/or accessories
US20110044512A1 (en) * 2009-03-31 2011-02-24 Myspace Inc. Automatic Image Tagging
US20110047013A1 (en) * 2009-05-21 2011-02-24 Mckenzie Iii James O Merchandising amplification via social networking system and method
US20110043520A1 (en) * 2009-08-21 2011-02-24 Hon Hai Precision Industry Co., Ltd. Garment fitting system and operating method thereof
US20110055186A1 (en) * 2009-09-02 2011-03-03 Xurmo Technologies Private Limited Method for personalizing information retrieval in a communication network
US20110071889A1 (en) * 2009-09-24 2011-03-24 Avaya Inc. Location-Aware Retail Application
US20110078573A1 (en) * 2009-09-28 2011-03-31 Sony Corporation Terminal apparatus, server apparatus, display control method, and program
US20110078306A1 (en) * 2009-09-29 2011-03-31 At&T Intellectual Property I,L.P. Method and apparatus to identify outliers in social networks
US20110082764A1 (en) * 2009-10-02 2011-04-07 Alan Flusser System and method for coordinating and evaluating apparel
US20110087679A1 (en) * 2009-10-13 2011-04-14 Albert Rosato System and method for cohort based content filtering and display
US20110099122A1 (en) * 2009-10-23 2011-04-28 Bright Douglas R System and method for providing customers with personalized information about products
US20110099514A1 (en) * 2009-10-23 2011-04-28 Samsung Electronics Co., Ltd. Method and apparatus for browsing media content and executing functions related to media content
US20110107379A1 (en) * 2009-10-30 2011-05-05 Lajoie Michael L Methods and apparatus for packetized content delivery over a content delivery network
US20110107236A1 (en) * 2009-11-03 2011-05-05 Avaya Inc. Virtual meeting attendee
US20110106662A1 (en) * 2009-10-30 2011-05-05 Matthew Stinchcomb System and method for performing interactive online shopping
US20110119696A1 (en) * 2009-11-13 2011-05-19 At&T Intellectual Property I, L.P. Gifting multimedia content using an electronic address book
US20110126123A1 (en) * 2009-11-20 2011-05-26 Sears Brands, Llc Systems and methods for managing to-do list task items via a computer network
US20110125566A1 (en) * 2009-11-06 2011-05-26 Linemonkey, Inc. Systems and Methods to Implement Point of Sale (POS) Terminals, Process Orders and Manage Order Fulfillment
US20110128223A1 (en) * 2008-08-07 2011-06-02 Koninklijke Phillips Electronics N.V. Method of and system for determining a head-motion/gaze relationship for a user, and an interactive display system
US20110131163A1 (en) * 2009-12-01 2011-06-02 Microsoft Corporation Managing a Portfolio of Experts
US20110138064A1 (en) * 2009-12-04 2011-06-09 Remi Rieger Apparatus and methods for monitoring and optimizing delivery of content in a network
US20110153380A1 (en) * 2009-12-22 2011-06-23 Verizon Patent And Licensing Inc. Method and system of automated appointment management
US20110153451A1 (en) * 2009-12-23 2011-06-23 Sears Brands, Llc Systems and methods for using a social network to provide product related information
US7970661B1 (en) * 2010-01-20 2011-06-28 International Business Machines Corporation Method, medium, and system for allocating a transaction discount during a collaborative shopping session
US20110161424A1 (en) * 2009-12-30 2011-06-30 Sap Ag Audience selection and system anchoring of collaboration threads
US20110184831A1 (en) * 2008-06-02 2011-07-28 Andrew Robert Dalgleish An item recommendation system
US20110191692A1 (en) * 2010-02-03 2011-08-04 Oto Technologies, Llc System and method for e-book contextual communication
US20110191809A1 (en) * 2008-01-30 2011-08-04 Cinsay, Llc Viral Syndicated Interactive Product System and Method Therefor
US20110196761A1 (en) * 2010-02-05 2011-08-11 Microsoft Corporation Value determination for mobile transactions
US20110196714A1 (en) * 2010-02-09 2011-08-11 Avaya, Inc. Method and apparatus for overriding apparent geo-pod attributes
US20110202469A1 (en) * 2010-02-18 2011-08-18 Frontline Consulting Services Private Limited Fcs smart touch for c level executives
US20110208619A1 (en) * 2010-02-24 2011-08-25 Constantine Siounis Remote and/or virtual mall shopping experience
US20110208655A1 (en) * 2010-02-25 2011-08-25 Ryan Steelberg System And Method For Creating And Marketing Authentic Virtual Memorabilia
US20110219229A1 (en) * 2010-03-02 2011-09-08 Chris Cholas Apparatus and methods for rights-managed content and data delivery
US20110219403A1 (en) * 2010-03-08 2011-09-08 Diaz Nesamoney Method and apparatus to deliver video advertisements with enhanced user interactivity
US20110225514A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Visualizing communications within a social setting
US20110221771A1 (en) * 2010-03-12 2011-09-15 Cramer Donald M Merging of Grouped Markers in An Augmented Reality-Enabled Distribution Network
US20110225069A1 (en) * 2010-03-12 2011-09-15 Cramer Donald M Purchase and Delivery of Goods and Services, and Payment Gateway in An Augmented Reality-Enabled Distribution Network
US20110225039A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Virtual social venue feeding multiple video streams
US20110225498A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Personalized avatars in a virtual social venue
US20110221745A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Incorporating media content into a 3d social platform
US20110225515A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Sharing emotional reactions to social media
US20110225519A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Social media platform for simulating a live experience
US20110225518A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Friends toolbar for a virtual social venue
WO2011112296A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Incorporating media content into a 3d platform
US20110225517A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc Pointer tools for a virtual social venue
US20110225516A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Instantiating browser media into a virtual social venue
US20110230160A1 (en) * 2010-03-20 2011-09-22 Arthur Everett Felgate Environmental Monitoring System Which Leverages A Social Networking Service To Deliver Alerts To Mobile Phones Or Devices
US20110227827A1 (en) * 2010-03-16 2011-09-22 Interphase Corporation Interactive Display System
US20110231271A1 (en) * 2010-03-22 2011-09-22 Cris Conf S.P.A. Method and apparatus for presenting articles of clothing and the like
US20110231278A1 (en) * 2010-03-17 2011-09-22 Amanda Fries Garment sizing system
CN102201099A (en) * 2010-04-01 2011-09-28 微软公司 Motion-based interactive shopping environment
US20110239147A1 (en) * 2010-03-25 2011-09-29 Hyun Ju Shim Digital apparatus and method for providing a user interface to produce contents
US20110238645A1 (en) * 2010-03-29 2011-09-29 Ebay Inc. Traffic driver for suggesting stores
US20110246560A1 (en) * 2010-04-05 2011-10-06 Microsoft Corporation Social context for inter-media objects
US20110244952A1 (en) * 2010-04-06 2011-10-06 Multimedia Games, Inc. Wagering game, gaming machine and networked gaming system with customizable player avatar
WO2011123559A1 (en) * 2010-03-30 2011-10-06 George Lebrun Method and apparatus for contextual based search engine and enterprise knowledge management
US20110244954A1 (en) * 2010-03-10 2011-10-06 Oddmobb, Inc. Online social media game
CN102214303A (en) * 2010-04-05 2011-10-12 索尼公司 Information processing device, information processing method and program
US20110252325A1 (en) * 2010-04-09 2011-10-13 Michael Stephen Kernan Social networking webpage application
US20110252044A1 (en) * 2010-04-13 2011-10-13 Konkuk University Industrial Cooperation Corp. Apparatus and method for measuring contents similarity based on feedback information of ranked user and computer readable recording medium storing program thereof
US20110264460A1 (en) * 2010-04-23 2011-10-27 Someone With, LLC System and method for providing a secure registry for healthcare related products and services
US20110274104A1 (en) * 2007-10-24 2011-11-10 Social Communications Company Virtual area based telephony communications
NL1037949C2 (en) * 2010-05-10 2011-11-14 Suitsupply B.V. Method for determining remote sizes
WO2011143113A1 (en) * 2010-05-10 2011-11-17 Mcgurk Michael R Methods and systems of using a personalized multi-dimensional avatar (pmda) in commerce
WO2011143273A1 (en) * 2010-05-10 2011-11-17 Icontrol Networks, Inc. Control system user interface
US20110289426A1 (en) * 2010-05-20 2011-11-24 Ljl, Inc. Event based interactive network for recommending, comparing and evaluating appearance styles
US20110295875A1 (en) * 2010-05-27 2011-12-01 Microsoft Corporation Location-aware query based event retrieval and alerting
US20110302008A1 (en) * 2008-10-21 2011-12-08 Soza Harry R Assessing engagement and influence using consumer-specific promotions in social networks
US20110298929A1 (en) * 2010-06-08 2011-12-08 Cheryl Garcia Video system
WO2011159356A1 (en) * 2010-06-16 2011-12-22 Ravenwhite Inc. System access determination based on classification of stimuli
US20110310039A1 (en) * 2010-06-16 2011-12-22 Samsung Electronics Co., Ltd. Method and apparatus for user-adaptive data arrangement/classification in portable terminal
US20110310100A1 (en) * 2010-06-21 2011-12-22 Verizon Patent And Licensing, Inc. Three-dimensional shape user interface for media content delivery systems and methods
US20110310220A1 (en) * 2010-06-16 2011-12-22 Microsoft Corporation Depth camera illuminator with superluminescent light-emitting diode
US20110317685A1 (en) * 2010-06-29 2011-12-29 Richard Torgersrud Consolidated voicemail platform
US20110320215A1 (en) * 2010-06-24 2011-12-29 Cooper Jeff D System, method, and apparatus for conveying telefamiliarization of a remote location
US20120005598A1 (en) * 2010-06-30 2012-01-05 International Business Machines Corporation Automatic co-browsing invitations
US20120023410A1 (en) * 2010-07-20 2012-01-26 Erik Roth Computing device and displaying method at the computing device
US20120022978A1 (en) * 2010-07-20 2012-01-26 Natalia Manea Online clothing shopping using 3d image of shopper
WO2012016052A1 (en) * 2010-07-28 2012-02-02 True Fit Corporation Fit recommendation via collaborative inference
US20120038750A1 (en) * 2010-08-16 2012-02-16 Pantech Co., Ltd. Apparatus and method for displaying three-dimensional (3d) object
US20120059787A1 (en) * 2010-09-07 2012-03-08 Research In Motion Limited Dynamically Manipulating An Emoticon or Avatar
US20120062689A1 (en) * 2010-09-13 2012-03-15 Polycom, Inc. Personalized virtual video meeting rooms
US20120066075A1 (en) * 2010-09-15 2012-03-15 Samsung Electronics Co., Ltd. Display apparatus and commercial display method of the same
WO2012037559A1 (en) * 2010-09-17 2012-03-22 Zecozi, Inc. System for supporting interactive commerce transactions and social network activity
US20120072304A1 (en) * 2010-09-17 2012-03-22 Homan Sven Method of Shopping Online with Real-Time Data Sharing Between Multiple Clients
US20120084783A1 (en) * 2010-10-01 2012-04-05 Fujifilm Corporation Automated operation list generation device, method and program
US8156013B2 (en) 2010-06-28 2012-04-10 Amazon Technologies, Inc. Methods and apparatus for fulfilling tote deliveries
US20120096354A1 (en) * 2010-10-14 2012-04-19 Park Seungyong Mobile terminal and control method thereof
US20120102409A1 (en) * 2010-10-25 2012-04-26 At&T Intellectual Property I, L.P. Providing interactive services to enhance information presentation experiences using wireless technologies
US20120102125A1 (en) * 2010-10-20 2012-04-26 Jeffrey Albert Dracup Method, apparatus, and computer program product for screened communications
US20120110472A1 (en) * 2010-10-27 2012-05-03 International Business Machines Corporation Persisting annotations within a cobrowsing session
US20120110568A1 (en) * 2010-10-29 2012-05-03 Microsoft Corporation Viral Application Distribution
US8175935B2 (en) 2010-06-28 2012-05-08 Amazon Technologies, Inc. Methods and apparatus for providing multiple product delivery options including a tote delivery option
WO2012061824A1 (en) * 2010-11-05 2012-05-10 Myspace, Inc. Image auto tagging method and application
US20120116840A1 (en) * 2010-11-10 2012-05-10 Omer Alon Method and apparatus for marketing management
US20120123865A1 (en) * 2010-11-12 2012-05-17 Cellco Partnership D/B/A Verizon Wireless Enhanced shopping experience for mobile station users
WO2012071316A1 (en) * 2010-11-22 2012-05-31 Etsy, Inc. Systems and methods for searching in an electronic commerce environment
US20120143761A1 (en) * 2010-12-03 2012-06-07 Ebay, Inc. Social network payment system
US20120158473A1 (en) * 2010-12-15 2012-06-21 International Business Machines Corporation Promoting products in a virtual world
JP2012118948A (en) * 2010-12-03 2012-06-21 Ns Solutions Corp Extended reality presentation device, and extended reality presentation method and program
US20120158538A1 (en) * 2010-12-20 2012-06-21 Electronics And Telecommunications Research Institute Terminal system, shopping system and method for shopping using the same
US20120173419A1 (en) * 2010-12-31 2012-07-05 Ebay, Inc. Visual transactions
US8219463B2 (en) 2010-06-28 2012-07-10 Amazon Technologies, Inc. Methods and apparatus for returning items via a tote delivery service
US20120179572A1 (en) * 2011-01-07 2012-07-12 Ebay, Inc. Conducting Transactions Through a Publisher
US20120179573A1 (en) * 2011-01-06 2012-07-12 Triggerfox Corporation Methods and Systems for Communicating Social Expression
US20120185383A1 (en) * 2011-01-18 2012-07-19 The Western Union Company Universal ledger
US20120185300A1 (en) * 2010-12-17 2012-07-19 Dildy Glenn Alan Methods and systems for analyzing and providing data for business services
US20120191529A1 (en) * 2011-01-26 2012-07-26 Intuit Inc. Methods and systems for a predictive advertising tool
US20120197439A1 (en) * 2011-01-28 2012-08-02 Intouch Health Interfacing with a mobile telepresence robot
US20120197700A1 (en) * 2011-01-28 2012-08-02 Etsy, Inc. Systems and methods for shopping in an electronic commerce environment
US20120200601A1 (en) * 2010-02-28 2012-08-09 Osterhout Group, Inc. AR glasses with state triggered eye control interaction with advertising facility
WO2012030588A3 (en) * 2010-08-31 2012-08-16 Apple Inc. Networked system with supporting media access and social networking
US20120215805A1 (en) * 2011-02-22 2012-08-23 Sony Corporation Display control device, display control method, search device, search method, program and communication system
US20120218423A1 (en) * 2000-08-24 2012-08-30 Linda Smith Real-time virtual reflection
US20120221418A1 (en) * 2000-08-24 2012-08-30 Linda Smith Targeted Marketing System and Method
US8266018B2 (en) 2010-06-28 2012-09-11 Amazon Technologies, Inc. Methods and apparatus for managing tote orders
US8266017B1 (en) * 2010-06-28 2012-09-11 Amazon Technologies, Inc. Methods and apparatus for providing recommendations and reminders to tote delivery customers
WO2012121908A1 (en) * 2011-03-08 2012-09-13 Facebook, Inc. Selecting social endorsement information for an advertisement for display to a viewing user
US8271474B2 (en) 2008-06-30 2012-09-18 Yahoo! Inc. Automated system and method for creating a content-rich site based on an emerging subject of internet search
WO2012125673A1 (en) * 2011-03-15 2012-09-20 Videodeals.com S.A. System and method for marketing
US20120239485A1 (en) * 2011-03-14 2012-09-20 Bo Hu Associating deals with events in a social networking system
US20120246035A1 (en) * 2011-02-07 2012-09-27 Kenisha Cross Computer software program and fashion styling tool
US20120246027A1 (en) * 2011-03-22 2012-09-27 David Martin Augmented Reality System for Product Selection
US20120246581A1 (en) * 2011-03-24 2012-09-27 Thinglink Oy Mechanisms to share opinions about products
US20120246238A1 (en) * 2011-03-21 2012-09-27 International Business Machines Corporation Asynchronous messaging tags
US20120246585A9 (en) * 2008-07-14 2012-09-27 Microsoft Corporation System for editing an avatar
US20120253993A1 (en) * 2009-12-02 2012-10-04 Nestec S.A. Beverage preparation machine with virtual shopping functionality
US20120259726A1 (en) * 2011-04-06 2012-10-11 Bamin Inc System and method for designing, creating and distributing consumer-specified products
US20120259701A1 (en) * 2009-12-24 2012-10-11 Nikon Corporation Retrieval support system, retrieval support method and retrieval support program
US20120259826A1 (en) * 2011-04-08 2012-10-11 Rym Zalila-Wenkstern Customizable Interfacing Agents, Systems, And Methods
US20120257797A1 (en) * 2011-04-05 2012-10-11 Microsoft Corporation Biometric recognition
US20120259744A1 (en) * 2011-04-07 2012-10-11 Infosys Technologies, Ltd. System and method for augmented reality and social networking enhanced retail shopping
US20120256915A1 (en) * 2010-06-30 2012-10-11 Jenkins Barry L System and method of procedural visibility for interactive and broadcast streaming of entertainment, advertising, and tactical 3d graphical information using a visibility event codec
US8295610B1 (en) * 2010-01-06 2012-10-23 Apple Inc. Feature scaling for face detection
US20120272168A1 (en) * 2011-04-20 2012-10-25 Panafold Methods, apparatus, and systems for visually representing a relative relevance of content elements to an attractor
US20120271684A1 (en) * 2011-04-20 2012-10-25 Jon Shutter Method and System for Providing Location Targeted Advertisements
US20120278252A1 (en) * 2011-04-27 2012-11-01 Sethna Shaun B System and method for recommending establishments and items based on consumption history of similar consumers
WO2012148904A1 (en) * 2011-04-25 2012-11-01 Veveo, Inc. System and method for an intelligent personal timeline assistant
US20120284641A1 (en) * 2011-05-06 2012-11-08 David H. Sitrick Systems And Methodologies Providing For Collaboration By Respective Users Of A Plurality Of Computing Appliances Working Concurrently On A Common Project Having An Associated Display
WO2012155144A1 (en) * 2011-05-12 2012-11-15 John Devecka An interactive mobile-optimized icon-based profile display and associated social network functionality
US20120290987A1 (en) * 2011-05-13 2012-11-15 Gupta Kalyan M System and Method for Virtual Object Placement
US20120287122A1 (en) * 2011-05-09 2012-11-15 Telibrahma Convergent Communications Pvt. Ltd. Virtual apparel fitting system and method
US20120297319A1 (en) * 2011-05-20 2012-11-22 Christopher Craig Collins Solutions Configurator
US20120299912A1 (en) * 2010-04-01 2012-11-29 Microsoft Corporation Avatar-based virtual dressing room
US20120302212A1 (en) * 2011-05-25 2012-11-29 Critical Medical Solutions, Inc. Secure mobile radiology communication system
US20120317309A1 (en) * 2011-06-10 2012-12-13 Benco Davis S Method to synchronize content across networks
WO2012170163A1 (en) * 2011-06-10 2012-12-13 Aliphcom Media device, application, and content management using sensory input
WO2012170919A1 (en) * 2011-06-09 2012-12-13 Tripadvisor Llc Social travel recommendations
US20120324118A1 (en) * 2011-06-14 2012-12-20 Spot On Services, Inc. System and method for facilitating technical support
WO2012172568A1 (en) * 2011-06-14 2012-12-20 Hemanth Kumar Satyanarayana Method and system for virtual collaborative shopping
US20120320054A1 (en) * 2011-06-15 2012-12-20 King Abdullah University Of Science And Technology Apparatus, System, and Method for 3D Patch Compression
US20120330716A1 (en) * 2011-06-27 2012-12-27 Cadio, Inc. Triggering collection of consumer data from external data sources based on location data
US20130007669A1 (en) * 2011-06-29 2013-01-03 Yu-Ling Lu System and method for editing interactive three-dimension multimedia, and online editing and exchanging architecture and method thereof
US20130018724A1 (en) * 2011-07-14 2013-01-17 Enpulz, Llc Buyer group interface for a demand driven promotion system
US8359285B1 (en) * 2009-09-18 2013-01-22 Amazon Technologies, Inc. Generating item recommendations
US20130024507A1 (en) * 2011-07-18 2013-01-24 Yahoo!, Inc. Analyzing Content Demand Using Social Signals
US8365081B1 (en) * 2009-05-28 2013-01-29 Amazon Technologies, Inc. Embedding metadata within content
WO2013020102A1 (en) * 2011-08-04 2013-02-07 Dane Glasgow User commentary systems and methods
US8380542B2 (en) 2005-10-24 2013-02-19 CellTrak Technologies, Inc. System and method for facilitating outcome-based health care
US8379028B1 (en) * 2009-04-30 2013-02-19 Pixar Rigweb
US20130047135A1 (en) * 2011-08-18 2013-02-21 Infosys Limited Enterprise computing platform
US20130051633A1 (en) * 2011-08-26 2013-02-28 Sanyo Electric Co., Ltd. Image processing apparatus
US20130054328A1 (en) * 2011-08-31 2013-02-28 Ncr Corporation Techniques for collaborative shopping
US20130054425A1 (en) * 2011-08-29 2013-02-28 Francesco Alexander Portelos Web-based system permitting a customer to shop online for clothes with their own picture
US20130057544A1 (en) * 2010-04-27 2013-03-07 Seung Woo Oh Automatic 3d clothing transfer method, device and computer-readable recording medium
US20130060873A1 (en) * 2011-08-29 2013-03-07 Saurabh Agrawal Real time event reviewing system and method
US20130057746A1 (en) * 2011-09-02 2013-03-07 Tomohisa Takaoka Information processing apparatus, information processing method, program, recording medium, and information processing system
US8397153B1 (en) 2011-10-17 2013-03-12 Google Inc. Systems and methods for rich presentation overlays
US20130073485A1 (en) * 2011-09-21 2013-03-21 Nokia Corporation Method and apparatus for managing recommendation models
WO2013043346A1 (en) * 2011-09-21 2013-03-28 Facebook, Inc. Structured objects and actions on a social networking system
US20130080161A1 (en) * 2011-09-27 2013-03-28 Kabushiki Kaisha Toshiba Speech recognition apparatus and method
CN103024569A (en) * 2012-12-07 2013-04-03 康佳集团股份有限公司 Method and system for performing parent-child education data interaction through smart television
WO2013049735A2 (en) * 2011-09-29 2013-04-04 Electronic Commodities Exchange, L.P. Methods and systems for providing an interactive communication session with a remote consultant
US20130085931A1 (en) * 2011-09-29 2013-04-04 Ebay, Inc. Social proximity payments
US20130083065A1 (en) * 2011-08-02 2013-04-04 Jessica Schulze Fit prediction on three-dimensional virtual model
WO2013059726A1 (en) * 2011-10-21 2013-04-25 Wal-Mart Stores, Inc. Systems, devices and methods for list display and management
US8433609B2 (en) 2011-08-24 2013-04-30 Raj Vasant Abhyanker Geospatially constrained gastronomic bidding
US8433623B2 (en) 2011-06-03 2013-04-30 Target Brands, Inc. Methods for creating a gift registry web page with recommendations and assistance
US8434002B1 (en) * 2011-10-17 2013-04-30 Google Inc. Systems and methods for collaborative editing of elements in a presentation document
US20130111359A1 (en) * 2011-10-27 2013-05-02 Disney Enterprises, Inc. Relocating a user's online presence across virtual rooms, servers, and worlds based on locations of friends and characters
US20130113830A1 (en) * 2011-11-09 2013-05-09 Sony Corporation Information processing apparatus, display control method, and program
US20130117378A1 (en) * 2011-11-06 2013-05-09 Radoslav P. Kotorov Method for collaborative social shopping engagement
US20130120367A1 (en) * 2011-11-15 2013-05-16 Trimble Navigation Limited Providing A Real-Time Shared Viewing Experience In A Three-Dimensional Modeling Environment
US20130124360A1 (en) * 2011-08-12 2013-05-16 Ebay Inc. Systems and methods for personalized pricing
US8446275B2 (en) 2011-06-10 2013-05-21 Aliphcom General health and wellness management method and apparatus for a wellness application using data from a data-capable band
US20130132298A1 (en) * 2010-01-07 2013-05-23 Sarkar Subhanjan Map topology for navigating a sequence of multimedia
US20130133056A1 (en) * 2011-11-21 2013-05-23 Matthew Christian Taylor Single login Identifier Used Across Multiple Shopping Sites
US20130138532A1 (en) * 2010-06-16 2013-05-30 Ronald DICKE Method and system for upselling to a user of a digital book lending library
US20130145282A1 (en) * 2011-12-05 2013-06-06 Zhenzhen ZHAO Systems and methods for social-event based sharing
WO2013085953A1 (en) * 2011-12-06 2013-06-13 Morot-Gaudry Jean Michel Immediate purchase of goods and services which appear on a public broadcast
US20130151382A1 (en) * 2011-12-09 2013-06-13 Andrew S. Fuller System and method for modeling articles of clothing
US20130151637A1 (en) * 2011-12-13 2013-06-13 Findandremind.Com System and methods for filtering and organizing events and activities
US8468052B2 (en) 2011-01-17 2013-06-18 Vegas.Com, Llc Systems and methods for providing activity and participation incentives
US20130155107A1 (en) * 2011-12-16 2013-06-20 Identive Group, Inc. Systems and Methods for Providing an Augmented Reality Experience
US8471871B1 (en) 2011-10-17 2013-06-25 Google Inc. Authoritative text size measuring
US20130173226A1 (en) * 2012-01-03 2013-07-04 Waymon B. Reed Garment modeling simulation system and process
US20130170715A1 (en) * 2012-01-03 2013-07-04 Waymon B. Reed Garment modeling simulation system and process
US20130185347A1 (en) * 2012-01-15 2013-07-18 Microsoft Corporation Providing contextual information associated with a communication participant
US8498900B1 (en) * 2011-07-25 2013-07-30 Dash Software, LLC Bar or restaurant check-in and payment systems and methods of their operation
US20130211950A1 (en) * 2012-02-09 2013-08-15 Microsoft Corporation Recommender system
US20130217479A1 (en) * 2009-03-06 2013-08-22 Michael Arieh Luxton Limiting Transfer of Virtual Currency in a Multiuser Online Game
US20130219434A1 (en) * 2012-02-20 2013-08-22 Sony Corporation 3d body scan input to tv for virtual fitting of apparel presented on retail store tv channel
US20130227456A1 (en) * 2012-02-24 2013-08-29 Samsung Electronics Co. Ltd. Method of providing capture data and mobile terminal thereof
US20130227471A1 (en) * 2012-02-24 2013-08-29 Samsung Electronics Co., Ltd. Method of providing information and mobile terminal thereof
US20130232037A1 (en) * 2011-06-30 2013-09-05 Ncr Corporation Techniques for personalizing self checkouts
US20130232171A1 (en) * 2011-07-13 2013-09-05 Linkedin Corporation Method and system for semantic search against a document collection
US20130232412A1 (en) * 2012-03-02 2013-09-05 Nokia Corporation Method and apparatus for providing media event suggestions
US20130232017A1 (en) * 2012-03-04 2013-09-05 Tal Zvi NATHANEL Device, system, and method of electronic payment
US8537930B2 (en) 2010-07-20 2013-09-17 Lg Electronics Inc. Electronic device, electronic system, and method of providing information using the same
US8538828B2 (en) 2011-10-18 2013-09-17 Autotrader.Com, Inc. Consumer-to-business exchange auction
US20130241920A1 (en) * 2012-03-15 2013-09-19 Shun-Ching Yang Virtual reality interaction system and method
US20130241937A1 (en) * 2012-03-13 2013-09-19 International Business Machines Corporation Social Interaction Analysis and Display
US20130254006A1 (en) * 2012-03-20 2013-09-26 Pick'ntell Ltd. Apparatus and method for transferring commercial data at a store
US20130262269A1 (en) * 2010-07-06 2013-10-03 James Shaun O'Leary System for electronic transactions
US20130257877A1 (en) * 2012-03-30 2013-10-03 Videx, Inc. Systems and Methods for Generating an Interactive Avatar Model
US20130268887A1 (en) * 2012-04-04 2013-10-10 Adam ROUSSOS Device and process for augmenting an electronic menu using social context data
US20130282344A1 (en) * 2012-04-20 2013-10-24 Matthew Flagg Systems and methods for simulating accessory display on a subject
US20130278626A1 (en) * 2012-04-20 2013-10-24 Matthew Flagg Systems and methods for simulating accessory display on a subject
US20130286014A1 (en) * 2011-06-22 2013-10-31 Gemvision Corporation, LLC Custom Jewelry Configurator
US20130290416A1 (en) * 2012-04-27 2013-10-31 Steve Nelson Method for Securely Distributing Meeting Data from Interactive Whiteboard Projector
US20130300739A1 (en) * 2012-05-09 2013-11-14 Mstar Semiconductor, Inc. Stereoscopic apparel try-on method and device
US20130307851A1 (en) * 2010-12-03 2013-11-21 Rafael Hernández Stark Method for virtually trying on footwear
US20130311339A1 (en) * 2012-05-17 2013-11-21 Leo Jeremias Chat enabled online marketplace systems and methods
US8595082B2 (en) 2011-10-18 2013-11-26 Autotrader.Com, Inc. Consumer-to-business exchange marketplace
US20130314410A1 (en) * 2012-05-23 2013-11-28 1-800 Contacts, Inc. Systems and methods for rendering virtual try-on products
US20130314443A1 (en) * 2012-05-28 2013-11-28 Clayton Grassick Methods, mobile device and server for support of augmented reality on the mobile device
US20130317943A1 (en) * 2012-05-11 2013-11-28 Cassi East Trade show and exhibition application for collectables and its method of use
US8600359B2 (en) 2011-03-21 2013-12-03 International Business Machines Corporation Data session synchronization with phone numbers
CN103430202A (en) * 2010-08-28 2013-12-04 电子湾有限公司 Multilevel silhouettes in an online shopping environment
US8606645B1 (en) * 2012-02-02 2013-12-10 SeeMore Interactive, Inc. Method, medium, and system for an augmented reality retail application
US20130332840A1 (en) * 2012-06-10 2013-12-12 Apple Inc. Image application for creating and sharing image streams
WO2013184407A1 (en) * 2012-06-05 2013-12-12 Mimecast North America Inc. Electronic communicating
US20130339159A1 (en) * 2012-06-18 2013-12-19 Lutebox Ltd. Social networking system and methods of implementation
CN103472985A (en) * 2013-06-17 2013-12-25 展讯通信(上海)有限公司 User editing method of three-dimensional (3D) shopping platform display interface
US20130342564A1 (en) * 2012-06-25 2013-12-26 Peter Tobias Kinnebrew Configured virtual environments
US20130345980A1 (en) * 2012-06-05 2013-12-26 Apple Inc. Providing navigation instructions while operating navigation application in background
US20140007016A1 (en) * 2012-06-27 2014-01-02 Hon Hai Precision Industry Co., Ltd. Product fitting device and method
US20140002472A1 (en) * 2012-06-29 2014-01-02 Disney Enterprises, Inc. Augmented reality surface painting
US8625018B2 (en) * 2010-01-05 2014-01-07 Apple Inc. Synchronized, interactive augmented reality displays for multifunction devices
US20140012911A1 (en) * 2012-07-09 2014-01-09 Jenny Q. Ta Social network system and method
US20140012733A1 (en) * 2009-12-18 2014-01-09 Joel Vidal Method, Device, and System of Accessing Online Accounts
US20140019424A1 (en) * 2012-07-11 2014-01-16 Google Inc. Identifier validation and debugging
US20140024436A1 (en) * 2009-08-23 2014-01-23 DeVona Cole Scheduling and marketing of casino tournaments
US20140032679A1 (en) * 2012-07-30 2014-01-30 Microsoft Corporation Collaboration environments and views
US20140032359A1 (en) * 2012-07-30 2014-01-30 Infosys Limited System and method for providing intelligent recommendations
US20140032332A1 (en) * 2012-07-25 2014-01-30 SocialWire, Inc. Promoting products on a social networking system based on information from a merchant site
US20140040041A1 (en) * 2012-08-03 2014-02-06 Isabelle Ohnemus Garment fitting system and method
US20140039980A1 (en) * 2007-12-14 2014-02-06 The John Nicholas and Kristin Gross Trust U/A/D April 13, 2010 Item Data Collection Systems and Methods with Social Network Integration
US8649612B1 (en) 2010-01-06 2014-02-11 Apple Inc. Parallelizing cascaded face detection
CN103582863A (en) * 2011-05-27 2014-02-12 微软公司 Multi-application environment
US20140047072A1 (en) * 2012-08-09 2014-02-13 Actv8, Inc. Method and apparatus for interactive mobile offer system using time and location for out-of-home display screens
US20140047355A1 (en) * 2012-08-09 2014-02-13 Gface Gmbh Simultaneous evaluation of items via online services
US8655970B1 (en) * 2013-01-29 2014-02-18 Google Inc. Automatic entertainment caching for impending travel
US20140052784A1 (en) * 2012-08-14 2014-02-20 Chicisimo S.L. Online fashion community system and method
US20140053086A1 (en) * 2012-08-20 2014-02-20 Samsung Electronics Co., Ltd. Collaborative data editing and processing system
US8660924B2 (en) * 2009-04-30 2014-02-25 Navera, Inc. Configurable interactive assistant
US8667112B2 (en) 2010-07-20 2014-03-04 Lg Electronics Inc. Selective interaction between networked smart devices
US20140067604A1 (en) * 2012-09-05 2014-03-06 Robert D. Fish Digital Advisor
US8668146B1 (en) 2006-05-25 2014-03-11 Sean I. Mcghie Rewards program with payment artifact permitting conversion/transfer of non-negotiable credits to entity independent funds
US20140075329A1 (en) * 2012-09-10 2014-03-13 Samsung Electronics Co. Ltd. Method and device for transmitting information related to event
US20140082143A1 (en) * 2012-09-17 2014-03-20 Samsung Electronics Co., Ltd. Method and apparatus for tagging multimedia data
US20140081750A1 (en) * 2012-09-19 2014-03-20 Mastercard International Incorporated Social media transaction visualization structure
US20140082493A1 (en) * 2012-09-17 2014-03-20 Adobe Systems Inc. Method and apparatus for measuring perceptible properties of media content
US20140081839A1 (en) * 2012-09-14 2014-03-20 Bank Of America Corporation Gift card association with account
US20140085293A1 (en) * 2012-09-21 2014-03-27 Luxand, Inc. Method of creating avatar from user submitted image
US8684265B1 (en) 2006-05-25 2014-04-01 Sean I. Mcghie Rewards program website permitting conversion/transfer of non-negotiable credits to entity independent funds
US8688746B2 (en) 2006-04-20 2014-04-01 Veveo, Inc. User interface methods and systems for selecting and presenting content based on user relationships
US8688090B2 (en) 2011-03-21 2014-04-01 International Business Machines Corporation Data session preferences
US20140092101A1 (en) * 2012-09-28 2014-04-03 Samsung Electronics Co., Ltd. Apparatus and method for producing animated emoticon
US20140095349A1 (en) * 2012-09-14 2014-04-03 James L. Mabrey System and Method for Facilitating Social E-Commerce
US20140095965A1 (en) * 2012-08-29 2014-04-03 Tencent Technology (Shenzhen) Company Limited Methods and devices for terminal control
US8694686B2 (en) 2010-07-20 2014-04-08 Lg Electronics Inc. User profile based configuration of user experience environment
US20140108426A1 (en) * 2011-04-08 2014-04-17 The Regents Of The University Of California Interactive system for collecting, displaying, and ranking items based on quantitative and textual input from multiple participants
US20140108178A1 (en) * 2012-07-10 2014-04-17 Huawei Technologies Co., Ltd. Information exchange method, user end, and system for online collaborative shopping
US20140108202A1 (en) * 2012-03-30 2014-04-17 Rakuten,Inc. Information processing apparatus, information processing method, information processing program, and recording medium
US20140108235A1 (en) * 2012-10-16 2014-04-17 American Express Travel Related Services Company, Inc. Systems and Methods for Payment Settlement
US8705811B1 (en) * 2010-10-26 2014-04-22 Apple Inc. Luminance adjusted face detection
US20140114884A1 (en) * 2010-11-24 2014-04-24 Dhiraj Daway System and Method for Providing Wardrobe Assistance
US20140122231A1 (en) * 2011-08-19 2014-05-01 Qualcomm Incorporated System and method for interactive promotion of products and services
US20140118482A1 (en) * 2012-10-26 2014-05-01 Korea Advanced Institute Of Science And Technology Method and apparatus for 2d to 3d conversion using panorama image
US20140122291A1 (en) * 2012-10-31 2014-05-01 Microsoft Corporation Bargaining Through a User-Specific Item List
US20140129935A1 (en) * 2012-11-05 2014-05-08 Dolly OVADIA NAHON Method and Apparatus for Developing and Playing Natural User Interface Applications
US20140129370A1 (en) * 2012-09-14 2014-05-08 James L. Mabrey Chroma Key System and Method for Facilitating Social E-Commerce
US20140129378A1 (en) * 2012-11-07 2014-05-08 Hand Held Products, Inc. Computer-assisted shopping and product location
WO2014070293A1 (en) * 2012-11-05 2014-05-08 Nara Logics, Inc. Systems and methods for providing enhanced neural network genesis and recommendations to one or more users
US20140136600A1 (en) * 2012-11-14 2014-05-15 Institute For Information Industry Method and system for processing file stored in cloud storage and computer readable storage medium storing the method
US8732101B1 (en) 2013-03-15 2014-05-20 Nara Logics, Inc. Apparatus and method for providing harmonized recommendations based on an integrated user profile
WO2013142625A3 (en) * 2012-03-20 2014-05-22 A9.Com, Inc. Structured lighting-based content interactions in multiple environments
US20140149247A1 (en) * 2012-11-28 2014-05-29 Josh Frey System and Method for Order Processing
US20140157145A1 (en) * 2012-11-30 2014-06-05 Facebook, Inc Social menu pages
US20140160122A1 (en) * 2012-12-10 2014-06-12 Microsoft Corporation Creating a virtual representation based on camera data
US20140172633A1 (en) * 2012-12-14 2014-06-19 Mastercard International Incorporated Payment interchange for use with global shopping cart
US20140173464A1 (en) * 2011-08-31 2014-06-19 Kobi Eisenberg Providing application context for a conversation
US20140168204A1 (en) * 2012-12-13 2014-06-19 Microsoft Corporation Model based video projection
US20140180654A1 (en) * 2012-12-23 2014-06-26 Stephen Michael Seymour Client Finite Element Submission System
US8769045B1 (en) 2011-10-17 2014-07-01 Google Inc. Systems and methods for incremental loading of collaboratively generated presentations
US8769053B2 (en) 2011-08-29 2014-07-01 Cinsay, Inc. Containerized software for virally copying from one endpoint to another
US8763901B1 (en) 2006-05-25 2014-07-01 Sean I. Mcghie Cross marketing between an entity's loyalty point program and a different loyalty program of a commerce partner
US20140189541A1 (en) * 2010-11-01 2014-07-03 Google Inc. Content sharing interface for sharing content in social networks
US20140195359A1 (en) * 2013-01-07 2014-07-10 Andrew William Schulz System and Method for Computer Automated Payment of Hard Copy Bills
US8782690B2 (en) 2008-01-30 2014-07-15 Cinsay, Inc. Interactive product placement system and method therefor
US8781932B2 (en) * 2012-08-08 2014-07-15 At&T Intellectual Property I, L.P. Platform for hosting virtual events
US8781990B1 (en) * 2010-02-25 2014-07-15 Google Inc. Crowdsensus: deriving consensus information from statements made by a crowd of users
US20140201283A1 (en) * 2013-01-11 2014-07-17 International Business Machines Corporation Personalizing a social networking profile page
US20140201039A1 (en) * 2012-10-08 2014-07-17 Livecom Technologies, Llc System and method for an automated process for visually identifying a product's presence and making the product available for viewing
US20140207578A1 (en) * 2011-11-11 2014-07-24 Millennial Media, Inc. System For Targeting Advertising To A Mobile Communication Device Based On Photo Metadata
US20140207609A1 (en) * 2013-01-23 2014-07-24 Facebook, Inc. Generating and maintaining a list of products desired by a social networking system user
US20140214504A1 (en) * 2013-01-31 2014-07-31 Sony Corporation Virtual meeting lobby for waiting for online event
US20140214591A1 (en) * 2013-01-31 2014-07-31 Ebay Inc. System and method to provide a product display in a business
WO2014117019A2 (en) * 2013-01-24 2014-07-31 Barker Jeremiah Timberline Graphical aggregation of virtualized network communication
US20140214629A1 (en) * 2013-01-31 2014-07-31 Hewlett-Packard Development Company, L.P. Interaction in a virtual reality environment
US8798401B1 (en) * 2012-06-15 2014-08-05 Shutterfly, Inc. Image sharing with facial recognition models
WO2014120692A1 (en) * 2013-01-29 2014-08-07 Mobitv, Inc. Scalable networked digital video recordings via shard-based architecture
US8806352B2 (en) 2011-05-06 2014-08-12 David H. Sitrick System for collaboration of a specific image and utilizing selected annotations while viewing and relative to providing a display presentation
US8812946B1 (en) 2011-10-17 2014-08-19 Google Inc. Systems and methods for rendering documents
US20140236652A1 (en) * 2013-02-19 2014-08-21 Wal-Mart Stores, Inc. Remote sales assistance system
US8819156B2 (en) 2011-03-11 2014-08-26 James Robert Miner Systems and methods for message collection
US8825627B1 (en) * 2011-03-29 2014-09-02 Amazon Technologies, Inc. Creating ambience during on-line shopping
US8826147B2 (en) 2011-05-06 2014-09-02 David H. Sitrick System and methodology for collaboration, with selective display of user input annotations among member computing appliances of a group/team
US20140249879A1 (en) * 2011-07-29 2014-09-04 Mark Oleynik Network system and method
US8832116B1 (en) 2012-01-11 2014-09-09 Google Inc. Using mobile application logs to measure and maintain accuracy of business information
US20140257839A1 (en) * 2013-03-07 2014-09-11 Pro Fit Optix Inc. Online Lens Ordering System for Vision Care Professionals or Direct to Customers
US20140253553A1 (en) * 2012-06-17 2014-09-11 Spaceview, Inc. Visualization of three-dimensional models of objects in two-dimensional environment
US20140258141A1 (en) * 2013-03-05 2014-09-11 Bibliotheca Limited Digital Media Lending System and Method
US20140258169A1 (en) * 2013-03-05 2014-09-11 Bental Wong Method and system for automated verification of customer reviews
US20140279235A1 (en) * 2011-12-20 2014-09-18 Thomas E. Sandholm Enabling collaborative reactions to images
US8844003B1 (en) 2006-08-09 2014-09-23 Ravenwhite Inc. Performing authentication
JP2014179135A (en) * 2014-07-01 2014-09-25 Toshiba Corp Image processing system, method and program
US20140289327A1 (en) * 2013-03-25 2014-09-25 Salesforce.Com Inc. Systems and methods of online social environment based translation of entity methods
US8851981B2 (en) 2010-04-06 2014-10-07 Multimedia Games, Inc. Personalized jackpot wagering game, gaming system, and method
US20140310123A1 (en) * 2013-04-16 2014-10-16 Shutterfly, Inc. Check-out path for multiple recipients
WO2014168710A1 (en) * 2013-03-15 2014-10-16 Balluun Ag Method and system of an authentic translation of a physical tradeshow
US8875011B2 (en) 2011-05-06 2014-10-28 David H. Sitrick Systems and methodologies providing for collaboration among a plurality of users at a plurality of computing appliances
US8874653B2 (en) 2012-11-12 2014-10-28 Maximilian A. Chang Vehicle security and customization
US20140324978A1 (en) * 2009-11-20 2014-10-30 Ustream, Inc. Broadcast Notifications Using Social Networking Systems
US20140337163A1 (en) * 2013-05-10 2014-11-13 Dell Products L.P. Forward-Looking Recommendations Using Information from a Plurality of Picks Generated by a Plurality of Users
US20140337162A1 (en) * 2013-05-10 2014-11-13 Dell Products L.P. Process to display picks on product category pages
US8897920B2 (en) 2009-04-17 2014-11-25 Intouch Technologies, Inc. Tele-presence robot system with software modularity, projector and laser pointer
US20140351093A1 (en) * 2012-05-17 2014-11-27 Leo Jeremias Chat enabled online marketplace systems and methods
US20140347435A1 (en) * 2013-05-24 2014-11-27 Polycom, Inc. Method and system for sharing content in videoconferencing
US8903847B2 (en) 2010-03-05 2014-12-02 International Business Machines Corporation Digital media voice tags in social networks
US8902278B2 (en) 2012-04-11 2014-12-02 Intouch Technologies, Inc. Systems and methods for visualizing and managing telepresence devices in healthcare networks
US20140358520A1 (en) * 2013-05-31 2014-12-04 Thomson Licensing Real-time online audio filtering
US20140358738A1 (en) * 2012-08-03 2014-12-04 Isabelle Ohnemus Garment fitting system and method
US8909583B2 (en) 2011-09-28 2014-12-09 Nara Logics, Inc. Systems and methods for providing recommendations based on collaborative and/or content-based nodal interrelationships
US8914735B2 (en) 2011-05-06 2014-12-16 David H. Sitrick Systems and methodologies providing collaboration and display among a plurality of users
WO2013192557A3 (en) * 2012-06-21 2014-12-18 Cinsay, Inc. Peer-assisted shopping
WO2013090743A3 (en) * 2011-12-16 2014-12-18 Illinois Tool Works Inc. Cloud based recipe distribution in an enterprise management system
US8918722B2 (en) 2011-05-06 2014-12-23 David H. Sitrick System and methodology for collaboration in groups with split screen displays
US8918724B2 (en) 2011-05-06 2014-12-23 David H. Sitrick Systems and methodologies providing controlled voice and data communication among a plurality of computing appliances associated as team members of at least one respective team or of a plurality of teams and sub-teams within the teams
US8918723B2 (en) 2011-05-06 2014-12-23 David H. Sitrick Systems and methodologies comprising a plurality of computing appliances having input apparatus and display apparatus and logically structured as a main team
US8924859B2 (en) 2011-05-06 2014-12-30 David H. Sitrick Systems and methodologies supporting collaboration of users as members of a team, among a plurality of computing appliances
US20150007110A1 (en) * 2013-06-26 2015-01-01 Acer Inc. Method for Controlling Electronic Apparatus and Electronic Apparatus Thereof
US20150006334A1 (en) * 2013-06-26 2015-01-01 International Business Machines Corporation Video-based, customer specific, transactions
US20150006715A1 (en) * 2013-06-30 2015-01-01 Jive Software, Inc. User-centered engagement analysis
US20150012362A1 (en) * 2013-07-03 2015-01-08 1-800 Contacts, Inc. Systems and methods for recommending products via crowdsourcing and detecting user characteristics
US20150012332A1 (en) * 2011-01-18 2015-01-08 Caterina Papachristos Business to business to shared communities system and method
US20150026156A1 (en) * 2013-05-31 2015-01-22 Michele Meek Systems and methods for facilitating the retail shopping experience online
US20150039468A1 (en) * 2012-06-21 2015-02-05 Cinsay, Inc. Apparatus and method for peer-assisted e-commerce shopping
US20150046860A1 (en) * 2013-08-06 2015-02-12 Sony Corporation Information processing apparatus and information processing method
US20150052008A1 (en) * 2013-08-16 2015-02-19 iWeave International Mobile Application For Hair Extensions
US20150052444A1 (en) * 2012-12-12 2015-02-19 Huizhou Tcl Mobile Communication Co., Ltd Method of displaying a dlna apparatus, and mobile terminal
US20150052198A1 (en) * 2013-08-16 2015-02-19 Joonsuh KWUN Dynamic social networking service system and respective methods in collecting and disseminating specialized and interdisciplinary knowledge
US20150052200A1 (en) * 2013-08-19 2015-02-19 Cisco Technology, Inc. Acquiring Regions of Remote Shared Content with High Resolution
US20150058083A1 (en) * 2012-03-15 2015-02-26 Isabel Herrera System for personalized fashion services
US20150062116A1 (en) * 2013-08-30 2015-03-05 1-800 Contacts, Inc. Systems and methods for rapidly generating a 3-d model of a user
US20150063678A1 (en) * 2013-08-30 2015-03-05 1-800 Contacts, Inc. Systems and methods for generating a 3-d model of a user using a rear-facing camera
US8977680B2 (en) 2012-02-02 2015-03-10 Vegas.Com Systems and methods for shared access to gaming accounts
US8977622B1 (en) * 2012-09-17 2015-03-10 Amazon Technologies, Inc. Evaluation of nodes
US8983174B2 (en) 2004-07-13 2015-03-17 Intouch Technologies, Inc. Mobile robot with a head-based movement mapping scheme
US8982155B2 (en) * 2010-08-31 2015-03-17 Ns Solutions Corporation Augmented reality providing system, information processing terminal, information processing apparatus, augmented reality providing method, information processing method, and program
US8990677B2 (en) 2011-05-06 2015-03-24 David H. Sitrick System and methodology for collaboration utilizing combined display with evolving common shared underlying image
US8990191B1 (en) * 2014-03-25 2015-03-24 Linkedin Corporation Method and system to determine a category score of a social network member
US20150088622A1 (en) * 2012-04-06 2015-03-26 LiveOne, Inc. Social media application for a media content providing platform
US20150088661A1 (en) * 2013-09-25 2015-03-26 Sears Brands, Llc Method and system for gesture-based cross channel commerce and marketing
US20150089530A1 (en) * 2013-09-26 2015-03-26 Pixwel Platform, LLC Localization process system
US20150085128A1 (en) * 2013-09-25 2015-03-26 Oncam Global, Inc. Mobile terminal security systems
US8996165B2 (en) 2008-10-21 2015-03-31 Intouch Technologies, Inc. Telepresence robot with a camera boom
US9001118B2 (en) 2012-06-21 2015-04-07 Microsoft Technology Licensing, Llc Avatar construction using depth camera
US20150106205A1 (en) * 2013-10-16 2015-04-16 Google Inc. Generating an offer sheet based on offline content
US9014832B2 (en) 2009-02-02 2015-04-21 Eloy Technology, Llc Augmenting media content in a media sharing group
US20150111547A1 (en) * 2012-05-08 2015-04-23 Nokia Corporation Method and apparatus for providing immersive interaction via everyday devices
US20150120505A1 (en) * 2013-10-31 2015-04-30 International Business Machines Corporation In-store omnichannel inventory exposure
US20150127489A1 (en) * 2013-11-04 2015-05-07 Deepak Kumar Vasthimal Dynamic creation of temporal networks based on similar search queries
US20150134302A1 (en) * 2013-11-14 2015-05-14 Jatin Chhugani 3-dimensional digital garment creation from planar garment photographs
US20150134496A1 (en) * 2012-07-10 2015-05-14 Dressformer, Inc. Method for providing for the remote fitting and/or selection of clothing
US20150135048A1 (en) * 2011-04-20 2015-05-14 Panafold Methods, apparatus, and systems for visually representing a relative relevance of content elements to an attractor
US9044682B1 (en) * 2013-09-26 2015-06-02 Matthew B. Rappaport Methods and apparatus for electronic commerce initiated through use of video games and fulfilled by delivery of physical goods
US20150156228A1 (en) * 2013-11-18 2015-06-04 Ronald Langston Social networking interacting system
WO2015081060A1 (en) * 2013-11-26 2015-06-04 Dash Software, LLC Mobile application check-in and payment systems and methods of their operation
US20150154419A1 (en) * 2013-12-03 2015-06-04 Sony Corporation Computer ecosystem with digital rights management (drm) transfer mechanism
US20150170254A1 (en) * 2012-06-28 2015-06-18 Unijunction (Pty) Ltd System and method for processing an electronic order
US9064015B2 (en) * 2011-12-14 2015-06-23 Artist Growth, Llc Action alignment for event planning, project management and process structuring
US9064377B2 (en) 2010-04-06 2015-06-23 Multimedia Games, Inc. Wagering game, gaming machine, networked gaming system and method with a base game and a simultaneous bonus currency game
US9069380B2 (en) 2011-06-10 2015-06-30 Aliphcom Media device, application, and content management using sensory input
US9076247B2 (en) * 2012-08-10 2015-07-07 Ppg Industries Ohio, Inc. System and method for visualizing an object in a simulated environment
US20150199910A1 (en) * 2014-01-10 2015-07-16 Cox Communications, Inc. Systems and methods for an educational platform providing a multi faceted learning environment
US20150199366A1 (en) * 2014-01-15 2015-07-16 Avigilon Corporation Storage management of data streamed from a video source device
US20150199095A1 (en) * 2012-06-20 2015-07-16 Maquet Critical Care Ab Breathing apparatus having a display with user selectable background
US9087403B2 (en) 2012-07-26 2015-07-21 Qualcomm Incorporated Maintaining continuity of augmentations
US9089972B2 (en) 2010-03-04 2015-07-28 Intouch Technologies, Inc. Remote presence system including a cart that supports a robot face and an overhead camera
US9091851B2 (en) 2010-02-28 2015-07-28 Microsoft Technology Licensing, Llc Light control in head mounted displays
US20150215243A1 (en) * 2012-08-22 2015-07-30 Nokia Corporation Method and apparatus for exchanging status updates while collaborating
US9097891B2 (en) 2010-02-28 2015-08-04 Microsoft Technology Licensing, Llc See-through near-eye display glasses including an auto-brightness control for the display brightness based on the brightness in the environment
US9098611B2 (en) 2012-11-26 2015-08-04 Intouch Technologies, Inc. Enhanced video interaction for a user interface of a telepresence network
US9097890B2 (en) 2010-02-28 2015-08-04 Microsoft Technology Licensing, Llc Grating in a light transmissive illumination system for see-through near-eye display glasses
US20150216413A1 (en) * 2014-02-05 2015-08-06 Self Care Catalysts Inc. Systems, devices, and methods for analyzing and enhancing patient health
CN104854623A (en) * 2012-08-02 2015-08-19 微软技术许可有限责任公司 Avatar-based virtual dressing room
US20150248667A1 (en) * 2012-12-31 2015-09-03 Ebay Inc. Dongle facilitated wireless consumer payments
WO2015089523A3 (en) * 2013-12-09 2015-09-03 Premium Lubricants (Pty) Ltd Web based marketplace
US9128981B1 (en) 2008-07-29 2015-09-08 James L. Geer Phone assisted ‘photographic memory’
US9129302B2 (en) * 2011-03-17 2015-09-08 Sears Brands, L.L.C. Methods and systems for coupon service applications
US9128281B2 (en) 2010-09-14 2015-09-08 Microsoft Technology Licensing, Llc Eyepiece with uniformly illuminated reflective display
US9129295B2 (en) 2010-02-28 2015-09-08 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a fast response photochromic film system for quick transition from dark to clear
US9134534B2 (en) 2010-02-28 2015-09-15 Microsoft Technology Licensing, Llc See-through near-eye display glasses including a modular image source
US20150261647A1 (en) * 2012-10-02 2015-09-17 Nec Corporation Information system construction assistance device, information system construction assistance method, and recording medium
US9143542B1 (en) * 2013-06-05 2015-09-22 Google Inc. Media content collaboration
US9140566B1 (en) 2009-03-25 2015-09-22 Waldeck Technology, Llc Passive crowd-sourced map updates and alternative route recommendations
US9138891B2 (en) 2008-11-25 2015-09-22 Intouch Technologies, Inc. Server connectivity control for tele-presence robot
US20150269547A1 (en) * 2010-12-30 2015-09-24 Futurewei Technologies, Inc. System for Managing, Storing and Providing Shared Digital Content to Users in a User Relationship Defined Group in a Multi-Platform Environment
US20150278841A1 (en) * 2014-03-31 2015-10-01 United Video Properties, Inc. Systems and methods for receiving coupon and vendor data
US20150278905A1 (en) * 2014-04-01 2015-10-01 Electronic Commodities Exchange Virtual jewelry shopping experience with in-store preview
US20150278911A1 (en) * 2014-03-31 2015-10-01 Sap Ag System and Method for Apparel Size Suggestion Based on Sales Transaction Data Analysis
US9160783B2 (en) 2007-05-09 2015-10-13 Intouch Technologies, Inc. Robot system that operates through a network firewall
US20150302011A1 (en) * 2012-12-26 2015-10-22 Rakuten, Inc. Image management device, image generation program, image management method, and image management program
US9171315B1 (en) 2012-04-04 2015-10-27 Google Inc. System and method for negotiating item prices
US9177081B2 (en) 2005-08-26 2015-11-03 Veveo, Inc. Method and system for processing ambiguous, multi-term search queries
US9174342B2 (en) 2012-05-22 2015-11-03 Intouch Technologies, Inc. Social behavior rules for a medical telepresence robot
US9185341B2 (en) 2010-09-03 2015-11-10 Time Warner Cable Enterprises Llc Digital domain content processing and distribution apparatus and methods
US9182596B2 (en) 2010-02-28 2015-11-10 Microsoft Technology Licensing, Llc See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light
WO2015085028A3 (en) * 2013-12-06 2015-11-12 The Dun & Bradstreet Corporation Method and system for collecting data on businesses via mobile and geolocation communications
US20150334142A1 (en) * 2009-04-01 2015-11-19 Shindig, Inc. Systems and methods for creating and publishing customizable images from within online events
US20150332273A1 (en) * 2014-05-19 2015-11-19 American Express Travel Related Services Company, Inc. Authentication via biometric passphrase
US9195834B1 (en) 2007-03-19 2015-11-24 Ravenwhite Inc. Cloud authentication
US9193065B2 (en) 2008-07-10 2015-11-24 Intouch Technologies, Inc. Docking system for a tele-presence robot
US9196003B2 (en) 2012-12-20 2015-11-24 Wal-Mart Stores, Inc. Pre-purchase feedback apparatus and method
US20150341430A1 (en) * 2012-12-03 2015-11-26 Tencent Technology (Shenzhen) Company Limited Method, apparatus, and system for interaction between terminals
US9198728B2 (en) 2005-09-30 2015-12-01 Intouch Technologies, Inc. Multi-camera mobile teleconferencing platform
US20150346954A1 (en) * 2014-05-30 2015-12-03 International Business Machines Corporation Flexible control in resizing of visual displays
US9213420B2 (en) 2012-03-20 2015-12-15 A9.Com, Inc. Structured lighting based content interactions
US9215423B2 (en) 2009-03-30 2015-12-15 Time Warner Cable Enterprises Llc Recommendation engine apparatus and methods
US20150371260A1 (en) * 2014-06-19 2015-12-24 Elwha Llc Systems and methods for providing purchase options to consumers
US9223134B2 (en) 2010-02-28 2015-12-29 Microsoft Technology Licensing, Llc Optical imperfections in a light transmissive illumination system for see-through near-eye display glasses
US9224129B2 (en) 2011-05-06 2015-12-29 David H. Sitrick System and methodology for multiple users concurrently working and viewing on a common project
US20150379613A1 (en) * 2012-06-30 2015-12-31 At&T Mobility Ii Llc Enhancing a User's Shopping Experience
US20150379623A1 (en) * 2014-06-25 2015-12-31 Akshay Gadre Digital avatars in online marketplaces
US20150379532A1 (en) * 2012-12-11 2015-12-31 Beijing Jingdong Century Trading Co., Ltd. Method and system for identifying bad commodities based on user purchase behaviors
US9230283B1 (en) 2007-12-14 2016-01-05 Consumerinfo.Com, Inc. Card registry systems and methods
US9230014B1 (en) * 2011-09-13 2016-01-05 Sri International Method and apparatus for recommending work artifacts based on collaboration events
US9229227B2 (en) 2010-02-28 2016-01-05 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a light transmissive wedge shaped illumination system
US9236024B2 (en) 2011-12-06 2016-01-12 Glasses.Com Inc. Systems and methods for obtaining a pupillary distance measurement using a mobile computing device
US9241184B2 (en) 2011-06-01 2016-01-19 At&T Intellectual Property I, L.P. Clothing visualization
US9251313B2 (en) 2012-04-11 2016-02-02 Intouch Technologies, Inc. Systems and methods for visualizing and managing telepresence devices in healthcare networks
US9258670B2 (en) 2011-06-10 2016-02-09 Aliphcom Wireless enabled cap for a data-capable device
US9256904B1 (en) 2008-08-14 2016-02-09 Experian Information Solutions, Inc. Multi-bureau credit file freeze and unfreeze
US9256829B2 (en) 2010-12-17 2016-02-09 Microsoft Technology Licensing, Llc Information propagation probability for a social network
US20160042402A1 (en) * 2014-08-07 2016-02-11 Akshay Gadre Evaluating digital inventories
US20160042233A1 (en) * 2014-08-06 2016-02-11 ProSent Mobile Corporation Method and system for facilitating evaluation of visual appeal of two or more objects
US9264664B2 (en) 2010-12-03 2016-02-16 Intouch Technologies, Inc. Systems and methods for dynamic bandwidth allocation
US9268406B2 (en) 2011-09-30 2016-02-23 Microsoft Technology Licensing, Llc Virtual spectator experience with a personal audio/visual apparatus
US20160055368A1 (en) * 2014-08-22 2016-02-25 Microsoft Corporation Face alignment with shape regression
US20160063613A1 (en) * 2014-08-30 2016-03-03 Lucy Ma Zhao Providing a virtual shopping environment for an item
US9280529B2 (en) 2010-04-12 2016-03-08 Google Inc. Collaborative cursors in a hosted word processor
US9287727B1 (en) 2013-03-15 2016-03-15 Icontrol Networks, Inc. Temporal voltage adaptive lithium battery charger
US9286715B2 (en) 2012-05-23 2016-03-15 Glasses.Com Inc. Systems and methods for adjusting a virtual try-on
US9285589B2 (en) 2010-02-28 2016-03-15 Microsoft Technology Licensing, Llc AR glasses with event and sensor triggered control of AR eyepiece applications
US9286711B2 (en) 2011-09-30 2016-03-15 Microsoft Technology Licensing, Llc Representing a location at a previous time period using an augmented reality display
US20160086161A1 (en) * 2002-10-01 2016-03-24 Andrew H. B. Zhou Systems and methods for mobile application, wearable application, transactional messaging, calling, digital multimedia capture and payment transactions
US9296107B2 (en) 2003-12-09 2016-03-29 Intouch Technologies, Inc. Protocol for a remotely controlled videoconferencing robot
US9300445B2 (en) 2010-05-27 2016-03-29 Time Warner Cable Enterprise LLC Digital domain content processing and distribution apparatus and methods
US9300919B2 (en) 2009-06-08 2016-03-29 Time Warner Cable Enterprises Llc Media bridge apparatus and methods
US9299099B1 (en) 2012-04-04 2016-03-29 Google Inc. Providing recommendations in a social shopping trip
US9304646B2 (en) 2012-03-20 2016-04-05 A9.Com, Inc. Multi-user content interactions
US9306809B2 (en) 2007-06-12 2016-04-05 Icontrol Networks, Inc. Security system with networked touchscreen
US20160098775A1 (en) * 2014-10-07 2016-04-07 Comenity Llc Sharing an ensemble of items
US9313530B2 (en) 2004-07-20 2016-04-12 Time Warner Cable Enterprises Llc Technique for securely communicating programming content
US9313458B2 (en) 2006-10-20 2016-04-12 Time Warner Cable Enterprises Llc Downloadable security and protection methods and apparatus
US9311622B2 (en) 2013-01-15 2016-04-12 Google Inc. Resolving mutations in a partially-loaded spreadsheet model
US9323250B2 (en) 2011-01-28 2016-04-26 Intouch Technologies, Inc. Time-dependent navigation of telepresence robots
US9323871B2 (en) 2011-06-27 2016-04-26 Trimble Navigation Limited Collaborative development of a model on a network
US20160117339A1 (en) * 2014-10-27 2016-04-28 Chegg, Inc. Automated Lecture Deconstruction
US9330366B2 (en) 2011-05-06 2016-05-03 David H. Sitrick System and method for collaboration via team and role designation and control and management of annotations
US9329689B2 (en) 2010-02-28 2016-05-03 Microsoft Technology Licensing, Llc Method and apparatus for biometric data capture
US9329774B2 (en) 2011-05-27 2016-05-03 Microsoft Technology Licensing, Llc Switching back to a previously-interacted-with application
US9336137B2 (en) 2011-09-02 2016-05-10 Google Inc. System and method for performing data management in a collaborative development environment
US9336607B1 (en) * 2012-11-28 2016-05-10 Amazon Technologies, Inc. Automatic identification of projection surfaces
US20160132533A1 (en) * 2014-04-22 2016-05-12 Sk Planet Co., Ltd. Device for providing image related to replayed music and method using same
US9341843B2 (en) 2010-02-28 2016-05-17 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a small scale image source
US9342783B1 (en) 2007-03-30 2016-05-17 Consumerinfo.Com, Inc. Systems and methods for data verification
US20160142995A1 (en) * 2012-08-09 2016-05-19 Actv8, Inc. Method and apparatus for interactive mobile offer system based on proximity of mobile device to media source
US20160139742A1 (en) * 2013-06-18 2016-05-19 Samsung Electronics Co., Ltd. Method for managing media contents and apparatus for the same
US9348803B2 (en) 2013-10-22 2016-05-24 Google Inc. Systems and methods for providing just-in-time preview of suggestion resolutions
US9349276B2 (en) 2010-09-28 2016-05-24 Icontrol Networks, Inc. Automated reporting of account and sensor information
US20160150079A1 (en) * 2010-05-05 2016-05-26 Knapp Investment Company Limited Caller id surfing
US9357247B2 (en) 2008-11-24 2016-05-31 Time Warner Cable Enterprises Llc Apparatus and methods for content delivery and message exchange across multiple content delivery networks
US9356904B1 (en) * 2012-05-14 2016-05-31 Google Inc. Event invitations having cinemagraphs
US9361021B2 (en) 2012-05-22 2016-06-07 Irobot Corporation Graphical user interfaces including touchpad driving interfaces for telemedicine devices
US9366862B2 (en) 2010-02-28 2016-06-14 Microsoft Technology Licensing, Llc System and method for delivering content to a group of see-through near eye display eyepieces
US9367522B2 (en) 2012-04-13 2016-06-14 Google Inc. Time-based presentation editing
US9367124B2 (en) 2012-03-20 2016-06-14 A9.Com, Inc. Multi-application content interactions
US20160171570A1 (en) * 2012-12-14 2016-06-16 Mastercard International Incorporated System and method for payment, data management, and interchanges for use with global shopping cart
US9373025B2 (en) 2012-03-20 2016-06-21 A9.Com, Inc. Structured lighting-based content interactions in multiple environments
USD759690S1 (en) 2014-03-25 2016-06-21 Consumerinfo.Com, Inc. Display screen or portion thereof with graphical user interface
USD759689S1 (en) 2014-03-25 2016-06-21 Consumerinfo.Com, Inc. Display screen or portion thereof with graphical user interface
US20160180327A1 (en) * 2014-12-19 2016-06-23 Capital One Services, Llc Systems and methods for contactless and secure data transfer
US20160179908A1 (en) * 2014-12-19 2016-06-23 At&T Intellectual Property I, L.P. System and method for creating and sharing plans through multimodal dialog
US9380329B2 (en) 2009-03-30 2016-06-28 Time Warner Cable Enterprises Llc Personal media channel apparatus and methods
USD760256S1 (en) 2014-03-25 2016-06-28 Consumerinfo.Com, Inc. Display screen or portion thereof with graphical user interface
US20160189173A1 (en) * 2014-12-30 2016-06-30 The Nielsen Company (Us), Llc Methods and apparatus to predict attitudes of consumers
US9381654B2 (en) 2008-11-25 2016-07-05 Intouch Technologies, Inc. Server connectivity control for tele-presence robot
US9383917B2 (en) 2011-03-28 2016-07-05 Microsoft Technology Licensing, Llc Predictive tiling
US20160196668A1 (en) * 2013-08-19 2016-07-07 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and device for processing virtual fitting model image
US9396570B2 (en) * 2012-12-28 2016-07-19 Rakuten, Inc. Image processing method to superimpose item image onto model image and image processing device thereof
US9400589B1 (en) 2002-05-30 2016-07-26 Consumerinfo.Com, Inc. Circular rotational interface for display of consumer credit information
US9401058B2 (en) 2012-01-30 2016-07-26 International Business Machines Corporation Zone based presence determination via voiceprint location awareness
US9406085B1 (en) 2013-03-14 2016-08-02 Consumerinfo.Com, Inc. System and methods for credit dispute processing, resolution, and reporting
US9412248B1 (en) 2007-02-28 2016-08-09 Icontrol Networks, Inc. Security, monitoring and automation controller access and use of legacy security control panel information
US9419928B2 (en) 2011-03-11 2016-08-16 James Robert Miner Systems and methods for message collection
US9423951B2 (en) 2010-12-31 2016-08-23 Microsoft Technology Licensing, Llc Content-based snap point
US9429934B2 (en) 2008-09-18 2016-08-30 Intouch Technologies, Inc. Mobile videoconferencing robot system with network adaptive driving
US9430130B2 (en) 2010-12-20 2016-08-30 Microsoft Technology Licensing, Llc Customization of an immersive environment
US9443268B1 (en) 2013-08-16 2016-09-13 Consumerinfo.Com, Inc. Bill payment and reporting
US9450776B2 (en) 2005-03-16 2016-09-20 Icontrol Networks, Inc. Forming a security network including integrated security system components
US20160274759A1 (en) 2008-08-25 2016-09-22 Paul J. Dawes Security system with networked touchscreen and gateway
US9462037B2 (en) 2013-01-07 2016-10-04 Google Inc. Dynamically sizing chunks in a partially loaded spreadsheet model
US9460557B1 (en) * 2016-03-07 2016-10-04 Bao Tran Systems and methods for footwear fitting
US9460542B2 (en) 2011-11-15 2016-10-04 Trimble Navigation Limited Browser-based collaborative development of a 3D model
US9460342B1 (en) * 2013-08-05 2016-10-04 Google Inc. Determining body measurements
US20160293032A1 (en) * 2015-04-03 2016-10-06 Drexel University Video Instruction Methods and Devices
US20160292390A1 (en) * 2013-10-31 2016-10-06 Michele SCULATI Method and system for a customized definition of food quantities based on the determination of anthropometric parameters
US9465504B1 (en) * 2013-05-06 2016-10-11 Hrl Laboratories, Llc Automated collaborative behavior analysis using temporal motifs
US9467723B2 (en) 2012-04-04 2016-10-11 Time Warner Cable Enterprises Llc Apparatus and methods for automated highlight reel creation in a content delivery network
US9477737B1 (en) 2013-11-20 2016-10-25 Consumerinfo.Com, Inc. Systems and user interfaces for dynamic access of multiple remote databases and synchronization of data based on user rules
US9483853B2 (en) 2012-05-23 2016-11-01 Glasses.Com Inc. Systems and methods to display rendered images
US20160330133A1 (en) * 2015-05-08 2016-11-10 Accenture Global Services Limited Providing network resources based on available user information
CN106130788A (en) * 2016-08-05 2016-11-16 珠海市魅族科技有限公司 Method and device for adapting a subject document to a terminal
US20160335485A1 (en) * 2015-05-13 2016-11-17 Electronics And Telecommunications Research Institute User intention analysis apparatus and method based on image information of three-dimensional space
US9501588B1 (en) 2013-10-28 2016-11-22 Kenneth S. Rowe Garden simulation
US9501840B2 (en) * 2014-10-20 2016-11-22 Toshiba Tec Kabushiki Kaisha Information processing apparatus and clothes proposing method
US9501782B2 (en) 2010-03-20 2016-11-22 Arthur Everett Felgate Monitoring system
WO2016185400A2 (en) 2015-05-18 2016-11-24 Embl Retail Inc Method and system for recommending fitting footwear
US9510065B2 (en) 2007-04-23 2016-11-29 Icontrol Networks, Inc. Method and system for automatically providing alternate network access for telecommunications
US9516167B2 (en) * 2014-07-24 2016-12-06 Genesys Telecommunications Laboratories, Inc. Media channel management apparatus for network communications sessions
US20160365090A1 (en) * 2015-06-11 2016-12-15 Nice-Systems Ltd. System and method for automatic language model generation
US20160364664A1 (en) * 2015-06-14 2016-12-15 Grant Patrick Henderson Method and system for high-speed business method switching
US9529785B2 (en) 2012-11-27 2016-12-27 Google Inc. Detecting relationships between edits and acting on a subset of edits
US9529851B1 (en) 2013-12-02 2016-12-27 Experian Information Solutions, Inc. Server architecture for electronic data quality processing
US9531593B2 (en) 2007-06-12 2016-12-27 Icontrol Networks, Inc. Takeover processes in security network integrated with premise security system
US9531878B2 (en) 2012-12-12 2016-12-27 Genesys Telecommunications Laboratories, Inc. System and method for access number distribution in a contact center
US20160378887A1 (en) * 2015-06-24 2016-12-29 Juan Elias Maldonado Augmented Reality for Architectural Interior Placement
US9535597B2 (en) 2011-05-27 2017-01-03 Microsoft Technology Licensing, Llc Managing an immersive interface in a multi-application immersive environment
US9536263B1 (en) 2011-10-13 2017-01-03 Consumerinfo.Com, Inc. Debt services candidate locator
US9544655B2 (en) 2013-12-13 2017-01-10 Nant Holdings Ip, Llc Visual hash tags via trending recognition activities, systems and methods
US9542553B1 (en) 2011-09-16 2017-01-10 Consumerinfo.Com, Inc. Systems and methods of identity protection and management
US20170012984A1 (en) * 2013-11-11 2017-01-12 Amazon Technologies, Inc. Access control for a document management and collaboration system
US9552552B1 (en) 2011-04-29 2017-01-24 Google Inc. Identification of over-clustered map features
EP3121793A1 (en) 2015-07-22 2017-01-25 Adidas AG Method and apparatus for generating an artificial picture
US9557909B2 (en) 2011-09-09 2017-01-31 Microsoft Technology Licensing, Llc Semantic zoom linguistic helpers
US9565472B2 (en) 2012-12-10 2017-02-07 Time Warner Cable Enterprises Llc Apparatus and methods for content transfer protection
US20170053205A1 (en) * 2013-03-15 2017-02-23 Whoknows, Inc. System and method for tracking knowledge and expertise
US20170076335A1 (en) * 2015-09-15 2017-03-16 International Business Machines Corporation Big data enabled insights based personalized 3d offers
US9602765B2 (en) 2009-08-26 2017-03-21 Intouch Technologies, Inc. Portable remote presence robot
US9602414B2 (en) 2011-02-09 2017-03-21 Time Warner Cable Enterprises Llc Apparatus and methods for controlled bandwidth reclamation
US9607336B1 (en) 2011-06-16 2017-03-28 Consumerinfo.Com, Inc. Providing credit inquiry alerts
US9606992B2 (en) 2011-09-30 2017-03-28 Microsoft Technology Licensing, Llc Personal audio/visual apparatus providing resource management
US9609003B1 (en) 2007-06-12 2017-03-28 Icontrol Networks, Inc. Generating risk profile using data of home monitoring and security system
US20170091830A9 (en) * 2012-05-02 2017-03-30 James Plankey System and method for managing multimedia sales promotions
US9614899B1 (en) * 2014-05-30 2017-04-04 Intuit Inc. System and method for user contributed website scripts
US9621408B2 (en) 2006-06-12 2017-04-11 Icontrol Networks, Inc. Gateway registry methods and systems
US9616576B2 (en) 2008-04-17 2017-04-11 Intouch Technologies, Inc. Mobile tele-presence system with a microphone system
US20170103120A1 (en) * 2003-02-20 2017-04-13 Dell Software Inc. Using distinguishing properties to classify messages
US9628440B2 (en) 2008-11-12 2017-04-18 Icontrol Networks, Inc. Takeover processes in security network integrated with premise security system
US9635421B2 (en) 2009-11-11 2017-04-25 Time Warner Cable Enterprises Llc Methods and apparatus for audience data collection and analysis in a content delivery network
US20170124160A1 (en) * 2015-10-30 2017-05-04 International Business Machines Corporation Collecting social media users in a specific customer segment
US9652654B2 (en) 2012-06-04 2017-05-16 Ebay Inc. System and method for providing an interactive shopping experience via webcam
US9654541B1 (en) 2012-11-12 2017-05-16 Consumerinfo.Com, Inc. Aggregating user web browsing data
DE102015222782A1 (en) 2015-11-18 2017-05-18 Sirona Dental Systems Gmbh Method for visualizing a dental situation
US9661270B2 (en) 2008-11-24 2017-05-23 Shindig, Inc. Multiparty communications systems and methods that optimize communications based on mode and available bandwidth
US9658766B2 (en) 2011-05-27 2017-05-23 Microsoft Technology Licensing, Llc Edge gesture
US9665384B2 (en) 2005-08-30 2017-05-30 Microsoft Technology Licensing, Llc Aggregation of computing device settings
US9674224B2 (en) 2007-01-24 2017-06-06 Time Warner Cable Enterprises Llc Apparatus and methods for provisioning in a download-enabled system
US9684905B1 (en) 2010-11-22 2017-06-20 Experian Information Solutions, Inc. Systems and methods for data verification
US20170186228A1 (en) * 2010-06-07 2017-06-29 Gary Stephen Shuster Creation and use of virtual places
US9697504B2 (en) 2013-09-27 2017-07-04 Cinsay, Inc. N-level replication of supplemental content
US9696888B2 (en) 2010-12-20 2017-07-04 Microsoft Technology Licensing, Llc Application-launching interface for multiple modes
US9697263B1 (en) 2013-03-04 2017-07-04 Experian Information Solutions, Inc. Consumer data request fulfillment system
US9699123B2 (en) 2014-04-01 2017-07-04 Ditto Technologies, Inc. Methods, systems, and non-transitory machine-readable medium for incorporating a series of images resident on a user device into an existing web browser session
US9704174B1 (en) 2006-05-25 2017-07-11 Sean I. Mcghie Conversion of loyalty program points to commerce partner points per terms of a mutual agreement
US9710852B1 (en) 2002-05-30 2017-07-18 Consumerinfo.Com, Inc. Credit report timeline user interface
US9710841B2 (en) 2013-09-30 2017-07-18 Comenity Llc Method and medium for recommending a personalized ensemble
US9710801B2 (en) 2014-04-22 2017-07-18 American Express Travel Related Services Company, Inc. Systems and methods for charge splitting
US9715337B2 (en) 2011-11-08 2017-07-25 Intouch Technologies, Inc. Tele-presence system with a user interface that displays different communication links
US9721147B1 (en) 2013-05-23 2017-08-01 Consumerinfo.Com, Inc. Digital identity
WO2017132689A1 (en) * 2016-01-29 2017-08-03 Curio Search, Inc. Method and system for product discovery
US9729342B2 (en) 2010-12-20 2017-08-08 Icontrol Networks, Inc. Defining and implementing sensor triggered response rules
US9733333B2 (en) 2014-05-08 2017-08-15 Shindig, Inc. Systems and methods for monitoring participant attentiveness within events and group assortments
US9742768B2 (en) 2006-11-01 2017-08-22 Time Warner Cable Enterprises Llc Methods and apparatus for premises content distribution
US9741059B1 (en) * 2014-05-23 2017-08-22 Intuit Inc. System and method for managing website scripts
US9763048B2 (en) 2009-07-21 2017-09-12 Waldeck Technology, Llc Secondary indications of user locations and use thereof by a location-based service
US9759917B2 (en) 2010-02-28 2017-09-12 Microsoft Technology Licensing, Llc AR glasses with event and sensor triggered AR eyepiece interface to external devices
US9766790B2 (en) 2010-12-23 2017-09-19 Microsoft Technology Licensing, Llc Application reporting in an application-selectable user interface
US9763581B2 (en) 2003-04-23 2017-09-19 P Tech, Llc Patient monitoring apparatus and method for orthosis and other devices
US20170270686A1 (en) * 2016-03-19 2017-09-21 Jessica V. Couch Use of Camera on Mobile Device to Extract Measurements From Garments
US20170277365A1 (en) * 2016-03-28 2017-09-28 Intel Corporation Control system for user apparel selection
US9779708B2 (en) 2009-04-24 2017-10-03 Shindig, Inc. Networks of portable electronic devices that collectively generate sound
US20170287044A1 (en) * 2016-03-31 2017-10-05 Under Armour, Inc. Methods and Apparatus for Enhanced Product Recommendations
US20170287226A1 (en) * 2016-04-03 2017-10-05 Integem Inc Methods and systems for real-time image and signal processing in augmented reality based communications
US9792361B1 (en) 2008-07-29 2017-10-17 James L. Geer Photographic memory
US9805408B2 (en) 2013-06-17 2017-10-31 Dell Products L.P. Automated creation of collages from a collection of assets
RU2634734C2 (en) * 2013-01-25 2017-11-03 Маттиас Рат Unified multimedia instrument, system and method for researching and studying virtual human body
US9813463B2 (en) 2007-10-24 2017-11-07 Sococo, Inc. Phoning into virtual communication environments
US9830646B1 (en) 2012-11-30 2017-11-28 Consumerinfo.Com, Inc. Credit score goals and alerts systems and methods
US9836183B1 (en) * 2016-09-14 2017-12-05 Quid, Inc. Summarized network graph for semantic similarity graphs of large corpora
US9842192B2 (en) 2008-07-11 2017-12-12 Intouch Technologies, Inc. Tele-presence robot system with multi-cast features
US9843552B2 (en) 2010-08-31 2017-12-12 Apple Inc. Classification and status of users of networking and social activity systems
US9853959B1 (en) 2012-05-07 2017-12-26 Consumerinfo.Com, Inc. Storage and maintenance of personal data
US9849593B2 (en) 2002-07-25 2017-12-26 Intouch Technologies, Inc. Medical tele-robotic system with a master remote station with an arbitrator
CN107526433A (en) * 2016-06-21 2017-12-29 宏达国际电子股份有限公司 Method and simulation system for providing customized information in a simulated environment
US9867143B1 (en) 2013-03-15 2018-01-09 Icontrol Networks, Inc. Adaptive Power Modulation
US9870589B1 (en) 2013-03-14 2018-01-16 Consumerinfo.Com, Inc. Credit utilization tracking and reporting
US20180018519A1 (en) * 2016-07-12 2018-01-18 Wal-Mart Stores, Inc. Systems and Methods for Automated Assessment of Physical Objects
US9875489B2 (en) 2013-09-11 2018-01-23 Cinsay, Inc. Dynamic binding of video content
US9880019B2 (en) 2012-06-05 2018-01-30 Apple Inc. Generation of intersection information by a mapping service
US9881303B2 (en) 2014-06-05 2018-01-30 Paypal, Inc. Systems and methods for implementing automatic payer authentication
US20180035168A1 (en) * 2016-07-28 2018-02-01 Baidu Online Network Technology (Beijing) Co., Ltd. Method and Apparatus for Providing Combined Barrage Information
US9886794B2 (en) 2012-06-05 2018-02-06 Apple Inc. Problem reporting in maps
US9892457B1 (en) 2014-04-16 2018-02-13 Consumerinfo.Com, Inc. Providing credit data in search results
US9892447B2 (en) 2013-05-08 2018-02-13 Ebay Inc. Performing image searches in a network-based publication system
US9906838B2 (en) 2010-07-12 2018-02-27 Time Warner Cable Enterprises Llc Apparatus and methods for content delivery and message exchange across multiple content delivery networks
US20180059881A1 (en) * 2016-09-01 2018-03-01 Samsung Electronics Co., Ltd. Refrigerator storage system having a display
US20180060948A1 (en) * 2016-08-24 2018-03-01 Wal-Mart Stores, Inc. Apparatus and method for providing a virtual shopping environment
US20180060740A1 (en) * 2016-08-23 2018-03-01 International Business Machines Corporation Virtual resource t-shirt size generation and recommendation based on crowd sourcing
US9911149B2 (en) 2015-01-21 2018-03-06 Paypal, Inc. Systems and methods for online shopping cart management
US9910927B2 (en) 2014-03-13 2018-03-06 Ebay Inc. Interactive mirror displays for presenting product recommendations
US9918345B2 (en) 2016-01-20 2018-03-13 Time Warner Cable Enterprises Llc Apparatus and method for wireless network services in moving vehicles
US9922052B1 (en) * 2013-04-26 2018-03-20 A9.Com, Inc. Custom image data store
CN107833145A (en) * 2017-09-19 2018-03-23 翔创科技(北京)有限公司 Database building method and source-tracing method for livestock, storage medium, and electronic device
US9928975B1 (en) 2013-03-14 2018-03-27 Icontrol Networks, Inc. Three-way switch
US9935833B2 (en) 2014-11-05 2018-04-03 Time Warner Cable Enterprises Llc Methods and apparatus for determining an optimized wireless interface installation configuration
US20180096506A1 (en) * 2016-10-04 2018-04-05 Facebook, Inc. Controls and Interfaces for User Interactions in Virtual Spaces
US9947366B2 (en) 2009-04-01 2018-04-17 Shindig, Inc. Group portraits composed using video chat systems
US9961413B2 (en) 2010-07-22 2018-05-01 Time Warner Cable Enterprises Llc Apparatus and methods for packetized content delivery over a bandwidth efficient network
US9965792B2 (en) 2013-05-10 2018-05-08 Dell Products L.P. Picks API which facilitates dynamically injecting content onto a web page for search engines
US9971752B2 (en) 2013-08-19 2018-05-15 Google Llc Systems and methods for resolving privileged edits within suggested edits
WO2018089676A1 (en) * 2016-11-10 2018-05-17 Dga Inc. Product tagging and purchasing method and system
US9974612B2 (en) 2011-05-19 2018-05-22 Intouch Technologies, Inc. Enhanced diagnostics for a telepresence robot
US9986578B2 (en) 2015-12-04 2018-05-29 Time Warner Cable Enterprises Llc Apparatus and methods for selective data network access
US9984408B1 (en) * 2012-05-30 2018-05-29 Amazon Technologies, Inc. Method, medium, and system for live video cooperative shopping
US9984357B2 (en) * 2013-11-11 2018-05-29 International Business Machines Corporation Contextual searching via a mobile computing device
US9996909B2 (en) * 2012-08-30 2018-06-12 Rakuten, Inc. Clothing image processing device, clothing image display method and program
US9996981B1 (en) * 2016-03-07 2018-06-12 Bao Tran Augmented reality system
US9997069B2 (en) 2012-06-05 2018-06-12 Apple Inc. Context-aware voice guidance
US10002208B2 (en) 2014-05-13 2018-06-19 Atheer, Inc. Method for interactive catalog for 3D objects within the 2D environment
US10006505B2 (en) 2012-06-05 2018-06-26 Apple Inc. Rendering road signs during navigation
US10012505B2 (en) * 2016-11-11 2018-07-03 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable system for providing walking directions
US10018478B2 (en) 2012-06-05 2018-07-10 Apple Inc. Voice instructions during navigation
US20180197423A1 (en) * 2017-01-12 2018-07-12 American National Elt Yayincilik Egtim Ve Danismanlik Ltd. Sti. Education model utilizing a qr-code smart book
US10027611B2 (en) 2003-02-20 2018-07-17 Sonicwall Inc. Method and apparatus for classifying electronic messages
US10051078B2 (en) 2007-06-12 2018-08-14 Icontrol Networks, Inc. WiFi-to-serial encapsulation in systems
US20180232781A1 (en) * 2015-08-10 2018-08-16 Je Hyung Kim Advertisement system and advertisement method using 3d model
CN108431849A (en) * 2015-10-05 2018-08-21 陈仕东 System and method for tele-robotic apparel display, try-on, and shopping
US10062096B2 (en) 2013-03-01 2018-08-28 Vegas.Com, Llc System and method for listing items for purchase based on revenue per impressions
US10062062B1 (en) 2006-05-25 2018-08-28 Jbshbm, Llc Automated teller machine (ATM) providing money for loyalty points
US10062245B2 (en) 2005-03-16 2018-08-28 Icontrol Networks, Inc. Cross-client sensor user interface in an integrated security network
US10062273B2 (en) 2010-09-28 2018-08-28 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US10068276B2 (en) 2013-12-05 2018-09-04 Walmart Apollo, Llc System and method for coupling a mobile device and point of sale device to transmit mobile shopping cart and provide shopping recommendations
US10075446B2 (en) 2008-06-26 2018-09-11 Experian Marketing Solutions, Inc. Systems and methods for providing an integrated identifier
US10078958B2 (en) 2010-12-17 2018-09-18 Icontrol Networks, Inc. Method and system for logging security event data
US10078867B1 (en) 2014-01-10 2018-09-18 Wells Fargo Bank, N.A. Augmented reality virtual banker
US10079839B1 (en) 2007-06-12 2018-09-18 Icontrol Networks, Inc. Activation of gateway device
US10083411B2 (en) 2012-11-15 2018-09-25 Impel It! Inc. Methods and systems for the sale of consumer services
US10091014B2 (en) 2005-03-16 2018-10-02 Icontrol Networks, Inc. Integrated security network with security alarm signaling system
US10095980B1 (en) 2011-04-29 2018-10-09 Google Llc Moderation of user-generated content
US10102536B1 (en) 2013-11-15 2018-10-16 Experian Information Solutions, Inc. Micro-geographic aggregation system
US10102513B2 (en) * 2014-07-31 2018-10-16 Walmart Apollo, Llc Integrated online and in-store shopping experience
US10102570B1 (en) 2013-03-14 2018-10-16 Consumerinfo.Com, Inc. Account vulnerability alerts
US10102591B2 (en) 2011-01-21 2018-10-16 Livingsocial, Inc. Systems and methods to implement point of sale (POS) terminals, process orders and manage order fulfillment
US10114865B2 (en) 2011-09-09 2018-10-30 Microsoft Technology Licensing, Llc Tile cache
US10116676B2 (en) 2015-02-13 2018-10-30 Time Warner Cable Enterprises Llc Apparatus and methods for data collection, analysis and service modification based on online activity
US20180315117A1 (en) * 2017-04-26 2018-11-01 David Lynton Jephcott On-Line Retail
US10127801B2 (en) 2005-03-16 2018-11-13 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
CN108830783A (en) * 2018-05-31 2018-11-16 北京市商汤科技开发有限公司 Image processing method, device, and computer storage medium
US10133916B2 (en) 2016-09-07 2018-11-20 Steven M. Gottlieb Image and identity validation in video chat events
US10142392B2 (en) 2007-01-24 2018-11-27 Icontrol Networks, Inc. Methods and systems for improved system performance
US10148623B2 (en) 2010-11-12 2018-12-04 Time Warner Cable Enterprises Llc Apparatus and methods ensuring data privacy in a content distribution network
CN108960005A (en) * 2017-05-19 2018-12-07 内蒙古大学 Method and system for establishing and displaying visual labels of objects in an intelligent visual Internet of Things
US10156831B2 (en) 2004-03-16 2018-12-18 Icontrol Networks, Inc. Automation system with mobile interface
US10156959B2 (en) 2005-03-16 2018-12-18 Icontrol Networks, Inc. Cross-client sensor user interface in an integrated security network
CN109074586A (en) * 2016-03-29 2018-12-21 飞力凯网路股份有限公司 Terminal device, communication method, settlement processing device, settlement method, and settlement system
US10164858B2 (en) 2016-06-15 2018-12-25 Time Warner Cable Enterprises Llc Apparatus and methods for monitoring and diagnosing a wireless network
US10163118B2 (en) * 2015-02-18 2018-12-25 Adobe Systems Incorporated Method and apparatus for associating user engagement data received from a user with portions of a webpage visited by the user
US20180374128A1 (en) * 2017-06-23 2018-12-27 Perfect365 Technology Company Ltd. Method and system for a styling platform
US10169782B2 (en) * 2014-11-13 2019-01-01 Adobe Systems Incorporated Targeting ads engaged by a user to related users
US10169761B1 (en) 2013-03-15 2019-01-01 ConsumerInfo.com Inc. Adjustment of knowledge-based authentication
US10176233B1 (en) 2011-07-08 2019-01-08 Consumerinfo.Com, Inc. Lifescore
US10176508B2 (en) * 2015-12-31 2019-01-08 Walmart Apollo, Llc System, method, and non-transitory computer-readable storage media for evaluating search results for online grocery personalization
US10176633B2 (en) 2012-06-05 2019-01-08 Apple Inc. Integrated mapping and navigation application
US10178072B2 (en) 2004-07-20 2019-01-08 Time Warner Cable Enterprises Llc Technique for securely communicating and storing programming material in a trusted domain
US10178435B1 (en) 2009-10-20 2019-01-08 Time Warner Cable Enterprises Llc Methods and apparatus for enabling media functionality in a content delivery network
US10180572B2 (en) 2010-02-28 2019-01-15 Microsoft Technology Licensing, Llc AR glasses with event and user action control of external applications
US10185776B2 (en) * 2013-10-06 2019-01-22 Shocase, Inc. System and method for dynamically controlled rankings and social network privacy settings
US20190026013A1 (en) * 2011-12-15 2019-01-24 Modiface Inc. Method and system for interactive cosmetic enhancements interface
US10200504B2 (en) 2007-06-12 2019-02-05 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US10204375B2 (en) 2014-12-01 2019-02-12 Ebay Inc. Digital wardrobe using simulated forces on garment models
US10204366B2 (en) 2011-09-29 2019-02-12 Electronic Commodities Exchange Apparatus, article of manufacture and methods for customized design of a jewelry item
US10204086B1 (en) 2011-03-16 2019-02-12 Google Llc Document processing service for displaying comments included in messages
US20190051057A1 (en) * 2017-08-08 2019-02-14 Reald Spark, Llc Adjusting a digital representation of a head region
US10217031B2 (en) * 2016-10-13 2019-02-26 International Business Machines Corporation Identifying complimentary physical components to known physical components
US10218652B2 (en) 2014-08-08 2019-02-26 Mastercard International Incorporated Systems and methods for integrating a chat function into an e-reader application
US20190073798A1 (en) * 2016-04-03 2019-03-07 Eliza Yingzi Du Photorealistic human holographic augmented reality communication with interactive control in real-time using a cluster of servers
US20190082211A1 (en) * 2016-02-10 2019-03-14 Nitin Vats Producing realistic body movement using body images
US10237237B2 (en) 2007-06-12 2019-03-19 Icontrol Networks, Inc. Communication protocols in integrated systems
US10237081B1 (en) * 2009-12-23 2019-03-19 8X8, Inc. Web-enabled conferencing and meeting implementations with flexible user calling and content sharing features
US10235663B2 (en) * 2013-11-06 2019-03-19 Tencent Technology (Shenzhen) Company Limited Method, system and server system of payment based on a conversation group
US10242068B1 (en) * 2013-12-31 2019-03-26 Massachusetts Mutual Life Insurance Company Methods and systems for ranking leads based on given characteristics
US10242351B1 (en) 2014-05-07 2019-03-26 Square, Inc. Digital wallet for groups
US10248195B2 (en) 2010-04-19 2019-04-02 The Nielsen Company (Us), Llc. Short imagery task (SIT) research method
US10254942B2 (en) 2014-07-31 2019-04-09 Microsoft Technology Licensing, Llc Adaptive sizing and positioning of application windows
US10257355B1 (en) 2017-08-29 2019-04-09 Massachusetts Mutual Life Insurance Company System and method for managing customer call-backs
US10254955B2 (en) 2011-09-10 2019-04-09 Microsoft Technology Licensing, Llc Progressively indicating new content in an application-selectable user interface
US10255598B1 (en) 2012-12-06 2019-04-09 Consumerinfo.Com, Inc. Credit card account data extraction
US10262364B2 (en) 2007-12-14 2019-04-16 Consumerinfo.Com, Inc. Card registry systems and methods
US10262362B1 (en) 2014-02-14 2019-04-16 Experian Information Solutions, Inc. Automatic generation of code for attributes
US10269036B2 (en) 2009-10-29 2019-04-23 The Nielsen Company (Us), Llc Analysis of controlled and automatic attention for introduction of stimulus material
US10271010B2 (en) 2013-10-31 2019-04-23 Shindig, Inc. Systems and methods for controlling the display of content
US20190130082A1 (en) * 2017-10-26 2019-05-02 Motorola Mobility Llc Authentication Methods and Devices for Allowing Access to Private Data
US20190147248A1 (en) * 2016-06-15 2019-05-16 International Business Machines Corporation AUGMENTED VIDEO ANALYTICS FOR TESTING INTERNET OF THINGS (IoT) DEVICES
US10296962B2 (en) 2012-02-13 2019-05-21 International Business Machines Corporation Collaborative shopping across multiple shopping channels using shared virtual shopping carts
US10313480B2 (en) 2017-06-22 2019-06-04 Bank Of America Corporation Data transmission between networked resources
US10313303B2 (en) 2007-06-12 2019-06-04 Icontrol Networks, Inc. Forming a security network including integrated security system components and network devices
US10310616B2 (en) 2015-03-31 2019-06-04 Ebay Inc. Modification of three-dimensional garments using gestures
US10318104B2 (en) 2012-06-05 2019-06-11 Apple Inc. Navigation application with adaptive instruction text
US10325314B1 (en) 2013-11-15 2019-06-18 Consumerinfo.Com, Inc. Payment reporting systems
US20190188449A1 (en) * 2016-10-28 2019-06-20 Boe Technology Group Co., Ltd. Clothes positioning device and method
US10332176B2 (en) 2014-08-28 2019-06-25 Ebay Inc. Methods and systems for virtual fitting rooms or hybrid stores
US10339791B2 (en) 2007-06-12 2019-07-02 Icontrol Networks, Inc. Security network integrated with premise security system
US10343283B2 (en) 2010-05-24 2019-07-09 Intouch Technologies, Inc. Telepresence robot system that can be accessed by a cellular phone
US10348575B2 (en) 2013-06-27 2019-07-09 Icontrol Networks, Inc. Control system user interface
US10353566B2 (en) 2011-09-09 2019-07-16 Microsoft Technology Licensing, Llc Semantic zoom animations
US10354310B2 (en) 2013-05-10 2019-07-16 Dell Products L.P. Mobile application enabling product discovery and obtaining feedback from network
US10354311B2 (en) 2014-10-07 2019-07-16 Comenity Llc Determining preferences of an ensemble of items
CN110034998A (en) * 2017-11-07 2019-07-19 奥誓公司 Computer system and method for controlling electronic messages and their responses after transmission
US10365810B2 (en) 2007-06-12 2019-07-30 Icontrol Networks, Inc. Control system user interface
US10368255B2 (en) 2017-07-25 2019-07-30 Time Warner Cable Enterprises Llc Methods and apparatus for client-based dynamic control of connections to co-existing radio access networks
US10366439B2 (en) 2013-12-27 2019-07-30 Ebay Inc. Regional item recommendations
CN110069699A (en) * 2018-07-27 2019-07-30 阿里巴巴集团控股有限公司 Ranking model training method and device
US10373240B1 (en) 2014-04-25 2019-08-06 Csidentity Corporation Systems, methods and computer-program products for eligibility verification
US10375375B2 (en) * 2017-05-15 2019-08-06 Lg Electronics Inc. Method of providing fixed region information or offset region information for subtitle in virtual reality system and device for controlling the same
US10373464B2 (en) 2016-07-07 2019-08-06 Walmart Apollo, Llc Apparatus and method for updating partiality vectors based on monitoring of person and his or her home
US10382452B1 (en) 2007-06-12 2019-08-13 Icontrol Networks, Inc. Communication protocols in integrated systems
US10387845B2 (en) * 2015-07-10 2019-08-20 Bank Of America Corporation System for facilitating appointment calendaring based on perceived customer requirements
US10387846B2 (en) * 2015-07-10 2019-08-20 Bank Of America Corporation System for affecting appointment calendaring on a mobile device based on dependencies
US10389736B2 (en) 2007-06-12 2019-08-20 Icontrol Networks, Inc. Communication protocols in integrated systems
US10394834B1 (en) * 2013-12-31 2019-08-27 Massachusetts Mutual Life Insurance Company Methods and systems for ranking leads based on given characteristics
US10402798B1 (en) 2014-05-11 2019-09-03 Square, Inc. Open tab transactions
US10404758B2 (en) 2016-02-26 2019-09-03 Time Warner Cable Enterprises Llc Apparatus and methods for centralized message exchange in a user premises device
US10402485B2 (en) 2011-05-06 2019-09-03 David H. Sitrick Systems and methodologies providing controlled collaboration among a plurality of users
US20190272679A1 (en) * 2018-03-01 2019-09-05 Yuliya Brodsky Cloud-based garment design system
WO2019167061A1 (en) * 2018-02-27 2019-09-06 Soul Vision Creations Private Limited Virtual representation creation of user for fit and style of apparel and accessories
US10417686B2 (en) 2011-09-29 2019-09-17 Electronic Commodities Exchange Apparatus, article of manufacture, and methods for recommending a jewelry item
US10423309B2 (en) 2007-06-12 2019-09-24 Icontrol Networks, Inc. Device integration framework
US10423220B2 (en) 2014-08-08 2019-09-24 Kabushiki Kaisha Toshiba Virtual try-on apparatus, virtual try-on method, and computer program product
US10432990B2 (en) 2001-09-20 2019-10-01 Time Warner Cable Enterprises Llc Apparatus and methods for carrier allocation in a communications network
US10432603B2 (en) 2014-09-29 2019-10-01 Amazon Technologies, Inc. Access to documents in a document management and collaboration system
US10430388B1 (en) 2011-10-17 2019-10-01 Google Llc Systems and methods for incremental loading of collaboratively generated presentations
US10430817B2 (en) 2016-04-15 2019-10-01 Walmart Apollo, Llc Partiality vector refinement systems and methods through sample probing
US10445608B2 (en) * 2017-10-25 2019-10-15 Motorola Mobility Llc Identifying object representations in image data
US10445414B1 (en) 2011-11-16 2019-10-15 Google Llc Systems and methods for collaborative document editing
US10445775B2 (en) * 2010-08-27 2019-10-15 Oath Inc. Social aggregation communications
US10453061B2 (en) 2018-03-01 2019-10-22 Capital One Services, Llc Network of trust
US10460085B2 (en) 2008-03-13 2019-10-29 Mattel, Inc. Tablet computer
US20190332864A1 (en) * 2018-04-27 2019-10-31 Microsoft Technology Licensing, Llc Context-awareness
US10467677B2 (en) 2011-09-28 2019-11-05 Nara Logics, Inc. Systems and methods for providing recommendations based on collaborative and/or content-based nodal interrelationships
US20190342698A1 (en) * 2018-05-07 2019-11-07 Bayerische Motoren Werke Aktiengesellschaft Method and System for Modeling User and Location
US10475113B2 (en) 2014-12-23 2019-11-12 Ebay Inc. Method system and medium for generating virtual contexts from three dimensional models
US10471588B2 (en) 2008-04-14 2019-11-12 Intouch Technologies, Inc. Robotic based health care system
US10477349B2 (en) 2018-02-13 2019-11-12 Charter Communications Operating, Llc Apparatus and methods for device location determination
US10481771B1 (en) 2011-10-17 2019-11-19 Google Llc Systems and methods for controlling the display of online documents
US10492034B2 (en) 2016-03-07 2019-11-26 Time Warner Cable Enterprises Llc Apparatus and methods for dynamic open-access networks
US10498830B2 (en) 2007-06-12 2019-12-03 Icontrol Networks, Inc. Wi-Fi-to-serial encapsulation in systems
US10504251B1 (en) * 2017-12-13 2019-12-10 A9.Com, Inc. Determining a visual hull of an object
US10504384B1 (en) * 2018-10-12 2019-12-10 Haier Us Appliance Solutions, Inc. Augmented reality user engagement system
US10511692B2 (en) 2017-06-22 2019-12-17 Bank Of America Corporation Data transmission to a networked resource based on contextual information
US10510054B1 (en) 2013-12-30 2019-12-17 Wells Fargo Bank, N.A. Augmented reality enhancements for financial activities
US10523689B2 (en) 2007-06-12 2019-12-31 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US10524165B2 (en) 2017-06-22 2019-12-31 Bank Of America Corporation Dynamic utilization of alternative resources based on token association
US10522026B2 (en) 2008-08-11 2019-12-31 Icontrol Networks, Inc. Automation system user interface with three-dimensional display
US10530841B2 (en) 2017-10-03 2020-01-07 The Toronto-Dominion Bank System and method for transferring value between database records
US10530839B2 (en) 2008-08-11 2020-01-07 Icontrol Networks, Inc. Integrated cloud system with lightweight gateway for premises automation
US10539787B2 (en) 2010-02-28 2020-01-21 Microsoft Technology Licensing, Llc Head-worn adaptive display
US10540404B1 (en) 2014-02-07 2020-01-21 Amazon Technologies, Inc. Forming a document collection in a document management and collaboration system
US10542237B2 (en) 2008-11-24 2020-01-21 Shindig, Inc. Systems and methods for facilitating communications amongst multiple users
US20200043200A1 (en) * 2016-03-25 2020-02-06 Ebay Inc. Publication modification using body coordinates
US20200042160A1 (en) * 2018-06-18 2020-02-06 Alessandro Gabbi System and Method for Providing Virtual-Reality Based Interactive Archives for Therapeutic Interventions, Interactions and Support
US10559019B1 (en) * 2011-07-19 2020-02-11 Ken Beauvais System for centralized E-commerce overhaul
US10559193B2 (en) 2002-02-01 2020-02-11 Comcast Cable Communications, Llc Premises management systems
US10560772B2 (en) 2013-07-23 2020-02-11 Time Warner Cable Enterprises Llc Apparatus and methods for selective data network access
US10580055B2 (en) 2016-10-13 2020-03-03 International Business Machines Corporation Identifying physical tools to manipulate physical components based on analyzing digital images of the physical components
US10579250B2 (en) 2011-09-01 2020-03-03 Microsoft Technology Licensing, Llc Arranging tiles
US10588175B1 (en) 2018-10-24 2020-03-10 Capital One Services, Llc Network of trust with blockchain
US10592959B2 (en) 2016-04-15 2020-03-17 Walmart Apollo, Llc Systems and methods for facilitating shopping in a physical retail facility
US10592080B2 (en) 2014-07-31 2020-03-17 Microsoft Technology Licensing, Llc Assisted presentation of application windows
US10599753B1 (en) 2013-11-11 2020-03-24 Amazon Technologies, Inc. Document version control in collaborative environment
US10602231B2 (en) 2009-08-06 2020-03-24 Time Warner Cable Enterprises Llc Methods and apparatus for local channel insertion in an all-digital content distribution network
US10600100B2 (en) 2016-09-07 2020-03-24 Walmart Apollo, Llc Apparatus and method for providing item interaction with a virtual store
US10616075B2 (en) 2007-06-12 2020-04-07 Icontrol Networks, Inc. Communication protocols in integrated systems
US10614504B2 (en) 2016-04-15 2020-04-07 Walmart Apollo, Llc Systems and methods for providing content-based product recommendations
US10614921B2 (en) * 2016-05-24 2020-04-07 Cal-Comp Big Data, Inc. Personalized skin diagnosis and skincare
US10621657B2 (en) 2008-11-05 2020-04-14 Consumerinfo.Com, Inc. Systems and methods of credit information reporting
US20200128400A1 (en) * 2018-09-27 2020-04-23 Palo Alto Networks, Inc. Service-based security per user location in mobile networks
US10638361B2 (en) 2017-06-06 2020-04-28 Charter Communications Operating, Llc Methods and apparatus for dynamic control of connections to co-existing radio access networks
US20200134600A1 (en) * 2018-10-24 2020-04-30 Capital One Services, Llc Network of trust for bill splitting
US10645347B2 (en) 2013-08-09 2020-05-05 Icn Acquisition, Llc System, method and apparatus for remote monitoring
US10645547B2 (en) 2017-06-02 2020-05-05 Charter Communications Operating, Llc Apparatus and methods for providing wireless service in a venue
JP2020071884A (en) * 2018-10-31 2020-05-07 株式会社sole Information processor
US10653962B2 (en) 2014-08-01 2020-05-19 Ebay Inc. Generating and utilizing digital avatar data for online marketplaces
US10657578B2 (en) 2014-10-31 2020-05-19 Walmart Apollo, Llc Order processing systems and methods
US10664936B2 (en) 2013-03-15 2020-05-26 Csidentity Corporation Authentication systems and methods for on-demand products
US10666523B2 (en) 2007-06-12 2020-05-26 Icontrol Networks, Inc. Communication protocols in integrated systems
US10671749B2 (en) 2018-09-05 2020-06-02 Consumerinfo.Com, Inc. Authenticated access and aggregation database platform
CN111222264A (en) * 2019-11-01 2020-06-02 长春英利汽车工业股份有限公司 Manufacturing method of composite continuous glass fiber reinforced front-end module
US20200175890A1 (en) * 2013-03-14 2020-06-04 Apple Inc. Device, method, and graphical user interface for a group reading environment
US20200175589A1 (en) * 2018-11-29 2020-06-04 Matrix Financial Technologies, Inc. System and Methodology for Collaborative Trading with Share and Follow Capabilities
US10678999B2 (en) 2010-04-12 2020-06-09 Google Llc Real-time collaboration in a hosted word processor
US10678956B2 (en) * 2018-06-25 2020-06-09 Dell Products, L.P. Keyboard for provisioning security credentials
US10678412B2 (en) 2014-07-31 2020-06-09 Microsoft Technology Licensing, Llc Dynamic joint dividers for application windows
US10685398B1 (en) 2013-04-23 2020-06-16 Consumerinfo.Com, Inc. Presenting credit score information
US10691877B1 (en) 2014-02-07 2020-06-23 Amazon Technologies, Inc. Homogenous insertion of interactions into documents
US10701127B2 (en) 2013-09-27 2020-06-30 Aibuy, Inc. Apparatus and method for supporting relationships associated with content provisioning
US10712811B2 (en) * 2017-12-12 2020-07-14 Facebook, Inc. Providing a digital model of a corresponding product in a camera feed
US10721087B2 (en) 2005-03-16 2020-07-21 Icontrol Networks, Inc. Method for networked touchscreen with integrated interfaces
CN111445283A (en) * 2020-03-25 2020-07-24 北京百度网讯科技有限公司 Digital human processing method and device based on interactive device and storage medium
US10726451B1 (en) 2012-05-02 2020-07-28 James E Plankey System and method for creating and managing multimedia sales promotions
US20200242686A1 (en) * 2013-02-07 2020-07-30 Crisalix S.A. 3D Platform For Aesthetic Simulation
US10742500B2 (en) * 2017-09-20 2020-08-11 Microsoft Technology Licensing, Llc Iteratively updating a collaboration site or template
US10748001B2 (en) 2018-04-27 2020-08-18 Microsoft Technology Licensing, Llc Context-awareness
US10747216B2 (en) 2007-02-28 2020-08-18 Icontrol Networks, Inc. Method and system for communicating with and controlling an alarm system from a remote server
US10765948B2 (en) 2017-12-22 2020-09-08 Activision Publishing, Inc. Video game content aggregation, normalization, and publication systems and methods
US10769739B2 (en) 2011-04-25 2020-09-08 Intouch Technologies, Inc. Systems and methods for management of information among medical providers and facilities
US10785319B2 (en) 2006-06-12 2020-09-22 Icontrol Networks, Inc. IP device discovery systems and methods
US10789526B2 (en) 2012-03-09 2020-09-29 Nara Logics, Inc. Method, system, and non-transitory computer-readable medium for constructing and applying synaptic networks
US10803478B2 (en) 2010-10-05 2020-10-13 Facebook, Inc. Providing social endorsements with online advertising
US10812971B2 (en) 2018-09-27 2020-10-20 Palo Alto Networks, Inc. Service-based security per data network name in mobile networks
US10808882B2 (en) 2010-05-26 2020-10-20 Intouch Technologies, Inc. Tele-robotic system with a robot face placed on a chair
US10825022B1 (en) * 2017-03-03 2020-11-03 Wells Fargo Bank, N.A. Systems and methods for purchases locked by video
US20200356883A1 (en) * 2011-03-22 2020-11-12 Nant Holdings Ip, Llc Distributed relationship reasoning engine for generating hypothesis about relations between aspects of objects in response to an inquiry
US10839302B2 (en) 2015-11-24 2020-11-17 The Research Foundation For The State University Of New York Approximate value iteration with complex returns by bounding
US10841660B2 (en) 2016-12-29 2020-11-17 Dressbot Inc. System and method for multi-user digital interactive experience
US10846562B2 (en) * 2018-01-12 2020-11-24 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for image matching
US10860100B2 (en) 2010-02-28 2020-12-08 Microsoft Technology Licensing, Llc AR glasses with predictive control of external device based on event input
US10868890B2 (en) 2011-11-22 2020-12-15 Trimble Navigation Limited 3D modeling system distributed between a client device web browser and a server
US10867128B2 (en) 2017-09-12 2020-12-15 Microsoft Technology Licensing, Llc Intelligently updating a collaboration site or template
US20200394699A1 (en) * 2019-06-13 2020-12-17 Knot Standard LLC Systems and/or methods for presenting dynamic content for articles of clothing
US10877953B2 (en) 2013-11-11 2020-12-29 Amazon Technologies, Inc. Processing service requests for non-transactional databases
US10875182B2 (en) 2008-03-20 2020-12-29 Teladoc Health, Inc. Remote presence system mounted to operating room hardware
CN112184356A (en) * 2019-07-03 2021-01-05 苹果公司 Guided retail experience
US10887771B2 (en) * 2013-03-11 2021-01-05 Time Warner Cable Enterprises Llc Access control, establishing trust in a wireless network
US20210027401A1 (en) * 2019-07-22 2021-01-28 Vmware, Inc. Processes and systems that determine sustainability of a virtual infrastructure of a distributed computing system
US10911234B2 (en) 2018-06-22 2021-02-02 Experian Information Solutions, Inc. System and method for a token gateway environment
US10915881B2 (en) 2017-01-27 2021-02-09 American Express Travel Related Services Company, Inc. Transaction account charge splitting
CN112365572A (en) * 2020-09-30 2021-02-12 深圳市为汉科技有限公司 Rendering method based on tessellation and related product thereof
US10924442B2 (en) 2019-03-05 2021-02-16 Capital One Services, Llc Conversation agent for collaborative search engine
US10938592B2 (en) * 2017-07-21 2021-03-02 Pearson Education, Inc. Systems and methods for automated platform-based algorithm monitoring
US10944796B2 (en) 2018-09-27 2021-03-09 Palo Alto Networks, Inc. Network slice-based security in mobile networks
US10956667B2 (en) 2013-01-07 2021-03-23 Google Llc Operational transformations proxy for thin clients
US10963735B2 (en) * 2013-04-11 2021-03-30 Digimarc Corporation Methods for object recognition and related arrangements
US10963434B1 (en) 2018-09-07 2021-03-30 Experian Information Solutions, Inc. Data architecture for supporting multiple search models
US10965727B2 (en) 2009-06-08 2021-03-30 Time Warner Cable Enterprises Llc Methods and apparatus for premises content distribution
US10963657B2 (en) * 2011-08-30 2021-03-30 Digimarc Corporation Methods and arrangements for identifying objects
US10979389B2 (en) 2004-03-16 2021-04-13 Icontrol Networks, Inc. Premises management configuration and control
US10981069B2 (en) 2008-03-07 2021-04-20 Activision Publishing, Inc. Methods and systems for determining the authenticity of copied objects in a virtual environment
USD916860S1 (en) 2017-09-26 2021-04-20 Amazon Technologies, Inc. Display system with a virtual reality graphical user interface
US10992764B1 (en) * 2018-12-11 2021-04-27 Amazon Technologies, Inc. Automatic user profiling using video streaming history
US10987015B2 (en) 2009-08-24 2021-04-27 Nielsen Consumer Llc Dry electrodes for electroencephalography
CN112751837A (en) * 2020-12-25 2021-05-04 苏州星舟知识产权代理有限公司 Open type synchronous online conference system
US10999254B2 (en) 2005-03-16 2021-05-04 Icontrol Networks, Inc. System for data routing in networks
US11003858B2 (en) * 2017-12-22 2021-05-11 Microsoft Technology Licensing, Llc AI system to determine actionable intent
US11017020B2 (en) * 2011-06-09 2021-05-25 MemoryWeb, LLC Method and apparatus for managing digital files
US11019077B2 (en) 2018-09-27 2021-05-25 Palo Alto Networks, Inc. Multi-access distributed edge security in mobile networks
US11032518B2 (en) 2005-07-20 2021-06-08 Time Warner Cable Enterprises Llc Method and apparatus for boundary-based network operation
US20210174422A1 (en) * 2019-12-04 2021-06-10 Lg Electronics Inc. Smart apparatus
US11042923B2 (en) 2011-09-29 2021-06-22 Electronic Commodities Exchange, L.P. Apparatus, article of manufacture and methods for recommending a jewelry item
US20210192074A1 (en) * 2019-12-19 2021-06-24 Capital One Services, Llc System and method for controlling access to account transaction information
US11055758B2 (en) 2014-09-30 2021-07-06 Ebay Inc. Garment size mapping
US11055356B2 (en) 2006-02-15 2021-07-06 Kurtis John Ritchey Mobile user borne brain activity data and surrounding environment data correlation system
WO2021138057A1 (en) * 2019-12-31 2021-07-08 Paypal, Inc. Dynamically rendered interface elements during online chat sessions
US11069112B2 (en) * 2017-11-17 2021-07-20 Sony Interactive Entertainment LLC Systems, methods, and devices for creating a spline-based video animation sequence
US11069093B2 (en) * 2019-04-26 2021-07-20 Adobe Inc. Generating contextualized image variants of multiple component images
US11076203B2 (en) 2013-03-12 2021-07-27 Time Warner Cable Enterprises Llc Methods and apparatus for providing and uploading content to personalized network storage
US11075899B2 (en) 2006-08-09 2021-07-27 Ravenwhite Security, Inc. Cloud authentication
CN113203984A (en) * 2021-04-25 2021-08-03 华中科技大学 Multi-client online collaborative positioning system
US11089122B2 (en) 2007-06-12 2021-08-10 Icontrol Networks, Inc. Controlling data routing among networks
US11100054B2 (en) 2018-10-09 2021-08-24 Ebay Inc. Digital image suitability determination to generate AR/VR digital content
US11107149B2 (en) * 2018-05-11 2021-08-31 Lemon Hat Collaborative list management
US11107105B1 (en) * 2013-02-23 2021-08-31 Mwe Live, Llc Systems and methods for merging a virtual world, live events and an entertainment channel
US11113950B2 (en) 2005-03-16 2021-09-07 Icontrol Networks, Inc. Gateway integrated with premises security system
US11113536B2 (en) * 2019-03-15 2021-09-07 Boe Technology Group Co., Ltd. Video identification method, video identification device, and storage medium
US11122316B2 (en) 2009-07-15 2021-09-14 Time Warner Cable Enterprises Llc Methods and apparatus for targeted secondary content insertion
US20210294940A1 (en) * 2019-10-07 2021-09-23 Conor Haas Dodd System, apparatus, and method for simulating the value of a product idea
US11132391B2 (en) 2010-03-29 2021-09-28 Ebay Inc. Finding products that are similar to a product selected from a plurality of products
US11138281B2 (en) * 2019-05-22 2021-10-05 Microsoft Technology Licensing, Llc System user attribute relevance based on activity
US11146637B2 (en) 2014-03-03 2021-10-12 Icontrol Networks, Inc. Media content management
US11151486B1 (en) 2013-12-30 2021-10-19 Massachusetts Mutual Life Insurance Company System and method for managing routing of leads
US11151617B2 (en) 2012-03-09 2021-10-19 Nara Logics, Inc. Systems and methods for providing recommendations based on collaborative and/or content-based nodal interrelationships
WO2021211875A1 (en) * 2020-04-15 2021-10-21 Tekion Corp Document sharing with annotations
US11159851B2 (en) 2012-09-14 2021-10-26 Time Warner Cable Enterprises Llc Apparatus and methods for providing enhanced or interactive features
US11154981B2 (en) 2010-02-04 2021-10-26 Teladoc Health, Inc. Robot user interface for telepresence robot system
US11157995B2 (en) 2010-08-06 2021-10-26 Dkr Consulting Llc System and method for generating and distributing embeddable electronic commerce stores
US11164362B1 (en) 2017-09-26 2021-11-02 Amazon Technologies, Inc. Virtual reality user interface generation
US11170419B1 (en) * 2016-08-26 2021-11-09 SharePay, Inc. Methods and systems for transaction division
US11176629B2 (en) * 2018-12-21 2021-11-16 FreightVerify, Inc. System and method for monitoring logistical locations and transit entities using a canonical model
US11176461B1 (en) 2017-08-29 2021-11-16 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US20210357468A1 (en) * 2020-05-15 2021-11-18 Baidu Online Network Technology (Beijing) Co., Ltd. Method for sorting geographic location points, method for training a sorting model, and corresponding apparatuses
US11182060B2 (en) 2004-03-16 2021-11-23 Icontrol Networks, Inc. Networked touchscreen with integrated interfaces
US11182634B2 (en) * 2019-02-05 2021-11-23 Disney Enterprises, Inc. Systems and methods for modifying labeled content
US11197050B2 (en) 2013-03-15 2021-12-07 Charter Communications Operating, Llc Methods and apparatus for client-based dynamic control of connections to co-existing radio access networks
EP2641539B1 (en) * 2012-03-21 2021-12-08 OneFID GmbH Method for determining the dimensions of a foot
US11201755B2 (en) 2004-03-16 2021-12-14 Icontrol Networks, Inc. Premises system management using status signal
CN113837138A (en) * 2021-09-30 2021-12-24 重庆紫光华山智安科技有限公司 Dressing monitoring method, system, medium and electronic terminal
US11212192B2 (en) 2007-06-12 2021-12-28 Icontrol Networks, Inc. Communication protocols in integrated systems
US11212593B2 (en) 2016-09-27 2021-12-28 Time Warner Cable Enterprises Llc Apparatus and methods for automated secondary content management in a digital network
US11218878B2 (en) 2007-06-12 2022-01-04 Icontrol Networks, Inc. Communication protocols in integrated systems
US11227315B2 (en) 2008-01-30 2022-01-18 Aibuy, Inc. Interactive product placement system and method therefor
US11227001B2 (en) 2017-01-31 2022-01-18 Experian Information Solutions, Inc. Massive scale heterogeneous data ingestion and user resolution
US11227008B2 (en) * 2016-08-10 2022-01-18 Zeekit Online Shopping Ltd. Method, system, and device of virtual dressing utilizing image processing, machine learning, and computer vision
US11232671B1 (en) * 2009-09-30 2022-01-25 Zynga Inc. Socially-based dynamic rewards in multiuser online games
US11237714B2 (en) 2007-06-12 2022-02-01 Icontrol Networks, Inc. Control system user interface
US11238656B1 (en) 2019-02-22 2022-02-01 Consumerinfo.Com, Inc. System and method for an augmented reality experience via an artificial intelligence bot
US11244545B2 (en) 2004-03-16 2022-02-08 Icontrol Networks, Inc. Cross-client sensor user interface in an integrated security network
US11244381B2 (en) 2018-08-21 2022-02-08 International Business Machines Corporation Collaborative virtual reality computing system
US11244345B2 (en) 2007-07-30 2022-02-08 Nielsen Consumer Llc Neuro-response stimulus and stimulus attribute resonance estimator
US11250098B2 (en) * 2013-09-13 2022-02-15 Reflektion, Inc. Creation and delivery of individually customized web pages
US11250465B2 (en) 2007-03-29 2022-02-15 Nielsen Consumer Llc Analysis of marketing and entertainment effectiveness using central nervous system, autonomic nervous system, and effector data
US11258625B2 (en) 2008-08-11 2022-02-22 Icontrol Networks, Inc. Mobile premises automation platform
US11258834B2 (en) * 2018-10-05 2022-02-22 Explain Everything, Inc. System and method for recording online collaboration
US11271986B2 (en) 2011-10-28 2022-03-08 Microsoft Technology Licensing, Llc Document sharing through browser
US11277465B2 (en) 2004-03-16 2022-03-15 Icontrol Networks, Inc. Generating risk profile using data of home monitoring and security system
US11281876B2 (en) 2011-08-30 2022-03-22 Digimarc Corporation Retail store with sensor-fusion enhancements
US20220101418A1 (en) * 2020-09-28 2022-03-31 Snap Inc. Providing augmented reality-based makeup product sets in a messaging system
US11297688B2 (en) 2018-03-22 2022-04-05 goTenna Inc. Mesh network deployment kit
US11310199B2 (en) 2004-03-16 2022-04-19 Icontrol Networks, Inc. Premises management configuration and control
US11316753B2 (en) 2007-06-12 2022-04-26 Icontrol Networks, Inc. Communication protocols in integrated systems
US11315179B1 (en) 2018-11-16 2022-04-26 Consumerinfo.Com, Inc. Methods and apparatuses for customized card recommendations
US11316958B2 (en) 2008-08-11 2022-04-26 Icontrol Networks, Inc. Virtual device systems and methods
US11336551B2 (en) 2010-11-11 2022-05-17 Time Warner Cable Enterprises Llc Apparatus and methods for identifying and characterizing latency in a content delivery network
US20220157020A1 (en) * 2020-11-16 2022-05-19 Clo Virtual Fashion Inc. Method and apparatus for online fitting
US11343380B2 (en) 2004-03-16 2022-05-24 Icontrol Networks, Inc. Premises system automation
US11341337B1 (en) * 2021-06-11 2022-05-24 Winter Chat Pty Ltd Semantic messaging collaboration system
US20220172173A1 (en) * 2019-03-18 2022-06-02 Obshchestvo S Ogranichennoi Otvetstvennostiu "Headhunter" Recommender system for staff recruitment using machine learning with multivariate data dimension reduction and staff recruitment method using machine learning with multivariate data dimension reduction
US11354728B2 (en) * 2019-03-24 2022-06-07 We.R Augmented Reality Cloud Ltd. System, device, and method of augmented reality based mapping of a venue and navigation within a venue
US11354377B2 (en) 2020-06-29 2022-06-07 Walmart Apollo, Llc Methods and apparatus for automatically providing item reviews and suggestions
US20220179419A1 (en) * 2020-12-04 2022-06-09 Mitsubishi Electric Research Laboratories, Inc. Method and System for Modelling and Control of Partially Measurable Systems
US11368327B2 (en) 2008-08-11 2022-06-21 Icontrol Networks, Inc. Integrated cloud system for premises automation
US11386408B2 (en) * 2019-11-01 2022-07-12 Intuit Inc. System and method for nearest neighbor-based bank account number validation
US11392659B2 (en) * 2019-02-28 2022-07-19 Adobe Inc. Utilizing machine learning models to generate experience driven search results based on digital canvas gesture inputs
US11389064B2 (en) 2018-04-27 2022-07-19 Teladoc Health, Inc. Telehealth cart that supports a removable tablet with seamless audio/video switching
US11398307B2 (en) 2006-06-15 2022-07-26 Teladoc Health, Inc. Remote controlled robot system that provides medical images
US11399153B2 (en) 2009-08-26 2022-07-26 Teladoc Health, Inc. Portable telepresence apparatus
US11405463B2 (en) 2014-03-03 2022-08-02 Icontrol Networks, Inc. Media content management
US11410394B2 (en) 2020-11-04 2022-08-09 West Texas Technology Partners, Inc. Method for interactive catalog for 3D objects within the 2D environment
US20220254338A1 (en) * 2010-01-18 2022-08-11 Apple Inc. Intelligent automated assistant
US11424980B2 (en) 2005-03-16 2022-08-23 Icontrol Networks, Inc. Forming a security network including integrated security system components
US11423756B2 (en) 2007-06-12 2022-08-23 Icontrol Networks, Inc. Communication protocols in integrated systems
US11423110B1 (en) * 2021-09-22 2022-08-23 Finvar Corporation Intelligent timeline and commercialization system with social networking features
US11451409B2 (en) 2005-03-16 2022-09-20 Icontrol Networks, Inc. Security network integrating security system and network devices
US20220327580A1 (en) * 2019-09-19 2022-10-13 Beijing Bytedance Network Technology Co., Ltd. Method and apparatus for interacting with image, and medium and electronic device
US11481788B2 (en) 2009-10-29 2022-10-25 Nielsen Consumer Llc Generating ratings predictions using neuro-response data
US11489812B2 (en) 2004-03-16 2022-11-01 Icontrol Networks, Inc. Forming a security network including integrated security system components and network devices
US11488198B2 (en) 2007-08-28 2022-11-01 Nielsen Consumer Llc Stimulus placement system using subject neuro-response measurements
US20220351158A1 (en) * 2013-03-01 2022-11-03 Toshiba Tec Kabushiki Kaisha Electronic receipt system, electronic receipt management server, and program therefor
US11494851B1 (en) 2021-06-11 2022-11-08 Winter Chat Pty Ltd. Messaging system and method for providing management views
US11494757B2 (en) 2018-10-24 2022-11-08 Capital One Services, Llc Remote commands using network of trust
US11496568B2 (en) 2005-03-16 2022-11-08 Icontrol Networks, Inc. Security system with networked touchscreen
US20220358905A1 (en) * 2021-05-05 2022-11-10 Deep Media Inc. Audio and video translator
US11501508B2 (en) * 2010-06-10 2022-11-15 Brown University Parameterized model of 2D articulated human shape
US11509771B1 (en) 2013-12-30 2022-11-22 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls
US11509866B2 (en) 2004-12-15 2022-11-22 Time Warner Cable Enterprises Llc Method and apparatus for multi-band distribution of digital content
US20220387895A1 (en) * 2021-06-02 2022-12-08 Yariv Glazer Method and System for Managing Virtual Personal Space
US11526931B2 (en) * 2017-03-16 2022-12-13 EyesMatch Ltd. Systems and methods for digital mirror
US11540148B2 (en) 2014-06-11 2022-12-27 Time Warner Cable Enterprises Llc Methods and apparatus for access point location
US11552845B1 (en) * 2013-03-29 2023-01-10 Wells Fargo Bank, N.A. Systems and methods for providing user preferences for a connected device
US11568236B2 (en) 2018-01-25 2023-01-31 The Research Foundation For The State University Of New York Framework and methods of diverse exploration for fast and safe policy improvement
US11574324B1 (en) 2021-09-22 2023-02-07 Finvar Corporation Logic extraction and application subsystem for intelligent timeline and commercialization system
US11582065B2 (en) 2007-06-12 2023-02-14 Icontrol Networks, Inc. Systems and methods for device communication
US20230050482A1 (en) * 2021-08-16 2023-02-16 Capital One Services, Llc System and methods for dynamically routing and rating customer service communications
US11601810B2 (en) 2007-06-12 2023-03-07 Icontrol Networks, Inc. Communication protocols in integrated systems
US11605116B2 (en) 2010-03-29 2023-03-14 Ebay Inc. Methods and systems for reducing item selection error in an e-commerce environment
US11611595B2 (en) 2011-05-06 2023-03-21 David H. Sitrick Systems and methodologies providing collaboration among a plurality of computing appliances, utilizing a plurality of areas of memory to store user input as associated with an associated computing appliance providing the input
US11615697B2 (en) 2005-03-16 2023-03-28 Icontrol Networks, Inc. Premise management systems and methods
US11616992B2 (en) 2010-04-23 2023-03-28 Time Warner Cable Enterprises Llc Apparatus and methods for dynamic secondary content and data insertion and delivery
CN115861488A (en) * 2022-12-22 2023-03-28 中国科学技术大学 High-resolution virtual outfit-change (virtual try-on) method, system, device and storage medium
US11636944B2 (en) 2017-08-25 2023-04-25 Teladoc Health, Inc. Connectivity infrastructure for a telehealth platform
US11646907B2 (en) 2007-06-12 2023-05-09 Icontrol Networks, Inc. Communication protocols in integrated systems
DE102021129282A1 (en) 2021-11-10 2023-05-11 EPLAN GmbH & Co. KG Flexible management of resources for multiple users
US11651414B1 (en) 2013-03-29 2023-05-16 Wells Fargo Bank, N.A. System and medium for managing lists using an information storage and communication system
US11657380B2 (en) 2017-02-06 2023-05-23 American Express Travel Related Services Company, Inc. Charge splitting across multiple payment systems
US11677577B2 (en) 2004-03-16 2023-06-13 Icontrol Networks, Inc. Premises system management using status signal
US11700142B2 (en) 2005-03-16 2023-07-11 Icontrol Networks, Inc. Security network integrating security system and network devices
US11706279B2 (en) 2007-01-24 2023-07-18 Icontrol Networks, Inc. Methods and systems for data communication
US11706045B2 (en) 2005-03-16 2023-07-18 Icontrol Networks, Inc. Modular electronic display platform
US11704681B2 (en) 2009-03-24 2023-07-18 Nielsen Consumer Llc Neurological profiles for market matching and stimulus presentation
US11712627B2 (en) 2019-11-08 2023-08-01 Activision Publishing, Inc. System and method for providing conditional access to virtual gaming items
US20230244724A1 (en) * 2022-02-01 2023-08-03 Jpmorgan Chase Bank, N.A. Method and system for automated public information discovery
US11727249B2 (en) 2011-09-28 2023-08-15 Nara Logics, Inc. Methods for constructing and applying synaptic networks
US11729255B2 (en) 2008-08-11 2023-08-15 Icontrol Networks, Inc. Integrated cloud system with lightweight gateway for premises automation
US11741681B2 (en) * 2012-12-10 2023-08-29 Nant Holdings Ip, Llc Interaction analysis systems and methods
US11743389B1 (en) 2013-12-30 2023-08-29 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls
US11742094B2 (en) 2017-07-25 2023-08-29 Teladoc Health, Inc. Modular telehealth cart with thermal imaging and touch screen user interface
US20230273714A1 (en) * 2022-02-25 2023-08-31 ShredMetrix LLC Systems And Methods For Visualizing Sporting Equipment
CN116703534A (en) * 2023-08-08 2023-09-05 申合信科技集团有限公司 Intelligent management method for data of electronic commerce orders
US11750414B2 (en) 2010-12-16 2023-09-05 Icontrol Networks, Inc. Bidirectional security sensor communication for a premises security system
US11758026B2 (en) 2008-08-11 2023-09-12 Icontrol Networks, Inc. Virtual device systems and methods
US11765221B2 (en) 2020-12-14 2023-09-19 The Western Union Company Systems and methods for adaptive security and cooperative multi-system operations with dynamic protocols
US11763304B1 (en) 2013-03-29 2023-09-19 Wells Fargo Bank, N.A. User and entity authentication through an information storage and communication system
US11792462B2 (en) 2014-05-29 2023-10-17 Time Warner Cable Enterprises Llc Apparatus and methods for recording, accessing, and delivering packetized content
US11792330B2 (en) 2005-03-16 2023-10-17 Icontrol Networks, Inc. Communication and automation in a premises management system
US11792036B2 (en) 2008-08-11 2023-10-17 Icontrol Networks, Inc. Mobile premises automation platform
US11798202B2 (en) 2020-09-28 2023-10-24 Snap Inc. Providing augmented reality-based makeup in a messaging system
US11811845B2 (en) 2004-03-16 2023-11-07 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US11816800B2 (en) 2019-07-03 2023-11-14 Apple Inc. Guided consumer experience
US11816323B2 (en) 2008-06-25 2023-11-14 Icontrol Networks, Inc. Automation system user interface
US11831462B2 (en) 2007-08-24 2023-11-28 Icontrol Networks, Inc. Controlling data routing in premises management systems
US11831794B1 (en) 2013-12-30 2023-11-28 Massachusetts Mutual Life Insurance Company System and method for managing routing of leads
US11850757B2 (en) 2009-01-29 2023-12-26 Teladoc Health, Inc. Documentation through a remote presence robot
WO2023249614A1 (en) * 2022-06-21 2023-12-28 Dxm, Inc. Manufacturing system for manufacturing articles of clothing and other goods
US11862302B2 (en) 2017-04-24 2024-01-02 Teladoc Health, Inc. Automated transcription and documentation of tele-health encounters
CN117392352A (en) * 2023-12-11 2024-01-12 南京市文化投资控股集团有限责任公司 Model modeling operation management system and method for the metaverse
US11877028B2 (en) 2018-12-04 2024-01-16 The Nielsen Company (Us), Llc Methods and apparatus to identify media presentations by analyzing network traffic
US11880377B1 (en) 2021-03-26 2024-01-23 Experian Information Solutions, Inc. Systems and methods for entity resolution
US11889159B2 (en) 2016-12-29 2024-01-30 Dressbot Inc. System and method for multi-user digital interactive experience
US20240037858A1 (en) * 2022-07-28 2024-02-01 Snap Inc. Virtual wardrobe ar experience
US11916870B2 (en) 2004-03-16 2024-02-27 Icontrol Networks, Inc. Gateway registry methods and systems
US11916928B2 (en) 2008-01-24 2024-02-27 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US11922472B1 (en) 2013-03-29 2024-03-05 Wells Fargo Bank, N.A. Systems and methods for transferring a gift using an information storage and communication system
US11935103B2 (en) 2021-12-29 2024-03-19 Ebay Inc. Methods and systems for reducing item selection error in an e-commerce environment

Families Citing this family (129)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10872322B2 (en) * 2008-03-21 2020-12-22 Dressbot, Inc. System and method for collaborative shopping, business and entertainment
US9053196B2 (en) 2008-05-09 2015-06-09 Commerce Studios Llc, Inc. Methods for interacting with and manipulating information and systems thereof
US7822648B2 (en) 2008-06-27 2010-10-26 eHaggle, LLC Methods for electronic commerce using aggregated consumer interest
US8321361B1 (en) * 2009-06-15 2012-11-27 Google Inc Featured items of distributed discussion collaboration
CA2684540A1 (en) * 2009-11-05 2011-05-05 Ibm Canada Limited - Ibm Canada Limitee Navigation through historical stored interactions associated with a multi-user view
US20110157218A1 (en) * 2009-12-29 2011-06-30 Ptucha Raymond W Method for interactive display
US9253447B2 (en) * 2009-12-29 2016-02-02 Kodak Alaris Inc. Method for group interactivity
US9319640B2 (en) * 2009-12-29 2016-04-19 Kodak Alaris Inc. Camera and display system interactivity
US20110202603A1 (en) * 2010-02-12 2011-08-18 Nokia Corporation Method and apparatus for providing object based media mixing
KR20120085476A (en) * 2011-01-24 2012-08-01 삼성전자주식회사 Method and apparatus for reproducing image, and computer-readable storage medium
US20120197753A1 (en) * 2011-01-28 2012-08-02 Etsy, Inc. Systems and methods for shopping in an electronic commerce environment
US8810598B2 (en) 2011-04-08 2014-08-19 Nant Holdings Ip, Llc Interference based augmented reality hosting platforms
US9727910B1 (en) 2011-04-29 2017-08-08 Intuit Inc. Methods, systems, and articles of manufacture for implementing an antecedent, location-based budget alert to a user
US9191615B1 (en) * 2011-05-02 2015-11-17 Needle, Inc. Chat window
US20130024770A1 (en) * 2011-07-21 2013-01-24 Cooler Master Co., Ltd. Apparatus and method capable of outputting spatial information of device component
US8776043B1 (en) 2011-09-29 2014-07-08 Amazon Technologies, Inc. Service image notifications
EP2786528A4 (en) * 2011-12-02 2015-07-29 Blackberry Ltd Methods and devices for configuring a web browser based on another party's profile
US9070099B2 (en) 2011-12-16 2015-06-30 Identive Group, Inc. Developing and executing workflow processes associated with data-encoded tags
US8909706B2 (en) * 2012-01-12 2014-12-09 Facebook, Inc. Social networking data augmented gaming kiosk
US9633385B1 (en) * 2012-01-30 2017-04-25 Intuit Inc. Financial management system categorization utilizing image or video acquired with mobile communication device
WO2013130136A1 (en) * 2012-02-29 2013-09-06 Identive Group, Inc. Systems and methods for providing an augmented reality experience
US20130297465A1 (en) * 2012-05-02 2013-11-07 James Plankey Software and method for selling products
US11284251B2 (en) 2012-06-11 2022-03-22 Samsung Electronics Co., Ltd. Mobile device and control method thereof
EP3379441B1 (en) 2012-06-11 2019-12-18 Samsung Electronics Co., Ltd. Mobile device and control method thereof
KR102071692B1 (en) * 2012-06-11 2020-01-31 삼성전자주식회사 Mobile device and control method thereof
US8861866B2 (en) * 2012-06-20 2014-10-14 Hewlett-Packard Development Company, L.P. Identifying a style of clothing based on an ascertained feature
US20140012924A1 (en) * 2012-07-06 2014-01-09 Research In Motion Limited System and Method for Providing Application Feedback
US9633125B1 (en) * 2012-08-10 2017-04-25 Dropbox, Inc. System, method, and computer program for enabling a user to synchronize, manage, and share folders across a plurality of client devices and a synchronization server
US20140074569A1 (en) * 2012-09-11 2014-03-13 First Data Corporation Systems and methods for facilitating loyalty and reward functionality in mobile commerce
US9578457B2 (en) * 2012-09-28 2017-02-21 Verizon Patent And Licensing Inc. Privacy-based device location proximity
US9418480B2 (en) * 2012-10-02 2016-08-16 Augmented Reality Lab LLC Systems and methods for 3D pose estimation
US8943412B2 (en) * 2012-11-12 2015-01-27 Intel Corporation Game-based selection system
US20140201023A1 (en) * 2013-01-11 2014-07-17 Xiaofan Tang System and Method for Virtual Fitting and Consumer Interaction
US9953359B2 (en) * 2013-01-29 2018-04-24 Wal-Mart Stores, Inc. Cooperative execution of an electronic shopping list
US10546352B2 (en) * 2013-03-14 2020-01-28 Facebook, Inc. Method for selectively advertising items in an image
US10521830B2 (en) * 2013-03-14 2019-12-31 Facebook, Inc. Method for displaying a product-related image to a user while shopping
US9818224B1 (en) * 2013-06-20 2017-11-14 Amazon Technologies, Inc. Augmented reality images based on color and depth information
CN104281577B (en) * 2013-07-02 2018-11-16 威盛电子股份有限公司 Sorting method for data files
US9558262B2 (en) 2013-07-02 2017-01-31 Via Technologies, Inc. Sorting method of data documents and display method for sorting landmark data
US9177410B2 (en) * 2013-08-09 2015-11-03 Ayla Mandel System and method for creating avatars or animated sequences using human body features extracted from a still image
WO2015031687A1 (en) * 2013-08-28 2015-03-05 Appareo Systems, Llc Interactive component ordering and servicing system and method
US9582516B2 (en) 2013-10-17 2017-02-28 Nant Holdings Ip, Llc Wide area augmented reality location-based services
WO2015061008A1 (en) * 2013-10-26 2015-04-30 Amazon Technologies, Inc. Unmanned aerial vehicle delivery system
WO2015077653A1 (en) * 2013-11-22 2015-05-28 Hair Construction, Inc. Networked style logistics
EP2881898A1 (en) * 2013-12-09 2015-06-10 Accenture Global Services Limited Virtual assistant interactivity platform
US10002374B2 (en) 2014-03-07 2018-06-19 International Business Machines Corporation Dynamic group formation for electronically collaborative group events
US9234764B2 (en) * 2014-05-20 2016-01-12 Honda Motor Co., Ltd. Navigation system initiating conversation with driver
US20160019626A1 (en) * 2014-07-21 2016-01-21 Thanh Pham Clothing Fitting System
CN106716473A (en) * 2014-08-08 2017-05-24 万事达卡国际股份有限公司 Systems and methods for managing group chats during ecommerce sessions
US10445798B2 (en) * 2014-09-12 2019-10-15 Onu, Llc Systems and computer-readable medium for configurable online 3D catalog
WO2016060637A1 (en) * 2014-10-13 2016-04-21 Kimberly-Clark Worldwide, Inc. Systems and methods for providing a 3-d shopping experience to online shopping environments
ES2765277T3 (en) 2014-12-22 2020-06-08 Reactive Reality Gmbh Method and system to generate garment model data
KR20160084151A (en) * 2015-01-05 2016-07-13 주식회사 모르페우스 Method, system and non-transitory computer-readable recording medium for providing face-based service
US9847998B2 (en) * 2015-05-21 2017-12-19 Go Daddy Operating Company, LLC System and method for delegation of permissions to a third party
WO2017053462A1 (en) 2015-09-23 2017-03-30 Integenx Inc. Systems and methods for live help
US10373383B1 (en) 2015-09-30 2019-08-06 Groupon, Inc. Interactive virtual reality system
WO2017079838A1 (en) * 2015-11-10 2017-05-18 Muhire Augustin System and method for offering automated promotions and special offers in real-time
US10771423B2 (en) * 2015-11-24 2020-09-08 Facebook, Inc. Systems and methods to control event based information
JP6338192B2 (en) * 2016-04-22 2018-06-06 Necプラットフォームズ株式会社 Information processing apparatus, information processing method, and program
US11108708B2 (en) 2016-06-06 2021-08-31 Global Tel*Link Corporation Personalized chatbots for inmates
ES2592354B1 (en) * 2016-06-29 2017-06-23 Rosa PÉREZ ARGEMÍ Computer application procedure to provide information on the provision of services.
US10565550B1 (en) * 2016-09-07 2020-02-18 Target Brands, Inc. Real time scanning of a retail store
CN107016420B (en) * 2016-12-08 2022-01-28 创新先进技术有限公司 Service processing method and device
US10783546B2 (en) 2017-05-17 2020-09-22 Blue Storm Media, Inc. Color and symbol coded display on a digital badge for communicating permission to approach and activate further digital content interaction
US20180211304A1 (en) 2017-01-23 2018-07-26 Tête-à-Tête, Inc. Systems, apparatuses, and methods for generating inventory recommendations
US10404804B2 (en) 2017-01-30 2019-09-03 Global Tel*Link Corporation System and method for personalized virtual reality experience in a controlled environment
US10740601B2 (en) 2017-04-10 2020-08-11 Pearson Education, Inc. Electronic handwriting analysis through adaptive machine-learning
US11164233B2 (en) * 2017-05-15 2021-11-02 Savitude, Inc. Computer system for filtering and matching garments with users
US10796484B2 (en) * 2017-06-14 2020-10-06 Anand Babu Chitavadigi System and method for interactive multimedia and multi-lingual guided tour/panorama tour
US10964423B2 (en) 2017-09-12 2021-03-30 AebeZe Labs System and method for labeling a therapeutic value to digital content
CN107666518B (en) * 2017-09-27 2023-03-03 百度在线网络技术(北京)有限公司 Information pushing method and device, terminal equipment and computer readable storage medium
US10426424B2 (en) 2017-11-21 2019-10-01 General Electric Company System and method for generating and performing imaging protocol simulations
CN108334536B (en) * 2017-11-30 2023-10-24 中国电子科技集团公司电子科学研究院 Information recommendation method, device and storage medium
US10896457B2 (en) * 2017-12-12 2021-01-19 Amazon Technologies, Inc. Synchronized audiovisual responses to user requests
US10885536B2 (en) * 2018-02-01 2021-01-05 Ebay Inc. Garnering interest on potential listing in a photo or video
US11093563B2 (en) * 2018-02-05 2021-08-17 Microsoft Technology Licensing, Llc Sharing measured values of physical space parameters
WO2019164741A1 (en) * 2018-02-26 2019-08-29 Seddi, Inc. Avatar matching in on-line shopping
CN108829690A (en) * 2018-04-03 2018-11-16 广州市宝比万像软件科技有限公司 Scenic-spot cultural-creation comprehensive service platform and management method
US20200219176A1 (en) * 2018-06-28 2020-07-09 Maria Ioana Marin Accessory selection and rendering system and method of use
US11210499B2 (en) * 2018-07-06 2021-12-28 Kepler Vision Technologies Bv Determining a social group to which customers belong from appearance and using artificial intelligence, machine learning, and computer vision, for estimating customer preferences and intent, and for improving customer services
CN110825214A (en) * 2018-08-08 2020-02-21 海南博树创造科技有限公司 AR intelligent dish ordering system
CN110825215A (en) * 2018-08-08 2020-02-21 海南博树创造科技有限公司 AR technology interactive projection system applied to catering field
US10929894B2 (en) * 2018-08-10 2021-02-23 At&T Intellectual Property I, L.P. System for delivery of XR ad programs
USD917527S1 (en) * 2018-10-23 2021-04-27 Yoox Net-A-Porter Group Spa Display screen with graphical user interface
CN109711867B (en) * 2018-12-07 2023-05-30 广州市诚毅科技软件开发有限公司 Shopping profile construction marketing method and system based on audience big data
CN109712068A (en) * 2018-12-21 2019-05-03 云南大学 Image style transfer and simulation method for gourd pyrography
WO2020168355A2 (en) * 2019-02-15 2020-08-20 Levi Strauss & Co. Anti-ozone treatment of base templates in laser finishing
US11137875B2 (en) * 2019-02-22 2021-10-05 Microsoft Technology Licensing, Llc Mixed reality intelligent tether for dynamic attention direction
CN109903090A (en) * 2019-02-26 2019-06-18 江苏品德网络科技有限公司 Big data management service method
WO2020227434A1 (en) 2019-05-07 2020-11-12 Cerebri AI Inc. Predictive, machine-learning, locale-aware computer models suitable for location- and trajectory-aware training sets
TWI706292B (en) * 2019-05-28 2020-10-01 醒吾學校財團法人醒吾科技大學 Virtual Theater Broadcasting System
US11903785B2 (en) * 2019-06-22 2024-02-20 Porchview, LLC Shade matching and localized laboratory aesthetics for restorative dentistry
US11620389B2 (en) 2019-06-24 2023-04-04 University Of Maryland Baltimore County Method and system for reducing false positives in static source code analysis reports using machine learning and classification techniques
US11348163B1 (en) * 2019-07-25 2022-05-31 Amazon Technologies, Inc. System for presenting simplified user interface
WO2021025879A2 (en) * 2019-08-07 2021-02-11 Size Stream Llc Mobile 3d crowd scanning methods and apparatus
USD916828S1 (en) 2019-08-16 2021-04-20 Fevo, Inc. Display screen with a graphical user interface
USD916826S1 (en) 2019-08-16 2021-04-20 Fevo, Inc. Display screen with a graphical user interface
USD916831S1 (en) 2019-08-16 2021-04-20 Fevo, Inc. Display screen with a graphical user interface
USD916829S1 (en) 2019-08-16 2021-04-20 Fevo, Inc. Display screen with a graphical user interface
USD916825S1 (en) 2019-08-16 2021-04-20 Fevo, Inc. Display screen with a graphical user interface
USD916832S1 (en) 2019-08-16 2021-04-20 Fevo, Inc. Display screen with an animated graphical user interface
USD916830S1 (en) 2019-08-16 2021-04-20 Fevo, Inc. Display screen with a graphical user interface
USD916827S1 (en) 2019-08-16 2021-04-20 Fevo, Inc. Display screen with a graphical user interface
JP7354750B2 (en) * 2019-10-10 2023-10-03 富士フイルムビジネスイノベーション株式会社 Information processing system
CN110910887B (en) * 2019-12-30 2022-06-28 思必驰科技股份有限公司 Voice wake-up method and device
CN111465021B (en) * 2020-04-01 2023-06-09 北京中亦安图科技股份有限公司 Graph-based crank call identification model construction method
WO2022003414A1 (en) 2020-06-30 2022-01-06 L'oreal Augmented reality smart drawer system and method
US11631073B2 (en) 2020-07-01 2023-04-18 Capital One Services, Llc Recommendation engine for bill splitting
US20220036378A1 (en) * 2020-07-29 2022-02-03 Salesforce.Com, Inc. Customer interaction systems and methods
WO2022096076A1 (en) * 2020-11-03 2022-05-12 Elmohsen Mohamed Virtual mall
US20220172278A1 (en) * 2020-12-02 2022-06-02 Blue Yellow Green Inc. Shared Shopping System
CN112907314A (en) * 2020-12-28 2021-06-04 桂林旅游学院 Support Vector Machine (SVM)-based e-commerce recommendation method
US11134217B1 (en) 2021-01-11 2021-09-28 Surendra Goel System that provides video conferencing with accent modification and multiple video overlaying
WO2022168118A1 (en) * 2021-02-06 2022-08-11 Sociograph Solutions Private Limited System and method to provide a virtual store-front
US11250112B1 (en) * 2021-02-24 2022-02-15 Shawn Joseph Graphical user interface and console management, modeling, and analysis system
US11748795B2 (en) 2021-03-11 2023-09-05 Dhana Inc. System and a method for providing an optimized online garment creation platform
US11790430B2 (en) 2021-03-15 2023-10-17 Tata Consultancy Services Limited Method and system for determining unified user intention from digital environment for plurality of strategies
US20220327608A1 (en) * 2021-04-12 2022-10-13 Snap Inc. Home based augmented reality shopping
CN113329231A (en) * 2021-04-20 2021-08-31 北京达佳互联信息技术有限公司 Resource distribution method, device, system, electronic equipment and storage medium
US11475091B1 (en) * 2021-04-29 2022-10-18 Shopify Inc. Session subscription for commerce events
US11854069B2 (en) * 2021-07-16 2023-12-26 Snap Inc. Personalized try-on ads
US11526909B1 (en) 2021-09-17 2022-12-13 Honda Motor Co., Ltd. Real-time targeting of advertisements across multiple platforms
US11556403B1 (en) 2021-10-19 2023-01-17 Bank Of America Corporation System and method for an application programming interface (API) service modification
WO2023069086A1 (en) * 2021-10-20 2023-04-27 Innopeak Technology, Inc. System and method for dynamic portrait relighting
CN113672817B (en) * 2021-10-21 2022-02-11 深圳市发掘科技有限公司 Menu recommendation method, system, storage medium and electronic device
CN113988801B (en) * 2021-10-27 2023-11-10 北京百度网讯科技有限公司 Office system, work task management method and device
WO2023139614A1 (en) * 2022-01-24 2023-07-27 Easyrewardz Software Services Private Limited Systems and method for parameter-based dynamic virtual store digitization
TR2022006455A2 (en) * 2022-04-21 2022-05-23 Ipek Ekin Ogras FASHION PRESENTATION APPLICATION SYSTEM AND WORKING METHOD
US20240037862A1 (en) * 2022-07-26 2024-02-01 Margot Osmundsen-Repp System and method for providing a platform for virtual augmented reality fitting of an item

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6901379B1 (en) * 2000-07-07 2005-05-31 4-D Networks, Inc. Online shopping with virtual modeling and peer review
WO2002013048A2 (en) 2000-08-03 2002-02-14 Prelude Systems, Inc. System and method for client-server communication
US7356507B2 (en) 2000-10-30 2008-04-08 Amazon.Com, Inc. Network based user-to-user payment service
US20020133247A1 (en) 2000-11-11 2002-09-19 Smith Robert D. System and method for seamlessly switching between media streams
US7356487B2 (en) * 2001-06-14 2008-04-08 Qurio Holdings, Inc. Efficient transportation of digital files in a peer-to-peer file delivery network
US8224700B2 (en) 2002-08-19 2012-07-17 Andrew Silver System and method for managing restaurant customer data elements
US20050261970A1 (en) 2004-05-21 2005-11-24 Wayport, Inc. Method for providing wireless services
US7647247B2 (en) * 2004-12-06 2010-01-12 International Business Machines Corporation Method and system to enhance web-based shopping collaborations
KR20070013048A (en) * 2005-07-25 2007-01-30 주식회사 팬택앤큐리텔 Common payment system of electronic commerce and method thereof
US7487116B2 (en) * 2005-12-01 2009-02-03 International Business Machines Corporation Consumer representation rendering with selected merchandise
JP4666167B2 (en) * 2006-04-28 2011-04-06 日本電気株式会社 Discount payment system, server, split payment determination method and program
US7761340B2 (en) 2006-11-06 2010-07-20 Dawson Yee Real-time federated auctions and purchasing
US8078515B2 (en) * 2007-05-04 2011-12-13 Michael Sasha John Systems and methods for facilitating electronic transactions and deterring fraud

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070150368A1 (en) * 2005-09-06 2007-06-28 Samir Arora On-line personalized content and merchandising environment
US20070143185A1 (en) * 2005-12-12 2007-06-21 Harmon Richard M Systems and Methods for Allocating a Consumer Access Right to a Live Event
US20080040474A1 (en) * 2006-08-11 2008-02-14 Mark Zuckerberg Systems and methods for providing dynamically selected media content to a user of an electronic device in a social network environment
US20090132341A1 (en) * 2007-11-20 2009-05-21 Theresa Klinger Method and System for Monetizing User-Generated Content

Cited By (1962)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10783528B2 (en) * 2000-08-24 2020-09-22 Facecake Marketing Technologies, Inc. Targeted marketing system and method
US20120218423A1 (en) * 2000-08-24 2012-08-30 Linda Smith Real-time virtual reflection
US20120221418A1 (en) * 2000-08-24 2012-08-30 Linda Smith Targeted Marketing System and Method
US11303944B2 (en) 2001-09-20 2022-04-12 Time Warner Cable Enterprises Llc Apparatus and methods for carrier allocation in a communications network
US10432990B2 (en) 2001-09-20 2019-10-01 Time Warner Cable Enterprises Llc Apparatus and methods for carrier allocation in a communications network
US10559193B2 (en) 2002-02-01 2020-02-11 Comcast Cable Communications, Llc Premises management systems
US9400589B1 (en) 2002-05-30 2016-07-26 Consumerinfo.Com, Inc. Circular rotational interface for display of consumer credit information
US9710852B1 (en) 2002-05-30 2017-07-18 Consumerinfo.Com, Inc. Credit report timeline user interface
US9849593B2 (en) 2002-07-25 2017-12-26 Intouch Technologies, Inc. Medical tele-robotic system with a master remote station with an arbitrator
US10315312B2 (en) 2002-07-25 2019-06-11 Intouch Technologies, Inc. Medical tele-robotic system with a master remote station with an arbitrator
US9489671B2 (en) * 2002-10-01 2016-11-08 Andrew H B Zhou Systems and methods for mobile application, wearable application, transactional messaging, calling, digital multimedia capture and payment transactions
US20160086161A1 (en) * 2002-10-01 2016-03-24 Andrew H. B. Zhou Systems and methods for mobile application, wearable application, transactional messaging, calling, digital multimedia capture and payment transactions
US20170103120A1 (en) * 2003-02-20 2017-04-13 Dell Software Inc. Using distinguishing properties to classify messages
US10042919B2 (en) * 2003-02-20 2018-08-07 Sonicwall Inc. Using distinguishing properties to classify messages
US10027611B2 (en) 2003-02-20 2018-07-17 Sonicwall Inc. Method and apparatus for classifying electronic messages
US10785176B2 (en) 2003-02-20 2020-09-22 Sonicwall Inc. Method and apparatus for classifying electronic messages
US9763581B2 (en) 2003-04-23 2017-09-19 P Tech, Llc Patient monitoring apparatus and method for orthosis and other devices
US9956690B2 (en) 2003-12-09 2018-05-01 Intouch Technologies, Inc. Protocol for a remotely controlled videoconferencing robot
US9375843B2 (en) 2003-12-09 2016-06-28 Intouch Technologies, Inc. Protocol for a remotely controlled videoconferencing robot
US9296107B2 (en) 2003-12-09 2016-03-29 Intouch Technologies, Inc. Protocol for a remotely controlled videoconferencing robot
US10882190B2 (en) 2003-12-09 2021-01-05 Teladoc Health, Inc. Protocol for a remotely controlled videoconferencing robot
US10691295B2 (en) 2004-03-16 2020-06-23 Icontrol Networks, Inc. User interface in a premises network
US10992784B2 (en) 2004-03-16 2021-04-27 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US11626006B2 (en) 2004-03-16 2023-04-11 Icontrol Networks, Inc. Management of a security system at a premises
US11893874B2 (en) 2004-03-16 2024-02-06 Icontrol Networks, Inc. Networked touchscreen with integrated interfaces
US11601397B2 (en) 2004-03-16 2023-03-07 Icontrol Networks, Inc. Premises management configuration and control
US10796557B2 (en) 2004-03-16 2020-10-06 Icontrol Networks, Inc. Automation system user interface with three-dimensional display
US11916870B2 (en) 2004-03-16 2024-02-27 Icontrol Networks, Inc. Gateway registry methods and systems
US11082395B2 (en) 2004-03-16 2021-08-03 Icontrol Networks, Inc. Premises management configuration and control
US11656667B2 (en) 2004-03-16 2023-05-23 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US11410531B2 (en) 2004-03-16 2022-08-09 Icontrol Networks, Inc. Automation system user interface with three-dimensional display
US11588787B2 (en) 2004-03-16 2023-02-21 Icontrol Networks, Inc. Premises management configuration and control
US10754304B2 (en) 2004-03-16 2020-08-25 Icontrol Networks, Inc. Automation system with mobile interface
US11043112B2 (en) 2004-03-16 2021-06-22 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US10735249B2 (en) 2004-03-16 2020-08-04 Icontrol Networks, Inc. Management of a security system at a premises
US10692356B2 (en) 2004-03-16 2020-06-23 Icontrol Networks, Inc. Control system user interface
US11378922B2 (en) 2004-03-16 2022-07-05 Icontrol Networks, Inc. Automation system with mobile interface
US11368429B2 (en) 2004-03-16 2022-06-21 Icontrol Networks, Inc. Premises management configuration and control
US11537186B2 (en) 2004-03-16 2022-12-27 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US11037433B2 (en) 2004-03-16 2021-06-15 Icontrol Networks, Inc. Management of a security system at a premises
US11343380B2 (en) 2004-03-16 2022-05-24 Icontrol Networks, Inc. Premises system automation
US11310199B2 (en) 2004-03-16 2022-04-19 Icontrol Networks, Inc. Premises management configuration and control
US11153266B2 (en) 2004-03-16 2021-10-19 Icontrol Networks, Inc. Gateway registry methods and systems
US11677577B2 (en) 2004-03-16 2023-06-13 Icontrol Networks, Inc. Premises system management using status signal
US10142166B2 (en) 2004-03-16 2018-11-27 Icontrol Networks, Inc. Takeover of security network
US11159484B2 (en) 2004-03-16 2021-10-26 Icontrol Networks, Inc. Forming a security network including integrated security system components and network devices
US11625008B2 (en) 2004-03-16 2023-04-11 Icontrol Networks, Inc. Premises management networking
US10156831B2 (en) 2004-03-16 2018-12-18 Icontrol Networks, Inc. Automation system with mobile interface
US11175793B2 (en) 2004-03-16 2021-11-16 Icontrol Networks, Inc. User interface in a premises network
US11182060B2 (en) 2004-03-16 2021-11-23 Icontrol Networks, Inc. Networked touchscreen with integrated interfaces
US11184322B2 (en) 2004-03-16 2021-11-23 Icontrol Networks, Inc. Communication protocols in integrated systems
US11782394B2 (en) 2004-03-16 2023-10-10 Icontrol Networks, Inc. Automation system with mobile interface
US11201755B2 (en) 2004-03-16 2021-12-14 Icontrol Networks, Inc. Premises system management using status signal
US11244545B2 (en) 2004-03-16 2022-02-08 Icontrol Networks, Inc. Cross-client sensor user interface in an integrated security network
US10890881B2 (en) 2004-03-16 2021-01-12 Icontrol Networks, Inc. Premises management networking
US11277465B2 (en) 2004-03-16 2022-03-15 Icontrol Networks, Inc. Generating risk profile using data of home monitoring and security system
US11489812B2 (en) 2004-03-16 2022-11-01 Icontrol Networks, Inc. Forming a security network including integrated security system components and network devices
US11757834B2 (en) 2004-03-16 2023-09-12 Icontrol Networks, Inc. Communication protocols in integrated systems
US11449012B2 (en) 2004-03-16 2022-09-20 Icontrol Networks, Inc. Premises management networking
US11810445B2 (en) 2004-03-16 2023-11-07 Icontrol Networks, Inc. Cross-client sensor user interface in an integrated security network
US10979389B2 (en) 2004-03-16 2021-04-13 Icontrol Networks, Inc. Premises management configuration and control
US10447491B2 (en) 2004-03-16 2019-10-15 Icontrol Networks, Inc. Premises system management using status signal
US11811845B2 (en) 2004-03-16 2023-11-07 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US8983174B2 (en) 2004-07-13 2015-03-17 Intouch Technologies, Inc. Mobile robot with a head-based movement mapping scheme
US10241507B2 (en) 2004-07-13 2019-03-26 Intouch Technologies, Inc. Mobile robot with a head-based movement mapping scheme
US9766624B2 (en) 2004-07-13 2017-09-19 Intouch Technologies, Inc. Mobile robot with a head-based movement mapping scheme
US10178072B2 (en) 2004-07-20 2019-01-08 Time Warner Cable Enterprises Llc Technique for securely communicating and storing programming material in a trusted domain
US9973798B2 (en) 2004-07-20 2018-05-15 Time Warner Cable Enterprises Llc Technique for securely communicating programming content
US11088999B2 (en) 2004-07-20 2021-08-10 Time Warner Cable Enterprises Llc Technique for securely communicating and storing programming material in a trusted domain
US10848806B2 (en) 2004-07-20 2020-11-24 Time Warner Cable Enterprises Llc Technique for securely communicating programming content
US9313530B2 (en) 2004-07-20 2016-04-12 Time Warner Cable Enterprises Llc Technique for securely communicating programming content
US11509866B2 (en) 2004-12-15 2022-11-22 Time Warner Cable Enterprises Llc Method and apparatus for multi-band distribution of digital content
US11367340B2 (en) 2005-03-16 2022-06-21 Icontrol Networks, Inc. Premise management systems and methods
US10091014B2 (en) 2005-03-16 2018-10-02 Icontrol Networks, Inc. Integrated security network with security alarm signaling system
US11615697B2 (en) 2005-03-16 2023-03-28 Icontrol Networks, Inc. Premise management systems and methods
US10380871B2 (en) 2005-03-16 2019-08-13 Icontrol Networks, Inc. Control system user interface
US10930136B2 (en) 2005-03-16 2021-02-23 Icontrol Networks, Inc. Premise management systems and methods
US11792330B2 (en) 2005-03-16 2023-10-17 Icontrol Networks, Inc. Communication and automation in a premises management system
US11706045B2 (en) 2005-03-16 2023-07-18 Icontrol Networks, Inc. Modular electronic display platform
US11700142B2 (en) 2005-03-16 2023-07-11 Icontrol Networks, Inc. Security network integrating security system and network devices
US10999254B2 (en) 2005-03-16 2021-05-04 Icontrol Networks, Inc. System for data routing in networks
US10127801B2 (en) 2005-03-16 2018-11-13 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US10721087B2 (en) 2005-03-16 2020-07-21 Icontrol Networks, Inc. Method for networked touchscreen with integrated interfaces
US11113950B2 (en) 2005-03-16 2021-09-07 Icontrol Networks, Inc. Gateway integrated with premises security system
US11824675B2 (en) 2005-03-16 2023-11-21 Icontrol Networks, Inc. Networked touchscreen with integrated interfaces
US10062245B2 (en) 2005-03-16 2018-08-28 Icontrol Networks, Inc. Cross-client sensor user interface in an integrated security network
US10156959B2 (en) 2005-03-16 2018-12-18 Icontrol Networks, Inc. Cross-client sensor user interface in an integrated security network
US11424980B2 (en) 2005-03-16 2022-08-23 Icontrol Networks, Inc. Forming a security network including integrated security system components
US10841381B2 (en) 2005-03-16 2020-11-17 Icontrol Networks, Inc. Security system with networked touchscreen
US9450776B2 (en) 2005-03-16 2016-09-20 Icontrol Networks, Inc. Forming a security network including integrated security system components
US11496568B2 (en) 2005-03-16 2022-11-08 Icontrol Networks, Inc. Security system with networked touchscreen
US11595364B2 (en) 2005-03-16 2023-02-28 Icontrol Networks, Inc. System for data routing in networks
US11451409B2 (en) 2005-03-16 2022-09-20 Icontrol Networks, Inc. Security network integrating security system and network devices
US8090084B2 (en) * 2005-06-30 2012-01-03 At&T Intellectual Property Ii, L.P. Automated call router for business directory using the world wide web
US20070005584A1 (en) * 2005-06-30 2007-01-04 At&T Corp. Automated call router for business directory using the world wide web
US11032518B2 (en) 2005-07-20 2021-06-08 Time Warner Cable Enterprises Llc Method and apparatus for boundary-based network operation
US9177081B2 (en) 2005-08-26 2015-11-03 Veveo, Inc. Method and system for processing ambiguous, multi-term search queries
US9665384B2 (en) 2005-08-30 2017-05-30 Microsoft Technology Licensing, Llc Aggregation of computing device settings
US10259119B2 (en) 2005-09-30 2019-04-16 Intouch Technologies, Inc. Multi-camera mobile teleconferencing platform
US9198728B2 (en) 2005-09-30 2015-12-01 Intouch Technologies, Inc. Multi-camera mobile teleconferencing platform
US20110010087A1 (en) * 2005-10-24 2011-01-13 CellTrak Technologies, Inc. Home Health Point-of-Care and Administration System
US8380542B2 (en) 2005-10-24 2013-02-19 CellTrak Technologies, Inc. System and method for facilitating outcome-based health care
US20100198608A1 (en) * 2005-10-24 2010-08-05 CellTrak Technologies, Inc. Home health point-of-care and administration system
US8019622B2 (en) 2005-10-24 2011-09-13 CellTrak Technologies, Inc. Home health point-of-care and administration system
US11055356B2 (en) 2006-02-15 2021-07-06 Kurtis John Ritchey Mobile user borne brain activity data and surrounding environment data correlation system
US8688746B2 (en) 2006-04-20 2014-04-01 Veveo, Inc. User interface methods and systems for selecting and presenting content based on user relationships
US10146840B2 (en) 2006-04-20 2018-12-04 Veveo, Inc. User interface methods and systems for selecting and presenting content based on user relationships
US9087109B2 (en) 2006-04-20 2015-07-21 Veveo, Inc. User interface methods and systems for selecting and presenting content based on user relationships
US8794518B1 (en) 2006-05-25 2014-08-05 Sean I. Mcghie Conversion of loyalty points for a financial institution to a different loyalty point program for services
US8684265B1 (en) 2006-05-25 2014-04-01 Sean I. Mcghie Rewards program website permitting conversion/transfer of non-negotiable credits to entity independent funds
US8763901B1 (en) 2006-05-25 2014-07-01 Sean I. Mcghie Cross marketing between an entity's loyalty point program and a different loyalty program of a commerce partner
US10062062B1 (en) 2006-05-25 2018-08-28 Jbshbm, Llc Automated teller machine (ATM) providing money for loyalty points
US9704174B1 (en) 2006-05-25 2017-07-11 Sean I. Mcghie Conversion of loyalty program points to commerce partner points per terms of a mutual agreement
US8973821B1 (en) 2006-05-25 2015-03-10 Sean I. Mcghie Conversion/transfer of non-negotiable credits to entity independent funds
US8668146B1 (en) 2006-05-25 2014-03-11 Sean I. Mcghie Rewards program with payment artifact permitting conversion/transfer of non-negotiable credits to entity independent funds
US8950669B1 (en) 2006-05-25 2015-02-10 Sean I. Mcghie Conversion of non-negotiable credits to entity independent funds
US8783563B1 (en) 2006-05-25 2014-07-22 Sean I. Mcghie Conversion of loyalty points for gaming to a different loyalty point program for services
US8789752B1 (en) 2006-05-25 2014-07-29 Sean I. Mcghie Conversion/transfer of in-game credits to entity independent or negotiable funds
US8944320B1 (en) 2006-05-25 2015-02-03 Sean I. Mcghie Conversion/transfer of non-negotiable credits to in-game funds for in-game purchases
US8833650B1 (en) 2006-05-25 2014-09-16 Sean I. Mcghie Online shopping sites for redeeming loyalty points
US10785319B2 (en) 2006-06-12 2020-09-22 Icontrol Networks, Inc. IP device discovery systems and methods
US9621408B2 (en) 2006-06-12 2017-04-11 Icontrol Networks, Inc. Gateway registry methods and systems
US10616244B2 (en) 2006-06-12 2020-04-07 Icontrol Networks, Inc. Activation of gateway device
US11418518B2 (en) 2006-06-12 2022-08-16 Icontrol Networks, Inc. Activation of gateway device
US11398307B2 (en) 2006-06-15 2022-07-26 Teladoc Health, Inc. Remote controlled robot system that provides medical images
US20080028302A1 (en) * 2006-07-31 2008-01-31 Steffen Meschkat Method and apparatus for incrementally updating a web page
US8844003B1 (en) 2006-08-09 2014-09-23 Ravenwhite Inc. Performing authentication
US10348720B2 (en) 2006-08-09 2019-07-09 Ravenwhite Inc. Cloud authentication
US11277413B1 (en) 2006-08-09 2022-03-15 Ravenwhite Security, Inc. Performing authentication
US10791121B1 (en) 2006-08-09 2020-09-29 Ravenwhite Security, Inc. Performing authentication
US11075899B2 (en) 2006-08-09 2021-07-27 Ravenwhite Security, Inc. Cloud authentication
US11381549B2 (en) 2006-10-20 2022-07-05 Time Warner Cable Enterprises Llc Downloadable security and protection methods and apparatus
US9923883B2 (en) 2006-10-20 2018-03-20 Time Warner Cable Enterprises Llc Downloadable security and protection methods and apparatus
US9313458B2 (en) 2006-10-20 2016-04-12 Time Warner Cable Enterprises Llc Downloadable security and protection methods and apparatus
US10362018B2 (en) 2006-10-20 2019-07-23 Time Warner Cable Enterprises Llc Downloadable security and protection methods and apparatus
US9742768B2 (en) 2006-11-01 2017-08-22 Time Warner Cable Enterprises Llc Methods and apparatus for premises content distribution
US10069836B2 (en) 2006-11-01 2018-09-04 Time Warner Cable Enterprises Llc Methods and apparatus for premises content distribution
US10404752B2 (en) 2007-01-24 2019-09-03 Time Warner Cable Enterprises Llc Apparatus and methods for provisioning in a download-enabled system
US11412027B2 (en) 2007-01-24 2022-08-09 Icontrol Networks, Inc. Methods and systems for data communication
US10225314B2 (en) 2007-01-24 2019-03-05 Icontrol Networks, Inc. Methods and systems for improved system performance
US11552999B2 (en) 2007-01-24 2023-01-10 Time Warner Cable Enterprises Llc Apparatus and methods for provisioning in a download-enabled system
US11706279B2 (en) 2007-01-24 2023-07-18 Icontrol Networks, Inc. Methods and systems for data communication
US9674224B2 (en) 2007-01-24 2017-06-06 Time Warner Cable Enterprises Llc Apparatus and methods for provisioning in a download-enabled system
US10142392B2 (en) 2007-01-24 2018-11-27 Icontrol Networks, Inc. Methods and systems for improved system performance
US11418572B2 (en) 2007-01-24 2022-08-16 Icontrol Networks, Inc. Methods and systems for improved system performance
US10657794B1 (en) 2007-02-28 2020-05-19 Icontrol Networks, Inc. Security, monitoring and automation controller access and use of legacy security control panel information
US10747216B2 (en) 2007-02-28 2020-08-18 Icontrol Networks, Inc. Method and system for communicating with and controlling an alarm system from a remote server
US11809174B2 (en) 2007-02-28 2023-11-07 Icontrol Networks, Inc. Method and system for managing communication connectivity
US11194320B2 (en) 2007-02-28 2021-12-07 Icontrol Networks, Inc. Method and system for managing communication connectivity
US9412248B1 (en) 2007-02-28 2016-08-09 Icontrol Networks, Inc. Security, monitoring and automation controller access and use of legacy security control panel information
US9195834B1 (en) 2007-03-19 2015-11-24 Ravenwhite Inc. Cloud authentication
US11790393B2 (en) 2007-03-29 2023-10-17 Nielsen Consumer Llc Analysis of marketing and entertainment effectiveness using central nervous system, autonomic nervous system, and effector data
US11250465B2 (en) 2007-03-29 2022-02-15 Nielsen Consumer Llc Analysis of marketing and entertainment effectiveness using central nervous system, autonomic nervous system, and effector data
US10437895B2 (en) 2007-03-30 2019-10-08 Consumerinfo.Com, Inc. Systems and methods for data verification
US9342783B1 (en) 2007-03-30 2016-05-17 Consumerinfo.Com, Inc. Systems and methods for data verification
US11308170B2 (en) 2007-03-30 2022-04-19 Consumerinfo.Com, Inc. Systems and methods for data verification
US9510065B2 (en) 2007-04-23 2016-11-29 Icontrol Networks, Inc. Method and system for automatically providing alternate network access for telecommunications
US10672254B2 (en) 2007-04-23 2020-06-02 Icontrol Networks, Inc. Method and system for providing alternate network access
US11132888B2 (en) 2007-04-23 2021-09-28 Icontrol Networks, Inc. Method and system for providing alternate network access
US10140840B2 (en) 2007-04-23 2018-11-27 Icontrol Networks, Inc. Method and system for providing alternate network access
US11663902B2 (en) 2007-04-23 2023-05-30 Icontrol Networks, Inc. Method and system for providing alternate network access
US10682763B2 (en) 2007-05-09 2020-06-16 Intouch Technologies, Inc. Robot system that operates through a network firewall
US9160783B2 (en) 2007-05-09 2015-10-13 Intouch Technologies, Inc. Robot system that operates through a network firewall
US10389736B2 (en) 2007-06-12 2019-08-20 Icontrol Networks, Inc. Communication protocols in integrated systems
US10666523B2 (en) 2007-06-12 2020-05-26 Icontrol Networks, Inc. Communication protocols in integrated systems
US9306809B2 (en) 2007-06-12 2016-04-05 Icontrol Networks, Inc. Security system with networked touchscreen
US11632308B2 (en) 2007-06-12 2023-04-18 Icontrol Networks, Inc. Communication protocols in integrated systems
US11625161B2 (en) 2007-06-12 2023-04-11 Icontrol Networks, Inc. Control system user interface
US11894986B2 (en) 2007-06-12 2024-02-06 Icontrol Networks, Inc. Communication protocols in integrated systems
US10523689B2 (en) 2007-06-12 2019-12-31 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US11611568B2 (en) 2007-06-12 2023-03-21 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US11601810B2 (en) 2007-06-12 2023-03-07 Icontrol Networks, Inc. Communication protocols in integrated systems
US10079839B1 (en) 2007-06-12 2018-09-18 Icontrol Networks, Inc. Activation of gateway device
US11582065B2 (en) 2007-06-12 2023-02-14 Icontrol Networks, Inc. Systems and methods for device communication
US9609003B1 (en) 2007-06-12 2017-03-28 Icontrol Networks, Inc. Generating risk profile using data of home monitoring and security system
US11212192B2 (en) 2007-06-12 2021-12-28 Icontrol Networks, Inc. Communication protocols in integrated systems
US10142394B2 (en) 2007-06-12 2018-11-27 Icontrol Networks, Inc. Generating risk profile using data of home monitoring and security system
US10200504B2 (en) 2007-06-12 2019-02-05 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US11218878B2 (en) 2007-06-12 2022-01-04 Icontrol Networks, Inc. Communication protocols in integrated systems
US11237714B2 (en) 2007-06-12 2022-02-01 Icontrol Networks, Inc. Control system user interface
US10237237B2 (en) 2007-06-12 2019-03-19 Icontrol Networks, Inc. Communication protocols in integrated systems
US10313303B2 (en) 2007-06-12 2019-06-04 Icontrol Networks, Inc. Forming a security network including integrated security system components and network devices
US11423756B2 (en) 2007-06-12 2022-08-23 Icontrol Networks, Inc. Communication protocols in integrated systems
US10339791B2 (en) 2007-06-12 2019-07-02 Icontrol Networks, Inc. Security network integrated with premise security system
US10365810B2 (en) 2007-06-12 2019-07-30 Icontrol Networks, Inc. Control system user interface
US10382452B1 (en) 2007-06-12 2019-08-13 Icontrol Networks, Inc. Communication protocols in integrated systems
US10051078B2 (en) 2007-06-12 2018-08-14 Icontrol Networks, Inc. WiFi-to-serial encapsulation in systems
US11089122B2 (en) 2007-06-12 2021-08-10 Icontrol Networks, Inc. Controlling data routing among networks
US10498830B2 (en) 2007-06-12 2019-12-03 Icontrol Networks, Inc. Wi-Fi-to-serial encapsulation in systems
US11316753B2 (en) 2007-06-12 2022-04-26 Icontrol Networks, Inc. Communication protocols in integrated systems
US10423309B2 (en) 2007-06-12 2019-09-24 Icontrol Networks, Inc. Device integration framework
US10444964B2 (en) 2007-06-12 2019-10-15 Icontrol Networks, Inc. Control system user interface
US9531593B2 (en) 2007-06-12 2016-12-27 Icontrol Networks, Inc. Takeover processes in security network integrated with premise security system
US10616075B2 (en) 2007-06-12 2020-04-07 Icontrol Networks, Inc. Communication protocols in integrated systems
US11722896B2 (en) 2007-06-12 2023-08-08 Icontrol Networks, Inc. Communication protocols in integrated systems
US11646907B2 (en) 2007-06-12 2023-05-09 Icontrol Networks, Inc. Communication protocols in integrated systems
US9008371B2 (en) * 2007-07-18 2015-04-14 Metaio Gmbh Method and system for ascertaining the position and orientation of a camera relative to a real object
US20100239121A1 (en) * 2007-07-18 2010-09-23 Metaio Gmbh Method and system for ascertaining the position and orientation of a camera relative to a real object
US8355706B2 (en) * 2007-07-20 2013-01-15 Broadcom Corporation Method and system for utilizing context data tags to catalog data in wireless system
US20090024641A1 (en) * 2007-07-20 2009-01-22 Thomas Quigley Method and system for utilizing context data tags to catalog data in wireless system
US11244345B2 (en) 2007-07-30 2022-02-08 Nielsen Consumer Llc Neuro-response stimulus and stimulus attribute resonance estimator
US11763340B2 (en) 2007-07-30 2023-09-19 Nielsen Consumer Llc Neuro-response stimulus and stimulus attribute resonance estimator
US11815969B2 (en) 2007-08-10 2023-11-14 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US11831462B2 (en) 2007-08-24 2023-11-28 Icontrol Networks, Inc. Controlling data routing in premises management systems
US11488198B2 (en) 2007-08-28 2022-11-01 Nielsen Consumer Llc Stimulus placement system using subject neuro-response measurements
US20090106315A1 (en) * 2007-10-17 2009-04-23 Yahoo! Inc. Extensions for system and method for an extensible media player
US20090125812A1 (en) * 2007-10-17 2009-05-14 Yahoo! Inc. System and method for an extensible media player
US20090106104A1 (en) * 2007-10-17 2009-04-23 Yahoo! Inc. System and method for implementing an ad management system for an extensible media player
US20090106639A1 (en) * 2007-10-17 2009-04-23 Yahoo! Inc. System and Method for an Extensible Media Player
US9843774B2 (en) * 2007-10-17 2017-12-12 Excalibur Ip, Llc System and method for implementing an ad management system for an extensible media player
US9813463B2 (en) 2007-10-24 2017-11-07 Sococo, Inc. Phoning into virtual communication environments
US9357025B2 (en) * 2007-10-24 2016-05-31 Social Communications Company Virtual area based telephony communications
US20110274104A1 (en) * 2007-10-24 2011-11-10 Social Communications Company Virtual area based telephony communications
US20090150254A1 (en) * 2007-11-30 2009-06-11 Mark Dickelman Systems, devices and methods for computer automated assistance for disparate networks and internet interfaces
US10733643B2 (en) 2007-11-30 2020-08-04 U.S. Bank National Association Systems, devices and methods for computer automated assistance for disparate networks and internet interfaces
US11610243B2 (en) 2007-11-30 2023-03-21 U.S. Bank National Association Systems, devices and methods for computer automated assistance for disparate networks and internet interfaces
US9542682B1 (en) 2007-12-14 2017-01-10 Consumerinfo.Com, Inc. Card registry systems and methods
US10614519B2 (en) 2007-12-14 2020-04-07 Consumerinfo.Com, Inc. Card registry systems and methods
US10878499B2 (en) 2007-12-14 2020-12-29 Consumerinfo.Com, Inc. Card registry systems and methods
US9037515B2 (en) 2007-12-14 2015-05-19 John Nicholas and Kristin Gross Social networking websites and systems for publishing sampling event data
US10482484B2 (en) * 2007-12-14 2019-11-19 John Nicholas And Kristin Gross Trust U/A/D April 13, 2010 Item data collection systems and methods with social network integration
US9230283B1 (en) 2007-12-14 2016-01-05 Consumerinfo.Com, Inc. Card registry systems and methods
US9767513B1 (en) 2007-12-14 2017-09-19 Consumerinfo.Com, Inc. Card registry systems and methods
US11379916B1 (en) 2007-12-14 2022-07-05 Consumerinfo.Com, Inc. Card registry systems and methods
US20140039980A1 (en) * 2007-12-14 2014-02-06 The John Nicholas and Kristin Gross Trust U/A/D April 13, 2010 Item Data Collection Systems and Methods with Social Network Integration
US10262364B2 (en) 2007-12-14 2019-04-16 Consumerinfo.Com, Inc. Card registry systems and methods
US11916928B2 (en) 2008-01-24 2024-02-27 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US20110191809A1 (en) * 2008-01-30 2011-08-04 Cinsay, Llc Viral Syndicated Interactive Product System and Method Therefor
US11227315B2 (en) 2008-01-30 2022-01-18 Aibuy, Inc. Interactive product placement system and method therefor
US10055768B2 (en) 2008-01-30 2018-08-21 Cinsay, Inc. Interactive product placement system and method therefor
US9338499B2 (en) 2008-01-30 2016-05-10 Cinsay, Inc. Interactive product placement system and method therefor
US10425698B2 (en) 2008-01-30 2019-09-24 Aibuy, Inc. Interactive product placement system and method therefor
US9332302B2 (en) 2008-01-30 2016-05-03 Cinsay, Inc. Interactive product placement system and method therefor
US10438249B2 (en) 2008-01-30 2019-10-08 Aibuy, Inc. Interactive product system and method therefor
US9986305B2 (en) 2008-01-30 2018-05-29 Cinsay, Inc. Interactive product placement system and method therefor
US8782690B2 (en) 2008-01-30 2014-07-15 Cinsay, Inc. Interactive product placement system and method therefor
US9338500B2 (en) 2008-01-30 2016-05-10 Cinsay, Inc. Interactive product placement system and method therefor
US8893173B2 (en) 2008-01-30 2014-11-18 Cinsay, Inc. Interactive product placement system and method therefor
US9351032B2 (en) 2008-01-30 2016-05-24 Cinsay, Inc. Interactive product placement system and method therefor
US9674584B2 (en) 2008-01-30 2017-06-06 Cinsay, Inc. Interactive product placement system and method therefor
US9344754B2 (en) 2008-01-30 2016-05-17 Cinsay, Inc. Interactive product placement system and method therefor
US10981069B2 (en) 2008-03-07 2021-04-20 Activision Publishing, Inc. Methods and systems for determining the authenticity of copied objects in a virtual environment
US10460085B2 (en) 2008-03-13 2019-10-29 Mattel, Inc. Tablet computer
US20100076870A1 (en) * 2008-03-13 2010-03-25 Fuhu. Inc Widgetized avatar and a method and system of virtual commerce including same
US20100211479A1 (en) * 2008-03-13 2010-08-19 Fuhu, Inc. Virtual marketplace accessible to widgetized avatars
US20100199200A1 (en) * 2008-03-13 2010-08-05 Robb Fujioka Virtual Marketplace Accessible To Widgetized Avatars
US11787060B2 (en) 2008-03-20 2023-10-17 Teladoc Health, Inc. Remote presence system mounted to operating room hardware
US10875182B2 (en) 2008-03-20 2020-12-29 Teladoc Health, Inc. Remote presence system mounted to operating room hardware
US20090259937A1 (en) * 2008-04-11 2009-10-15 Rohall Steven L Brainstorming Tool in a 3D Virtual Environment
US20090257636A1 (en) * 2008-04-14 2009-10-15 Optovue, Inc. Method of eye registration for optical coherence tomography
US10471588B2 (en) 2008-04-14 2019-11-12 Intouch Technologies, Inc. Robotic based health care system
US11472021B2 (en) 2008-04-14 2022-10-18 Teladoc Health, Inc. Robotic based health care system
US8205991B2 (en) * 2008-04-14 2012-06-26 Optovue, Inc. Method of eye registration for optical coherence tomography
US9616576B2 (en) 2008-04-17 2017-04-11 Intouch Technologies, Inc. Mobile tele-presence system with a microphone system
US9813770B2 (en) 2008-05-03 2017-11-07 Cinsay, Inc. Method and system for generation and playback of supplemented videos
US20090276805A1 (en) * 2008-05-03 2009-11-05 Andrews Ii James K Method and system for generation and playback of supplemented videos
US10986412B2 (en) 2008-05-03 2021-04-20 Aibuy, Inc. Methods and system for generation and playback of supplemented videos
US9210472B2 (en) 2008-05-03 2015-12-08 Cinsay, Inc. Method and system for generation and playback of supplemented videos
US9113214B2 (en) 2008-05-03 2015-08-18 Cinsay, Inc. Method and system for generation and playback of supplemented videos
US8813132B2 (en) 2008-05-03 2014-08-19 Cinsay, Inc. Method and system for generation and playback of supplemented videos
US10225614B2 (en) 2008-05-03 2019-03-05 Cinsay, Inc. Method and system for generation and playback of supplemented videos
US8676975B2 (en) 2008-05-15 2014-03-18 International Business Machines Corporation Virtual universe desktop exploration for resource acquisition
US9069442B2 (en) 2008-05-15 2015-06-30 International Business Machines Corporation Virtual universe desktop exploration for resource acquisition
US20090287765A1 (en) * 2008-05-15 2009-11-19 Hamilton Ii Rick A Virtual universe desktop exploration for resource acquisition
US20090300639A1 (en) * 2008-06-02 2009-12-03 Hamilton Ii Rick A Resource acquisition and manipulation from within a virtual universe
US20110184831A1 (en) * 2008-06-02 2011-07-28 Andrew Robert Dalgleish An item recommendation system
US8671198B2 (en) 2008-06-02 2014-03-11 International Business Machines Corporation Resource acquisition and manipulation from within a virtual universe
US20090306998A1 (en) * 2008-06-06 2009-12-10 Hamilton Ii Rick A Desktop access from within a virtual universe
US20090307110A1 (en) * 2008-06-09 2009-12-10 Boas Betzler Management of virtual universe item returns
US8099338B2 (en) * 2008-06-09 2012-01-17 International Business Machines Corporation Management of virtual universe item returns
US11816323B2 (en) 2008-06-25 2023-11-14 Icontrol Networks, Inc. Automation system user interface
US11157872B2 (en) 2008-06-26 2021-10-26 Experian Marketing Solutions, Llc Systems and methods for providing an integrated identifier
US10075446B2 (en) 2008-06-26 2018-09-11 Experian Marketing Solutions, Inc. Systems and methods for providing an integrated identifier
US11769112B2 (en) 2008-06-26 2023-09-26 Experian Marketing Solutions, Llc Systems and methods for providing an integrated identifier
US8271474B2 (en) 2008-06-30 2012-09-18 Yahoo! Inc. Automated system and method for creating a content-rich site based on an emerging subject of internet search
US20100005007A1 (en) * 2008-07-07 2010-01-07 Aaron Roger Cox Methods of associating real world items with virtual world representations
US9193065B2 (en) 2008-07-10 2015-11-24 Intouch Technologies, Inc. Docking system for a tele-presence robot
US10493631B2 (en) 2008-07-10 2019-12-03 Intouch Technologies, Inc. Docking system for a tele-presence robot
US10878960B2 (en) 2008-07-11 2020-12-29 Teladoc Health, Inc. Tele-presence robot system with multi-cast features
US9842192B2 (en) 2008-07-11 2017-12-12 Intouch Technologies, Inc. Tele-presence robot system with multi-cast features
US20120246585A9 (en) * 2008-07-14 2012-09-27 Microsoft Corporation System for editing an avatar
US11755961B2 (en) * 2008-07-18 2023-09-12 Disney Enterprises, Inc. System and method for providing location-based data on a wireless portable device
US20210073687A1 (en) * 2008-07-18 2021-03-11 Disney Enterprises, Inc. System and Method for Providing Location-Based Data on a Wireless Portable Device
US20100063854A1 (en) * 2008-07-18 2010-03-11 Disney Enterprises, Inc. System and method for providing location-based data on a wireless portable device
US10885471B2 (en) * 2008-07-18 2021-01-05 Disney Enterprises, Inc. System and method for providing location-based data on a wireless portable device
US9792361B1 (en) 2008-07-29 2017-10-17 James L. Geer Photographic memory
US9128981B1 (en) 2008-07-29 2015-09-08 James L. Geer Phone assisted ‘photographic memory’
US11086929B1 (en) 2008-07-29 2021-08-10 Mimzi LLC Photographic memory
US11308156B1 (en) 2008-07-29 2022-04-19 Mimzi, Llc Photographic memory
US11782975B1 (en) 2008-07-29 2023-10-10 Mimzi, Llc Photographic memory
US20110128223A1 (en) * 2008-08-07 2011-06-02 Koninklijke Phillips Electronics N.V. Method of and system for determining a head-motion/gaze relationship for a user, and an interactive display system
US11758026B2 (en) 2008-08-11 2023-09-12 Icontrol Networks, Inc. Virtual device systems and methods
US11792036B2 (en) 2008-08-11 2023-10-17 Icontrol Networks, Inc. Mobile premises automation platform
US10522026B2 (en) 2008-08-11 2019-12-31 Icontrol Networks, Inc. Automation system user interface with three-dimensional display
US11729255B2 (en) 2008-08-11 2023-08-15 Icontrol Networks, Inc. Integrated cloud system with lightweight gateway for premises automation
US11190578B2 (en) 2008-08-11 2021-11-30 Icontrol Networks, Inc. Integrated cloud system with lightweight gateway for premises automation
US11368327B2 (en) 2008-08-11 2022-06-21 Icontrol Networks, Inc. Integrated cloud system for premises automation
US11316958B2 (en) 2008-08-11 2022-04-26 Icontrol Networks, Inc. Virtual device systems and methods
US11616659B2 (en) 2008-08-11 2023-03-28 Icontrol Networks, Inc. Integrated cloud system for premises automation
US11711234B2 (en) 2008-08-11 2023-07-25 Icontrol Networks, Inc. Integrated cloud system for premises automation
US11258625B2 (en) 2008-08-11 2022-02-22 Icontrol Networks, Inc. Mobile premises automation platform
US11641391B2 (en) 2008-08-11 2023-05-02 Icontrol Networks Inc. Integrated cloud system with lightweight gateway for premises automation
US10530839B2 (en) 2008-08-11 2020-01-07 Icontrol Networks, Inc. Integrated cloud system with lightweight gateway for premises automation
US10115155B1 (en) 2008-08-14 2018-10-30 Experian Information Solutions, Inc. Multi-bureau credit file freeze and unfreeze
US9792648B1 (en) 2008-08-14 2017-10-17 Experian Information Solutions, Inc. Multi-bureau credit file freeze and unfreeze
US9256904B1 (en) 2008-08-14 2016-02-09 Experian Information Solutions, Inc. Multi-bureau credit file freeze and unfreeze
US9489694B2 (en) 2008-08-14 2016-11-08 Experian Information Solutions, Inc. Multi-bureau credit file freeze and unfreeze
US10650448B1 (en) 2008-08-14 2020-05-12 Experian Information Solutions, Inc. Multi-bureau credit file freeze and unfreeze
US11004147B1 (en) 2008-08-14 2021-05-11 Experian Information Solutions, Inc. Multi-bureau credit file freeze and unfreeze
US11636540B1 (en) 2008-08-14 2023-04-25 Experian Information Solutions, Inc. Multi-bureau credit file freeze and unfreeze
US20160274759A1 (en) 2008-08-25 2016-09-22 Paul J. Dawes Security system with networked touchscreen and gateway
US10375253B2 (en) 2008-08-25 2019-08-06 Icontrol Networks, Inc. Security system with networked touchscreen and gateway
US20100076862A1 (en) * 2008-09-10 2010-03-25 Vegas.Com System and method for reserving and purchasing events
US8745052B2 (en) * 2008-09-18 2014-06-03 Accenture Global Services Limited System and method for adding context to the creation and revision of artifacts
US9429934B2 (en) 2008-09-18 2016-08-30 Intouch Technologies, Inc. Mobile videoconferencing robot system with network adaptive driving
US20100095298A1 (en) * 2008-09-18 2010-04-15 Manoj Seshadrinathan System and method for adding context to the creation and revision of artifacts
US20100088187A1 (en) * 2008-09-24 2010-04-08 Chris Courtney System and method for localized and/or topic-driven content distribution for mobile devices
US8407216B2 (en) * 2008-09-25 2013-03-26 Yahoo! Inc. Automated tagging of objects in databases
US20100082575A1 (en) * 2008-09-25 2010-04-01 Walker Hubert M Automated tagging of objects in databases
US20100082576A1 (en) * 2008-09-25 2010-04-01 Walker Hubert M Associating objects in databases by rate-based tagging
US8713009B2 (en) * 2008-09-25 2014-04-29 Yahoo! Inc. Associating objects in databases by rate-based tagging
US20100080364A1 (en) * 2008-09-29 2010-04-01 Yahoo! Inc. System for determining active copresence of users during interactions
US8045695B2 (en) * 2008-09-29 2011-10-25 Yahoo! Inc System for determining active copresence of users during interactions
US8935723B2 (en) 2008-10-01 2015-01-13 At&T Intellectual Property I, Lp System and method for a communication exchange with an avatar in a media communication system
US8180682B2 (en) * 2008-10-01 2012-05-15 International Business Machines Corporation System and method for generating a view of and interacting with a purchase history
US20100083320A1 (en) * 2008-10-01 2010-04-01 At&T Intellectual Property I, L.P. System and method for a communication exchange with an avatar in a media communication system
US9749683B2 (en) 2008-10-01 2017-08-29 At&T Intellectual Property I, L.P. System and method for a communication exchange with an avatar in a media communication system
US8631432B2 (en) 2008-10-01 2014-01-14 At&T Intellectual Property I, Lp System and method for a communication exchange with an avatar in a media communication system
US9462321B2 (en) 2008-10-01 2016-10-04 At&T Intellectual Property I, L.P. System and method for a communication exchange with an avatar in a media communication system
US8316393B2 (en) * 2008-10-01 2012-11-20 At&T Intellectual Property I, L.P. System and method for a communication exchange with an avatar in a media communication system
US20100082454A1 (en) * 2008-10-01 2010-04-01 International Business Machines Corporation System and method for generating a view of and interacting with a purchase history
US8977983B2 (en) * 2008-10-06 2015-03-10 Samsung Electronics Co., Ltd. Text entry method and display apparatus using the same
US20100088616A1 (en) * 2008-10-06 2010-04-08 Samsung Electronics Co., Ltd. Text entry method and display apparatus using the same
US8583495B2 (en) * 2008-10-09 2013-11-12 Invenstar, Llc Method and system for crediting multiple merchant accounts on a single bill
US20100205062A1 (en) * 2008-10-09 2010-08-12 Invenstar, Llc Touchscreen Computer System, Software, and Method for Small Business Management and Payment Transactions, Including a Method, a Device, and System for Crediting and Refunding to and from Multiple Merchant Accounts in a Single Transaction and a Method, a Device, and System for Scheduling Appointments
US8751335B2 (en) * 2008-10-14 2014-06-10 Noel Rita Molinelli Personal style server
US20100094696A1 (en) * 2008-10-14 2010-04-15 Noel Rita Molinelli Personal style server
US8108267B2 (en) * 2008-10-15 2012-01-31 Eli Varon Method of facilitating a sale of a product and/or a service
US20100094714A1 (en) * 2008-10-15 2010-04-15 Eli Varon Method of Facilitating a Sale of a Product and/or a Service
US8781915B2 (en) * 2008-10-17 2014-07-15 Microsoft Corporation Recommending items to users utilizing a bi-linear collaborative filtering model
US20100100416A1 (en) * 2008-10-17 2010-04-22 Microsoft Corporation Recommender System
US20100100744A1 (en) * 2008-10-17 2010-04-22 Arijit Dutta Virtual image management
US8996165B2 (en) 2008-10-21 2015-03-31 Intouch Technologies, Inc. Telepresence robot with a camera boom
US20110302008A1 (en) * 2008-10-21 2011-12-08 Soza Harry R Assessing engagement and influence using consumer-specific promotions in social networks
US10621657B2 (en) 2008-11-05 2020-04-14 Consumerinfo.Com, Inc. Systems and methods of credit information reporting
US9628440B2 (en) 2008-11-12 2017-04-18 Icontrol Networks, Inc. Takeover processes in security network integrated with premise security system
US9357247B2 (en) 2008-11-24 2016-05-31 Time Warner Cable Enterprises Llc Apparatus and methods for content delivery and message exchange across multiple content delivery networks
US10542237B2 (en) 2008-11-24 2020-01-21 Shindig, Inc. Systems and methods for facilitating communications amongst multiple users
US10587906B2 (en) 2008-11-24 2020-03-10 Time Warner Cable Enterprises Llc Apparatus and methods for content delivery and message exchange across multiple content delivery networks
US11343554B2 (en) 2008-11-24 2022-05-24 Time Warner Cable Enterprises Llc Apparatus and methods for content delivery and message exchange across multiple content delivery networks
US9661270B2 (en) 2008-11-24 2017-05-23 Shindig, Inc. Multiparty communications systems and methods that optimize communications based on mode and available bandwidth
US10136172B2 (en) 2008-11-24 2018-11-20 Time Warner Cable Enterprises Llc Apparatus and methods for content delivery and message exchange across multiple content delivery networks
US9381654B2 (en) 2008-11-25 2016-07-05 Intouch Technologies, Inc. Server connectivity control for tele-presence robot
US10875183B2 (en) 2008-11-25 2020-12-29 Teladoc Health, Inc. Server connectivity control for tele-presence robot
US10059000B2 (en) 2008-11-25 2018-08-28 Intouch Technologies, Inc. Server connectivity control for a tele-presence robot
US9138891B2 (en) 2008-11-25 2015-09-22 Intouch Technologies, Inc. Server connectivity control for tele-presence robot
US8130244B2 (en) * 2008-11-28 2012-03-06 Sony Corporation Image processing system
US20100134516A1 (en) * 2008-11-28 2010-06-03 Sony Corporation Image processing system
US11850757B2 (en) 2009-01-29 2023-12-26 Teladoc Health, Inc. Documentation through a remote presence robot
US9014832B2 (en) 2009-02-02 2015-04-21 Eloy Technology, Llc Augmenting media content in a media sharing group
US20100211891A1 (en) * 2009-02-17 2010-08-19 Fuhu, Inc. Widgetized avatar and a method and system of creating and using same including storefronts
US20110022536A1 (en) * 2009-02-24 2011-01-27 Doxo, Inc. Provider relationship management system that facilitates interaction between an individual and organizations
US20110047147A1 (en) * 2009-02-24 2011-02-24 Doxo, Inc. Provider relationship management system that facilitates interaction between an individual and organizations
US8862575B2 (en) * 2009-02-24 2014-10-14 Doxo, Inc. Provider relationship management system that facilitates interaction between an individual and organizations
US8504928B2 (en) * 2009-03-06 2013-08-06 Brother Kogyo Kabushiki Kaisha Communication terminal, display control method, and computer-readable medium storing display control program
US9704344B2 (en) * 2009-03-06 2017-07-11 Zynga Inc. Limiting transfer of virtual currency in a multiuser online game
US20100226546A1 (en) * 2009-03-06 2010-09-09 Brother Kogyo Kabushiki Kaisha Communication terminal, display control method, and computer-readable medium storing display control program
US20130217479A1 (en) * 2009-03-06 2013-08-22 Michael Arieh Luxton Limiting Transfer of Virtual Currency in a Multiuser Online Game
US11704681B2 (en) 2009-03-24 2023-07-18 Nielsen Consumer Llc Neurological profiles for market matching and stimulus presentation
US9410814B2 (en) 2009-03-25 2016-08-09 Waldeck Technology, Llc Passive crowd-sourced map updates and alternate route recommendations
US20100250714A1 (en) * 2009-03-25 2010-09-30 Digital River, Inc. On-Site Dynamic Personalization System and Method
US8230089B2 (en) * 2009-03-25 2012-07-24 Digital River, Inc. On-site dynamic personalization system and method
US9140566B1 (en) 2009-03-25 2015-09-22 Waldeck Technology, Llc Passive crowd-sourced map updates and alternative route recommendations
US20100250290A1 (en) * 2009-03-27 2010-09-30 Vegas.Com System and method for token-based transactions
US20100250398A1 (en) * 2009-03-27 2010-09-30 Ebay, Inc. Systems and methods for facilitating user selection events over a network
US9215423B2 (en) 2009-03-30 2015-12-15 Time Warner Cable Enterprises Llc Recommendation engine apparatus and methods
US11659224B2 (en) 2009-03-30 2023-05-23 Time Warner Cable Enterprises Llc Personal media channel apparatus and methods
US11012749B2 (en) 2009-03-30 2021-05-18 Time Warner Cable Enterprises Llc Recommendation engine apparatus and methods
US9380329B2 (en) 2009-03-30 2016-06-28 Time Warner Cable Enterprises Llc Personal media channel apparatus and methods
US11076189B2 (en) 2009-03-30 2021-07-27 Time Warner Cable Enterprises Llc Personal media channel apparatus and methods
US10313755B2 (en) 2009-03-30 2019-06-04 Time Warner Cable Enterprises Llc Recommendation engine apparatus and methods
US20110044512A1 (en) * 2009-03-31 2011-02-24 Myspace Inc. Automatic Image Tagging
US9712579B2 (en) * 2009-04-01 2017-07-18 Shindig. Inc. Systems and methods for creating and publishing customizable images from within online events
US9947366B2 (en) 2009-04-01 2018-04-17 Shindig, Inc. Group portraits composed using video chat systems
US20150334142A1 (en) * 2009-04-01 2015-11-19 Shindig, Inc. Systems and methods for creating and publishing customizable images from within online events
US20100257463A1 (en) * 2009-04-03 2010-10-07 Palo Alto Research Center Incorporated System for creating collaborative content
US8943419B2 (en) * 2009-04-03 2015-01-27 Palo Alto Research Center Incorporated System for creating collaborative content
US8897920B2 (en) 2009-04-17 2014-11-25 Intouch Technologies, Inc. Tele-presence robot system with software modularity, projector and laser pointer
US10969766B2 (en) 2009-04-17 2021-04-06 Teladoc Health, Inc. Tele-presence robot system with software modularity, projector and laser pointer
US9779708B2 (en) 2009-04-24 2017-10-03 Shindig, Inc. Networks of portable electronic devices that collectively generate sound
US20100281104A1 (en) * 2009-04-30 2010-11-04 Yahoo! Inc. Creating secure social applications with extensible types
US11284331B2 (en) 2009-04-30 2022-03-22 Icontrol Networks, Inc. Server-based notification of alarm event subsequent to communication failure with armed security system
US9426720B2 (en) 2009-04-30 2016-08-23 Icontrol Networks, Inc. Controller and interface for home security, monitoring and automation having customizable audio alerts for SMA events
US11778534B2 (en) 2009-04-30 2023-10-03 Icontrol Networks, Inc. Hardware configurable security, monitoring and automation controller having modular communication protocol interfaces
US10275999B2 (en) 2009-04-30 2019-04-30 Icontrol Networks, Inc. Server-based notification of alarm event subsequent to communication failure with armed security system
US10674428B2 (en) 2009-04-30 2020-06-02 Icontrol Networks, Inc. Hardware configurable security, monitoring and automation controller having modular communication protocol interfaces
US11601865B2 (en) 2009-04-30 2023-03-07 Icontrol Networks, Inc. Server-based notification of alarm event subsequent to communication failure with armed security system
US11856502B2 (en) 2009-04-30 2023-12-26 Icontrol Networks, Inc. Method, system and apparatus for automated inventory reporting of security, monitoring and automation hardware and software at customer premises
US11223998B2 (en) 2009-04-30 2022-01-11 Icontrol Networks, Inc. Security, monitoring and automation controller access and use of legacy security control panel information
US10867337B2 (en) 2009-04-30 2020-12-15 Verizon Media Inc. Creating secure social applications with extensible types
US9600800B2 (en) * 2009-04-30 2017-03-21 Yahoo! Inc. Creating secure social applications with extensible types
US10813034B2 (en) 2009-04-30 2020-10-20 Icontrol Networks, Inc. Method, system and apparatus for management of applications for an SMA controller
US10237806B2 (en) 2009-04-30 2019-03-19 Icontrol Networks, Inc. Activation of a home automation controller
US11553399B2 (en) 2009-04-30 2023-01-10 Icontrol Networks, Inc. Custom content for premises management
US8379028B1 (en) * 2009-04-30 2013-02-19 Pixar Rigweb
US10332363B2 (en) 2009-04-30 2019-06-25 Icontrol Networks, Inc. Controller and interface for home security, monitoring and automation having customizable audio alerts for SMA events
US8660924B2 (en) * 2009-04-30 2014-02-25 Navera, Inc. Configurable interactive assistant
US11356926B2 (en) 2009-04-30 2022-06-07 Icontrol Networks, Inc. Hardware configurable security, monitoring and automation controller having modular communication protocol interfaces
US11129084B2 (en) 2009-04-30 2021-09-21 Icontrol Networks, Inc. Notification of event subsequent to communication failure with security system
US11665617B2 (en) 2009-04-30 2023-05-30 Icontrol Networks, Inc. Server-based notification of alarm event subsequent to communication failure with armed security system
US20100293234A1 (en) * 2009-05-18 2010-11-18 Cbs Interactive, Inc. System and method for incorporating user input into filter-based navigation of an electronic catalog
US20110047013A1 (en) * 2009-05-21 2011-02-24 Mckenzie Iii James O Merchandising amplification via social networking system and method
US8365081B1 (en) * 2009-05-28 2013-01-29 Amazon Technologies, Inc. Embedding metadata within content
US9749677B2 (en) 2009-06-08 2017-08-29 Time Warner Cable Enterprises Llc Media bridge apparatus and methods
US10652607B2 (en) 2009-06-08 2020-05-12 Time Warner Cable Enterprises Llc Media bridge apparatus and methods
US9602864B2 (en) 2009-06-08 2017-03-21 Time Warner Cable Enterprises Llc Media bridge apparatus and methods
US10965727B2 (en) 2009-06-08 2021-03-30 Time Warner Cable Enterprises Llc Methods and apparatus for premises content distribution
US9300919B2 (en) 2009-06-08 2016-03-29 Time Warner Cable Enterprises Llc Media bridge apparatus and methods
US8676658B2 (en) * 2009-06-22 2014-03-18 Vistaprint Schweiz Gmbh Method and system for dynamically generating a gallery of available designs for kit configuration
US20100325016A1 (en) * 2009-06-22 2010-12-23 Vistaprint Technologies Limited Method and system for dynamically generating a gallery of available designs for kit configuration
US20110004852A1 (en) * 2009-07-01 2011-01-06 Jonathon David Baugh Electronic Medical Record System For Dermatology
US20110004508A1 (en) * 2009-07-02 2011-01-06 Shen Huang Method and system of generating guidance information
US20110004501A1 (en) * 2009-07-02 2011-01-06 Pradhan Shekhar S Methods and Apparatus for Automatically Generating Social Events
US11122316B2 (en) 2009-07-15 2021-09-14 Time Warner Cable Enterprises Llc Methods and apparatus for targeted secondary content insertion
US9763048B2 (en) 2009-07-21 2017-09-12 Waldeck Technology, Llc Secondary indications of user locations and use thereof by a location-based service
US8543531B2 (en) * 2009-07-27 2013-09-24 International Business Machines Corporation Coherency of related objects
US20110022565A1 (en) * 2009-07-27 2011-01-27 International Business Machines Corporation Coherency of related objects
US10602231B2 (en) 2009-08-06 2020-03-24 Time Warner Cable Enterprises Llc Methods and apparatus for local channel insertion in an all-digital content distribution network
US10482517B2 (en) 2009-08-12 2019-11-19 Zugara, Inc. Providing a simulation of wearing items such as garments and/or accessories
US9183581B2 (en) 2009-08-12 2015-11-10 Zugara, Inc. Providing a simulation of wearing items such as garments and/or accessories
US8275590B2 (en) * 2009-08-12 2012-09-25 Zugara, Inc. Providing a simulation of wearing items such as garments and/or accessories
US20110040539A1 (en) * 2009-08-12 2011-02-17 Szymczyk Matthew Providing a simulation of wearing items such as garments and/or accessories
US20110043520A1 (en) * 2009-08-21 2011-02-24 Hon Hai Precision Industry Co., Ltd. Garment fitting system and operating method thereof
US20140024436A1 (en) * 2009-08-23 2014-01-23 DeVona Cole Scheduling and marketing of casino tournaments
US10987015B2 (en) 2009-08-24 2021-04-27 Nielsen Consumer Llc Dry electrodes for electroencephalography
US9602765B2 (en) 2009-08-26 2017-03-21 Intouch Technologies, Inc. Portable remote presence robot
US11399153B2 (en) 2009-08-26 2022-07-26 Teladoc Health, Inc. Portable telepresence apparatus
US10404939B2 (en) 2009-08-26 2019-09-03 Intouch Technologies, Inc. Portable remote presence robot
US10911715B2 (en) 2009-08-26 2021-02-02 Teladoc Health, Inc. Portable remote presence robot
US8386482B2 (en) * 2009-09-02 2013-02-26 Xurmo Technologies Private Limited Method for personalizing information retrieval in a communication network
US20110055186A1 (en) * 2009-09-02 2011-03-03 Xurmo Technologies Private Limited Method for personalizing information retrieval in a communication network
US8359285B1 (en) * 2009-09-18 2013-01-22 Amazon Technologies, Inc. Generating item recommendations
US20110071889A1 (en) * 2009-09-24 2011-03-24 Avaya Inc. Location-Aware Retail Application
US20110078573A1 (en) * 2009-09-28 2011-03-31 Sony Corporation Terminal apparatus, server apparatus, display control method, and program
US9811349B2 (en) * 2009-09-28 2017-11-07 Sony Corporation Displaying operations performed by multiple users
US9443024B2 (en) 2009-09-29 2016-09-13 At&T Intellectual Property I, Lp Method and apparatus to identify outliers in social networks
US9665651B2 (en) 2009-09-29 2017-05-30 At&T Intellectual Property I, L.P. Method and apparatus to identify outliers in social networks
US9059897B2 (en) 2009-09-29 2015-06-16 At&T Intellectual Property I, Lp Method and apparatus to identify outliers in social networks
US8775605B2 (en) * 2009-09-29 2014-07-08 At&T Intellectual Property I, L.P. Method and apparatus to identify outliers in social networks
US20110078306A1 (en) * 2009-09-29 2011-03-31 At&T Intellectual Property I, L.P. Method and apparatus to identify outliers in social networks
US9965563B2 (en) 2009-09-29 2018-05-08 At&T Intellectual Property I, L.P. Method and apparatus to identify outliers in social networks
US11232671B1 (en) * 2009-09-30 2022-01-25 Zynga Inc. Socially-based dynamic rewards in multiuser online games
US8260684B2 (en) * 2009-10-02 2012-09-04 Bespeak Inc. System and method for coordinating and evaluating apparel
US20110082764A1 (en) * 2009-10-02 2011-04-07 Alan Flusser System and method for coordinating and evaluating apparel
US20110087679A1 (en) * 2009-10-13 2011-04-14 Albert Rosato System and method for cohort based content filtering and display
US10178435B1 (en) 2009-10-20 2019-01-08 Time Warner Cable Enterprises Llc Methods and apparatus for enabling media functionality in a content delivery network
US8762292B2 (en) 2009-10-23 2014-06-24 True Fit Corporation System and method for providing customers with personalized information about products
US8543940B2 (en) * 2009-10-23 2013-09-24 Samsung Electronics Co., Ltd Method and apparatus for browsing media content and executing functions related to media content
US20110099514A1 (en) * 2009-10-23 2011-04-28 Samsung Electronics Co., Ltd. Method and apparatus for browsing media content and executing functions related to media content
US20110099122A1 (en) * 2009-10-23 2011-04-28 Bright Douglas R System and method for providing customers with personalized information about products
US11669858B2 (en) 2009-10-29 2023-06-06 Nielsen Consumer Llc Analysis of controlled and automatic attention for introduction of stimulus material
US10269036B2 (en) 2009-10-29 2019-04-23 The Nielsen Company (Us), Llc Analysis of controlled and automatic attention for introduction of stimulus material
US11481788B2 (en) 2009-10-29 2022-10-25 Nielsen Consumer Llc Generating ratings predictions using neuro-response data
US11170400B2 (en) 2009-10-29 2021-11-09 Nielsen Consumer Llc Analysis of controlled and automatic attention for introduction of stimulus material
US8516529B2 (en) * 2009-10-30 2013-08-20 Time Warner Cable Enterprises Llc Methods and apparatus for packetized content delivery over a content delivery network
US20110107379A1 (en) * 2009-10-30 2011-05-05 Lajoie Michael L Methods and apparatus for packetized content delivery over a content delivery network
US9531760B2 (en) 2009-10-30 2016-12-27 Time Warner Cable Enterprises Llc Methods and apparatus for packetized content delivery over a content delivery network
US8924261B2 (en) * 2009-10-30 2014-12-30 Etsy, Inc. Method for performing interactive online shopping
US20110107364A1 (en) * 2009-10-30 2011-05-05 Lajoie Michael L Methods and apparatus for packetized content delivery over a content delivery network
US10264029B2 (en) 2009-10-30 2019-04-16 Time Warner Cable Enterprises Llc Methods and apparatus for packetized content delivery over a content delivery network
US11368498B2 (en) 2009-10-30 2022-06-21 Time Warner Cable Enterprises Llc Methods and apparatus for packetized content delivery over a content delivery network
US20110106662A1 (en) * 2009-10-30 2011-05-05 Matthew Stinchcomb System and method for performing interactive online shopping
US20110107236A1 (en) * 2009-11-03 2011-05-05 Avaya Inc. Virtual meeting attendee
US20110125566A1 (en) * 2009-11-06 2011-05-26 Linemonkey, Inc. Systems and Methods to Implement Point of Sale (POS) Terminals, Process Orders and Manage Order Fulfillment
US20160155108A1 (en) * 2009-11-06 2016-06-02 Livingsocial, Inc. Systems and Methods to Implement Point of Sale (POS) Terminals, Process Orders and Manage Order Fulfillment
US9275407B2 (en) * 2009-11-06 2016-03-01 Livingsocial, Inc. Systems and methods to implement point of sale (POS) terminals, process orders and manage order fulfillment
US11488129B2 (en) 2009-11-06 2022-11-01 Livingsocial, Inc. Systems and methods to implement point of sale (POS) terminals, process orders and manage order fulfillment
US9693103B2 (en) 2009-11-11 2017-06-27 Time Warner Cable Enterprises Llc Methods and apparatus for audience data collection and analysis in a content delivery network
US9635421B2 (en) 2009-11-11 2017-04-25 Time Warner Cable Enterprises Llc Methods and apparatus for audience data collection and analysis in a content delivery network
US20110119696A1 (en) * 2009-11-13 2011-05-19 At&T Intellectual Property I, L.P. Gifting multimedia content using an electronic address book
US9460422B2 (en) * 2009-11-20 2016-10-04 Sears Brands, L.L.C. Systems and methods for managing to-do list task items to automatically suggest and add purchasing items via a computer network
US11223659B2 (en) 2009-11-20 2022-01-11 International Business Machines Corporation Broadcast notifications using social networking systems
US20140324978A1 (en) * 2009-11-20 2014-10-30 Ustream, Inc. Broadcast Notifications Using Social Networking Systems
US20110126123A1 (en) * 2009-11-20 2011-05-26 Sears Brands, Llc Systems and methods for managing to-do list task items via a computer network
US9813457B2 (en) * 2009-11-20 2017-11-07 International Business Machines Corporation Broadcast notifications using social networking systems
US8433660B2 (en) 2009-12-01 2013-04-30 Microsoft Corporation Managing a portfolio of experts
US20110131163A1 (en) * 2009-12-01 2011-06-02 Microsoft Corporation Managing a Portfolio of Experts
US20120253993A1 (en) * 2009-12-02 2012-10-04 Nestec S.A. Beverage preparation machine with virtual shopping functionality
US9560932B2 (en) * 2009-12-02 2017-02-07 Nestec S.A. Method, medium, and system for a beverage preparation machine with virtual shopping functionality
US11563995B2 (en) 2009-12-04 2023-01-24 Time Warner Cable Enterprises Llc Apparatus and methods for monitoring and optimizing delivery of content in a network
US9519728B2 (en) 2009-12-04 2016-12-13 Time Warner Cable Enterprises Llc Apparatus and methods for monitoring and optimizing delivery of content in a network
US10455262B2 (en) 2009-12-04 2019-10-22 Time Warner Cable Enterprises Llc Apparatus and methods for monitoring and optimizing delivery of content in a network
US20110138064A1 (en) * 2009-12-04 2011-06-09 Remi Rieger Apparatus and methods for monitoring and optimizing delivery of content in a network
US10742641B2 (en) 2009-12-18 2020-08-11 Google Llc Method, device, and system of accessing online accounts
US20140012733A1 (en) * 2009-12-18 2014-01-09 Joel Vidal Method, Device, and System of Accessing Online Accounts
US10033725B2 (en) 2009-12-18 2018-07-24 Google Llc Method, device, and system of accessing online accounts
US20110153380A1 (en) * 2009-12-22 2011-06-23 Verizon Patent And Licensing Inc. Method and system of automated appointment management
US20110153451A1 (en) * 2009-12-23 2011-06-23 Sears Brands, Llc Systems and methods for using a social network to provide product related information
US9141989B2 (en) * 2009-12-23 2015-09-22 Sears Brands, L.L.C. Systems and methods for using a social network to provide product related information
US10237081B1 (en) * 2009-12-23 2019-03-19 8X8, Inc. Web-enabled conferencing and meeting implementations with flexible user calling and content sharing features
US20120259701A1 (en) * 2009-12-24 2012-10-11 Nikon Corporation Retrieval support system, retrieval support method and retrieval support program
US9665894B2 (en) * 2009-12-24 2017-05-30 Nikon Corporation Method, medium, and system for recommending associated products
US11250047B2 (en) * 2009-12-24 2022-02-15 Nikon Corporation Retrieval support system, retrieval support method and retrieval support program
US20110161424A1 (en) * 2009-12-30 2011-06-30 Sap Ag Audience selection and system anchoring of collaboration threads
US8788645B2 (en) * 2009-12-30 2014-07-22 Sap Ag Audience selection and system anchoring of collaboration threads
US11721073B2 (en) 2010-01-05 2023-08-08 Apple Inc. Synchronized, interactive augmented reality displays for multifunction devices
US9305402B2 (en) 2010-01-05 2016-04-05 Apple Inc. Synchronized, interactive augmented reality displays for multifunction devices
US10854008B2 (en) 2010-01-05 2020-12-01 Apple Inc. Synchronized, interactive augmented reality displays for multifunction devices
US10176637B2 (en) 2010-01-05 2019-01-08 Apple, Inc. Synchronized, interactive augmented reality displays for multifunction devices
US8625018B2 (en) * 2010-01-05 2014-01-07 Apple Inc. Synchronized, interactive augmented reality displays for multifunction devices
US8649612B1 (en) 2010-01-06 2014-02-11 Apple Inc. Parallelizing cascaded face detection
US8295610B1 (en) * 2010-01-06 2012-10-23 Apple Inc. Feature scaling for face detection
US9286611B2 (en) * 2010-01-07 2016-03-15 Sarkar Subhanjan Map topology for navigating a sequence of multimedia
US20130132298A1 (en) * 2010-01-07 2013-05-23 Sarkar Subhanjan Map topology for navigating a sequence of multimedia
US20220254338A1 (en) * 2010-01-18 2022-08-11 Apple Inc. Intelligent automated assistant
US20110178889A1 (en) * 2010-01-20 2011-07-21 International Business Machines Corporation A method, medium, and system for allocating a transaction discount during a collaborative shopping session
US7970661B1 (en) * 2010-01-20 2011-06-28 International Business Machines Corporation Method, medium, and system for allocating a transaction discount during a collaborative shopping session
US20110191692A1 (en) * 2010-02-03 2011-08-04 Oto Technologies, Llc System and method for e-book contextual communication
US11154981B2 (en) 2010-02-04 2021-10-26 Teladoc Health, Inc. Robot user interface for telepresence robot system
US20110196761A1 (en) * 2010-02-05 2011-08-11 Microsoft Corporation Value determination for mobile transactions
US8380576B2 (en) * 2010-02-05 2013-02-19 Microsoft Corporation Value determination for mobile transactions
US20110196714A1 (en) * 2010-02-09 2011-08-11 Avaya, Inc. Method and apparatus for overriding apparent geo-pod attributes
US20110202469A1 (en) * 2010-02-18 2011-08-18 Frontline Consulting Services Private Limited Fcs smart touch for c level executives
US20110208619A1 (en) * 2010-02-24 2011-08-25 Constantine Siounis Remote and/or virtual mall shopping experience
US8606642B2 (en) * 2010-02-24 2013-12-10 Constantine Siounis Remote and/or virtual mall shopping experience
US20110208655A1 (en) * 2010-02-25 2011-08-25 Ryan Steelberg System And Method For Creating And Marketing Authentic Virtual Memorabilia
US8781990B1 (en) * 2010-02-25 2014-07-15 Google Inc. Crowdsensus: deriving consensus information from statements made by a crowd of users
US9134534B2 (en) 2010-02-28 2015-09-15 Microsoft Technology Licensing, Llc See-through near-eye display glasses including a modular image source
US9223134B2 (en) 2010-02-28 2015-12-29 Microsoft Technology Licensing, Llc Optical imperfections in a light transmissive illumination system for see-through near-eye display glasses
US9182596B2 (en) 2010-02-28 2015-11-10 Microsoft Technology Licensing, Llc See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light
US9285589B2 (en) 2010-02-28 2016-03-15 Microsoft Technology Licensing, Llc AR glasses with event and sensor triggered control of AR eyepiece applications
US9329689B2 (en) 2010-02-28 2016-05-03 Microsoft Technology Licensing, Llc Method and apparatus for biometric data capture
US9341843B2 (en) 2010-02-28 2016-05-17 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a small scale image source
US10180572B2 (en) 2010-02-28 2019-01-15 Microsoft Technology Licensing, Llc AR glasses with event and user action control of external applications
US10860100B2 (en) 2010-02-28 2020-12-08 Microsoft Technology Licensing, Llc AR glasses with predictive control of external device based on event input
US9875406B2 (en) 2010-02-28 2018-01-23 Microsoft Technology Licensing, Llc Adjustable extension for temple arm
US9366862B2 (en) 2010-02-28 2016-06-14 Microsoft Technology Licensing, Llc System and method for delivering content to a group of see-through near eye display eyepieces
US9759917B2 (en) 2010-02-28 2017-09-12 Microsoft Technology Licensing, Llc AR glasses with event and sensor triggered AR eyepiece interface to external devices
US9229227B2 (en) 2010-02-28 2016-01-05 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a light transmissive wedge shaped illumination system
US10268888B2 (en) 2010-02-28 2019-04-23 Microsoft Technology Licensing, Llc Method and apparatus for biometric data capture
US10539787B2 (en) 2010-02-28 2020-01-21 Microsoft Technology Licensing, Llc Head-worn adaptive display
US9129295B2 (en) 2010-02-28 2015-09-08 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a fast response photochromic film system for quick transition from dark to clear
US20120200601A1 (en) * 2010-02-28 2012-08-09 Osterhout Group, Inc. Ar glasses with state triggered eye control interaction with advertising facility
US9091851B2 (en) 2010-02-28 2015-07-28 Microsoft Technology Licensing, Llc Light control in head mounted displays
US9097890B2 (en) 2010-02-28 2015-08-04 Microsoft Technology Licensing, Llc Grating in a light transmissive illumination system for see-through near-eye display glasses
US9097891B2 (en) 2010-02-28 2015-08-04 Microsoft Technology Licensing, Llc See-through near-eye display glasses including an auto-brightness control for the display brightness based on the brightness in the environment
US20110219229A1 (en) * 2010-03-02 2011-09-08 Chris Cholas Apparatus and methods for rights-managed content and data delivery
US9817952B2 (en) 2010-03-02 2017-11-14 Time Warner Cable Enterprises Llc Apparatus and methods for rights-managed content and data delivery
US10339281B2 (en) 2010-03-02 2019-07-02 Time Warner Cable Enterprises Llc Apparatus and methods for rights-managed content and data delivery
US11609972B2 (en) 2010-03-02 2023-03-21 Time Warner Cable Enterprises Llc Apparatus and methods for rights-managed data delivery
US9342661B2 (en) 2010-03-02 2016-05-17 Time Warner Cable Enterprises Llc Apparatus and methods for rights-managed content and data delivery
US9089972B2 (en) 2010-03-04 2015-07-28 Intouch Technologies, Inc. Remote presence system including a cart that supports a robot face and an overhead camera
US10887545B2 (en) 2010-03-04 2021-01-05 Teladoc Health, Inc. Remote presence system including a cart that supports a robot face and an overhead camera
US11798683B2 (en) 2010-03-04 2023-10-24 Teladoc Health, Inc. Remote presence system including a cart that supports a robot face and an overhead camera
US8903847B2 (en) 2010-03-05 2014-12-02 International Business Machines Corporation Digital media voice tags in social networks
US20110219403A1 (en) * 2010-03-08 2011-09-08 Diaz Nesamoney Method and apparatus to deliver video advertisements with enhanced user interactivity
US9693013B2 (en) * 2010-03-08 2017-06-27 Jivox Corporation Method and apparatus to deliver video advertisements with enhanced user interactivity
WO2011112296A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Incorporating media content into a 3d platform
US8572177B2 (en) 2010-03-10 2013-10-29 Xmobb, Inc. 3D social platform for sharing videos and webpages
US20110225518A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Friends toolbar for a virtual social venue
US8667402B2 (en) 2010-03-10 2014-03-04 Onset Vi, L.P. Visualizing communications within a social setting
US20110244954A1 (en) * 2010-03-10 2011-10-06 Oddmobb, Inc. Online social media game
US20110225515A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Sharing emotional reactions to social media
US20110225039A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Virtual social venue feeding multiple video streams
US20110225498A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Personalized avatars in a virtual social venue
US9292163B2 (en) 2010-03-10 2016-03-22 Onset Vi, L.P. Personalized 3D avatars in a virtual social venue
US20110225519A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Social media platform for simulating a live experience
US9292164B2 (en) 2010-03-10 2016-03-22 Onset Vi, L.P. Virtual social supervenue for sharing multiple video streams
US20110225517A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc Pointer tools for a virtual social venue
US20110225516A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Instantiating browser media into a virtual social venue
US20110225514A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Visualizing communications within a social setting
US20110221745A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Incorporating media content into a 3d social platform
WO2011112941A1 (en) * 2010-03-12 2011-09-15 Tagwhat, Inc. Purchase and delivery of goods and services, and payment gateway in an augmented reality-enabled distribution network
US20110225069A1 (en) * 2010-03-12 2011-09-15 Cramer Donald M Purchase and Delivery of Goods and Services, and Payment Gateway in An Augmented Reality-Enabled Distribution Network
WO2011112940A1 (en) * 2010-03-12 2011-09-15 Tagwhat, Inc. Merging of grouped markers in an augmented reality-enabled distribution network
US20110221771A1 (en) * 2010-03-12 2011-09-15 Cramer Donald M Merging of Grouped Markers in An Augmented Reality-Enabled Distribution Network
US8217997B2 (en) * 2010-03-16 2012-07-10 Interphase Corporation Interactive display system
US20110227827A1 (en) * 2010-03-16 2011-09-22 Interphase Corporation Interactive Display System
US8429025B2 (en) * 2010-03-17 2013-04-23 Amanda Fries Method, medium, and system of ascertaining garment size of a particular garment type for a consumer
US20110231278A1 (en) * 2010-03-17 2011-09-22 Amanda Fries Garment sizing system
US9460448B2 (en) * 2010-03-20 2016-10-04 Nimbelink Corp. Environmental monitoring system which leverages a social networking service to deliver alerts to mobile phones or devices
US9501782B2 (en) 2010-03-20 2016-11-22 Arthur Everett Felgate Monitoring system
US20110230160A1 (en) * 2010-03-20 2011-09-22 Arthur Everett Felgate Environmental Monitoring System Which Leverages A Social Networking Service To Deliver Alerts To Mobile Phones Or Devices
US20110231271A1 (en) * 2010-03-22 2011-09-22 Cris Conf S.P.A. Method and apparatus for presenting articles of clothing and the like
US20110239147A1 (en) * 2010-03-25 2011-09-29 Hyun Ju Shim Digital apparatus and method for providing a user interface to produce contents
US20110238645A1 (en) * 2010-03-29 2011-09-29 Ebay Inc. Traffic driver for suggesting stores
US11132391B2 (en) 2010-03-29 2021-09-28 Ebay Inc. Finding products that are similar to a product selected from a plurality of products
US9529919B2 (en) * 2010-03-29 2016-12-27 Paypal, Inc. Traffic driver for suggesting stores
US11605116B2 (en) 2010-03-29 2023-03-14 Ebay Inc. Methods and systems for reducing item selection error in an e-commerce environment
US20140337312A1 (en) * 2010-03-29 2014-11-13 Ebay Inc. Traffic driver for suggesting stores
US8819052B2 (en) * 2010-03-29 2014-08-26 Ebay Inc. Traffic driver for suggesting stores
WO2011123559A1 (en) * 2010-03-30 2011-10-06 George Lebrun Method and apparatus for contextual based search engine and enterprise knowledge management
US9646340B2 (en) * 2010-04-01 2017-05-09 Microsoft Technology Licensing, Llc Avatar-based virtual dressing room
US20120299912A1 (en) * 2010-04-01 2012-11-29 Microsoft Corporation Avatar-based virtual dressing room
CN102201099A (en) * 2010-04-01 2011-09-28 Microsoft Corporation Motion-based interactive shopping environment
US9098873B2 (en) * 2010-04-01 2015-08-04 Microsoft Technology Licensing, Llc Motion-based interactive shopping environment
US20110246329A1 (en) * 2010-04-01 2011-10-06 Microsoft Corporation Motion-based interactive shopping environment
US8583725B2 (en) * 2010-04-05 2013-11-12 Microsoft Corporation Social context for inter-media objects
US8744178B2 (en) * 2010-04-05 2014-06-03 Sony Corporation Information processing apparatus, information processing method and program
CN102214303A (en) * 2010-04-05 2011-10-12 Sony Corporation Information processing device, information processing method and program
US20110246560A1 (en) * 2010-04-05 2011-10-06 Microsoft Corporation Social context for inter-media objects
US20110274346A1 (en) * 2010-04-05 2011-11-10 Tatsuhito Sato Information Processing Apparatus, Information Processing Method and Program
US20110244952A1 (en) * 2010-04-06 2011-10-06 Multimedia Games, Inc. Wagering game, gaming machine and networked gaming system with customizable player avatar
US8851981B2 (en) 2010-04-06 2014-10-07 Multimedia Games, Inc. Personalized jackpot wagering game, gaming system, and method
US9064377B2 (en) 2010-04-06 2015-06-23 Multimedia Games, Inc. Wagering game, gaming machine, networked gaming system and method with a base game and a simultaneous bonus currency game
US9064369B2 (en) * 2010-04-06 2015-06-23 Multimedia Games, Inc. Wagering game, gaming machine and networked gaming system with customizable player avatar
US8661345B2 (en) * 2010-04-09 2014-02-25 Michael Stephen Kernan Social networking webpage application
US20110252325A1 (en) * 2010-04-09 2011-10-13 Michael Stephen Kernan Social networking webpage application
US10082927B2 (en) 2010-04-12 2018-09-25 Google Llc Collaborative cursors in a hosted word processor
US10678999B2 (en) 2010-04-12 2020-06-09 Google Llc Real-time collaboration in a hosted word processor
US9280529B2 (en) 2010-04-12 2016-03-08 Google Inc. Collaborative cursors in a hosted word processor
US8903822B2 (en) * 2010-04-13 2014-12-02 Konkuk University Industrial Cooperation Corp. Apparatus and method for measuring contents similarity based on feedback information of ranked user and computer readable recording medium storing program thereof
US20110252044A1 (en) * 2010-04-13 2011-10-13 Konkuk University Industrial Cooperation Corp. Apparatus and method for measuring contents similarity based on feedback information of ranked user and computer readable recording medium storing program thereof
US11200964B2 (en) 2010-04-19 2021-12-14 Nielsen Consumer Llc Short imagery task (SIT) research method
US10248195B2 (en) 2010-04-19 2019-04-02 The Nielsen Company (Us), Llc. Short imagery task (SIT) research method
US11616992B2 (en) 2010-04-23 2023-03-28 Time Warner Cable Enterprises Llc Apparatus and methods for dynamic secondary content and data insertion and delivery
US20110264460A1 (en) * 2010-04-23 2011-10-27 Someone With, LLC System and method for providing a secure registry for healthcare related products and services
US20130057544A1 (en) * 2010-04-27 2013-03-07 Seung Woo Oh Automatic 3d clothing transfer method, device and computer-readable recording medium
US20160150079A1 (en) * 2010-05-05 2016-05-26 Knapp Investment Company Limited Caller id surfing
US9866685B2 (en) * 2010-05-05 2018-01-09 Knapp Investment Company Limited Caller ID surfing
NL1037949C2 (en) * 2010-05-10 2011-11-14 Suitsupply B V METHOD FOR DETERMINING REMOTE SIZES.
WO2011142655A3 (en) * 2010-05-10 2012-06-14 Suitsupply B.V. Method for remotely determining clothes dimensions
WO2011143273A1 (en) * 2010-05-10 2011-11-17 Icontrol Networks, Inc Control system user interface
WO2011143113A1 (en) * 2010-05-10 2011-11-17 Mcgurk Michael R Methods and systems of using a personalized multi-dimensional avatar (pmda) in commerce
US20130060610A1 (en) * 2010-05-10 2013-03-07 Michael R. McGurk Methods and systems of using personalized multi-dimensional avatar (pmda) in commerce
CN102985915A (en) 2010-05-10 2013-03-20 Icontrol Networks, Inc. Control system user interface
US9076023B2 (en) 2010-05-10 2015-07-07 Suit Supply B.V. Method for remotely determining clothes dimensions
US20110289426A1 (en) * 2010-05-20 2011-11-24 Ljl, Inc. Event based interactive network for recommending, comparing and evaluating appearance styles
CN102939606A (en) * 2010-05-20 2013-02-20 Ljl, Inc. Event based interactive network for recommending, comparing and evaluating appearance styles
US10343283B2 (en) 2010-05-24 2019-07-09 Intouch Technologies, Inc. Telepresence robot system that can be accessed by a cellular phone
US11389962B2 (en) 2010-05-24 2022-07-19 Teladoc Health, Inc. Telepresence robot system that can be accessed by a cellular phone
US10808882B2 (en) 2010-05-26 2020-10-20 Intouch Technologies, Inc. Tele-robotic system with a robot face placed on a chair
US9300445B2 (en) 2010-05-27 2016-03-29 Time Warner Cable Enterprise LLC Digital domain content processing and distribution apparatus and methods
US9942077B2 (en) 2010-05-27 2018-04-10 Time Warner Cable Enterprises Llc Digital domain content processing and distribution apparatus and methods
US10892932B2 (en) 2010-05-27 2021-01-12 Time Warner Cable Enterprises Llc Digital domain content processing and distribution apparatus and methods
US20110295875A1 (en) * 2010-05-27 2011-12-01 Microsoft Corporation Location-aware query based event retrieval and alerting
US10411939B2 (en) 2010-05-27 2019-09-10 Time Warner Cable Enterprises Llc Digital domain content processing and distribution apparatus and methods
US11605203B2 (en) 2010-06-07 2023-03-14 Pfaqutruma Research Llc Creation and use of virtual places
US20170186228A1 (en) * 2010-06-07 2017-06-29 Gary Stephen Shuster Creation and use of virtual places
US10984594B2 (en) * 2010-06-07 2021-04-20 Pfaqutruma Research Llc Creation and use of virtual places
US9025030B2 (en) * 2010-06-08 2015-05-05 Cheryl Garcia Video system
US20110298929A1 (en) * 2010-06-08 2011-12-08 Cheryl Garcia Video system
US11501508B2 (en) * 2010-06-10 2022-11-15 Brown University Parameterized model of 2D articulated human shape
WO2011159356A1 (en) * 2010-06-16 2011-12-22 Ravenwhite Inc. System access determination based on classification of stimuli
US20130138532A1 (en) * 2010-06-16 2013-05-30 Ronald DICKE Method and system for upselling to a user of a digital book lending library
US8670029B2 (en) * 2010-06-16 2014-03-11 Microsoft Corporation Depth camera illuminator with superluminescent light-emitting diode
US20110310039A1 (en) * 2010-06-16 2011-12-22 Samsung Electronics Co., Ltd. Method and apparatus for user-adaptive data arrangement/classification in portable terminal
US20110310220A1 (en) * 2010-06-16 2011-12-22 Microsoft Corporation Depth camera illuminator with superluminescent light-emitting diode
US9432199B2 (en) 2010-06-16 2016-08-30 Ravenwhite Inc. System access determination based on classification of stimuli
US8537157B2 (en) * 2010-06-21 2013-09-17 Verizon Patent And Licensing Inc. Three-dimensional shape user interface for media content delivery systems and methods
US20110310100A1 (en) * 2010-06-21 2011-12-22 Verizon Patent And Licensing, Inc. Three-dimensional shape user interface for media content delivery systems and methods
US20110320215A1 (en) * 2010-06-24 2011-12-29 Cooper Jeff D System, method, and apparatus for conveying telefamiliarization of a remote location
US8266017B1 (en) * 2010-06-28 2012-09-11 Amazon Technologies, Inc. Methods and apparatus for providing recommendations and reminders to tote delivery customers
US8219463B2 (en) 2010-06-28 2012-07-10 Amazon Technologies, Inc. Methods and apparatus for returning items via a tote delivery service
US8175935B2 (en) 2010-06-28 2012-05-08 Amazon Technologies, Inc. Methods and apparatus for providing multiple product delivery options including a tote delivery option
US8156013B2 (en) 2010-06-28 2012-04-10 Amazon Technologies, Inc. Methods and apparatus for fulfilling tote deliveries
US8266018B2 (en) 2010-06-28 2012-09-11 Amazon Technologies, Inc. Methods and apparatus for managing tote orders
US20110317685A1 (en) * 2010-06-29 2011-12-29 Richard Torgersrud Consolidated voicemail platform
US8261198B2 (en) * 2010-06-30 2012-09-04 International Business Machines Corporation Automatic co-browsing invitations
US9171396B2 (en) * 2010-06-30 2015-10-27 Primal Space Systems Inc. System and method of procedural visibility for interactive and broadcast streaming of entertainment, advertising, and tactical 3D graphical information using a visibility event codec
US20120256915A1 (en) * 2010-06-30 2012-10-11 Jenkins Barry L System and method of procedural visibility for interactive and broadcast streaming of entertainment, advertising, and tactical 3d graphical information using a visibility event codec
US20120005598A1 (en) * 2010-06-30 2012-01-05 International Business Machines Corporation Automatic co-browsing invitations
US20130262269A1 (en) * 2010-07-06 2013-10-03 James Shaun O'Leary System for electronic transactions
US10917694B2 (en) 2010-07-12 2021-02-09 Time Warner Cable Enterprises Llc Apparatus and methods for content management and account linking across multiple content delivery networks
US11831955B2 (en) 2010-07-12 2023-11-28 Time Warner Cable Enterprises Llc Apparatus and methods for content management and account linking across multiple content delivery networks
US9906838B2 (en) 2010-07-12 2018-02-27 Time Warner Cable Enterprises Llc Apparatus and methods for content delivery and message exchange across multiple content delivery networks
US8537930B2 (en) 2010-07-20 2013-09-17 Lg Electronics Inc. Electronic device, electronic system, and method of providing information using the same
US8321301B2 (en) * 2010-07-20 2012-11-27 Sony Corporation Method and system for clothing shopping using an image of a shopper
US8611458B2 (en) 2010-07-20 2013-12-17 Lg Electronics Inc. Electronic device, electronic system, and method of providing information using the same
US8667112B2 (en) 2010-07-20 2014-03-04 Lg Electronics Inc. Selective interaction between networked smart devices
US20120023410A1 (en) * 2010-07-20 2012-01-26 Erik Roth Computing device and displaying method at the computing device
US8694686B2 (en) 2010-07-20 2014-04-08 Lg Electronics Inc. User profile based configuration of user experience environment
US20120022978A1 (en) * 2010-07-20 2012-01-26 Natalia Manea Online clothing shopping using 3d image of shopper
US10448117B2 (en) 2010-07-22 2019-10-15 Time Warner Cable Enterprises Llc Apparatus and methods for packetized content delivery over a bandwidth-efficient network
US9961413B2 (en) 2010-07-22 2018-05-01 Time Warner Cable Enterprises Llc Apparatus and methods for packetized content delivery over a bandwidth efficient network
US8478663B2 (en) 2010-07-28 2013-07-02 True Fit Corporation Fit recommendation via collaborative inference
WO2012016052A1 (en) * 2010-07-28 2012-02-02 True Fit Corporation Fit recommendation via collaborative inference
US11157995B2 (en) 2010-08-06 2021-10-26 Dkr Consulting Llc System and method for generating and distributing embeddable electronic commerce stores
US11651421B2 (en) 2010-08-06 2023-05-16 Dkr Consulting Llc System and method for facilitating social shopping
US11488237B2 (en) 2010-08-06 2022-11-01 Dkr Consulting Llc System and method for facilitating social shopping
US11455678B2 (en) 2010-08-06 2022-09-27 Dkr Consulting Llc System and method for distributable e-commerce product listings
US11900446B2 (en) 2010-08-06 2024-02-13 Dkr Consulting Llc System and method for facilitating social shopping
US20120038750A1 (en) * 2010-08-16 2012-02-16 Pantech Co., Ltd. Apparatus and method for displaying three-dimensional (3d) object
US10445775B2 (en) * 2010-08-27 2019-10-15 Oath Inc. Social aggregation communications
WO2012033654A3 (en) * 2010-08-28 2015-01-08 Ebay Inc. Multilevel silhouettes in an online shopping environment
CN103430202A (en) * 2010-08-28 2013-12-04 电子湾有限公司 Multilevel silhouettes in an online shopping environment
US11295374B2 (en) 2010-08-28 2022-04-05 Ebay Inc. Multilevel silhouettes in an online shopping environment
US8982155B2 (en) * 2010-08-31 2015-03-17 Ns Solutions Corporation Augmented reality providing system, information processing terminal, information processing apparatus, augmented reality providing method, information processing method, and program
US9843552B2 (en) 2010-08-31 2017-12-12 Apple Inc. Classification and status of users of networking and social activity systems
GB2498116A (en) * 2010-08-31 2013-07-03 Apple Inc Networked system with supporting media access and social networking
WO2012030588A3 (en) * 2010-08-31 2012-08-16 Apple Inc. Networked system with supporting media access and social networking
US9900642B2 (en) 2010-09-03 2018-02-20 Time Warner Cable Enterprises Llc Digital domain content processing and distribution apparatus and methods
USRE47760E1 (en) 2010-09-03 2019-12-03 Time Warner Cable Enterprises Llc Digital domain content processing and distribution apparatus and methods
US11153622B2 (en) 2010-09-03 2021-10-19 Time Warner Cable Enterprises Llc Digital domain content processing and distribution apparatus and methods
US9185341B2 (en) 2010-09-03 2015-11-10 Time Warner Cable Enterprises Llc Digital domain content processing and distribution apparatus and methods
US10681405B2 (en) 2010-09-03 2020-06-09 Time Warner Cable Enterprises Llc Digital domain content processing and distribution apparatus and methods
US10200731B2 (en) 2010-09-03 2019-02-05 Time Warner Cable Enterprises Llc Digital domain content processing and distribution apparatus and methods
US20120059787A1 (en) * 2010-09-07 2012-03-08 Research In Motion Limited Dynamically Manipulating An Emoticon or Avatar
US8620850B2 (en) * 2010-09-07 2013-12-31 Blackberry Limited Dynamically manipulating an emoticon or avatar
US20120062689A1 (en) * 2010-09-13 2012-03-15 Polycom, Inc. Personalized virtual video meeting rooms
US9128281B2 (en) 2010-09-14 2015-09-08 Microsoft Technology Licensing, Llc Eyepiece with uniformly illuminated reflective display
US20120066075A1 (en) * 2010-09-15 2012-03-15 Samsung Electronics Co., Ltd. Display apparatus and commercial display method of the same
WO2012037559A1 (en) * 2010-09-17 2012-03-22 Zecozi, Inc. System for supporting interactive commerce transactions and social network activity
US20120072304A1 (en) * 2010-09-17 2012-03-22 Homan Sven Method of Shopping Online with Real-Time Data Sharing Between Multiple Clients
US10127802B2 (en) 2010-09-28 2018-11-13 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US11900790B2 (en) 2010-09-28 2024-02-13 Icontrol Networks, Inc. Method, system and apparatus for automated reporting of account and sensor zone information to a central station
US10062273B2 (en) 2010-09-28 2018-08-28 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US10223903B2 (en) 2010-09-28 2019-03-05 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US9349276B2 (en) 2010-09-28 2016-05-24 Icontrol Networks, Inc. Automated reporting of account and sensor information
US11398147B2 (en) 2010-09-28 2022-07-26 Icontrol Networks, Inc. Method, system and apparatus for automated reporting of account and sensor zone information to a central station
US20120084783A1 (en) * 2010-10-01 2012-04-05 Fujifilm Corporation Automated operation list generation device, method and program
US8893136B2 (en) * 2010-10-01 2014-11-18 Fujifilm Corporation Automated operation list generation device, method and program
US10803478B2 (en) 2010-10-05 2020-10-13 Facebook, Inc. Providing social endorsements with online advertising
US20120096354A1 (en) * 2010-10-14 2012-04-19 Park Seungyong Mobile terminal and control method thereof
US20120102125A1 (en) * 2010-10-20 2012-04-26 Jeffrey Albert Dracup Method, apparatus, and computer program product for screened communications
US20120102409A1 (en) * 2010-10-25 2012-04-26 At&T Intellectual Property I, L.P. Providing interactive services to enhance information presentation experiences using wireless technologies
US9143881B2 (en) * 2010-10-25 2015-09-22 At&T Intellectual Property I, L.P. Providing interactive services to enhance information presentation experiences using wireless technologies
US8705811B1 (en) * 2010-10-26 2014-04-22 Apple Inc. Luminance adjusted face detection
US9141710B2 (en) * 2010-10-27 2015-09-22 International Business Machines Corporation Persisting annotations within a cobrowsing session
US20120110472A1 (en) * 2010-10-27 2012-05-03 International Business Machines Corporation Persisting annotations within a cobrowsing session
US9703539B2 (en) * 2010-10-29 2017-07-11 Microsoft Technology Licensing, Llc Viral application distribution
US20120110568A1 (en) * 2010-10-29 2012-05-03 Microsoft Corporation Viral Application Distribution
US9967335B2 (en) 2010-11-01 2018-05-08 Google Llc Social circles in social networks
US10122791B2 (en) 2010-11-01 2018-11-06 Google Llc Social circles in social networks
US20140189541A1 (en) * 2010-11-01 2014-07-03 Google Inc. Content sharing interface for sharing content in social networks
WO2012061824A1 (en) * 2010-11-05 2012-05-10 Myspace, Inc. Image auto tagging method and application
US20120116840A1 (en) * 2010-11-10 2012-05-10 Omer Alon Method and apparatus for marketing management
US11336551B2 (en) 2010-11-11 2022-05-17 Time Warner Cable Enterprises Llc Apparatus and methods for identifying and characterizing latency in a content delivery network
US10148623B2 (en) 2010-11-12 2018-12-04 Time Warner Cable Enterprises Llc Apparatus and methods ensuring data privacy in a content distribution network
US11271909B2 (en) 2010-11-12 2022-03-08 Time Warner Cable Enterprises Llc Apparatus and methods ensuring data privacy in a content distribution network
US20120123865A1 (en) * 2010-11-12 2012-05-17 Cellco Partnership D/B/A Verizon Wireless Enhanced shopping experience for mobile station users
US8762217B2 (en) 2010-11-22 2014-06-24 Etsy, Inc. Systems and methods for searching in an electronic commerce environment
US9684905B1 (en) 2010-11-22 2017-06-20 Experian Information Solutions, Inc. Systems and methods for data verification
WO2012071316A1 (en) * 2010-11-22 2012-05-31 Etsy, Inc. Systems and methods for searching in an electronic commerce environment
US20140114884A1 (en) * 2010-11-24 2014-04-24 Dhiraj Daway System and Method for Providing Wardrobe Assistance
US9710812B2 (en) * 2010-12-03 2017-07-18 Paypal, Inc. Social network payment system
US20130307851A1 (en) * 2010-12-03 2013-11-21 Rafael Hernández Stark Method for virtually trying on footwear
US11250426B2 (en) 2010-12-03 2022-02-15 Paypal, Inc. Social network payment system
JP2012118948A (en) * 2010-12-03 2012-06-21 Ns Solutions Corp Extended reality presentation device, and extended reality presentation method and program
US20120143761A1 (en) * 2010-12-03 2012-06-07 Ebay, Inc. Social network payment system
US10218748B2 (en) 2010-12-03 2019-02-26 Intouch Technologies, Inc. Systems and methods for dynamic bandwidth allocation
US9264664B2 (en) 2010-12-03 2016-02-16 Intouch Technologies, Inc. Systems and methods for dynamic bandwidth allocation
US8620730B2 (en) * 2010-12-15 2013-12-31 International Business Machines Corporation Promoting products in a virtual world
US20120158473A1 (en) * 2010-12-15 2012-06-21 International Business Machines Corporation Promoting products in a virtual world
US11750414B2 (en) 2010-12-16 2023-09-05 Icontrol Networks, Inc. Bidirectional security sensor communication for a premises security system
US9870594B2 (en) * 2010-12-17 2018-01-16 Glenn Alan Dildy Methods and systems for analyzing and providing data for business services
US20120185300A1 (en) * 2010-12-17 2012-07-19 Dildy Glenn Alan Methods and systems for analyzing and providing data for business services
US10796387B2 (en) 2010-12-17 2020-10-06 Glenn Alan DILDY Methods and systems for analyzing and providing data for business services
US11341840B2 (en) 2010-12-17 2022-05-24 Icontrol Networks, Inc. Method and system for processing security event data
US11798105B2 (en) 2010-12-17 2023-10-24 Glenn Alan DILDY Methods and systems for analyzing and providing data for business services
US10078958B2 (en) 2010-12-17 2018-09-18 Icontrol Networks, Inc. Method and system for logging security event data
US9256829B2 (en) 2010-12-17 2016-02-09 Microsoft Technology Licensing, Llc Information propagation probability for a social network
US10741057B2 (en) 2010-12-17 2020-08-11 Icontrol Networks, Inc. Method and system for processing security event data
US20120158538A1 (en) * 2010-12-20 2012-06-21 Electronics And Telecommunications Research Institute Terminal system, shopping system and method for shopping using the same
US11240059B2 (en) 2010-12-20 2022-02-01 Icontrol Networks, Inc. Defining and implementing sensor triggered response rules
US9729342B2 (en) 2010-12-20 2017-08-08 Icontrol Networks, Inc. Defining and implementing sensor triggered response rules
US9696888B2 (en) 2010-12-20 2017-07-04 Microsoft Technology Licensing, Llc Application-launching interface for multiple modes
US9430130B2 (en) 2010-12-20 2016-08-30 Microsoft Technology Licensing, Llc Customization of an immersive environment
US9864494B2 (en) 2010-12-23 2018-01-09 Microsoft Technology Licensing, Llc Application reporting in an application-selectable user interface
US9870132B2 (en) 2010-12-23 2018-01-16 Microsoft Technology Licensing, Llc Application reporting in an application-selectable user interface
US11126333B2 (en) 2010-12-23 2021-09-21 Microsoft Technology Licensing, Llc Application reporting in an application-selectable user interface
US10969944B2 (en) 2010-12-23 2021-04-06 Microsoft Technology Licensing, Llc Application reporting in an application-selectable user interface
US9766790B2 (en) 2010-12-23 2017-09-19 Microsoft Technology Licensing, Llc Application reporting in an application-selectable user interface
US20150269547A1 (en) * 2010-12-30 2015-09-24 Futurewei Technologies, Inc. System for Managing, Storing and Providing Shared Digital Content to Users in a User Relationship Defined Group in a Multi-Platform Environment
US10783503B2 (en) 2010-12-30 2020-09-22 Futurewei Technologies, Inc. System for managing, storing and providing shared digital content to users in a user relationship defined group in a multi-platform environment
US11810088B2 (en) 2010-12-30 2023-11-07 Huawei Technologies Co., Ltd. System for managing, storing and providing shared digital content to users in a user relationship defined group in a multi-platform environment
US20120173419A1 (en) * 2010-12-31 2012-07-05 Ebay, Inc. Visual transactions
US9423951B2 (en) 2010-12-31 2016-08-23 Microsoft Technology Licensing, Llc Content-based snap point
US20170076404A1 (en) * 2011-01-06 2017-03-16 Influitive Corporation Methods and Systems For Communicating Social Expression
US20120179573A1 (en) * 2011-01-06 2012-07-12 Triggerfox Corporation Methods and Systems for Communicating Social Expression
US20120179572A1 (en) * 2011-01-07 2012-07-12 Ebay, Inc. Conducting Transactions Through a Publisher
US8468052B2 (en) 2011-01-17 2013-06-18 Vegas.Com, Llc Systems and methods for providing activity and participation incentives
US9697567B2 (en) 2011-01-18 2017-07-04 The Western Union Company Universal ledger
US10984459B2 (en) 2011-01-18 2021-04-20 The Western Union Company Universal ledger
US20120185383A1 (en) * 2011-01-18 2012-07-19 The Western Union Company Universal ledger
US8620799B2 (en) * 2011-01-18 2013-12-31 The Western Union Company Universal ledger
US10102571B2 (en) 2011-01-18 2018-10-16 The Western Union Company Universal ledger
US10235704B2 (en) 2011-01-18 2019-03-19 The Western Union Company Universal ledger
US11663639B2 (en) 2011-01-18 2023-05-30 The Western Union Company Universal ledger
US20150012332A1 (en) * 2011-01-18 2015-01-08 Caterina Papachristos Business to business to shared communities system and method
US10102591B2 (en) 2011-01-21 2018-10-16 Livingsocial, Inc. Systems and methods to implement point of sale (POS) terminals, process orders and manage order fulfillment
US11562443B2 (en) 2011-01-21 2023-01-24 Livingsocial, Inc. Systems and methods to implement point of sale (POS) terminals, process orders and manage order fulfillment
US10867359B2 (en) 2011-01-21 2020-12-15 Livingsocial, Inc. Systems and methods to implement point of sale (POS) terminals, process orders and manage order fulfillment
US20120191529A1 (en) * 2011-01-26 2012-07-26 Intuit Inc. Methods and systems for a predictive advertising tool
US11468983B2 (en) 2011-01-28 2022-10-11 Teladoc Health, Inc. Time-dependent navigation of telepresence robots
US11501325B2 (en) 2011-01-28 2022-11-15 Etsy, Inc. Systems and methods for shopping in an electronic commerce environment
US9469030B2 (en) 2011-01-28 2016-10-18 Intouch Technologies Interfacing with a mobile telepresence robot
US8965579B2 (en) * 2011-01-28 2015-02-24 Intouch Technologies Interfacing with a mobile telepresence robot
US20220199253A1 (en) * 2011-01-28 2022-06-23 Intouch Technologies, Inc. Interfacing With a Mobile Telepresence Robot
US11830618B2 (en) * 2011-01-28 2023-11-28 Teladoc Health, Inc. Interfacing with a mobile telepresence robot
US10591921B2 (en) 2011-01-28 2020-03-17 Intouch Technologies, Inc. Time-dependent navigation of telepresence robots
US11289192B2 (en) * 2011-01-28 2022-03-29 Intouch Technologies, Inc. Interfacing with a mobile telepresence robot
US9323250B2 (en) 2011-01-28 2016-04-26 Intouch Technologies, Inc. Time-dependent navigation of telepresence robots
US20120197439A1 (en) * 2011-01-28 2012-08-02 Intouch Health Interfacing with a mobile telepresence robot
US20230044151A1 (en) * 2011-01-28 2023-02-09 Etsy, Inc. Systems and Methods for Shopping in an Electronic Commerce Environment
US9785149B2 (en) 2011-01-28 2017-10-10 Intouch Technologies, Inc. Time-dependent navigation of telepresence robots
US20120197700A1 (en) * 2011-01-28 2012-08-02 Etsy, Inc. Systems and methods for shopping in an electronic commerce environment
US10399223B2 (en) 2011-01-28 2019-09-03 Intouch Technologies, Inc. Interfacing with a mobile telepresence robot
US10650399B2 (en) * 2011-01-28 2020-05-12 Etsy, Inc. Systems and methods for shopping in an electronic commerce environment
US20120246035A1 (en) * 2011-02-07 2012-09-27 Kenisha Cross Computer software program and fashion styling tool
US9602414B2 (en) 2011-02-09 2017-03-21 Time Warner Cable Enterprises Llc Apparatus and methods for controlled bandwidth reclamation
US8898581B2 (en) * 2011-02-22 2014-11-25 Sony Corporation Display control device, display control method, search device, search method, program and communication system
US9430795B2 (en) * 2011-02-22 2016-08-30 Sony Corporation Display control device, display control method, search device, search method, program and communication system
US20150012386A1 (en) * 2011-02-22 2015-01-08 Sony Corporation Display control device, display control method, search device, search method, program and communication system
US9886709B2 (en) 2011-02-22 2018-02-06 Sony Corporation Display control device, display control method, search device, search method, program and communication system
CN102708122A (en) * 2011-02-22 2012-10-03 索尼公司 Display control device, display control method, search device, search method, program and communication system
US20120215805A1 (en) * 2011-02-22 2012-08-23 Sony Corporation Display control device, display control method, search device, search method, program and communication system
CN103415865A (en) * 2011-03-08 2013-11-27 脸谱公司 Selecting social endorsement information for an advertisement for display to a viewing user
WO2012121908A1 (en) * 2011-03-08 2012-09-13 Facebook, Inc. Selecting social endorsement information for an advertisement for display to a viewing user
US9455943B2 (en) 2011-03-11 2016-09-27 James Robert Miner Systems and methods for message collection
US9419928B2 (en) 2011-03-11 2016-08-16 James Robert Miner Systems and methods for message collection
US8819156B2 (en) 2011-03-11 2014-08-26 James Robert Miner Systems and methods for message collection
US20120239485A1 (en) * 2011-03-14 2012-09-20 Bo Hu Associating deals with events in a social networking system
US10540692B2 (en) 2011-03-14 2020-01-21 Facebook, Inc. Presenting deals to a user of social networking system
US10346880B2 (en) 2011-03-14 2019-07-09 Facebook, Inc. Offering social deals based on activities of connections in a social networking system
US10504152B2 (en) 2011-03-14 2019-12-10 Facebook, Inc. Platform for distributing deals via a social networking system
WO2012125673A1 (en) * 2011-03-15 2012-09-20 Videodeals.com S.A. System and method for marketing
US10204086B1 (en) 2011-03-16 2019-02-12 Google Llc Document processing service for displaying comments included in messages
US11669674B1 (en) 2011-03-16 2023-06-06 Google Llc Document processing service for displaying comments included in messages
US9129302B2 (en) * 2011-03-17 2015-09-08 Sears Brands, L.L.C. Methods and systems for coupon service applications
US8600359B2 (en) 2011-03-21 2013-12-03 International Business Machines Corporation Data session synchronization with phone numbers
US20120246238A1 (en) * 2011-03-21 2012-09-27 International Business Machines Corporation Asynchronous messaging tags
US8688090B2 (en) 2011-03-21 2014-04-01 International Business Machines Corporation Data session preferences
US20130005366A1 (en) * 2011-03-21 2013-01-03 International Business Machines Corporation Asynchronous messaging tags
US8959165B2 (en) * 2011-03-21 2015-02-17 International Business Machines Corporation Asynchronous messaging tags
US11900276B2 (en) * 2011-03-22 2024-02-13 Nant Holdings Ip, Llc Distributed relationship reasoning engine for generating hypothesis about relations between aspects of objects in response to an inquiry
US10455089B2 (en) * 2011-03-22 2019-10-22 Fmr Llc Augmented reality system for product selection
US20120246027A1 (en) * 2011-03-22 2012-09-27 David Martin Augmented Reality System for Product Selection
US20200356883A1 (en) * 2011-03-22 2020-11-12 Nant Holdings Ip, Llc Distributed relationship reasoning engine for generating hypothesis about relations between aspects of objects in response to an inquiry
US20120246581A1 (en) * 2011-03-24 2012-09-27 Thinglink Oy Mechanisms to share opinions about products
US9383917B2 (en) 2011-03-28 2016-07-05 Microsoft Technology Licensing, Llc Predictive tiling
US8825627B1 (en) * 2011-03-29 2014-09-02 Amazon Technologies, Inc. Creating ambience during on-line shopping
US20120257797A1 (en) * 2011-04-05 2012-10-11 Microsoft Corporation Biometric recognition
US9539500B2 (en) 2011-04-05 2017-01-10 Microsoft Technology Licensing, Llc Biometric recognition
US8824749B2 (en) * 2011-04-05 2014-09-02 Microsoft Corporation Biometric recognition
US20120259726A1 (en) * 2011-04-06 2012-10-11 Bamin Inc System and method for designing, creating and distributing consumer-specified products
US20120259744A1 (en) * 2011-04-07 2012-10-11 Infosys Technologies, Ltd. System and method for augmented reality and social networking enhanced retail shopping
US20140108426A1 (en) * 2011-04-08 2014-04-17 The Regents Of The University Of California Interactive system for collecting, displaying, and ranking items based on quantitative and textual input from multiple participants
US20120259826A1 (en) * 2011-04-08 2012-10-11 Rym Zalila-Wenkstern Customizable Interfacing Agents, Systems, And Methods
US20150135048A1 (en) * 2011-04-20 2015-05-14 Panafold Methods, apparatus, and systems for visually representing a relative relevance of content elements to an attractor
US20120271684A1 (en) * 2011-04-20 2012-10-25 Jon Shutter Method and System for Providing Location Targeted Advertisements
US20120272168A1 (en) * 2011-04-20 2012-10-25 Panafold Methods, apparatus, and systems for visually representing a relative relevance of content elements to an attractor
WO2012148904A1 (en) * 2011-04-25 2012-11-01 Veveo, Inc. System and method for an intelligent personal timeline assistant
US10769739B2 (en) 2011-04-25 2020-09-08 Intouch Technologies, Inc. Systems and methods for management of information among medical providers and facilities
CN103688279A (en) * 2011-04-25 2014-03-26 韦韦欧股份有限公司 System and method for intelligent personal timeline assistant
US20120278252A1 (en) * 2011-04-27 2012-11-01 Sethna Shaun B System and method for recommending establishments and items based on consumption history of similar consumers
US11443214B2 (en) 2011-04-29 2022-09-13 Google Llc Moderation of user-generated content
US10095980B1 (en) 2011-04-29 2018-10-09 Google Llc Moderation of user-generated content
US11868914B2 (en) 2011-04-29 2024-01-09 Google Llc Moderation of user-generated content
US9552552B1 (en) 2011-04-29 2017-01-24 Google Inc. Identification of over-clustered map features
US8806352B2 (en) 2011-05-06 2014-08-12 David H. Sitrick System for collaboration of a specific image and utilizing selected annotations while viewing and relative to providing a display presentation
US9224129B2 (en) 2011-05-06 2015-12-29 David H. Sitrick System and methodology for multiple users concurrently working and viewing on a common project
US9330366B2 (en) 2011-05-06 2016-05-03 David H. Sitrick System and method for collaboration via team and role designation and control and management of annotations
US11611595B2 (en) 2011-05-06 2023-03-21 David H. Sitrick Systems and methodologies providing collaboration among a plurality of computing appliances, utilizing a plurality of areas of memory to store user input as associated with an associated computing appliance providing the input
US20120284641A1 (en) * 2011-05-06 2012-11-08 David H. Sitrick Systems And Methodologies Providing For Collaboration By Respective Users Of A Plurality Of Computing Appliances Working Concurrently On A Common Project Having An Associated Display
US8918721B2 (en) * 2011-05-06 2014-12-23 David H. Sitrick Systems and methodologies providing for collaboration by respective users of a plurality of computing appliances working concurrently on a common project having an associated display
US8918724B2 (en) 2011-05-06 2014-12-23 David H. Sitrick Systems and methodologies providing controlled voice and data communication among a plurality of computing appliances associated as team members of at least one respective team or of a plurality of teams and sub-teams within the teams
US8918722B2 (en) 2011-05-06 2014-12-23 David H. Sitrick System and methodology for collaboration in groups with split screen displays
US8990677B2 (en) 2011-05-06 2015-03-24 David H. Sitrick System and methodology for collaboration utilizing combined display with evolving common shared underlying image
US8875011B2 (en) 2011-05-06 2014-10-28 David H. Sitrick Systems and methodologies providing for collaboration among a plurality of users at a plurality of computing appliances
US10402485B2 (en) 2011-05-06 2019-09-03 David H. Sitrick Systems and methodologies providing controlled collaboration among a plurality of users
US8826147B2 (en) 2011-05-06 2014-09-02 David H. Sitrick System and methodology for collaboration, with selective display of user input annotations among member computing appliances of a group/team
US8914735B2 (en) 2011-05-06 2014-12-16 David H. Sitrick Systems and methodologies providing collaboration and display among a plurality of users
US8924859B2 (en) 2011-05-06 2014-12-30 David H. Sitrick Systems and methodologies supporting collaboration of users as members of a team, among a plurality of computing appliances
US8918723B2 (en) 2011-05-06 2014-12-23 David H. Sitrick Systems and methodologies comprising a plurality of computing appliances having input apparatus and display apparatus and logically structured as a main team
US20120287122A1 (en) * 2011-05-09 2012-11-15 Telibrahma Convergent Communications Pvt. Ltd. Virtual apparel fitting system and method
WO2012155144A1 (en) * 2011-05-12 2012-11-15 John Devecka An interactive mobile-optimized icon-based profile display and associated social network functionality
US20120290987A1 (en) * 2011-05-13 2012-11-15 Gupta Kalyan M System and Method for Virtual Object Placement
WO2012158479A2 (en) * 2011-05-13 2012-11-22 Knexus Research Corporation System and method for virtual object placement
US8893048B2 (en) * 2011-05-13 2014-11-18 Kalyan M. Gupta System and method for virtual object placement
WO2012158479A3 (en) * 2011-05-13 2014-05-08 Knexus Research Corporation System and method for virtual object placement
US9974612B2 (en) 2011-05-19 2018-05-22 Intouch Technologies, Inc. Enhanced diagnostics for a telepresence robot
US20120297319A1 (en) * 2011-05-20 2012-11-22 Christopher Craig Collins Solutions Configurator
US20120302212A1 (en) * 2011-05-25 2012-11-29 Critical Medical Solutions, Inc. Secure mobile radiology communication system
US9329774B2 (en) 2011-05-27 2016-05-03 Microsoft Technology Licensing, Llc Switching back to a previously-interacted-with application
US9535597B2 (en) 2011-05-27 2017-01-03 Microsoft Technology Licensing, Llc Managing an immersive interface in a multi-application immersive environment
US10303325B2 (en) 2011-05-27 2019-05-28 Microsoft Technology Licensing, Llc Multi-application environment
US9658766B2 (en) 2011-05-27 2017-05-23 Microsoft Technology Licensing, Llc Edge gesture
CN103582863A (en) * 2011-05-27 2014-02-12 微软公司 Multi-application environment
US11698721B2 (en) 2011-05-27 2023-07-11 Microsoft Technology Licensing, Llc Managing an immersive interface in a multi-application immersive environment
US10462513B2 (en) 2011-06-01 2019-10-29 At&T Intellectual Property I, L.P. Object image generation
US9241184B2 (en) 2011-06-01 2016-01-19 At&T Intellectual Property I, L.P. Clothing visualization
US8433623B2 (en) 2011-06-03 2013-04-30 Target Brands, Inc. Methods for creating a gift registry web page with recommendations and assistance
US11768882B2 (en) 2011-06-09 2023-09-26 MemoryWeb, LLC Method and apparatus for managing digital files
US11163823B2 (en) 2011-06-09 2021-11-02 MemoryWeb, LLC Method and apparatus for managing digital files
US11636150B2 (en) 2011-06-09 2023-04-25 MemoryWeb, LLC Method and apparatus for managing digital files
US11481433B2 (en) 2011-06-09 2022-10-25 MemoryWeb, LLC Method and apparatus for managing digital files
WO2012170919A1 (en) * 2011-06-09 2012-12-13 Tripadvisor Llc Social travel recommendations
US20130024391A1 (en) * 2011-06-09 2013-01-24 Tripadvisor Llc Social travel recommendations
US11017020B2 (en) * 2011-06-09 2021-05-25 MemoryWeb, LLC Method and apparatus for managing digital files
US11636149B1 (en) 2011-06-09 2023-04-25 MemoryWeb, LLC Method and apparatus for managing digital files
US11899726B2 (en) 2011-06-09 2024-02-13 MemoryWeb, LLC Method and apparatus for managing digital files
US11170042B1 (en) 2011-06-09 2021-11-09 MemoryWeb, LLC Method and apparatus for managing digital files
US11599573B1 (en) 2011-06-09 2023-03-07 MemoryWeb, LLC Method and apparatus for managing digital files
US20120317309A1 (en) * 2011-06-10 2012-12-13 Benco Davis S Method to synchronize content across networks
WO2012170163A1 (en) * 2011-06-10 2012-12-13 Aliphcom Media device, application, and content management using sensory input
US8446275B2 (en) 2011-06-10 2013-05-21 Aliphcom General health and wellness management method and apparatus for a wellness application using data from a data-capable band
US9160795B2 (en) * 2011-06-10 2015-10-13 Alcatel Lucent Method to synchronize content across networks
US9258670B2 (en) 2011-06-10 2016-02-09 Aliphcom Wireless enabled cap for a data-capable device
US9069380B2 (en) 2011-06-10 2015-06-30 Aliphcom Media device, application, and content management using sensory input
US20140149264A1 (en) * 2011-06-14 2014-05-29 Hemanth Kumar Satyanarayana Method and system for virtual collaborative shopping
US20120324118A1 (en) * 2011-06-14 2012-12-20 Spot On Services, Inc. System and method for facilitating technical support
WO2012172568A1 (en) * 2011-06-14 2012-12-20 Hemanth Kumar Satyanarayana Method and system for virtual collaborative shopping
US20120320054A1 (en) * 2011-06-15 2012-12-20 King Abdullah University Of Science And Technology Apparatus, System, and Method for 3D Patch Compression
US10115079B1 (en) 2011-06-16 2018-10-30 Consumerinfo.Com, Inc. Authentication alerts
US11232413B1 (en) 2011-06-16 2022-01-25 Consumerinfo.Com, Inc. Authentication alerts
US9665854B1 (en) 2011-06-16 2017-05-30 Consumerinfo.Com, Inc. Authentication alerts
US10719873B1 (en) 2011-06-16 2020-07-21 Consumerinfo.Com, Inc. Providing credit inquiry alerts
US10685336B1 (en) 2011-06-16 2020-06-16 Consumerinfo.Com, Inc. Authentication alerts
US9607336B1 (en) 2011-06-16 2017-03-28 Consumerinfo.Com, Inc. Providing credit inquiry alerts
US20130286014A1 (en) * 2011-06-22 2013-10-31 Gemvision Corporation, LLC Custom Jewelry Configurator
US9323871B2 (en) 2011-06-27 2016-04-26 Trimble Navigation Limited Collaborative development of a model on a network
US20120330716A1 (en) * 2011-06-27 2012-12-27 Cadio, Inc. Triggering collection of consumer data from external data sources based on location data
US20130007669A1 (en) * 2011-06-29 2013-01-03 Yu-Ling Lu System and method for editing interactive three-dimension multimedia, and online editing and exchanging architecture and method thereof
US8966402B2 (en) * 2011-06-29 2015-02-24 National Taipei University Of Education System and method for editing interactive three-dimension multimedia, and online editing and exchanging architecture and method thereof
US9754312B2 (en) * 2011-06-30 2017-09-05 Ncr Corporation Techniques for personalizing self checkouts
US20130232037A1 (en) * 2011-06-30 2013-09-05 Ncr Corporation Techniques for personalizing self checkouts
US10176233B1 (en) 2011-07-08 2019-01-08 Consumerinfo.Com, Inc. Lifescore
US11665253B1 (en) 2011-07-08 2023-05-30 Consumerinfo.Com, Inc. LifeScore
US10798197B2 (en) 2011-07-08 2020-10-06 Consumerinfo.Com, Inc. Lifescore
US9116948B2 (en) * 2011-07-13 2015-08-25 Linkedin Corporation Method and system for semantic search against a document collection
US9710518B2 (en) 2011-07-13 2017-07-18 Linkedin Corporation Method and system for semantic search against a document collection
US20130232171A1 (en) * 2011-07-13 2013-09-05 Linkedin Corporation Method and system for semantic search against a document collection
US20130018724A1 (en) * 2011-07-14 2013-01-17 Enpulz, Llc Buyer group interface for a demand driven promotion system
US20130018746A1 (en) * 2011-07-14 2013-01-17 Enpulz, Llc Buyer group definition for a demand driven promotion system
US20130018748A1 (en) * 2011-07-14 2013-01-17 Enpulz, Llc Integrated buyer group and social networking interface for a demand driven promotion system
US20130024507A1 (en) * 2011-07-18 2013-01-24 Yahoo!, Inc. Analyzing Content Demand Using Social Signals
US8756279B2 (en) * 2011-07-18 2014-06-17 Yahoo! Inc. Analyzing content demand using social signals
US10559019B1 (en) * 2011-07-19 2020-02-11 Ken Beauvais System for centralized E-commerce overhaul
US8498900B1 (en) * 2011-07-25 2013-07-30 Dash Software, LLC Bar or restaurant check-in and payment systems and methods of their operation
US20140249879A1 (en) * 2011-07-29 2014-09-04 Mark Oleynik Network system and method
US20130083065A1 (en) * 2011-08-02 2013-04-04 Jessica Schulze Fit prediction on three-dimensional virtual model
US11438665B2 (en) 2011-08-04 2022-09-06 Ebay Inc. User commentary systems and methods
WO2013020102A1 (en) * 2011-08-04 2013-02-07 Dane Glasgow User commentary systems and methods
US9532110B2 (en) 2011-08-04 2016-12-27 Ebay Inc. User commentary systems and methods
US11765433B2 (en) 2011-08-04 2023-09-19 Ebay Inc. User commentary systems and methods
US10827226B2 (en) * 2011-08-04 2020-11-03 Ebay Inc. User commentary systems and methods
AU2012289870B2 (en) * 2011-08-04 2015-07-02 Ebay Inc. User commentary systems and methods
US9301015B2 (en) 2011-08-04 2016-03-29 Ebay Inc. User commentary systems and methods
US20170164057A1 (en) * 2011-08-04 2017-06-08 Ebay Inc. User commentary systems and methods
US20160156980A1 (en) * 2011-08-04 2016-06-02 Ebay Inc. User commentary systems and methods
US9967629B2 (en) * 2011-08-04 2018-05-08 Ebay Inc. User commentary systems and methods
US9584866B2 (en) * 2011-08-04 2017-02-28 Ebay Inc. User commentary systems and methods
US10319010B2 (en) * 2011-08-12 2019-06-11 Ebay Inc. Systems and methods for personalized pricing
US20130124360A1 (en) * 2011-08-12 2013-05-16 Ebay Inc. Systems and methods for personalized pricing
US11341552B2 (en) 2011-08-12 2022-05-24 Ebay Inc. Systems and methods for personalized pricing
US9513874B2 (en) * 2011-08-18 2016-12-06 Infosys Limited Enterprise computing platform with support for editing documents via logical views
US20130047135A1 (en) * 2011-08-18 2013-02-21 Infosys Limited Enterprise computing platform
US20140122231A1 (en) * 2011-08-19 2014-05-01 Qualcomm Incorporated System and method for interactive promotion of products and services
US8433609B2 (en) 2011-08-24 2013-04-30 Raj Vasant Abhyanker Geospatially constrained gastronomic bidding
US20130051633A1 (en) * 2011-08-26 2013-02-28 Sanyo Electric Co., Ltd. Image processing apparatus
US11005917B2 (en) 2011-08-29 2021-05-11 Aibuy, Inc. Containerized software for virally copying from one endpoint to another
US20130060873A1 (en) * 2011-08-29 2013-03-07 Saurabh Agrawal Real time event reviewing system and method
US9451010B2 (en) 2011-08-29 2016-09-20 Cinsay, Inc. Containerized software for virally copying from one endpoint to another
US20130054425A1 (en) * 2011-08-29 2013-02-28 Francesco Alexander Portelos Web-based system permitting a customer to shop online for clothes with their own picture
US10171555B2 (en) 2011-08-29 2019-01-01 Cinsay, Inc. Containerized software for virally copying from one endpoint to another
US8769053B2 (en) 2011-08-29 2014-07-01 Cinsay, Inc. Containerized software for virally copying from one endpoint to another
US10963657B2 (en) * 2011-08-30 2021-03-30 Digimarc Corporation Methods and arrangements for identifying objects
US11288472B2 (en) * 2011-08-30 2022-03-29 Digimarc Corporation Cart-based shopping arrangements employing probabilistic item identification
US11281876B2 (en) 2011-08-30 2022-03-22 Digimarc Corporation Retail store with sensor-fusion enhancements
US20170364975A1 (en) * 2011-08-31 2017-12-21 Ncr Corporation Techniques for collaborative shopping
US20140173464A1 (en) * 2011-08-31 2014-06-19 Kobi Eisenberg Providing application context for a conversation
US9754298B2 (en) * 2011-08-31 2017-09-05 Ncr Corporation Techniques for collaborative shopping
US20130054328A1 (en) * 2011-08-31 2013-02-28 Ncr Corporation Techniques for collaborative shopping
US20170364974A1 (en) * 2011-08-31 2017-12-21 Ncr Corporation Techniques for collaborative shopping
US10482509B2 (en) * 2011-08-31 2019-11-19 Ncr Corporation Techniques for collaborative shopping
US10579250B2 (en) 2011-09-01 2020-03-03 Microsoft Technology Licensing, Llc Arranging tiles
US20130057746A1 (en) * 2011-09-02 2013-03-07 Tomohisa Takaoka Information processing apparatus, information processing method, program, recording medium, and information processing system
US9336137B2 (en) 2011-09-02 2016-05-10 Google Inc. System and method for performing data management in a collaborative development environment
US9538021B2 (en) * 2011-09-02 2017-01-03 Sony Corporation Information processing apparatus, information processing method, program, recording medium, and information processing system
US10257129B2 (en) 2011-09-02 2019-04-09 Sony Corporation Information processing apparatus, information processing method, program, recording medium, and information processing system for selecting an information poster and displaying a view image of the selected information poster
CN103152375A (en) * 2011-09-02 2013-06-12 索尼公司 Information processing apparatus, information processing method, program, recording medium, and information processing system
US10114865B2 (en) 2011-09-09 2018-10-30 Microsoft Technology Licensing, Llc Tile cache
US10353566B2 (en) 2011-09-09 2019-07-16 Microsoft Technology Licensing, Llc Semantic zoom animations
US11392288B2 (en) 2011-09-09 2022-07-19 Microsoft Technology Licensing, Llc Semantic zoom animations
US9557909B2 (en) 2011-09-09 2017-01-31 Microsoft Technology Licensing, Llc Semantic zoom linguistic helpers
US10254955B2 (en) 2011-09-10 2019-04-09 Microsoft Technology Licensing, Llc Progressively indicating new content in an application-selectable user interface
US9230014B1 (en) * 2011-09-13 2016-01-05 Sri International Method and apparatus for recommending work artifacts based on collaboration events
US11087022B2 (en) 2011-09-16 2021-08-10 Consumerinfo.Com, Inc. Systems and methods of identity protection and management
US10061936B1 (en) 2011-09-16 2018-08-28 Consumerinfo.Com, Inc. Systems and methods of identity protection and management
US9542553B1 (en) 2011-09-16 2017-01-10 Consumerinfo.Com, Inc. Systems and methods of identity protection and management
US11790112B1 (en) 2011-09-16 2023-10-17 Consumerinfo.Com, Inc. Systems and methods of identity protection and management
US10642999B2 (en) 2011-09-16 2020-05-05 Consumerinfo.Com, Inc. Systems and methods of identity protection and management
WO2013043346A1 (en) * 2011-09-21 2013-03-28 Facebook, Inc. Structured objects and actions on a social networking system
US20130073485A1 (en) * 2011-09-21 2013-03-21 Nokia Corporation Method and apparatus for managing recommendation models
US10614365B2 (en) 2011-09-21 2020-04-07 Wsou Investments, Llc Method and apparatus for managing recommendation models
US8849721B2 (en) 2011-09-21 2014-09-30 Facebook, Inc. Structured objects and actions on a social networking system
US9218605B2 (en) * 2011-09-21 2015-12-22 Nokia Technologies Oy Method and apparatus for managing recommendation models
US20130080161A1 (en) * 2011-09-27 2013-03-28 Kabushiki Kaisha Toshiba Speech recognition apparatus and method
US11651412B2 (en) 2011-09-28 2023-05-16 Nara Logics, Inc. Systems and methods for providing recommendations based on collaborative and/or content-based nodal interrelationships
US11727249B2 (en) 2011-09-28 2023-08-15 Nara Logics, Inc. Methods for constructing and applying synaptic networks
US10467677B2 (en) 2011-09-28 2019-11-05 Nara Logics, Inc. Systems and methods for providing recommendations based on collaborative and/or content-based nodal interrelationships
US9009088B2 (en) 2011-09-28 2015-04-14 Nara Logics, Inc. Apparatus and method for providing harmonized recommendations based on an integrated user profile
US9449336B2 (en) 2011-09-28 2016-09-20 Nara Logics, Inc. Apparatus and method for providing harmonized recommendations based on an integrated user profile
US8909583B2 (en) 2011-09-28 2014-12-09 Nara Logics, Inc. Systems and methods for providing recommendations based on collaborative and/or content-based nodal interrelationships
US10423880B2 (en) 2011-09-28 2019-09-24 Nara Logics, Inc. Systems and methods for providing recommendations based on collaborative and/or content-based nodal interrelationships
US20130085931A1 (en) * 2011-09-29 2013-04-04 Ebay, Inc. Social proximity payments
CN103843021A (en) * 2011-09-29 2014-06-04 电子湾有限公司 Social proximity payments
WO2013049735A3 (en) * 2011-09-29 2014-05-08 Electronic Commodities Exchange, L.P. Methods and systems for providing an interactive communication session with a remote consultant
US11100551B2 (en) 2011-09-29 2021-08-24 Electronic Commodities Exchange Apparatus, article of manufacture and methods for customized design of a jewelry item
US11042923B2 (en) 2011-09-29 2021-06-22 Electronic Commodities Exchange, L.P. Apparatus, article of manufacture and methods for recommending a jewelry item
US10417686B2 (en) 2011-09-29 2019-09-17 Electronic Commodities Exchange Apparatus, article of manufacture, and methods for recommending a jewelry item
CN111626809A (en) * 2011-09-29 2020-09-04 电子商品交易合伙人有限公司 Method and system for providing an interactive communication session with a remote advisor
US20170228718A1 (en) * 2011-09-29 2017-08-10 Paypal, Inc. Social proximity payments
US10496978B2 (en) * 2011-09-29 2019-12-03 Paypal, Inc. Social proximity payments
US9152991B2 (en) 2011-09-29 2015-10-06 Electronic Commodities Exchange, L.P. Methods and systems for generating an interactive communication session with a live consultant
US9576284B2 (en) * 2011-09-29 2017-02-21 Paypal, Inc. Social proximity payments
US9679324B2 (en) 2011-09-29 2017-06-13 Electronic Commodities Exchange, L.P. Systems and methods for interactive jewelry design
CN109450961A (en) * 2011-09-29 2019-03-08 电子商品交易合伙人有限公司 For providing and the method and system of the interactive communication session of long-range consultant
US10650428B2 (en) 2011-09-29 2020-05-12 Electronic Commodities Exchange, L.P. Systems and methods for interactive jewelry design
US8626601B2 (en) 2011-09-29 2014-01-07 Electronic Commodities Exchange, L.P. Methods and systems for providing an interactive communication session with a remote consultant
CN104321798A (en) * 2011-09-29 2015-01-28 电子商品交易合伙人有限公司 Methods and systems for providing an interactive communication session with a remote consultant
US10204366B2 (en) 2011-09-29 2019-02-12 Electronic Commodities Exchange Apparatus, article of manufacture and methods for customized design of a jewelry item
WO2013049735A2 (en) * 2011-09-29 2013-04-04 Electronic Commodities Exchange, L.P. Methods and systems for providing an interactive communication session with a remote consultant
US9268406B2 (en) 2011-09-30 2016-02-23 Microsoft Technology Licensing, Llc Virtual spectator experience with a personal audio/visual apparatus
US9286711B2 (en) 2011-09-30 2016-03-15 Microsoft Technology Licensing, Llc Representing a location at a previous time period using an augmented reality display
US9606992B2 (en) 2011-09-30 2017-03-28 Microsoft Technology Licensing, Llc Personal audio/visual apparatus providing resource management
US11200620B2 (en) 2011-10-13 2021-12-14 Consumerinfo.Com, Inc. Debt services candidate locator
US9972048B1 (en) 2011-10-13 2018-05-15 Consumerinfo.Com, Inc. Debt services candidate locator
US9536263B1 (en) 2011-10-13 2017-01-03 Consumerinfo.Com, Inc. Debt services candidate locator
US9621541B1 (en) 2011-10-17 2017-04-11 Google Inc. Systems and methods for incremental loading of collaboratively generated presentations
US8434002B1 (en) * 2011-10-17 2013-04-30 Google Inc. Systems and methods for collaborative editing of elements in a presentation document
US10430388B1 (en) 2011-10-17 2019-10-01 Google Llc Systems and methods for incremental loading of collaboratively generated presentations
US8769045B1 (en) 2011-10-17 2014-07-01 Google Inc. Systems and methods for incremental loading of collaboratively generated presentations
US10481771B1 (en) 2011-10-17 2019-11-19 Google Llc Systems and methods for controlling the display of online documents
US8397153B1 (en) 2011-10-17 2013-03-12 Google Inc. Systems and methods for rich presentation overlays
US8812946B1 (en) 2011-10-17 2014-08-19 Google Inc. Systems and methods for rendering documents
US9946725B1 (en) 2011-10-17 2018-04-17 Google Llc Systems and methods for incremental loading of collaboratively generated presentations
US8471871B1 (en) 2011-10-17 2013-06-25 Google Inc. Authoritative text size measuring
US8538828B2 (en) 2011-10-18 2013-09-17 Autotrader.Com, Inc. Consumer-to-business exchange auction
US8595082B2 (en) 2011-10-18 2013-11-26 Autotrader.Com, Inc. Consumer-to-business exchange marketplace
WO2013059726A1 (en) * 2011-10-21 2013-04-25 Wal-Mart Stores, Inc. Systems, devices and methods for list display and management
US20130111359A1 (en) * 2011-10-27 2013-05-02 Disney Enterprises, Inc. Relocating a user's online presence across virtual rooms, servers, and worlds based on locations of friends and characters
US8869044B2 (en) * 2011-10-27 2014-10-21 Disney Enterprises, Inc. Relocating a user's online presence across virtual rooms, servers, and worlds based on locations of friends and characters
US11271986B2 (en) 2011-10-28 2022-03-08 Microsoft Technology Licensing, Llc Document sharing through browser
US20130117378A1 (en) * 2011-11-06 2013-05-09 Radoslav P. Kotorov Method for collaborative social shopping engagement
US10331323B2 (en) 2011-11-08 2019-06-25 Intouch Technologies, Inc. Tele-presence system with a user interface that displays different communication links
US9715337B2 (en) 2011-11-08 2017-07-25 Intouch Technologies, Inc. Tele-presence system with a user interface that displays different communication links
US20130113830A1 (en) * 2011-11-09 2013-05-09 Sony Corporation Information processing apparatus, display control method, and program
US20140207578A1 (en) * 2011-11-11 2014-07-24 Millennial Media, Inc. System For Targeting Advertising To A Mobile Communication Device Based On Photo Metadata
US10565625B2 (en) 2011-11-11 2020-02-18 Millennial Media Llc Identifying a same user of multiple communication devices based on application use patterns
US9898852B2 (en) * 2011-11-15 2018-02-20 Trimble Navigation Limited Providing a real-time shared viewing experience in a three-dimensional modeling environment
US9460542B2 (en) 2011-11-15 2016-10-04 Trimble Navigation Limited Browser-based collaborative development of a 3D model
US20130120367A1 (en) * 2011-11-15 2013-05-16 Trimble Navigation Limited Providing A Real-Time Shared Viewing Experience In A Three-Dimensional Modeling Environment
US10445414B1 (en) 2011-11-16 2019-10-15 Google Llc Systems and methods for collaborative document editing
US20130133056A1 (en) * 2011-11-21 2013-05-23 Matthew Christian Taylor Single login Identifier Used Across Multiple Shopping Sites
US10868890B2 (en) 2011-11-22 2020-12-15 Trimble Navigation Limited 3D modeling system distributed between a client device web browser and a server
US20130145282A1 (en) * 2011-12-05 2013-06-06 Zhenzhen ZHAO Systems and methods for social-event based sharing
US9236024B2 (en) 2011-12-06 2016-01-12 Glasses.Com Inc. Systems and methods for obtaining a pupillary distance measurement using a mobile computing device
WO2013085953A1 (en) * 2011-12-06 2013-06-13 Morot-Gaudry Jean Michel Immediate purchase of goods and services which appear on a public broadcast
RU2637461C2 (en) * 2011-12-06 2017-12-04 Жан Мишель МОРО-ГОДРИ Method of electronic commerce through public broadcasting environment
EP2788837A4 (en) * 2011-12-06 2015-08-12 Jean Michel Morot-Gaudry Immediate purchase of goods and services which appear on a public broadcast
US20130151382A1 (en) * 2011-12-09 2013-06-13 Andrew S. Fuller System and method for modeling articles of clothing
US20130151637A1 (en) * 2011-12-13 2013-06-13 Findandremind.Com System and methods for filtering and organizing events and activities
US9064015B2 (en) * 2011-12-14 2015-06-23 Artist Growth, Llc Action alignment for event planning, project management and process structuring
US10956009B2 (en) * 2011-12-15 2021-03-23 L'oreal Method and system for interactive cosmetic enhancements interface
US20190026013A1 (en) * 2011-12-15 2019-01-24 Modiface Inc. Method and system for interactive cosmetic enhancements interface
US10332045B2 (en) 2011-12-16 2019-06-25 Illinois Tool Works Inc. Tagging of assets for content distribution in an enterprise management system
WO2013090743A3 (en) * 2011-12-16 2014-12-18 Illinois Tool Works Inc. Cloud based recipe distribution in an enterprise management system
US9740998B2 (en) 2011-12-16 2017-08-22 Illinois Tool Works, Inc. Cloud based recipe distribution in an enterprise management system
WO2013090737A3 (en) * 2011-12-16 2015-05-14 Illinois Tool Works Inc. Content provider feeds in a food product asset related network
US11301788B2 (en) 2011-12-16 2022-04-12 Illinois Tool Works, Inc. Data usage and aggregation in a food product asset related network
US20130155107A1 (en) * 2011-12-16 2013-06-20 Identive Group, Inc. Systems and Methods for Providing an Augmented Reality Experience
US20140279235A1 (en) * 2011-12-20 2014-09-18 Thomas E. Sandholm Enabling collaborative reactions to images
US9215272B2 (en) 2011-12-21 2015-12-15 Seiko Epson Corporation Method for securely distributing meeting data from interactive whiteboard projector
US20130170715A1 (en) * 2012-01-03 2013-07-04 Waymon B. Reed Garment modeling simulation system and process
US20130173226A1 (en) * 2012-01-03 2013-07-04 Waymon B. Reed Garment modeling simulation system and process
US8832116B1 (en) 2012-01-11 2014-09-09 Google Inc. Using mobile application logs to measure and maintain accuracy of business information
US9762404B2 (en) * 2012-01-15 2017-09-12 Microsoft Technology Licensing, Llc Providing contextual information associated with a communication participant
US20130185347A1 (en) * 2012-01-15 2013-07-18 Microsoft Corporation Providing contextual information associated with a communication participant
US9401058B2 (en) 2012-01-30 2016-07-26 International Business Machines Corporation Zone based presence determination via voiceprint location awareness
US8977680B2 (en) 2012-02-02 2015-03-10 Vegas.Com Systems and methods for shared access to gaming accounts
US8606645B1 (en) * 2012-02-02 2013-12-10 SeeMore Interactive, Inc. Method, medium, and system for an augmented reality retail application
US20130211950A1 (en) * 2012-02-09 2013-08-15 Microsoft Corporation Recommender system
US10438268B2 (en) * 2012-02-09 2019-10-08 Microsoft Technology Licensing, Llc Recommender system
US10296962B2 (en) 2012-02-13 2019-05-21 International Business Machines Corporation Collaborative shopping across multiple shopping channels using shared virtual shopping carts
US20130219434A1 (en) * 2012-02-20 2013-08-22 Sony Corporation 3d body scan input to tv for virtual fitting of apparel presented on retail store tv channel
US9529520B2 (en) * 2012-02-24 2016-12-27 Samsung Electronics Co., Ltd. Method of providing information and mobile terminal thereof
US20130227471A1 (en) * 2012-02-24 2013-08-29 Samsung Electronics Co., Ltd. Method of providing information and mobile terminal thereof
US9659034B2 (en) * 2012-02-24 2017-05-23 Samsung Electronics Co., Ltd. Method of providing capture data and mobile terminal thereof
US20130227456A1 (en) * 2012-02-24 2013-08-29 Samsung Electronics Co. Ltd. Method of providing capture data and mobile terminal thereof
US20130232412A1 (en) * 2012-03-02 2013-09-05 Nokia Corporation Method and apparatus for providing media event suggestions
US20130232017A1 (en) * 2012-03-04 2013-09-05 Tal Zvi NATHANEL Device, system, and method of electronic payment
US9633344B2 (en) * 2012-03-04 2017-04-25 Quick Check Ltd. Device, system, and method of electronic payment
US10789526B2 (en) 2012-03-09 2020-09-29 Nara Logics, Inc. Method, system, and non-transitory computer-readable medium for constructing and applying synaptic networks
US11151617B2 (en) 2012-03-09 2021-10-19 Nara Logics, Inc. Systems and methods for providing recommendations based on collaborative and/or content-based nodal interrelationships
US20130241937A1 (en) * 2012-03-13 2013-09-19 International Business Machines Corporation Social Interaction Analysis and Display
US20150058083A1 (en) * 2012-03-15 2015-02-26 Isabel Herrera System for personalized fashion services
US8767016B2 (en) * 2012-03-15 2014-07-01 Shun-Ching Yang Virtual reality interaction system and method
US20130241920A1 (en) * 2012-03-15 2013-09-19 Shun-Ching Yang Virtual reality interaction system and method
WO2013142625A3 (en) * 2012-03-20 2014-05-22 A9.Com, Inc. Structured lighting-based content interactions in multiple environments
US20130254006A1 (en) * 2012-03-20 2013-09-26 Pick'ntell Ltd. Apparatus and method for transferring commercial data at a store
US9304646B2 (en) 2012-03-20 2016-04-05 A9.Com, Inc. Multi-user content interactions
US9367124B2 (en) 2012-03-20 2016-06-14 A9.Com, Inc. Multi-application content interactions
US9213420B2 (en) 2012-03-20 2015-12-15 A9.Com, Inc. Structured lighting based content interactions
US9373025B2 (en) 2012-03-20 2016-06-21 A9.Com, Inc. Structured lighting-based content interactions in multiple environments
EP2641539B1 (en) * 2012-03-21 2021-12-08 OneFID GmbH Method for determining the dimensions of a foot
US20140108202A1 (en) * 2012-03-30 2014-04-17 Rakuten,Inc. Information processing apparatus, information processing method, information processing program, and recording medium
US20130257877A1 (en) * 2012-03-30 2013-10-03 Videx, Inc. Systems and Methods for Generating an Interactive Avatar Model
US11109090B2 (en) 2012-04-04 2021-08-31 Time Warner Cable Enterprises Llc Apparatus and methods for automated highlight reel creation in a content delivery network
US9467723B2 (en) 2012-04-04 2016-10-11 Time Warner Cable Enterprises Llc Apparatus and methods for automated highlight reel creation in a content delivery network
US20130268887A1 (en) * 2012-04-04 2013-10-10 Adam ROUSSOS Device and process for augmenting an electronic menu using social context data
US9299099B1 (en) 2012-04-04 2016-03-29 Google Inc. Providing recommendations in a social shopping trip
US9171315B1 (en) 2012-04-04 2015-10-27 Google Inc. System and method for negotiating item prices
US10250932B2 (en) 2012-04-04 2019-04-02 Time Warner Cable Enterprises Llc Apparatus and methods for automated highlight reel creation in a content delivery network
US20150088622A1 (en) * 2012-04-06 2015-03-26 LiveOne, Inc. Social media application for a media content providing platform
US9251313B2 (en) 2012-04-11 2016-02-02 Intouch Technologies, Inc. Systems and methods for visualizing and managing telepresence devices in healthcare networks
US10762170B2 (en) 2012-04-11 2020-09-01 Intouch Technologies, Inc. Systems and methods for visualizing patient and telepresence device statistics in a healthcare network
US8902278B2 (en) 2012-04-11 2014-12-02 Intouch Technologies, Inc. Systems and methods for visualizing and managing telepresence devices in healthcare networks
US11205510B2 (en) 2012-04-11 2021-12-21 Teladoc Health, Inc. Systems and methods for visualizing and managing telepresence devices in healthcare networks
US9367522B2 (en) 2012-04-13 2016-06-14 Google Inc. Time-based presentation editing
US20130282344A1 (en) * 2012-04-20 2013-10-24 Matthew Flagg Systems and methods for simulating accessory display on a subject
US9058605B2 (en) * 2012-04-20 2015-06-16 Taaz, Inc. Systems and methods for simulating accessory display on a subject
US20130278626A1 (en) * 2012-04-20 2013-10-24 Matthew Flagg Systems and methods for simulating accessory display on a subject
US20130290416A1 (en) * 2012-04-27 2013-10-31 Steve Nelson Method for Securely Distributing Meeting Data from Interactive Whiteboard Projector
US8874657B2 (en) * 2012-04-27 2014-10-28 Seiko Epson Corporation Method for securely distributing meeting data from interactive whiteboard projector
US10726451B1 (en) 2012-05-02 2020-07-28 James E Plankey System and method for creating and managing multimedia sales promotions
US20170091830A9 (en) * 2012-05-02 2017-03-30 James Plankey System and method for managing multimedia sales promotions
US9865007B2 (en) * 2012-05-02 2018-01-09 James E. Plankey System and method for managing multimedia sales promotions
US9853959B1 (en) 2012-05-07 2017-12-26 Consumerinfo.Com, Inc. Storage and maintenance of personal data
US11356430B1 (en) 2012-05-07 2022-06-07 Consumerinfo.Com, Inc. Storage and maintenance of personal data
US9635159B2 (en) * 2012-05-08 2017-04-25 Nokia Technologies Oy Method and apparatus for providing immersive interaction via everyday devices
US20150111547A1 (en) * 2012-05-08 2015-04-23 Nokia Corporation Method and apparatus for providing immersive interaction via everyday devices
US20130300739A1 (en) * 2012-05-09 2013-11-14 Mstar Semiconductor, Inc. Stereoscopic apparel try-on method and device
US20130317943A1 (en) * 2012-05-11 2013-11-28 Cassi East Trade show and exhibition application for collectables and its method of use
US9356904B1 (en) * 2012-05-14 2016-05-31 Google Inc. Event invitations having cinemagraphs
US20130311339A1 (en) * 2012-05-17 2013-11-21 Leo Jeremias Chat enabled online marketplace systems and methods
US20140351093A1 (en) * 2012-05-17 2014-11-27 Leo Jeremias Chat enabled online marketplace systems and methods
US10328576B2 (en) 2012-05-22 2019-06-25 Intouch Technologies, Inc. Social behavior rules for a medical telepresence robot
US10780582B2 (en) 2012-05-22 2020-09-22 Intouch Technologies, Inc. Social behavior rules for a medical telepresence robot
US11628571B2 (en) 2012-05-22 2023-04-18 Teladoc Health, Inc. Social behavior rules for a medical telepresence robot
US10603792B2 (en) 2012-05-22 2020-03-31 Intouch Technologies, Inc. Clinical workflows utilizing autonomous and semiautonomous telemedicine devices
US9776327B2 (en) 2012-05-22 2017-10-03 Intouch Technologies, Inc. Social behavior rules for a medical telepresence robot
US11515049B2 (en) 2012-05-22 2022-11-29 Teladoc Health, Inc. Graphical user interfaces including touchpad driving interfaces for telemedicine devices
US10061896B2 (en) 2012-05-22 2018-08-28 Intouch Technologies, Inc. Graphical user interfaces including touchpad driving interfaces for telemedicine devices
US10658083B2 (en) 2012-05-22 2020-05-19 Intouch Technologies, Inc. Graphical user interfaces including touchpad driving interfaces for telemedicine devices
US10892052B2 (en) 2012-05-22 2021-01-12 Intouch Technologies, Inc. Graphical user interfaces including touchpad driving interfaces for telemedicine devices
US11453126B2 (en) 2012-05-22 2022-09-27 Teladoc Health, Inc. Clinical workflows utilizing autonomous and semi-autonomous telemedicine devices
US9361021B2 (en) 2012-05-22 2016-06-07 Irobot Corporation Graphical user interfaces including touchpad driving interfaces for telemedicine devices
US9174342B2 (en) 2012-05-22 2015-11-03 Intouch Technologies, Inc. Social behavior rules for a medical telepresence robot
US9235929B2 (en) 2012-05-23 2016-01-12 Glasses.Com Inc. Systems and methods for efficiently processing virtual 3-D data
US9208608B2 (en) 2012-05-23 2015-12-08 Glasses.Com, Inc. Systems and methods for feature tracking
US9378584B2 (en) * 2012-05-23 2016-06-28 Glasses.Com Inc. Systems and methods for rendering virtual try-on products
US10147233B2 (en) 2012-05-23 2018-12-04 Glasses.Com Inc. Systems and methods for generating a 3-D model of a user for a virtual try-on product
US20130314410A1 (en) * 2012-05-23 2013-11-28 1-800 Contacts, Inc. Systems and methods for rendering virtual try-on products
US9286715B2 (en) 2012-05-23 2016-03-15 Glasses.Com Inc. Systems and methods for adjusting a virtual try-on
US9483853B2 (en) 2012-05-23 2016-11-01 Glasses.Com Inc. Systems and methods to display rendered images
US9311746B2 (en) 2012-05-23 2016-04-12 Glasses.Com Inc. Systems and methods for generating a 3-D model of a virtual try-on product
US20130314443A1 (en) * 2012-05-28 2013-11-28 Clayton Grassick Methods, mobile device and server for support of augmented reality on the mobile device
US9984408B1 (en) * 2012-05-30 2018-05-29 Amazon Technologies, Inc. Method, medium, and system for live video cooperative shopping
US9652654B2 (en) 2012-06-04 2017-05-16 Ebay Inc. System and method for providing an interactive shopping experience via webcam
US9880019B2 (en) 2012-06-05 2018-01-30 Apple Inc. Generation of intersection information by a mapping service
US10018478B2 (en) 2012-06-05 2018-07-10 Apple Inc. Voice instructions during navigation
US10318104B2 (en) 2012-06-05 2019-06-11 Apple Inc. Navigation application with adaptive instruction text
GB2516595A (en) * 2012-06-05 2015-01-28 Mimecast North America Inc Electronic communicating
US10718625B2 (en) 2012-06-05 2020-07-21 Apple Inc. Voice instructions during navigation
US9903732B2 (en) 2012-06-05 2018-02-27 Apple Inc. Providing navigation instructions while device is in locked mode
US20130345980A1 (en) * 2012-06-05 2013-12-26 Apple Inc. Providing navigation instructions while operating navigation application in background
WO2013184407A1 (en) * 2012-06-05 2013-12-12 Mimecast North America Inc. Electronic communicating
US10911872B2 (en) 2012-06-05 2021-02-02 Apple Inc. Context-aware voice guidance
US8965696B2 (en) * 2012-06-05 2015-02-24 Apple Inc. Providing navigation instructions while operating navigation application in background
US10006505B2 (en) 2012-06-05 2018-06-26 Apple Inc. Rendering road signs during navigation
US10176633B2 (en) 2012-06-05 2019-01-08 Apple Inc. Integrated mapping and navigation application
US9886794B2 (en) 2012-06-05 2018-02-06 Apple Inc. Problem reporting in maps
US10508926B2 (en) 2012-06-05 2019-12-17 Apple Inc. Providing navigation instructions while device is in locked mode
US11290820B2 (en) 2012-06-05 2022-03-29 Apple Inc. Voice instructions during navigation
US11082773B2 (en) 2012-06-05 2021-08-03 Apple Inc. Context-aware voice guidance
US11055912B2 (en) 2012-06-05 2021-07-06 Apple Inc. Problem reporting in maps
US10732003B2 (en) 2012-06-05 2020-08-04 Apple Inc. Voice instructions during navigation
US10156455B2 (en) 2012-06-05 2018-12-18 Apple Inc. Context-aware voice guidance
US11727641B2 (en) 2012-06-05 2023-08-15 Apple Inc. Problem reporting in maps
US10323701B2 (en) 2012-06-05 2019-06-18 Apple Inc. Rendering road signs during navigation
US9997069B2 (en) 2012-06-05 2018-06-12 Apple Inc. Context-aware voice guidance
US20130332840A1 (en) * 2012-06-10 2013-12-12 Apple Inc. Image application for creating and sharing image streams
US8798401B1 (en) * 2012-06-15 2014-08-05 Shutterfly, Inc. Image sharing with facial recognition models
US20140253553A1 (en) * 2012-06-17 2014-09-11 Spaceview, Inc. Visualization of three-dimensional models of objects in two-dimensional environment
US20130339159A1 (en) * 2012-06-18 2013-12-19 Lutebox Ltd. Social networking system and methods of implementation
US10540067B2 (en) 2012-06-20 2020-01-21 Maquet Critical Care Ab Breathing apparatus having a display with user selectable background
US20150199095A1 (en) * 2012-06-20 2015-07-16 Maquet Critical Care Ab Breathing apparatus having a display with user selectable background
US10296181B2 (en) * 2012-06-20 2019-05-21 Maquet Critical Care Ab Breathing apparatus having a display with user selectable background
US9607330B2 (en) 2012-06-21 2017-03-28 Cinsay, Inc. Peer-assisted shopping
US10726458B2 (en) 2012-06-21 2020-07-28 Aibuy, Inc. Peer-assisted shopping
US9001118B2 (en) 2012-06-21 2015-04-07 Microsoft Technology Licensing, Llc Avatar construction using depth camera
US20150039468A1 (en) * 2012-06-21 2015-02-05 Cinsay, Inc. Apparatus and method for peer-assisted e-commerce shopping
US20210027349A1 (en) * 2012-06-21 2021-01-28 Aibuy, Inc. Apparatus and method for peer-assisted e-commerce shopping
WO2013192557A3 (en) * 2012-06-21 2014-12-18 Cinsay, Inc. Peer-assisted shopping
US10789631B2 (en) * 2012-06-21 2020-09-29 Aibuy, Inc. Apparatus and method for peer-assisted e-commerce shopping
CN104769627A (en) * 2012-06-21 2015-07-08 Cinsay, Inc. Peer-assisted shopping
US20130342564A1 (en) * 2012-06-25 2013-12-26 Peter Tobias Kinnebrew Configured virtual environments
US9645394B2 (en) * 2012-06-25 2017-05-09 Microsoft Technology Licensing, Llc Configured virtual environments
US20140007016A1 (en) * 2012-06-27 2014-01-02 Hon Hai Precision Industry Co., Ltd. Product fitting device and method
US20150170254A1 (en) * 2012-06-28 2015-06-18 Unijunction (Pty) Ltd System and method for processing an electronic order
US10068547B2 (en) * 2012-06-29 2018-09-04 Disney Enterprises, Inc. Augmented reality surface painting
US20140002472A1 (en) * 2012-06-29 2014-01-02 Disney Enterprises, Inc. Augmented reality surface painting
US10019747B2 (en) * 2012-06-30 2018-07-10 At&T Intellectual Property I, L.P. Enhancing a user's shopping experience
US20150379613A1 (en) * 2012-06-30 2015-12-31 At&T Mobility Ii Llc Enhancing a User's Shopping Experience
US20140012911A1 (en) * 2012-07-09 2014-01-09 Jenny Q. Ta Social network system and method
US10025448B2 (en) 2012-07-10 2018-07-17 Huawei Technologies Co., Ltd. Information exchange method, user end, and system for online collaborative shopping
US20140108178A1 (en) * 2012-07-10 2014-04-17 Huawei Technologies Co., Ltd. Information exchange method, user end, and system for online collaborative shopping
US20150134496A1 (en) * 2012-07-10 2015-05-14 Dressformer, Inc. Method for providing for the remote fitting and/or selection of clothing
US9211239B2 (en) * 2012-07-10 2015-12-15 Huawei Technologies Co., Ltd. Information exchange method, user end, and system for online collaborative shopping
US20140019424A1 (en) * 2012-07-11 2014-01-16 Google Inc. Identifier validation and debugging
US11798035B2 (en) 2012-07-25 2023-10-24 Rakuten Group, Inc. Promoting products on a social networking system based on information from a merchant site
US20140032332A1 (en) * 2012-07-25 2014-01-30 SocialWire, Inc. Promoting products on a social networking system based on information from a merchant site
US10909574B2 (en) * 2012-07-25 2021-02-02 Rakuten Usa, Inc. Promoting products on a social networking system based on information from a merchant site
US9349218B2 (en) 2012-07-26 2016-05-24 Qualcomm Incorporated Method and apparatus for controlling augmented reality
US9361730B2 (en) 2012-07-26 2016-06-07 Qualcomm Incorporated Interactions of tangible and augmented reality objects
US9087403B2 (en) 2012-07-26 2015-07-21 Qualcomm Incorporated Maintaining continuity of augmentations
US9514570B2 (en) 2012-07-26 2016-12-06 Qualcomm Incorporated Augmentation of tangible objects as user interface controller
US9813255B2 (en) * 2012-07-30 2017-11-07 Microsoft Technology Licensing, Llc Collaboration environments and views
US20140032679A1 (en) * 2012-07-30 2014-01-30 Microsoft Corporation Collaboration environments and views
US20140032359A1 (en) * 2012-07-30 2014-01-30 Infosys Limited System and method for providing intelligent recommendations
US20180123816A1 (en) * 2012-07-30 2018-05-03 Microsoft Technology Licensing, Llc Collaboration environments and views
CN104854623A (en) * 2012-08-02 2015-08-19 Microsoft Technology Licensing, LLC Avatar-based virtual dressing room
US10664901B2 (en) * 2012-08-03 2020-05-26 Eyefitu Ag Garment fitting system and method
US20140358738A1 (en) * 2012-08-03 2014-12-04 Isabelle Ohnemus Garment fitting system and method
US9799064B2 (en) * 2012-08-03 2017-10-24 Eyefitu Ag Garment fitting system and method
US20140040041A1 (en) * 2012-08-03 2014-02-06 Isabelle Ohnemus Garment fitting system and method
US8781932B2 (en) * 2012-08-08 2014-07-15 At&T Intellectual Property I, L.P. Platform for hosting virtual events
US20160142995A1 (en) * 2012-08-09 2016-05-19 Actv8, Inc. Method and apparatus for interactive mobile offer system based on proximity of mobile device to media source
US20140047355A1 (en) * 2012-08-09 2014-02-13 Gface Gmbh Simultaneous evaluation of items via online services
US20160219407A1 (en) * 2012-08-09 2016-07-28 Actv8, Inc. Method and apparatus for interactive mobile offer system using time location for out-of-home display screens
US9258342B2 (en) * 2012-08-09 2016-02-09 Actv8, Inc. Method and apparatus for interactive mobile offer system using time and location for out-of-home display screens
US9426772B2 (en) * 2012-08-09 2016-08-23 Actv8, Inc. Method and apparatus for interactive mobile offer system based on proximity of mobile device to media source
US20140047072A1 (en) * 2012-08-09 2014-02-13 Actv8, Inc. Method and apparatus for interactive mobile offer system using time and location for out-of-home display screens
US9596569B2 (en) * 2012-08-09 2017-03-14 Actv8, Inc. Method and apparatus for interactive mobile offer system using time location for out-of-home display screens
US9076247B2 (en) * 2012-08-10 2015-07-07 Ppg Industries Ohio, Inc. System and method for visualizing an object in a simulated environment
US20160140754A1 (en) * 2012-08-10 2016-05-19 Ppg Industries Ohio, Inc. System and method for visualizing an object in a simulated environment
US10121281B2 (en) * 2012-08-10 2018-11-06 Vitro, S.A.B. De C.V. System and method for visualizing an object in a simulated environment
US10771544B2 (en) 2012-08-14 2020-09-08 Bloompapers Sl Online fashion community system and method
US9479577B2 (en) * 2012-08-14 2016-10-25 Chicisimo S.L. Online fashion community system and method
US11509712B2 (en) * 2012-08-14 2022-11-22 Bloompapers Sl Fashion item analysis based on user ensembles in online fashion community
US20140052784A1 (en) * 2012-08-14 2014-02-20 Chicisimo S.L. Online fashion community system and method
US20200358847A1 (en) * 2012-08-14 2020-11-12 Bloompapers Sl Fashion item analysis based on user ensembles in online fashion community
US10311508B2 (en) * 2012-08-15 2019-06-04 Fashpose, Llc Garment modeling simulation system and process
US20150235305A1 (en) * 2012-08-15 2015-08-20 Fashpose, Llc Garment modeling simulation system and process
US9894115B2 (en) * 2012-08-20 2018-02-13 Samsung Electronics Co., Ltd. Collaborative data editing and processing system
US20140053086A1 (en) * 2012-08-20 2014-02-20 Samsung Electronics Co., Ltd. Collaborative data editing and processing system
US20150215243A1 (en) * 2012-08-22 2015-07-30 Nokia Corporation Method and apparatus for exchanging status updates while collaborating
US9787616B2 (en) * 2012-08-22 2017-10-10 Nokia Technologies Oy Method and apparatus for exchanging status updates while collaborating
US10664646B2 (en) 2012-08-29 2020-05-26 Tencent Technology (Shenzhen) Company Limited Methods and devices for using one terminal to control a multimedia application executed on another terminal
US20140095965A1 (en) * 2012-08-29 2014-04-03 Tencent Technology (Shenzhen) Company Limited Methods and devices for terminal control
US9846685B2 (en) * 2012-08-29 2017-12-19 Tencent Technology (Shenzhen) Company Limited Methods and devices for terminal control
US9996909B2 (en) * 2012-08-30 2018-06-12 Rakuten, Inc. Clothing image processing device, clothing image display method and program
US20140067604A1 (en) * 2012-09-05 2014-03-06 Robert D. Fish Digital Advisor
US20140075329A1 (en) * 2012-09-10 2014-03-13 Samsung Electronics Co. Ltd. Method and device for transmitting information related to event
US11159851B2 (en) 2012-09-14 2021-10-26 Time Warner Cable Enterprises Llc Apparatus and methods for providing enhanced or interactive features
US9633342B2 (en) 2012-09-14 2017-04-25 Bank Of America Corporation Gift card association with account
US20140081839A1 (en) * 2012-09-14 2014-03-20 Bank Of America Corporation Gift card association with account
US20140129370A1 (en) * 2012-09-14 2014-05-08 James L. Mabrey Chroma Key System and Method for Facilitating Social E-Commerce
US20140095349A1 (en) * 2012-09-14 2014-04-03 James L. Mabrey System and Method for Facilitating Social E-Commerce
US9355392B2 (en) * 2012-09-14 2016-05-31 Bank Of America Corporation Gift card association with account
US9519895B2 (en) 2012-09-14 2016-12-13 Bank Of America Corporation Gift card association with account
US20140082143A1 (en) * 2012-09-17 2014-03-20 Samsung Electronics Co., Ltd. Method and apparatus for tagging multimedia data
US9811865B2 (en) * 2012-09-17 2017-11-07 Adobe Systems Incorporated Method and apparatus for measuring perceptible properties of media content
US20140082493A1 (en) * 2012-09-17 2014-03-20 Adobe Systems Inc. Method and apparatus for measuring perceptible properties of media content
US9654578B2 (en) * 2012-09-17 2017-05-16 Samsung Electronics Co., Ltd. Method and apparatus for tagging multimedia data
US8977622B1 (en) * 2012-09-17 2015-03-10 Amazon Technologies, Inc. Evaluation of nodes
US9830344B2 (en) 2012-09-17 2017-11-28 Amazon Technologies, Inc. Evaluation of nodes
US20140081750A1 (en) * 2012-09-19 2014-03-20 Mastercard International Incorporated Social media transaction visualization structure
US10853890B2 (en) * 2012-09-19 2020-12-01 Mastercard International Incorporated Social media transaction visualization structure
US20140085293A1 (en) * 2012-09-21 2014-03-27 Luxand, Inc. Method of creating avatar from user submitted image
US9314692B2 (en) * 2012-09-21 2016-04-19 Luxand, Inc. Method of creating avatar from user submitted image
US20140092101A1 (en) * 2012-09-28 2014-04-03 Samsung Electronics Co., Ltd. Apparatus and method for producing animated emoticon
US20150261647A1 (en) * 2012-10-02 2015-09-17 Nec Corporation Information system construction assistance device, information system construction assistance method, and recording medium
US20140201039A1 (en) * 2012-10-08 2014-07-17 Livecom Technologies, Llc System and method for an automated process for visually identifying a product's presence and making the product available for viewing
US20140108235A1 (en) * 2012-10-16 2014-04-17 American Express Travel Related Services Company, Inc. Systems and Methods for Payment Settlement
US20140118482A1 (en) * 2012-10-26 2014-05-01 Korea Advanced Institute Of Science And Technology Method and apparatus for 2d to 3d conversion using panorama image
US20140122291A1 (en) * 2012-10-31 2014-05-01 Microsoft Corporation Bargaining Through a User-Specific Item List
US20140129935A1 (en) * 2012-11-05 2014-05-08 Dolly OVADIA NAHON Method and Apparatus for Developing and Playing Natural User Interface Applications
US9501140B2 (en) * 2012-11-05 2016-11-22 Onysus Software Ltd Method and apparatus for developing and playing natural user interface applications
WO2014070293A1 (en) * 2012-11-05 2014-05-08 Nara Logics, Inc. Systems and methods for providing enhanced neural network genesis and recommendations to one or more users
US9741071B2 (en) * 2012-11-07 2017-08-22 Hand Held Products, Inc. Computer-assisted shopping and product location
US20140129378A1 (en) * 2012-11-07 2014-05-08 Hand Held Products, Inc. Computer-assisted shopping and product location
US10402895B2 (en) * 2012-11-07 2019-09-03 Hand Held Products, Inc. Computer-assisted shopping and product location
US8874653B2 (en) 2012-11-12 2014-10-28 Maximilian A. Chang Vehicle security and customization
US11863310B1 (en) 2012-11-12 2024-01-02 Consumerinfo.Com, Inc. Aggregating user web browsing data
US9654541B1 (en) 2012-11-12 2017-05-16 Consumerinfo.Com, Inc. Aggregating user web browsing data
US11012491B1 (en) 2012-11-12 2021-05-18 Consumerinfo.Com, Inc. Aggregating user web browsing data
US10277659B1 (en) 2012-11-12 2019-04-30 Consumerinfo.Com, Inc. Aggregating user web browsing data
US8892639B2 (en) * 2012-11-14 2014-11-18 Institute for Information Industry Method and system for processing file stored in cloud storage and computer readable storage medium storing the method
US20140136600A1 (en) * 2012-11-14 2014-05-15 Institute For Information Industry Method and system for processing file stored in cloud storage and computer readable storage medium storing the method
US11694132B2 (en) 2012-11-15 2023-07-04 Impel It! Inc. Methods and systems for electronic form identification and population
US10824975B2 (en) 2012-11-15 2020-11-03 Impel It! Inc. Methods and systems for electronic form identification and population
US10083411B2 (en) 2012-11-15 2018-09-25 Impel It! Inc. Methods and systems for the sale of consumer services
US10402760B2 (en) * 2012-11-15 2019-09-03 Impel It! Inc. Methods and systems for the sale of consumer services
US8807427B1 (en) 2012-11-20 2014-08-19 Sean I. Mcghie Conversion/transfer of non-negotiable credits to in-game funds for in-game purchases
US9098611B2 (en) 2012-11-26 2015-08-04 Intouch Technologies, Inc. Enhanced video interaction for a user interface of a telepresence network
US11910128B2 (en) 2012-11-26 2024-02-20 Teladoc Health, Inc. Enhanced video interaction for a user interface of a telepresence network
US10924708B2 (en) 2012-11-26 2021-02-16 Teladoc Health, Inc. Enhanced video interaction for a user interface of a telepresence network
US10334205B2 (en) 2012-11-26 2019-06-25 Intouch Technologies, Inc. Enhanced video interaction for a user interface of a telepresence network
US9529785B2 (en) 2012-11-27 2016-12-27 Google Inc. Detecting relationships between edits and acting on a subset of edits
US20140149247A1 (en) * 2012-11-28 2014-05-29 Josh Frey System and Method for Order Processing
US9336607B1 (en) * 2012-11-28 2016-05-10 Amazon Technologies, Inc. Automatic identification of projection surfaces
US20140157145A1 (en) * 2012-11-30 2014-06-05 Facebook, Inc Social menu pages
US11651426B1 (en) 2012-11-30 2023-05-16 Consumerinfo.Com, Inc. Credit score goals and alerts systems and methods
US10366450B1 (en) 2012-11-30 2019-07-30 Consumerinfo.Com, Inc. Credit data analysis
US11308551B1 (en) 2012-11-30 2022-04-19 Consumerinfo.Com, Inc. Credit data analysis
US10963959B2 (en) 2012-11-30 2021-03-30 Consumerinfo.Com, Inc. Presentation of credit score factors
US9495714B2 (en) * 2012-11-30 2016-11-15 Facebook, Inc. Implementing menu pages in a social networking system
US11132742B1 (en) 2012-11-30 2021-09-28 Consumerinfo.Com, Inc. Credit score goals and alerts systems and methods
US9830646B1 (en) 2012-11-30 2017-11-28 Consumerinfo.Com, Inc. Credit score goals and alerts systems and methods
US10171576B2 (en) * 2012-12-03 2019-01-01 Tencent Technology (Shenzhen) Company Limited Method, apparatus, and system for interaction between terminals
US20150341430A1 (en) * 2012-12-03 2015-11-26 Tencent Technology (Shenzhen) Company Limited Method, apparatus, and system for interaction between terminals
US10255598B1 (en) 2012-12-06 2019-04-09 Consumerinfo.Com, Inc. Credit card account data extraction
CN103024569A (en) * 2012-12-07 2013-04-03 Konka Group Co., Ltd. Method and system for performing parent-child education data interaction through smart television
US9565472B2 (en) 2012-12-10 2017-02-07 Time Warner Cable Enterprises Llc Apparatus and methods for content transfer protection
US10958629B2 (en) 2012-12-10 2021-03-23 Time Warner Cable Enterprises Llc Apparatus and methods for content transfer protection
US20140160122A1 (en) * 2012-12-10 2014-06-12 Microsoft Corporation Creating a virtual representation based on camera data
US11741681B2 (en) * 2012-12-10 2023-08-29 Nant Holdings Ip, Llc Interaction analysis systems and methods
US10050945B2 (en) 2012-12-10 2018-08-14 Time Warner Cable Enterprises Llc Apparatus and methods for content transfer protection
US20150379532A1 (en) * 2012-12-11 2015-12-31 Beijing Jingdong Century Trading Co., Ltd. Method and system for identifying bad commodities based on user purchase behaviors
US20150052444A1 (en) * 2012-12-12 2015-02-19 Huizhou Tcl Mobile Communication Co., Ltd Method of displaying a dlna apparatus, and mobile terminal
US9531878B2 (en) 2012-12-12 2016-12-27 Genesys Telecommunications Laboratories, Inc. System and method for access number distribution in a contact center
US20140168204A1 (en) * 2012-12-13 2014-06-19 Microsoft Corporation Model based video projection
US20140172633A1 (en) * 2012-12-14 2014-06-19 Mastercard International Incorporated Payment interchange for use with global shopping cart
US20140172631A1 (en) * 2012-12-14 2014-06-19 Mastercard International Incorporated Global shopping cart
US20140172629A1 (en) * 2012-12-14 2014-06-19 Mastercard International Incorporated Merchant interchange for use with global shopping cart
US20140172630A1 (en) * 2012-12-14 2014-06-19 Mastercard International Incorporated Social media interface for use with a global shopping cart
US20140172632A1 (en) * 2012-12-14 2014-06-19 Mastercard International Incorporated Shopping cart interchange
US20160171570A1 (en) * 2012-12-14 2016-06-16 Mastercard International Incorporated System and method for payment, data management, and interchanges for use with global shopping cart
US20140172634A1 (en) * 2012-12-14 2014-06-19 Mastercard International Incorporated Data management in a global shopping cart
US20200111139A1 (en) * 2012-12-14 2020-04-09 Mastercard International Incorporated Payment interchange for use with global shopping cart
US10504163B2 (en) * 2012-12-14 2019-12-10 Mastercard International Incorporated System for payment, data management, and interchanges for use with global shopping cart
US9196003B2 (en) 2012-12-20 2015-11-24 Wal-Mart Stores, Inc. Pre-purchase feedback apparatus and method
US20140180654A1 (en) * 2012-12-23 2014-06-26 Stephen Michael Seymour Client Finite Element Submission System
US20150302011A1 (en) * 2012-12-26 2015-10-22 Rakuten, Inc. Image management device, image generation program, image management method, and image management program
US9396570B2 (en) * 2012-12-28 2016-07-19 Rakuten, Inc. Image processing method to superimpose item image onto model image and image processing device thereof
US10380577B2 (en) 2012-12-31 2019-08-13 Paypal, Inc. Wireless dongle facilitated mobile transactions
US20150248667A1 (en) * 2012-12-31 2015-09-03 Ebay Inc. Dongle facilitated wireless consumer payments
US11270287B2 (en) 2012-12-31 2022-03-08 Paypal, Inc. Wireless dongle facilitated mobile transactions
US11893565B2 (en) 2012-12-31 2024-02-06 Paypal, Inc. Wireless dongle facilitated mobile transactions
US9471917B2 (en) * 2012-12-31 2016-10-18 Paypal, Inc. Dongle facilitated wireless consumer payments
US20140195359A1 (en) * 2013-01-07 2014-07-10 Andrew William Schulz System and Method for Computer Automated Payment of Hard Copy Bills
US10956667B2 (en) 2013-01-07 2021-03-23 Google Llc Operational transformations proxy for thin clients
US9462037B2 (en) 2013-01-07 2016-10-04 Google Inc. Dynamically sizing chunks in a partially loaded spreadsheet model
US10419500B2 (en) * 2013-01-11 2019-09-17 International Business Machines Corporation Personalizing a social networking profile page
US10778735B2 (en) 2013-01-11 2020-09-15 International Business Machines Corporation Personalizing a social networking profile page
US20140201283A1 (en) * 2013-01-11 2014-07-17 International Business Machines Corporation Personalizing a social networking profile page
US20140201269A1 (en) * 2013-01-11 2014-07-17 International Business Machines Corporation Personalizing a social networking profile page
US10432677B2 (en) * 2013-01-11 2019-10-01 International Business Machines Corporation Personalizing a social networking profile page
US9311622B2 (en) 2013-01-15 2016-04-12 Google Inc. Resolving mutations in a partially-loaded spreadsheet model
US20140207609A1 (en) * 2013-01-23 2014-07-24 Facebook, Inc. Generating and maintaining a list of products desired by a social networking system user
WO2014117019A2 (en) * 2013-01-24 2014-07-31 Barker Jeremiah Timberline Graphical aggregation of virtualized network communication
WO2014117019A3 (en) * 2013-01-24 2014-10-16 Barker Jeremiah Timberline Graphical aggregation of virtualized network communication
RU2634734C2 (en) * 2013-01-25 2017-11-03 Маттиас Рат Unified multimedia instrument, system and method for researching and studying virtual human body
US8655970B1 (en) * 2013-01-29 2014-02-18 Google Inc. Automatic entertainment caching for impending travel
US9229944B2 (en) 2013-01-29 2016-01-05 Mobitv, Inc. Scalable networked digital video recordings via shard-based architecture
WO2014120692A1 (en) * 2013-01-29 2014-08-07 Mobitv, Inc. Scalable networked digital video recordings via shard-based architecture
US20140214591A1 (en) * 2013-01-31 2014-07-31 Ebay Inc. System and method to provide a product display in a business
US20140214504A1 (en) * 2013-01-31 2014-07-31 Sony Corporation Virtual meeting lobby for waiting for online event
US20140214629A1 (en) * 2013-01-31 2014-07-31 Hewlett-Packard Development Company, L.P. Interaction in a virtual reality environment
US11501363B2 (en) * 2013-02-07 2022-11-15 Crisalix S.A. 3D platform for aesthetic simulation
US20200242686A1 (en) * 2013-02-07 2020-07-30 Crisalix S.A. 3D Platform For Aesthetic Simulation
US20140236652A1 (en) * 2013-02-19 2014-08-21 Wal-Mart Stores, Inc. Remote sales assistance system
US9082149B2 (en) * 2013-02-19 2015-07-14 Wal-Mart Stores, Inc. System and method for providing sales assistance to a consumer wearing an augmented reality device in a physical store
US11107105B1 (en) * 2013-02-23 2021-08-31 Mwe Live, Llc Systems and methods for merging a virtual world, live events and an entertainment channel
US11922434B2 (en) 2013-02-23 2024-03-05 Mwe Live, Llc Systems and methods for merging a virtual world, live events and an entertainment channel
US20220351158A1 (en) * 2013-03-01 2022-11-03 Toshiba Tec Kabushiki Kaisha Electronic receipt system, electronic receipt management server, and program therefor
US10062096B2 (en) 2013-03-01 2018-08-28 Vegas.Com, Llc System and method for listing items for purchase based on revenue per impressions
US9697263B1 (en) 2013-03-04 2017-07-04 Experian Information Solutions, Inc. Consumer data request fulfillment system
US20140258169A1 (en) * 2013-03-05 2014-09-11 Bental Wong Method and system for automated verification of customer reviews
US20140258141A1 (en) * 2013-03-05 2014-09-11 Bibliotheca Limited Digital Media Lending System and Method
US10915947B2 (en) * 2013-03-05 2021-02-09 Bibliotheca Limited Digital media lending system and method
WO2014138575A1 (en) * 2013-03-07 2014-09-12 Pro Fit Optix Inc. Online lens ordering system for vision care professionals or direct to customers
US20140257839A1 (en) * 2013-03-07 2014-09-11 Pro Fit Optix Inc. Online Lens Ordering System for Vision Care Professionals or Direct to Customers
US10887771B2 (en) * 2013-03-11 2021-01-05 Time Warner Cable Enterprises Llc Access control, establishing trust in a wireless network
US11076203B2 (en) 2013-03-12 2021-07-27 Time Warner Cable Enterprises Llc Methods and apparatus for providing and uploading content to personalized network storage
US11769200B1 (en) 2013-03-14 2023-09-26 Consumerinfo.Com, Inc. Account vulnerability alerts
US20200175890A1 (en) * 2013-03-14 2020-06-04 Apple Inc. Device, method, and graphical user interface for a group reading environment
US11514519B1 (en) 2013-03-14 2022-11-29 Consumerinfo.Com, Inc. System and methods for credit dispute processing, resolution, and reporting
US9928975B1 (en) 2013-03-14 2018-03-27 Icontrol Networks, Inc. Three-way switch
US11553579B2 (en) 2013-03-14 2023-01-10 Icontrol Networks, Inc. Three-way switch
US9406085B1 (en) 2013-03-14 2016-08-02 Consumerinfo.Com, Inc. System and methods for credit dispute processing, resolution, and reporting
US10043214B1 (en) 2013-03-14 2018-08-07 Consumerinfo.Com, Inc. System and methods for credit dispute processing, resolution, and reporting
US9697568B1 (en) 2013-03-14 2017-07-04 Consumerinfo.Com, Inc. System and methods for credit dispute processing, resolution, and reporting
US11113759B1 (en) 2013-03-14 2021-09-07 Consumerinfo.Com, Inc. Account vulnerability alerts
US10102570B1 (en) 2013-03-14 2018-10-16 Consumerinfo.Com, Inc. Account vulnerability alerts
US9870589B1 (en) 2013-03-14 2018-01-16 Consumerinfo.Com, Inc. Credit utilization tracking and reporting
US10929925B1 (en) 2013-03-14 2021-02-23 Consumerinfo.Com, Inc. System and methods for credit dispute processing, resolution, and reporting
US10169761B1 (en) 2013-03-15 2019-01-01 ConsumerInfo.com Inc. Adjustment of knowledge-based authentication
US10659179B2 (en) 2013-03-15 2020-05-19 Icontrol Networks, Inc. Adaptive power modulation
US11197050B2 (en) 2013-03-15 2021-12-07 Charter Communications Operating, Llc Methods and apparatus for client-based dynamic control of connections to co-existing radio access networks
US10664936B2 (en) 2013-03-15 2020-05-26 Csidentity Corporation Authentication systems and methods for on-demand products
US11775979B1 (en) 2013-03-15 2023-10-03 Consumerinfo.Com, Inc. Adjustment of knowledge-based authentication
US9867143B1 (en) 2013-03-15 2018-01-09 Icontrol Networks, Inc. Adaptive Power Modulation
US20170053205A1 (en) * 2013-03-15 2017-02-23 Whoknows, Inc. System and method for tracking knowledge and expertise
US8732101B1 (en) 2013-03-15 2014-05-20 Nara Logics, Inc. Apparatus and method for providing harmonized recommendations based on an integrated user profile
US11288677B1 (en) 2013-03-15 2022-03-29 Consumerinfo.Com, Inc. Adjustment of knowledge-based authentication
US9287727B1 (en) 2013-03-15 2016-03-15 Icontrol Networks, Inc. Temporal voltage adaptive lithium battery charger
US11164271B2 (en) 2013-03-15 2021-11-02 Csidentity Corporation Systems and methods of delayed authentication and billing for on-demand products
US10740762B2 (en) 2013-03-15 2020-08-11 Consumerinfo.Com, Inc. Adjustment of knowledge-based authentication
US11790473B2 (en) 2013-03-15 2023-10-17 Csidentity Corporation Systems and methods of delayed authentication and billing for on-demand products
WO2014168710A1 (en) * 2013-03-15 2014-10-16 Balluun Ag Method and system of an authentic translation of a physical tradeshow
US10117191B2 (en) 2013-03-15 2018-10-30 Icontrol Networks, Inc. Adaptive power modulation
US9203881B2 (en) * 2013-03-25 2015-12-01 Salesforce.Com, Inc. Systems and methods of online social environment based translation of entity mentions
US20140289327A1 (en) * 2013-03-25 2014-09-25 Salesforce.Com Inc. Systems and methods of online social environment based translation of entity mentions
US20160072759A1 (en) * 2013-03-25 2016-03-10 Salesforce.Com, Inc. Systems and methods of online social environment based translation of entity mentions
US9736107B2 (en) * 2013-03-25 2017-08-15 Salesforce.Com, Inc. Systems and methods of online social environment based translation of entity mentions
US11651414B1 (en) 2013-03-29 2023-05-16 Wells Fargo Bank, N.A. System and medium for managing lists using an information storage and communication system
US11763304B1 (en) 2013-03-29 2023-09-19 Wells Fargo Bank, N.A. User and entity authentication through an information storage and communication system
US11552845B1 (en) * 2013-03-29 2023-01-10 Wells Fargo Bank, N.A. Systems and methods for providing user preferences for a connected device
US11757714B1 (en) 2013-03-29 2023-09-12 Wells Fargo Bank, N.A. Systems and methods for providing user preferences for a connected device
US11922472B1 (en) 2013-03-29 2024-03-05 Wells Fargo Bank, N.A. Systems and methods for transferring a gift using an information storage and communication system
US10963735B2 (en) * 2013-04-11 2021-03-30 Digimarc Corporation Methods for object recognition and related arrangements
US20140310123A1 (en) * 2013-04-16 2014-10-16 Shutterfly, Inc. Check-out path for multiple recipients
US10685398B1 (en) 2013-04-23 2020-06-16 Consumerinfo.Com, Inc. Presenting credit score information
US9922052B1 (en) * 2013-04-26 2018-03-20 A9.Com, Inc. Custom image data store
US9465504B1 (en) * 2013-05-06 2016-10-11 Hrl Laboratories, Llc Automated collaborative behavior analysis using temporal motifs
US9892447B2 (en) 2013-05-08 2018-02-13 Ebay Inc. Performing image searches in a network-based publication system
US10354310B2 (en) 2013-05-10 2019-07-16 Dell Products L.P. Mobile application enabling product discovery and obtaining feedback from network
US9965792B2 (en) 2013-05-10 2018-05-08 Dell Products L.P. Picks API which facilitates dynamically injecting content onto a web page for search engines
US20140337163A1 (en) * 2013-05-10 2014-11-13 Dell Products L.P. Forward-Looking Recommendations Using Information from a Plurality of Picks Generated by a Plurality of Users
US20140337162A1 (en) * 2013-05-10 2014-11-13 Dell Products L.P. Process to display picks on product category pages
US11803929B1 (en) 2013-05-23 2023-10-31 Consumerinfo.Com, Inc. Digital identity
US11120519B2 (en) 2013-05-23 2021-09-14 Consumerinfo.Com, Inc. Digital identity
US10453159B2 (en) 2013-05-23 2019-10-22 Consumerinfo.Com, Inc. Digital identity
US9721147B1 (en) 2013-05-23 2017-08-01 Consumerinfo.Com, Inc. Digital identity
US9729822B2 (en) * 2013-05-24 2017-08-08 Polycom, Inc. Method and system for sharing content in videoconferencing
US20140347435A1 (en) * 2013-05-24 2014-11-27 Polycom, Inc. Method and system for sharing content in videoconferencing
US20140358520A1 (en) * 2013-05-31 2014-12-04 Thomson Licensing Real-time online audio filtering
US20150026156A1 (en) * 2013-05-31 2015-01-22 Michele Meek Systems and methods for facilitating the retail shopping experience online
US10482512B2 (en) * 2013-05-31 2019-11-19 Michele Meek Systems and methods for facilitating the retail shopping experience online
US10198714B1 (en) 2013-06-05 2019-02-05 Google Llc Media content collaboration
US9143542B1 (en) * 2013-06-05 2015-09-22 Google Inc. Media content collaboration
CN103472985A (en) * 2013-06-17 2013-12-25 Spreadtrum Communications (Shanghai) Co., Ltd. User editing method of three-dimensional (3D) shopping platform display interface
US20150286364A1 (en) * 2013-06-17 2015-10-08 Spreadtrum Communications (Shanghai) Co., Ltd. Editing method of the three-dimensional shopping platform display interface for users
US9805408B2 (en) 2013-06-17 2017-10-31 Dell Products L.P. Automated creation of collages from a collection of assets
US20160139742A1 (en) * 2013-06-18 2016-05-19 Samsung Electronics Co., Ltd. Method for managing media contents and apparatus for the same
US20150006334A1 (en) * 2013-06-26 2015-01-01 International Business Machines Corporation Video-based, customer specific, transactions
US20150007110A1 (en) * 2013-06-26 2015-01-01 Acer Inc. Method for Controlling Electronic Apparatus and Electronic Apparatus Thereof
US11296950B2 (en) 2013-06-27 2022-04-05 Icontrol Networks, Inc. Control system user interface
US10348575B2 (en) 2013-06-27 2019-07-09 Icontrol Networks, Inc. Control system user interface
US9369340B2 (en) * 2013-06-30 2016-06-14 Jive Software, Inc. User-centered engagement analysis
US20150006715A1 (en) * 2013-06-30 2015-01-01 Jive Software, Inc. User-centered engagement analysis
US20150012362A1 (en) * 2013-07-03 2015-01-08 1-800 Contacts, Inc. Systems and methods for recommending products via crowdsourcing and detecting user characteristics
US10560772B2 (en) 2013-07-23 2020-02-11 Time Warner Cable Enterprises Llc Apparatus and methods for selective data network access
US9460342B1 (en) * 2013-08-05 2016-10-04 Google Inc. Determining body measurements
US20150046860A1 (en) * 2013-08-06 2015-02-12 Sony Corporation Information processing apparatus and information processing method
US10042541B2 (en) * 2013-08-06 2018-08-07 Sony Corporation Information processing apparatus and information processing method for utilizing various cross-sectional types of user input
US11432055B2 (en) 2013-08-09 2022-08-30 Icn Acquisition, Llc System, method and apparatus for remote monitoring
US11438553B1 (en) 2013-08-09 2022-09-06 Icn Acquisition, Llc System, method and apparatus for remote monitoring
US10645347B2 (en) 2013-08-09 2020-05-05 Icn Acquisition, Llc System, method and apparatus for remote monitoring
US11722806B2 (en) 2013-08-09 2023-08-08 Icn Acquisition, Llc System, method and apparatus for remote monitoring
US10841668B2 (en) 2013-08-09 2020-11-17 Icn Acquisition, Llc System, method and apparatus for remote monitoring
US20150052008A1 (en) * 2013-08-16 2015-02-19 iWeave International Mobile Application For Hair Extensions
US9443268B1 (en) 2013-08-16 2016-09-13 Consumerinfo.Com, Inc. Bill payment and reporting
US20150052198A1 (en) * 2013-08-16 2015-02-19 Joonsuh KWUN Dynamic social networking service system and respective methods in collecting and disseminating specialized and interdisciplinary knowledge
US11087075B2 (en) 2013-08-19 2021-08-10 Google Llc Systems and methods for resolving privileged edits within suggested edits
US10380232B2 (en) 2013-08-19 2019-08-13 Google Llc Systems and methods for resolving privileged edits within suggested edits
US20160196668A1 (en) * 2013-08-19 2016-07-07 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and device for processing virtual fitting model image
US9792700B2 (en) * 2013-08-19 2017-10-17 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and device for processing virtual fitting model image
US10044979B2 (en) * 2013-08-19 2018-08-07 Cisco Technology, Inc. Acquiring regions of remote shared content with high resolution
US20150052200A1 (en) * 2013-08-19 2015-02-19 Cisco Technology, Inc. Acquiring Regions of Remote Shared Content with High Resolution
US11663396B2 (en) 2013-08-19 2023-05-30 Google Llc Systems and methods for resolving privileged edits within suggested edits
US9971752B2 (en) 2013-08-19 2018-05-15 Google Llc Systems and methods for resolving privileged edits within suggested edits
US20150063678A1 (en) * 2013-08-30 2015-03-05 1-800 Contacts, Inc. Systems and methods for generating a 3-d model of a user using a rear-facing camera
US20150062116A1 (en) * 2013-08-30 2015-03-05 1-800 Contacts, Inc. Systems and methods for rapidly generating a 3-d model of a user
US9875489B2 (en) 2013-09-11 2018-01-23 Cinsay, Inc. Dynamic binding of video content
US10559010B2 (en) 2013-09-11 2020-02-11 Aibuy, Inc. Dynamic binding of video content
US11763348B2 (en) 2013-09-11 2023-09-19 Aibuy, Inc. Dynamic binding of video content
US9953347B2 (en) 2013-09-11 2018-04-24 Cinsay, Inc. Dynamic binding of live video content
US11074620B2 (en) 2013-09-11 2021-07-27 Aibuy, Inc. Dynamic binding of content transactional items
US11250098B2 (en) * 2013-09-13 2022-02-15 Reflektion, Inc. Creation and delivery of individually customized web pages
US9852515B1 (en) 2013-09-25 2017-12-26 Oncam Global, Inc. Mobile terminal security systems
US10713688B2 (en) * 2013-09-25 2020-07-14 Transform Sr Brands Llc Method and system for gesture-based cross channel commerce and marketing
US9361775B2 (en) * 2013-09-25 2016-06-07 Oncam Global, Inc. Mobile terminal security systems
US20150088661A1 (en) * 2013-09-25 2015-03-26 Sears Brands, Llc Method and system for gesture-based cross channel commerce and marketing
US20150085128A1 (en) * 2013-09-25 2015-03-26 Oncam Global, Inc. Mobile terminal security systems
US9578359B2 (en) 2013-09-26 2017-02-21 Pixwell Platform, LLC Localization process system
US9271050B2 (en) * 2013-09-26 2016-02-23 Pixwell Platform, LLC Localization process system
US9044682B1 (en) * 2013-09-26 2015-06-02 Matthew B. Rappaport Methods and apparatus for electronic commerce initiated through use of video games and fulfilled by delivery of physical goods
US20150089530A1 (en) * 2013-09-26 2015-03-26 Pixwel Platform, LLC Localization process system
US10701127B2 (en) 2013-09-27 2020-06-30 Aibuy, Inc. Apparatus and method for supporting relationships associated with content provisioning
US11017362B2 (en) 2013-09-27 2021-05-25 Aibuy, Inc. N-level replication of supplemental content
US9697504B2 (en) 2013-09-27 2017-07-04 Cinsay, Inc. N-level replication of supplemental content
US10268994B2 (en) 2013-09-27 2019-04-23 Aibuy, Inc. N-level replication of supplemental content
US9710841B2 (en) 2013-09-30 2017-07-18 Comenity Llc Method and medium for recommending a personalized ensemble
US10185776B2 (en) * 2013-10-06 2019-01-22 Shocase, Inc. System and method for dynamically controlled rankings and social network privacy settings
US20150106205A1 (en) * 2013-10-16 2015-04-16 Google Inc. Generating an offer sheet based on offline content
US9348803B2 (en) 2013-10-22 2016-05-24 Google Inc. Systems and methods for providing just-in-time preview of suggestion resolutions
US9501588B1 (en) 2013-10-28 2016-11-22 Kenneth S. Rowe Garden simulation
US20150120505A1 (en) * 2013-10-31 2015-04-30 International Business Machines Corporation In-store omnichannel inventory exposure
CN104599135A (en) * 2013-10-31 2015-05-06 International Business Machines Corporation Method and system for displaying product information
US10271010B2 (en) 2013-10-31 2019-04-23 Shindig, Inc. Systems and methods for controlling the display of content
US20160292390A1 (en) * 2013-10-31 2016-10-06 Michele SCULATI Method and system for a customized definition of food quantities based on the determination of anthropometric parameters
US20150127489A1 (en) * 2013-11-04 2015-05-07 Deepak Kumar Vasthimal Dynamic creation of temporal networks based on similar search queries
US11568498B2 (en) 2013-11-04 2023-01-31 Ebay Inc. Dynamic creation of networks
US10467707B2 (en) * 2013-11-04 2019-11-05 Ebay Inc. Dynamic creation of networks
US10235663B2 (en) * 2013-11-06 2019-03-19 Tencent Technology (Shenzhen) Company Limited Method, system and server system of payment based on a conversation group
US10970692B2 (en) 2013-11-06 2021-04-06 Tencent Technology (Shenzhen) Company Limited Method, system and server system of payment based on a conversation group
US10223668B2 (en) * 2013-11-11 2019-03-05 International Business Machines Corporation Contextual searching via a mobile computing device
US11336648B2 (en) 2013-11-11 2022-05-17 Amazon Technologies, Inc. Document management and collaboration system
US10686788B2 (en) 2013-11-11 2020-06-16 Amazon Technologies, Inc. Developer based document collaboration
US10567382B2 (en) * 2013-11-11 2020-02-18 Amazon Technologies, Inc. Access control for a document management and collaboration system
US10257196B2 (en) 2013-11-11 2019-04-09 Amazon Technologies, Inc. Access control for a document management and collaboration system
US9984357B2 (en) * 2013-11-11 2018-05-29 International Business Machines Corporation Contextual searching via a mobile computing device
US10877953B2 (en) 2013-11-11 2020-12-29 Amazon Technologies, Inc. Processing service requests for non-transactional databases
US20170012984A1 (en) * 2013-11-11 2017-01-12 Amazon Technologies, Inc. Access control for a document management and collaboration system
US10599753B1 (en) 2013-11-11 2020-03-24 Amazon Technologies, Inc. Document version control in collaborative environment
US10410414B2 (en) 2013-11-14 2019-09-10 Ebay Inc. Extraction of body dimensions from planar garment photographs of fitting garments
US20150134302A1 (en) * 2013-11-14 2015-05-14 Jatin Chhugani 3-dimensional digital garment creation from planar garment photographs
US9953460B2 (en) 2013-11-14 2018-04-24 Ebay Inc. Garment simulation using thread and data level parallelism
US10068371B2 (en) 2013-11-14 2018-09-04 Ebay Inc. Extraction of body dimensions from planar garment photographs of fitting garments
US11145118B2 (en) 2013-11-14 2021-10-12 Ebay Inc. Extraction of body dimensions from planar garment photographs of fitting garments
US10102536B1 (en) 2013-11-15 2018-10-16 Experian Information Solutions, Inc. Micro-geographic aggregation system
US10325314B1 (en) 2013-11-15 2019-06-18 Consumerinfo.Com, Inc. Payment reporting systems
US10580025B2 (en) 2013-11-15 2020-03-03 Experian Information Solutions, Inc. Micro-geographic aggregation system
US10269065B1 (en) 2013-11-15 2019-04-23 Consumerinfo.Com, Inc. Bill payment and reporting
US20150156228A1 (en) * 2013-11-18 2015-06-04 Ronald Langston Social networking interacting system
US10628448B1 (en) 2013-11-20 2020-04-21 Consumerinfo.Com, Inc. Systems and user interfaces for dynamic access of multiple remote databases and synchronization of data based on user rules
US10025842B1 (en) 2013-11-20 2018-07-17 Consumerinfo.Com, Inc. Systems and user interfaces for dynamic access of multiple remote databases and synchronization of data based on user rules
US11461364B1 (en) 2013-11-20 2022-10-04 Consumerinfo.Com, Inc. Systems and user interfaces for dynamic access of multiple remote databases and synchronization of data based on user rules
US9477737B1 (en) 2013-11-20 2016-10-25 Consumerinfo.Com, Inc. Systems and user interfaces for dynamic access of multiple remote databases and synchronization of data based on user rules
WO2015081060A1 (en) * 2013-11-26 2015-06-04 Dash Software, LLC Mobile application check-in and payment systems and methods of their operation
US9529851B1 (en) 2013-12-02 2016-12-27 Experian Information Solutions, Inc. Server architecture for electronic data quality processing
US20150154419A1 (en) * 2013-12-03 2015-06-04 Sony Corporation Computer ecosystem with digital rights management (drm) transfer mechanism
US10068276B2 (en) 2013-12-05 2018-09-04 Walmart Apollo, Llc System and method for coupling a mobile device and point of sale device to transmit mobile shopping cart and provide shopping recommendations
US11907998B2 (en) 2013-12-05 2024-02-20 Walmart Apollo, Llc System and method for coupling a user computing device and a point of sale device
US11263682B2 (en) 2013-12-05 2022-03-01 Walmart Apollo, Llc System and method for coupling a user computing device and a point of sale device
WO2015085028A3 (en) * 2013-12-06 2015-11-12 The Dun & Bradstreet Corporation Method and system for collecting data on businesses via mobile and geolocation communications
WO2015089523A3 (en) * 2013-12-09 2015-09-03 Premium Lubricants (Pty) Ltd Web based marketplace
US10521840B2 (en) 2013-12-09 2019-12-31 Lor Technologies (Pty) Ltd Virtual interactive marketplace
US10469912B2 (en) 2013-12-13 2019-11-05 Nant Holdings Ip, Llc Visual hash tags via trending recognition activities, systems and methods
US11115724B2 (en) 2013-12-13 2021-09-07 Nant Holdings Ip, Llc Visual hash tags via trending recognition activities, systems and methods
US9860601B2 (en) 2013-12-13 2018-01-02 Nant Holdings Ip, Llc Visual hash tags via trending recognition activities, systems and methods
US9544655B2 (en) 2013-12-13 2017-01-10 Nant Holdings Ip, Llc Visual hash tags via trending recognition activities, systems and methods
US11100564B2 (en) 2013-12-27 2021-08-24 Ebay Inc. Regional item recommendations
US10366439B2 (en) 2013-12-27 2019-07-30 Ebay Inc. Regional item recommendations
US10510054B1 (en) 2013-12-30 2019-12-17 Wells Fargo Bank, N.A. Augmented reality enhancements for financial activities
US11831794B1 (en) 2013-12-30 2023-11-28 Massachusetts Mutual Life Insurance Company System and method for managing routing of leads
US11151486B1 (en) 2013-12-30 2021-10-19 Massachusetts Mutual Life Insurance Company System and method for managing routing of leads
US11509771B1 (en) 2013-12-30 2022-11-22 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls
US11743389B1 (en) 2013-12-30 2023-08-29 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls
US10242068B1 (en) * 2013-12-31 2019-03-26 Massachusetts Mutual Life Insurance Company Methods and systems for ranking leads based on given characteristics
US10394834B1 (en) * 2013-12-31 2019-08-27 Massachusetts Mutual Life Insurance Company Methods and systems for ranking leads based on given characteristics
US10860593B1 (en) * 2013-12-31 2020-12-08 Massachusetts Mutual Life Insurance Company Methods and systems for ranking leads based on given characteristics
US20150199910A1 (en) * 2014-01-10 2015-07-16 Cox Communications, Inc. Systems and methods for an educational platform providing a multi faceted learning environment
US10078867B1 (en) 2014-01-10 2018-09-18 Wells Fargo Bank, N.A. Augmented reality virtual banker
US20150199366A1 (en) * 2014-01-15 2015-07-16 Avigilon Corporation Storage management of data streamed from a video source device
US9489387B2 (en) * 2014-01-15 2016-11-08 Avigilon Corporation Storage management of data streamed from a video source device
US11197057B2 (en) 2014-01-15 2021-12-07 Avigilon Corporation Storage management of data streamed from a video source device
US10231622B2 (en) * 2014-02-05 2019-03-19 Self Care Catalysts Inc. Systems, devices, and methods for analyzing and enhancing patient health
US10791930B2 (en) * 2014-02-05 2020-10-06 Self Care Catalysts Inc. Systems, devices, and methods for analyzing and enhancing patient health
US20190159677A1 (en) * 2014-02-05 2019-05-30 Self Care Catalysts Inc. Systems, devices, and methods for analyzing and enhancing patient health
US20150216413A1 (en) * 2014-02-05 2015-08-06 Self Care Catalysts Inc. Systems, devices, and methods for analyzing and enhancing patient health
US10691877B1 (en) 2014-02-07 2020-06-23 Amazon Technologies, Inc. Homogenous insertion of interactions into documents
US10540404B1 (en) 2014-02-07 2020-01-21 Amazon Technologies, Inc. Forming a document collection in a document management and collaboration system
US11107158B1 (en) 2014-02-14 2021-08-31 Experian Information Solutions, Inc. Automatic generation of code for attributes
US10262362B1 (en) 2014-02-14 2019-04-16 Experian Information Solutions, Inc. Automatic generation of code for attributes
US11847693B1 (en) 2014-02-14 2023-12-19 Experian Information Solutions, Inc. Automatic generation of code for attributes
US11146637B2 (en) 2014-03-03 2021-10-12 Icontrol Networks, Inc. Media content management
US11405463B2 (en) 2014-03-03 2022-08-02 Icontrol Networks, Inc. Media content management
US11544341B2 (en) 2014-03-13 2023-01-03 Ebay Inc. Social shopping experience utilizing interactive mirror and polling of target audience members identified by a relationship with product information about an item being worn by a user
US10083243B2 (en) 2014-03-13 2018-09-25 Ebay Inc. Interactive mirror displays for presenting product information
US10706117B2 (en) 2014-03-13 2020-07-07 Ebay Inc. System, method, and medium for utilizing wear time to recommend items
US9990438B2 (en) 2014-03-13 2018-06-05 Ebay Inc. Customized fitting room environment
US9910927B2 (en) 2014-03-13 2018-03-06 Ebay Inc. Interactive mirror displays for presenting product recommendations
US11188606B2 (en) 2014-03-13 2021-11-30 Ebay Inc. Interactive displays based on user interest
US10664543B2 (en) 2014-03-13 2020-05-26 Ebay Inc. System, method, and machine-readable storage medium for providing a customized fitting room environment
US10311161B2 (en) 2014-03-13 2019-06-04 Ebay Inc. Interactive displays based on user interest
US10366174B2 (en) * 2014-03-13 2019-07-30 Ebay Inc. Social fitting room experience utilizing interactive mirror and polling of target users experienced with garment type
US9418119B2 (en) 2014-03-25 2016-08-16 Linkedin Corporation Method and system to determine a category score of a social network member
US8990191B1 (en) * 2014-03-25 2015-03-24 Linkedin Corporation Method and system to determine a category score of a social network member
USD760256S1 (en) 2014-03-25 2016-06-28 Consumerinfo.Com, Inc. Display screen or portion thereof with graphical user interface
USD759690S1 (en) 2014-03-25 2016-06-21 Consumerinfo.Com, Inc. Display screen or portion thereof with graphical user interface
USD759689S1 (en) 2014-03-25 2016-06-21 Consumerinfo.Com, Inc. Display screen or portion thereof with graphical user interface
US20150278841A1 (en) * 2014-03-31 2015-10-01 United Video Properties, Inc. Systems and methods for receiving coupon and vendor data
US20150278911A1 (en) * 2014-03-31 2015-10-01 Sap Ag System and Method for Apparel Size Suggestion Based on Sales Transaction Data Analysis
US9699123B2 (en) 2014-04-01 2017-07-04 Ditto Technologies, Inc. Methods, systems, and non-transitory machine-readable medium for incorporating a series of images resident on a user device into an existing web browser session
US10679282B2 (en) 2014-04-01 2020-06-09 Electronic Commodities Exchange, L.P. Method, apparatus, and manufacture for virtual jewelry shopping in secondary markets
US10176515B2 (en) 2014-04-01 2019-01-08 Electronic Commodities Exchange, L.P. Virtual jewelry shopping in secondary markets
US20150278905A1 (en) * 2014-04-01 2015-10-01 Electronic Commodities Exchange Virtual jewelry shopping experience with in-store preview
US10482532B1 (en) 2014-04-16 2019-11-19 Consumerinfo.Com, Inc. Providing credit data in search results
US9892457B1 (en) 2014-04-16 2018-02-13 Consumerinfo.Com, Inc. Providing credit data in search results
US20160132533A1 (en) * 2014-04-22 2016-05-12 Sk Planet Co., Ltd. Device for providing image related to replayed music and method using same
US10339176B2 (en) * 2014-04-22 2019-07-02 Groovers Inc. Device for providing image related to replayed music and method using same
US9710801B2 (en) 2014-04-22 2017-07-18 American Express Travel Related Services Company, Inc. Systems and methods for charge splitting
US10373240B1 (en) 2014-04-25 2019-08-06 Csidentity Corporation Systems, methods and computer-program products for eligibility verification
US11074641B1 (en) 2014-04-25 2021-07-27 Csidentity Corporation Systems, methods and computer-program products for eligibility verification
US11587150B1 (en) 2014-04-25 2023-02-21 Csidentity Corporation Systems and methods for eligibility verification
US10242351B1 (en) 2014-05-07 2019-03-26 Square, Inc. Digital wallet for groups
US9733333B2 (en) 2014-05-08 2017-08-15 Shindig, Inc. Systems and methods for monitoring participant attentiveness within events and group assortments
US10402798B1 (en) 2014-05-11 2019-09-03 Square, Inc. Open tab transactions
US11783331B2 (en) 2014-05-11 2023-10-10 Block, Inc. Cardless transaction using account automatically generated based on previous transaction
US11645651B2 (en) 2014-05-11 2023-05-09 Block, Inc. Open tab transactions
US10860749B2 (en) 2014-05-13 2020-12-08 Atheer, Inc. Method for interactive catalog for 3D objects within the 2D environment
US10002208B2 (en) 2014-05-13 2018-06-19 Atheer, Inc. Method for interactive catalog for 3D objects within the 2D environment
US10438204B2 (en) * 2014-05-19 2019-10-08 American Express Travel Related Services Company, Inc. Authentication via biometric passphrase
US20150332273A1 (en) * 2014-05-19 2015-11-19 American Express Travel Related Services Company, Inc. Authentication via biometric passphrase
US11282081B2 (en) * 2014-05-19 2022-03-22 American Express Travel Related Services Company, Inc. Authentication via biometric passphrase
US9741059B1 (en) * 2014-05-23 2017-08-22 Intuit Inc. System and method for managing website scripts
US11792462B2 (en) 2014-05-29 2023-10-17 Time Warner Cable Enterprises Llc Apparatus and methods for recording, accessing, and delivering packetized content
US9996898B2 (en) * 2014-05-30 2018-06-12 International Business Machines Corporation Flexible control in resizing of visual displays
US9535890B2 (en) * 2014-05-30 2017-01-03 International Business Machines Corporation Flexible control in resizing of visual displays
US9710884B2 (en) 2014-05-30 2017-07-18 International Business Machines Corporation Flexible control in resizing of visual displays
US10540744B2 (en) 2014-05-30 2020-01-21 International Business Machines Corporation Flexible control in resizing of visual displays
US9614899B1 (en) * 2014-05-30 2017-04-04 Intuit Inc. System and method for user contributed website scripts
US9710883B2 (en) 2014-05-30 2017-07-18 International Business Machines Corporation Flexible control in resizing of visual displays
US20150346954A1 (en) * 2014-05-30 2015-12-03 International Business Machines Corporation Flexible control in resizing of visual displays
US9881303B2 (en) 2014-06-05 2018-01-30 Paypal, Inc. Systems and methods for implementing automatic payer authentication
US11540148B2 (en) 2014-06-11 2022-12-27 Time Warner Cable Enterprises Llc Methods and apparatus for access point location
US20150371260A1 (en) * 2014-06-19 2015-12-24 Elwha Llc Systems and methods for providing purchase options to consumers
US20150379623A1 (en) * 2014-06-25 2015-12-31 Akshay Gadre Digital avatars in online marketplaces
US11494833B2 (en) * 2014-06-25 2022-11-08 Ebay Inc. Digital avatars in online marketplaces
US10529009B2 (en) * 2014-06-25 2020-01-07 Ebay Inc. Digital avatars in online marketplaces
US20200143456A1 (en) * 2014-06-25 2020-05-07 Ebay Inc. Digital avatars in online marketplaces
JP2014179135A (en) * 2014-07-01 2014-09-25 Toshiba Corp Image processing system, method and program
US10476917B2 (en) 2014-07-24 2019-11-12 Genesys Telecommunications Laboratories, Inc. Media channel management apparatus for network communications sessions
US9516167B2 (en) * 2014-07-24 2016-12-06 Genesys Telecommunications Laboratories, Inc. Media channel management apparatus for network communications sessions
US20190050836A1 (en) * 2014-07-31 2019-02-14 Walmart Apollo, Llc Integrated online and in-store shopping experience
US10592080B2 (en) 2014-07-31 2020-03-17 Microsoft Technology Licensing, Llc Assisted presentation of application windows
US10102513B2 (en) * 2014-07-31 2018-10-16 Walmart Apollo, Llc Integrated online and in-store shopping experience
US10956886B2 (en) * 2014-07-31 2021-03-23 Walmart Apollo, Llc Integrated online and in-store shopping experience
US10254942B2 (en) 2014-07-31 2019-04-09 Microsoft Technology Licensing, Llc Adaptive sizing and positioning of application windows
US10678412B2 (en) 2014-07-31 2020-06-09 Microsoft Technology Licensing, Llc Dynamic joint dividers for application windows
US10653962B2 (en) 2014-08-01 2020-05-19 Ebay Inc. Generating and utilizing digital avatar data for online marketplaces
US11273378B2 (en) 2014-08-01 2022-03-15 Ebay, Inc. Generating and utilizing digital avatar data for online marketplaces
US20160042233A1 (en) * 2014-08-06 2016-02-11 ProSent Mobile Corporation Method and system for facilitating evaluation of visual appeal of two or more objects
US20160042402A1 (en) * 2014-08-07 2016-02-11 Akshay Gadre Evaluating digital inventories
US10218652B2 (en) 2014-08-08 2019-02-26 Mastercard International Incorporated Systems and methods for integrating a chat function into an e-reader application
US10423220B2 (en) 2014-08-08 2019-09-24 Kabushiki Kaisha Toshiba Virtual try-on apparatus, virtual try-on method, and computer program product
US20160055368A1 (en) * 2014-08-22 2016-02-25 Microsoft Corporation Face alignment with shape regression
US10019622B2 (en) * 2014-08-22 2018-07-10 Microsoft Technology Licensing, Llc Face alignment with shape regression
US10332176B2 (en) 2014-08-28 2019-06-25 Ebay Inc. Methods and systems for virtual fitting rooms or hybrid stores
US11301912B2 (en) 2014-08-28 2022-04-12 Ebay Inc. Methods and systems for virtual fitting rooms or hybrid stores
US20160063613A1 (en) * 2014-08-30 2016-03-03 Lucy Ma Zhao Providing a virtual shopping environment for an item
US10366447B2 (en) * 2014-08-30 2019-07-30 Ebay Inc. Providing a virtual shopping environment for an item
US11017462B2 (en) 2014-08-30 2021-05-25 Ebay Inc. Providing a virtual shopping environment for an item
US10432603B2 (en) 2014-09-29 2019-10-01 Amazon Technologies, Inc. Access to documents in a document management and collaboration system
US11734740B2 (en) 2014-09-30 2023-08-22 Ebay Inc. Garment size mapping
US11055758B2 (en) 2014-09-30 2021-07-06 Ebay Inc. Garment size mapping
US10354311B2 (en) 2014-10-07 2019-07-16 Comenity Llc Determining preferences of an ensemble of items
US20160098775A1 (en) * 2014-10-07 2016-04-07 Comenity Llc Sharing an ensemble of items
US9953357B2 (en) * 2014-10-07 2018-04-24 Comenity Llc Sharing an ensemble of items
US9501840B2 (en) * 2014-10-20 2016-11-22 Toshiba Tec Kabushiki Kaisha Information processing apparatus and clothes proposing method
US20160117339A1 (en) * 2014-10-27 2016-04-28 Chegg, Inc. Automated Lecture Deconstruction
US11797597B2 (en) 2014-10-27 2023-10-24 Chegg, Inc. Automated lecture deconstruction
US10140379B2 (en) * 2014-10-27 2018-11-27 Chegg, Inc. Automated lecture deconstruction
US11151188B2 (en) 2014-10-27 2021-10-19 Chegg, Inc. Automated lecture deconstruction
US10915943B2 (en) 2014-10-31 2021-02-09 Walmart Apollo, Llc Order processing systems and methods
US10657578B2 (en) 2014-10-31 2020-05-19 Walmart Apollo, Llc Order processing systems and methods
US9935833B2 (en) 2014-11-05 2018-04-03 Time Warner Cable Enterprises Llc Methods and apparatus for determining an optimized wireless interface installation configuration
US10169782B2 (en) * 2014-11-13 2019-01-01 Adobe Systems Incorporated Targeting ads engaged by a user to related users
US11599937B2 (en) 2014-12-01 2023-03-07 Ebay Inc. Digital wardrobe
US10977721B2 (en) 2014-12-01 2021-04-13 Ebay Inc. Digital wardrobe
US10204375B2 (en) 2014-12-01 2019-02-12 Ebay Inc. Digital wardrobe using simulated forces on garment models
US9904450B2 (en) * 2014-12-19 2018-02-27 At&T Intellectual Property I, L.P. System and method for creating and sharing plans through multimodal dialog
US11200560B2 (en) * 2014-12-19 2021-12-14 Capital One Services, Llc Systems and methods for contactless and secure data transfer
US20160180327A1 (en) * 2014-12-19 2016-06-23 Capital One Services, Llc Systems and methods for contactless and secure data transfer
US10739976B2 (en) * 2014-12-19 2020-08-11 At&T Intellectual Property I, L.P. System and method for creating and sharing plans through multimodal dialog
US20160179908A1 (en) * 2014-12-19 2016-06-23 At&T Intellectual Property I, L.P. System and method for creating and sharing plans through multimodal dialog
US11514426B2 (en) 2014-12-19 2022-11-29 Capital One Services, Llc Systems and methods for contactless and secure data transfer
US11270373B2 (en) 2014-12-23 2022-03-08 Ebay Inc. Method system and medium for generating virtual contexts from three dimensional models
US10475113B2 (en) 2014-12-23 2019-11-12 Ebay Inc. Method system and medium for generating virtual contexts from three dimensional models
US20160189173A1 (en) * 2014-12-30 2016-06-30 The Nielsen Company (Us), Llc Methods and apparatus to predict attitudes of consumers
US9911149B2 (en) 2015-01-21 2018-03-06 Paypal, Inc. Systems and methods for online shopping cart management
US11057408B2 (en) 2015-02-13 2021-07-06 Time Warner Cable Enterprises Llc Apparatus and methods for data collection, analysis and service modification based on online activity
US11606380B2 (en) 2015-02-13 2023-03-14 Time Warner Cable Enterprises Llc Apparatus and methods for data collection, analysis and service modification based on online activity
US10116676B2 (en) 2015-02-13 2018-10-30 Time Warner Cable Enterprises Llc Apparatus and methods for data collection, analysis and service modification based on online activity
US10163118B2 (en) * 2015-02-18 2018-12-25 Adobe Systems Incorporated Method and apparatus for associating user engagement data received from a user with portions of a webpage visited by the user
US11662829B2 (en) 2015-03-31 2023-05-30 Ebay Inc. Modification of three-dimensional garments using gestures
US10310616B2 (en) 2015-03-31 2019-06-04 Ebay Inc. Modification of three-dimensional garments using gestures
US11073915B2 (en) 2015-03-31 2021-07-27 Ebay Inc. Modification of three-dimensional garments using gestures
US20160293032A1 (en) * 2015-04-03 2016-10-06 Drexel University Video Instruction Methods and Devices
US10027598B2 (en) * 2015-05-08 2018-07-17 Accenture Global Services Limited Providing network resources based on available user information
US20160330133A1 (en) * 2015-05-08 2016-11-10 Accenture Global Services Limited Providing network resources based on available user information
US20160335485A1 (en) * 2015-05-13 2016-11-17 Electronics And Telecommunications Research Institute User intention analysis apparatus and method based on image information of three-dimensional space
US9886623B2 (en) * 2015-05-13 2018-02-06 Electronics And Telecommunications Research Institute User intention analysis apparatus and method based on image information of three-dimensional space
WO2016185400A2 (en) 2015-05-18 2016-11-24 Embl Retail Inc Method and system for recommending fitting footwear
WO2016185400A3 (en) * 2015-05-18 2017-03-23 Embl Retail Inc Method and system for recommending fitting footwear
CN107851328A (en) * 2015-05-18 2018-03-27 Embl零售股份有限公司 Method and system for recommending fitting footwear
US9905224B2 (en) * 2015-06-11 2018-02-27 Nice Ltd. System and method for automatic language model generation
US20160365090A1 (en) * 2015-06-11 2016-12-15 Nice-Systems Ltd. System and method for automatic language model generation
US20160364664A1 (en) * 2015-06-14 2016-12-15 Grant Patrick Henderson Method and system for high-speed business method switching
US20160378887A1 (en) * 2015-06-24 2016-12-29 Juan Elias Maldonado Augmented Reality for Architectural Interior Placement
US10387846B2 (en) * 2015-07-10 2019-08-20 Bank Of America Corporation System for affecting appointment calendaring on a mobile device based on dependencies
US10387845B2 (en) * 2015-07-10 2019-08-20 Bank Of America Corporation System for facilitating appointment calendaring based on perceived customer requirements
DE102015213832A1 (en) 2015-07-22 2017-01-26 Adidas Ag Method and device for generating an artificial image
DE102015213832B4 (en) 2015-07-22 2023-07-13 Adidas Ag Method and device for generating an artificial image
EP4089615A1 (en) 2015-07-22 2022-11-16 adidas AG Method and apparatus for generating an artificial picture
EP3121793A1 (en) 2015-07-22 2017-01-25 Adidas AG Method and apparatus for generating an artificial picture
US20180232781A1 (en) * 2015-08-10 2018-08-16 Je Hyung Kim Advertisement system and advertisement method using 3d model
US20170076335A1 (en) * 2015-09-15 2017-03-16 International Business Machines Corporation Big data enabled insights based personalized 3d offers
CN108431849A (en) * 2015-10-05 2018-08-21 陈仕东 System and method for tele-robotic apparel display, try-on, and shopping
US20170124160A1 (en) * 2015-10-30 2017-05-04 International Business Machines Corporation Collecting social media users in a specific customer segment
US10783592B2 (en) * 2015-10-30 2020-09-22 International Business Machines Corporation Collecting social media users in a specific customer segment
DE102015222782A1 (en) 2015-11-18 2017-05-18 Sirona Dental Systems Gmbh Method for visualizing a dental situation
WO2017085160A1 (en) 2015-11-18 2017-05-26 Sirona Dental Systems Gmbh Method for visualizing a tooth situation
US10839302B2 (en) 2015-11-24 2020-11-17 The Research Foundation For The State University Of New York Approximate value iteration with complex returns by bounding
US11412320B2 (en) 2015-12-04 2022-08-09 Time Warner Cable Enterprises Llc Apparatus and methods for selective data network access
US9986578B2 (en) 2015-12-04 2018-05-29 Time Warner Cable Enterprises Llc Apparatus and methods for selective data network access
US10176508B2 (en) * 2015-12-31 2019-01-08 Walmart Apollo, Llc System, method, and non-transitory computer-readable storage media for evaluating search results for online grocery personalization
US10687371B2 (en) 2016-01-20 2020-06-16 Time Warner Cable Enterprises Llc Apparatus and method for wireless network services in moving vehicles
US9918345B2 (en) 2016-01-20 2018-03-13 Time Warner Cable Enterprises Llc Apparatus and method for wireless network services in moving vehicles
WO2017132689A1 (en) * 2016-01-29 2017-08-03 Curio Search, Inc. Method and system for product discovery
US20190082211A1 (en) * 2016-02-10 2019-03-14 Nitin Vats Producing realistic body movement using body Images
US11736756B2 (en) * 2016-02-10 2023-08-22 Nitin Vats Producing realistic body movement using body images
US11843641B2 (en) 2016-02-26 2023-12-12 Time Warner Cable Enterprises Llc Apparatus and methods for centralized message exchange in a user premises device
US10404758B2 (en) 2016-02-26 2019-09-03 Time Warner Cable Enterprises Llc Apparatus and methods for centralized message exchange in a user premises device
US11258832B2 (en) 2016-02-26 2022-02-22 Time Warner Cable Enterprises Llc Apparatus and methods for centralized message exchange in a user premises device
US20190340671A1 (en) * 2016-03-07 2019-11-07 Bao Tran Systems and methods for fitting products
US9996981B1 (en) * 2016-03-07 2018-06-12 Bao Tran Augmented reality system
US10492034B2 (en) 2016-03-07 2019-11-26 Time Warner Cable Enterprises Llc Apparatus and methods for dynamic open-access networks
US20200143558A1 (en) * 2016-03-07 2020-05-07 Bao Tran Extended reality system
US11232580B2 (en) * 2016-03-07 2022-01-25 Bao Tran Extended reality system
US9721384B1 (en) * 2016-03-07 2017-08-01 Bao Tran Systems and methods for fitting products to users
US20180253906A1 (en) * 2016-03-07 2018-09-06 Bao Tran Augmented reality system
US10540776B2 (en) * 2016-03-07 2020-01-21 Bao Tran Augmented reality product selection
US10157503B2 (en) * 2016-03-07 2018-12-18 Bao Tran Augmented reality system
US9460557B1 (en) * 2016-03-07 2016-10-04 Bao Tran Systems and methods for footwear fitting
US11665509B2 (en) 2016-03-07 2023-05-30 Time Warner Cable Enterprises Llc Apparatus and methods for dynamic open-access networks
US20170270686A1 (en) * 2016-03-19 2017-09-21 Jessica V. Couch Use of Camera on Mobile Device to Extract Measurements From Garments
US11494949B2 (en) * 2016-03-25 2022-11-08 Ebay Inc. Publication modification using body coordinates
US20200043200A1 (en) * 2016-03-25 2020-02-06 Ebay Inc. Publication modification using body coordinates
US20170277365A1 (en) * 2016-03-28 2017-09-28 Intel Corporation Control system for user apparel selection
CN109074586A (en) * 2016-03-29 2018-12-21 飞力凯网路股份有限公司 Terminal device, communication method, settlement processing device, settlement method, and settlement system
US11393007B2 (en) * 2016-03-31 2022-07-19 Under Armour, Inc. Methods and apparatus for enhanced product recommendations
US20170287044A1 (en) * 2016-03-31 2017-10-05 Under Armour, Inc. Methods and Apparatus for Enhanced Product Recommendations
US20190073798A1 (en) * 2016-04-03 2019-03-07 Eliza Yingzi Du Photorealistic human holographic augmented reality communication with interactive control in real-time using a cluster of servers
US10796456B2 (en) * 2016-04-03 2020-10-06 Eliza Yingzi Du Photorealistic human holographic augmented reality communication with interactive control in real-time using a cluster of servers
US10580040B2 (en) * 2016-04-03 2020-03-03 Integem Inc Methods and systems for real-time image and signal processing in augmented reality based communications
US20170287226A1 (en) * 2016-04-03 2017-10-05 Integem Inc Methods and systems for real-time image and signal processing in augmented reality based communications
US10614504B2 (en) 2016-04-15 2020-04-07 Walmart Apollo, Llc Systems and methods for providing content-based product recommendations
US10430817B2 (en) 2016-04-15 2019-10-01 Walmart Apollo, Llc Partiality vector refinement systems and methods through sample probing
US10592959B2 (en) 2016-04-15 2020-03-17 Walmart Apollo, Llc Systems and methods for facilitating shopping in a physical retail facility
US10614921B2 (en) * 2016-05-24 2020-04-07 Cal-Comp Big Data, Inc. Personalized skin diagnosis and skincare
US10565451B2 (en) * 2016-06-15 2020-02-18 International Business Machines Corporation Augmented video analytics for testing internet of things (IoT) devices
US10164858B2 (en) 2016-06-15 2018-12-25 Time Warner Cable Enterprises Llc Apparatus and methods for monitoring and diagnosing a wireless network
US11146470B2 (en) 2016-06-15 2021-10-12 Time Warner Cable Enterprises Llc Apparatus and methods for monitoring and diagnosing a wireless network
US20190147248A1 (en) * 2016-06-15 2019-05-16 International Business Machines Corporation AUGMENTED VIDEO ANALYTICS FOR TESTING INTERNET OF THINGS (IoT) DEVICES
US10692113B2 (en) 2016-06-21 2020-06-23 Htc Corporation Method for providing customized information through advertising in simulation environment, and associated simulation system
CN107526433A (en) * 2016-06-21 2017-12-29 宏达国际电子股份有限公司 Method for providing customized information in a simulated environment, and associated simulation system
US10373464B2 (en) 2016-07-07 2019-08-06 Walmart Apollo, Llc Apparatus and method for updating partiality vectors based on monitoring of person and his or her home
US10733444B2 (en) * 2016-07-12 2020-08-04 Walmart Apollo, Llc Systems and methods for automated assessment of physical objects
US20180018519A1 (en) * 2016-07-12 2018-01-18 Wal-Mart Stores, Inc. Systems and Methods for Automated Assessment of Physical Objects
US10499109B2 (en) * 2016-07-28 2019-12-03 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for providing combined barrage information
US20180035168A1 (en) * 2016-07-28 2018-02-01 Baidu Online Network Technology (Beijing) Co., Ltd. Method and Apparatus for Providing Combined Barrage Information
CN106130788A (en) * 2016-08-05 2016-11-16 珠海市魅族科技有限公司 Method and device for adapting a subject document to a terminal
US11227008B2 (en) * 2016-08-10 2022-01-18 Zeekit Online Shopping Ltd. Method, system, and device of virtual dressing utilizing image processing, machine learning, and computer vision
US10891547B2 (en) * 2016-08-23 2021-01-12 International Business Machines Corporation Virtual resource t-shirt size generation and recommendation based on crowd sourcing
US20180060740A1 (en) * 2016-08-23 2018-03-01 International Business Machines Corporation Virtual resource t-shirt size generation and recommendation based on crowd sourcing
US20180060948A1 (en) * 2016-08-24 2018-03-01 Wal-Mart Stores, Inc. Apparatus and method for providing a virtual shopping environment
US11170419B1 (en) * 2016-08-26 2021-11-09 SharePay, Inc. Methods and systems for transaction division
US11016634B2 (en) * 2016-09-01 2021-05-25 Samsung Electronics Co., Ltd. Refrigerator storage system having a display
US20180059881A1 (en) * 2016-09-01 2018-03-01 Samsung Electronics Co., Ltd. Refrigerator storage system having a display
US10600100B2 (en) 2016-09-07 2020-03-24 Walmart Apollo, Llc Apparatus and method for providing item interaction with a virtual store
US10133916B2 (en) 2016-09-07 2018-11-20 Steven M. Gottlieb Image and identity validation in video chat events
US9836183B1 (en) * 2016-09-14 2017-12-05 Quid, Inc. Summarized network graph for semantic similarity graphs of large corpora
US11212593B2 (en) 2016-09-27 2021-12-28 Time Warner Cable Enterprises Llc Apparatus and methods for automated secondary content management in a digital network
US20180096506A1 (en) * 2016-10-04 2018-04-05 Facebook, Inc. Controls and Interfaces for User Interactions in Virtual Spaces
US10691983B2 (en) 2016-10-13 2020-06-23 International Business Machines Corporation Identifying complimentary physical components to known physical components
US10217031B2 (en) * 2016-10-13 2019-02-26 International Business Machines Corporation Identifying complimentary physical components to known physical components
US10580055B2 (en) 2016-10-13 2020-03-03 International Business Machines Corporation Identifying physical tools to manipulate physical components based on analyzing digital images of the physical components
US20190188449A1 (en) * 2016-10-28 2019-06-20 Boe Technology Group Co., Ltd. Clothes positioning device and method
WO2018089676A1 (en) * 2016-11-10 2018-05-17 Dga Inc. Product tagging and purchasing method and system
US10012505B2 (en) * 2016-11-11 2018-07-03 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable system for providing walking directions
US10841660B2 (en) 2016-12-29 2020-11-17 Dressbot Inc. System and method for multi-user digital interactive experience
US11457283B2 (en) 2016-12-29 2022-09-27 Dressbot Inc. System and method for multi-user digital interactive experience
US11889159B2 (en) 2016-12-29 2024-01-30 Dressbot Inc. System and method for multi-user digital interactive experience
US20180197423A1 (en) * 2017-01-12 2018-07-12 American National Elt Yayincilik Egtim Ve Danismanlik Ltd. Sti. Education model utilizing a qr-code smart book
US11710115B1 (en) 2017-01-27 2023-07-25 American Express Travel Related Services Company, Inc. Transaction account charge splitting
US10915881B2 (en) 2017-01-27 2021-02-09 American Express Travel Related Services Company, Inc. Transaction account charge splitting
US11227001B2 (en) 2017-01-31 2022-01-18 Experian Information Solutions, Inc. Massive scale heterogeneous data ingestion and user resolution
US11681733B2 (en) 2017-01-31 2023-06-20 Experian Information Solutions, Inc. Massive scale heterogeneous data ingestion and user resolution
US11657380B2 (en) 2017-02-06 2023-05-23 American Express Travel Related Services Company, Inc. Charge splitting across multiple payment systems
US10825022B1 (en) * 2017-03-03 2020-11-03 Wells Fargo Bank, N.A. Systems and methods for purchases locked by video
US11526931B2 (en) * 2017-03-16 2022-12-13 EyesMatch Ltd. Systems and methods for digital mirror
US11862302B2 (en) 2017-04-24 2024-01-02 Teladoc Health, Inc. Automated transcription and documentation of tele-health encounters
US20180315117A1 (en) * 2017-04-26 2018-11-01 David Lynton Jephcott On-Line Retail
US10375375B2 (en) * 2017-05-15 2019-08-06 Lg Electronics Inc. Method of providing fixed region information or offset region information for subtitle in virtual reality system and device for controlling the same
US10666922B2 (en) 2017-05-15 2020-05-26 Lg Electronics Inc. Method of transmitting 360-degree video, method of receiving 360-degree video, device for transmitting 360-degree video, and device for receiving 360-degree video
US11109013B2 (en) * 2017-05-15 2021-08-31 Lg Electronics Inc. Method of transmitting 360-degree video, method of receiving 360-degree video, device for transmitting 360-degree video, and device for receiving 360-degree video
US10757392B2 (en) 2017-05-15 2020-08-25 Lg Electronics Inc. Method of transmitting 360-degree video, method of receiving 360-degree video, device for transmitting 360-degree video, and device for receiving 360-degree video
CN108960005A (en) * 2017-05-19 2018-12-07 内蒙古大学 Method and system for establishing and displaying visual labels of objects in an intelligent visual Internet of Things
US11356819B2 (en) 2017-06-02 2022-06-07 Charter Communications Operating, Llc Apparatus and methods for providing wireless service in a venue
US10645547B2 (en) 2017-06-02 2020-05-05 Charter Communications Operating, Llc Apparatus and methods for providing wireless service in a venue
US11350310B2 (en) 2017-06-06 2022-05-31 Charter Communications Operating, Llc Methods and apparatus for dynamic control of connections to co-existing radio access networks
US10638361B2 (en) 2017-06-06 2020-04-28 Charter Communications Operating, Llc Methods and apparatus for dynamic control of connections to co-existing radio access networks
US10511692B2 (en) 2017-06-22 2019-12-17 Bank Of America Corporation Data transmission to a networked resource based on contextual information
US11190617B2 (en) 2017-06-22 2021-11-30 Bank Of America Corporation Data transmission to a networked resource based on contextual information
US10313480B2 (en) 2017-06-22 2019-06-04 Bank Of America Corporation Data transmission between networked resources
US10524165B2 (en) 2017-06-22 2019-12-31 Bank Of America Corporation Dynamic utilization of alternative resources based on token association
US10986541B2 (en) 2017-06-22 2021-04-20 Bank Of America Corporation Dynamic utilization of alternative resources based on token association
US20180374128A1 (en) * 2017-06-23 2018-12-27 Perfect365 Technology Company Ltd. Method and system for a styling platform
US10540697B2 (en) * 2017-06-23 2020-01-21 Perfect365 Technology Company Ltd. Method and system for a styling platform
US11621865B2 (en) * 2017-07-21 2023-04-04 Pearson Education, Inc. Systems and methods for automated platform-based algorithm monitoring
US10938592B2 (en) * 2017-07-21 2021-03-02 Pearson Education, Inc. Systems and methods for automated platform-based algorithm monitoring
US20210152385A1 (en) * 2017-07-21 2021-05-20 Pearson Education, Inc. Systems and methods for automated platform-based algorithm monitoring
US11742094B2 (en) 2017-07-25 2023-08-29 Teladoc Health, Inc. Modular telehealth cart with thermal imaging and touch screen user interface
US10368255B2 (en) 2017-07-25 2019-07-30 Time Warner Cable Enterprises Llc Methods and apparatus for client-based dynamic control of connections to co-existing radio access networks
US20190051057A1 (en) * 2017-08-08 2019-02-14 Reald Spark, Llc Adjusting a digital representation of a head region
US10740985B2 (en) * 2017-08-08 2020-08-11 Reald Spark, Llc Adjusting a digital representation of a head region
US11636944B2 (en) 2017-08-25 2023-04-25 Teladoc Health, Inc. Connectivity infrastructure for a telehealth platform
US10540593B1 (en) 2017-08-29 2020-01-21 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US10984330B1 (en) 2017-08-29 2021-04-20 Massachusetts Mutual Life Insurance Company System and method for managing customer call-backs
US10997506B1 (en) 2017-08-29 2021-05-04 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US10582060B1 (en) 2017-08-29 2020-03-03 Massachusetts Mutual Life Insurance Company System and method for managing customer call-backs
US11736617B1 (en) 2017-08-29 2023-08-22 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US10547748B1 (en) 2017-08-29 2020-01-28 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US10257355B1 (en) 2017-08-29 2019-04-09 Massachusetts Mutual Life Insurance Company System and method for managing customer call-backs
US10909463B1 (en) 2017-08-29 2021-02-02 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US11669749B1 (en) 2017-08-29 2023-06-06 Massachusetts Mutual Life Insurance Company System and method for managing customer call-backs
US10565529B1 (en) 2017-08-29 2020-02-18 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US10412224B1 (en) 2017-08-29 2019-09-10 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US10860937B1 (en) 2017-08-29 2020-12-08 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US10395184B1 (en) 2017-08-29 2019-08-27 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US10346750B1 (en) 2017-08-29 2019-07-09 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US11176461B1 (en) 2017-08-29 2021-11-16 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US10769538B1 (en) 2017-08-29 2020-09-08 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US11551108B1 (en) 2017-08-29 2023-01-10 Massachusetts Mutual Life Insurance Company System and method for managing routing of customer calls to agents
US10867128B2 (en) 2017-09-12 2020-12-15 Microsoft Technology Licensing, Llc Intelligently updating a collaboration site or template
CN107833145A (en) * 2017-09-19 2018-03-23 翔创科技(北京)有限公司 Livestock database construction method and traceability method, storage medium, and electronic device
US10742500B2 (en) * 2017-09-20 2020-08-11 Microsoft Technology Licensing, Llc Iteratively updating a collaboration site or template
USD916860S1 (en) 2017-09-26 2021-04-20 Amazon Technologies, Inc. Display system with a virtual reality graphical user interface
US11164362B1 (en) 2017-09-26 2021-11-02 Amazon Technologies, Inc. Virtual reality user interface generation
US10530841B2 (en) 2017-10-03 2020-01-07 The Toronto-Dominion Bank System and method for transferring value between database records
US11122113B2 (en) 2017-10-03 2021-09-14 The Toronto-Dominion Bank System and method for transferring value between database records
US10445608B2 (en) * 2017-10-25 2019-10-15 Motorola Mobility Llc Identifying object representations in image data
US20190130082A1 (en) * 2017-10-26 2019-05-02 Motorola Mobility Llc Authentication Methods and Devices for Allowing Access to Private Data
CN110034998A (en) * 2017-11-07 2019-07-19 奥誓公司 Computerized system and method for controlling electronic messages and their responses after delivery
US11140113B2 (en) 2017-11-07 2021-10-05 Verizon Media Inc. Computerized system and method for controlling electronic messages and their responses after delivery
US10454869B2 (en) * 2017-11-07 2019-10-22 Oath Inc. Computerized system and method for controlling electronic messages and their responses after delivery
US11876764B2 (en) 2017-11-07 2024-01-16 Yahoo Assets Llc Computerized system and method for controlling electronic messages and their responses after delivery
US11069112B2 (en) * 2017-11-17 2021-07-20 Sony Interactive Entertainment LLC Systems, methods, and devices for creating a spline-based video animation sequence
US20210350603A1 (en) * 2017-11-17 2021-11-11 Sony Interactive Entertainment LLC Systems, methods, and devices for creating a spline-based video animation sequence
US11688115B2 (en) * 2017-11-17 2023-06-27 Sony Interactive Entertainment LLC Systems, methods, and devices for creating a spline-based video animation sequence
US10712811B2 (en) * 2017-12-12 2020-07-14 Facebook, Inc. Providing a digital model of a corresponding product in a camera feed
US10504251B1 (en) * 2017-12-13 2019-12-10 A9.Com, Inc. Determining a visual hull of an object
US11413536B2 (en) 2017-12-22 2022-08-16 Activision Publishing, Inc. Systems and methods for managing virtual items across multiple video game environments
US11003858B2 (en) * 2017-12-22 2021-05-11 Microsoft Technology Licensing, Llc AI system to determine actionable intent
US10765948B2 (en) 2017-12-22 2020-09-08 Activision Publishing, Inc. Video game content aggregation, normalization, and publication systems and methods
US10846562B2 (en) * 2018-01-12 2020-11-24 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for image matching
US11568236B2 (en) 2018-01-25 2023-01-31 The Research Foundation For The State University Of New York Framework and methods of diverse exploration for fast and safe policy improvement
US11019454B2 (en) 2018-02-13 2021-05-25 Charter Communications Operating, Llc Apparatus and methods for device location determination
US10477349B2 (en) 2018-02-13 2019-11-12 Charter Communications Operating, Llc Apparatus and methods for device location determination
US11758355B2 (en) 2018-02-13 2023-09-12 Charter Communications Operating, Llc Apparatus and methods for device location determination
WO2019167061A1 (en) * 2018-02-27 2019-09-06 Soul Vision Creations Private Limited Virtual representation creation of user for fit and style of apparel and accessories
US10872475B2 (en) 2018-02-27 2020-12-22 Soul Vision Creations Private Limited 3D mobile renderer for user-generated avatar, apparel, and accessories
US10777021B2 (en) 2018-02-27 2020-09-15 Soul Vision Creations Private Limited Virtual representation creation of user for fit and style of apparel and accessories
US10777020B2 (en) 2018-02-27 2020-09-15 Soul Vision Creations Private Limited Virtual representation creation of user for fit and style of apparel and accessories
US10453061B2 (en) 2018-03-01 2019-10-22 Capital One Services, Llc Network of trust
US11127006B2 (en) 2018-03-01 2021-09-21 Capital One Services Llc Network of trust
US20190272679A1 (en) * 2018-03-01 2019-09-05 Yuliya Brodsky Cloud-based garment design system
US11297688B2 (en) 2018-03-22 2022-04-05 goTenna Inc. Mesh network deployment kit
US11389064B2 (en) 2018-04-27 2022-07-19 Teladoc Health, Inc. Telehealth cart that supports a removable tablet with seamless audio/video switching
US20190332864A1 (en) * 2018-04-27 2019-10-31 Microsoft Technology Licensing, Llc Context-awareness
US10748001B2 (en) 2018-04-27 2020-08-18 Microsoft Technology Licensing, Llc Context-awareness
US10748002B2 (en) * 2018-04-27 2020-08-18 Microsoft Technology Licensing, Llc Context-awareness
US10791418B2 (en) * 2018-05-07 2020-09-29 Bayerische Motoren Werke Aktiengesellschaft Method and system for modeling user and location
US20190342698A1 (en) * 2018-05-07 2019-11-07 Bayerische Motoren Werke Aktiengesellschaft Method and System for Modeling User and Location
US11107149B2 (en) * 2018-05-11 2021-08-31 Lemon Hat Collaborative list management
US11288796B2 (en) 2018-05-31 2022-03-29 Beijing Sensetime Technology Development Co., Ltd. Image processing method, terminal device, and computer storage medium
CN108830783A (en) * 2018-05-31 2018-11-16 北京市商汤科技开发有限公司 Image processing method, device, and computer storage medium
US20200042160A1 (en) * 2018-06-18 2020-02-06 Alessandro Gabbi System and Method for Providing Virtual-Reality Based Interactive Archives for Therapeutic Interventions, Interactions and Support
US10911234B2 (en) 2018-06-22 2021-02-02 Experian Information Solutions, Inc. System and method for a token gateway environment
US11588639B2 (en) 2018-06-22 2023-02-21 Experian Information Solutions, Inc. System and method for a token gateway environment
US10678956B2 (en) * 2018-06-25 2020-06-09 Dell Products, L.P. Keyboard for provisioning security credentials
CN110069699A (en) * 2018-07-27 2019-07-30 阿里巴巴集团控股有限公司 Order model training method and device
US11244381B2 (en) 2018-08-21 2022-02-08 International Business Machines Corporation Collaborative virtual reality computing system
US11265324B2 (en) 2018-09-05 2022-03-01 Consumerinfo.Com, Inc. User permissions for access to secure data at third-party
US10671749B2 (en) 2018-09-05 2020-06-02 Consumerinfo.Com, Inc. Authenticated access and aggregation database platform
US11399029B2 (en) 2018-09-05 2022-07-26 Consumerinfo.Com, Inc. Database platform for realtime updating of user data from third party sources
US10880313B2 (en) 2018-09-05 2020-12-29 Consumerinfo.Com, Inc. Database platform for realtime updating of user data from third party sources
US10963434B1 (en) 2018-09-07 2021-03-30 Experian Information Solutions, Inc. Data architecture for supporting multiple search models
US11734234B1 (en) 2018-09-07 2023-08-22 Experian Information Solutions, Inc. Data architecture for supporting multiple search models
US11019077B2 (en) 2018-09-27 2021-05-25 Palo Alto Networks, Inc. Multi-access distributed edge security in mobile networks
US10944796B2 (en) 2018-09-27 2021-03-09 Palo Alto Networks, Inc. Network slice-based security in mobile networks
US10812971B2 (en) 2018-09-27 2020-10-20 Palo Alto Networks, Inc. Service-based security per data network name in mobile networks
US10812972B2 (en) * 2018-09-27 2020-10-20 Palo Alto Networks, Inc. Service-based security per user location in mobile networks
US20200128400A1 (en) * 2018-09-27 2020-04-23 Palo Alto Networks, Inc. Service-based security per user location in mobile networks
US11582264B2 (en) 2018-09-27 2023-02-14 Palo Alto Networks, Inc. Network slice-based security in mobile networks
US11792235B2 (en) 2018-09-27 2023-10-17 Palo Alto Networks, Inc. Network slice-based security in mobile networks
US11258834B2 (en) * 2018-10-05 2022-02-22 Explain Everything, Inc. System and method for recording online collaboration
US11487712B2 (en) 2018-10-09 2022-11-01 Ebay Inc. Digital image suitability determination to generate AR/VR digital content
US11100054B2 (en) 2018-10-09 2021-08-24 Ebay Inc. Digital image suitability determination to generate AR/VR digital content
US10504384B1 (en) * 2018-10-12 2019-12-10 Haier Us Appliance Solutions, Inc. Augmented reality user engagement system
US10588175B1 (en) 2018-10-24 2020-03-10 Capital One Services, Llc Network of trust with blockchain
US11842331B2 (en) * 2018-10-24 2023-12-12 Capital One Services, Llc Network of trust for bill splitting
US20200134600A1 (en) * 2018-10-24 2020-04-30 Capital One Services, Llc Network of trust for bill splitting
US11212871B2 (en) 2018-10-24 2021-12-28 Capital One Services, Llc Network of trust with blockchain
US11494757B2 (en) 2018-10-24 2022-11-08 Capital One Services, Llc Remote commands using network of trust
US11900354B2 (en) 2018-10-24 2024-02-13 Capital One Services, Llc Remote commands using network of trust
JP2020071884A (en) * 2018-10-31 2020-05-07 株式会社sole Information processing apparatus
US11315179B1 (en) 2018-11-16 2022-04-26 Consumerinfo.Com, Inc. Methods and apparatuses for customized card recommendations
US20200175589A1 (en) * 2018-11-29 2020-06-04 Matrix Financial Technologies, Inc. System and Methodology for Collaborative Trading with Share and Follow Capabilities
US11877028B2 (en) 2018-12-04 2024-01-16 The Nielsen Company (Us), Llc Methods and apparatus to identify media presentations by analyzing network traffic
US10992764B1 (en) * 2018-12-11 2021-04-27 Amazon Technologies, Inc. Automatic user profiling using video streaming history
US11176629B2 (en) * 2018-12-21 2021-11-16 FreightVerify, Inc. System and method for monitoring logistical locations and transit entities using a canonical model
US11182634B2 (en) * 2019-02-05 2021-11-23 Disney Enterprises, Inc. Systems and methods for modifying labeled content
US11842454B1 (en) 2019-02-22 2023-12-12 Consumerinfo.Com, Inc. System and method for an augmented reality experience via an artificial intelligence bot
US11238656B1 (en) 2019-02-22 2022-02-01 Consumerinfo.Com, Inc. System and method for an augmented reality experience via an artificial intelligence bot
US11392659B2 (en) * 2019-02-28 2022-07-19 Adobe Inc. Utilizing machine learning models to generate experience driven search results based on digital canvas gesture inputs
US10924442B2 (en) 2019-03-05 2021-02-16 Capital One Services, Llc Conversation agent for collaborative search engine
US11113536B2 (en) * 2019-03-15 2021-09-07 Boe Technology Group Co., Ltd. Video identification method, video identification device, and storage medium
US20220172173A1 (en) * 2019-03-18 2022-06-02 Obshchestvo S Ogranichennoi Otvetstvennostiu "Headhunter" Recommender system for staff recruitment using machine learning with multivariate data dimension reduction and staff recruitment method using machine learning with multivariate data dimension reduction
US20230118119A1 (en) * 2019-03-24 2023-04-20 We.R Augmented Reality Cloud Ltd. System, Device, and Method of Augmented Reality based Mapping of a Venue and Navigation within a Venue
US11354728B2 (en) * 2019-03-24 2022-06-07 We.R Augmented Reality Cloud Ltd. System, device, and method of augmented reality based mapping of a venue and navigation within a venue
US11069093B2 (en) * 2019-04-26 2021-07-20 Adobe Inc. Generating contextualized image variants of multiple component images
US11354828B2 (en) 2019-04-26 2022-06-07 Adobe Inc. Generating contextualized image variants of multiple component images
US11138281B2 (en) * 2019-05-22 2021-10-05 Microsoft Technology Licensing, Llc System user attribute relevance based on activity
US20200394699A1 (en) * 2019-06-13 2020-12-17 Knot Standard LLC Systems and/or methods for presenting dynamic content for articles of clothing
US11615454B2 (en) * 2019-06-13 2023-03-28 Knot Standard LLC Systems and/or methods for presenting dynamic content for articles of clothing
US11816800B2 (en) 2019-07-03 2023-11-14 Apple Inc. Guided consumer experience
US11775130B2 (en) * 2019-07-03 2023-10-03 Apple Inc. Guided retail experience
US20210004137A1 (en) * 2019-07-03 2021-01-07 Apple Inc. Guided retail experience
CN112184356A (en) * 2019-07-03 2021-01-05 苹果公司 Guided retail experience
US11508021B2 (en) * 2019-07-22 2022-11-22 Vmware, Inc. Processes and systems that determine sustainability of a virtual infrastructure of a distributed computing system
US20210027401A1 (en) * 2019-07-22 2021-01-28 Vmware, Inc. Processes and systems that determine sustainability of a virtual infrastructure of a distributed computing system
US20220327580A1 (en) * 2019-09-19 2022-10-13 Beijing Bytedance Network Technology Co., Ltd. Method and apparatus for interacting with image, and medium and electronic device
US20210294940A1 (en) * 2019-10-07 2021-09-23 Conor Haas Dodd System, apparatus, and method for simulating the value of a product idea
US11386408B2 (en) * 2019-11-01 2022-07-12 Intuit Inc. System and method for nearest neighbor-based bank account number validation
CN111222264A (en) * 2019-11-01 2020-06-02 长春英利汽车工业股份有限公司 Manufacturing method of composite continuous glass fiber reinforced front-end module
US11712627B2 (en) 2019-11-08 2023-08-01 Activision Publishing, Inc. System and method for providing conditional access to virtual gaming items
US20210174422A1 (en) * 2019-12-04 2021-06-10 Lg Electronics Inc. Smart apparatus
US11854059B2 (en) * 2019-12-04 2023-12-26 Lg Electronics Inc. Smart apparatus
US20210192074A1 (en) * 2019-12-19 2021-06-24 Capital One Services, Llc System and method for controlling access to account transaction information
US11928235B2 (en) * 2019-12-19 2024-03-12 Capital One Services, Llc System and method for controlling access to account transaction information
WO2021138057A1 (en) * 2019-12-31 2021-07-08 Paypal, Inc. Dynamically rendered interface elements during online chat sessions
AU2020417722B2 (en) * 2019-12-31 2023-10-12 Paypal, Inc. Dynamically rendered interface elements during online chat sessions
US11423463B2 (en) 2019-12-31 2022-08-23 Paypal, Inc. Dynamically rendered interface elements during online chat sessions
CN111445283A (en) * 2020-03-25 2020-07-24 北京百度网讯科技有限公司 Digital human processing method and device based on interactive device and storage medium
WO2021211875A1 (en) * 2020-04-15 2021-10-21 Tekion Corp Document sharing with annotations
US11334241B2 (en) 2020-04-15 2022-05-17 Tekion Corp Document sharing with annotations
US11847312B2 (en) 2020-04-15 2023-12-19 Tekion Corp Document sharing with annotations
US20210357468A1 (en) * 2020-05-15 2021-11-18 Baidu Online Network Technology (Beijing) Co., Ltd. Method for sorting geographic location point, method for training sorting model and corresponding apparatuses
US11556601B2 (en) * 2020-05-15 2023-01-17 Baidu Online Network Technology (Beijing) Co., Ltd. Method for sorting geographic location point, method for training sorting model and corresponding apparatuses
US11354377B2 (en) 2020-06-29 2022-06-07 Walmart Apollo, Llc Methods and apparatus for automatically providing item reviews and suggestions
US11798202B2 (en) 2020-09-28 2023-10-24 Snap Inc. Providing augmented reality-based makeup in a messaging system
US20220101418A1 (en) * 2020-09-28 2022-03-31 Snap Inc. Providing augmented reality-based makeup product sets in a messaging system
CN112365572A (en) * 2020-09-30 2021-02-12 深圳市为汉科技有限公司 Tessellation-based rendering method and related products
US11410394B2 (en) 2020-11-04 2022-08-09 West Texas Technology Partners, Inc. Method for interactive catalog for 3D objects within the 2D environment
US11922577B2 (en) * 2020-11-16 2024-03-05 Clo Virtual Fashion Inc. Method and apparatus for online fitting
US20220157020A1 (en) * 2020-11-16 2022-05-19 Clo Virtual Fashion Inc. Method and apparatus for online fitting
US20220179419A1 (en) * 2020-12-04 2022-06-09 Mitsubishi Electric Research Laboratories, Inc. Method and System for Modelling and Control Partially Measurable Systems
US11765221B2 (en) 2020-12-14 2023-09-19 The Western Union Company Systems and methods for adaptive security and cooperative multi-system operations with dynamic protocols
CN112751837A (en) * 2020-12-25 2021-05-04 苏州星舟知识产权代理有限公司 Open synchronous online conference system
US11880377B1 (en) 2021-03-26 2024-01-23 Experian Information Solutions, Inc. Systems and methods for entity resolution
CN113203984A (en) * 2021-04-25 2021-08-03 华中科技大学 Multi-client online collaborative positioning system
US20220358905A1 (en) * 2021-05-05 2022-11-10 Deep Media Inc. Audio and video translator
US20230088322A1 (en) * 2021-05-05 2023-03-23 Deep Media Inc. Audio and video translator
US11908449B2 (en) * 2021-05-05 2024-02-20 Deep Media Inc. Audio and video translator
US11551664B2 (en) * 2021-05-05 2023-01-10 Deep Media Inc. Audio and video translator
US20220387895A1 (en) * 2021-06-02 2022-12-08 Yariv Glazer Method and System for Managing Virtual Personal Space
US11642598B2 (en) * 2021-06-02 2023-05-09 Yariv Glazer Method and system for managing virtual personal space
US11494851B1 (en) 2021-06-11 2022-11-08 Winter Chat Pty Ltd. Messaging system and method for providing management views
US11341337B1 (en) * 2021-06-11 2022-05-24 Winter Chat Pty Ltd Semantic messaging collaboration system
US11716421B2 (en) * 2021-08-16 2023-08-01 Capital One Services, Llc System and methods for dynamically routing and rating customer service communications
US20230050482A1 (en) * 2021-08-16 2023-02-16 Capital One Services, Llc System and methods for dynamically routing and rating customer service communications
US11423110B1 (en) * 2021-09-22 2022-08-23 Finvar Corporation Intelligent timeline and commercialization system with social networking features
US11574324B1 (en) 2021-09-22 2023-02-07 Finvar Corporation Logic extraction and application subsystem for intelligent timeline and commercialization system
CN113837138A (en) * 2021-09-30 2021-12-24 重庆紫光华山智安科技有限公司 Dressing monitoring method, system, medium and electronic terminal
WO2023083405A1 (en) * 2021-11-10 2023-05-19 EPLAN GmbH & Co. KG Flexible management of resources for multiple users
DE102021129282A1 (en) 2021-11-10 2023-05-11 EPLAN GmbH & Co. KG Flexible management of resources for multiple users
US11935103B2 (en) 2021-12-29 2024-03-19 Ebay Inc. Methods and systems for reducing item selection error in an e-commerce environment
US20230244724A1 (en) * 2022-02-01 2023-08-03 Jpmorgan Chase Bank, N.A. Method and system for automated public information discovery
US20230273714A1 (en) * 2022-02-25 2023-08-31 ShredMetrix LLC Systems And Methods For Visualizing Sporting Equipment
WO2023249614A1 (en) * 2022-06-21 2023-12-28 Dxm, Inc. Manufacturing system for manufacturing articles of clothing and other goods
US20240037858A1 (en) * 2022-07-28 2024-02-01 Snap Inc. Virtual wardrobe ar experience
CN115861488A (en) * 2022-12-22 2023-03-28 中国科学技术大学 High-resolution virtual outfit-change method, system, device, and storage medium
CN116703534A (en) * 2023-08-08 2023-09-05 申合信科技集团有限公司 Intelligent management method for data of electronic commerce orders
CN117392352A (en) * 2023-12-11 2024-01-12 南京市文化投资控股集团有限责任公司 Model modeling and operation management system and method for the metaverse

Also Published As

Publication number Publication date
GB2458388A (en) 2009-09-23
CA2659698A1 (en) 2009-09-21
GB0904911D0 (en) 2009-05-06
CA2659698C (en) 2020-06-16
US10002337B2 (en) 2018-06-19
US20130066750A1 (en) 2013-03-14

Similar Documents

Publication Publication Date Title
US11893558B2 (en) System and method for collaborative shopping, business and entertainment
US10002337B2 (en) Method for collaborative shopping
US20130215116A1 (en) System and Method for Collaborative Shopping, Business and Entertainment
US10013713B2 (en) Computer implemented methods and systems for generating virtual body models for garment fit visualisation
Wodtke Information architecture: Blueprints for the Web
US9870636B2 (en) Method for sharing emotions through the creation of three dimensional avatars and their interaction
CN109478192A (en) Method for providing one or more customized media-centric products
US20230090253A1 (en) Systems and methods for authoring and managing extended reality (xr) avatars
Mu et al. Fashion intelligence in the Metaverse: promise and future prospects
Baje et al. Cherie: A Proposed Design for a Mobile Application with AI Outfit Assistance and 3D Virtual Wardrobe
MCDOUGALL Digital Tools
KR20230095395A (en) Apparatus and method for managing merchandise sales
Hahn From Discovery to Purchase: Improving the User Experience for Buyers in eCommerce

Legal Events

Date Code Title Description
AS Assignment

Owner name: DRESSBOT, INC., CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SIDDIQUE, M. A. SAMI;RAOUF, ABIDA;RAOUF, ABDUL AZIZ;AND OTHERS;REEL/FRAME:023670/0143

Effective date: 20090519

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION