US20150302422A1 - Systems and methods for multi-user behavioral research - Google Patents

Systems and methods for multi-user behavioral research

Info

Publication number
US20150302422A1
US20150302422A1
Authority
US
United States
Prior art keywords
participant
simulated environment
environment
logic
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/254,643
Inventor
James Edward Bryson
Kathryn Kersey HARLAN
Isaac David ROGERS
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Frost Ventures LLC
Original Assignee
2020 Ip LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 2020 Ip LLC
Priority to US14/254,643 (published as US20150302422A1)
Priority to US14/274,351 (published as US10354261B2)
Priority to US14/466,643 (published as US20150301597A1)
Assigned to 2020 IP LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BRYSON, JAMES EDWARD, HARLAN, KATHRYN KERSEY, ROGERS, ISAAC DAVID
Publication of US20150302422A1
Priority to US16/512,149 (published as US10600066B2)
Assigned to FROST VENTURES, LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: 20/20 IP, LLC


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 Market modelling; Market analysis; Collecting market data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082 Virtual reality

Definitions

  • Behavioral research, and particularly behavioral research relating to consumers' product preferences, may be a time consuming and expensive process. Even when behavioral research is conducted with a significant investment of time and money, the results of the research may not be wholly accurate or representative of consumers' true views.
  • focus groups of one or more participants are brought together in a common location and presented with products for evaluation. Participants may be brought to a special facility expressly designed for focus group testing (e.g., a facility with special conference rooms that allow the participants to be observed or recorded), which may add to the cost of conducting a focus group. Furthermore, costs may be driven up by the need to produce non-production product mockups or prototypes for the focus group, or simulated two-dimensional models to be displayed on a computer.
  • focus group testing may not yield entirely accurate or satisfactory results.
  • products may be viewed in isolation and/or out of a purchasing context. This may make it difficult to draw conclusions about how a consumer would interact with the product in a retail establishment or online, where the user would be confronted with multiple products and different environmental conditions.
  • Exemplary embodiments described herein relate to methods, mediums, and systems for performing behavioral research in a simulated environment, such as a virtual reality environment.
  • the research can be conducted either in person or remotely, allowing for increased flexibility and cost savings.
  • participants may interact with a product in a more natural way (e.g., by observing the product side-by-side with other products in a simulated retail establishment).
  • Exemplary embodiments may be configured to record the participant's observational data (e.g., the location where the observer is directing their gaze, the amount of time spent looking at a particular product, and whether the participant revisited the product after moving on to another product).
  • a centralized server that hosts the environment and/or research may be provided.
  • the centralized server may be located at a central facility (e.g., a facility associated with the researcher), at a remote location, or may be distributed (e.g., using cloud-based resources).
  • Different types of users having different roles may connect to the server.
  • the server may expose multiple interfaces that provide different capabilities.
  • a participant in a research project may be placed in the simulated environment and may control their own location (and the location of their gaze) within the environment.
  • a participant interface may therefore be provided, where the participant interface allows the participant to change positions in the environment and records participant observational data.
  • Another type of user may include a moderator responsible for running the research project.
  • the moderator may communicate with the participants, observe what the participants are looking at, manually move the participants to specified locations in the environment, and trigger questions about products in the environment that appear on the participants' displays.
  • a user connecting to the server through a moderator interface may be provided with these capabilities.
  • a third type of user may include a client interested in the outcome of the behavioral research.
  • the client may be a product designer whose product is being reviewed by the participants in the simulated environment.
  • a client interface may permit the client to observe what the participants are observing, and may potentially communicate with the moderator. However, it may be undesirable to allow the client to affect the participant's observations, and hence the client interface may be limited to observation and communication with the moderator.
  • the central server may build and/or maintain a simulated environment, and provide functionality for interacting with the simulated environment on behalf of multiple different types of users in such a way that meaningful behavioral research may be conducted.
  • a system for monitoring behaviors of a participant by a moderator and a client may be provided.
  • the system may include a non-transitory storage medium storing logic, and a processor for executing the logic.
  • the logic may include logic for implementing a participant interface that sends and receives instructions for simulating an environment and observing the simulated environment.
  • the participant interface logic may include demographic rules that cause the environment to be simulated in a different manner depending on demographics of the participant.
  • the participant interface logic may also include logic for changing a position of a participant avatar in the simulated environment, and/or logic for changing a location of a participant's gaze in the simulated environment.
  • the logic may further include logic for implementing a moderator interface that sends and receives instructions for simulating the environment and manipulating the simulated environment.
  • the moderator interface logic may include logic for moving the participant to a specified location in the simulated environment.
  • the moderator interface logic may further include logic for manually triggering a survey question.
  • the logic may further include logic for implementing a client interface that sends and receives instructions for viewing the simulated environment from the perspective of the participant.
  • the client interface logic may limit the actions of the client in the simulated environment to viewing the simulated environment from the perspective of the participant.
  • the processor may further be programmed to maintain the simulated environment, receive observational data about the simulated environment from the participant interface logic, and store the observational data in the storage medium.
  • the processor may calculate one or more viewing windows for the participant's gaze.
  • the processor may calculate scores for each of the viewing windows, the calculated scores representing an amount of attention given to an object in the viewing windows.
  • the processor may identify that the location of the participant's gaze encompasses a predefined trigger point, retrieve a survey question associated with the predefined trigger point, and transmit an instruction to the visual display device to display the retrieved survey question.
  • an interface may be provided to connect the system to a visual display device for displaying the simulated environment.
  • the visual display device may be, for example, a virtual reality headset or a browser.
  • the storage medium may store one or more hardware agnostic canvases that represent the simulated environment in a manner that is not specific to the visual display device, and the processor may translate the one or more hardware agnostic canvases into a format that is interpretable by the visual display device.
  • Further exemplary embodiments provide methods for monitoring behaviors of a participant by a moderator and a client.
  • the methods may include simulating an environment comprising an object of study.
  • Instructions may be transmitted to a participant visual display device, where the instructions include instructions for displaying a participant perspective of the simulated environment on the participant visual display device.
  • Participant location data describing a change in a position or a gaze location of the participant in the simulated environment may be received and analyzed.
  • a score may be calculated based on the participant location data, where the score represents an amount of attention paid by the participant to the object of study in the simulated environment.
  • the score may be stored in a non-transitory storage medium.
  • second instructions may be transmitted to a client visual display device.
  • the second instructions may include instructions for displaying the participant perspective of the simulated environment on the client visual display device.
  • the method may include connecting to a participant interface of an environmental server responsible for maintaining a simulated environment comprising an object of study.
  • the environmental server may maintain a plurality of different types of interfaces, each type of interface corresponding to a different type of user interacting with the simulated environment and providing different capabilities for the different types of users.
  • Information about the simulated environment may be received from the participant interface, and the simulated environment may be rendered for a participant based on the received information.
  • Participant location data describing a change in a position or a gaze location of the participant in the simulated environment may be transmitted to the environmental server using the participant interface.
  • Updated information about the simulated environment may be received, and the rendered simulated environment may be updated based on the updated information.
  • a manipulation of the simulated environment may also be received, where the manipulation comes from an instruction transmitted through a moderator interface of the environmental server.
  • the manipulation may be executed in the simulated environment.
  • the manipulation may include an instruction that the participant be moved to a specified location in the simulated environment, and executing the manipulation may include moving the participant to the specified location.
  • FIG. 1 depicts an exemplary system for hosting, managing, and displaying a simulated environment according to an exemplary embodiment.
  • FIGS. 2A-2C depict examples of different simulated environments.
  • FIGS. 3A-3D depict views of an exemplary simulated environment.
  • FIG. 4 depicts exemplary data representative of different types of users and interfaces.
  • FIGS. 5A-5B depict exemplary embodiments in which one or more participants interact with the simulated environment.
  • FIG. 6 depicts an exemplary format for objects and triggers suitable for use in exemplary simulated environments.
  • FIG. 7 depicts a hardware-agnostic canvas suitable for use in exemplary embodiments.
  • FIG. 8 is a flowchart describing an exemplary method for building a hardware-agnostic canvas representing a simulated environment.
  • FIG. 9 is a flowchart describing an exemplary method for translating a hardware-agnostic canvas into viewer-specific code suitable for use on exemplary environment viewers.
  • FIG. 10 is a data flow diagram showing exemplary information-routing paths for displaying and managing the simulated environment.
  • FIG. 11 is a flowchart describing an exemplary method for interacting with the simulated environment through a participant interface.
  • FIG. 12 describes an exemplary method for gathering and aggregating data from participants in the simulated environment.
  • FIG. 13 depicts a map of aggregated data superimposed on the simulated environment.
  • FIG. 14 depicts an exemplary electronic device suitable for use with exemplary embodiments.
  • Exemplary embodiments relate to methods, mediums, and systems for conducting behavioral research in a simulated environment.
  • One or more devices may work together to maintain the simulated environment and analyze data indicative of where a user is placing their attention within the environment.
  • multiple different types of users including participants, moderators, and clients, may interact with the simulated environment.
  • Exemplary embodiments provide different interfaces having different capabilities for each of the different types of users.
  • a participant refers to a person whose behavior is being monitored or observed in a behavioral research project.
  • the participant may be placed into a simulated environment and allowed to freely or semi-freely interact with the environment, changing the location of their gaze within the environment.
  • the participant's gaze location may be analyzed to determine which objects in the simulated environment are more likely to capture a consumer's attention.
  • the simulated environment and the participants' interactions with the environment may be curated by a moderator.
  • a “moderator” refers to an entity or entities that interactively guide the participant's experience in the simulated environment. This interaction may include audio, visual and/or haptic cues. The interaction may involve directing the participant's attention to particular features within the simulated environment, posing questions to the participant, and manually moving the participant within the simulated environment.
  • a client may have an interest in the participant's views of the objects in the simulated environment.
  • the client may be a product designer whose products are being tested in the simulated environment.
  • a client is limited to passive observation: e.g., viewing the simulated environment from the perspective of the participant.
  • the client may be permitted limited interaction with the participant, such as by triggering survey questions.
  • Participants, moderators, and clients are collectively referred to herein as users.
  • One or more different types of interfaces may be defined for allowing the different types of users to connect to, and interact with, the simulated environment.
  • Each of the different types of interfaces may support a different type of user by providing the above-described functionality for a user connecting to the interface.
  • a participant interface may allow a user connecting through it to move about the simulated environment, change the location of their gaze, and receive and answer survey questions about objects in the environment.
  • the participant interface may lack the ability to (for example) manually trigger survey questions or change the location of other participants, which may be capabilities reserved for the moderator interface.
  • FIG. 1 depicts an exemplary system for supporting the different types of users in a simulated environment.
  • the system may include a virtual reality (VR) server 10 and a VR client 12 .
  • the VR server 10 may be responsible for maintaining a simulated environment and coordinating the use of the simulated environment among multiple users.
  • the users, which may include a participant 14 , a moderator 16 , and a client 18 , may interact with the simulated environment through one or more VR clients 12 .
  • the simulated environment may be displayed on a visual display device 40 , such as a VR headset.
  • Visual display devices 40 come in multiple different types, some of which may use proprietary or custom display formats. Examples of visual display devices 40 include, but are not limited to, the Oculus Rift headset of Facebook, Inc. of Menlo Park, Calif. and the Project Morpheus headset produced by Sony Corp. of Tokyo, Japan.
  • the VR server 10 may store hardware agnostic input data 20 .
  • “hardware agnostic” refers to a neutral format that is not specific to, or usable only by, a single particular type of device. Rather, the hardware agnostic input data 20 is saved in a format that is readily translated into a format that can be understood by a particular hardware device.
  • input data used to create the simulated environment may be stored in a proprietary or hardware-specific format, and then translated into other formats as necessary (potentially by translating the input data from a first hardware-specific format into an intermediate hardware agnostic format, and then from the hardware agnostic format into a second hardware-specific format).
  • the hardware agnostic input data 20 may include hardware agnostic canvases 22 that represent the simulated environment and the objects in it.
  • the canvases 22 may represent databases of stored objects and locations for the stored objects, which are rendered in the simulated environment.
  • the hardware agnostic canvases may define a location for the objects in a 3D or 2D coordinate system, which can be used by the VR client 12 to render the objects at an appropriate location with respect to the user's position in the simulated environment.
  • An example of a hardware agnostic canvas is depicted in FIG. 7 and discussed in more detail below.
  • the hardware agnostic input data 20 may further include survey questions 24 .
  • the survey questions 24 may include questions that are triggered, either manually (e.g., by a moderator) or when a certain set of conditions with respect to the user, the environment, and/or an object in the environment are met.
  • the survey questions 24 may define a trigger location at which the question may be triggered.
  • the survey questions 24 may further define an attention score required before the questions are triggered.
  • the VR server 10 may calculate a score for one or more objects or locations in the simulated environment based on how much attention a participant gives to the object or location. For example, a participant that stared at an object for ten seconds might yield a higher score for the object than a participant who glances at the object in passing. The score may be accumulated by increasing amounts if the participant re-visits an object (e.g., the participant glances at the object, moves away from the object for a certain period of time, and then returns to the object).
  • exemplary survey questions 24 are shown in Table 1 below. In Table 1, each of the four questions is triggered at the same location. However, depending on the attention score the user has accumulated for the object at that location, different questions may be posed.
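  • The disclosure describes this score-to-question mapping only at the level of Table 1, but the idea can be sketched as a simple threshold lookup. The Python sketch below is illustrative only; the question text and score thresholds are assumptions, not values from the patent.

      # Illustrative sketch: map an accumulated attention score for a trigger
      # location to one of several survey questions, mirroring the idea that
      # the same location can pose different questions at different score levels.
      SURVEY_QUESTIONS = [
          # (minimum attention score, question text) -- hypothetical values
          (0,  "Did you notice the product on the end-cap display?"),
          (10, "What first drew your eye to this product?"),
          (25, "How does this packaging compare to brands you normally buy?"),
          (50, "Would you purchase this product at the price shown?"),
      ]

      def select_survey_question(attention_score: float) -> str:
          """Return the question whose threshold is the highest one not exceeding the score."""
          eligible = [q for min_score, q in SURVEY_QUESTIONS if attention_score >= min_score]
          return eligible[-1] if eligible else ""

      print(select_survey_question(12))   # -> "What first drew your eye to this product?"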
  • the hardware agnostic input data 20 may include split tests 26 , which define variants of a product that may be tested in the simulated environment.
  • a split test 26 may define two different types of packaging that may be applied to a product.
  • the different types of packaging may be displayed randomly to different participants, or may be displayed based on participant demographics (e.g., men view a product in green packaging, whereas women view a product in yellow packaging).
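  • As a rough illustration of how a split test 26 might be resolved for a given participant, the sketch below picks a packaging variant either from a demographic rule or at random. The field names, variant names, and the "gender" key are assumptions for the example, not values defined in the disclosure.

      import random

      # Illustrative sketch: choose a split-test variant either from a participant
      # demographic attribute or, when no rule applies, at random.
      SPLIT_TEST = {
          "product_id": "cereal-01",
          "variants": {"green_packaging": "green.png", "yellow_packaging": "yellow.png"},
          "assign_by": "gender",          # or None for purely random assignment
          "rules": {"male": "green_packaging", "female": "yellow_packaging"},
      }

      def choose_variant(split_test: dict, participant: dict) -> str:
          key = split_test.get("assign_by")
          if key and participant.get(key) in split_test["rules"]:
              return split_test["rules"][participant[key]]
          return random.choice(list(split_test["variants"]))

      print(choose_variant(SPLIT_TEST, {"gender": "female"}))   # -> yellow_packaging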
  • the hardware agnostic input data 20 may be translated into a format understandable by the VR client 12 by translation logic 28 .
  • the translation logic may accept the object definitions in the canvases 22 , which are defined using a coordinate system, and provide instructions that allow the VR client 12 to accurately render the objects.
  • the translation logic may account for (among other things) the resolution, color capabilities, and size of the visual display device 40 in determining how the object should be rendered in the simulated environment on that particular visual display device 40 .
  • An exemplary method for translating the hardware agnostic input data 20 , which may be implemented by the translation logic 28 , is described in more detail with respect to FIG. 9 .
  • the translation logic 28 may also work in reverse. That is, the translation logic 28 may accept data (2D or 3D data) returned from the VR client 12 and translate the data into a hardware agnostic format for processing. For instance, the VR client 12 may provide information as to where the display was pointing at a particular moment in time. The translation logic may accept this information and determine the participant's location and/or the direction in which the participant was looking with respect to the hardware-agnostic coordinate system. This information may be used for data processing and aggregation across multiple users (potentially using multiple different types of visual display devices 40 ).
  • the hardware agnostic input data 20 may be used to generate a simulated environment. Because each of the participant(s) 14 , the moderator(s) 16 , and the client(s) 18 interact with the simulated environment in different ways, different types of interfaces 30 into the VR server may be provided. By accessing a particular type of interface 30 , the user defines what type of user they are and what kinds of capabilities they will have to interact with the environment and other users in the environment.
  • a participant interface 32 may send and receive instructions for simulating an environment and observing the simulated environment.
  • the participant interface 32 may allow a participant 14 to change their position (e.g., the position of a participant avatar) in the simulated environment.
  • the participant interface 32 may further allow the participant 14 to change a location of the participant's 14 gaze in the simulated environment.
  • the participant interface 32 may include demographic rules 34 that cause the environment to be simulated in a different manner depending on demographic attributes of the participant 14 . For example, different products may be displayed to participants 14 having different demographic attributes, or the participant 14 could be placed in an entirely different simulated environment depending on their demographic attributes.
  • the interfaces 30 may further include a moderator interface 36 that sends and receives instructions for simulating the environment and manipulating the simulated environment.
  • the moderator interface 36 may allow the moderator 16 to interact with the simulated environment using their own avatar (e.g., the moderator 16 may move through the simulated environment in the same manner as a participant 14 ), or may allow the moderator 16 to view the simulated environment from the perspective of one of the participants 14 (e.g., viewing the environment through the eyes of the participant).
  • the moderator interface 36 may include a switch or selection mechanism that allows the moderator 16 to switch the moderator's view from a moderator avatar to a participant's perspective. The switch or selection mechanism may be activated during a research session in order to allow for real-time switching between perspectives.
  • the moderator interface 36 may allow a moderator 16 to move a selected participant 14 to a specified location in the simulated environment.
  • the moderator interface 36 may further include logic for manually triggering a survey question.
  • the interfaces 30 may further include a client interface 38 that sends and receives instructions for viewing the simulated environment from the perspective of the participant 14 .
  • the client interface 38 may limit the actions of the client 18 in the simulated environment to viewing the simulated environment from the perspective of the participant 14 .
  • the client 18 may be provided with some limited ability to interact with the participant 14 (e.g., by triggering survey questions 24 ).
  • the interfaces 30 may be implemented in a number of ways.
  • the VR server 10 may expose different ports through which different types of users may connect over a network.
  • a user connecting through port 1 may be identified as a participant 14
  • a user connecting through port 2 may be identified as a moderator 16
  • a user connecting through port 3 may be identified as a client 18 .
  • the interfaces 30 may define different packet formats (e.g., a first format for a participant 14 , a second format for a moderator 16 , and a third format for a client 18 ).
  • the interfaces 30 may identify the packet format, determine what type of user is associated with the format, and provide appropriate functionality.
  • instructions from the VR client 12 may be tagged with different flags depending on what type of user is interacting with the VR client 12 .
  • the interfaces 30 may recognize the flags and provide different types of functionality according to what type of user is associated with each flag.
  • the interfaces 30 may be programmed with a library of users and a type associated with each user. When instructions or information is received from a particular user (e.g., tagged by a user ID), the interfaces 30 may consult the library and determine what functionality the user is able to implement.
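  • The sketch below illustrates one of the options just described: resolving a user's role, and therefore the capabilities of the corresponding interface 30 , from the port the user connected on, falling back to a library of known user IDs. Port numbers, capability names, and user IDs are assumptions, not values from the disclosure.

      from typing import Optional

      # Illustrative sketch: resolve a user's role from the connection port, with a
      # per-user library as a fallback, then gate actions on that role's capabilities.
      PORT_ROLES = {7001: "participant", 7002: "moderator", 7003: "client"}   # hypothetical ports

      CAPABILITIES = {
          "participant": {"move_avatar", "change_gaze", "answer_survey"},
          "moderator":   {"move_participant", "trigger_survey", "enable_audio", "observe"},
          "client":      {"observe", "switch_participant"},
      }

      USER_LIBRARY = {"u-1001": "participant", "u-2001": "moderator", "u-3001": "client"}

      def resolve_role(port: Optional[int] = None, user_id: Optional[str] = None) -> str:
          if port in PORT_ROLES:
              return PORT_ROLES[port]
          return USER_LIBRARY.get(user_id, "participant")

      def is_allowed(role: str, action: str) -> bool:
          return action in CAPABILITIES.get(role, set())

      assert is_allowed(resolve_role(port=7002), "move_participant")            # moderator capability
      assert not is_allowed(resolve_role(user_id="u-3001"), "trigger_survey")   # not a client capability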
  • the different types of interfaces 30 may interpret commands differently depending on what type of interface 30 the command is received on. Furthermore, the interfaces 30 may instruct the visual display device 40 to provide different displays, graphical interfaces, and/or menu options depending on which type of interface the user connects through.
  • a user connecting through the participant interface 32 may be provided with the functionality to move their avatar through the simulated environment. If the user is interacting with the environment using (e.g.) a joystick, then commands from the joystick may be interpreted as a command to move an avatar present in the simulated environment according to the joystick commands.
  • a moderator 16 may or may not be in control of an avatar. If the moderator 16 is not controlling an avatar, and is instead observing the simulated environment from a camera perspective or “bird's eye view,” then the joystick commands received through the moderator interface 36 may be interpreted as a command to move the moderator's 16 camera. Still further, joystick commands from a client 18 may be interpreted as an instruction to change the participant 14 whose perspective the client 18 is currently observing.
  • a participant 14 may be presented with a view of the simulated environment through the visual display device 40 .
  • the view may include a window for presenting survey questions 24 , when the survey questions 24 are triggered.
  • the participant interface 32 may transmit instructions for displaying such an interface on the participant's 14 visual display device 40 .
  • the moderator 16 may be provided with a display of the simulated environment, but may also be provided with administrative menu options.
  • the menu options might include, for example, a command to move a user to a specified location, an “enable communication” command that allows the moderator to transmit audio signals to the VR client 12 of a participant 14 , a command to manually trigger a survey question 24 , etc.
  • the client 18 may be provided with interface options for changing perspective to a different participant 14 , triggering survey questions, etc.
  • the interfaces 30 may include instructions for rendering different types of displays and different types of display options depending on what kind of user has accessed the interface.
  • the simulated environment as viewed through the interfaces 30 may be displayed on the visual display device 40 and/or a browser 42 of the VR client 12 .
  • the browser 42 may be, for example, a two-dimensional representation of the simulated environment (e.g., a representation viewed on a web browser or a 2D gaming console).
  • the VR client 12 may generate VR data 44 describing the participant's 14 interaction with the environment.
  • the VR client 12 may collect data regarding the location of the participant's 14 avatar in the simulated environment, and the location at which the participant 14 is directing their gaze.
  • the location of the participant's 14 avatar may be determined, for example, based on relative movement data.
  • the participant's 14 avatar may be initially placed at a known location (or, during the course of the simulation, may be moved to a known location).
  • the participant 14 may be provided with the capability of moving their avatar, for example through the use of keyboard input, a joystick, body movements, etc.
  • the instructions for moving the avatar may be transmitted to the VR server 10 or may be executed locally at the VR client 12 . Based on the instructions, an updated location for the participant's 14 avatar in the simulated environment may be determined, and an updated view of the environment may be rendered.
  • the location of the participant's 14 avatar may be recorded at the VR server 10 as 3D data 46 . The location may be recorded each time the avatar location changes, or may be sampled at regular intervals.
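  • A minimal sketch of this bookkeeping, assuming relative movement commands and a fixed sampling interval (neither of which is specified in the disclosure), might look as follows.

      import time

      # Illustrative sketch: maintain an avatar position from relative movement
      # commands starting at a known spawn point, and sample the position at a
      # fixed interval for storage as 3D location data.
      class AvatarTracker:
          def __init__(self, spawn=(0.0, 0.0, 0.0), sample_interval=0.5):
              self.position = list(spawn)
              self.sample_interval = sample_interval
              self._last_sample = 0.0
              self.samples = []          # [(timestamp, (x, y, z)), ...]

          def apply_move(self, dx=0.0, dy=0.0, dz=0.0):
              """Apply a relative movement command (e.g., a joystick delta) from the VR client."""
              self.position[0] += dx
              self.position[1] += dy
              self.position[2] += dz
              self._maybe_sample()

          def teleport(self, x, y, z):
              """Moderator-initiated move to a known location."""
              self.position = [x, y, z]
              self._maybe_sample(force=True)

          def _maybe_sample(self, force=False):
              now = time.monotonic()
              if force or now - self._last_sample >= self.sample_interval:
                  self.samples.append((now, tuple(self.position)))
                  self._last_sample = now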
  • the system may record information about the direction of the participant's 14 gaze.
  • the direction of the participant's 14 gaze may be determined directly, indirectly, and/or may be imputed.
  • the participant's 14 gaze location may be determined directly, for example, by tracking the movement of the participant's 14 eyes using eye tracking hardware.
  • the eye tracking hardware may be present in the visual display device 40 , or may be provided separately.
  • the participant's 14 gaze location may be indirectly determined by measuring a variable that is correlated to eye movement. For example, in a virtual reality environment, a user may change their perspective by turning their head. In this case, it may be assumed that the user is primarily directing their attention to the center of the display field. If the user wishes to see something in their periphery, the user will likely turn their head in that direction. Accordingly, the participant's 14 gaze location may be estimated to be the center of the display field of the visual display device 40 .
  • the participant's 14 gaze location may be imputed using logic that analyzes the user's behavior. For example, if the participant 14 interacts with the simulated environment by clicking in a browser 42 , the location of the participant's 14 clicks may be used as a proxy for the location at which the participant 14 has placed their attention. Alternatively, a survey question may be presented directly asking the user where they have placed their attention. The survey responses may be analyzed to impute the user's behavior.
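  • The indirect, center-of-display approach can be sketched by casting a ray from the avatar's head along the headset's reported yaw and pitch. The fixed viewing distance and function name below are assumptions for the example, not values from the disclosure.

      import math

      # Illustrative sketch: assume attention is at the centre of the display field,
      # so the gaze point is estimated by projecting the head orientation a fixed
      # distance forward in the simulated environment's coordinates.
      def estimate_gaze_point(head_pos, yaw_deg, pitch_deg, distance=2.0):
          """Return an (x, y, z) gaze point from head position and orientation."""
          yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
          direction = (
              math.cos(pitch) * math.sin(yaw),   # x
              math.sin(pitch),                   # y (up)
              math.cos(pitch) * math.cos(yaw),   # z (forward)
          )
          return tuple(p + distance * d for p, d in zip(head_pos, direction))

      print(estimate_gaze_point((0.0, 1.7, 0.0), yaw_deg=30, pitch_deg=-10))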
  • the VR data may optionally be translated into, or combined with, legacy data 48 .
  • for example, 2D data (such as mouse clicks or hover times over a 2D canvas) and eye-mapping data 52 (representing the results of eye mapping studies) may already exist on the VR server 10 .
  • This data may have been previously analyzed to determine consumer preferences, and this preference information may be correlated with the new VR data 44 in order to avoid duplicating existing work.
  • Data mapping logic 54 may translate the VR data 44 into legacy data 48 and/or vice versa.
  • the VR data 44 may be processed by data processing logic 56 to evaluate where the participant 14 has directed their attention.
  • the data processing logic may include, for example, a gaze box calculator 58 and scoring rules 60 .
  • the gaze box calculator 58 may analyze the location data to determine where the user's gaze was directed (i.e., what part of the simulated environment the user looked at). The gaze box calculator 58 may calculate one or more areas in the participant's 14 view and use the scoring rules 60 to assign a score to each area, depending on the amount of attention the participant 14 gave to the area or the likelihood that the participant 14 was looking at the identified area. The gaze box calculator 58 and scoring rules 60 are discussed in more detail with respect to FIG. 12 below.
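  • As a rough illustration of a gaze box calculation of this kind, the sketch below divides the display plane into a central window and a peripheral window and awards more points to objects in the central window. The window sizes, point values, and dwell-time weighting are assumptions; the disclosure leaves the scoring rules 60 themselves unspecified.

      # Illustrative sketch: score one sampled frame by awarding more points to
      # objects near the gaze point (likely focus) than to objects in the periphery.
      def score_gaze_frame(objects, gaze_xy, dwell_seconds,
                           centre_radius=0.15, periphery_radius=0.40):
          """objects: {object_id: (x, y)} positions normalised to the display plane.
          Returns {object_id: score_increment} for one sampled frame."""
          scores = {}
          gx, gy = gaze_xy
          for obj_id, (ox, oy) in objects.items():
              dist = ((ox - gx) ** 2 + (oy - gy) ** 2) ** 0.5
              if dist <= centre_radius:
                  scores[obj_id] = 10 * dwell_seconds      # likely focus of attention
              elif dist <= periphery_radius:
                  scores[obj_id] = 2 * dwell_seconds       # possibly noticed in the periphery
          return scores

      frame = score_gaze_frame({"cereal-01": (0.52, 0.48), "soda-02": (0.9, 0.1)},
                               gaze_xy=(0.5, 0.5), dwell_seconds=0.5)
      print(frame)   # cereal-01 scores; soda-02 falls outside both windows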
  • the participant's 14 gaze location and/or location information may be provided to trigger logic 62 .
  • the trigger logic 62 may compare the participant's 14 gaze location or avatar location to a list of trigger points in the simulated environment. If the participant gazed at, or moved to, a trigger point, then the trigger logic 62 may trigger an action, such as the posing of a survey question 24 to the participant 14 .
  • the trigger logic 62 may retrieve a survey question 24 from the hardware agnostic input data 20 and forward the survey question 24 to survey logic 64 located at the VR client 12 .
  • the survey logic 64 may cause the survey question 24 to be presented to the participant 14 , for example by popping up a survey window in the participant's 14 field of view.
  • the survey question may be presented using auditory cues (e.g., a recording of the question may be played on a speaker associated with the participant's 14 VR client 12 ).
  • the participant 14 may indicate an answer to the survey question.
  • the answer may be provided, for example, via keyboard input, through a microphone, or through a gesture (such as moving the participant's 14 head, which may be recognized by an accelerometer in the visual display device 40 ).
  • the participant's 14 answers to the survey questions may be stored in the VR data 44 at the VR server 10 .
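  • The trigger-and-survey round trip described above might be sketched as follows, with the server-side check comparing the gaze point against registered trigger points and returning any newly triggered survey question for the client-side survey logic 64 to display. Trigger coordinates, radii, and score thresholds are assumptions for the example.

      # Illustrative sketch: fire a trigger when the gaze point is close enough to a
      # registered trigger location and any required attention score has been met.
      TRIGGERS = [
          {"id": "t1", "location": (4.0, 1.2, 7.5), "radius": 0.5,
           "min_score": 10, "question": "What do you think of this display?"},
      ]

      def check_triggers(gaze_point, attention_scores, fired):
          """Return survey questions to send to the VR client for newly hit triggers."""
          questions = []
          for trig in TRIGGERS:
              if trig["id"] in fired:
                  continue                      # each trigger fires at most once
              dist = sum((g - t) ** 2 for g, t in zip(gaze_point, trig["location"])) ** 0.5
              if dist <= trig["radius"] and attention_scores.get(trig["id"], 0) >= trig["min_score"]:
                  fired.add(trig["id"])
                  questions.append(trig["question"])
          return questions

      fired = set()
      print(check_triggers((4.1, 1.2, 7.4), {"t1": 12}, fired))   # question returned once
      print(check_triggers((4.1, 1.2, 7.4), {"t1": 12}, fired))   # [] on the second pass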
  • Although FIG. 1 depicts particular entities in particular locations, one of ordinary skill in the art will understand that more, fewer, or different entities may be employed. Furthermore, the entities depicted may be provided in different locations.
  • For example, although FIG. 1 depicts the translation logic 28 as being resident on the VR server 10 , the translation logic 28 may alternatively be located at the VR client 12 , so that the VR server 10 sends the hardware agnostic input data 20 to the VR client 12 and the VR client 12 performs the translation.
  • the trigger logic 62 and/or the data processing logic 56 may be located at the VR client 12
  • the survey logic 64 may be located at the VR server 10 .
  • the entities depicted in FIG. 1 may also be split between the VR server 10 and the VR client 12 .
  • some of the logic for implementing the interfaces 30 or the trigger logic 62 may be provided at the VR server 10 , while the rest of the logic is provided at the VR client 12 .
  • some or all of the entities of FIG. 1 may be provided at an intermediate device distinct from the VR server 10 and the VR client 12 .
  • the VR server(s) 10 and VR client(s) 12 may interoperate to provide a simulated environment and allow multiple different types of users to interact with the simulated environment in order to perform behavioral research. Examples of simulated environments are described next.
  • FIGS. 2A-2C depict examples of simulated environments 66 suitable for use with exemplary embodiments.
  • FIG. 2A depicts a simulated environment 66 representing a focus group.
  • Several participant avatars 68 are present in the simulated environment 66 , as well as a moderator avatar 70 .
  • Each participant 14 may view the simulated environment 66 from the perspective of the participant's avatar 68
  • the moderator may view the simulated environment 66 from the perspective of the moderator avatar 70 .
  • the simulated environment 66 may be populated by one or more setting objects 72 .
  • Setting objects may represent objects placed in the simulated environment 66 in order to provide context or realism, such as tables and chairs.
  • products may be presented for comparison. The products may be represented by objects placed in the simulated environment 66 , referred to herein as environment objects 74 .
  • FIG. 2B depicts an example of a simulated environment 66 representing a car dealership. Participant avatars may move through the simulated car dealership, observing products in their natural context.
  • FIG. 2C presents an example of a simulated environment 66 which includes a product carousel 76 .
  • within the product carousel 76 , different products (or different variations on the same product) may be viewed and moved between.
  • a product carousel 76 may thus allow for a direct comparison between products or between different versions of a single product.
  • FIGS. 3A-3D provide an in-depth example of a simulated environment 66 .
  • the simulated environment 66 represents a supermarket through which participant avatars can move.
  • Products (represented by environment objects 74 ) may be displayed on shelves (represented by setting objects 72 ).
  • FIG. 3A is an overhead view of the simulated environment 66
  • FIG. 3B is a perspective view of the simulated environment 66
  • the moderator 16 or the client 18 may be presented with an overhead or perspective view similar to the ones depicted in FIGS. 3A and 3B .
  • FIGS. 3C and 3D depict the simulated environment 66 as viewed from the perspective of an avatar, such as a participant avatar.
  • FIGS. 3C and 3D provide a ground-level view of the simulated environment 66 as the user moves through the simulated environment 66 , and are representative of what the user might see in the visual display device 40 .
  • the VR server 10 may store different information for each of the different types of users in order to allow the users to effectively perform their roles.
  • the stored information pertaining to each type of user may be collected through the respective interfaces, and is described in more detail below.
  • FIG. 4 depicts examples of the types of data that may be stored for each type of user.
  • the VR server 10 may store participant data 80 , which may include a number of attributes 82 of the participant.
  • the attributes 82 may include demographic details that describe the demographics of the participant. Exemplary demographic details are described in Table 4:
  • the attributes 82 may further include hardware interface data 86 describing the type of hardware (e.g. visual display device 40 , browser 42 , and/or VR client 12 ) used by the participant.
  • Exemplary hardware interface data 86 is described in Table 5:
  • the attributes 82 may further include previous study data 88 describing the results of previous behavioral studies performed by the participant through the VR server 10 and/or using traditional methods. Exemplary previous study data 88 is described in Table 6:
  • the attributes 82 may further include avatar data 90 representing information used to generate the participant's avatar in the simulated environment.
  • avatar data 90 may include image data used for rendering the participant's avatar, as well as other descriptive details (e.g., height, weight, gender, etc.).
  • the attributes 82 may further include access credentials that are used by the participant to access the VR server 10 and/or the simulated environment.
  • Exemplary access credentials 92 are described in Table 7:
  • the moderator 16 may be associated with moderator data 94 , which includes attributes 96 similar to the attributes of the participant.
  • the moderator data 94 may include demographic details 98 , hardware interface data 100 , avatar data 104 , and access credentials 106 generally corresponding to those of the participant data 80 .
  • the moderator data 94 may also include manual trigger questions 102 , which may include survey questions that the moderator may cause to be asked of some or all participants at any time.
  • the manual trigger questions 102 may be displayed on a heads up display (HUD) of the moderator, so that the moderator may ask the participants the manual trigger questions (e.g., through a microphone and speaker).
  • the client 18 may be associated with client data 108 . Because (in some embodiments) the client does not interact with the simulated environment except to observe the simulated environment, it may not be necessary to collect as many attributes 110 for the client as for the participants and the moderators.
  • the client data 108 may include manual trigger questions 112 similar to the manual trigger questions 102 of the moderator data 94 , and access credentials 114 for allowing the client to access the simulated environment 66 and/or the VR server 10 .
  • FIGS. 5A and 5B depict the participant(s) 14 , the moderator(s) 16 , and the client(s) 18 interacting with the simulated environment 66 .
  • FIG. 5A is an example in which a single participant 14 is present in the simulated environment while being directed by a single moderator 16 .
  • Multiple clients 18 may view the simulated environment, e.g. in a top-down perspective or from the perspective of the participant avatar 68 .
  • FIG. 5B is an example in which multiple participants 14 , moderators 16 , and clients 18 interact with the simulated environment 66 .
  • each participant 14 may be provided with a participant avatar 68 , and participants 14 may see other avatars in the simulated environment 66 .
  • Clients 18 and moderators 16 may choose which participants they wish to observe (e.g., by viewing the simulated environment 66 from the perspective of the selected participant, or by attaching an overhead “camera” to the selected participant and watching the participant from a third-person view).
  • the clients 18 and the moderators 16 may observe the simulated environment 66 from a third person perspective, without following a particular participant.
  • the clients 18 and the moderators 16 may be provided with interface options for switching their perspectives among the available options in real time.
  • the simulated environment may be made up of setting objects 116 , environment objects 126 , and object triggers 134 .
  • the setting objects 116 may represent objects that define the setting and/or context of the simulated environment.
  • the setting objects may include background image vector files 118 , which may be images that are rendered in the background of the simulated environment and may change depending on what type of simulated environment is being rendered.
  • the background image vector files 118 may include images representing the walls and shelves of a grocery store, a sales floor in a car dealership, a design showroom, etc.
  • the setting objects 116 may further include environmental variables 120 .
  • the environmental variables may further define how the simulated environment is represented, and may include elements such as music or other audio to be played in the simulated environment, details regarding lighting settings, etc.
  • the setting objects may also include non-user avatars 122 and user avatars 124 .
  • User avatars 124 may represent any participants, moderators, and/or clients (if client avatars are enabled) that are present in the simulated environment.
  • Non-user avatars 122 may include simulated avatars that are not associated with any particular user, such as simulated virtual shoppers that behave according to pre-programmed and/or dynamic behaviors.
  • Non-user avatars 122 may be entirely pre-programmed, and/or may be synthesized from other participant movements or legacy participant data.
  • the environment objects 126 include items that may be found in the simulated environment, such as cars, tires, products, etc.
  • the environment objects 126 may include master objects 128 .
  • the master objects 128 include objects under study in the simulated environment, such as consumer products.
  • the master objects 128 may include high resolution 3D vector maps of the target products.
  • the environment objects 126 may further include variable objects 130 .
  • the variable objects 130 may include variable visual information data points that may be mapped to the environment, such as changing price labels, varied product quantities, etc.
  • the environment objects 126 may further include fill objects 132 .
  • the fill objects 132 may include objects that are not an object of study, but which are present in the simulated environment to provide for a more realistic setting.
  • fill objects 132 may include product shelf displays, advertisements, etc.
  • the object triggers 134 may represent points in the simulated environment that, when interacted with, may cause an event (such as the posing of a survey question) to occur.
  • the object triggers 134 may include product triggers 136 .
  • the product triggers 136 may be trigger locations associated with a particular product (e.g., a particular master object 128 or class of master objects 128 ), and may cause the display of a probing question based on an amount of gaze time or gaze points associated with the object.
  • the object triggers 134 may also include location triggers 138 .
  • the location triggers 138 may provide a visual display of a probing question based on the participant's avatar location in the simulated environment, or the amount of time that it takes the participant's avatar to reach a particular location.
  • the object triggers 134 may further include manual triggers 140 , which may be triggers that can be activated by the moderator or a client.
  • the triggers may cause a selected question from a question library to be posed, and may be triggered at any time.
  • FIG. 7 depicts examples of objects that may be used to make up the simulated environment in more detail. Specifically, FIG. 7 depicts a hardware agnostic canvas 22 having a number of environment objects 126 , and translation mapping information 142 that may be used by the translation logic 28 to render the environment objects 126 in the simulated environment.
  • the environment objects in the hardware agnostic canvas may include a number of details, such as an object ID for uniquely identifying the object, an object type, a location at which the object's data files (e.g., images for rendering the object, audio files played by the object, etc.) are stored, any trigger IDs associated with the object, and hardware-agnostic 3D coordinates for defining the object's location in the simulated environment.
  • the translation mapping information 142 may include hardware-specific information allowing the translation logic 28 to determine how the environment objects should be represented on particular hardware. For example, the translation logic may determine where (in an objective Cartesian coordinate system) the object should be displayed with respect to the participant's current perspective in the simulated environment, and may display the object at the location in the participant's field of view.
  • the translation logic 28 may use information such as the resolution of the participant's hardware viewer, the hardware viewer's brightness and color settings, and information about whether the hardware viewer is capable of audio playback (among other hardware-specific information) in order to render the object appropriately for the hardware. For example, in the case of an environment object having vector image data, the image data may be stretched, rotated, etc. in order to be rendered properly on the participant's hardware at the specified location.
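  • The sketch below illustrates, under assumed field names, what a hardware agnostic canvas object and a simple translation step might look like: the object carries an ID, type, asset reference, trigger IDs, and hardware-agnostic coordinates, and the translation consults per-device mapping information to produce a device-specific render instruction. The projection math is a naive stand-in for whatever the translation logic 28 actually does.

      # Illustrative sketch of a hardware-agnostic object record and a minimal
      # translation step; field names, device names, and the projection are assumptions.
      CANVAS_OBJECT = {
          "object_id": "obj-0042",
          "object_type": "master",                # master / variable / fill
          "asset_path": "assets/cereal_box.vec",   # hypothetical location of image data
          "trigger_ids": ["t1"],
          "coords": (4.0, 1.2, 7.5),               # hardware-agnostic 3D coordinates
      }

      TRANSLATION_MAPPING = {
          "vr_headset":  {"resolution": (960, 1080), "supports_audio": True},
          "web_browser": {"resolution": (1280, 720), "supports_audio": False},
      }

      def translate_for_device(obj, device):
          """Produce a device-specific render instruction from a canvas object."""
          spec = TRANSLATION_MAPPING[device]
          width, height = spec["resolution"]
          x, y, z = obj["coords"]
          return {
              "object_id": obj["object_id"],
              "asset": obj["asset_path"],
              # naive perspective projection onto the device's pixel grid
              "screen_px": (int(width * (0.5 + x / (2 * max(z, 0.1)))),
                            int(height * (0.5 - y / (2 * max(z, 0.1))))),
              "play_audio": spec["supports_audio"],
          }

      print(translate_for_device(CANVAS_OBJECT, "web_browser"))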
  • the setting objects 116 , environment objects 126 , and object triggers 134 may be used to build a simulated environment.
  • FIG. 8 is a flowchart describing an exemplary process for building the simulated environment.
  • the simulated environment may, in some embodiments, be built by a moderator 16 . Accordingly, at step 144 a user may log into the VR server 10 through the moderator interface 36 . Among other options in the moderator's user interface, the VR server 10 may display an option for creating a simulated environment. Upon selection of this option, the VR server 10 may provide an interface for building a hardware agnostic canvas 22 for the simulated environment.
  • Previously built settings may be stored in a library for re-use.
  • the moderator 16 may be presented with an option for loading a pre-built setting from the library. If the moderator 16 chooses to load a pre-built setting at step 146 , then processing may proceed to step 148 and the selected setting may be retrieved from the library. Processing may then (optionally) proceed to step 150 , where additional setting objects may be added to the pre-built setting. If the moderator 16 does not choose to load a pre-built setting at step 146 , then processing may proceed directly to step 150 and the setting may be built by placing setting objects in the blank setting.
  • processing may proceed to step 152 and the moderator 16 may be presented with the option to save the built canvas in the canvas library for future use.
  • processing may proceed to step 154 and the moderator 16 may be provided with an interface for placing environment objects in the simulated environment.
  • the moderator 16 may choose to rely on environment objects stored with the saved setting retrieved in step 148 and/or a previously stored environment object set that may be imported into the setting developed at steps 146 - 152 .
  • processing may proceed to step 156 where the object set may be loaded (e.g., from the canvas library) and added to the simulated environment.
  • processing may then proceed to step 158 , where additional environment objects may be added (e.g., from the canvas library), and from there to step 160 where the environment objects added to the simulated environment may optionally be saved in the canvas library for future use.
  • processing may then proceed to step 162 , where object triggers may be defined or loaded from the hardware agnostic input data 20 .
  • object triggers may be defined or loaded from the hardware agnostic input data 20 .
  • an interface may be provided for allowing the moderator 16 to define survey questions, locations at which the questions are triggered, a required number of gaze points in order to trigger the questions, etc.
  • the moderator 16 may define participant demographic information and access credentials.
  • the moderator 16 may provide a list of users (e.g., a list of user IDs) who are permitted to participate in a research project involving the simulated environment established in steps 144 - 162 .
  • the participants may access the simulated environment through a participant interface 32 in the VR server 10 .
  • the moderator 16 may define a list of demographics which a participant must have in order to access the simulated environment. In such a situation, the VR server 10 may assign participants to different simulated environments depending on their demographics.
  • the moderator 16 may define client access data for allowing clients to access the simulated environment.
  • the moderator 16 may provide a list of client user IDs allowing the clients to log into client interfaces 38 in the VR server 10 .
  • the moderator 16 may provide session time information.
  • the session time information may define a time at which a research project in the simulated environment is scheduled to take place. If a user attempts to log into the simulated environment at a time outside of the session time defined in step 168 , an error message may be displayed informing the user when the research project is scheduled to begin.
  • users may be allowed to log into the research project a short predetermined amount of time prior to the session time defined in step 168 . In this case, the user may be placed into a waiting room until the appointed time for the research project, and then may be placed in the simulated environment.
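  • A minimal sketch of this session-time gating, assuming a 15-minute early-login allowance (a value not given in the disclosure), might look as follows.

      from datetime import datetime, timedelta

      # Illustrative sketch: reject logins outside the session window, place early
      # arrivals in a waiting room, and admit participants once the session starts.
      SESSION_START = datetime(2015, 4, 16, 14, 0)
      SESSION_END = SESSION_START + timedelta(hours=1)
      EARLY_LOGIN = timedelta(minutes=15)

      def handle_login(now: datetime) -> str:
          if now > SESSION_END:
              return "error: this research session has ended"
          if now >= SESSION_START:
              return "admit to simulated environment"
          if now >= SESSION_START - EARLY_LOGIN:
              return "place in waiting room until session start"
          return f"error: session begins at {SESSION_START:%H:%M}"

      print(handle_login(datetime(2015, 4, 16, 13, 50)))   # -> waiting room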
  • processing may proceed to step 170 , and the research project session may begin.
  • the VR server 10 may employ the translation logic 28 in order to render the simulated environment defined in steps 144 - 162 on user-specific hardware.
  • FIG. 9 is a flowchart describing exemplary steps that may be performed by the translation logic 28 .
  • Processing may begin at step 172 , where a stored hardware-agnostic canvas associated with the current research project may be retrieved from the canvas library 22 .
  • translation mapping information describing how to render an environment on the user-specific hardware may be used.
  • Such translation mapping information may be retrieved at step 174 .
  • the translation mapping information may be stored with, or separately from, the hardware agnostic canvas.
  • at step 176 , the translation logic may retrieve or construct a blank hardware-specific scene or template. This may serve as the basis for a hardware-specific scene, to which setting and environment objects will be added. Alternatively, in some embodiments an entire scene may be generated in a hardware agnostic format, and then displayed on user-specific hardware by translating the finished scene.
  • at step 178 , the translation logic may retrieve a setting object from the canvas. For example, if the setting objects are stored in a database, the translation logic may retrieve the next setting object from the database.
  • the setting object may be associated with location information, such as coordinates in a Cartesian plane that are defined with respect to the simulated environment and/or the blank scene or template. This location information may be retrieved from the canvas library at step 180 .
  • appearance properties for the setting object may be retrieved.
  • a definition of the setting object may include a pointer or reference to image files (e.g., vector graphic images) that are used to draw the setting object in the simulated environment.
  • the pointer or reference may be followed to extract the vector images from the associated image files.
  • viewer-specific code or image data may be generated and added to the blank template generated at step 176 .
  • the code or image data may be generated, at least in part, based on the appearance properties determined at step 182 , the object coordinates retrieved at step 180 , and the translation mapping information retrieved at step 174 .
  • the translation logic may consult the translation mapping information to determine display properties for the user-specific viewer hardware.
  • the translation logic may use the location information to determine where, with respect to the direction the user may be looking (or how the user would observe the setting object from various angles), the object should be placed.
  • the translation logic may place the object at the location, and may correct the object's image data based on the translation mapping information (e.g., by manipulating the object's image data, such as by stretching or rotating the object).
  • the translation logic may determine whether there are additional setting objects to be added to the simulated environment. If so, processing may return to step 178 and additional setting objects may be added to the scene.
  • Steps 188 , 190 , 192 , 196 , and 198 generally correspond to steps 178 , 180 , 182 , 184 , and 186 , respectively, as applied to the environment objects rather than the setting objects.
  • One additional step may be performed at step 194 with respect to the environment objects, which may involve identifying any triggers associated with the environment objects.
  • the triggers may be associated with object or location data, and survey questions that may be displayed when the location or object is approached or viewed.
  • Step 194 may involve generating code for the user-specific hardware that causes the survey questions to be posed when the user-specific hardware identifies that the triggering conditions are met.
  • the trigger points may be triggered by the VR server 10 when the user-specific hardware reports that the user has approached or viewed the location associated with the trigger point.
  • triggers may be associated with locations in the simulated environment rather than, or in addition to, associating the triggers with the environment objects.
  • processing may proceed to step 200 where the now-completed view of the simulated environment may be sent to the user-specific hardware, rendered by the user-specific hardware, and/or saved for future use.
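  • A minimal sketch of the translation pass of FIG. 9 follows; the dictionary keys (setting_objects, environment_objects, image_files, scale) and the simple coordinate scaling are assumptions for illustration, not the disclosed implementation:

```python
# Hypothetical sketch: start from a blank hardware-specific scene, then place
# each setting and environment object using its stored coordinates, image
# references, and a per-device translation mapping. Data shapes are assumed.

def translate_canvas(canvas, mapping):
    scene = {"device": mapping["device_type"], "objects": [], "triggers": []}   # blank template
    for obj in canvas["setting_objects"] + canvas["environment_objects"]:
        placed = {
            "id": obj["id"],
            # map hardware-agnostic coordinates into device coordinates
            "position": [c * mapping["scale"] for c in obj["location"]],
            "image_refs": obj["image_files"],             # e.g. vector graphic files
        }
        scene["objects"].append(placed)
        # environment objects may carry triggers (location/score plus a survey question)
        for trig in obj.get("triggers", []):
            scene["triggers"].append({"object_id": obj["id"], **trig})
    return scene

canvas = {
    "setting_objects": [{"id": "shelf_1", "location": (20.0, 75.0, 99.0),
                         "image_files": ["shelf.svg"]}],
    "environment_objects": [{"id": "product_1", "location": (21.8, 77.2, 99.2),
                             "image_files": ["package_a.svg"],
                             "triggers": [{"question_id": 1, "score_required": 2300}]}],
}
scene = translate_canvas(canvas, {"device_type": "headset_x", "scale": 1.0})
print(len(scene["objects"]), len(scene["triggers"]))   # 2 1
```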
  • a simulated VR environment may be constructed and rendered for a variety of users.
  • User interaction with the VR environment is next described with respect to FIGS. 10-11 .
  • FIG. 10 is a data flow diagram describing user interactions with the simulated environment.
  • the VR server 10 may host a copy of the simulated VR environment 202 , or data associated with the VR environment 202 that allows each participant VR client 12 to generate their own copy of the simulated VR environment.
  • the VR server 10 may maintain information regarding the different users in the VR environment so that each user's avatar can be displayed to other users in the VR environment.
  • the moderator interface may allow the moderator VR client to transmit a change instruction causing a change in the VR environment 202 .
  • the change instruction may be an instruction to move a specified participant avatar to a specified location, to manually change the gaze direction of the participant, or to add new objects to the VR environment.
  • the VR server 10 may provide VR environment data to the VR clients 12 of participants, moderators, and clients, thereby allowing the VR clients 12 to render the VR environment 202 .
  • the VR clients 12 may be of homogeneous or heterogeneous types of hardware.
  • Each type of user may interact with the VR server 10 through an appropriate type of interface 30 , which may interpret instructions from the users differently according to the user's role.
  • the VR client 12 may be provided with one or more input devices 204 allowing the user to interact with the VR environment 202 .
  • the input devices 204 may include a joystick allowing the user to change the location of their avatar in the VR environment 202 and an accelerometer in a VR headset allowing the user's gaze location to be determined.
  • each of the VR clients 12 associated with an avatar and/or viewer location may transmit location data and gaze data to data processing logic 56 of the VR server 10 .
  • the data processing logic may, in turn, provide the obtained information to trigger logic 62 , which may determine if the user's avatar location or gaze location has triggered a survey question 24 . If so, the triggered question may be provided to the VR environment 202 of the participant's VR client 12 and displayed on a user interface 206 .
  • the survey question may be read aloud through a speaker in the participant VR client (and may be manually read by the moderator, or automatically played, e.g., through a previously-recorded sound file).
  • the participant may use the input device(s) 204 to answer the survey questions, and the resulting question responses may be transmitted back to the VR server 10 and stored in the VR data 44 .
  • A flowchart of exemplary steps performed by the VR server 10 , as the participant VR client 12 provides information about the participant's interaction with the VR environment 202 , is depicted in FIG. 11 .
  • the VR server 10 may access a participant interface through which the participant VR client 12 provides data and information.
  • the VR server 10 may receive VR data through the participant interface, which may include (for example) an updated participant avatar location and an updated participant gaze location.
  • the VR server 10 may compare the updated location and gaze data to previous location and gaze data to determine whether the user's position or gaze has changed (and thus needs to be updated). If so, processing may proceed to either or both of steps 212 and 214 , where the participant's view of the VR environment and/or position in the environment may be updated. If necessary, new environment view data may be transmitted to the participant VR client 12 , and the view of the environment may be updated on the VR client 12 . If the participant's environment location is changed at step 212 and other users are also represented in the VR environment 202 by avatars, the updated participant location information may be transmitted to the other users' VR clients 12 so that the participant's updated avatar location can be rendered in the other users' VR clients 12 .
  • At step 216 , it may be determined whether updating the participant's position or gaze location has caused the participant to activate a trigger point. If not, processing may return to step 210 , where the next VR data from the participant may be received. If a trigger point is activated, processing may proceed to step 218 , where the user may be presented with a survey interface for answering the survey questions. Upon the participant providing an input responsive to the survey question, the input may be transmitted to the VR server 10 and received at step 220 . The answers to the survey questions may be stored with the VR data 44 .
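  • A minimal sketch of this update loop follows; the dictionary-based state, the exact-match trigger test, and the action tuples are assumptions for illustration (a real implementation would use a proximity or gaze-box test rather than exact equality):

```python
# Hypothetical sketch of the FIG. 11 loop: compare the reported avatar location
# and gaze location to the previous sample, update what changed, then check
# whether a trigger point fired. Data shapes and the trigger test are assumed.

def process_update(state, triggers, participant_id, location, gaze):
    """Return a list of actions the server would take for this update."""
    actions = []
    prev = state.setdefault(participant_id, {"location": None, "gaze": None})
    if location != prev["location"]:
        prev["location"] = location
        actions.append(("update_avatar_position", participant_id, location))   # step 212
    if gaze != prev["gaze"]:
        prev["gaze"] = gaze
        actions.append(("update_view", participant_id))                        # step 214
    for trig in triggers:                                                       # step 216
        if location == trig["location"] or gaze == trig["location"]:
            actions.append(("pose_survey_question", trig["question_id"]))       # step 218
    return actions

state = {}
triggers = [{"location": (21.8, 77.2, 99.2), "question_id": 2}]
print(process_update(state, triggers, 123456, (19.1, 73.2, 99.2), (21.8, 77.2, 99.2)))
```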
  • the VR data 44 may include individual and/or aggregated scores calculated based on participants' gaze locations. Exemplary score calculations are discussed below with respect to FIGS. 12 and 13 .
  • a participant may approach one or more environment objects representing different products on a display.
  • the products may be placed in the simulated environment according to 3D coordinates associated with the associated environment objects.
  • the VR server may extract 2D coordinates of the environment objects to identify a viewing plane representative of the areas of the participant's view in which objects representing a particular type of product are present.
  • Different products may be associated with different sets of 2D coordinates.
  • a set of “gaze points” may be calculated for each type of product.
  • the gaze points may represent an amount of attention (e.g., based on viewing time and the number of “second looks” given to the product).
  • the participant's gaze may be represented as a single point (e.g., the center of the participant's view), or may be represented as a series of gaze boxes.
  • the boxes may be centered at the center of the participant's view, and may expand concentrically from that point.
  • the more central gaze boxes may be assigned more gaze points on the assumption that the user is paying the most attention to the center of their view.
  • Peripheral gaze boxes may be given a decreasing number of gaze points on the assumption that the user is paying less attention, but nonetheless some attention, to the peripheral gaze boxes.
  • a first gaze box may be represented as the central area of the participant's field of view (e.g., extending 10 degrees from the center of the participant's field of view). Any environment objects or products present in the first gaze box may accumulate, for example, 30 points per millisecond.
  • a second gaze box may extend 10-20 degrees from center. Any environment objects or products present in the second gaze box may accumulate, for example, 10 points per millisecond.
  • a third gaze box may extend 20-40 degrees from center. Any environment objects or products present in the third gaze box may accumulate, for example, 3 points per millisecond.
  • a fourth gaze box may extend 40-180 degrees from center and may accumulate gaze points at a rate of 1 per millisecond, while a fifth gaze box may include anything unseen and out of peripheral range, and may not accumulate any gaze points.
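  • A minimal sketch of the gaze-box accumulation just described, using the example rates above (the angular-offset input and data shapes are assumptions for illustration):

```python
# Concentric gaze boxes from the example above: points accumulate per
# millisecond at a rate that falls off with angular distance from the center
# of the participant's view. Data shapes are assumptions for illustration.

GAZE_BOX_RATES = [      # (maximum angle from view center in degrees, points per millisecond)
    (10, 30),           # first gaze box
    (20, 10),           # second gaze box
    (40, 3),            # third gaze box
    (180, 1),           # fourth gaze box
]                       # beyond peripheral range: no points accumulate

def gaze_points(angle_from_center_deg: float, dwell_ms: float) -> float:
    """Gaze points earned by an object at the given angular offset for dwell_ms milliseconds."""
    for max_angle, rate in GAZE_BOX_RATES:
        if angle_from_center_deg <= max_angle:
            return rate * dwell_ms
    return 0.0

# A product 5 degrees from the view center for 200 ms earns 30 * 200 = 6000 points.
print(gaze_points(5.0, 200.0))
```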
  • the gaze score may be calculated in the manner above based on the first glance that the participant gives to a product.
  • the initial gaze score may be supplemented with additional accumulated gaze scores based on additional looks given to the product.
  • these second looks may be associated with a multiplier, on the assumption that a user directing their gaze away from the product and then returning to the product for a second look carries added significance.
  • a formula may be used to calculate a gaze score.
  • one exemplary formula may be F = A + (B × M), where:
  • F is the final gaze score;
  • A is the initial set of gaze points (described above);
  • B is the number of second-look points, calculated in the same manner as described above but only after the user has initially viewed a product and then moved their gaze away from the product; and
  • M is a "second look multiplier," given as a function of T, where T represents the amount of time spent away from the product (e.g., the time in seconds since the object entered gaze box 2 and then completely left gaze box 4 ).
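  • As an illustration only, the scoring above might be coded as follows. The expression for the second-look multiplier M is not reproduced in this text, so the decaying form used below is purely a hypothetical assumption; only the combination F = A + (B × M) follows from the variable definitions above.

```python
# Illustrative sketch only. F = A + (B * M) follows the variable definitions
# above; the decaying form of the multiplier M (as a function of the time away
# T) is a hypothetical assumption, not the disclosed equation.

def final_gaze_score(initial_points: float, second_look_points: float,
                     seconds_away: float, base_multiplier: float = 2.0,
                     decay: float = 0.1) -> float:
    # Hypothetical M: starts at base_multiplier and shrinks toward 1.0 as the
    # time spent away from the product (T, in seconds) grows.
    m = 1.0 + (base_multiplier - 1.0) / (1.0 + decay * seconds_away)
    return initial_points + m * second_look_points      # F = A + (B * M)

# Example: 6000 initial points, 2000 second-look points, 5 seconds away.
print(final_gaze_score(6000.0, 2000.0, 5.0))
```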
  • the gaze scores may be aggregated across multiple participants and/or stored separately for each participant.
  • the gaze scores (individual or aggregate) may be represented visually in the simulated environment in the form of a gaze map. This may allow the moderator or client to quickly and easily determine which products have received the most attention.
  • An exemplary gaze map 222 is depicted in FIG. 13 . Areas at which gaze points have been accumulated to a greater degree may be distinguished, for example using different colors or patterns, among other means of visually distinguishing different areas of attention.
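  • A minimal sketch of the aggregation behind such a gaze map, assuming hypothetical score records and arbitrary bucket thresholds:

```python
# Hypothetical sketch: aggregate gaze scores per environment object across
# participants, then bucket the totals so the gaze map can draw each area
# with a different color or pattern. Thresholds are arbitrary assumptions.

from collections import defaultdict

def build_gaze_map(score_records):
    """score_records: iterable of (participant_id, object_id, gaze_points) tuples."""
    totals = defaultdict(float)
    for _participant_id, object_id, points in score_records:
        totals[object_id] += points
    buckets = {}
    for object_id, total in totals.items():
        if total >= 5000:
            buckets[object_id] = "high_attention"      # e.g. drawn in red
        elif total >= 1000:
            buckets[object_id] = "medium_attention"    # e.g. drawn in yellow
        else:
            buckets[object_id] = "low_attention"       # e.g. drawn in blue
    return buckets

records = [(123456, "product_1", 6000.0), (123457, "product_1", 800.0),
           (123456, "product_2", 400.0)]
print(build_gaze_map(records))   # {'product_1': 'high_attention', 'product_2': 'low_attention'}
```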
  • Some or all of the exemplary embodiments described herein may be embodied as a method performed in an electronic device having a processor that carries out the steps of the method. Furthermore, some or all of the exemplary embodiments described herein may be embodied as a system including a memory for storing instructions and a processor that is configured to execute the instructions in order to carry out the functionality described herein.
  • one or more of the acts described herein may be encoded as computer-executable instructions executable by processing logic.
  • the computer-executable instructions may be stored on one or more non-transitory computer readable media.
  • One or more of the above acts described herein may be performed in a suitably-programmed electronic device.
  • the electronic device 224 may take many forms, including but not limited to a computer, workstation, server, network computer, quantum computer, optical computer, Internet appliance, mobile device, pager, tablet computer, smart sensor, application-specific processing device, etc.
  • the electronic device 224 described herein is illustrative and may take other forms.
  • an alternative implementation of the electronic device may have fewer components, more components, or components that are in a configuration that differs from the configuration described below.
  • the components described below may be implemented using hardware based logic, software based logic and/or logic that is a combination of hardware and software based logic (e.g., hybrid logic); therefore, components described herein are not limited to a specific type of logic.
  • the electronic device 224 may include a processor 226 .
  • the processor 226 may include hardware based logic or a combination of hardware based logic and software to execute instructions on behalf of the electronic device 224 .
  • the processor 226 may include one or more cores 228 that execute instructions on behalf of the processor 226 .
  • the processor 226 may include logic that may interpret, execute, and/or otherwise process information contained in, for example, a memory 234 .
  • the information may include computer-executable instructions and/or data that may implement one or more embodiments of the invention.
  • the processor 226 may comprise a variety of homogeneous or heterogeneous hardware.
  • the hardware may include, for example, some combination of one or more processors, microprocessors, field programmable gate arrays (FPGAs), application specific instruction set processors (ASIPs), application specific integrated circuits (ASICs), complex programmable logic devices (CPLDs), graphics processing units (GPUs), or other types of processing logic that may interpret, execute, manipulate, and/or otherwise process the information.
  • the processor 226 may include a single core or multiple cores.
  • the processor may include a system-on-chip (SoC) or system-in-package (SiP).
  • the electronic device 224 may include a memory 234 , which may be embodied as one or more tangible non-transitory computer-readable storage media for storing one or more computer-executable instructions or software that may implement one or more embodiments of the invention.
  • the memory 234 may comprise a RAM that may include RAM devices that may store the information.
  • the RAM devices may be volatile or non-volatile and may include, for example, one or more DRAM devices, flash memory devices, SRAM devices, zero-capacitor RAM (ZRAM) devices, twin transistor RAM (TTRAM) devices, read-only memory (ROM) devices, ferroelectric RAM (FeRAM) devices, magneto-resistive RAM (MRAM) devices, phase change memory RAM (PRAM) devices, or other types of RAM devices.
  • the electronic device 224 may include a virtual machine (VM) 230 for executing the instructions loaded in the memory 234 .
  • a virtual machine 230 may be provided to handle a process running on multiple processors 226 so that the process may appear to be using only one computing resource rather than multiple computing resources. Virtualization may be employed in the electronic device 224 so that infrastructure and resources in the electronic device 224 may be shared dynamically. Multiple VMs 230 may be resident on a single electronic device 224 .
  • a hardware accelerator 238 may be implemented in an ASIC, FPGA, or some other device.
  • the hardware accelerator 238 may be used to reduce the general processing time of the electronic device 224 .
  • the electronic device 224 may include a network interface 236 to interface to a Local Area Network (LAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., T1, T3, 56 kb, X.25), broadband connections (e.g., integrated services digital network (ISDN), Frame Relay, asynchronous transfer mode (ATM)), wireless connections (e.g., 802.11), high-speed interconnects (e.g., InfiniBand, gigabit Ethernet, Myrinet), or some combination of any or all of the above.
  • the network interface 236 may include a built-in network adapter, network interface card, personal computer memory card international association (PCMCIA) network card, card bus network adapter, wireless network adapter, universal serial bus (USB) network adapter, modem or any other device suitable for interfacing the electronic device to any type of network capable of communication and performing the operations described herein.
  • the electronic device 224 may include one or more input devices 204 , such as a keyboard, a multi-point touch interface, a pointing device (e.g., a mouse), a joystick or gaming device, a gyroscope, an accelerometer, a haptic device, a tactile device, a neural device, a microphone, or a camera that may be used to receive input from, for example, a user.
  • electronic device 224 may include other suitable I/O peripherals.
  • the input devices 204 may include an audio input device 240 , such as a microphone or array of microphones, and an attention tracking module 242 .
  • the attention tracking module 242 may be, for example, a device for directly tracking the user's attention (e.g., eye-tracking hardware that monitors the location to which the user's eyes are directed), a device for indirectly tracking the user's attention (e.g., a virtual reality headset that determines the location in which the user is looking based on accelerometer or compass data indicating the direction in which the user is pointing their head), and/or logic for imputing the user's attention based on the user's behavior (e.g., logic for interpreting a user's mouse clicks on a canvas or analyzing a survey response).
  • the input devices 204 may allow a user to provide input that is registered on a visual display device 40 .
  • the visual display device may be, for example, a virtual reality headset, a mobile device screen, or a PC or laptop screen.
  • a simulated environment 66 may be displayed on the visual display device 40 .
  • a graphical user interface (GUI) 206 may be shown on the display device 40 .
  • the GUI 206 may display, for example, forms on which information, such as user information or survey questions, may be presented.
  • the input devices 204 and visual display device 40 may be used to interact with a virtual reality environment 202 hosted or supported by the electronic device 224 .
  • the virtual reality environment 202 may track user positions 244 (e.g., a location of user avatars within the virtual reality environment 202 ), provide vector graphics 246 for rendering objects and avatars in the environment, object data 248 , trigger data 250 , and gaze data 252 representing locations to which participants have directed their gaze.
  • a storage device 254 may also be associated with the electronic device 224 .
  • the storage device 254 may be accessible to the processor 226 via an I/O bus. Information stored in the storage 254 may be executed, interpreted, manipulated, and/or otherwise processed by the processor.
  • the storage device 254 may include, for example, a magnetic disk, optical disk (e.g., CD-ROM, DVD player), random-access memory (RAM) disk, tape unit, and/or flash drive.
  • the information may be stored on one or more non-transient tangible computer-readable media contained in the storage device. This media may include, for example, magnetic discs, optical discs, magnetic tape, and/or memory devices (e.g., flash memory devices, static RAM (SRAM) devices, dynamic RAM (DRAM) devices, or other memory devices).
  • the information may include data and/or computer-executable instructions that may implement one or more embodiments of the invention.
  • the storage device 254 may further store files 260 and applications 258 , and the electronic device 224 may run an operating system (OS) 256 .
  • Exemplary operating systems may include the Microsoft® Windows® operating systems, the Unix and Linux operating systems, the MacOS® for Macintosh computers, an embedded operating system, such as the Symbian OS, a real-time operating system, an open source operating system, a proprietary operating system, operating systems for mobile electronic devices, or any other operating system capable of running on the electronic device 224 and performing the operations described herein.
  • the operating system 256 may be running in native mode or emulated mode.
  • the files 260 may include files storing the user data 80 , 94 , 108 (see FIG. 4 ), input data 20 (such as hardware-agnostic canvases and survey questions), VR data 44 including translation mapping information 142 for different types of proprietary VR devices (see FIG. 7 ), legacy data 48 , and project data 262 describing the current behavioral research project.
  • the storage device may further store the logic for implementing above-described participant interface 32 , moderator interface 36 , client interface 38 , data processing logic 56 , translation logic 28 , survey logic 64 , trigger logic 62 , and data mapping logic 54 , along with any other logic suitable for carrying out the procedures described in the present application.
  • one or more implementations consistent with principles of the invention may be implemented using one or more devices and/or configurations other than those illustrated in the Figures and described in the Specification without departing from the spirit of the invention.
  • One or more devices and/or components may be added and/or removed from the implementations of the figures depending on specific deployments and/or applications.
  • one or more disclosed implementations may not be limited to a specific combination of hardware.
  • logic may perform one or more functions.
  • This logic may include hardware, such as hardwired logic, an application-specific integrated circuit, a field programmable gate array, a microprocessor, software, or a combination of hardware and software.

Abstract

Exemplary embodiments provide methods, mediums, and systems for behavioral research. In some embodiments, an environment supporting three types of users may be provided. A first type of user may represent a participant whose behavior is being monitored. The first type of user may freely interact with the environment. A second type of user may represent a moderator directing the experience of the participant. The moderator may be provided with an ability to manipulate the environment or the participants' interactions with the environment. A third type of user may represent a client interested in the participants' behavior, and the third type of user may be provided with an ability to view the environment from the perspective of the participant. Different interfaces may be provided for allowing the different types of users to interact with the environment according to their roles.

Description

    BACKGROUND
  • Behavioral research, and particularly behavioral research relating to consumers' product preferences, may be a time consuming and expensive process. Even when behavioral research is conducted with a significant investment of time and money, the results of the research may not be wholly accurate or representative of consumers' true views.
  • In some behavioral research scenarios, focus groups of one or more participants are brought together in a common location and presented with products for evaluation. Participants may be brought to a special facility expressly designed for focus group testing (e.g., a facility with special conference rooms that allow the participants to be observed or recorded), which may add to the cost of conducting a focus group. Furthermore, costs may be driven up by the need to produce non-production product mockups or prototypes for the focus group, or simulated two-dimensional models to be displayed on a computer.
  • Moreover, traditional focus group testing may not yield entirely accurate or satisfactory results. In a focus group, products may be viewed in isolation and/or out of a purchasing context. This may make it difficult to draw conclusions about how a consumer would interact with the product in a retail establishment or online, where the user would be confronted with multiple products and different environmental conditions.
  • Still further, traditional behavioral research methods may rely on self-reporting by the participants, which may be inaccurate or easily manipulated. For example, consider a participant who is presented with several designs for product packaging and asked which one he or she prefers. The participant may favor a first product design, but may report a different choice. For instance, in a group setting the participant may feel social pressure to select a popular option preferred by the rest of the group.
  • The present application is addressed to these and other issues that may constrain conventional behavioral research and consumer preference testing.
  • SUMMARY
  • Exemplary embodiments described herein relate to methods, mediums, and systems for performing behavioral research in a simulated environment, such as a virtual reality environment. By moving behavioral research into a simulated environment, the research can be conducted either in person or remotely, allowing for increased flexibility and cost savings. Furthermore, participants may interact with a product in a more natural way (e.g., by observing the product side-by-side with other products in a simulated retail establishment).
  • Exemplary embodiments may be configured to record the participant's observational data (e.g., the location where the observer is directing their gaze, the amount of time spent looking at a particular product, and whether the participant revisited the product after moving on to another product). Thus, a researcher's reliance on participant self-reporting may be reduced.
  • In order to conduct the behavioral research, a centralized server that hosts the environment and/or research may be provided. The centralized server may be located at a central facility (e.g., a facility associated with the researcher), at a remote location, or may be distributed (e.g., using cloud-based resources). Different types of users having different roles may connect to the server. In order to facilitate the different roles of the different types of users, the server may expose multiple interfaces that provide different capabilities.
  • For example, a participant in a research project may be placed in the simulated environment and may control their own location (and the location of their gaze) within the environment. A participant interface may therefore be provided, where the participant interface allows the participant to change positions in the environment and records participant observational data.
  • Another type of user may include a moderator responsible for running the research project. The moderator may communicate with the participants, observe what the participants are looking at, may manually move the participants to specified locations in the environment, and may trigger questions about products in the environment that appear on the user's display. A user connecting to the server through a moderator interface may be provided with these capabilities.
  • A third type of user may include a client interested in the outcome of the behavioral research. For example, the client may be a product designer whose product is being reviewed by the participants in the simulated environment. A client interface may permit the client to observe what the participants are observing, and may potentially communicate with the moderator. However, it may be undesirable to allow the client to affect the participant's observations, and hence the client interface may be limited to observation and communication with the moderator.
  • Thus, the central server may build and/or maintain a simulated environment, and provide functionality for interacting with the simulated environment on behalf of multiple different types of users in such a way that meaningful behavioral research may be conducted.
  • For example, according to a first embodiment, a system for monitoring behaviors of a participant by a moderator and a client may be provided. The system may include a non-transitory storage medium storing logic, and a processor for executing the logic.
  • The logic may include logic for implementing a participant interface that sends and receives instructions for simulating an environment and observing the simulated environment. The participant interface logic may include demographic rules that cause the environment to be simulated in a different manner depending on the demographics of the participant. The participant interface logic may also include logic for changing a position of a participant avatar in the simulated environment, and/or logic for changing a location of a participant's gaze in the simulated environment.
  • The logic may further include logic for implementing a moderator interface that sends and receives instructions for simulating the environment and manipulating the simulated environment. The moderator interface logic may include logic for moving the participant to a specified location in the simulated environment. The moderator interface logic may further include logic for manually triggering a survey question.
  • The logic may further include logic for implementing a client interface that sends and receives instructions for viewing the simulated environment from the perspective of the participant. In some embodiments, the client interface logic may limit the actions of the client in the simulated environment to viewing the simulated environment from the perspective of the participant.
  • The processor may further be programmed to maintain the simulated environment, receive observational data about the simulated environment from the participant interface logic, and store the observational data in the storage medium.
  • For example, in some embodiments the processor may calculate one or more viewing windows for the participant's gaze. The processor may calculate scores for each of the viewing windows, the calculated scores representing an amount of attention given to an object in the viewing windows. Alternatively or in addition, the processor may identify that the location of the participant's gaze encompasses a predefined trigger point, retrieve a survey question associated with the predefined trigger point, and transmit an instruction to the visual display device to display the retrieved survey question.
  • According to some exemplary embodiments, an interface may be provided to connect the system to a visual display device for displaying the simulated environment. The visual display device may be, for example, a virtual reality headset or a browser.
  • According to some exemplary embodiments, the storage medium may store one or more hardware agnostic canvases that represent the simulated environment in a manner that is not specific to the visual display device, and the processor may translate the one or more hardware agnostic canvases into a format that is interpretable by the visual display device.
  • Further exemplary embodiments provide methods for monitoring behaviors of a participant by a moderator and a client. The methods may include simulating an environment comprising an object of study. Instructions may be transmitted to a participant visual display device, where the instructions include instructions for displaying a participant perspective of the simulated environment on the participant visual display device.
  • Participant location data describing a change in a position or a gaze location of the participant in the simulated environment may be received and analyzed. A score may be calculated based on the participant location data, where the score represents an amount of attention paid by the participant to the object of study in the simulated environment. The score may be stored in a non-transitory storage medium.
  • In some embodiments, second instructions may be transmitted to a client visual display device. The second instructions may include instructions for displaying the participant perspective of the simulated environment on the client visual display device.
  • Further embodiments provide a non-transitory electronic device readable medium storing instructions that, when executed, cause a processor to perform a method. The method may include connecting to a participant interface of an environmental server responsible for maintaining a simulated environment comprising an object of study. The environmental server may maintain a plurality of different types of interfaces, each type of interface corresponding to a different type of user interacting with the simulated environment and providing different capabilities for the different types of users.
  • Information about the simulated environment may be received from the participant interface, and the simulated environment may be rendered for a participant based on the received information. Participant location data describing a change in a position or a gaze location of the participant in the simulated environment may be transmitted to the environmental server using the participant interface.
  • Updated information about the simulated environment may be received, and the rendered simulated environment may be updated based on the updated information.
  • A manipulation of the simulated environment may also be received, where the manipulation comes from an instruction transmitted through a moderator interface of the environmental server. The manipulation may be executed in the simulated environment. For example, the manipulation may include an instruction that the participant be moved to a specified location in the simulated environment, and executing the manipulation may include moving the participant to the specified location.
  • Using the exemplary embodiments described herein, behavioral research can be carried out in an efficient, inexpensive, and reliable manner. These and other features of exemplary embodiments will be apparent from the detailed description below, and the accompanying Figures.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 depicts an exemplary system for hosting, managing, and displaying a simulated environment according to an exemplary embodiment.
  • FIGS. 2A-2C depict examples of different simulated environments.
  • FIG. 3A-3D depict views of an exemplary simulated environment.
  • FIG. 4 depicts exemplary data representative of different types of users and interfaces.
  • FIGS. 5A-5B depict exemplary embodiments in which one or more participants interact with the simulated environment.
  • FIG. 6 depicts an exemplary format for objects and triggers suitable for use in exemplary simulated environments.
  • FIG. 7 depicts a hardware-agnostic canvas suitable for use in exemplary embodiments.
  • FIG. 8 is a flowchart describing an exemplary method for building a hardware-agnostic canvas representing a simulated environment.
  • FIG. 9 is a flowchart describing an exemplary method for translating a hardware-agnostic canvas into viewer-specific code suitable for use on exemplary environment viewers.
  • FIG. 10 is a data flow diagram showing exemplary information-routing paths for displaying and managing the simulated environment.
  • FIG. 11 is a flowchart describing an exemplary method for interacting with the simulated environment through a participant interface.
  • FIG. 12 describes an exemplary method for gathering and aggregating data from participants in the simulated environment.
  • FIG. 13 depicts a map of aggregated data superimposed on the simulated environment.
  • FIG. 14 depicts an exemplary electronic device suitable for use with exemplary embodiments.
  • DETAILED DESCRIPTION
  • Exemplary embodiments relate to methods, mediums, and systems for conducting behavioral research in a simulated environment. One or more devices may work together to maintain the simulated environment and analyze data indicative of where a user is placing their attention within the environment. In order to conduct the research, multiple different types of users, including participants, moderators, and clients, may interact with the simulated environment. Exemplary embodiments provide different interfaces having different capabilities for each of the different types of users.
  • As used herein, a participant refers to a person whose behavior is being monitored or observed in a behavioral research project. The participant may be placed into a simulated environment and allowed to freely or semi-freely interact with the environment, changing the location of their gaze within the environment. The participant's gaze location may be analyzed to determine which objects in the simulated environment are more likely to capture a consumer's attention.
  • The simulated environment and the participant(s)′ interactions with the environment may be curated by a moderator. As used herein, a “moderator” refers to an entity or entities that interactively guide the participant's experience in the simulated environment. This interaction may include audio, visual and/or haptic cues. The interaction may involve directing the participant's attention to particular features within the simulated environment, posing questions to the participant, and manually moving the participant within the simulated environment.
  • A client may have an interest in the participant's views of the objects in the simulated environment. For example, the client may be a product designer whose products are being tested in the simulated environment. However, it may be undesirable to allow the client to directly interact with the participant, as this may affect the impartiality of the participant's observations. Accordingly, in some embodiments a client is limited to passive observation: e.g., viewing the simulated environment from the perspective of the participant. In other embodiments, the client may be permitted limited interaction with the participant, such as by triggering survey questions.
  • Participants, moderators, and clients are collectively referred to herein as users. One or more different types of interfaces may be defined for allowing the different types of users to connect to, and interact with, the simulated environment. Each of the different types of interfaces may support a different type of user by providing the above-described functionality for a user connecting to the interface. For example, a participant interface may allow a user connecting through it to move about the simulated environment, change the location of their gaze, and receive and answer survey questions about objects in the environment. The participant interface may lack the ability to (for example) manually trigger survey questions or change the location of other participants, which may be capabilities reserved for the moderator interface.
  • An overview of the system for providing the simulated environment will first be described.
  • System Overview
  • FIG. 1 depicts an exemplary system for supporting the different types of users in a simulated environment.
  • The system may include a virtual reality (VR) server 10 and a VR client 12. The VR server 10 may be responsible for maintaining a simulated environment and coordinating the use of the simulated environment among multiple users. The users, which may include a participant 14, a moderator 16, and a client 18, may interact with the simulated environment through one or more VR clients 12.
  • The simulated environment may be displayed on a visual display device 40, such as a VR headset. Visual display devices 40 come in multiple different types, some of which may use proprietary or custom display formats. Examples of visual display devices 40 include, but are not limited to, the Oculus Rift headset of Facebook, Inc. of Menlo Park, Calif. and the Project Morpheus headset produced by Sony Corp. of Tokyo, Japan.
  • Because each of the different types of VR headsets may use unique display formats, it may be desirable to store information used to create the simulated environment in a hardware agnostic manner. Thus, the VR server 10 may store hardware agnostic input data 20. In this regard, "hardware agnostic" refers to a neutral format that is not specific to, or usable only by, a single particular type of device. Rather, the hardware agnostic input data 20 is saved in a format that is readily translated into a format that can be understood by a particular hardware device. In other embodiments, input data used to create the simulated environment may be stored in a proprietary or hardware non-agnostic format, and then translated into other formats as necessary (potentially by translating the input data from a first hardware-specific format into an intermediate hardware agnostic format, and then from the hardware agnostic format into a second hardware-specific format).
  • The hardware agnostic input data 20 may include hardware agnostic canvases 22 that represent the simulated environment and the objects in it. For example, the canvases 22 may represent databases of stored objects and locations for the stored objects, which are rendered in the simulated environment. The hardware agnostic canvases may define a location for the objects in a 3D or 2D coordinate system, which can be used by the VR client 12 to render the objects at an appropriate location with respect to the user's position in the simulated environment. An example of a hardware agnostic canvas is depicted in FIG. 7 and discussed in more detail below.
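  • As an illustration only (the keys, values, and coordinates below are hypothetical and are not taken from FIG. 7), a hardware-agnostic canvas might be represented as a neutral record of objects, their coordinates in the simulated environment, and references to the image files used to draw them. The translation logic 28 (see FIG. 9) would turn such a record into device-specific rendering instructions.

```python
# Hypothetical representation of a hardware-agnostic canvas: a neutral record
# that a per-device translator could turn into device-specific rendering
# instructions. All identifiers and coordinates below are illustrative.

canvas = {
    "canvas_id": "retail_shelf_demo",
    "coordinate_system": "cartesian_3d",
    "setting_objects": [
        {"id": "shelf_1", "location": (20.0, 75.0, 99.0), "image_files": ["shelf.svg"]},
    ],
    "environment_objects": [
        {"id": "product_1", "location": (21.8, 77.2, 99.2), "image_files": ["package_a.svg"],
         "triggers": [{"question_id": 1, "score_required": 2300}]},
    ],
}
```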
  • The hardware agnostic input data 20 may further include survey questions 24. The survey questions 24 may include questions that are triggered, either manually (e.g., by a moderator) or when a certain set of conditions with respect to the user, the environment, and/or an object in the environment are met. For example, the survey questions 24 may define a trigger location at which the question may be triggered.
  • The survey questions 24 may further define an attention score required before the questions are triggered. As will be described in more detail below, the VR server 10 may calculate a score for one or more objects or locations in the simulated environment based on how much attention a participant gives to the object or location. For example, a participant that stared at an object for ten seconds might yield a higher score for the object than a participant who glances at the object in passing. The score may be accumulated by increasing amounts if the participant re-visits an object (e.g., the participant glances at the object, moves away from the object for a certain period of time, and then returns to the object).
  • By using the attention score to trigger questions, different questions can be posed to a participant depending on how much attention the participant has given to the object. For instance, exemplary survey questions 24 are shown in Table 1 below. In Table 1, each of the four questions is triggered at the same location. However, depending on how much attention score the user has accumulated for the object at that location, different questions may be posed.
  • TABLE 1: Exemplary Survey Questions

    Question ID | Question                           | Responses                         | Trigger Location   | Score Required on Location
    1           | What do you think of this package? | Voice audio response (max 30 sec) | (21.8, 77.2, 99.2) | 2300
    2           | Did you notice the price?          | Yes/No                            | (21.8, 77.2, 99.2) | 1000
    3           | Have you seen this product before? | Yes/No/Don't Recall               | (21.8, 77.2, 99.2) | 5000
    4           | What was the name of this product? | Open Text                         | (21.8, 77.2, 99.2) | 2000
  • In addition to the canvases 22 and the survey questions 24, the hardware agnostic input data 20 may include split tests 26, which define variants of a product that may be tested in the simulated environment. For example, a split test 26 may define two different types of packaging that may be applied to a product. The different types of packaging may be displayed randomly to different participants, or may be displayed based on participant demographics (e.g., men view a product in green packaging, whereas women view a product in yellow packaging).
  • The hardware agnostic input data 20 may be translated into a format understandable by the VR client 12 by translation logic 28. Among other functionality, the translation logic may accept the object definitions in the canvases 22, which are defined using a coordinate system, and provide instructions for the VR client that allows the VR client to accurately render the objects. The translation logic may account for (among other things) the resolution, color capabilities, and size of the visual display device 40 in determining how the object should be rendered in the simulated environment on that particular visual display device 40. An exemplary method for translating the hardware agnostic input data 20, which may be implemented by the translation logic 28, is described in more detail with respect to FIG. 9.
  • The translation logic 28 may also work in reverse. That is, the translation logic 28 may accept data (2D or 3D data) returned from the VR client 12 and translate the data into a hardware agnostic format for processing. For instance, the VR client 12 may provide information as to where the display was pointing at a particular moment in time. The translation logic may accept this information and determine the participant's location and/or the direction in which the participant was looking with respect to the hardware-agnostic coordinate system. This information may be used for data processing and aggregation across multiple users (potentially using multiple different types of visual display devices 40).
  • Once the hardware agnostic input data 20 is translated by the translation logic 28, it may be used to generate a simulated environment. Because each of the participant(s) 14, the moderator(s) 16, and the client(s) 18 interact with the simulated environment in different ways, different types of interfaces 30 into the VR server may be provided. By accessing a particular type of interface 30, the user defines what type of user they are and what kinds of capabilities they will have to interact with the environment and other users in the environment.
  • For example, a participant interface 32 may send and receive instructions for simulating an environment and observing the simulated environment. The participant interface 32 may allow a participant 14 to change their position (e.g., the position of a participant avatar) in the simulated environment. The participant interface 32 may further allow the participant 14 to change a location of the participant's 14 gaze in the simulated environment.
  • The participant interface 32 may include demographic rules 34 that cause the environment to be simulated in a different manner depending on demographic attributes of the participant 14. For example, different products may be displayed to participants 14 having different demographic attributes, or the participant 14 could be placed in an entirely different simulated environment depending on their demographic attributes.
  • The interfaces 30 may further include a moderator interface 36 that sends and receives instructions for simulating the environment and manipulating the simulated environment. The moderator interface 36 may allow the moderator 16 to interact with the simulated environment using their own avatar (e.g., the moderator 16 may move through the simulated environment in the same manner as a participant 14), or may allow the moderator 16 to view the simulated environment from the perspective of one of the participants 14 (e.g., viewing the environment through the eyes of the participant). The moderator interface 36 may include a switch or selection mechanism that allows the moderator 16 to switch the moderator's view from a moderator avatar to a participant's perspective. The switch or selection mechanism may be activated during a research session in order to allow for real-time switching between perspectives.
  • The moderator interface 36 may allow a moderator 16 to move a selected participant 14 to a specified location in the simulated environment. The moderator interface 36 may further include logic for manually triggering a survey question.
  • The interfaces 30 may further include a client interface 38 that sends and receives instructions for viewing the simulated environment from the perspective of the participant 14. In some embodiments, the client interface 38 may limit the actions of the client 18 in the simulated environment to viewing the simulated environment from the perspective of the participant 14. In others, the client 18 may be provided with some limited ability to interact with the participant 14 (e.g., by triggering survey questions 24).
  • The interfaces 30 may be implemented in a number of ways. For example, the VR server 10 may expose different ports through which different types of users may connect over a network. A user connecting through port 1 may be identified as a participant 14, a user connecting through port 2 may be identified as a moderator 16, and a user connecting through port 3 may be identified as a client 18.
  • Alternatively or in addition, the interfaces 30 may define different packet formats (e.g., a first format for a participant 14, a second format for a moderator 16, and a third format for a client 18). When a packet is received by the interfaces 30, the interfaces 30 may identify the packet format, determine what type of user is associated with the format, and provide appropriate functionality.
  • Alternatively or in addition, instructions from the VR client 12 may be tagged with different flags depending on what type of user is interacting with the VR client 12. The interfaces 30 may recognize the flags and provide different types of functionality according to what type of user is associated with each flag.
  • Still further, the interfaces 30 may be programmed with a library of users and a type associated with each user. When instructions or information is received from a particular user (e.g., tagged by a user ID), the interfaces 30 may consult the library and determine what functionality the user is able to implement.
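  • A minimal sketch of role resolution and role-specific command interpretation across the interfaces 30; the port numbers, flags, user IDs, and action names below are hypothetical:

```python
# Hypothetical role dispatch: the connection port, a packet flag, or a user
# library identifies the user type, and the same joystick input is then
# interpreted differently depending on that role.

PORT_ROLES = {5001: "participant", 5002: "moderator", 5003: "client"}   # assumed port numbers
USER_ROLES = {123456: "participant", 200001: "moderator", 300001: "client"}

def resolve_role(port=None, flag=None, user_id=None):
    if port in PORT_ROLES:
        return PORT_ROLES[port]
    if flag in ("participant", "moderator", "client"):
        return flag
    return USER_ROLES.get(user_id)

def interpret_joystick(role, vector):
    if role == "participant":
        return ("move_avatar", vector)                  # move the participant's avatar
    if role == "moderator":
        return ("move_camera", vector)                  # bird's-eye camera rather than an avatar
    if role == "client":
        return ("switch_observed_participant", vector)  # change which participant is observed
    raise ValueError("unknown role")

print(interpret_joystick(resolve_role(port=5002), (0.0, 1.0)))   # ('move_camera', (0.0, 1.0))
```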
  • Providing the different types of functionality to different types of users may be achieved in several ways. The different types of interfaces 30 may interpret commands differently depending on what type of interface 30 the command is received on. Furthermore, the interfaces 30 may instruct the visual display device 40 to provide different displays, graphical interfaces, and/or menu options depending on which type of interface the user connects through.
  • For example, a user connecting through the participant interface 32 may be provided with the functionality to move their avatar through the simulated environment. If the user is interacting with the environment using (e.g.) a joystick, then commands from the joystick may be interpreted as a command to move an avatar present in the simulated environment according to the joystick commands. On the other hand, a moderator 16 may or may not be in control of an avatar. If the moderator 16 is not controlling an avatar, and is instead observing the simulated environment from a camera perspective or “bird's eye view,” then the joystick commands received through the moderator interface 36 may be interpreted as a command to move the moderator's 16 camera. Still further, joystick commands from a client 18 may be interpreted as an instruction to change the participant 14 whose perspective the client 18 is currently observing.
  • In another example, a participant 14 may be presented with a view of the simulated environment through the visual display device 40. The view may include a window for presenting survey questions 24, when the survey questions 24 are triggered. The participant interface 32 may transmit instructions for displaying such an interface on the participant's 14 visual display device 40.
  • In contrast, the moderator 16 may be provided with a display of the simulated environment, but may also be provided with administrative menu options. The menu options might include, for example, a command to move a user to a specified location, an "enable communication" command that allows the moderator to transmit audio signals to the VR client 12 of a participant 14, a command to manually trigger a survey question 24, etc.
  • Similarly, the client 18 may be provided with interface options for changing perspective to a different participant 14, triggering survey questions, etc.
  • Thus, the interfaces 30 may include instructions for rendering different types of displays and different types of display options depending on what kind of user has accessed the interface.
  • The simulated environment as viewed through the interfaces 30 may be displayed on the visual display device 40 and/or a browser 42 of the VR client 12. The browser 42 may be, for example, a two-dimensional representation of the simulated environment (e.g., a representation viewed on a web browser or a 2D gaming console).
  • As the participant 14 interacts with the simulated environment through the VR client 12, the VR client 12 may generate VR data 44 describing the participant's 14 interaction with the environment. In one exemplary embodiment, the VR client 12 may collect data regarding the location of the participant's 14 avatar in the simulated environment, and the location at which the participant 14 is directing their gaze.
  • The location of the participant's 14 avatar may be determined, for example, based on relative movement data. The participant's 14 avatar may be initially placed at a known location (or, during the course of the simulation, may be moved to a known location). The participant 14 may be provided with the capability of moving their avatar, for example through the use of keyboard input, a joystick, body movements, etc. The instructions for moving the avatar may be transmitted to the VR server 10 or may be executed locally at the VR client 12. Based on the instructions, an updated location for the participant's 14 avatar in the simulated environment may be determined, and an updated view of the environment may be rendered. The location of the participant's 14 avatar may be recorded at the VR server 10 as 3D data 46. The location may be recorded each time the avatar location changes, or may be sampled at regular intervals.
  • Exemplary location data is shown in Table 2, below:
  • TABLE 2: Exemplary Location Data

    User ID | Project ID | Arena ID | Timestamp | Location
    123456  | 987        | 859      | 12:01:01  | (21.6, 77.2, 99.2)
    123456  | 987        | 859      | 12:01:02  | (21.6, 77.2, 99.2)
    123456  | 987        | 859      | 12:01:03  | (22.7, 74.2, 99.2)
    123456  | 987        | 859      | 12:01:04  | (19.1, 73.2, 99.2)
  • In addition to the location data, the system may record information about the direction of the participant's 14 gaze. The direction of the participant's 14 gaze may be determined directly, indirectly, and/or may be imputed.
  • The participant's 14 gaze location may be determined directly, for example, by tracking the movement of the participant's 14 eyes using eye tracking hardware. The eye tracking hardware may be present in the visual display device 40, or may be provided separately.
  • The participant's 14 gaze location may be indirectly determined by measuring a variable that is correlated to eye movement. For example, in a virtual reality environment, a user may change their perspective by turning their head. In this case, it may be assumed that the user is primarily directing their attention to the center of the display field. If the user wishes to see something in their periphery, the user will likely turn their head in that direction. Accordingly, the participant's 14 gaze location may be estimated to be the center of the display field of the visual display device 40.
  • Alternatively or in addition, the participant's 14 gaze location may be imputed using logic that analyzes the user's behavior. For example, if the participant 14 interacts with the simulated environment by clicking in a browser 42, the location of the participant's 14 clicks may be used as a proxy for the location at which the participant 14 has placed their attention. Alternatively, a survey question may be presented directly asking the user where they have placed their attention. The survey responses may be analyzed to impute where the user directed their attention.
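  • The indirect (head-orientation) approach described above can be illustrated with a short sketch: the gaze point is taken to be a point projected along the head direction from the avatar's position. The coordinate convention, the function name estimate_gaze_point, and the fixed gaze_distance are assumptions made for illustration; real headsets report orientation through their own device APIs.

```python
import math

def estimate_gaze_point(avatar_position, head_yaw_deg, head_pitch_deg, gaze_distance=5.0):
    """
    Indirect gaze estimate: assume the participant is attending to the center of the
    display field, and project a point gaze_distance units along the head direction.
    avatar_position is an (x, y, z) tuple in simulated-environment coordinates;
    yaw is measured from the +z axis toward +x, pitch upward from the horizontal.
    """
    yaw = math.radians(head_yaw_deg)
    pitch = math.radians(head_pitch_deg)
    x, y, z = avatar_position
    dx = math.cos(pitch) * math.sin(yaw)
    dy = math.sin(pitch)
    dz = math.cos(pitch) * math.cos(yaw)
    return (x + gaze_distance * dx, y + gaze_distance * dy, z + gaze_distance * dz)

# Usage: head turned 30 degrees to the right, level pitch
print(estimate_gaze_point((21.6, 77.2, 99.2), 30.0, 0.0))
```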
  • Exemplary gaze data is shown in Table 3, below:
  • TABLE 3
    Exemplary Gaze Data
    User ID | Project ID | Arena ID | Timestamp | Center Gaze Location
    123456 | 987 | 859 | 12:01:01 | (21.6, 77.2, 99.2)
    123456 | 987 | 859 | 12:01:02 | (21.6, 77.2, 99.2)
    123456 | 987 | 859 | 12:01:03 | (22.7, 74.2, 99.2)
    123456 | 987 | 859 | 12:01:04 | (19.1, 73.2, 99.2)
  • Once the location and gaze information are collected as VR data 44, the VR data may optionally be translated into, or combined with, legacy data 48. For example, 2D data (such as mouse clicks or hover times over a 2D canvas) and eye-mapping data 52 (representing the results of eye mapping studies) may already exist on the VR server 10. This data may have been previously analyzed to determine consumer preferences, and this preference information may be correlated with the new VR data 44 in order to avoid duplicating existing work. Data mapping logic 54 may translate the VR data 44 into legacy data 48 and/or vice versa.
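  • One way such a mapping might work, sketched under the assumption that a legacy eye-mapping study recorded gaze as normalized 2D coordinates over a flat shelf canvas, is to project the 3D gaze point onto that canvas plane. The function name, the planar-canvas assumption, and the normalized output are illustrative choices, not taken from the disclosure.

```python
def vr_gaze_to_canvas(gaze_point_3d, canvas_origin, canvas_width, canvas_height):
    """
    Map a 3D gaze point onto a legacy 2D canvas assumed to lie in the x-y plane of the
    simulated environment (depth ignored). Returns normalized (u, v) in [0, 1],
    or None if the gaze falls outside the canvas.
    """
    gx, gy, _ = gaze_point_3d
    ox, oy = canvas_origin
    u = (gx - ox) / canvas_width
    v = (gy - oy) / canvas_height
    if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0:
        return (u, v)
    return None

# Usage: a 10 x 5 unit shelf canvas whose lower-left corner sits at (20.0, 75.0)
print(vr_gaze_to_canvas((21.6, 77.2, 99.2), (20.0, 75.0), 10.0, 5.0))
```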
  • The VR data 44 may be processed by data processing logic 56 to evaluate where the participant 14 has directed their attention. The data processing logic may include, for example, a gaze box calculator 58 and scoring rules 60.
  • The gaze box calculator 58 may analyze the location data to determine where the user's gaze was directed (i.e., what part of the simulated environment the user looked at). The gaze box calculator 58 may calculate one or more areas in the participant's 14 view and use the scoring rules 60 to assign a score to each area, depending on the amount of attention the participant 14 gave to the area or the likelihood that the participant 14 was looking at the identified area. The gaze box calculator 58 and scoring rules 60 are discussed in more detail with respect to FIG. 12 below.
  • Furthermore, the participant's 14 gaze location and/or location information may be provided to trigger logic 62. The trigger logic 62 may compare the participant's 14 gaze location or avatar location to a list of trigger points in the simulated environment. If the participant gazed at, or moved to, a trigger point, then the trigger logic 62 may trigger an action, such as the posing of a survey question 24 to the participant 14. For example, the trigger logic 62 may retrieve a survey question 24 from the hardware agnostic input data 20 and forward the survey question 24 to survey logic 64 located at the VR client 12. The survey logic 64 may cause the survey question 24 to be presented to the participant 14, for example by popping up a survey window in the participant's 14 field of view. Alternatively or in addition, the survey question may be presented using auditory cues (e.g., a recording of the question may be played on a speaker associated with the participant's 14 VR client 12).
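  • A minimal sketch of such trigger checking is shown below, assuming each trigger point is defined by a center location and an activation radius; the TriggerPoint class, the radius-based test, and the fired set used to avoid re-posing a question are illustrative assumptions rather than the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class TriggerPoint:
    trigger_id: int
    location: tuple         # (x, y, z) center of the trigger region
    radius: float           # how close the gaze or avatar must come to fire
    survey_question: str

def check_triggers(gaze_location, avatar_location, trigger_points, fired):
    """
    Compare the participant's gaze and avatar locations against each trigger point.
    Returns the survey questions that should be posed; 'fired' tracks triggers that
    have already been activated so a question is not posed twice.
    """
    questions = []
    for trig in trigger_points:
        if trig.trigger_id in fired:
            continue
        for point in (gaze_location, avatar_location):
            dist = sum((a - b) ** 2 for a, b in zip(point, trig.location)) ** 0.5
            if dist <= trig.radius:
                questions.append(trig.survey_question)
                fired.add(trig.trigger_id)
                break
    return questions

# Usage: the gaze point falls within the trigger radius, so the question is returned
triggers = [TriggerPoint(1, (22.0, 74.0, 99.0), 2.0, "What drew your attention to this display?")]
print(check_triggers((22.7, 74.2, 99.2), (19.1, 73.2, 99.2), triggers, set()))
```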
  • Upon receiving the survey question 24, the participant 14 may indicate an answer to the survey question. The answer may be provided, for example, via keyboard input, through a microphone, or through a gesture (such as moving the participant's 14 head, which may be recognized by an accelerometer in the visual display device 40). The participant's 14 answers to the survey questions may be stored in the VR data 44 at the VR server 10.
  • Although FIG. 1 depicts particular entities in particular locations, one of ordinary skill in the art will understand that more, fewer, or different entities may be employed. Furthermore, the entities depicted may be provided in different locations. For example, although FIG. 1 depicts the translation logic 28 as being resident on the VR server 10, the translation logic 28 may alternatively be located at the VR client 12, so that the VR server 10 sends the hardware agnostic input data 20 to the VR client 12, and the VR client 12 performs the translation. Similarly, the trigger logic 62 and/or the data processing logic 56 may be located at the VR client 12, and the survey logic 64 may be located at the VR server 10.
  • The entities depicted in FIG. 1 may also be split between the VR server 10 and the VR client 12. For example, some of the logic for implementing the interfaces 30 or the trigger logic 62 may be provided at the VR server 10, while the rest of the logic is provided at the VR client 12. Alternatively or in addition, some or all of the entities of FIG. 1 may be provided at an intermediate device distinct from the VR server 10 and the VR client 12.
  • Thus, the VR server(s) 10 and VR client(s) 12 may interoperate to provide a simulated environment and allow multiple different types of users to interact with the simulated environment in order to perform behavioral research. Examples of simulated environments are described next.
  • Exemplary Simulated Environments
  • FIGS. 2A-2C depict examples of simulated environments 66 suitable for use with exemplary embodiments.
  • For example, FIG. 2A depicts a simulated environment 66 representing a focus group. Several participant avatars 68 are present in the simulated environment 66, as well as a moderator avatar 70. Each participant 14 may view the simulated environment 66 from the perspective of the participant's avatar 68, and the moderator may view the simulated environment 66 from the perspective of the moderator avatar 70.
  • In addition to the avatars 68, 70, the simulated environment 66 may be populated by one or more setting objects 72. Setting objects may represent objects placed in the simulated environment 66 in order to provide context or realism, such as tables and chairs. Moreover, within the simulated environment 66, products may be presented for comparison. The products may be represented by objects placed in the simulated environment 66, referred to herein as environment objects 74.
  • The simulated focus group of FIG. 2A may allow products to be tested in a social or group setting, wherein the product is discussed among the participants 14. Other types of simulated environments are also possible. For example, FIG. 2B depicts an example of a simulated environment 66 representing a car dealership. Participant avatars may move through the simulated car dealership, observing products in their natural context.
  • Still further, FIG. 2C presents an example of a simulated environment 66 which includes a product carousel 76. Within the product carousel 76, different products (or different variations on the same product) may be viewed and moved between. A product carousel 76 may thus allow for a direct comparison between products or between different versions of a single product.
  • FIGS. 3A-3D provide an in-depth example of a simulated environment 66. In this example, the simulated environment 66 represents a supermarket through which participant avatars can move. Products (represented by environment objects 74) may be placed on shelves (represented by setting objects 72).
  • FIG. 3A is an overhead view of the simulated environment 66, while FIG. 3B is a perspective view of the simulated environment 66. In the event that a moderator 16 or a client 18 is not viewing the simulated environment from the perspective of one of the participants 14 or from the perspective of their own avatar, the moderator 16 or the client 18 may be presented with an overhead or perspective view similar to the ones depicted in FIGS. 3A and 3B.
  • FIGS. 3C and 3D depict the simulated environment 66 as viewed from the perspective of an avatar, such as a participant avatar. FIGS. 3C and 3D provide a ground-level view of the simulated environment 66 as the user moves through the simulated environment 66, and are representative of what the user might see in the visual display device 40.
  • As noted above, the different types of users present in the simulated environment 66 may have different roles and/or capabilities. The VR server 10 may store different information for each of the different types of users in order to allow the users to effectively perform their roles. The stored information pertaining to each type of user may be collected through the respective interfaces, and is described in more detail below.
  • User Data
  • FIG. 4 depicts examples of the types of data that may be stored for each type of user.
  • For a participant 14, the VR server 10 may store participant data 80, which may include a number of attributes 82 of the participant. For example, the attributes 82 may include demographic details describing the participant 14. Exemplary demographic details are described in Table 4:
  • TABLE 4
    Demographic Details
    Variable | Notes | Comment
    Name | First/Last Name and/or Alias |
    User ID | Serialized ID across system | Allows a single user to exist across multiple environments or projects
    Contact | Email address, phone number, etc. | Contact details for the user
    Previous Studies | Array of previous study information | Used to calibrate experience quotient
    Age | | Used to calibrate experience quotient
    Total Experience Time | Calculated value of total time in VR research environments | Used to calibrate experience quotient
    General Data | Income, gender, race, ZIP code, etc. | General background data used for profiling respondent
    Social Data | Facebook ID, Twitter handle, etc. |
  • The attributes 82 may further include hardware interface data 86 describing the type of hardware (e.g. visual display device 40, browser 42, and/or VR client 12) used by the participant. Exemplary hardware interface data 86 is described in Table 5:
  • TABLE 5
    Hardware Interface Data
    Variable | Notes | Comment
    IP Address | Current logged in IP address |
    Hardware Profile | Virtual Reality headset device type, PC or gaming device information, profile data about connected devices, etc. | Allows Virtual Reality Experience to be customized to the user's headset or gaming unit
    VR Experience Status | List of current simulated environments loaded on the local device, including percentage downloaded of each |
    VR Device Status | Current device statuses (e.g., online, connected, disconnected, high latency, etc.) |
  • The attributes 82 may further include previous study data 88 describing the results of previous behavioral studies performed by the participant through the VR server 10 and/or using traditional methods. Exemplary previous study data 88 is described in Table 6:
  • TABLE 6
    Previous Study Data
    Variable | Notes | Comment
    Previous Studies | Array of previous studies completed in VR or using traditional methods |
    Study Results | Gaze Map converted results from previous studies | Allows a user to synthetically replay previous study answers in VR space
  • The attributes 82 may further include avatar data 90 representing information used to generate the participant's avatar in the simulated environment. For example, the avatar data 90 may include image data used for rendering the participant's avatar, as well as other descriptive details (e.g., height, weight, gender, etc.).
  • The attributes 82 may further include access credentials that are used by the participant to access the VR server 10 and/or the simulated environment. Exemplary access credentials 92 are described in Table 7:
  • TABLE 7
    Access Credentials
    Variable | Notes | Comment
    User ID | e.g., username or email address |
    Password | User or system created password |
  • Similarly to the participant 14, the moderator 16 may be associated with moderator data 94, which includes attributes similar to those of the participant 14. For example, the moderator data 94 may include demographic details 98, hardware interface data 100, avatar data 104, and access credentials 106 generally corresponding to those of the participant data 80.
  • The moderator data 94 may also include manual trigger questions 102, which may include survey questions that the moderator may cause to be asked of some or all participants at any time. In some embodiments, the manual trigger questions 102 may be displayed on a heads up display (HUD) of the moderator, so that the moderator may ask the participants the manual trigger questions (e.g., through a microphone and speaker).
  • The client 18 may be associated with client data 108. Because (in some embodiments) the client does not interact with the simulated environment except to observe the simulated environment, it may not be necessary to collect as many attributes 110 for the client as for the participants and the moderators. For example, the client data 108 may include manual trigger questions 112 similar to the manual trigger questions 102 of the moderator data 94, and access credentials 114 for allowing the client to access the simulated environment 66 and/or the VR server 10.
  • FIGS. 5A and 5B depict the participant(s) 14, the moderator(s) 16, and the client(s) 18 interacting with the simulated environment 66. FIG. 5A is an example in which a single participant 14 is present in the simulated environment while being directed by a single moderator 16. Multiple clients 18 may view the simulated environment, e.g. in a top-down perspective or from the perspective of the participant avatar 68.
  • FIG. 5B is an example in which multiple participants 14, moderators 16, and clients 18 interact with the simulated environment 66. As can be seen in FIG. 5B, each participant 14 may be provided with a participant avatar 68, and participants 14 may see other avatars in the simulated environment 66. Clients 18 and moderators 16 may choose which participants they wish to observe (e.g., by viewing the simulated environment 66 from the perspective of the selected participant, or by attaching an overhead "camera" to the selected participant and watching the participant from a third-person view). Alternatively or in addition, the clients 18 and the moderators 16 may observe the simulated environment 66 from a third-person perspective, without following a particular participant. The clients 18 and the moderators 16 may be provided with interface options for switching their perspectives among the available options in real time.
  • The establishment and configuration of a simulated environment will be discussed next.
  • Simulated Environment Initial Setup and Configuration
  • As noted above, and as depicted in FIG. 6, the simulated environment may be made up of setting objects 116, environment objects 126, and object triggers 134.
  • The setting objects 116 may represent objects that define the setting and/or context of the simulated environment. The setting objects may include background image vector files 118, which may be images that are rendered in the background of the simulated environment and may change depending on what type of simulated environment is being rendered. For example, the background image vector files 118 may include images representing the walls and shelves of a grocery store, a sales floor in a car dealership, a design showroom, etc.
  • The setting objects 116 may further include environmental variables 120. The environmental variables 120 may further define how the simulated environment is represented, and may include elements such as music or other audio to be played in the simulated environment, details regarding lighting settings, etc.
  • The setting objects may also include non-user avatars 122 and user avatars 124. User avatars 124 may represent any participants, moderators, and/or clients (if client avatars are enabled) that are present in the simulated environment. Non-user avatars 122 may include simulated avatars that are not associated with any particular user, such as simulated virtual shoppers that behave according to pre-programmed and/or dynamic behaviors. Non-user avatars 122 may be entirely pre-programmed, and/or may be synthesized from other participant movements or legacy participant data.
  • The environment objects 126 include items that may be found in the simulated environment, such as cars, tires, products, etc. The environment objects 126 may include master objects 128. The master objects 128 include objects under study in the simulated environment, such as consumer products. The master objects 128 may include high resolution 3D vector maps of the target products.
  • The environment objects 126 may further include variable objects 130. The variable objects 130 may include variable visual information data points that may be mapped to the environment, such as changing price labels, varied product quantities, etc.
  • The environment objects 126 may further include fill objects 132. The fill objects 132 may include objects that are not an object of study, but which are present in the simulated environment to provide for a more realistic setting. For example, fill objects 132 may include product shelf displays, advertisements, etc.
  • The object triggers 134 may represent points in the simulated environment that, when interacted with, may cause an event (such as the posing of a survey question) to occur. The object triggers 134 may include product triggers 136. The product triggers 136 may be trigger locations associated with a particular product (e.g., a particular master object 128 or class of master objects 128), and may cause the display of a probing question based on an amount of gaze time or gaze points associated with the object.
  • The object triggers 134 may also include location triggers 138. The location triggers 138 may provide a visual display of a probing question based on the participant's avatar location in the simulated environment, or the amount of time that it takes the participant's avatar to reach a particular location.
  • The object triggers 134 may further include manual triggers 140, which may be triggers that can be activated by the moderator or a client. The triggers may cause a selected question from a question library to be posed, and may be triggered at any time.
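  • The composition just described might be represented with data structures along the following lines; the class and field names are assumptions introduced for illustration, and the actual canvas format disclosed is not limited to this layout.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SettingObject:
    object_id: int
    image_file: str                       # e.g., a background image vector file
    location: Tuple[float, float, float]

@dataclass
class EnvironmentObject:
    object_id: int
    kind: str                             # "master", "variable", or "fill"
    data_files: List[str]                 # images, audio, etc. used to render the object
    location: Tuple[float, float, float]
    trigger_ids: List[int] = field(default_factory=list)

@dataclass
class ObjectTrigger:
    trigger_id: int
    kind: str                             # "product", "location", or "manual"
    survey_question_id: int
    gaze_points_required: Optional[int] = None               # product triggers
    location: Optional[Tuple[float, float, float]] = None    # location triggers

@dataclass
class HardwareAgnosticCanvas:
    setting_objects: List[SettingObject]
    environment_objects: List[EnvironmentObject]
    object_triggers: List[ObjectTrigger]
```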
  • FIG. 7 depicts examples of objects that may be used to make up the simulated environment in more detail. Specifically, FIG. 7 depicts a hardware agnostic canvas 22 having a number of environment objects 126, and translation mapping information 142 that may be used by the translation logic 28 to render the environment objects 126 in the simulated environment.
  • As can be seen in FIG. 7, the environment objects in the hardware agnostic canvas may include a number of details, such as an object ID for uniquely identifying the object, an object type, a location at which the object's data files (e.g., images for rendering the object, audio files played by the object, etc.) are stored, any trigger IDs associated with the object, and hardware-agnostic 3D coordinates for defining the object's location in the simulated environment.
  • The translation mapping information 142 may include hardware-specific information allowing the translation logic 28 to determine how the environment objects should be represented on particular hardware. For example, the translation logic may determine where (in an objective Cartesian coordinate system) the object should be displayed with respect to the participant's current perspective in the simulated environment, and may display the object at the location in the participant's field of view. The translation logic 28 may use information such as the resolution of the participant's hardware viewer, the hardware viewer's brightness and color settings, and information about whether the hardware viewer is capable of audio playback (among other hardware-specific information) in order to render the object appropriately for the hardware. For example, in the case of an environment object having vector image data, the image data may be stretched, rotated, etc. in order to be rendered properly on the participant's hardware at the specified location.
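  • As a greatly simplified sketch of this kind of translation, the following function decides where a hardware-agnostic object would appear on a specific display given the viewer's position, heading, field of view, and screen resolution. The flat-projection math, the parameter names, and the fixed vertical placement are assumptions for illustration only; a real implementation would also apply the stretching, rotation, and color adjustments described above.

```python
import math

def translate_object_for_display(object_coords, viewer_position, viewer_yaw_deg,
                                 horizontal_fov_deg, screen_width_px, screen_height_px):
    """
    Decide where (in screen pixels) a hardware-agnostic object should appear for a viewer
    with the given position, heading, and display resolution. Returns None if the object
    is outside the horizontal field of view; vertical placement is fixed at mid-screen
    for brevity.
    """
    dx = object_coords[0] - viewer_position[0]
    dz = object_coords[2] - viewer_position[2]
    bearing = math.degrees(math.atan2(dx, dz)) - viewer_yaw_deg
    bearing = (bearing + 180.0) % 360.0 - 180.0      # normalize to [-180, 180)
    half_fov = horizontal_fov_deg / 2.0
    if abs(bearing) > half_fov:
        return None
    x_px = int((bearing + half_fov) / horizontal_fov_deg * screen_width_px)
    return (x_px, screen_height_px // 2)

# Usage: a 1080p headset with a 90-degree horizontal field of view
print(translate_object_for_display((22.0, 74.0, 105.0), (21.6, 77.2, 99.2), 0.0, 90.0, 1920, 1080))
```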
  • The setting objects 116, environment objects 126, and object triggers 134 may be used to build a simulated environment. FIG. 8 is a flowchart describing an exemplary process for building the simulated environment.
  • The simulated environment may, in some embodiments, be built by a moderator 16. Accordingly, at step 144 a user may log into the VR server 10 through the moderator interface 36. Among other options in the moderator's user interface, the VR server 10 may display an option for creating a simulated environment. Upon selection of this option, the VR server 10 may provide an interface for building a hardware agnostic canvas 22 for the simulated environment.
  • Previously built settings (e.g., generic settings such as grocery stores, car dealerships, or focus group rooms, which may or may not be populated with environment objects) may be stored in a library for re-use. At step 146, the moderator 16 may be presented with an option for loading a pre-built setting from the library. If the moderator 16 chooses to load a pre-built setting at step 146, then processing may proceed to step 148 and the selected setting may be retrieved from the library. Processing may then (optionally) proceed to step 150, where additional setting objects may be added to the pre-built setting. If the moderator 16 does not choose to load a pre-built setting at step 146, then processing may proceed directly to step 150 and the setting may be built by placing setting objects in a blank setting.
  • After building the setting with setting objects at step 150, processing may proceed to step 152 and the moderator 16 may be presented with the option to save the built canvas in the canvas library for future use.
  • Once the moderator 16 is done placing setting objects, processing may proceed to step 154 and the moderator 16 may be provided with an interface for placing environment objects in the simulated environment. Alternatively or in addition, the moderator 16 may choose to rely on environment objects stored with the saved setting retrieved in step 148 and/or a previously stored environment object set that may be imported into the setting developed at steps 146-152.
  • If the moderator 16 chooses to rely on a previously-stored environment object set, processing may proceed to step 156 where the object set may be loaded (e.g., from the canvas library) and added to the simulated environment. Optionally, processing may then proceed to step 158, where additional environment objects may be added (e.g., from the canvas library), and from there to step 160 where the environment objects added to the simulated environment may optionally be saved in the canvas library for future use.
  • Processing may then proceed to step 162, where object triggers may be defined or loaded from the hardware agnostic input data 20. For example, an interface may be provided for allowing the moderator 16 to define survey questions, locations at which the questions are triggered, a required number of gaze points in order to trigger the questions, etc.
  • At step 164, the moderator 16 may define participant demographic information and access credentials. For example, the moderator 16 may provide a list of users (e.g., a list of user IDs) who are permitted to participate in a research project involving the simulated environment established in steps 144-162. The participants may access the simulated environment through a participant interface 32 in the VR server 10. In some embodiments, the moderator 16 may define a list of demographics which a participant must have in order to access the simulated environment. In such a situation, the VR server 10 may assign participants to different simulated environments depending on their demographics.
  • At step 166 the moderator 16 may define client access data for allowing clients to access the simulated environment. For example, the moderator 16 may provide a list of client user IDs allowing the clients to log into client interfaces 38 in the VR server 10.
  • At step 168, the moderator 16 may provide session time information. The session time information may define a time at which a research project in the simulated environment is scheduled to take place. If a user attempts to log into the simulated environment at a time outside of the session time defined in step 168, an error message may be displayed informing the user when the research project is scheduled to begin. In some embodiments, users may be allowed to log into the research project a short predetermined amount of time prior to the session time defined in step 168. In this case, the user may be placed into a waiting room until the appointed time for the research project, and then may be placed in the simulated environment.
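  • The session-time gating and waiting-room behavior described above might be implemented along these lines; the ten-minute early-login window and the function name admit_user are illustrative assumptions.

```python
from datetime import datetime, timedelta

def admit_user(now, session_start, session_end, early_window_minutes=10):
    """
    Decide what to do with a login attempt relative to the scheduled session time:
    reject it, hold the user in a waiting room, or place them in the simulated environment.
    """
    early_window = timedelta(minutes=early_window_minutes)
    if now < session_start - early_window:
        return ("rejected", f"This research project is scheduled to begin at {session_start:%H:%M}.")
    if now < session_start:
        return ("waiting_room", "Please wait; the session will begin shortly.")
    if now <= session_end:
        return ("admitted", "Entering the simulated environment.")
    return ("rejected", "This session has ended.")

# Usage: a login five minutes before the scheduled start lands in the waiting room
start = datetime(2014, 4, 16, 12, 0)
end = start + timedelta(hours=1)
print(admit_user(datetime(2014, 4, 16, 11, 55), start, end))
```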
  • At the appointed time defined in step 168, processing may proceed to step 170, and the research project session may begin.
  • Once the research project begins, the VR server 10 may employ the translation logic 28 in order to render the simulated environment defined in steps 144-162 on user-specific hardware. FIG. 9 is a flowchart describing exemplary steps that may be performed by the translation logic 28.
  • Processing may begin at step 172, where a stored hardware-agnostic canvas associated with the current research project may be retrieved from the canvas library 22. In order to appropriately render the hardware agnostic canvas on user-specific hardware, translation mapping information describing how to render an environment on the user-specific hardware may be used. Such translation mapping information may be retrieved at step 174. The translation mapping information may be stored with, or separately from, the hardware agnostic canvas.
  • At step 176, the translation logic may retrieve or construct a blank hardware-specific scene or template. This may serve as the basis for a hardware-specific scene, to which setting and environment objects will be added. Alternatively, in some embodiments an entire scene may be generated in a hardware agnostic format, and then displayed on user-specific hardware by translating the finished scene.
  • At step 178, the translation logic may retrieve a setting object from the canvas. For example, if the setting objects are stored in a database, the translation logic may retrieve the next setting object from the database. The setting object may be associated with location information, such as coordinates in a Cartesian plane that are defined with respect to the simulated environment and/or the blank scene or template. This location information may be retrieved from the canvas library at step 180.
  • At step 182, appearance properties for the setting object may be retrieved. For example, a definition of the setting object may include a pointer or reference to image files (e.g., vector graphic images) that are used to draw the setting object in the simulated environment. The pointer or reference may be followed to extract the vector images from the associated image files.
  • At step 184, viewer-specific code or image data may be generated and added to the blank template generated at step 176. The code or image data may be generated, at least in part, based on the appearance properties determined at step 182, the object coordinates retrieved at step 180, and the translation mapping information retrieved at step 174. For example, the translation logic may consult the translation mapping information to determine display properties for the user-specific viewer hardware. The translation logic may use the location information to determine where, with respect to the direction the user may be looking (or how the user would observe the setting object from various angles), the object should be placed. The translation logic may place the object at the location, and may correct the object's image data based on the translation mapping information (e.g., by manipulating the object's image data, such as by stretching or rotating the object).
  • At step 186, the translation logic may determine whether there are additional setting objects to be added to the simulated environment. If so, processing may return to step 178 and additional setting objects may be added to the scene.
  • Once all the setting objects have been added to the scene, processing may proceed to step 188 and a similar process to that described at steps 178-184 may be carried out for environment objects. Step 188 generally corresponds to step 178, step 190 generally corresponds to step 180, step 192 generally corresponds to step 182, step 196 generally corresponds to step 184, and step 198 generally corresponds to step 186.
  • One additional step may be performed at step 194 with respect to the environment objects, which may involve identifying any triggers associated with the environment objects. The triggers may be associated with object or location data, and survey questions that may be displayed when the location or object is approached or viewed. Step 194 may involve generating code for the user-specific hardware that causes the survey questions to be posed when the user-specific hardware identifies that the triggering conditions are met. Alternatively or in addition, the trigger points may be triggered by the VR server 10 when the user-specific hardware reports that the user has approached or viewed the location associated with the trigger point.
  • In some embodiments, triggers may be associated with locations in the simulated environment rather than, or in addition to, associating the triggers with the environment objects.
  • Once the trigger points and environment objects have been added to the scene, processing may proceed to step 200 where the now-completed view of the simulated environment may be sent to the user-specific hardware, rendered by the user-specific hardware, and/or saved for future use.
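  • Pulling the steps of FIG. 9 together, a sketch of the scene-building loop might look as follows. It reuses the illustrative canvas structures sketched earlier and treats the hardware-specific rendering and trigger-attachment operations as injected callbacks, since those differ per device; none of the names below are taken from the disclosure.

```python
def build_hardware_specific_scene(canvas, translation_mapping, render_object, attach_trigger):
    """
    Follows the flow of FIG. 9: start from a blank hardware-specific scene, add each
    setting object, then each environment object (attaching any triggers), and return
    the completed scene. render_object and attach_trigger stand in for the
    hardware-specific rendering calls, which differ per device.
    """
    scene = []  # blank hardware-specific scene or template (step 176)

    # Steps 178-186: setting objects, placed using their canvas coordinates
    for setting_obj in canvas.setting_objects:
        scene.append(render_object(setting_obj, setting_obj.location, translation_mapping))

    # Steps 188-198: environment objects, with trigger identification at step 194
    for env_obj in canvas.environment_objects:
        scene.append(render_object(env_obj, env_obj.location, translation_mapping))
        for trigger_id in env_obj.trigger_ids:
            attach_trigger(scene, trigger_id)

    return scene  # step 200: send to, or save for, the user-specific hardware
```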
  • Thus, a simulated VR environment may be constructed and rendered for a variety of users. User interaction with the VR environment is next described with respect to FIGS. 10-11.
  • Virtual Reality Environment Interaction
  • User-specific hardware may use the scene information generated in FIG. 9 to render the simulated environment and allow different users to interact with the simulated environment. FIG. 10 is a data flow diagram describing user interactions with the simulated environment.
  • The VR server 10 may host a copy of the simulated VR environment 202, or data associated with the VR environment 202 that allows each participant VR client 12 to generate their own copy of the simulated VR environment. In some embodiments, the VR server 10 may maintain information regarding the different users in the VR environment so that each user's avatar can be displayed to other users in the VR environment.
  • In some embodiments, the moderator interface may allow the moderator VR client to transmit a change instruction causing a change in the VR environment 202. For example, the change instruction may be an instruction to move a specified participant avatar to a specified location, to manually change the gaze direction of the participant, or to add new objects to the VR environment.
  • The VR server 10 may provide VR environment data to the VR clients 12 of participants, moderators, and clients, thereby allowing the VR clients 12 to render the VR environment 202. The VR clients 12 may be of homogeneous or heterogeneous types of hardware. Each type of user may interact with the VR server 10 through an appropriate type of interface 30, which may interpret instructions from the users differently according to the user's role.
  • If the user associated with a VR client 12 maintains an avatar in the VR environment 202, the VR client 12 may be provided with one or more input devices 204 allowing the user to interact with the VR environment 202. For example, the input devices 204 may include a joystick allowing the user to change the location of their avatar in the VR environment 202 and an accelerometer in a VR headset allowing the user's gaze location to be determined. Accordingly, each of the VR clients 12 associated with an avatar and/or viewer location (such as an invisible “camera” observing the VR environment 202) may transmit location data and gaze data to data processing logic 56 of the VR server 10.
  • The data processing logic may, in turn, provide the obtained information to trigger logic 62, which may determine if the user's avatar location or gaze location has triggered a survey question 24. If so, the triggered question may be provided to the VR environment 202 of the participant's VR client 12 and displayed on a user interface 206. In some embodiments, the survey question may be read aloud through a speaker in the participant VR client (and may be manually read by the moderator, or automatically played, e.g., through a previously-recorded sound file). The participant may use the input device(s) 204 to answer the survey questions, and the resulting question responses may be transmitted back to the VR server 10 and stored in the VR data 44.
  • A flowchart of exemplary steps performed by the VR server 10 as the participant VR client 12 provides information about the participant's interaction with the VR environment 202 is depicted in FIG. 11.
  • At step 208, the VR server 10 may access a participant interface through which the participant VR client 12 provides data and information. At step 210, the VR server 10 may receive VR data through the participant interface, which may include (for example) an updated participant avatar location and an updated participant gaze location.
  • The VR server 10 may compare the updated location and gaze data to previous location and gaze data to determine whether the user's position or gaze has changed (and thus needs to be updated). If so, processing may proceed to either or both of steps 212 and 214, where the participant's view of the VR environment and/or position in the environment may be updated. If necessary, new environment view data may be transmitted to the participant VR client 12, and the view of the environment may be updated on the VR client 12. If the participant's environment location is changed at step 212 and other users are also represented in the VR environment 202 by avatars, the updated participant location information may be transmitted to the other users' VR clients 12 so that the participant's updated avatar location can be rendered in the other users' VR clients 12.
  • At step 216, it may be determined whether updating the participant's position or gaze location has caused the participant to activate a trigger point. If not, processing may return to step 210, where the next VR data from the participant may be received. If a trigger point is activated, processing may proceed to step 218, where the user may be presented with a survey interface for answering the survey questions. Upon the participant providing an input responsive to the survey question, the input may be transmitted to the VR server 10 and received at step 220. The answers to the survey questions may be stored with the VR data 44.
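  • A compact sketch of the server-side handling of steps 210-214, focused on propagating an updated avatar location to the other connected clients, might look as follows; the state dictionary, the connection objects with a send method, and the message format are all assumptions for illustration.

```python
def handle_participant_update(state, user_id, new_location, new_gaze, connections):
    """
    Compare the reported location and gaze to the previously stored values, update the
    server-side state, and forward the new avatar location to every other connected
    client so the moved avatar can be re-rendered there. connections maps user IDs to
    objects exposing a send(message) method. Returns True if anything changed.
    """
    previous = state.get(user_id, {})
    changed = {}
    if previous.get("location") != new_location:
        changed["location"] = new_location
    if previous.get("gaze") != new_gaze:
        changed["gaze"] = new_gaze
    if not changed:
        return False
    state[user_id] = {"location": new_location, "gaze": new_gaze}
    if "location" in changed:
        for other_id, conn in connections.items():
            if other_id != user_id:
                conn.send({"type": "avatar_moved", "user": user_id, "location": new_location})
    return True
```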
  • In addition to the answers to the survey questions, the VR data 44 may include individual and/or aggregated scores calculated based on participants' gaze locations. Exemplary score calculations are discussed below with respect to FIGS. 12 and 13.
  • Score Calculations
  • As shown in FIG. 12, a participant may approach one or more environment objects representing different products on a display. The products may be placed in the simulated environment according to 3D coordinates associated with the corresponding environment objects. The VR server may extract 2D coordinates of the environment objects to identify a viewing plane representative of the areas of the participant's view in which objects representing a particular type of product are present. Different products may be associated with different sets of 2D coordinates.
  • Based on the 2D coordinates, a set of "gaze points" may be calculated for each type of product. The gaze points may represent an amount of attention (e.g., based on viewing time and the number of "second looks" given to the product). The participant's gaze may be represented as a single point (e.g., the center of the participant's view), or may be represented as a series of gaze boxes. The boxes may be centered at the center of the participant's view, and may expand concentrically from that point. The more central gaze boxes may be assigned more gaze points on the assumption that the user is paying the most attention to the center of their view. Peripheral gaze boxes may be given a decreasing number of gaze points on the assumption that the user is paying less attention, but nonetheless some attention, to the peripheral gaze boxes.
  • For example, a first gaze box may be represented as the central area of the participant's field of view (e.g., extending 10 degrees from the center of the participant's field of view). Any environment objects or products present in the first gaze box may accumulate, for example, 30 points per millisecond.
  • A second gaze box may extend 10-20 degrees from center. Any environment objects or products present in the second gaze box may accumulate, for example, 10 points per millisecond.
  • A third gaze box may extend 20-40 degrees from center. Any environment objects or products present in the third gaze box may accumulate, for example, 3 points per millisecond.
  • A fourth gaze box may extend 40-180 degrees from center and may accumulate gaze points at a rate of 1 per millisecond, while a fifth gaze box may include anything unseen and out of peripheral range, and may not accumulate any gaze points.
  • These values are intended to be exemplary, and one of ordinary skill in the art will recognize that other configurations or values may also be used.
  • The gaze score may be calculated in the manner above based on the first glance that the participant gives to a product. In some embodiments, the initial gaze score may be supplemented with additional accumulated gaze scores based on additional looks given to the product. In some embodiments, these second looks may be associated with a multiplier, on the assumption that a user directing their gaze away from the product and then returning to the product for a second look carries added significance.
  • Based on the raw gaze data, a formula may be used to calculate a gaze score. For example, one exemplary formula may be:

  • F=A+M*B
  • where F is the final gaze score, A is the initial set of gaze points (described above), B is the number of second-look points (calculated in the same manner as described above, but only after the user has initially viewed a product and then moved their gaze away from the product), and M is a "second look multiplier," given as:

  • M=1+(T*0.1)
  • where T represents the amount of time spent away from the product (e.g., the time in seconds after the object entered gaze box 2 and then completely left gaze box 4).
  • One of ordinary skill in the art will recognize that this formula is exemplary only, and may be modified based on the application. Further, the same logic may be extended to give different (e.g., increasing) scores based on a “third look,” “fourth look,” etc.
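  • Putting the gaze boxes and the second-look multiplier together, a sketch of the scoring calculation might look as follows. The per-millisecond point rates and the formula F = A + M * B with M = 1 + (T * 0.1) come from the description above; the function names and the sample-based bookkeeping are illustrative assumptions.

```python
# Points accumulated per millisecond for each gaze box, as listed above.
GAZE_BOX_RATES = {1: 30, 2: 10, 3: 3, 4: 1, 5: 0}

def gaze_box_for_angle(angle_from_center_deg):
    """Map an angle from the center of the field of view to a gaze box number."""
    if angle_from_center_deg <= 10:
        return 1
    if angle_from_center_deg <= 20:
        return 2
    if angle_from_center_deg <= 40:
        return 3
    if angle_from_center_deg <= 180:
        return 4
    return 5

def accumulate_points(samples):
    """samples: list of (angle_from_center_deg, duration_ms) observations for one product."""
    return sum(GAZE_BOX_RATES[gaze_box_for_angle(angle)] * ms for angle, ms in samples)

def final_gaze_score(first_look_points, second_look_points, seconds_away):
    """F = A + M * B, with the second-look multiplier M = 1 + (T * 0.1)."""
    m = 1 + (seconds_away * 0.1)
    return first_look_points + m * second_look_points

# Usage: 200 ms in box 1 and 300 ms in box 3 on first look,
# then a 5-second look away followed by 100 ms back in box 1
a = accumulate_points([(5, 200), (30, 300)])   # 30*200 + 3*300 = 6900
b = accumulate_points([(8, 100)])              # 30*100 = 3000
print(final_gaze_score(a, b, seconds_away=5))  # 6900 + 1.5 * 3000 = 11400.0
```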
  • The gaze scores may be aggregated across multiple participants and/or stored separately for each participant. The gaze scores (individual or aggregate) may be represented visually in the simulated environment in the form of a gaze map. This may allow the moderator or client to quickly and easily determine which products have received the most attention.
  • An exemplary gaze map 222 is depicted in FIG. 13. Areas at which gaze points have been accumulated to a greater degree may be distinguished, for example using different colors or patterns, among other means of visually distinguishing different areas of attention.
  • An exemplary computing system or electronic device for implementing the above-described technologies is next described.
  • Computer-Implemented Embodiments
  • Some or all of the exemplary embodiments described herein may be embodied as a method performed in an electronic device having a processor that carries out the steps of the method. Furthermore, some or all of the exemplary embodiments described herein may be embodied as a system including a memory for storing instructions and a processor that is configured to execute the instructions in order to carry out the functionality described herein.
  • Still further, one or more of the acts described herein may be encoded as computer-executable instructions executable by processing logic. The computer-executable instructions may be stored on one or more non-transitory computer readable media. One or more of the above acts described herein may be performed in a suitably-programmed electronic device.
  • An exemplary electronic device 224 is depicted in FIG. 14. The electronic device 224 may take many forms, including but not limited to a computer, workstation, server, network computer, quantum computer, optical computer, Internet appliance, mobile device, a pager, a tablet computer, a smart sensor, application specific processing device, etc.
  • The electronic device 224 described herein is illustrative and may take other forms. For example, an alternative implementation of the electronic device may have fewer components, more components, or components that are in a configuration that differs from the configuration described below. The components described below may be implemented using hardware based logic, software based logic and/or logic that is a combination of hardware and software based logic (e.g., hybrid logic); therefore, components described herein are not limited to a specific type of logic.
  • The electronic device 224 may include a processor 226. The processor 226 may include hardware based logic or a combination of hardware based logic and software to execute instructions on behalf of the electronic device 224. The processor 226 may include one or more cores 228 that execute instructions on behalf of the processor 226. The processor 226 may include logic that may interpret, execute, and/or otherwise process information contained in, for example, a memory 234. The information may include computer-executable instructions and/or data that may implement one or more embodiments of the invention. The processor 226 may comprise a variety of homogeneous or heterogeneous hardware. The hardware may include, for example, some combination of one or more processors, microprocessors, field programmable gate arrays (FPGAs), application specific instruction set processors (ASIPs), application specific integrated circuits (ASICs), complex programmable logic devices (CPLDs), graphics processing units (GPUs), or other types of processing logic that may interpret, execute, manipulate, and/or otherwise process the information. The processor 226 may include a single core or multiple cores. Moreover, the processor may include a system-on-chip (SoC) or system-in-package (SiP).
  • The electronic device 224 may include a memory 234, which may be embodied as one or more tangible non-transitory computer-readable storage media for storing one or more computer-executable instructions or software that may implement one or more embodiments of the invention. The memory 234 may comprise a RAM that may include RAM devices that may store the information. The RAM devices may be volatile or non-volatile and may include, for example, one or more DRAM devices, flash memory devices, SRAM devices, zero-capacitor RAM (ZRAM) devices, twin transistor RAM (TTRAM) devices, read-only memory (ROM) devices, ferroelectric RAM (FeRAM) devices, magneto-resistive RAM (MRAM) devices, phase change memory RAM (PRAM) devices, or other types of RAM devices.
  • The electronic device 224 may include a virtual machine (VM) 230 for executing the instructions loaded in the memory 234. A virtual machine 230 may be provided to handle a process running on multiple processors 226 so that the process may appear to be using only one computing resource rather than multiple computing resources. Virtualization may be employed in the electronic device 224 so that infrastructure and resources in the electronic device 224 may be shared dynamically. Multiple VMs 230 may be resident on a single electronic device 224.
  • The electronic device 224 may also include a hardware accelerator 238, which may be implemented in an ASIC, FPGA, or some other device. The hardware accelerator 238 may be used to reduce the general processing time of the electronic device 224.
  • The electronic device 224 may include a network interface 236 to interface to a Local Area Network (LAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., T1, T3, 56 kb, X.25), broadband connections (e.g., integrated services digital network (ISDN), Frame Relay, asynchronous transfer mode (ATM)), wireless connections (e.g., 802.11), high-speed interconnects (e.g., InfiniBand, gigabit Ethernet, Myrinet), or some combination of any or all of the above. The network interface 236 may include a built-in network adapter, network interface card, personal computer memory card international association (PCMCIA) network card, card bus network adapter, wireless network adapter, universal serial bus (USB) network adapter, modem or any other device suitable for interfacing the electronic device to any type of network capable of communication and performing the operations described herein.
  • The electronic device 224 may include one or more input devices 204, such as a keyboard, a multi-point touch interface, a pointing device (e.g., a mouse), a joystick or gaming device, a gyroscope, an accelerometer, a haptic device, a tactile device, a neural device, a microphone, or a camera that may be used to receive input from, for example, a user. Note that electronic device 224 may include other suitable I/O peripherals.
  • Among other possibilities, the input devices 204 may include an audio input device 240, such as a microphone or array of microphones, and an attention tracking module 242. The attention tracking module 242 may be, for example, a device for directly tracking the user's attention (e.g., eye-tracking hardware that monitors the location to which the user's eyes are directed), a device for indirectly tracking the user's attention (e.g., a virtual reality headset that determines the location in which the user is looking based on accelerometer or compass data indicating the direction in which the user is pointing their head), and/or logic for imputing the user's attention based on the user's behavior (e.g., logic for interpreting a user's mouse clicks on a canvas or analyzing a survey response).
  • The input devices 204 may allow a user to provide input that is registered on a visual display device 40. The visual display device may be, for example, a virtual reality headset, a mobile device screen, or a PC or laptop screen. A simulated environment 66 may be displayed on the visual display device 40. Furthermore, a graphical user interface (GUI) 206 may be shown on the display device 40. The GUI 206 may display, for example, forms on which information, such as user information or survey questions, may be presented.
  • The input devices 204 and visual display device 40 may be used to interact with a virtual reality environment 202 hosted or supported by the electronic device 224. The virtual reality environment 202 may maintain user positions 244 (e.g., a location of user avatars within the virtual reality environment 202), vector graphics 246 for rendering objects and avatars in the environment, object data 248, trigger data 250, and gaze data 252 representing locations to which participants have directed their gaze.
  • A storage device 254 may also be associated with the electronic device 224. The storage device 254 may be accessible to the processor 226 via an I/O bus. Information stored in the storage device 254 may be executed, interpreted, manipulated, and/or otherwise processed by the processor. The storage device 254 may include, for example, a magnetic disk, optical disk (e.g., CD-ROM, DVD player), random-access memory (RAM) disk, tape unit, and/or flash drive. The information may be stored on one or more non-transient tangible computer-readable media contained in the storage device. This media may include, for example, magnetic discs, optical discs, magnetic tape, and/or memory devices (e.g., flash memory devices, static RAM (SRAM) devices, dynamic RAM (DRAM) devices, or other memory devices). The information may include data and/or computer-executable instructions that may implement one or more embodiments of the invention.
  • The storage device 254 may further store files 260 and applications 258, and the electronic device 224 may run an operating system (OS) 256. Examples of OSes may include the Microsoft® Windows® operating systems, the Unix and Linux operating systems, the MacOS® for Macintosh computers, an embedded operating system, such as the Symbian OS, a real-time operating system, an open source operating system, a proprietary operating system, operating systems for mobile electronic devices, or other operating system capable of running on the electronic device 224 and performing the operations described herein. The operating system 256 may be running in native mode or emulated mode.
  • The files 260 may include files storing the user data 80, 94, 108 (see FIG. 4), input data 20 (such as hardware-agnostic canvases and survey questions), VR data 44 including translation mapping information 142 for different types of proprietary VR devices (see FIG. 7), legacy data 48, and project data 262 describing the current behavioral research project.
  • The storage device may further store the logic for implementing the above-described participant interface 32, moderator interface 36, client interface 38, data processing logic 56, translation logic 28, survey logic 64, trigger logic 62, and data mapping logic 54, along with any other logic suitable for carrying out the procedures described in the present application.
  • The foregoing description may provide illustration and description of various embodiments of the invention, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations may be possible in light of the above teachings or may be acquired from practice of the invention. For example, while a series of acts has been described above, the order of the acts may be modified in other implementations consistent with the principles of the invention. Further, non-dependent acts may be performed in parallel.
  • In addition, one or more implementations consistent with principles of the invention may be implemented using one or more devices and/or configurations other than those illustrated in the Figures and described in the Specification without departing from the spirit of the invention. One or more devices and/or components may be added and/or removed from the implementations of the figures depending on specific deployments and/or applications. Also, one or more disclosed implementations may not be limited to a specific combination of hardware.
  • Furthermore, certain portions of the invention may be implemented as logic that may perform one or more functions. This logic may include hardware, such as hardwired logic, an application-specific integrated circuit, a field programmable gate array, a microprocessor, software, or a combination of hardware and software.

Claims (20)

1. A system for monitoring behaviors of a participant by a moderator and a client, the system comprising:
a non-transitory storage medium storing logic, the logic comprising: participant interface logic that sends and receives instructions for simulating an environment and observing the simulated environment, moderator interface logic that sends and receives instructions for simulating the environment and manipulating the simulated environment, and client interface logic that sends and receives instructions for viewing the simulated environment from the perspective of the participant; and
a processor programmed to execute the stored logic.
2. The system of claim 1, wherein the processor is further programmed to:
maintain the simulated environment,
receive observational data about the simulated environment from the participant interface logic, and
store the observational data in the storage medium; and
further comprising an interface configured to connect the system to a visual display device for displaying the simulated environment.
3. The system of claim 1, wherein the participant interface logic comprises demographic rules that cause the environment to be simulated in a different manner depending on demographics of the participant.
4. The system of claim 1, wherein the participant interface logic for observing the simulated environment comprises logic for changing a position of a participant avatar in the simulated environment.
5. The system of claim 1, wherein the participant interface logic for observing the simulated environment comprises logic for changing a location of a participant's gaze in the simulated environment.
6. The system of claim 5, wherein the processor is further configured to calculate one or more viewing windows for the participant's gaze.
7. The system of claim 6, wherein the processor is further configured to calculate scores for each of the viewing windows, the calculated scores representing an amount of attention given to an object in the viewing windows.
8. The system of claim 5, wherein the processor is further configured to:
identify that the location of the participant's gaze encompasses a predefined trigger point;
retrieve a survey question associated with the predefined trigger point; and
transmit an instruction to the visual display device to display the retrieved survey question.
9. The system of claim 1, wherein the moderator interface logic for manipulating the simulated environment comprises logic for moving the participant to a specified location in the simulated environment.
10. The system of claim 1, wherein the moderator interface logic for manipulating the simulated environment comprises logic for manually triggering a survey question.
11. The system of claim 1, wherein the client interface logic limits the actions of the client in the simulated environment to viewing the simulated environment from the perspective of the participant.
12. The system of claim 1, wherein:
the storage medium further stores one or more hardware agnostic canvases that represent the simulated environment in a manner that is not specific to the visual display device, and
the processor is further configured to translate the one or more hardware agnostic canvases into a format that is interpretable by the visual display device.
13. A method for monitoring behaviors of a participant by a moderator and a client, the method comprising:
simulating an environment comprising an object of study;
transmitting first instructions to a participant visual display device, the transmitted first instructions comprising instructions for displaying a participant perspective of the simulated environment on the participant visual display device;
receiving participant location data describing a change in a position or a gaze location of the participant in the simulated environment;
analyzing the participant location data to calculate a score based on an amount of attention paid by the participant to the object of study in the simulated environment; and
storing the calculated score in a non-transitory storage medium.
14. The method of claim 13, further comprising transmitting second instructions to a client visual display device, the transmitted second instructions comprising instructions for displaying the participant perspective of the simulated environment on the client visual display device.
15. The method of claim 13, further comprising calculating one or more viewing windows for the participant's gaze location.
16. The method of claim 13, further comprising:
identifying that the participant's gaze location encompasses a predefined trigger point;
retrieving a survey question associated with the predefined trigger point; and
transmitting an instruction to the participant visual display device to display the retrieved survey question.
17. A non-transitory electronic device readable medium storing instructions that, when executed, cause a processor to:
connect to a participant interface of an environmental server responsible for maintaining a simulated environment comprising an object of study, wherein the environmental server maintains a plurality of different types of interfaces, each type of interface corresponding to a different type of user interacting with the simulated environment and providing different capabilities for the different types of users;
receive information about the simulated environment from the participant interface;
render the simulated environment for a participant;
transmit participant location data describing a change in a position or a gaze location of the participant in the simulated environment to the environmental server using the participant interface;
receive updated information about the simulated environment, and update the rendered simulated environment based on the updated information;
receive a manipulation of the environment from an instruction transmitted through a moderator interface of the environmental server; and
execute the manipulation in the simulated environment.
18. The medium of claim 17, wherein the instructions for displaying the simulated environment comprise instructions for displaying the simulated environment on a virtual reality headset.
19. The medium of claim 17, wherein the instructions for displaying the simulated environment comprise instructions for displaying the simulated environment in a two-dimensional browser.
20. The medium of claim 17, wherein:
the manipulation of the environment comprises an instruction that the participant be moved to a specified location in the simulated environment, and
executing the manipulation comprises moving the participant to the specified location.
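The claims above describe a server that exposes role-specific interfaces with different capabilities for participants, moderators, and clients (claims 1 and 9-11, and the plurality of interface types of claim 17). The following minimal Python sketch illustrates one way such an architecture could be organized; every class, method, and field name here is an illustrative assumption rather than part of the claimed system.

class SimulatedEnvironment:
    def __init__(self):
        self.avatar_positions = {}   # participant_id -> (x, y, z) position in the environment
        self.observations = []       # observational data reported by participants

class ParticipantInterface:
    # Simulate and observe: a participant can move an avatar and report observational data.
    def __init__(self, env):
        self.env = env

    def move_avatar(self, participant_id, position):
        self.env.avatar_positions[participant_id] = position

    def report_observation(self, data):
        self.env.observations.append(data)

class ModeratorInterface:
    # Simulate and manipulate: a moderator can relocate a participant (claim 9)
    # or manually trigger a survey question (claim 10).
    def __init__(self, env):
        self.env = env

    def move_participant(self, participant_id, position):
        self.env.avatar_positions[participant_id] = position

    def trigger_survey(self, question):
        return {"action": "display_survey", "question": question}

class ClientInterface:
    # View only: the client is limited to the participant's perspective (claim 11).
    def __init__(self, env):
        self.env = env

    def view_as(self, participant_id):
        return self.env.avatar_positions.get(participant_id)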
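Claims 5-7 and 13 recite calculating viewing windows for the participant's gaze and scoring the amount of attention paid to an object of study. The sketch below shows one plausible scoring rule, assuming that gaze samples carry a dwell time and that attention falls off linearly with distance from the object inside a fixed-radius viewing window; both assumptions are illustrative and not taken from the claims.

import math

def attention_score(gaze_samples, object_center, window_radius=1.0):
    # gaze_samples: iterable of (x, y, z, dwell_seconds) gaze points in the environment.
    # object_center: (x, y, z) location of the object of study.
    # Samples inside the viewing window contribute their dwell time weighted by proximity.
    score = 0.0
    for gx, gy, gz, dwell in gaze_samples:
        distance = math.dist((gx, gy, gz), object_center)
        if distance <= window_radius:
            score += dwell * (1.0 - distance / window_radius)
    return score

# Example: the two samples near the object contribute; the distant one does not.
samples = [(0.1, 0.0, 0.2, 1.5), (0.3, 0.1, 0.0, 0.8), (4.0, 0.0, 0.0, 2.0)]
print(attention_score(samples, (0.0, 0.0, 0.0)))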
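Claims 8 and 16 describe detecting that the participant's gaze encompasses a predefined trigger point, retrieving the associated survey question, and instructing the display device to show it. A minimal sketch of that flow follows; the trigger-point table, the tolerance test, and the send_to_display callback are all hypothetical.

TRIGGER_POINTS = {
    # (x, y, z) of a predefined trigger point -> survey question associated with it
    (2.0, 1.0, 0.5): "How appealing is this product display?",
}

def gaze_encompasses(gaze_location, trigger_point, tolerance=0.25):
    # Treat the gaze as encompassing the trigger point when every coordinate is within tolerance.
    return all(abs(g - t) <= tolerance for g, t in zip(gaze_location, trigger_point))

def check_triggers(gaze_location, send_to_display):
    # Retrieve the survey question for any encompassed trigger point and
    # transmit a display instruction to the visual display device.
    for point, question in TRIGGER_POINTS.items():
        if gaze_encompasses(gaze_location, point):
            send_to_display({"action": "display_survey", "question": question})

# Usage with a stand-in display transport:
check_triggers((2.1, 0.9, 0.4), send_to_display=print)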
US14/254,643 2014-04-16 2014-04-16 Systems and methods for multi-user behavioral research Abandoned US20150302422A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US14/254,643 US20150302422A1 (en) 2014-04-16 2014-04-16 Systems and methods for multi-user behavioral research
US14/274,351 US10354261B2 (en) 2014-04-16 2014-05-09 Systems and methods for virtual environment construction for behavioral research
US14/466,643 US20150301597A1 (en) 2014-04-16 2014-08-22 Calculation of an analytical trail in behavioral research
US16/512,149 US10600066B2 (en) 2014-04-16 2019-07-15 Systems and methods for virtual environment construction for behavioral research

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/254,643 US20150302422A1 (en) 2014-04-16 2014-04-16 Systems and methods for multi-user behavioral research

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US14/274,351 Continuation US10354261B2 (en) 2014-04-16 2014-05-09 Systems and methods for virtual environment construction for behavioral research
US14/466,643 Continuation-In-Part US20150301597A1 (en) 2014-04-16 2014-08-22 Calculation of an analytical trail in behavioral research

Publications (1)

Publication Number Publication Date
US20150302422A1 (en) 2015-10-22

Family

ID=54322361

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/254,643 Abandoned US20150302422A1 (en) 2014-04-16 2014-04-16 Systems and methods for multi-user behavioral research
US14/274,351 Active 2037-12-20 US10354261B2 (en) 2014-04-16 2014-05-09 Systems and methods for virtual environment construction for behavioral research
US16/512,149 Active US10600066B2 (en) 2014-04-16 2019-07-15 Systems and methods for virtual environment construction for behavioral research

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/274,351 Active 2037-12-20 US10354261B2 (en) 2014-04-16 2014-05-09 Systems and methods for virtual environment construction for behavioral research
US16/512,149 Active US10600066B2 (en) 2014-04-16 2019-07-15 Systems and methods for virtual environment construction for behavioral research

Country Status (1)

Country Link
US (3) US20150302422A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5977377B2 (en) * 2012-02-29 2016-08-24 ネステク ソシエテ アノニム Apparatus for conducting consumer surveys and method for using the same
US20150302422A1 (en) * 2014-04-16 2015-10-22 2020 Ip Llc Systems and methods for multi-user behavioral research
US10719193B2 (en) * 2016-04-20 2020-07-21 Microsoft Technology Licensing, Llc Augmenting search with three-dimensional representations
US10586257B2 (en) 2016-06-07 2020-03-10 At&T Mobility Ii Llc Facilitation of real-time interactive feedback
US10269116B2 (en) * 2016-12-26 2019-04-23 Intel Corporation Proprioception training method and apparatus
US20180315117A1 (en) * 2017-04-26 2018-11-01 David Lynton Jephcott On-Line Retail
US10810773B2 (en) * 2017-06-14 2020-10-20 Dell Products, L.P. Headset display control based upon a user's pupil state
US10643399B2 (en) 2017-08-29 2020-05-05 Target Brands, Inc. Photorealistic scene generation system and method
USD936074S1 (en) * 2019-04-01 2021-11-16 Igt Display screen or portion thereof with graphical user interface
CA3162928A1 (en) 2019-11-29 2021-06-03 Electric Puppets Incorporated System and method for virtual reality based human biological metrics collection and stimulus presentation

Family Cites Families (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5914720A (en) * 1994-04-21 1999-06-22 Sandia Corporation Method of using multiple perceptual channels to increase user absorption of an N-dimensional presentation environment
US5583795A (en) * 1995-03-17 1996-12-10 The United States Of America As Represented By The Secretary Of The Army Apparatus for measuring eye gaze and fixation duration, and method therefor
US20020059095A1 (en) * 1998-02-26 2002-05-16 Cook Rachael Linette System and method for generating, capturing, and managing customer lead information over a computer network
US6106119A (en) * 1998-10-16 2000-08-22 The Board Of Trustees Of The Leland Stanford Junior University Method for presenting high level interpretations of eye tracking data correlated to saved display images
US6474159B1 (en) * 2000-04-21 2002-11-05 Intersense, Inc. Motion-tracking
US7979314B2 (en) * 2001-08-23 2011-07-12 Jonas Ulenas Method and apparatus for obtaining consumer product preferences through interactive product selection and evaluation
WO2004036515A1 (en) * 2002-10-16 2004-04-29 Suzanne Jaffe Stillman Interactive vending system(s) featuring product customization, multimedia, education and entertainment, with business opportunities, models, and methods
US7562056B2 (en) * 2004-10-12 2009-07-14 Microsoft Corporation Method and system for learning an attention model for an image
CA2622365A1 (en) * 2005-09-16 2007-09-13 Imotions-Emotion Technology A/S System and method for determining human emotion by analyzing eye properties
CA2639125A1 (en) * 2006-03-13 2007-09-13 Imotions-Emotion Technology A/S Visual attention and emotional response detection and display system
US8487775B2 (en) * 2006-06-11 2013-07-16 Volvo Technology Corporation Method and apparatus for determining and analyzing a location of visual interest
US20100004977A1 (en) * 2006-09-05 2010-01-07 Innerscope Research Llc Method and System For Measuring User Experience For Interactive Activities
WO2008056330A1 (en) * 2006-11-08 2008-05-15 Kimberly Clark Worldwide, Inc. System and method for capturing test subject feedback
US20100010366A1 (en) * 2006-12-22 2010-01-14 Richard Bernard Silberstein Method to evaluate psychological responses to visual objects
US8370207B2 (en) * 2006-12-30 2013-02-05 Red Dot Square Solutions Limited Virtual reality system including smart objects
KR20100047865A (en) * 2007-08-28 2010-05-10 뉴로포커스, 인크. Consumer experience assessment system
US8036930B2 (en) * 2008-01-17 2011-10-11 International Business Machines Corporation Market segmentation analyses in virtual universes
US20130215116A1 (en) * 2008-03-21 2013-08-22 Dressbot, Inc. System and Method for Collaborative Shopping, Business and Entertainment
US8473356B2 (en) * 2008-08-26 2013-06-25 International Business Machines Corporation System and method for tagging objects for heterogeneous searches
US8401248B1 (en) * 2008-12-30 2013-03-19 Videomining Corporation Method and system for measuring emotional and attentional response to dynamic digital media content
US9495589B2 (en) * 2009-01-26 2016-11-15 Tobii Ab Detection of gaze point assisted by optical reference signal
US20120105486A1 (en) * 2009-04-09 2012-05-03 Dynavox Systems Llc Calibration free, motion tolerent eye-gaze direction detector with contextually aware computer interaction and communication methods
WO2011008793A1 (en) * 2009-07-13 2011-01-20 Emsense Corporation Systems and methods for generating bio-sensory metrics
US8412656B1 (en) * 2009-08-13 2013-04-02 Videomining Corporation Method and system for building a consumer decision tree in a hierarchical decision tree structure based on in-store behavior analysis
JP5613025B2 (en) * 2009-11-18 2014-10-22 パナソニック株式会社 Gaze detection apparatus, gaze detection method, electrooculogram measurement apparatus, wearable camera, head mounted display, electronic glasses, and ophthalmologic diagnosis apparatus
JP5490664B2 (en) * 2009-11-18 2014-05-14 パナソニック株式会社 Electrooculogram estimation device, electrooculogram calculation method, eye gaze detection device, wearable camera, head mounted display, and electronic glasses
US9373123B2 (en) * 2009-12-30 2016-06-21 Iheartmedia Management Services, Inc. Wearable advertising ratings methods and systems
US8964298B2 (en) * 2010-02-28 2015-02-24 Microsoft Corporation Video display modification based on sensor input for a see-through near-to-eye display
US8684742B2 (en) * 2010-04-19 2014-04-01 Innerscope Research, Inc. Short imagery task (SIT) research method
US9298985B2 (en) * 2011-05-16 2016-03-29 Wesley W. O. Krueger Physiological biosensor system and method for controlling a vehicle or powered equipment
CN103347437B (en) * 2011-02-09 2016-06-08 苹果公司 Gaze detection in 3D mapping environment
US8510166B2 (en) * 2011-05-11 2013-08-13 Google Inc. Gaze tracking system
US20140139616A1 (en) * 2012-01-27 2014-05-22 Intouch Technologies, Inc. Enhanced Diagnostics for a Telepresence Robot
US20130022947A1 (en) * 2011-07-22 2013-01-24 Muniz Simas Fernando Moreira Method and system for generating behavioral studies of an individual
CA2750287C (en) * 2011-08-29 2012-07-03 Microsoft Corporation Gaze detection in a see-through, near-eye, mixed reality display
WO2013059940A1 (en) * 2011-10-27 2013-05-02 Tandemlaunch Technologies Inc. System and method for calibrating eye gaze data
US8903176B2 (en) * 2011-11-14 2014-12-02 Sensory Logic, Inc. Systems and methods using observed emotional data
US8611015B2 (en) 2011-11-22 2013-12-17 Google Inc. User interface
DE112011105941B4 (en) 2011-12-12 2022-10-20 Intel Corporation Scoring the interestingness of areas of interest in a display element
US8864310B2 (en) * 2012-05-01 2014-10-21 RightEye, LLC Systems and methods for evaluating human eye tracking
US20130293530A1 (en) * 2012-05-04 2013-11-07 Kathryn Stone Perez Product augmentation and advertising in see through displays
US9004687B2 (en) * 2012-05-18 2015-04-14 Sync-Think, Inc. Eye tracking headset and system for neuropsychological testing including the detection of brain damage
WO2014035895A2 (en) * 2012-08-27 2014-03-06 Lamontagne Entertainment, Inc. A system and method for qualifying events based on behavioral patterns and traits in digital environments
US20140164056A1 (en) * 2012-12-07 2014-06-12 Cascade Strategies, Inc. Biosensitive response evaluation for design and research
US8549001B1 (en) * 2013-03-15 2013-10-01 DLZTX, Inc. Method and system for gathering and providing consumer intelligence
JP2016522465A (en) * 2013-03-15 2016-07-28 ジボ インコーポレイテッド Apparatus and method for providing a persistent companion device
US10706132B2 (en) * 2013-03-22 2020-07-07 Nok Nok Labs, Inc. System and method for adaptive user authentication
US20140340639A1 (en) * 2013-05-06 2014-11-20 Langbourne Rust Research Inc. Method and system for determining the relative gaze-attracting power of visual stimuli
US9189095B2 (en) * 2013-06-06 2015-11-17 Microsoft Technology Licensing, Llc Calibrating eye tracking system by touch input
EP3022652A2 (en) * 2013-07-19 2016-05-25 eyeQ Insights System for monitoring and analyzing behavior and uses thereof
US9804753B2 (en) * 2014-03-20 2017-10-31 Microsoft Technology Licensing, Llc Selection using eye gaze evaluation over time
US20150301597A1 (en) * 2014-04-16 2015-10-22 2020 Ip, Llc Calculation of an analytical trail in behavioral research
US20150302422A1 (en) * 2014-04-16 2015-10-22 2020 Ip Llc Systems and methods for multi-user behavioral research

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6577329B1 (en) * 1999-02-25 2003-06-10 International Business Machines Corporation Method and system for relevance feedback through gaze tracking and ticker interfaces
US20060064342A1 (en) * 2001-06-18 2006-03-23 Quality Resources Worldwide, Llc. Internet based qualitative research method and system and Synchronous and Asynchronous audio and video message board
US20070179867A1 (en) * 2004-03-11 2007-08-02 American Express Travel Related Services Company, Inc. Virtual reality shopping experience
US20080065468A1 (en) * 2006-09-07 2008-03-13 Charles John Berg Methods for Measuring Emotive Response and Selection Preference
US20110010266A1 (en) * 2006-12-30 2011-01-13 Red Dot Square Solutions Limited Virtual reality system for environment building
US20080163054A1 (en) * 2006-12-30 2008-07-03 Pieper Christopher M Tools for product development comprising collections of avatars and virtual reality business models for avatar use
WO2008081413A1 (en) * 2006-12-30 2008-07-10 Kimberly-Clark Worldwide, Inc. Virtual reality system for environment building
US20080172680A1 (en) * 2007-01-16 2008-07-17 Motorola, Inc. System and Method for Managing Interactions in a Virtual Environment
US20130022950A1 (en) * 2011-07-22 2013-01-24 Muniz Simas Fernando Moreira Method and system for generating behavioral studies of an individual
US20130035989A1 (en) * 2011-08-05 2013-02-07 Disney Enterprises, Inc. Conducting market research using social games
US20130325546A1 (en) * 2012-05-29 2013-12-05 Shopper Scientist, Llc Purchase behavior analysis based on visual history
WO2014088906A1 (en) * 2012-12-04 2014-06-12 Crutchfield Corporation System and method for customizing sales processes with virtual simulations and psychographic processing
US20140315635A1 (en) * 2013-04-19 2014-10-23 Upfront Analytics Ltd. Method and apparatus to elicit market research using game play
US20140365333A1 (en) * 2013-06-07 2014-12-11 Bby Solutions, Inc. Retail store customer natural-gesture interaction with animated 3d images using sensor array
US20150193979A1 (en) * 2014-01-08 2015-07-09 Andrej Grek Multi-user virtual reality interaction environment

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160078354A1 (en) * 2014-09-16 2016-03-17 International Business Machines Corporation Managing inferred questions
US10460239B2 (en) * 2014-09-16 2019-10-29 International Business Machines Corporation Generation of inferred questions for a question answering system
CN106096572A (en) * 2016-06-23 2016-11-09 惠州Tcl移动通信有限公司 Living habit detecting and control method based on virtual reality device and virtual reality device
CN108542404A (en) * 2018-03-16 2018-09-18 成都虚实梦境科技有限责任公司 Attention appraisal procedure, device, VR equipment and readable storage medium storing program for executing
US11218758B2 (en) 2018-05-17 2022-01-04 At&T Intellectual Property I, L.P. Directing user focus in 360 video consumption
US10721510B2 (en) 2018-05-17 2020-07-21 At&T Intellectual Property I, L.P. Directing user focus in 360 video consumption
US10783701B2 (en) 2018-05-22 2020-09-22 At&T Intellectual Property I, L.P. System for active-focus prediction in 360 video
US11100697B2 (en) 2018-05-22 2021-08-24 At&T Intellectual Property I, L.P. System for active-focus prediction in 360 video
US10482653B1 (en) 2018-05-22 2019-11-19 At&T Intellectual Property I, L.P. System for active-focus prediction in 360 video
US11651546B2 (en) 2018-05-22 2023-05-16 At&T Intellectual Property I, L.P. System for active-focus prediction in 360 video
US10827225B2 (en) 2018-06-01 2020-11-03 AT&T Intellectual Property I, L.P. Navigation for 360-degree video streaming
US11197066B2 (en) 2018-06-01 2021-12-07 At&T Intellectual Property I, L.P. Navigation for 360-degree video streaming
US20220191577A1 (en) * 2020-06-19 2022-06-16 Apple Inc. Changing Resource Utilization associated with a Media Object based on an Engagement Score

Also Published As

Publication number Publication date
US10600066B2 (en) 2020-03-24
US10354261B2 (en) 2019-07-16
US20190354999A1 (en) 2019-11-21
US20150302426A1 (en) 2015-10-22

Similar Documents

Publication Publication Date Title
US10600066B2 (en) Systems and methods for virtual environment construction for behavioral research
US20210166300A1 (en) Virtual reality platform for retail environment simulation
US20150301597A1 (en) Calculation of an analytical trail in behavioral research
US9202313B2 (en) Virtual interaction with image projection
TWI567659B (en) Theme-based augmentation of photorepresentative view
US20170084084A1 (en) Mapping of user interaction within a virtual reality environment
CN107771309A (en) Three dimensional user inputs
US11250321B2 (en) Immersive feedback loop for improving AI
Morotti et al. Fostering fashion retail experiences through virtual reality and voice assistants
US11928384B2 (en) Systems and methods for virtual and augmented reality
Datcu et al. Comparing presence, workload and situational awareness in a collaborative real world and augmented reality scenario
US20190378335A1 (en) Viewer position coordination in simulated reality
EP4113413A1 (en) Automatic purchase of digital wish lists content based on user set thresholds
EP3884390A1 (en) Experience driven development of mixed reality devices with immersive feedback
McNamara et al. Investigating low-cost virtual reality technologies in the context of an immersive maintenance training application
US20240127538A1 (en) Scene understanding using occupancy grids
Kucherenko Webvr api description and a-frame application implementation
Čisar et al. Development Concepts of Virtual Reality Software
Al Jundi Design and implementation of a high-fidelity virtual reality manufacturing planning framework
Murala et al. The Role of Immersive Reality (AR/VR/MR/XR) in Metaverse
EP4288935A1 (en) Scene understanding using occupancy grids
Fontana Molecular manipulation in augmented reality: A user experience design applied research on new paradigms of interaction.

Legal Events

Date Code Title Description
AS Assignment

Owner name: 2020 IP LLC, TENNESSEE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BRYSON, JAMES EDWARD;HARLAN, KATHRYN KERSEY;ROGERS, ISAAC DAVID;SIGNING DATES FROM 20140423 TO 20140424;REEL/FRAME:034015/0484

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: FROST VENTURES, LLC, TENNESSEE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:20/20 IP, LLC;REEL/FRAME:052284/0482

Effective date: 20200331