US20120156652A1 - Virtual shoot wall with 3d space and avatars reactive to user fire, motion, and gaze direction - Google Patents

Virtual shoot wall with 3d space and avatars reactive to user fire, motion, and gaze direction

Info

Publication number
US20120156652A1
Authority
US
United States
Prior art keywords
participant
weapon
virtual environment
capture
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/969,844
Inventor
Ken Lane
Jeremy Aker
Eric Burns
David Easter
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lockheed Martin Corp
Original Assignee
Lockheed Martin Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lockheed Martin Corp
Priority to US12/969,844
Assigned to LOCKHEED MARTIN CORPORATION (assignors: AKER, JEREMY; BURNS, ERIC; EASTER, DAVID; LANE, KENNETH)
Publication of US20120156652A1
Legal status: Abandoned

Classifications

    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41: WEAPONS
    • F41J: TARGETS; TARGET RANGES; BULLET CATCHERS
    • F41J9/00: Moving targets, i.e. moving when fired at
    • F41J9/14: Cinematographic targets, e.g. moving-picture targets
    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41: WEAPONS
    • F41G: WEAPON SIGHTS; AIMING
    • F41G3/00: Aiming or laying means
    • F41G3/26: Teaching or practice apparatus for gun-aiming or gun-laying
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00: Simulators for teaching or training purposes
    • G09B9/003: Simulators for teaching or training purposes for military purposes and tactics
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00: Simulators for teaching or training purposes
    • G09B9/006: Simulators for teaching or training purposes for locating or ranging of objects

Definitions

  • simulators such as vehicle, weapon, and flight simulators, action games, and engineering workstations, among other simulator types.
  • Simulators are frequently used as training devices which permit a participant to interact with a realistic simulated environment without the necessity of actually going out into the field to train in a real environment.
  • different simulators may enable a live participant, such as a police officer, pilot, or tank gunner, to acquire, maintain, and improve skills while minimizing costs, and, in some cases, the risks and dangers that are often associated with live training.
  • simulators perform satisfactorily in many applications.
  • customers for simulators such as branches of the military, law enforcement agencies, industrial and commercial entities, etc.
  • simulator customers typically seek to improve the quality of the simulated training environments supported by simulators by increasing realism in simulations and finding ways to make the simulated experiences more immersive.
  • shooting simulations in particular, customers have shown a desire for more accurate and complex simulations that go beyond the typical shoot/no shoot scenarios that are currently available.
  • a simulator system includes functionality for dynamically tracking position and orientation of one or more simulation participants and objects as they move throughout a capture volume using an array of motion capture video cameras so that two- or three-dimensional (“2D” and “3D”) views of a virtual environment, which are unique to each participant's point of view, may be generated by the system and rendered on a display.
  • the unique views are decoded from a commonly utilized display by equipping the participants with glasses that are configured with shutter lenses, polarizing filters, or a combination of both.
  • the object tracking supports the provision and use of an optical signaling capability that may be added to an object so that manipulation of the object by the participant can be communicated to the simulator system over the optical communications path that is enabled by use of the video cameras.
  • the simulator system supports a shoot wall simulation where the simulated personnel (i.e., avatars) can be generated and rendered in the virtual environment so they react to the position and/or motion of the simulation participant.
  • the gaze and/or weapon aim of the avatars will move in response to the location of the participant so that the avatars realistically appear to be looking and/or aiming their weapons at the participant.
  • the participant's weapon may be tracked using the object tracking capability by tracking markers affixed to the weapon at known locations.
  • a light source affixed to the weapon and operatively coupled to the weapon's trigger is actuated by a trigger pull to optically indicate to the simulator system that the participant has fired the weapon.
  • the participant's head is tracked through motion capture of markers that are affixed to a helmet or other garment/device worn by the participant when interacting with the simulation.
  • By correlating head position in the capture volume to the participant's gaze direction, an accurate estimate can be made as to where the participant is looking.
  • a dynamic view of the virtual environment from the participant's point of view can then be generated and rendered.
  • Such dynamic view generation and rendering from the point of view of the participant enables the participant to interact with the virtual environment in a realistic and believable manner by being enabled, for example, to change positions in the capture volume to look around an obstacle to reveal an otherwise hidden target.
  • the present simulator system supports a richly immersive and realistic simulation by enabling the participant's interaction with the virtual environment that more closely matches interactions with an actual physical environment.
  • the participant-based point of view affords the virtual environment with the appearance and response that would be expected of a real environment—avatars react with gaze direction and weapon aim as would their real world counterparts, rounds sent downrange hit where expected, and the rendered virtual environment has realistic depth well past the plane of the shoot wall.
  • FIG. 1 shows a pictorial view of an illustrative simulation environment that may be facilitated by implementation of the present simulator system with a 3D space and reactive avatars;
  • FIG. 2 shows an illustrative implementation of the present simulator system using a CAVE (Cave Automatic Virtual Environment) configuration
  • FIG. 3 shows an illustrative arrangement in which a capture volume may be monitored for motion capture using an array of video cameras
  • FIG. 4 shows an illustrative six degree-of-freedom coordinate system
  • FIG. 5 shows an illustrative motion capture video camera
  • FIG. 6 shows a simplified block diagram of illustrative functional components of a motion capture video camera
  • FIG. 7 shows a set of illustrative markers that are applied to a helmet worn by the participant at known locations
  • FIG. 8 depicts an illustrative idealized object that is arranged with multiple spherical retro-reflective markers that are rigidly fixed to an object at known locations;
  • FIG. 9 shows an illustrative example of markers and light sources as applied to a long arm weapon at known locations
  • FIG. 10 shows a simulation participant wearing glasses that may be configured with shutter lenses, polarizing filters, or both to decode participant-specific views of a virtual environment
  • FIG. 11 shows a pictorial representation of a modeled environment
  • FIG. 11A shows the modeled environment as rendered when captured from a first point of view
  • FIG. 12 shows a pictorial representation of a modeled environment in which imaginary cameras which capture the environment are located coincident with the participant's head;
  • FIG. 12A shows the modeled environment as rendered from a second point of view
  • FIG. 13 illustrates the divergence between an actual trajectory and perpendicular trajectory of a round discharged from a weapon when the target is relatively close to the plane of the shoot wall;
  • FIG. 14 illustrates the divergence between an actual trajectory and perpendicular trajectory of a round discharged from a weapon when the target is relatively distant from the plane of the shoot wall;
  • FIG. 15 shows an illustrative architecture that may be used to implement the present simulator system.
  • FIG. 16 is a flowchart of an illustrative method of operating the present simulator system.
  • FIG. 1 shows a pictorial view of an illustrative simulation environment 100 that may be facilitated by implementation of the present simulator system with a 3D space and reactive avatars.
  • the simulation environment 100 supports a participant 105 in the simulation.
  • the participant 105 is a single soldier, using a simulated weapon 110 , who is engaging in training that is intended to provide a realistic and immersive shooting simulation.
  • the present simulator system is not limited to military applications or shooting simulations.
  • the present simulator system may be adapted to a wide variety of usage scenarios including, for example, industrial, emergency response/911, law enforcement, air traffic control, firefighting, education, sports, commercial, engineering, medicine, gaming/entertainment, and the like.
  • the simulation environment 100 may also support multiple participants if needed to meet the needs of a particular training scenario.
  • the present simulator system may be configured to support two participants, each of whom is provided with unique and independent 3D views of the virtual environment generated by the system.
  • a configuration may be utilized that may support up to four participants, each of whom is provided with independent 2D views of the virtual environment generated by the system. Discussion of the configurations used to support multiple participants is provided in more detail below.
  • the participant 105 trains within a space (designated by reference numeral 115 ) that is termed a “capture volume.”
  • the participant 105 is typically free to move within the capture volume 115 as a given training simulation unfolds.
  • the capture volume 115 is indicated with a circle in FIG. 1 , it is noted that this particular shape is arbitrary and various sizes, shapes, and configurations of capture volumes may be utilized as may be needed to meet the requirements of a particular implementation.
  • the capture volume 115 is monitored, in this illustrative example, by an optical motion capture system. Motion capture is also referred to as “motion tracking.” Utilization of such a motion capture system enables the simulator system to maintain knowledge of the position and orientation of the soldier and weapon as the soldier moves through the capture volume 115 during the course of the training simulation.
  • a simulation display screen 120 is also supported in the environment 100 .
  • the display screen 120 provides a dynamic view 125 of the virtual environment that is generated by the simulator system.
  • a video projector is used to project the view 125 onto the display screen 120 , although direct view systems using flat panel emissive displays can also be utilized in some applications.
  • the view 125 shows a snapshot of an illustrative avatar 130 , who in this example is part of an enemy force and thus a target of the shooting simulation.
  • An avatar is typically a model of a virtual person who is generated and animated by the simulator system.
  • the simulation environment 100 shown in FIG. 1 is commonly termed a “shoot wall” because a single display screen is utilized in a vertical planar configuration that the participant 105 faces to view the projected virtual environment.
  • the present simulator system is not necessarily limited to shoot wall applications and can be arranged to support other configurations.
  • a CAVE configuration may be supported in which four non-co-planar display screens 205 1, 2 . . . 4 are typically utilized to provide a richly immersive virtual environment that is projected across three walls and the floor.
  • the capture volume 115 is coextensive with the space enclosed by the CAVE projection screens, as shown in FIG. 2 .
  • the display screens 205 1, 2 . . . 4 enclose a space that is approximately 10 feet wide, 10 feet long, and 8 feet high, however, other dimensions may also be utilized as may be required by a particular implementation.
  • the CAVE paradigm has also been applied to fifth and/or sixth display screens (i.e., the rear wall and ceiling) to provide simulations that may be even more encompassing for the participant 105 .
  • Video projectors 210 1, 2 . . . 4 may be used to project appropriate portions of the virtual environment onto the corresponding display screens 205 1, 2 . . . 4 .
  • the virtual environment is projected stereoscopically to support 3D observations for the participant 105 and interactive experiences with substantially full-scale images.
  • the capture volume 115 is within the field of view of an array of multiple video cameras 305 1, 2 . . . N that are part of a motion capture system so that the position and orientation of the participant 105 and weapon 110 ( FIG. 1 ) may be tracked within the capture volume as the participant moves as a simulation unfolds.
  • Such tracking utilizes images of markers (not shown in FIG. 3 ) that are captured by the video cameras 305 .
  • the markers are placed on the participant 105 and weapon 110 at known locations.
  • the centers of the marker images are matched from the various camera views using triangulation to compute frame-to-frame spatial positions of the participant 105 and weapon 110 within the 3D capture volume 115 .
  • the positions are defined by six degrees-of-freedom (“dof”), as depicted by the coordinate system 400 shown in FIG. 4 , including translation along each of the x, y, and z axes, as well as rotation about each axis.
  • Thus, both the location of an object in the capture volume (i.e., “position”) and its rotation about each of the axes (i.e., “orientation”) may be described using the coordinate system 400.
  • Stands, trusses, or similar supports are typically used to arrange the video cameras 305 around the periphery 315 of the capture volume 115.
  • the number of video cameras N may vary from 6 to 24 in many typical applications. While fewer cameras can be successfully used in some implementations, six is generally considered to be the minimum number that can be utilized to provide accurate head tracking since tracking markers can be obscured from a given camera in some situations depending on the movement and position of the participant 105 . Additional cameras can be utilized to provide full body tracking, additional tracking robustness, and/or redundancy.
  • the video cameras 305 may be configured as part of a reflective optical motion capture system.
  • reflective systems typically use multiple IR LEDs (infra-red light emitting diodes), as representatively indicated by reference numeral 505 , that are arranged around the perimeter of the lens 510 or aperture of a video camera 305 .
  • An IR-pass filter may also be utilized over the lens 510 in some camera designs.
  • the IR LEDs 505 will function as light sources to illuminate the markers on the participant 105 and weapon 110 ( FIG. 1 ).
  • FIG. 6 shows a simplified block diagram of illustrative functional components of a motion capture video camera 305 .
  • a video camera 305 will generally include an image capture subsystem 605 comprising a solid-state image sensor and optics such as one or more lenses.
  • the image capture subsystem, along with a processor 610 and memory 615 will typically be configured to give the video camera 305 the capability to capture video with an appropriate resolution and frame capture rate to enable motion tracking at the simulator system level that meets a desired accuracy in real time. For example, presently commercially available video cameras having multiple megapixels of resolution and a 60 frames-per-second capture rate may provide satisfactory performance in many typical motion capture usage scenarios.
  • the video cameras 305 will typically include a high speed communications interface 620 that facilitates operative connection and data exchange with external subsystems and systems.
  • the interface 620 may be embodied as a USB (Universal Serial Bus) interface.
  • FIG. 7 shows a set of illustrative markers 705 that are applied to a helmet 710 worn by the participant 105 and secured with a chinstrap 715 .
  • the markers 705 can be applied to a hat, headband, skullcap, or other relatively tight-fitting device/garment so that the motion of the markers closely matches the motions of the participant (i.e., extraneous motion of the markers is minimized).
  • the markers 705 are substantially spherically shaped in many typical applications and formed using retro-reflective materials which reflect incident light back to a light source with minimal scatter.
  • the number of markers 705 utilized in a given implementation can vary, but generally a minimum of three are used to enable six dof head tracking.
  • the markers 705 are rigidly mounted in known locations on the helmet 710 to enable the triangulation calculation to be performed to determine position within the capture volume 115 .
  • More markers 705 may be utilized in some usage scenarios to provide redundancy when markers would otherwise be obscured during the course of a simulation (for example, the participant lies on the floor, ducks behind cover when so provided in the capture volume, etc.), or to enhance tracking accuracy and/or robustness in some cases.
  • the markers 705 are used to dynamically track the position and orientation of the participant's head during interaction with a simulation. Head position is generally well correlated to gaze direction of the participant 105 . In other words, knowledge of the motion and position of the participant's head enables an accurate inference to be drawn as to what or who the participant is looking at within the virtual environment.
  • additional markers may be applied to the participant, for example, using a body suit, harness, or similar device, to enable full body tracking within the capture volume 115 . Real time full body tracking can typically be expected to consume more processing cycles and system resources as compared to head tracking, but may be desirable in some applications where, for example, a simulation is operated over distributed simulator infrastructure and avatars of local participants need to be generated for display on remote systems.
  • FIG. 8 depicts an illustrative idealized object 805 that is arranged with multiple spherical retro-reflective markers 810 1, 2 . . . N that are rigidly fixed to the object 805 at known locations.
  • the object 805 is implemented as a weapon such as a long arm
  • two markers 810 fixed in positions along the long axis of the barrel are typically sufficient to triangulate the location of the object 805 within the capture volume 115 ( FIG. 1 ), as the knowledge of the rotation of the object about the long axis is generally unnecessary.
  • additional markers may be utilized to support marker redundancy, for example, or when needed to meet the other requirements posed by a particular implementation.
  • the object 805 is also configured to support one or more light sources 815 1, 2 . . . N that may be selectively user-actuated via a switch 820 that is operatively coupled to the lights, as indicated by line 825 .
  • the light sources 815 may be implemented, for example, using IR LEDs that are powered by a power source, such as a battery (not shown), that is internally disposed in the object or arranged as an externally-coupled power pack.
  • the light sources 815 are rigidly fixed to the object 805 at known locations.
  • the light sources 815 may be located on object 805 both along the long axis as well as off-axis, as shown.
  • the number of light sources 815 utilized and their location on the object 805 can vary by application. Typically, however, at least one light source 815 will be utilized to provide one bit, binary (i.e., on and off) signaling capability.
  • FIG. 9 shows an illustrative example of markers 910 and light sources 915 as particularly applied to the simulated weapon 110 shown in FIG. 1 .
  • Simulated weapons are typically similar in appearance and weight to their real counterparts, but are not capable of firing live ammunition. In some cases, simulated weapons are real weapons that have been appropriately reconfigured and/or temporarily modified for simulation purposes.
  • markers 910 1 , 910 2 , and 910 3 are located along the long axis defined by the barrel of the weapon 110 while marker 910 N is located off the long axis.
  • Light source 915 1 is located off axis and operatively coupled to the trigger 920 of the weapon 110 .
  • Light source 915 N is also located off-axis as shown, and may be alternatively or optionally utilized.
  • At least two markers 910 located along the long axis of the weapon and one light source 915 can be utilized in typical applications to track the position of the weapon 110 in the capture volume 115 and implement the binary signaling capability.
  • the participant's actuation of the trigger 920 will activate a light source 915 to signal that the weapon has been virtually fired.
  • different light activation patterns can signal different types of discharge patterns such as a single round per trigger pull, 3-round burst per trigger pull, fully automatic fire with a trigger pull, and the like.
  • Such patterns can be implemented, for example, by various flash patterns using a single light source or multiple light sources 915 .
  • Activation of the light source 915 will be detected by one or more of the video cameras 305 ( FIG. 3 ) that monitor the capture volume 115 . Since the position of the weapon 110 at the time it is fired can be known from motion capture, a trajectory for the virtually discharged round (or rounds) in the virtual environment can be determined and used for purposes of the simulation. Further description of this aspect of the present simulator system is provided below.
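As an illustration of the binary optical signaling described above, the following Python sketch shows one way flash events detected by the capture cameras could be decoded into a discharge pattern. The flash-count-to-fire-mode mapping and the names used here are assumptions for illustration, not the patent's specification.

```python
# Hypothetical sketch: decoding a weapon-fire flash pattern into a discharge type.
# The mapping of flash counts to fire modes below is an assumed convention.
from dataclasses import dataclass

@dataclass
class FlashEvent:
    frame_index: int  # capture frame in which the weapon's light source was seen "on"

def decode_fire_mode(events: list) -> str:
    """Count distinct flashes (separated by at least one dark frame) and map the
    count to a fire mode."""
    if not events:
        return "no_fire"
    flashes = 1
    for prev, cur in zip(events, events[1:]):
        if cur.frame_index - prev.frame_index > 1:
            flashes += 1
    if flashes == 1:
        return "single_round"
    if flashes == 3:
        return "three_round_burst"
    return "full_automatic"

# Example: three separated flashes detected over a fraction of a second.
print(decode_fire_mode([FlashEvent(10), FlashEvent(14), FlashEvent(18)]))  # three_round_burst
```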
  • FIG. 10 shows the participant 105 wearing a pair of glasses 1005 that are used, in this illustrative example, to provide a 3D view of the virtual environment that is projected on to the display screen 120 ( FIG. 1 ).
  • Such 3D viewing is typically implemented by providing an eye-specific view in which a unique view is projected for each of the left eye and right eye to create the 3D effect by using the participant's stereoscopic vision. That is, what each eye sees is slightly mismatched (what is termed “binocular disparity”) and the human brain uses the mismatch to perceive depth.
  • the projected virtual environment will comprise two unique, separately-encoded dynamic views that are shown on the display screen 120 .
  • Eye-specific views can be generated by configuring the left-eye and right-eye lenses (as respectively indicated by reference numerals 1010 and 1015 ) as LCD (liquid crystal display) shutter lenses.
  • Liquid crystal display shutter lenses are also known as “flicker glasses.” Each shutter lens contains a liquid crystal layer that alternately goes dark or transparent with the respective application and absence of a voltage. The voltage is controlled by a timing signal received at the glasses 1005 (e.g., via an optical or radio frequency communications link to a remote imaging subsystem or module) that enables the shutter lenses to alternately darken over one eye of the participant 105 and then the other eye in synchronization with the refresh rate of the display screen 120 ( FIG. 1 ).
  • the video displayed on the screen 120 alternately shows left view and right view images (also termed “fields” when referring to video signals).
  • shutter lenses in the glasses 1005 are synchronously shuttered and un-shuttered to respectively occlude the unwanted image and transmit the wanted image.
  • the left eye only sees the left view and the right eye only sees the right view.
  • the participant's inherent persistence of vision, coupled with a sufficiently high refresh rate of the projected display can typically be expected to result in the participant's perception of stable and flicker-free 3D images.
  • the glasses 1005 may be configured to decode separate left- and right-eye views by applying polarizing filters to the lenses 1010 and 1015 .
  • For example, left- and right-handed circular polarizing filters may be respectively utilized in the lenses.
  • linear polarizing filters may be utilized that are orthogonally oriented in respective lenses. As each lens only passes images having like polarization, stereoscopic imaging can be implemented by projecting two different views (each view being uniquely polarized) that are superimposed onto the display screen 120 .
  • use of circular polarization may be particularly advantageous to avoid image bleed between left and right views and/or loss of stereoscopic perception that may occur while using linear polarizing filters when the participant's head is tilted to thus misalign the polarization axes of the glasses with the projected display.
  • the glasses 1005 may alternatively be configured with both shutter and polarizing components. By employing such a configuration, the glasses 1005 can decode and disambiguate among four unique and dynamic points of view of a virtual environment shown on the display screen 120 . That is, two unique viewpoints can be supported using synchronous shuttering, and two additional unique views can be supported using polarizing filters. In combination with appropriate generation and projection of a virtual environment on the display screen 120 , the four unique views may be used to provide, for example, each of two participants with unique 3D views of the virtual environment, or each of four participants with unique 2D views of the virtual environment.
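The following sketch is not from the patent; it simply enumerates the four viewing channels that combined shutter and polarization decoding could support and shows how they might be assigned either to two stereoscopic participants or to four monoscopic participants. The channel names and the assignment scheme are assumptions.

```python
# Illustrative channel assignment for glasses that combine shutter and polarizing
# components (assumed naming; two shutter phases x two polarizations = four channels).
from itertools import product

SHUTTER_PHASES = ("phase_A", "phase_B")             # alternating fields on the shared screen
POLARIZATIONS = ("left_circular", "right_circular")
CHANNELS = list(product(SHUTTER_PHASES, POLARIZATIONS))  # four unique channels

def assign_views(num_participants: int, stereoscopic: bool) -> dict:
    """Two 3D participants each consume a pair of channels (same shutter phase,
    opposite polarization for left/right eyes); up to four 2D participants get one each."""
    if stereoscopic:
        assert num_participants <= 2, "3D mode supports at most two participants"
        return {p: {"left_eye": CHANNELS[2 * p], "right_eye": CHANNELS[2 * p + 1]}
                for p in range(num_participants)}
    assert num_participants <= 4, "2D mode supports at most four participants"
    return {p: {"view": CHANNELS[p]} for p in range(num_participants)}

print(assign_views(2, stereoscopic=True))   # two unique 3D views
print(assign_views(4, stereoscopic=False))  # four unique 2D views
```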
  • parallax distortion occurs when a virtual environment is generated and displayed using a point of view that is fixed and does not move during the course of a simulation.
  • parallax distortion can occur when the camera's position is fixed at some arbitrary point in space in the capture volume 115 and the position does not change as a simulation unfolds.
  • the view on the screen would need to appear differently depending on the position of the participant's head in the capture volume 115 . That is, the participant 105 would expect the virtual environment to look different as his point of view changes. For example, when the participant 105 is in position “A” (as indicated by reference numeral 1110 ), his line of sight along line 1115 to the enemy soldier 130 is obscured by the wall 1120 . Assuming that the wall 1120 is co-planar with the display screen 120 , the dot 1125 shows that the sight line 1115 intersects the front plane of the environment at the wall 1120 .
  • In FIG. 11, when the imaginary camera 1140 is positioned in the center of the capture volume 115, its line of sight 1145 to the enemy soldier 130 is obscured by the wall. Accordingly, if the view of the virtual environment was generated using the image captured by the imaginary camera 1140, it would show the wall 1120 but the enemy soldier 130 would be hidden from view.
  • This view from the imaginary camera 1140 as projected onto the display screen 120 is shown in the inset drawing FIG. 11A .
  • the position of the imaginary camera 1140 is typically fixed.
  • the virtual environment would appear unnatural and unrealistic because the projected display would not take the position of the participant 105 into account.
  • the display 120 would look the same regardless of the participant's point of view, and the enemy would remain hidden by the wall.
  • In FIG. 12, the imaginary camera 1140 is located in substantially the same position and orientation as the participant's head. That way, the displayed view of the modeled environment 1105 will match the physical environment more closely and meet the participant's expectation that movement from position “A” to position “B” through the capture volume 115 will allow inspection of the area behind the wall 1120.
  • the view from the imaginary camera 1140 at position “B” in the capture volume 115 (which corresponds to what the participant would see) is shown in the inset drawing, FIG. 12A . As shown, the enemy soldier 130 is revealed in this view.
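A minimal sketch of head-coupled view generation follows, assuming the tracked head position is available from motion capture. Placing the virtual camera at the head position is what lets movement from position “A” to position “B” reveal the area behind the wall; a production renderer would typically use a full off-axis projection tied to the screen geometry, so this look-at construction is only illustrative.

```python
# Minimal sketch: drive the virtual ("imaginary") camera from the tracked head position
# so the rendered view changes as the participant moves through the capture volume.
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 view matrix with the camera at 'eye' looking toward 'target'."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = -view[:3, :3] @ eye
    return view

# The tracked head position drives the camera each frame (coordinates assumed).
head_position_A = (1.2, 1.7, 2.5)   # meters in the capture volume
scene_center = (0.0, 1.5, -3.0)     # a point behind the shoot-wall plane
view_matrix = look_at(head_position_A, scene_center)
```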
  • the present simulator system supports additional features which can add to the accuracy and realism of a given simulation.
  • the virtual environment may also be generated so that rendered elements in the environment are responsive to the participant's position in the capture volume.
  • the avatar of the enemy soldier 130 can be rendered so that the soldier's eyes and aim of his weapon track the participant 105 .
  • the avatar 130 realistically appears to be looking at the participant 105 and the avatar's gaze will dynamically change in response to the participant's motion.
  • This enhanced realism is in contrast to simulations supported by conventional simulators where avatars typically appear to stare into space or in an odd direction when supposedly attempting to look at or aim a weapon at a participant.
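As a sketch of the reactive-avatar behavior just described, the snippet below steers an avatar's gaze and weapon aim toward the tracked participant position. The yaw/pitch parameterization and coordinate conventions are assumptions for illustration.

```python
# Hedged sketch: steering an avatar's gaze and weapon aim toward the tracked
# participant position (y-up world frame assumed).
import numpy as np

def aim_angles(avatar_head_pos, participant_head_pos):
    """Return (yaw, pitch) in radians that point the avatar's gaze/weapon at the
    participant, given both positions in a common world frame."""
    d = np.asarray(participant_head_pos, float) - np.asarray(avatar_head_pos, float)
    yaw = np.arctan2(d[0], -d[2])                    # rotation about the vertical axis
    pitch = np.arctan2(d[1], np.hypot(d[0], d[2]))   # elevation angle
    return yaw, pitch

# Re-evaluating aim_angles each frame as the participant moves makes the avatar
# appear to track the participant with its eyes and weapon.
yaw, pitch = aim_angles(avatar_head_pos=(0.0, 1.7, -6.0),
                        participant_head_pos=(1.5, 1.7, 2.0))
```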
  • Tracking the position of the weapon 110 also enables enhanced simulation accuracy and realism.
  • knowledge of the position of the weapon 110 when it is fired (for example, as indicated by actuation of a light source 915 responsively to a trigger pull as shown in FIG. 9 ) enables a trajectory of the discharged round to be determined.
  • Such determination enables the present simulator system to overcome another common shortcoming of conventional simulators, namely unrealistic trajectory of discharged rounds.
  • a parallax angle p between the actual trajectory 1305 and the perpendicular trajectory 1310 is thus created.
  • Conventional simulators will typically rely on simple 3D scenarios where elements in the modeled environment 1105 do not extend deeply past the plane of the shoot wall in order to minimize the impact of the parallax.
  • the perpendicular trajectory 1310 will still result in a hit on target since the enemy soldier 130 is positioned relatively close to the shoot wall 120 .
  • short-depth modeled environment can typically be expected to constrain the types and quality of the simulations that are supported.
  • the present simulator system avoids the problems associated with the perpendicular trajectory described above.
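The geometry behind FIGS. 13 and 14 can be illustrated with the short sketch below, which compares the actual trajectory (along the weapon's tracked aim direction) with the perpendicular trajectory a conventional shoot-wall simulator would assume. The coordinate setup and numbers are assumed; the point is that the two hit points diverge as the target sits deeper behind the plane of the shoot wall.

```python
# Illustrative comparison of actual vs. "perpendicular" trajectories (assumed geometry:
# the shoot wall is the plane z = 0 and the modeled environment extends toward -z).
import numpy as np

def hit_points(muzzle, aim_dir, wall_z=0.0, target_depth=10.0):
    """Return (actual_hit, perpendicular_hit) at the target's depth plane."""
    muzzle = np.asarray(muzzle, float)
    aim_dir = np.asarray(aim_dir, float) / np.linalg.norm(aim_dir)
    # Actual trajectory: extend the aim ray until it reaches the target depth plane.
    t_target = (wall_z - target_depth - muzzle[2]) / aim_dir[2]
    actual_hit = muzzle + t_target * aim_dir
    # Perpendicular trajectory: find where the aim ray crosses the wall, then travel
    # straight into the scene (along -z) from that crossing point.
    t_wall = (wall_z - muzzle[2]) / aim_dir[2]
    wall_crossing = muzzle + t_wall * aim_dir
    perpendicular_hit = wall_crossing + np.array([0.0, 0.0, -target_depth])
    return actual_hit, perpendicular_hit

# A weapon fired 2 m from the wall at an oblique angle: the two hit points agree
# near the wall but diverge as the target moves deeper into the modeled environment.
for depth in (1.0, 10.0, 50.0):
    a, p = hit_points(muzzle=(1.0, 1.5, 2.0), aim_dir=(-0.3, 0.0, -1.0), target_depth=depth)
    print(depth, np.linalg.norm(a - p))
```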
  • FIG. 15 shows an illustrative architecture that may be used to implement the present simulator system 1505 .
  • the simulator system 1505 is configured to operate using a variety of software modules embodied as instructions on computer-readable storage media, described below, that may execute on general-purpose computing platforms such as personal computers and workstations, or alternatively on purpose-built simulator platforms.
  • the simulator system 1505 may be implemented using various combinations of software, firmware, and hardware.
  • the simulator system 1505 may be configured as a plug-in to existing simulators in order to provide the enhanced functionality described herein.
  • the simulator system 1505 when configured with appropriate interfaces may be used to augment the training scenarios afforded by an existing ground combat simulation to make them more realistic and more immersive.
  • a camera module 1510 is utilized to abstract the functionality provided by the video cameras 305 ( FIG. 3 ) which are used to monitor the capture volume 115 ( FIG. 1 ).
  • the camera module 1510 will utilize an interface such as an API (application programming interface) to expose functionality to the video cameras 305 to enable operative communications over a physical layer interface, such as USB.
  • the camera module 1510 may enhance the native motion capture functionality supported by the video cameras 305 , and in other applications the module functions essentially as a pass-through communications interface.
  • a head tracking module 1515 is also included in the simulator system 1505 .
  • head tracking alone is utilized in order to minimize the resource costs and latency that are typically associated with full body tracking.
  • full body tracking and motion capture may be utilized.
  • the head tracking module 1515 uses images of the helmet markers captured by the camera module 1510 in order to triangulate the position of the participant's head within the capture volume 115 as a given simulation unfolds and the participant moves throughout the volume.
  • an object tracking module 1520 is included in the simulator system 1505 which uses images of the weapon markers captured by the camera module 1510 to triangulate the position of the weapon within the capture volume 115 and detect trigger pulls.
  • the position determination is performed substantially in real time to minimize latency as the simulator system generates and renders the virtual environment. Minimization of latency can typically be expected to increase the realism and immersion of the simulation.
  • the head tracking and object tracking modules can be combined into a single module as indicated by dashed line 1525 in FIG. 15 .
  • the simulator system 1505 further supports the utilization of a virtual environment generation module 1530 .
  • This module is responsible for generating a virtual environment responsive to the needs of a given simulation.
  • module 1530 will generate a virtual environment while correcting for point of view parallax distortion and trajectory parallax, as respectively indicated by reference numerals 1535 and 1540 . That is, the virtual environment generation module 1530 will dynamically generate one or more views of a virtual environment that are consistent with the participant's respective and unique points of view. As noted above, up to four unique views may be generated and rendered depending on the configuration of the glasses 1005 ( FIG. 10 ) being utilized.
  • the virtual environment generation module 1530 will determine the actual trajectory of rounds fired by weapon 110 downrange.
  • a virtual environment rendering module 1545 is utilized in the simulator system 1505 to take the generated virtual environment and pass it off in an appropriate format for projection or display on the display screen 120 .
  • multiple views and/or multiple screens may be utilized as needed to meet the requirements of a particular implementation.
  • Other hardware may be abstracted in a hardware abstraction layer 1550 in some cases in order for the simulator system 1505 to implement the necessary interfaces with various other hardware components that may be needed to implement a given simulation.
  • various other types of peripheral equipment may be supported in a simulation, or interfaces may need to be maintained to support the simulator system 1505 across multiple platforms in a distributed computing arrangement.
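A skeleton of the module decomposition described for FIG. 15 is sketched below. The patent defines these modules functionally rather than as a concrete API, so the class and method names here are assumptions.

```python
# Skeleton sketch of the simulator system modules (assumed names, no implementation).
class CameraModule:
    """Abstracts the motion capture cameras (e.g., over USB) and yields marker images."""
    def capture_frame(self):
        raise NotImplementedError

class HeadTrackingModule:
    """Triangulates helmet-marker images into a head position and orientation."""
    def track(self, frame):
        raise NotImplementedError

class ObjectTrackingModule:
    """Triangulates weapon markers and detects trigger-pull light-source activations."""
    def track(self, frame):
        raise NotImplementedError

class VirtualEnvironmentGenerationModule:
    """Generates participant-specific views, correcting point-of-view parallax
    distortion and computing actual round trajectories."""
    def generate(self, head_pose, weapon_state, scenario_data):
        raise NotImplementedError

class VirtualEnvironmentRenderingModule:
    """Formats the generated environment for projection or display on the screen(s)."""
    def render(self, views):
        raise NotImplementedError
```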
  • FIG. 16 is a flowchart 1600 of an illustrative method of operating the simulator system 1505 shown in FIG. 15 and described in the accompanying text.
  • the method starts at block 1605 .
  • the position and orientation of the participant's head is tracked as the participant 105 moves throughout the capture volume 115 during the course of a simulation.
  • the position and orientation of the weapon 110 is tracked as the participant 105 moves through the capture volume 115 .
  • a single participant and weapon are tracked, however, multiple participants and weapons may be tracked when the simulator system 1505 is used to support multi-participant simulation scenarios.
  • the participant's point of view is determined, at block 1620, in response to the head tracking.
  • the gaze direction of one or more avatars 130 in the simulation will be determined based on the location of the participant 105 in the capture volume 115 .
  • the direction of the avatar's weapon will be determined, at block 1630 , so that the aim of the weapon will track the motion of the participant and thus appear realistic.
  • the simulator system 1505 will detect weapon fire (and/or detect other communicated data transmitted over the low-bandwidth communication path described above in the text accompanying FIG. 8 ).
  • the actual trajectory of discharged rounds will be determined in response to the position of the weapon 110 within the capture volume 115 .
  • Data descriptive of a given simulation scenario is received, as indicated at block 1645 .
  • Such data may be descriptive of the storyline followed in the simulation, express the actions and reactions of the avatars to the participant's commands and/or actions, and the like.
  • the virtual environment will be generated using the participant's point of view, having a realistic avatar gaze and weapon direction, and using the actual trajectory for weapon fire.
  • the generated virtual environment will be rendered by projecting or displaying the appropriate views on the display screen 120 .
  • control is returned back to the start and the method 1600 is repeated.
  • the rate at which the method repeats can vary by application, however, the various steps of capturing, determining, generating, and rendering will be performed with sufficient frequency to provide a smooth and seamless simulation.
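A hedged sketch of the operating loop summarized by flowchart 1600 follows. The simulator methods are placeholders (assumptions) intended only to make the ordering of the tracking, generation, and rendering steps concrete.

```python
# Illustrative per-frame loop mirroring the method of flowchart 1600 (placeholder
# simulator interface; method names are assumed, not defined by the patent).
import time

def run_simulation(simulator, frame_rate_hz: float = 60.0) -> None:
    period = 1.0 / frame_rate_hz
    while simulator.active():
        head_pose = simulator.track_head()                  # track the participant's head
        weapon_pose = simulator.track_weapon()              # track weapon position/orientation
        pov = simulator.point_of_view(head_pose)            # participant's point of view (block 1620)
        gaze = simulator.avatar_gaze(head_pose)             # avatar gaze toward the participant
        aim = simulator.avatar_weapon_aim(head_pose)        # avatar weapon aim (block 1630)
        fire = simulator.detect_weapon_fire()               # optically signaled weapon fire
        rounds = simulator.trajectories(weapon_pose, fire)  # actual trajectories of discharged rounds
        scenario = simulator.scenario_data()                # scenario/storyline data (block 1645)
        views = simulator.generate_environment(pov, gaze, aim, rounds, scenario)
        simulator.render(views)                             # project/display on the screen
        time.sleep(period)  # repeat frequently enough for a smooth, seamless simulation
```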

Abstract

A simulator system includes functionality for dynamically tracking position and orientation of one or more simulation participants and objects as they move throughout a capture volume using an array of motion capture video cameras so that two- or three-dimensional (“2D” and “3D”) views of a virtual environment, which are unique to each participant's point of view, may be generated by the system and rendered on a display. In 3D and/or multi-participant usage scenarios, the unique views are decoded from a commonly utilized display by equipping the participants with glasses that are configured with shutter lenses, polarizing filters, or a combination of both. The object tracking supports the provision and use of an optical signaling capability that may be added to an object so that manipulation of the object by the participant can be communicated to the simulator system over the optical communications path that is enabled by use of the video cameras.

Description

    BACKGROUND
  • Increased capabilities in computer processing, such as improved real-time image and audio processing, have aided the development of powerful training simulators such as vehicle, weapon, and flight simulators, action games, and engineering workstations, among other simulator types. Simulators are frequently used as training devices which permit a participant to interact with a realistic simulated environment without the necessity of actually going out into the field to train in a real environment. For example, different simulators may enable a live participant, such as a police officer, pilot, or tank gunner, to acquire, maintain, and improve skills while minimizing costs, and, in some cases, the risks and dangers that are often associated with live training.
  • Current simulators perform satisfactorily in many applications. However, customers for simulators, such as branches of the military, law enforcement agencies, industrial and commercial entities, etc., have expressed a desire for more realistic simulations so that training effectiveness can be improved. In addition, simulator customers typically seek to improve the quality of the simulated training environments supported by simulators by increasing realism in simulations and finding ways to make the simulated experiences more immersive. With regard to shooting simulations in particular, customers have shown a desire for more accurate and complex simulations that go beyond the typical shoot/no shoot scenarios that are currently available.
  • This Background is provided to introduce a brief context for the Summary and Detailed Description that follow. This Background is not intended to be an aid in determining the scope of the claimed subject matter nor be viewed as limiting the claimed subject matter to implementations that solve any or all of the disadvantages or problems presented above.
  • SUMMARY
  • A simulator system includes functionality for dynamically tracking position and orientation of one or more simulation participants and objects as they move throughout a capture volume using an array of motion capture video cameras so that two- or three-dimensional (“2D” and “3D”) views of a virtual environment, which are unique to each participant's point of view, may be generated by the system and rendered on a display. In 3D and/or multi-participant usage scenarios, the unique views are decoded from a commonly utilized display by equipping the participants with glasses that are configured with shutter lenses, polarizing filters, or a combination of both. The object tracking supports the provision and use of an optical signaling capability that may be added to an object so that manipulation of the object by the participant can be communicated to the simulator system over the optical communications path that is enabled by use of the video cameras.
  • In various illustrative examples, the simulator system supports a shoot wall simulation where the simulated personnel (i.e., avatars) can be generated and rendered in the virtual environment so they react to the position and/or motion of the simulation participant. The gaze and/or weapon aim of the avatars, for example, will move in response to the location of the participant so that the avatars realistically appear to be looking and/or aiming their weapons at the participant. The participant's weapon may be tracked using the object tracking capability by tracking markers affixed to the weapon at known locations. A light source affixed to the weapon and operatively coupled to the weapon's trigger is actuated by a trigger pull to optically indicate to the simulator system that the participant has fired the weapon. Using the known location of the weapon gained from the motion capture, an accurate trajectory of discharged rounds from the weapon can be calculated and then realistically simulated. Use of the light source allows the motion capture system to detect weapon fire without the need for cumbersome and restrictive conventional wired or tethered interfaces.
  • The participant's head is tracked through motion capture of markers that are affixed to a helmet or other garment/device worn by the participant when interacting with the simulation. By correlating head position in the capture volume to the participant's gaze direction, an accurate estimate can be made as to where the participant is looking. A dynamic view of the virtual environment from the participant's point of view can then be generated and rendered. Such dynamic view generation and rendering from the point of view of the participant enables the participant to interact with the virtual environment in a realistic and believable manner by being enabled, for example, to change positions in the capture volume to look around an obstacle to reveal an otherwise hidden target.
  • Advantageously, the present simulator system supports a richly immersive and realistic simulation by enabling the participant's interaction with the virtual environment that more closely matches interactions with an actual physical environment. In combination with accurate trajectory simulation, the participant-based point of view affords the virtual environment with the appearance and response that would be expected of a real environment—avatars react with gaze direction and weapon aim as would their real world counterparts, rounds sent downrange hit where expected, and the rendered virtual environment has realistic depth well past the plane of the shoot wall.
  • This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
  • DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a pictorial view of an illustrative simulation environment that may be facilitated by implementation of the present simulator system with a 3D space and reactive avatars;
  • FIG. 2 shows an illustrative implementation of the present simulator system using a CAVE (Cave Automatic Virtual Environment) configuration;
  • FIG. 3 shows an illustrative arrangement in which a capture volume may be monitored for motion capture using an array of video cameras;
  • FIG. 4 shows an illustrative six degree-of-freedom coordinate system;
  • FIG. 5 shows an illustrative motion capture video camera;
  • FIG. 6 shows a simplified block diagram of illustrative functional components of a motion capture video camera;
  • FIG. 7 shows a set of illustrative markers that are applied to a helmet worn by the participant at known locations;
  • FIG. 8 depicts an illustrative idealized object that is arranged with multiple spherical retro-reflective markers that are rigidly fixed to an object at known locations;
  • FIG. 9 shows an illustrative example of markers and light sources as applied to a long arm weapon at known locations;
  • FIG. 10 shows a simulation participant wearing glasses that may be configured with shutter lenses, polarizing filters, or both to decode participant-specific views of a virtual environment;
  • FIG. 11 shows a pictorial representation of a modeled environment;
  • FIG. 11A shows the modeled environment as rendered when captured from a first point of view;
  • FIG. 12 shows a pictorial representation of a modeled environment in which imaginary cameras which capture the environment are located coincident with the participant's head;
  • FIG. 12A shows the modeled environment as rendered from a second point of view;
  • FIG. 13 illustrates the divergence between an actual trajectory and perpendicular trajectory of a round discharged from a weapon when the target is relatively close to the plane of the shoot wall;
  • FIG. 14 illustrates the divergence between an actual trajectory and perpendicular trajectory of a round discharged from a weapon when the target is relatively distant from the plane of the shoot wall;
  • FIG. 15 shows an illustrative architecture that may be used to implement the present simulator system; and
  • FIG. 16 is a flowchart of an illustrative method of operating the present simulator system.
  • Like reference numerals indicate like elements in the drawings. Unless otherwise indicated, elements are not drawn to scale.
  • DETAILED DESCRIPTION
  • FIG. 1 shows a pictorial view of an illustrative simulation environment 100 that may be facilitated by implementation of the present simulator system with a 3D space and reactive avatars. The simulation environment 100 supports a participant 105 in the simulation. In this particular illustrative example, the participant 105 is a single soldier, using a simulated weapon 110, who is engaging in training that is intended to provide a realistic and immersive shooting simulation. It is emphasized, however, that the present simulator system is not limited to military applications or shooting simulations. The present simulator system may be adapted to a wide variety of usage scenarios including, for example, industrial, emergency response/911, law enforcement, air traffic control, firefighting, education, sports, commercial, engineering, medicine, gaming/entertainment, and the like.
  • The simulation environment 100 may also support multiple participants if needed to meet the needs of a particular training scenario. In many applications, when a 3D virtual environment is implemented, then the present simulator system may be configured to support two participants, each of whom is provided with unique and independent 3D views of the virtual environment generated by the system. In applications where a 2D virtual environment is implemented, then a configuration may be utilized that may support up to four participants, each of whom is provided with independent 2D views of the virtual environment generated by the system. Discussion of the configurations used to support multiple participants is provided in more detail below.
  • As shown in FIG. 1, the participant 105 trains within a space (designated by reference numeral 115) that is termed a “capture volume.” The participant 105 is typically free to move within the capture volume 115 as a given training simulation unfolds. Although the capture volume 115 is indicated with a circle in FIG. 1, it is noted that this particular shape is arbitrary and various sizes, shapes, and configurations of capture volumes may be utilized as may be needed to meet the requirements of a particular implementation. As described in more detail below, the capture volume 115 is monitored, in this illustrative example, by an optical motion capture system. Motion capture is also referred to as “motion tracking.” Utilization of such a motion capture system enables the simulator system to maintain knowledge of the position and orientation of the soldier and weapon as the soldier moves through the capture volume 115 during the course of the training simulation.
  • A simulation display screen 120 is also supported in the environment 100. The display screen 120 provides a dynamic view 125 of the virtual environment that is generated by the simulator system. Typically a video projector is used to project the view 125 onto the display screen 120, although direct view systems using flat panel emissive displays can also be utilized in some applications. In FIG. 1, the view 125 shows a snapshot of an illustrative avatar 130, who in this example is part of an enemy force and thus a target of the shooting simulation. An avatar is typically a model of a virtual person who is generated and animated by the simulator system. In some applications, the avatar 130 may be a representation of an actual person (i.e., a virtual alter ego) and might take any of a variety of roles such as a member of a friendly or opposing force, a civilian non-combatant, etc. Furthermore, while a single avatar 130 is shown in the view 125, the number of avatars utilized in any given simulation can vary as needs dictate.
  • The simulation environment 100 shown in FIG. 1 is commonly termed a “shoot wall” because a single display screen is utilized in a vertical planar configuration that the participant 105 faces to view the projected virtual environment. However, the present simulator system is not necessarily limited to shoot wall applications and can be arranged to support other configurations. For example, as shown in FIG. 2, a CAVE configuration may be supported in which four non-co-planar display screens 205 1, 2 . . . 4 are typically utilized to provide a richly immersive virtual environment that is projected across three walls and the floor. As the projected virtual environment substantially surrounds the participant 105, the capture volume 115 is coextensive with the space enclosed by the CAVE projection screens, as shown in FIG. 2.
  • In some implementations of CAVE, the display screens 205 1, 2 . . . 4 enclose a space that is approximately 10 feet wide, 10 feet long, and 8 feet high, however, other dimensions may also be utilized as may be required by a particular implementation. The CAVE paradigm has also been applied to fifth and/or sixth display screens (i.e., the rear wall and ceiling) to provide simulations that may be even more encompassing for the participant 105. Video projectors 210 1, 2 . . . 4 may be used to project appropriate portions of the virtual environment onto the corresponding display screens 205 1, 2 . . . 4. In some CAVE simulators, the virtual environment is projected stereoscopically to support 3D observations for the participant 105 and interactive experiences with substantially full-scale images.
  • As shown in FIG. 3, the capture volume 115 is within the field of view of an array of multiple video cameras 305 1, 2 . . . N that are part of a motion capture system so that the position and orientation of the participant 105 and weapon 110 (FIG. 1) may be tracked within the capture volume as the participant moves as a simulation unfolds. Such tracking utilizes images of markers (not shown in FIG. 3) that are captured by the video cameras 305. The markers are placed on the participant 105 and weapon 110 at known locations. The centers of the marker images are matched from the various camera views using triangulation to compute frame-to-frame spatial positions of the participant 105 and weapon 110 within the 3D capture volume 115.
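One way the triangulation step could be carried out is sketched below: given the 2D centroid of a single marker as seen by several calibrated cameras, its 3D position in the capture volume is recovered by linear least squares (a direct linear transform). The camera projection matrices are assumed to be known from a prior calibration; the patent does not prescribe this particular algorithm.

```python
# Hedged sketch of multi-camera marker triangulation via a direct linear transform.
import numpy as np

def triangulate_marker(projection_matrices, pixel_centroids):
    """projection_matrices: list of 3x4 camera matrices P_i (assumed calibrated).
    pixel_centroids: list of (u, v) marker centers from the corresponding cameras.
    Returns the 3D marker position as a length-3 array."""
    rows = []
    for P, (u, v) in zip(projection_matrices, pixel_centroids):
        P = np.asarray(P, float)
        rows.append(u * P[2] - P[0])   # each view contributes two linear constraints
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # Homogeneous least-squares solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```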
  • The positions are defined by six degrees-of-freedom (“dof”), as depicted by the coordinate system 400 shown in FIG. 4, including translation along each of the x, y, and z axes, as well as rotation about each axis. Thus, both location of an object in the capture volume (i.e., “position”) and its rotation about each of the axes (i.e., “orientation”) may be described using the coordinate system 400. Note that the term “position” will be used to refer to both location and rotation in the description that follows unless stated otherwise.
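A small illustrative representation of the six degrees of freedom of coordinate system 400 is shown below, composing translation along each axis and rotation about each axis into a single rigid transform. The field names and rotation order are assumptions.

```python
# Illustrative six degree-of-freedom pose (assumed field names and Z-Y-X rotation order).
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose6DOF:
    x: float      # translation along x
    y: float      # translation along y
    z: float      # translation along z
    roll: float   # rotation about x (radians)
    pitch: float  # rotation about y (radians)
    yaw: float    # rotation about z (radians)

    def matrix(self) -> np.ndarray:
        """Return the 4x4 homogeneous transform describing position and orientation."""
        cr, sr = np.cos(self.roll), np.sin(self.roll)
        cp, sp = np.cos(self.pitch), np.sin(self.pitch)
        cy, sy = np.cos(self.yaw), np.sin(self.yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx
        T[:3, 3] = (self.x, self.y, self.z)
        return T
```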
  • Returning again to FIG. 3, stands, trusses, or similar supports, as representatively indicated by reference numeral 310, are typically used to arrange the video cameras 305 around the periphery 315 of the capture volume 115. The number of video cameras N may vary from 6 to 24 in many typical applications. While fewer cameras can be successfully used in some implementations, six is generally considered to be the minimum number that can be utilized to provide accurate head tracking since tracking markers can be obscured from a given camera in some situations depending on the movement and position of the participant 105. Additional cameras can be utilized to provide full body tracking, additional tracking robustness, and/or redundancy.
  • In this illustrative example, the video cameras 305 may be configured as part of a reflective optical motion capture system. As shown in FIG. 5, reflective systems typically use multiple IR LEDs (infra-red light emitting diodes), as representatively indicated by reference numeral 505, that are arranged around the perimeter of the lens 510 or aperture of a video camera 305. An IR-pass filter may also be utilized over the lens 510 in some camera designs. The IR LEDs 505 will function as light sources to illuminate the markers on the participant 105 and weapon 110 (FIG. 1).
  • FIG. 6 shows a simplified block diagram of illustrative functional components of a motion capture video camera 305. In addition to the IR LEDs light sources 505, a video camera 305 will generally include an image capture subsystem 605 comprising a solid-state image sensor and optics such as one or more lenses. The image capture subsystem, along with a processor 610 and memory 615 will typically be configured to give the video camera 305 the capability to capture video with an appropriate resolution and frame capture rate to enable motion tracking at the simulator system level that meets a desired accuracy in real time. For example, presently commercially available video cameras having multiple megapixels of resolution and a 60 frames-per-second capture rate may provide satisfactory performance in many typical motion capture usage scenarios. The video cameras 305 will typically include a high speed communications interface 620 that facilitates operative connection and data exchange with external subsystems and systems. For example, the interface 620 may be embodied as a USB (Universal Serial Bus) interface.
  • FIG. 7 shows a set of illustrative markers 705 that are applied to a helmet 710 worn by the participant 105 and secured with a chinstrap 715. In alternative implementations, the markers 705 can be applied to a hat, headband, skullcap, or other relatively tight-fitting device/garment so that the motion of the markers closely matches the motion of the participant (i.e., extraneous motion of the markers is minimized). The markers 705 are substantially spherically shaped in many typical applications and formed using retro-reflective materials which reflect incident light back to a light source with minimal scatter. The number of markers 705 utilized in a given implementation can vary, but generally a minimum of three are used to enable six dof head tracking. The markers 705 are rigidly mounted in known locations on the helmet 710 to enable the triangulation calculation to be performed to determine position within the capture volume 115. More markers 705 may be utilized in some usage scenarios to provide redundancy when markers would otherwise be obscured during the course of a simulation (for example, the participant lies on the floor, ducks behind cover when so provided in the capture volume, etc.), or to enhance tracking accuracy and/or robustness in some cases.
  • In this illustrative example, the markers 705 are used to dynamically track the position and orientation of the participant's head during interaction with a simulation. Head position is generally well correlated to gaze direction of the participant 105. In other words, knowledge of the motion and position of the participant's head enables an accurate inference to be drawn as to what or who the participant is looking at within the virtual environment. In alternative implementations, additional markers may be applied to the participant, for example, using a body suit, harness, or similar device, to enable full body tracking within the capture volume 115. Real time full body tracking can typically be expected to consume more processing cycles and system resources as compared to head tracking, but may be desirable in some applications where, for example, a simulation is operated over distributed simulator infrastructure and avatars of local participants need to be generated for display on remote systems.
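The inference from head pose to gaze direction can be sketched as follows. The choice of which local head axis counts as "straight ahead" is an assumption made for this example rather than anything specified by the patent.

    import numpy as np

    def gaze_ray(head_position, head_rotation, forward=(0.0, 0.0, 1.0)):
        """Infers a gaze ray from a tracked head pose.

        head_position: (3,) head location in the capture volume.
        head_rotation: (3, 3) rotation matrix of the head frame.
        forward:       which local axis is treated as "straight ahead"
                       (an assumed convention for this sketch).
        Returns (origin, unit direction) of the inferred gaze.
        """
        origin = np.asarray(head_position, dtype=float)
        direction = np.asarray(head_rotation, dtype=float) @ np.asarray(forward, dtype=float)
        return origin, direction / np.linalg.norm(direction)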
  • FIG. 8 depicts an illustrative idealized object 805 that is arranged with multiple spherical retro-reflective markers 810 1, 2 . . . N that are rigidly fixed to the object 805 at known locations. In applications where the object 805 is implemented as a weapon such as a long arm, two markers 810 fixed in positions along the long axis of the barrel are typically sufficient to triangulate the location of the object 805 within the capture volume 115 (FIG. 1), as the knowledge of the rotation of the object about the long axis is generally unnecessary. However, additional markers may be utilized to support marker redundancy, for example, or when needed to meet the other requirements posed by a particular implementation.
  • In this illustrative example, the object 805 is also configured to support one or more light sources 815 1, 2 . . . N that may be selectively user-actuated via a switch 820 that is operatively coupled to the lights, as indicated by line 825. The light sources 815 may be implemented, for example, using IR LEDs that are powered by a power source, such as a battery (not shown), that is internally disposed in the object or arranged as an externally-coupled power pack. The light sources 815 are used to effectuate a relatively low-bandwidth optical communication path for signaling or transmitting data from the object 805 (or from the participant via interaction with the object) within the capture volume 115 using the same optical motion capture system that is utilized to track the position of the participant and object. Advantageously, the light sources 815 implement the signal path without the necessity of additional communications infrastructure such as RF (radio frequency), magnetic sensing, or other equipment. In addition, utilization of an optically-implemented communication path obviates the need for wires, cables, or other tethers that might restrict movement of the participant 105 within the capture volume 115 or otherwise reduce the realism of the simulation.
  • As with the markers 810, the light sources 815 are rigidly fixed to the object 805 at known locations. The light sources 815 may be located on the object 805 both along the long axis as well as off-axis, as shown. The number of light sources 815 utilized and their location on the object 805 can vary by application. Typically, however, at least one light source 815 will be utilized to provide a one-bit, binary (i.e., on and off) signaling capability.
  • FIG. 9 shows an illustrative example of markers 910 and light sources 915 as particularly applied to the simulated weapon 110 shown in FIG. 1. Simulated weapons are typically similar in appearance and weight to their real counterparts, but are not capable of firing live ammunition. In some cases, simulated weapons are real weapons that have been appropriately reconfigured and/or temporarily modified for simulation purposes. In this example, markers 910 1, 910 2, and 910 3 are located along the long axis defined by the barrel of the weapon 110 while marker 910 N is located off the long axis. Light source 915 1 is located off axis and operatively coupled to the trigger 920 of the weapon 110. Light source 915 N is also located off-axis as shown, and may be alternatively or optionally utilized. Generally, at least two markers 910 located along the long axis of the weapon and one light source 915 (either located on or off the long axis) can be utilized in typical applications to track the position of the weapon 110 in the capture volume 115 and implement the binary signaling capability.
  • In operation during a simulation, the participant's actuation of the trigger 920 will activate a light source 915 to signal that the weapon has been virtually fired. In some cases, different light activation patterns can signal different types of discharge patterns such as a single round per trigger pull, 3-round burst per trigger pull, fully automatic fire with a trigger pull, and the like. Such patterns can be implemented, for example, by various flash patterns using a single light source or multiple light sources 915. Activation of the light source 915 will be detected by one or more of the video cameras 305 (FIG. 3) that monitor the capture volume 115. Since the position of the weapon 110 at the time it is fired can be known from motion capture, a trajectory for the virtually discharged round (or rounds) in the virtual environment can be determined and used for purposes of the simulation. Further description of this aspect of the present simulator system is provided below.
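One possible way to turn the detected light activations and the tracked barrel markers into a fire event is sketched below. The flash-pattern table, marker names, and function signature are hypothetical illustrations of the idea, not the patent's signaling protocol.

    import numpy as np

    # Hypothetical mapping from the number of flashes detected within a short
    # window to a discharge mode (single shot, 3-round burst, full automatic).
    FLASH_PATTERNS = {1: "single", 3: "burst", 5: "auto"}

    def shot_from_capture(flash_count, muzzle_marker, rear_marker):
        """Derives a fire event from the optical signal and weapon markers.

        flash_count:   number of light-source activations detected in the window.
        muzzle_marker: (3,) triangulated position of the marker nearest the muzzle.
        rear_marker:   (3,) triangulated position of the marker nearest the stock.
        Returns (mode, origin, unit direction) of the simulated round, or None.
        """
        mode = FLASH_PATTERNS.get(flash_count)
        if mode is None:
            return None                       # unrecognized pattern: ignore
        muzzle = np.asarray(muzzle_marker, dtype=float)
        direction = muzzle - np.asarray(rear_marker, dtype=float)
        return mode, muzzle, direction / np.linalg.norm(direction)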
  • FIG. 10 shows the participant 105 wearing a pair of glasses 1005 that are used, in this illustrative example, to provide a 3D view of the virtual environment that is projected onto the display screen 120 (FIG. 1). Such 3D viewing is typically implemented by providing an eye-specific view in which a unique view is projected for each of the left eye and right eye to create the 3D effect by using the participant's stereoscopic vision. That is, what each eye sees is slightly mismatched (what is termed "binocular disparity") and the human brain uses the mismatch to perceive depth. Thus, to implement the 3D view, the projected virtual environment will comprise two unique, separately-encoded dynamic views that are shown on the display screen 120.
  • Eye-specific views can be generated by configuring the left-eye and right-eye lenses (as respectively indicated by reference numerals 1010 and 1015) as LCD (liquid crystal display) shutter lenses. Liquid crystal display shutter lenses are also known as "flicker glasses." Each shutter lens contains a liquid crystal layer that alternately goes dark or transparent with the respective application and absence of a voltage. The voltage is controlled by a timing signal received at the glasses 1005 (e.g., via an optical or radio frequency communications link to a remote imaging subsystem or module) that enables the shutter lenses to alternately darken over one eye of the participant 105 and then the other eye in synchronization with the refresh rate of the display screen 120 (FIG. 1). The video displayed on the screen 120 alternately shows left view and right view images (also termed "fields" when referring to video signals). When the participant 105 views the display screen 120, the shutter lenses in the glasses 1005 are synchronously shuttered and un-shuttered to respectively occlude the unwanted image and transmit the wanted image. Thus, the left eye only sees the left view and the right eye only sees the right view. The participant's inherent persistence of vision, coupled with a sufficiently high refresh rate of the projected display can typically be expected to result in the participant's perception of stable and flicker-free 3D images.
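The alternation of left and right fields against the shutter timing can be illustrated with a trivial schedule. In a real system this timing is derived from a sync emitter tied to the display, so the sketch below is only a conceptual stand-in with assumed numbers.

    import itertools

    def shutter_schedule(refresh_hz=120, fields=8):
        """Pairs each displayed field with the lens left transparent for it.

        At a 120 Hz refresh each eye effectively sees 60 fields per second;
        the opposite lens is darkened while its field is not on screen.
        (Conceptual only; real timing comes from a synchronized emitter.)
        """
        eyes = itertools.cycle(["left", "right"])
        return [(round(i / refresh_hz, 4), next(eyes)) for i in range(fields)]

    print(shutter_schedule())   # [(0.0, 'left'), (0.0083, 'right'), ...]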
  • In other implementations of the present simulator system, the glasses 1005 may be configured to decode separate left- and right-eye views by applying polarizing filters to the lenses 1010 and 1015. For example, left- and right-handed circular polarizing filters may be respectively utilized in the lenses. Alternatively, linear polarizing filters may be utilized that are orthogonally oriented in respective lenses. As each lens only passes images having like polarization, stereoscopic imaging can be implemented by projecting two different views (each view being uniquely polarized) that are superimposed onto the display screen 120. In some applications, use of circular polarization may be particularly advantageous to avoid image bleed between left and right views and/or loss of stereoscopic perception that may occur while using linear polarizing filters when the participant's head is tilted to thus misalign the polarization axes of the glasses with the projected display.
  • The glasses 1005 may alternatively be configured with both shutter and polarizing components. By employing such a configuration, the glasses 1005 can decode and disambiguate among four unique and dynamic points of view of a virtual environment shown on the display screen 120. That is, two unique viewpoints can be supported using synchronous shuttering, and two additional unique views can be supported using polarizing filters. In combination with appropriate generation and projection of a virtual environment on the display screen 120, the four unique views may be used to provide, for example, each of two participants with unique 3D views of the virtual environment, or each of four participants with unique 2D views of the virtual environment.
  • The provision of a unique dynamic point of view per participant is a feature of the present simulator system that may provide additional realism to a simulation by addressing the issue of parallax distortion that is frequently experienced when interacting with conventional shoot wall simulators. Parallax distortion occurs when a virtual environment is generated and displayed using a point of view that is fixed and does not move during the course of a simulation. In other words, assuming an imaginary camera is used to capture the virtual environment that is displayed on the screen 120 (FIG. 1), then parallax distortion can occur when the camera's position is fixed at some arbitrary point in space in the capture volume 115 and the position does not change as a simulation unfolds.
  • This problem is illustrated in FIG. 11 which shows a pictorial representation of the environment 1105 that is modeled for the simulation shown in FIG. 1. As shown in FIG. 11, the modeled environment 1105 can be thought of as being separated from the capture volume 115 by the display screen 120 (i.e., the plane of the shoot wall) and physically extending into a 3D space that is adjacent to the capture volume. As with the capture volume, the size and configuration of the modeled environment can vary by application and be different from the arbitrary size and shape that is illustrated in the drawing.
  • For the environment 1105 to appear realistic when projected onto the display screen 120, the view on the screen would need to appear differently depending on the position of the participant's head in the capture volume 115. That is, the participant 105 would expect the virtual environment to look different as his point of view changes. For example, when the participant 105 is in position “A” (as indicated by reference numeral 1110), his line of sight along line 1115 to the enemy soldier 130 is obscured by the wall 1120. Assuming that the wall 1120 is co-planar with the display screen 120, the dot 1125 shows that the sight line 1115 intersects the front plane of the environment at the wall 1120. By contrast, when the participant 105 moves to position “B” (as indicated by reference numeral 1130), his line of sight 1135 to the enemy soldier 130 is no longer obscured by the wall 1120. Thus, if the modeled environment accurately matches its physical counterpart, the participant could move to look around an obstacle to see if an enemy is hidden behind it.
  • As shown in FIG. 11, when the imaginary camera 1140 is positioned in the center of the capture volume 115 its line of sight 1145 to the enemy soldier 130 is obscured by the wall. Accordingly, if the view of the virtual environment was generated using the image captured by the imaginary camera 1140, it would show the wall 1120 but the enemy soldier 130 would be hidden from view. This view from the imaginary camera 1140 as projected onto the display screen 120 is shown in the inset drawing FIG. 11A. As noted above, in conventional shoot wall simulators the position of the imaginary camera 1140 is typically fixed. Thus, when rendered by conventional simulators using such a fixed point of view, the virtual environment would appear unnatural and unrealistic because the projected display would not take the position of the participant 105 into account. Thus, if the participant 105 moved his position to attempt to see what is behind the wall 1120, the display 120 would look the same regardless of the participant's point of view, and the enemy would remain hidden by the wall.
  • By contrast, application of the principles of the present simulator system enables an accurate and realistic display to be generated and projected by tracking the position of the participant 105 in the capture volume 115. The imaginary camera 1140 is then placed to be coincident with the participant's head so that the captured view of the modeled environment 1105 matches the participant's point of view as he moves through the capture volume 115. This feature is shown in FIG. 12. As shown, the imaginary camera 1140 is located in substantially the same position and orientation as the participant's head. That way, the displayed view of the modeled environment 1105 will match the physical environment more closely and meet the participant's expectation that movement from position “A” to position “B” through the capture volume 115 will allow inspection of the area behind the wall 1120. The view from the imaginary camera 1140 at position “B” in the capture volume 115 (which corresponds to what the participant would see) is shown in the inset drawing, FIG. 12A. As shown, the enemy soldier 130 is revealed in this view.
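A common way to realize an "imaginary camera coincident with the participant's head" for a fixed screen is a generalized off-axis perspective projection (for example, Kooima's well-known formulation). The sketch below follows that standard approach under assumed coordinate conventions; it should not be read as the patent's actual rendering code.

    import numpy as np

    def off_axis_projection(pa, pb, pc, eye, near=0.1, far=100.0):
        """Off-axis frustum so the rendered view is taken from the tracked head.

        pa, pb, pc: lower-left, lower-right, and upper-left corners of the
                    display screen (shoot wall) in capture-volume coordinates.
        eye:        tracked head (eye) position, i.e., the "imaginary camera."
        Returns a 4x4 projection*view matrix (OpenGL-style, column-vector
        convention). The corner naming and conventions are assumptions.
        """
        pa, pb, pc, eye = (np.asarray(v, dtype=float) for v in (pa, pb, pc, eye))
        vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
        vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
        vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal toward viewer

        va, vb, vc = pa - eye, pb - eye, pc - eye         # eye -> corner vectors
        d = -np.dot(va, vn)                               # eye-to-screen distance
        l = np.dot(vr, va) * near / d
        r = np.dot(vr, vb) * near / d
        b = np.dot(vu, va) * near / d
        t = np.dot(vu, vc) * near / d

        P = np.array([[2 * near / (r - l), 0, (r + l) / (r - l), 0],
                      [0, 2 * near / (t - b), (t + b) / (t - b), 0],
                      [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
                      [0, 0, -1, 0]])
        M = np.eye(4)
        M[:3, :3] = np.vstack([vr, vu, vn])               # rotate world into the screen frame
        T = np.eye(4)
        T[:3, 3] = -eye                                   # translate the eye to the origin
        return P @ M @ T

As the participant's head moves from position "A" to position "B", recomputing this matrix each frame shifts the frustum so that previously occluded geometry, such as the area behind the wall 1120, comes into view exactly as it would through a real window.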
  • In addition to supporting the generation and projection of a virtual environment that is dynamically and continuously captured from the participant's point of view as he moves through the capture volume 115, the present simulator system supports additional features which can add to the accuracy and realism of a given simulation. The virtual environment may also be generated so that rendered elements in the environment are responsive to the participant's position in the capture volume. For example, the avatar of the enemy soldier 130 can be rendered so that the soldier's eyes and aim of his weapon track the participant 105. In this way, the avatar 130 realistically appears to be looking at the participant 105 and the avatar's gaze will dynamically change in response to the participant's motion. This enhanced realism is in contrast to simulations supported by conventional simulators where avatars typically appear to stare into space or in an odd direction when supposedly attempting to look at or aim a weapon at a participant.
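A minimal way to make an avatar's gaze and weapon aim follow the participant is to recompute the aim angles toward the tracked head position each frame. The axis conventions below are assumptions chosen for this sketch, not the patent's.

    import numpy as np

    def aim_toward(avatar_eye, participant_head):
        """Yaw/pitch (radians) that point an avatar's gaze or weapon at the participant.

        Assumes a right-handed frame with x right, y up, and z out of the screen
        toward the capture volume (a convention chosen for this sketch).
        """
        v = np.asarray(participant_head, dtype=float) - np.asarray(avatar_eye, dtype=float)
        yaw = np.arctan2(v[0], v[2])                      # rotation about the up (y) axis
        pitch = np.arctan2(v[1], np.hypot(v[0], v[2]))    # elevation above the horizontal
        return yaw, pitch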
  • Tracking the position of the weapon 110 (FIG. 1) also enables enhanced simulation accuracy and realism. As described above, knowledge of the position of the weapon 110 when it is fired (for example, as indicated by actuation of a light source 915 responsively to a trigger pull as shown in FIG. 9) enables a trajectory of the discharged round to be determined. Such determination enables the present simulator system to overcome another common shortcoming of conventional simulators, namely unrealistic trajectory of discharged rounds.
  • As shown in FIG. 13, in conventional simulators, when the weapon 110 is fired the discharged round turns unrealistically from its actual trajectory 1305 when it intersects the display screen 120 (i.e., the plane of the shoot wall) to fly exactly perpendicular to the display screen surface. This modified perpendicular trajectory, as indicated by reference numeral 1310, is typically implemented in conventional simulators without regard to the starting incident angle of the incoming round since the position of the weapon in the capture volume is unknown (the point of intersection with the shoot wall, by comparison, is typically known using a light source such as a laser in the weapon and a photodetector at the shoot wall/display screen that detects the location of the incident laser beam).
  • A parallax angle p between the actual trajectory 1305 and the perpendicular trajectory 1310 is thus created. Conventional simulators will typically rely on simple 3D scenarios where elements in the modeled environment 1105 do not extend deeply past the plane of the shoot wall in order to minimize the impact of the parallax. Thus, as shown in FIG. 13, the perpendicular trajectory 1310 will still result in a hit on target since the enemy soldier 130 is positioned relatively close to the shoot wall 120. However, such short-depth modeled environments can typically be expected to constrain the types and quality of the simulations that are supported.
  • As shown in FIG. 14, as the enemy soldier 130 moves deeper into the modeled environment 1105 the impact of the parallax angle p is magnified. In this case, the divergence between the actual trajectory 1405 and the perpendicular trajectory 1410 is great enough that the perpendicular trajectory results in a miss of the target. As a result, the perpendicular trajectory can be expected to create a readily apparent loss of believability and immersion for the participant 105. Thus, by utilizing the actual trajectories (as indicated by reference numerals 1305 and 1405 in the scenarios depicted in FIGS. 13 and 14), the present simulator system avoids the problems associated with the perpendicular trajectory described above.
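The effect of the parallax angle p can be quantified with a small sketch that compares where the actual trajectory and the conventional "perpendicular" trajectory end up at the target's depth. The screen-at-z=0 convention and the sample numbers are assumptions, but the result shows the divergence growing as the target sits deeper in the modeled environment, as in FIGS. 13 and 14.

    import numpy as np

    def miss_distance(weapon_pos, aim_dir, target_depth):
        """Gap at the target between the actual and the 'perpendicular' trajectory.

        weapon_pos:   (3,) weapon position; the screen is taken as the plane z = 0
                      with the capture volume at z > 0 (a convention for this sketch).
        aim_dir:      (3,) direction of the actual trajectory (z component negative).
        target_depth: distance of the target behind the screen plane.
        """
        weapon_pos = np.asarray(weapon_pos, dtype=float)
        aim_dir = np.asarray(aim_dir, dtype=float)
        # Where the actual trajectory pierces the screen plane.
        t_screen = -weapon_pos[2] / aim_dir[2]
        hit_on_screen = weapon_pos + t_screen * aim_dir
        # Actual trajectory continued to the target's depth behind the screen.
        t_target = (-target_depth - weapon_pos[2]) / aim_dir[2]
        actual = weapon_pos + t_target * aim_dir
        # Conventional behavior: continue perpendicular to the screen from the
        # piercing point, so x and y never change past the wall.
        perpendicular = hit_on_screen + np.array([0.0, 0.0, -target_depth])
        return np.linalg.norm(actual[:2] - perpendicular[:2])

    aim = np.array([-0.3, 0.0, -1.0])
    aim /= np.linalg.norm(aim)
    print(miss_distance([1.5, 1.2, 3.0], aim, 1.0))   # small gap close to the wall (~0.3)
    print(miss_distance([1.5, 1.2, 3.0], aim, 5.0))   # much larger gap deeper downrange (~1.5)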
  • FIG. 15 shows an illustrative architecture that may be used to implement the present simulator system 1505. In many applications, the simulator system 1505 is configured to operate using a variety of software modules embodied as instructions on computer-readable storage media, described below, that may execute on general-purpose computing platforms such as personal computers and workstations, or alternatively on purpose-built simulator platforms. In other applications, the simulator system 1505 may be implemented using various combinations of software, firmware, and hardware. In some cases, the simulator system 1505 may be configured as a plug-in to existing simulators in order to provide the enhanced functionality described herein. For example, the simulator system 1505 when configured with appropriate interfaces may be used to augment the training scenarios afforded by an existing ground combat simulation to make them more realistic and more immersive.
  • A camera module 1510 is utilized to abstract the functionality provided by the video cameras 305 (FIG. 3) which are used to monitor the capture volume 115 (FIG. 1). Typically the camera module 1510 will utilize an interface such as an API (application programming interface) to expose the functionality of the video cameras 305 and to enable operative communications over a physical layer interface, such as USB. In some applications, the camera module 1510 may enhance the native motion capture functionality supported by the video cameras 305, and in other applications the module functions essentially as a pass-through communications interface.
  • A head tracking module 1515 is also included in the simulator system 1505. In this illustrative example, head tracking alone is utilized in order to minimize the resource costs and latency that are typically associated with full body tracking. However, in alternative implementations, full body tracking and motion capture may be utilized. The head tracking module 1515 uses images of the helmet markers captured by the camera module 1510 in order to triangulate the position of the participant's head within the capture volume 115 as a given simulation unfolds and the participant moves throughout the volume.
  • Similarly, an object tracking module 1520 is included in the simulator system 1505 which uses images of the weapon markers captured by the camera module 1510 to triangulate the position of the weapon within the capture volume 115 and detect trigger pulls. For both head tracking and object tracking, the position determination is performed substantially in real time to minimize latency as the simulator system generates and renders the virtual environment. Minimization of latency can typically be expected to increase the realism and immersion of the simulation. In some cases, the head tracking and object tracking modules can be combined into a single module as indicated by dashed line 1525 in FIG. 15.
  • The simulator system 1505 further supports the utilization of a virtual environment generation module 1530. This module is responsible for generating a virtual environment responsive to the needs of a given simulation. In addition, module 1530 will generate a virtual environment while correcting for point of view parallax distortion and trajectory parallax, as respectively indicated by reference numerals 1535 and 1540. That is, the virtual environment generation module 1530 will dynamically generate one or more views of a virtual environment that are consistent with the participants' respective and unique points of view. As noted above, up to four unique views may be generated and rendered depending on the configuration of the glasses 1005 (FIG. 10) being utilized. In addition, the virtual environment generation module 1530 will determine the actual trajectory of rounds fired downrange by the weapon 110.
  • A virtual environment rendering module 1545 is utilized in the simulator system 1505 to take the generated virtual environment and pass it off in an appropriate format for projection or display on the display screen 120. As described above, multiple views and/or multiple screens may be utilized as needed to meet the requirements of a particular implementation. Other hardware may be abstracted in a hardware abstraction layer 1550 in some cases in order for the simulator system 1505 to implement the necessary interfaces with various other hardware components that may be needed to implement a given simulation. For example, various other types of peripheral equipment may be supported in a simulation, or interfaces may need to be maintained to support the simulator system 1505 across multiple platforms in a distributed computing arrangement.
  • FIG. 16 is a flowchart 1600 of an illustrative method of operating the simulator system 1505 shown in FIG. 15 and described in the accompanying text. The method starts at block 1605. At block 1610 the position and orientation of the participant's head is tracked as the participant 105 moves throughout the capture volume 115 during the course of a simulation. At block 1615 the position and orientation of the weapon 110 is tracked as the participant 105 moves through the capture volume 115. In this illustrative example a single participant and weapon are tracked; however, multiple participants and weapons may be tracked when the simulator system 1505 is used to support multi-participant simulation scenarios.
  • The participant's point of view is determined, at block 1620, in response to the head tracking. At block 1625, the gaze direction of one or more avatars 130 in the simulation will be determined based on the location of the participant 105 in the capture volume 115. Similarly the direction of the avatar's weapon will be determined, at block 1630, so that the aim of the weapon will track the motion of the participant and thus appear realistic.
  • At block 1635, the simulator system 1505 will detect weapon fire (and/or detect other communicated data transmitted over the low-bandwidth communication path described above in the text accompanying FIG. 8). At block 1640 the actual trajectory of discharged rounds will be determined in response to the position of the weapon 110 within the capture volume 115.
  • Data descriptive of a given simulation scenario is received, as indicated at block 1645. Such data, for example, may be descriptive of the storyline followed in the simulation, express the actions and reactions of the avatars to the participant's commands and/or actions, and the like. At block 1650, using the captured information from the camera module, the various determinations described in blocks 1625 through 1640, and the received simulation data, the virtual environment will be generated using the participant's point of view, having a realistic avatar gaze and weapon direction, and using the actual trajectory for weapon fire. At block 1655, the generated virtual environment will be rendered by projecting or displaying the appropriate views on the display screen 120. At block 1660 control is returned back to the start and the method 1600 is repeated. The rate at which the method repeats can vary by application; however, the various steps of capturing, determining, generating, and rendering will be performed with sufficient frequency to provide a smooth and seamless simulation.
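The flowchart can be compressed into a schematic control loop. Every callable below is a hypothetical stand-in for the corresponding module of FIG. 15, supplied by the caller; none of the names or signatures are an actual interface of the simulator system.

    def run_simulation(capture, track_head, track_weapon, update_avatars,
                       detect_fire, generate_view, render, is_active):
        """Schematic control loop mirroring flowchart 1600 (blocks 1610-1660).

        Each argument is a callable provided by the corresponding module of
        FIG. 15; the names and signatures are illustrative assumptions.
        """
        while is_active():
            frames = capture()                                   # camera module
            head = track_head(frames)                            # blocks 1610, 1620
            weapon = track_weapon(frames)                        # block 1615
            avatars = update_avatars(head)                       # blocks 1625, 1630
            shot = detect_fire(frames, weapon)                   # blocks 1635, 1640
            view = generate_view(head, weapon, avatars, shot)    # blocks 1645, 1650
            render(view)                                         # block 1655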
  • Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

1. A method for operating a simulation supported on a simulator, the method comprising the steps of:
tracking a participant in the simulation to determine at least one of position, orientation, or motion of the participant within a capture volume, the capture volume being monitored by an optical motion capture system that is configured to capture positions of participant markers within the capture volume, the participant markers being positioned at known locations on the participant;
configuring an object with i) object markers at known locations so that the optical motion capture system can capture positions of the object markers within the capture volume and ii) at least one participant-actuated light source that is disposed at a known location on the object and monitored by the optical capture system so as to implement an optical communications path between the object and the optical motion capture system over which a signal may be transmitted via actuation of the light source;
tracking the object to determine at least one of position, orientation, or motion of the object within a capture volume; and
dynamically generating a virtual environment utilized by the simulation, the virtual environment being generated from the participant's point of view responsively to the participant tracking and further being generated responsively to the object tracking and signal transmitted over the optical communications path.
2. The method of claim 1 including a further step of rendering the virtual environment onto a display, the display being one of shoot wall or CAVE.
3. The method of claim 1 in which the object comprises a simulated weapon and the user-actuated light source is operatively coupled to a trigger on the weapon, the light source being operated in response to a trigger pull.
4. The method of claim 3 in which the light source is an IR light source and the signal is indicative of the weapon being fired.
5. The method of claim 3 including a further step of determining a trajectory of a simulated discharge of a round from the weapon using the object tracking.
6. The method of claim 1 in which the optical motion capture system utilizes an array of video cameras, each of the video cameras including one or more IR light sources.
7. The method of claim 1 in which the virtual environment includes one or more avatars that are responsive to the tracked participant, the avatars being configured to dynamically change gaze or weapon aim in response to the position, orientation, or motion of the participant within the capture volume.
8. The method of claim 1 in which the participant tracking comprises tracking the participant's head.
9. A computer-implemented method for providing a shoot wall simulation, the method comprising the steps of:
tracking a position within a capture volume of each of one or more participants in the shoot wall simulation using a motion capture system that is configured to monitor the capture volume;
generating a unique view for each of one or more participants, each unique view being taken from a point of view of the respective participant as the participant moves within the capture volume;
superimposing the unique views onto a display device that is commonly utilized by each of the one or more participants;
tracking a position of a weapon associated with one or more of the participants; and
detecting operation of the weapon using the motion capture system, the detecting comprising monitoring actuation of a light affixed to the weapon, the light being actuated when the weapon is fired.
10. The computer-implemented method of claim 9 in which the unique views are encoded as 3D views with left-eye and right-eye images.
11. The computer-implemented method of claim 10 in which the left-eye and right-eye images are decoded using one of LCD shutter glasses or polarizing filter glasses.
12. The computer-implemented method of claim 9 in which the commonly utilized display device utilizes multiple walls in a CAVE configuration.
13. The computer-implemented method of claim 9 in which the weapon utilizes at least two markers disposed substantially along a long axis defined by the barrel of the weapon.
14. The computer-implemented method of claim 9 in which the markers comprise substantially spherical retro-reflectors.
15. One or more computer-readable storage media containing instructions which, when executed by one or more processors disposed in a computing device, implement a simulator system, the instructions being logically grouped in modules, the modules comprising:
a camera module for interfacing with an array of optical motion capture video cameras, the array being configured for optically monitoring a capture volume and for receiving captured images of tracked simulation participants and weapons associated with respective participants;
a head tracking module for determining a position of a head of one or more of the tracked simulation participants within the capture volume using the captured images;
a weapon tracking module for determining a position of one or more tracked weapons within the capture volume using the captured images; and
a virtual environment generation module for generating a virtual environment supported by the simulator system, the virtual environment being corrected for parallax distortion and trajectory parallax.
16. The one or more computer-readable storage media of claim 15 further comprising a virtual environment rendering module for rendering the generated virtual environment onto a display.
17. The one or more computer-readable storage media of claim 15 in which the parallax distortion correction comprises generating a unique view for each participant based on each participant's point of view within the capture volume.
18. The one or more computer-readable storage media of claim 15 in which the trajectory parallax correction comprises determining a trajectory of a round fired from one or more of the weapons using the tracked position of the one or more weapons.
19. The one or more computer-readable storage media of claim 15 in which the virtual environment generation module generates avatars that are responsive to the position of the participants.
20. The one or more computer-readable storage media of claim 16 in which the rendering is performed in 3D.
US12/969,844 2010-12-16 2010-12-16 Virtual shoot wall with 3d space and avatars reactive to user fire, motion, and gaze direction Abandoned US20120156652A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/969,844 US20120156652A1 (en) 2010-12-16 2010-12-16 Virtual shoot wall with 3d space and avatars reactive to user fire, motion, and gaze direction

Publications (1)

Publication Number Publication Date
US20120156652A1 true US20120156652A1 (en) 2012-06-21

Family

ID=46234871

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/969,844 Abandoned US20120156652A1 (en) 2010-12-16 2010-12-16 Virtual shoot wall with 3d space and avatars reactive to user fire, motion, and gaze direction

Country Status (1)

Country Link
US (1) US20120156652A1 (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5649706A (en) * 1994-09-21 1997-07-22 Treat, Jr.; Erwin C. Simulator and practice method
US5641288A (en) * 1996-01-11 1997-06-24 Zaenglein, Jr.; William G. Shooting simulating process and training device using a virtual reality display screen
US6296486B1 (en) * 1997-12-23 2001-10-02 Aerospatiale Societe Nationale Industrielle Missile firing simulator with the gunner immersed in a virtual space
US6408257B1 (en) * 1999-08-31 2002-06-18 Xerox Corporation Augmented-reality display method and system
US7110194B2 (en) * 2002-11-27 2006-09-19 Hubbs Machine & Manufacturing Inc. Spherical retro-reflector mount negative
US8303308B2 (en) * 2005-02-28 2012-11-06 Saab Ab Method and system for fire simulation
US7839417B2 (en) * 2006-03-10 2010-11-23 University Of Northern Iowa Research Foundation Virtual coatings application system
US8077914B1 (en) * 2006-08-07 2011-12-13 Arkady Kaplan Optical tracking apparatus using six degrees of freedom
US8217995B2 (en) * 2008-01-18 2012-07-10 Lockheed Martin Corporation Providing a collaborative immersive environment using a spherical camera and motion capture
US8228327B2 (en) * 2008-02-29 2012-07-24 Disney Enterprises, Inc. Non-linear depth rendering of stereoscopic animated images
US8459997B2 (en) * 2009-02-27 2013-06-11 Opto Ballistics, Llc Shooting simulation system and method

Cited By (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10025389B2 (en) 2004-06-18 2018-07-17 Tobii Ab Arrangement, method and computer program for controlling a computer apparatus based on eye-tracking
US8920172B1 (en) * 2011-03-15 2014-12-30 Motion Reality, Inc. Method and system for tracking hardware in a motion capture environment
US8777226B1 (en) * 2012-06-21 2014-07-15 Robert Hubert Decker, Jr. Proxy target system
US9262680B2 (en) * 2012-07-31 2016-02-16 Japan Science And Technology Agency Point-of-gaze detection device, point-of-gaze detecting method, personal parameter calculating device, personal parameter calculating method, program, and computer-readable storage medium
US20150154758A1 (en) * 2012-07-31 2015-06-04 Japan Science And Technology Agency Point-of-gaze detection device, point-of-gaze detecting method, personal parameter calculating device, personal parameter calculating method, program, and computer-readable storage medium
US9692990B2 (en) * 2012-11-30 2017-06-27 WorldViz LLC Infrared tracking system
US20170094197A1 (en) * 2012-11-30 2017-03-30 WorldViz LLC Infrared tracking system
US9110503B2 (en) 2012-11-30 2015-08-18 WorldViz LLC Precision position tracking device
US9541634B2 (en) 2012-11-30 2017-01-10 WorldViz LLC Precision position tracking system
US11619989B2 (en) 2013-03-04 2023-04-04 Tobil AB Gaze and saccade based graphical manipulation
US9898081B2 (en) 2013-03-04 2018-02-20 Tobii Ab Gaze and saccade based graphical manipulation
US11714487B2 (en) 2013-03-04 2023-08-01 Tobii Ab Gaze and smooth pursuit based continuous foveal adjustment
US10353464B2 (en) 2013-03-04 2019-07-16 Tobii Ab Gaze and saccade based graphical manipulation
US10895908B2 (en) 2013-03-04 2021-01-19 Tobii Ab Targeting saccade landing prediction using visual history
US10082870B2 (en) 2013-03-04 2018-09-25 Tobii Ab Gaze and saccade based graphical manipulation
WO2015007732A1 (en) * 2013-07-15 2015-01-22 Rheinmetall Defence Electronics Gmbh Virtual objects in a real 3-d scenario
DE102014109921A1 (en) * 2013-07-15 2015-01-15 Rheinmetall Defence Electronics Gmbh Virtual objects in a real 3D scenario
US10430150B2 (en) * 2013-08-23 2019-10-01 Tobii Ab Systems and methods for changing behavior of computer program elements based on gaze input
US20150058812A1 (en) * 2013-08-23 2015-02-26 Tobii Technology Ab Systems and methods for changing behavior of computer program elements based on gaze input
US10346128B2 (en) 2013-08-23 2019-07-09 Tobii Ab Systems and methods for providing audio to a user based on gaze input
US10635386B2 (en) 2013-08-23 2020-04-28 Tobii Ab Systems and methods for providing audio to a user based on gaze input
US10055191B2 (en) 2013-08-23 2018-08-21 Tobii Ab Systems and methods for providing audio to a user based on gaze input
US10067415B2 (en) 2014-03-19 2018-09-04 Samsung Electronics Co., Ltd. Method for displaying image using projector and wearable electronic device for implementing the same
US9684369B2 (en) 2014-04-08 2017-06-20 Eon Reality, Inc. Interactive virtual reality systems and methods
EP3129111A4 (en) * 2014-04-08 2018-03-07 Eon Reality, Inc. Interactive virtual reality systems and methods
US20150283460A1 (en) * 2014-04-08 2015-10-08 Eon Reality, Inc. Interactive virtual reality systems and methods
US9542011B2 (en) * 2014-04-08 2017-01-10 Eon Reality, Inc. Interactive virtual reality systems and methods
US20160140930A1 (en) * 2014-11-13 2016-05-19 WorldViz LLC Methods and systems for virtual and augmented reality
US10495726B2 (en) 2014-11-13 2019-12-03 WorldViz, Inc. Methods and systems for an immersive virtual reality system using multiple active markers
US9804257B2 (en) * 2014-11-13 2017-10-31 WorldViz LLC Methods and systems for an immersive virtual reality system using multiple active markers
CN104635579A (en) * 2015-01-09 2015-05-20 江门市东方智慧物联网科技有限公司 Bird control system and method based on virtual reality robot remote operation technology
US9990689B2 (en) 2015-12-16 2018-06-05 WorldViz, Inc. Multi-user virtual reality processing
US10269089B2 (en) 2015-12-16 2019-04-23 WorldViz, Inc. Multi-user virtual reality processing
US10095928B2 (en) 2015-12-22 2018-10-09 WorldViz, Inc. Methods and systems for marker identification
US20170177833A1 (en) * 2015-12-22 2017-06-22 Intel Corporation Smart placement of devices for implicit triggering of feedbacks relating to users' physical activities
US10452916B2 (en) 2015-12-22 2019-10-22 WorldViz, Inc. Methods and systems for marker identification
US10242501B1 (en) 2016-05-03 2019-03-26 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
US11450073B1 (en) 2016-05-03 2022-09-20 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
US10922890B1 (en) 2016-05-03 2021-02-16 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
TWI733731B (en) * 2016-05-18 2021-07-21 日商史克威爾 艾尼克斯股份有限公司 Program, computer device, program execution method, and computer system
CN109155835A (en) * 2016-05-18 2019-01-04 史克威尔·艾尼克斯有限公司 Program, computer installation, program excutive method and computer system
US10960310B2 (en) * 2016-05-18 2021-03-30 Square Enix Co., Ltd. Program, computer apparatus, program execution method, and computer system
US20190275426A1 (en) * 2016-05-18 2019-09-12 Square Enix Co., Ltd. Program, computer apparatus, program execution method, and computer system
CN106067160A (en) * 2016-06-21 2016-11-02 江苏亿莱顿智能科技有限公司 Giant-screen merges projecting method
EP3264394B1 (en) * 2016-06-30 2021-02-17 LACS S.r.l. A method and a system for monitoring military tactics simulations
US10613621B2 (en) * 2017-04-07 2020-04-07 Ark Interactive display system and method for operating such a system
US10403050B1 (en) * 2017-04-10 2019-09-03 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
US20180314322A1 (en) * 2017-04-28 2018-11-01 Motive Force Technology Limited System and method for immersive cave application
US10922992B2 (en) * 2018-01-09 2021-02-16 V-Armed Inc. Firearm simulation and training system and method
US11204215B2 (en) 2018-01-09 2021-12-21 V-Armed Inc. Wireless independent tracking system for use in firearm simulation training
US11371794B2 (en) * 2018-01-09 2022-06-28 V-Armed Inc. Firearm simulation and training system and method
US20220299288A1 (en) * 2018-01-09 2022-09-22 V-Armed Inc. Firearm simulation and training system and method
CN112752922A (en) * 2018-05-09 2021-05-04 梦境沉浸股份有限公司 User-selectable tool for optically tracking virtual reality systems
JP2021523503A (en) * 2018-05-09 2021-09-02 ドリームスケイプ・イマーシブ・インコーポレイテッド User Selectable Tool for Optical Tracking Virtual Reality Systems
WO2019215646A1 (en) * 2018-05-09 2019-11-14 Dreamscape Immersive, Inc. User-selectable tool for an optical tracking virtual reality system
US10288381B1 (en) 2018-06-22 2019-05-14 910 Factor, Inc. Apparatus, system, and method for firearms training
US11226677B2 (en) 2019-01-08 2022-01-18 V-Armed Inc. Full-body inverse kinematic (FBIK) module for use in firearm simulation training
US20230075863A1 (en) * 2019-03-29 2023-03-09 Dwango Co., Ltd. Communication device, communication method, and communication program
US11861058B2 (en) * 2019-03-29 2024-01-02 Dwango Co., Ltd. Communication device, communication method, and communication program
US20210302128A1 (en) * 2019-08-14 2021-09-30 Cubic Corporation Universal laserless training architecture
WO2021071584A1 (en) * 2019-08-14 2021-04-15 Cubic Corporation Universal laserless training architecture

Similar Documents

Publication Publication Date Title
US20120156652A1 (en) Virtual shoot wall with 3d space and avatars reactive to user fire, motion, and gaze direction
US9892563B2 (en) System and method for generating a mixed reality environment
US10558048B2 (en) Image display system, method for controlling image display system, image distribution system and head-mounted display
CN113632030A (en) System and method for virtual reality and augmented reality
KR101926178B1 (en) Virtual reality system enabling compatibility of sense of immersion in virtual space and movement in real space, and battle training system using same
US9677840B2 (en) Augmented reality simulator
JP2022530012A (en) Head-mounted display with pass-through image processing
US10300389B2 (en) Augmented reality (AR) gaming system with sight lines to other players
Krum et al. Augmented reality using personal projection and retroreflection
US20160246061A1 (en) Display
CN104380347A (en) Video processing device, video processing method, and video processing system
JP6615732B2 (en) Information processing apparatus and image generation method
CN2793674Y (en) Shooting simulator with combined virtual and real display effect
US20210235064A1 (en) Method and apparatus for perspective adjustment of images for a user at different positions
KR101348195B1 (en) Virtual reality 4d an image firing system make use of a hologram
WO2013111145A1 (en) System and method of generating perspective corrected imagery for use in virtual combat training
KR20210072902A (en) Tactical training system optimized for multiple users to share a watch in augmented reality
Fafard et al. Design and implementation of a multi-person fish-tank virtual reality display
EP3729235B1 (en) Data processing
KR101770188B1 (en) Method for providing mixed reality experience space and system thereof
Barrilleaux Experiences and observations in applying augmented reality to live training
CN112595169A (en) Actual combat simulation system and actual combat simulation display control method
KR20210072900A (en) Tactical training system to share a feedback in augmented reality
US20190089899A1 (en) Image processing device
Sammartino Integrated Virtual Reality Game Interaction: The Archery Game

Legal Events

Date Code Title Description
AS Assignment

Owner name: LOCKHEED MARTIN CORPORATION, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LANE, KENNETH;AKER, JEREMY;BURNS, ERIC;AND OTHERS;REEL/FRAME:025510/0075

Effective date: 20101215

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION