WO2003060830A1 - Method and system to display both visible and invisible hazards and hazard information - Google Patents

Method and system to display both visible and invisible hazards and hazard information

Info

Publication number
WO2003060830A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
display
computer
information
Prior art date
Application number
PCT/US2002/025282
Other languages
French (fr)
Inventor
John Franklin Ebersole
Todd Joseph Furlong
John Franklin Ebersole, Jr.
Mark Stanley Bastian
Andrew Wesley Hobgood
John Franklin Walker
Daniel Alan Eads
Jeffrey Patrick Illig
Original Assignee
Information Decision Technologies, Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 10/192,195 (US6903752B2)
Application filed by Information Decision Technologies, Llc
Priority to EP02806425A (EP1466300A1)
Priority to CA002473713A (CA2473713A1)
Priority to AU2002366994A (AU2002366994A1)
Publication of WO2003060830A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality

Definitions

  • This invention relates to the visualization and indication of real and simulated hazards in operations, training, and communication, and to augmented reality (AR).
  • This invention has use in a wide range of professions in which hazards are present, including navigation, aviation, emergency first response, and counter terrorism.
  • an emergency first responder (EFR) responding to a call may encounter certain chemical compounds in a spill situation which can transform into deadly, invisible, odorless gas.
  • there are also hazards which may not be visible at all (e.g., radiation leaks) yet pose a serious threat to those in the immediate vicinity.
  • a ship pilot or navigator may encounter unseen hazards, such as sunken ships, shallow water, reefs, and objects hidden in fog (including other ships and bridges).
  • An airline pilot may encounter wind shear, wingtip vortices, and other dangerous invisible atmospheric phenomena which could be safely dealt with if the pilot could see them.
  • a method for viewing information about hazards, seen or unseen, would be quite beneficial to the operator. For example, if an airline pilot were able to see otherwise invisible atmospheric phenomena, those hazards could be avoided. If a ship pilot could see hazards present in a fog situation, a waterway would be easier and safer to traverse. If a fire fighter were able to receive textual or iconic messages regarding scene hazards from a scene commander, the fire fighter would be better able to avoid dangers at the scene and help those trapped there. This information could be used in both training and operations to convey relevant situational awareness information to those involved in the incident.
  • the ability to see a hazard, seen or unseen, will better prepare a user to implement the correct procedures for dealing with the situation at hand.
  • the inventive method allows the user to visualize hazards and related indicators containing information which increases the preparedness of the user for the situation.
  • Operational and training settings implementing this method can offer users the ability to see hazards, safe regions in the vicinity of hazards, and other environmental characteristics through use of computer-generated two- and three-dimensional graphical elements. Training and operational situations for which this method is useful include, but are not limited to, typical nuclear, biological, and chemical (NBC) attacks, hazardous materials incidents, airway and waterway interaction, and training which requires actions such as avoidance, response, handling, and cleanup.
  • the inventive method represents an innovation in the field of training and operations.
  • Two purposes of the inventive method are safe and expeditious passage through/around hazard(s), and safe and efficient training and operations. This is accomplished by two categories of media: representations which reproduce a hazard through computer simulation and graphics, and text and icons which present relevant information about a situation so that mission objectives can be completed safely and efficiently.
  • This invention utilizes augmented reality (AR) technology to overlay a display of dangerous materials/hazards and/or relevant data regarding such dangerous materials/hazards onto the real world view in an intuitive, user-friendly format.
  • AR is defined herein to mean combining computer-generated graphical elements with a real world view (which may be static or changing) and presenting the combined view as a replacement for the real world image.
  • these computer-generated graphical elements can be used to present the EFR/trainee/other user with an idea of the extent of the hazard at hand. For example, near the center of a computer-generated element representative of a hazard, the element may be darkened or more intensely colored to suggest extreme danger. At the edges, the element may be light or semitransparent, suggesting an approximate edge to the danger zone where effects may not be as severe.
  • the view of the real world in this invention is typically obtained through a camera mounted at the position of the viewer's eye point on a Head Mounted Display (HMD). This view may also be obtained directly by the viewer's eyes via a see-through HMD. Also, an externally tracked motorized camera may be used to provide an external augmented reality view. This camera would be quite useful in training situations in which one or more people are performing an exercise while observers may watch the view through an external camera. This external camera would also be useful if mounted near a runway at an airport or on a boat in a waterway. This would allow for an augmented view of atmospheric or water navigation hazards.
  • This data may be presented using a traditional interface such as a computer monitor, or it may be projected into a device the user would typically use, such as a head-mounted display (HMD) mounted inside an EFR's mask, an SCBA (Self-Contained Breathing Apparatus), a HAZMAT (hazardous materials) suit, a hardhat, or instrumented binoculars.
  • the view of the EFR/trainee's real environment including visible hazards, visible gasses, and actual structural surroundings, will be seen, overlaid or augmented with computer-generated graphical elements representative of the hazards or information about them.
  • the net result is an augmented reality.
  • the inventive method is useful for training and retraining of personnel within a safe, realistic environment.
  • Computer-generated graphical elements (which are representations and indicators of hazards) are superimposed onto a view of the real training environment and present no actual hazard to the trainee, yet allow the trainee to become familiar with proper procedures within an environment which is more like an actual incident scene.
  • the inventive method is useful for operations.
  • Computer-generated graphical elements (which are representations and indicators of hazards) are superimposed onto a view of the real environment and present relevant information to the user so that the operation can be successfully and safely completed.
  • an air traffic controller could look through instrumented binoculars at a runway to see the actual surroundings, as would normally be seen, augmented with wingtip vortices from planes that are taking off.
  • Atmospheric data that could be displayed in an air navigation implementation include (but are not limited to) wind shear, wingtip vortices, microbursts, and clear air turbulence.
  • One aspect of the inventive method uses blending of images with varying transparency to present the location, intensity, and other properties of the data being displayed. This will present the air traffic controllers and pilots with a visual indication of properties of otherwise invisible atmospheric disturbances.
  • the performance of the user of the system may be recorded using a smart card or other electronic media, such as a computer's hard drive.
  • trainee scores measuring the success or failure of the trainee's grasp of concepts involving hazards and hazard information may be recorded. In operations, data can be recorded as to whether or not an individual has successfully avoided a hazard or used hazard information.
  • the computer-generated graphical element can be a text message, directional representation (arrow), or other informative icon from the incident commander, or geometrical visualizations of the structure. It can be created via a keyboard, mouse or other method of input on a computer or handheld device at the scene.
  • the real world view consists of the EFR's environment, containing elements such as fire, unseen radiation leaks, chemical spills, and structural surroundings.
  • the EFR/trainee will be looking through a head-mounted display, preferably monocular, mounted inside the user's mask (an SCBA in the case of a firefighter).
  • This HMD could also be mounted in a hazmat suit or onto a hardhat.
  • the HMD will preferably be "see-through," that is, the real hazards and surroundings that are normally visible will remain visible without the need for additional equipment.
  • the EFR/trainee's view of the real world is augmented with the text message, icon, or geometrical visualizations of the structure.
  • Types of messages sent to an EFR/trainee include (but are not limited to) location of victims, structural data, building/facility information, environmental conditions, and exit directions/locations.
  • This invention can notably increase the communication effectiveness at the scene of an incident or during a training scenario and result in safer operations, training, emergency response, and rescue procedures.
  • the invention has immediate applications for both the training and operations aspects of the fields of emergency first response, navigation, aviation, and command and control; implementation of this invention will result in safer training, retraining, and operations for all individuals involved in situations where hazards must be dealt with.
  • potential applications of this technology include those involving other training and preparedness (i.e., fire fighting, damage control, counter-terrorism, and mission rehearsal), as well as potential for use in the entertainment industry.
  • FIG 1 depicts an augmented reality display according to the invention that displays a safe path available to the user by using computer-generated graphical poles to indicate where the dangerous regions are.
  • FIG 2 depicts an augmented reality display according to the invention that depicts a chemical spill emanating from a center that contains radioactive materials.
  • FIG 3 is a block diagram indicating the hardware components and interconnectivity of a see-through augmented reality (AR) system useful in the invention.
  • FIG 4 is a block diagram indicating the hardware components and interconnectivity of a video-based AR system involving an external video mixer useful in the invention.
  • FIG 5 is a block diagram indicating the hardware components and interconnectivity of a video-based AR system where video mixing is performed internally to a computer useful in the invention.
  • FIG 6 is a diagram illustrating the technologies involved in an AR waterway navigation system according to this invention.
  • FIG 7 is a block diagram of the components of an embodiment of an AR waterway navigation system according to this invention.
  • FIGS 8-10 are diagrams indicating display embodiments for the AR waterway navigation system of FIG 7.
  • FIG 11 is a diagram of an AR overlay graphic for aid in ship navigation useful in the invention.
  • FIGS 12 and 13 are diagrams of an AR scene where depth information is overlaid on a navigator's viewpoint as semi-transparent color fields useful in the invention.
  • FIGS 14 and 15 are diagrams of an overlay for a land navigation embodiment of the invention.
  • FIGS 16-18 are diagrams of an overlay for an air navigation embodiment of the invention.
  • FIG 19 is a block diagram of an embodiment of the method of this invention, labeling both data flow and operators.
  • FIG 20 is a schematic diagram of the hardware components and interconnectivity of a see-through augmented reality (AR) system that can be used in this invention.
  • FIG 21 is a schematic diagram of the hardware components and interconnectivity of a video-based AR system for this invention involving an external video mixer.
  • FIG 22 is a schematic diagram of the hardware components and interconnectivity of a video-based AR system for this invention where video mixing is performed internally to a computer.
  • FIG 23 is a representation of vortex trails being visualized behind an airplane.
  • FIG 24 is another representation of vortex trails being visualized.
  • FIG 25 is another representation of wingtip vortices as viewed at a farther distance.
  • FIG 26 is a similar top view of parallel takeoff of aircraft.
  • FIG 27 depicts atmospheric phenomena, with an image of nonhomogeneous transparency used to convey information for the invention.
  • FIG 28 also depicts atmospheric phenomena.
  • FIG 29 shows an example of an irregular display of vortex trails.
  • FIG 30 shows representations of wingtip vortices visualized behind the wings of a real model airplane.
  • FIG 31 is a schematic view of a motorized camera and motorized mount connected to a computer for the purpose of tracking and video capture for augmented reality, for use in a preferred embodiment of the invention.
  • FIG 32 is a close-up view of the camera and motorized mount of FIG 31.
  • FIG 33 schematically depicts an augmented reality display with computer-generated indicators displayed over an image as an example of a result of this invention.
  • FIG 34 is the un-augmented scene from FIG 33 without computer-generated indicators.
  • This image is a real-world image captured directly from the camera.
  • FIG 35 is an augmented reality display of the same scene as that of FIG 33 but from a different camera angle where the computer-generated indicators that were in FIG 33 remain anchored to the real-world image.
  • FIG 36 is the un-augmented scene from FIG 35 without computer-generated indicators.
  • FIG 37 is a schematic diagram of the system components that can be used to accomplish the preferred embodiments of the inventive method.
  • FIG 38 is a conceptual drawing of a firefighter's SCBA with an integrated monocular eyepiece that the firefighter may see through for the invention.
  • FIG 39 is a view as seen from inside the HMD of a text message accompanied by an icon indicating a warning of flames ahead for the invention.
  • FIG 40 is a possible layout of an incident commander's display in which waypoints are placed useful in the invention.
  • FIG 41 is a possible layout of an incident commander's display in which an escape route or path is drawn for the invention.
  • FIG 42 is a text message accompanied by an icon indicating that the EFR is to proceed up the stairs for the invention.
  • FIG 43 is a waypoint which the EFR is to walk towards for the invention.
  • FIG 44 is a potential warning indicator warning of a radioactive chemical spill for the invention.
  • FIG 45 is a wireframe rendering of an incident scene as seen by an EFR for the invention.
  • FIG 46 shows a possible layout of a tracking system, including emitters and receiver on user.
  • FIG 47 shows the opening screen of a preferred embodiment of the invention embodied with a software training tool, which shows who the card belongs to and what the recent history of training activity has been.
  • FIG 48 shows a screen evidencing a scenario where a fire began and then the trainee extinguished the fire.
  • FIG 49 shows the output screen after the above scenario ended, summarizing the results of the scenario.
  • FIG 50 shows the output screen after the above scenario ended, showing how many points the trainee gained in putting out this fire.
  • FIG 51 schematically depicts the basic hardware required to enable the invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS OF THE INVENTION
  • This invention involves a method for visualization of hazards and information about hazards utilizing computer-generated three-dimensional representations, typically displayed to the user in an augmented reality.
  • augmented reality is used; the software and hardware for this technology are now described briefly.
  • the hardware for augmented reality (AR) consists minimally of a computer 7, a see-through display 9, and motion tracking hardware 8, as diagrammed in FIG 3. In such an embodiment, motion tracking hardware 8 is used to determine the user's head position and orientation.
  • the computer 7 uses the information from the motion tracking equipment 8 in order to generate an image which is overlaid on the see-through display 9 and which appears to be anchored to a real-world location or object.
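  • As a minimal illustration of that per-frame step, the sketch below shows one way the tracked head pose could be applied so world-anchored graphics stay registered. It is a sketch only: the patent does not specify a tracker API or rotation convention, so readHeadPose() and the rotation order are assumptions, and the code is meant to run inside an existing legacy OpenGL context.

```cpp
#include <GL/gl.h>

// Placeholder for the motion tracking hardware (item 8): a real system would
// query the tracker here. The struct layout and function are assumptions.
struct HeadPose { float x, y, z;              // position, world coordinates
                  float yaw, pitch, roll; };  // orientation, degrees
static HeadPose readHeadPose() { return HeadPose{0.f, 1.7f, 0.f, 0.f, 0.f, 0.f}; }

// Placeholder for the hazard renderer: draws the computer-generated elements.
static void drawHazardGraphics() { /* poles, spills, icons, text ... */ }

// Per-frame step driving the see-through display (item 9). Only the virtual
// layer is drawn; the real world is seen directly through the optics, so no
// video mixing is needed in this embodiment.
void renderAugmentedFrame()
{
    const HeadPose h = readHeadPose();

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // Apply the inverse of the head pose so graphics anchored to world
    // coordinates stay registered as the head moves (one common rotation order).
    glRotatef(-h.roll,  0.f, 0.f, 1.f);
    glRotatef(-h.pitch, 1.f, 0.f, 0.f);
    glRotatef(-h.yaw,   0.f, 1.f, 0.f);
    glTranslatef(-h.x, -h.y, -h.z);

    drawHazardGraphics();
}
```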
  • Other embodiments of AR systems include video-based (non-see-through) hardware, as diagrammed in FIG 4 and FIG 5.
  • FIG. 4 uses an external video mixer 10 that combines computer-generated imagery with live camera video via a luminance key or chroma key.
  • the second video-based embodiment involves capturing live video in the computer 7 with a frame grabber and overlaying opaque or semi-transparent imagery internally to the computer.
  • Another video-based embodiment involves a remote camera.
  • Motion tracking equipment 8 can control motors that orient a camera mounted on a high-visibility position on a platform, allowing an augmented reality telepresence system.
  • the inventive method requires a display unit (FIG 8) in order for the user to view computer-generated graphical elements representative of hazards and information 13 overlaid onto a view of the real world - the view of the real world is augmented with the representations and information.
  • the net result is an augmented reality.
  • the display unit is a "heads-up" type of display 16 (see FIG 10) (in which the user's head usually remains in an upright position while using the display unit), preferably a Head-Mounted Display (HMD) 14.
  • HMD Head-Mounted Display
  • the display device could be a "heads-down" type of display, similar to a computer monitor, used within a vehicle (i.e., mounted in the vehicle's interior).
  • the display device could also be used within an aircraft (i.e., mounted on the control panel or other location within a cockpit) and would, for example, allow a pilot or other navigator to "visualize" vortex data and unseen runway hazards (possibly due to poor visibility because of fog or other weather issues).
  • any stationary computer monitor, display devices which are moveable yet not small enough to be considered "handheld," and display devices which are not specifically handheld but are otherwise carried or worn by the user, could serve as a display unit for this method.
  • the image of the real world may be static or moving.
  • the inventive method can also utilize handheld display units 15.
  • Handheld display units can be either see-through or non-see-through. In one embodiment, the user looks through the "see-through" portion (a transparent or semitransparent surface) of the handheld display device (which can be a monocular or binocular type of device) and views the computer-generated elements projected onto the view of the real surroundings.
  • a pair of binoculars instrumented to display information would be a hand-held display unit that would preferably be used.
  • the user could see real hazards that are present that normally would be difficult or impossible to see. Information about avoiding these hazards could also be displayed.
  • a preferred embodiment of the inventive method uses a see-through HMD to define a view of the real world.
  • the "see-through” nature of the display device allows the user to "capture” the view ofthe real world simply by looking through an appropriate part ofthe equipment. No mixing of real world imagery and computer-generated graphical elements is required - the computer-generated imagery is projected directly over user's view of the real world as seen through a semi-transparent display.
  • This optical-based embodiment minimizes necessary system components by reducing the need for additional hardware and software used to capture images of the real world and to blend the captured real world images with the computer-generated graphical elements.
  • Embodiments of the method using non-see-through display units obtain an image of the real world with a video camera connected to a computer via a video cable.
  • the video camera may be mounted onto the display unit.
  • the image of the real world is mixed, using a commercial-off-the-shelf (COTS) mixing device, with the computer-generated graphical elements and then presented to the user.
  • a video-based embodiment of this method could use a motorized camera mount for tracking the position and orientation of the camera.
  • System components would include a COTS motorized camera, a COTS video mixing device, and software developed for the purpose of telling the computer the position and orientation of the camera mount. This information is used to facilitate accurate placement of the computer-generated graphical elements within the user's composite view.
  • mixing of the real and computer generated images can be done via software on a computer. In this case, the image acquired by the motorized camera would be received by the computer.
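  • The following is a minimal sketch of such software ("onboard") mixing, assuming the frame grabber delivers a packed RGB camera frame and the renderer delivers an RGBA overlay; the buffer layouts and names are illustrative, not taken from the patent.

```cpp
#include <cstdint>
#include <vector>

struct Frame   { int w, h; std::vector<std::uint8_t> rgb;  };  // camera image, 3 bytes/pixel
struct Overlay { int w, h; std::vector<std::uint8_t> rgba; };  // rendered layer, 4 bytes/pixel

// Standard "over" compositing: where the overlay is transparent (alpha = 0) the
// real-world pixel shows through; opaque hazard graphics replace it; partial
// alpha gives the semi-transparent overlays described elsewhere in the text.
void compositeOver(Frame& cam, const Overlay& ovl)
{
    for (int i = 0; i < cam.w * cam.h; ++i) {
        const int a = ovl.rgba[4 * i + 3];
        for (int c = 0; c < 3; ++c) {
            const int real = cam.rgb[3 * i + c];
            const int virt = ovl.rgba[4 * i + c];
            cam.rgb[3 * i + c] =
                static_cast<std::uint8_t>((virt * a + real * (255 - a)) / 255);
        }
    }
}
```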
  • External tracking devices can also be used in the video-based embodiment.
  • a GPS tracking system, an optical tracking system, or another type of tracking system would provide the position and orientation of the camera.
  • the position and orientation of the view need to be determined so that the computer generated imagery can be generated to be viewed from the right perspective.
  • This imagery may require very accurate registration (alignment of computer generated and real objects) for successful implementation.
  • Some uses may only require rough registration. For a scenario in which a hazard needs to be anchored in place, registration and tracking must be quite good. For an information display in which a general direction of traversal is being conveyed, the registration may only need to be of rough quality.
  • a tracking system that uses inertial/acoustic, magnetic, or optical tracking may be desired. These systems are designed for precision tracking, measuring position and orientation accurate to within millimeters or degrees.
  • For wide-area tracking in which the system will be outdoors, a GPS system can be used. This embodiment would work well for navigational implementations of this invention, such as systems to aid in navigation of waterways and airways.
  • the invention can utilize a motorized camera mount with a built-in position tracker. This allows the computer program in the system to query the motorized camera to determine the camera's local orientation.
  • the base of the camera may or may not be stationary. If the base is not stationary, the moving base must be tracked by a separate 6-DOF method to determine the orientation and position of the camera in the world coordinate system. This situation could be applicable on a ship, airplane, or automobile where the base of the camera mount is fixed to the moving platform, but not fixed in world coordinates.
  • a GPS tracking system, an optical tracking system, or some other kind of tracking system must provide a position and orientation of the base of the camera.
  • a GPS system could be used to find the position and orientation of the base. It would then use the camera's orientation sensors to determine the camera's orientation relative to the camera's base, the orientation and position of which must be known. Such a system could be placed on a vehicle, aircraft, or ship. Another example would include mounting the camera base on a 6-DOF gimbaled arm. As the arm moves, it can be tracked in 3D space. Similar to the previous example, this position and orientation can be added to the data from the camera to find the camera's true position and orientation in world coordinates.
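  • A sketch of that pose composition follows; the matrix representation and names are assumptions, but the idea is simply that the camera's world orientation is the tracked base orientation composed with the pan/tilt reported relative to the base, and its world position is the base position plus the rotated lens offset.

```cpp
#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;

// Multiply two 3x3 rotation matrices.
Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k) r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Rotate a vector by a 3x3 matrix.
Vec3 rotate(const Mat3& m, const Vec3& v) {
    return { m[0][0]*v[0] + m[0][1]*v[1] + m[0][2]*v[2],
             m[1][0]*v[0] + m[1][1]*v[1] + m[1][2]*v[2],
             m[2][0]*v[0] + m[2][1]*v[1] + m[2][2]*v[2] };
}

struct Pose { Vec3 position; Mat3 orientation; };

// baseInWorld: 6-DOF track of the moving base (e.g., GPS plus gimbal/inertial sensors)
// headRelBase: pan/tilt orientation reported by the motorized camera, relative to its base
// lensOffset : fixed offset of the camera optics from the base, in base coordinates
Pose cameraInWorld(const Pose& baseInWorld, const Mat3& headRelBase,
                   const Vec3& lensOffset)
{
    Pose out;
    out.orientation = mul(baseInWorld.orientation, headRelBase);
    const Vec3 off = rotate(baseInWorld.orientation, lensOffset);
    out.position = { baseInWorld.position[0] + off[0],
                     baseInWorld.position[1] + off[1],
                     baseInWorld.position[2] + off[2] };
    return out;
}
```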
  • the motorized camera may also use an open-loop architecture, in which the computer cannot request a report from the camera containing current orientation data.
  • the computer drives the camera mount to a specified orientation, and external motion of the camera is not permitted.
  • the system knows the position of the camera by assuming that the camera, in fact, went to the last location directed by the computer.
  • the system may also use a feedback architecture. In this scenario, the system will send a command to the camera to move to a specified position, and then the system may request a report from the camera that contains the current position of the camera, correcting it again if necessary.
  • the motorized camera may operate in a calibrated configuration, in which a computer-generated infinite horizon and center-of-screen indicator are used to verify anchoring and registration of computer-generated objects to real-world positions. In this case, the computer can know exactly where the camera is looking in fully correct, real world coordinates.
  • the system may also operate in an uncalibrated configuration, which would not guarantee perfect registration and anchoring but which may be suitable in certain lower-accuracy applications.
  • the preferred embodiment of motion tracking hardware for use with a navigation embodiment is a hybrid system, which fuses data from multiple sources (FIG 7) to produce accurate, real-time updates of the navigator's head position and orientation.
  • the computer needs to know the navigator's head position and orientation in the real world to properly register and anchor virtual (computer- generated) objects in a real environment.
  • Information on platform position and/or orientation gathered from one source may be combined with position and orientation of the navigator's head relative to the platform and/or world gathered from another source in order to determine the position and orientation of the navigator's head relative to the outside world.
  • GPS/DGPS Platform Tracking. The first part of a hybrid tracking system for this invention consists of tracking the platform.
  • One embodiment of the invention uses a single GPS or DGPS receiver system to provide 3 degrees-of-freedom (DOF) platform position information.
  • Another embodiment uses a two-receiver GPS or DGPS system to provide the platform's heading and pitch information in addition to position (5-DOF); a sketch of this baseline calculation appears below.
  • Another embodiment uses a three-receiver GPS or DGPS system to provide 6-DOF position and orientation information of the platform. In some embodiments, additional tracking equipment is required to determine, in real-time, a navigator's viewpoint position and orientation for registration and anchoring in AR.
  • the simplest embodiment of tracking for AR platform navigation would be to track the platform position with three receivers and require the navigator's head to be in a fixed position on the platform to see the AR view.
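  • For the two-receiver embodiment mentioned above, the baseline calculation could look like the following sketch; the local east/north/up frame, function names, and example numbers are illustrative assumptions, not taken from the patent.

```cpp
#include <cmath>
#include <cstdio>

struct Enu { double e, n, u; };   // metres, local east/north/up frame

// Heading and pitch of the platform from the baseline between the aft and
// forward GPS/DGPS antennas.
void headingPitchFromBaseline(const Enu& aft, const Enu& fore,
                              double& headingDeg, double& pitchDeg)
{
    const double kRadToDeg = 180.0 / 3.14159265358979323846;
    const double dE = fore.e - aft.e, dN = fore.n - aft.n, dU = fore.u - aft.u;
    const double horiz = std::sqrt(dE * dE + dN * dN);
    headingDeg = std::atan2(dE, dN) * kRadToDeg;   // 0 deg = north, 90 deg = east
    pitchDeg   = std::atan2(dU, horiz) * kRadToDeg;
}

int main()
{
    double hdg = 0.0, pitch = 0.0;
    headingPitchFromBaseline({0.0, 0.0, 0.0}, {10.0, 10.0, 0.5}, hdg, pitch);
    std::printf("heading %.1f deg, pitch %.1f deg\n", hdg, pitch);  // ~45.0, ~2.0
    return 0;
}
```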
  • Head Tracking One GPS Receiver (hybrid).
  • the navigator's head position relative to the GPS receiver and the navigator's head orientation in the real world must be determined to complete the hybrid tracking system.
  • An electronic compass or a series of GPS positions can be used to determine platform heading in this embodiment, and an inertial sensor attached to the navigator's head can determine the pitch and roll of the navigator's head.
  • a magnetic, inertial/acoustic, optical, or other tracking system attached to the navigator's head can be used to track the position and orientation ofthe navigator's head relative to the platform.
  • Head Tracking Two GPS Receivers (hybrid). In the embodiment consisting of two GPS receivers, an electronic compass would not be needed. However, the hybrid tracking system would require an inertial sensor and a magnetic, acoustic, optical, or other tracking system in order to be able to determine the real-world position and orientation of the navigator's viewpoint.
  • Head Tracking Three GPS Receivers (hybrid).
  • a three GPS receiver embodiment requires the addition of 6-DOF head tracking relative to the platform. This can be accomplished with magnetic, acoustic, or optical tracking.
  • the inventive method utilizes computer-generated three-dimensional graphical elements to represent actual and fictional hazards, as well as information about actual and fictional hazards.
  • the computer-generated imagery is combined with the user's real world view such that the user visualizes hazards and information within his/her immediate surroundings.
  • the visualization can provide the user with information regarding the location, size, and shape of the hazard; the location of safe regions (such as a path through a region that has been successfully decontaminated of a biological or chemical agent, or a path through a waterway which is free from floating debris) in the immediate vicinity of the hazard; and the severity of the hazard.
  • the representation of the hazard can look and sound like the hazard itself (i.e., a different representation for each hazard type); it can be an icon indicative of the size and shape of the appropriate hazard; or it can be a text message or other display informing the user about the hazard.
  • the representation can be a textual message, which would provide information to the user, overlaid onto a view of the real background, possibly in conjunction with the other, nontextual graphical elements, if desired.
  • the representations can also serve as indications of the intensity and size of a hazard.
  • Properties such as fuzziness, fading, transparency, and blending can be used within a computer-generated graphical element to represent the intensity, spatial extent, and edges of hazard(s). For example, a representation of a hazardous material spill could show darker colors at the most heavily saturated point of the spill and fade to lighter hues and greater transparency at the edges, indicating less severity at the edges of the spill.
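  • A small sketch of that falloff idea follows; the linear ramp and the particular colors are assumptions (any monotone falloff would serve), mapping distance from the spill center to a color and opacity.

```cpp
// Color and opacity of one point of a spill representation, as a function of
// its distance from the most heavily saturated center of the spill.
struct HazardColour { float r, g, b, alpha; };

HazardColour spillColour(float distFromCentre, float spillRadius)
{
    float t = distFromCentre / spillRadius;        // 0 at centre, 1 at edge
    if (t > 1.0f) t = 1.0f;
    HazardColour c;
    c.r = 0.35f + 0.55f * t;                       // dark red -> pale red
    c.g = 0.05f + 0.45f * t;
    c.b = 0.05f + 0.45f * t;
    c.alpha = 1.0f - t;                            // opaque centre, transparent edge
    return c;
}
```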
  • Audio warning components appropriate to the hazard(s) being represented can also be used in this invention. Warning sounds can be presented to the user along with the mixed view of rendered graphical elements and reality. Those sounds may have features that include, but are not limited to, chirping, intermittent, steady frequency, modulated frequency, and/or changing frequency.
  • the computer-generated representations can be classified into two categories: reproductions and indicators.
  • Reproductions are computer-generated replicas of an element, seen or unseen, which would pose a danger to a user if it were actually present.
  • Reproductions also visually and audibly mimic actions of hazards (e.g., a computer-generated representation of water might turn to steam and emit a hissing sound when coming into contact with a computer-generated representation of fire).
  • Representations which would be categorized as reproductions can be used to indicate appearance, location and/or actions of many visible hazards, including, but not limited to, fire, water, smoke, heat, radiation, chemical spills (including display of different colors for different chemicals), and poison gas.
  • reproductions can be used to simulate the appearance, location and actions of unreal hazards and to make invisible hazards visible. This is useful for many applications, such as training scenarios where actual exposure to a hazard is too dangerous, or when a substance, such as radiation, is hazardous and invisible.
  • Representations which are reproductions of normally invisible hazards maintain the properties of the hazard as if the hazard were visible - invisible gas has the same movement properties as visible gas and will act accordingly in this method.
  • Reproductions which make normally invisible hazards visible include, but are not limited to, steam, heat, radiation, and poison gas.
  • the second type of representation is an indicator.
  • Indicators provide information to the user, including, but not limited to, indications of hazard locations (but not appearance), warnings, instructions, or communications. Indicators may be represented in the form of text messages and icons, as described above. Examples of indicator information may include procedures for dealing with a hazardous material, location of a member of a fellow EFR team member, or a message noting trainee death by fire, electrocution, or other hazard (useful for training purposes).
  • the inventive method utilizes representations which can appear as many different hazards.
  • hazards and the corresponding representations may be stationary three-dimensional objects, such as signs or poles. They could also be moving hazards, such as unknown liquids or gasses that appear to be bubbling or flowing out of the ground.
  • Some real hazards blink (such as a warning indicator which flashes and moves) or twinkle (such as a moving spill which has a metallic component); the computer-generated representation of those hazards would behave in the same manner. In FIG 1, an example of a display resulting from the inventive method is presented, indicating a safe path to follow 3 in order to avoid coming in contact with a chemical spill 1 or other kind of hazard 1 by using computer-generated poles 2 to demarcate the safe area 3 from the dangerous areas 1.
  • FIG 2 shows a possible display to a user where a chemical/radiation leak 5 is coming out of the ground and visually fading to its edge 4, and simultaneously shows bubbles 6 which could represent the action of bubbling (from a chemical/biological danger), foaming (from a chemical/biological danger), or sparkling (from a radioactive danger).
  • Movement of the representation of the hazard may be done with animated textures mapped onto three-dimensional objects. For example, movement of a "slime" type of substance over a three-dimensional surface would be accomplished by animating to show perceived outward motion from the center of the surface. This is done by smoothly changing the texture coordinates in OpenGL, and the result is smooth motion of a texture mapped surface.
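  • The following legacy OpenGL fragment sketches that texture-coordinate animation in simplified form (a linear scroll rather than a radial outward flow), assuming a repeating texture is already bound and a GL context exists; shifting the texture matrix a little each frame makes the mapped surface appear to flow without moving any geometry.

```cpp
#include <GL/gl.h>

void drawFlowingSurface(float timeSeconds)
{
    const float flowSpeed = 0.15f;                        // texture repeats per second
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(flowSpeed * timeSeconds, 0.0f, 0.0f);    // slide the texture coordinates
    glMatrixMode(GL_MODELVIEW);

    glBegin(GL_QUADS);                                    // one texture-mapped patch of the spill
    glTexCoord2f(0.f, 0.f); glVertex3f(-1.f, 0.f, -1.f);
    glTexCoord2f(4.f, 0.f); glVertex3f( 1.f, 0.f, -1.f);
    glTexCoord2f(4.f, 4.f); glVertex3f( 1.f, 0.f,  1.f);
    glTexCoord2f(0.f, 4.f); glVertex3f(-1.f, 0.f,  1.f);
    glEnd();
}
```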
  • the representations describing hazards and other information may be placed in the appropriate location by several methods.
  • the user can enter information (such as significant object positions and types) and representations into his/her computer upon encountering hazards or victims while traversing the space, and can enter such information to a database either stored on the computer or shared with others on the scene.
  • a second, related method would be one where information has already been entered into a pre-existing, shared database, and the system will display representations by retrieving information from this database.
  • a third method could obtain input data from sensors such as video cameras, thermometers, motion sensors, or other instrumentation placed by EFRs or pre-installed in the space.
  • the rendered representations can also be displayed to the user without a view of the real world. This would allow users to become familiar with the characteristics of a particular hazard without the distraction of the real world in the background. This kind of view is known as virtual reality (VR).
  • digital navigation charts in both raster and vector formats
  • Digital chart data may be translated into a format useful for AR, such as a bitmap, a polygonal model, or a combination of the two (e.g., texture-mapped polygons).
  • Radar information is combined with digital charts in existing systems, and an AR navigation aid can also incorporate a radar display capability to detect hazards such as the locations of other ships and unmapped coastal features.
  • a challenge in the design of an AR hazard display information system is determining the best way to present relevant information to the navigator, while minimizing cognitive load.
  • current ship navigation systems present digital chart and radar data on a "heads-down" computer screen located on the bridge of a ship. These systems require navigators to take their eyes away from the outside world to ascertain their location and the relative positions of hazards.
  • An AR overlay can be used to superimpose only pertinent information directly on a navigator's view when and where it is needed.
  • FIG 11 shows a diagram of a graphic for overlay on a navigator's view.
  • the overlay includes wireframe representations of bridge pylons 17 and a sandbar 18.
  • FIG 12 shows another real world view of a waterway, complete with trees 22, shore 24, mountains 21, and river 23.
  • FIG 13 shows another display embodiment in which color-coded depths are overlaid on a navigator's view. All of the real world elements remain visible. In this embodiment, the color fields indicating depth are semi-transparent. The depth information, as seen in the Depth Key 26, can come from charts or from a depth finder.
  • the current heading 25 is displayed to the navigator in the lower left corner of the overlay. A minimally intrusive overlay is generally considered to have the greatest utility.
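  • One possible depth-to-color mapping behind such a Depth Key is sketched below; the thresholds, colors, and dependence on vessel draft are assumptions chosen for illustration, not values from the patent.

```cpp
#include <cstdint>

struct Rgba { std::uint8_t r, g, b, a; };

// Map charted or depth-finder depth to a semi-transparent color field for
// overlay on the navigator's view (compare the Depth Key 26 of FIG 13).
Rgba depthToColor(double depthMetres, double draftMetres)
{
    const std::uint8_t kAlpha = 96;                   // semi-transparent overlay
    if (depthMetres < draftMetres)        return {255,   0, 0, kAlpha};  // no-go water
    if (depthMetres < draftMetres + 2.0)  return {255, 160, 0, kAlpha};  // caution
    if (depthMetres < draftMetres + 10.0) return {255, 255, 0, kAlpha};  // shallow
    return {0, 200, 0, kAlpha};                                          // safe water
}
```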
  • the inventive method for utilizing computer-generated three-dimensional representations to visualize hazards has many possible applications. Broadly, the representations can be used extensively for both training and operations scenarios.
  • Training with this method also allows for intuitive use of the method in actual operations. Operational use of this method would use representations of hazards where dangerous unseen objects or events are occurring or could occur (e.g., computer-generated visible gas being placed in the area where real unseen gas is expected to be located). Applications include generation of computer-generated elements while conducting operations in dangerous and emergency situations.
  • once the computer renders the representation, it is combined with the real world image. In the preferred optical-based embodiment, the display of the rendered image is on a see-through HMD, which allows the view of the real world to be directly visible to the user through the use of partial mirrors, and to which the rendered image is added.
  • Video-based embodiments utilizing non-see-through display units require additional hardware and software for mixing the captured image of the real world with the representation of the hazard.
  • the captured video image of the real world is mixed with the computer-generated graphical elements via an onboard or external image combiner.
  • Onboard mixing is performed via software.
  • External mixing can be provided by commercial-off-the-shelf (COTS) mixing hardware, such as a Videonics video mixer or Coriogen Eclipse keyer.
  • Such an external solution would accept the video signal from the camera and a computer generated video signal from the computer and combine them into the final augmented reality image.
  • FIG 14 shows a real world view, including mountains 28, trees 27, shoulders 29, as well as the road 30, as seen by a driver of a vehicle. Dangerous areas of travel and/or a preferred route may be overlaid on a driver's field of view, as shown in FIG 15. The real world features are still visible through the display, which may also be displayed in color. The driver's heading 31 is displayed as well as a Key 32. Air navigation is another potential embodiment, providing information to help in low-visibility aircraft landings and aircraft terrain avoidance.
  • FIG 16 shows the real world view of a runway in a clear visibility situation.
  • FIG 17 shows the same real world view in a low visibility situation, complete with mountains 34, trees 33, shoulder 35, and runway 36, augmented with fog 37.
  • FIG 18 further adds to the display by adding colors or patterns to indicate safe and unsafe areas (as indicated in the Key 39) and heading 38. Similar technologies to those described for waterway navigation would be employed to implement systems for either a land or air navigation application.
  • The FIG 6 technologies, with the exception of the Ship Radar block (which can be replaced with a "Land Radar" or "Aircraft Radar" block), are all applicable to land or air embodiments.
  • Another preferred embodiment of the invention involves visualization of invisible atmospheric phenomena. The following paragraphs illustrate an embodiment of this method.
  • FIG 19 illustrates the data flow that defines the preferred method of the invention for visualizing otherwise invisible atmospheric phenomena.
  • Data 41 can come from a variety of sources 40 - sensor data, human-reported data, or computer simulation data - concerning atmospheric phenomena in a particular area.
  • the data 41 are used in a modeler 42 to create a model 43 of the atmospheric phenomena, or the atmosphere in the area.
  • This model 43 and a viewpoint 45 from a pose sensor 44 are used by a computer 46 to render a computer-generated image 47 showing how the modeled phenomena would appear to an observer at the chosen viewpoint.
  • "Viewpoint” is used to mean the position and orientation of an imaging sensor (i.e., any sensor which creates an image, such as a video camera), eye, or other instrument "seeing" the scene.
  • the first step in the process is to gather data about relevant atmospheric phenomena. At least three pieces of data about a phenomenon are important - type, intensity, and extent. Types of phenomena include, for example, aircraft wingtip vortices and microbursts (downdrafts inside thunder clouds). Other important phenomena would include areas of wind shear and clouds with electrical activity. The type of phenomenon is relevant because some phenomena are more likely to be dangerous, move faster, and/or dissipate faster than others. Each type may warrant a different amount of caution on the part of pilots and air traffic controllers. The intensity of a phenomenon is similarly important, as a weak and dissipating phenomenon may not require any special action, while a strong or growing one may require rerouting or delaying aircraft.
  • the size of a phenomenon is important, as it tells pilots and air traffic controllers how much of a detour is in order. Larger detours increase delays, and knowing the size, growth rate, and movement of the phenomenon allow pilots and air traffic controllers to estimate the minimum safe detour.
  • a third possible source of this data is atmospheric simulation. For instance, based on known wind strength, direction, and magnitude of turbulence, it may be possible to calculate the evolution of wingtip vortex positions. In the preferred embodiment, data about wingtip vortices could be taken from a simulation or from airport sensors.
  • data about microbursts come from a point-and-click interface where a user selects the center of a microburst and can modify its reported size and intensity.
  • the second step in the visualization method involves a modeler 42 converting the data 41 into a model 43 of the atmosphere in a region.
  • the preferred embodiment computes simulated points along possible paths of wingtip vortices of a (simulated) aircraft. Splines are then generated to interpolate the path of wingtip vortices between the known points.
  • Other atmospheric phenomena are stored in a list, each with a center position, dimensions, and maximum intensity.
  • a more accurate system might use more complicated representations, for instance allowing phenomena to have complex shapes (e.g., an anvil-shaped thunder cloud), or using voxels or vector fields for densely sampled regions.
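  • A compact sketch of the simpler model described above (spline-interpolated vortex samples plus a list of phenomena stored with a center position, dimensions, and maximum intensity) follows; the Catmull-Rom form is one common spline choice, and the patent does not specify which spline is used.

```cpp
#include <vector>
#include <string>

struct Vec3 { double x, y, z; };

// Catmull-Rom interpolation between p1 and p2 (t in [0,1]), using p0 and p3
// as the neighbouring samples along the vortex path.
Vec3 catmullRom(const Vec3& p0, const Vec3& p1, const Vec3& p2, const Vec3& p3, double t)
{
    auto blend = [t](double a, double b, double c, double d) {
        const double t2 = t * t, t3 = t2 * t;
        return 0.5 * ((2 * b) + (-a + c) * t +
                      (2 * a - 5 * b + 4 * c - d) * t2 +
                      (-a + 3 * b - 3 * c + d) * t3);
    };
    return { blend(p0.x, p1.x, p2.x, p3.x),
             blend(p0.y, p1.y, p2.y, p3.y),
             blend(p0.z, p1.z, p2.z, p3.z) };
}

struct VortexTrail {               // sampled points along one wingtip vortex
    std::vector<Vec3> samples;
};

struct Phenomenon {                // microburst, wind shear cell, etc.
    std::string type;
    Vec3   centre;
    Vec3   dimensions;             // extent along each axis, metres
    double maxIntensity;           // peak value at the centre
};

struct AtmosphereModel {
    std::vector<VortexTrail> vortices;
    std::vector<Phenomenon>  phenomena;
};
```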
  • An alternative to representing the atmospheric phenomena with complex 3D geometric shapes would be the use of icons (which may be simple or complex, depending on the preference of the user).
  • the icons would require less computer rendering power, and might not clutter the display as much.
  • the use of a textual representation overlaid onto the display can show specifics of the phenomena such as type, speed, altitude, dimensions (size), and importance (to draw attention to more dangerous phenomena).
  • the user may wish to display the textual display either by itself or in conjunction with the other display options of icons or 3D geometric shapes.
  • the third step in the visualization method uses computer graphics 46 to render a scene, defined by a model of the atmospheric phenomena 43, from a particular viewpoint 45, producing a computer-generated image 47.
  • the preferred embodiment uses the OpenGL® (SGI, Mountain View, CA) programming interface, drawing the models of the atmospheric phenomena as sets of triangles.
  • the software in the preferred embodiment converts the splines that model wingtip vortices into a set of ribbons arranged in a star cross-section shape, which has the appearance of a tube when viewed from nearly any direction. Texture mapping provides a color fade from intense along the spline to transparent at the ribbon edges.
  • the software uses the technique of billboarding.
  • the software finds a plane passing through a phenomenon's center location and normal to the line from viewpoint to center, uses the size of a phenomenon to determine the radius of a circle in that plane, and draws a fan of triangles to approximate that circle.
  • Different colors are used for different types of phenomena, and alpha blending of these false colors shows an intensity falloff from the center to the edge of each phenomenon.
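  • That billboarding step can be sketched as follows in legacy OpenGL (a GL context and alpha blending are assumed): two axes are built in the plane through the phenomenon's center that faces the viewpoint, and a triangle fan approximates the circle, with the false color fading to transparent at the radius.

```cpp
#include <GL/gl.h>
#include <cmath>

struct V3 { float x, y, z; };

static V3 sub(V3 a, V3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static V3 norm(V3 a)        { float l = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
                              return {a.x / l, a.y / l, a.z / l}; }

void drawBillboardedPhenomenon(V3 eye, V3 centre, float radius,
                               float r, float g, float b, float intensity)
{
    const V3 n  = norm(sub(centre, eye));               // viewpoint-to-centre line
    const V3 up = std::fabs(n.y) < 0.99f ? V3{0, 1, 0} : V3{1, 0, 0};
    const V3 right     = norm(cross(up, n));            // two axes spanning the plane
    const V3 upInPlane = cross(n, right);               // normal to that line

    glBegin(GL_TRIANGLE_FAN);
    glColor4f(r, g, b, intensity);                      // intense at the centre
    glVertex3f(centre.x, centre.y, centre.z);
    const int kSeg = 36;
    for (int i = 0; i <= kSeg; ++i) {
        const float a  = 2.0f * 3.14159265f * i / kSeg;
        const float cx = radius * std::cos(a), cy = radius * std::sin(a);
        glColor4f(r, g, b, 0.0f);                       // falls off to transparent at the edge
        glVertex3f(centre.x + cx * right.x + cy * upInPlane.x,
                   centre.y + cx * right.y + cy * upInPlane.y,
                   centre.z + cx * right.z + cy * upInPlane.z);
    }
    glEnd();
}
```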
  • the next step in the visualization method is to acquire an image of the real world 50, using an image sensor 49, and to determine the viewpoint 45 from which that image was taken, using a pose sensor 44.
  • the image of the real world 50 is a static image of an airfield, taken from a birds-eye view by a camera, such as a satellite.
  • the viewpoint 45 is fixed, pointing downward, and the pose sensor 44 consists of the programmer deducing the altitude of the viewpoint from the known size of objects appearing in the image.
  • the image of the real world can come from a ground-based stationary imaging sensor from a known viewpoint that is not a birds-eye view.
  • a similar embodiment could use radar as the image sensor, and calculate the equivalent viewpoint of the image.
  • a more complicated embodiment might use a camera or the user's eye(s) as the image sensor, and use a tracking system common in the field of augmented reality, such as the INTERSENSE IS-600 (Burlington, MA), as the pose sensor to determine the position and orientation of a camera or the user's head. In this situation, the camera may be mounted on another person or portable platform, and the user would observe the resultant display at his or her location.
  • the remaining steps in this embodiment of the method are to combine the computer-generated image 47 with the real world image 50 in an image combiner 48 and to send the output image 51 to a display 52. Again, this can be done in many ways, known in the art, depending on the hardware used to implement the method.
  • Methodologies for mixing and presenting content (steps 48, 51, and 52 of FIG 19) are shown in FIGS 20, 21, and 22.
  • In FIG 20, a see-through augmented reality device is demonstrated.
  • no automated mixing is required, as the image is projected directly over what the viewer sees through a semi-transparent display 55, as may be accomplished with partial mirrors.
  • In FIG 21, the mixing of real and virtual images (augmented reality) is performed using an external video mixer 56.
  • the real image is acquired by a camera 57 on the viewer's head, which is tracked by a 6-DOF tracker 54.
  • FIG 22 is identical to FIG 21, except that the real and virtual portions of the image are mixed on the computer's 53 internal video card, so an external mixer is not required.
  • the composite image can be displayed in any video device, such as a monitor, television, heads-up-display, a moveable display that the user can rotate around that will provide an appropriate view based on how the display is rotated, or a display mounted on a monocular or a pair of binoculars.
  • FIGS 23 through 30 show examples of different displays accomplished by the invention.
  • the images consist of virtual images and virtual objects overlaid on real backgrounds. In these images, intuitive representations have been created to represent important atmospheric phenomena that are otherwise invisible.
  • FIGS 23 through 26 show one application of top-down viewing of an airspace.
  • the images demonstrate that trailing wingtip vortex data can be visualized such that the user can see the position and intensity of local atmospheric data 60.
  • Airplanes can be represented as icons in cases where the planes are too small to see easily. Multiple planes and atmospheric disturbances can be overlaid on the same image.
  • FIGS 27 and 28 show examples of a pilot's augmented view.
  • the figures show that data such as wind shear and microbursts can be represented as virtual objects 61 projected onto the viewer's display. Properties such as color, transparency, intensity, and size can be used to represent the various properties of the atmospheric phenomenon 62.
  • the dashed line (which could be a change of color in the display) of the marker has changed, which could represent a change in phenomena type.
  • FIGS 29 and 30 show examples of an airplane 63 overlaid with virtual wake vortices 65, demonstrating the power of applying virtual representations of data to real images. Fuzziness or blending can be used to show that the edges of the vortex trails 64 are not discrete, but that the area of influence fades as you move away from the center of the vortex.
  • FIG 31 illustrates the hardware for the preferred method of the invention using a motorized camera.
  • a motorized video camera 69, 70 is used as a tracking system for augmented reality.
  • By connecting the motorized video camera to the computer 66 via an RS-232 serial cable 67 (for camera control and feedback) and a video cable 68, the camera may be aimed, the position of the camera can be queried, and the image seen by the camera may be captured over the video cable 68 by software running on the computer.
  • the computer can query the camera for its current field of view, a necessary piece of information if the computer image is to be rendered properly.
  • FIG 32 is a close-up view of the preferred Sony EVI-D30 motorized camera.
  • the camera is composed of a head 69 and a base 70 coupled by a motorized mount.
  • This mount can be panned and tilted via commands from the computer system, which allows the head to move while the base remains stationary.
  • the camera also has internal software, which tracks the current known pan and tilt position of the head with respect to the base, which may be queried over the RS-232 serial cable.
  • the video signal from the camera travels into a video capture, or "frame grabber," device connected to the computer.
  • an iRez USB Live! capture device is used, which allows software on the computer to capture, modify, and display the image on the screen of the computer.
  • This image source can be combined with computer-generated elements before display, allowing for augmented reality applications.
  • In FIG 33, an augmented reality display using the EVI-D30 as a tracked image source is shown.
  • This image is a composite image originally acquired from the camera, which is displayed in FIG 34, and shows furniture and other items physically located in real space 72, 73, and 74.
  • the software running on the computer then queries the camera for its orientation.
  • the orientation returned from the camera represents the angle of the camera's optics with respect to the base of the camera.
  • a real-world position and orientation can be computed for the camera's optics.
  • These data are then used to render three-dimensional computer-generated poles 71 with proper perspective and screen location, which are superimposed over the image captured from the camera.
  • the resulting composite image is displayed to the user on the screen.
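  • The perspective placement step just described can be illustrated with a simple pinhole projection: given the camera's computed world pose and its queried field of view, a world-anchored point (e.g., the base of one of the poles 71) is mapped to pixel coordinates so the overlay stays registered as the camera pans and tilts. The camera model and names below are assumptions for illustration.

```cpp
#include <cmath>

struct W3 { double x, y, z; };

struct CameraPose {
    W3 position;          // camera optics in world coordinates
    double R[3][3];       // world-to-camera rotation (rows are the camera axes)
};

// Project a world point to pixel coordinates. Returns false if the point is
// behind the camera. vFovDeg is the vertical field of view queried from the camera.
bool projectToPixel(const CameraPose& cam, double vFovDeg, int width, int height,
                    const W3& p, double& px, double& py)
{
    const double dx = p.x - cam.position.x;
    const double dy = p.y - cam.position.y;
    const double dz = p.z - cam.position.z;
    // transform into camera coordinates (x right, y down, z forward)
    const double cx = cam.R[0][0]*dx + cam.R[0][1]*dy + cam.R[0][2]*dz;
    const double cy = cam.R[1][0]*dx + cam.R[1][1]*dy + cam.R[1][2]*dz;
    const double cz = cam.R[2][0]*dx + cam.R[2][1]*dy + cam.R[2][2]*dz;
    if (cz <= 0.0) return false;

    const double kPi = 3.14159265358979323846;
    const double f = 0.5 * height / std::tan(0.5 * vFovDeg * kPi / 180.0);
    px = 0.5 * width  + f * cx / cz;     // pixel column
    py = 0.5 * height + f * cy / cz;     // pixel row
    return true;
}
```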
  • FIG 35 shows the same scene as FIG 33, but from a different angle (with new real world elements - the clock 75 and the trash can 76).
  • the unaugmented version of FIG 35 (shown in FIG 36) is captured from the video camera, and the computer-generated elements 71 are again added to the image before display to the user. Note that, as the camera angle has changed, the perspective and view angle of the poles 71 have also changed, permitting them to remain anchored to locations in the real-world image.
  • the inventive method can be accomplished using the system components shown in FIG 37.
  • the following items and results are needed to accomplish the preferred method of this invention:
  • a display device for presenting computer-generated images to the EFR.
  • a method for tracking the position of the EFR display device.
  • a method for tracking the orientation of the EFR display device.
  • a method for communicating the position and orientation of the EFR display device to the incident commander.
  • a method for the incident commander to view information regarding the position and orientation of the EFR display device.
  • a method for combining the real-world view from the EFR display device with the computer-generated images representing the messages sent to the EFR by the incident commander.
  • a method for presenting the combined view to the EFR on the EFR display device.
  • the EFR display device (used to present computer-generated images to the EFR) is a Head Mounted Display (HMD) 83.
  • a see-through monocular HMD is used. Utilization of a see-through type of HMD allows the view of the real world to be obtained directly by the EFR. The manners in which a message is added to the display are described below.
  • Alternatively, a non-see-through HMD could be used as the EFR display device. In this case, the images of the real world (as captured via video camera) are mixed with the computer-generated images by using additional hardware and software components known in the art.
  • a monocular HMD may be integrated directly into an EFR face mask which has been customized accordingly. See FIG 38 for a conceptual drawing of an SCBA 102 with the monocular HMD eyepiece 101 visible from the outside of the mask. Because first responders are associated with a number of different professions, the customized face mask could be part of a firefighter's SCBA (Self-Contained Breathing Apparatus), part of a HAZMAT or radiation suit, or part of a hard hat.
  • the EFR display device could also be a hand-held device, either see-through or non-see-through.
  • the EFR looks through the "see-through" portion (a transparent or semitransparent surface) of the hand-held display device and views the computer-generated elements projected onto the view of the real surroundings.
  • the images of the real world are mixed with the computer-generated images by using additional hardware and software components.
  • the hand-held embodiment of the invention may also be integrated into other devices (which would require some level of customization) commonly used by first responders, such as Thermal Imagers, Navy Firefighter's Thermal Imagers (NFTI), or Geiger counters.
  • the position of an EFR display device 84 and 83 is tracked using a wide area tracking system. This can be accomplished with a Radio Frequency (RF) technology-based tracker.
  • the preferred embodiment would use RF transmitters.
  • the tracking system would likely (but not necessarily) have transmitters installed at the incident site 80 as well as have a receiver that the EFR would have with him or her 81. This receiver could be mounted onto the display device, worn on the user's body, or carried by the user.
  • the receiver 82 is also worn by the EFR, as in FIG 37. The receiver is what will be tracked to determine the location of the EFR's display device.
  • the receiver could be mounted directly in or on the device, or a receiver worn by the EFR could be used to compute the position of the device.
  • a tracking system is shown in FIG 46. Emitters 201 are installed on the outer walls and will provide tracking for the EFR 200 entering the structure.
  • the RF tracking system must have at least four non-coplanar transmitters. If the incident space is at or near one elevation, a system having three tracking stations may be used to determine the EFR's location since definite knowledge of the vertical height of the EFR is not needed, and this method would assume the EFRs are at coplanar locations. In any case, the RF receiver would determine either the direction or distance to each transmitter, which would provide the location of the EFR. Alternately, the RF system just described can be implemented in reverse, with the EFR wearing a transmitter (as opposed to the receiver) and using three or more receivers to perform the computation of the display location.
  • the orientation of the EFR display device can be tracked using inertial or compass type tracking equipment, available through the INTERSENSE CORPORATION (Burlington, MA). If a HMD is being used, this type of device 82 can be worn on the display device or on the EFR's head. Additionally, if a hand-held device is used, the orientation tracker could be mounted onto the hand-held device. In an alternate embodiment, two tracking devices can be used together in combination to determine the direction in which the EFR display device is pointing. The tracking equipment could also have a two-axis tilt sensor which measures the pitch and roll of the device.
  • an inertial/ultrasonic hybrid tracking system can be used to determine both the position and orientation of the device.
  • a magnetic tracking system can be used to determine both the position and orientation of the device.
  • an optical tracking system can be used to determine both the position and orientation of the device.
  • the data regarding the position and orientation of the EFR's display device can then be transmitted to the incident commander by using a transmitter 79 via Radio Frequency Technology. This information is received by a receiver 77 attached to the incident commander's on-site laptop or portable computer 78.
Method for the Incident Commander to View EFR Display Device Position and Orientation Information.
  • the EFR display device position and orientation information is displayed on the incident commander's on-site laptop or portable computer. In the preferred embodiment, this display may consist of a floor plan of the incident site onto which the EFR's position and head orientation are displayed.
  • This information may be displayed such that the EFR's position is represented as a stick figure with an orientation identical to that of the EFR.
  • the EFR's position and orientation could also be represented by a simple arrow placed at the EFR's position on the incident commander's display.
  • the path which the EFR has taken may be tracked and displayed to the incident commander so that the incident commander may "see” the route(s) the EFR has taken.
  • the EFR generating the path, a second EFR, and the incident commander could all see the path in their own displays, if desired. If multiple EFRs at an incident scene are using this system, their combined routes can be used to successfully construct routes of safe navigation throughout the incident space. This information could be used to display the paths to the various users of the system, including the EFRs and the incident commander. Since the positions of the EFRs are transmitted to the incident commander, the incident commander may share the positions of the EFRs with some or all members of the EFR team. If desired, the incident commander could also record the positions of the EFRs for feedback at a later time.
  • the incident commander may use his/her computer (located at the incident site) to generate messages for the EFR.
  • the incident commander can generate text messages by typing or by selecting common phrases from a list or menu.
  • the incident commander may select, from a list or menu, icons representing situations, actions, and hazards (such as flames or chemical spills) common to an incident site.
  • FIG 39 is an example of a mixed text and iconic message relating to fire. If the incident commander needs to guide the EFR to a particular location, directional navigation data, such as an arrow, can be generated to indicate in which direction the EFR is to proceed.
  • the incident commander may even generate a set of points in a path ("waypoints") for the EFR to follow to reach a destination. As the EFR reaches consecutive points along the path, the previous point is removed and the next goal is established via an icon representing the next intermediate point on the path. The final destination can also be marked with a special icon. See FIG 40 for a diagram of a structure and possible locations of waypoint icons used to guide the EFR from entry point to destination. (A minimal illustrative sketch of this waypoint-advancement logic appears after this list.)
  • the path of the EFR 154 can be recorded, and the incident commander may use this information to relay possible escape routes, indicators of hazards 152 and 153, and a final destination point 151 to one or more EFRs 150 at the scene (see FIG 41).
  • the EFR could use a wireframe rendering of the incident space (FIG 45 is an example of such) for navigation within the structure.
  • the two most likely sources of a wireframe model of the incident space are (1) a database of models that contains the model of the space from previous measurements, or (2) equipment that the EFRs can wear or carry into the incident space that would generate a model of the room in real time as the EFR traverses the space.
  • the incident commander will then transmit, via a transmitter and an EFR receiver, the message (as described above) to the EFR's computer.
  • This transmitter/receiver combination could be radio-based, possibly using commercially available technology such as wireless Ethernet.
  • FIG 42 shows a possible mixed text and icon display 110 that conveys the message to the EFR to proceed up the stairs 111.
  • FIG 43 shows an example of a mixed text and icon display 120 of a path waypoint.
  • Text messages are rendered and displayed as text, and could contain warning data making the EFR aware of dangers of which he/she is presently unaware.
  • Icons representative of a variety of hazards can be rendered and displayed to the EFR provided the type and location of the hazard is known. Specifically, different icons could be used for such dangers as a fire, a bomb, a radiation leak, or a chemical spill. See FIG 44 for a text message 130 relating to a leak of a radioactive substance.
  • the message may contain data specific to the location and environment in which the incident is taking place.
  • a key code, for example, could be sent to an EFR who is trying to safely traverse a secure installation. Temperature at the EFR's location inside an incident space could be displayed to the EFR provided a sensor is available to measure that temperature. Additionally, temperatures at other locations within the structure could be displayed to the EFR, provided sensors are installed at other locations within the structure.
  • a message could be sent from the incident commander to the EFR to assist in handling potential injuries, such as First Aid procedures to aid a victim with a known specific medical condition.
  • the layout of the incident space can also be displayed to the EFR as a wireframe rendering (see FIG 45). This is particularly useful in low visibility situations.
  • the geometric model used for this wireframe rendering can be generated in several ways.
  • the model can be created before the incident; the dimensions of the incident space are entered into a computer and the resulting model of the space would be selected by the incident commander and transmitted to the EFR.
  • the model is received and rendered by the EFR's computer to be a wireframe representation of the EFR's surroundings.
  • the model could also be generated at the time of the incident.
  • Technology exists which can use stereoscopic images of a space to construct a 3D-model based on that data.
  • This commercial-off-the-shelf (COTS) equipment could be worn or carried by the EFR while traversing the incident space.
  • the equipment used to generate the 3D model could also be mounted onto a tripod or other stationary mount. This equipment could use either wireless or wired connections.
  • if the generated model is sent to the incident commander's computer, that computer can serve as a central repository for data relevant to the incident. In this case, the model generated at the incident scene can be relayed to other EFRs at the scene.
  • the results of the various modelers could be combined to create a growing model which could be shared by all users.
  • a see-through display device in which the view of the real world is inherently visible to the user.
  • Computer generated images are projected into this device, where they are superimposed onto the view seen by the user.
  • the combined view is created automatically through the use of partial mirrors used in the see-through display device with no additional equipment required.
  • other embodiments of this method use both hardware and software components for the mixing of real world and computer-generated imagery.
  • an image of the real world acquired from a camera may be combined with computer generated images using a hardware mixer.
  • the combined view in those embodiments is presented to the EFR on a non-see-through HMD or other non-see-through display device.
  • the result is an augmented view of reality for the EFR for use in both training and actual operations.
  • An embodiment of the inventive method may use smart card technology to store pertinent training, operations, and simulation data related to hazards, including but not limited to, one or more of training information, trainee and team performance data, simulation parameters, metrics, and other information related to training, simulation, and/or evaluation.
  • the relevant data is stored on the smart card and is accessed via a smart card terminal.
  • the terminal can be connected to either the simulation computer or to a separate computer being used for analysis.
  • the smart card terminal provides access to the data upon insertion of the smart card. Data on the smart card (from a previous training session, for example) can be retrieved and can also be updated to reflect the trainee's most recent performance.
  • a "smart card" is a digital rewriteable memory device, shaped like a credit card, that can be read and written by a smart card terminal.
  • Computer-based simulation of specific hazardous scenarios is a frequently used method of training. This simulation is frequently accomplished via Virtual Reality (VR) and Augmented Reality (AR).
  • Smart cards can be used to store data from current and previous training sessions. That data can include trainee identification information, simulation data for the virtual environment, and metrics regarding the trainee's performance in one or more given scenarios where hazards or hazard information is present.
  • training for driving an automobile under difficult conditions can be done with a driving simulator.
  • the trainee would enter the simulator, insert his/her smart card into the smart card terminal and be identified based on information stored on the smart card.
  • the smart card would also contain information such as chase parameters (e.g., speed, visibility, type of vehicle, road conditions).
  • the scenario could be run and the trainee's interaction with the scenario (the trainee's performance) can be recorded and stored on the card. Those results can be called up later to evaluate progress in a given skill or other "lessons learned.”
  • training scenarios can be repeated any number of times in a cost-efficient and reliable manner.
  • an instructor could have one smart card with a set of scenarios that can be run at the instructor's discretion. The instructor can administer the same scenario, perhaps as a test, to multiple trainees with minimal risk of instructor error, thus providing more valid test results.
  • the smart card can be used to store any type of data relevant to a particular hazard situation.
  • a trainee's personal training profile, identifying personal information and other data, can be stored on the smart card.
  • This data might include, but is not limited to, skills mastered, levels of expertise or other special training (such as HAZMAT training), and training needed for upcoming assignments.
  • the method featured in this application can also be used to store instructor data on a smart card. Examples include authentication of an instructor into a training system for purposes of security or access control; or simply to provide the system with the instructor's personal training profile for the purpose of tailoring the application to the instructor.
  • Performance data (interaction with training scenarios) such as success, score, or other parameters for individual trainees and for teams can be stored on a smart card.
  • applications which have a notion of a "team" can store information about the user's participation within the team, the user's performance in the context of the team, and/or the performance of the team as a whole.
  • Training application parameters such as locations of hazards, size of training space, or any other parameter of the application, can be stored on the smart card.
  • the result is the creation of "scenario" cards containing the specific data required by the simulation or training application.
  • multiple smart cards can be used to track multiple users and multiple scenarios.
  • Smart card data can be protected using a number of methods.
  • the card can be protected via a personal identification number (PIN).
  • This provides a security layer such that the card's user is authenticated as its owner. That is, if a user enters the correct PIN to obtain the data from the card, it can be safely assumed that the user is the valid owner of the card, thus preventing identity theft in the training environment.
  • Another method of protection is to issue a password for use of the card. As with use of a PIN, if a card user enters the correct password, it is assumed that the user is the card owner.
  • the smart card can also be protected via a cryptographic "handshake.”
  • the contents of the card are protected via mathematically secure cryptography requiring secure identification of any system requesting data from the card. This can prevent unauthorized systems or users from accessing the data that exists on the card.
  • the smart card terminal 182 is a read-write device which allows the data on the card 181 to be retrieved for use in the system 183 and new data to be written to the card 181 for use in the future. It can be connected directly to the computer(s) 183 running the training application, displaying the output of the smart card and training simulation on the output display 184. This is most practical when the training environment has a computer 183 readily available that can execute the training scenario, and when only one trainee is involved.
  • the method featured in this application also allows training via networked communication.
  • the smart card terminal can be connected to a separate computer which is connected (via standard networking cables) to the computer(s) running the training application. For example, if a local computer can accommodate use of the smart card terminal, but not the training scenario, the training scenario can be directed to another computer on the network and used at a local, more convenient location.
  • FIG 47 shows a screen that contains information about the identity of the cardholder, along with a log of previous training scenarios and the score attained for each one.
  • FIG 48 shows data about the current status of the scenario while it is running.
  • FIG 50 shows the log of recent training scenarios again as in FIG 47, but includes the most recent training scenario depicted in FIG 48 and FIG 49.
  • a user's operational performance can be recorded and reviewed.
  • the system may track any close calls, as well as performance data related to the number of aircraft on the controller's screen, the busiest time of day, average radio transmission length, and other metrics.
  • These metrics as stored on the smart card represent a personal performance profile that can be used for evaluation of the controller, or as a tamper-resistant record of the controller's actions.
  • Such a system would expand performance evaluation beyond training and into daily use, providing improved on-the-job safety and efficiency.
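As a purely illustrative aside (not part of the disclosed system), the waypoint progression described earlier in this list, in which each intermediate point is removed as the EFR reaches it and the next goal is established, can be sketched as follows. The (x, y) coordinates, the 2-meter reach threshold, and the function name are assumptions made only for this sketch.

```python
import math

def advance_waypoints(efr_position, waypoints, reach_radius=2.0):
    """Return the waypoint icon that should currently be displayed.

    waypoints    : list of (x, y) points ordered from entry point to destination
    efr_position : current (x, y) position of the EFR from the tracking system
    reach_radius : assumed distance (meters) at which a point counts as reached
    Returns None once the final destination has been reached.
    """
    while waypoints:
        goal = waypoints[0]
        if math.dist(efr_position, goal) > reach_radius:
            return goal       # still en route to this intermediate point
        waypoints.pop(0)      # reached: remove it and establish the next goal
    return None               # final destination reached
```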

Abstract

A method, utilizing augmented reality, for visualizing hazards and information which pose a serious threat to those involved in a scenario. Hazards such as smoke, radiation, gasses, and water navigational hazards are displayed to provide the user with a greater awareness of the hazards. Data regarding a user's interaction with hazards and hazard information are recorded using smart card and other electronic technology.

Description

METHOD AND SYSTEM TO DISPLAY VISIBLE AND INVISIBLE HAZARDS AND HAZARD
INFORMATION
FIELD OF THE INVENTION
This invention relates to visualization and indication of real and simulated hazards in operations, training, and communication, and to augmented reality (AR). This invention has use in a wide range of professions in which hazards are present, including navigation, aviation, emergency first response, and counter terrorism.
COPYRIGHT INFORMATION
A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION
Many occupations require being in highly dangerous situations which visually appear to be relatively normal. For example, an emergency first responder (EFR) responding to a call may encounter certain chemical compounds in a spill situation which can transform into deadly, invisible, odorless gas. There are also types of hazards which may not be visible at all (e.g., radiation leaks) that pose a serious threat to those in the immediate vicinity. A ship pilot or navigator may encounter unseen hazards, such as sunken ships, shallow water, reefs, and objects hidden in fog (including other ships and bridges). An airline pilot may encounter wind shear, wingtip vortices, and other dangerous invisible atmospheric phenomena which could be safely dealt with if the pilot could see these phenomena. Currently, airports use worst case scenario times to space out the departures of aircraft so that the airway will be free of hazardous wake vortices. A method for viewing these hazards would allow for safer and faster air travel. In order to prepare for these types of incidents, these situations must be anticipated and presented within the training or operational environment. Furthermore, in order to maintain a high level of proficiency in these situations, frequent re-education of professionals in these areas is called for to ensure that proper procedures are readily and intuitively implemented in a dangerous situation. Current professional training is frequently limited to traditional methods such as classroom/videotape and simulations which are many times unrealistic and inadequate. Classroom and videotape training do not provide an environment which is similar to an actual dangerous scene; therefore, a supplementary method is required for thorough training. Some situations may also be too dangerous to simulate through a controlled physical reproduction. For example, subjecting an EFR to real invisible poison gas is not safe. Also, some things are not only dangerous whenever encountered, but difficult to reproduce, such as atmospheric phenomena.
In addition to reproductions of training hazards, a method for viewing information about hazards, seen or unseen, would be quite beneficial to the operator. For example, if an airline pilot were able to see otherwise invisible atmospheric phenomena, those hazards could be avoided. If a ship pilot could see hazards present in a fog situation, a waterway would be easier and safer to traverse. If a fire fighter were to be able to receive textual or iconic messages regarding scene hazards from a scene commander, the fire fighter could be able to better avoid dangers at the scene and help those trapped at the scene. This information could be used in both training and operations to convey relevant situational awareness information to those involved in the incident.
SUMMARY OF THE INVENTION
The ability to see a hazard, seen or unseen, will better prepare a user to implement the correct procedures for dealing with the situation at hand. The inventive method allows the user to visualize hazards and related indicators containing information which increases the preparedness of the user for the situation. Operational and training settings implementing this method can offer users the ability to see hazards, safe regions in the vicinity of hazards, and other environmental characteristics through use of computer-generated two- and three-dimensional graphical elements. Training and operational situations for which this method is useful include, but are not limited to, typical nuclear, biological, and chemical (NBC) attacks, hazardous materials incidents, airway and waterway interaction, and training which requires actions such as avoidance, response, handling, and cleanup.
Methods for recording interactions with hazards and information about hazards are also desirable. This recorded data could be reviewed later by an instructor, supervisor, or chief, who could then provide feedback to the user. Smart card and other recording technologies accomplish this recording. To see these representations and information, a viewer must be able to see the world he or she is in, along with the computer generated elements mixed into that world. This image can be obtained by viewing the world with the viewer's eyes, or by the use of a camera viewing the scene. It may also be useful and cost effective to obtain the image of the real world through the use of an external camera so that a different view can be obtained.
The inventive method represents an innovation in the field of training and operations. Two purposes of the inventive method are safe and expeditious passage through/around hazard(s); and safe and efficient training and operations. This is accomplished by two categories of media: representations which reproduce a hazard through computer simulation and graphics; and text and icons which present relevant information about a situation so that mission objectives can be completed safely and efficiently.
This invention utilizes augmented reality (AR) technology to overlay a display of dangerous materials/hazards and/or relevant data regarding such dangerous materials/hazards onto the real world view in an intuitive, user-friendly format. AR is defined herein to mean combining computer-generated graphical elements with a real world view (which may be static or changing) and presenting the combined view as a replacement for the real world image. Additionally, these computer-generated graphical elements can be used to present the EFR/trainee/other user with an idea of the extent of the hazard at hand. For example, near the center of a computer-generated element representative of a hazard, the element may be darkened or more intensely colored to suggest extreme danger. At the edges, the element may be light or semitransparent, suggesting an approximate edge to the danger zone where effects may not be as severe.
The view of the real world in this invention is typically obtained through a camera mounted at the position of the viewer's eye point on a Head Mounted Display (HMD). This view may also be obtained directly by the viewer's eyes via a see-through HMD. Also, an externally tracked motorized camera may be used to provide an external augmented reality view. This camera would be quite useful in training situations in which one or more people are performing an exercise while observers may watch the view through an external camera. This external camera would also be useful if mounted near a runway at an airport or on a boat in a waterway. This would allow for an augmented view of atmospheric or water navigation hazards.
This data may be presented using a traditional interface such as a computer monitor, or it may be projected into a device the user would typically use, such as a head-mounted display (HMD) mounted inside an EFR's mask, an SCBA (Self-Contained Breathing Apparatus), a HAZMAT (hazardous materials) suit, a hardhat, or instrumented binoculars. Regardless of the method of display, the view of the EFR/trainee's real environment, including visible hazards, visible gasses, and actual structural surroundings, will be seen, overlaid or augmented with computer-generated graphical elements representative of the hazards or information about them. The net result is an augmented reality.
The inventive method is useful for training and retraining of personnel within a safe, realistic environment. Computer-generated graphical elements (which are representations and indicators of hazards) are superimposed onto a view of the real training environment and present no actual hazard to the trainee, yet allow the trainee to become familiar with proper procedures within an environment which is more like an actual incident scene.
Additionally, the inventive method is useful for operations. Computer-generated graphical elements (which are representations and indicators of hazards) are superimposed onto a view of the real environment and present relevant information to the user so that the operation can be successfully and safely completed. For example, an air traffic controller could look through instrumented binoculars at a runway to see the actual surroundings, as would normally be seen, augmented with wingtip vortices from planes that are taking off.
Atmospheric data that could be displayed in an air navigation implementation include (but are not limited to) wind shear, wingtip vortices, micro bursts, and clear air turbulence. One aspect of the inventive method uses blending of images with varying transparency to present the location, intensity, and other properties of the data being displayed. This will present the air traffic controllers and pilots with a visual indication of properties of otherwise invisible atmospheric disturbances.
In both training and operations, the performance of the user of the system may be recorded using a smart card or other electronic media, such as a computer's hard drive. In a training application, trainee scores measuring the success or failure of the trainee's grasp of concepts involving hazards and hazard information may be recorded. In operations, data can be recorded as to whether or not an individual has successfully avoided a hazard or used hazard information.
In an EFR embodiment of the method, the computer-generated graphical element can be a text message, directional representation (arrow), or other informative icon from the incident commander, or geometrical visualizations of the structure. It can be created via a keyboard, mouse or other method of input on a computer or handheld device at the scene. The real world view consists of the EFR's environment, containing elements such as fire, unseen radiation leaks, chemical spills, and structural surroundings. The EFR/trainee will be looking through a head-mounted display, preferably monocular, mounted inside the user's mask (an SCBA in the case of a firefighter). This HMD could also be mounted in a hazmat suit or onto a hardhat. The HMD will be preferably "see-through," that is, the real hazards and surroundings that are normally visible will remain visible without the need for additional equipment. Depending on the implementation and technology available, there may also be a need for a tracking device on the EFR's mask to track location and/or orientation. The EFR/trainee's view of the real world is augmented with the text message, icon, or geometrical visualizations of the structure.
Types of messages sent to an EFR/trainee include (but are not limited to) location of victims, structural data, building/facility information, environmental conditions, and exit directions/locations.
This invention can notably increase the communication effectiveness at the scene of an incident or during a training scenario and result in safer operations, training, emergency response, and rescue procedures.
The invention has immediate applications for both the training and operations aspects of the fields of emergency first response, navigation, aviation, and command and control; implementation of this invention will result in safer training, retraining, and operations for all individuals involved in situations where hazards must be dealt with. Furthermore, potential applications of this technology include those involving other training and preparedness (i.e., fire fighting, damage control, counter-terrorism, and mission rehearsal), as well as potential for use in the entertainment industry.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG 1 depicts an augmented reality display according to the invention that displays a safe path available to the user by using computer-generated graphical poles to indicate where the dangerous regions are.
FIG 2 depicts an augmented reality display according to the invention that depicts a chemical spill emanating from a center that contains radioactive materials
FIG 3 is a block diagram indicating the hardware components and interconnectivity of a see-through augmented reality (AR) system useful in the invention.
FIG 4 is a block diagram indicating the hardware components and interconnectivity of a video-based AR system involving an external video mixer useful in the invention.
FIG 5 is a block diagram indicating the hardware components and interconnectivity of a video-based AR system where video mixing is performed internally to a computer useful in the invention.
FIG 6 is a diagram illustrating the technologies involved in an AR waterway navigation system according to this invention.
FIG 7 is a block diagram of the components of an embodiment of an AR waterway navigation system according to this invention.
FIGS 8-10 are diagrams indicating display embodiments for the AR waterway navigation system of FIG 7.
FIG 11 is a diagram of an AR overlay graphic for aid in ship navigation useful in the invention.
FIGS 12 and 13 are diagrams of an AR scene where depth information is overlaid on a navigator's viewpoint as semi-transparent color fields useful in the invention.
FIGS 14 and 15 are diagrams of an overlay for a land navigation embodiment of the invention.
FIGS 16-18 are diagrams of an overlay for an air navigation embodiment of the invention.
FIG 19 is a block diagram of an embodiment of the method of this invention, labeling both data flow and operators.
FIG 20 is a schematic diagram of the hardware components and interconnectivity of a see-through augmented reality (AR) system that can be used in this invention.
FIG 21 is a schematic diagram of the hardware components and interconnectivity of a video-based AR system for this invention involving an external video mixer.
FIG 22 is a schematic diagram of the hardware components and interconnectivity of a video-based AR system for this invention where video mixing is performed internally to a computer.
FIG 23 is a representation of vortex trails being visualized behind an airplane.
FIG 24 is another representation of vortex trails being visualized.
FIG 25 is another representation of wingtip vortices as viewed at a farther distance.
FIG 26 is a similar top view of parallel takeoff of aircraft.
FIG 27 depicts atmospheric phenomena, with an image of nonhomogeneous transparency used to convey information for the invention.
FIG 28 also depicts atmospheric phenomena.
FIG 29 shows an example of an irregular display of vortex trails.
FIG 30 shows representations of wingtip vortices visualized behind the wings of a real model airplane.
FIG 31 is a schematic view of a motorized camera and motorized mount connected to a computer for the purpose of tracking and video capture for augmented reality, for use in a preferred embodiment of the invention.
FIG 32 is a close-up view of the camera and motorized mount of FIG 31.
FIG 33 schematically depicts an augmented reality display with computer-generated indicators displayed over an image as an example of a result of this invention.
FIG 34 is the un-augmented scene from FIG 33 without computer-generated indicators. This image is a real-world image captured directly from the camera.
FIG 35 is an augmented reality display of the same scene as that of FIG 33 but from a different camera angle where the computer-generated indicators that were in FIG 33 remain anchored to the real-world image.
FIG 36 is the un-augmented scene from FIG 35 without computer-generated indicators.
FIG 37 is a schematic diagram of the system components that can be used to accomplish the preferred embodiments of the inventive method.
FIG 38 is a conceptual drawing of a firefighter's SCBA with an integrated monocular eyepiece that the firefighter may see through for the invention.
FIG 39 is a view as seen from inside the HMD of a text message accompanied by an icon indicating a warning of flames ahead for the invention
FIG 40 is a possible layout of an incident commander's display in which waypoints are placed useful in the invention.
FIG 41 is a possible layout of an incident commander's display in which an escape route or path is drawn for the invention.
FIG 42 is a text message accompanied by an icon indicating that the EFR is to proceed up the stairs for the invention.
FIG 43 is a waypoint which the EFR is to walk towards for the invention.
FIG 44 is a potential warning indicator warning of a radioactive chemical spill for the invention.
FIG 45 is a wireframe rendering of an incident scene as seen by an EFR for the invention.
FIG 46 shows a possible layout of a tracking system, including emitters and receiver on user.
FIG 47 shows the opening screen of a preferred embodiment of the invention embodied with a software training tool, which shows who the card belongs to and what the recent history of training activity has been.
FIG 48 shows a screen evidencing a scenario where a fire began and then the trainee extinguished the fire.
FIG 49 shows the output screen after the above scenario ended, summarizing the results of the scenario.
FIG 50 shows the output screen after the above scenario ended, showing how many points the trainee gained in putting out this fire.
FIG 51 schematically depicts the basic hardware required to enable the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS OF THE INVENTION
This invention involves a method for visualization of hazards and information about hazards utilizing computer-generated three-dimensional representations, typically displayed to the user in an augmented reality.
In many embodiments of the method, augmented reality is used. This technology uses software and hardware which is now described briefly. The hardware for augmented reality (AR) consists minimally of a computer 7, see-through display 9, and motion tracking hardware 8, as diagrammed in FIG 3. In such an embodiment, motion tracking hardware 8 is used to determine the human's head position and orientation. The computer 7 uses the information from the motion tracking equipment 8 in order to generate an image which is overlaid on the see-through display 9 and which appears to be anchored to a real-world location or object. Other embodiments of AR systems include video-based (non-see-through) hardware, as diagrammed in FIG 4 and FIG 5. In addition to using motion tracking equipment 8 and a computer 7, these embodiments utilize a camera 11 to capture the real-world imagery and non-see-through display 12 for displaying computer-augmented live video. One embodiment (FIG 4) uses an external video mixer 10 that combines computer-generated imagery with live camera video via a luminance key or chroma key. The second video-based embodiment (FIG 5) involves capturing live video in the computer 7 with a frame grabber and overlaying opaque or semi-transparent imagery internal to the computer. Another video-based embodiment involves a remote camera. Motion tracking equipment 8 can control motors that orient a camera mounted on a high-visibility position on a platform, allowing an augmented reality telepresence system. A detailed description of the method now follows. As mentioned above, the following items and steps are needed to accomplish the method (a schematic sketch of how these steps fit together follows the list):
• A display unit for the user;
• Acquisition of an image or view of the real world;
• Acquisition of the position and orientation of the user;
• A computer for rendering representations or indications of hazards;
• Combination of the view of the real world with the rendered representation; and
• Presentation of the combined (augmented) view to the user.
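By way of illustration only, the following sketch shows how these items and steps could fit together in a single per-frame loop. The tracker, camera, renderer, and display objects are hypothetical placeholders for the hardware and software described in this disclosure, not a required implementation.

```python
# Minimal per-frame augmented reality loop (illustrative sketch only).
# Tracker, camera, renderer, and display are hypothetical interfaces standing
# in for the tracking hardware, image source, rendering software, and display
# unit described above.

def augmented_reality_frame(tracker, camera, renderer, display, hazards):
    # 1. Acquire the user's (or camera's) position and orientation.
    position, orientation = tracker.read_pose()

    # 2. Acquire a view of the real world (omitted for a see-through display,
    #    where the real world is visible directly through the optics).
    real_image = camera.capture() if camera is not None else None

    # 3. Render representations and indications of hazards from this viewpoint.
    overlay = renderer.render(hazards, position, orientation)

    # 4. Combine the real-world view with the rendered overlay.
    if real_image is not None:
        combined = renderer.blend(real_image, overlay)   # video-based mixing
    else:
        combined = overlay                               # optical see-through

    # 5. Present the combined (augmented) view to the user.
    display.show(combined)
```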
Display Unit. The inventive method requires a display unit (FIG 8) in order for the user to view computer-generated graphical elements representative of hazards and information 13 overlaid onto a view of the real world - the view of the real world is augmented with the representations and information. The net result is an augmented reality.
In a preferred embodiment of the invention, the display unit is a "heads-up" type of display 16 (see FIG 10) (in which the user's head usually remains in an upright position while using the display unit), preferably a Head-Mounted Display (HMD) 14. There are many varieties of HMDs which would prove acceptable for this method, including see-through and non-see-through types.
There are alternatives to using an HMD as a display unit. For examples, see FIGS 9 and 10. The display device could be a "heads-down" type of display, similar to a computer monitor, used within a vehicle (i.e., mounted in the vehicle's interior). The display device could also be used within an aircraft (i.e., mounted on the control panel or other location within a cockpit) and would, for example, allow a pilot or other navigator to "visualize" vortex data and unseen runway hazards (possibly due to poor visibility because of fog or other weather issues). Furthermore, any stationary computer monitor, display devices which are moveable yet not small enough to be considered "handheld," and display devices which are not specifically handheld but are otherwise carried or worn by the user, could serve as a display unit for this method. In all embodiments, the image of the real world may be static or moving.
The inventive method can also utilize handheld display units 15. Handheld display units can be either see-through or non-see-through. In one embodiment, the user looks through the "see-through" portion (a transparent or semitransparent surface) of the handheld display device (which can be a monocular or binocular type of device) and views the computer-generated elements projected onto the view of the real surroundings. In a navigation application involving either ships or air traffic control, a pair of binoculars instrumented to display information would be a hand-held display unit that would preferably be used. By looking through this device, the user could see real hazards that are present that normally would be difficult or impossible to see. Information about avoiding these hazards could also be displayed.
Acquisition of a View of the Real World. A preferred embodiment of the inventive method uses a see-through HMD to define a view of the real world. The "see-through" nature of the display device allows the user to "capture" the view of the real world simply by looking through an appropriate part of the equipment. No mixing of real world imagery and computer-generated graphical elements is required - the computer-generated imagery is projected directly over the user's view of the real world as seen through a semi-transparent display. This optical-based embodiment minimizes necessary system components by reducing the need for additional hardware and software used to capture images of the real world and to blend the captured real world images with the computer-generated graphical elements.
Embodiments of the method using non-see-through display units obtain an image of the real world with a video camera connected to a computer via a video cable. In this case, the video camera may be mounted onto the display unit. The image of the real world is mixed, using a commercial-off-the-shelf (COTS) mixing device, with the computer-generated graphical elements and then presented to the user.
A video-based embodiment of this method could use a motorized camera mount for tracking the position and orientation of the camera. System components would include a COTS motorized camera, a COTS video mixing device, and software developed for the purpose of telling the computer the position and orientation of the camera mount. This information is used to facilitate accurate placement of the computer-generated graphical elements within the user's composite view. In addition to the described embodiment, mixing of the real and computer generated images can be done via software on a computer. In this case, the image acquired by the motorized camera would be received by the computer.
External tracking devices can also be used in the video-based embodiment. For example, a GPS tracking system, an optical tracking system, or another type of tracking system would provide the position and orientation of the camera. It may be desirable to modify the images of reality if the method is using a video-based embodiment. For instance, in situations where a thermal sort of view of reality is desired, the image of the real world can be modified to appear in a manner similar to a thermal view by reversing the video, removing all color information (so that only brightness remains as grayscale), and, optionally, coloring the captured image green.
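As an illustration of the thermal-style modification just described (reversing the video, reducing it to brightness only, and optionally tinting it green), a minimal sketch using NumPy might look like the following; the array layout and function name are assumptions for this sketch, not part of the disclosure.

```python
import numpy as np

def thermal_style(frame: np.ndarray) -> np.ndarray:
    """Give an RGB camera frame a thermal-like appearance.

    `frame` is assumed to be an 8-bit RGB image of shape (H, W, 3).
    """
    # Remove all color information: keep only brightness as grayscale.
    gray = frame.mean(axis=2)

    # Reverse the video (invert brightness).
    inverted = 255.0 - gray

    # Optionally tint the result green by placing the inverted brightness
    # in the green channel only.
    out = np.zeros_like(frame)
    out[..., 1] = inverted.astype(np.uint8)
    return out
```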
Acquisition of the Position and Orientation of the User. The position and orientation of the view need to be determined so that the computer generated imagery can be generated to be viewed from the right perspective. This imagery may require very accurate registration (alignment of computer generated and real objects) for successful implementation. Some uses may only require rough registration. For a scenario in which a hazard needs to be anchored in place, registration and tracking must be quite good. For an information display in which a general direction of traversal is being conveyed, the registration may only need to be of rough quality.
If precision tracking is required, a tracking system that uses inertial/acoustic, magnetic, or optical tracking may be desired. These systems are designed for precision tracking, measuring position and orientation accurate to within millimeters or degrees.
For wide area tracking in which the system will be outdoors, a GPS system can be used. This embodiment would work well for navigational implementations of this invention, such as systems to aid in navigation of waterways and airways.
In the event that a motorized camera is used for the invention, the invention can utilize a motorized camera mount with a built-in position tracker. This allows the computer program in the system to query the motorized camera to determine the camera's local orientation. The base of the camera may or may not be stationary. If the base is not stationary, the moving base must be tracked by a separate 6-DOF method to determine the orientation and position of the camera in the world coordinate system. This situation could be applicable on a ship, airplane, or automobile where the base of the camera mount is fixed to the moving platform, but not fixed in world coordinates. A GPS tracking system, an optical tracking system, or some other kind of tracking system must provide a position and orientation of the base of the camera. For example, a GPS system could be used to find the position and orientation of the base. It would then use the camera's orientation sensors to determine the camera's orientation relative to the camera's base, the orientation and position of which must be known. Such a system could be placed on a vehicle, aircraft, or ship. Another example would include mounting the camera base on a 6-DOF gimbaled arm. As the arm moves, it can be tracked in 3D space. Similar to the previous example, this position and orientation can be added to the data from the camera to find the camera's true position and orientation in world coordinates.
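The composition of the tracked base pose with the camera's mount-relative orientation can be illustrated with a short sketch, assuming both orientations are expressed as 3x3 rotation matrices; the names and conventions below are illustrative assumptions only.

```python
import numpy as np

def camera_world_pose(base_position, base_rotation, cam_offset, cam_rotation):
    """Compose a tracked base pose with the camera's local pan/tilt.

    base_position : (3,) position of the camera base in world coordinates
    base_rotation : (3, 3) rotation matrix of the base in world coordinates
    cam_offset    : (3,) position of the camera optics relative to the base
    cam_rotation  : (3, 3) rotation of the optics relative to the base
                    (e.g., built from the pan/tilt angles reported by the mount)
    Returns the camera position and orientation in world coordinates.
    """
    world_rotation = base_rotation @ cam_rotation
    world_position = np.asarray(base_position) + base_rotation @ np.asarray(cam_offset)
    return world_position, world_rotation
```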
The motorized camera may also use an open-loop architecture, in which the computer cannot request a report from the camera containing current orientation data. In this case, the computer drives the camera mount to a specified orientation, and external motion of the camera is not permitted. In such an implementation, the system knows the position of the camera by assuming that the camera, in fact, went to the last location directed by the computer. Similarly, the system may also use feedback architecture. In this scenario, the system will send a command to the camera to move to a specified position, and then the system may request a report from the camera that contains the current position of the camera, correcting it again if necessary.
Finally, the motorized camera may operate in a calibrated configuration, in which a computer-generated infinite horizon and center-of-screen indicator are used to verify anchoring and registration of computer-generated objects to real-world positions. In this case, the computer can know exactly where the camera is looking in fully correct, real world coordinates. The system may also operate in an uncalibrated configuration, which would not guarantee perfect registration and anchoring but which may be suitable in certain lower-accuracy applications.
The preferred embodiment of motion tracking hardware for use with a navigation embodiment is a hybrid system, which fuses data from multiple sources (FIG 7) to produce accurate, real-time updates of the navigator's head position and orientation. The computer needs to know the navigator's head position and orientation in the real world to properly register and anchor virtual (computer-generated) objects in a real environment. Information on platform position and/or orientation gathered from one source may be combined with position and orientation of the navigator's head relative to the platform and/or world gathered from another source in order to determine the position and orientation of the navigator's head relative to the outside world.
Below are tracking implementations that could be used in a navigation implementation.
GPS/DGPS Platform Tracking. The first part of a hybrid tracking system for this invention consists of tracking the platform. One embodiment of the invention uses a single GPS or DGPS receiver system to provide 3 degrees-of-freedom (DOF) platform position information. Another embodiment uses a two-receiver GPS or DGPS system to provide platform's heading and pitch information in addition to position (5-DOF). Another embodiment uses a three-receiver GPS or DGPS system to provide 6-DOF position and orientation information of the platform. In some embodiments, additional tracking equipment is required to determine, in real-time, a navigator's viewpoint position and orientation for registration and anchoring in AR.
Head Tracking: GPS Only (non-hybrid). The simplest embodiment of tracking for AR platform navigation would be to track the platform position with three receivers and require the navigator's head to be in a fixed position on the platform to see the AR view.
Head Tracking: One GPS Receiver (hybrid). In the embodiment of the invention where a single GPS receiver is used for platform position, the navigator's head position relative to the GPS receiver and the navigator's head orientation in the real world must be determined to complete the hybrid tracking system. An electronic compass or a series of GPS positions can be used to determine platform heading in this embodiment, and an inertial sensor attached to the navigator's head can determine the pitch and roll of the navigator's head. Additionally, a magnetic, inertial/acoustic, optical, or other tracking system attached to the navigator's head can be used to track the position and orientation of the navigator's head relative to the platform.
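For example, the platform heading derived from a series of GPS positions mentioned above can be approximated as the initial great-circle bearing between two successive fixes; the sketch below uses the standard bearing formula and is purely illustrative.

```python
import math

def heading_from_fixes(lat1, lon1, lat2, lon2):
    """Estimate platform heading (degrees clockwise from true north)
    from two successive GPS fixes given in decimal degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0
```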
Head Tracking: Two GPS Receivers (hybrid). In the embodiment consisting of two GPS receivers, an electronic compass would not be needed. However, the hybrid tracking system would require an inertial sensor and a magnetic, acoustic, optical, or other tracking system in order to be able to determine the real-world position and orientation of the navigator's viewpoint.
Head Tracking: Three GPS Receivers (hybrid). A three GPS receiver embodiment requires the addition of 6-DOF head tracking relative to the platform. This can be accomplished with magnetic, acoustic, or optical tracking.
Computer-Generated Graphical Elements as Representations and Indications of Hazards. The inventive method utilizes computer-generated three-dimensional graphical elements to represent actual and fictional hazards, as well as information about actual and fictional hazards. The computer-generated imagery is combined with the user's real world view such that the user visualizes hazards and information within his/her immediate surroundings. Furthermore, not only is the hazard and/or information visualized in a manner which is harmless to the user, the visualization can provide the user with information regarding location, size, and shape of the hazard; location of safe regions (such as a path through a region that has been successfully decontaminated of a biological or chemical agent, or a path through a waterway which is free from floating debris) in the immediate vicinity of the hazard; and the severity of the hazard. The representation of the hazard can look and sound like the hazard itself (i.e., a different representation for each hazard type); it can be an icon indicative of the size and shape of the appropriate hazard; or it can be a text message or other display informing the user about the hazard. The representation can be a textual message, which would provide information to the user, overlaid onto a view of the real background, possibly in conjunction with the other, nontextual graphical elements, if desired.
The representations can also serve as indications of the intensity and size of a hazard. Properties such as fuzziness, fading, transparency, and blending can be used within a computer-generated graphical element to represent the intensity, spatial extent, and edges of hazard(s). For example, a representation of a hazardous material spill could show darker colors at the most heavily saturated point of the spill and fade to lighter hues and greater transparency at the edges, indicating less severity at the edges of the spill.
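A minimal sketch of such a fading scheme, assuming a simple linear radial falloff from the center of the spill representation, might assign colors as follows; the specific color values, falloff, and function name are illustrative assumptions.

```python
def spill_color(distance_from_center, radius, core_rgb=(0.6, 0.1, 0.1)):
    """Return an RGBA color for a point on a hazard representation.

    Points near the center are dark and opaque; points near the edge are
    lighter and more transparent, suggesting decreasing severity.
    """
    # 0.0 at the center, 1.0 at the edge of the representation.
    t = max(0.0, min(1.0, distance_from_center / radius))

    # Lighten the color and reduce opacity toward the edge.
    r, g, b = (c + (1.0 - c) * t for c in core_rgb)
    alpha = 1.0 - t
    return (r, g, b, alpha)
```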
Audio warning components, appropriate to the hazard(s) being represented, also can be used in this invention. Warning sounds can be presented to the user along with the mixed view of rendered graphical elements with reality. Those sounds may have features that include, but are not limited to, chirping, intermittent, steady frequency, modulated frequency, and/or changing frequency.
The computer-generated representations can be classified into two categories: reproductions and indicators. Reproductions are computer-generated replicas of an element, seen or unseen, which would pose a danger to a user if it were actually present. Reproductions also visually and audibly mimic actions of hazards (e.g., a computer-generated representation of water might turn to steam and emit a hissing sound when coming into contact with a computer-generated representation of fire). Representations which would be categorized as reproductions can be used to indicate appearance, location and/or actions of many visible hazards, including, but not limited to, fire, water, smoke, heat, radiation, chemical spills (including display of different colors for different chemicals), and poison gas. Furthermore, reproductions can be used to simulate the appearance, location and actions of unreal hazards and to make invisible hazards visible. This is useful for many applications, such as training scenarios where actual exposure to a hazard is too dangerous, or when a substance, such as radiation, is hazardous and invisible. Representations which are reproductions of normally invisible hazards maintain the properties of the hazard as if the hazard were visible - invisible gas has the same movement properties as visible gas and will act accordingly in this method. Reproductions which make normally invisible hazards visible include, but are not limited to, steam, heat, radiation, and poison gas.
The second type of representation is an indicator. Indicators provide information to the user, including, but not limited to, indications of hazard locations (but not appearance), warnings, instructions, or communications. Indicators may be represented in the form of text messages and icons, as described above. Examples of indicator information may include procedures for dealing with a hazardous material, location of a member of a fellow EFR team member, or a message noting trainee death by fire, electrocution, or other hazard (useful for training purposes).
The inventive method utilizes representations which can appear as many different hazards. For example, hazards and the corresponding representations may be stationary three-dimensional objects, such as signs or poles. They could also be moving hazards, such as unknown liquids or gasses that appear to be bubbling or flowing out of the ground. Some real hazards blink (such as a warning indicator which flashes and moves) or twinkle (such as a moving spill which has a metallic component); the computer-generated representation of those hazards would behave in the same manner. In FIG 1, an example of a display resulting from the inventive method is presented, indicating a safe path to follow 3 in order to avoid coming in contact with a chemical spill 1 or other kind of hazard 1 by using computer-generated poles 2 to demarcate the safe area 3 from the dangerous areas 1. FIG 2 shows a possible display to a user where a chemical/radiation leak 5 is coming out of the ground and visually fading to its edge 4, and simultaneously shows bubbles 6 which could represent the action of bubbling (from a chemical/biological danger), foaming (from a chemical/biological danger), or sparkling (from a radioactive danger).
Movement of the representation of the hazard may be done with animated textures mapped onto three-dimensional objects. For example, movement of a "slime" type of substance over a three-dimensional surface would be accomplished by animating to show perceived outward motion from the center of the surface. This is done by smoothly changing the texture coordinates in OpenGL, and the result is smooth motion of a texture mapped surface.
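A minimal sketch of this texture-coordinate animation, using legacy fixed-function OpenGL through the PyOpenGL bindings, is shown below. It assumes a current GL context, a bound texture with repeat wrapping, and a simple quad; for clarity it slides the texture uniformly rather than radially outward from the center as described above.

```python
from OpenGL.GL import (GL_MODELVIEW, GL_QUADS, GL_TEXTURE, glBegin, glEnd,
                       glLoadIdentity, glMatrixMode, glTexCoord2f,
                       glTranslatef, glVertex3f)

def draw_flowing_surface(elapsed_seconds, flow_rate=0.25):
    """Draw a textured quad whose texture appears to flow across it."""
    # Shift every texture coordinate over time via the texture matrix,
    # producing smooth motion of the texture-mapped surface.
    glMatrixMode(GL_TEXTURE)
    glLoadIdentity()
    glTranslatef(flow_rate * elapsed_seconds, 0.0, 0.0)
    glMatrixMode(GL_MODELVIEW)

    # Ordinary texture coordinates on a unit quad; the texture matrix above
    # makes the mapped image slide smoothly across the surface each frame.
    glBegin(GL_QUADS)
    glTexCoord2f(0.0, 0.0); glVertex3f(0.0, 0.0, 0.0)
    glTexCoord2f(1.0, 0.0); glVertex3f(1.0, 0.0, 0.0)
    glTexCoord2f(1.0, 1.0); glVertex3f(1.0, 1.0, 0.0)
    glTexCoord2f(0.0, 1.0); glVertex3f(0.0, 1.0, 0.0)
    glEnd()
```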
The representations describing hazards and other information may be placed in the appropriate location by several methods. In one method, the user can enter information (such as significant object positions and types) and representations into his/her computer upon encountering hazards or victims while traversing the space, and can save such information to a database either stored on the computer or shared with others on the scene. A second, related method would be one where information has already been entered into a pre-existing, shared database, and the system will display representations by retrieving information from this database. A third method could obtain input data from sensors such as video cameras, thermometers, motion sensors, or other instrumentation placed by EFRs or pre-installed in the space.
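For concreteness, here is a minimal sketch of what one entry in such a shared database might look like; the field names and types are hypothetical and chosen only to illustrate the idea, not taken from the patent.

```cpp
// Sketch of one possible record layout for a shared hazard-database entry.
#include <iostream>
#include <string>
#include <vector>

struct HazardEntry {
    enum class Kind { Reproduction, Indicator };  // the two representation categories
    Kind        kind;
    std::string hazardType;    // e.g. "chemical spill", "radiation"
    double      x, y, z;       // world position where the representation is anchored
    std::string reportedBy;    // EFR or sensor that entered the record
    double      timestamp;     // seconds since the incident started
};

int main()
{
    std::vector<HazardEntry> sharedDatabase;
    sharedDatabase.push_back({HazardEntry::Kind::Indicator,
                              "radiation leak", 12.0, 0.0, -4.5, "EFR-3", 312.0});
    for (const auto& e : sharedDatabase)
        std::cout << e.hazardType << " at (" << e.x << ", " << e.y << ", "
                  << e.z << ") reported by " << e.reportedBy << "\n";
    return 0;
}
```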
The rendered representations can also be displayed to the user without a view of the real world. This would allow users to become familiar with the characteristics of a particular hazard without the distraction of the real world in the background. This kind of view is known as virtual reality (VR).
In the case of presentation of data regarding real hazards, data can be obtained in many ways. In a navigation implementation, navigation technologies such as digital navigation charts and radar are useful for obtaining data. For example, digital navigation charts (in both raster and vector formats) provide regularly updated information on water depths, coastal features, and potential hazards to a ship. Digital chart data may be translated into a format useful for AR, such as a bitmap, a polygonal model, or a combination of the two (e.g., texture-mapped polygons). Radar information is combined with digital charts in existing systems, and an AR navigation aid can also incorporate a radar display capability to detect hazards such as the locations of other ships and unmapped coastal features.
A challenge in the design of an AR hazard display information system is determining the best way to present relevant information to the navigator, while minimizing cognitive load. For example, current ship navigation systems present digital chart and radar data on a "heads-down" computer screen located on the bridge of a ship. These systems require navigators to take their eyes away from the outside world to ascertain their location and the relative positions of hazards. An AR overlay can be used to superimpose only pertinent information directly on a navigator's view when and where it is needed. FIG 11 shows a diagram of a graphic for overlay on a navigator's view. In this embodiment, the overlay includes wireframe representations of bridge pylons 17 and a sandbar 18. The ship's current heading is indicated with arrows 20, and distance from hazards is drawn as text anchored to those hazards 19. FIG 12 shows a real world view of a waterway, complete with trees 22, shore 24, mountains 21, and river 23. FIG 13 shows another display embodiment in which color-coded depths are overlaid on a navigator's view. All of the real world elements remain visible. In this embodiment, the color fields indicating depth are semi-transparent. The depth information, as seen in the Depth Key 26, can come from charts or from a depth finder. The current heading 25 is displayed to the navigator in the lower left corner of the overlay. A minimally intrusive overlay is generally considered to have the greatest utility.
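As an illustration of how the semi-transparent depth coloring might be produced, here is a minimal sketch; the depth bands, colors, and alpha value are assumptions chosen for illustration and are not specified in the patent.

```cpp
// Sketch: mapping a charted or depth-finder reading to a semi-transparent
// RGBA colour for the depth overlay.
#include <cstdio>

struct Rgba { float r, g, b, a; };

Rgba depthToColor(double depthMeters)
{
    const float alpha = 0.35f;                    // keep the real world visible through the field
    if (depthMeters < 2.0)  return {1.0f, 0.0f, 0.0f, alpha};   // red: danger, too shallow
    if (depthMeters < 5.0)  return {1.0f, 1.0f, 0.0f, alpha};   // yellow: caution
    return {0.0f, 1.0f, 0.0f, alpha};                           // green: safe water
}

int main()
{
    const double samples[] = {1.2, 3.8, 9.0};
    for (double d : samples) {
        Rgba c = depthToColor(d);
        std::printf("depth %.1f m -> rgba(%.1f, %.1f, %.1f, %.2f)\n", d, c.r, c.g, c.b, c.a);
    }
    return 0;
}
```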
Use in Training Scenarios and in Operations. The inventive method for utilizing computer-generated three-dimensional representations to visualize hazards has many possible applications. Broadly, the representations can be used extensively for both training and operations scenarios.
Many training situations are impractical or inconvenient to reproduce in the real world (e.g., flooding in an office), unsafe to reproduce in the real world (e.g., fires aboard a ship), or impossible to produce in the real world (e.g., "see" otherwise invisible radioactivity, or "smell" otherwise odorless fumes). Computer-generated representations of these hazards will allow users to learn correct procedures for alleviating the incident at hand, yet maintain the highest level of trainee and instructor safety. Primary applications are in the training arena, where responses to potential future dangers or emergencies must be rehearsed.
Training with this method also allows for intuitive use of the method in actual operations. Operational use of this method would involve representations of hazards where dangerous unseen objects or events are occurring or could occur (e.g., computer-generated visible gas being placed in the area where real unseen gas is expected to be located). Applications include generation of computer-generated elements while conducting operations in dangerous and emergency situations.
Combining Computer-Generated Graphical Elements with the View of the Real World and Presenting it to the User. Once the computer renders the representation, it is combined with the real world image. In the preferred optical-based embodiment, the display of the rendered image is on a see-through HMD, which allows the view of the real world to be directly visible to the user through the use of partial mirrors, and to which the rendered image is added. Video-based embodiments utilizing non-see-through display units require additional hardware and software for mixing the captured image of the real world with the representation of the hazard.
If a motorized camera is used in the invention, the captured video image of the real world is mixed with the computer-generated graphical elements via an onboard or external image combiner. Onboard mixing is performed via software. External mixing can be provided by commercial-off-the-shelf (COTS) mixing hardware, such as a Videonics video mixer or Coriogen Eclipse keyer. Such an external solution would accept the video signal from the camera and a computer generated video signal from the computer and combine them into the final augmented reality image.
DETAILED DESCRIPTIONS OF OTHER EMBODIMENTS OF THE INVENTION
Other embodiments of the invention include land navigation hazard display. FIG 14 shows a real world view, including mountains 28, trees 27, shoulders 29, as well as the road 30, as seen by a driver of a vehicle. Dangerous areas of travel and/or a preferred route may be overlaid on a driver's field of view, as shown in FIG 15. The real world features are still visible through the display, which may also be displayed in color. The driver's heading 31 is displayed as well as a Key 32. Air navigation is another potential embodiment, providing information to help in low-visibility aircraft landings and aircraft terrain avoidance. FIG 16 shows the real world view of a runway in a clear visibility situation. FIG 17 shows the same real world view in a low visibility situation, complete with mountains 34, trees 33, shoulder 35, and runway 36, augmented with fog 37. FIG 18 further adds to the display by adding colors or patterns to indicate safe and unsafe areas (as indicated in the Key 39) and heading 38. Similar technologies to those described for waterway navigation would be employed to implement systems for either a land or air navigation application. The FIG 6 technologies, with the exception of the Ship Radar block (which can be replaced with a "Land Radar" or "Aircraft Radar" block), are all applicable to land or air embodiments.
Another preferred embodiment of the invention involves visualization of invisible atmospheric phenomena. The following paragraphs illustrate an embodiment of this method.
FIG 19 illustrates the data flow that defines the preferred method of the invention for visualizing otherwise invisible atmospheric phenomena. Data 41 can come from a variety of sources 40 - sensor data, human-reported data, or computer simulation data - concerning atmospheric phenomena in a particular area. The data 41 are used in a modeler 42 to create a model 43 of the atmospheric phenomena, or the atmosphere in the area. This model 43 and a viewpoint 45 from a pose sensor 44 are used by a computer 46 to render a computer-generated image 47 showing how the modeled phenomena would appear to an observer at the chosen viewpoint. "Viewpoint" is used to mean the position and orientation of an imaging sensor (i.e., any sensor which creates an image, such as a video camera), eye, or other instrument "seeing" the scene. Applying color or texture to the model of the otherwise invisible atmospheric phenomena allows the image to show the structure of the invisible phenomena to the observer. Next, the rendered image 47 is combined in a combiner 48 with an image of the real world 50 from image sensor 49, seen from the same viewpoint 45, to produce an output image 51 that is displayed 52. This latter process is commonly known as Augmented Reality.
The first step in the process is to gather data about relevant atmospheric phenomena. At least three pieces of data about a phenomenon are important - type, intensity, and extent. Types of phenomena include, for example, aircraft wingtip vortices and microbursts (downdrafts inside thunder clouds). Other important phenomena would include areas of wind shear and clouds with electrical activity. The type of phenomenon is relevant because some phenomena are more likely to be dangerous, move faster, and/or dissipate faster than others. Each type may warrant a different amount of caution on the part of pilots and air traffic controllers. The intensity of a phenomenon is similarly important, as a weak and dissipating phenomenon may not require any special action, while a strong or growing one may require rerouting or delaying aircraft. The size of a phenomenon, meaning the region over which it has intensity above some threshold, is important, as it tells pilots and air traffic controllers how much of a detour is in order. Larger detours increase delays, and knowing the size, growth rate, and movement of the phenomenon allows pilots and air traffic controllers to estimate the minimum safe detour.
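A minimal sketch of a record holding these three pieces of data, plus a center position so the phenomenon can be placed in the scene, is shown below; the names, enum values, and units are assumptions for illustration.

```cpp
// Sketch: one atmospheric phenomenon described by type, intensity, and extent.
#include <vector>

enum class PhenomenonType { WingtipVortex, Microburst, WindShear, ElectricalCloud };

struct Phenomenon {
    PhenomenonType type;        // kind of hazard; drives colour and required caution
    float          intensity;   // peak strength reported by sensor, pilot, or simulation
    float          cx, cy, cz;  // centre of the affected region
    float          radius;      // extent: radius over which intensity exceeds a threshold
};

int main()
{
    // The modeller described below keeps phenomena other than vortices in a flat list.
    std::vector<Phenomenon> atmosphere;
    atmosphere.push_back({PhenomenonType::Microburst, 0.8f, 120.0f, 300.0f, -40.0f, 150.0f});
    return atmosphere.empty() ? 1 : 0;
}
```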
There are several possible sources of data about atmospheric phenomena. One source is sensors. Sensors at airports can provide data on local atmospheric phenomena, while sensors on aircraft provide data on conditions in the airways. A second data source is human observation. Pilots can report their locations as they experience the effect of atmospheric phenomena. As air traffic follows prescribed lanes, these observations may be useful to planes following in the same lane. Similarly, observations by an air traffic controller at an airport would be valid for more planes taking off and landing. A third possible source of this data is atmospheric simulation. For instance, based on known wind strength, direction, and magnitude of turbulence, it may be possible to calculate the evolution of wingtip vortex positions. In the preferred embodiment, data about wingtip vortices could be taken as data from a simulation, or from airport sensors. These data would be based on the position and orientation of the aircraft over time, and simulations/assumptions regarding the amount of time required for the vortices to dissipate. Data about microbursts come from a point-and-click interface where a user selects the center of a microburst and can modify its reported size and intensity.
The second step in the visualization method (see FIG 19) involves a modeler 42 converting the data 41 into a model 43 of the atmosphere in a region. The preferred embodiment computes simulated points along possible paths of wingtip vortices of a (simulated) aircraft. Splines are then generated to interpolate the path of wingtip vortices between the known points. Other atmospheric phenomena are stored in a list, each with a center position, dimensions, and maximum intensity. A more accurate system might use more complicated representations, for instance allowing phenomena to have complex shapes (e.g., an anvil-shaped thunder cloud), or using voxels or vector fields for densely sampled regions. An alternative to representing the atmospheric phenomena with complex 3D geometric shapes would be the use of icons (which may be simple or complex, depending on the preference of the user). The icons would require less rendering power, and might not clutter the display as much. Furthermore, the use of a textual representation overlaid onto the display can show specifics of the phenomena such as type, speed, altitude, dimensions (size), and importance (to draw attention to more dangerous phenomena). The user may wish to display the textual display either by itself or in conjunction with the other display options of icons or 3D geometric shapes.
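As an illustration of the spline interpolation step, the sketch below interpolates between known vortex sample points with a Catmull-Rom spline. The patent only says "splines", so Catmull-Rom is an assumed choice, and the sample points are invented for the example.

```cpp
// Sketch: interpolating a wingtip-vortex path between known sample points.
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

// Catmull-Rom interpolation between p1 and p2, with p0 and p3 as neighbours.
Vec3 catmullRom(const Vec3& p0, const Vec3& p1, const Vec3& p2, const Vec3& p3, double t)
{
    auto blend = [t](double a, double b, double c, double d) {
        double t2 = t * t, t3 = t2 * t;
        return 0.5 * ((2.0 * b) + (-a + c) * t +
                      (2.0 * a - 5.0 * b + 4.0 * c - d) * t2 +
                      (-a + 3.0 * b - 3.0 * c + d) * t3);
    };
    return {blend(p0.x, p1.x, p2.x, p3.x),
            blend(p0.y, p1.y, p2.y, p3.y),
            blend(p0.z, p1.z, p2.z, p3.z)};
}

int main()
{
    // Simulated vortex sample points trailing behind an aircraft.
    std::vector<Vec3> pts = {{0,100,0}, {50,99,2}, {100,97,5}, {150,94,9}, {200,90,14}};
    // Emit ten interpolated points on the segment between pts[1] and pts[2].
    for (int i = 0; i <= 10; ++i) {
        Vec3 p = catmullRom(pts[0], pts[1], pts[2], pts[3], i / 10.0);
        std::printf("t=%.1f -> (%.2f, %.2f, %.2f)\n", i / 10.0, p.x, p.y, p.z);
    }
    return 0;
}
```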
The third step in the visualization method uses computer graphics 46 to render a scene, defined by a model of the atmospheric phenomena 43, from a particular viewpoint 45, producing a computer-generated image 47. Although this can be done in many ways, the preferred embodiment uses the OpenGL® (SGI, Mountain View, CA) programming interface, drawing the models of the atmospheric phenomena as sets of triangles. The software in the preferred embodiment converts the splines that model wingtip vortices into a set of ribbons arranged in a star cross-section shape, which has the appearance of a tube from nearly any viewing direction. Texture mapping provides a color fade from intense along the spline to transparent at the ribbon edges. For other phenomena, the software uses the technique of billboarding. The software finds a plane passing through a phenomenon's center location and normal to the line from viewpoint to center, uses the size of the phenomenon to determine the radius of a circle in that plane, and draws a fan of triangles to approximate that circle. Different colors are used for different types of phenomena, and alpha blending of these false colors shows an intensity falloff from the center to the edge of each phenomenon.
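A sketch of the billboarding geometry just described is given below: it finds the plane through the phenomenon's center normal to the viewpoint-to-center line and builds a triangle-fan circle of the phenomenon's radius in that plane. Color, alpha falloff, and the actual draw call are omitted, and the numeric inputs are illustrative.

```cpp
// Sketch: camera-facing "billboard" circle for one phenomenon, as a triangle fan.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Returns the fan vertices: the centre first, then points around the circle.
std::vector<Vec3> billboardFan(Vec3 viewpoint, Vec3 center, double radius, int segments)
{
    const double kPi = 3.14159265358979323846;
    Vec3 normal = normalize(sub(center, viewpoint));         // line from viewpoint to centre
    Vec3 helper = std::fabs(normal.y) < 0.99 ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 u = normalize(cross(helper, normal));                // two axes spanning the plane
    Vec3 v = cross(normal, u);                                // normal to the view line
    std::vector<Vec3> fan = {center};
    for (int i = 0; i <= segments; ++i) {
        double a = 2.0 * kPi * i / segments;
        fan.push_back({center.x + radius * (std::cos(a) * u.x + std::sin(a) * v.x),
                       center.y + radius * (std::cos(a) * u.y + std::sin(a) * v.y),
                       center.z + radius * (std::cos(a) * u.z + std::sin(a) * v.z)});
    }
    return fan;
}

int main()
{
    auto fan = billboardFan({0, 0, 0}, {0, 50, -200}, 30.0, 16);
    std::printf("fan has %zu vertices\n", fan.size());
    return 0;
}
```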
The next step in the visualization method is to acquire an image of the real world 50, using an image sensor 49, and to determine the viewpoint 45 from which that image was taken, using a pose sensor 44. There are several ways to accomplish this, depending on the hardware used to implement the method. In one reduction to practice, the image of the real world 50 is a static image of an airfield, taken from a birds-eye view by a camera, such as a satellite. Thus, the viewpoint 45 is fixed, pointing downward, and the pose sensor 44 consists of the programmer deducing the altitude of the viewpoint from the known size of objects appearing in the image. Alternately, the image of the real world can come from a ground-based stationary imaging sensor from a known viewpoint that is not a birds-eye view. This may be accomplished by mounting a camera (perhaps even one that can pan and tilt in a known, controlled manner) at an accurately known location on or near the airport. A similar embodiment could use radar as the image sensor, and calculate the equivalent viewpoint of the image. A more complicated embodiment might use a camera or the user's eye(s) as the image sensor, and use a tracking system (common in the field of augmented reality), such as the INTERSENSE IS-600 (Burlington, MA), as the pose sensor to determine the position and orientation of the camera or the user's head. In this situation, the camera may be mounted on another person or portable platform, and the user would observe the resultant display at his or her location.
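The altitude deduction mentioned above can be illustrated with a simple pinhole-camera relation; the sketch below and all of its numeric values are hypothetical and are not taken from the reduction to practice described in the patent.

```cpp
// Sketch: deducing the altitude of a fixed, downward-looking viewpoint from
// the known physical size of an object visible in the image.
#include <cstdio>

// altitude = focal_length_px * real_size_m / size_in_image_px
double altitudeFromKnownObject(double focalLengthPixels,
                               double realSizeMeters,
                               double imageSizePixels)
{
    return focalLengthPixels * realSizeMeters / imageSizePixels;
}

int main()
{
    // A runway marking known to be 30 m long spans 45 pixels in the image of a
    // camera whose focal length is 1500 pixels.
    double altitude = altitudeFromKnownObject(1500.0, 30.0, 45.0);
    std::printf("estimated viewpoint altitude: %.0f m\n", altitude);  // 1000 m
    return 0;
}
```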
The remaining steps in this embodiment of the method are to combine the computer-generated image 47 with the real world image 50 in an image combiner 48 and to send the output image 51 to a display 52. Again, this can be done in many ways, known in the art, depending on the hardware used to implement the method.
Methodologies for mixing and presenting content (steps 48, 51 and 52 of FIG 19) are shown in FIGS 20, 21, and 22. In FIG 20, a see-through augmented reality device is demonstrated. In this system, no automated mixing is required, as the image is projected directly over what the viewer sees through a semi-transparent display 55, as may be accomplished with partial mirrors. In FIG 21, the mixing of real and virtual images (augmented reality) is performed using an external video mixer 56. The real image is acquired by a camera 57 on the viewer's head, which is tracked by a 6-DOF tracker 54. FIG 22 is identical to FIG 21, except that the real and virtual portions of the image are mixed on the computer's 53 internal video card, so an external mixer is not required. In addition to displaying the image to a viewer's eyes through a Head-Mounted Display (HMD) 58, the composite image can be displayed in any video device, such as a monitor, television, heads-up-display, a moveable display that the user can rotate around that will provide an appropriate view based on how the display is rotated, or a display mounted on a monocular or a pair of binoculars.
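The on-computer mixing option of FIG 22 amounts to per-pixel alpha blending of the rendered image over the camera frame. The sketch below performs the blend on the CPU purely to make the arithmetic explicit; in the configuration the patent describes, the blend would normally be left to the video card.

```cpp
// Sketch: compositing a rendered RGBA image over a captured camera frame.
#include <cstdint>
#include <vector>

struct Rgba { uint8_t r, g, b, a; };

void compositeOver(const std::vector<Rgba>& rendered,    // computer-generated elements
                   std::vector<Rgba>&       cameraFrame) // real-world image, modified in place
{
    for (size_t i = 0; i < cameraFrame.size() && i < rendered.size(); ++i) {
        const float a = rendered[i].a / 255.0f;           // 0 = fully transparent overlay
        cameraFrame[i].r = static_cast<uint8_t>(rendered[i].r * a + cameraFrame[i].r * (1.0f - a));
        cameraFrame[i].g = static_cast<uint8_t>(rendered[i].g * a + cameraFrame[i].g * (1.0f - a));
        cameraFrame[i].b = static_cast<uint8_t>(rendered[i].b * a + cameraFrame[i].b * (1.0f - a));
    }
}

int main()
{
    Rgba overlayPixel{255, 0, 0, 128};                    // semi-transparent red hazard marker
    Rgba cameraPixel{10, 120, 40, 255};                   // captured real-world pixel
    std::vector<Rgba> rendered(4, overlayPixel);
    std::vector<Rgba> camera(4, cameraPixel);
    compositeOver(rendered, camera);
    return camera[0].r > 100 ? 0 : 1;                     // blended red should dominate
}
```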
FIGS 23 through 30 show examples of different displays accomplished by the invention. The images consist of virtual images and virtual objects overlaid on real backgrounds. In these images, intuitive representations have been created to represent important atmospheric phenomena that are otherwise invisible.
FIGS 23 through 26 show one application of top-down viewing of an airspace. The images demonstrate that trailing wingtip vortex data can be visualized such that the user can see the position and intensity of local atmospheric data 60. Airplanes can be represented as icons in cases where the planes are too small to see easily. Multiple planes and atmospheric disturbances can be overlaid on the same image.
In FIG 24, triangular icons are used to better indicate the relevant airplane. In FIG 25, the pilot of the aft plane can see that the pattern is clear directly in front of him.
In FIG 26, note that the vortex trails 60 are easily seen for use by air control personnel in the terminal 59.
FIGS 27 and 28 show examples of a pilot's augmented view. The figures show that data such as wind shear and microbursts can be represented as virtual objects 61 projected onto the viewer's display. Properties such as color, transparency, intensity, and size can be used to represent the various properties of the atmospheric phenomena 62. In the case of FIG 28, the dashed line (which could be a change of color in the display) of the marker has changed, which could represent a change in phenomena type.
FIGS 29 and 30 show examples of an airplane 63 overlaid with virtual wake vortices 65, demonstrating the power of applying virtual representations of data to real images. Fuzziness or blending can be used to show that the edges of the vortex trails 64 are not discrete, but that the area of influence fades as you move away from the center of the vortex.
The details of the embodiment using a motorized camera are now described. FIG 31 illustrates the hardware for the preferred method of the invention using a motorized camera. A motorized video camera 69, 70 is used as a tracking system for augmented reality. By connecting the motorized video camera to the computer 66 via an RS-232 serial cable 67 (for camera control and feedback) and video cable 68, the camera may be aimed, the position of the camera can be queried, and the image seen by the camera may be captured over the video cable 68 by software running on the computer. Additionally, the computer can query the camera for its current field of view, a necessary piece of information if the computer image is to be rendered properly.
FIG 32 is a close-up view of the preferred Sony EVI-D30 motorized camera. The camera is composed of a head 69 and a base 70 coupled by a motorized mount. This mount can be panned and tilted via commands from the computer system, which allows the head to move while the base remains stationary. The camera also has internal software, which tracks the current known pan and tilt position of the head with respect to the base, which may be queried over the RS-232 serial cable.
The video signal from the camera travels into a video capture, or "frame grabber," device connected to the computer. In this embodiment of the invention, an iRez USB Live! capture device is used, which allows software on the computer to capture, modify, and display the image on the screen of the computer. This image source can be combined with computer-generated elements before display, allowing for augmented reality applications. In FIG 33, an augmented reality display using the EVI-D30 as a tracked image source is shown. This image is a composite image originally acquired from the camera, which is displayed in FIG 34, and shows furniture and other items physically located in real space 72, 73, and 74. The software running on the computer then queries the camera for its orientation. The orientation returned from the camera represents the angle of the camera's optics with respect to the base of the camera. By combining this information with the known location and orientation of the camera base, a real-world position and orientation can be computed for the camera's optics. These data are then used to render three-dimensional computer-generated poles 71 with proper perspective and screen location, which are superimposed over the image captured from the camera. The resulting composite image is displayed to the user on the screen.
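A sketch of that pose computation is given below: the known heading of the camera base is combined with the pan and tilt angles reported by the camera to obtain a world-space viewing direction. The angle conventions, axes, and numeric values are assumptions for illustration only.

```cpp
// Sketch: turning reported pan/tilt angles plus a known base heading into a
// world-space viewing direction for rendering.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// pan: rotation about the vertical axis, tilt: elevation above the horizon,
// both in degrees and measured relative to the camera base.
Vec3 viewDirection(double baseHeadingDeg, double panDeg, double tiltDeg)
{
    const double kDegToRad = 3.14159265358979323846 / 180.0;
    double yaw   = (baseHeadingDeg + panDeg) * kDegToRad;   // base heading plus camera pan
    double pitch = tiltDeg * kDegToRad;
    return {std::cos(pitch) * std::sin(yaw),    // x: east
            std::sin(pitch),                    // y: up
            -std::cos(pitch) * std::cos(yaw)};  // z: negative north (looking down -z)
}

int main()
{
    // Base mounted facing north (heading 0), camera panned 30 deg right, tilted 10 deg up.
    Vec3 d = viewDirection(0.0, 30.0, 10.0);
    std::printf("view direction: (%.3f, %.3f, %.3f)\n", d.x, d.y, d.z);
    return 0;
}
```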
FIG 35 shows the same scene as FIG 33, but from a different angle (with new real world elements - the clock 75 and the trash can 76). The unaugmented version of FIG 35 (shown in FIG 36) is captured from the video camera, and the computer-generated elements 71 are again added to the image before display to the user. Note that, as the camera angle has changed, the perspective and view angle of the poles 71 have also changed, permitting them to remain anchored to locations in the real-world image.
The inventive method can be accomplished using the system components shown in FIG 37. The following items and results are needed to accomplish the preferred method of this invention:
A display device for presenting computer generated images to the EFR.
A method for tracking the position of the EFR display device.
A method for tracking the orientation of the EFR display device.
A method for communicating the position and orientation of the EFR display device to the incident commander.
A method for the incident commander to view information regarding the position and orientation of the EFR display device.
A method for the incident commander to generate messages to be sent to the EFR display device.
A method for the incident commander to send messages to the EFR display device's portable computer.
A method for presenting the messages, using computer generated images, sent by the incident commander to the EFR.
A method for combining the view of the real world seen at the position and orientation of the EFR display device with the computer-generated images representing the messages sent to the EFR by the incident commander.
A method for presenting the combined view to the EFR on the EFR display device.
EFR Display Device. In one preferred embodiment of the invention, the EFR display device (used to present computer-generated images to the EFR) is a Head Mounted Display (HMD) 83. There are many varieties of HMDs which would be acceptable, including see-through and non-see-through types. In the preferred embodiment, a see-through monocular HMD is used. Utilization of a see-through type of HMD allows the view of the real world to be obtained directly by the EFR. The manners in which a message is added to the display are described below. In a second preferred embodiment, a non-see-through HMD would be used as the EFR display device. In this case, the images of the real world (as captured via video camera) are mixed with the computer-generated images by using additional hardware and software components known in the art.
For preferred embodiments using an HMD as the EFR display device, a monocular HMD may be integrated directly into an EFR face mask which has been customized accordingly. See FIG 38 for a conceptual drawing of an SCBA 102 with the monocular HMD eyepiece 101 visible from the outside of the mask. Because first responders are associated with a number of different professions, the customized face mask could be part of a firefighter's SCBA (Self-Contained Breathing Apparatus), part of a HAZMAT or radiation suit, or part of a hard hat.
The EFR display device could also be a hand-held device, either see-through or non-see-through. In the see-through embodiment of this method, the EFR looks through the "see-through" portion (a transparent or semitransparent surface) of the hand-held display device and views the computer-generated elements projected onto the view of the real surroundings.
Similar to the second preferred embodiment of this method (which utilizes a non-see-through HMD), if the EFR is using a non-see-through hand-held display device, the images of the real world (as captured via video camera) are mixed with the computer-generated images by using additional hardware and software components.
The hand-held embodiment of the invention may also be integrated into other devices (which would require some level of customization) commonly used by first responders, such as Thermal Imagers, Navy Firefighter's Thermal Imagers (NFTI), or Geiger counters.
Method for Tracking the Position and Orientation of the EFR Display Device. The position of an EFR display device 84 and 83 is tracked using a wide area tracking system. This can be accomplished with a Radio Frequency (RF) technology-based tracker. The preferred embodiment would use RF transmitters. The tracking system would likely (but not necessarily) have transmitters installed at the incident site 80 as well as have a receiver that the EFR would have with him or her 81. This receiver could be mounted onto the display device, worn on the user's body, or carried by the user. In the preferred embodiment of the method (in which the EFR is wearing an HMD), the receiver 82 is also worn by the EFR, as in FIG 37. The receiver is what will be tracked to determine the location of the EFR's display device. Alternately, if a hand-held display device is used, the receiver could be mounted directly in or on the device, or a receiver worn by the EFR could be used to compute the position of the device. One possible installation of a tracking system is shown in FIG 46. Emitters 201 are installed on the outer walls and will provide tracking for the EFR 200 entering the structure.
To correctly determine the EFR's location in three dimensions, the RF tracking system must have at least four non-coplanar transmitters. If the incident space is at or near one elevation, a system having three tracking stations may be used to determine the EFR's location since definite knowledge of the vertical height of the EFR is not needed, and this method would assume the EFRs are at coplanar locations. In any case, the RF receiver would determine either the direction or distance to each transmitter, which would provide the location of the EFR. Alternately, the RF system just described can be implemented in reverse, with the EFR wearing a transmitter (as opposed to the receiver) and using three or more receivers to perform the computation of the display location.
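A sketch of the position computation from measured ranges to four non-coplanar transmitters is given below. Subtracting the first range equation from the others linearizes the problem into a 3x3 system, solved here with Cramer's rule; the transmitter locations and ranges are illustrative, and a fielded system would add filtering and error handling.

```cpp
// Sketch: recovering a receiver position from ranges to four known transmitters.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

double det3(double m[3][3])
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

Vec3 trilaterate(const Vec3 t[4], const double d[4])
{
    double A[3][3], b[3];
    for (int i = 1; i <= 3; ++i) {                        // rows: 2*(t_i - t_0) . p = b_i
        A[i - 1][0] = 2.0 * (t[i].x - t[0].x);
        A[i - 1][1] = 2.0 * (t[i].y - t[0].y);
        A[i - 1][2] = 2.0 * (t[i].z - t[0].z);
        b[i - 1] = (d[0] * d[0] - d[i] * d[i])
                 + (t[i].x * t[i].x - t[0].x * t[0].x)
                 + (t[i].y * t[i].y - t[0].y * t[0].y)
                 + (t[i].z * t[i].z - t[0].z * t[0].z);
    }
    double D = det3(A);
    double Ax[3][3], Ay[3][3], Az[3][3];
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c) { Ax[r][c] = A[r][c]; Ay[r][c] = A[r][c]; Az[r][c] = A[r][c]; }
    for (int r = 0; r < 3; ++r) { Ax[r][0] = b[r]; Ay[r][1] = b[r]; Az[r][2] = b[r]; }
    return {det3(Ax) / D, det3(Ay) / D, det3(Az) / D};    // Cramer's rule
}

int main()
{
    Vec3 tx[4] = {{0, 0, 0}, {20, 0, 0}, {0, 20, 0}, {0, 0, 10}};   // non-coplanar
    Vec3 truth = {5.0, 7.0, 1.5};
    double d[4];
    for (int i = 0; i < 4; ++i)                            // simulated range measurements
        d[i] = std::sqrt((truth.x - tx[i].x) * (truth.x - tx[i].x) +
                         (truth.y - tx[i].y) * (truth.y - tx[i].y) +
                         (truth.z - tx[i].z) * (truth.z - tx[i].z));
    Vec3 p = trilaterate(tx, d);
    std::printf("recovered position: (%.2f, %.2f, %.2f)\n", p.x, p.y, p.z);
    return 0;
}
```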
The orientation of the EFR display device can be tracked using inertial or compass type tracking equipment, available through the INTERSENSE CORPORATION (Burlington, MA). If a HMD is being used, this type of device 82 can be worn on the display device or on the EFR's head. Additionally, if a hand-held device is used, the orientation tracker could be mounted onto the hand-held device. In an alternate embodiment, two tracking devices can be used together in combination to determine the direction in which the EFR display device is pointing. The tracking equipment could also have a two-axis tilt sensor which measures the pitch and roll of the device.
As an alternative to the above embodiments for position and orientation tracking, an inertial/ultrasonic hybrid tracking system, a magnetic tracking system, or an optical tracking system can be used to determine both the position and orientation of the device. These tracking systems would have parts that would be worn or mounted in a similar fashion to the preferred embodiment.
Method for Communicating the Position and Orientation of the EFR Display Device to the Incident Commander. The data regarding the position and orientation of the EFR's display device can then be transmitted to the incident commander by using a transmitter 79 via Radio Frequency technology. This information is received by a receiver 77 attached to the incident commander's on-site laptop or portable computer 78. Method for the Incident Commander to View EFR Display Device Position and Orientation Information. The EFR display device position and orientation information is displayed on the incident commander's on-site laptop or portable computer. In the preferred embodiment, this display may consist of a floor plan of the incident site onto which the EFR's position and head orientation are displayed. This information may be displayed such that the EFR's position is represented as a stick figure with an orientation identical to that of the EFR. The EFR's position and orientation could also be represented by a simple arrow placed at the EFR's position on the incident commander's display.
The path which the EFR has taken may be tracked and displayed to the incident commander so that the incident commander may "see" the route(s) the EFR has taken. The EFR generating the path, a second EFR, and the incident commander could all see the path in their own displays, if desired. If multiple EFRs at an incident scene are using this system, their combined routes can be used to successfully construct routes of safe navigation throughout the incident space. This information could be used to display the paths to the various users of the system, including the EFRs and the incident commander. Since the positions of the EFRs are transmitted to the incident commander, the incident commander may share the positions of the EFRs with some or all members of the EFR team. If desired, the incident commander could also record the positions of the EFRs for feedback at a later time.
Method for the Incident Commander to Generate Messages to be Sent to the EFR Display Device. Based on the information received by the incident commander regarding the position and orientation of the EFR display device, the incident commander may use his/her computer (located at the incident site) to generate messages for the EFR. The incident commander can generate text messages by typing or by selecting common phrases from a list or menu. Likewise, the incident commander may select, from a list or menu, icons representing situations, actions, and hazards (such as flames or chemical spills) common to an incident site. FIG 39 is an example of a mixed text and iconic message relating to fire. If the incident commander needs to guide the EFR to a particular location, directional navigation data, such as an arrow, can be generated to indicate in which direction the EFR is to proceed. The incident commander may even generate a set of points in a path ("waypoints") for the EFR to follow to reach a destination. As the EFR reaches consecutive points along the path, the previous point is removed and the next goal is established via an icon representing the next intermediate point on the path. The final destination can also be marked with a special icon. See FIG 40 for a diagram of a structure and possible locations of waypoint icons used to guide the EFR from entry point to destination. The path of the EFR 154 can be recorded, and the incident commander may use this information to relay possible escape routes, indicators of hazards 152 and 153, and a final destination point 151 to one or more EFRs 150 at the scene (see FIG 41). Additionally, the EFR could use a wireframe rendering of the incident space (FIG 45 is an example of such) for navigation within the structure. The two most likely sources of a wireframe model of the incident space are (1) a database of models that contains the model of the space from previous measurements, or (2) equipment that the EFRs can wear or carry into the incident space that would generate a model of the room in real time as the EFR traverses the space.
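A sketch of the waypoint behavior described above follows: the current point is dropped and the next goal shown once the tracked EFR position comes within an arrival radius of it. The arrival radius, waypoint labels, and coordinates are assumptions for illustration.

```cpp
// Sketch: advancing through incident-commander waypoints as the EFR reaches each one.
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

struct Waypoint { double x, y; const char* label; };

class WaypointGuide {
public:
    explicit WaypointGuide(std::vector<Waypoint> path) : path_(std::move(path)) {}

    // Called whenever a new tracked EFR position arrives.  Returns the waypoint
    // that should currently be drawn as an icon in the EFR display.
    const Waypoint* update(double efrX, double efrY)
    {
        const double arrivalRadius = 1.5;   // metres; assumed threshold
        while (current_ < path_.size() &&
               std::hypot(path_[current_].x - efrX, path_[current_].y - efrY) < arrivalRadius)
            ++current_;                      // drop the reached point, show the next goal
        return current_ < path_.size() ? &path_[current_] : nullptr;  // nullptr: destination reached
    }

private:
    std::vector<Waypoint> path_;
    size_t current_ = 0;
};

int main()
{
    WaypointGuide guide({{2, 0, "doorway"}, {8, 3, "stairwell"}, {8, 12, "victim location"}});
    double trackedPath[][2] = {{0, 0}, {2.2, 0.4}, {7.5, 2.8}, {8.1, 11.0}, {8.0, 12.1}};
    for (auto& p : trackedPath) {
        const Waypoint* goal = guide.update(p[0], p[1]);
        std::printf("EFR at (%.1f, %.1f) -> next goal: %s\n",
                    p[0], p[1], goal ? goal->label : "destination reached");
    }
    return 0;
}
```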
Method for the Incident Commander to Send Messages to the EFR Display Device's Portable Computer. The incident commander will then transmit, via a transmitter and an EFR receiver, the message (as described above) to the EFR's computer. This combination could be radio-based, possibly commercially available technology such as wireless ethernet.
Method for Presenting the Messages to the EFR Using Computer-Generated Images. In the preferred embodiment, once the message is received by the EFR, it is rendered by the EFR's computer and displayed as an image in the EFR's forward view via a Head Mounted Display (HMD) 83 (see FIG 37).
If the data is directional data instructing the EFR where to proceed, the data is rendered and displayed as arrows or as markers or other appropriate icons. FIG 42 shows a possible mixed text and icon display 110 that conveys the message to the EFR to proceed up the stairs 111. FIG 43 shows an example of mixed text and icon display 120 of a path waypoint.
Text messages are rendered and displayed as text, and could contain warning data making the EFR aware of dangers of which he/she is presently unaware.
Icons representative of a variety of hazards can be rendered and displayed to the EFR provided the type and location of the hazard is known. Specifically, different icons could be used for such dangers as a fire, a bomb, a radiation leak, or a chemical spill. See FIG 44 for a text message 130 relating to a leak of a radioactive substance.
The message may contain data specific to the location and environment in which the incident is taking place. A key code, for example, could be sent to an EFR who is trying to safely traverse a secure installation. Temperature at the EFR's location inside an incident space could be displayed to the EFR provided a sensor is available to measure that temperature. Additionally, temperatures at other locations within the structure could be displayed to the EFR, provided sensors are installed at other locations within the structure.
If the EFR is trying to rescue a victim downed or trapped in a building, a message could be sent from the incident commander to the EFR to assist in handling potential injuries, such as First Aid procedures to aid a victim with a known specific medical condition.
The layout of the incident space can also be displayed to the EFR as a wireframe rendering (see FIG 45). This is particularly useful in low visibility situations. The geometric model used for this wireframe rendering can be generated in several ways. The model can be created before the incident; the dimensions of the incident space are entered into a computer and the resulting model of the space would be selected by the incident commander and transmitted to the EFR. The model is received and rendered by the EFR's computer to be a wireframe representation of the EFR's surroundings. The model could also be generated at the time of the incident. Technology exists which can use stereoscopic images of a space to construct a 3D-model based on that data. This commercial-off-the-shelf (COTS) equipment could be worn or carried by the EFR while traversing the incident space. The equipment used to generate the 3D model could also be mounted onto a tripod or other stationary mount. This equipment could use either wireless or wired connections. If the generated model is sent to the incident commander's computer, the incident commander's computer can serve as a central repository for data relevant to the incident. In this case, the model generated at the incident scene can be relayed to other EFRs at the scene. Furthermore, if multiple model generators are being used, the results of the various modelers could be combined to create a growing model which could be shared by all users.
Method for Acquiring a View of the Real World. In the preferred embodiment, as explained above, the view of the real world is inherently present through a see-through HMD. This embodiment minimizes necessary system hardware by eliminating the need for additional devices used to capture the images of the real world and to mix the captured real world images with the computer-generated images. Likewise, if the EFR uses a hand-held, see-through display device, the view of the real world is inherently present when the EFR looks through the see-through portion of the device. Embodiments of this method using non-see-through devices would capture an image of the real world with a video camera. Method for Combining the View of the Real World with the Computer-Generated Images and for Presenting the Combination to the EFR. In the preferred embodiment, a see-through display device is used in which the view of the real world is inherently visible to the user. Computer generated images are projected into this device, where they are superimposed onto the view seen by the user. The combined view is created automatically through the use of partial mirrors used in the see-through display device with no additional equipment required.
Other embodiments of this method use both hardware and software components for the mixing of real world and computer-generated imagery. For example, an image of the real world acquired from a camera may be combined with computer generated images using a hardware mixer. The combined view in those embodiments is presented to the EFR on a non-see-through HMD or other non-see-through display device.
Regardless of the method used for combining the images, the result is an augmented view of reality for the EFR for use in both training and actual operations.
An embodiment of the inventive method may use smart card technology to store pertinent training, operations, and simulation data related to hazards, including, but not limited to, one or more of training information, trainee and team performance data, simulation parameters, metrics, and other information related to training, simulation, and/or evaluation. The relevant data is stored on the smart card and is accessed via a smart card terminal. The terminal can be connected to either the simulation computer or to a separate computer being used for analysis. The smart card terminal provides access to the data upon insertion of the smart card. Data on the smart card (from a previous training session, for example) can be retrieved and can also be updated to reflect the trainee's most recent performance. A "smart card" is a digital rewriteable memory device shaped like a credit card that can be read and written by a smart card terminal.
Computer-based simulation of specific hazardous scenarios is a frequently used method of training. This simulation is frequently accomplished via Virtual Reality (VR) and Augmented Reality (AR). Smart cards can be used to store data from current and previous training sessions. That data can include trainee identification information, simulation data for the virtual environment, and metrics regarding the trainee's performance in one or more given scenarios where hazards or hazard information is present. For example, training for driving an automobile under difficult conditions (such as law enforcement high-speed driving) can be done with a driving simulator. The trainee would enter the simulator, insert his/her smart card into the smart card terminal, and be identified based on information stored on the smart card. The smart card would also contain information such as chase parameters (e.g., speed, visibility, type of vehicle, road conditions). The scenario could be run and the trainee's interaction with the scenario (the trainee's performance) can be recorded and stored on the card. Those results can be called up later to evaluate progress in a given skill or other "lessons learned." Likewise, by storing simulation information on a smart card, training scenarios can be repeated any number of times in a cost-efficient and reliable manner. Specifically, an instructor could have one smart card with a set of scenarios that can be run at the instructor's discretion. The instructor can administer the same scenario, perhaps as a test, to multiple trainees with minimal risk of instructor error, thus providing more valid test results.
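Below is a sketch of the kind of per-session record that might be written back to the card after a scenario such as the driving example above. The field names, serialization format, and card I/O are assumptions; a real system would go through the smart card terminal's own interface rather than a string buffer.

```cpp
// Sketch: a hypothetical per-session training record and its serialisation.
#include <iostream>
#include <sstream>
#include <string>

struct TrainingSessionRecord {
    std::string traineeId;
    std::string scenarioName;     // e.g. "engine room fire, low visibility"
    int         score;            // scenario-defined score
    double      extinguishTimeSec;
    double      waterUsedLitres;
    bool        simulatedInjury;  // e.g. trainee "death" by fire or electrocution
};

// Serialise the record into the block of bytes handed to the card terminal.
std::string serialize(const TrainingSessionRecord& r)
{
    std::ostringstream out;
    out << r.traineeId << '|' << r.scenarioName << '|' << r.score << '|'
        << r.extinguishTimeSec << '|' << r.waterUsedLitres << '|' << r.simulatedInjury;
    return out.str();
}

int main()
{
    TrainingSessionRecord rec{"trainee-042", "engine room fire", 87, 212.0, 340.0, false};
    std::cout << "record to write to card: " << serialize(rec) << "\n";
    return 0;
}
```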
The smart card can be used to store any type of data relevant to a particular hazard situation. When the type of scenario presented must be tailored to what could be referred to as a trainee's personal training profile (identifying personal information and other data), that information can be stored on the smart card. This data might include, but is not limited to, skills mastered, levels of expertise or other special training (such as HAZMAT training), and training needed for upcoming assignments. Likewise, the method featured in this application can also be used to store instructor data on a smart card. Examples include authentication of an instructor into a training system for purposes of security or access control; or simply to provide the system with the instructor's personal training profile for the purpose of tailoring the application to the instructor.
Performance data (interaction with training scenarios) such as success, score, or other parameters for individual trainees and for teams can be stored on a smart card. Furthermore, applications which have a notion of a "team" can store information about the user's participation within the team, the user's performance in the context of the team, and/or the performance of the team as a whole.
Training application parameters, such as locations of hazards, size of training space, or any other parameter of the application, can be stored on the smart card. The result is the creation of "scenario" cards containing the specific data required by the simulation or training application. Furthermore, multiple smart cards can be used to track multiple users and multiple scenarios.
Security of information contained on the smart card may be of concern to the smart card user. Smart card data can be protected using a number of methods. The card can be protected via a personal identification number (PIN). This provides a security layer such that the card used is authenticated by the owner. That is, if a user enters the correct PIN to obtain the data from the card, it can be safely assumed that the user is the valid owner of the card, thus preventing identity theft in the training environment. Another method of protection is to issue a password for use of the card. As with use of a PIN, if a card user enters the correct password, it is assumed that the user is the card owner. The smart card can also be protected via a cryptographic "handshake." In this case, the contents of the card are protected via mathematically secure cryptography requiring secure identification of any system requesting data from the card. This can prevent unauthorized systems or users from accessing the data that exists on the card.
As shown in FIG 51, use of the smart card 181 requires a smart card terminal 182. The smart card terminal 182 is a read-write device which allows the data on the card 181 to be retrieved for use in the system 183 and new data to be written to the card 181 for use in the future. It can be connected directly to the computer(s) 183 running the training application, displaying the output of the smart card and training simulation on the output display 184. This is most practical when the training environment has a readily available computer 183 that can execute the training scenario and when only one trainee is involved. The method featured in this application also allows training via networked communication. The smart card terminal can be connected to a separate computer which is connected (via standard networking cables) to the computer(s) running the training application. For example, if a local computer can accommodate use of the smart card terminal, but not the training scenario, the training scenario can be directed to another computer on the network and used at a local, more convenient location.
One very specific use of this method involves the situation where a system is being used to train firefighters/damage control personnel. The data stored on the smart card is used to track and store fire/damage extent, length of time fires burned, amount of water or other extinguishing agent used to put out the fire, relative score, and any potential injury that is likely to have been sustained by the trainee(s) or equipment. The opening screen of an application for this particular use of this method is shown in FIG 47. This screen contains information about the identity of the cardholder, and shows a log of previous training scenarios and the score attained for each one. When running a scenario, data about the current status of the scenario is shown as in FIG 48. After the scenario ends, performance statistics are presented to the trainee, and a score is generated and written to the card, as shown in FIG 49. FIG 50 shows the log of recent training scenarios again as in FIG 47, but includes the most recent training scenario depicted in FIG 48 and FIG 49.
Other potential applications of this method include operational scenarios, in which a user's operational performance can be recorded and reviewed. For instance, in an air traffic control scenario, the system may track any close calls, as well as performance data related to the number of aircraft on the controller's screen, the busiest time of day, average radio transmission length, and other metrics. These metrics as stored on the smart card represent a personal performance profile that can be used for evaluation of the controller, or as a tamper-resistant record of the controller's actions. Such a system would expand performance evaluation beyond training and into daily use, providing improved on-the-job safety and efficiency.
Although specific features of the invention are shown in some drawings and not others, this is for convenience only, as each feature may be combined with any or all of the other features in accordance with the invention.
Other embodiments will occur to those skilled in the art and are within the following claims.
Claims

What is claimed is:

1. A method of visualization of hazards and hazard information, comprising: providing a display unit for the user; providing motion tracking hardware; using the motion tracking hardware to determine the location and direction of the viewpoint to which the computer-generated graphical elements are being rendered; providing an image or view of the real world; using a computer to generate two- and three-dimensional graphical elements as representations of hazards and information about hazards; rendering the computer-generated graphical elements to correspond to the user's viewpoint; creating for the user a mixed view comprised of an actual view of the real world as it appears in front of the user, where graphical elements can be placed anywhere in the real world and remain anchored to that place in the real world regardless of the direction in which the user is looking, wherein the rendered graphical elements are superimposed on the actual view, to accomplish an augmented reality view of representations of hazards in the real world; and presenting the augmented reality view, via the display unit, to the user.

2. The method of claim 1 in which the display unit is selected from the group of display units consisting of a heads-up display, a Head Mounted Display (HMD), a see-through HMD, and a non-see-through HMD.

3. The method of claim 1 in which the display unit is selected from the group of display units consisting of a heads-down-display, a display unit that is moveable, but not held, by the user, a fixed computer monitor, a display unit that is used in a vehicle, and a display unit that is used in an aircraft.

4. The method of claim 1 in which the display unit is selected from the group of display units consisting of a handheld display device, a handheld see-through device, a handheld binocular type of display, a handheld monocular type of display, a handheld non-see-through device, and a display unit that is carried by a user.

5. The method of claim 1 in which providing an image or view of the real world comprises capturing an image with a video camera that is mounted to the display unit.

6. The method of claim 1 in which the image of the real world is a static image.
7. The method of claim 1 in which the image of the real world is from a ground-based stationary imaging sensor from a known viewpoint.
8. The method of claim 1 in which the image of the real world has been modified to appear approximately like a thermal view of the real world would appear.
9. The method of claim 1 in which the motion tracking hardware is selected from the group of motion tracking hardware consisting of a motorized camera mount, an external tracking system, and a Global Positioning System.
10. The method of claim 1 in which the representations are designed to be reproductions to mimic the appearance and actions of actual hazards.
11. The method of claim 1 in which the representations are designed to be indicators of actual hazards, and to convey their type and positions.
12. The method of claim 1 in which the representations are used to indicate a safe region in the vicinity of a hazard.
13. The method of claim 1 in which the representations are entered into the computer interactively by a user.
14. The method of claim 1 in which the representations are automatically placed using a database of locations.
15. The method of claim 1 in which the representations are automatically placed using input from sensors.
16. The method of claim 1 in which the representations are static 3D objects.
17. The method of claim 1 in which the representations are animated textures mapped onto 3D objects.
18. The method of claim 1 in which the representations are objects that appear to be emanating out of the ground.
19. The method of claim 1 in which the representations blink or have a blinking component.
20. The method of claim 1 in which the representations represent at least the location of a hazard selected from the group of hazards consisting of visible fire, visible water, visible smoke, poison gas, heat, chemicals and radiation.
21. The method of claim 1 in which the representations are created to appear and act to mimic how a hazard selected from the group of hazards consisting of fire in that location would appear and act, water in that location would appear and act, smoke in that location would appear and act, unseen poison gas in that location would act, unseen heat in that location would act, and unseen radiation in that location would act.
22. The method of claim 1 in which the rendered computer-generated three-dimensional graphical elements are representations displaying an image property selected from the group of properties consisting of fuzziness, fading, transparency, and blending, to represent the intensity, spatial extent, and edges of at least one hazard.
23. The method of claim 1 in which the rendered computer-generated three-dimensional graphical elements are icons which represent hazards.
24. The method of claim 1 in which information about the hazard is displayed to the user via text overlaid onto a view of a real background.
25. The method of claim 1 further comprising generating for the user an audio warning component appropriate to at least one hazard being represented.
26. The method of claim 1 in which the representations are used in operations.
27. The method of claim 1 in which the representations are used in training.
28. The method of claim 1 in which the representations are displayed without a view of the real world.
29. The method of claim 1 in which said hazards and information about hazards relate to navigation, comprising: obtaining navigation information; creating a graphical overlay of relevant navigation information; providing a display unit for the graphical overlay; determining viewpoint location and direction in real-time with position tracking/positioning hardware; and displaying mixed real and virtual imagery in the display unit.
30. The method of claim 29 in which real-world imagery is provided by a camera.
31. The method of claim 30 in which real and virtual imagery are mixed via a luminance key or a chroma key in a video mixer.
32. The method of claim 30 in which real imagery captured with a frame grabber and virtual imagery are mixed via the alpha (transparency) channel on a computer.
33. The method of claim 29 in which real and virtual imagery are combined on an optical see- through display.
34. The method of claim 29 in which motion tracking hardware consists of a three GPS or DGPS receiver configuration, requiring the user's head to remain in a fixed position to view a correctly anchored AR display.
35. The method of claim 29 in which motion tracking hardware consists of a single GPS or DGPS receiver to measure the vehicle position, an electronic compass to detect platform heading, an inertial or other pitch and roll sensor, and a 6-DOF tracking system (magnetic, acoustic, optical, or other) to determine navigator's head position/orientation relative to the platform.
36. The method of claim 29 in which motion tracking hardware consists of two GPS or DGPS receivers to measure the platform position and heading, an inertial or other pitch and roll sensor, and a 6-DOF tracking system (magnetic, acoustic, optical, or other) to determine navigator's head position/orientation relative to the platform.
37. The method of claim 29 in which motion tracking hardware consists of three GPS or DGPS receivers to measure the platform position and orientation and a 6-DOF tracking system (magnetic, acoustic, optical, or other) to determine navigator's head position/orientation relative to the platform.
38. The method of claim 29 where the display unit is a head- worn display.
39. The method of claim 29 where the display unit is a handheld display, such as binoculars or a flat panel.
40. The method of claim 29 where the display unit is a heads-up display (HUD).
41. The method of claim 29 where digital navigation information includes digital navigation charts.
42. The method of claim 29 where digital navigation information includes information from a radar system.
43. The method of claim 29 where navigation information includes platform's heading.
44. The method of claim 29 where navigation information includes platform's distance from hazards.
45. The method of claim 29 where navigation information includes water depth.
46. The method of claim 29 in which navigation information is overlaid as a wireframe graphic.
47. The method of claim 29 in which navigation information is overlaid as a solid graphic.
48. The method of claim 29 in which navigation information is overlaid as a semi-transparent or fuzzy (soft-bordered) graphic.
49. The method of claim 30 in which the camera is mounted at a distance from the user's head and oriented using motion tracking data to control motors on a camera mount.
50. The method of claim 29 in which relevant navigation information is determined by computer algorithms that filter system data.
51. The method of claim 29 in which navigation information is controlled and customized with a handheld device.
52. The method of claim 29 in which navigation information is controlled and customized by voice recognition.
53. The method of claim 29 in which navigation information is controlled and customized by a touch screen.
54. The method of claim 29 in which navigation information is controlled and customized by a mouse.
55. The method of claim 29 applied to waterway navigation.
56. The method of claim 29 applied to land navigation.
57. The method of claim 29 applied to navigation of aircraft approaching runways and terrain in low visibility conditions.
58. The method of claim 1 in which said hazards and information about hazards relate to invisible atmospheric phenomena, comprising: using a computer to render an image representing the atmospheric information; providing an image or view of the real world; augmenting the image or view of the real world with the rendered image; and presenting the augmented view to the user, to disseminate atmospheric phenomenon information.
59. The method of claim 58 in which an augmented reality system is used to track the viewpoint of the user of the real world, and display the augmented view on a head mounted display.
60. The method of claim 59 in which providing an image comprises using a camera to capture the real world image, and wherein the presenting step accomplishes a display of the augmented image.
61. The method of claim 59 in which the presenting step accomplishes a display of the rendered image on a see-through head mounted display, which allows the view of the real world to be directly visible to the user through the use of partial mirrors, to which the rendered image is added.
62. The method of claim 58 in which atmospheric phenomena include aircraft wingtip vortices.
63. The method of claim 58 in which atmospheric phenomena include microbursts.
64. The method of claim 58 in which atmospheric phenomena include wind shear.
65. The method of claim 58 in which atmospheric phenomena include clear air turbulence.
66. The method of claim 58 in which the rendered image indicates the type of phenomena.
67. The method of claim 58 in which the rendered image indicates the intensity of phenomena.
68. The method of claim 58 in which the rendered image indicates the spatial extent of phenomena.
69. The method of claim 58 in which the data are derived from sensors which acquire atmospheric data.
70. The method of claim 58 in which the data are derived from direct observation by a human.
71. The method of claim 70 in which the human observations are provided by one or more pilots.
72. The method of claim 70 in which the human observations are provided by one or more air traffic controllers.
73. The method of claim 58 in which the data are derived from atmospheric computer simulation.
74. The method of claim 58 in which the rendered image comprises objects presented in single or multiple colors.
75. The method of claim 58 in which the rendering step comprises using objects of various sizes and shapes to represent atmospheric phenomena.
76. The method of claim 58 in which the rendering step comprises using an image property selected from the group of properties consisting of fuzziness, fading, transparency, and blending to represent the edges of atmospheric phenomena.
77. The method of claim 58 in which the rendering step comprises using an image property selected from the group of properties consisting of fuzziness, level of fade, transparency, and blending to represent the magnitude or intensity of atmospheric phenomena.
78. The method of claim 58 in which the rendering step comprises using icons to represent atmospheric phenomena.
79. The method of claim 58 in which the rendering step comprises using icons to represent airplanes.
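One way to realize the soft borders of claim 76 and the intensity coding of claim 77 above is to map a phenomenon's local intensity and its distance from the boundary to the alpha channel of the rendered color. The sketch below is an illustration only; the yellow-to-red ramp and the normalization constants are assumptions, not part of the disclosure.

```python
def phenomenon_rgba(intensity, edge_distance, max_intensity, fade_width):
    """Return an RGBA color whose opacity encodes the phenomenon's
    magnitude and whose edges fade (soft borders) over fade_width units."""
    level = max(0.0, min(1.0, intensity / max_intensity))
    edge_fade = max(0.0, min(1.0, edge_distance / fade_width))
    # Blend from yellow (weak) toward red (severe); alpha carries both the
    # magnitude of the phenomenon and the softness of its boundary.
    return (1.0, 1.0 - level, 0.0, level * edge_fade)
```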
80. The method of claim 58 in which the augmented view is presented on a television or computer monitor.
81. The method of claim 58 in which the augmented view is presented in a heads-up-display.
82. The method of claim 58 in which the augmented view is presented in a heads-down-display.
83. The method of claim 58 in which the augmented view is presented in a display moveable by the user, and further comprising tracking the position of the display, to present an augmented view corresponding to the position of the display.
84. The method of claim 83 in which the augmented view is presented in a handheld binocular type of display.
85. The method of claim 83 in which the augmented view is presented in a handheld monocular type of display.
86. The method of claim 83 in which the augmented view is presented in a handheld movable display.
87. The method of claim 58 in which providing an image or view of the real world comprises taking a real image with an imaging device that is not worn on the user's head.
88. The method of claim 87 in which the viewpoint of the imaging device is a bird's-eye view.
89. The method of claim 87 in which the image of the real world is a static image.
90. The method of claim 87 in which the image of the real world is output from radar.
91. The method of claim 87 in which the image of the real world is from a ground-based stationary imaging sensor from a known viewpoint.
92. The method of claim 87 in which the presenting step comprises displaying the augmented view on a fixed monitor.
93. The method of claim 87 in which providing an image or view of the real world comprises capturing an image with a camera that is mounted to a head-mounted or other portable display device.
94. The method of claim 58 in which information about the atmospheric phenomena can be displayed to the user via text overlaid onto a view of a real background.
95. The method of claim 94 in which the textual display is optionally displayed to the user in conjunction with the other, non-textual graphical methods described in the patent.
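For the video see-through case of claims 58-60 above, the rendered hazard layer is combined with the captured camera frame before display. A minimal compositing sketch, assuming both images are float arrays in [0, 1] and the renderer supplies per-pixel alpha, is shown below; any text overlay of claim 94 can be drawn into the same RGBA layer before blending.

```python
import numpy as np

def composite_augmented_view(camera_frame, rendered_rgba):
    """Alpha-blend a rendered hazard layer over a captured camera frame.

    camera_frame: H x W x 3 float array in [0, 1] (the view of the real world)
    rendered_rgba: H x W x 4 float array in [0, 1] from the renderer
    """
    rgb = rendered_rgba[..., :3]
    alpha = rendered_rgba[..., 3:4]
    return alpha * rgb + (1.0 - alpha) * camera_frame
```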
96. The method of claim 1 in which said motion tracking hardware consists of a motorized camera mount for use as a tracking system for Augmented Reality (AR), comprising: capturing an image or view of the real world; determining the orientation of a camera being carried by the camera mount by obtaining information from the motorized camera mount; using a computer to generate a graphical image representing unseen information that corresponds to the known orientation of the viewpoint of the camera; augmenting a view of the real world with the computer generated image; and presenting the augmented view to the user.
97. The method of claim 96 wherein the augmenting step comprises using onboard video mixing through use of a video capture device with the computer, and the capturing step comprises capturing a video of reality.
98. The method of claim 96 wherein the augmenting step comprises using an external video mixing solution, to combine real and computer-generated graphical elements outside of the computer.
99. The method of claim 96 for use in operations.
100. The method of claim 96 for use in training.
101. The method of claim 96 in which the determining step comprises calibrating the camera and camera mount.
102. The method of claim 96 in which the camera mount is coupled to a fixed platform.
103. The method of claim 96 where the determining step comprises using the camera and camera mount in conjunction with a separate tracking system to generate a combined position and orientation value.
104. The method of claim 96 in which the determining step comprises using the camera and camera mount and using the computer to request the current camera position, thereby utilizing a feedback architecture.
105. The method of claim 96 in which the determining step comprises using the camera and camera mount and using a feed-forward architecture.
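Claims 104 and 105 above distinguish feedback from feed-forward use of the motorized camera mount. The sketch below contrasts the two; the MotorizedMount class is a hypothetical stand-in, since the disclosure does not name a particular mount protocol.

```python
class MotorizedMount:
    """Hypothetical pan/tilt mount controller used only for illustration."""

    def __init__(self):
        self._pan_deg, self._tilt_deg = 0.0, 0.0

    def command(self, pan_deg, tilt_deg):
        self._pan_deg, self._tilt_deg = pan_deg, tilt_deg

    def query(self):
        return self._pan_deg, self._tilt_deg


def feedback_orientation(mount):
    """Feedback architecture: ask the mount where it actually is, then render."""
    return mount.query()


def feedforward_orientation(mount, pan_deg, tilt_deg):
    """Feed-forward architecture: command a pose and render as if it were reached."""
    mount.command(pan_deg, tilt_deg)
    return pan_deg, tilt_deg
```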
106. The method of claim 96 in which the camera mount is not stationary.
107. The method of claim 106 in which the camera mount is attached to a vehicle.
108. The method of claim 106 in which the camera mount is attached to an aircraft.
109. The method of claim 106 in which the camera mount is attached to a watercraft or ship.
110. The method of claim 106 in which the camera mount is attached to a gimbaled arm.
111. The method of claim 96 in which the determining step comprises the motorized camera mount reporting the field of view of the camera to the computer.
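When the mount reports the camera's field of view (claim 111), the renderer can rebuild its projection so that graphics stay registered as the camera zooms. A sketch assuming an OpenGL-style projection matrix is given below; the matrix convention is an assumption, not specified by the disclosure.

```python
import math
import numpy as np

def projection_from_fov(vertical_fov_deg, aspect, near, far):
    """Perspective projection built from the field of view reported by the
    motorized camera mount, so rendered hazards match the current zoom."""
    f = 1.0 / math.tan(math.radians(vertical_fov_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])
```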
112. The method of claim 1 in which said hazards and hazard information relate to emergency first responder command, control, and safety information, comprising: providing a display device; obtaining data about the current physical location of the display device; obtaining data about the current orientation of the display device; generating 2D and 3D information for the user of the display device by using a computer; transmitting the information to a computer worn or held by the user; rendering 3D graphical elements based on the 3D information on the computer worn or held by the user; creating an overlay of the 2D information on the computer worn or held by the user; and creating for the user a mixed view comprised of an actual view of the real world as it appears in front of the user, in which 3D graphical elements can be placed at any location in the real world and anchored to that location regardless of the direction in which the user is looking, wherein the rendered 3D graphical elements and 2D information are superimposed on the actual view, to accomplish an augmented reality view.
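The world-anchoring described in claim 112 above amounts to transforming an anchored 3D point through the tracked pose of the display device and the current projection each frame, so the graphic stays fixed to its real-world location however the user turns. The sketch below assumes the same matrix conventions as the projection sketch earlier and is illustrative only.

```python
import numpy as np

def project_anchored_point(world_point, device_pos, device_rot, projection):
    """Project a world-anchored 3D point into the tracked display so the
    graphic remains attached to its real-world location."""
    view_point = np.asarray(device_rot).T @ (np.asarray(world_point) - np.asarray(device_pos))
    clip = projection @ np.append(view_point, 1.0)
    if clip[3] <= 0.0:
        return None  # the anchored point is behind the viewer
    ndc = clip[:3] / clip[3]
    return ndc[:2]  # normalized screen coordinates in [-1, 1]
```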
113. The method of claim 112 in which the user display device is selected from the group of display devices consisting of a Head Mounted Display (HMD), a see-through HMD, a non-see-through HMD, a monocular type of HMD, an HMD integrated into the user's face mask, a hand held display device, a see-through device, and a non-see-through device.
114. The method of claim 113 in which the real world image is obtained using a video camera.
115. The method of claim 113 in which the face mask is selected from the group of face masks consisting of a firefighter's SCBA (Self Contained Breathing Apparatus), a face mask that is part of a HAZMAT (Hazardous Materials) suit, a face mask that is part of a radiation suit, and a face mask that is part of a hard hat.
116. The method of claim 113 in which the non-see-through display device obtains an image of the real world using a video camera.
117. The method of claim 113 in which the hand held device is integrated into another device.
118. The method of claim 117 in which the other device is selected from the group of devices consisting of a Thermal Imager, a Navy Firefighter's Thermal Imager (NFTI), and a Geiger counter.
119. The method of claim 112 in which the information transmitted to the user's computer is selected from the group of information consisting of textual data, directional navigation data, iconic information, and a wireframe view of the incident space in which the user is physically located.
120. The method of claim 112 in which the rendered data is selected from the group of rendered data consisting of navigation data, telling the user the direction in which to travel, warning data, telling the user of dangers of which the user may not be aware, environmental temperature at the location of the user, environmental temperature at a location the user is approaching, information pertaining to the area in which the event is occurring to help the user safely and thoroughly perform a task, information pertaining to individuals at an incident site, and an arrow that the user can follow to reach a destination.
121. The method of claim 112 in which a waypoint mode is established in which direction-indicating icons are displayed on the computer worn or held by the user, to create for the user intermediate points along a path that the user can follow in order to reach a final destination.
122. The method of claim 121 in which an icon is displayed to indicate the final destination of the user along the waypoint path.
123. The method of claim 121 in which icons are displayed to represent intermediate points between the user's current location and final destination.
124. The method of claim 121 in which icon information is used to represent harmful hazards that are located in an area, the harmful hazard being selected from the group of hazards consisting of a fire, a bomb, a radiation leak, and a chemical spill.
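The waypoint mode of claims 121-123 above needs, at minimum, the next unreached waypoint and the bearing for its direction-indicating icon. A minimal sketch follows, assuming 2D east/north coordinates and an arbitrary reach radius; both are illustrative assumptions.

```python
import math

def next_waypoint_guidance(user_pos, waypoints, reach_radius_m=2.0):
    """Return the next unreached waypoint and the bearing (degrees clockwise
    from north) to display as a direction-indicating icon."""
    for waypoint in waypoints:
        east = waypoint[0] - user_pos[0]
        north = waypoint[1] - user_pos[1]
        if math.hypot(east, north) > reach_radius_m:
            bearing_deg = math.degrees(math.atan2(east, north)) % 360.0
            return waypoint, bearing_deg
    return None, None  # final destination reached
```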
125. The method of claim 112 in which the information transmitted to the user's wearable computer originates from a user operating a computing device.
126. The method of claim 119 in which a model is used to show the wireframe representation, wherein the model is obtained from a geometric model created before the time of use.
127. The method of claim 119 in which a model is used to show the wireframe representation, wherein the model is generated at the time of use.
128. The method of claim 127 in which equipment mounted on the user is used to generate the wireframe model of the space.
129. The method of claim 128 in which the model is generated as the user traverses the space.
130. The method of claim 127 in which the equipment used to generate the model of the space the user is in is carried by the user.
131. The method of claim 127 in which the equipment used to generate the model of the space the user is in is on a stationary mount.
132. The method of claim 127 in which the model obtained at the time of use is shared with other users.
133. The method of claim 132 in which the model of the space is shared with other users using wireless connections.
134. The method of claim 132 in which the model of the space is shared with other users using wired connections.
135. The method of claim 132 in which the shared model information is used in conjunction with other model information to create an enlarged model.
136. The method of claim 135 in which the enlarged model is shared and can be used by other users.
137. The method of claim 112 in which obtaining data about the current location and orientation of the display device comprises using a radio frequency tracking technology.
138. The method of claim 137 in which there are at least three radio frequency transmitters located in proximity to the space the user is in, and where the user has a radio frequency receiver.
139. The method of claim 138 in which the radio frequency receiver determines the direction of each of the radio frequency transmitters, and from that it determines the location of the user relative to the transmitters.
140. The method of claim 137 in which the radio frequency receiver determines the distance to each of the radio frequency transmitters, and from that information determines the location of the user relative to the transmitters.
141. The method of claim 137 in which there are at least three radio frequency receivers located in proximity to the space the user is in, and where the user has a radio frequency transmitter on his/her person.
142. The method of claim 141 in which the radio frequency receivers determine the direction of the radio frequency transmitter, and from that determine the location of the user relative to the receivers.
143. The method of claim 141 in which the radio frequency receivers determine the distance of the radio frequency transmitter, and from that information determine the location of the user relative to the receivers.
144. The method of claim 137 in which the tracking equipment on the user is selected from the group of tracking equipment consisting of a compass-type unit that determines the direction of magnetic north, which is used to determine the orientation of the display device relative to the stationary receivers/transmitters, tracking equipment on the user that has two receiver/transmitter units, which are used to determine the orientation of the display device relative to the stationary receivers/transmitters, and tracking equipment on the user that has a tilt sensor that senses tilt in two axes, thereby allowing the tracking technology to know roll and pitch of the user.
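For the distance-based variant of radio frequency tracking in claim 140 above, the user's position can be estimated by trilateration against three transmitters at known locations. The sketch below solves the linearized system for a 2D position; extending it to 3D, or to the receiver-side variant of claim 143, follows the same pattern. Coordinates and units are illustrative assumptions.

```python
import numpy as np

def trilaterate_2d(transmitters, distances):
    """Estimate the user's 2D position from measured ranges to three
    radio frequency transmitters at known positions."""
    (x1, y1), (x2, y2), (x3, y3) = transmitters
    d1, d2, d3 = distances
    A = np.array([
        [2.0 * (x2 - x1), 2.0 * (y2 - y1)],
        [2.0 * (x3 - x1), 2.0 * (y3 - y1)],
    ])
    b = np.array([
        d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
        d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2,
    ])
    return np.linalg.solve(A, b)  # (x, y) of the user
```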
145. The method of claim 112 in which the position of at least one user is shared with others.
146. The method of claim 145 in which the user can see a display of the positions of other users in the space.
147. The method of claim 145 in which the positions of a user are recorded.
148. The method of claim 147 in which the user can see a display of his/her path taken through the space.
149. The method of claim 147 in which a user can see a display of the paths of other users taken through the space.
150. The method of claim 112 in which the method is used in operations.
151. The method of claim 112 in which the method is used in training.
152. The method of claim 112 in which the user is selected from the group of users consisting of an emergency first responder, an outside observer, and an incident commander.
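Claims 145-149 above call for recording each user's positions and sharing them so that paths can be displayed to other users. A minimal bookkeeping sketch is shown below; how the records are shared between users (for example over wireless or wired connections) is left to the deployment, and the data layout is an assumption.

```python
from collections import defaultdict

class PathTracker:
    """Record each user's positions over time and expose them so any user
    can display his or her own path or the paths of others."""

    def __init__(self):
        self._paths = defaultdict(list)

    def record(self, user_id, position, timestamp):
        self._paths[user_id].append((timestamp, position))

    def path_of(self, user_id):
        return list(self._paths[user_id])

    def latest_positions(self):
        return {uid: points[-1][1] for uid, points in self._paths.items() if points}
```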
153. A method for utilizing smart card technology for storage of performance metrics and user information, comprising: providing a smart card; providing a smart card terminal; and reading from and writing user information to the smart card using the smart card terminal.
154. The method of claim 153 in which the metrics and information stored are used in an operations system.
155. The method of claim 153 in which the metrics and information stored are used in a training and simulation system.
156. The method of claim 155 in which the smart card contains the trainee's personal training profile.
157. The method of claim 155 in which the smart card contains the instructor's personal training profile.
158. The method of claim 155 in which the method is used in the training of firefighters/damage control personnel.
159. The method of claim 153 in which the smart card contains the user's personal performance profile.
160. The method of claim 155 in which the system uses virtual reality (VR).
161. The method of claim 155 in which the system uses augmented reality (AR).
162. The method of claim 153 further comprising providing a 2D graphical user interface.
163. The method of claim 153 further comprising providing a textual interface.
164. The method of claim 153 in which the smart card contains performance data for a team.
165. The method of claim 153 in which the smart card contains system parameters.
166. The method of claim 153 in which multiple smart cards are used.
167. The method of claim 153 in which the smart card is protected from being read using a personal identification number (PIN).
168. The method of claim 153 in which the smart card is protected from being read using a password.
169. The method of claim 153 in which the smart card is protected from being read using a cryptographic handshake.
170. The method of claim 153 further comprising one or more computers running the system, in which the smart card terminal is directly connected to at least one such computer.
171. The method of claim 170 further comprising a separate computer connected to at least one computer running the system, in which the smart card terminal is connected to the separate computer by networked communications.
172. The method of claim 154 in which the system uses virtual reality (VR).
173. The method of claim 154 in which the system uses augmented reality (AR).
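The smart card claims (153-173) describe reading and writing performance metrics and profiles through a terminal, optionally protected by a PIN, password, or cryptographic handshake. The sketch below is a hedged illustration only: the disclosure names no specific smart card API, so SmartCardTerminal and its dictionary-backed card storage are hypothetical stand-ins.

```python
import json

class SmartCardTerminal:
    """Hypothetical terminal wrapper around a card's storage; a real system
    would use an actual smart card reader and its own command set."""

    def __init__(self, card_storage):
        self._card = card_storage  # dict standing in for card memory

    def read_profile(self, pin):
        if pin != self._card.get("pin"):
            raise PermissionError("PIN rejected; profile is protected")
        return json.loads(self._card.get("profile", "{}"))

    def write_profile(self, pin, profile):
        if pin != self._card.get("pin"):
            raise PermissionError("PIN rejected; profile is protected")
        self._card["profile"] = json.dumps(profile)


# Example: store a trainee's performance metrics after a training run.
card = {"pin": "1234"}
terminal = SmartCardTerminal(card)
terminal.write_profile("1234", {"trainee": "J. Doe", "drill_time_s": 312})
print(terminal.read_profile("1234"))
```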
PCT/US2002/025282 2002-01-15 2002-08-09 Method and system to display both visible and invisible hazards and hazard information WO2003060830A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP02806425A EP1466300A1 (en) 2002-01-15 2002-08-09 Method and system to display both visible and invisible hazards and hazard information
CA002473713A CA2473713A1 (en) 2002-01-15 2002-08-09 Method and system to display both visible and invisible hazards and hazard information
AU2002366994A AU2002366994A1 (en) 2002-01-15 2002-08-09 Method and system to display both visible and invisible hazards and hazard information

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US34902802P 2002-01-15 2002-01-15
US34902902P 2002-01-15 2002-01-15
US34856802P 2002-01-15 2002-01-15
US60/349,029 2002-01-15
US60/348,568 2002-01-15
US60/349,028 2002-01-15
US10/192,195 2002-07-10
US10/192,195 US6903752B2 (en) 2001-07-16 2002-07-10 Method to view unseen atmospheric phenomenon using augmented reality

Publications (1)

Publication Number Publication Date
WO2003060830A1 true WO2003060830A1 (en) 2003-07-24

Family

ID=27497926

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/025282 WO2003060830A1 (en) 2002-01-15 2002-08-09 Method and system to display both visible and invisible hazards and hazard information

Country Status (4)

Country Link
EP (1) EP1466300A1 (en)
AU (1) AU2002366994A1 (en)
CA (1) CA2473713A1 (en)
WO (1) WO2003060830A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9508248B2 (en) 2014-12-12 2016-11-29 Motorola Solutions, Inc. Method and system for information management for an incident response

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5625765A (en) * 1993-09-03 1997-04-29 Criticom Corp. Vision systems including devices and methods for combining images for extended magnification schemes
US5751576A (en) * 1995-12-18 1998-05-12 Ag-Chem Equipment Co., Inc. Animated map display method for computer-controlled agricultural product application equipment
US6414696B1 (en) * 1996-06-12 2002-07-02 Geo Vector Corp. Graphical user interfaces for computer vision systems
US6064749A (en) * 1996-08-02 2000-05-16 Hirota; Gentaro Hybrid tracking for augmented reality using both camera motion detection and landmark tracking
US6356905B1 (en) * 1999-03-05 2002-03-12 Accenture Llp System, method and article of manufacture for mobile communication utilizing an interface support framework

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005011616A1 (en) * 2004-05-28 2005-12-29 Volkswagen Ag Mobile tracking unit
DE102005011616B4 (en) * 2004-05-28 2014-12-04 Volkswagen Ag Mobile tracking unit
US8687056B2 (en) 2007-07-18 2014-04-01 Elbit Systems Ltd. Aircraft landing assistance
WO2009010969A3 (en) * 2007-07-18 2009-04-30 Elbit Systems Ltd Aircraft landing assistance
US8816883B2 (en) 2007-09-04 2014-08-26 Modular Mining Systems, Inc. Method and system for GPS based navigation and hazard avoidance in a mining environment
US8095248B2 (en) 2007-09-04 2012-01-10 Modular Mining Systems, Inc. Method and system for GPS based navigation and hazard avoidance in a mining environment
WO2009125115A3 (en) * 2008-04-02 2009-12-03 Alcatel Lucent Device and method for managing accessibility to real or virtual objects in various places
WO2009125115A2 (en) * 2008-04-02 2009-10-15 Alcatel Lucent Device and method for managing accessibility to real or virtual objects in various places
FR2929732A1 (en) * 2008-04-02 2009-10-09 Alcatel Lucent Sas DEVICE AND METHOD FOR MANAGING ACCESSIBILITY TO REAL OR VIRTUAL OBJECTS IN DIFFERENT PLACES.
US8610771B2 (en) 2010-03-08 2013-12-17 Empire Technology Development Llc Broadband passive tracking for augmented reality
US9390503B2 (en) 2010-03-08 2016-07-12 Empire Technology Development Llc Broadband passive tracking for augmented reality
WO2014003981A1 (en) * 2012-06-29 2014-01-03 Intel Corporation Enhanced information delivery using a transparent display
US9646522B2 (en) 2012-06-29 2017-05-09 Intel Corporation Enhanced information delivery using a transparent display
US10215989B2 (en) 2012-12-19 2019-02-26 Lockheed Martin Corporation System, method and computer program product for real-time alignment of an augmented reality device
US9335545B2 (en) 2014-01-14 2016-05-10 Caterpillar Inc. Head mountable display system
KR101994898B1 (en) * 2014-08-12 2019-07-01 전자부품연구원 Flight path guiding method based on augmented reality using mobile terminal
KR20160020033A (en) * 2014-08-12 2016-02-23 전자부품연구원 Flight path guiding method based on augmented reality using mobile terminal
US10354350B2 (en) 2016-10-18 2019-07-16 Motorola Solutions, Inc. Method and system for information management for an incident response
US10970883B2 (en) 2017-06-20 2021-04-06 Augmenti As Augmented reality system and method of displaying an augmented reality image
CN108169761A (en) * 2018-01-18 2018-06-15 上海瀚莅电子科技有限公司 Scene of a fire task determines method, apparatus, system and computer readable storage medium
US10497161B1 (en) 2018-06-08 2019-12-03 Curious Company, LLC Information display by overlay on an object
US11282248B2 (en) 2018-06-08 2022-03-22 Curious Company, LLC Information display by overlay on an object
US10650600B2 (en) 2018-07-10 2020-05-12 Curious Company, LLC Virtual path display
US10818088B2 (en) 2018-07-10 2020-10-27 Curious Company, LLC Virtual barrier objects
US10636197B2 (en) 2018-09-06 2020-04-28 Curious Company, LLC Dynamic display of hidden information
US10636216B2 (en) 2018-09-06 2020-04-28 Curious Company, LLC Virtual manipulation of hidden objects
US10803668B2 (en) 2018-09-06 2020-10-13 Curious Company, LLC Controlling presentation of hidden information
US10861239B2 (en) 2018-09-06 2020-12-08 Curious Company, LLC Presentation of information associated with hidden objects
US11238666B2 (en) 2018-09-06 2022-02-01 Curious Company, LLC Display of an occluded object in a hybrid-reality system
US10902678B2 (en) 2018-09-06 2021-01-26 Curious Company, LLC Display of hidden information
US10991162B2 (en) 2018-12-04 2021-04-27 Curious Company, LLC Integrating a user of a head-mounted display into a process
US11055913B2 (en) 2018-12-04 2021-07-06 Curious Company, LLC Directional instructions in an hybrid reality system
US10970935B2 (en) 2018-12-21 2021-04-06 Curious Company, LLC Body pose message system
US10955674B2 (en) 2019-03-14 2021-03-23 Curious Company, LLC Energy-harvesting beacon device
US10901218B2 (en) 2019-03-14 2021-01-26 Curious Company, LLC Hybrid reality system including beacons
US10872584B2 (en) 2019-03-14 2020-12-22 Curious Company, LLC Providing positional information using beacon devices
US10832484B1 (en) 2019-05-09 2020-11-10 International Business Machines Corporation Virtual reality risk detection
CN114894253A (en) * 2022-05-18 2022-08-12 威海众合机电科技有限公司 Emergency visual sense intelligent enhancement method, system and equipment

Also Published As

Publication number Publication date
CA2473713A1 (en) 2003-07-24
AU2002366994A2 (en) 2003-07-30
AU2002366994A1 (en) 2003-07-30
EP1466300A1 (en) 2004-10-13

Similar Documents

Publication Publication Date Title
US20030210228A1 (en) Augmented reality situational awareness system and method
WO2003060830A1 (en) Method and system to display both visible and invisible hazards and hazard information
US11862042B2 (en) Augmented reality for vehicle operations
US6500008B1 (en) Augmented reality-based firefighter training system and method
US20020191004A1 (en) Method for visualization of hazards utilizing computer-generated three-dimensional representations
US11869388B2 (en) Augmented reality for vehicle operations
CA2456858A1 (en) Augmented reality-based firefighter training system and method
Butkiewicz Designing augmented reality marine navigation aids using virtual reality
EP2048640A2 (en) A method and an apparatus for controlling a simulated moving object
WO2022094279A1 (en) Augmented reality for vehicle operations
Rogers et al. Enhanced flight symbology for wide-field-of-view helmet-mounted displays
Rottermanner et al. Design and evaluation of a tool to support air traffic control with 2d and 3d visualizations
JP2019128370A (en) Undeveloped land simulation experience system
Mowafy et al. Visualizing spatial relationships: Training fighter pilots in a virtual environment debrief interface
Xiuwen et al. A prototype of marine search and rescue simulator
Aragon Usability evaluation of a flight-deck airflow hazard visualization system
Bagassi et al. Innovation in man machine interfaces: use of 3D conformal symbols in the design of future HUDs (Head Up Displays)
Wang A mobile augmented reality thunderstorm training technique to enhance aviation weather theory knowledge curricula
Brown Displays for air traffic control: 2D, 3D and VR - a preliminary investigation
RU2324982C2 (en) Air simulator
Aragon A prototype flight-deck airflow hazard visualization system
Schlager Design and development of an immersive collaborative geographical environment for tactical decision-making
WO2022235795A2 (en) Methods, systems, apparatuses, and devices for facilitating provisioning of a virtual experience
Myers Effects of visual representations of dynamic hazard worlds of human navigational performance

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2002806425

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2002366994

Country of ref document: AU

Ref document number: 2473713

Country of ref document: CA

WWP Wipo information: published in national office

Ref document number: 2002806425

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP