WO1997001245A1 - A three-dimensional (3D) video presentation system providing interactive 3D presentation with personalized audio responses for multiple viewers

Info

Publication number: WO1997001245A1
Authority: WO (WIPO (PCT))
Prior art keywords: receiving device, user, audio, dimensional video, presentation
Application number: PCT/US1996/010743
Other languages: French (fr)
Inventor: Michael J. Freeman
Original assignee: Actv, Inc.
Application filed by Actv, Inc.
Priority to AU63382/96A (AU6338296A)
Publication of WO1997001245A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/16 Analogue secrecy systems; Analogue subscription systems
    • H04N7/173 Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309 Transmission or handling of upstream communications
    • H04N7/17318 Direct or substantially direct transmission and handling of requests
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42201 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/40 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of platform network
    • A63F2300/409 Data transfer via television network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/398 Synchronisation thereof; Control thereof

Abstract

An interactive three-dimensional video presentation is disclosed that presents interactive choices to audience members in the form of three-dimensional objects; after each audience member reaches out at, points at, or looks at the 3D object of choice, the system responds by altering that member's audio and/or video feedback. Specifically, at several points during a 3D presentation, a character appearing on the screen (20) requests each user to select one of several possible 3D objects "floating" in front of each user. Selection of these objects is facilitated through user interface units (28), embodied in helmets (74), guns, or wands (84). Upon selection, the user interface unit (28) matches the user choice with a pre-recorded audio feedback response, presenting the user, through headphones (132), a distinct, individualized audio response, preferably lip synched with the video character's mouth movements.

Description

A THREE-DIMENSIONAL (3D) VIDEO PRESENTATION SYSTEM
PROVIDING INTERACTIVE 3D PRESENTATION WITH PERSONALIZED AUDIO RESPONSES FOR MULTIPLE VIEWERS
TECHNICAL FIELD
This invention pertains to a 3-D video presentation system capable of simultaneously providing personalized audio responses to a plurality of receivers.
BACKGROUND ART
Interactive video and audio presentation systems are currently being introduced into the entertainment and educational industries. A prominent interactive technology that has been applied successfully in these industries is based on providing interactivity in a one-way system through the provision of multiple parallel channels of information. For example, commonly owned
Freeman et al. patents, U.S. patent nos. 4,264,925 and 4,264,924, which provide both audio and video interactivity, disclose interactive television systems where switching among multiple broadcast or cable channels based on viewer selections provides an interactive capability.
These systems have been enhanced to include memory functions using computer logic and memory, where the selection of system responses played to the viewer is based on the processing and storage of subscriber responses, as disclosed in Freeman patent, U.S. patent no. 4,507,680.
The benefits of providing interactivity through the use of different audio responses are disclosed in Freeman, U.S. patent nos. 4,847,698, 4,847,699 and 4,847,700. These television systems provide a common video signal accompanied by several synchronized audio channels to provide content-related, user-selectable responses. The audio signals produce different audio responses, and in some cases these are syllable synched to a first audio script and to the video signal (such as to a person or character on a display), providing the perception that the person's or character's mouth movements match the spoken words.
Interactivity is brought to the classroom in Freeman U.S. patent application serial no. 08/228,355. The distance learning system claimed in this application enhances the classroom educational experience through an innovative use of interactive technology over transmission-independent media. When an instructor, either broadcast live on video or displayed from videotape, asks a question, each and every student responds, preferably by entering a response on a remote handset, and each student immediately receives a distinct and substantive audio response to his or her unique selection. The individualization of the audio response from the interactive program is a major aspect of the invention.
Individualization of audio is brought to the home based on the technology disclosed in Freeman U.S. patent application serial no. 08/289,499. This system provides a program that can be watched on any conventional television set or multimedia computer as a normal program. But if the viewer has a special interactive program box connected to the television, or a program board inserted into his or her multimedia personal computer, he or she can experience a fully functional interactive program. Each interactive viewer enjoys personalized audio responses and video graphics overlaid on the screen. The interactive program can be provided to television sets or to computers by cable, direct broadcast satellite, television broadcast or other transmission means, and can be analog or digital. Unlike previous interactive systems, this application covers a system that subtly introduces the interactive responses to the viewer throughout the program. This enhanced interactivity is provided through the use of "trigger points" spread throughout the program. Trigger points occur at designated times and result in the program content being altered to present individual attention to the particular viewer.

However, none of the previous interactive systems disclose interacting with a three-dimensional video presentation. Three-dimensional (3D) video imaging has been available for a long time, but developers have been unable to improve the quality of the image at an affordable cost to compete with the usual two-dimensional (2D) movies or presentations. Three-dimensional images may be created by several techniques, including stereoscopy, autostereoscopy, image polarization, and holographic techniques. Until now, 3D imaging techniques have never been used in an interactive system. Yet 3D imaging and interactive technology are both designed to achieve a similar purpose: personal interaction with the visual image. Three-dimensional images are designed to give viewers the perception that they may reach out and touch the image or that they are in the same room as the image. Interactivity is designed to make the user feel involved in the program. Consequently, what is needed is a combination of these technologies that creates a synergistic effect, enhancing the viewers' involvement in the presentation beyond the independent benefits of 3D imaging and interactivity.
In addition, the previous interactive systems usually require user interaction through some type of contact with a physical device such as a keypad, keyboard or remote control device. Such a selection mechanism does not give users the impression that they are physically interacting with the system. Therefore, what is needed is an interactive system that allows members of an audience to interact with the three-dimensional objects in a 3D interactive presentation; in other words, a low-cost system providing audio interactivity based on physical movements combined with the perception of full motion 3D video interactivity.
BRIEF SUMMARY OF INVENTION
The present invention, A THREE-DIMENSIONAL (3D) VIDEO PRESENTATION SYSTEM PROVIDING INTERACTIVE 3D PRESENTATION WITH PERSONALIZED AUDIO RESPONSES FOR MULTIPLE VIEWERS, provides a vastly enhanced three-dimensional video/audio presentation by allowing the viewer to actively participate in the program and by providing instantaneous audio feedback to the user. The interactive 3D video presentation provides actual interactive choices for selection, in the form of three-dimensional objects in space, to each member of the audience; after each member of the audience reaches out at or looks at the object of choice, the system responds and alters the audio feedback to that member.
The educational and entertainment value of almost any presentation is greatly enhanced if the viewer can physically interact with the presentation. In the present invention, the user's interaction with the presentation is maximized by combining three-dimensional (3D) video, physical interactivity and personalized audio feedback. By making viewers feel more involved, their interest, enjoyment, and/or learning are all increased. Consequently, the combination of three-dimensional video and interactive technologies in a novel manner creates a unique system allowing group presentations to appeal individually to each viewer.
With the current invention, a large number of viewers watch a common 3D video presentation on a large screen (e.g., in a movie theater), but each viewer can carry on a completely different dialogue with the interactive program. As a result, the presentation maintains each viewer's interest because it is customized to address his or her personal expectations and choices. Further, the feeling of personal involvement is enhanced by the 3D imaging. Three-dimensional (3D) imaging makes viewers believe they are in the same room as the 3D characters on the screen and even able to reach out and touch the 3D images.
Additionally, the present invention goes further than using simple keypad selections for user response interactivity. This invention allows the communication of interactive responses from the user based on their physical movements such as merely looking at the 3D character of choice, pointing at the 3D character with a pointing device, or reaching out and touching a 3D character, which provides a much more realistic form of user involvement. A variety of user interface embodiments can be used to implement this response capability based on physical movements, including, but not limited to, such selection means as pointers, specialized helmets, etc.
In the present invention, the users watch a 3D presentation that includes one or more animated characters or people. When a character or person prompts the user to respond to a question or instruction, one or more 3D answer options appear in front of all the users. These 3D answer options appear to be "floating" directly in front of and within arm's length of each user. The 3D answer options could be three characters, for example. As discussed above, each user then responds by either turning his or her head to look directly at the 3D answer option of choice, reaching out and attempting to touch the 3D character, or pointing a finger or device at the answer option of choice. Of course, there are several other ways in which the user can respond to the query or prompt. The users interact with the 3D interactive program with a user interface unit. Preferably, the user interface unit includes a selection means, a means for determining an appropriate audio response, and attached headphones. It also contains 3D glasses or a visor. Immediately upon the making of a choice, the selection means in each user's interface unit determines the choice made by the viewer. The interface unit processor, with the use of audio selection data codes, matches the choice with a distinct audio response selection, and either calls from memory or commands a branch to a distinct audio response. Each individual then receives a personalized audio feedback response through a set of headphones. These feedback responses are seamlessly communicated to each user and thus amount to minimally perceived alterations of the common audio presentation.
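For readers who find the flow easier to follow in code, the selection-to-feedback loop described above can be sketched as follows. This is an illustrative sketch only; the class and method names (UserInterfaceUnit, on_user_selection, and so on) are our own, and the patent describes hardware means rather than software.

```python
# Illustrative sketch of the per-user response flow described above.
# All names are hypothetical; the patent specifies hardware, not software.

class UserInterfaceUnit:
    def __init__(self, audio_selection_codes):
        # Maps (question_id, user_choice) -> audio feedback segment id.
        self.audio_selection_codes = audio_selection_codes
        self.selection_history = []  # remembered for later "trigger points"

    def on_user_selection(self, question_id, choice):
        """Called when the selection means detects a physical gesture."""
        self.selection_history.append((question_id, choice))
        segment_id = self.audio_selection_codes[(question_id, choice)]
        self.play_feedback(segment_id)

    def play_feedback(self, segment_id):
        # Either branch to an audio channel or retrieve a prestored segment.
        print(f"playing personalized audio segment {segment_id} over headphones")

# Example: three 3D answer options for question 1.
unit = UserInterfaceUnit({(1, "left"): 10, (1, "middle"): 11, (1, "right"): 12})
unit.on_user_selection(1, "left")
```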
To enhance the realism of the interactive experience, it is preferred that each of the distinct audio feedback responses be synchronized to the person or character presented on the video screen. Thus, the feedback audio responses are prerecorded in the narrator's or character's voice and made in such a way as to match their mouth movements. One possible technique for providing a realistic lip sync is through the syllable synching technique as disclosed in U.S. patent nos. 4,847,698, 4,847,699, and 4,847,700, herein incorporated by reference. As a result, regardless of which audio response is chosen, the video character's mouth moves as if the video was customized for the audio.
The 3D presentation system also has the advantage of remembering user selections and using these previous selections in choosing an audio feedback response to present to the user. This memory capability is facilitated through the use of logic and memory in conjunction with each user interface processor. This configuration allows for the storage of previous user selections and the processing of these selections to control future audio feedback.
Accordingly, a primary objective of the invention is an enhanced interactive 3D presentation system which combines the realism of 3D imaging, physical interactivity, personalized audio responses, synchronized video and audio, and seamless audio alterations.
It is an object of the invention to allow viewers to respond to on-screen character or person prompts by physically selecting a 3D answer option, such as by looking at or pointing at the 3D image of interest.
It is an object of the invention to remember earlier selections in order to use this information in formulating future audio responses.
These and other advantages, features, and objectives of the invention and the manner of attaining them will become apparent and the invention will be best understood by reference to the following description of the embodiments of the invention in conjunction with the accompanying drawings and appended claims.
BRIEF DESCRIPTION OF DRAWINGS
Figure 1 is a diagrammatic representation of the 3D video presentation system displaying various elements of the system.
Figure 2 is an electrical layout of the theater control unit.
Figure 3a is a drawing of one embodiment of the user interface unit.
Figure 3b is a drawing of an alternative user interface unit.
Figure 4 is a diagram of the circuitry of the preferred embodiment for the user interface unit.
DISCLOSURE OF INVENTION, BEST MODE FOR CARRYING OUT INVENTION, INDUSTRIAL APPLICABILITY, AND DETAILED DESCRIPTION
I. Introduction
The present invention is a 3D video presentation system that enables an interactive 3D video presentation to give actual interactive choices for selection, in the form of three-dimensional objects in space, to each member of the audience; after each member of the audience reaches out at or looks at the object of choice, the system responds and alters the audio feedback to that member. The 3D video imaging gives viewers the perception that they are in the same room with, and are able to physically reach out to, point at, or appear to touch, the 3D images or characters presented on the screen.
Specifically, a 3D video production is presented in a movie theater, for example, and at several points during the presentation a person or character requests each user to respond to a question by either directly looking at, pointing a device at, or reaching toward one of two or more answer options (depicted as 3D characters or objects). These 3D answer options appear to be directly in front of and within arm's length of each user. Immediately after the user selects an answer, the system subtly and seamlessly alters the audio to each individual user, thereby providing a completely personalized audio feedback response. Interactive realism is enhanced through the preferred lip synching of the audio feedback response to the video character's mouth movements. The lip synched audio feedback response heard by one user will often differ from the responses heard by other users. In this manner, personalized audio feedback is provided to each viewer. Currently, some interactive systems do include video and audio interactivity through the use of a keypad entry or other interface means; however, in none of these systems is the video three-dimensional, are the interactive selections made by physical gestures, or is a realistic and enhanced presentation provided by full synchronization between all the audio responses and the video.
The interactive presentation is of the type disclosed in copending U.S. patent application serial no. 08/228,355, filed April 15, 1994, copending U.S. patent application serial no. 08/289,499, filed August 12, 1994, and U.S. patent nos. 4,847,698, 4,847,699, 4,847,700, 4,264,924, 4,264,925, and RE 34,340, the contents of which are incorporated by reference.
II. System Components
A. General Overview
The interactive presentation system of the present invention uses any three-dimensional video imaging technique including, but not limited to, binocular stereoscopy, autostereoscopy, image polarization, or holographic techniques. In some cases, these techniques may be used in conjunction with laser-generated imagery. The present invention, however, is not dependent on any particular type of three-dimensional video technique and, therefore, works well with any of the currently known or foreseeable 3D video technologies. Likewise, the medium for storage of the 3D interactive production's images can be any suitable presentation medium including CD ROM, video disc or magnetic tape. Any of these mediums may be used in conjunction with film, light, computer and/or laser-generated imagery.
As shown in Figure 1, the system makes use of a video/audio storage device (40) (a CD ROM, VCR, film, or video disc player) connected to a theater control unit (36) (a program box or PC) to deliver the common audio and individualized audio feedback segments to the user interface units (28), and the 3D video portion of the program to 3D projection equipment (32), which subsequently projects the 3D video onto a screen (20) at the front of the theater. Alternatively, the common audio could be distributed to the theater sound system. The system includes, in a preferred embodiment, an infrared or radio transmitter (24) located at the front of the theater for providing a reference beacon signal that aids in the determination of the users' physical selections. The reference beacon signal is interpreted and processed by each user's interface unit (28), specifically the selection means, to indicate which of the 3D answer objects is selected by the particular user. Based on the determined user selection, the determination means of each user interface unit (28) processes this selection and, with the aid of unique audio selection codes, selects and presents to the particular user an individualized audio feedback response through a set of headphones (132) attached to the user interface unit (28), wherein the individualized audio feedback is preferably lip synched to the video character's or person's mouth movements.

While Figure 1 shows a preferred embodiment of the present invention in an interactive movie theater, the present invention is not constrained to such an environment. In alternative embodiments, a conventional TV broadcast, cable television, wireless network, or satellite system could be used as the source of the 3D interactive programming in accordance with the present invention. The 3D interactive presentation could be transmitted in either digital or analog formats. For example, the 3D interactive presentation program could be packaged as described in U.S. patent application serial no. 08/289,499, preferably with the alternative feedback audio segments inserted into the vertical blanking interval of the 3D video program, with appropriate reception equipment at the home or other reception center, as described below.
B. Production of the 3D Interactive Video Presentation
The 3D video presentation portion of the present invention may be created in many ways including, but not limited to, binocular stereoscopy, autostereoscopy, image polarization, or holographic techniques. If the 3D video presentation is designed using stereoscopy, two cameras are used to record the same image. These cameras are preferably positioned around 2.5 inches (6.3 cm) apart, one representing the left eye and the other representing the right.
The resulting images are simultaneously projected onto the screen by two synchronized projectors. The viewer sees these images as three-dimensional with the assistance of tinted or polarized glasses. The glasses make the left eye image visible only to the left eye and the right eye image visible only to the right eye. As a result, the viewer actually sees the images separately, but perceives them as three-dimensional because, for all practical purposes, the two slightly different images are fused together instantly by the mind. As an example of stereoscopy, the United Artists system uses interlocked 70 mm cameras and projectors to produce 3D images on a screen almost fifty feet wide. Another example is the 3D presentation system used for the Kodak Pavilion at the EPCOT Center at Disney World in Florida. This system uses interlocked 65 mm cameras and 70 mm projectors for projecting the 3D images onto a 54-by-22 foot screen.
Autostereoscopy or 3D spatial imaging can also be used to create the video portion of the 3D interactive program. Autostereoscopy uses the normal stereoscopy technique, but uses several 2D images, instead of only two images. Autostereoscopy requires several pictures from a series of angles to create more information about the images in the video. The more pictures and the smaller the difference in their angle of perspective, the higher the quality of the image.
Holograms are another method of 3D imaging that could be used with the present invention. In the past, most holograms were created by recording the complex diffraction pattern of laser light reflected from a physical object. More recent techniques are called computer generated holograms (CGH). These techniques are dependent on the fast processing speeds of computers because the complex diffraction patterns require a large number of computations with complex numbers.
Now that the various methods of producing 3D video have been briefly described, the presently preferred method of recording the 3D interactive program shall be described. The 3D presentation preferably will be designed to have a number of points therein where an interrogatory message or instruction is relayed from the actor to the members of the audience. At these points, it will be necessary to create a number of alternative audio feedback segments. These segments represent responses to be presented as a function of user selections. Preferably, the video portion of the presentation is recorded first along with the common audio and with the most likely (or correct choice) audio feedback segment. After the first recording of the program, the alternate audio feedback segments should be successively recorded by the actor on a separate appropriate audio track.
Preferably, the alternative audio feedback segments are synched (at certain times when the character is on the screen) to the mouth movements of the character or person in the common 3D video. Lip synching can be done with humans, but works best with animation, animal characters, computer or laser generated characters, etc. For example, if the viewer is told to reach out and point at a three dimensional floating boy image if they themselves are a boy, and they respond in this fashion, the on-screen character can look directly at them and say "you are a boy." The same would be true for girls in this case, except the alternative feedback audio segment replaces the word "boy" with "girl." However, lip synching is only necessary when the character is actually on the screen at the time the audio feedback responses are presented to the users.
This method of synching the audio segments to the common video is enabled by providing a similar syllable count for each of the alternative audio feedback segments as disclosed in U.S. patent nos. 4,847,698, 4,847,699 and 4,847,700. In order to do this, the actor who recorded the first choice in the original program should view the replayed video on a studio monitor, for example, and wear headphones (132) in order to hear what was originally recorded.
The actor then restates the same line but changes the words or phrases while watching the video. In the example stated above, both "girl" and "boy" are syllable synched to the visual mouth movements of the on-screen character. Since the spoken words "boy" and "girl" are both one-syllable words, the mouth movements of the video character will be synchronized with the spoken words. In this manner, several different audio feedback segments can be created corresponding to the interrogatory message. The producer should ensure that each of the n alternative audio segments is kept to nearly the same length in time.
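As a rough illustration of this production constraint, the following sketch checks that a set of alternative feedback lines share a syllable count and run to nearly the same length. The vowel-group syllable counter is a crude heuristic invented for the example; the patent specifies no such tool.

```python
# Hypothetical production-time check: alternative audio feedback segments
# should match in syllable count and run close to the same length.

def count_syllables(phrase):
    # Crude vowel-group heuristic, good enough for an illustration.
    total = 0
    for word in phrase.lower().split():
        groups, in_group = 0, False
        for ch in word:
            if ch in "aeiouy":
                if not in_group:
                    groups += 1
                in_group = True
            else:
                in_group = False
        total += max(groups, 1)
    return total

def segments_compatible(segments, max_length_skew_s=0.25):
    """segments: list of (spoken_text, duration_seconds) pairs."""
    syllables = {count_syllables(text) for text, _ in segments}
    durations = [dur for _, dur in segments]
    return len(syllables) == 1 and max(durations) - min(durations) <= max_length_skew_s

# "You are a boy" vs. "you are a girl": a one-syllable substitution keeps sync.
print(segments_compatible([("you are a boy", 1.2), ("you are a girl", 1.3)]))  # True
```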
With respect to the video presentation, and prior to the triggering of one of the audio feedback segments, the users will be instructed by a character or narrator to select among two or more answer choices. Preferably, each of these answer choices is provided visually to the user in three dimensions and is clearly separated in space from the others. For example, one 3D image would be on the left side of the screen and another 3D image would be on the right side of the screen. These 3D characters appear to be within "touching range" of each individual viewer, but in fact are just part of the 3D effect. This provides the illusion to each individual viewer that they can simply reach out and touch these "floating" characters. Therefore, as each interrogatory message or instruction is relayed from the actor to the members of the audience, one or more 3D objects, corresponding to the possible user selections, will simultaneously appear in the video presentation.
As explained below, the presentation of these 3D images is used to elicit a selection by the users, wherein each user physically selects one of the images by looking at, reaching toward, or pointing a pointing device at the 3D image. Timing and control between the 3D video, common audio and the multiple audio feedback segments are provided through the use of data codes. The codes comprise audio selection data commands, or branch codes, for branching from a common audio channel to the multiple audio feedback segments at the appropriate branch times. The codes also comprise timing signals (for controlling the interactive elements and the branching or switching between the various audio signals to ensure frame accuracy), data or text, and indicators designating the initiation and termination of interactive program segments. The data codes, in effect, time stamp the relevant audio segments at predetermined times on the video signal. The data codes are preferably prepared using authoring tools and a personal computer.
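The data codes described here can be modeled as time-stamped records, as in the sketch below. The record fields and the codes_due helper are illustrative assumptions; the patent does not define a byte-level code format.

```python
# Hypothetical model of the control data codes carried with the program.
from dataclasses import dataclass

@dataclass
class DataCode:
    time_s: float   # program time at which the code takes effect
    kind: str       # "branch", "segment_start", "segment_end", "trigger"
    payload: dict   # e.g. {"question_id": 3, "choice_to_channel": {...}}

codes = [
    DataCode(302.0, "segment_start", {"question_id": 3}),
    DataCode(305.5, "branch", {"question_id": 3,
                               "choice_to_channel": {"left": 1, "middle": 2, "right": 3}}),
    DataCode(312.0, "segment_end", {"question_id": 3}),
]

def codes_due(codes, now_s):
    """Return the codes a user interface unit should have acted on by time now_s."""
    return [c for c in codes if c.time_s <= now_s]

print([c.kind for c in codes_due(codes, 306.0)])  # ['segment_start', 'branch']
```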
The 3D presentation can also be created in such a manner that the audio feedback responses do not necessarily have to immediately follow the interrogatory question. Through the use of "trigger points," as described in copending U.S. patent application serial no. 08/289,499, filed August 12, 1994, interactive audio can show up at any time during the program as a function of stored user selection information. Trigger points provide designated times during the program when the program content is subtly altered to present individual and specialized attention to each subscriber. The trigger points are essentially markers in the program that effectively trigger macros, preferably stored in memory in each user interface unit (28), which call for a personalized audio segment to play over the user's headphones (132). The particular audio segment(s) chosen at each user interface unit (28) is based on previous user selections to questions preferably presented at the beginning of the show, or on a particular pattern of interactions solicited and entered during the program. Each user interface unit (28) recognizes the trigger points by decoding the data codes sent to each of the units. The exact timing of the trigger points is unknown to the subscriber in order to make the interactive elements appear unsolicited to the interactive user. Of course, the timing of the interactive events should correspond to suitable times in the program where branching to interactive elements is sensible and does not clash with the program content of the 3D video shown on the screen (20).
There are several different ways in which to integrate, store and/or transmit the 3D interactive presentation. The 3D interactive program video, common audio, multiple audio feedback responses and data codes are preferably recorded for later playback in the movie theater environment in whatever format is available, including tape, film, CD ROM or disk (in analog or digital form), including but not limited to: one inch, D1, D2, Betacam, Betacam SP, Hi-8, three quarter, S-VHS, or any other format. The video/audio storage device (40) can be a video tape player (VCR), CD ROM external storage device, film or video disc player, etc., depending upon the storage medium of the 3D interactive presentation program. Alternatively, the composite 3D interactive digital or analog program can be transmitted to remote reception sites using any suitable transmission means such as satellite, TV broadcast, fiber optic, cable television, etc. If the program is to be transmitted over some transmission means, it is envisioned that at least two 6 MHz NTSC standard video signals will be required to transmit the multiple images required for a 3D presentation. If analog transmission is used, preferably the common audio, multiple alternative audio feedback signals and data codes are inserted into the vertical blanking interval of one or more of the video signals using an interactive program inserter, such as that shown in Figures 3 and 4 of copending U.S. patent application serial no. 08/228,355, filed April 15, 1994. In this embodiment, the different audio tracks are fed into an audio mixer which aligns the different audio tracks in time. An analog inserter, such as that shown in Figure 3 of U.S. patent application serial no. 08/228,355, attached to the output of the audio mixer, video source and computer, is then used to integrate the analog segments, 3D video and data codes.
Alternatively, the 3D video, multiple alternative audio feedback signals and data codes can be digitally encoded, compressed and time division multiplexed by the configuration shown in
Figure 4 of U.S. patent application serial no. 08/228,355, for example.
C. Theater Control Unit
The theater control unit (36), as shown in Figure 2, is connected to and receives the 3D interactive presentation from the video/audio storage device (40). As shown in Figure 2, the theater control unit (36) preferably comprises an extractor (44), a processor (52) with associated memory (48), and an interface (56) to the user interface units (28). In the preferred embodiment, the composite 3D program signal is read from the external storage device (40) and passed to the input of the extractor (44). The extractor (44) separates the composite presentation signal received from the storage device (40) into its three components: video, audio, and control data. If the composite 3D program signal is in analog format as described above, the analog 3D video signal(s), with the embedded audio and data codes either in the VBI or in "black frame times," is read into an extractor unit (44) such as that disclosed in Figure 6 of copending U.S. patent application serial no. 08/228,355. Alternatively, if the composite 3D program signal is in digital format, the digital composite signal is preferably read into the digital extractor circuitry shown in Figure 7 of copending U.S. patent application serial no. 08/228,355, where the signals are decoded, demultiplexed and decompressed. However, the extractor unit (44) is not limited to these devices and could be any conventional means for separating or receiving the program elements from any storage format. In another embodiment, if the audio, video and data codes are stored separately on CD ROM (or other storage device), the extractor (44) is merely a pass-through box that receives and forwards the audio, video and data. After these components are extracted, they are distributed as follows. The video signal(s) is distributed to the 3D projectors (32). The audio signal is preferably buffered and distributed to an interface unit (56) such as an ethernet interface, IR interface or an RF interface, depending on the transmission media used for communications between the theater control unit (36) and the user interface units (28). The control data codes are sent to the processor (52).
The theater control unit processor (52), as shown in Figure 2, processes the data from the extractor unit (44), stores and reads instructions from ROM/RAM memory (48), and transmits audio selection codes to the user interface units (28). The processor (52) is preferably a 80C188 based microcontroller, but can be any similar controller.
As mentioned above, an important function of the data codes is to align the appropriate audio responses in time and to provide branching commands and algorithms that assist in selecting a proper audio response. In the preferred embodiment, the control data is embedded in the visual portion of the movie itself (such as in the "black frame times" if the presentation is a movie, or during the vertical blanking interval if a standard NTSC video format is used). Preferably, the theater control unit (36) passes on the branching commands to each user interface unit (28) via the same communications path on which the alternative audio feedback signals are sent to the user interface units (28). Alternatively, a separate infrared signal or radio frequency channel can be added for the sole purpose of sending the data codes to the user interface units (28) from the theater control unit (36).
However, the video and audio timing synchronization may also be done in other ways. Another embodiment is to preprogram the control data into every user interface unit (28) and to send synch pulses from the theater control unit (36) to the user interface units (28). The synch pulses are used to ensure that the user interface units (28) remain in precise timing with the video presentation. The actual "data codes" (including the branching commands and algorithms) necessary for choosing between different audio feedback channels can be embedded in the user interface unit's EPROM or RAM memory (100) prior to presentation of the movie.
The theater control unit (36) connects with the user interface units (28) either by infrared (IR), by radio frequency (RF), or by cable in a daisy chain configuration. If IR communication is employed, the theater control interface (56) is any standard IR interface device. If connection is by cable, the audio feedback signals and data codes are sent to the user interface units (28) via an RS-485 T/R device. If connection is by an RF path, the audio feedback channels and the data codes are sent to an RF modulator, transmitter and antenna combination, connected in series and to the theater control unit (36). Preferably, the theater control unit (36) sends the user interface units (28) the set up commands and audio selection commands. In this embodiment, the user interface unit processors (96) actually read and interpret the codes for selecting the proper and appropriate audio feedback response.
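Since the patent leaves the link-layer details open, the sketch below assumes a trivial, invented frame layout simply to show how commands could be addressed to a particular user interface unit over any of these paths.

```python
# Hypothetical frame format for control-unit <-> interface-unit messages.
import struct

def pack_frame(unit_id, command, payload=b""):
    # unit_id: which user interface unit should act on the frame (0 = broadcast)
    # command: small integer opcode, e.g. 1 = audio selection codes follow
    return struct.pack(">HBH", unit_id, command, len(payload)) + payload

def unpack_frame(frame):
    unit_id, command, length = struct.unpack(">HBH", frame[:5])
    return unit_id, command, frame[5:5 + length]

frame = pack_frame(unit_id=42, command=1, payload=b"\x03")  # e.g. branch to channel 3
print(unpack_frame(frame))  # (42, 1, b'\x03')
```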
Although the theater control unit (36) has been described with respect to a movie theater environment, its components could be integrated into a set top converter box, a personal computer, or a separate interactive program box connected to a television set or monitor in the home.
D. User Interface Units
As mentioned above, several times during the 3D video interactive program, each member of the audience will be prompted by an on-screen character to select one of two or more "floating" 3D choices. Each choice appears to be within "touching range" of each individual viewer but is, in fact, just an illusion based on the 3D effect. The viewer then can "physically" interact with the 3D presentation via a selection means by selecting one of the "floating" 3D choices, either by looking at, pointing towards, or reaching for the "floating" 3D object of choice. Each of these types of physical gestures for selecting the object of choice can be implemented by one of the selection means embodiments described below. After the members of the audience respond, the determining means uses each choice as input to determine the immediate selection of an audio feedback response that subtly alters the common audio through branching to one of the multiple audio channels that comprise the 3D presentation soundtrack. Each individual receives, via the presentation means of his or her own user interface unit (28), only those audio feedback responses that are appropriate to the specific personal choice made by that particular viewer.
Generally, each of the preferred embodiments of the user interface unit (28) comprises a selecting means (comprising a means for interacting with the 3D presentation and a means for detecting the user choice), a means for determining the related audio feedback response, and a means for presenting the related audio response to each individual user.
In one preferred embodiment, as shown in Figure 3a, the user interface unit (28) comprises a helmet (74). This helmet (74) allows a viewer to make a selection by merely "looking" at a choice.
This is accomplished by allowing a detecting means (i.e., a processor (96) connected to detection circuitry (88), (92)), as shown in Figure 4 and preferably attached to the inner surface of the helmet (74), to determine the orientation of the user's head with respect to the movie screen (20) using the inputs from an interacting means (74),
(84). For example, assuming three 3D options, a viewer may simply look at the choice which appears to the left of the screen (20), at the choice in the middle of the screen (20), or at the choice at the right of the screen (20), and the microprocessor (96) in conjunction with the detection circuitry (i.e., the voltage detector (88) and A/D converter (92)) can compute the choice being made by the user, as explained below.
The helmet configuration, as shown in Figure 3a, has an interacting means comprising a partition (72) interposed between two antennas (76), (80) attached to the outer surface of the helmet (74). In addition, each user would preferably wear either 3D glasses or a 3D visor attached to the helmet (74) in order to view the three-dimensional presentation. The antenna on the left side of the helmet is designated the first receiving antenna (76), whereas the antenna on the right side of the helmet is designated the second receiving antenna (80). The partition (72) is attached to the middle of the helmet and runs from the front to the back, preferably spaced equally between the first and second receiving antennas (76), (80). The helmet interacting means functions with the assistance of a reference beacon signal, preferably emanating from the front of the theater. An infrared or RF transmitter (24), as shown in Figure 1, is preferably located at the front of the theater for providing a reference beacon signal that aids in the determination of the user's physical selections. The reference beacon signal is preferably an infrared or RF signal capable of being received by each of the user interface units (28).
The reception characteristics of the reference beacon signal at a user interface unit (28) is used to determine the orientation of the head of the respective individual with respect to the 3D "floating" options. Once the orientation of the head is determined with respect to the 3D floating options, the user's selected choice can be determined by the detecting means.
Specifically, the determination of whether or not the reference beacon signal is occluded by the partition (72), and thus not received by the first receiving antenna (76) or second receiving antenna (80), is used to determine the orientation of the head. If the first receiving antenna (76) is not receiving the reference beacon signal due to blockage by the partition (72), but the second receiving antenna (80) is receiving the reference beacon signal, the head orientation is determined to be facing to the left. On the other hand, if the second receiving antenna (80) is not receiving the reference beacon signal due to blockage by the partition (72), but the first receiving antenna (76) is receiving the reference beacon signal, the head orientation is determined to be facing to the right. Finally, if both the first and second receiving antennas (76), (80) are receiving the reference beacon signal, the head is determined to be facing straight ahead. Therefore, the partition (72) must be at least large enough to obstruct the first receiving device (76) from the transmitter (24) located above the 3D screen (20) if the viewer turns their head to the left, and to obstruct the second receiving device
(80) from the same transmitter (24) if the viewer turns their head to the right.
As shown in Figure 4, the detecting means circuit, preferably comprising the voltage detector (88), analog-to-digital (A/D) converter (92), and microprocessor (96), makes the determination of user head orientation after receiving inputs from the interacting means comprising the first and second receiving antennas (76), (80), and consequently determines which of the corresponding three 3D images has been selected by the user. A conventional voltage detector (88) is connected to the output ports of both receiving antennas (76), (80). The voltage detector (88) determines the voltage level of each of the signals received from the first and second receiving antennas (76), (80). If one of the receivers is obstructed from receiving the reference beacon signal by the partition (72), a very weak signal will be received and the voltage detector (88) will read a low signal strength voltage corresponding to the obstructed antenna. If, however, the receiver is not obstructed by the partition (72) and receives the reference beacon signal, the voltage detector (88) will indicate a high signal strength in volts. The voltage detector (88) forwards the measurements from each antenna to an A/D converter (92). The A/D converter (92) is preprogrammed with a reference voltage level to distinguish the signal strength levels considered to be high from those considered to be low. For example, assume that the reference voltage level is set at 2.5 volts. Any signal strength above 2.5 volts will be converted by the A/D converter (92) into a logical 1, and any input below 2.5 volts into a logical 0. As a result, the output of the A/D converter (92) is a digital representation of the signal received via each receiving device, based on whether or not that receiving device was obstructed.
These digital representations are sent to the processor (96). The processor (96) is preferably a 80C451 based microcontroller, but can be any similar processor. Based on the digital representations, the processor (96) determines whether the user was looking left, right, or straight ahead.
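The thresholding and orientation logic just described reduces to a small decision table, sketched below using the 2.5-volt reference assumed in the example above. The function names are ours, not the patent's.

```python
# Sketch of the head-orientation decision from the two antenna voltages.
REFERENCE_VOLTS = 2.5  # assumed A/D threshold separating "received" from "occluded"

def to_bit(volts):
    # Mimics the A/D converter: 1 = beacon received, 0 = antenna occluded.
    return 1 if volts > REFERENCE_VOLTS else 0

def head_orientation(left_antenna_volts, right_antenna_volts):
    left, right = to_bit(left_antenna_volts), to_bit(right_antenna_volts)
    if left and right:
        return "straight"   # neither antenna occluded by the partition (72)
    if not left and right:
        return "left"       # partition blocks the left (first) antenna (76)
    if left and not right:
        return "right"      # partition blocks the right (second) antenna (80)
    return "unknown"        # both occluded: no valid reading

print(head_orientation(0.4, 4.1))  # 'left'
```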
Based on the determined user selection, the determining means of the user interface unit (28) (comprising the processor (96) and switch (124)) processes this selection and, with the aid of unique audio selection codes preferably received from the theater control unit (36), selects and presents to the particular user an individualized audio feedback response. In the preferred embodiment, branching between audio channels is performed in the user interface unit (28). A branching algorithm, via the audio selection codes, is downloaded from the theater control unit (36) to each user interface unit (28). The user interface unit (28) reads the algorithmic codes and stores these codes in RAM memory (100). The processor (96) uses the algorithm to determine the proper audio channel to switch to as a function of the most current and/or previous user selections. The circuitry in Figure 4 is preferably enclosed in a small box attached to the outer back surface of the helmet (74).

As shown in Figure 4, the audio selection means comprises the ethernet cable connector interface, IR or RF receiver (108) for receiving the multiple audio feedback channels, the 3 X 1 switch (124) and the microprocessor (96). If more than three audio channels are provided, then an n X 1 switch (124) is preferred, where n corresponds to the number of audio channels. The audio channels are amplified by buffer amps (112), (116), (120) which feed the 3 X 1 switch. The output of the switch (124) is the appropriate audio feedback response, which is input into an amplifier (128) with volume control and passed to the headphones (132), preferably attached to the inner surface of the helmet (74).

However, the audio feedback response segments can alternatively be stored in memory (100) at the user interface unit (28) or in memory (48) at the theater control unit (36). If the audio feedback segments are stored in memory (100) at the user interface unit (28), the selected audio response is simply retrieved from memory (100) and passed to the amplifier (128). If, however, all of the audio feedback responses are stored in memory (48) at the theater control unit (36), the user interface unit processor (96) sends a command (with an ID code identifying the selected audio segment) back to the theater control unit (36) through the transceiver (108) and communications means. Upon receipt of the select audio command, the theater control unit processor (52) calls from memory (48) the audio response identified by the command and passes the response to the transceiver (56), which transmits the response to the particular user interface unit (28) for play to the user. In a multiple user environment, with several audio response signals forwarded to different user interface units, a header or code is required on each signal to identify to the appropriate user interface unit (28) which signal to lock onto and receive. These signals can be combined at the theater control unit (36) according to any conventional multiple access scheme such as CDMA, FDMA or TDMA.
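The branching stage can be pictured as follows: a downloaded choice-to-channel table drives the n X 1 switch, with local memory and a request to the theater control unit (36) as fallback sources. This is a behavioral sketch under assumed names, not firmware for the 80C451.

```python
# Behavioral sketch of audio feedback selection in a user interface unit.

class AudioSelector:
    def __init__(self, branch_table, local_segments=None):
        self.branch_table = branch_table            # downloaded audio selection codes
        self.local_segments = local_segments or {}  # optional prestored audio

    def respond(self, question_id, choice):
        channel = self.branch_table.get((question_id, choice))
        if channel is not None:
            return f"switch n X 1 to channel {channel}"      # branch on the switch (124)
        segment = self.local_segments.get((question_id, choice))
        if segment is not None:
            return f"play segment {segment} from RAM (100)"  # local memory fallback
        return "request segment from theater control unit (36)"

selector = AudioSelector({(1, "left"): 1, (1, "right"): 3}, {(1, "middle"): "m1"})
print(selector.respond(1, "middle"))  # 'play segment m1 from RAM (100)'
```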
As the 3D interactive program progresses, at various points the members of the audience will be prompted by an on-screen character or person to choose one of two or more of the 3D "floating" images. As discussed above, the user may respond by looking at the 3D image of choice or, in alternative embodiments discussed below, by reaching out towards or pointing a pointing device at the image. The three audio feedback tracks, preferably recorded in the on-screen character's own voice and lip synched to the mouth movements of the on-screen character, are sent from the theater control unit (36) to each of the user interface units (28), and in each user interface unit (28) specifically to the 3 X 1 switch (124). Then, as a result of the user choice, as determined by the detection means described above, the microprocessor (96) interprets the algorithmic codes in conjunction with the user choice and either directs the 3 X 1 switch (124) to branch to the proper audio feedback channel or retrieves from memory (100) the appropriate audio segment(s). In this manner, each member of the audience hears a different and personalized response, in the on-screen character's own voice, via the headphones (132). Therefore, a large audience in a movie theater and/or single participants at home can view the same movie screen (20) or display monitor (20), but carry on their own unique dialogue with the three-dimensional interactive program.
In addition, or as part of the audio feedback response, audio sounds can be played to enhance the virtual reality effect of the 3D presentation. For example, after a user chooses a "fireball", a prerecorded sound of an explosion can be called from memory by the processor (96), or included as part of one of the audio channels, and be presented to the user to enhance the interactive effect.

In an alternative embodiment, instead of receiving the different audio feedback responses from the theater control unit
(36), the audio feedback responses for an entire movie presentation can be stored in RAM or EPROM memory (100), (104) at each user interface unit. In this embodiment, preferably the data codes are still being sent from the theater control unit (36) to each of the user interface units (28). However, the data codes could also be stored in memory (100), (104) at each of the user interface units (28). When the user is prompted by the on-screen character to make a selection, the selection means determines the choice made by the user. The data codes are read from memory (100), (104) and based on these codes and the user choice, the processor (96) sends a command to
RAM (100) and retrieves an appropriate stored audio feedback response. The audio feedback response is aligned with the video presentation, through use of the data codes, and is forwarded via the volume control amplifier (128) to the headphones (132) for presentation to the viewer.
In addition, all of the user choices made during a program by a user can be stored in the memory, preferably RAM (100), associated with the processor (96) in the user interface units (28). Later in the presentation (even after many other choices have been made), through the combination of memory and logic, the system remembers previous choices and at appropriate "trigger" points in the program, as signaled by the program data codes, the common audio associated with the 3D movie presentation is replaced with an appropriate audio response that relates to potentially several previous user selections. At the onset of a trigger point, the processor (96) will select one of several possible audio segments for presentation to the subscriber.
Each trigger point is identified preferably through the use of data codes sent from the theater control unit (36) to each of the user interface units (28). The codes preferably include, at a minimum, the following information: (1) a header identifying the occurrence of a trigger point; (2) the audio selection time; and (3) a cross-reference to the corresponding interrogatory message presented at the beginning of the show. The first bit in the sequence identifies to the processor (96) that a trigger point is about to occur. The second portion informs the processor (96) of the time to select from memory (100) the appropriate audio feedback response.
Upon receipt of the codes by the user interface unit (28), the processor (96) reads and interprets the codes and calls from memory (100),(104) one or more particular user selections designated by the trigger point codes. The user selections correspond to user selections to the series of interrogatory messages preferably presented at the beginning of the program. After obtaining the appropriate user selection(s), the processor (96) reads and performs the executable instructions using the user selections as inputs into the macro algorithm. The result of the algorithm is the selected audio feedback response. The audio feedback response can be called from memory (100) if it is prestored or the processor (96) can command the switch (124) to branch to the particular audio channel if the response is transmitted from the theater control unit (36). After the selected audio response is played to the user, the switch (124) branches back to the common audio.
The processor (96) commands the appropriate audio response to be forwarded to the volume control (128) and headphones (132). The description of "trigger points" is provided in U.S. patent application serial no. 08/289,499.
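Putting the three trigger-point fields and the stored selections together, the handling might look like the sketch below. The field layout follows the list above; the macro body and all names are invented for illustration.

```python
# Sketch of trigger-point handling in a user interface unit.
# Code fields follow the patent's list: (1) header, (2) audio selection time,
# (3) cross-reference to an earlier interrogatory; the macro body is invented.

selection_memory = {"q1": "lion", "q2": "shark"}  # earlier user selections

def handle_trigger(code, now_s):
    if not code.get("is_trigger"):           # (1) header
        return None
    if now_s < code["audio_time_s"]:         # (2) audio selection time
        return None
    earlier = selection_memory.get(code["question_ref"])  # (3) cross-reference
    # Hypothetical macro: map the remembered selection to an audio segment.
    return code["macro"].get(earlier, code["macro"]["default"])

trigger = {"is_trigger": True, "audio_time_s": 1800.0, "question_ref": "q1",
           "macro": {"lion": "seg_lion_recap", "default": "seg_generic"}}
print(handle_trigger(trigger, 1800.5))  # 'seg_lion_recap'
```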
Based on this memory feature of the present invention, if the user made eight choices during the movie, not only does the user receive audio feedback (as described above) each time a choice is made, but, in addition, the system can remember and utilize these choices to provide an instant recap at the end of the movie (e.g., "in this movie, you selected the lion, shark, and alligator; you must be interested in courageous and ferocious species"), as described in previous Freeman patents. Furthermore, the entire collection of user responses, or a subset thereof, can be stored in memory (100) and later transmitted (preferably by infrared or RF transmission) from each user interface unit (28) back to the theater control unit (36). These responses can be tabulated, printed and/or analyzed by the theater control unit (36) to determine either individual or classroom performance. Since the theater control unit (36) can be a modified personal computer, it can easily interface with a printer via a printer port for the output of such performance indicators. This embodiment is especially attractive in an educational environment. Alternatively, each user selection or compilation of user selections can be used by each user interface unit (28) to provide a visual indication of an individual's performance. For example, a light source could be attached to the helmet (74) and connected to a battery pack and the processor (96); when a user answers a question correctly, the processor (96) could send a signal to enable the light source to emit light. In this manner, every person in the audience could see which users answered correctly.
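The end-of-show recap and the upstream tabulation can be sketched together as follows; the recap wording echoes the example above, while the data shapes are our own illustrative choices.

```python
# Sketch of the memory feature: recap a user's stored choices at show end
# and tabulate all units' responses at the theater control unit (36).
from collections import Counter

def recap(selections):
    picks = ", ".join(selections)
    return f"In this movie, you selected the {picks}."

def tabulate(all_unit_selections):
    # all_unit_selections: {unit_id: [choice, ...]} gathered over IR/RF upstream
    counts = Counter()
    for choices in all_unit_selections.values():
        counts.update(choices)
    return counts

print(recap(["lion", "shark", "alligator"]))
print(tabulate({1: ["lion", "shark"], 2: ["lion"]}))  # Counter({'lion': 2, 'shark': 1})
```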
In another preferred embodiment, the user interface unit's selection means can be a pointing device such as a wand or gun (84), as shown in Figure 3b. Similar to the helmet (74), the gun or wand (84) comprises two antennas divided by a partition (72), one on each side. As an example, the user points the gun (84) at one of the three-dimensional objects. If the user points the gun (84) at the left image, positioned on the left side of the 3D screen (20), the first receiving antenna (76) is obstructed. If the user points the gun (84) at the right image, positioned on the right side of the 3D screen (20), the second receiving antenna (80) is obstructed. Alternatively, if the user points the gun at the image in the middle of the screen, neither of the receivers is occluded. Each of the antenna outputs feeds a box which preferably contains the identical selection means (88), (92), (96) and determining means circuitry (96), (124) shown in Figure 4. Therefore, the means for interacting with the 3D presentation and detecting the user choice, the means for determining an appropriate audio feedback response, and the means for presenting the related audio response to each individual user are similar to those described above with respect to the helmet configuration.
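The left/center/right decision made by the determining means circuitry (for either the helmet or the wand) might, for illustration, be expressed as follows; the signal-level threshold is an assumption of this sketch:

    def classify(left_level, right_level, threshold=0.5):
        """Map detected signal levels on the left/right antennas (76, 80)
        to the selected object position on the 3D screen."""
        left_sees = left_level > threshold      # antenna 76 receives the beacon
        right_sees = right_level > threshold    # antenna 80 receives the beacon
        if left_sees and right_sees:
            return "center"    # neither antenna occluded
        if right_sees:
            return "left"      # antenna 76 occluded by the partition
        if left_sees:
            return "right"     # antenna 80 occluded by the partition
        return "unknown"       # no beacon seen; treat as no selection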
Another handheld means for indicating user response is a keypad. The keypad technique allows the user to directly input their selections. The keypad preferably comprises a number of function keys for entering user choices, each preferably connected to the microprocessor (96) by a serial line. In this embodiment, a user makes a selection simply by depressing one of the function keys. The keypad implements essentially the same circuit as shown in Figure 4, except that the voltage detector (88) and A/D converter (92) circuitry are replaced with the direct entry keypad. Again, communications to and from the theater control unit (36) would preferably occur by Ethernet cable, infrared or RF transmission.
Alternatively, the user interface unit (28) may comprise a device with a photosensitive detector on one end. The photosensitive device is designed to sense the color or shade of the object on the 3D screen towards which the handheld unit is directed. In other words, the user points the handheld photosensitive device at their selection, and the device recognizes the color or shade of that particular selection. Of course, each of the projected 3D "floating" images must be differently colored or shaded in this embodiment. As an example, if the on-screen character asked the user to select between a large yellow giraffe and a large brown bear, the user would point the photosensitive handheld unit at one of these two images, and the unit would detect the color corresponding to the selection. More specifically, the photosensitive device receives the color and compares it to a color key which assigns it a value or symbol. This value is sent to the processor (96) to determine the user's selection. Another alternative embodiment is to provide each user with a device including handheld electrodes. These electrodes sense the user's physical movements in relation to the movie screen (20) and send these signals to a computer that analyzes the movements to determine the particular selection. As an example, the viewer raises their right arm and points to their selection. These movements are detected via the electrodes and communicated to a personal computer. The computer processes these movements and compares them to the movements necessary to make such a selection. After the user's selection is identified, the selection is sent to the microprocessor (96) to direct the presentation of an audio feedback response, as discussed above.
Alternatively, the eye movements of a user could be used to discern between different selections. In one embodiment, one or more infrared transmitters could be strategically located at the front of the theater. Infrared sensors would preferably be attached at the front of a helmet, with the sensors directed back toward one or both eyes. The direction of the user's gaze can be determined based on the reflection of the infrared signal off the retina of one or both eyes. For example, assume that two objects are displayed, one on the left side and one on the right side of the screen. The angle at which the reflected infrared signal arrives at the sensors will vary depending on whether the user looks at the left image or the right image. Using processing algorithms well known in the art, the user interface unit processor (96) can determine the direction of viewing and, hence, which object was selected by the user.
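A minimal sketch of the final classification step, assuming the processing reduces the reflected signal to a single angle of arrival (the dead-band value is an assumption):

    def gaze_direction(angle_deg, dead_band=5.0):
        """Classify gaze from the reflected infrared angle of arrival:
        negative angles toward the left image, positive toward the right."""
        if angle_deg < -dead_band:
            return "left"
        if angle_deg > dead_band:
            return "right"
        return "center"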
Instead of using a single transmitter (24) at the front of the theater, as shown in Figure 1 and described above, multiple transmitters and different signals can be utilized in the selection means. One example uses two transmitters that transmit signals with different frequencies, phases, or amplitudes, one located in the front of the room and the other in the back. The transmitter in the front of the theater sends a signal of frequency X, while the transmitter in the back sends a signal at a second frequency, Y. In this embodiment, the user may use the same helmet (74) or handheld device (84), shown in Figures 3a and 3b, for indicating user selections, but the detecting means is slightly different. The difference is that both the left and right receiving antennas (76), (80) always receive a signal from at least one of the transmitters. For example, if the user looks to the left, the first receiving antenna (76) is occluded from the transmitter at the front of the theater, but exposed to the transmitter in the back, and thereby receives the signal of frequency Y. In contrast, the second receiving antenna (80) is occluded from the transmitter in the back of the theater, but exposed to the transmitter in the front, and thereby receives the signal of frequency X. Similarly, if the user looks to the right, the second receiving antenna (80) receives a signal of frequency Y, and the first receiving antenna (76) receives a signal of frequency X. If the user looks straight ahead, both receiving antennas (76), (80) receive two signals of frequencies X and Y.
To determine the 3D object selected, the signal(s) received at each antenna are passed to one of two noncoherent detection receivers. Each receiver comprises one bandpass filter centered on frequency Y connected in series with an envelope detector, and another bandpass filter centered on frequency X connected in series with an envelope detector. The output of each filter/detector pair is passed to a conventional decision circuit, which determines which signals are received by each antenna. By converting the received outputs into binary 1s and 0s, a four-digit binary representation of the user's response is created and forwarded to the microprocessor (96). These four digits indicate to the microprocessor (96) whether the user looked left, right, or straight ahead.
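For illustration, the decode of the four-digit word might look as follows; the bit ordering (antenna 76 at frequency X, antenna 76 at frequency Y, antenna 80 at frequency X, antenna 80 at frequency Y) is an assumption, as the text does not specify it:

    DECODE = {
        (0, 1, 1, 0): "left",      # antenna 76 hears only Y, antenna 80 only X
        (1, 0, 0, 1): "right",     # antenna 76 hears only X, antenna 80 only Y
        (1, 1, 1, 1): "straight",  # both antennas hear both transmitters
    }

    def decode(bits):
        """bits: the four binary outputs of the filter/detector decision circuit."""
        return DECODE.get(tuple(bits), "invalid")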
While Figure 1 shows a preferred embodiment of the present invention in an interactive movie theater, the present invention is not constrained to such an environment. In alternative embodiments, a conventional TV broadcast, cable television, wireless network, or satellite system could be used as the source of the 3D interactive programming in accordance with the present invention. The 3D interactive presentation could be transmitted in either digital or analog formats.
For example, the 3D interactive presentation program could be packaged as described in U.S. application serial no. 08/289,499, preferably with the alternative feedback audio segments inserted into the vertical blanking interval of one of the 3D video signals if more than one signal is required or, if the signals are digitally encoded, compressed and time division multiplexed with the video and data, as described in the aforementioned application. In these embodiments, each home or other reception center includes: a conventional PC or interactive program box that implements the circuitry and functions of the theater control unit (36), but also includes one or more RF demodulators, error correctors, demultiplexers, extractors (for extracting the audio channels), and a switch (if branching occurs not in the user interface unit but in the PC or interactive program box), with these separate components preferably connected together in one of the embodiments shown in Figures 3-6 of U.S. application serial no. 08/289,499; an infrared transmitter, connected to the PC or interactive program box, that provides a reference beacon signal for aiding in the determination of the users' physical selections, as described above; one or more user interface units (28) that connect to the PC or interactive program box to allow for user selection; and projection equipment and a monitor, computer or advanced television capable of presenting the 3D video presentation.
Although the present invention has been described with respect to embodiments for delivering audio interactivity, it could also be used to provide video interactivity. In a 3D video arcade environment, for example, when a user reaches out to, or points a pointing device at, the 3D object selected using one of the selection means described above, the video object could be altered: it could change into a different object, become a different color, or begin to rotate.
In this interactive video embodiment, the user interface unit (28) determines the user selection, as described above, but this selection must now be transmitted back to the theater control unit (36). The theater control unit (36) preferably includes a video selector for branching from one video stream to another based on the user selection. In this embodiment, several 3D video streams, related in time and content to one another, are preferably sent from the video/audio storage device (40) to the theater control unit (36). These video streams could be in analog or digital format. The video/audio storage device (40) may be any conventional source capable of providing multiple synchronized 3D video segments, such as a multitrack tape, CD ROM, or a plurality of separate tapes or video discs whose operation is synchronized. The alternative video segments are preferably sent from the storage device (40) to the control unit (36), where they are forwarded to the input of the video selector.
Upon receipt of the user's selection from the user interface unit (28), the theater control unit processor (52) interprets the data codes and sends a command to the video selector to branch to the channel containing the appropriate video stream. If the video signals are digitally compressed and multiplexed together, the video selector is preferably a video demultiplexer/decompressor. Alternatively, if the video signals are analog and frequency multiplexed, the video selector is preferably a frequency demultiplexer. The selected 3D video stream is then forwarded from the control unit (36) to the video projection equipment for projection onto the screen (20). In this manner, a user may choose a 3D object such as a "fireball", for example, whereupon the image of the fireball may change to show an exploding fireball. Audio interactivity, as described above, can be combined and synchronized with the video interactivity through the use of the data codes to substantially enhance the realism of the 3D presentation.
Instead of receiving several video streams, the theater control unit (36) could simply call the selected video stream from storage. In this manner, the theater control unit processor (52) determines the desired video segment, reads the appropriate video segment from a storage device (e.g., CD ROM), buffers the signal to obtain time synchronization, and sends the 3D video segment to the projection means (32). Near the end of the selected video segment, the theater control unit (36) switches back to the common 3D video presentation.
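A sketch of this branching behavior, modeling each synchronized 3D video stream as a list of frames; all names and the frame-indexed trigger representation are assumptions of the sketch:

    def play_interactive(streams, selections, project):
        """streams: {name: list of frames}, all equal length and time-aligned.
        selections: {frame_index: stream_name} derived from user selections.
        project: callable sending one frame to the projection equipment."""
        name = "common"
        for i in range(len(streams["common"])):
            if i in selections:   # a data-code trigger point
                name = selections[i] if selections[i] in streams else "common"
            project(streams[name][i])   # all streams stay time-synchronized
            # A later trigger entry switches back to the common stream near
            # the end of the selected segment, as described above.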
As discussed above with respect to providing audio interactivity, the theater control unit memory (48) and processor (52) are preferably utilized for storing and processing prior user selections to control future video selection. The algorithm for controlling this selection is either embedded in the data codes or exists in software stored in memory (48).
Using the foregoing embodiments, methods and processes, the 3D video presentation system of the present invention maximizes the user's involvement and interactivity in a real time and low cost environment. Although the present invention has been described in detail with respect to certain embodiments and examples, variations and modifications exist which are within the scope of the present invention as defined in the following claims.

Claims

What is claimed is:
1. An interactive video system for presenting a three dimensional video presentation, comprising:
   a means for displaying (20, 32) a three dimensional video presentation, the presentation comprising a common audio and, at predetermined times, at least two selectable three dimensional video image options;
   a means for selecting (24, 28, 96) one of the three dimensional video image options;
   a means, connected to the selection means (24, 28, 96), for determining (96, 124) an appropriate audio feedback response, the audio feedback response chosen based on the selected three dimensional video image option; and
   a means, connected to the determination means (96, 124), for presenting (132) the appropriate audio feedback response to the viewer, wherein the appropriate audio feedback response results in audio which is personalized to the individual viewer.
2. The interactive video system of claim 1, wherein the selection means (24, 28, 96) comprises:
   a means for interacting (24, 72, 74, 76, 80) with the three dimensional video presentation, the interacting means (24, 72, 74, 76, 80) having an output, wherein the user is prompted to interact with the three dimensional video presentation and the user interactively responds by picking a three dimensional image option; and
   a means, receiving one or more signals from the output of the interacting means (24, 72, 74, 76, 80), for detecting (88, 92) the three dimensional image option pertaining to the user's interactive response.
3. The interactive video system of claim 2, wherein the interacting means (24, 72, 74, 76, 80) comprises:
   a transmitter (24), located at the front of the movie theater, for emitting a reference beacon signal; and
   a helmet (74), the helmet (74) having an inner surface and an outer surface, comprising:
      a partition (72), connected to the helmet (74), protruding from the center of the helmet (74) outer surface;
      a first receiving device (76), connected to the helmet (74), extending outward from the helmet (74) outer surface and to the left of the partition (72), wherein the first receiving device (76) produces a detectable output signal when the first receiving device (76) receives the reference beacon signal unobstructed from the transmitter (24), wherein the receipt of the reference beacon signal is a function of the viewer head orientation; and
      a second receiving device (80), connected to the helmet (74), extending outward from the helmet (74) outer surface and to the right of the partition (72), wherein the second receiving device (80) produces a detectable output signal when the second receiving device (80) receives the reference beacon signal unobstructed from the transmitter (24), and the second receiving device (80) presents no output signal when no reference beacon signal is received due to blockage of the reference beacon signal by the partition (72).
4. The interactive video system of claim 2, wherein the interacting means (24, 72, 74, 76, 80) comprises:
   a transmitter (24), located at the front of the movie theater, for emitting a reference beacon signal; and
   a pointing device (84), the pointing device (84) having a connecting surface, comprising:
      a partition (72), attached to the connecting surface, protruding from the center of the connecting surface;
      a first receiving device (76), connected to the connecting surface, extending outward from the connecting surface and to the left of the partition (72), wherein the first receiving device (76) produces a detectable output signal when the first receiving device (76) receives the reference beacon signal unobstructed from the transmitter (24), wherein the receipt of the reference beacon signal is a function of the direction of the pointing device (84); and
      a second receiving device (80), connected to the connecting surface, extending outward from the connecting surface and to the right of the partition (72), wherein the second receiving device (80) produces a detectable output signal when the second receiving device (80) receives the reference beacon signal unobstructed from the transmitter (24), and the second receiving device (80) presents no output signal when no reference beacon signal is received due to blockage of the reference beacon signal by the partition (72).
5. The interactive video system of claim 2, wherein the detecting means (88, 92) comprises:
   a means, receiving one or more signals from the output of the interacting means (24, 72, 74, 76, 80), for measuring (88) the voltage associated with the signals;
   a means, connected to the voltage measuring means (88), for converting (92) the voltages to digital values; and
   a means, connected to the converting means (92), for processing (96) the digital values to determine the selected three dimensional video image option.
6. An interactive theater video system for presenting a three dimensional video presentation, comprising:
   a means for displaying (20, 32) a three dimensional video presentation, comprising a common audio and, at predetermined times, at least one three dimensional video image option;
   a means for selecting (24, 28, 96) one of at least two selectable three dimensional video image options, comprising:
      a means for interacting (24, 72, 74, 76, 80) with the three dimensional video presentation, the interacting means (24, 72, 74, 76, 80) having an output, wherein a user is prompted to interact with the three dimensional video presentation and the user interactively responds by picking a three dimensional image option; and
      a means, connected to and receiving signals from the output of the interacting means (24, 72, 74, 76, 80), for detecting (88, 92) the three dimensional image option pertaining to the user's interactive response;
   a means, connected to the selection means (24, 28, 96), for determining (96, 124) an appropriate audio feedback response, the audio feedback response chosen based on the selected three dimensional video image option; and
   a means, connected to the determination means (96, 124), for presenting (132) the appropriate audio feedback response to the viewer, wherein the appropriate audio feedback response results in an alteration from the common audio which is personalized to the individual viewer.
7. The interactive video system of claim 3, wherein the selection means (24, 28, 96) comprises:
   a means, connected to the first and second receiving devices (76, 80), for sensing (28, 88, 92, 96, 124) the orientation of the head in relation to the selected three dimensional video image option, wherein head orientation is determined based on the output of the first and second receiving devices (76, 80), the head orientation indicated as being towards the left of the center of the theater when the second receiving device (80) presents the detectable output signal and the first receiving device (76) presents no output signal, the head orientation indicated as being directed towards the center of the theater when the first and second receiving devices (76, 80) present the detectable output signals, and the head orientation indicated as being to the right of the center of the theater when the first receiving device (76) presents the detectable output signal and the second receiving device (80) presents no output signal.
8. The interactive video system of claim 4, wherein the selection means (24, 28, 96) comprises:
   a means, connected to the first and second receiving devices (76, 80), for sensing (28, 88, 92, 96, 124) the orientation of the head in relation to the selected three dimensional video image option, wherein head orientation is determined based on the output of the first and second receiving devices (76, 80), the head orientation indicated as being towards the left of the center of the theater when the second receiving device (80) presents the detectable output signal and the first receiving device (76) presents no output signal, the head orientation indicated as being directed towards the center of the theater when the first and second receiving devices (76, 80) present the detectable output signals, and the head orientation indicated as being to the right of the center of the theater when the first receiving device (76) presents the detectable output signal and the second receiving device (80) presents no output signal.
9. A user interface unit (28) for responding to a three dimensional video presentation, the presentation comprising a common audio and, at predetermined times, at least two three dimensional video image options, comprising:
   a means for selecting (24, 28, 96) one of at least two three dimensional video image options, comprising:
      a means for interacting (24, 72, 74, 76, 80) with the three dimensional video presentation, the interacting means (24, 72, 74, 76, 80) having an output, wherein a user is prompted to interact with the three dimensional video presentation and a user interactively responds by picking a three dimensional image option; and
      a means, connected to and receiving signals from the output of the interacting means (24, 72, 74, 76, 80), for detecting (88, 92) the three dimensional image option pertaining to the user's interactive response;
   a means, connected to the selection means (24, 28, 96), for determining (96, 124) an appropriate audio feedback response, the audio feedback response chosen based on the selected three dimensional video image option; and
   a means, connected to the determination means (96, 124), for presenting (132) the appropriate audio feedback response to the viewer, wherein the appropriate audio feedback response results in an alteration from the common audio which is personalized to the individual viewer.
10. An interactive video system using three dimensional images in a movie theater type environment wherein individualized audio interactivity is provided in conjunction with presentations of the 3-D image, comprising:
   an infrared transmitter (24), located at the front of the movie theater, for transmitting an identification signal;
   a means, located at the front of the movie theater, for presenting (20, 32) one or more three dimensional images;
   at least one helmet (74) having an outer and inner surface, comprising:
      a partition (72), connected to the helmet (74), protruding from the center of the helmet (74) outer surface;
      a first receiving device (76), connected to the helmet (74), extending outward from the helmet (74) outer surface on the left side of the partition (72), wherein the first receiving device (76) presents an output signal when the first receiving device (76) receives the identification signal unobstructed from the infrared transmitter (24) and the first receiving device (76) presents no output signal when no identification signal is received due to blockage of the identification signal by the partition (72); and
      a second receiving device (80), connected to the helmet (74), extending outward from the helmet (74) outer surface on the right side of the partition (72), wherein the second receiving device (80) presents an output signal when the second receiving device (80) receives the identification signal unobstructed from the infrared transmitter (24) and the second receiving device (80) presents no output signal when no identification signal is received due to blockage of the identification signal by the partition (72);
   a means, connected to the first and second receiving devices (76, 80), for sensing (28, 88, 92, 96, 124) the orientation of the head in relation to the presentation means (20), wherein head orientation is determined based on the output of the first and second receiving devices (76, 80), the head orientation indicated as being towards the left of the center of the theater when the second receiving device (80) presents an output signal and the first receiving device (76) presents no output signal, the head orientation indicated as being directed towards the center of the theater when the first and second receiving devices (76, 80) present output signals, and the head orientation indicated as being to the right of the center of the theater when the first receiving device (76) presents an output signal and the second receiving device (80) presents no output signal;
   a means (76, 80) for receiving a plurality of audio channels;
   a means, connected to the receiving means (76, 80), for selecting (24, 28, 96) between the separate audio channels based on the direction of head orientation; and
   a means for presenting (132) the selected audio channel to the user.
11. An interactive video method for presenting a three dimensional video presentation, comprising:
   displaying a three dimensional video presentation, comprising a common audio and, at predetermined times, at least two selectable three dimensional video image options;
   selecting one of the three dimensional video image options;
   determining an appropriate audio feedback response, the audio feedback response chosen based on the selected three dimensional video image option; and
   presenting the appropriate audio feedback response to the viewer, wherein the appropriate audio feedback response results in audio which is personalized to the individual viewer.
PCT/US1996/010743 1995-06-22 1996-06-24 A three-dimensional (3d) video presentation system providing interactive 3d presentation with personalized audio responses for multiple viewers WO1997001245A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU63382/96A AU6338296A (en) 1995-06-22 1996-06-24 A three-dimensional (3d) video presentation system providing interactive 3d presentation with personalized audio responses for multiple viewers

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/493,845 1995-06-22
US08/493,845 US5682196A (en) 1995-06-22 1995-06-22 Three-dimensional (3D) video presentation system providing interactive 3D presentation with personalized audio responses for multiple viewers

Publications (1)

Publication Number Publication Date
WO1997001245A1 true WO1997001245A1 (en) 1997-01-09

Family

ID=23961932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/010743 WO1997001245A1 (en) 1995-06-22 1996-06-24 A three-dimensional (3d) video presentation system providing interactive 3d presentation with personalized audio responses for multiple viewers

Country Status (3)

Country Link
US (1) US5682196A (en)
AU (1) AU6338296A (en)
WO (1) WO1997001245A1 (en)

US8613666B2 (en) 2010-08-31 2013-12-24 Microsoft Corporation User selection and navigation based on looped motions
US20120058824A1 (en) 2010-09-07 2012-03-08 Microsoft Corporation Scalable real-time motion recognition
US8437506B2 (en) 2010-09-07 2013-05-07 Microsoft Corporation System for fast, probabilistic skeletal tracking
US8988508B2 (en) 2010-09-24 2015-03-24 Microsoft Technology Licensing, Llc Wide angle field of view active illumination imaging system
US8681255B2 (en) 2010-09-28 2014-03-25 Microsoft Corporation Integrated low power depth camera and projection device
US8548270B2 (en) 2010-10-04 2013-10-01 Microsoft Corporation Time-of-flight depth imaging
US9484065B2 (en) 2010-10-15 2016-11-01 Microsoft Technology Licensing, Llc Intelligent determination of replays based on event identification
US8592739B2 (en) 2010-11-02 2013-11-26 Microsoft Corporation Detection of configuration changes of an optical element in an illumination system
US8866889B2 (en) 2010-11-03 2014-10-21 Microsoft Corporation In-home depth camera calibration
US8667519B2 (en) 2010-11-12 2014-03-04 Microsoft Corporation Automatic passive and anonymous feedback system
US10726861B2 (en) 2010-11-15 2020-07-28 Microsoft Technology Licensing, Llc Semi-private communication in open environments
US9349040B2 (en) 2010-11-19 2016-05-24 Microsoft Technology Licensing, Llc Bi-modal depth-image analysis
US10234545B2 (en) 2010-12-01 2019-03-19 Microsoft Technology Licensing, Llc Light source module
US8553934B2 (en) 2010-12-08 2013-10-08 Microsoft Corporation Orienting the position of a sensor
US8618405B2 (en) 2010-12-09 2013-12-31 Microsoft Corp. Free-space gesture musical instrument digital interface (MIDI) controller
US8408706B2 (en) 2010-12-13 2013-04-02 Microsoft Corporation 3D gaze tracker
KR20120065774A (en) * 2010-12-13 2012-06-21 삼성전자주식회사 Audio providing apparatus, audio receiver and method for providing audio
US8884968B2 (en) 2010-12-15 2014-11-11 Microsoft Corporation Modeling an object from image data
US9171264B2 (en) 2010-12-15 2015-10-27 Microsoft Technology Licensing, Llc Parallel processing machine learning decision tree training
US8920241B2 (en) 2010-12-15 2014-12-30 Microsoft Corporation Gesture controlled persistent handles for interface guides
US8448056B2 (en) 2010-12-17 2013-05-21 Microsoft Corporation Validation analysis of human target
US8803952B2 (en) 2010-12-20 2014-08-12 Microsoft Corporation Plural detector time-of-flight depth mapping
US8385596B2 (en) 2010-12-21 2013-02-26 Microsoft Corporation First person shooter control with virtual skeleton
US9823339B2 (en) 2010-12-21 2017-11-21 Microsoft Technology Licensing, Llc Plural anode time-of-flight sensor
US9821224B2 (en) 2010-12-21 2017-11-21 Microsoft Technology Licensing, Llc Driving simulator control with virtual skeleton
US8994718B2 (en) 2010-12-21 2015-03-31 Microsoft Technology Licensing, Llc Skeletal control of three-dimensional virtual world
US9848106B2 (en) 2010-12-21 2017-12-19 Microsoft Technology Licensing, Llc Intelligent gameplay photo capture
US9123316B2 (en) 2010-12-27 2015-09-01 Microsoft Technology Licensing, Llc Interactive content creation
US8488888B2 (en) 2010-12-28 2013-07-16 Microsoft Corporation Classification of posture states
US8587583B2 (en) 2011-01-31 2013-11-19 Microsoft Corporation Three-dimensional environment reconstruction
US8401242B2 (en) 2011-01-31 2013-03-19 Microsoft Corporation Real-time camera tracking using depth maps
US8401225B2 (en) 2011-01-31 2013-03-19 Microsoft Corporation Moving object segmentation using depth images
US9247238B2 (en) 2011-01-31 2016-01-26 Microsoft Technology Licensing, Llc Reducing interference between multiple infra-red depth cameras
US8724887B2 (en) 2011-02-03 2014-05-13 Microsoft Corporation Environmental modifications to mitigate environmental factors
US8942917B2 (en) 2011-02-14 2015-01-27 Microsoft Corporation Change invariant scene recognition by an agent
US8497838B2 (en) 2011-02-16 2013-07-30 Microsoft Corporation Push actuation of interface controls
US9551914B2 (en) 2011-03-07 2017-01-24 Microsoft Technology Licensing, Llc Illuminator with refractive optical element
US9067136B2 (en) 2011-03-10 2015-06-30 Microsoft Technology Licensing, Llc Push personalization of interface controls
US8571263B2 (en) 2011-03-17 2013-10-29 Microsoft Corporation Predicting joint positions
US9470778B2 (en) 2011-03-29 2016-10-18 Microsoft Technology Licensing, Llc Learning from high quality depth measurements
US9298287B2 (en) 2011-03-31 2016-03-29 Microsoft Technology Licensing, Llc Combined activation for natural user interface systems
US9760566B2 (en) 2011-03-31 2017-09-12 Microsoft Technology Licensing, Llc Augmented conversational understanding agent to identify conversation context between two humans and taking an agent action thereof
US10642934B2 (en) 2011-03-31 2020-05-05 Microsoft Technology Licensing, Llc Augmented conversational understanding architecture
US9842168B2 (en) 2011-03-31 2017-12-12 Microsoft Technology Licensing, Llc Task driven user intents
US8824749B2 (en) 2011-04-05 2014-09-02 Microsoft Corporation Biometric recognition
US8503494B2 (en) 2011-04-05 2013-08-06 Microsoft Corporation Thermal management system
US8620113B2 (en) 2011-04-25 2013-12-31 Microsoft Corporation Laser diode modes
US8702507B2 (en) 2011-04-28 2014-04-22 Microsoft Corporation Manual and camera-based avatar control
US9259643B2 (en) 2011-04-28 2016-02-16 Microsoft Technology Licensing, Llc Control of separate computer game elements
US10671841B2 (en) 2011-05-02 2020-06-02 Microsoft Technology Licensing, Llc Attribute state classification
US8888331B2 (en) 2011-05-09 2014-11-18 Microsoft Corporation Low inductance light source module
US9137463B2 (en) 2011-05-12 2015-09-15 Microsoft Technology Licensing, Llc Adaptive high dynamic range camera
US9064006B2 (en) 2012-08-23 2015-06-23 Microsoft Technology Licensing, Llc Translating natural language utterances to keyword search queries
US8923686B2 (en) 2011-05-20 2014-12-30 Echostar Technologies L.L.C. Dynamically configurable 3D display
US8788973B2 (en) 2011-05-23 2014-07-22 Microsoft Corporation Three-dimensional gesture controlled avatar configuration interface
US8760395B2 (en) 2011-05-31 2014-06-24 Microsoft Corporation Gesture recognition techniques
US8526734B2 (en) 2011-06-01 2013-09-03 Microsoft Corporation Three-dimensional background removal for vision system
US9594430B2 (en) 2011-06-01 2017-03-14 Microsoft Technology Licensing, Llc Three-dimensional foreground selection for vision system
US9208571B2 (en) 2011-06-06 2015-12-08 Microsoft Technology Licensing, Llc Object digitization
US9724600B2 (en) 2011-06-06 2017-08-08 Microsoft Technology Licensing, Llc Controlling objects in a virtual environment
US9013489B2 (en) 2011-06-06 2015-04-21 Microsoft Technology Licensing, Llc Generation of avatar reflecting player appearance
US8929612B2 (en) 2011-06-06 2015-01-06 Microsoft Corporation System for recognizing an open or closed hand
US8597142B2 (en) 2011-06-06 2013-12-03 Microsoft Corporation Dynamic camera based practice mode
US9098110B2 (en) 2011-06-06 2015-08-04 Microsoft Technology Licensing, Llc Head rotation tracking from depth-based center of mass
US10796494B2 (en) 2011-06-06 2020-10-06 Microsoft Technology Licensing, Llc Adding attributes to virtual representations of real-world objects
US8897491B2 (en) 2011-06-06 2014-11-25 Microsoft Corporation System for finger recognition and tracking
US9597587B2 (en) 2011-06-08 2017-03-21 Microsoft Technology Licensing, Llc Locational node device
US8949901B2 (en) 2011-06-29 2015-02-03 Rovi Guides, Inc. Methods and systems for customizing viewing environment preferences in a viewing environment control application
US8786730B2 (en) 2011-08-18 2014-07-22 Microsoft Corporation Image exposure using exclusion regions
US9930128B2 (en) 2011-09-30 2018-03-27 Nokia Technologies Oy Method and apparatus for accessing a virtual object
US9557836B2 (en) 2011-11-01 2017-01-31 Microsoft Technology Licensing, Llc Depth image compression
US9117281B2 (en) 2011-11-02 2015-08-25 Microsoft Corporation Surface segmentation from RGB and depth images
US8854426B2 (en) 2011-11-07 2014-10-07 Microsoft Corporation Time-of-flight camera with guided light
WO2013072879A2 (en) * 2011-11-16 2013-05-23 Chandrasagaran Murugan A remote engagement system
US8724906B2 (en) 2011-11-18 2014-05-13 Microsoft Corporation Computing pose and/or shape of modifiable entities
US8509545B2 (en) 2011-11-29 2013-08-13 Microsoft Corporation Foreground subject detection
US8803800B2 (en) 2011-12-02 2014-08-12 Microsoft Corporation User interface control based on head orientation
US8635637B2 (en) 2011-12-02 2014-01-21 Microsoft Corporation User interface presenting an animated avatar performing a media reaction
US9100685B2 (en) 2011-12-09 2015-08-04 Microsoft Technology Licensing, Llc Determining audience state or interest using passive sensor data
US8879831B2 (en) 2011-12-15 2014-11-04 Microsoft Corporation Using high-level attributes to guide image processing
US8971612B2 (en) 2011-12-15 2015-03-03 Microsoft Corporation Learning image processing tasks from scene reconstructions
US8630457B2 (en) 2011-12-15 2014-01-14 Microsoft Corporation Problem states for pose tracking pipeline
US8811938B2 (en) 2011-12-16 2014-08-19 Microsoft Corporation Providing a user interface experience based on inferred vehicle state
US9342139B2 (en) 2011-12-19 2016-05-17 Microsoft Technology Licensing, Llc Pairing a computing device to a user
US9720089B2 (en) 2012-01-23 2017-08-01 Microsoft Technology Licensing, Llc 3D zoom imager
US8898687B2 (en) 2012-04-04 2014-11-25 Microsoft Corporation Controlling a media program based on a media reaction
US9210401B2 (en) 2012-05-03 2015-12-08 Microsoft Technology Licensing, Llc Projected visual cues for guiding physical movement
CA2775700C (en) 2012-05-04 2013-07-23 Microsoft Corporation Determining a future portion of a currently presented media program
EP2864961A4 (en) 2012-06-21 2016-03-23 Microsoft Technology Licensing, Llc Avatar construction using depth camera
US9836590B2 (en) 2012-06-22 2017-12-05 Microsoft Technology Licensing, Llc Enhanced accuracy of user presence status determination
US9696427B2 (en) 2012-08-14 2017-07-04 Microsoft Technology Licensing, Llc Wide angle depth detection
US8882310B2 (en) 2012-12-10 2014-11-11 Microsoft Corporation Laser die light source module with low inductance
US9857470B2 (en) 2012-12-28 2018-01-02 Microsoft Technology Licensing, Llc Using photometric stereo for 3D environment modeling
US9251590B2 (en) 2013-01-24 2016-02-02 Microsoft Technology Licensing, Llc Camera pose estimation for 3D reconstruction
US9052746B2 (en) 2013-02-15 2015-06-09 Microsoft Technology Licensing, Llc User center-of-mass and mass distribution extraction using depth images
US9940553B2 (en) 2013-02-22 2018-04-10 Microsoft Technology Licensing, Llc Camera/object pose from predicted coordinates
US9135516B2 (en) 2013-03-08 2015-09-15 Microsoft Technology Licensing, Llc User body angle, curvature and average extremity positions extraction using depth images
US9092657B2 (en) 2013-03-13 2015-07-28 Microsoft Technology Licensing, Llc Depth image processing
US9274606B2 (en) 2013-03-14 2016-03-01 Microsoft Technology Licensing, Llc NUI video conference controls
US9953213B2 (en) 2013-03-27 2018-04-24 Microsoft Technology Licensing, Llc Self discovery of autonomous NUI devices
US9442186B2 (en) 2013-05-13 2016-09-13 Microsoft Technology Licensing, Llc Interference reduction for TOF systems
US9711152B2 (en) 2013-07-31 2017-07-18 The Nielsen Company (Us), Llc Systems, apparatus and methods for encoding/decoding persistent universal media codes to encoded audio
US20150039321A1 (en) 2013-07-31 2015-02-05 Arbitron Inc. Apparatus, System and Method for Reading Codes From Digital Audio on a Processing Device
US9462253B2 (en) 2013-09-23 2016-10-04 Microsoft Technology Licensing, Llc Optical modules that reduce speckle contrast and diffraction artifacts
US9443310B2 (en) 2013-10-09 2016-09-13 Microsoft Technology Licensing, Llc Illumination modules that emit structured light
US9674563B2 (en) 2013-11-04 2017-06-06 Rovi Guides, Inc. Systems and methods for recommending content
US9769459B2 (en) 2013-11-12 2017-09-19 Microsoft Technology Licensing, Llc Power efficient laser diode driver circuit and method
US9508385B2 (en) 2013-11-21 2016-11-29 Microsoft Technology Licensing, Llc Audio-visual project generator
KR102201617B1 (en) * 2014-01-07 2021-01-12 삼성전자 주식회사 AV device and control method thereof
US9971491B2 (en) 2014-01-09 2018-05-15 Microsoft Technology Licensing, Llc Gesture library for natural user input
EP3102085A1 (en) 2014-02-06 2016-12-14 Dentsply Sirona Inc. Inspection of dental roots and the endodontic cavity space therein
US9747727B2 (en) 2014-03-11 2017-08-29 Amazon Technologies, Inc. Object customization and accessorization in video content
US10375434B2 (en) * 2014-03-11 2019-08-06 Amazon Technologies, Inc. Real-time rendering of targeted video content
US9892556B2 (en) 2014-03-11 2018-02-13 Amazon Technologies, Inc. Real-time exploration of video content
US9894405B2 (en) 2014-03-11 2018-02-13 Amazon Technologies, Inc. Object discovery and exploration in video content
US10939175B2 (en) 2014-03-11 2021-03-02 Amazon Technologies, Inc. Generating new video content from pre-recorded video
US9288521B2 (en) 2014-05-28 2016-03-15 Rovi Guides, Inc. Systems and methods for updating media asset data based on pause point in the media asset
US10092833B2 (en) 2014-06-27 2018-10-09 Amazon Technologies, Inc. Game session sharing
US9393486B2 (en) 2014-06-27 2016-07-19 Amazon Technologies, Inc. Character simulation and playback notification in game session replay
US9409083B2 (en) 2014-06-27 2016-08-09 Amazon Technologies, Inc. Spawning new timelines during game session replay
US10719192B1 (en) 2014-08-08 2020-07-21 Amazon Technologies, Inc. Client-generated content within a media universe
US10332311B2 (en) 2014-09-29 2019-06-25 Amazon Technologies, Inc. Virtual world generation engine
US10293260B1 (en) 2015-06-05 2019-05-21 Amazon Technologies, Inc. Player audio analysis in online gaming environments
US10300394B1 (en) 2015-06-05 2019-05-28 Amazon Technologies, Inc. Spectator audio analysis in online gaming environments
US11513658B1 (en) 2015-06-24 2022-11-29 Amazon Technologies, Inc. Custom query of a media universe database
US10970843B1 (en) 2015-06-24 2021-04-06 Amazon Technologies, Inc. Generating interactive content using a media universe database
US10864447B1 (en) 2015-06-29 2020-12-15 Amazon Technologies, Inc. Highlight presentation interface in a game spectating system
US10363488B1 (en) 2015-06-29 2019-07-30 Amazon Technologies, Inc. Determining highlights in a game spectating system
US10390064B2 (en) 2015-06-30 2019-08-20 Amazon Technologies, Inc. Participant rewards in a spectating system
US10376795B2 (en) 2015-06-30 2019-08-13 Amazon Technologies, Inc. Game effects from spectating community inputs
US10484439B2 (en) 2015-06-30 2019-11-19 Amazon Technologies, Inc. Spectating data service for a spectating system
US10632372B2 (en) 2015-06-30 2020-04-28 Amazon Technologies, Inc. Game content interface in a spectating system
US11071919B2 (en) 2015-06-30 2021-07-27 Amazon Technologies, Inc. Joining games from a spectating system
US10345897B2 (en) 2015-06-30 2019-07-09 Amazon Technologies, Inc. Spectator interactions with games in a spectating system
US11055552B2 (en) * 2016-01-12 2021-07-06 Disney Enterprises, Inc. Systems and methods for detecting light signatures and performing actions in response thereto
US10412280B2 (en) 2016-02-10 2019-09-10 Microsoft Technology Licensing, Llc Camera with light valve over sensor array
US10257932B2 (en) 2016-02-16 2019-04-09 Microsoft Technology Licensing, Llc Laser diode chip on printed circuit board
US10462452B2 (en) 2016-03-16 2019-10-29 Microsoft Technology Licensing, Llc Synchronizing active illumination cameras

Family Cites Families (168)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US32776A (en) * 1861-07-09 Machine for polishing stone
US34340A (en) * 1862-02-04 Improved washing-machine
US2612533A (en) * 1951-05-29 1952-09-30 Burgess Battery Co Primary cell
US2826828A (en) * 1951-08-22 1958-03-18 Hamilton Sanborn Variable difficulty devices
US2777901A (en) * 1951-11-07 1957-01-15 Leon E Dostert Binaural apparatus for teaching languages
US2908767A (en) * 1954-06-18 1959-10-13 McGraw-Edison Co Juke box and recordation-transfer machine therefor
US2921385A (en) * 1955-04-25 1960-01-19 Hamilton Sanborn Remote question-answer apparatus
US3008000A (en) * 1958-09-11 1961-11-07 Charles A Morchand Action-reaction television system
US3020360A (en) * 1959-01-29 1962-02-06 Gen Dynamics Corp Pronunciary
GB940092A (en) * 1961-06-23 1963-10-23 S. Smith & Sons Ltd Improvements in or relating to apparatus for sound reproduction
US3221098A (en) * 1962-08-15 1965-11-30 Eugene S Feldman Multiple lingual television in a multiplex broadcast system
US3263027A (en) * 1962-12-11 1966-07-26 Beltrami Aurelio Simultaneous bilateral televideophonic communication systems
BE652172A (en) * 1963-08-22
US3245157A (en) * 1963-10-04 1966-04-12 Westinghouse Electric Corp Audio visual teaching system
GB1070864A (en) * 1963-12-10 1967-06-07 Gabor Kornel Tolnai An arrangement in sound reproducing appliances having tapelike sound recording carriers, particularly for teaching purposes
US3255536A (en) * 1963-12-12 1966-06-14 Tutortape Lab Inc Selective programmed information receiving and responding system
US3284923A (en) * 1964-07-16 1966-11-15 Educational Res Associates Inc Teaching machine with programmed multiple track film
US3273260A (en) * 1964-10-06 1966-09-20 Tutortape Lab Inc Audio-visual communication systems and methods
US3387084A (en) * 1964-11-23 1968-06-04 McDonnell Douglas Corp Color television data display system
GB1147603A (en) * 1965-06-15 1969-04-02 Mullard Ltd Improvements in or relating to television transmission systems
US3366731A (en) * 1967-08-11 1968-01-30 Comm And Media Res Services In Television distribution system permitting program substitution for selected viewers
US3538621A (en) * 1967-11-16 1970-11-10 Wataru Mayeda Teaching apparatus
US3484950A (en) * 1968-06-12 1969-12-23 Educational Testing Service Teaching machine
BE792678Q (en) * 1968-06-20 1973-03-30 Koos Eugenia M EDUCATIONAL TELEVISION SYSTEM
FR1584571A (en) * 1968-06-28 1969-12-26
US3602582A (en) * 1968-09-11 1971-08-31 Ngo Torricelli Triptych cinematographic system
US3566482A (en) * 1968-10-24 1971-03-02 Data Plex Systems Educational device
US3575861A (en) * 1969-01-29 1971-04-20 Atlantic Richfield Co Mineral oil containing surface active agent
BE755561A (en) * 1969-09-09 1971-02-15 Sodeteg TEACHING MACHINERY IMPROVEMENTS INCLUDING AN IMAGE PROJECTOR
JPS505886B1 (en) * 1970-03-24 1975-03-08
CH514904A (en) * 1970-05-26 1971-10-31 Meier Hans Werner Teaching machine
US3708891A (en) * 1971-01-18 1973-01-09 Oregon Res Inst Spoken questionnaire method and apparatus
US3730980A (en) * 1971-05-24 1973-05-01 Television Communications Corp Electronic communication apparatus for selectively distributing supplementary private programming
US3725571A (en) * 1971-06-21 1973-04-03 Westinghouse Electric Corp Multiplex video transmission system
US3763577A (en) * 1972-01-26 1973-10-09 D Goodson Electronic teaching aid
US3814841A (en) * 1972-03-16 1974-06-04 Telebeam Corp Communication system with premises access monitoring
US3757225A (en) * 1972-03-16 1973-09-04 Telebeam Corp Communication system
US3988528A (en) * 1972-09-04 1976-10-26 Nippon Hoso Kyokai Signal transmission system for transmitting a plurality of information signals through a plurality of transmission channels
JPS5237896B2 (en) * 1972-09-04 1977-09-26
US3833760A (en) * 1973-02-27 1974-09-03 Ferranti Ltd Television systems
US3857999A (en) * 1973-05-25 1974-12-31 Westinghouse Electric Corp Converter for a line shared educational tv system
US3849594A (en) * 1973-05-25 1974-11-19 Westinghouse Electric Corp Multi-picture tv system with audio and coding channels
US3825674A (en) * 1973-05-25 1974-07-23 Westinghouse Electric Corp Educational tv branching system
US3916092A (en) * 1973-05-25 1975-10-28 Westinghouse Electric Corp Transmission system for audio and coding signals in educational tv
US3902007A (en) * 1973-06-26 1975-08-26 Westinghouse Electric Corp Audio and video plural source time division multiplex for an educational tv system
US4044380A (en) * 1973-12-17 1977-08-23 Westinghouse Electric Corporation Encoder and constructed answer system for television
US4040088A (en) * 1974-01-10 1977-08-02 Rca Corporation Adaptor for inter-relating an external audio input device with a standard television receiver, and an audio recording for use therewith
US3947972A (en) * 1974-03-20 1976-04-06 Freeman Michael J Real time conversational student response teaching apparatus
US4199781A (en) * 1974-08-20 1980-04-22 Dial-A-Channel, Inc. Program schedule displaying system
US3991266A (en) * 1974-09-03 1976-11-09 Sanders Associates, Inc. Dual image television
JPS51115718A (en) * 1975-02-24 1976-10-12 Pioneer Electronic Corp Bi-directional catv system
US4034990A (en) * 1975-05-02 1977-07-12 Sanders Associates, Inc. Interactive television gaming system
USRE32776E (en) 1976-06-23 1988-11-01 IDR, Incorporated Piggy back row grabbing system
US4078316A (en) * 1976-06-24 1978-03-14 Freeman Michael J Real time conversational toy
DE2807986A1 (en) * 1978-02-22 1979-08-30 Hertz Inst Heinrich SYSTEM FOR INTERACTIVE CABLE TV
US4264924A (en) * 1978-03-03 1981-04-28 Freeman Michael J Dedicated channel interactive cable television system
US4305131A (en) * 1979-02-05 1981-12-08 Best Robert M Dialog between TV movies and human viewers
US4569026A (en) * 1979-02-05 1986-02-04 Best Robert M TV Movies that talk back
US4333152A (en) * 1979-02-05 1982-06-01 Best Robert M TV Movies that talk back
FR2448821A1 (en) * 1979-02-12 1980-09-05 Telediffusion Fse METHOD AND SYSTEM FOR INTEGRATING COLOR TELEVISION IMAGES
US4264925A (en) * 1979-08-13 1981-04-28 Michael J. Freeman Interactive cable television system
JPS5647181A (en) * 1979-09-26 1981-04-28 Pioneer Electronic Corp Periodic electric-power-source turning-off device of terminal device of catv system
US4422105A (en) * 1979-10-11 1983-12-20 Video Education, Inc. Interactive system and method for the control of video playback devices
US4862268A (en) 1980-03-31 1989-08-29 General Instrument Corporation Addressable cable television control system with video format data transmission
JPS57500537A (en) * 1980-03-31 1982-03-25
US4361730A (en) * 1980-08-29 1982-11-30 Warner Amex Cable Communications Inc. Security terminal for use with two-way interactive cable system
US4331974A (en) * 1980-10-21 1982-05-25 Iri, Inc. Cable television with controlled signal substitution
US4381522A (en) * 1980-12-01 1983-04-26 Adams-Russell Co., Inc. Selective viewing
US4445137A (en) * 1981-09-11 1984-04-24 Machine Intelligence Corporation Data modifier apparatus and method for machine vision systems
US4965825A (en) 1981-11-03 1990-10-23 The Personalized Mass Media Corporation Signal processing apparatus and methods
US4694490A (en) * 1981-11-03 1987-09-15 Harvey John C Signal processing apparatus and methods
US4399329A (en) * 1981-11-25 1983-08-16 Rca Corporation Stereophonic bilingual signal processor
US4516156A (en) * 1982-03-15 1985-05-07 Satellite Business Systems Teleconferencing method and system
US4591248A (en) * 1982-04-23 1986-05-27 Freeman Michael J Dynamic audience responsive movie system
US4599611A (en) * 1982-06-02 1986-07-08 Digital Equipment Corporation Interactive computer-based information display system
US4507680A (en) * 1982-06-22 1985-03-26 Freeman Michael J One way interactive multisubscriber communication system
US4665431A (en) * 1982-06-24 1987-05-12 Cooper J Carl Apparatus and method for receiving audio signals transmitted as part of a television video signal
US4571640A (en) * 1982-11-01 1986-02-18 Sanders Associates, Inc. Video disc program branching system
US4555730A (en) * 1983-01-24 1985-11-26 Media Transference International Single channel split-sound receiver for use with television set
US4616261A (en) * 1983-05-04 1986-10-07 Stimutech, Inc. Method and apparatus for generating subliminal visual messages
JPS59226576A (en) * 1983-06-08 1984-12-19 Mitsubishi Electric Corp Printer of television receiver
US4566030A (en) * 1983-06-09 1986-01-21 Ctba Associates Television viewer data collection system
US4574305A (en) * 1983-08-11 1986-03-04 Tocum, Incorporated Remote hub television and security systems
US4530008A (en) * 1983-10-03 1985-07-16 Broadband Technologies, Inc. Secured communications system
US4768087A (en) * 1983-10-07 1988-08-30 National Information Utilities Corporation Education utility
US4602279A (en) * 1984-03-21 1986-07-22 Actv, Inc. Method for providing targeted profile interactive CATV displays
US4573072A (en) * 1984-03-21 1986-02-25 Actv Inc. Method for expanding interactive CATV displayable choices for a given channel capacity
US4839743A (en) 1984-08-01 1989-06-13 Worlds Of Wonder, Inc. Interactive video and audio controller
US4701896A (en) * 1984-08-20 1987-10-20 Resolution Research, Inc. Interactive plural head laser disc system
US4644515A (en) * 1984-11-20 1987-02-17 Resolution Research, Inc. Interactive multi-user laser disc system
US4941040A (en) 1985-04-29 1990-07-10 Cableshare, Inc. Cable television system selectively distributing pre-recorded video and audio messages
CA1284211C (en) * 1985-04-29 1991-05-14 Terrence Henry Pocock Cable television system selectively distributing pre-recorded video and audio messages
US5043891A (en) 1985-08-16 1991-08-27 Wang Laboratories, Inc. Document generation apparatus and methods
US4967368A (en) 1985-08-16 1990-10-30 Wang Laboratories, Inc. Expert system with knowledge base having term definition hierarchy
US4916633A (en) 1985-08-16 1990-04-10 Wang Laboratories, Inc. Expert system apparatus and methods
US5023707A (en) 1985-12-02 1991-06-11 Media Transference International, Ltd. System for combining multiple audio channels into the baseband video signal and the recovery of the audio channels therefrom
US4763317A (en) * 1985-12-13 1988-08-09 American Telephone And Telegraph Company, AT&T Bell Laboratories Digital communication network architecture for providing universal information services
US4647980A (en) * 1986-01-21 1987-03-03 Aviation Entertainment Corporation Aircraft passenger television system
US5227874A (en) 1986-03-10 1993-07-13 Kohorn H Von Method for measuring the effectiveness of stimuli on decisions of shoppers
US4926255A (en) 1986-03-10 1990-05-15 Kohorn H Von System for evaluation of response to broadcast transmissions
US5128752A (en) 1986-03-10 1992-07-07 Kohorn H Von System and method for generating and redeeming tokens
US4876592A (en) 1986-03-10 1989-10-24 Henry Von Kohorn System for merchandising and the evaluation of responses to broadcast transmissions
US5177604A (en) 1986-05-14 1993-01-05 Radio Telcom & Technology, Inc. Interactive television and data transmission system
US4750036A (en) * 1986-05-14 1988-06-07 Radio Telcom & Technology, Inc. Interactive television and data transmission system
US4733301A (en) * 1986-06-03 1988-03-22 Information Resources, Inc. Signal matching signal substitution
US4786967A (en) * 1986-08-20 1988-11-22 Smith Engineering Interactive video apparatus with audio and video branching
US4846693A (en) 1987-01-08 1989-07-11 Smith Engineering Video based instructional and entertainment system using animated figure
US4821101A (en) 1987-02-19 1989-04-11 Isix, Inc. Video system, method and apparatus
US4847690A (en) 1987-02-19 1989-07-11 Isix, Inc. Interleaved video system, method and apparatus
US4816905A (en) * 1987-04-30 1989-03-28 Gte Laboratories Incorporated & Gte Service Corporation Telecommunication system with video and audio frames
US4780758A (en) * 1987-04-30 1988-10-25 Gte Government Systems Corporation Telecommunication system with burst and continuous audio signals
US4780757A (en) * 1987-04-30 1988-10-25 Gte Government Systems Corporation Telecommunication system with frame selected continuous audio signals
US4847698A (en) 1987-07-16 1989-07-11 Actv, Inc. Interactive television system for providing full motion synched compatible audio/visual displays
US4847700A (en) 1987-07-16 1989-07-11 Actv, Inc. Interactive television system for providing full motion synched compatible audio/visual displays from transmitted television signals
US4847699A (en) 1987-07-16 1989-07-11 Actv, Inc. Method for providing an interactive full motion synched compatible audio/visual television display
US4777529A (en) * 1987-07-21 1988-10-11 R. M. Schultz & Associates, Inc. Auditory subliminal programming system
US4855827A (en) 1987-07-21 1989-08-08 Worlds Of Wonder, Inc. Method of providing identification, other digital data and multiple audio tracks in video systems
US4870591A (en) 1987-08-24 1989-09-26 International Business Machines Corp. System for ensuring device compatibility
US4785349A (en) * 1987-10-05 1988-11-15 Technology Inc. 64 Digital video decompression system
US4807031A (en) * 1987-10-20 1989-02-21 Interactive Systems, Incorporated Interactive video method and apparatus
USRE34340E (en) 1987-10-26 1993-08-10 Actv, Inc. Closed circuit television system having seamless interactive television programming and expandable user participation
US4918516A (en) 1987-10-26 1990-04-17 Actv, Inc. Closed circuit television system having seamless interactive television programming and expandable user participation
US4884974A (en) 1987-12-21 1989-12-05 View-Master Ideal Group, Inc. Interactive talking book and audio player assembly
US4894789A (en) 1988-02-22 1990-01-16 Yee Keen Y TV data capture device
US4918620A (en) 1988-06-16 1990-04-17 General Electric Company Expert system method and architecture
US4905094A (en) 1988-06-30 1990-02-27 Telaction Corporation System for audio/video presentation
JPH0243822A (en) 1988-08-03 1990-02-14 Toshiba Corp Television tuner
US5174759A (en) 1988-08-04 1992-12-29 Preston Frank S TV animation interactively controlled by the viewer through input above a book page
US4924303A (en) 1988-09-06 1990-05-08 Kenneth Dunlop Method and apparatus for providing interactive retrieval of TV still frame images and audio segments
JPH0286384A (en) 1988-09-22 1990-03-27 Pioneer Electronic Corp Moving picture information service system and head end device for the system
US5157491A (en) 1988-10-17 1992-10-20 Kassatly L Samuel A Method and apparatus for video broadcasting and teleconferencing
US4975771A (en) 1989-02-10 1990-12-04 Kassatly Salim A Method and apparatus for TV broadcasting
US4930019A (en) 1988-11-29 1990-05-29 Chi Wai Chu Multiple-user interactive audio/video apparatus with automatic response units
IL88661A (en) 1988-12-12 1991-12-12 A T Ltd Sa Toy for aiming and firing a radiation beam at a target
US4972328A (en) 1988-12-16 1990-11-20 Bull Hn Information Systems Inc. Interactive knowledge base end user interface driven maintenance and acquisition system
US5001554A (en) 1988-12-23 1991-03-19 Scientific-Atlanta, Inc. Terminal authorization method
US4987486A (en) 1988-12-23 1991-01-22 Scientific-Atlanta, Inc. Automatic interactive television terminal configuration
US4994908A (en) 1988-12-23 1991-02-19 Scientific-Atlanta, Inc. Interactive room status/time information system
US5077607A (en) 1988-12-23 1991-12-31 Scientific-Atlanta, Inc. Cable television transaction terminal
US4991011A (en) 1988-12-23 1991-02-05 Scientific-Atlanta, Inc. Interactive television terminal with programmable background audio or video
US5053883A (en) 1988-12-23 1991-10-01 Scientific-Atlanta, Inc. Terminal polling method
DE3901790A1 (en) 1989-01-21 1990-07-26 Gfk Gmbh METHOD FOR THE REMOTE-CONTROLLED REPLACEMENT OF A PARTICULAR PROGRAM PART OF A TELEVISION PROGRAM BY A SEPARATELY TRANSMITTED PROGRAM PART FOR SPECIFIC SELECTED RECEIVERS, AND HOUSEHOLD TERMINAL DEVICE THEREFOR
US5010500A (en) 1989-01-26 1991-04-23 Xerox Corporation Gesture-modified diagram for retrieval of image resembling diagram, with parts selectable for further interactive retrieval
US4989233A (en) 1989-04-11 1991-01-29 Evanston Enterprises, Inc. Systems for capturing telephonic mass responses
US4989234A (en) 1989-04-11 1991-01-29 Evanston Enterprises, Inc. Systems for capturing telephonic mass responses
US5014125A (en) 1989-05-05 1991-05-07 Cableshare, Inc. Television system for the interactive distribution of selectable video presentations
US4875096A (en) 1989-08-20 1989-10-17 Smith Engineering Encoding of audio and digital signals in a video signal
US5051822A (en) 1989-10-19 1991-09-24 Interactive Television Systems, Inc. Telephone access video game distribution center
US5181107A (en) 1989-10-19 1993-01-19 Interactive Television Systems, Inc. Telephone access information service distribution system
US5318450A (en) 1989-11-22 1994-06-07 Gte California Incorporated Multimedia distribution system for instructional materials
US5239617A (en) 1990-01-05 1993-08-24 International Business Machines Corporation Method and apparatus providing an intelligent help explanation paradigm paralleling computer user activity
US5176520A (en) 1990-04-17 1993-01-05 Hamilton Eric R Computer assisted instructional delivery system and method
CA2022302C (en) 1990-07-30 1995-02-28 Douglas J. Ballantyne Method and apparatus for distribution of movies
US5220420A (en) 1990-09-28 1993-06-15 Inteletext Systems, Inc. Interactive home information system for distributing compressed television programming
US5093718A (en) 1990-09-28 1992-03-03 Inteletext Systems, Inc. Interactive home information system
US5090708A (en) 1990-12-12 1992-02-25 Yonatan Gerlitz Non hand-held toy
US5132992A (en) 1991-01-07 1992-07-21 Paul Yurt Audio and video transmission and receiving system
US5236199A (en) 1991-06-13 1993-08-17 Thompson Jr John W Interactive media system and telecomputing method using telephone keypad signalling
US5340317A (en) 1991-07-09 1994-08-23 Freeman Michael J Real-time interactive conversational apparatus
US5210611A (en) 1991-08-12 1993-05-11 Keen Y. Yee Automatic tuning radio/TV using filtered seek
US5247347A (en) 1991-09-27 1993-09-21 Bell Atlantic Network Services, Inc. Pstn architecture for video-on-demand services
US5404393A (en) 1991-10-03 1995-04-04 Viscorp Method and apparatus for interactive television through use of menu windows
US5442389A (en) 1992-12-28 1995-08-15 At&T Corp. Program server for interactive television system
US5488411A (en) 1994-03-14 1996-01-30 Multimedia Systems Corporation Interactive system for a closed cable network
US5537141A (en) 1994-04-15 1996-07-16 Actv, Inc. Distance learning system providing individual television participation, audio responses and memory for every student
US5477263A (en) 1994-05-26 1995-12-19 Bell Atlantic Network Services, Inc. Method and apparatus for video on demand with fast forward, reverse and channel pause
US5526478A (en) 1994-06-30 1996-06-11 Silicon Graphics, Inc. Three dimensional model with three dimensional pointers and multimedia functions linked to the pointers
US5594935A (en) 1995-02-23 1997-01-14 Motorola, Inc. Interactive image display system of wide angle images comprising an accounting system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4445187A (en) * 1979-02-05 1984-04-24 Best Robert M Video games with voice dialog
US5239463A (en) * 1988-08-04 1993-08-24 Blair Preston E Method and apparatus for player interaction with animated characters and objects
US5405152A (en) * 1993-06-08 1995-04-11 The Walt Disney Company Method and apparatus for an interactive video game with physical feedback

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19921739A1 (en) * 1999-05-11 2000-11-16 Stefan Pforte Communication rooms for real and virtual communication partners having the same dimensions, the same blue color and the same design, each with a camera system that can film all possible positions in the room
US6623428B2 (en) 2001-10-11 2003-09-23 Eastman Kodak Company Digital image sequence display system and method
USRE48579E1 (en) 2002-04-15 2021-06-01 Media IP, Inc. Method and apparatus for internet-based interactive programming
US10856031B2 (en) 2003-04-15 2020-12-01 MediaIP, Inc. Method and apparatus for generating interactive programming in a communication network
US11076190B2 (en) 2003-04-15 2021-07-27 MediaIP, Inc. Method and apparatus for generating interactive programming in a communication network
US11477506B2 (en) 2003-04-15 2022-10-18 MediaIP, LLC Method and apparatus for generating interactive programming in a communication network
US11575955B2 (en) 2003-04-15 2023-02-07 MediaIP, LLC Providing interactive video on demand

Also Published As

Publication number Publication date
AU6338296A (en) 1997-01-22
US5682196A (en) 1997-10-28

Similar Documents

Publication Publication Date Title
US5682196A (en) Three-dimensional (3D) video presentation system providing interactive 3D presentation with personalized audio responses for multiple viewers
AU610887B2 (en) Interactive television system for providing full motion synched compatible audio/visual displays from transmitted television signals
US4847699A (en) Method for providing an interactive full motion synched compatible audio/visual television display
JP2854005B2 (en) Interactive television system having seamless interactive television programming and expandable user participation
US5423555A (en) Interactive television and video game system
EP0299829A2 (en) Interactive television system for providing full motion synched compatible audio/visual displays
EP0772481B1 (en) Apparatus and methods for controlling educational and amusement use of a television
JP4551371B2 (en) Interactive program generation system and interactive system and method for receiving composite interactive signals
US5795228A (en) Interactive computer-based entertainment system
AU686795B2 (en) A distance learning system providing individual television participation, audio responses, and memory for every student
US8873767B2 (en) Audio or audio/visual interactive entertainment system and switching device therefor
USRE34340E (en) Closed circuit television system having seamless interactive television programming and expandable user participation
US7867088B2 (en) Interactive game system using game data encoded within a video signal
US20070122786A1 (en) Video karaoke system
US9445034B2 (en) Method and apparatus for processing video image
JP2003501965A (en) Digital television receiver/decoder with interactive playback of a predetermined television program
KR20140084463A (en) Apparatus and method for displaying image of narrator information, and server for editing video data
JP2625847B2 (en) Information transmission equipment
JPH0843766A (en) Stereoscopic picture system, compound picture drawing system and device therefor
KR20000068852A (en) Video tape and apparatus for playing such a video tape
CA1100880A (en) Language information system
FR2541062A1 (en) Conversational system including a visual display device, a computer and a source of pre-recorded video-frequency information
JPH08332284A (en) Presenting device
CA2151638A1 (en) Interactive television system and method
MXPA96001376A (en) Simultaneous transmission of interactive signals with a conventional video signal

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA