|Publication date||17 Oct 1996|
|Filed||10 Apr 1996|
|Priority date||10 Apr 1995|
|Publication number||WO 1996/032816 A1 (PCT/US1996/004847)|
|Inventor||James A. Loftus|
INTEGRATED VIRTUAL SET PRODUCTION FACILITY WITH USER INTERFACE AND CENTRAL FAULT TOLERANT CONTROLLER
CROSS-REFERENCE TO RELATED APPLICATION
This application is related to U.S. Patent Application Serial No. , entitled HAND-HELD
CAMERA TRACKING FOR VIRTUAL SET VIDEO PRODUCTION SYSTEM, by inventors James A. Loftus, Ian G. Reid and Steven M. Cohen (Attorney Docket No. ELGG2020WSW/KJD), filed concurrently herewith and assigned to the assignee of the present application. The related application is incorporated herein by reference in its entirety.
BACKGROUND
1. Field of the Invention
The invention relates to real time video production systems, and more particularly, to real time video production systems utilizing computer-generated virtual sets.
2. Description of Related Art
In a conventional video production studio, an actor, announcer or other subject is located on a stage containing real objects, such as a desk, a chair, and so on. One or more video cameras are directed toward the stage, each having a respective viewpoint (x, y and z positions relative to the stage), orientation (pan and tilt angles relative to the stage) and other image parameters (zoom and focus, for example). Often the stage includes an area of backdrop painted with a particular shade of blue, known as a blue screen. When a blue screen is being used, the video output from the camera is provided to a "foreground" input of a video compositor or keyer, and a source of background video is provided to another input of the compositor. The compositor substitutes the background video information into the composite image wherever the foreground video information has the particular shade of the blue screen. Thus, to the viewer of the composite output, the subject appears to be located in front of the background image provided by the background video source. A common example of blue screen technology is the television weather report, in which a meteorologist stands in front of a blue screen and points out various features of satellite images displayed apparently behind the meteorologist. In fact, the background satellite images are combined with the actual live camera video only in the compositor.
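The substitution performed by the compositor can be sketched as follows. This is a minimal illustrative model, not the algorithm of any particular keyer: it assumes RGB pixels as tuples and an exact-match key, whereas real compositors such as the one described here use tolerance ranges and soft ("linear") keying.

```python
# Minimal sketch of blue-screen ("chroma key") compositing.
KEY_COLOR = (0, 0, 255)  # the particular shade of blue painted on the stage

def composite(foreground, background, key=KEY_COLOR):
    """Substitute background pixels wherever the foreground matches the key."""
    return [
        bg if fg == key else fg
        for fg, bg in zip(foreground, background)
    ]

# A 4-pixel scanline: talent pixels pass through, blue pixels are replaced.
fg = [(200, 180, 160), KEY_COLOR, KEY_COLOR, (190, 170, 150)]
bg = [(10, 20, 30)] * 4
out = composite(fg, bg)
```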
In a video production studio, live operation centers around a video console which includes a large number of pushbuttons and other activators which are used by an operator to cause a video switcher to select among a variety of video input feeds, to provide a variety of video output feeds. The input feeds can come from many different kinds of video sources, such as cameras, compositors, remote units, videotape players, and so on. The video outputs can be provided to different monitors for the operator to view or preview, to other video processing components in the studio, to various program feeds, and so on. Outputs of the switcher can also be connected back to other inputs of the switcher.
In one example arrangement, each of the camera feeds is provided to a respective input of the switcher, and one or more background feeds are provided to other inputs of the switcher. The switcher produces a foreground output, selectable among the camera inputs, and a background output, selectable among the background inputs. The foreground and background outputs are connected to respective inputs of a compositor, the output of which is connected back to yet another input of the switcher. A preview output of the switcher is connected to a video monitor viewable by the operator, who can select any of the camera inputs, any of the background inputs, or the compositor output, to be displayed on the video monitor. The switcher also has a program output to which the operator can select any of these same inputs.
The production operator is able to manage all of the various signal flows in real time, primarily because of the simple, single-button operation of most functions of the switching console. That is, if the switcher permits any of six possible inputs to be selected to a particular output, then the console may have six mutually exclusive buttons (referred to herein sometimes as "switch settings"). The operator simply presses the button corresponding to the input desired for routing to the output, and the switcher automatically disconnects the previously routed input and connects the newly selected input to the output. Such simplicity of operation enables real time video production, with program output which is seamlessly and accurately switched among various sources, all on cue.
Recently, blue screen technology has been enhanced through the use of computer generated three-dimensional images for background instead of ordinary two-dimensional video information. More specifically, a three-dimensional graphics rendering system is provided, in which a three-dimensional model (which in the context of video production is often referred to as a virtual set) is rendered onto a two-dimensional video image plane in real time, typically once each video frame. The graphics engine is made aware of the position and orientation of the cameras, as well as which camera is currently active, and it renders the model as it would be viewed from that particular camera perspective. When this rendered video information is composited with the video feed from the active camera, the foreground subject (also referred to herein as the "talent") appears to the viewer of the composite image to be located on the set represented by the computer model. In this manner, live video production on a wide variety of apparent "sets" can be accomplished using a single, simple, blue screen stage. The virtual set appears to be three-dimensional, because different surfaces of the model are visible, or are shaped or shaded differently in the two-dimensional image plane, depending on the position and orientation of the active camera. In some virtual set systems, even portions of the talent can be occluded by objects in the virtual set depending on the position and orientation of the active camera. Neither of these types of effects occurs with conventional, two-dimensional video background feeds.
Despite the tremendous promise of virtual set technology, a number of problems have arisen which have limited market acceptance of the technology. One problem arises because the graphics engines which have been used to render three-dimensional virtual sets are usually highly sophisticated computers or supercomputers. Such computers are typically operated by an experienced computer operator through a workstation-type video display terminal and a graphical user interface (GUI) environment. Makers of virtual set production equipment have not sought to simplify the user-operability of such computers, except possibly by attempting to simplify the graphical user environment.
Thus, in a conventional virtual set production facility, a change from one virtual set to another often requires keystrokes or mouse clicks on the computer workstation, as does activation of different types of animations. This type of operator control interface is foreign to many studio production operators, at best requiring operator retraining, and at worst precluding the use of the equipment altogether in the video production studio. Moreover, a dedicated workstation operator may be required in any event for a conventional virtual set facility merely because it is difficult for a single production operator to operate both the video switching console and the computer workstation simultaneously and on cue.
A second problem that has arisen in existing virtual set production equipment is that, in part because of the sophistication of the computer or supercomputer which is used to render the virtual sets, these computer systems can sometimes be prone to failure. When a failure occurs, it can either take the form of cessation of the video signal, in which case the viewer will typically be left looking at the talent on a plain blue screen stage, or can take the form of a lockup in the graphics rendering pipeline, in which case the viewer might see the set from an inappropriate perspective after a camera switch. Either of these situations is usually considered intolerable for real time video production. A need therefore exists for virtual set production technology which is tolerant of faults and minimizes disruption of the viewing experience when a fault occurs in the virtual set renderer.

SUMMARY OF THE INVENTION

According to the invention, roughly described, an integrated virtual set video production system, including one or more cameras, a three-dimensional virtual set renderer that renders virtual sets from the viewpoint of the cameras, and one or more compositors, is controlled by a video production operator using familiar pushbutton operations on a switch console. The switch console has one bank of pushbuttons which represent the different cameras, one bank of pushbuttons which represent different virtual sets to be rendered, and may also have a third bank of pushbuttons which represent various animations which can be activated within a virtual set. A system controller communicates all of the set, camera and animation change requests from the switch console to the virtual set renderer, and also controls all other switching which needs to be adjusted in the facility in response to such change requests (primarily camera change requests). Thus, despite the complexity of a virtual set production facility, it can be controlled during real time production by an operator using highly familiar pushbutton operations which are very similar to those used by the operator on a conventional video switching console.
In another aspect of the invention, a system controller detects faults in the operation of the virtual set renderer. When a fault is detected, the system controller automatically presents a substitute video feed to the background input of the compositor to thereby minimize any disruption of the viewing experience. In one embodiment, a frame store periodically captures the video output of the virtual set renderer, and if a fault is detected, the substitute feed is taken from the most recent frame stored by the frame store. If the facility includes more than one camera, and the virtual set renderer has rendered the set from the viewpoints of each camera, then separate frame stores can be used to capture the rendered sets from each viewpoint. In an embodiment of the invention, two mechanisms are used to detect a fault in the virtual set renderer: loss of video output signal from the virtual set renderer, and failure to receive a periodically transmitted "I'm alive" code from the virtual set renderer. The former represents a failure in the video interface of the renderer and the latter represents a failure in the graphics engine of the renderer.
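The two fault-detection mechanisms described above can be sketched as a watchdog that trips on either condition. The class, the timeout value, and the callable names are illustrative assumptions for this sketch, not details from the patent; the clock is injectable so the logic can be followed without real hardware.

```python
# Sketch of the two fault-detection mechanisms: loss of the renderer's video
# output, and failure to receive the periodic "I'm alive" code. The timeout
# value below is an assumed figure for illustration only.
import time

HEARTBEAT_TIMEOUT = 2.0  # seconds without "I'm alive" before declaring a fault

class RendererWatchdog:
    def __init__(self, now=time.monotonic):
        self.now = now
        self.last_heartbeat = now()
        self.video_present = True

    def heartbeat(self):
        """Called when an "I'm alive" code arrives on the serial link."""
        self.last_heartbeat = self.now()

    def video_detector(self, present):
        """Called when the video detector's GPI line changes state."""
        self.video_present = present

    def faulted(self):
        # Either mechanism tripping is treated as a renderer fault; on a
        # fault, the controller would switch the compositor's background
        # input to the frame store's most recently captured frame.
        timed_out = self.now() - self.last_heartbeat > HEARTBEAT_TIMEOUT
        return timed_out or not self.video_present
```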
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will be described with respect to particular embodiments thereof, and reference will be made to the drawings in which:
Fig. 1 is a symbolic block diagram of an integrated virtual set production facility according to the invention.
Fig. 2 is a block diagram of one of the fault handlers 118.
Fig. 3 is a symbolic diagram of the switch console 138 as seen by a production operator.
Fig. 4 is a flow chart of the operation of system controller 132 upon receipt of a camera change request from the switch console 138.
Fig. 5 is a flow chart illustrating the operation of the system controller 132 in response to a set change request.
Fig. 6 is a flow chart of the operation of the system controller 132 in response to an animation change request.
Fig. 7 is a symbolic diagram of a scene graph used in the graphics engine of the rendering system 110.
Figs. 8-11 are symbolic flow charts illustrating how the system controller 132 manages the fault tolerance of the facility of Fig. 1.
Fig. 12 is a flow chart illustrating the operation of the rendering system 110 with respect to fault tolerance.
Fig. 13 is a flow chart of a signal handler which can be used in a fault tolerance mechanism in the rendering system 110.
DETAILED DESCRIPTION
I. OVERALL SYSTEM STRUCTURE
Fig. 1 is a symbolic block diagram of an integrated virtual set production facility according to the invention. It comprises a blue screen stage 102, with talent 104 located thereon. The stage 102 is bare except for the blue screen background and floor, but it will be understood that some real objects can also be included on the stage with the talent. Two video cameras 106 and 108 are shown, although another embodiment may use only a single camera and yet another embodiment may use more than two cameras. The system assumes a predefined set of x, y and z axes relative to the stage 102, and each of the cameras 106 and 108 has an x, y and z position relative to these axes, as well as pan and tilt angles relative to these axes. Each of the cameras 106 and 108 also at any given time has a set of "image parameters", which include zoom and focus. The facility of Fig. 1 also includes a three-dimensional graphical rendering system 110, which may be an Onyx computer available from Silicon Graphics, Inc. (SGI), configured, for example, with one gigabyte (GByte) of random access memory (RAM), four GBytes of disk memory, four RM-5 raster managers, eight MIPS R4400 CPUs, a video interface module and ten serial ports. The 3-D rendering system 110 also includes a workstation console, but this is not shown in Fig. 1 because it is not important to an understanding of the invention. Note that other graphics rendering systems can be used instead of an Onyx computer, such as an ESIG-4000, by Evans & Sutherland, Salt Lake City, Utah; a Compuscene VII, by Martin-Marietta, Daytona Beach, Florida; or a Provision 200 VTR, by Division, Inc., Redwood City, California.
The three-dimensional graphics rendering system 110 communicates with components in the studio to obtain real world position, orientation and image parameter information. This information is communicated symbolically between the rendering system 110 and each of the cameras 106 and 108 via respective lines 112 and 114, but it will be understood that some of the communication may be between the rendering system 110 and other tracking devices in the studio. Examples of such devices are set forth in the above-incorporated related patent application.
The three-dimensional rendering system 110 includes a software process (referred to sometimes herein as a "graphics engine" or "graphics loop") which renders a model of a virtual set into a frame buffer, updating the rendering at a rate at least equal to the video frame rate (30 frames per second for NTSC video), in response to changes in the position, orientation and/or image parameters of an active camera or, as described more fully below, in response to a change in which camera is the active camera.
In one embodiment, the 3-D rendering system 110 renders more than one image during each frame time, representing either a single set as viewed from two or more different camera positions, and/or two or more different sets as viewed from one or more different camera positions. In the embodiment of Fig. 1, however, the rendering system 110 renders only one image at a time, only from the viewpoint of the active camera.
The three-dimensional rendering system 110 includes a video interface module 116 which produces broadcast quality video output from the images rendered by the graphics engine into a frame buffer. The video interface module 116 can be a Sirius video card, available from SGI, Mountain View, California. The Sirius video card is described in SGI, "Sirius Video Programming and Configuration Guide" (1994) , incorporated herein by reference.
The video output of the video interface module 116 is connected to the video input of each of two fault handlers 118 and 120. The fault handlers are described in more detail below, but it is sufficient to note at this point that absent a fault, the input video information is passed unchanged to the output of the respective fault handler.
The output of fault handler 118 is connected to the "background" input of a compositor 122, the "foreground" input of which is connected to receive the video information from the camera 106. Similarly, the output of fault handler 120 is connected to the background input of a second compositor 124, the foreground input of which is connected to receive the video information from camera 108. The compositors 122 and 124 can each be an Ultimatte-7, available from Ultimatte Corp., Chatsworth, California. The Ultimatte-7 is described in Ultimatte Corp., "Ultimatte-7 Digital 4:2:2:4 Operating Manual" (September 1, 1994), incorporated herein by reference. The outputs of the two compositors 122 and 124 are connected to respective inputs of a digital video switcher 126. Switcher 126 has a program output 128 which may be connected to broadcast equipment 130 for broadcasting the real time-produced video. The switcher 126 may be a Model 1000, from the Grass Valley Group (GVG), Grass Valley, California.
All of the different video signal paths in the facility can pass through switchers, if desired, although the switchers are not shown for simplicity of illustration. The video signal paths can also pass through other components as well (not shown) , but these are not important to an understanding of the invention.
In addition to the video signal path components just described, the facility of Fig. 1 also includes a system controller 132. The controller 132 includes a rack-mounted 80486DX2-based industrial quality IBM PC/AT-compatible personal computer (PC), equipped with four serial ports COM1-COM4 and several digital I/O boards 134. The digital I/O boards 134 can each be a model DI032B, available from Industrial Computer Source, San Diego, California. The DI032B is described in Industrial Computer Source, "Model DI032B Product Manual" (1994), incorporated herein by reference. The digital I/O board in system controller 132 enables the system controller to use video-production-system-standard "GPI signaling" to control the various video processing components in the facility. In GPI signaling, a GPI "output" is merely a pair of conductors which either are or are not connected together in the controller. A GPI "input" is merely a conductor which is or is not connected to ground by one of the external devices. In order to provide GPI outputs, the digital I/O board 134 includes 16 dip reed relays. Each relay can be individually energized (closed) by the system controller 132 by writing a logic 1 to the proper parallel port bit, and opened by writing a logic 0. To implement GPI inputs, the digital I/O board 134 includes 16 optically isolated input sensors having one terminal tied through a current-limiting resistor to a power supply voltage, the other terminal being the GPI input. The input ports are read-only, as viewed from the system controller 132, whereas the output ports can be both written to and read back as inputs.
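The relay-per-bit scheme just described can be sketched as a 16-bit output register: writing a 1 bit closes (energizes) the corresponding reed relay, connecting that GPI output pair, and writing a 0 opens it. This is an assumed software model for illustration; a real driver would write to the DI032B board's port address.

```python
# Hypothetical model of GPI output control via the digital I/O board's
# 16 dip reed relays, represented as bits of an output register.
class GpiOutputs:
    def __init__(self):
        self.register = 0  # all 16 relays open

    def assert_output(self, n):
        self.register |= (1 << n)   # write logic 1: close (energize) relay n

    def negate_output(self, n):
        self.register &= ~(1 << n)  # write logic 0: open relay n

    def is_asserted(self, n):
        # The output ports can be read back, as noted above.
        return bool(self.register & (1 << n))
```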
The system controller 132 runs two primary applications under Microsoft Windows, specifically an asset management program and a fault-tolerant system (FTS) management program. The system controller 132 includes a control console 136, such as a standard keyboard and monitor for a conventional PC, but the control console 136 is not important for an understanding of the invention. Rather, a production operator uses a switch console 138 to control the operation of the facility of Fig. 1. Switch console 138 has a plurality of momentary contact pushbuttons, which are provided as GPI inputs 140 to the system controller 132. The pushbuttons on the switch console 138 are lighted, and the lights are controlled by GPI outputs 142 from the system controller 132. The switch console 138 is described in more detail below.
The system controller 132 communicates with the 3-D rendering system 110 via an RS-232 serial communication link 144. The rendering system 110 also has an input which, when asserted, causes the rendering system 110 to re-boot itself. This input is controlled by a GPI signal line 146 output from the system controller 132. In addition to being provided to the fault handlers 118 and 120, the video output from the Sirius video card 116 in the rendering system 110 is also provided to an input of a digital video detector 148. In an embodiment, the digital video detector 148 comprises a D/A converter 150, which receives the digital video signal from the video interface module 116 and converts it to NTSC-compliant analog form, and an analog video detector 152, which detects the presence or absence of a video signal from the output of the D/A converter 150. All video signals in the facility of Fig. 1 other than that between the D/A converter 150 and the video detector 152 conform to the CCIR 601 serial digital video standard. The D/A converter 150 may be, for example, a Miranda SDM-100A 4:2:2 DAC, available from Miranda Technologies, Montreal, Canada, and the video detector 152 may be a model 5700, available from QSI Systems, Inc., Salem, New Hampshire. The analog video detector 152 is described in QSI Systems, "QSI Systems 5700 Service Manual", Manual No. 013X0081A, incorporated herein by reference. The analog video detector 152 detects absence of video by detecting the absence of a sync pulse. When the video detector 152 detects the absence of video, it activates a current sink output which is connected to the system controller 132 via a GPI signal line 154. The system controller 132 also has a number of GPI outputs represented on bus 156 which control the two fault handlers 118 and 120 as described below, and which also control the switcher 126 to switch between the compositor 122 output and the compositor 124 output.
Fig. 2 is a block diagram of one of the fault handlers 118.
The other fault handler 120 is identical. Referring to Fig. 2, the video input from the rendering system 110 is provided to the input of a distribution amplifier 202, which duplicates the video signal on two outputs 204 and 206. The distribution amplifier 202 may be a GVG model M9131 1x6 distribution amplifier. The M9131 has six outputs, but only two are used.
The output 204 of distribution amplifier 202 is connected to one input of a video switcher 208, the output of which constitutes the output of fault handler 118. Switcher 208 may be, for example, a GVG model 1000 switcher. Again, a number of inputs to the switcher are left unused. The output 206 from the distribution amplifier 202 is connected to the video input of a frame store 210, the output of which is connected to a second input of switcher 208. The frame store 210 has a GPI trigger input 212 which, when asserted, causes the frame store 210 to capture a video frame from the distribution amplifier 202 and update its output. The frame store 210 continuously outputs the most recently captured video frame until the capture of a new frame is triggered. The system controller 132 triggers the GPI input 212 via the GPI bus 156 periodically.
The switcher 208 also has a GPI input 214 which is operated by the system controller 132 via the GPI bus 156. The switcher selects between its two video inputs in response to the signal on GPI input line 214.
II. OVERALL SYSTEM OPERATION
Fig. 3 is a symbolic diagram of the switch console 138 as seen by a production operator. In the present embodiment, it includes a bank of eight momentary contact lighted pushbuttons 302, each corresponding to a different camera. In the facility of Fig. 1, only two of the pushbuttons 302a and 302b are used.
To the right of the camera pushbuttons 302 are four rows 304, 306, 308 and 310 of eight lighted pushbuttons each. Each of the pushbuttons in row 304 corresponds to a respective "virtual set", and each of the pushbuttons in rows 306 and 308 corresponds to a different animation which can be produced by the rendering system 110. The buttons in row 310 are available for future expansion.
As will be seen, when the production operator desires to switch from one camera to another, the operator need only press the button corresponding to the desired camera. The light under the button for the previously active camera is automatically turned off by the system controller 132, and the light under the pushbutton for the newly active camera is turned on by the system controller 132. The system controller 132 manages all switching and appropriate notification of the rendering system 110 in response to the operator's action.
Similarly, the currently active virtual set being rendered by the rendering system 110 is indicated on the console 138 by the light under a corresponding pushbutton in row 304. To change to a different set, the operator need merely press a different button in the row 304. The system controller 132 unlights the previously active set pushbutton, lights the newly active set pushbutton, and communicates with the rendering system 110 in order to cause it to begin rendering using the selected model.
The rendering system 110 can produce two kinds of animations: persistent animations, which continue until stopped, and self-terminating animations. In one embodiment, the rendering system 110 supports only persistent animations, whereas in another embodiment, the rendering system supports both kinds of animations. More than one animation can be active simultaneously. To begin an animation, the operator need only press the corresponding button in row 306 or 308. For persistent animations, the operator presses the corresponding button again to terminate the animation. The system controller 132 lights the buttons for each of the animations which are currently active, and communicates with the rendering system 110 via the serial bus 144 to start and stop the selected animations. In an embodiment which supports self- terminating animations, the rendering system 110 can notify the system controller 132 via the serial bus 144 that such an animation has terminated, and the system controller 132 will automatically turn off the light under the pushbutton corresponding to that animation.
It can be seen that despite the complexity of the virtual set production facility of Fig. 1, it is controlled during real time production by an operator using highly familiar pushbutton operations which are very similar to those used by the operator on a conventional video switching console.
III. SYSTEM OPERATION FOR ASSET MANAGEMENT
A. System Controller
Fig. 4 is a flow chart of the operation of system controller 132 upon receipt of a camera change request from the switch console 138. An operator asserts a camera change request merely by pressing the button corresponding to the newly desired camera.
In a step 402, the system controller 132 transmits a "camera N" code via the serial link 144 to the rendering system 110, where N is the number of the camera corresponding to the camera pushbutton pressed by the operator. The system controller 132 does not, however, immediately notify the switcher 126 to switch to the compositor for the newly desired camera. This is because the graphics rendering pipeline in the rendering system 110 can be lengthy, depending on the complexity of the virtual set being rendered. If the switcher 126 were to switch at the same time the rendering system 110 is notified of the camera change request, then the next few frames of the program output 128 would likely show the talent 104 as viewed from the viewpoint of the newly selected camera 106 or 108, but the virtual set as viewed from the viewpoint of the previously active camera.
Accordingly, in step 404, the system controller 132 waits a fixed time interval before commanding the switcher 126 to switch. The delay in step 404 is programmable in the range of about zero to about 2200 milliseconds, but it is presumed that the delay will be established once during configuration and will remain fixed for all camera change requests as long as the virtual set does not change.
The switcher 126 requires a GPI pulse on an input corresponding to the newly selected camera. After the delay in step 404, therefore, the system controller 132 asserts the GPI output for the newly selected camera in step 406, waits the pulse delay time in step 408, and then negates the GPI output for the newly selected camera in step 410. This causes the switcher 126 to select to the program output 128, the video output of the compositor 122 or 124 which corresponds to the newly selected camera 106 or 108. In step 412, the system controller 132 negates the GPI output for the light under the pushbutton on the switch console 138 for the previously active camera, and in step 414, activates the GPI output for the light under the pushbutton for the newly active camera. This completes the operation of system controller 132 in response to a camera change request. Note that in a different embodiment, the steps 412 and 414 to change the lights under the pushbuttons on the switch console 138 could be performed earlier in the flow chart of Fig. 4.
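The camera-change sequence of Fig. 4 can be sketched with the serial link, GPI outputs, and delays injected as callables, so the ordering of the steps is visible without hardware. The function name, command strings, and delay value are illustrative assumptions, not details from the patent.

```python
# Sketch of the Fig. 4 camera-change flow: notify the renderer first,
# wait out the rendering pipeline, pulse the switcher's GPI input, then
# update the console lights.
def handle_camera_change(n, prev, serial_send, gpi_pulse, set_light, delay,
                         switch_delay=0.1):
    serial_send("camera %d" % n)  # step 402: send "camera N" to the renderer
    delay(switch_delay)           # step 404: programmable pipeline delay
    gpi_pulse(n)                  # steps 406-410: assert, wait, negate GPI
    set_light(prev, False)        # step 412: unlight previous camera button
    set_light(n, True)            # step 414: light newly active camera button

log = []
handle_camera_change(
    2, prev=1,
    serial_send=lambda msg: log.append(("serial", msg)),
    gpi_pulse=lambda n: log.append(("pulse", n)),
    set_light=lambda n, on: log.append(("light", n, on)),
    delay=lambda s: log.append(("wait", s)),
)
```

Injecting the interfaces this way also mirrors the note above that the light updates (steps 412 and 414) could be reordered in a different embodiment.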
Fig. 5 is a flow chart illustrating the operation of the system controller 132 in response to a set change request. As previously mentioned, the operator requests a set change by pressing an appropriate pushbutton on the switch console 138, and this is sensed by the system controller 132 via the digital I/O board 134.
In step 502, the system controller 132 sends a "set XX" command to the rendering system 110 via the serial communication link 144. XX is the number of the desired virtual set. In step 504, it awaits a "set command acknowledge" from the rendering system 110 via the same link. In step 506, the system controller 132 negates the GPI output for the light under the pushbutton on the switch console 138 for the previously selected set, and in step 508, it asserts the GPI output for the light under the pushbutton for the set newly selected by the operator. This completes the operation of the system controller 132 in response to a set change request. As with the flow chart of Fig. 4, steps 506 and 508 in a different embodiment can take place earlier in the flow chart of Fig. 5.
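The set-change flow of Fig. 5 differs from a camera change in that the controller waits for an acknowledgement from the renderer rather than a fixed switcher delay. The sketch below uses injected callables and an assumed command format for illustration only.

```python
# Sketch of the Fig. 5 set-change flow: send "set XX", await the
# acknowledgement, then update the console lights.
def handle_set_change(xx, prev, serial_send, serial_recv, set_light):
    serial_send("set %02d" % xx)          # step 502
    ack = serial_recv()                   # step 504: await acknowledgement
    assert ack == "set command acknowledge"
    set_light(prev, False)                # step 506: unlight previous set
    set_light(xx, True)                   # step 508: light newly selected set

log = []
handle_set_change(
    5, prev=1,
    serial_send=lambda m: log.append(("tx", m)),
    serial_recv=lambda: "set command acknowledge",
    set_light=lambda n, on: log.append(("light", n, on)),
)
```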
Fig. 6 is a flow chart of the operation of the system controller 132 in response to an animation change request. As previously described, an operator requests the start of an animation by pressing the pushbutton on the switch console 138 corresponding to the desired animation. The operator requests termination of an animation by pressing the pushbutton again.
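Because a single momentary pushbutton both starts and stops a persistent animation, the controller must track which animations are active in order to toggle the button light (as in step 606 of Fig. 6). A minimal sketch of that toggle state, with assumed names, might look like this:

```python
# Sketch of press-to-start / press-again-to-stop animation state tracking.
class AnimationState:
    def __init__(self):
        self.active = set()  # animation numbers currently running

    def button_pressed(self, aa):
        """Toggle animation aa; return True if it is now running."""
        if aa in self.active:
            self.active.discard(aa)
            return False
        self.active.add(aa)
        return True
```

In an embodiment supporting self-terminating animations, the renderer's end-of-animation notification would simply call the same toggle-off path so the button light stays consistent.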
Accordingly, when an animation pushbutton is pressed, in step 602, the system controller 132 sends an "animation AA" command to the rendering system 110. AA is the number of the animation corresponding to the selected pushbutton. In step 604, the system controller 132 awaits an "animation command acknowledge" from the rendering system 110 via the same serial communication link 144. In step 606, the system controller 132 toggles the GPI output for the light under the selected animation pushbutton on the switch console 138. That is, if the GPI output was on, system controller 132 turns it off, and if the GPI output was off, then the system controller 132 turns it on. Step 606 could be performed earlier in the flow chart of Fig. 6. This completes the operation of the system controller 132 in response to an animation change request.

B. Rendering System
The three-dimensional rendering system 110 renders the virtual set as a three-dimensional model onto a two-dimensional image plane. As used herein, graphics rendering is the process of computing a two-dimensional image (or part of an image) from three-dimensional geometric forms. A renderer is a tool which performs graphics rendering. Some renderers are exclusively software, some are exclusively hardware, and some are implemented using a combination of both (e.g. software with hardware assist or acceleration). Renderers typically render scenes into a buffer which is subsequently output to the graphical output device, but it is possible for some renderers to write their two-dimensional output directly to the output device. A graphics rendering system (or subsystem), as used herein, refers to all of the levels of processing in the rendering system 110, from the application-level rendering loop down to the hardware of the system 110. The rendering loop in system 110 uses two primary graphics software packages, GL™ and Inventor™, both available from SGI. In alternative embodiments, other graphics software packages could be used.
GL is a renderer used primarily for interactive graphics. GL is described in "Graphics Library Programming Guide," Silicon Graphics Computer Systems, 1991, incorporated by reference herein. It was designed as an interface to SGI rendering hardware. GL supports simple display lists, which are essentially macros for a sequence of GL commands. The GL routines perform rendering operations by issuing commands to the SGI hardware. Inventor is an object-oriented 3-D graphics user interaction toolkit that sits on top of the GL graphics system. Inventor keeps an entire 3-dimensional model in a "scene graph". Inventor has render action objects that take a model as a parameter. The render action draws the entire model by traversing it and calling the appropriate rendering method for each object. The usual render action is the GL rendering mode. Inventor is described in Wernecke, "The Inventor Mentor", Addison-Wesley (1994), incorporated by reference herein.
Fig. 7 is a symbolic diagram of a scene graph used in the graphics engine of the rendering system 110. It has a root node 702, and each of the virtual sets are attached as child nodes to the root node 702. For example, the node 704 which begins the scene graph for set number 1 is attached as a child node to the root node 702, and node 706, which is the beginning of the scene graph for virtual set number 2, is also attached as a child node to the root node 702.
Set node 704 has a camera object 708 attached thereto, as well as all of the geometric and other objects 710 in a typical Inventor scene graph. These include first and second animation objects 712 and 714 for virtual set number 1. Animation objects are specific to the virtual set.
Similarly, the node 706 for set number 2 has attached thereto a camera object 716 and a number of conventional scene graph objects 718, including first and second animations 720 and 722 for the virtual set number 2. The camera objects 708 and 716 point to respective data structures which describe, among other things, the position, orientation and image parameters which have been reported most recently for the currently active camera. The entire scene graph of Fig. 7 is established upon initialization or configuration of the system.
In normal operation, the graphics engine (GE) is a process executing on the rendering system 110 that updates and displays rendered images. It operates by passing to an Inventor "start rendering" function, as the "current node", either set number 1 node 704 or set number 2 node 706, depending on which virtual set is currently active. The Inventor routines then traverse the scene graph below the specified node in a conventional manner. When camera object 708 is encountered during the traversal, all subsequent objects are rendered from the viewpoint and with the orientation and image parameters specified in the data structure pointed to by the camera object 708. When an animation object 712 or 714 is encountered during traversal, the routines call an Inventor "engine" to update the parameters of certain objects in the scene graph before continuing. The data structure for an animation object includes an indication of whether the animation is on or off, and the engine is invoked only if the animation is on.
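The structure of Fig. 7 and the traversal just described can be sketched as follows. This is a minimal stand-in, not Inventor's actual API: the node classes, the camera state dict, and the logging of draw operations are all assumptions made for illustration.

```python
class Node:
    """Minimal scene-graph node (illustrative, not Inventor's API)."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)


class CameraNode(Node):
    """Points at the most recently reported camera parameters."""
    def __init__(self, state):
        super().__init__("camera")
        self.state = state  # e.g. position, orientation, zoom, focus


class AnimationNode(Node):
    """Holds an on/off flag and an 'engine' that updates the scene."""
    def __init__(self, name, engine):
        super().__init__(name)
        self.active = False
        self.engine = engine


def render(current_node, log):
    """Traverse the graph below the active set node.

    Rendering proper is replaced by appending to `log`.  A camera node
    fixes the viewpoint for the objects that follow it, and an
    animation node's engine runs only while the animation is on.
    """
    for child in current_node.children:
        if isinstance(child, CameraNode):
            log.append(("viewpoint", dict(child.state)))
        elif isinstance(child, AnimationNode):
            if child.active:
                child.engine()
        else:
            log.append(("draw", child.name))
        render(child, log)
```

Passing the set-1 node or the set-2 node as `current_node` corresponds to handing Inventor the "current node" for the active virtual set.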
When a camera, set or animation change command is received via the serial link 144 from the system controller 132, it is detected by the GE process. If the received command was a "camera N" code, then the camera object 708 is modified to point to the data structure for camera N. If the received command is a "set XX" command, then for the next frame to be rendered, the node corresponding to virtual set XX will be passed to Inventor as the "current node". If the received command is an "animation AA" command, then the flag in the animation object AA for the current set which indicates whether the animation is active or not, is toggled. In any of these cases, control then returns to the rendering loop.
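The dispatch just described can be sketched as a small function. The command strings come from the text; the `state` dict and its keys are illustrative assumptions of this sketch.

```python
def handle_command(cmd, state):
    """Apply one serial-link command from the system controller (sketch)."""
    kind, _, arg = cmd.partition(" ")
    n = int(arg)
    if kind == "camera":
        # Point the camera object at the data structure for camera N.
        state["camera"] = n
    elif kind == "set":
        # The next frame is rendered from set XX's node downward.
        state["current_set"] = n
    elif kind == "animation":
        # Toggle animation AA's active flag for the current set.
        flags = state["animations"][state["current_set"]]
        flags[n] = not flags[n]
    # Control then returns to the rendering loop.
    return state
```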
Thus it can be seen that the system controller 132, in combination with the rendering system 110 and the switcher 126 automatically perform all the steps necessary to implement the camera, set and animation changes requested by the operator merely by pressing buttons in a familiar manner on the switch console 138.
IV. SYSTEM OPERATION FOR FAULT TOLERANCE
A. System Controller
Figs. 8-11 are symbolic flow charts illustrating how the system controller 132 manages the fault tolerance of the facility of Fig. 1. The fault-tolerant system is activated by an FTS-ON command byte received from the rendering system 110 via the serial link 144. On power-up of the system controller 132, the FTS-ON flag is unset. In step 802, the controller sets the FTS-ON flag, and in step 804, the controller sends an FTS-ON ACKNOWLEDGE byte command back to the rendering system 110 via the serial link 144.
The rendering system 110 can also turn off the fault-tolerant system, and this is illustrated in the flow chart of Fig. 9. Specifically, when the system controller 132 receives an FTS-OFF command byte from the rendering system 110 via the serial link 144, in step 902, the controller unsets the FTS-ON flag. In step 904, the controller 132 sends an FTS-OFF ACKNOWLEDGE code back to the rendering system 110 via the serial link 144.
While the fault-tolerant system is active, at regular intervals, the graphics engine in the rendering system 110 sends a GE-ALIVE code to the system controller 132 via the serial link 144. Fig. 10 illustrates the operation of the system controller 132 in response to receipt of a GE-ALIVE code. Specifically, in step 1002, it is determined whether the FTS-ON flag is set. If so, then the controller merely sets a GE-ALIVE flag within the controller 132. If FTS-ON is unset, then the GE-ALIVE code is ignored.
While the fault-tolerant system is enabled, the system controller 132 executes a GE-ALIVE CHECK loop illustrated in Fig. 11. In step 1102, the routine first awaits expiration of a preset GE-ALIVE CHECK INTERVAL. This interval is programmable on system configuration, within the range of 550 milliseconds to 3300 milliseconds. In step 1104, the routine determines whether the GE-ALIVE flag is currently set. If so, then all is well, and in step 1106, the routine unsets the GE-ALIVE flag and loops back to step 1102. Presumably, during the waiting period of step 1102, a GE-ALIVE code will be received from the rendering system 110, and the GE-ALIVE flag will be set again in step 1002 (Fig. 10). If, in step 1104, the GE-ALIVE flag is not set, then a fault-tolerant state is entered. The same state is also entered if the system controller 132 detects loss of video via its GPI input 154 (Fig. 1). In this state, in step 1108, the system controller 132 unsets the FTS-ON flag and asserts a RE-BOOT GPI output via the GPI signal line 146 (Fig. 1) to the rendering system 110. This causes the rendering system 110 to restart itself as described hereinafter.

In normal operation, outside the fault-tolerant state, the system controller 132 has notified the switchers 208 (Fig. 2) in each of the fault handlers 118 and 120 to output the video information directly from the distribution amplifier 202 in the fault handler via video path 204. The system controller 132 also periodically notifies the frame store 210 in the fault handler 118 or 120 for the active camera only, to update its stored frame. This notification takes place over the GPI line 212 for the proper fault handler 118 or 120 and may, for example, be initiated once each traversal of the GE-ALIVE CHECK loop of Fig. 11. Other mechanisms can also be used to time the assertion of the trigger for the frame stores 210.
When the fault-tolerant state is entered, frame store triggers are no longer asserted. Also, the system controller 132 notifies the switchers 208 in both of the fault handlers 118 and 120, via the respective GPI lines 214, to switch. Thereafter, these switchers provide the stored frames from the frame stores 210 to the outputs of the respective fault handlers 118 and 120. Thus, the viewer continues to see a fully rendered virtual set behind the talent, except possibly for a brief loss of background no longer than the GE-ALIVE CHECK INTERVAL.
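One pass of the GE-ALIVE check, together with the transition into the fault-tolerant state just described, can be sketched as follows. The flag dict and the two callbacks, standing in for the frame-store switch and RE-BOOT GPI outputs, are illustrative assumptions; waiting out the GE-ALIVE CHECK INTERVAL is left to the caller.

```python
def ge_alive_check(flags, switch_to_frame_store, reboot_ge):
    """One iteration of the GE-ALIVE CHECK loop of Fig. 11 (sketch)."""
    if not flags["FTS_ON"]:
        return "idle"          # fault-tolerant system not enabled
    if flags["GE_ALIVE"]:
        # A heartbeat arrived during the interval: all is well.
        # Unset the flag and expect the next GE-ALIVE to set it again.
        flags["GE_ALIVE"] = False
        return "healthy"
    # No heartbeat arrived: enter the fault-tolerant state.
    flags["FTS_ON"] = False
    switch_to_frame_store()    # viewers keep seeing the stored frames
    reboot_ge()                # assert the RE-BOOT GPI output
    return "fault"
```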
Note that although two mechanisms are used in the facility of Fig. 1 to detect the presence of a fault in the graphics engine, other detection mechanisms can be used in another embodiment, either in addition to these or instead of these.
B. Rendering System
Fig. 12 is a flow chart illustrating the operation of the rendering system 110 with respect to fault tolerance. When the rendering system 110 boots up, the graphics engine (GE) process is automatically started. After initialization in step 1202, the GE process sends the FTS-ON code to the system controller 132 via the serial link 144, and awaits an FTS-ON ACKNOWLEDGE from the system controller (step 1206). In step 1208, the GE process loads the graphics parameters for rendering from a non-volatile disk file, lastconfiguration.rt. The GE process then enters the main graphics loop. In step 1210, it reads the serial port buffer for any commands from the system controller via the serial link 144. If a set, camera or animation change command was received (step 1212), then the graphics parameters are updated to reflect the new set, the new camera, the new animation or the termination of an animation, and the revised parameters are written to lastconfiguration.rt (step 1214).
In step 1216, after any set, camera or animation change command has been handled, the GE process reads the memory head or other tracker data for new camera position, orientation or image parameters (see the above-incorporated related application). In step 1218, the next graphics frame is rendered based on the current graphics parameters. In step 1220, the GE process sends a GE-ALIVE code to the system controller 132 via the serial link 144, and loops back to step 1210.
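One iteration of the Fig. 12 loop might be sketched as below. The callables are stand-ins; the file name lastconfiguration.rt comes from the text, but persisting the parameters as JSON is this sketch's choice, not the patent's.

```python
import json


def ge_loop_iteration(state, read_command, read_tracker, render_frame,
                      send_alive, config_path="lastconfiguration.rt"):
    """One pass of the main graphics loop of Fig. 12 (sketch)."""
    # Steps 1210/1212: poll the serial port for a change command.
    cmd = read_command()
    if cmd is not None:
        state.update(cmd)
        # Step 1214: persist the revised parameters so a re-booted
        # rendering system can resume from the last configuration.
        with open(config_path, "w") as f:
            json.dump(state, f)
    # Step 1216: fold in the latest camera tracker data.
    state.update(read_tracker())
    # Step 1218: render the next frame from the current parameters.
    render_frame(state)
    # Step 1220: send the GE-ALIVE heartbeat, then loop to step 1210.
    send_alive()
```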
The process illustrated in Fig. 12 is only one of many possible ways of handling the fault tolerance features in the rendering system 110. This particular mechanism is advantageous for a number of reasons. First, a non-volatile record is kept of the current graphics parameters in the lastconfiguration.rt file. This file is updated every time a set, camera or animation change command is received from the system controller 132, and is automatically loaded should the system controller 132 have to re-boot the 3-D rendering system 110.
Second, the GE-ALIVE code is initiated from within the main graphics loop itself rather than from a separate process running under the rendering system's multi-tasking Unix operating system. While a separate process would also work, it would fail to send the periodic GE-ALIVE code only for faults catastrophic enough to cause Unix itself (or the rendering system hardware) to lock up or fail; faults confined to the graphics loop would go undetected. By including the GE-ALIVE transmission in the main graphics loop itself, any fault which locks up the graphics loop will cause the facility to enter the fault-tolerant state, whether or not the fault is serious enough to also lock up or halt Unix.

Another mechanism which can be used to send the GE-ALIVE code with the proper timing is the Unix "signal" mechanism. Before entering the main graphics loop, the GE process requests a signal from the Unix operating system after an appropriate time interval, for example two seconds. The GE process then enters a graphics loop similar to that shown in Fig. 12, except that the step of sending the GE-ALIVE code to the system controller (step 1220) is omitted. The main graphics loop simply repeats continuously, rendering new graphics frames as quickly as possible. When the signal arrives from the operating system, control of the GE process is automatically transferred to a signal handler such as that shown in Fig. 13. In step 1302, the signal handler sends the GE-ALIVE code to the system controller, and in step 1304, the signal handler re-arms the operating system signal for another two seconds hence. Control then returns to wherever it left off in the main graphics loop of Fig. 12. This latter mechanism has the same advantage as that illustrated in Fig. 12, in that faults in the GE process will be detected even if they do not extend to other parts of the 3-D rendering system 110.
Additionally, this mechanism has the further advantage that the GE-ALIVE code will be sent to the system controller 132 at fixed intervals regardless of how quickly the graphics engine is actually rendering frames.
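The signal-based variant can be sketched with Python's Unix-only signal module. The two-second period is the example interval from the text; the function names and the injected send_alive callable are assumptions of this sketch.

```python
import signal

HEARTBEAT_SECONDS = 2  # example interval from the text


def make_alive_handler(send_alive):
    """Build a SIGALRM handler like that of Fig. 13 (sketch)."""
    def handler(signum, frame):
        send_alive()                     # step 1302: send GE-ALIVE
        signal.alarm(HEARTBEAT_SECONDS)  # step 1304: re-arm the alarm
    return handler


def install_heartbeat(send_alive):
    """Arm the first alarm before entering the main graphics loop."""
    signal.signal(signal.SIGALRM, make_alive_handler(send_alive))
    signal.alarm(HEARTBEAT_SECONDS)
```

Because the alarm fires on a fixed period regardless of how fast frames are rendered, the heartbeat keeps its timing even when the graphics loop slows down, which is the advantage noted above.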
The foregoing description of preferred embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in this art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.