WO2001019079A1 - A system for distributing and delivering multiple streams of multimedia data - Google Patents

A system for distributing and delivering multiple streams of multimedia data

Info

Publication number: WO2001019079A1 (other versions: WO2001019079A9)
Authority: WIPO (PCT)
Application number: PCT/US2000/040851
Other languages: French (fr)
Inventor: John Taylor
Original assignee: Quokka Sports, Inc. (application filed by Quokka Sports, Inc.)
Priority application: AU12505/01A (published as AU1250501A)
Prior art keywords: bit stream, data, serialized bit, serialized, digital

Classifications

    • H — ELECTRICITY → H04 — ELECTRIC COMMUNICATION TECHNIQUE → H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/23412 — Processing of video elementary streams for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • H04N21/21805 — Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N21/234318 — Reformatting operations of video signals, for distribution or compliance with end-user requests or device requirements, by decomposing into objects, e.g. MPEG-4 objects
    • H04N21/2365 — Multiplexing of several video streams
    • H04N21/4516 — Management of client data or end-user data involving client characteristics, e.g. set-top-box type, software version or amount of memory available
    • H04N21/454 — Content or additional data filtering, e.g. blocking advertisements
    • H04N21/4725 — End-user interface for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
    • H04N21/6125 — Network physical structure; signal processing specially adapted to the downstream path of the transmission network, involving transmission via Internet
    • H04N7/165 — Centralised control of user terminal; registering at central

Definitions

  • the present invention relates to the field of broadcasting events using multiple media; more particularly, the present invention relates to simultaneously distributing, delivering and presenting multiple streams of multimedia data.
  • Individual viewers may wish to view events through a perspective that is different from that of the broadcasters. For instance, if a broadcaster is showing a sporting event on television, an individual viewer may wish to follow an individual competitor, sponsor, etc. However, the individual viewer does not have control over the particular content that is broadcast by the broadcaster and cannot indicate the content they desire to see as an event is being broadcast.
  • current broadcasting technologies are unable to organize and transmit the rich diversity of experiences and information sources that are available to participants or direct spectators at the event.
  • a live spectator or participant at a sporting event is able to simultaneously perceive a wide range of information, such as watching the event, listening to it, reading the program, noticing weather changes, hearing the roar of the crowd, reading the scoreboard, discussing the event with other spectators, and more.
  • the spectator/participant is immersed in information relating to the event.
  • a knowledgeable spectator knows how to direct his attention within this flood of information to maximize his experience of the event.
  • the viewer of a television broadcast does not and cannot experience this information immersion. That is, television lacks the ability to emulate for the viewer the experience of attending or participating in the particular event.
  • delivery of live broadcasts of an event in combination with the virtual emulation may be desirable in order to additionally provide an individual with a broadcaster's perspective of the event.
  • a method and apparatus for distributing one or more streams of data from a venue to a client comprises capturing data from a remotely occurring event, converting the data into digital media assets, transmitting the digital media assets to a distribution mechanism, receiving broadcast data at the distribution mechanism, distributing the digital media assets as a first serialized bit stream and distributing the broadcast data to the client set-top device as a second serialized bit stream.
  • Figure 1 illustrates one embodiment of a multimedia delivery platform
  • Figure 1a is a block diagram of one embodiment of a transmission, distribution and delivery component of a platform
  • Figure 1b is a block diagram of another embodiment of a transmission, distribution and delivery component of a platform
  • Figure 2 is a block diagram of one embodiment of a remote production stage of the platform
  • Figure 3 is a block diagram of one embodiment of a set-top device
  • Figure 4 illustrates one embodiment of broadcast video and IP data displayed on a display
  • Figure 5 illustrates subprocesses of one embodiment of production
  • Figure 6 illustrates a route allocated to a flow-through IP video stream.
  • a platform for distributing multiple streams of digital media data from events (e.g., sporting events) and/or other sources to end users is described.
  • the digital media assets include assets (individual units of digital content) and metadata.
  • An asset may include, for example, material such as a photographic image, a video stream, timing data from an event (e.g., timing data from a race, timing data for a single car on a single lap within a race), the trajectory of a ball as it flies towards a goal, an HTML file, an electronic mail message (from a computer), etc.
  • Metadata is information about an asset, such as for example, its type (e.g., JPEG, MPEG, etc.), its author, its physical attributes (e.g., IP multicast addresses, storage locations, compression schemes, file formats, bitrates, etc.), its relationship to other assets (e.g., that a photograph was captured from a given frame of a particular video), the situation in which an end user accessed it (e.g., a hit), its heritage (e.g., other assets from which it was generated), its importance to the immersive experience, and its movement through the platform.
  • Metadata may provide more abstract information, such as for example, the types of values available within a particular kind of telemetry stream, instructions generated by production and followed by immersion (e.g., to track certain kinds of user behavior, to automatically present certain assets to the user at certain times or in response to certain user actions, etc.), relationships between assets and other entities such as events, competitors, sponsors, etc.
  • the platform treats both assets and metadata as first-class objects, which are well known in the art.
  • a context is a metadata structure that defines a set of assets (and/or other contexts). Contexts may be dynamically generated or may be stored for optimal access.
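The asset/metadata/context model described above can be sketched as a small data structure. The class and field names below are hypothetical illustrations chosen for this sketch, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """An individual unit of digital content (image, video stream, telemetry, ...)."""
    global_id: str                                # persistent, location-independent identifier
    payload: bytes                                # the raw content itself
    metadata: dict = field(default_factory=dict)  # type, author, format, heritage, ...

@dataclass
class Context:
    """A metadata structure defining a set of assets and/or other contexts."""
    name: str
    assets: list = field(default_factory=list)       # global_ids of member assets
    subcontexts: list = field(default_factory=list)  # names of nested contexts

# Usage: a context grouping one photographic asset from a race.
race = Context(name="race-lap-12")
photo = Asset(global_id="asset:0001", payload=b"...",
              metadata={"type": "JPEG", "author": "pit photographer"})
race.assets.append(photo.global_id)
```

Treating both assets and metadata as first-class objects, as the text notes, means either can be stored, transmitted, and referenced independently.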
  • the platform controls the data flow from each event.
  • the platform collects assets from a venue, produces an immersive experience and delivers the experience to end users (viewers).
  • the platform receives digital media assets (e.g., metadata and individual units of digital content) corresponding to the event and converts those assets into immersive content (i.e., context from and about an event in which users may immerse themselves).
  • the conversion of digital media assets into immersive content is performed by using a context map (e.g., a graph structure), which organizes the digital media assets and indicates relationships between contexts associated with the digital media assets.
  • the platform may maintain a hierarchical database of contexts.
  • the platform tags digital media assets with global identifiers indicative of context information.
  • the global identifier is a persistent, location-independent, globally unique identifier for the digital media asset describing the event.
  • the context map may be used at remote production (as described later).
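One way to realize a persistent, location-independent, globally unique identifier carrying context information is sketched below; the identifier layout (event/context-path/UUID) is an assumption made for illustration, not a format the patent specifies:

```python
import uuid

def tag_asset(event_id: str, context_path: str) -> str:
    """Return a globally unique identifier embedding context information.

    The UUID suffix makes the identifier globally unique; the prefix
    encodes the context the asset belongs to, independent of where the
    asset is physically stored.
    """
    return f"{event_id}/{context_path}/{uuid.uuid4()}"

# Usage: tag a telemetry asset captured during a hypothetical sailing race.
gid = tag_asset("sydney-2000", "sailing/race-3/boat-7")
```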
  • the immersive content is distributed, delivered, and presented to end users (viewers).
  • the immersive content enables an immersive experience to be obtained.
  • This immersive experience is a virtual emulation of the experience of actually being present at or participating in an event, obtained by being subjected to the content that is available from and about the event.
  • one or more live broadcast audio/video feeds corresponding to an event are combined with the digital media assets and transmitted to end users.
  • Broadcast feed data streams are combined with the asset data streams at a distribution stage of the platform and are transmitted to end-users. Upon being received, the streams of data are separated and presented to end-users.
  • the platform collects, transmits, produces and combines digital media assets with live broadcast data.
  • the platform distributes, delivers, and presents the digital media assets and the broadcast data.
  • Each of these functions, or phases, may be implemented in hardware, software, or a combination of both. In alternative embodiments, some of these may be performed through human intervention with a user interface.
  • terms such as "computing" or "calculating" or "determining" or "displaying" or the like refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
  • the present invention also relates to apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs (electrically erasable programmable read-only memories), magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • the algorithms and displays presented herein are not inherently related to any particular computer or other apparatus.
  • Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below.
  • the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
  • the instructions of the programming language(s) may be executed by one or more processing devices (e.g., processors, controllers, central processing units (CPUs), execution cores, etc.).
  • Figure 1 is a block diagram of one embodiment of a platform represented in terms of the following subprocesses or segments which it performs: collection 101, remote production 107, transmission-distribution-delivery (TDD) 108 and client immersion 106. Additional functional descriptions of the platform are discussed in further detail in the co-pending application entitled "An Architecture for Controlling the Flow and Transformation of Multimedia Data", Application Serial No. 09/316,328, filed on May 21, 1999, assigned to the corporate assignee of the present invention and herein incorporated by reference.
  • Collection 101 generates streams and packages that are forwarded to remote production 107.
  • a stream is a constant flow of real time data.
  • a package is a bundle of files that is stored and forwarded as a single unit.
  • a stream resembles a radio or television program, while a package resembles a letter or box sent through the mail (where the letter or box may contain video or audio recordings).
  • Streams are often used to allow end users to view time-based content (e.g., real-time video data), while packages may be used for non- temporal content (e.g., graphics, still images taken at the event, snapshots of content that changes less rapidly than time-based content, such as a leaderboard, etc.).
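The stream/package distinction above can be sketched as two minimal abstractions; the class names and fields are assumptions made for illustration:

```python
class Stream:
    """Constant flow of real-time, time-based data (e.g., live video or telemetry).

    Items are delivered as they arrive, like a radio or television program.
    """
    def __init__(self, name: str):
        self.name = name
        self.chunks = []

    def push(self, timestamp: float, data: bytes):
        self.chunks.append((timestamp, data))

class Package:
    """Bundle of files stored and forwarded as a single unit, like a parcel.

    Suitable for non-temporal content: graphics, still images, leaderboard
    snapshots that change less rapidly than time-based content.
    """
    def __init__(self, name: str, files: dict):
        self.name = name
        self.files = dict(files)  # forwarded only when the bundle is complete

# Usage: real-time telemetry travels as a stream, still photos as a package.
telemetry = Stream("car-7-telemetry")
telemetry.push(0.0, b"rpm=9200")
stills = Package("lap-12-photos", {"turn1.jpg": b"...", "pit.jpg": b"..."})
```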
  • the streams and packages being transmitted are formatted by collection 101 into a particular format.
  • the streams and packages flow through the platform from collection 101 through to client immersion 106.
  • Collection 101 and remote production 107 exchange metadata in order to synchronize contextual information.
  • streams, packages and metadata are transferred via TDD 108 to client immersion 106.
  • the streams and packages may be converted into formats specific to the delivery technology (e.g., http responses, etc.).
  • Client immersion 106 includes a set-top unit 155.
  • Set-top unit 155 receives the streams and packages of data in order to enable a user to view an event and corresponding data.
  • set-top unit 155 could be implemented using any type of processing device or system.
  • Collection 101 is the process of capturing proprietary data at event venues and translating it into a format. That is, at its most upstream point, the platform interfaces to a variety of data acquisition systems in order to gather raw data from those systems and translate it into a predefined format.
  • the format contains media assets, context and other metadata.
  • the format adds a global identifier and synchronization information that allows subsequent processing to coordinate content from different streams or packages to each other. The processes within collection 101 are able to control gathering and translation activities.
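The collection-side formatting step just described (adding a global identifier and synchronization information to raw acquisition data) might look roughly like this; the field names and wrapper shape are assumptions, not the patent's actual format:

```python
import uuid

def format_for_transport(raw: bytes, source: str, event_clock: float) -> dict:
    """Wrap raw venue data with a global identifier and sync information.

    The shared event clock lets downstream processing coordinate content
    from different streams or packages with each other.
    """
    return {
        "global_id": str(uuid.uuid4()),  # persistent, globally unique
        "source": source,                # which acquisition system captured it
        "event_time": event_clock,       # common clock for cross-stream sync
        "payload": raw,
    }

# Usage: wrapping one reading from a hypothetical timing system.
unit = format_for_transport(b"lap=12;t=92.4", "timing-system", 5421.7)
```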
  • collection 101 may occur simultaneously at multiple sites of an event venue.
  • venue data may be collected at an ice skating event and a bobsledding event that are occurring at different sites at the same time. The data from both events may be forwarded to remote production 107.
  • collection 101 converts raw venue data into digital media assets.
  • the venue data may include both real-time and file-based media.
  • This media may include traditional real-time media such as, for example, audio and video, real-time data collected directly from competitors (e.g., vehicle telemetry, biometrics, etc.), venue-side real-time data (e.g., timing, position, results, etc.), traditional file-based media (e.g., photographs, editorial text, commentary text, etc.), other file-based media (e.g., electronic mail messages sent by competitors, weather information, maps, etc.), and/or other software elements used by the client (e.g., visualization modules, user interface elements dynamically sent out to the client to view data in new ways, sponsor (advertising) contexts, view style sheets).
  • Sporting events can be characterized by the assets collected from the event.
  • these assets can include audio and video of the actual event, audio or video of individual competitors (e.g., a video feed from an in-car camera), timing and scoring information, editorial/commentary/analysis information taken before, during and/or after an event, photographs or images taken of and by the competitors, messages to and from the competitors (e.g., the radio links between the pit and the driver in an auto race), a data channel (e.g., a control panel readout taken from the device in a car), and telemetry indicating vital functions of a competitor.
  • the telemetry can include biometrics of the competitor (e.g., heart rate, body temperature, etc.).
  • Other telemetry may include position information of a competitor (e.g., player with microchip indicating position) using, for example, a global positioning system (GPS), telemetry from an on-board computer, etc., or a physical device (e.g., an automobile).
  • Various devices may be used to perform the collection of sporting event data, or other data.
  • cameras may be used to collect video and audio.
  • Microphones may be used to collect audio (e.g., audience reaction, participant reaction, sounds from the event, etc.).
  • Sensors may be used to obtain telemetry and electronic information from humans and/or physical objects.
  • the information captured by such devices may be transferred using wires or other conductors, fibers, cables, or using wireless communications, such as, for example, radio frequency (RF) or satellite transmission.
  • collection 101 includes hardware, such as collection devices or data acquisition systems (e.g., cameras, microphones, recorders, sensors, etc.), communications equipment, encoding servers, production management server(s), and network equipment.
  • each of the collection devices converts the event data it captures into a format that includes the digital units of data, metadata and context information.
  • each of the collection devices sends the raw captured data to a location where a remote production unit, device, or system formats it.
  • the formatting process of collection 101 is discussed in further detail in the co-pending Application Serial No. 09/316,328 entitled "An Architecture for Controlling the Flow and Transformation of Multimedia Data", assigned to the corporate assignee of the present invention and incorporated herein by reference.
  • the production process may commence at the venue (i.e., a remote production).
  • Remote production 107 includes a set of processes that are applied to digital media assets before they are distributed.
  • Production is the process of managing an event from a particular point of view.
  • managing an event includes determining which assets will be collected and transferred from the event venue.
  • Event management may include: defining, statically or dynamically, event-specific metadata based on global metadata received from production; dynamically controlling which assets are captured (using dynamic selection of information as event data is being collected), how they are formatted (e.g., adjusting a compression rate using a video encoder depending on contents of the video), and transmitted away from the event; managing physical resources (data collection hardware, communications paths and bandwidth, addresses, etc.) necessary to capture, format, transmit assets, etc.
  • FIG. 2 is a block diagram of one embodiment of remote production 107.
  • Remote production 107 includes a media manager 210 and an uplink module 220.
  • Media manager 210 manages the production of the various venue data streams received at remote production 107. For example, media manager 210 coordinates the timing of the venue streams for archival and compression. The coordination of timing conducted at media manager 210 may be necessary since each stream of venue data collected may be received at collection 101 at different data rates. Subsequently, media manager 210 multiplexes the venue streams into uplink module 220.
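One plausible mechanism for the timing coordination media manager 210 performs is to re-order items from venue streams (which arrive at different data rates) on a common event clock before multiplexing them into the uplink. The sketch below assumes each stream is already time-ordered internally; this is an illustrative assumption about the mechanism, not the patent's implementation:

```python
import heapq

def multiplex(*streams):
    """Merge (event_time, source, data) tuples from several venue streams
    into a single time-ordered uplink sequence."""
    return list(heapq.merge(*streams))

# Usage: a video stream at ~25 fps interleaved with slower timing data.
video  = [(0.00, "video", b"frame0"), (0.04, "video", b"frame1")]
timing = [(0.02, "timing", b"lap=12")]
uplink = multiplex(video, timing)
```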
  • Uplink module 220 includes an inserter 222 and a modulator 224.
  • inserter 222 is an Internet Protocol (IP) inserter that encapsulates the stream of IP formatted data received at uplink module 220 within a video-encoded format.
  • in one embodiment, inserter 222 is a Motion Picture Expert Group 2 (MPEG-2) inserter that encapsulates the IP data according to the MPEG-2 standard as developed by the International Organization for Standardization. In another embodiment, inserter 222 may create a Digital Video Broadcasting (DVB) compliant stream.
  • Remote production 107 may be performed at a Remote Production Facility (RPF).
  • An RPF may be a studio at a central site where the digital media assets are produced before being distributed.
  • the studio is an Internet protocol (IP) studio. It is referred to as an IP studio because all, or some portion of, digital media assets that are received from the studio are sent out using an industry standard TCP/IP protocol suite throughout the rest of the segments (phases) and the assets are digital IP assets.
  • the studio may not send and view the digital video assets or perform all operations using IP in alternative embodiments.
  • Remote production 107 may receive venue data collected from events that are occurring simultaneously, as described above.
  • each event is transmitted through TDD 108 to client immersion 106 via a single communications channel.
  • the data from each event may be transmitted and distributed to client immersion 106 by multiple communications channels.
  • TDD 108 provides for the transmission, distribution and delivery of data streams to client immersion 106.
  • TDD 108 receives the IP venue stream data that is transmitted from remote production 107.
  • TDD 108 may receive one or more live audio/video broadcast feeds 110 from a broadcast (e.g., television) network providing coverage of a venue.
  • TDD 108 is coupled to a package delivery system (PDS) 120 for receiving data from one or more World Wide Web (Web) sites.
  • the transmission and delivery of data streams at TDD 108 is implemented using satellite transmissions.
  • FIG. la is a block diagram of one embodiment of TDD 108.
  • TDD 108 includes transmission 202, distribution 204 and delivery 205.
  • Transmission 202 transmits specifically formatted streams and packages, including metadata, from event venues.
  • streams and packages are transferred via high speed IP networks.
  • the IP networks may be terrestrial and/or satellite-based.
  • transmission 202 is implemented using satellite 125.
  • a communication mechanism for the transmission of streams may be selected based on its ability to accommodate bandwidth management, while a communication mechanism for the transmission of packages may be selected based on its reliability.
  • transmission 202 treats the specifically formatted assets as opaque entities. In other words, transmission 202 has no knowledge of what data is being transmitted, nor its format, so that the information is just raw data to transmission 202.
  • transmission 202 may include dynamic network provisioning for individual sessions. That is, the network may dynamically allot more bandwidth to particular streams or packages based on priority. Data could be routed over links based on cost or time priorities. For example, transmission 202 may purchase transport bandwidth, while a terrestrial IP network is on all the time. Supplementary data might be routed over Internet virtual private networks while video might be sent over a satellite.
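The priority-based routing idea above (video over satellite, supplementary data over an Internet virtual private network) can be sketched as a tiny link-selection policy. The link names and the policy itself are invented for illustration:

```python
# Hypothetical transport links available to transmission 202.
LINKS = {
    "satellite": {"bandwidth_kbps": 45_000},  # purchased transport bandwidth
    "vpn":       {"bandwidth_kbps": 2_000},   # always-on terrestrial IP network
}

def route(stream_kind: str) -> str:
    """Pick a transport link for a stream based on its priority class.

    High-bandwidth, time-critical video goes over satellite; supplementary
    data rides the cheaper always-on VPN.
    """
    return "satellite" if stream_kind == "video" else "vpn"

# Usage: routing two kinds of streams.
video_link = route("video")
data_link = route("telemetry")
```

A real implementation would also weigh cost and time priorities and could re-provision bandwidth per session, as the text describes.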
  • transmission 202 may include a process that encrypts assets prior to transmission.
  • Distribution 204 is the process of transmitting streams and packages from the remote production studio, via high-speed IP networks, to delivery facilities and/or mechanisms. Distribution 204 may use a distribution network having multiple simultaneously transmitting channels. In one embodiment, broadband communications are used for transmission.
  • the mechanism(s) used for distribution 204 may be selected based on the ability to accommodate bandwidth management, reliability, and/or other considerations.
  • Distribution 204 receives IP venue stream data from remote production 107 via satellite 125, encapsulated in the MPEG-2 format, as described above.
  • distribution 204 may receive one or more streams of video data from broadcast feed 110 ( Figure 1).
  • broadcast feed 110 represents video data transmitted from an entity or individual(s) providing coverage of the venue such as from a television network.
  • broadcast feed 110 is a live feed from an event captured as the event is occurring.
  • the broadcast feed 110 may be encoded in the same format as the IP venue stream (e.g., MPEG-2). However, the IP data stream is modulated at one satellite frequency (or program identifier (PID)), while the broadcast feed is modulated according to a second PID. Subsequently, distribution 204 may combine the digital media stream of data with the broadcast stream of data for delivery to client immersion 106.
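The PID-based combining and separation described above can be sketched as follows. This is a simplified packet model (tagged payloads rather than real MPEG-2 transport framing), and the PID values are assumptions for illustration.

```python
IP_VENUE_PID = 0x100   # assumed PID for the IP venue stream
BROADCAST_PID = 0x101  # assumed PID for the broadcast feed

def multiplex(venue_packets, broadcast_packets):
    """Tag each payload with its PID and combine into one serialized stream."""
    stream = [(IP_VENUE_PID, p) for p in venue_packets]
    stream += [(BROADCAST_PID, p) for p in broadcast_packets]
    return stream

def demultiplex(stream, pid):
    """Lock on to a single PID, filtering out all other packets."""
    return [payload for (p, payload) in stream if p == pid]
```

A receiver such as set-top unit 155 would apply `demultiplex` once per tuner, each tuner locking on to its own PID.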
  • distribution 204 may receive IP data from PDS 120 ( Figure 1).
  • PDS 120 includes a File Transfer Protocol (FTP) server that retrieves data from a HyperText Transfer Protocol (HTTP) server corresponding to a particular Web site.
  • PDS 120 may include software that removes all HyperText Markup Language (HTML) format tags that make reference to the HTTP server.
  • the data from PDS 120 is forwarded to distribution 204.
  • the data is forwarded to distribution 204 via dual T1 carrier lines. Nevertheless, one of ordinary skill in the art will appreciate that the data may be transmitted to distribution 204 using other types of communication mechanisms.
  • distribution 204 includes a switch for switching between the PDS data and the IP venue data for transmission to client immersion 106. During an event, distribution 204 transmits the IP venue data to client immersion 106 along with the broadcast data as described above. However, before and after an event, distribution 204 transmits the PDS data from a Web site to client immersion 106.
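The event-based switching just described might be sketched as follows (the function and argument names are illustrative assumptions):

```python
def select_source(event_in_progress, ip_venue_data, pds_data):
    """Switch the transmission to client immersion 106: IP venue data during
    an event, PDS data from a Web site before and after an event."""
    return ip_venue_data if event_in_progress else pds_data
```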
  • Distribution 204 transmits the IP and broadcast video data to client immersion 106 via delivery 205 according to the modulated PIDs described above.
  • Delivery 205 makes immersion content available to client immersion 106. Numerous classes of delivery technology may be used, and multiple instances of each class may be used as well.
  • delivery 205 may employ satellite 135. However, in other embodiments, delivery 205 employs cable, telcos, ISPs, television, radio, on-line and print.
  • delivery 205 may take the form of one-way broadcast or client-server requests, via low-, medium-, and high-bandwidth bi-directional networks. Also depending on the technology, delivery 205 may or may not involve translating streams and packages into other formats (e.g., extracting data from a telemetry stream and inserting it into a relational database). Each delivery provider may implement a proprietary content reception protocol on top of basic IP protocols.
  • Figure 1b is another embodiment of TDD 108 including a production 103 segment.
  • the production 103 operates as a switch in which formatted content from multiple sources is received and sent out to multiple destinations. Numerous operations may be performed on the content, such as archiving, compression, editing, etc., as part of the switching process. In one embodiment, there is no hardwired connection between the operations and they may be performed on the pool of assets in general. Other production operations may be performed by production 103 such as, for example, laying out text and graphics, setting priorities for views (assets groups), creating associations between assets, etc.
  • production 103 comprises the following processes: acquisition, asset storage, asset production, analysis, immersion production, metadata management, dissemination, process management, user management, distillation and syndication.
  • each of these processes operates in a manner decoupled from the others.
  • the processes may be implemented as hardware and/or software modules.
  • Figure 5 illustrates each of these processes.
  • the acquisition process 301 provides the interface between transmission 202 and production 103.
  • Acquisition process 301 receives specifically formatted streams, packages, and metadata from collection 101 and remote production 107 and parses them into assets (units of digital data) and metadata.
  • Metadata may come to the acquisition process 301 separate from digital media assets when the metadata cannot be attached to event data that has been captured. This may be the case with an NTSC-based stream of data, in which case the metadata may indicate that the stream is an NTSC stream.
  • the acquisition process 301 provides an interface through which a number of operations may be performed. For instance, in one embodiment, the acquisition process 301 decrypts assets that had been encrypted for secure transmission, unpackages packages into their constituent parts, and parses metadata messages to determine their type and meaning.
  • the acquisition process 301 parses the metadata messages from the information received and forwards their contents to the metadata management process 306. After initially processing assets, the acquisition process 301 forwards them to the asset storage process 302. It also registers new assets with the metadata management process 306. The registration may be based on a context map that indicates what assets will be collected, and the tag on each asset (attached at collection). Using the tag, the acquisition process 301 knows what to do with each asset (e.g., store it for later use, pass it through unchanged, contact context management to notify it that the asset has been received, etc.).
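The tag-driven dispatch described above might look like the following sketch. The context-map entries, tag names, and action names are hypothetical; the original discloses only that the tag tells acquisition what to do with each asset.

```python
# Hypothetical context-map entries: each tag maps to the actions the
# acquisition process should take for assets carrying that tag.
CONTEXT_MAP = {
    "telemetry": ["store", "flow_through"],
    "photo": ["store", "notify_context"],
}

def acquire(tag, asset, storage, dissemination, notifications):
    """Dispatch an incoming asset according to its collection-time tag."""
    for action in CONTEXT_MAP.get(tag, ["store"]):
        if action == "store":
            storage.append(asset)        # forward to asset storage
        elif action == "flow_through":
            dissemination.append(asset)  # pass through unchanged
        elif action == "notify_context":
            notifications.append(tag)    # tell context management it arrived
```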
  • the acquisition process 301 forwards the assets directly to the dissemination process 307.
  • flow-through assets are simultaneously forwarded from the acquisition process 301 to the asset storage process 302 and the dissemination process 307. This is shown in Figure 6.
  • the asset storage process 302 manages physical storage and retrieval of all assets.
  • One or more storage media may be used.
  • a variety of storage technologies may be used, each suitable for certain types of assets.
  • the asset storage process 302 is responsible for interacting with the appropriate storage technology based on asset types.
  • a given asset type may be stored in multiple ways.
  • a telemetry stream may be stored as a flat file in memory and as a set of database records.
  • the asset storage process 302 is involved in storage, retrieval, removal, migration and versioning.
  • Migration refers to moving assets up and down within a storage hierarchy. This movement may be between different storage technologies (e.g., between hard disk and tape). Migration may be performed to free up local or short-term storage. Versioning may be used to indicate an asset's current version (after changes to this asset have been made or have occurred).
  • every time the asset storage process 302 stores, removes, migrates, or versions an asset it communicates with the metadata management process 306 to update the asset's physical location attributes, which the metadata management process 306 manages.
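A minimal sketch of that interaction: every store operation reports the asset's new physical location to metadata management. Class and attribute names here are assumptions for illustration.

```python
class MetadataManager:
    """Tracks each asset's physical location attributes (the role of process 306)."""
    def __init__(self):
        self.locations = {}  # asset_id -> (version, path)

    def update_location(self, asset_id, version, path):
        self.locations[asset_id] = (version, path)

class AssetStorage:
    """Stores asset bytes and reports every change to metadata management."""
    def __init__(self, metadata):
        self.metadata = metadata
        self.files = {}

    def store(self, asset_id, version, data):
        path = f"{asset_id}.v{version}"
        self.files[path] = data
        # Removal, migration, and versioning would notify in the same way.
        self.metadata.update_location(asset_id, version, path)
```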
  • the asset production process 303 is the set of processes by which individual digital media assets are created and edited within production 103.
  • the asset production process 303 is applied to most assets that have been acquired from collection 101.
  • in-house editorial and production staffs may create and edit their own assets during the asset production process 303.
  • asset production process 303 includes creation, editing, format conversion (e.g., Postscript to JPEG, etc.), and distillation.
  • a number of editing tools may be used.
  • the creation and editing processes are performed in cooperation with the asset storage process 302 and the metadata management process 306. This interaction may be automatic or manual.
  • assets are transferred from asset storage in order to be edited, and transferred back into asset storage after editing has been completed.
  • effects of production actions are communicated to the metadata management process 306.
  • the asset production process 303 notifies the metadata management process 306 that a JPEG asset was derived from a Postscript asset.
  • the distillation process creates multiple versions of an asset to support different kinds of delivery technologies (e.g., high, medium, and low-bandwidth web sites, one-way satellite data broadcast, interactive television, etc.).
  • the distillation process is performed by assessing the capabilities of the delivery technology against the asset and type of data being transformed. Depending on the complexity of the differences, the distillation process may be more or less automated. In any case, in one embodiment, the distillation process takes into account many aspects of delivery, including, but not limited to, file format, and the number and kind of assets that will be included for a specific delivery platform.
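The distillation step above might be sketched as follows. The delivery profiles, capability fields, and bitrate figures are illustrative assumptions, not values from the original disclosure.

```python
# Assumed example delivery profiles.
DELIVERY_PROFILES = {
    "web_high": {"max_kbps": 1500, "format": "MPEG-2"},
    "web_low":  {"max_kbps": 56,   "format": "JPEG"},
    "itv":      {"max_kbps": 4000, "format": "MPEG-2"},
}

def distill(asset, profiles=DELIVERY_PROFILES):
    """Create one version of the asset per delivery technology, capping the
    bitrate at what each technology can accept."""
    versions = {}
    for name, caps in profiles.items():
        versions[name] = {
            "source": asset["name"],
            "format": caps["format"],
            "bitrate_kbps": min(asset["bitrate_kbps"], caps["max_kbps"]),
        }
    return versions
```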
  • Immersion production process 305 attaches greater meaning to the assets that flow through production.
  • the immersion production process 305 initially creates HTML pages that reference and embed other assets.
  • the immersion production process 305 also creates and edits contexts, generates (manually or automatically) one kind of content based on another (e.g., highlight generation, running averages derived from telemetry values, specifically formatted metadata based on context management, etc.), generates production instructions directing immersion applications to automatically present certain information based on specific user actions, uses immersion applications to view content for quality control purposes, and defines the types of values available within particular stream types.
  • Metadata may include many different types of data.
  • metadata includes asset attributes, production instructions and contexts.
  • the production instructions may control the immersion applications based on activity of the user.
  • all types of metadata are first-class objects, thereby allowing easy transport between the platform segments.
  • every object has a unique identifier and a set of attributes. Unique identifiers are used to track objects and to relate them to other objects.
  • the metadata management process 306 creates and modifies attributes and contexts; logically locates objects by querying contexts (e.g., locate all streams belonging to Zinardi's car in a Long Beach auto race); logically locates objects by querying asset attributes (e.g., locate all the JPEG assets whose author is "Emily Robertson"); and physically locates objects by tracking their movements in the form of asset attributes (e.g., version #5 of asset #456 has been stored in the file named "foo.JPG.5" on the file server named "file_server_1").
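The three kinds of lookup just listed can be sketched with a small in-memory index. All identifiers and attribute names below are illustrative assumptions.

```python
class MetadataManagement:
    def __init__(self):
        self.attributes = {}  # asset_id -> attribute dict
        self.contexts = {}    # context name -> set of asset_ids

    def register(self, asset_id, **attrs):
        self.attributes[asset_id] = attrs

    def add_to_context(self, context, asset_id):
        self.contexts.setdefault(context, set()).add(asset_id)

    def query_context(self, context):
        # e.g., locate all streams belonging to one car in one race
        return self.contexts.get(context, set())

    def query_attributes(self, **attrs):
        # e.g., locate all JPEG assets whose author is "Emily Robertson"
        return {a for a, stored in self.attributes.items()
                if all(stored.get(k) == v for k, v in attrs.items())}
```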
  • the dissemination process 307 provides the interface between production 103 and distribution 204. To facilitate this interface, the dissemination process 307 is configured to communicate with individual distribution channels. The dissemination process 307 communicates with the asset storage process 302 to retrieve assets and with the metadata management process 306 to retrieve metadata. The dissemination process 307 also communicates directly with the acquisition process 301 in the case of flow-through streams. In one embodiment, the dissemination process 307 provides an interface for a number of operations. In one embodiment, the dissemination process 307 provides an interface that constructs messages out of metadata, packages assets and metadata into packages, optionally encrypts data for secure distribution, and logs departure times for all streams and packages.
  • the dissemination process 307 sends the digital media assets to various delivery head ends.
  • the type of data that is distributed to different types of devices is dependent on the device, and the dissemination process 307 controls which streams and packages of data are forwarded to the delivery head ends.
  • a device such as a Personal Digital Assistant (PDA) will only be sent data that it is capable of displaying.
  • an HDTV device will only be sent data that it is capable of displaying.
  • all data that is available is forwarded to the device and the device makes a determination as to whether it can or cannot display some or all of the information.
  • the studio distributes a control stream.
  • this control stream is the context map. That is, the context map is sent to the end user devices. In an alternative embodiment, only a portion of the context map that specifically deals with the event being captured is forwarded to the device to indicate what types of digital media assets are being forwarded.
  • the end user devices may determine what information is being sent to them and may determine what to view.
  • the process management 308 is a process that controls the automation of other production processes.
  • the process management 308 uses several types of objects to control asset switching (routing).
  • these types of objects include routes, process events, schedules and rules.
  • a route is a mapping between a set of processes and a set of physical (e.g., hardware, software, and network) resources.
  • Figure 6 illustrates a simple route allocated to a flow-through IP video stream. Referring to Figure 6, a stream is received from an incoming network 401 and undergoes acquisition via the acquisition process 301.
  • a process event is the application of a given route to a particular asset or group of assets at a specific time.
  • a schedule is the set of times at which a processing event occurs.
  • a rule is a logical constraint that determines when an event occurs. For example, a rule might state that a static leaderboard update page should be generated whenever a leaderboard stream has been acquired and archived. By using these objects, the assets may be managed, including indicating what information is to be shown.
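The leaderboard rule given as an example might be encoded and evaluated as in the following sketch; the rule format and event tuples are hypothetical.

```python
def check_rules(completed_events, rules):
    """Fire each rule whose preconditions all appear in the completed-event log."""
    fired = []
    for rule in rules:
        if all(cond in completed_events for cond in rule["when"]):
            fired.append(rule["then"])
    return fired

# Hypothetical encoding of the example rule in the text: generate a static
# leaderboard update page once the leaderboard stream is acquired and archived.
LEADERBOARD_RULE = {
    "when": {("leaderboard_stream", "acquired"),
             ("leaderboard_stream", "archived")},
    "then": "generate_static_leaderboard_page",
}
```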
  • the process management 308 also provides an interface for creating, querying, and editing routes, process events, schedules, and rules. In one embodiment, the process management 308 also keeps a log of every completed event and the success or failure of its outcome.
  • the user management process 309 controls access by production users to the various processes within the studio.
  • the user management process 309 manages definitions of users, groups, and access levels. Based on these definitions, it responds to requests from the process management 308 to provide access credentials for particular studio activities.
  • the syndication process 310 allows third-party organizations (e.g., external media companies) access to assets within the studio.
  • individual assets and subscriptions can be offered, with e-commerce taking place based on those offers.
  • Processes of production 103 occur in a studio.
  • the studio contains a hierarchical or other type of arrangement of asset storage hardware and software (e.g., a database, robotic-type system, etc.).
  • the asset storage control system controls the flow of assets up and down within that hierarchy and determines how to route assets based on their types.
  • the asset storage system would direct data of differing types (e.g., telemetry vs. video) to appropriate storage types.
  • the asset storage system can also make intelligent decisions about asset migration, for example, based on the time since the asset was accessed, the relationship of the asset to current production activity (as determined by context analysis), the time-sensitivity of the asset, and/or an industry-standard algorithm (e.g., least recently used (LRU)).
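One plausible combination of those migration criteria is sketched below: skip time-sensitive assets, then order the rest least-recently-used first. The asset fields and the idle threshold are assumptions for illustration.

```python
def pick_migration_candidates(assets, now, idle_seconds=3600):
    """Select assets to migrate down the storage hierarchy (e.g., disk to tape):
    skip anything time-sensitive, then order the rest LRU-first."""
    idle = [a for a in assets
            if not a["time_sensitive"]
            and now - a["last_access"] > idle_seconds]
    return sorted(idle, key=lambda a: a["last_access"])
```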
  • each production 103 subsystem presents an application programming interface (API) for access by other subsystems.
  • the process management 308 controls the automated movement of assets and metadata through the studio. It manages routes, process events, schedules, and rules, defines a common process management API that studio subsystems support, and uses this API to invoke particular asset and metadata operations in response to event triggers.
  • the process management system may be tightly integrated with the monitoring and control system.
  • the content may be web content.
  • a web content publishing system streamlines the web content production process.
  • the web content publishing system may support file locking to prevent simultaneous updates by multiple users, version management, HTML link maintenance, specialized content verification triggers, incremental update generation, multiple staging areas, and automated content pushes.
  • Web sites may have special needs for content publishing and delivery.
  • the web site may need a graceful mechanism for dynamically updating the files being delivered by the site.
  • the web pages may need a robust, scalable infrastructure for delivering dynamic content (particularly content that is truly interactive, such as a multi-user on-line game).
  • the web content delivery system includes middleware and application software necessary to support these requirements.
  • the web content delivery system may be operated by a third party different from the content generator.
  • production 103 is able to create and distribute content in the form of incremental web site updates for ensuring incremental updates to live sites.
  • two versions of the data are maintained on one server. Each version is accessed through separate directories in the file system. While one version is being accessed by a server, the other version may be updated by a replication module that is capable of updating either file directory.
  • the directories are switched so that the server accesses the updated directory and allows the previously used directory to be updated. The directories need not be moved to implement the switch.
  • only the pointer used by the server to access a directory is changed. This ensures that the newest version is always available.
  • a version number is associated with each version to indicate which version is currently being stored. In such a case, the latest version available on all servers is the version that is used and made accessible.
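The two-directory pointer switch described above can be sketched as follows. This models the "pointer" as an attribute rather than a real file-system symlink; class and method names are illustrative assumptions.

```python
class LiveSite:
    """Two versions of the site on one server; only the active pointer moves."""
    def __init__(self):
        self.dirs = {"A": {}, "B": {}}  # two directories, one per version
        self.active = "A"               # the pointer the server follows

    def serve(self, path):
        return self.dirs[self.active].get(path)

    def standby(self):
        return "B" if self.active == "A" else "A"

    def replicate(self, files):
        # The replication module updates only the inactive directory.
        self.dirs[self.standby()].update(files)

    def switch(self):
        # No files move; only the pointer changes, so the newest
        # version is always available.
        self.active = self.standby()
```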
  • production 103 uses hardware such as specialized archive equipment (tape backup systems, video servers, etc.), production management servers, video encoders, and network equipment.
  • client immersion 106 includes a set-top unit 155 that receives the signals transmitted from distribution 204 for processing and display.
  • Figure 3 is a block diagram of one embodiment of set-top unit 155.
  • set-top unit 155 includes an IP tuner 320, broadcast tuner 330, central processing unit (CPU) 340, PDS receiver 350, a modem 370 and a graphics controller 375.
  • set-top box 155 also includes other components, such as, for example, one or more memories, dedicated and/or shared, that may be used by the processing components in set-top box 155.
  • IP tuner 320 receives the video-encoded data from distribution 204 and locks on to the PID carrying the IP venue data while filtering out all other signals.
  • IP tuner 320 may include an MPEG-2 decoder for decoding the received video signals.
  • the decoder may be external to tuner 320.
  • IP tuner 320 may also include a switch that enables a user to choose between multiple IP data streams transmitted, distributed and delivered to set-top unit 155. For example, a user may select between viewing venue data from the ice skating and bobsledding events described above.
  • the venue data is transmitted to CPU 340 for processing in the form of IP data.
  • CPU 340 is a processor in the Pentium® family of processors including the Pentium® III family of processors available from Intel Corporation of Santa Clara, California. Alternatively, other CPUs may be used.
  • various segments of the data may be transmitted to graphics controller 375 to prepare for display at display 380.
  • Broadcast tuner 330 receives the video encoded data from distribution 204 and locks on to the PID carrying the broadcast data. Broadcast tuner 330 may also include an MPEG-2 decoder for decoding the broadcast data before the data is transmitted to graphics controller 375 in order to prepare for display at display 380. Broadcast tuner 330 may also include a switch for selecting between multiple streams of video received from broadcast feed 110.
  • Graphics controller 375 controls the display of graphics and alphanumeric characters at display 380.
  • PDS receiver 350 receives PDS data from distribution 204 before and after events.
  • PDS receiver 350 receives packages of data transmitted from package delivery 120 and stores the data to a storage device located within set-top unit 155.
  • both the IP video data received at IP tuner 320 and the broadcast data may be displayed at display 380 simultaneously.
  • Figure 4 illustrates one embodiment of broadcast video and IP data from an auto race displayed at display 380.
  • IP data displayed at display 380 includes, for example, telemetry data from cars competing in the auto race, a real-time leaderboard and timing information for the cars in the race, track states (e.g., yellow flag) and advertisements.
  • IP video data is displayed at display 380 in one window, while the live broadcast video is displayed in a second window.
  • a user may make a selection at client immersion 106 to view information for each individual racer by selecting a hyperlink button by the racer's name on the leaderboard.
  • live video of the racer's car is displayed in the IP video window.
  • IP data regarding the racer's biographical information may be displayed above the IP video window. Further, the user may continue to view the entire racing field through the broadcast video window.
  • Modem 370 transmits and receives data from FTP server 115 ( Figure 1).
  • the platform may not be capable of transmitting all of the IP data generated at a venue to client immersion 106 at once due to limited bandwidth. Therefore, a user at set-top unit 155 may transmit a request to server 115 via modem 370 indicating which data to transmit. Server 115 subsequently relays the request to remote production 107, which responds by transmitting the desired data.
  • the user may choose to view the bobsledding or ice skating events by making and transmitting a selection to server 115 via modem 370.

Abstract

A method and apparatus for distributing one or more streams of data from a venue to a client (106) is described. In one embodiment, the method comprises capturing data from a remotely occurring event (101), converting the data into digital media assets (107), transmitting the digital media assets to a distribution mechanism (108), receiving broadcast data at the distribution mechanism, distributing the digital media assets to a client set-top device (155) as a first serialized bit stream and distributing the broadcast data to the client set-top device (155) as a second serialized bit stream.

Description

A SYSTEM FOR DISTRIBUTING AND DELIVERING MULTIPLE
STREAMS OF MULTIMEDIA DATA
This is a continuation-in-part application of a co-pending application entitled, "An Architecture for Controlling the Flow and Transformation of Multimedia Data", Application Serial No. 09/316,328, filed on May 21, 1999.
FIELD OF THE INVENTION
The present invention relates to the field of broadcasting events using multiple media; more particularly, the present invention relates to simultaneously distributing, delivering and presenting multiple streams of multimedia data.
BACKGROUND OF THE INVENTION
Television has been the traditional medium for broadcasting certain events, such as sporting events, to consumers. Consumers have long been limited to those broadcasts that the broadcasters believe are worthy of viewing. Moreover, even with respect to individual events, broadcasters decide which portions of a particular event to broadcast.
Individual viewers may wish to view events through a perspective that is different from that of the broadcasters. For instance, if a broadcaster is showing a sporting event on television, an individual viewer may wish or desire to follow an individual competitor, sponsor, etc. However, the individual viewer does not have control over the particular content that is broadcasted by the broadcaster and cannot indicate the content they desire to see as an event is being broadcast.
In addition, current broadcasting technologies are unable to organize and transmit the rich diversity of experiences and information sources that are available to participants or direct spectators at the event. For example, a live spectator or participant at a sporting event is able to simultaneously perceive a wide range of information, such as watching the event, listening to it, reading the program, noticing weather changes, hearing the roar of the crowd, reading the scoreboard, discussing the event with other spectators, and more. Thus, the spectator/participant is immersed in information relating to the event. Furthermore, a knowledgeable spectator knows how to direct his attention within this flood of information to maximize his experience of the event. The viewer of a television broadcast does not and cannot experience this information immersion. That is, television lacks the ability to emulate for the viewer the experience of attending or participating in the particular event.
In order to provide a viewer virtual emulation of the event (i.e., to enable an individual to be immersed in information relating to the event) while the viewer is not a live spectator, what is needed is a way to deliver this rich set of experiences and information (which may include multiple video, audio, text, image, data telemetry, biometrics, weather, chat, and other kinds of data and interactivity), so that individual viewers can simultaneously perceive and interact with many of them, freely directing their attention among them, and so that a production team can provide guidance that will help less knowledgeable spectators direct their attention to the information sources that provide the best experience of the event at any given moment.
Moreover, delivery of live broadcasts of an event in combination with the virtual emulation may be desirable in order to additionally provide an individual with a broadcaster's perspective of the event.
SUMMARY OF THE INVENTION
A method and apparatus for distributing one or more streams of data from a venue to a client is described. In one embodiment, the method comprises capturing data from a remotely occurring event, converting the data into digital media assets, transmitting the digital media assets to a distribution mechanism, receiving broadcast data at the distribution mechanism, distributing the digital media assets to the client set-top device as a first serialized bit stream and distributing the broadcast data to the client set-top device as a second serialized bit stream.
BRIEF DESCRIPTION OF THE DRAWINGS The present invention will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the invention, which, however, should not be taken to limit the invention to the specific embodiments, but are for explanation and understanding only.
Figure 1 illustrates one embodiment of a multimedia delivery platform;
Figure 1a is a block diagram of one embodiment of a transmission, distribution and delivery component of a platform;
Figure 1b is a block diagram of another embodiment of a transmission, distribution and delivery component of a platform;
Figure 2 is a block diagram of one embodiment of a remote production stage of the platform;
Figure 3 is a block diagram of one embodiment of a set-top device;
Figure 4 illustrates one embodiment of broadcast video and IP data displayed on a display;
Figure 5 illustrates subprocesses of one embodiment of production; and
Figure 6 illustrates a route allocated to a flow-through IP video stream.
DETAILED DESCRIPTION
A platform for distributing multiple streams of digital media data from events (e.g., sporting events) and/or other sources to end users is described. The digital media assets include assets (individual units of digital content) and metadata.
An asset may include, for example, material such as a photographic image, a video stream, timing data from an event (e.g., timing data from a race, timing data for a single car on a single lap within a race), the trajectory of a ball as it flies towards a goal, an HTML file, an electronic mail message (from a computer), etc.
Metadata is information about an asset, such as for example, its type (e.g., JPEG, MPEG, etc.), its author, its physical attributes (e.g., IP multicast addresses, storage locations, compression schemes, file formats, bitrates, etc.), its relationship to other assets (e.g., that a photograph was captured from a given frame of a particular video), the situation in which an end user accessed it (e.g., a hit), its heritage (e.g., other assets from which it was generated), its importance to the immersive experience, and its movement through the platform. In one embodiment, metadata may provide more abstract information, such as for example, the types of values available within a particular kind of telemetry stream, instructions generated by production and followed by immersion (e.g., to track certain kinds of user behavior, to automatically present certain assets to the user at certain times or in response to certain user actions, etc.), relationships between assets and other entities such as events, competitors, sponsors, etc. In one embodiment, the platform treats both assets and metadata as first-class objects, which are well known in the art.
Data flow and context management are critical to operation of the platform. Data flow refers to the transfer of information, while context management refers to controlling the relationship between data. In one embodiment, a context is a metadata structure that defines a set of assets (and/or other contexts). Contexts may be dynamically generated or may be stored for optimal access.
The platform controls the data flow from each event. The platform collects assets from a venue, produces an immersive experience and delivers the experience to end users (viewers). In one embodiment, the platform receives digital media assets (e.g., metadata and individual units of digital content) corresponding to the event and converts those assets into immersive content (i.e., context from and about an event in which users may immerse themselves).
In one embodiment, the conversion of digital media assets into immersive content is performed by using a context map (e.g., a graph structure), which organizes the digital media assets and indicates relationships between contexts associated with the digital media assets. The platform may maintain a hierarchical database of contexts. As part of using a context map, the platform tags digital media assets with global identifiers indicative of context information. In one embodiment, the global identifier is a persistent, location-independent, globally unique identifier to the digital media asset describing the event. The context map may be used at remote production (as described later). For more information on the use of context maps see the co-pending application entitled,
"An Architecture for Controlling the Flow and Transformation of Multimedia
Data", Application serial no.09/316.328. filed on May 21, 1999, assigned to the corporate assignee of the present invention and herein incorporated by reference.
The immersive content is distributed, delivered, and presented to end users (viewers). The immersive content enables an immersive experience to be obtained. This immersive experience is a virtual emulation of the experience of actually being present at or participating in an event, obtained by being subjected to the content that is available from and about the event.
In one embodiment, one or more live broadcast audio/video feeds corresponding to an event are combined with the digital media assets and transmitted to end users. Broadcast feed data streams are combined with the asset data streams at a distribution stage of the platform and are transmitted to end users. Upon being received, the streams of data are separated and presented to end users. Thus, the platform collects, transmits, produces and combines digital media assets with live broadcast data. Moreover, the platform distributes, delivers, and presents the digital media assets and the broadcast data. Each of these functions, or phases, may be implemented in hardware, software, or a combination of both. In alternative embodiments, some of these functions may be performed through human intervention with a user interface.
In the following description, numerous details are set forth. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
Some portions of the detailed descriptions which follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or
"computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present invention also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but is not limited to, any type of disk including floppy disks, optical disks,
CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
The instructions of the programming language(s) may be executed by one or more processing devices (e.g., processors, controllers, central processing units (CPUs), execution cores, etc.).
Figure 1 is a block diagram of one embodiment of a platform represented in terms of the following subprocesses or segments which it performs: collection 101, remote production 107, transmission-distribution-delivery (TDD) 108 and client immersion 106. Additional functional descriptions of the platform are discussed in further detail in the co-pending application entitled, "An Architecture for Controlling the Flow and Transformation of Multimedia Data", Application Serial No. 09/316,328, filed on May 21, 1999, assigned to the corporate assignee of the present invention and herein incorporated by reference.
Collection 101 generates streams and packages that are forwarded to remote production 107. A stream is a constant flow of real time data. A package is a bundle of files that is stored and forwarded as a single unit. A stream resembles a radio or television program, while a package resembles a letter or box sent through the mail (where the letter or box may contain video or audio recordings). Streams are often used to allow end users to view time-based content (e.g., real-time video data), while packages may be used for non-temporal content (e.g., graphics, still images taken at the event, snapshots of content that changes less rapidly than time-based content, such as a leaderboard, etc.). In one embodiment, the streams and packages being transmitted are formatted by collection 101 into a particular format.
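The distinction between streams and packages described above can be modeled as follows. This is a minimal sketch; the class names, fields, and sample data are illustrative assumptions, not the platform's actual formats:

```python
from dataclasses import dataclass, field
from typing import Iterator, List

@dataclass
class Package:
    """A bundle of files stored and forwarded as a single unit (non-temporal content)."""
    files: List[str] = field(default_factory=list)

def stream(samples) -> Iterator[dict]:
    """A stream is a constant flow of real-time data, modeled here as a generator
    that yields timestamped values."""
    for t, value in enumerate(samples):
        yield {"t": t, "value": value}

# Hypothetical examples: a leaderboard snapshot sent as a package,
# and a telemetry feed consumed as a stream.
leaderboard = Package(files=["leaderboard.html", "logo.jpg"])
telemetry = list(stream([180, 182, 179]))   # e.g., heart-rate samples
```

The key design difference this sketch highlights is that a package is stored and forwarded whole, while a stream is consumed incrementally as values arrive.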
The streams and packages flow through the platform from collection 101 through to client immersion 106. Collection 101 and remote production 107 exchange metadata in order to synchronize contextual information. From remote production 107 through delivery, streams, packages and metadata are transferred via TDD 108 to client immersion 106. Between TDD 108 and client immersion 106, the streams and packages may be converted into formats specific to the delivery technology (e.g., http responses, etc.). Client immersion 106 includes a set-top unit 155. Set-top unit 155 receives the streams and packages of data in order to enable a user to view an event and corresponding data. One of ordinary skill in the art will appreciate that set-top unit 155 could be implemented using any type of processing device or system.
Collection 101 is the process of capturing proprietary data at event venues and translating it into a format. That is, at its most upstream point, the platform interfaces to a variety of data acquisition systems in order to gather raw data from those systems and translate it into a predefined format. The format contains media assets, context and other metadata. In one embodiment, the format adds a global identifier and synchronization information that allows subsequent processing to coordinate content from different streams or packages to each other. The processes within collection 101 are able to control gathering and translation activities.
According to one embodiment, collection 101 may occur simultaneously at multiple sites of an event venue. Using the World Winter Olympics as an example, venue data may be collected at an ice skating event and a bobsledding event that are occurring at different sites at the same time. The data from both events may be forwarded to remote production 107.
In one embodiment, collection 101 converts raw venue data into digital media assets. The venue data may include both real-time and file-based media. This media may include traditional real-time media such as, for example, audio and video, real-time data collected directly from competitors (e.g., vehicle telemetry, biometrics, etc.), venue-side real-time data (e.g., timing, position, results, etc.), traditional file-based media (e.g., photographs, editorial text, commentary text, etc.), other file-based media (e.g., electronic mail messages sent by competitors, weather information, maps, etc.), and/or other software elements used by the client (e.g., visualization modules, user interface elements dynamically sent out to the client to view data in new ways, sponsor (advertising) contexts, view style sheets).
Sporting events can be characterized by the assets collected from the event. In one embodiment, these assets can include audio and video of the actual event, audio or video of individual competitors (e.g., a video feed from an in-car camera), timing and scoring information, editorial/commentary/analysis information taken before, during and/or after an event, photographs or images taken of and by the competitors, messages to and from the competitors (e.g., the radio links between the pit and the driver in an auto race), a data channel (e.g., a control panel readout taken from the device in a car), and telemetry indicating vital functions of a competitor. In one embodiment, the telemetry can include biometrics of the competitor (e.g., heart rate, body temperature, etc.). Other telemetry may include position information of a competitor (e.g., a player with a microchip indicating position) using, for example, a global positioning system (GPS), telemetry from an on-board computer, etc., or a physical device (e.g., an automobile).
Various devices may be used to perform the collection of sporting event data, or other data. For example, cameras may be used to collect video and audio. Microphones may be used to collect audio (e.g., audience reaction, participant reaction, sounds from the event, etc.). Sensors may be used to obtain telemetry and electronic information from humans and/or physical objects. The information captured by such devices may be transferred using wires or other conductors, fibers, cables, or using wireless communications, such as, for example, radio frequency (RF) or satellite transmission.
In one embodiment, collection 101 includes hardware, such as collection devices or data acquisition systems (e.g., cameras, microphones, recorders, sensors, etc.), communications equipment, encoding servers, production management server(s), and network equipment. In one embodiment, each of the collection devices converts the event data it captures into a format that includes the digital units of data, metadata and context information.
In an alternative embodiment, each of the collection devices sends the raw captured data to a location where a remote production unit, device, or system formats it. The formatting process of collection 101 is discussed in further detail in the co-pending Application Serial No. 09/316,328 entitled, "An Architecture for Controlling the Flow and Transformation of Multimedia Data", assigned to the corporate assignee of the present invention and incorporated herein by reference.
The production process may commence at the venue (i.e., a remote production). Remote production 107 includes a set of processes that are applied to digital media assets before they are distributed. Production is the process of managing an event from a particular point of view. In one embodiment, managing an event includes determining which assets will be collected and transferred from the event venue. Event management may include: defining, statically or dynamically, event-specific metadata based on global metadata received from production; dynamically controlling which assets are captured (using dynamic selection of information as event data is being collected), how they are formatted (e.g., adjusting a compression rate using a video encoder depending on contents of the video), and transmitted away from the event; managing physical resources (data collection hardware, communications paths and bandwidth, addresses, etc.) necessary to capture, format, transmit assets, etc.
Figure 2 is a block diagram of one embodiment of remote production 107. Remote production 107 includes a media manager 210 and an uplink module 220. Media manager 210 manages the production of the various venue data streams received at remote production 107. For example, media manager 210 coordinates the timing of the venue streams for archival and compression. The coordination of timing conducted at media manager 210 may be necessary since each stream of venue data collected may be received at collection 101 at different data rates. Subsequently, media manager 210 multiplexes the venue streams into uplink module 220.
Uplink module 220 includes an inserter 222 and a modulator 224.
According to one embodiment, inserter 222 is an Internet Protocol (IP) inserter that encapsulates the stream of IP formatted data received at uplink module 220 within a video-encoded format. In a further embodiment, inserter 222 is a SkyStream DBN-24™ integrator that formats IP streams according to the Moving Picture Experts Group 2 (MPEG-2) standard developed by the International Organization for Standardization (ISO). As a result, inserter 222 may create a Digital Video Broadcasting (DVB) compliant transport stream that can be carried over a variety of digital transmission systems. Modulator 224 merges the IP data stream into a carrier for transmission.
Remote production 107 may be performed at a Remote Production Facility (RPF). An RPF may be a studio at a central site where the digital media assets are produced before being distributed. In one embodiment, the studio is an Internet protocol (IP) studio. It is referred to as an IP studio because all, or some portion of, the digital media assets that are received at the studio are sent out using an industry-standard TCP/IP protocol suite throughout the rest of the segments (phases), and the assets are digital IP assets. The studio may not send and view the digital video assets or perform all operations using IP in alternative embodiments.
Remote production 107 may receive venue data collected from events that are occurring simultaneously, as described above. In one embodiment, each event is transmitted through TDD 108 to client immersion 106 via a single communications channel. Alternatively, the data from each event may be transmitted and distributed to client immersion 106 by multiple communications channels.
Referring back to Figure 1, TDD 108 provides for the transmission, distribution and delivery of data streams to client immersion 106. TDD 108 receives the IP venue stream data that is transmitted from remote production
107. In addition, TDD 108 may receive one or more live audio/video broadcast feeds 110 from a broadcast (e.g., television) network providing coverage of a venue. Moreover, TDD 108 is coupled to a package delivery system (PDS) 120 for receiving data from one or more World Wide Web (Web) sites.
According to one embodiment, the transmission and delivery of data streams at TDD 108 is implemented using satellite transmissions. However, one of ordinary skill in the art will appreciate that various other communication mechanisms may be used to implement the transmission and delivery of data streams at TDD 108.
Figure la is a block diagram of one embodiment of TDD 108. TDD 108 includes transmission 202, distribution 204 and delivery 205. Transmission 202 transmits specifically formatted streams and packages, including metadata, from event venues. In one embodiment, streams and packages are transferred via high speed IP networks. Depending on the event location, the IP networks may be terrestrial and/or satellite-based. As shown in Figure la, transmission 202 is implemented using satellite 125.
In other embodiments, however, a communication mechanism for the transmission of streams may be selected based on its ability to accommodate bandwidth management, while a communication mechanism for the transmission of packages may be selected based on its reliability. In either case, transmission 202 treats the specifically formatted assets as opaque entities. In other words, transmission 202 has no knowledge of what data is being transmitted, nor its format, so that the information is just raw data to transmission 202.
Depending on the underlying transport technology, transmission 202 may include dynamic network provisioning for individual sessions. That is, the network may dynamically allot more bandwidth to particular streams or packages based on priority. Data could be routed over links based on cost or time priorities. For example, transmission 202 may purchase transport bandwidth only as needed, while a terrestrial IP network is available at all times. Supplementary data might be routed over Internet virtual private networks while video might be sent over a satellite.
In one embodiment, transmission 202 may include a process that encrypts assets prior to transmission.
Distribution 204 is the process of transmitting streams and packages from the remote production studio, via high-speed IP networks, to delivery facilities and /or mechanisms. Distribution 204 may use a distribution network having multiple simultaneously transmitting channels. In one embodiment, broadband communications are used for transmission.
In one embodiment, the mechanism(s) used for distribution 204 may be selected based on the ability to accommodate bandwidth management, reliability, and/or other considerations.
Distribution 204 receives IP venue stream data from remote production 107 via satellite 125 encapsulated in the MPEG-2 format, as described above. In addition, distribution 204 may receive one or more streams of video data from broadcast feed 110 (Figure 1). In one embodiment, broadcast feed 110 represents video data transmitted from an entity or individual(s) providing coverage of the venue, such as from a television network. In one embodiment, broadcast feed 110 is a live feed from an event captured as the event is occurring. The broadcast feed 110 may be encoded in the same format as the IP venue stream (e.g., MPEG-2). However, the IP data stream is modulated at one satellite frequency (or program identifier (PID)), while the broadcast feed is modulated according to a second PID. Subsequently, distribution 204 may combine the digital media stream of data with the broadcast stream of data for delivery to client immersion 106.
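The PID-based combination and separation described above can be sketched as a toy model. The PID values and packet contents are arbitrary examples chosen for illustration, not the actual MPEG-2 transport-stream layout:

```python
# Hypothetical PID assignments for the two multiplexed streams.
VENUE_PID = 0x100       # IP venue data stream
BROADCAST_PID = 0x200   # live broadcast feed

def combine(venue_packets, broadcast_packets):
    """Distribution side: merge both streams into one transport stream,
    tagging each packet with its program identifier (PID)."""
    muxed = [(VENUE_PID, p) for p in venue_packets]
    muxed += [(BROADCAST_PID, p) for p in broadcast_packets]
    return muxed

def separate(transport_stream, pid):
    """Client side: recover one constituent stream by filtering on its PID."""
    return [payload for p, payload in transport_stream if p == pid]

ts = combine([b"ip1", b"ip2"], [b"tv1"])
```

This mirrors the text's point that the two feeds share one delivery path but remain independently recoverable because each is identified by its own PID.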
Further, distribution 204 may receive IP data from PDS 120 (Figure 1). According to one embodiment, PDS 120 includes a File Transfer Protocol (FTP) server that retrieves data from a HyperText Transfer Protocol (HTTP) server corresponding to a particular Web site. PDS 120 may include software that removes all HyperText Markup Language (HTML) format tags that make reference to the HTTP server. Subsequently, the data from PDS 120 is forwarded to distribution 204. In one embodiment, the data is forwarded to distribution 204 via dual Tl carrier lines. Nevertheless, one of ordinary skill in the art will appreciate that the data may be transmitted to distribution 204 using other types of communication mechanisms.
According to one embodiment, distribution 204 includes a switch for switching between the PDS data and the IP venue data for transmission to client immersion 106. During an event, distribution 204 transmits the IP venue data to client immersion 106 along with the broadcast data as described above. However, before and after an event, distribution 204 transmits the PDS data from a Web site to client immersion 106.
Distribution 204 transmits the IP and broadcast video data to client immersion 106 via delivery 205 according to the modulated PIDs described above. Delivery 205 makes immersive content available to client immersion 106. Numerous classes of delivery technology may be used, and multiple instances of each class may be used as well. In one embodiment, delivery 205 may employ satellite 135. However, in other embodiments, delivery 205 may employ cable, telcos, ISPs, television, radio, on-line and print.
Depending on the particular technology, delivery 205 may take the form of one-way broadcast or client-server requests, via low, medium, and high-bandwidth bi-directional networks. Also depending on the technology, delivery 205 may or may not involve translating streams and packages into other formats (e.g., extracting data from a telemetry stream and inserting it into a relational database). Each delivery provider may implement a proprietary content reception protocol on top of basic IP protocols.
Figure lb is another embodiment of TDD 108 including a production 103 segment. In one embodiment, the production 103 operates as a switch in which formatted content from multiple sources is received and sent out to multiple destinations. Numerous operations may be performed on the content, such as archiving, compression, editing, etc., as part of the switching process. In one embodiment, there is no hardwired connection between the operations and they may be performed on the pool of assets in general. Other production operations may be performed by production 103 such as, for example, laying out text and graphics, setting priorities for views (assets groups), creating associations between assets, etc.
In one embodiment, production 103 comprises the following processes: acquisition, asset storage, asset production, analysis, immersion production, metadata management, dissemination, process management, user management, distillation and syndication. In one embodiment, each of these processes operates in a manner decoupled from the others. These processes may be implemented as hardware and/or software modules. Figure 5 illustrates each of these processes.
Referring to Figure 5, the acquisition process 301 provides the interface between transmission 202 and production 103. Acquisition process 301 receives specifically formatted streams, packages, and metadata from collection 101 and remote production 107 and parses them into assets (units of digital data) and metadata. Metadata may come to the acquisition process 301 separately from digital media assets when the metadata cannot be attached to event data that has been captured. This may be the case with an NTSC-based stream of data, in which case the metadata may indicate that the stream is an NTSC stream.
In one embodiment, the acquisition process 301 provides an interface through which a number of operations may be performed. For instance, in one embodiment, the acquisition process 301 decrypts assets that had been encrypted for secure transmission, unpackages packages into their constituent parts, and parses metadata messages to determine their type and meaning.
The acquisition process 301 parses the metadata messages from the information received and forwards their contents to the metadata management process 306. After initially processing assets, the acquisition process 301 forwards them to the asset storage process 302. It also registers new assets with the metadata management process 306. The registration may be based on a context map that indicates what assets will be collected, and the tag on each asset (attached at collection). During acquisition, the tag tells the process what to do with the asset (e.g., store it for later use, pass it through unchanged, notify context management that the asset has been received, etc.).
Some assets, particularly streams, flow in and out of production 103 with as little latency as possible. In this case, the acquisition process 301 forwards the assets directly to the dissemination process 307. In one embodiment, flow-through assets are simultaneously forwarded from the acquisition process 301 to the asset storage process 302 and the dissemination process 307. This is shown in Figure 6.
The asset storage process 302 manages physical storage and retrieval of all assets. One or more storage media may be used. In one embodiment, a variety of storage technologies may be used, each suitable for certain types of assets.
In one embodiment, the asset storage process 302 is responsible for interacting with the appropriate storage technology based on asset types. A given asset type may be stored in multiple ways. For example, a telemetry stream may be stored as a flat file in memory and as a set of database records. In one embodiment, the asset storage process 302 is involved in storage, retrieval, removal, migration and versioning. Migration refers to moving assets up and down within a storage hierarchy. This movement may be between different storage technologies (e.g., between hard disk and tape). Migration may be performed to free up local or short-term storage. Versioning may be used to indicate an asset's current version (after changes to this asset have been made or have occurred). In one embodiment, every time the asset storage process 302 stores, removes, migrates, or versions an asset, it communicates with the metadata management process 306 to update the asset's physical location attributes, which the metadata management process 306 manages.
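Migration between storage tiers, including the LRU-style policy mentioned later in connection with the studio, can be sketched as follows. This is a minimal illustration under assumed names; the two-tier disk/tape model, the class, and its methods are inventions of this sketch, not the described asset storage process:

```python
import itertools

class AssetStorage:
    """Toy two-tier store: a fast tier of limited capacity and a slow tier.
    When the fast tier overflows, the least recently used asset migrates down."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.disk = {}            # asset_id -> last-access counter (fast tier)
        self.tape = set()         # asset_ids migrated to the slow tier
        self._clock = itertools.count()   # deterministic access counter

    def store(self, asset_id: str) -> None:
        self._touch(asset_id)

    def retrieve(self, asset_id: str) -> None:
        if asset_id in self.tape:         # migrate back up on access
            self.tape.discard(asset_id)
        self._touch(asset_id)

    def _touch(self, asset_id: str) -> None:
        self.disk[asset_id] = next(self._clock)
        while len(self.disk) > self.capacity:
            # Evict the least recently used asset to free local storage.
            lru = min(self.disk, key=self.disk.get)
            del self.disk[lru]
            self.tape.add(lru)

store = AssetStorage(capacity=2)
for asset_id in ("video-1", "photo-1", "telemetry-1"):
    store.store(asset_id)
```

A real implementation would also weigh the relationship of the asset to current production activity and its time-sensitivity, as the text notes; this sketch shows only the recency-based component.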
The asset production process 303 is the set of processes by which individual digital media assets are created and edited within production 103. The asset production process 303 is applied to most assets that have been acquired from collection 101. In addition, in one embodiment, in-house editorial and production staffs may create and edit their own assets during the asset production process 303. In one embodiment, asset production process 303 includes creation, editing, format conversion (e.g., Postscript to JPEG, etc.), and distillation.
A number of editing tools may be used. In one embodiment, the creation and editing processes are performed in cooperation with the asset storage process 302 and the metadata management process 306. This interaction may be automatic or manual. In one embodiment, assets are transferred from asset storage in order to be edited, and transferred back into asset storage after editing has been completed. In addition, the effects of production actions are communicated to the metadata management process 306. For example, the asset production process 303 notifies the metadata management process 306 that a JPEG asset was derived from a Postscript asset.
The distillation process creates multiple versions of an asset to support different kinds of delivery technologies (e.g., high, medium, and low-bandwidth web sites, one-way satellite data broadcast, interactive television, etc.). The distillation process is performed by assessing the capabilities of the delivery technology against the asset and type of data being transformed. Depending on the complexity of the differences, the distillation process may be more or less automated. In any case, in one embodiment, the distillation process takes into account many aspects of delivery, including, but not limited to, file format, and the number and kind of assets that will be included for a specific delivery platform.
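Distillation of one asset into versions matched to different delivery technologies could look roughly like the following. The profile names, bitrates, and formats are hypothetical values chosen for illustration only:

```python
# Hypothetical delivery profiles; names and limits are assumptions, not
# actual platform parameters.
DELIVERY_PROFILES = {
    "low_bandwidth_web":  {"max_kbps": 56,   "format": "JPEG"},
    "high_bandwidth_web": {"max_kbps": 1500, "format": "JPEG"},
    "interactive_tv":     {"max_kbps": 4000, "format": "MPEG-2"},
}

def distill(asset: dict) -> dict:
    """Create one version of the asset per delivery technology, capping the
    bitrate at what each delivery platform can accommodate."""
    versions = {}
    for name, profile in DELIVERY_PROFILES.items():
        versions[name] = {
            "source": asset["id"],
            "format": profile["format"],
            "bitrate_kbps": min(asset["bitrate_kbps"], profile["max_kbps"]),
        }
    return versions

versions = distill({"id": "a-042", "bitrate_kbps": 2000})
```

As the text notes, a full distillation process would also consider file format conversion and which assets to include at all for a given platform; this sketch shows only the bitrate-capping aspect.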
Immersion production process 305 attaches greater meaning to the assets that flow through production. In one embodiment, the immersion production process 305 initially creates HTML pages that reference and embed other assets. In one embodiment, the immersion production process 305 also creates and edits contexts, generates (manually or automatically) one kind of content based on another (e.g., highlight generation, running averages derived from telemetry values, specifically formatted metadata based on context management, etc.), generates production instructions directing immersion applications to automatically present certain information based on specific user actions, uses immersion applications to view content for quality control purposes, and defines the types of values available within particular stream types.
The metadata management process 306 manages metadata within production 103. Metadata may include many different types of data. In one embodiment, metadata includes asset attributes, production instructions and contexts. The production instructions may control the immersion applications based on activity of the user. In one embodiment, all types of metadata are first-class objects, thereby allowing easy transport between the platform segments. In one embodiment, every object has a unique identifier and a set of attributes. Unique identifiers are used to track objects and to relate them to other objects.
The metadata management process 306 creates and modifies attributes and contexts, logically locates objects by querying contexts (e.g., locate all streams belonging to Zinardi's car in a Long Beach Auto race), logically locates objects by querying asset attributes (e.g., locate all the JPEG assets whose author is "Emily Robertson"), physically locates objects by tracking their movements in the form of asset attributes (e.g., version #5 of asset #456 has been stored in the file named "foo.JPG.5" on the file server named "file_server_l").
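The attribute queries described above (e.g., locating all JPEG assets by a given author) can be sketched against a toy in-memory catalog. The catalog contents and the `query` helper are assumptions for illustration; the actual metadata store is unspecified:

```python
# Toy metadata catalog standing in for the metadata management store.
CATALOG = [
    {"id": 123, "type": "JPEG", "author": "Emily Robertson", "context": "long_beach"},
    {"id": 456, "type": "telemetry", "author": None, "context": "long_beach",
     "location": {"version": 5, "file": "foo.JPG.5", "server": "file_server_1"}},
    {"id": 789, "type": "JPEG", "author": "Other", "context": "monaco"},
]

def query(**attrs):
    """Logically locate objects whose attributes match every given key/value."""
    return [a for a in CATALOG if all(a.get(k) == v for k, v in attrs.items())]
```

For example, `query(type="JPEG", author="Emily Robertson")` locates assets by attribute, while `query(context="long_beach")` locates them by context membership; the `location` attribute on asset 456 models the physical-location tracking mentioned in the text.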
The dissemination process 307 provides the interface between production 103 and distribution 204. To facilitate this interface, the dissemination process 307 is configured to communicate with individual distribution channels. The dissemination process 307 communicates with the asset storage process 302 to retrieve assets and with the metadata management process 306 to retrieve metadata. The dissemination process 307 also communicates directly with the acquisition process 301 in the case of flow-through streams. In one embodiment, the dissemination process 307 provides an interface for a number of operations. In one embodiment, the dissemination process 307 provides an interface that constructs messages out of metadata, packages assets and metadata into packages, optionally encrypts data for secure distribution, and logs departure times for all streams and packages.
When the studio generates content for distribution, the dissemination process 307 sends the digital media assets to various delivery head ends. In one embodiment, the type of data that is distributed to different types of devices is dependent on the device, and the dissemination process 307 controls which streams and packages of data are forwarded to the delivery head ends. In one embodiment, a device such as a Personal Digital Assistant (PDA) will only be sent data that it is capable of displaying. Similarly, an HDTV device will only be sent data that it is capable of displaying. In an alternative embodiment, all data that is available is forwarded to the device and the device makes a determination as to whether it can or cannot display some or all of the information. In one embodiment, to notify a particular device of the type of data that is being sent to it, the studio distributes a control stream. In one embodiment, this control stream is the context map. That is, the context map is sent to the end user devices. In an alternative embodiment, only a portion of the context map that specifically deals with the event being captured is forwarded to the device to indicate what types of digital media assets are being forwarded.
Based on the information in the control stream, the end user devices may determine what information is being sent to it and may determine what to view.
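Capability-based filtering of what each device class receives could be sketched as follows. The device names and asset types in the capability table are hypothetical examples, not the platform's actual device taxonomy:

```python
# Hypothetical capability table mapping device classes to displayable types.
DEVICE_CAPABILITIES = {
    "PDA":  {"text", "JPEG"},
    "HDTV": {"text", "JPEG", "MPEG-2"},
}

def select_for_device(assets, device):
    """Dissemination-side filtering: forward only assets the device can display."""
    capabilities = DEVICE_CAPABILITIES[device]
    return [a for a in assets if a["type"] in capabilities]

assets = [
    {"id": 1, "type": "JPEG"},
    {"id": 2, "type": "MPEG-2"},
]
```

In the alternative embodiment described in the text, the same filter would instead run on the device itself against everything it receives; the logic is identical, only its location changes.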
The process management 308 is a process that controls the automation of other production processes. The process management 308 uses several types of objects to control asset switching (routing). In one embodiment, these types of objects include routes, process events, schedules and rules. A route is a mapping between a set of processes and a set of physical (e.g., hardware, software, and network) resources. For example, Figure 6 illustrates a simple route allocated to a flow-through IP video stream. Referring to Figure 6, a stream is received from an incoming network 401 and undergoes acquisition via the acquisition process 301. From acquisition, the stream is forwarded to both the asset storage process 302, which stores the video stream and makes it accessible on a video server 403, and the dissemination process 307, where the video stream is disseminated to an outgoing network 402.
A process event is the application of a given route to a particular asset or group of assets at a specific time. A schedule is the set of times at which a process event occurs. A rule is a logical constraint that determines when an event occurs. For example, a rule might state that a static leaderboard update page should be generated whenever a leaderboard stream has been acquired and archived. By using these objects, the assets may be managed, including indicating what information is to be shown.
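The four object types above can be sketched in code. This is a minimal illustration under assumptions — the class and field names are invented here, and the rule shown is the leaderboard example from the text, not an API defined by the patent.

```python
# Illustrative sketch (names assumed) of the four routing objects:
# a route maps processes onto resources, a process event applies a
# route to assets at a time, a schedule collects event times, and a
# rule is a predicate that gates when an event fires.

from dataclasses import dataclass, field

@dataclass
class Route:
    name: str
    processes: list   # e.g. ["acquisition", "asset_storage", "dissemination"]
    resources: list   # e.g. ["incoming_net_401", "video_server_403", "outgoing_net_402"]

@dataclass
class ProcessEvent:
    route: Route
    assets: list
    time: float

@dataclass
class Schedule:
    times: list = field(default_factory=list)

def leaderboard_rule(asset_state):
    """Rule from the text: generate the static leaderboard update page
    once the leaderboard stream has been both acquired and archived."""
    return bool(asset_state.get("acquired")) and bool(asset_state.get("archived"))

video_route = Route("flow_through_ip_video",
                    ["acquisition", "asset_storage", "dissemination"],
                    ["incoming_net_401", "video_server_403", "outgoing_net_402"])

print(leaderboard_rule({"acquired": True, "archived": True}))   # fires
print(leaderboard_rule({"acquired": True, "archived": False}))  # does not fire
```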
The process management 308 also provides an interface for creating, querying, and editing routes, process events, schedules, and rules. In one embodiment, the process management 308 also keeps a log of every completed event and the success or failure of its outcome.
The user management process 309 controls access by production users to the various processes within the studio. In one embodiment, the user management process 309 manages definitions of users, groups, and access levels. Based on these definitions, it responds to requests from the process management 308 to provide access credentials for particular studio activities.
The syndication process 310 allows 3rd-party organizations (e.g., external media companies) access to assets within the studio. In one embodiment, individual assets and subscriptions can be offered, with e-commerce taking place based on those offers.
Processes of production 103 occur in a studio. The studio contains a hierarchical or other type of arrangement of asset storage hardware and software (e.g., a database, a robotic-type system, etc.). The asset storage control system controls the flow of assets up and down within that hierarchy and determines how to route assets based on their types. In a live environment, the asset storage system would direct data of differing types (e.g., telemetry vs. video) to appropriate storage types. The asset storage system can also make intelligent decisions about asset migration based on, for example, the time since the asset was last accessed, the relationship of the asset to current production activity (as determined by context analysis), the time-sensitivity of the asset, and/or an industry-standard algorithm (e.g., least recently used (LRU)). As the production evolves, the asset system might choose to push some (previously used) data into off-line storage (e.g., HSM). Various subsystems within production 103 interact with one another (for example, immersion production and metadata management). In one embodiment, each production 103 subsystem presents an application programming interface (API) for access by other subsystems. The process management 308 controls the automated movement of assets and metadata through the studio. It manages routes, process events, schedules, and rules; defines a common process management API that studio subsystems support; and uses this API to invoke particular asset and metadata operations in response to event triggers. The process management system may be tightly integrated with the monitoring and control system.
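The migration decision described above can be sketched as a simple policy combining an LRU-style idle criterion with a time-sensitivity flag. This is a hypothetical sketch: the field names and the one-hour threshold are assumptions, not values from the patent.

```python
# Minimal sketch (assumed names and threshold) of the asset-migration
# policy: assets that are not time-sensitive and have been idle longer
# than a threshold become candidates for off-line (HSM) storage.

def assets_to_migrate(assets, now, idle_threshold):
    """Return the names of assets eligible for off-line migration."""
    return [
        a["name"]
        for a in assets
        if not a["time_sensitive"] and (now - a["last_access"]) > idle_threshold
    ]

now = 1_000_000.0
assets = [
    # Previously used footage, idle for two hours: migration candidate.
    {"name": "old_interview", "last_access": now - 7200, "time_sensitive": False},
    # Idle but time-sensitive (tied to current production): keep on-line.
    {"name": "live_telemetry", "last_access": now - 7200, "time_sensitive": True},
    # Recently accessed: keep on-line.
    {"name": "current_leaderboard", "last_access": now - 10, "time_sensitive": False},
]

print(assets_to_migrate(assets, now, idle_threshold=3600))
```

A production system would likely weigh these signals together (context analysis, access recency, sensitivity) rather than apply them as hard filters; the sketch shows only the shape of the decision.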
The result of the processes described above in conjunction with Figure 5 is the production of content. In one embodiment, the content may be web content. In one embodiment, a web content publishing system streamlines the web content production process. The web content publishing system may support file locking to prevent simultaneous updates by multiple users, version management, HTML link maintenance, specialized content verification triggers, incremental update generation, multiple staging areas, and automated content pushes. Web sites may have special needs for content publishing and delivery. First, the site may need a graceful mechanism for dynamically updating the files being delivered by the site. Second, the site may need a robust, scalable infrastructure for delivering dynamic content (particularly content that is truly interactive, such as a multi-user on-line game). In one embodiment, the web content delivery system includes the middleware and application software necessary to support these requirements. The web content delivery system may be operated by a third party different from the content generator.
In one embodiment, production 103 is able to create and distribute content in the form of incremental web site updates to live sites. To perform incremental updates to the live sites, two versions of the data are maintained on one server. Each version is accessed through a separate directory in the file system. While one version is being accessed by a server, the other version may be updated by a replication module that is capable of updating either file directory. When updates for all the storage locations have been completed, the directories are switched so that the server accesses the updated directory and allows the previously used directory to be updated. The directories need not be moved to implement the switch. In alternative embodiments, only the pointer used by the server to access a directory is changed. This ensures that the newest version is always available. In one embodiment, a version number is associated with each version to indicate which version is currently being stored. In such a case, the latest version available on all servers is the version that is used and made accessible.
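The two-directory scheme with a switched pointer can be sketched using a symbolic link as the pointer, swapped with an atomic rename so readers never see a partial update. This is an illustrative sketch assuming a POSIX file system; the patent does not specify symlinks, only that the pointer (not the directories) is changed.

```python
# Sketch of the two-directory incremental-update scheme: the server
# reads through a "live" pointer (here a symlink) while the inactive
# directory is updated; the pointer is then switched atomically.
# Paths are illustrative; assumes a POSIX file system.

import os
import tempfile

root = tempfile.mkdtemp()
dirs = [os.path.join(root, "v0"), os.path.join(root, "v1")]
for d in dirs:
    os.mkdir(d)

live = os.path.join(root, "live")
os.symlink(dirs[0], live)  # the server initially reads version 0

def switch_live(live_link, new_target):
    """Repoint the live link without moving any directory: create the
    new link under a temporary name, then rename it over the old one
    (rename atomically replaces the destination on POSIX systems)."""
    tmp = live_link + ".tmp"
    os.symlink(new_target, tmp)
    os.replace(tmp, live_link)

# Replication module updates the inactive directory, then switches.
with open(os.path.join(dirs[1], "index.html"), "w") as f:
    f.write("updated")
switch_live(live, dirs[1])

print(os.readlink(live))  # now points at the updated directory
```

After the switch, the previously live directory (v0) becomes the inactive one and receives the next round of updates, alternating indefinitely.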
In one embodiment, production 103 uses hardware such as specialized archive equipment (tape backup systems, video servers, etc.), production management servers, video encoders, and network equipment.
Referring back to Figure 1, client immersion 106 includes a set-top unit 155 that receives the signals transmitted from distribution 204 for processing and display. Figure 3 is a block diagram of one embodiment of set-top unit 155. Referring to Figure 3, set-top unit 155 includes an IP tuner 320, broadcast tuner 330, central processing unit (CPU) 340, PDS receiver 350, a modem 370 and a graphics controller 375. Although not shown, set-top unit 155 also includes other components, such as, for example, one or more memories, dedicated and/or shared, that may be used by the processing components in set-top unit 155.
IP tuner 320 receives the video-encoded data from distribution 204 and locks on to the PID carrying the IP venue data while filtering out all other signals. According to one embodiment, IP tuner 320 may include an MPEG-2 decoder for decoding the received video signals. In other embodiments, the decoder may be external to tuner 320.
IP tuner 320 may also include a switch that enables a user to choose between multiple IP data streams transmitted, distributed and delivered to set-top unit 155. For example, a user may select between viewing venue data from the ice skating and bobsledding events described above.
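The tuner behavior described above — locking on to one packet identifier (PID) and filtering out all others — can be sketched as follows. Real MPEG-2 transport-stream packets are 188-byte structures; here they are modeled as simple (PID, payload) pairs, and the PID values are invented for illustration.

```python
# Hypothetical sketch of PID filtering in the IP tuner: keep only the
# packets whose PID matches the tuned stream and discard the rest.
# Packets are modeled as (pid, payload) pairs; PIDs are illustrative.

def filter_pid(packets, wanted_pid):
    """Yield payloads of packets carried on the tuned PID only."""
    for pid, payload in packets:
        if pid == wanted_pid:
            yield payload

packets = [
    (0x100, b"ice skating data"),
    (0x200, b"bobsled data"),
    (0x100, b"more ice skating"),
]

# The user's switch selects which PID to tune (0x100 = ice skating here);
# retuning to 0x200 would deliver the bobsled stream instead.
print(list(filter_pid(packets, 0x100)))
```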
Subsequent to decoding, the venue data is transmitted to CPU 340 for processing in the form of IP data. In one embodiment, CPU 340 is a processor in the Pentium® family of processors including the Pentium® III family of processors available from Intel Corporation of Santa Clara, California. Alternatively, other CPUs may be used. After the IP data is processed at CPU 340, various segments of the data may be transmitted to graphics controller 375 to prepare for display at display 380.
Broadcast tuner 330 receives the video encoded data from distribution 204 and locks on to the PID carrying the broadcast data. Broadcast tuner 330 may also include an MPEG-2 decoder for decoding the broadcast data before the data is transmitted to graphics controller 375 in order to prepare for display at display 380. Broadcast tuner 330 may also include a switch for selecting between multiple streams of video received from broadcast feed 110.
Graphics controller 375 controls the display of graphics and alphanumeric characters at display 380. PDS receiver 350 receives PDS data from distribution 204 before and after events. PDS receiver 350 receives packages of data transmitted from package delivery 120 and stores the data to a storage device located within set-top unit 155.
According to one embodiment, both the IP video data received at IP tuner 320 and the broadcast data may be displayed at display 380 simultaneously. Figure 4 illustrates one embodiment of broadcast video and IP data from an auto race displayed at display 380. IP data displayed at display 380 includes, for example, telemetry data from cars competing in the auto race, a real-time leaderboard and timing information for the cars in the race, track states (e.g., yellow flag) and advertisements. In addition, IP video data is displayed at display 380 in one window, while the live broadcast video is displayed in a second window.
According to one embodiment, a user may make a selection at client immersion 106 to view information for each individual racer by selecting a hyperlink button by the racer's name on the leaderboard. Upon selecting a particular racer to preview, live video of the racer's car is displayed in the IP video window. In addition, IP data regarding the racer's biographical information may be displayed above the IP video window. Further, the user may continue to view the entire racing field through the broadcast video window.
Modem 370 transmits and receives data from FTP server 115 (Figure 1). In some embodiments, the platform may not be capable of transmitting all of the IP data generated at a venue to client immersion 106 at once due to limited bandwidth. Therefore, a user at set-top unit 155 may transmit a request to server 115 via modem 370 indicating which data to transmit. Server 115 subsequently relays the request to remote production 107, which responds by transmitting the desired data. Using the Olympic example discussed above, the user may choose to view the bobsledding or ice skating events by making and transmitting a selection to server 115 via modem 370.
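The bandwidth-constrained request path above can be sketched as a pair of functions: the set-top unit composes a selection, and the server side relays back only the requested venue's streams. All names, keys, and venue identifiers here are assumptions introduced for illustration.

```python
# Illustrative sketch (all names assumed) of the request path: with
# limited bandwidth, the set-top unit asks for one venue's data rather
# than receiving every venue's IP data at once.

def build_request(user_id, venue):
    """Compose the selection the set-top unit sends via its modem."""
    return {"user": user_id, "venue": venue, "action": "send_venue_data"}

def serve_request(request, venue_streams):
    """Server-side relay: return only the requested venue's streams
    (standing in for the relay to remote production)."""
    return venue_streams.get(request["venue"], [])

venue_streams = {
    "bobsled": ["bobsled_video", "bobsled_timing"],
    "ice_skating": ["skating_video", "skating_scores"],
}

req = build_request("user42", "bobsled")
print(serve_request(req, venue_streams))
```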
Whereas many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that any particular embodiment shown and described by way of illustration is in no way intended to be considered limiting. Therefore, references to details of various embodiments are not intended to limit the scope of the claims which in themselves recite only those features regarded as essential to the invention.
Thus, a platform for distributing and presenting multiple streams of digital media data from events and/or other sources to end users has been described.

Claims

I claim:
1. An architecture comprising: a first set of collection devices to capture data from a first remotely occurring event; a remote production unit coupled to the first set of collection devices to convert the data into a first set of digital media assets; at least one communication mechanism to transmit the first set of digital media assets; and a distribution network coupled to the remote production unit and the at least one communication mechanism to distribute the first set of digital media assets as a first serialized bit stream and coupled to receive a first broadcast feed of the event to distribute digital broadcast signals representing the broadcast feed as a second serialized bit stream.
2. The architecture defined in Claim 1 wherein the digital media assets are transmitted from the plurality of collection devices to the distribution mechanism using an industry standard protocol.
3. The architecture defined in Claim 2 wherein the industry standard protocol comprises an IP protocol.
4. The architecture defined in Claim 1 further comprising a set-top device in communication with the distribution mechanism, wherein the set-top device further comprises: a first digital tuner for receiving the first serialized bit stream; a second digital tuner for receiving the second serialized bit stream; and a modem.
5. The architecture defined in Claim 4 wherein the digital media assets and the digital broadcast signals are transmitted from the distribution mechanism to the set-top device using a Motion Picture Expert Group 2 (MPEG-2) standard.
6. The architecture defined in Claim 4 further comprising: a second set of collection devices to capture data from a second remotely occurring event, wherein the remote production unit is coupled to the second set of collection devices to convert the data into a second set of digital media assets and wherein the distribution network distributes the second set of digital media assets as a third serialized bit stream.
7. The architecture defined in Claim 6 wherein the distribution network is coupled to a second broadcast feed of the event to distribute digital broadcast signals as a fourth serialized bit stream.
8. The architecture defined in Claim 7 wherein the first digital tuner includes a first switch for selecting between the first serialized bit stream and the third serialized bit stream, and the second digital tuner includes a second switch for selecting between the second serialized bit stream and the fourth serialized bit stream.
9. The architecture defined in Claim 4 further comprising: a server in communication with the modem and the remote production unit.
10. The architecture defined in Claim 1 further comprising a server in communication with the distribution mechanism, wherein the distribution mechanism receives network files from the server to distribute as a third serialized bit stream.
11. The architecture defined in Claim 10 wherein the distribution mechanism includes a switch for selecting between distribution of the first serialized bit stream and the third serialized bit stream.
12. The architecture defined in Claim 4 further comprising a display unit coupled to the first and second digital tuners, wherein data corresponding with the first and second serialized bit streams are displayed on the display unit.
13. The architecture defined in Claim 1 wherein the remote production unit comprises: a media manager; and an uplink module.
14. The architecture defined in Claim 13 wherein the uplink module comprises: an inserter for converting the digital media assets into the first serialized bit stream; and a modulator for modulating the first serialized bit stream onto a carrier frequency.
15. The architecture defined in Claim 1 further comprising a production unit coupled to the remote production unit and the distribution network.
16. A method of depicting an event comprising: capturing data from a remotely occurring event; converting the data into a first set of digital media assets; transmitting the first set of digital media assets; receiving a first set of broadcast data; distributing the first set of digital media assets as a first serialized bit stream; and distributing the first set of broadcast data as a second serialized bit stream, wherein the first serialized bit stream and the second serialized bit stream are distributed via the same communication mechanism.
17. The method defined in Claim 16 further comprising: receiving the first serialized bit stream at a first digital tuner; and receiving the second serialized bit stream at a second digital tuner.
18. The method defined in Claim 17 further comprising: decoding the second serialized bit stream; and transmitting data corresponding with the second serialized bit stream to a display.
19. The method defined in Claim 18 further comprising: decoding the first serialized bit stream; transmitting decoded data corresponding with the first serialized bit stream to a central processing unit (CPU); processing the data corresponding with the first serialized bit stream; and transmitting the data corresponding with the first serialized bit stream to a display.
20. The method defined in Claim 16 wherein the data corresponding to the first serialized bit stream is displayed in a first window at the display and the data corresponding to the second serialized bit stream is displayed in a second window at the display.
21. The method defined in Claim 17 further comprising: capturing data from a second remotely occurring event; converting the data into a second set of digital media assets; transmitting the second set of digital media assets; receiving a second set of broadcast data from the second remotely occurring event; distributing the second set of digital media assets as a third serialized bit stream; and distributing the second set of broadcast data as a fourth serialized bit stream.
22. The method defined in Claim 21 further comprising: receiving the third serialized bit stream at the first digital tuner; and receiving the fourth serialized bit stream at the second digital tuner.
23. The method defined in Claim 22 further comprising: decoding the second serialized bit stream; decoding the fourth serialized bit stream; selecting between the second and fourth serialized bit stream; and transmitting data corresponding with the selected bit stream to a display.
24. The method defined in Claim 23 further comprising: decoding the first serialized bit stream and the third serialized bit stream; transmitting decoded data corresponding with the first and third serialized bit streams to a central processing unit (CPU); processing the data corresponding with the first and third serialized bit streams; selecting between the first and third serialized bit streams; and transmitting the data corresponding with the selected serialized bit stream to a display.
25. A set-top device comprising: a first digital tuner for receiving a first serialized bit stream; and a second digital tuner for receiving a second serialized bit stream, wherein the first and second serialized bit streams are received via the same communication mechanism.
26. The set-top device defined in Claim 25 further comprising a display unit coupled to the first and second digital tuners, wherein data corresponding with the first and second serialized bit streams are displayed on the display unit.
27. The set-top device defined in Claim 25 further comprising a central processing unit (CPU) coupled to the first digital tuner for processing the first serialized bit stream.
28. The set-top device defined in Claim 25 further comprising a modem.
PCT/US2000/040851 1999-09-10 2000-09-07 A system for distributing and delivering multiple streams of multimedia data WO2001019079A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU12505/01A AU1250501A (en) 1999-09-10 2000-09-07 A system for distributing and delivering multiple streams of multimedia data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US39412499A 1999-09-10 1999-09-10
US09/394,124 1999-09-10

Publications (2)

Publication Number Publication Date
WO2001019079A1 true WO2001019079A1 (en) 2001-03-15
WO2001019079A9 WO2001019079A9 (en) 2002-08-08

Family

ID=23557661

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/040851 WO2001019079A1 (en) 1999-09-10 2000-09-07 A system for distributing and delivering multiple streams of multimedia data

Country Status (2)

Country Link
AU (1) AU1250501A (en)
WO (1) WO2001019079A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5491517A (en) * 1994-03-14 1996-02-13 Scitex America Corporation System for implanting an image into a video stream
US5634849A (en) * 1993-01-11 1997-06-03 Abecassis; Max Content-on-demand interactive video method and apparatus
US5912700A (en) * 1996-01-10 1999-06-15 Fox Sports Productions, Inc. System for enhancing the television presentation of an object at a sporting event
US5929849A (en) * 1996-05-02 1999-07-27 Phoenix Technologies, Ltd. Integration of dynamic universal resource locators with television presentations
US5953077A (en) * 1997-01-17 1999-09-14 Fox Sports Productions, Inc. System for displaying an object that is not visible to a camera

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006085205A1 (en) * 2005-02-14 2006-08-17 William Mutual A system for managing bandwidth
EP2434772A1 (en) * 2010-09-22 2012-03-28 Thomson Licensing Method for navigation in a panoramic scene
US9232257B2 (en) 2010-09-22 2016-01-05 Thomson Licensing Method for navigation in a panoramic scene
US9191429B2 (en) 2012-07-13 2015-11-17 Qualcomm Incorporated Dynamic resolution of content references for streaming media
US10419796B2 (en) 2017-03-02 2019-09-17 The Directv Group, Inc. Broadband backup to satellite-based set-top boxes

Also Published As

Publication number Publication date
AU1250501A (en) 2001-04-10
WO2001019079A9 (en) 2002-08-08

Similar Documents

Publication Publication Date Title
US7506355B2 (en) Tracking end-user content viewing and navigation
JP6346859B2 (en) Receiving device, receiving method, transmitting device, and transmitting method
CN1819559B (en) Multicast distribution of streaming multimedia content
US9100547B2 (en) Accessing broadcast media
EP1110394B1 (en) Simulating two way connectivity for one way data streams for multiple parties
EP1415473B1 (en) On-demand interactive magazine
US20020108115A1 (en) News and other information delivery system and method
EP1024661A2 (en) Pictographic electronic program guide
US20090070324A1 (en) Related information transmission method, related information transmission server, terminal apparatus and related information transmission system
JP2000224257A (en) Transmitter and receiver
CA2506448A1 (en) Strategies for pausing and resuming the presentation of programs
US11025982B2 (en) System and method for synchronizing content and data for customized display
EP1535460A2 (en) Information platform
WO2000072574A2 (en) An architecture for controlling the flow and transformation of multimedia data
US20150296005A1 (en) Receiving device, receiving method, transmission device, transmission method, and program
CN1817020B (en) Method of broadcasting multimedia content via a distribution network
US20020019978A1 (en) Video enhanced electronic commerce systems and methods
US20020199197A1 (en) System for exchanging data
JPH1153441A (en) Information processing method
WO2001019079A1 (en) A system for distributing and delivering multiple streams of multimedia data
JP2000083233A (en) Authentication device and method and system therefor and storage medium
EP0929974A1 (en) Multimedia information transmission and distribution system
EP1971144A1 (en) Method, software and installation for the creation and distribution of personalized internet TV-channels
JP2004236240A (en) Network broadcast system, content distributing method, and program providing apparatus

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: C2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

COP Corrected version of pamphlet

Free format text: PAGES 1/8-8/8, DRAWINGS, REPLACED BY NEW PAGES 1/8-8/8; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP