

Publication number: US20120166951 A1
Application number: US 13/192,266
Publication date: Jun 28, 2012
Filing date: Jul 27, 2011
Priority date: Oct 31, 2007
Also published as: US8798436, US9454994, US20090110362, US20140301713
Inventors: Ryan Steelberg, Chad Steelberg
Original assignee: Ryan Steelberg, Chad Steelberg
External links: USPTO, USPTO assignment, Espacenet
Video-Related Meta Data Engine System and Method
US 20120166951 A1
An engine for use with a display is disclosed. The engine includes a presenter that presents at least two audio/visual works; at least one software application capable of at least one metadata-related interaction with the at least two audio/visual works; communication points over which the audio/visual works are received, and over which at least a portion of the at least one metadata-related interaction occurs; and a hierarchical taxonomy that effects a common metadata reference to each recurrence of a particular object across the audio/visual works and across each of the at least one metadata-related interaction.
1. An engine for use with a display, comprising:
a presenter, provided by computing code embodied in a tangible medium that, upon interaction with a computer processor, presents an audio/visual work;
a software application, instantiated by second computing code executed by the computer processor and received across a computing network, that is capable of at least one metadata-related interaction with the at least two audio/visual works, said metadata-related interaction including providing for a click on an object in the audio/visual work to display more information, including at least a price, regarding said object; and
at least two communication points over which the audio/visual works are received, and over which at least a portion of the at least one metadata-related interaction occurs.
2. The engine of claim 1, wherein said software application is at least partially remote from the presenter.
3. The engine of claim 1, wherein the at least one metadata-related interaction comprises interaction with remotely generated metadata associated with the audio/visual work.
4. The engine of claim 1, wherein the at least one metadata-related interaction comprises interaction with locally generated metadata for association with the audio/visual work.
5. The engine of claim 1, further comprising a prioritization filter.
  • [0001]
    This application claims priority under 35 U.S.C. § 120 as a continuation of U.S. patent application Ser. No. 12/592,737, entitled “Video-Related Meta Data Engine System and Method,” filed Dec. 2, 2009, which is a continuation of U.S. patent application Ser. No. 11/981,838, entitled “Video-Related Meta Data Engine System and Method,” filed Oct. 31, 2007; and likewise claims priority under 35 U.S.C. § 120 to U.S. patent application Ser. No. 11/981,838, entitled “Video-Related Meta Data Engine System and Method,” filed Oct. 31, 2007; the disclosures of which are incorporated by reference herein as if set forth in their respective entireties.
  • [0002]
    The present invention is directed to video and metadata and, more particularly, to a video-related metadata engine, system and method.
  • [0003]
    Present endeavors to create applications that operate on metadata associated with audio/visual works suffer from the drawback that each application can be applied to only certain audio/visual works, in part because the application provider and the metadata provider must agree on the terminology used in the metadata to allow for operation by the application.
  • [0004]
    Thus, there exists a need for a video engine interoperable with a common nomenclature across all applications and audio/visual works, thereby allowing for standardized interaction between any application and any audio/visual work.
  • [0005]
    The present invention includes at least a video engine, system and method for use with a video player, including a presenter that presents at least two audio/visual works, at least one software application capable of at least one metadata-related interaction with the audio/visual works, communication points over which the audio/visual works are received, and over which at least a portion of the at least one metadata-related interaction occurs, and a hierarchical taxonomy that effects a common metadata reference to each recurrence of a particular object across the audio/visual works, and across each of the at least one metadata-related interaction. The video engine, system, and method may additionally include prioritization data for use with the metadata.
  • [0006]
    Thus, the present invention provides a video engine interoperable with a common nomenclature across all applications and audio/visual works, thereby allowing for standardized interaction between any application and any audio/visual work.
  • [0007]
    Understanding of the present invention will be facilitated by consideration of the following detailed description of the preferred embodiments of the present invention taken in conjunction with the accompanying drawings, in which like numerals refer to like parts:
  • [0008]
    FIG. 1 illustrates a video player in accordance with the present invention;
  • [0009]
    FIG. 2 illustrates a video player-related hierarchical taxonomy in accordance with the present invention;
  • [0010]
    FIGS. 3A-3E illustrate aspects of the invention.
  • [0011]
    It is to be understood that the figures and descriptions of the present invention have been simplified to illustrate elements that are relevant for a clear understanding of the present invention, while eliminating, for the purposes of clarity, many other elements found in typical interactive, metadata, and video play systems and methods. Those of ordinary skill in the art will recognize that other elements are desirable and/or required in order to implement the present invention. However, because such elements are well known in the art, and because they do not facilitate a better understanding of the present invention, a discussion of such elements is not provided herein.
  • [0012]
    FIG. 1 is a block diagram illustrating an audio/video media player 10 (hereinafter videoplayer) having associated therewith software 12 and hardware 14, in the form of a video engine 16, for the playing of audio/visual works 18 on the videoplayer 10. The videoplayer as discussed herein may include any type of videoplayer that makes use of any audio/video media for playing on the videoplayer. The videoplayer may be, but is not limited to, a television, a desktop or laptop computer, a personal digital assistant (PDA), a personal entertainment device such as an iPod, a mobile telephone, or the like, typically having video processing and play capabilities.
  • [0013]
    The video engine 16 of the present invention operates in conjunction with a video player 10 in order to present audio/visual works to a user of the videoplayer. The video engine includes all hardware necessary to execute the playing of the video on the videoplayer, and additionally includes one or more software applications capable of presenting not only audio/visual works, but additionally capable of manipulating user interaction with such audio/visual works, or manipulating such audio/visual works themselves. The software aspects and applications of the video engine may be partly or entirely remote from the videoplayer, such as to allow for development of applications, interactions or data manipulations remote from the audio/visual work(s), or may be entirely local to the videoplayer. Such software applications may interact with the audio/visual works via, for example, locally or remotely generated metadata 24 embedded in or associated with the audio/visual work, or a metadata stream received by the video player and video engine separately from, and/or in conjunction with, the audio/visual work or audio/visual work stream.
  • [0014]
    The video engine, as used herein, may include any software application capable of receiving audio/visual works and instructions associated therewith, and additionally capable of relaying instructions and/or commands to and from at least one manipulation of or interaction with such audio/visual work. A video engine may be, but is not limited to, a digital video recorder, a computer hard drive in association with one or more processors, a microprocessor in conjunction with a video processor, or the like. The video engine may include typical hardware and/or software to allow for viewing of or interaction with an audio/visual work, such as a hard drive, random access memory, flash memory, or the like, and may receive and/or communicate the audio/visual work, commands, interactions and/or instructions from and to communication points via, for example, satellite communication, radio communication, wired communication, infrared communication, coaxial cable communication, Wi-Fi communication, WiMAX communication, LAN communication, WAN communication, telephonic communication, DSL communication, Ethernet communication, or the like, as would be known to those skilled in the art.
  • [0015]
    Metadata 24, as used herein, encompasses any type of computer-readable information that may be associated with an audio/visual work, any object therein, or any portion thereof, including the formation or portions thereof, or that may be used for interaction thereupon or therewith, as will be understood by one of ordinary skill in the art. Metadata, as used herein, is defined to include any type of executable code, computer language code (such as XML or HTML, object code and/or source code), or “mash-up” data (program-associated data integral with data of interest) that may be associated with or incorporated into an audio/visual work. Metadata further includes metadata created through the use of the present invention, as well as pre-existing metadata that may be, form part of, or be associated with the audio/visual works on which the present invention is operated. As discussed hereinthroughout, interactivity with an audio/visual work may include manipulation of the audio/visual work itself, manipulation of a menu, overlay, or the like associated with such audio/visual work, off-line accessing of or content requests for content associated with such audio/visual work or with such interaction with such audio/visual work, and peer-to-peer interactivity, for example. As such, the present invention makes available interactivity with an audio/visual work of any type known to one of ordinary skill in the art, and interactivity between any entities as such interaction may relate to an audio/visual work, including server entities known to those skilled in the art, and the obtaining of information related to, based in, or related to information related to or based in, an audio/visual work, over any communications media, as will be apparent to one of ordinary skill in the art.
  • [0016]
    As shown in FIG. 2, the present invention thus includes within the video engine 16 a hierarchical taxonomy 50 for making common reference to items, objects, or portions 52 of and within audio/visual works across multiple audio/visual works 18 and across multiple interactivity planes 54 and/or interactive applications 12 for interacting with ones of the multiple audio/visual works. Thus, in an exemplary embodiment of the present invention, one, several, or every object in every audio/visual work, and/or one or more portions or topics of portions of one or more audio/visual works, may be assigned a common nomenclature having set terms to reference such object(s) at each of different levels of a hierarchy, with such nomenclature, and nomenclature at such levels, being common across all audio/visual works in accordance with the metadata indicative of each object or portion, as referred to at each such hierarchical level, in each such audio/visual work.
  • [0017]
    Thus, in the present invention the metadata associated with each such audio/visual work is built, such as by manual entry after review of the audio/visual work, or by automated object (audio and/or video and/or character) recognition, which may employ a “crawler” to review many audio/visual works, to use the proper, common nomenclature for each object or portion within and of the work upon every reference to that object or portion across all audio/visual works making reference to that object or portion. For example, the hierarchy may include any number of top-level categories, such as “fashion”, “automotive”, “health and leisure”, and the like. Needless to say, the present invention is by no means limited to the aforementioned list, and in fact includes top-level hierarchical nomenclature in each category of an object or video portion that may appear in an audio/visual work, as will be apparent to those skilled in the art. The hierarchical nomenclature of the present invention is systematically applied with a particular view to those items in an audio/video work that might be of most interest, such as to a consumer or advertiser, as will also be apparent to one of ordinary skill in the art.
  • [0018]
    In the example mentioned above, the top-level hierarchy “fashion” may include any number of subcategories, such as “suits”, “dresses”, “shoes”, “accessories”, and the like. Again, it goes without saying that the aforementioned list is in no way limiting; rather, the importance of the aforementioned list lies in the fact that the nomenclature in each of the hierarchical categories within the hierarchical taxonomy of the video engine is not varied, with respect to its metadata application to objects and portions within audio/visual works, nor is it varied across audio/visual works.
  • [0019]
    Continuing with the example hereinabove, the sub category “accessories” may include, for example, “purses”. Thus, the metadata associated with any audio/video work having an object therein qualifying as a purse would either be manually recognized or automatically recognized, and labeled for use via the video engine in accordance with the nomenclature hierarchy as 1. Fashion, 1A. Accessories, 1AA. Purses.
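The canonical labeling described above can be sketched in a few lines. The class and names below are hypothetical illustrations of the idea, not part of the disclosed engine: each object is registered once with its full hierarchical path, and every later reference to that object, in any work, resolves to the same path.

```python
# Hypothetical sketch of a common-nomenclature taxonomy: every reference to
# an object (e.g. "purses") resolves to one canonical hierarchical path.

class Taxonomy:
    """Maps object terms to their canonical hierarchical path."""

    def __init__(self):
        self._paths = {}  # lowercase term -> full path from the top level

    def register(self, *path):
        # The last element of the path is the term; the whole tuple is
        # its canonical path (e.g. Fashion > Accessories > Purses).
        self._paths[path[-1].lower()] = path

    def canonical_path(self, term):
        # Lookup is case-insensitive, so a manual tagger and an automated
        # recognizer labeling "Purse" objects land on the same reference.
        return self._paths.get(term.lower())


taxonomy = Taxonomy()
taxonomy.register("Fashion", "Accessories", "Purses")
taxonomy.register("Fashion", "Suits")

# Tagging an object found in any audio/visual work:
print(taxonomy.canonical_path("purses"))  # ('Fashion', 'Accessories', 'Purses')
```

Because the path, not the work, is the unit of reference, an application written against `('Fashion', 'Accessories', 'Purses')` needs no agreement with any individual metadata provider.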
  • [0020]
    Upon reaching any particular level of the nomenclature hierarchy, more specific references may or may not be employed based on the intended use of the audio/visual work, but any more specific references must be employed in accordance with the nomenclature hierarchy. For example, a consumer who watches audio/visual works may wish to employ the nomenclature to access video snippets based on references to that consumer's favorite baseball team. In such a case, the initial hierarchy employed by the application that accesses, using associated metadata, particular video snippets may be “professional sports”, the sub-category may be “major league baseball”, and a specific reference may be “Philadelphia Phillies”. No additionally specific references may be necessary for such a use, and thus may not be provided by the party generating the metadata, although other, deeper sub-categories, such as player names, field names, positional data, former players, etc., may be available in the nomenclature hierarchy for common use in other applications requiring such depth.
  • [0021]
    As an example of use of the deeper hierarchy, in the event that an advertiser wishes to make use of the engine of the present invention, more specific hierarchy levels may be employed by an advertiser application using the metadata associated with the audio/visual work. For example, in the reference hereinabove to “purses”, Gucci may wish to advertise its own brown purses in certain cases when the object purse is shown in an audio/visual work, but only a very specific purse. Thus, advertisers may make use of more specific hierarchy levels, such as, under the sub-category of “purses”, categories for “Gucci” purses, and then “brown” Gucci purses (of course other purse colors, types of Gucci purses, model years of Gucci purses, etc., may be made use of at this level of the hierarchy), leading to the reference, by the metadata, to an advertisement external to the audio/visual work (which may be accessible over one or more of the communication access points) only in the event the viewer of the audio/visual work interacts with a brown Gucci purse object. Needless to say in light of the disclosure herein, the nomenclature hierarchy of the present invention may include a translator, whereby the nomenclature of one human language (or computer language) is precisely and consistently translated into the common terminology of another language, with no loss of commonality in any language.
  • [0022]
    Therefore, although different levels of the nomenclature hierarchy may be employed by different users of audio/visual works, or by different applications associated via the metadata with the audio/visual works, the hierarchical nomenclature references employed are the same at any respective level across all users, across all audio/video works, and across all items, objects or video portions of that type. Of course, this aspect of the present invention makes available a number of advantageous presentations for association with audio/visual works. For example, in the exemplary embodiment discussed above, Gucci may wish to place an overlay advertisement in the lower right hand corner of any audio/visual work making reference to purses, or may wish to place an overlay advertisement only with respect to those audio/visual works that make reference to brown purses, or only brown Gucci purses. However, such choices are not available in the prior art for any audio/visual work, because, without a video engine having a common nomenclature hierarchy to create common references across all audio/visual works, the lack of consistent reference to objects makes searching for multiple appearances of such objects or video portions across multiple audio/visual works difficult, if not impossible.
  • [0023]
    In view of the video engine supplying a common hierarchical nomenclature as discussed above, applications and/or audio/visual filters may be developed to allow access to, interaction with, or reference to particular items, objects, or video portions across all videos created anywhere for play over any media. For example, in the exemplary embodiment discussed above, a user may access a video filter or video application that allows that user to record, or view, or buy, or the like, by interaction with any reference to a brown Gucci purse in any video across all videos. Such a filter or application may, of course, attempt to metadata tag only those audio/visual works deemed most likely, such as based on a prioritization filter, to make the requested reference in the audio/visual work, or may crawl across all audio/visual works on all media obtainable to the video engine via any media accessible over the communication access points.
  • [0024]
    Of course, even using the video engine of the present invention to create a common nomenclature across consistent objects among all video works, the task of assessing a particular object or objects across a great many video works may be overwhelming. Thus, the video engine of the present invention may be programmed with the aforementioned prioritization filter 68, whereby, based on a user type of the video engine user, the prioritization filter 68 prioritizes the level of the hierarchy at which review is best to occur, the media type over which review of audio/visual works is best to occur, or the type of communication access point that the most desired customers have the highest likelihood of using, for example. Thus, the video engine of the present invention may make use of empirical data in the application of the nomenclature hierarchy to arrive at the most desired result of nomenclature assignment for any particular application. Additionally, this empirical data may be accessed from any communication access point, and thus any media type, to which the video engine has access, such as by obtaining empirical video over the internet, from television broadcasts, or from the frequency of the play of certain commercials or other audio/visual works over internet, radio, personal electronic device, or television, and the like. Additionally, certain user types may be polled, such as by polling developers or advertisers as to the manner of prioritization for accessing audio/visual works having particular nomenclature therein. Thereby, the metadata in compliance with the hierarchy may be exposed for use directly or in any available application, or for development of applications or filters, by users, advertisers, developers, etc.
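As a rough illustration of such a prioritization filter, the sketch below (with invented field names and weights) orders works by a weighted score built from empirical data such as viewer volume and advertiser demand, so the highest-value works are tagged first:

```python
# Hypothetical prioritization filter: score each work from empirical data
# and return the works in the order they should be metadata-tagged.

def prioritize(works, weights=None):
    """Return works sorted by a weighted priority score, highest first."""
    # Illustrative weights: advertiser demand counts double viewer volume.
    weights = weights or {"viewers": 1.0, "advertiser_demand": 2.0}

    def score(work):
        return sum(weights[key] * work.get(key, 0) for key in weights)

    return sorted(works, key=score, reverse=True)


works = [
    {"title": "homemade clip", "viewers": 100, "advertiser_demand": 0},
    {"title": "prime-time drama", "viewers": 5_000_000, "advertiser_demand": 9},
]

ordered = prioritize(works)
print(ordered[0]["title"])  # prime-time drama
```

The same scoring hook could take polled developer or advertiser preferences as an alternative weight set, matching the polling described above.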
  • [0025]
    Yet more specifically, it has been approximated that there are over one trillion hours of digitized video available through a variety of media sources. Each such audio/visual work may have corresponded thereto metadata that is indicative of and allows for interaction with the audio/video work, portions thereof and objects therein. Consequently, an additional function performed by the video engine of the present invention may be prioritization of that video or those objects which must be primarily, secondarily, tertiarily, etc., associated with metadata or a metadata stream. It will be apparent to those skilled in the art that this prioritization may either form a portion of the video engine, which may thereby mark different videos viewed by different users with the same nomenclature hierarchy while simultaneously reporting such marking of objects (and which videos or video portions have been so marked) back to a remote operations hub 70, or a remote operations hub may likewise begin the process of marking videos with the nomenclature hierarchy for feed to local video players alongside the audio/visual work feed. Of course, in undertaking the prioritization of which videos should be marked first (primarily), and which objects within which videos should be marked first, the focus may be on one or more of a variety of factors, including but not limited to: high desirability of sponsorship for videos or objects; high volume of viewers of particular videos or video types, or with desire to see particular objects; the number of likely references to particular objects and the correspondent actions necessary by the nomenclature engine to name all such objects in all such videos; and the likely order of executed affiliations subscribing to the common nomenclature of the present invention. Other factors may, of course, be apparent to those of ordinary skill in the art.
  • [0026]
    In one of the aforementioned exemplary embodiments, metadata within the common nomenclature may be prioritized for application to those objects having the highest desire for use by the highest desirability advertisers. Thus, those objects that advertisers get premier return on investment for advertising in association with, and/or those items that advertisers otherwise most wished to be affiliated with, may provide an opportunity for the highest priority objects and or videos to be marked by the video engine in response to the prioritization instruction. Additionally and alternatively, metadata marking may be prioritized to those audio/visual works that are most frequently watched by users, such as broadcasts of the National Football League. By thus marking the most popular objects within the most popular programs, a variety of other economic avenues and applications may be opened, such as, using the example of the NFL, advertisers placing highest priority on the highest watched programs, and users having the greatest desire for interactivity with the highest watched programs (such as through being fans of a team or a participant in fantasy sports with regard to the NFL example above). There is a likely increase in an advertiser's desire to be affiliated with programming, and objects within such programming, that have been the subject of such an indication that such programming and/or such objects are among the most watched or interacted with.
  • [0027]
    Prioritization thus may primarily target, for example, network shows in prime time. Further, such prime time shows may have a limited number of objects in each video frame, or may have objects that need be metadata tagged only once because they are re-used week after week. Such frequent objects may include, for example, background sets that appear in many scenes every week and that, when interacted with by a user, may have the same metadata linked thereto every week, such as the New York City tourism board and/or Wikipedia/New York if New York City is in the background often. Of course, as will be apparent to one skilled in the art in view of the discussion herein, in the aforementioned New York example, the video engine, through the prioritization, will ensure that the reference to the New York skyline is consistent across all videos, which may thereby allow applications using the metadata associated via that reference to make various manipulations based on the New York skyline in any video, without having to view the video before programming the application. For example, an application may provide that, each time the New York skyline appears, in any video, the viewer may cursor over the skyline, click or hit enter, and be taken to the New York state tourism board. Similarly, due to the multiple levels of the nomenclature hierarchy, an application may link to a September 11 memorial site if the user cursors over the pre-Sep. 11, 2001 New York skyline in any video, but may link to a different location, via metadata tagging, if the user cursors over the New York skyline in any video showing the post-9/11 skyline. Of course, the interaction by the viewer itself may vary from application to application, and may include “mousing over” an object, clicking an object, calling up a menu or overlay on a scene or an object, pausing the video to interact, not pausing the video, and the like.
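The skyline example above can be pictured as a lookup from taxonomy paths to link targets, where a more specific path (the pre-9/11 skyline) overrides the link for the general object. The paths and URLs below are invented purely for illustration:

```python
# Hypothetical click handling: metadata links are keyed by taxonomy path,
# and the most specific matching path prefix wins.

LINKS = {
    ("Places", "New York", "Skyline"): "https://example.org/ny-tourism",
    ("Places", "New York", "Skyline", "Pre-2001"): "https://example.org/memorial",
}


def link_for(path):
    """Walk from the most specific path prefix to the least; first match wins."""
    for depth in range(len(path), 0, -1):
        target = LINKS.get(tuple(path[:depth]))
        if target:
            return target
    return None


# Clicking the skyline in a video showing the post-9/11 skyline:
print(link_for(["Places", "New York", "Skyline"]))
# Clicking the pre-9/11 skyline in any video:
print(link_for(["Places", "New York", "Skyline", "Pre-2001"]))
```

Because every video tags the skyline with the same path, the application never needs to inspect an individual video before wiring up its link.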
  • [0028]
    Prioritization may additionally allow for differentiation of the worth of objects, videos, or video portions, such as for advertisers. For example, a little-known object in a homemade online video may have low priority, and thus lower worth, such as for the purchase of an advertisement to be associated with an object in such a video. However, a well-known object in a prime time network television video may have high priority, and thus high worth, and may thereby command premium payment for advertisements associated with such objects.
  • [0029]
    Simplification of the prioritization may be desirable, as in the case of millions of video engines marking, using the common nomenclature, millions of videos under the supervision of a remote operations center. For example, in the event the common nomenclature is to be manually associated with objects in a video work for sending outbound from the remote operations center to the local video players, the resolution of the audio/visual work viewed to manually enter the corresponding metadata may be lower than that desirable for viewing by a user.
  • [0030]
    The prioritization of the application of metadata to certain objects or videos, or the depth of the hierarchy to which the application of metadata occurs, may vary in accordance with the user, the target, the application creator, and the like. For example, applications using the metadata built by or approved by professional sports leagues may make use of the sports broadcasts of the league at only a very high level of the hierarchy, and only with respect to very few programming objects. As such, lower levels of the hierarchy, or other objects, may be more readily available in such league broadcasts to other application creators. However, shows on a food-related channel may make use of very deep hierarchical levels, such as food brands, kitchen utensils, expiration dates, and the like. The intelligent prioritization of the present invention may elect, with regard to what videos and/or what objects, the vertical depth in the hierarchy for the common nomenclature metadata tagging on a case-by-case basis.
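One way to picture this per-source election of tagging depth is a simple policy table. The source names and depth values below are hypothetical: league broadcasts are tagged only at shallow hierarchy levels, while food programming is tagged down to deep levels such as brands and utensils.

```python
# Hypothetical depth policy: how many hierarchy levels to tag per source.
TAG_DEPTH = {
    "league_broadcast": 2,  # e.g. Professional Sports > Major League Baseball
    "food_channel": 5,      # e.g. down to food brands and kitchen utensils
    "default": 3,
}


def truncate_to_policy(source, path):
    """Cut a canonical taxonomy path to the depth elected for this source."""
    depth = TAG_DEPTH.get(source, TAG_DEPTH["default"])
    return path[:depth]


path = ["Food", "Cooking Shows", "Kitchen", "Utensils", "Brand X Whisk"]
print(truncate_to_policy("league_broadcast", path))  # first two levels only
print(truncate_to_policy("food_channel", path))      # full five-level path
```

The untagged deeper levels remain available in the shared nomenclature, so other application creators can tag them later without any renaming.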
  • [0031]
    According to an aspect of the present invention, and as further illustrated in FIGS. 3A-3E, within a video and/or any audio/visual content, any object may be clicked upon to learn about the object, shop for information on the object, or other similar interactions, as discussed hereinabove. Further, the present invention provides the ability to upload videos, modify or insert into the videos, distribute videos, measure engagement in the videos and return revenue from the videos. This ability corresponds to the ability to tag objects within the video and share, monitor and generate revenue from the object in the video using the simple interactions discussed herein. This may be accomplished by allowing the clickable video to track tagging and categorize objects within videos. This categorization may be performed using the taxonomy discussed.
  • [0032]
    The present invention allows for the control—such as the tagging, tracking and categorizing—of people, places, and products found within videos, thereby allowing users to interact with any of the objects found in the video. Users may learn more about these objects, purchase products, get storylines and bios, find endorsements based on the people, simply find pricing or purchase point information, play along or receive general background information, for example.
  • [0033]
    The present invention provides the ability to create, manage, distribute, measure and monetize clickable videos. Objects in videos, such as people, places and products, may be clickable, as discussed hereinthroughout. This clickability achieves a number of functions, including allowing users to shop, play and learn while watching videos. Further, this clickability increases traffic through the video hosting site while increasing viewer retention. As discussed hereinabove, metrics may be created and monitored to examine rollovers, clicks and other monitorable functions. Such clickable video closes the loop in product placement, brand integration, and branded entertainment, and allows for metricizing based on products and sponsors, for example. Such tagging may be hand performed or may occur automatically through the tagging and tracking of objects, discussed hereinabove.
  • [0034]
    The present invention provides the opportunity to manage, create, distribute and monetize video content. Within the framework of the present invention, channels of content may be created containing discrete topical content, and/or containing common products and the like. The present invention may also provide the ability to edit and display content, embed tags and meta tag data in videos, link products in the e-commerce realm to products within the videos, and provide campaigns, for example.
  • [0035]
    Further, the present invention may provide interactivity with the content by making a portion and/or a majority of the objects clickable. This increases the viewer retention and traffic associated with the content. Users may be engaged with fun facts, trivia and the like. Further, character development or storylines may be added or developed. Monetization may also occur through monitoring the click throughs of users and developing targeted advertisements and content delivery. Further, objects that are interacted with may be tagged for display to a user at the end of the content, for example. Further, links may be connected with the displayed clicked links.
  • [0036]
    Clickable video content may be monetized by displaying ads, such as brand messages, buy-now ads, sponsorships, and lead generation, for example. Certain objects may be targeted, or categories of objects targeted, for delivery of certain types of advertisements. A third party advertising network, including an ad server, may be linked. Geo-targeting and other campaigns may also be utilized.
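Targeting advertisements to objects or categories of objects, as described above, can be sketched as a lookup that walks up the object's taxonomy path until a campaign matches. The campaign table, taxonomy keys, and creative names are assumptions for illustration only:

```python
# Hypothetical campaign table keyed by taxonomy prefix: a campaign bound
# to "products/apparel" covers every object under that category.
AD_CAMPAIGNS = {
    "products/apparel": {"type": "buy-now", "creative": "sneaker_promo"},
    "places": {"type": "sponsorship", "creative": "travel_brand"},
}

def ad_for(taxonomy_path):
    """Walk up the hierarchy until a campaign covers this object, else None."""
    parts = taxonomy_path.split("/")
    while parts:
        key = "/".join(parts)
        if key in AD_CAMPAIGNS:
            return AD_CAMPAIGNS[key]
        parts.pop()
    return None
```

In a fuller system the returned campaign could instead be a request to a linked third-party ad server, with geo-targeting applied at that stage.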
  • [0037]
    Metrics may be created to track and evaluate a user's behavior. Every clickable object may be tracked, analyzed and reported, for example. This metricizing may yield increased information about a user and her interests. Metrics that may be utilized include information about the viewed video, rollovers, object clicks, ad clicks, click through information, object comparison and delineations, viewer retention, demographics and location, and ad campaign performance.
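Tracking rollovers and clicks per object, and deriving measures such as a click-through rate from them, can be sketched with a simple event counter. The event names and object identifier below are hypothetical:

```python
from collections import Counter

# Hypothetical event log: counts keyed by (event type, object id).
events = Counter()

def track(event_type, object_id):
    events[(event_type, object_id)] += 1

def click_through_rate(object_id):
    """Clicks per rollover for one tagged object (0.0 if never rolled over)."""
    rollovers = events[("rollover", object_id)]
    clicks = events[("click", object_id)]
    return clicks / rollovers if rollovers else 0.0

for _ in range(4):
    track("rollover", "sneaker")
track("click", "sneaker")
```

Aggregating such counters per object, per video, and per campaign would support the reporting and comparison metrics listed above.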
  • [0038]
    Although the invention has been described and pictured in an exemplary form with a certain degree of particularity, it is understood that the present disclosure of the exemplary form has been made by way of example, and that numerous changes in the details of construction and combination and arrangement of parts and steps may be made without departing from the spirit and scope of the invention as set forth in the claims hereinafter.