US20110052144A1 - System and Method for Integrating Interactive Call-To-Action, Contextual Applications with Videos - Google Patents
- Publication number
- US20110052144A1 (application Ser. No. 12/552,146)
- Authority
- US
- United States
- Prior art keywords
- video
- hypercode
- interactive application
- computer
- cimple
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/4722—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
- H04N21/4725—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/036—Insert-editing
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234318—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into objects, e.g. MPEG-4 objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44012—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/61—Network physical structure; Signal processing
- H04N21/6106—Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
- H04N21/6125—Network physical structure; Signal processing specially adapted to the downstream path of the transmission network involving transmission via Internet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6587—Control parameters, e.g. trick play commands, viewpoint selection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/812—Monomedia components thereof involving advertisement data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8166—Monomedia components thereof involving executable data, e.g. software
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/858—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
- H04N21/8583—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by creating hot-spots
Definitions
- the World Wide Web has traditionally been a primarily text-based communication medium with a relatively high level of engagement and interaction with media viewers.
- Television is a highly visual, primarily video-based communication medium, but is generally passive and not as interactive with media viewers.
- the present disclosure relates in general to interactive video applications, and in particular to a system and method for integrating interactive call-to-action, contextual application with videos.
- FIG. 1 is a diagrammatic illustration of a system for managing and delivering interactive video applications according to an exemplary embodiment.
- FIG. 2 is a diagrammatic illustration of a software architecture for operating the system of FIG. 1 for managing and delivering interactive video applications according to an exemplary embodiment.
- FIGS. 3A and 3B are flow chart illustrations of a method for managing and delivering interactive video applications using the system of FIG. 1 and the software architecture of FIG. 2 according to an exemplary embodiment.
- FIG. 4 illustrates a user interface for defining the context and properties of videos used to deliver interactive video applications according to an exemplary embodiment.
- FIG. 5 illustrates a user interface for customizing a video player used to deliver interactive video applications according to an exemplary embodiment.
- FIG. 6 illustrates a user interface for defining and linking interactive video applications to a video or video player according to an exemplary embodiment.
- FIG. 7 is a diagrammatic illustration of a method for automatically determining a list of graphical objects in a video according to an exemplary embodiment.
- FIG. 8 is a diagrammatic illustration of a method for automatically generating tracking data for a graphical object in a video according to an exemplary embodiment.
- FIG. 9 illustrates a user interface for managing a system of presenting interactive video applications to sponsors according to an exemplary embodiment.
- FIG. 10 illustrates a user interface for managing a system of buying and managing interactive video applications according to an exemplary embodiment.
- FIG. 11 illustrates an interactive video application that can be created and delivered using the system of FIG. 1 and the software architecture of FIG. 2 according to an exemplary embodiment.
- FIG. 12 is a diagrammatic illustration of a node for implementing one or more exemplary embodiments of the present disclosure.
- the present disclosure relates generally to interactive video applications. It is understood that the following disclosure provides many different embodiments, or examples, for implementing different features of the invention. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting.
- the system 100 includes a network 102 , which is preferably the Internet but may be a private intranet, a local area network (LAN), a wide area network (WAN), an adhoc network, cellular network including CDMA, GSM, and UMTS, a cable network, DSL network, fiber network, WiMAX network, or a combination of some or all of these networks, or any other suitable network.
- Communicating with and over the network 102 are a variety of servers and clients.
- the servers include a video server 104 , a hypercode overlay server 106 , an application server 108 , and an analytics server 110 .
- Each of these servers may be implemented using hardware, software, or a combination of the two.
- the servers 104 - 110 may be separate from one another, or some or all of them may share computing resources such as data storage, network access, processing resources, memory, operating systems, software libraries, and the like.
- the servers may be controlled by one entity, or they may be under the control of separate entities.
- the video server 104 may be controlled by a media company
- the hypercode overlay server 106 and the application server 108 may be controlled by a separate marketing company
- the analytics server 110 may be controlled by a third company.
- video publishers identify application hotspots within a video stored on the video server 104 .
- Hotspots are spatial and temporal locations within a video that are deemed important. Importance can be based on a key aspect of the video, a particular point in time in the video, or an arbitrary point in the video.
- Contextually relevant applications are then associated with each of the hotspots by hypercode stored on the hypercode overlay server 106 .
- sponsors are made aware of the application hotspots and then buy or bid on contextually relevant call-to-action interactive applications to be associated with the video.
- the hypercoding process includes (i) the process of incorporating hypercode objects on a virtual timeline that is linked to a video player or certain objects/areas within the video and (ii) the process of incorporating one or more hypercode objects while the video player is playing the video and executing the actions specified by the one or more hypercode objects.
- a video server 104 provides video content to the video player and other parts of the system.
- the video server 104 may include multiple servers that provide redundant serving capacity for video content, and a server may be selected to provide video content to a particular viewer based on the geographic location of the user. In this way, the server that is logically or physically nearest to the viewer can deliver the requested video content.
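The geographic server selection described above is not specified in detail in the disclosure; one illustrative way to sketch it is a great-circle nearest-server lookup. The server list, hostnames, and coordinates below are hypothetical examples, not part of the patent.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth, in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_server(servers, viewer_lat, viewer_lon):
    # Pick the redundant video server physically closest to the viewer.
    return min(servers,
               key=lambda s: haversine_km(s["lat"], s["lon"], viewer_lat, viewer_lon))

servers = [
    {"host": "video-us-east.example.com", "lat": 39.0, "lon": -77.5},
    {"host": "video-us-west.example.com", "lat": 45.6, "lon": -121.2},
]
# A viewer near New York should be routed to the east-coast server.
print(nearest_server(servers, 40.7, -74.0)["host"])
```

A production system would more likely select on network topology or CDN routing ("logically nearest") rather than raw distance; this sketch only illustrates the physical-proximity case.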
- the video content may be provided by hypertext transfer protocol (HTTP), real-time transport protocol (RTP), real time messaging protocol (RTMP), or any other suitable protocol.
- Viewers interact with the video via the interactive applications in order to obtain more information, receive associated services, make a purchase, etc.
- These applications can be activated based on time, user interaction, or some other event. For example, the viewer can mouse over a hypercode object and be presented with a menu of applications, such as product info (type, available colors, prices, etc.), retail location search, click-to-call, coupons, etc.
- the first sponsor's applications may include a custom video player skin, a click-to-call application, a retailer location search, a coupon, etc.
- a second sponsor sponsors the applications in the video player and in the video stream, such that the video player has a different skin, a different click-to-call application, a second retailer location search, new coupons, etc.
- applications appear at certain intervals throughout the video and are sponsored by different sponsors.
- the analytics server 110 performs tracking of viewer interaction with the embedded applications.
- the tracking data may be used by publishers and sponsors for business intelligence and financial analysis purposes, and to improve the application delivery.
- the web client 112 may be web browser software executing on a personal computer, for example Microsoft Internet Explorer, Google Chrome, Mozilla Firefox, Apple Safari, or any other suitable web client.
- the television client 114 may be a television, set-top box, cable converter, digital video recorder, or any other suitable television client.
- the mobile client 116 may be a personal data assistant, mobile phone, smart phone, portable media player, MP3 player, automobile, wearable computer, or other suitable mobile client.
- the game console 118 may be a video game console, such as a Microsoft Xbox 360, Sony PlayStation 3, Nintendo Wii, or any other suitable game console or platform.
- each of the clients 112 - 118 communicates with one or more of the servers 104 - 110 .
- the web client 112 may request interactive content from the application server 108 .
- the application server 108 directs the web client 112 to request a video from the video server 104 and a hypercode overlay from the hypercode overlay server 106 , and to attach applications from the application server 108 .
- the web client 112 subsequently reports information on a viewer's interaction with the received video, hypercode objects, and triggered applications to the analytics server 110 .
- any of the other clients 114 - 118 could also be used to access the same or similar content instead of or in addition to the web client 112 .
- FIG. 2 illustrates an exemplary embodiment of a software architecture 200 used with the system 100 to manage and deliver interactive video applications.
- each of the parts of the architecture 200 may be stored and/or executed on one or more of the above-described components of the system 100 .
- each of the parts of the described architecture 200 may execute on one computer or be distributed across multiple computers, and a given computer may implement some parts of the architecture 200 while not implementing others.
- the software architecture 200 includes application server software 202 .
- the application server software 202 is Java application server software, although non-Java application server software could also be used.
- the application server software 202 is operably coupled to database software 242 .
- the application server software 202 supports various server processes or “servlets,” including an application integration engine (AIE) servlet 204 , a video context editor (VCE) servlet 206 , a sponsor space manager (SSM) servlet 208 , a sponsor campaign manager (SCM) servlet 210 , an application & video analytics (AVA) servlet 212 , an application services servlet 214 , an interactive video player (IVP) service servlet 216 , and an application development platform (ADP) service servlet 218 .
- Some or all of the servlets 204 - 218 may rely on services provided by one another, and thus they may communicate with each other either directly or indirectly through the application server software 202 .
- the various servlets 204 - 218 may store their associated data in one database, or they may store data in multiple databases, which may be shared or not shared among and between the servlets. Some servlets may access or store data in multiple databases.
- the application integration engine servlet 204 is operably coupled to and responds to requests from an application integration engine 220 for customizing a video player and linking applications with hotspots in a video using hypercode objects.
- the application integration engine servlet 204 is operably coupled to and responds to requests from the application integration engine 220 for defining the properties of hotspots related to applications in a video.
- the video context editor servlet 206 is operably coupled to and responds to requests from a video context editor 222 for defining the context and location of hotspots in a video.
- the video context editor servlet 206 is also operably coupled to speech and video analysis server software 240 .
- the sponsor space manager servlet 208 is operably coupled to and responds to requests from a sponsor space manager 224 for placing hotspots and appropriate applications in videos up for purchase or bid by sponsors.
- the sponsor campaign manager servlet 210 is operably coupled to and responds to requests from the sponsor campaign manager 226 for managing the creation and oversight of sponsors' campaigns.
- the application & video analytics servlet 212 is operably coupled to and responds to requests from an application & video analytics 228 for counting the number of times videos and applications have been viewed or delivered to viewers, as well as analyzing the different types of interactions with videos and applications by viewers.
- the application & video analytics servlet 212 may also perform analysis on viewer interaction data to produce charts and tables to be made available to publishers and sponsors.
- application & video analytics servlet 212 records viewer interactions with a video to analytics server 110 using database server software 242 .
- application & video analytics servlet 212 records the location of the viewer, the originating link for the video, the most popular and least popular sections of the video, etc.
- the application services servlet 214 is operably coupled to an interactive video player 236 and allows publishers and sponsors to serve interactive applications to a viewer.
- the interactive video player service servlet 216 is operably coupled to the interactive video player 236 and allows publishers and sponsors to serve video to a viewer.
- the interactive video player service servlet 216 is a server process that runs on hypercode overlay server 106 .
- the application development platform service servlet 218 is operably coupled to and responds to requests from an application development platform 238 for creating and customizing new applications using widget blocks.
- a content management system user interface 230 provides a graphical user interface that acts as the main console for publishers and sponsors to manage the content of the video and the applications.
- the content management system user interface 230 may also be used by administrators, publishers and sponsors.
- This content management system user interface 230 is operably coupled to the application integration engine 220 , the video context editor 222 , the sponsor space manager 224 , the sponsor campaign manager 226 , and the application & video analytics 228 .
- the application integration engine 220 served by the application integration engine servlet 204 allows the video content owner or the publisher to embed interactive applications at application hotspots defined by the video context editor 222 served by the video context editor servlet 206 .
- the applications employed by the application integration engine 220 are stored on the application server 108 .
- the content owner or the publisher uses the application integration engine 220 to embed applications in the video by defining various types of hotspots at certain positions and times within the video stream using hypercode objects.
- the application integration engine 220 links applications at the hotspots by non-intrusive hypercode objects within the video.
- a hypercode object is a passive software marker, usually invisible to the viewer, that is linked to a video player skin or video stream.
- a virtual timeline is a schedule of hypercode objects linked to a video. The virtual timeline is activated when the video player starts the video playback. The video player reads the virtual timeline, and takes the appropriate action based on the applicable scheduling of the hypercode objects in the virtual timeline.
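The virtual timeline described above can be sketched as a simple schedule lookup: at each playback position, the player selects the hypercode objects whose activation windows cover that time. The class and field names below are illustrative, drawn from the property names the disclosure lists (time, duration), not from the patent's actual code.

```python
from dataclasses import dataclass

@dataclass
class HypercodeObject:
    # Illustrative subset of the properties named in the disclosure:
    # activation time and duration along the virtual timeline.
    obj_id: str
    time: float      # activation time, seconds from start of playback
    duration: float  # how long the object stays active

def active_objects(timeline, playback_time):
    # The player reads the virtual timeline and returns the hypercode
    # objects whose [time, time + duration) window covers the current
    # playback position.
    return [o for o in timeline
            if o.time <= playback_time < o.time + o.duration]

timeline = [
    HypercodeObject("intro-banner", time=0.0, duration=5.0),
    HypercodeObject("shoe-hotspot", time=12.0, duration=8.0),
]
print([o.obj_id for o in active_objects(timeline, 14.0)])  # only the shoe hotspot is active
```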
- a hypercode object becomes visible when a viewer moves a mouse cursor over the hypercode object.
- Hypercode objects have a variety of properties, such as time, duration, x-axis position, y-axis position, size, action, and type.
- the time and duration properties indicate the activation time and duration of the hypercode object along the virtual timeline.
- the x-axis position, y-axis position, and size properties are used to determine the spatial point and size of the hypercode object on the video stream.
- the action property indicates the action to be taken by the video player.
- hypercode objects are saved to an XML file, although the hypercode objects could also be saved to any suitable file format.
- Various examples of hypercode object XML files are provided in appendices at the end of this disclosure. The examples show various features and properties that are available for the hypercode objects, including id, size, time, duration, and action.
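The appendices are not reproduced here, but a file of the kind described (objects carrying id, size, time, duration, and action properties) could be parsed along these lines. The element and attribute names in this sketch are assumptions for illustration; the actual schema is the one in the appendices.

```python
import xml.etree.ElementTree as ET

# Hypothetical hypercode XML; the real schema appears in the appendices.
HYPERCODE_XML = """
<hypercode>
  <object id="spot1" type="image" x="100" y="40" width="120" height="60"
          time="12.5" duration="8.0" action="open_url"/>
  <object id="spot2" type="audio" x="0" y="0" width="32" height="32"
          time="30.0" duration="5.0" action="play_audio"/>
</hypercode>
"""

def load_hypercode(xml_text):
    """Parse hypercode objects and their properties from an XML document."""
    root = ET.fromstring(xml_text)
    objects = []
    for el in root.findall("object"):
        objects.append({
            "id": el.get("id"),
            "type": el.get("type"),
            "time": float(el.get("time")),
            "duration": float(el.get("duration")),
            "action": el.get("action"),
        })
    return objects

objects = load_hypercode(HYPERCODE_XML)
print([o["id"] for o in objects])  # ['spot1', 'spot2']
```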
- the type property identifies the type of hypercode object.
- One type of hypercode object is an audio hypercode object, which plays audio files associated with it.
- viewers can distinguish an audio hypercode object by rolling a mouse pointer over it, causing a distinguishing audio icon to appear.
- an audio application will be executed by an audio hypercode object when the viewer moves a mouse pointer over the audio hypercode object, and execution will cease when the viewer moves the mouse pointer away from the audio hypercode object.
- Another type of hypercode object is an image hypercode object, which may be displayed in an image banner. The viewer clicks on the image hypercode object to execute an interactive application, which, in one embodiment, links to a specific uniform resource locator.
- the image hypercode object contains files in the JPEG, PNG, or GIF file formats.
- Another type of hypercode object is a text hypercode object.
- text is added to a text hypercode object using hypertext markup language.
- Another type of hypercode object is a video hypercode object.
- viewer interaction with a video hypercode object executes an application that plays another video within the video containing the video hypercode object.
- the interactive video player 236 displays hypercode objects of the following shapes: circle, rectangle, round rectangle, dotted rectangle, dashed rectangle, and irregular. In other embodiments, other shapes could be used.
- the video player receives a series of XML point instructions that are used to draw the irregular shape.
- Hypercode objects may be animated in a linear, curving, or multiple curving direction to track moving graphical objects in a video. Hypercode objects are added to a video player skin or to certain graphical objects or certain areas of a video stream. Adding these hypercode objects causes the video player skin and those areas and objects to become interactive. When viewers provide input to a hypercode object in the video player skin or video stream, the application linked with the hypercode object is invoked.
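The dispatch just described, where viewer input on an active hotspot invokes the linked application, reduces to a hit test over the scheduled objects. A minimal sketch, assuming rectangular hotspots and illustrative field names:

```python
def hit_test(objects, x, y, playback_time):
    """Return the first active hypercode object containing the point (x, y), else None."""
    for obj in objects:
        # active: the playback time falls within the object's scheduled interval
        active = obj["time"] <= playback_time < obj["time"] + obj["duration"]
        # inside: the input point falls within the object's bounding box
        inside = (obj["x"] <= x < obj["x"] + obj["width"] and
                  obj["y"] <= y < obj["y"] + obj["height"])
        if active and inside:
            return obj
    return None

objects = [{"id": "spot1", "x": 100, "y": 40, "width": 120, "height": 60,
            "time": 12.5, "duration": 8.0, "action": "open_url"}]

hit = hit_test(objects, x=150, y=70, playback_time=15.0)
print(hit["action"] if hit else None)  # open_url
```

A player would run this test on mouse-over and click events and then invoke the application named by the matched object's action.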
- the hypercoding process enables the deployment of applications temporally and spatially in a video stream.
- a sponsor buys customized applications linked with embedded hypercode objects. After a video is published to the public, multiple viewers viewing the video click on the embedded hypercode objects to be redirected to a sponsor's landing page or otherwise receive additional information from the sponsor through applications.
- hypercode objects do not activate applications unless and until a viewer interacts with them via a mouse-over or mouse-click.
- hypercode objects can invoke applications based on certain time intervals or certain events, without direct input from the viewer.
- activation of an application associated with a hypercode object occurs at a particular time in the video.
- a method of operating the system 100 using the software architecture 200 is generally referred to by the reference numeral 300 and includes a step 302 , which includes beginning a content management system session using the content management system 230 .
- a step 304 includes selecting a video from a library or remote source.
- a publisher uploads the video from a remote source to the video server 104 .
- the publisher selects a video from a library or a video stored on the video server 104 .
- the publisher defines the context of the video using the video context editor 222 .
- contextual information is defined manually, without computer assistance.
- the publisher enters information about the video, such as an overall topical category (e.g. sport or news) and individual topical categories, time codes, and durations for each scene in the video.
- contextual information is added through a set of automatic processes that do not require any input from the publisher.
- this process requires the speech and video analysis server software 240 to be linked to an object signature database and a speech or word signature database.
- the video context editor 222 differentiates the scenes of the video by computing and comparing a frame to frame histogram. This process generates contextual information for each scene and attaches the information to a video context file.
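A minimal sketch of the frame-to-frame histogram comparison described above. The grayscale intensity histogram and the fixed difference threshold are assumptions for illustration; the disclosure does not specify the metric.

```python
def histogram(frame, bins=8, max_value=256):
    """Intensity histogram of a frame given as a flat list of pixel values."""
    counts = [0] * bins
    for value in frame:
        counts[value * bins // max_value] += 1
    return counts

def hist_diff(h1, h2):
    """Sum of absolute bin differences, normalized by pixel count."""
    total = sum(h1)
    return sum(abs(a - b) for a, b in zip(h1, h2)) / total

def scene_cuts(frames, threshold=0.5):
    """Indices where the histogram difference from the prior frame exceeds the threshold."""
    hists = [histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if hist_diff(hists[i - 1], hists[i]) > threshold]

dark = [10] * 100      # a dark frame
bright = [240] * 100   # a bright frame
print(scene_cuts([dark, dark, bright, bright]))  # [2]
```

Each detected cut index marks a scene boundary to which contextual information can then be attached in the video context file.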
- the publisher identifies hotspots within the video and video player skin.
- the publisher defines properties for hypercode objects embedded at the hotspots, either manually or with computer assistance, or both.
- the publisher selects the desired properties for the embedded hypercode object (e.g. shape, text, audio, time code, duration, x and y coordinates, trigger event).
- an automatic process identifies the graphical objects present in each scene and generates a signature for each graphical object to compare with the graphical object signatures in a graphical object signature database.
- the graphical object in the database is linked to a scene record.
- graphical object locations are tracked in a scene and changes in location are recorded and saved for use by applications.
- the automatic process uses speech recognition and pattern recognition technology to identify the words spoken in the scene and the patterns of graphical objects in a scene using the speech and video analysis server software 240 and links these words with the scene record. After analyzing the graphical objects and speech in the scene, the automatic processes will generate results for the publisher to accept or correct.
- the method 300 of operating the system 100 using the software architecture 200 also includes a step 312 , during which the publisher links applications to the video at the hotspots with hypercode objects using the application integration engine 220 .
- the step 312 also includes the publisher selecting an appropriate video player template based on the video context.
- the publisher defines criteria for sale and/or auction of applications. In an exemplary embodiment, the definition is done through sponsor space manager 224 .
- the publisher makes the video available to potential sponsors.
- sponsors buy or bid on applications attached to the hypercode objects. In an exemplary embodiment, this is done using the sponsor campaign manager 226 .
- sponsors may customize the applications they buy (e.g. by inserting logos, phone numbers, etc.). In one embodiment, the sponsor customizes the application to appear at certain times of the day, or to visitors from certain geographic locations, or to visitors matching a certain demographic profile.
- the sponsor's offer is accepted or rejected by the publisher.
- the publisher reviews and chooses whether or not to approve the sponsor's bid and/or customization of the applications. In an exemplary embodiment, this approval is done through the sponsor space manager 224 .
- the video is published to the public.
- viewer interaction with the applications embedded in the video is tracked and analyzed using the application & video analytics 228 .
- the viewer plays the video using the interactive video player 236 . When the video is played, the applicable applications will be accessible through hypercode objects at appropriate points and times within the video or on the player skin, as defined in the earlier steps.
- the applications are optimized, edited and repositioned according to the tracking data obtained in the step 326 . In an exemplary embodiment, optimization of the applications includes repeating one or more of the steps in FIGS. 3A and 3B .
- the video context editor 222 allows the video owner or the publisher to define contextual information about the video by associating tags with certain times during the video and/or with certain areas within the video in the step 306 . Video context editor 222 then uses these tags to help identify application hotspots. In one embodiment, the contextual information is kept in a database separate from the video.
- the video context editor 222 includes a video editing window 400 showing a video 402 . Within the video 402 is a hotspot 404 indicated by a dotted outline.
- the hotspot 404 has the shape of a rounded rectangle, although other shapes are also possible, including squares, rectangles, circles, triangles, stars, ovals, trapezoids, parallelograms, pentagons, octagons, sectors, irregular shapes, and any other shape. Instead of a dotted outline, the hotspot 404 could also be indicated by varying the shading or lighting of the underlying video, such as to create a bubble-effect, lighted sphere appearance, dimming effect, or glowing effect.
- the video editing window 400 also includes a virtual timeline 406 corresponding to the playback timeline for the video 402 .
- a hotspot timeline 408 indicating a start time when the hotspot 404 will begin being displayed and an end time when the hotspot 404 will cease being displayed.
- a user can use the video context editor 222 to add a variety of different kinds of the hotspots 404 to the video 402 at the desired hotspots in the step 308 .
- the hotspot 404 can be stationary, or it can move to track movement in the underlying video, such as when an object moves.
- the hotspot 404 can change location and shape during the hotspot timeline 408 .
- a user can create multiple hotspots and multiple hotspot timelines. The start time and end time may be the same or different for various hotspots, and thus, more than one hypercode object may be active at any point in the virtual timeline.
- the video context editor 222 automates the hotspot and hypercode object creation processes via communications with a server. These communications begin with the video context editor 222 sending an object recognition request to the server.
- the request includes a video, a location, and a time.
- the video may be provided by reference, such as by providing a URL to the video file, or by sending the video data itself.
- the location is a location on the video image, and may be a single location, such as a point or pixel, or a range of locations, such as a shape.
- the time is a time or range of times within the video's virtual timeline.
- the server analyzes the video at the specified time and in the specified location. Using an object recognition algorithm, the server creates a list of graphical objects shown in the video.
- the object recognition algorithm may use an open source library, or any other suitable graphical object recognition algorithm.
- the server-generated list of graphical objects may contain only one graphical object, or it may contain multiple graphical objects.
- the server sends the list of graphical objects to the video context editor 222 , where the video context editor 222 presents the list of graphical objects to the user.
- the video context editor 222 displays the graphical objects in the list as images taken from the video, but any suitable presentation of the list may be used.
- the user selects a graphical object from the list (if there are multiple objects) or confirms the detected graphical object (if there is only one graphical object on the list).
- the client then sends the user's selection or confirmation to the server.
- the server then employs a graphical object tracking algorithm to track the motion of the selected graphical object in the video over the range of times specified in the request.
- the graphical object tracking algorithm may be supplied by an open source library, or any other suitable graphical object tracking algorithm may be used.
- the graphical object tracking algorithm generates movement data that describes the movement of the graphical object in the video.
- the server then sends this movement data back to the video context editor 222 , preferably in an XML format, although any suitable file format may be used.
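The movement data returned to the editor could take a form like the following sketch, which serializes tracked positions per time sample. The element names and the (time, x, y) sampling are assumptions; the disclosure specifies only that the data is preferably XML.

```python
import xml.etree.ElementTree as ET

def movement_to_xml(object_id, positions):
    """Serialize tracked (time, x, y) samples for one graphical object to XML."""
    root = ET.Element("movement", {"object": object_id})
    for t, x, y in positions:
        ET.SubElement(root, "point", {"time": str(t), "x": str(x), "y": str(y)})
    return ET.tostring(root, encoding="unicode")

xml_text = movement_to_xml("ball", [(0.0, 10, 20), (0.5, 14, 22), (1.0, 19, 25)])
print(xml_text)
```

The video context editor could then map each sampled point onto a hotspot position along the virtual timeline, leaving the data editable by the user as described later in the disclosure.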
- the preview session command generates an XML file based on the video player, hypercode objects, and linked applications. This XML file is then sent to the interactive video player 236 .
- the save sessions command generates and saves an XML file of the video player and hypercode objects, allowing the publisher to close the application integration engine session, and open another application integration engine session without loss of data.
- the edit menu 502 contains traditional cut, copy, and paste commands, as well as commands for selecting hypercode objects, video player templates, and find and search features.
- the commands on the edit menu are accessible using input from a keyboard.
- An insert menu 504 contains a list of various types of hypercode objects, such as video, audio, text and shape. The insert menu 504 also contains commands for insertion of animation and preset transitions.
- a view menu 506 contains commands for viewing, opening, and navigating toolbars and tabs. The view menu 506 also contains commands for changing the zoom level and changing the application integration engine graphical layout.
- a control menu 508 contains commands related to the viewing of the video stream, such as play, stop, pause, volume adjust, and quality adjust.
- a help menu 510 contains commands to access information about and check for updates to the application integration engine software. The help menu 510 also contains commands to access information about plug-ins for the software and the operating system. The help menu 510 also contains commands to check for software updates, visit a Web site with information about the software, and detect problems with and make repairs to the software.
- the first application integration engine window is illustrated in FIG. 5 , and contains a template layout panel 512 and template skin panel 514 , which are used in the step 312 to choose a video player template.
- the publisher also allocates appropriate space on the video player for a message, logo, image, etc.
- the publisher also defines this space to be a 320 by 80 pixel banner that will slide up from the bottom of the video at 55 seconds into the video and slide back down after 15 seconds.
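The sliding banner just described (a 320 by 80 pixel banner entering at 55 seconds and leaving 15 seconds later) can be modeled as a position function of playback time. The one-second slide duration and the 240-pixel video height in this sketch are assumptions for illustration.

```python
def banner_y(playback_time, video_height=240, banner_height=80,
             start=55.0, hold=15.0, slide=1.0):
    """y of the banner's top edge: hidden below the frame, sliding up, holding, sliding down."""
    visible_y = video_height - banner_height      # fully visible position
    hidden_y = video_height                       # fully hidden (below the frame)
    if playback_time < start or playback_time > start + hold:
        return hidden_y
    if playback_time < start + slide:             # sliding up
        progress = (playback_time - start) / slide
        return hidden_y - progress * banner_height
    if playback_time > start + hold - slide:      # sliding down
        progress = (start + hold - playback_time) / slide
        return hidden_y - progress * banner_height
    return visible_y                              # holding

print(banner_y(50.0))   # 240 (hidden)
print(banner_y(60.0))   # 160 (fully visible)
```

A player skin would evaluate this function each frame and draw the banner at the returned offset.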
- the first application integration engine window also contains an applications panel 516 , which contains the various applications available to be embedded in the step 312 in the video player skin 518 .
- the publisher selects the type of application from the application panel 516 desired to be embedded in the video player skin 518 during a given scene. Then, the publisher links applications from the application panel 516 in the step 312 by dragging the application from the panel 516 and dropping the application on the locations 520 .
- the applications include applications designed to contact the viewer through SMS, phone call, phone text, email, etc.
- the applications include graphics applications such as maps, quizzes, games, etc. When the video is published, the viewer interacts with the applications by rolling a mouse cursor over or clicking on the locations 520 in the video player skin 518 .
- the second application integration engine window for performing the step 310 is illustrated and contains the move tool 600 , which moves a selected hypercode object in the video.
- a lock tool 602 prevents the selection and editing of hypercode objects.
- An automatic object motion detection tool 604 allows selection of a region of a video, which is then analyzed by a server to generate a list of graphical objects within the region in the same manner as the object recognition system described above in the context of the video context editor 222 . One or more of the graphical objects can then be selected for tracking by a hypercode object as the item moves through the video over time.
- the animation tool 606 draws a linear or curved motion path and associates the path with a hypercode object.
- the location of the hypercode object then follows the motion path during video playback.
- the transformation tool 608 changes the appearance of an item in the video by scaling, rotating, skewing, changing perspectives, distorting, flipping, etc.
- the hand tool 610 moves the video within the application integration engine window.
- the magnify tool 612 zooms in or out on the video.
- the application integration engine window also contains a video canvas panel 614 , which shows the video title, video file path and output size.
- the video canvas panel contains the commands load video, play video, pause video, show video loading/buffering, zoom in/out on video, show playing time, show time code, and show time range.
- a change to a hypercode object or to the video can be made by manipulation of sliders 616 at the bottom of the video canvas panel 614 .
- the application integration engine window also contains a hypercode spot list panel 618 .
- the items on the hypercode spot list panel 618 sort automatically based on starting time. Clicking an item in the hypercode spot list panel 618 selects a spot on the video and jumps the video to the starting time position associated with the selected item.
- the application integration engine window also contains a spot properties panel 622 , which is used to perform the step 310 .
- the spot properties panel 622 is used to set the type and properties of hypercode objects.
- Types of hypercode objects include audio, video, image, geometric or irregular shape, etc. Hypercode objects can be added or removed and their properties set through the spot properties panel 622 . Properties are common to all hypercode objects or unique to individual hotspots. For example, a time of occurrence or x and y position may be common to all hypercode objects in the video, while some hypercode objects would be of the audio type, and some would be of the video type.
- Types of hypercode object properties include: x position, y position, width, height, begin time, end time, rollover text and hyperlink.
- the position of the hypercode objects is set using the numeric stepper in the spot properties panel 622 , while the hyperlink and rollover text can be set using the text box in the spot properties panel 622 .
- hypercode objects can be linked to or removed from the hypercode list panel 618 by clicking the add or remove buttons 620 .
- the application integration engine window also contains an application panel 624 , which contains the various applications available to be embedded in the video at the hotspots in the step 312 .
- the publisher links applications from the application panel in the step 312 by dragging the application from the panel and dropping the application on the hypercode object hotspot.
- the applications on the application selection panel include: player branding, click-to-call, mobile coupon, search for store, click to email, landing Web pages, social network integration etc.
- the applications include applications designed to contact the viewer through SMS, phone call, phone text, email, etc.
- the applications include graphics applications such as maps, quizzes, games, etc. When the video is published, the viewer interacts with the applications by rolling a mouse cursor over or clicking on the hotspots.
- the application integration engine window also contains a hypercode type toolbar that provides icons allowing a user to specify as part of the step 312 how a hypercode object will respond to a viewer's activation.
- the hypercode type toolbar includes an icon 626 for a video hypercode object that will load and play a different video file, which may be another interactive video application. A video hypercode object can also cause a jump to a different location in the virtual timeline within the same video.
- An icon 628 for an audio hypercode object will load and play an audio file, such as a WAV or MP3 file.
- An icon 630 for an image hypercode object will display an image, such as a photo or drawing, which may be in GIF, JPG, PNG, or any other suitable image format.
- An icon 632 for a text hypercode object will display text, which may be hypertext, such as a Web page.
- the hypercode type toolbar also includes a hotspot shape icon 634 and a sponsor space icon 636 .
- FIG. 7 illustrates a process for determining a list of graphical objects in a video.
- the process may be used, for instance, as part of the step 312 by the application integration engine 220 .
- the process begins in step 710 with receiving a coordinate range, a time position, and a video file.
- the coordinate range indicates a selected area of the video image to be analyzed, and the time position indicates the time during the video's timeline at which the video image is to be analyzed.
- the video file can be in any suitable video format, including MPEG, H.264, AVI, QuickTime, Flash Video, or Windows Media.
- the still image of the video at the time position is retrieved.
- in a step 714 , the still image is processed.
- the processing may depend on the original video format and may include, for example, cropping the still image to the received coordinate range.
- in a step 716 , a list of graphical objects within the coordinate range of the still image is generated.
- in a step 718 , the list of graphical objects is sent out.
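The steps above can be sketched as a small pipeline. The frame source and the detector are stubbed here, since the disclosure deliberately leaves the object recognition algorithm open; every name is illustrative.

```python
def crop(frame, coord_range):
    """Step 714: crop a frame (dict of (x, y) point -> pixel label) to the coordinate range."""
    (x0, y0), (x1, y1) = coord_range
    return {p: v for p, v in frame.items()
            if x0 <= p[0] < x1 and y0 <= p[1] < y1}

def list_objects(get_frame, detect, coord_range, time_position):
    """Steps 710-718: still image at the time position, cropped, then object detection."""
    frame = get_frame(time_position)       # step 712: retrieve the still image
    region = crop(frame, coord_range)      # step 714: process (crop) the image
    return detect(region)                  # step 716: generate the object list

# Stub frame source and detector for illustration.
frame = {(x, y): "ball" if x < 5 else "net" for x in range(10) for y in range(10)}
objects = list_objects(lambda t: frame,
                       lambda region: sorted(set(region.values())),
                       coord_range=((0, 0), (5, 10)), time_position=12.0)
print(objects)  # ['ball']
```

Step 718 would then send this list back to the client, where the user selects or confirms an object as described earlier.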
- FIG. 8 illustrates a process for generating tracking data for a graphical object in a video as part of the step 312 .
- the process begins at step 720 with receiving a graphical object to be tracked and a time frame.
- the video is processed over the length of the received time frame and at each frame the location of the object is determined.
- the graphical object's movement across the frames is tracked.
- the movement data from step 722 is written to a file, such as an XML file.
- the data file is sent out.
- a user can then use the movement data to create a hypercode object that will track the movement of the graphical object with a hotspot.
- This automated system for creating a hypercode object greatly reduces the amount of time and human effort required to create hotspots in videos and accelerates the process of creating interactive video applications.
- the movement data remains editable; the user can adjust the hotspot movement if necessary.
- the content management system software 230 ( FIG. 2 ) includes a sponsor space manager 224 that allows a publisher to define details for each application that is linked to a hotspot within the video by the application integration engine 220 .
- the sponsor space manager 224 is served by the sponsor space manager servlet 210 as part of the application software 202 on the application server 108 .
- the sponsor space manager 224 includes an available spaces panel 800 .
- the available spaces panel 800 is used by the publisher to view and manage the information about applications to be embedded in a given video.
- the publisher uses the available spaces panel 800 to disseminate information about applications for the video player skin, as well as the hotspots in the video stream.
- the sponsor space manager 224 also includes transaction type panel 802 .
- the publisher uses the transaction type panel 802 at the step 316 to identify prices, durations, and discounts for each application linked to a hypercode object related to the video.
- the publisher can put the applications linked to hypercode objects up for bid by sponsors.
- the publisher can use an external video ad network to place applications into the hotspots.
- the publisher uses the sponsor space manager 224 to view analytical data regarding viewer interaction with the placed content.
- the content management system software 230 includes a sponsor campaign manager 226 that allows the sponsor to buy or bid on applications that are embedded at the hotspots within the video stream or as part of the video player skin.
- the sponsor campaign manager 226 is served by a sponsor campaign manager servlet as part of the application server software 202 on the application server 108 .
- the sponsor opens a create campaign panel 900 and names, describes, and defines the category of the new campaign (e.g., sport, entertainment, etc.).
- the sponsor can define the geographic regions in which the sponsored content will be displayed as part of the campaign (e.g., North America, U.S.A., Texas, or Dallas) in the location panel 902 .
- the sponsor can define a target demographic by characteristics such as age, gender or hobbies in the demographics panel 904 .
- the sponsor selects applications that have been previously designated by a publisher as available for sponsorship in publishers panel 906 .
- the media selection panel 908 presents the sponsor with available applications in an inventory, and allows the sponsor to add media assets, such as images, audio, video or text, to the applications.
- the sponsor campaign manager includes an ad spaces panel 910 , which presents the sponsor with an interface operable to link available applications with sponsored content, such as a phone number, email address, URL, or location. These applications can be customized using an application configuration panel 912 .
- the sponsor campaign manager 226 also includes a transaction type panel 914 . Sponsors use the transaction type panel 914 to buy or bid on applications at the step 320 , which applications are embedded using hypercode objects at hotspots in a video.
- the sponsor chooses the transaction type for the purchase of applications as part of the new campaign.
- the transaction type is money, which means the sponsor campaign manager 226 will automatically continue to purchase applications with the sponsor's content until the set amount of money is exhausted.
- the transaction type is time period, which means the sponsor campaign manager 226 will automatically continue to purchase application with the sponsor's content until the set time period expires.
- the sponsor's campaign may be organized on the basis of both a set amount of money and a set time period.
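A minimal sketch of the money and time-period transaction types, and their combination: the campaign manager keeps purchasing application placements while neither limit has been reached. The function and field names are illustrative assumptions.

```python
from datetime import datetime, timedelta

def campaign_active(spent, budget=None, now=None, expires=None):
    """A campaign keeps purchasing while its money and/or time limits hold."""
    if budget is not None and spent >= budget:
        return False          # money-based: the set amount of money is exhausted
    if expires is not None and now is not None and now >= expires:
        return False          # time-based: the set time period has expired
    return True

start = datetime(2009, 9, 1)
print(campaign_active(spent=90.0, budget=100.0))            # True
print(campaign_active(spent=100.0, budget=100.0))           # False
print(campaign_active(spent=10.0, budget=100.0,
                      now=start + timedelta(days=31),
                      expires=start + timedelta(days=30)))  # False
```

Passing both a budget and an expiry models the combined basis described above: the campaign stops at whichever limit is reached first.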
- the sponsor campaign manager 226 presents the sponsor with a selection of video player skins and a customization panel for linking the skin with sponsored content.
- after the sponsor chooses a video and selects from the available applications, the sponsor submits a request for approval of the sponsor's content from the relevant publisher(s).
- the publisher may accept or reject the sponsor's purchase of applications and/or the sponsor's content. If approved, the sponsor's content appears as part of the purchased applications embedded in the video.
- the interactive video player 236 plays the video 234 stored on the video server 104 back to the viewer.
- the interactive video player service servlet 216 provides video files and hypercode overlay files to a video player 236 that runs within a web browser or other clients.
- the video player 236 or the web browser may initiate one or more interactive applications 232 served by application services servlet 214 .
- the video being played in the step 324 has interactive video applications 232 embedded into it by the application integration engine 220 at contextually relevant places defined earlier by the video context editor 222 .
- the interactive video player servlet 216 allows interaction between the viewer and the embedded application.
- the interactive video player service servlet 216 also provides the video player skin, which is customized based on the video context, and is linked to embedded applications and sponsor messages.
- the interactive video player 236 also allows for viewer interaction tracking by application and video analytics 228 .
- the interactive video player servlet 216 is served by the application services software 202 .
- the interactive video player service servlet 216 loads data associated with a video 234 , including video identifier data and hypercode object data, from the hypercode overlay server 106 in XML format. After loading this data, the interactive video player servlet 216 processes the data and begins playback.
- the hypercode object data contains hotspot placement information for hypercode objects linked with applications.
- the hypercode object data also contains data associated with the application, such as application identifier data and placement data.
- the interactive video player 236 uses a common application programming interface for communicating with applications stored on application server 108 .
- Application inputs and events are specified by the associated hypercode objects.
- the interactive video player 236 reads application-related data from the hypercode object and passes the data to the interactive video applications 232 .
- the common application programming interface also allows bi-directional communication with the interactive video player service servlet 216 and application services servlet 214 .
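The common application programming interface described here can be sketched as a narrow contract: the player reads application data from the hypercode object and passes it to the application, which can emit events back toward the player and servlets. All class and field names below are illustrative assumptions, not the disclosed API.

```python
class InteractiveApplication:
    """Base class for applications driven through the common API."""
    def __init__(self):
        self.events = []          # events sent back toward the player/servlets

    def on_activate(self, inputs):
        raise NotImplementedError

class ClickToCallApp(InteractiveApplication):
    def on_activate(self, inputs):
        # 'phone' would come from the hypercode object's application data
        self.events.append(("dial", inputs["phone"]))
        return f"Calling {inputs['phone']}..."

def player_invoke(app, hypercode_object):
    """The player reads application data from the hypercode object and passes it on."""
    return app.on_activate(hypercode_object["app_data"])

app = ClickToCallApp()
spot = {"id": "spot1", "app_data": {"phone": "555-0100"}}
print(player_invoke(app, spot))  # Calling 555-0100...
print(app.events)                # [('dial', '555-0100')]
```

Because the player only ever calls the shared `on_activate` surface, any application type (audio, map, quiz, click-to-call) plugs in the same way, which is the point of a common interface.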
- the application development platform 238 ( FIG. 2 ) is served by the application development platform service servlet 218 and is used to develop applications using reusable widget blocks and other development tools.
- the application development platform 238 is used to develop new applications and integrate third party applications with hypercode objects.
- Applications are built with "widget blocks," which are integrated by application developers to create new applications or new widget blocks. Widget blocks run on the application server software 202 . Widget blocks are available on a panel in the application services engine (as discussed above). Widget blocks are typically combined to create applications, which are embedded in a video or video player skin.
- applications are Web applications that provide the viewer various ways to interact with the video and the associated content placed by publishers and sponsors.
- applications are attached to sponsored content and activated by viewer interactions, or are activated based on a timed event or some other event.
- the communication category of widget block initiates and creates outgoing audio, video, and text (e.g., chat) sessions, handles incoming audio, video, and text sessions, and the addition or deletion of multimedia streams in an existing session. For example, an interactive video stream can be added to an existing audio session, or a video stream can be dropped from an existing audio and video session.
- Other examples of communication widget blocks include a presence widget block, a click-to-call widget block, a multi party conferencing widget block, a session-on-hold widget block, a session forwarding widget block, a session transfer widget block, etc.
- the gaming category of widget blocks provides capabilities to support multiplayer strategy games, search-based games, etc.
- the messaging category of widget blocks provides capabilities to send and receive short messaging service (SMS) texts and multimedia messaging service (MMS) messages, send and receive email messages, perform text-to-voice and voice-to-text services for messaging, instant messaging/chat, etc.
- the mapping category of widget blocks provides capabilities for integrating with mapping and geographic information systems (GIS), etc.
- the above-described widget blocks can be combined and integrated, along with video and other content, to create visually rich, engaging interactive video applications using the application development platform 238 .
- An application developer designs, configures, and connects the widget blocks and other graphical user interface components to create the interactive video application logic. Because the underlying widget blocks and other components are network- and platform-independent, the resulting interactive video application can run on any client platform and communicate over any network. Thus, a single interactive video can be made available to a variety of clients, including personal computers, televisions, set-top boxes, mobile phones, and game consoles.
- the application development platform 238 provides a mechanism for converting a completed interactive video application into a new widget block.
- the new widget block can then be saved into a widget block library, allowing the completed interactive video application itself to be reused as a component of other interactive video applications.
- an interactive video application can build on other interactive video applications to provide increasingly complex services to a user.
- An application developer can also create new widget blocks by importing functionality from another source, such as custom-written source code, a Web service, an application programming interface, or any other source.
- the different types of applications include: (i) location-based maps capable of showing the viewer retail stores proximate to the viewer's location; (ii) click-to-call applications to establish direct communication with a viewer through a landline, cellular, or VOIP network, or to call a sales representative or a technical support representative; (iii) SMS applications to deliver trial offers, coupons, or discount offers to the viewer, or to send a view request to the viewer's friends; (iv) feedback applications to gather text, audio, or video responses from viewers of the video and to display these responses to publishers, sponsors, or other viewers; (v) polling applications to present viewer surveys and gather responses; (vi) quiz applications to present quizzes to viewers in the context of education videos, sports videos, or other videos; (vii) presentation applications used for creating slideshows and animations to show in conjunction with a video; and (viii) video puzzle applications that convert a frame of video into a slide puzzle consisting of smaller tiles (the size of the puzzle can vary, such as 3×2, 3×3, or 4×4).
- FIG. 11 illustrates an interactive video and embedded applications that can be created and delivered using the system of FIG. 1 and the software architecture of FIG. 2 .
- the interactive video and embedded applications could be viewed and used on any of the clients 112 - 118 .
- Interactive video 922 of a woman contains a hotspot 924 that has been created over the woman's purse.
- the hotspot 924 can trigger any of a variety of interactive applications, including a shopping cart 926, a document download 928, a phone call or SMS message 930, a product rating 932, and a store locator map 934.
- the shopping cart 926 permits a viewer to purchase the woman's purse immediately on-line.
- the document download 928 provides the viewer with more information about the purse, such as the available colors, information about the manufacturer, and other details.
- the phone call or SMS message 930 allows the viewer to immediately contact a sales representative from the purse seller or manufacturer to get more information about the purse. The viewer can simply provide his or her telephone number and receive a phone call connecting to the sales representative, or alternatively receive an SMS text message to initiate a chat session with the sales representative.
- the product rating 932 permits the viewer to enter a rating for the purse and comment on the purse.
- the nearest store locator 934 allows the viewer to provide an address and get information about stores near that location where the purse is for sale. The nearest store locator 934 can also provide driving directions from a provided address.
- the viewer can obtain information about and directions to the store nearest to the viewer's current location.
- the interactive video and embedded applications allow a viewer to engage with and interact with a video in ways not previously possible.
- the interactive video and embedded applications also report the viewer's engagement to the application & video analytics 228 .
- the analytics server 110 records information about the viewer's actions, such as which hotspots the viewer clicked on, which parts of the interactive video 922 , if any, were replayed and how many times, and which parts of the interactive video 922 , if any, were skipped over. This information may be sent as each action is recorded, at a predetermined interval, or when the viewer takes an action, such as closing or navigating away from the interactive video 922 .
- the application & video analytics 228 then compiles the information from all instances of the interactive video 922 and generates reports for the video content owner, sponsor, or other interested party.
- analytics server 110 records interactions with applications embedded by the hypercode objects in the interactive video 922 and/or the video player skin. For example, a viewer can click on a hypercode object to trigger an application that delivers additional sponsor content to the viewer's email address. This action is analyzed by the publisher and/or sponsor using application & video analytics 228 to improve delivery of applications and sponsor content. In this way, application & video analytics 228 assists sponsors in selecting, positioning, and customizing applications that will generate the most revenue for the publisher or sponsor.
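The reporting behavior described above, sending interaction data as each action is recorded, at a predetermined interval, or when the viewer takes an action such as navigating away, can be sketched as a small batching client. This is an illustrative assumption, not code from the disclosure; the class name `AnalyticsBatcher` and the event fields are invented for the sketch.

```python
import json
import time

class AnalyticsBatcher:
    """Collects viewer interaction events and flushes them in batches.

    A flush happens when the batch reaches a size limit, when a
    predetermined interval elapses, or (via an explicit flush() call)
    when the viewer closes or navigates away from the video.
    """

    def __init__(self, video_id, send, max_events=20, interval_s=30.0):
        self.video_id = video_id
        self.send = send              # callable that posts a batch to the analytics server
        self.max_events = max_events
        self.interval_s = interval_s
        self.events = []
        self.last_flush = time.monotonic()

    def record(self, action, hotspot_id=None, position_s=None):
        self.events.append({
            "video": self.video_id,
            "action": action,          # e.g. "hotspot_click", "replay", "skip"
            "hotspot": hotspot_id,
            "position_s": position_s,
        })
        if len(self.events) >= self.max_events or \
           time.monotonic() - self.last_flush >= self.interval_s:
            self.flush()

    def flush(self):
        if self.events:
            self.send(json.dumps(self.events))
            self.events = []
        self.last_flush = time.monotonic()

# Usage: a tiny batch size so the second event triggers a flush.
sent = []
batcher = AnalyticsBatcher("video-922", sent.append, max_events=2)
batcher.record("hotspot_click", hotspot_id="924", position_s=12.5)
batcher.record("replay", position_s=0.0)
```

In a real player, `flush()` would also be wired to the unload/close event so no interactions are lost.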
- a first example interactive video application is a real-estate browsing application.
- the application combines functionality provided by various widget blocks such as click-to-talk, instant messaging, voice mail, email, video conferencing, multiple listing service listings, interactive video, searching, and maps.
- the real-estate browsing application allows a viewer to search for and view homes via interactive video.
- the viewer can then communicate with a real-estate listing agent via voice call, SMS, email, voicemail, instant message, video conference, or any other supported form of communication.
- the viewer can invite additional individuals, such as family or friends, to join the conversation or to view the interactive video.
- the viewer can engage in a visually rich and meaningful home search with extensive participation by the real-estate agent, family and friends.
- Another example interactive video application is an interactive advertisement in a video offered by a video-on-demand system.
- a viewer selects a video to watch, which launches the interactive video application.
- the viewer may select a video to watch from within another interactive video application.
- the selected video begins to play, and during the playback one or more hotspots appear to indicate to the viewer that more information is available about certain objects within the video.
- the objects may be highlighted for the viewer by visible highlighting, such as dimming or lightening effects, contrast or saturation adjustments, outlining, or any other technique. If the viewer interacts with a highlighted object, such as by using any input device including a keyboard, mouse, touch screen, or remote control, an event in the interactive video application is triggered.
- the event causes the video to pause and opens a new window with information about the object.
- the video may continue to play in the background.
- the information in the new window may be in audio, video, text, or any other form, and may provide the user with features for buying the object, jumping to another video or website about the object, or any other interactive feature. The viewer may then close the newly opened window and resume watching the selected video.
- Another example interactive video application is an interactive advertisement in a live video.
- a viewer watches a live video feed that may include news, a sporting event, or any suitable content.
- An interactive advertisement is placed on the live video stream and may be highlighted using a frame, glowing spot, or any other suitable technique. If the viewer interacts with the interactive advertisement, such as by using any input device including a keyboard, mouse, touch screen, or remote-control, an event in the interactive video application is triggered and causes a pop-up window or screen overlay to appear with more information.
- the viewer may be offered options such as receiving a coupon by email, SMS message, or contacting a sales agent by phone or video conference.
- Another example interactive video application is a context-sensitive interactive advertisement placed in a video, which may be live video or stored video. Based on tags associated with the video, an interactive advertisement is selected from a library of interactive advertisements. In this way, the selected interactive advertisement is relevant to the video already being watched and is more likely to be of interest to the viewer. For example, a viewer watching a music video can be shown an interactive advertisement for an upcoming music concert being performed in the viewer's local area. As another example, a viewer watching a movie can be shown advertisements for other movies starring some of the same actors as the watched movie.
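The tag-based selection described above can be sketched as a simple overlap score between the video's tags and each advertisement's tags. This is a minimal illustration, assuming tags are plain strings; the function name and library entries are hypothetical, not part of the disclosure.

```python
def select_advertisement(video_tags, ad_library):
    """Pick the interactive advertisement whose tags best overlap the video's tags.

    ad_library is a list of (ad_id, tags) pairs; returns the ad_id with the
    largest tag overlap, or None if nothing matches.
    """
    video_tags = set(video_tags)
    best_id, best_score = None, 0
    for ad_id, tags in ad_library:
        score = len(video_tags & set(tags))
        if score > best_score:
            best_id, best_score = ad_id, score
    return best_id

# Hypothetical library: a concert promotion and a movie trailer.
library = [
    ("concert-promo", {"music", "live", "concert"}),
    ("movie-trailer", {"movie", "actor"}),
]
chosen = select_advertisement({"music", "video", "pop"}, library)
# "concert-promo" shares the "music" tag, so it is selected
```

A production system would likely also weight tags and factor in the viewer's location and demographics, as described elsewhere in the disclosure.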
- Yet another example interactive video application is an interactive instructional video.
- a viewer watches the interactive instructional video which can be an educational video for new employees, an installation guide video, or any other kind of instructional video.
- navigable objects overlay the video and allow the user to make navigation choices. For example, the choices may allow a viewer to replay a section or to jump from one video section to another related section or video.
- the viewer may be prompted to answer a question regarding the video section just viewed. If the viewer answers correctly, the video continues playing normally. If the viewer answers incorrectly, the previous video section is replayed so that the user can learn the information needed to answer the question.
- the viewer who completes watching the video will have demonstrated that he or she learned the material.
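The branching playback rule described above (a correct answer continues the video; an incorrect answer replays the previous section) can be sketched as a small control loop. The function and callback names are illustrative assumptions, not part of the disclosure.

```python
def run_instructional_video(sections, ask, play):
    """Play each video section; after each, ask a question about it.

    A correct answer advances to the next section; an incorrect answer
    replays the section just viewed. Completing all sections demonstrates
    that the viewer learned the material.
    """
    i = 0
    while i < len(sections):
        play(sections[i])
        if ask(sections[i]):       # True if the viewer answered correctly
            i += 1                 # continue playing normally
        # otherwise the loop repeats: the previous section is replayed
    return True

# Usage: the viewer fails the first question once, then answers correctly.
played = []
answers = iter([False, True, True])
run_instructional_video(["intro", "setup"], lambda s: next(answers), played.append)
# played == ["intro", "intro", "setup"]
```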
- an illustrative node 950 for implementing one or more embodiments of one or more of the above-described networks, elements, methods and/or steps, and/or any combination thereof, is depicted.
- the node 950 includes a microprocessor 952 , an input device 958 , a storage device 954 , a video controller 964 , a system memory 956 , a display 966 , and a communication device 960 all interconnected by one or more buses 962 .
- the storage device 954 may include a floppy drive, hard drive, CD-ROM, optical drive, any other form of storage device and/or any combination thereof.
- the storage device 954 may include, and/or be capable of receiving, a floppy disk, CD-ROM, DVD-ROM, or any other form of computer-readable medium that may contain executable instructions.
- the communication device 960 may include a modem, network card, or any other device to enable the node to communicate with other nodes.
- any node represents a plurality of interconnected (whether by intranet or Internet) computer systems, including without limitation, personal computers, mainframes, PDAs, and cell phones.
- one or more of the system 100 , the software architecture 200 , and/or component thereof are, or at least include, the node 950 and/or components thereof, and/or one or more nodes that are substantially similar to the node 950 and/or components thereof.
- the system 100 typically includes at least hardware capable of executing machine readable instructions, as well as the software for executing acts (typically machine-readable instructions) that produce a desired result.
- system 100 may include hybrids of hardware and software, as well as computer sub-systems.
- hardware may include any physical device that is capable of storing machine-readable instructions, such as memory or other data storage devices.
- other forms of hardware include hardware sub-systems, including transfer devices such as modems, modem cards, ports, and port cards, for example.
- the software architecture 200 includes any machine code stored in any memory medium, such as RAM or ROM, and machine code stored on other devices (such as floppy disks, flash memory, or a CD ROM, for example).
- the software architecture 200 may include source or object code.
- the software architecture 200 encompasses any set of instructions capable of being executed on a node such as, for example, on a client machine or server.
- combinations of software and hardware could also be used for providing enhanced functionality and performance for certain embodiments of the present disclosure.
- software functions may be directly manufactured into a silicon chip. Accordingly, it should be understood that combinations of hardware and software are also included within the definition of a computer system and are thus envisioned by the present disclosure as possible equivalent structures and equivalent methods.
- computer readable mediums include, for example, passive data storage, such as a random access memory (RAM) as well as semi-permanent data storage such as a compact disk read only memory (CD-ROM).
- One or more exemplary embodiments of the present disclosure may be embodied in the RAM of a computer to transform a standard computer into a new specific computing machine.
- data structures are defined organizations of data that may enable an embodiment of the present disclosure.
- a data structure may provide an organization of data, or an organization of executable code.
- the network 102 may be designed to work on any specific architecture.
- one or more portions of the network 102 may be executed on a single computer, local area networks, client-server networks, wide area networks, internets, hand-held and other portable and wireless devices and networks.
- the database server software 242 may be any standard or proprietary database software, such as Oracle, Microsoft Access, SyBase, or DBase II, for example.
- the database server software 242 may have fields, records, data, and other database elements that may be associated through database specific software.
- data may be mapped.
- mapping is the process of associating one data entry with another data entry.
- the data contained in the location of a character file can be mapped to a field in a second table.
- the physical location of the database server software 242 is not limiting, and the database may be distributed.
- the database server software 242 may exist remotely from the application server software 202 , and run on a separate platform. In an exemplary embodiment, the database server software 242 may be accessible across the Internet. In several exemplary embodiments, more than one database may be implemented.
- the system 100 with the software architecture 200 provides a system for a video publisher that associates and pushes relevant, interactive, and targeted applications to viewers of videos on any multimedia client, such as a personal computer, gaming device, or mobile device.
- the system 100 with the software architecture 200 provides a system that dynamically places a set of interactive applications on a video player skin or on hotspots within a video stream using hypercode objects.
- One or more exemplary hypercode objects, and/or portions or combinations thereof, may be implemented according to the example files provided in the Appendices below. This placement allows a publisher to link interactive call-to-action applications to the video that are customized based on the context of the video.
- the interactive applications can be sponsored by any sponsor desiring media viewer interaction with these call-to-action applications.
- the system determines the location and demographics of the viewer and pushes demographically and contextually relevant interactive call-to-action applications as part of the video and video player.
- the system 100 with the software architecture 200 provides a system for a video publisher that embeds interactive applications in the video player skin or in hotspots in the video stream.
- the embedded interactive applications can be activated based on time, viewer interaction, or some other event. These applications follow the video virally on any client on which the video player is located.
- the system 100 with the software architecture 200 provides a system by which custom applications may be developed using widgets on an application development platform that allows developers and others to create interactive applications and integrate them with the video.
- the system also records and provides statistics related to various relevant parameters for analyzing and improving the delivery of the applications to viewers and provides metrics relevant to the publisher and sponsor for business intelligence and commercial use.
- the applications provide a rich and engaging video experience to the viewer and a monetization solution for the video publisher while effectively delivering the sponsor's messages to viewers.
- a method includes identifying a hotspot in a portion of a video content, overlaying a hypercode object on the hotspot at a spatial point, causing the hypercode object to be displayed at a temporal point during playback of the video content, and providing an interactive application in response to activation of the hypercode object.
- the method includes analyzing the video content at the spatial point and the temporal point and isolating at least one graphical object detected in the video content at the spatial point and the temporal point.
- the method includes receiving a temporal range comprising a start time and an end time, wherein the start time is the temporal point, and tracking a movement of the graphical object in the video content with the hypercode object beginning at the start time and continuing until the end time.
- the isolating of the at least one graphical object comprises: isolating multiple graphical objects in the video content; providing a list of the multiple graphical objects; and receiving a selection of one graphical object from the list.
- the method includes defining a context for the video content and selecting the interactive application according to the context.
- in the method, providing the interactive application further pauses playback of the video content.
- the interactive application displays advertising content.
- the method includes modifying the timing or location of the hypercode object.
- the method includes making the interactive application available to a sponsor and customizing the interactive application according to a request by the sponsor.
- the method includes obtaining data related to viewer interaction with the interactive application and revising the interactive application based on the data.
- the hypercode object is an XML file.
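The claims state only that the hypercode object is an XML file; the Appendices referenced above are not reproduced here, so the following layout is an assumption for illustration. It sketches how such a file might encode the spatial point, the temporal range, and the linked interactive application, parsed with Python's standard library.

```python
import xml.etree.ElementTree as ET

# Hypothetical hypercode object: the element and attribute names are
# illustrative, not taken from the Appendices of the disclosure.
HYPERCODE = """
<hypercode video="purse-demo">
  <hotspot id="924" x="0.42" y="0.55" start="12.0" end="18.5">
    <application type="shopping-cart" ref="926"/>
  </hotspot>
</hypercode>
"""

root = ET.fromstring(HYPERCODE)
hotspot = root.find("hotspot")

# Spatial point (as fractions of the frame) and temporal range in seconds.
spatial_point = (float(hotspot.get("x")), float(hotspot.get("y")))
temporal_range = (float(hotspot.get("start")), float(hotspot.get("end")))

# The interactive application linked to this hotspot.
app = hotspot.find("application").get("type")
# spatial_point == (0.42, 0.55); app == "shopping-cart"
```

The start/end pair corresponds to the claimed temporal range used to track the graphical object's movement with the hypercode object.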
- An apparatus includes a computer-readable physical medium containing instructions executable on a computer that when executed cause the computer to identify a hotspot in a portion of a video content, overlay a hypercode object on the hotspot at a spatial point, cause the hypercode object to be displayed at a temporal point during playback of the video content, and provide an interactive application in response to activation of the hypercode object.
- the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to analyze the video content at the spatial point and the temporal point and isolate at least one graphical object detected in the video content at the spatial point and the temporal point.
- the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to: receive a temporal range comprising a start time and an end time, wherein the start time is the temporal point, and track a movement of the graphical object in the video content with the hypercode object beginning at the start time and continuing until the end time.
- the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to: isolate multiple graphical objects in the video content, provide a list of the multiple graphical objects and receive a selection of one graphical object from the list.
- the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to define a context for the video content and select the interactive application according to the context.
- the computer-readable physical medium contains instructions executable on a computer that when executed cause the computer to pause playback of the video content upon activation of the hypercode object.
- the interactive application displays advertising content.
- the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to modify the timing or location of the hypercode object.
- the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to make the interactive application available to a sponsor and customize the interactive application according to a request by the sponsor.
- the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to obtain data related to viewer interaction with the interactive application; and revise the interactive application based on the data.
- a system has been described that includes an interactive video player responsive to a video content request to provide a video content to a client device, a video context editor responsive to a request to identify and track movement of an object in the video content automatically and thereby generate object motion data, an application integration engine responsive to a request to link an interactive application to a hypercode object, the hypercode object incorporating the object motion data, and an analytics server responsive to receipt of user interaction data from the client device to store the user interaction data in a database.
- the hypercode object is an XML file.
- the user interaction data indicates whether a user at the client device initiated the interactive application linked with the hypercode object.
- a system has been described that includes a means for identifying a hotspot in a portion of a video content, a means for overlaying a hypercode object on the hotspot at a spatial point, a means for causing the hypercode object to be displayed at a temporal point during playback of the video content, and a means for providing an interactive application in response to activation of the hypercode object.
- the system includes a means for analyzing the video content at the spatial point and the temporal point and a means for isolating at least one graphical object detected in the video content at the spatial point and the temporal point.
- the system includes a means for receiving a temporal range comprising a start time and an end time, wherein the start time is the temporal point, and a means for tracking a movement of the graphical object in the video content with the hypercode object beginning at the start time and continuing until the end time.
- the means for isolating the at least one graphical object comprises: a means for isolating multiple graphical objects in the video content; a means for providing a list of the multiple graphical objects; and a means for receiving a selection of one graphical object from the list.
- the system includes a means for defining a context for the video content and a means for selecting the interactive application according to the context.
- the means for providing the interactive application further pauses playback of the video content.
- the interactive application displays advertising content.
- the system includes a means for modifying the timing or location of the hypercode object.
- the system includes a means for making the interactive application available to a sponsor and a means for customizing the interactive application according to a request by the sponsor.
- the system includes a means for obtaining data related to viewer interaction with the interactive application and a means for revising the interactive application based on the data.
- the hypercode object is an XML file.
- a method has been described that includes associating at least one interactive application with a video, the at least one interactive application being contextually relevant to the subject matter of the video, wherein associating the at least one interactive application with the video comprises at least one of the following: embedding the at least one interactive application on a video player skin that is proximate to the video during playback of the video, and embedding the at least one interactive application in one or more hotspots within the video; and activating the at least one interactive application in response to one or more of the following: the passage of one or more time periods during playback of the video, and one or more interactions initiated by one or more viewers of the video during playback of the video; wherein the at least one interactive application is sponsored by a sponsor and comprises one or more of the following: one or more messages from the sponsor, each of the one or more messages being relevant to the subject matter of the video, and one or more call-to-action applications, each of the one or more call-to-action applications comprising a request that the one or more viewers of the video initiate at least one action.
- a system has been described that includes a computer readable medium comprising a plurality of instructions stored therein, the plurality of instructions comprising: instructions for associating at least one interactive application with a video, the at least one interactive application being contextually relevant to the subject matter of the video, wherein the instructions for associating the at least one interactive application with the video comprise at least one of the following: instructions for embedding the at least one interactive application on a video player skin that is proximate to the video during playback of the video, and instructions for embedding the at least one interactive application in one or more hotspots within the video; and instructions for activating the at least one interactive application in response to one or more of the following: the passage of one or more time periods during playback of the video, and one or more interactions initiated by one or more viewers of the video during playback of the video; wherein the at least one interactive application is sponsored by a sponsor and comprises one or more of the following: one or more messages from the sponsor, each of the one or more messages being relevant to the subject matter of the video, and one or more call-to-action applications, each of the one or more call-to-action applications comprising a request that the one or more viewers of the video initiate at least one action.
- Although steps, processes, and procedures are described as distinct acts, one or more of the steps, one or more of the processes, and/or one or more of the procedures may also be performed in different orders, simultaneously, and/or sequentially. In several exemplary embodiments, the steps, processes, and/or procedures may be merged into one or more steps, processes, and/or procedures.
- one or more of the operational steps in each embodiment may be omitted.
- some features of the present disclosure may be employed without a corresponding use of the other features.
- one or more of the above-described embodiments and/or variations may be combined in whole or in part with any one or more of the other above-described embodiments and/or variations.
Abstract
A system and method of delivering an interactive video application includes identifying a hotspot in a portion of a video content. A hypercode object is overlaid on the hotspot at a spatial point. The hypercode object is displayed at a temporal point during playback of the video content. An interactive application is provided to a viewer of the video in response to activation of the hypercode object.
Description
- Many people are now familiar with using the World Wide Web and other hyperlink-based communication systems. The World Wide Web has traditionally been a primarily text-based communication medium with a relatively high level of engagement and interaction with media viewers. Television, on the other hand, is a highly visual, primarily video-based communication medium, but is generally passive and not as interactive with media viewers. The present disclosure relates in general to interactive video applications, and in particular to a system and method for integrating interactive call-to-action, contextual application with videos.
- The present disclosure is best understood from the following detailed description when read with the accompanying figures. It is emphasized that, in accordance with the standard practice in the industry, various features are not drawn to scale. In fact, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion. Furthermore, all features may not be shown in all drawings for simplicity.
- FIG. 1 is a diagrammatic illustration of a system for managing and delivering interactive video applications according to an exemplary embodiment.
- FIG. 2 is a diagrammatic illustration of a software architecture for operating the system of FIG. 1 for managing and delivering interactive video applications according to an exemplary embodiment.
- FIGS. 3A and 3B are flow chart illustrations of a method for managing and delivering interactive video applications using the system of FIG. 1 and the software architecture of FIG. 2 according to an exemplary embodiment.
- FIG. 4 illustrates a user interface for defining the context and properties of videos used to deliver interactive video applications according to an exemplary embodiment.
- FIG. 5 illustrates a user interface for customizing a video player used to deliver interactive video applications according to an exemplary embodiment.
- FIG. 6 illustrates a user interface for defining and linking interactive video applications to a video or video player according to an exemplary embodiment.
- FIG. 7 is a diagrammatic illustration of a method for automatically determining a list of graphical objects in a video according to an exemplary embodiment.
- FIG. 8 is a diagrammatic illustration of a method for automatically generating tracking data for a graphical object in a video according to an exemplary embodiment.
- FIG. 9 illustrates a user interface for managing a system of presenting interactive video applications to sponsors according to an exemplary embodiment.
- FIG. 10 illustrates a user interface for managing a system of buying and managing interactive video applications according to an exemplary embodiment.
- FIG. 11 illustrates an interactive video application that can be created and delivered using the system of FIG. 1 and the software architecture of FIG. 2 according to an exemplary embodiment.
- FIG. 12 is a diagrammatic illustration of a node for implementing one or more exemplary embodiments of the present disclosure.
- The present disclosure relates generally to interactive video applications. It is understood that the following disclosure provides many different embodiments, or examples, for implementing different features of the invention. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting.
- Referring to
FIG. 1 , illustrated is asystem 100 for delivering interactive video applications. Thesystem 100 includes anetwork 102, which is preferably the Internet but may be a private intranet, a local area network (LAN), a wide area network (WAN), an adhoc network, cellular network including CDMA, GSM, and UMTS, a cable network, DSL network, fiber network, WiMAX network, or a combination of some or all of these networks, or any other suitable network. Communicating with and over thenetwork 102 are a variety of servers and clients. The servers include avideo server 104, ahypercode overlay server 106, anapplication server 108, and ananalytics server 110. Each of these servers may be implemented using hardware, software, or a combination of the two. The servers 104-110 may be separate from one another, or some or all of them may share computing resources such as data storage, network access, processing resources, memory, operating systems, software libraries, and the like. The servers may be controlled by one entity, or they may be under the control of separate entities. For example, thevideo server 104 may be controlled by a media company, thehypercode overlay server 106 and theapplication server 108 may be controlled by a separate marketing company, and theanalytics server 110 may be controlled by a third company. - In an exemplary embodiment, during the operation of the
system 100, video publishers identify application hotspots within a video stored on the video server 104. Hotspots are spatial and temporal locations within a video that are deemed important. Importance can be based on a key aspect of the video, a particular point in time in the video, or an arbitrary point in the video. Contextually relevant applications are then associated with each of the hotspots by hypercode stored on the hypercode overlay server 106. In one embodiment, sponsors are made aware of the application hotspots and then buy or bid on contextually relevant call-to-action interactive applications to be associated with the video. These applications are stored on the application server 108 and are embedded (at the hotspots) through the use of hypercode within the video stored on the video server 104. The hypercoding process includes (i) the process of incorporating hypercode objects on a virtual timeline that is linked to a video player or to certain objects/areas within the video and (ii) the process of incorporating one or more hypercode objects while the video player is playing the video and executing the actions specified by the one or more hypercode objects. - A
video server 104 provides video content to the video player and other parts of the system. The video server 104 may include multiple servers that provide redundant serving capacity for video content, and a server may be selected to provide video content to a particular viewer based on the geographic location of the viewer. In this way, the server that is logically or physically nearest to the viewer can deliver the requested video content. The video content may be provided by hypertext transfer protocol (HTTP), real-time transport protocol (RTP), real time messaging protocol (RTMP), or any other suitable protocol. - Viewers interact with the video via the interactive applications in order to obtain more information, receive associated services, make a purchase, etc. These applications can be activated based on time, user interaction, or some other event. For example, the viewer can mouse over a hypercode object and be presented with a menu of applications, such as product info (type, available colors, prices, etc.), retail location search, click-to-call, coupons, etc. As an example, when the video player and the video are loaded, the applications embedded in the video player skin and video stream are initially sponsored by a first sponsor for the first few minutes. The first sponsor's applications may include a custom video player skin, a click-to-call application, a retailer location search, a coupon, etc. After the first few minutes, a second sponsor sponsors the applications in the video player and in the video stream, such that the video player has a different skin, a different click-to-call application, a second retailer location search, new coupons, etc. In another example, applications appear at certain intervals throughout the video and are sponsored by different sponsors.
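The time-windowed sponsorship described above (a first sponsor's applications for the first few minutes, then a second sponsor's) amounts to scheduling application sets along the playback timeline. A minimal sketch; the sponsor names, window boundaries, and field names are illustrative assumptions, not taken from the disclosure:

```python
# Sketch of time-based application sponsorship: each entry sponsors a window
# of the playback timeline. All names and values here are hypothetical.
SCHEDULE = [
    {"sponsor": "sponsor-a", "start": 0,   "end": 180, "apps": ["click-to-call", "coupon"]},
    {"sponsor": "sponsor-b", "start": 180, "end": 600, "apps": ["store-search"]},
]

def active_sponsorship(schedule, playback_seconds):
    """Return the schedule entry whose window covers the current playback time."""
    for entry in schedule:
        if entry["start"] <= playback_seconds < entry["end"]:
            return entry
    return None
```

A player tick loop would call `active_sponsorship` with the current playback position and swap the skin and applications when the returned entry changes.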
- The
analytics server 110 performs tracking of viewer interaction with the embedded applications. The tracking data may be used by publishers and sponsors for business intelligence and financial analysis purposes, and to improve the application delivery. - Also communicating with and over the
network 102 are a variety of clients including a web client 112, a desktop client 113, a television client 114, a mobile client 116, and a game console client 118. The web client 112 may be web browser software executing on a personal computer, for example Microsoft Internet Explorer, Google Chrome, Mozilla Firefox, Apple Safari, or any other suitable web client. The television client 114 may be a television, set-top box, cable converter, digital video recorder, or any other suitable television client. The mobile client 116 may be a personal digital assistant, mobile phone, smart phone, portable media player, MP3 player, automobile, wearable computer, or other suitable mobile client. The game console client 118 may be a video game console, such as a Microsoft Xbox 360, Sony PlayStation 3, Nintendo Wii, or any other suitable game console or platform. - In the
system 100, each of the clients 112-118 communicates with one or more of the servers 104-110. As one example, the web client 112 may request interactive content from the application server 108. In response, the application server 108 directs the web client 112 to request a video from the video server 104 and a hypercode overlay from the hypercode overlay server 106, and to attach applications from the application server 108. The web client 112 subsequently reports information on a viewer's interaction with the received video, hypercode objects, and triggered applications to the analytics server 110. Of course, it is understood that any of the other clients 114-118 could also be used to access the same or similar content instead of, or in addition to, the web client 112. -
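The request sequence just described can be pictured as a manifest that the application server 108 hands back to a client, pointing it at the other servers. The URL patterns and field names below are assumptions; the disclosure does not specify a wire format:

```python
# Hypothetical manifest the application server 108 might return to the web
# client 112. Every hostname and field name here is illustrative only.
def build_manifest(video_id):
    return {
        "video_url": f"https://video.example.com/{video_id}.mp4",            # video server 104
        "overlay_url": f"https://overlay.example.com/{video_id}.xml",        # hypercode overlay server 106
        "app_urls": [f"https://apps.example.com/{video_id}/click-to-call"],  # application server 108
        "analytics_url": "https://analytics.example.com/track",              # analytics server 110
    }
```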
FIG. 2 illustrates an exemplary embodiment of a software architecture 200 used with the system 100 to manage and deliver interactive video applications. As the focus of FIG. 2 is on the software architecture and not on the hardware architecture, each part of the architecture 200 may be stored and/or executed on one or more of the above-described components of the system 100. In several exemplary embodiments, instead of, or in addition to, being stored and/or executed on one or more components of the system 100, each of the parts of the described architecture 200 may execute on one computer or on multiple computers, and it is understood that the various parts of the architecture 200 may execute on different computers, and that the computers may implement some parts while not implementing others. - The
software architecture 200 includes application server software 202. In an exemplary embodiment, the application server software 202 is Java application software, although non-Java application server software could also be used. The application server software 202 is operably coupled to database software 242. The application server software 202 supports various server processes or "servlets," including an application integration engine (AIE) servlet 204, a video context editor (VCE) servlet 206, a sponsor space manager (SSM) servlet 208, a sponsor campaign manager (SCM) servlet 210, an application & video analytics (AVA) servlet 212, an application services servlet 214, an interactive video player (IVP) service servlet 216, and an application development platform (ADP) service servlet 218. - Some or all of the servlets 204-218 may rely on services provided by one another, and thus they may communicate with each other either directly or indirectly through the
application server software 202. The various servlets 204-218 may store their associated data in one database, or they may store data in multiple databases, which may or may not be shared among and between the servlets. Some servlets may access or store data in multiple databases. - The application
integration engine servlet 204 is operably coupled to and responds to requests from an application integration engine 220 for customizing a video player and linking applications with hotspots in a video using hypercode objects. The application integration engine servlet 204 is also operably coupled to and responds to requests from the application integration engine 220 for defining the properties of hotspots related to applications in a video. The video context editor servlet 206 is operably coupled to and responds to requests from a video context editor 222 for defining the context and location of hotspots in a video. The video context editor servlet 206 is also operably coupled to speech and video analysis server software 240. The sponsor space manager servlet 208 is operably coupled to and responds to requests from a sponsor space manager 224 for placing hotspots and appropriate applications in videos up for purchase or bid by sponsors. The sponsor campaign manager servlet 210 is operably coupled to and responds to requests from the sponsor campaign manager 226 for managing the creation and oversight of sponsors' campaigns. - The application &
video analytics servlet 212 is operably coupled to and responds to requests from an application & video analytics 228 for counting the number of times videos and applications have been viewed or delivered to viewers, as well as for analyzing the different types of viewer interactions with videos and applications. The application & video analytics servlet 212 may also perform analysis on viewer interaction data to produce charts and tables to be made available to publishers and sponsors. In one embodiment, the application & video analytics servlet 212 records viewer interactions with a video to the analytics server 110 using the database software 242. In one embodiment, the application & video analytics servlet 212 records the location of the viewer, the originating link for the video, the most popular and least popular sections of the video, etc. - The
application services servlet 214 is operably coupled to an interactive video player 236 and allows publishers and sponsors to serve interactive applications to a viewer. The interactive video player service servlet 216 is operably coupled to the interactive video player 236 and allows publishers and sponsors to serve video to a viewer. The interactive video player service servlet 216 is a server process that runs on the hypercode overlay server 106. - The application development
platform service servlet 218 is operably coupled to and responds to requests from an application development platform 238 for creating and customizing new applications using widget blocks. - A content management
system user interface 230 contains a graphical user interface that acts as the main console for publishers and sponsors to manage the content of the video and the applications. The content management system user interface 230 may also be used by administrators, publishers, and sponsors. This content management system user interface 230 is operably coupled to the application integration engine 220, the video context editor 222, the sponsor space manager 224, the sponsor campaign manager 226, and the application & video analytics 228. - The
application integration engine 220 served by the application integration engine servlet 204 allows the video content owner or the publisher to embed interactive applications at application hotspots defined by the video context editor 222 served by the video context editor servlet 206. The applications employed by the application integration engine 220 are stored on the application server 108. The content owner or the publisher uses the application integration engine 220 to embed applications in the video by defining various types of hotspots at certain positions and times within the video stream using hypercode objects. - In one embodiment, the
application integration engine 220 links applications at the hotspots by non-intrusive hypercode objects within the video. A hypercode object is a passive software marker, usually invisible to the viewer, that is linked to a video player skin or video stream. A virtual timeline is a schedule of hypercode objects linked to a video. The virtual timeline is activated when the video player starts the video playback. The video player reads the virtual timeline and takes the appropriate action based on the applicable scheduling of the hypercode objects in the virtual timeline. In one embodiment, a hypercode object becomes visible when a viewer moves a mouse cursor over the hypercode object. Hypercode objects have a variety of properties, such as time, duration, x-axis position, y-axis position, size, action, and type. The time and duration properties indicate the activation time and duration of the hypercode object along the virtual timeline. The x-axis position, y-axis position, and size properties are used to determine the spatial point and size of the hypercode object on the video stream. The action property indicates the action to be taken by the video player. In an exemplary embodiment, hypercode objects are saved to an XML file, although the hypercode objects could also be saved in any suitable file format. Various examples of hypercode object XML files are provided in appendices at the end of this disclosure. The examples show various features and properties that are available for the hypercode objects, including id, size, time, duration, and action. The type property identifies the type of hypercode object. One type of hypercode object is an audio hypercode object, which plays audio files associated with it. In one embodiment, viewers can identify audio hypercode objects by rolling a mouse pointer over them, causing a distinguishing audio icon to appear.
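A hypercode object carrying the properties listed above might be serialized along these lines. The element and attribute names here are assumptions for illustration; the actual formats appear in the appendices to the disclosure, which are not reproduced in this section:

```python
import xml.etree.ElementTree as ET

# Sketch of saving a hypercode object to XML. The schema (a <hypercode>
# element with id/type attributes and one child element per property) is
# an assumed layout, not the format from the appendices.
def hypercode_to_xml(obj):
    root = ET.Element("hypercode", id=obj["id"], type=obj["type"])
    for prop in ("time", "duration", "x", "y", "width", "height", "action"):
        ET.SubElement(root, prop).text = str(obj[prop])
    return ET.tostring(root, encoding="unicode")

# Hypothetical audio hypercode object active from 0:55 for 15 seconds.
spot = {"id": "spot1", "type": "audio", "time": 55, "duration": 15,
        "x": 120, "y": 80, "width": 320, "height": 80, "action": "play"}
```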
In one embodiment, an audio application will be executed by an audio hypercode object when the viewer moves a mouse pointer over the audio hypercode object, and execution will cease when the viewer moves the mouse pointer away from the audio hypercode object. Another type of hypercode object is an image hypercode object, which may be displayed in an image banner. The viewer clicks on the image hypercode object to execute an interactive application, which, in one embodiment, links to a specific uniform resource locator. In one embodiment, the image hypercode object contains files in the JPEG, PNG, or GIF file formats. Another type of hypercode object is a text hypercode object. In one embodiment, text is added to a text hypercode object using hypertext markup language. Another type of hypercode object is a video hypercode object. In one embodiment, viewer interaction with a video hypercode object executes an application that plays another video within the video containing the video hypercode object. - The
interactive video player 236 displays hypercode objects of the following shapes: circle, rectangle, round rectangle, dotted rectangle, dashed rectangle, and irregular. In other exemplary embodiments, other shapes could be used. In one embodiment, to display a hypercode object of irregular shape, the video player receives a series of XML point instructions that are used to draw the irregular shape. Hypercode objects may be animated in a linear, curving, or multiple-curving direction to track moving graphical objects in a video. Hypercode objects are added to a video player skin or to certain graphical objects or certain areas of a video stream. Adding these hypercode objects causes the video player skin and those areas and objects to become interactive. When viewers provide input to a hypercode object in the video player skin or video stream, the application linked with the hypercode object is invoked. The hypercoding process enables the deployment of applications temporally and spatially in a video stream. In one embodiment, a sponsor buys customized applications linked with embedded hypercode objects. After a video is published to the public, viewers viewing the video click on the embedded hypercode objects to be redirected to a sponsor's landing page or otherwise receive additional information from the sponsor through applications. Typically, hypercode objects do not activate applications unless and until a viewer interacts with them via a mouse-over or mouse-click. However, hypercode objects can also invoke applications based on certain time intervals or certain events, without direct input from the viewer. In one embodiment, activation of an application associated with a hypercode object occurs at a particular time in the video. - In an exemplary embodiment, as illustrated in
FIGS. 3A and 3B , a method of operating the system 100 using the software architecture 200 is generally referred to by the reference numeral 300 and includes a step 302, which includes beginning a content management system session using the content management system user interface 230. A step 304 includes selecting a video from a library or a remote source. In one embodiment, to execute the step 304, a publisher uploads the video from a remote source to the video server 104. In another embodiment, to execute the step 304, the publisher selects a video from a library of videos stored on the video server 104. At a step 306, the publisher defines the context of the video using the video context editor 222. In an exemplary embodiment, contextual information is defined either manually, without computer assistance, or with computer assistance. In the manual case, the publisher enters information about the video, such as an overall topical category (e.g., sports or news) and individual topical categories, time codes, and durations for each scene in the video. In the computer-assisted case, contextual information is added through a set of automatic processes that do not require any input from the publisher. In one embodiment, this process requires the speech and video analysis server software 240 to be linked to an object signature database and a speech or word signature database. In the computer-assisted case, the video context editor 222 differentiates the scenes of the video by computing and comparing frame-to-frame histograms. This process generates contextual information for each scene and attaches the information to a video context file. - At a
step 308, using the video context editor 222, the publisher identifies hotspots within the video and the video player skin. At a step 310, using the application integration engine 220, the publisher defines properties for hypercode objects embedded at the hotspots, either manually or with computer assistance, or both. In the manual case, the publisher selects the desired properties for the embedded hypercode object (e.g., shape, text, audio, time code, duration, x and y coordinates, trigger event). In the computer-assisted case, an automatic process identifies the graphical objects present in each scene and generates a signature for each graphical object to compare it with the graphical object signatures in a graphical object signature database. If a match is found, the graphical object in the database is linked to a scene record. In one embodiment, graphical object locations are tracked in a scene, and changes in location are recorded and saved for use by applications. The automatic process uses speech recognition and pattern recognition technology to identify the words spoken in the scene and the patterns of graphical objects in the scene using the speech and video analysis server software 240, and links these words with the scene record. After analyzing the graphical objects and speech in the scene, the automatic processes generate results for the publisher to accept or correct. - As shown in
FIG. 3B , the method 300 of operating the system 100 using the software architecture 200 also includes a step 312, during which the publisher links applications to the video at the hotspots with hypercode objects using the application integration engine 220. In one embodiment, the step 312 also includes the publisher selecting an appropriate video player template based on the video context. - At a
step 316, the publisher defines criteria for sale and/or auction of applications. In an exemplary embodiment, the definition is done through the sponsor space manager 224. At a step 318, the publisher makes the video available to potential sponsors. At a step 320, sponsors buy or bid on applications attached to the hypercode objects. In an exemplary embodiment, this is done using the sponsor campaign manager 226. Also in the step 320, sponsors may customize the applications they buy (e.g., by inserting logos, phone numbers, etc.). In one embodiment, the sponsor customizes the application to appear at certain times of the day, to visitors from certain geographic locations, or to visitors matching a certain demographic profile. - At a
step 322, the sponsor's offer is accepted or rejected by the publisher. In the step 322, the publisher reviews and chooses whether or not to approve the sponsor's bid and/or customization of the applications. In an exemplary embodiment, this approval is done through the sponsor space manager 224. At a step 324, the video is published to the public. At a step 326, viewer interaction with the applications embedded in the video is tracked and analyzed using the application & video analytics 228. In the step 326, the viewer plays the video using the interactive video player 236. When the video is played, the applicable applications will be accessible through hypercode objects at the appropriate points and times within the video or on the player skin, as defined in the earlier steps. At a step 328, the applications are optimized, edited, and repositioned according to the tracking data obtained in the step 326. In an exemplary embodiment, optimization of the applications includes repeating one or more of the steps in FIGS. 3A and 3B . - Turning now to
FIG. 4 , illustrated is a user interface for the video context editor 222. The video context editor 222 allows the video owner or the publisher to define contextual information about the video by associating tags with certain times during the video and/or with certain areas within the video in the step 306. The video context editor 222 then uses these tags to help identify application hotspots. In one embodiment, the contextual information is kept in a database separate from the video. The video context editor 222 includes a video editing window 400 showing a video 402. Within the video 402 is a hotspot 404 indicated by a dotted outline. The hotspot 404 has the shape of a rounded rectangle, although other shapes are also possible, including squares, rectangles, circles, triangles, stars, ovals, trapezoids, parallelograms, pentagons, octagons, sectors, irregular shapes, and any other shape. Instead of a dotted outline, the hotspot 404 could also be indicated by varying the shading or lighting of the underlying video, such as to create a bubble effect, lighted-sphere appearance, dimming effect, or glowing effect. - The
video editing window 400 also includes a virtual timeline 406 corresponding to the playback timeline for the video 402. Just above the virtual timeline 406 is a hotspot timeline 408 indicating a start time when the hotspot 404 will begin being displayed and an end time when the hotspot 404 will cease being displayed. By adjusting the start time and end time, a user of the video context editor 222 can adjust when during playback of the video 402 the hotspot 404 will be available. - Thus, a user can use the
video context editor 222 to add a variety of different kinds of hotspots 404 to the video 402 at the desired locations in the step 308. The hotspot 404 can be stationary, or it can move to track movement in the underlying video, such as when an object moves. Thus, the hotspot 404 can change location and shape during the hotspot timeline 408. A user can create multiple hotspots and multiple hotspot timelines. The start time and end time may be the same or different for various hotspots, and thus more than one hypercode object may be active at any point in the virtual timeline. - In one embodiment, instead of manually specifying a hotspot's location and movement, the
video context editor 222 automates the hotspot and hypercode object creation processes via communications with a server. These communications begin with the video context editor 222 sending an object recognition request to the server. The request includes a video, a location, and a time. The video may be provided by reference, such as by providing a URL to the video file, or by sending the video data itself. The location is a location on the video image, and may be a single location, such as a point or pixel, or a range of locations, such as a shape. The time is a time or range of times within the video's virtual timeline. The server analyzes the video at the specified time and in the specified location. Using an object recognition algorithm, the server creates a list of graphical objects shown in the video. The object recognition algorithm may be supplied by an open source library, or any other suitable graphical object recognition algorithm may be used. - The server-generated list of graphical objects may contain only one graphical object, or it may contain multiple graphical objects. The server sends the list of graphical objects to the
video context editor 222, where the video context editor 222 presents the list of graphical objects to the user. Preferably, the video context editor 222 displays the graphical objects in the list as images taken from the video, but any suitable presentation of the list may be used. The user then selects a graphical object from the list (if there are multiple objects) or confirms the detected graphical object (if there is only one graphical object on the list). The client then sends the user's selection or confirmation to the server. - The server then employs a graphical object tracking algorithm to track the motion of the selected graphical object in the video over the range of times specified in the request. The graphical object tracking algorithm may be supplied by an open source library, or any other suitable graphical object tracking algorithm may be used. The graphical object tracking algorithm generates movement data that describes the movement of the graphical object in the video. The server then sends this movement data back to the
video context editor 222, preferably in an XML format, although any suitable file format may be used. - Turning now to
FIG. 5 , illustrated is a file menu 500 of the application integration engine 220 for use in the step 312. According to an exemplary embodiment, the file menu 500 enables publishers to open or close application integration engine sessions, save sessions, preview sessions, and create copies of sessions. The preview session command generates an XML file based on the video player, hypercode objects, and linked applications. This XML file is then sent to the interactive video player 236. The save sessions command generates and saves an XML file of the video player and hypercode objects, allowing the publisher to close the application integration engine session and open another application integration engine session without loss of data. The edit menu 502 contains traditional cut, copy, and paste commands, as well as commands for selecting hypercode objects, selecting video player templates, and find and search features. In one embodiment, the commands on the edit menu are accessible using input from a keyboard. An insert menu 504 contains a list of various types of hypercode objects, such as video, audio, text, and shape. The insert menu 504 also contains commands for insertion of animation and preset transitions. A view menu 506 contains commands for viewing, opening, and navigating toolbars and tabs. The view menu 506 also contains commands for changing the zoom level and changing the application integration engine graphical layout. A control menu 508 contains commands related to the viewing of the video stream, such as play, stop, pause, volume adjust, and quality adjust. A help menu 510 contains commands to access information about, and check for updates to, the application integration engine software. The help menu 510 also contains commands to access information about plug-ins for the software and the operating system. The help menu 510 also contains commands to check for software updates, visit a web site with information about the software, and detect problems with and make repairs to the software.
- The first application integration engine window is illustrated in
FIG. 5 , and contains a template layout panel 512 and a template skin panel 514, which are used in the step 312 to choose a video player template. In one embodiment, the publisher also allocates appropriate space on the video player for a message, logo, image, etc. In one embodiment, the publisher defines this space to be a 320 by 80 pixel banner that will slide up from the bottom of the video at 55 seconds into the video and slide back down after 15 seconds. - The first application integration engine window also contains an
applications panel 516, which contains the various applications available to be embedded in the step 312 in the video player skin 518. The publisher selects from the applications panel 516 the type of application desired to be embedded in the video player skin 518 during a given scene. Then, the publisher links applications from the applications panel 516 in the step 312 by dragging the application from the panel 516 and dropping the application on the locations 520. In one embodiment, the applications include applications designed to contact the viewer through SMS, phone call, phone text, email, etc. In one embodiment, the applications include graphics applications such as maps, quizzes, games, etc. When the video is published, the viewer interacts with the applications by rolling a mouse cursor over or clicking on the locations 520 in the video player skin 518. - Turning now to
FIG. 6 , the second application integration engine window for performing the step 310 is illustrated and contains a move tool 600, which moves a selected hypercode object in the video. A lock tool 602 prevents the selection and editing of hypercode objects. An automatic object motion detection tool 604 allows selection of a region of a video, which is then analyzed by a server to generate a list of graphical objects within the region in the same manner as the object recognition system described above in the context of the video context editor 222. One or more of the graphical objects can then be selected for tracking by a hypercode object as the object moves through the video over time. - The animation tool 606 draws a linear or curved motion path and associates the path with a hypercode object. The location of the hypercode object then follows the motion path during video playback. The
transformation tool 608 changes the appearance of an item in the video by scaling, rotating, skewing, changing perspectives, distorting, flipping, etc. The hand tool 610 moves the video within the application integration engine window. The magnify tool 612 zooms in or out on the video. - The application integration engine window also contains a
video canvas panel 614, which shows the video title, video file path, and output size. The video canvas panel contains the commands load video, play video, pause video, show video loading/buffering, zoom in/out on video, show playing time, show time code, and show time range. A change to a hypercode object or to the video can be made by manipulation of sliders 616 at the bottom of the video canvas panel 614. - The application integration engine window also contains a hypercode
spot list panel 618. The items on the hypercode spot list panel 618 are sorted automatically based on starting time. Clicking an item in the hypercode spot list panel 618 selects a spot on the video and jumps the video to the starting time position associated with the selected item. - The application integration engine window also contains a
spot properties panel 622, which is used to perform the step 310. The spot properties panel 622 is used to set the types and properties of hypercode objects. Types of hypercode objects include audio, video, image, geometric or irregular shape, etc. Hypercode objects can be added or removed, and their properties set, through the spot properties panel 622. Properties may be common to all hypercode objects or unique to individual hotspots. For example, a time of occurrence or an x and y position may be common to all hypercode objects in the video, while some hypercode objects would be of the audio type and some would be of the video type. Types of hypercode object properties include: x position, y position, width, height, begin time, end time, rollover text, and hyperlink. The position of a hypercode object is set using the numeric stepper in the spot properties panel 622, while the hyperlink and rollover text can be set using the text box in the spot properties panel 622. To further perform the step 310, hypercode objects can be added to or removed from the hypercode spot list panel 618 by clicking the add or remove buttons 620. - The application integration engine window also contains an
application panel 624, which contains the various applications available to be embedded in the video at the hotspots in the step 312. The publisher links applications from the application panel in the step 312 by dragging the application from the panel and dropping the application on the hypercode object hotspot. In one embodiment, the applications on the application selection panel include: player branding, click-to-call, mobile coupon, search for store, click to email, landing Web pages, social network integration, etc. In one embodiment, the applications include applications designed to contact the viewer through SMS, phone call, phone text, email, etc. In one embodiment, the applications include graphics applications such as maps, quizzes, games, etc. When the video is published, the viewer interacts with the applications by rolling a mouse cursor over or clicking on the hotspots. - The application integration engine window also contains a hypercode type toolbar that provides icons allowing a user to specify, as part of the
step 312, how a hypercode object will respond to a viewer's activation. The hypercode type toolbar includes an icon 626 for a video hypercode object that will load and play a different video file, which may be another interactive video application. A video hypercode object can also cause a jump to a different location in the virtual timeline within the same video. An icon 628 for an audio hypercode object will load and play an audio file, such as a WAV or MP3 file. An icon 630 for an image hypercode object will display an image, such as a photo or drawing, which may be in GIF, JPG, PNG, or any other suitable image format. An icon 632 for a text hypercode object will display text, which may be hypertext, such as a Web page. The hypercode type toolbar also includes a hotspot shape icon 634 and a sponsor space icon 636. -
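The hypercode object properties enumerated above (x position, y position, width, height, begin time, end time, rollover text, and hyperlink) map naturally onto a small data structure. The following sketch is illustrative only; the class and field names are assumptions, not part of the disclosed system:

```python
from dataclasses import dataclass

@dataclass
class HypercodeSpot:
    """One hotspot overlay: where it sits, when it is live, what it shows.
    Field names are hypothetical; they mirror the properties listed above."""
    x: int                 # x position, in pixels on the video canvas
    y: int                 # y position
    width: int
    height: int
    begin_time: float      # seconds into the video timeline
    end_time: float
    spot_type: str         # "audio", "video", "image", "text", shape, etc.
    rollover_text: str = ""
    hyperlink: str = ""

    def active_at(self, t: float) -> bool:
        """True while the video's playhead is inside the spot's time range."""
        return self.begin_time <= t <= self.end_time

spot = HypercodeSpot(x=120, y=80, width=60, height=40,
                     begin_time=5.0, end_time=12.5,
                     spot_type="image", rollover_text="Learn more",
                     hyperlink="http://example.com")
```

A player could call active_at(t) on each playhead update to decide whether the hotspot should currently be drawn.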
FIG. 7 illustrates a process for determining a list of graphical objects in a video. The process may be used, for instance, as part of the step 312 by the application integration engine 220. The process begins in step 710 with receiving a coordinate range, a time position, and a video file. The coordinate range indicates a selected area of the video image to be analyzed, and the time position indicates the time during the video's timeline at which the video image is to be analyzed. The video file can be in any suitable video format, including MPEG, H.264, AVI, QuickTime, Flash Video, or Windows Media formats. Next, at step 712, the still image of the video at the time position is retrieved. Then, in step 714, the still image is processed. The processing may depend on the original video format and may include, for example, cropping the still image to the received coordinate range. Next, in step 716, a list of graphical objects within the coordinate range of the still image is generated. Then, in step 718, the list of graphical objects is sent out. -
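The receive → retrieve → process → generate → send flow of steps 710-718 can be sketched as follows. The detect callable stands in for whatever image-analysis routine backs step 716; the frame representation and names are assumptions for illustration only:

```python
def list_graphical_objects(frame_pixels, coord_range, detect):
    """Steps 714-716: crop the still image to the coordinate range,
    then run a detector over the cropped region.

    frame_pixels : 2-D list of pixel values (a stand-in for a decoded frame)
    coord_range  : (x, y, width, height) selecting the area to analyze
    detect       : callable(region) -> list of object labels (assumed)
    """
    x, y, w, h = coord_range
    # Step 714: crop the retrieved still image to the received range.
    region = [row[x:x + w] for row in frame_pixels[y:y + h]]
    # Step 716: generate the list of graphical objects in that region.
    return detect(region)

# Minimal demonstration with a trivial "detector" that labels any
# region containing a nonzero pixel.
frame = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 0, 0]]
found = list_graphical_objects(
    frame, (2, 0, 2, 2),
    detect=lambda region: ["object"] if any(any(r) for r in region) else [])
```

In a real implementation the frame retrieval of step 712 would come from a video decoder and the detector from an image-analysis library; the control flow, however, matches the steps described above.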
FIG. 8 illustrates a process for generating tracking data for a graphical object in a video as part of the step 312. The process begins at step 720 with receiving a graphical object to be tracked and a time frame. In step 722, the video is processed over the length of the received time frame, and at each frame the location of the object is determined. Then, in step 724, the graphical object's movement across the frames is tracked. In step 726, the movement data from step 722 is written to a file, such as an XML file. Finally, in step 728, the data file is sent out. A user can then use the movement data to create a hypercode object that will track the movement of the graphical object with a hotspot. This automated system for creating a hypercode object greatly reduces the amount of time and human effort required to create hotspots in videos and accelerates the process of creating interactive video applications. The movement data remains editable; the user can adjust the hotspot movement if necessary. - Turning now to
FIG. 9, illustrated is a user interface for the sponsor space manager 224. As described above, the content management system software 230 (FIG. 2) includes a sponsor space manager 224 that allows a publisher to define details for each application that is linked to a hotspot within the video by the application integration engine 220. The sponsor space manager 224 is served by the sponsor space manager servlet 210 as part of the application software 202 on the application server 108. As shown in FIG. 9, the sponsor space manager 224 includes an available spaces panel 800. The available spaces panel 800 is used by the publisher to view and manage the information about applications to be embedded in a given video. In an exemplary embodiment, the publisher uses the available spaces panel 800 to disseminate information about applications for the video player skin, as well as the hotspots in the video stream. The sponsor space manager 224 also includes a transaction type panel 802. The publisher uses the transaction type panel 802 at the step 316 to identify prices, durations, and discounts for each application linked to a hypercode object related to the video. In another embodiment, the publisher can put the applications linked to hypercode objects up for bid by sponsors. In another embodiment, the publisher can use an external video ad network to place applications into the hotspots. In yet another embodiment, the publisher uses the sponsor space manager 224 to view analytical data regarding viewer interaction with the placed content. - Turning now to
FIG. 10, illustrated is a user interface for the sponsor campaign manager 226. As described above, the content management system software 230 includes a sponsor campaign manager 226 that allows the sponsor to buy or bid on applications that are embedded at the hotspots within the video stream or as part of the video player skin. The sponsor campaign manager 226 is served by the sponsor space manager servlet 210 as part of the application server software 202 on the application server 108. As shown in FIG. 10, to create a new campaign as part of the step 320, the sponsor opens a create campaign panel 900 and names, describes, and defines the category of the new campaign (e.g., sports, entertainment, etc.). In one embodiment, the sponsor can define the geographic regions in which the sponsored content will be displayed as part of the campaign (e.g., North America, U.S.A., Texas, or Dallas) in the location panel 902. In another embodiment, the sponsor can define a target demographic by characteristics such as age, gender, or hobbies in the demographics panel 904. - The sponsor selects applications that have been previously designated by a publisher as available for sponsorship in the
publishers panel 906. The media selection panel 908 presents the sponsor with available applications in an inventory and allows the sponsor to add media assets, such as images, audio, video, or text, to the applications. The sponsor campaign manager includes an ad spaces panel 910, which presents the sponsor with an interface operable to link available applications with sponsored content, such as a phone number, email address, URL, or location. These applications can be customized using an application configuration panel 912. The sponsor campaign manager 226 also includes a transaction type panel 914. Sponsors use the transaction type panel 914 to buy or bid on applications at the step 320, which applications are embedded using hypercode objects at hotspots in a video. In one embodiment, the sponsor chooses the transaction type for the purchase of applications as part of the new campaign. In one embodiment, the transaction type is money, which means the sponsor campaign manager 226 will automatically continue to purchase applications with the sponsor's content until the set amount of money is exhausted. In another embodiment, the transaction type is time period, which means the sponsor campaign manager 226 will automatically continue to purchase applications with the sponsor's content until the set time period expires. In another embodiment, the sponsor's campaign may be organized on the basis of both a set amount of money and a set time period. In an exemplary embodiment, the sponsor campaign manager 226 presents the sponsor with a selection of video player skins and a customization panel for linking the skin with sponsored content. - After the sponsor chooses a video and selects from the available applications, the sponsor submits a request for approval of the sponsor's content from the relevant publisher(s). The publisher may accept or reject the sponsor's purchase of applications and/or the sponsor's content.
If approved, the sponsor's content appears as part of the purchased applications embedded in the video.
- To publish at the
step 324, the content management system software 230 session ends and the video is made available to the public. The interactive video player service 236 plays the video 234 stored on the video server 104 back to the viewer. The interactive video player service servlet 216 provides video files and hypercode overlay files to a video player 236 that runs within a web browser or other client. The video player 236 or the web browser may initiate one or more interactive applications 232 served by the application services servlet 214. - The video being played in the
step 324 has interactive video applications 232 embedded into it by the application integration engine 220 at contextually relevant places defined earlier by the video context editor 222. The interactive video player servlet 216 allows interaction between the viewer and the embedded application. The interactive video player service servlet 216 also provides the video player skin, which is customized based on the video context and is linked to embedded applications and sponsor messages. The interactive video player 236 also allows for viewer interaction tracking by the application and video analytics 228. The interactive video player servlet 216 is served by the application server software 202. - In an exemplary embodiment, the interactive video
player service servlet 216 loads data associated with a video 234, including video identifier data and hypercode object data, from the hypercode overlay server 106 in XML format. After loading this data, the interactive video player servlet 216 processes the data and begins playback. The hypercode object data contains hotspot placement information for hypercode objects linked with applications. The hypercode object data also contains data associated with the application, such as application identifier data and placement data. - The
interactive video player 236 uses a common application programming interface for communicating with applications stored on the application server 108. Application inputs and events are specified by the associated hypercode objects. For each application, the interactive video player 236 reads application-related data from the hypercode object and passes the data to the interactive video applications 232. The common application programming interface also allows bi-directional communication with the interactive video player service servlet 216 and the application services servlet 214. - In an exemplary embodiment, to develop interactive applications to embed into videos or video player skins, the application development platform 238 (
FIG. 2) is served by the application development platform service servlet 218 and is used to develop applications using reusable widget blocks and other development tools. The application development platform 238 is used to develop new applications and integrate third-party applications with hypercode objects. Applications are built with “widget blocks,” which are integrated by application developers to create new applications or new widget blocks. Widget blocks are run on the application server software 202. Widget blocks are available on a panel in the application services engine (as discussed above). Widget blocks are typically combined to create applications, which are embedded in a video or video player skin. In one embodiment, applications are Web applications that provide the viewer various ways to interact with the video and the associated content placed by publishers and sponsors. In another embodiment, applications are attached to sponsored content and activated by viewer interactions, or are activated based on a timed event or some other event. - The communication category of widget blocks initiates and creates outgoing audio, video, and text (e.g., chat) sessions, handles incoming audio, video, and text sessions, and handles the addition or deletion of multimedia streams in an existing session. For example, an interactive video stream can be added to an existing audio session, or a video stream can be dropped from an existing audio and video session. Other examples of communication widget blocks include a presence widget block, a click-to-call widget block, a multi-party conferencing widget block, a session-on-hold widget block, a session forwarding widget block, a session transfer widget block, etc. The gaming category of widget blocks provides capabilities to support multiplayer strategy games, search-based games, etc.
The messaging category of widget blocks provides capabilities to send and receive short messaging service (SMS) and multimedia messaging service (MMS) messages, to send and receive email messages, to perform text-to-voice and voice-to-text services for messaging, to support instant messaging and chat, etc. The mapping category of widget blocks provides capabilities for integrating with mapping and geographic information systems (GIS), etc.
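Combining widget blocks into an application, as described above, can be sketched by modeling each block as a callable and the application as their composition. The block names (click_to_call, sms_coupon) and the event-context shape are illustrative assumptions, not part of the disclosed platform:

```python
def compose_application(name, *widget_blocks):
    """Combine widget blocks into one application: the composite invokes
    each block in order against a shared event context and collects the
    results keyed by block name."""
    def application(event):
        return {block.__name__: block(event) for block in widget_blocks}
    application.__name__ = name
    return application

# Two hypothetical widget blocks from the communication and messaging
# categories described above.
def click_to_call(event):
    return f"dialing {event['phone']}"

def sms_coupon(event):
    return f"texting coupon to {event['phone']}"

promo = compose_application("promo_app", click_to_call, sms_coupon)
result = promo({"phone": "555-0100"})
```

Because the composite is itself a named callable, it can in turn be treated as a new widget block and reused in further compositions.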
- The above-described widget blocks can be combined and integrated, along with video and other content, to create visually rich, engaging interactive video applications using the
application development platform 238. An application developer designs, configures, and connects the widget blocks and other graphical user interface components to create the interactive video application logic. Because the underlying widget blocks and other components are network- and platform-independent, the resulting interactive video application can run on any client platform and communicate over any network. Thus, a single interactive video can be made available to a variety of clients, including personal computers, televisions, set-top boxes, mobile phones, and game consoles. - The
application development platform 238 provides a mechanism for converting a completed interactive video application into a new widget block. The new widget block can then be saved into a widget block library, allowing the completed interactive video application itself to be reused as a component of other interactive video applications. In this way, an interactive video application can build on other interactive video applications to provide increasingly complex services to a user. An application developer can also create new widget blocks by importing functionality from another source, such as custom-written source code, a Web service, an application programming interface, or any other source. - The different types of applications include: (i) location-based maps capable of showing the viewer retail stores proximate to the viewer's location; (ii) click-to-call applications to establish direct communication with a viewer through a landline, cellular, or VOIP network, or to call a sales representative or a technical support representative; (iii) SMS applications to deliver trial offers, coupons, or discount offers to the viewer, or to send a view request to the viewer's friends; (iv) feedback applications to gather text, audio, or video responses from viewers of the video and to display these responses to publishers, sponsors, or other viewers; (v) polling applications to present viewer surveys and gather responses; (vi) quiz applications to present quizzes to viewers in the context of educational videos, sports videos, or other videos; (vii) presentation applications used for creating slideshows and animations to show in conjunction with a video; and (viii) video puzzle applications that convert a frame of video into a slide puzzle consisting of smaller tiles (the size of the puzzle can vary, such as 3×2, 3×3, or 4×4; the puzzle is created by removing one tile from the frame of video and randomizing the locations of the remaining tiles; the puzzle is solved by the viewer by clicking on the tiles to change their positions until the original frame is reconstructed). In an exemplary embodiment, a presentation application presents predefined animations, slide transitions, and other interactivity within the presentation application, and the viewer can add other applications into a slide.
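The video puzzle application of item (viii) reduces to a small amount of tile bookkeeping. This sketch is an illustration under stated assumptions (tiles are modeled as indices rather than pixel regions, and the function names are hypothetical):

```python
import random

def make_puzzle(cols, rows, seed=None):
    """Create a slide puzzle from a video frame divided into cols x rows
    tiles: remove one tile to create the gap, then randomize the positions
    of the remaining tiles, as described for the video puzzle application."""
    tiles = list(range(cols * rows))
    removed = tiles.pop()              # remove one tile from the frame
    rng = random.Random(seed)
    rng.shuffle(tiles)                 # randomize the remaining tiles
    return tiles, removed

def is_solved(tiles):
    """The puzzle is solved when every remaining tile is back in order,
    reconstructing the original frame."""
    return tiles == sorted(tiles)

tiles, gap = make_puzzle(3, 2, seed=42)       # a 3x2 puzzle as in the text
solved_after = is_solved(sorted(tiles))       # viewer restores the order
```

In an actual player, each index would correspond to a rectangular crop of the paused video frame, and clicks would swap a tile with the gap.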
-
FIG. 11 illustrates an interactive video and embedded applications that can be created and delivered using the system of FIG. 1 and the software architecture of FIG. 2. The interactive video and embedded applications could be viewed and used on any of the clients 112-118. The interactive video 922 of a woman contains a hotspot 924 that has been created over the woman's purse. The hotspot 924 can trigger any of a variety of interactive applications, including a shopping cart 926, a document download 928, a phone call or SMS message 930, a product rating 932, and a store locator map 934. The shopping cart 926 permits a viewer to purchase the woman's purse immediately on-line. The document download 928 provides the viewer with more information about the purse, such as the available colors, information about the manufacturer, and other details. The phone call or SMS message 930 allows the viewer to immediately contact a sales representative from the purse seller or manufacturer to get more information about the purse. The viewer can simply provide his or her telephone number and receive a phone call connecting to the sales representative, or alternatively receive an SMS text message to initiate a chat session with the sales representative. The product rating 932 permits the viewer to enter a rating for the purse and comment on the purse. The nearest store locator 934 allows the viewer to provide an address and get information about stores near that location where the purse is for sale. The nearest store locator 934 can also provide driving directions from a provided address. Alternately, if the viewer is interacting with the interactive video 922 on a mobile phone or other device with location information, the viewer can obtain information about, and directions to, the store nearest to the viewer's current location. Thus, it will be appreciated that the interactive video and embedded applications allow a viewer to engage with and interact with a video in ways not previously possible.
- The interactive video and embedded applications also report the viewer's engagement to the application &
video analytics 228. The analytics server 110 records information about the viewer's actions, such as which hotspots the viewer clicked on, which parts of the interactive video 922, if any, were replayed and how many times, and which parts of the interactive video 922, if any, were skipped over. This information may be sent as each action is recorded, at a predetermined interval, or when the viewer takes an action, such as closing or navigating away from the interactive video 922. The application & video analytics 228 then compiles the information from all instances of the interactive video 922 and generates reports for the video content owner, sponsor, or other interested party. In an exemplary embodiment, the analytics server 110 records interactions with applications embedded by the hypercode objects in the interactive video 922 and/or the video player skin. For example, a viewer can click on a hypercode object to trigger an application that delivers additional sponsor content to the viewer's email address. This action is analyzed by the publisher and/or sponsor using the application & video analytics 228 to improve delivery of applications and sponsor content. In this way, the application & video analytics 228 assists sponsors in selecting, positioning, and customizing applications that will generate the most revenue for the publisher or sponsor. - Additional example interactive video applications are described as follows:
- Example Interactive Video Application 1
- A first example interactive video application is a real-estate browsing application. The application combines functionality provided by various widget blocks such as click-to-talk, instant messaging, voice mail, email, video conferencing, multiple listing service listings, interactive video, searching, and maps. The real-estate browsing application allows a viewer to search for and view homes via interactive video. The viewer can then communicate with a real-estate listing agent via voice call, SMS, email, voicemail, instant message, video conference, or any other supported form of communication. The viewer can invite additional individuals, such as family or friends, to join the conversation or to view the interactive video. Thus, the viewer can engage in a visually rich and meaningful home search with extensive participation by the real-estate agent, family and friends.
- Example Interactive Video Application 2
- Another example interactive video application is an interactive advertisement in a video offered by a video-on-demand system. A viewer selects a video to watch, which launches the interactive video application. Alternatively, the viewer may select a video to watch from within another interactive video application. The selected video begins to play, and during the playback one or more hotspots appear to indicate to the viewer that more information is available about certain objects within the video. The objects may be highlighted for the viewer by visible highlighting, such as dimming or lightening effects, contrast or saturation adjustments, outlining, or any other technique. If the viewer interacts with a highlighted object, such as by using any input device, including a keyboard, mouse, touch screen, or remote control, an event in the interactive video application is triggered. The event causes the video to pause and opens a new window with information about the object. Alternatively, the video may continue to play in the background. The information in the new window may be in audio, video, text, or any other form, and may provide the user with features for buying the object, jumping to another video or website about the object, or any other interactive feature. The viewer may then close the newly opened window and resume watching the selected video.
- Example Interactive Video Application 3
- Another example interactive video application is an interactive advertisement in a live video. A viewer watches a live video feed that may include news, a sporting event, or any suitable content. An interactive advertisement is placed on the live video stream and may be highlighted using a frame, glowing spot, or any other suitable technique. If the viewer interacts with the interactive advertisement, such as by using any input device, including a keyboard, mouse, touch screen, or remote control, an event in the interactive video application is triggered and causes a pop-up window or screen overlay to appear with more information. The viewer may be offered options such as receiving a coupon by email or SMS message, or contacting a sales agent by phone or video conference.
- Example Interactive Video Application 4
- Another example interactive video application is a context-sensitive interactive advertisement placed in a video, which may be live video or stored video. Based on tags associated with the video, an interactive advertisement is selected from a library of interactive advertisements. In this way, the selected interactive advertisement is relevant to the video already being watched and is more likely to be of interest to the viewer. For example, a viewer watching a music video can be shown an interactive advertisement for an upcoming music concert being performed in the viewer's local area. As another example, a viewer watching a movie can be shown advertisements for other movies starring some of the same actors as the watched movie.
- Example Interactive Video Application 5
- Yet another example interactive video application is an interactive instructional video. A viewer watches the interactive instructional video, which can be an educational video for new employees, an installation guide video, or any other kind of instructional video. At various points in the video, navigable objects overlay the video and allow the viewer to make navigation choices. For example, the choices may allow a viewer to replay a section or to jump from one video section to another related section or video. Alternatively, the viewer may be prompted to answer a question regarding the video section just viewed. If the viewer answers correctly, the video continues playing normally. If the viewer answers incorrectly, the previous video section is replayed so that the viewer can learn the information needed to answer the question. Thus, a viewer who completes watching the video will have demonstrated that he or she learned the material.
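The replay-until-correct branching of this instructional example can be sketched as a simple section walker. The section names and the answer_correct callback are illustrative assumptions:

```python
def play_instructional(sections, answer_correct, max_replays=3):
    """Walk through video sections in order. After each section with a
    question, replay the section until the viewer answers correctly,
    as in the interactive instructional video described above.

    sections       : list of (section_id, has_question) pairs
    answer_correct : callable(section_id, attempt) -> bool
    Returns the playback trace of section ids, including replays.
    """
    trace = []
    for section_id, has_question in sections:
        attempt = 0
        while True:
            trace.append(section_id)
            if not has_question or answer_correct(section_id, attempt):
                break  # continue playing normally
            attempt += 1
            if attempt >= max_replays:
                break  # stop replaying after a few tries
    return trace

# A viewer who misses the first question once, then answers correctly.
trace = play_instructional(
    [("intro", False), ("safety", True), ("wrap-up", False)],
    answer_correct=lambda sec, attempt: attempt >= 1)
```

The resulting trace shows the "safety" section played twice, matching the replay behavior described in the example.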
- In an exemplary embodiment, as illustrated in
FIG. 12, an illustrative node 950 for implementing one or more embodiments of one or more of the above-described networks, elements, methods and/or steps, and/or any combination thereof, is depicted. The node 950 includes a microprocessor 952, an input device 958, a storage device 954, a video controller 964, a system memory 956, a display 966, and a communication device 960, all interconnected by one or more buses 962. In several exemplary embodiments, the storage device 954 may include a floppy drive, hard drive, CD-ROM, optical drive, any other form of storage device, and/or any combination thereof. In several exemplary embodiments, the storage device 954 may include, and/or be capable of receiving, a floppy disk, CD-ROM, DVD-ROM, or any other form of computer-readable medium that may contain executable instructions. In several exemplary embodiments, the communication device 960 may include a modem, network card, or any other device to enable the node to communicate with other nodes. In several exemplary embodiments, any node represents a plurality of interconnected (whether by intranet or Internet) computer systems, including without limitation, personal computers, mainframes, PDAs, and cell phones. - In several exemplary embodiments, one or more of the
system 100, the software architecture 200, and/or components thereof are, or at least include, the node 950 and/or components thereof, and/or one or more nodes that are substantially similar to the node 950 and/or components thereof. - In several exemplary embodiments, the
system 100 typically includes at least hardware capable of executing machine-readable instructions, as well as software for executing acts (typically machine-readable instructions) that produce a desired result. In several exemplary embodiments, the system 100 may include hybrids of hardware and software, as well as computer sub-systems. In several exemplary embodiments, hardware may include any physical device that is capable of storing machine-readable instructions, such as memory or other data storage devices. In several exemplary embodiments, other forms of hardware include hardware sub-systems, including transfer devices such as modems, modem cards, ports, and port cards, for example. - In several exemplary embodiments, the
software architecture 200 includes any machine code stored in any memory medium, such as RAM or ROM, and machine code stored on other devices (such as floppy disks, flash memory, or a CD-ROM, for example). In several exemplary embodiments, the software architecture 200 may include source or object code. In several exemplary embodiments, the software architecture 200 encompasses any set of instructions capable of being executed on a node such as, for example, a client machine or server. - In several exemplary embodiments, combinations of software and hardware could also be used for providing enhanced functionality and performance for certain embodiments of the present disclosure. In an exemplary embodiment, software functions may be directly manufactured into a silicon chip. Accordingly, it should be understood that combinations of hardware and software are also included within the definition of a computer system and are thus envisioned by the present disclosure as possible equivalent structures and equivalent methods.
- In several exemplary embodiments, computer-readable media include, for example, passive data storage, such as a random access memory (RAM), as well as semi-permanent data storage, such as a compact disk read-only memory (CD-ROM). One or more exemplary embodiments of the present disclosure may be embodied in the RAM of a computer to transform a standard computer into a new specific computing machine. In several exemplary embodiments, data structures are defined organizations of data that may enable an embodiment of the present disclosure. In an exemplary embodiment, a data structure may provide an organization of data or an organization of executable code.
- In several exemplary embodiments, the
network 102, and/or one or more portions thereof, may be designed to work on any specific architecture. In an exemplary embodiment, one or more portions of the network 102 may be executed on a single computer, or on local area networks, client-server networks, wide area networks, internets, and hand-held and other portable and wireless devices and networks. - In several exemplary embodiments, the
database server software 242 may be any standard or proprietary database software, such as Oracle, Microsoft Access, Sybase, or dBase II, for example. In several exemplary embodiments, the database server software 242 may have fields, records, data, and other database elements that may be associated through database-specific software. In several exemplary embodiments, data may be mapped. In several exemplary embodiments, mapping is the process of associating one data entry with another data entry. In an exemplary embodiment, the data contained in the location of a character file can be mapped to a field in a second table. In several exemplary embodiments, the physical location of the database server software 242 is not limiting, and the database may be distributed. In an exemplary embodiment, the database server software 242 may exist remotely from the application server software 202 and run on a separate platform. In an exemplary embodiment, the database server software 242 may be accessible across the Internet. In several exemplary embodiments, more than one database may be implemented. - In an exemplary embodiment, the
system 100 with the software architecture 200 provides a system for a video publisher that associates and pushes relevant, interactive, and targeted applications to viewers of videos on any multimedia client, such as a personal computer, gaming device, or mobile device. - In an exemplary embodiment, the
system 100 with the software architecture 200 provides a system that dynamically places a set of interactive applications on a video player skin or on hotspots within a video stream using hypercode objects. One or more exemplary hypercode objects, and/or portions or combinations thereof, may be implemented according to the example files provided in the Appendices below. This placement allows a publisher to link interactive call-to-action applications to the video that are customized based on the context of the video. The interactive applications can be sponsored by any sponsor desiring media viewer interaction with these call-to-action applications. When the video is played by a viewer, the system determines the location and demographics of the viewer and pushes demographically and contextually relevant interactive call-to-action applications as part of the video and video player. - In an exemplary embodiment, the
system 100 with the software architecture 200 provides a system for a video publisher that embeds interactive applications in the video player skin or in hotspots in the video stream. The embedded interactive applications can be activated based on time, viewer interaction, or some other event. These applications follow the video virally on any client on which the video player is located. - In an exemplary embodiment, the
system 100 with the software architecture 200 provides a system by which custom applications may be developed using widgets on an application development platform that allows developers and others to create interactive applications and integrate them with the video. The system also records and provides statistics related to various relevant parameters for analyzing and improving the delivery of the applications to viewers and provides metrics relevant to the publisher and sponsor for business intelligence and commercial use. The applications provide a rich and engaging video experience to the viewer and a monetization solution for the video publisher while effectively delivering the sponsor's messages to viewers. - A method has been described that includes identifying a hotspot in a portion of a video content, overlaying a hypercode object on the hotspot at a spatial point, causing the hypercode object to be displayed at a temporal point during playback of the video content, and providing an interactive application in response to activation of the hypercode object. In an exemplary embodiment, the method includes analyzing the video content at the spatial point and the temporal point and isolating at least one graphical object detected in the video content at the spatial point and the temporal point. In an exemplary embodiment, the method includes receiving a temporal range comprising a start time and an end time, wherein the starting time is the temporal point, and tracking a movement of the graphical object in the video content with the hypercode object beginning at the start time and continuing until the end time. In an exemplary embodiment, the isolating the at least one graphical object comprises: isolating multiple graphical objects in the video content, providing a list of the multiple graphical objects; and receiving a selection of one graphical object from the list.
In an exemplary embodiment, the method includes defining a context for the video content and selecting the interactive application according to the context. In an exemplary embodiment, providing the interactive application further pauses playback of the video content. In an exemplary embodiment, the interactive application displays advertising content. In an exemplary embodiment, the method includes modifying the timing or location of the hypercode object. In an exemplary embodiment, the method includes making the interactive application available to a sponsor and customizing the interactive application according to a request by the sponsor. In an exemplary embodiment, the method includes obtaining data related to viewer interaction with the interactive application and revising the interactive application based on the data. In an exemplary embodiment, the hypercode object is an XML file.
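The hotspot activation step summarized above (a hypercode object occupies a spatial region during a temporal range, and viewer interaction inside that region triggers the linked interactive application) can be sketched as follows. This is an illustrative sketch only; the class and field names are assumptions for exposition and are not part of the disclosed system.

```python
# Illustrative sketch of hypercode hotspot hit-testing; all names are
# hypothetical, chosen to mirror the spatial/temporal fields of the
# sample hypercode XML in the Appendices.
from dataclasses import dataclass

@dataclass
class HypercodeObject:
    start_x: int       # spatial point: top-left corner of the hotspot
    start_y: int
    width: int
    height: int
    start_time: float  # temporal point, in seconds of playback
    duration: float    # how long the hotspot stays active
    app_id: str        # interactive application linked to this hotspot

def hit_test(obj: HypercodeObject, x: int, y: int, t: float) -> bool:
    """Return True if a click at (x, y) at playback time t falls on obj."""
    in_time = obj.start_time <= t <= obj.start_time + obj.duration
    in_box = (obj.start_x <= x <= obj.start_x + obj.width and
              obj.start_y <= y <= obj.start_y + obj.height)
    return in_time and in_box

def activate(objs, x, y, t):
    """Return the app_id of the first hypercode object hit, else None."""
    for obj in objs:
        if hit_test(obj, x, y, t):
            return obj.app_id
    return None
```

In this sketch, the player would call activate() on each click, then pause playback and launch the returned application, consistent with the pause-on-activation embodiment described above.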
- An apparatus has been described that includes a computer-readable physical medium containing instructions executable on a computer that when executed cause the computer to identify a hotspot in a portion of a video content, overlay a hypercode object on the hotspot at a spatial point, cause the hypercode object to be displayed at a temporal point during playback of the video content, and provide an interactive application in response to activation of the hypercode object. In an exemplary embodiment, the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to analyze the video content at the spatial point and the temporal point and isolate at least one graphical object detected in the video content at the spatial point and the temporal point. In an exemplary embodiment, the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to: receive a temporal range comprising a start time and an end time, wherein the starting time is the temporal point and track a movement of the graphical object in the video content with the hypercode object beginning at the start time and continuing until the end time. In an exemplary embodiment, the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to: isolate multiple graphical objects in the video content, provide a list of the multiple graphical objects and receive a selection of one graphical object from the list. In an exemplary embodiment, the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to define a context for the video content and select the interactive application according to the context.
In an exemplary embodiment, the computer-readable physical medium contains instructions executable on a computer that when executed cause the computer to pause playback of the video content upon activation of the hypercode object. In an exemplary embodiment, the interactive application displays advertising content. In an exemplary embodiment, the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to modify the timing or location of the hypercode object. In an exemplary embodiment, the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to make the interactive application available to a sponsor and customize the interactive application according to a request by the sponsor. In an exemplary embodiment, the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to obtain data related to viewer interaction with the interactive application and revise the interactive application based on the data.
- A system has been described that includes an interactive video player responsive to a video content request to provide a video content to a client device, a video context editor responsive to a request to identify and track movement of an object in the video content automatically and thereby generate object motion data, an application integration engine responsive to a request to link an interactive application to a hypercode object; the hypercode object incorporating the object motion data, and an analytics server responsive to receipt of user interaction data from the client device to store the user interaction data in a database. In an exemplary embodiment, the hypercode object is an XML file. In an exemplary embodiment, the user interaction data indicates whether a user at the client device initiated the interactive application linked with the hypercode object.
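The analytics-server role described above, receiving user interaction data from the client device and storing it in a database, can be sketched minimally as follows. This is not the patent's implementation; the table layout, field names, and event vocabulary are assumptions for illustration.

```python
# Minimal sketch (assumed schema) of an analytics store for viewer
# interaction events, using the standard-library sqlite3 module.
import sqlite3

def open_store(path=":memory:"):
    """Open (or create) the interaction database."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS interactions (
                    viewer_id TEXT, video_id TEXT, hypercode_id TEXT,
                    event TEXT, playback_time REAL)""")
    return db

def record_interaction(db, viewer_id, video_id, hypercode_id, event, t):
    """Persist one interaction event reported by a client device."""
    db.execute("INSERT INTO interactions VALUES (?, ?, ?, ?, ?)",
               (viewer_id, video_id, hypercode_id, event, t))
    db.commit()

def activations(db, video_id):
    """Count how often viewers initiated a linked interactive application."""
    row = db.execute("SELECT COUNT(*) FROM interactions "
                     "WHERE video_id = ? AND event = 'activate'",
                     (video_id,)).fetchone()
    return row[0]
```

Aggregates such as activations() correspond to the metrics the system provides to publishers and sponsors for business intelligence.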
- A system has been described that includes a means for identifying a hotspot in a portion of a video content, a means for overlaying a hypercode object on the hotspot at a spatial point, a means for causing the hypercode object to be displayed at a temporal point during playback of the video content, and a means for providing an interactive application in response to activation of the hypercode object. In an exemplary embodiment, the system includes a means for analyzing the video content at the spatial point and the temporal point and a means for isolating at least one graphical object detected in the video content at the spatial point and the temporal point. In an exemplary embodiment, the system includes a means for receiving a temporal range comprising a start time and an end time, wherein the starting time is the temporal point, and a means for tracking a movement of the graphical object in the video content with the hypercode object beginning at the start time and continuing until the end time. In an exemplary embodiment, the isolating the at least one graphical object comprises: a means for isolating multiple graphical objects in the video content, a means for providing a list of the multiple graphical objects, and a means for receiving a selection of one graphical object from the list. In an exemplary embodiment, the system includes a means for defining a context for the video content and a means for selecting the interactive application according to the context. In an exemplary embodiment, the means for providing the interactive application further pauses playback of the video content. In an exemplary embodiment, the interactive application displays advertising content. In an exemplary embodiment, the system includes a means for modifying the timing or location of the hypercode object.
In an exemplary embodiment, the system includes a means for making the interactive application available to a sponsor and a means for customizing the interactive application according to a request by the sponsor. In an exemplary embodiment, the system includes a means for obtaining data related to viewer interaction with the interactive application and a means for revising the interactive application based on the data. In an exemplary embodiment, the hypercode object is an XML file.
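The step of defining a context for the video content and selecting the interactive application according to that context can be sketched as a simple tag-matching routine. The context tags and application names below are hypothetical, chosen only to illustrate the selection described above.

```python
# Illustrative sketch of context-based selection of an interactive
# application; the tag vocabulary and catalog contents are assumptions.
def select_application(video_context, catalog):
    """Pick the interactive application whose tags best match the context.

    video_context: set of tags describing the video (e.g. {"travel", "beach"})
    catalog: mapping of application name -> set of tags it targets
    Returns the best-matching application name, or None if nothing matches.
    """
    best_app, best_overlap = None, 0
    for app, tags in catalog.items():
        overlap = len(video_context & tags)
        if overlap > best_overlap:
            best_app, best_overlap = app, overlap
    return best_app
```

In a fuller system, viewer location and demographics would be folded into the context set before selection, as described for the contextual-push embodiments above.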
- A method has been described that includes associating at least one interactive application with a video, the at least one interactive application being contextually relevant to the subject matter of the video, wherein associating at least one interactive application with the video comprises at least one of the following: embedding the at least one interactive application on a video player skin that is proximate to the video during playback of the video, and embedding the at least one interactive application in one or more hotspots within the video and activating the at least one interactive application in response to one or more of the following: the passage of one or more time periods during playback of the video and one or more interactions initiated by one or more viewers of the video during playback of the video, wherein the at least one interactive application is sponsored by a sponsor and comprises one or more of the following: one or more messages from the sponsor, each of the one or more messages being relevant to the subject matter of the video and one or more call-to-action applications, each of the one or more call-to-action applications comprising a request that the one or more viewers of the video initiate at least one action that is relevant to the subject matter of the video.
- A system has been described that includes a computer readable medium comprising a plurality of instructions stored therein, the plurality of instructions comprising: instructions for associating at least one interactive application with a video, the at least one interactive application being contextually relevant to the subject matter of the video, wherein the instructions for associating at least one interactive application with the video comprises at least one of the following: instructions for embedding the at least one interactive application on a video player skin that is proximate to the video during playback of the video, and instructions for embedding the at least one interactive application in one or more hotspots within the video and instructions for activating the at least one interactive application in response to one or more of the following: the passage of one or more time periods during playback of the video and one or more interactions initiated by one or more viewers of the video during playback of the video, wherein the at least one interactive application is sponsored by a sponsor and comprises one or more of the following: one or more messages from the sponsor, each of the one or more messages being relevant to the subject matter of the video and one or more call-to-action applications, each of the one or more call-to-action applications comprising a request that the one or more viewers of the video initiate at least one action that is relevant to the subject matter of the video.
- It is understood that variations may be made in the foregoing without departing from the scope of the disclosure.
- In several exemplary embodiments, while different steps, processes, and procedures are described as appearing as distinct acts, one or more of the steps, one or more of the processes, and/or one or more of the procedures may also be performed in different orders, simultaneously and/or sequentially. In several exemplary embodiments, the steps, processes and/or procedures may be merged into one or more steps, processes and/or procedures.
- In several exemplary embodiments, one or more of the operational steps in each embodiment may be omitted. Moreover, in some instances, some features of the present disclosure may be employed without a corresponding use of the other features. Moreover, one or more of the above-described embodiments and/or variations may be combined in whole or in part with any one or more of the other above-described embodiments and/or variations.
- Although several exemplary embodiments have been described in detail above, the embodiments described are exemplary only and are not limiting, and those skilled in the art will readily appreciate that many other modifications, changes and/or substitutions are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of the present disclosure. Accordingly, all such modifications, changes and/or substitutions are intended to be included within the scope of this disclosure as defined in the following claims. In the claims, means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures.
-
APPENDIX 1 Sample Hypercode Object XML file:

<?xml version="1.0" encoding="UTF-8"?>
<cimple:IvSpots xmlns:cimple="http://www.example.org/IvSpotsSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.example.org/IvSpotsSchema IvSpotsSchema.xsd">
  <cimple:IvSpot id="">
    <cimple:Appearance>
      <cimple:ivType>cimple:ivType</cimple:ivType>
      <cimple:ivColor>cimple:ivColor</cimple:ivColor>
      <cimple:ivText>cimple:ivText</cimple:ivText>
      <cimple:ivAlpha>cimple:ivAlpha</cimple:ivAlpha>
      <cimple:name>cimple:name</cimple:name>
    </cimple:Appearance>
    <cimple:Behaviour>
      <cimple:rollOver>true</cimple:rollOver>
      <cimple:rolloverWinType>cimple:rolloverWinType</cimple:rolloverWinType>
      <cimple:rolloverWinColor>cimple:rolloverWinColor</cimple:rolloverWinColor>
      <cimple:rolloverText>cimple:rolloverText</cimple:rolloverText>
    </cimple:Behaviour>
    <cimple:ivTime>
      <cimple:startTime>12:00:00</cimple:startTime>
      <cimple:duration>0.0</cimple:duration>
    </cimple:ivTime>
    <cimple:Dimensions>
      <cimple:height>0</cimple:height>
      <cimple:width>0</cimple:width>
      <cimple:start_x>0</cimple:start_x>
      <cimple:start_y>0</cimple:start_y>
    </cimple:Dimensions>
    <cimple:embedded>
      <cimple:type>cimple:type</cimple:type>
      <cimple:hyperlink>cimple:hyperlink</cimple:hyperlink>
      <cimple:description>cimple:description</cimple:description>
      <cimple:tags>cimple:tags</cimple:tags>
    </cimple:embedded>
    <cimple:ad>
      <cimple:campaign_id>0</cimple:campaign_id>
      <cimple:banner_id>0</cimple:banner_id>
    </cimple:ad>
    <cimple:tool_tip>
      <cimple:description>cimple:description</cimple:description>
      <cimple:type>cimple:type</cimple:type>
      <cimple:font>
        <cimple:font>Aerial</cimple:font>
        <cimple:size>10</cimple:size>
        <cimple:color>0xFFFFFF</cimple:color>
      </cimple:font>
      <cimple:tipColor>cimple:tipColor</cimple:tipColor>
      <cimple:alpha>0.0</cimple:alpha>
    </cimple:tool_tip>
    <cimple:ivMov>
      <cimple:id>cimple:id</cimple:id>
      <cimple:time>
        <cimple:startTime>12:00:00</cimple:startTime>
        <cimple:duration>0.0</cimple:duration>
      </cimple:time>
      <cimple:scale_x>0.0</cimple:scale_x>
      <cimple:scale_y>0.0</cimple:scale_y>
      <cimple:rotation>0.0</cimple:rotation>
      <cimple:color>cimple:color</cimple:color>
      <cimple:alpha>0.0</cimple:alpha>
      <cimple:transition>cimple:transition</cimple:transition>
      <cimple:target_x>0</cimple:target_x>
      <cimple:target_y>0</cimple:target_y>
    </cimple:ivMov>
  </cimple:IvSpot>
</cimple:IvSpots>
-
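A hypercode file of the form shown in Appendix 1 can be read with any XML parser; the following sketch uses Python's standard-library parser to extract the spatial and temporal fields of each IvSpot. The namespace URI matches the placeholder schema in the sample file, and the choice of extracted fields is illustrative.

```python
# Illustrative sketch: parse a hypercode XML file (structured like the
# Appendix 1 sample) and pull out each IvSpot's temporal and spatial data.
import xml.etree.ElementTree as ET

NS = {"cimple": "http://www.example.org/IvSpotsSchema"}

def load_spots(xml_text):
    """Return a list of (start_time, duration, start_x, start_y) per IvSpot."""
    root = ET.fromstring(xml_text)
    spots = []
    for spot in root.findall("cimple:IvSpot", NS):
        start = spot.findtext("cimple:ivTime/cimple:startTime", namespaces=NS)
        dur = spot.findtext("cimple:ivTime/cimple:duration", namespaces=NS)
        x = spot.findtext("cimple:Dimensions/cimple:start_x", namespaces=NS)
        y = spot.findtext("cimple:Dimensions/cimple:start_y", namespaces=NS)
        spots.append((start, float(dur), int(x), int(y)))
    return spots
```

A player or editor could feed the returned tuples into its overlay and hit-testing logic when rendering the video.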
APPENDIX 2 Example Hypercode XML file for audio hotspot

<?xml version="1.0" encoding="UTF-8"?>
<cimple:IvSpots xmlns:cimple="http://www.example.org/IvSpotsSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.example.org/IvSpotsSchema IvSpotsSchema.xsd">
  <cimple:IvSpot>
    <cimple:id>0</cimple:id>
    <cimple:Appearance>
      <cimple:ivType>cimple:ivType</cimple:ivType>
    </cimple:Appearance>
    <cimple:embedded>
      <cimple:audio>cimple:audio_path</cimple:audio>
    </cimple:embedded>
  </cimple:IvSpot>
</cimple:IvSpots>
-
APPENDIX 3 Example Hypercode XML file for video hotspot

<cimple:IvSpot>
  <cimple:id>0</cimple:id>
  <cimple:Appearance>
    <cimple:ivType>cimple:ivType</cimple:ivType>
  </cimple:Appearance>
  <cimple:embedded>
    <cimple:image>cimple:image_path</cimple:image>
  </cimple:embedded>
</cimple:IvSpot>
</cimple:IvSpots>
-
APPENDIX 4 Example Hypercode XML file for text hotspot

<cimple:IvSpot>
  <cimple:id>0</cimple:id>
  <cimple:Appearance>
    <cimple:ivType>cimple:textIVSpot</cimple:ivType>
    <cimple:ivText>cimple:ivText</cimple:ivText>
  </cimple:Appearance>
</cimple:IvSpot>
</cimple:IvSpots>
-
APPENDIX 5 Example Hypercode XML file for motion-tracking hotspot

<cimple:id>0</cimple:id>
<cimple:Appearance>
  <cimple:ivType>cimple:ivType</cimple:ivType>
  <cimple:ivColor>cimple:ivColor</cimple:ivColor>
  <cimple:ivText>cimple:ivText</cimple:ivText>
  <cimple:ivAlpha>cimple:ivAlpha</cimple:ivAlpha>
  <cimple:name>cimple:name</cimple:name>
</cimple:Appearance>
<cimple:Dimensions>
  <cimple:height>0</cimple:height>
  <cimple:width>0</cimple:width>
  <cimple:start_x>0</cimple:start_x>
  <cimple:start_y>0</cimple:start_y>
</cimple:Dimensions>
<cimple:ivMov>
  <cimple:id>cimple:id</cimple:id>
  <cimple:time>
    <cimple:startTime>12:00:00</cimple:startTime>
    <cimple:duration>0.0</cimple:duration>
  </cimple:time>
  <cimple:scale_x>0.0</cimple:scale_x>
  <cimple:scale_y>0.0</cimple:scale_y>
  <cimple:rotation>0.0</cimple:rotation>
  <cimple:color>cimple:color</cimple:color>
  <cimple:alpha>0.0</cimple:alpha>
  <cimple:transition>cimple:transition</cimple:transition>
  <cimple:target_x>0</cimple:target_x>
  <cimple:target_y>0</cimple:target_y>
  <cimple:control_x>0</cimple:control_x>
  <cimple:control_y>0</cimple:control_y>
</cimple:ivMov>
</cimple:IvSpot>
Claims (37)
1. A method comprising:
identifying a hotspot in a portion of a video content;
overlaying a hypercode object on the hotspot at a spatial point;
causing the hypercode object to be displayed at a temporal point during playback of the video content; and
providing an interactive application in response to activation of the hypercode object.
2. The method of claim 1 further comprising:
analyzing the video content at the spatial point and the temporal point; and
isolating at least one graphical object detected in the video content at the spatial point and the temporal point.
3. The method of claim 2 further comprising:
receiving a temporal range comprising a start time and an end time, wherein the starting time is the temporal point; and
tracking a movement of the graphical object in the video content with the hypercode object beginning at the start time and continuing until the end time.
4. The method of claim 2 wherein the isolating the at least one graphical object comprises:
isolating multiple graphical objects in the video content;
providing a list of the multiple graphical objects; and
receiving a selection of one graphical object from the list.
5. The method of claim 1 further comprising:
defining a context for the video content; and
selecting the interactive application according to the context.
6. The method of claim 1 wherein providing the interactive application further pauses playback of the video content.
7. The method of claim 1 wherein the interactive application displays advertising content.
8. The method of claim 1 further comprising modifying the timing or location of the hypercode object.
9. The method of claim 1 further comprising:
making the interactive application available to a sponsor; and
customizing the interactive application according to a request by the sponsor.
10. The method of claim 1 further comprising:
obtaining data related to viewer interaction with the interactive application; and
revising the interactive application based on the data.
11. The method of claim 1 wherein the hypercode object is an XML file.
12. An apparatus comprising:
a computer-readable physical medium containing instructions executable on a computer that when executed cause the computer to:
identify a hotspot in a portion of a video content;
overlay a hypercode object on the hotspot at a spatial point;
cause the hypercode object to be displayed at a temporal point during playback of the video content; and
provide an interactive application in response to activation of the hypercode object.
13. The apparatus of claim 12 , wherein the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to:
analyze the video content at the spatial point and the temporal point; and
isolate at least one graphical object detected in the video content at the spatial point and the temporal point.
14. The apparatus of claim 13 , wherein the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to:
receive a temporal range comprising a start time and an end time, wherein the starting time is the temporal point; and
track a movement of the graphical object in the video content with the hypercode object beginning at the start time and continuing until the end time.
15. The apparatus of claim 13 , wherein the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to:
isolate multiple graphical objects in the video content;
provide a list of the multiple graphical objects; and
receive a selection of one graphical object from the list.
16. The apparatus of claim 12 , wherein the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to:
define a context for the video content; and
select the interactive application according to the context.
17. The apparatus of claim 12 , wherein the computer-readable physical medium contains instructions executable on a computer that when executed cause the computer to pause playback of the video content upon activation of the hypercode object.
18. The apparatus of claim 12 wherein the interactive application displays advertising content.
19. The apparatus of claim 12 , wherein the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to modify the timing or location of the hypercode object.
20. The apparatus of claim 12 , wherein the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to:
make the interactive application available to a sponsor; and
customize the interactive application according to a request by the sponsor.
21. The apparatus of claim 12 , wherein the computer-readable physical medium contains instructions executable on a computer that when executed further cause the computer to:
obtain data related to viewer interaction with the interactive application; and
revise the interactive application based on the data.
22. A system comprising:
an interactive video player responsive to a video content request to provide a video content to a client device;
a video context editor responsive to a request to identify and track movement of an object in the video content automatically and thereby generate object motion data;
an application integration engine responsive to a request to link an interactive application to a hypercode object; the hypercode object incorporating the object motion data; and
an analytics server responsive to receipt of user interaction data from the client device to store the user interaction data in a database.
23. The system of claim 22 wherein the hypercode object is an XML file.
24. The system of claim 22 wherein the user interaction data indicates whether a user at the client device initiated the interactive application linked with the hypercode object.
25. A system comprising:
means for receiving a video;
means for overlaying a hypercode object on the hotspot at a spatial point;
means for causing the hypercode object to be displayed at a temporal point during playback of the video content; and
means for providing an interactive application in response to activation of the hypercode object.
26. The system of claim 25 further comprising:
means for analyzing the video content at the spatial point and the temporal point; and
means for isolating at least one graphical object detected in the video content at the spatial point and the temporal point.
27. The system of claim 26 further comprising:
means for receiving a temporal range comprising a start time and an end time, wherein the starting time is the temporal point; and
means for tracking a movement of the graphical object in the video content with the hypercode object beginning at the start time and continuing until the end time.
28. The system of claim 26 wherein the isolating the at least one graphical object comprises:
means for isolating multiple graphical objects in the video content;
means for providing a list of the multiple graphical objects; and
means for receiving a selection of one graphical object from the list.
29. The system of claim 25 further comprising:
means for defining a context for the video content; and
means for selecting the interactive application according to the context.
30. The system of claim 25 wherein means for providing the interactive application further pauses playback of the video content.
31. The system of claim 25 wherein the interactive application displays advertising content.
32. The system of claim 25 further comprising means for modifying the timing or location of the hypercode object.
33. The system of claim 25 further comprising:
means for making the interactive application available to a sponsor; and
means for customizing the interactive application according to a request by the sponsor.
34. The system of claim 25 further comprising:
means for obtaining data related to viewer interaction with the interactive application; and
means for revising the interactive application based on the data.
35. The system of claim 25 wherein the hypercode object is an XML file.
36. A method comprising:
associating at least one interactive application with a video, the at least one interactive application being contextually relevant to the subject matter of a scene in the video, wherein associating at least one interactive application with the scene comprises at least one of the following:
embedding the at least one interactive application on a video player skin that is proximate to the video during playback of the video; and
embedding the at least one interactive application in one or more hotspots within the video;
activating the at least one interactive application in response to one or more of the following:
the passage of one or more time periods during playback of the video; and
one or more interactions initiated by one or more viewers of the video during playback of the video;
wherein the interactive application changes depending on the interactions initiated by one or more viewers of the video during playback of the video;
wherein the at least one interactive application is sponsored by a sponsor and comprises one or more of the following:
one or more messages from the sponsor, each of the one or more messages being relevant to the subject matter of the video; and
one or more call-to-action applications, each of the one or more call-to-action applications comprising a request that the one or more viewers of the video initiate at least one action that is relevant to the subject matter of the video; and
changing the identity of the sponsor depending on the interactions initiated by one or more viewers of the video during playback of the video.
37. A system comprising:
a computer readable medium comprising a plurality of instructions stored therein, the plurality of instructions comprising:
instructions for associating at least one interactive application with a scene in a video, the at least one interactive application being contextually relevant to the subject matter of the scene, wherein the instructions for associating at least one interactive application with the scene comprises at least one of the following:
instructions for embedding the at least one interactive application on a video player skin that is proximate to the video during playback of the video; and
instructions for embedding the at least one interactive application in one or more hotspots within the video; and
instructions for activating the at least one interactive application in response to one or more of the following:
the passage of one or more time periods during playback of the video; and
one or more interactions initiated by one or more viewers of the video during playback of the video; and
wherein the at least one interactive application is sponsored by a sponsor and comprises one or more of the following:
one or more messages from the sponsor, each of the one or more messages being relevant to the subject matter of the video;
wherein the identity of the sponsor changes depending on the interactions initiated by one or more viewers of the video during playback of the video;
wherein the interactive application changes depending on the interactions initiated by one or more viewers of the video during playback of the video; and
one or more call-to-action applications, each of the one or more call-to-action applications comprising a request that the one or more viewers of the video initiate at least one action that is relevant to the subject matter of the video.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/552,146 US20110052144A1 (en) | 2009-09-01 | 2009-09-01 | System and Method for Integrating Interactive Call-To-Action, Contextual Applications with Videos |
US14/483,885 US9588663B2 (en) | 2009-09-01 | 2014-09-11 | System and method for integrating interactive call-to-action, contextual applications with videos |
US15/450,384 US20170180806A1 (en) | 2009-09-01 | 2017-03-06 | System and method for integrating interactive call-to-action, contextual applications with videos |
US16/000,727 US20190174191A1 (en) | 2009-09-01 | 2018-06-05 | System and Method for Integrating Interactive Call-To-Action, Contextual Applications with Videos |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/552,146 US20110052144A1 (en) | 2009-09-01 | 2009-09-01 | System and Method for Integrating Interactive Call-To-Action, Contextual Applications with Videos |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/483,885 Continuation US9588663B2 (en) | 2009-09-01 | 2014-09-11 | System and method for integrating interactive call-to-action, contextual applications with videos |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110052144A1 true US20110052144A1 (en) | 2011-03-03 |
Family
ID=43625057
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/552,146 Abandoned US20110052144A1 (en) | 2009-09-01 | 2009-09-01 | System and Method for Integrating Interactive Call-To-Action, Contextual Applications with Videos |
US14/483,885 Active US9588663B2 (en) | 2009-09-01 | 2014-09-11 | System and method for integrating interactive call-to-action, contextual applications with videos |
US15/450,384 Abandoned US20170180806A1 (en) | 2009-09-01 | 2017-03-06 | System and method for integrating interactive call-to-action, contextual applications with videos |
US16/000,727 Abandoned US20190174191A1 (en) | 2009-09-01 | 2018-06-05 | System and Method for Integrating Interactive Call-To-Action, Contextual Applications with Videos |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/483,885 Active US9588663B2 (en) | 2009-09-01 | 2014-09-11 | System and method for integrating interactive call-to-action, contextual applications with videos |
US15/450,384 Abandoned US20170180806A1 (en) | 2009-09-01 | 2017-03-06 | System and method for integrating interactive call-to-action, contextual applications with videos |
US16/000,727 Abandoned US20190174191A1 (en) | 2009-09-01 | 2018-06-05 | System and Method for Integrating Interactive Call-To-Action, Contextual Applications with Videos |
Country Status (1)
Country | Link |
---|---|
US (4) | US20110052144A1 (en) |
Cited By (98)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090276805A1 (en) * | 2008-05-03 | 2009-11-05 | Andrews Ii James K | Method and system for generation and playback of supplemented videos |
US20110191809A1 (en) * | 2008-01-30 | 2011-08-04 | Cinsay, Llc | Viral Syndicated Interactive Product System and Method Therefor |
US20110261258A1 (en) * | 2009-09-14 | 2011-10-27 | Kumar Ramachandran | Systems and methods for updating video content with linked tagging information |
US20120139940A1 (en) * | 2010-12-02 | 2012-06-07 | Philippe Chavanne | Video Overlay Techniques |
US8312486B1 (en) | 2008-01-30 | 2012-11-13 | Cinsay, Inc. | Interactive product placement system and method therefor |
US20120316936A1 (en) * | 2011-06-07 | 2012-12-13 | Philip Clifford Jacobs | Integrated loyalty program infrastructure |
ITTO20110946A1 (en) * | 2011-10-19 | 2013-04-20 | Emisfera Societa Cooperativa | SYSTEM TO ALLOW A USER TO INTERACT IN REAL TIME WITH VIDEO CONTENT |
US20130145269A1 (en) * | 2011-09-26 | 2013-06-06 | University Of North Carolina At Charlotte | Multi-modal collaborative web-based video annotation system |
US20130290101A1 (en) * | 2012-04-25 | 2013-10-31 | Google Inc. | Media-enabled delivery of coupons |
US20130297998A1 (en) * | 2012-05-04 | 2013-11-07 | Qualcomm Innovation Center, Inc. | Processing of displayed content to augment viewing |
US20130346865A1 (en) * | 2012-06-25 | 2013-12-26 | Via Technologies, Inc. | Dynamic wallpaper of mobile systems |
US20140004938A1 (en) * | 2010-05-28 | 2014-01-02 | Wms Gaming, Inc. | Providing and controlling embeddable gaming content |
EP2690881A1 (en) * | 2012-07-25 | 2014-01-29 | WireWax Limited | Online video distribution |
US20140173025A1 (en) * | 2012-12-17 | 2014-06-19 | Cox Communications, Inc. | Systems and methods for content delivery |
US8769053B2 (en) | 2011-08-29 | 2014-07-01 | Cinsay, Inc. | Containerized software for virally copying from one endpoint to another |
US8771048B2 (en) | 2011-06-24 | 2014-07-08 | Wpc, Llc | Computer-implemented video puzzles |
US20140201778A1 (en) * | 2013-01-15 | 2014-07-17 | Sap Ag | Method and system of interactive advertisement |
US20140201638A1 (en) * | 2013-01-14 | 2014-07-17 | Discovery Communications, Llc | Methods and systems for previewing a recording |
CN104113785A (en) * | 2014-06-26 | 2014-10-22 | 小米科技有限责任公司 | Information acquisition method and device |
US20140351838A1 (en) * | 2013-05-22 | 2014-11-27 | Andrew Dunmore | System and method for providing a secure access-controlled enterprise content repository |
WO2015048128A1 (en) * | 2013-09-30 | 2015-04-02 | Analog Analytics, Inc. | System and method for improved app distribution |
WO2015073065A1 (en) * | 2013-11-14 | 2015-05-21 | The Motley Fool Holdings, Inc. | Automatically transitioning a user from a call to action to an enrollment interface |
US20150163191A1 (en) * | 2011-08-15 | 2015-06-11 | Comigo Ltd. | Methods and systems for creating and managing multi participant sessions |
US20150177940A1 (en) * | 2013-12-20 | 2015-06-25 | Clixie Media, LLC | System, article, method and apparatus for creating event-driven content for online video, audio and images |
US20150237082A1 (en) * | 2014-02-20 | 2015-08-20 | International Business Machines Corporation | Dynamically enabling an interactive element within a non-interactive view of a screen sharing session |
USD742394S1 (en) * | 2013-09-27 | 2015-11-03 | Exfo Inc. | Display screen, or portion thereof, with graphical user interface for an optical fiber microscope |
EP2961172A1 (en) * | 2014-06-26 | 2015-12-30 | Xiaomi Inc. | Method and device for information acquisition |
US20160098180A1 (en) * | 2014-10-01 | 2016-04-07 | Sony Corporation | Presentation of enlarged content on companion display device |
US20160110884A1 (en) * | 2013-03-14 | 2016-04-21 | Aperture Investments, Llc | Systems and methods for identifying objects within video content and associating information with identified objects |
US20160182969A1 (en) * | 2010-07-30 | 2016-06-23 | Grab Vision Group LLC | Interactive advertising and marketing system |
US20160360289A1 (en) * | 2015-06-03 | 2016-12-08 | InVidz, LLC | Video management and marketing |
US9549152B1 (en) * | 2014-06-09 | 2017-01-17 | Google Inc. | Application content delivery to multiple computing environments using existing video conferencing solutions |
US9552079B2 (en) | 2013-04-29 | 2017-01-24 | Swisscom Ag | Method, electronic device and system for remote text input |
US9607330B2 (en) | 2012-06-21 | 2017-03-28 | Cinsay, Inc. | Peer-assisted shopping |
CN106648675A (en) * | 2016-12-28 | 2017-05-10 | 乐蜜科技有限公司 | Method and device for displaying application program using information and electronic equipment |
US9665895B2 (en) | 2013-08-12 | 2017-05-30 | Mov, Inc. | Technologies for video-based commerce |
US9697504B2 (en) | 2013-09-27 | 2017-07-04 | Cinsay, Inc. | N-level replication of supplemental content |
US9734513B1 (en) * | 2012-10-16 | 2017-08-15 | Alexander F. Mehr | System and method for advertising applications to users without requiring the applications to be installed |
TWI608433B (en) * | 2014-10-17 | 2017-12-11 | 深圳市華星光電技術有限公司 | Interacting method |
US9875489B2 (en) | 2013-09-11 | 2018-01-23 | Cinsay, Inc. | Dynamic binding of video content |
US9961181B2 (en) | 2011-09-12 | 2018-05-01 | Fiserv, Inc. | Systems and methods for customizing mobile applications based upon user associations with one or more entities |
US20180182168A1 (en) * | 2015-09-02 | 2018-06-28 | Thomson Licensing | Method, apparatus and system for facilitating navigation in an extended scene |
CN108377334A (en) * | 2018-04-03 | 2018-08-07 | 优视科技有限公司 | Short-sighted frequency image pickup method, device and electric terminal |
US20180253567A1 (en) * | 2017-03-02 | 2018-09-06 | Ricoh Co., Ltd. | Tamper Protection and Video Source Identification for Video Processing Pipeline |
US10079039B2 (en) | 2011-09-26 | 2018-09-18 | The University Of North Carolina At Charlotte | Multi-modal collaborative web-based video annotation system |
US20180343495A1 (en) * | 2017-05-25 | 2018-11-29 | Turner Broadcasting System, Inc. | Management and delivery of over-the-top services over different content-streaming systems |
US10338799B1 (en) * | 2017-07-06 | 2019-07-02 | Spotify Ab | System and method for providing an adaptive seek bar for use with an electronic device |
US20190230138A1 (en) * | 2014-05-23 | 2019-07-25 | Samsung Electronics Co., Ltd. | Server and method of providing collaboration services and user terminal for receiving collaboration services |
US10409819B2 (en) * | 2013-05-29 | 2019-09-10 | Microsoft Technology Licensing, Llc | Context-based actions from a source application |
US10425700B2 (en) | 2016-12-31 | 2019-09-24 | Turner Broadcasting System, Inc. | Dynamic scheduling and channel creation based on real-time or near-real-time content context analysis |
US10477287B1 (en) | 2019-06-18 | 2019-11-12 | Neal C. Fairbanks | Method for providing additional information associated with an object visually present in media content |
US20190394426A1 (en) * | 2013-06-26 | 2019-12-26 | Touchcast LLC | System and method for providing and interacting with coordinated presentations |
US10645462B2 (en) | 2016-12-31 | 2020-05-05 | Turner Broadcasting System, Inc. | Dynamic channel versioning in a broadcast air chain |
US20200162798A1 (en) * | 2018-11-20 | 2020-05-21 | International Business Machines Corporation | Video integration using video indexing |
US10694231B2 (en) | 2016-12-31 | 2020-06-23 | Turner Broadcasting System, Inc. | Dynamic channel versioning in a broadcast air chain based on user preferences |
US20200204834A1 (en) | 2018-12-22 | 2020-06-25 | Turner Broadcasting System, Inc. | Publishing a Disparate Live Media Output Stream Manifest That Includes One or More Media Segments Corresponding to Key Events |
US10701127B2 (en) | 2013-09-27 | 2020-06-30 | Aibuy, Inc. | Apparatus and method for supporting relationships associated with content provisioning |
US10708635B2 (en) | 2017-03-02 | 2020-07-07 | Ricoh Company, Ltd. | Subsumption architecture for processing fragments of a video stream |
US10719552B2 (en) | 2017-03-02 | 2020-07-21 | Ricoh Co., Ltd. | Focalized summarizations of a video stream |
US10720182B2 (en) | 2017-03-02 | 2020-07-21 | Ricoh Company, Ltd. | Decomposition of a video stream into salient fragments |
US10750224B2 (en) | 2016-12-31 | 2020-08-18 | Turner Broadcasting System, Inc. | Dynamic scheduling and channel creation based on user selection |
US10789631B2 (en) | 2012-06-21 | 2020-09-29 | Aibuy, Inc. | Apparatus and method for peer-assisted e-commerce shopping |
US10856016B2 (en) | 2016-12-31 | 2020-12-01 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams in mixed mode based on user selection |
US10878851B2 (en) * | 2017-08-18 | 2020-12-29 | BON2 Media Services LLC | Embedding interactive content into a shareable online video |
US10880606B2 (en) | 2018-12-21 | 2020-12-29 | Turner Broadcasting System, Inc. | Disparate live media output stream playout and broadcast distribution |
US10929685B2 (en) | 2017-03-02 | 2021-02-23 | Ricoh Company, Ltd. | Analysis of operator behavior focalized on machine events |
US10929707B2 (en) | 2017-03-02 | 2021-02-23 | Ricoh Company, Ltd. | Computation of audience metrics focalized on displayed content |
US10943122B2 (en) | 2017-03-02 | 2021-03-09 | Ricoh Company, Ltd. | Focalized behavioral measurements in a video stream |
US10949463B2 (en) | 2017-03-02 | 2021-03-16 | Ricoh Company, Ltd. | Behavioral measurements in a video stream focalized on keywords |
US10949705B2 (en) | 2017-03-02 | 2021-03-16 | Ricoh Company, Ltd. | Focalized behavioral measurements in a video stream |
US10956494B2 (en) | 2017-03-02 | 2021-03-23 | Ricoh Company, Ltd. | Behavioral measurements in a video stream focalized on keywords |
US10956495B2 (en) | 2017-03-02 | 2021-03-23 | Ricoh Company, Ltd. | Analysis of operator behavior focalized on machine events |
US10956773B2 (en) | 2017-03-02 | 2021-03-23 | Ricoh Company, Ltd. | Computation of audience metrics focalized on displayed content |
US10965967B2 (en) | 2016-12-31 | 2021-03-30 | Turner Broadcasting System, Inc. | Publishing a disparate per-client live media output stream based on dynamic insertion of targeted non-programming content and customized programming content |
US10983812B2 (en) * | 2018-11-19 | 2021-04-20 | International Business Machines Corporation | Replaying interactions with a graphical user interface (GUI) presented in a video stream of the GUI |
US10992973B2 (en) | 2016-12-31 | 2021-04-27 | Turner Broadcasting System, Inc. | Publishing a plurality of disparate live media output stream manifests using live input streams and pre-encoded media assets |
US11038932B2 (en) | 2016-12-31 | 2021-06-15 | Turner Broadcasting System, Inc. | System for establishing a shared media session for one or more client devices |
US11051061B2 (en) | 2016-12-31 | 2021-06-29 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream using pre-encoded media assets |
US11051074B2 (en) | 2016-12-31 | 2021-06-29 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams using live input streams |
US11057652B1 (en) * | 2019-04-30 | 2021-07-06 | Amazon Technologies, Inc. | Adjacent content classification and targeting |
US11082734B2 (en) | 2018-12-21 | 2021-08-03 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream that complies with distribution format regulations |
US11102543B2 (en) | 2014-03-07 | 2021-08-24 | Sony Corporation | Control of large screen display using wireless portable computer to pan and zoom on large screen display |
US11109086B2 (en) | 2016-12-31 | 2021-08-31 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams in mixed mode |
US11134309B2 (en) | 2016-12-31 | 2021-09-28 | Turner Broadcasting System, Inc. | Creation of channels using pre-encoded media assets |
US20220005111A1 (en) * | 2018-09-25 | 2022-01-06 | talkshoplive, Inc. | Systems and methods for embeddable point-of-sale transactions |
US11227315B2 (en) | 2008-01-30 | 2022-01-18 | Aibuy, Inc. | Interactive product placement system and method therefor |
US11263221B2 (en) | 2013-05-29 | 2022-03-01 | Microsoft Technology Licensing, Llc | Search result contexts for application launch |
US11367466B2 (en) * | 2019-10-04 | 2022-06-21 | Udo, LLC | Non-intrusive digital content editing and analytics system |
US11457176B2 (en) | 2013-06-26 | 2022-09-27 | Touchcast, Inc. | System and method for providing and interacting with coordinated presentations |
US11488363B2 (en) | 2019-03-15 | 2022-11-01 | Touchcast, Inc. | Augmented reality conferencing system and method |
US11503352B2 (en) | 2016-12-31 | 2022-11-15 | Turner Broadcasting System, Inc. | Dynamic scheduling and channel creation based on external data |
US11546400B2 (en) | 2016-12-31 | 2023-01-03 | Turner Broadcasting System, Inc. | Generating a live media segment asset |
US11599253B2 (en) * | 2020-10-30 | 2023-03-07 | ROVI GUIDES, INC. | System and method for selection of displayed objects by path tracing |
US11863827B2 (en) | 2016-12-31 | 2024-01-02 | Turner Broadcasting System, Inc. | Client-side dynamic presentation of programming content in an indexed disparate live media output stream |
US11871062B2 (en) | 2016-12-31 | 2024-01-09 | Turner Broadcasting System, Inc. | Server-side dynamic insertion of programming content in an indexed disparate live media output stream |
US11928758B2 (en) | 2020-03-06 | 2024-03-12 | Christopher Renwick Alston | Technologies for augmented-reality |
US11956518B2 (en) | 2020-11-23 | 2024-04-09 | Clicktivated Video, Inc. | System and method for creating interactive elements for objects contemporaneously displayed in live video |
US11962821B2 (en) | 2021-03-19 | 2024-04-16 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream using pre-encoded media assets |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10122660B2 (en) | 2015-03-27 | 2018-11-06 | MINDBODY, Inc. | Contextual mobile communication platform |
KR101698558B1 (en) * | 2015-05-08 | 2017-01-23 | 네이버 주식회사 | Method and system for registering service the link in the content |
GB201610755D0 (en) * | 2016-06-20 | 2016-08-03 | Flavourworks Ltd | Method for delivering an interactive video |
US10515473B2 (en) | 2017-12-04 | 2019-12-24 | At&T Intellectual Property I, L.P. | Method and apparatus for generating actionable marked objects in images |
US20190197089A1 (en) * | 2017-12-22 | 2019-06-27 | Zmags Corp. | Harnessing Analytics By Server To Increase Interactivity Of Visitor with Customer's Application |
JP2022100419A (en) * | 2019-04-23 | 2022-07-06 | 株式会社ラキール | System, device, method, and program for processing information |
WO2022009196A1 (en) * | 2020-07-07 | 2022-01-13 | Gadot Roee Meir | Incorporation of information during a visual display |
GB202104554D0 (en) | 2021-03-31 | 2021-05-12 | British Telecomm | Auto safe zone detection |
WO2022221749A1 (en) * | 2021-04-17 | 2022-10-20 | Kinoo, Inc. | Systems and methods to enhance interactive engagement with shared content by a contextual virtual agent |
US11594258B2 (en) | 2021-07-19 | 2023-02-28 | Pes University | System for the automated, context sensitive, and non-intrusive insertion of consumer-adaptive content in video |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6154771A (en) * | 1998-06-01 | 2000-11-28 | Mediastra, Inc. | Real-time receipt, decompression and play of compressed streaming video/hypervideo; with thumbnail display of past scenes and with replay, hyperlinking and/or recording permissively initiated retrospectively |
US20020080165A1 (en) * | 2000-06-08 | 2002-06-27 | Franz Wakefield | Method and system for creating, using and modifying multifunctional website hot spots |
US20030152366A1 (en) * | 1997-11-28 | 2003-08-14 | Kabushiki Kaisha Toshiba | AV information reproducing system and a reproducing method applied to the same system |
US20060037044A1 (en) * | 1993-03-29 | 2006-02-16 | Microsoft Corporation | Pausing television programming in response to selection of hypertext link |
US20060242574A1 (en) * | 2005-04-25 | 2006-10-26 | Microsoft Corporation | Associating information with an electronic document |
US20080077952A1 (en) * | 2006-09-25 | 2008-03-27 | St Jean Randy | Dynamic Association of Advertisements and Digital Video Content, and Overlay of Advertisements on Content |
US7577978B1 (en) * | 2000-03-22 | 2009-08-18 | Wistendahl Douglass A | System for converting TV content to interactive TV game program operated with a standard remote control and TV set-top box |
US20090276805A1 (en) * | 2008-05-03 | 2009-11-05 | Andrews Ii James K | Method and system for generation and playback of supplemented videos |
US20100278453A1 (en) * | 2006-09-15 | 2010-11-04 | King Martin T | Capture and display of annotations in paper and electronic documents |
US20110055713A1 (en) * | 2007-06-25 | 2011-03-03 | Robert Lee Gruenewald | Interactive delivery of editorial content |
US20110217022A1 (en) * | 2008-11-06 | 2011-09-08 | Ofer Miller | System and method for enriching video data |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6496981B1 (en) * | 1997-09-19 | 2002-12-17 | Douglass A. Wistendahl | System for converting media content for interactive TV use |
US6298482B1 (en) * | 1997-11-12 | 2001-10-02 | International Business Machines Corporation | System for two-way digital multimedia broadcast and interactive services |
US6188398B1 (en) * | 1999-06-02 | 2001-02-13 | Mark Collins-Rector | Targeting advertising using web pages with video |
US20020087969A1 (en) * | 2000-12-28 | 2002-07-04 | International Business Machines Corporation | Interactive TV audience estimation and program rating in real-time using multi level tracking methods, systems and program products |
US20050132420A1 (en) * | 2003-12-11 | 2005-06-16 | Quadrock Communications, Inc | System and method for interaction with television content |
EP2011017A4 (en) * | 2006-03-30 | 2010-07-07 | Stanford Res Inst Int | Method and apparatus for annotating media streams |
US20070239546A1 (en) * | 2006-04-10 | 2007-10-11 | Scott Blum | Computer implemented interactive advertising system and method |
US20080288983A1 (en) * | 2007-05-18 | 2008-11-20 | Johnson Bradley G | System and Method for Providing Sequential Video and Interactive Content |
US9336528B2 (en) * | 2008-12-16 | 2016-05-10 | Jeffrey Beaton | System and method for overlay advertising and purchasing utilizing on-line video or streaming media |
- 2009-09-01 US US12/552,146 patent/US20110052144A1/en not_active Abandoned
- 2014-09-11 US US14/483,885 patent/US9588663B2/en active Active
- 2017-03-06 US US15/450,384 patent/US20170180806A1/en not_active Abandoned
- 2018-06-05 US US16/000,727 patent/US20190174191A1/en not_active Abandoned
Cited By (190)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9344754B2 (en) | 2008-01-30 | 2016-05-17 | Cinsay, Inc. | Interactive product placement system and method therefor |
US20110191809A1 (en) * | 2008-01-30 | 2011-08-04 | Cinsay, Llc | Viral Syndicated Interactive Product System and Method Therefor |
US9674584B2 (en) | 2008-01-30 | 2017-06-06 | Cinsay, Inc. | Interactive product placement system and method therefor |
US10438249B2 (en) | 2008-01-30 | 2019-10-08 | Aibuy, Inc. | Interactive product system and method therefor |
US8312486B1 (en) | 2008-01-30 | 2012-11-13 | Cinsay, Inc. | Interactive product placement system and method therefor |
US10425698B2 (en) | 2008-01-30 | 2019-09-24 | Aibuy, Inc. | Interactive product placement system and method therefor |
US20140095330A1 (en) * | 2008-01-30 | 2014-04-03 | Cinsay, Inc. | Interactive product placement system and method therefor |
US9351032B2 (en) | 2008-01-30 | 2016-05-24 | Cinsay, Inc. | Interactive product placement system and method therefor |
US10055768B2 (en) * | 2008-01-30 | 2018-08-21 | Cinsay, Inc. | Interactive product placement system and method therefor |
US9986305B2 (en) | 2008-01-30 | 2018-05-29 | Cinsay, Inc. | Interactive product placement system and method therefor |
US9332302B2 (en) | 2008-01-30 | 2016-05-03 | Cinsay, Inc. | Interactive product placement system and method therefor |
US8893173B2 (en) | 2008-01-30 | 2014-11-18 | Cinsay, Inc. | Interactive product placement system and method therefor |
US11227315B2 (en) | 2008-01-30 | 2022-01-18 | Aibuy, Inc. | Interactive product placement system and method therefor |
US9338499B2 (en) | 2008-01-30 | 2016-05-10 | Cinsay, Inc. | Interactive product placement system and method therefor |
US9338500B2 (en) | 2008-01-30 | 2016-05-10 | Cinsay, Inc. | Interactive product placement system and method therefor |
US8782690B2 (en) | 2008-01-30 | 2014-07-15 | Cinsay, Inc. | Interactive product placement system and method therefor |
US20090276805A1 (en) * | 2008-05-03 | 2009-11-05 | Andrews Ii James K | Method and system for generation and playback of supplemented videos |
US9113214B2 (en) | 2008-05-03 | 2015-08-18 | Cinsay, Inc. | Method and system for generation and playback of supplemented videos |
US8813132B2 (en) | 2008-05-03 | 2014-08-19 | Cinsay, Inc. | Method and system for generation and playback of supplemented videos |
US9813770B2 (en) | 2008-05-03 | 2017-11-07 | Cinsay, Inc. | Method and system for generation and playback of supplemented videos |
US10986412B2 (en) | 2008-05-03 | 2021-04-20 | Aibuy, Inc. | Methods and system for generation and playback of supplemented videos |
US9210472B2 (en) | 2008-05-03 | 2015-12-08 | Cinsay, Inc. | Method and system for generation and playback of supplemented videos |
US10225614B2 (en) | 2008-05-03 | 2019-03-05 | Cinsay, Inc. | Method and system for generation and playback of supplemented videos |
US20110261258A1 (en) * | 2009-09-14 | 2011-10-27 | Kumar Ramachandran | Systems and methods for updating video content with linked tagging information |
US9478101B2 (en) | 2010-05-28 | 2016-10-25 | Bally Gaming, Inc. | Providing and controlling embeddable gaming content |
US9202335B2 (en) * | 2010-05-28 | 2015-12-01 | Bally Gaming, Inc. | Providing and controlling embeddable gaming content |
US20140004938A1 (en) * | 2010-05-28 | 2014-01-02 | Wms Gaming, Inc. | Providing and controlling embeddable gaming content |
US20160182969A1 (en) * | 2010-07-30 | 2016-06-23 | Grab Vision Group LLC | Interactive advertising and marketing system |
US10674230B2 (en) * | 2010-07-30 | 2020-06-02 | Grab Vision Group LLC | Interactive advertising and marketing system |
US20120139940A1 (en) * | 2010-12-02 | 2012-06-07 | Philippe Chavanne | Video Overlay Techniques |
US20120316936A1 (en) * | 2011-06-07 | 2012-12-13 | Philip Clifford Jacobs | Integrated loyalty program infrastructure |
US8771048B2 (en) | 2011-06-24 | 2014-07-08 | Wpc, Llc | Computer-implemented video puzzles |
US9538250B2 (en) | 2011-08-15 | 2017-01-03 | Comigo Ltd. | Methods and systems for creating and managing multi participant sessions |
US20150163191A1 (en) * | 2011-08-15 | 2015-06-11 | Comigo Ltd. | Methods and systems for creating and managing multi participant sessions |
US10171555B2 (en) | 2011-08-29 | 2019-01-01 | Cinsay, Inc. | Containerized software for virally copying from one endpoint to another |
US9451010B2 (en) | 2011-08-29 | 2016-09-20 | Cinsay, Inc. | Containerized software for virally copying from one endpoint to another |
US11005917B2 (en) | 2011-08-29 | 2021-05-11 | Aibuy, Inc. | Containerized software for virally copying from one endpoint to another |
US8769053B2 (en) | 2011-08-29 | 2014-07-01 | Cinsay, Inc. | Containerized software for virally copying from one endpoint to another |
US9961181B2 (en) | 2011-09-12 | 2018-05-01 | Fiserv, Inc. | Systems and methods for customizing mobile applications based upon user associations with one or more entities |
US20180358049A1 (en) * | 2011-09-26 | 2018-12-13 | University Of North Carolina At Charlotte | Multi-modal collaborative web-based video annotation system |
US10079039B2 (en) | 2011-09-26 | 2018-09-18 | The University Of North Carolina At Charlotte | Multi-modal collaborative web-based video annotation system |
US20130145269A1 (en) * | 2011-09-26 | 2013-06-06 | University Of North Carolina At Charlotte | Multi-modal collaborative web-based video annotation system |
US9354763B2 (en) * | 2011-09-26 | 2016-05-31 | The University Of North Carolina At Charlotte | Multi-modal collaborative web-based video annotation system |
ITTO20110946A1 (en) * | 2011-10-19 | 2013-04-20 | Emisfera Societa Cooperativa | SYSTEM TO ALLOW A USER TO INTERACT IN REAL TIME WITH VIDEO CONTENT |
US20130290101A1 (en) * | 2012-04-25 | 2013-10-31 | Google Inc. | Media-enabled delivery of coupons |
US20130297998A1 (en) * | 2012-05-04 | 2013-11-07 | Qualcomm Innovation Center, Inc. | Processing of displayed content to augment viewing |
US10789631B2 (en) | 2012-06-21 | 2020-09-29 | Aibuy, Inc. | Apparatus and method for peer-assisted e-commerce shopping |
US10726458B2 (en) | 2012-06-21 | 2020-07-28 | Aibuy, Inc. | Peer-assisted shopping |
US9607330B2 (en) | 2012-06-21 | 2017-03-28 | Cinsay, Inc. | Peer-assisted shopping |
US20130346865A1 (en) * | 2012-06-25 | 2013-12-26 | Via Technologies, Inc. | Dynamic wallpaper of mobile systems |
US9207841B2 (en) | 2012-07-25 | 2015-12-08 | WireWax Limited | Online video distribution |
EP2690881A1 (en) * | 2012-07-25 | 2014-01-29 | WireWax Limited | Online video distribution |
US9734513B1 (en) * | 2012-10-16 | 2017-08-15 | Alexander F. Mehr | System and method for advertising applications to users without requiring the applications to be installed |
US20140173025A1 (en) * | 2012-12-17 | 2014-06-19 | Cox Communications, Inc. | Systems and methods for content delivery |
US9769538B2 (en) * | 2012-12-17 | 2017-09-19 | Cox Communications, Inc. | Systems and methods for content delivery |
US20140201638A1 (en) * | 2013-01-14 | 2014-07-17 | Discovery Communications, Llc | Methods and systems for previewing a recording |
US9786328B2 (en) * | 2013-01-14 | 2017-10-10 | Discovery Communications, Llc | Methods and systems for previewing a recording |
US20140201778A1 (en) * | 2013-01-15 | 2014-07-17 | Sap Ag | Method and system of interactive advertisement |
US20160110884A1 (en) * | 2013-03-14 | 2016-04-21 | Aperture Investments, Llc | Systems and methods for identifying objects within video content and associating information with identified objects |
US11016578B2 (en) | 2013-04-29 | 2021-05-25 | Swisscom Ag | Method, electronic device and system for remote text input |
US9552079B2 (en) | 2013-04-29 | 2017-01-24 | Swisscom Ag | Method, electronic device and system for remote text input |
US20140351838A1 (en) * | 2013-05-22 | 2014-11-27 | Andrew Dunmore | System and method for providing a secure access-controlled enterprise content repository |
US10409819B2 (en) * | 2013-05-29 | 2019-09-10 | Microsoft Technology Licensing, Llc | Context-based actions from a source application |
US11526520B2 (en) | 2013-05-29 | 2022-12-13 | Microsoft Technology Licensing, Llc | Context-based actions from a source application |
US11263221B2 (en) | 2013-05-29 | 2022-03-01 | Microsoft Technology Licensing, Llc | Search result contexts for application launch |
US10430418B2 (en) | 2013-05-29 | 2019-10-01 | Microsoft Technology Licensing, Llc | Context-based actions from a source application |
US20190394426A1 (en) * | 2013-06-26 | 2019-12-26 | Touchcast LLC | System and method for providing and interacting with coordinated presentations |
US11457176B2 (en) | 2013-06-26 | 2022-09-27 | Touchcast, Inc. | System and method for providing and interacting with coordinated presentations |
US11310463B2 (en) | 2013-06-26 | 2022-04-19 | Touchcast LLC | System and method for providing and interacting with coordinated presentations |
US10757365B2 (en) * | 2013-06-26 | 2020-08-25 | Touchcast LLC | System and method for providing and interacting with coordinated presentations |
US9665895B2 (en) | 2013-08-12 | 2017-05-30 | Mov, Inc. | Technologies for video-based commerce |
US9953347B2 (en) | 2013-09-11 | 2018-04-24 | Cinsay, Inc. | Dynamic binding of live video content |
US11763348B2 (en) | 2013-09-11 | 2023-09-19 | Aibuy, Inc. | Dynamic binding of video content |
US10559010B2 (en) | 2013-09-11 | 2020-02-11 | Aibuy, Inc. | Dynamic binding of video content |
US9875489B2 (en) | 2013-09-11 | 2018-01-23 | Cinsay, Inc. | Dynamic binding of video content |
US11074620B2 (en) | 2013-09-11 | 2021-07-27 | Aibuy, Inc. | Dynamic binding of content transactional items |
US10268994B2 (en) | 2013-09-27 | 2019-04-23 | Aibuy, Inc. | N-level replication of supplemental content |
US11017362B2 (en) | 2013-09-27 | 2021-05-25 | Aibuy, Inc. | N-level replication of supplemental content |
US9697504B2 (en) | 2013-09-27 | 2017-07-04 | Cinsay, Inc. | N-level replication of supplemental content |
US10701127B2 (en) | 2013-09-27 | 2020-06-30 | Aibuy, Inc. | Apparatus and method for supporting relationships associated with content provisioning |
USD742394S1 (en) * | 2013-09-27 | 2015-11-03 | Exfo Inc. | Display screen, or portion thereof, with graphical user interface for an optical fiber microscope |
WO2015048128A1 (en) * | 2013-09-30 | 2015-04-02 | Analog Analytics, Inc. | System and method for improved app distribution |
WO2015073065A1 (en) * | 2013-11-14 | 2015-05-21 | The Motley Fool Holdings, Inc. | Automatically transitioning a user from a call to action to an enrollment interface |
US9537807B2 (en) | 2013-11-14 | 2017-01-03 | Silicon Valley Bank | Automatically transitioning a user from a call to action to an enrollment interface |
US20150177940A1 (en) * | 2013-12-20 | 2015-06-25 | Clixie Media, LLC | System, article, method and apparatus for creating event-driven content for online video, audio and images |
US9798944B2 (en) * | 2014-02-20 | 2017-10-24 | International Business Machines Corporation | Dynamically enabling an interactive element within a non-interactive view of a screen sharing session |
US20150237082A1 (en) * | 2014-02-20 | 2015-08-20 | International Business Machines Corporation | Dynamically enabling an interactive element within a non-interactive view of a screen sharing session |
US10649608B2 (en) | 2014-02-20 | 2020-05-12 | International Business Machines Corporation | Dynamically enabling an interactive element within a non-interactive view of a screen sharing session |
US11102543B2 (en) | 2014-03-07 | 2021-08-24 | Sony Corporation | Control of large screen display using wireless portable computer to pan and zoom on large screen display |
US20190230138A1 (en) * | 2014-05-23 | 2019-07-25 | Samsung Electronics Co., Ltd. | Server and method of providing collaboration services and user terminal for receiving collaboration services |
US10810360B2 (en) * | 2014-05-23 | 2020-10-20 | Samsung Electronics Co., Ltd. | Server and method of providing collaboration services and user terminal for receiving collaboration services |
US9549152B1 (en) * | 2014-06-09 | 2017-01-17 | Google Inc. | Application content delivery to multiple computing environments using existing video conferencing solutions |
RU2614137C2 (en) * | 2014-06-26 | 2017-03-23 | Xiaomi Inc. | Method and apparatus for obtaining information |
CN104113785A (en) * | 2014-06-26 | 2014-10-22 | Xiaomi Technology Co., Ltd. | Information acquisition method and device |
EP2961172A1 (en) * | 2014-06-26 | 2015-12-30 | Xiaomi Inc. | Method and device for information acquisition |
US20160098180A1 (en) * | 2014-10-01 | 2016-04-07 | Sony Corporation | Presentation of enlarged content on companion display device |
TWI608433B (en) * | 2014-10-17 | 2017-12-11 | Shenzhen China Star Optoelectronics Technology Co., Ltd. | Interacting method |
US20160360289A1 (en) * | 2015-06-03 | 2016-12-08 | InVidz, LLC | Video management and marketing |
US9654843B2 (en) * | 2015-06-03 | 2017-05-16 | Vaetas, LLC | Video management and marketing |
US11699266B2 (en) * | 2015-09-02 | 2023-07-11 | Interdigital Ce Patent Holdings, Sas | Method, apparatus and system for facilitating navigation in an extended scene |
US20180182168A1 (en) * | 2015-09-02 | 2018-06-28 | Thomson Licensing | Method, apparatus and system for facilitating navigation in an extended scene |
CN106648675A (en) * | 2016-12-28 | 2017-05-10 | Lemi Technology Co., Ltd. | Method and device for displaying application program usage information, and electronic equipment |
US11622143B2 (en) | 2016-12-31 | 2023-04-04 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams in mixed mode |
US11863827B2 (en) | 2016-12-31 | 2024-01-02 | Turner Broadcasting System, Inc. | Client-side dynamic presentation of programming content in an indexed disparate live media output stream |
US11595713B2 (en) | 2016-12-31 | 2023-02-28 | Turner Broadcasting System, Inc. | Dynamic scheduling and channel creation based on external data |
US10856016B2 (en) | 2016-12-31 | 2020-12-01 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams in mixed mode based on user selection |
US11546400B2 (en) | 2016-12-31 | 2023-01-03 | Turner Broadcasting System, Inc. | Generating a live media segment asset |
US11917217B2 (en) | 2016-12-31 | 2024-02-27 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams in mixed mode based on user selection |
US11503352B2 (en) | 2016-12-31 | 2022-11-15 | Turner Broadcasting System, Inc. | Dynamic scheduling and channel creation based on external data |
US11503349B2 (en) | 2016-12-31 | 2022-11-15 | Turner Broadcasting System, Inc. | Publishing a plurality of disparate live media output stream manifests using live input streams and pre-encoded media assets |
US11611804B2 (en) | 2016-12-31 | 2023-03-21 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams using live input streams |
US11622142B2 (en) | 2016-12-31 | 2023-04-04 | Turner Broadcasting System, Inc. | Dynamic scheduling and channel creation based on external data |
US11871062B2 (en) | 2016-12-31 | 2024-01-09 | Turner Broadcasting System, Inc. | Server-side dynamic insertion of programming content in an indexed disparate live media output stream |
US10750224B2 (en) | 2016-12-31 | 2020-08-18 | Turner Broadcasting System, Inc. | Dynamic scheduling and channel creation based on user selection |
US11134309B2 (en) | 2016-12-31 | 2021-09-28 | Turner Broadcasting System, Inc. | Creation of channels using pre-encoded media assets |
US11109086B2 (en) | 2016-12-31 | 2021-08-31 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams in mixed mode |
US11665398B2 (en) | 2016-12-31 | 2023-05-30 | Turner Broadcasting System, Inc. | Creation of channels using pre-encoded media assets |
US10694231B2 (en) | 2016-12-31 | 2020-06-23 | Turner Broadcasting System, Inc. | Dynamic channel versioning in a broadcast air chain based on user preferences |
US11051074B2 (en) | 2016-12-31 | 2021-06-29 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams using live input streams |
US11051061B2 (en) | 2016-12-31 | 2021-06-29 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream using pre-encoded media assets |
US10965967B2 (en) | 2016-12-31 | 2021-03-30 | Turner Broadcasting System, Inc. | Publishing a disparate per-client live media output stream based on dynamic insertion of targeted non-programming content and customized programming content |
US11038932B2 (en) | 2016-12-31 | 2021-06-15 | Turner Broadcasting System, Inc. | System for establishing a shared media session for one or more client devices |
US11671641B2 (en) | 2016-12-31 | 2023-06-06 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams in mixed mode |
US10992973B2 (en) | 2016-12-31 | 2021-04-27 | Turner Broadcasting System, Inc. | Publishing a plurality of disparate live media output stream manifests using live input streams and pre-encoded media assets |
US10645462B2 (en) | 2016-12-31 | 2020-05-05 | Turner Broadcasting System, Inc. | Dynamic channel versioning in a broadcast air chain |
US10425700B2 (en) | 2016-12-31 | 2019-09-24 | Turner Broadcasting System, Inc. | Dynamic scheduling and channel creation based on real-time or near-real-time content context analysis |
US10708635B2 (en) | 2017-03-02 | 2020-07-07 | Ricoh Company, Ltd. | Subsumption architecture for processing fragments of a video stream |
US10713391B2 (en) * | 2017-03-02 | 2020-07-14 | Ricoh Co., Ltd. | Tamper protection and video source identification for video processing pipeline |
US10929685B2 (en) | 2017-03-02 | 2021-02-23 | Ricoh Company, Ltd. | Analysis of operator behavior focalized on machine events |
US10956773B2 (en) | 2017-03-02 | 2021-03-23 | Ricoh Company, Ltd. | Computation of audience metrics focalized on displayed content |
US10956495B2 (en) | 2017-03-02 | 2021-03-23 | Ricoh Company, Ltd. | Analysis of operator behavior focalized on machine events |
US11398253B2 (en) | 2017-03-02 | 2022-07-26 | Ricoh Company, Ltd. | Decomposition of a video stream into salient fragments |
US10929707B2 (en) | 2017-03-02 | 2021-02-23 | Ricoh Company, Ltd. | Computation of audience metrics focalized on displayed content |
US10956494B2 (en) | 2017-03-02 | 2021-03-23 | Ricoh Company, Ltd. | Behavioral measurements in a video stream focalized on keywords |
US20180253567A1 (en) * | 2017-03-02 | 2018-09-06 | Ricoh Co., Ltd. | Tamper Protection and Video Source Identification for Video Processing Pipeline |
US10720182B2 (en) | 2017-03-02 | 2020-07-21 | Ricoh Company, Ltd. | Decomposition of a video stream into salient fragments |
US10949705B2 (en) | 2017-03-02 | 2021-03-16 | Ricoh Company, Ltd. | Focalized behavioral measurements in a video stream |
US10719552B2 (en) | 2017-03-02 | 2020-07-21 | Ricoh Co., Ltd. | Focalized summarizations of a video stream |
US10949463B2 (en) | 2017-03-02 | 2021-03-16 | Ricoh Company, Ltd. | Behavioral measurements in a video stream focalized on keywords |
US10943122B2 (en) | 2017-03-02 | 2021-03-09 | Ricoh Company, Ltd. | Focalized behavioral measurements in a video stream |
US11095942B2 (en) | 2017-05-25 | 2021-08-17 | Turner Broadcasting System, Inc. | Rules-based delivery and presentation of non-programming media items at client device |
US11051073B2 (en) | 2017-05-25 | 2021-06-29 | Turner Broadcasting System, Inc. | Client-side overlay of graphic items on media content |
US20210385538A1 (en) * | 2017-05-25 | 2021-12-09 | Turner Broadcasting System, Inc. | Management and delivery of over-the-top services over different content-streaming systems |
US11228809B2 (en) | 2017-05-25 | 2022-01-18 | Turner Broadcasting System, Inc. | Delivery of different services through different client devices |
US10939169B2 (en) | 2017-05-25 | 2021-03-02 | Turner Broadcasting System, Inc. | Concurrent presentation of non-programming media assets with programming media content at client device |
US11245964B2 (en) * | 2017-05-25 | 2022-02-08 | Turner Broadcasting System, Inc. | Management and delivery of over-the-top services over different content-streaming systems |
US20220060787A1 (en) | 2017-05-25 | 2022-02-24 | Turner Broadcasting System, Inc. | Delivery of different services through different client devices |
US11109102B2 (en) | 2017-05-25 | 2021-08-31 | Turner Broadcasting System, Inc. | Dynamic verification of playback of media assets at client device |
US11297386B2 (en) | 2017-05-25 | 2022-04-05 | Turner Broadcasting System, Inc. | Delivery of different services through different client devices |
US11632589B2 (en) | 2017-05-25 | 2023-04-18 | Turner Broadcasting System, Inc. | Client-side overlay of graphic items on media content |
US20180343495A1 (en) * | 2017-05-25 | 2018-11-29 | Turner Broadcasting System, Inc. | Management and delivery of over-the-top services over different content-streaming systems |
US20210385537A1 (en) * | 2017-05-25 | 2021-12-09 | Turner Broadcasting System, Inc. | Management and delivery of over-the-top services over different content-streaming systems |
US11638064B2 (en) | 2017-05-25 | 2023-04-25 | Turner Broadcasting System, Inc. | Dynamic verification of playback of media assets at client device |
US11617011B2 (en) | 2017-05-25 | 2023-03-28 | Turner Broadcasting System, Inc. | Delivery of different services through different client devices |
US11825161B2 (en) * | 2017-05-25 | 2023-11-21 | Turner Broadcasting System, Inc. | Management and delivery of over-the-top services over different content-streaming systems |
US10924804B2 (en) | 2017-05-25 | 2021-02-16 | Turner Broadcasting System, Inc. | Dynamic verification of playback of media assets at client device |
US11825162B2 (en) * | 2017-05-25 | 2023-11-21 | Turner Broadcasting System, Inc. | Management and delivery of over-the-top services over different content-streaming systems |
US11659246B2 (en) | 2017-05-25 | 2023-05-23 | Turner Broadcasting System, Inc. | Client-side playback of personalized media content generated dynamically for event opportunities in programming media content |
US11778272B2 (en) | 2017-05-25 | 2023-10-03 | Turner Broadcasting System, Inc. | Delivery of different services through different client devices |
US10827220B2 (en) | 2017-05-25 | 2020-11-03 | Turner Broadcasting System, Inc. | Client-side playback of personalized media content generated dynamically for event opportunities in programming media content |
US11743539B2 (en) | 2017-05-25 | 2023-08-29 | Turner Broadcasting System, Inc. | Concurrent presentation of non-programming media assets with programming media content at client device |
US10338799B1 (en) * | 2017-07-06 | 2019-07-02 | Spotify Ab | System and method for providing an adaptive seek bar for use with an electronic device |
US10878851B2 (en) * | 2017-08-18 | 2020-12-29 | BON2 Media Services LLC | Embedding interactive content into a shareable online video |
US11475920B2 (en) * | 2017-08-18 | 2022-10-18 | Bon2 Media Services LLC | Embedding interactive content into a shareable online video |
CN108377334A (en) * | 2018-04-03 | 2018-08-07 | UCWeb Inc. | Short video shooting method, device and electronic terminal |
US11736534B2 (en) | 2018-07-17 | 2023-08-22 | Turner Broadcasting System, Inc. | System for establishing a shared media session for one or more client devices |
US20220005111A1 (en) * | 2018-09-25 | 2022-01-06 | talkshoplive, Inc. | Systems and methods for embeddable point-of-sale transactions |
US10983812B2 (en) * | 2018-11-19 | 2021-04-20 | International Business Machines Corporation | Replaying interactions with a graphical user interface (GUI) presented in a video stream of the GUI |
US20200162798A1 (en) * | 2018-11-20 | 2020-05-21 | International Business Machines Corporation | Video integration using video indexing |
US10932012B2 (en) * | 2018-11-20 | 2021-02-23 | International Business Machines Corporation | Video integration using video indexing |
US10880606B2 (en) | 2018-12-21 | 2020-12-29 | Turner Broadcasting System, Inc. | Disparate live media output stream playout and broadcast distribution |
US11082734B2 (en) | 2018-12-21 | 2021-08-03 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream that complies with distribution format regulations |
US11617000B2 (en) | 2018-12-21 | 2023-03-28 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream that complies with distribution format regulations |
US11553227B2 (en) | 2018-12-21 | 2023-01-10 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream that complies with distribution format regulations |
US11743538B2 (en) | 2018-12-21 | 2023-08-29 | Turner Broadcasting System, Inc. | Disparate live media output stream playout and broadcast distribution |
US11683543B2 (en) | 2018-12-22 | 2023-06-20 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream manifest that includes one or more media segments corresponding to key events |
US20200204834A1 (en) | 2018-12-22 | 2020-06-25 | Turner Broadcasting System, Inc. | Publishing a Disparate Live Media Output Stream Manifest That Includes One or More Media Segments Corresponding to Key Events |
US10873774B2 (en) | 2018-12-22 | 2020-12-22 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream manifest that includes one or more media segments corresponding to key events |
US11765409B2 (en) | 2018-12-22 | 2023-09-19 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream manifest that includes one or more media segments corresponding to key events |
US11488363B2 (en) | 2019-03-15 | 2022-11-01 | Touchcast, Inc. | Augmented reality conferencing system and method |
US11528512B2 (en) | 2019-04-30 | 2022-12-13 | Amazon Technologies, Inc. | Adjacent content classification and targeting |
US11057652B1 (en) * | 2019-04-30 | 2021-07-06 | Amazon Technologies, Inc. | Adjacent content classification and targeting |
US11032626B2 (en) | 2019-06-18 | 2021-06-08 | Neal C. Fairbanks | Method for providing additional information associated with an object visually present in media content |
US10477287B1 (en) | 2019-06-18 | 2019-11-12 | Neal C. Fairbanks | Method for providing additional information associated with an object visually present in media content |
US11830525B2 (en) | 2019-10-04 | 2023-11-28 | Udo, LLC | Non-intrusive digital content editing and analytics system |
US11367466B2 (en) * | 2019-10-04 | 2022-06-21 | Udo, LLC | Non-intrusive digital content editing and analytics system |
US11928758B2 (en) | 2020-03-06 | 2024-03-12 | Christopher Renwick Alston | Technologies for augmented-reality |
US11599253B2 (en) * | 2020-10-30 | 2023-03-07 | ROVI GUIDES, INC. | System and method for selection of displayed objects by path tracing |
US11956518B2 (en) | 2020-11-23 | 2024-04-09 | Clicktivated Video, Inc. | System and method for creating interactive elements for objects contemporaneously displayed in live video |
US11962821B2 (en) | 2021-03-19 | 2024-04-16 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream using pre-encoded media assets |
Also Published As
Publication number | Publication date |
---|---|
US20190174191A1 (en) | 2019-06-06 |
US9588663B2 (en) | 2017-03-07 |
US20150033127A1 (en) | 2015-01-29 |
US20170180806A1 (en) | 2017-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20190174191A1 (en) | System and Method for Integrating Interactive Call-To-Action, Contextual Applications with Videos | |
US11432033B2 (en) | Interactive video distribution system and video player utilizing a client server architecture | |
US11915277B2 (en) | System and methods for providing user generated video reviews | |
JP6040120B2 (en) | System and method for generating media content using microtrends | |
US10506278B2 (en) | Interactive video distribution system and video player utilizing a client server architecture | |
US20140149867A1 (en) | Web-based interactive experience utilizing video components | |
US20080163283A1 (en) | Broadband video with synchronized highlight signals | |
US10674230B2 (en) | Interactive advertising and marketing system | |
US20080281689A1 (en) | Embedded video player advertisement display | |
US11617015B2 (en) | Connected interactive content data creation, organization, distribution and analysis | |
US20120139940A1 (en) | Video Overlay Techniques | |
US9113215B1 (en) | Interactive advertising and marketing system | |
WO2013138370A1 (en) | Interactive overlay object layer for online media | |
US20180348972A1 (en) | Lithe clip survey facilitation systems and methods | |
US20170287000A1 (en) | Dynamically generating video / animation, in real-time, in a display or electronic advertisement based on user data | |
US11647259B2 (en) | Method for serving interactive digital advertising content within a streaming platform | |
US20110208583A1 (en) | Advertising control system and method for motion media content | |
CN103988162B (en) | Systems and methods involving the creation, viewing and utilization of information modules |
EP3317838A1 (en) | Interactive advertising and marketing method | |
WO2005088492A1 (en) | Rich media personal selling system and method | |
US20160038838A1 (en) | Consumer engagement gaming platform | |
JP2002202922A (en) | Tour playback system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: 2CIMPLE, INC., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ABBAS, SYED ATHAR;AHMAD, MUBASHIR;SANAPALA, SRIDHAR;AND OTHERS;SIGNING DATES FROM 20091021 TO 20091104;REEL/FRAME:023528/0779 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |