US20110184807A1 - System and Method for Filtering Targeted Advertisements for Video Content Delivery - Google Patents


Info

Publication number
US20110184807A1
Authority
US
United States
Prior art keywords
user
advertisements
categories
video
list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/957,972
Inventor
Qi Wang
Shu Wang
Yu Huang
Hong Heather Yu
Dong-Qing Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FutureWei Technologies Inc
Original Assignee
FutureWei Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FutureWei Technologies Inc filed Critical FutureWei Technologies Inc
Priority to US12/957,972 (US20110184807A1)
Priority to US12/958,102 (US9473828B2)
Assigned to FUTUREWEI TECHNOLOGIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HUANG, YU; YU, HONG HEATHER; ZHANG, DONG-QING; WANG, QI; WANG, SHU
Publication of US20110184807A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0255Targeted advertisements based on user history
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0269Targeted advertisements based on user profile or attribute
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/475End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4756End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for rating content, e.g. scoring a recommended movie
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6582Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Definitions

  • the present invention relates generally to data communication systems, and more particularly to a system and method for filtering targeted advertisements for video content delivery.
  • VOD video-on-demand
  • a method of inserting advertisements into video content includes electronically filtering a first list of advertisements according to user preference data to determine a second list of advertisements.
  • the video content has a plurality of segments, each segment of which is associated with a category from the plurality of categories.
  • each advertisement in the first list of advertisements is associated with a video category from a plurality of categories, and electronically filtering includes filtering the first list of advertisements for the plurality of video segments on a segment by segment basis.
  • the method further includes transmitting the second list of advertisements to a user device for insertion with the video content.
  • a method of inserting advertisements into video content includes electronically filtering a first list of advertisements according to user preference data to determine a second list of advertisements.
  • Each advertisement in the first list of advertisements comprises a category from a plurality of categories, and the user preference data includes a plurality of preference fields indexed by the plurality of categories.
  • electronically filtering includes updating the user preference data according to a user profile matrix that includes a plurality of attributes according to a hierarchical user ontology.
  • the method further includes transmitting the second list of advertisements to a user device for insertion with the video content.
  • a system for inserting advertisements into video content includes a filtering block configured to filter a first list of advertisements according to user preference data to determine a second list of advertisements.
  • Each advertisement in the first list of advertisements is associated with a video category from a plurality of categories.
  • the video content includes a plurality of video segments, and each video segment is associated with a category from the plurality of categories.
  • the filtering block filters the first list of advertisements on a segment by segment basis.
  • a non-transitory computer readable medium has an executable program stored thereon.
  • the program instructs a microprocessor to filter a first list of advertisements according to user preference data to determine a second list of advertisements.
  • Each advertisement in the first list of advertisements is associated with a video category from a plurality of categories.
  • the video content includes a plurality of video segments, and each video segment is associated with a category from the plurality of categories.
  • the program filters the first list of advertisements on a segment by segment basis.
  • FIG. 1 illustrates an embodiment video transmission and advertisement insertion system
  • FIG. 2 illustrates an embodiment metadata matching system
  • FIG. 3 illustrates an embodiment advertisement filtering system
  • FIG. 4 illustrates an embodiment advertisement insertion system
  • FIG. 5 illustrates an embodiment advertisement insertion example
  • FIG. 6 illustrates a flow chart of an embodiment advertisement determination and insertion method
  • FIG. 7 illustrates an embodiment 5-layer advertisement determination system structure
  • FIG. 8 illustrates an ontological structure according to the prior art
  • FIGS. 9 a and 9 b illustrate an embodiment category structure
  • FIGS. 10 a and 10 b illustrate an embodiment user profile structure
  • FIGS. 11 a - 11 d illustrate embodiment table and matrix structures
  • FIG. 12 illustrates an embodiment PNP update system
  • FIG. 13 illustrates an embodiment sliding history window
  • FIG. 14 illustrates an embodiment Bayesian network model
  • FIG. 15 illustrates a flow diagram of the construction of an embodiment Bayesian network
  • FIG. 16 illustrates an embodiment Bayesian network construction algorithm
  • FIG. 17 illustrates an embodiment Bayesian filtering algorithm
  • FIG. 18 illustrates an embodiment Bayesian model updating algorithm
  • FIG. 19 illustrates an embodiment computer system that implements embodiment algorithms.
  • Embodiments of the invention may also be applied to other applications that require advertising insertion or applications that match content to user preferences or profiles.
  • Embodiments of the invention address providing content based target advertisement capability.
  • metadata relating to both the video/IPTV program and advertisements are available or generated using metadata techniques known in the art.
  • Embodiments of the present invention match advertisement to video content and/or IPTV programming. Some embodiments further customize advertisement matching by learning user's preference, profile and scene information.
  • user profile information can include categories and ontology that include user internal information such as age and gender, user external information such as family status, occupation, education, and user scene information such as whether the user is with his or her family or alone at home, on a vacation or on a business trip.
  • user's profile information will affect a user's preference for particular types of video content and video content/advertisement combinations.
  • the user may be more receptive to advertisements directed toward restaurants, attractions and discounts than toward more business related advertisements, such as advertisements directed toward business staffing firms.
  • the user may be more receptive to business staffing firm advertisements.
  • embodiments use information such as age, gender, family status, occupation and education to initialize a new user preference matrix.
  • such an initialization accelerates the learning process of user preference, and provides better selection of advertisements for viewers.
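As an illustration of this initialization step, the following sketch seeds a preference matrix from coarse profile attributes. The category names, rule conditions and weight values are illustrative assumptions, not values taken from the patent.

```python
# Hypothetical sketch: seed a new user's preference matrix from profile
# attributes (age, gender, family status, occupation, education).
# All categories, rules and weights below are illustrative assumptions.

def init_preference_matrix(profile, categories):
    """Return {category: preference score in [0, 1]} seeded from profile."""
    prefs = {c: 0.5 for c in categories}  # neutral prior for every category
    if profile.get("family_status") == "with_children":
        prefs["toys_children_baby"] = 0.8
    if profile.get("occupation") == "business":
        prefs["business_services"] = 0.8
    if profile.get("age", 99) < 30:
        prefs["movies_music_games"] = 0.7
    return prefs

categories = ["toys_children_baby", "business_services",
              "movies_music_games", "travel"]
prefs = init_preference_matrix(
    {"age": 25, "gender": "F", "family_status": "single"}, categories)
# prefs["movies_music_games"] == 0.7; untouched categories stay at 0.5
```

Such a seeded matrix gives the learning mechanism a non-uniform starting point, which is why initialization can accelerate preference learning.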
  • FIG. 1 illustrates IPTV transmission system 100 according to an embodiment of the present invention, which includes IPTV provider and metadata server 102 , service provider 104 and user device 106 .
  • User device 106 provides video content and advertising content for user 108 .
  • IPTV Provider 102 provides video programming to user device 106 and provides video content metadata that describes the video programming to service provider 104 .
  • the metadata description is in a TVAnytime format, which is a format developed by the TV Anytime Forum.
  • other metadata formats can be used, for example, MPEG-7 and MPEG-21.
  • the video programming sent to user device 106 can be in any video format, such as an MPG, or an AVI format.
  • service provider 104 provides advertisements and advertisement metadata associated with the advertisements to user device 106 .
  • user device 106 can receive advertisements from another source.
  • service provider 104 has advertising provider 114 and advertising matching service 116 .
  • Advertising provider 114 provides advertisement content and advertisement metadata
  • advertising matching service 116 provides computation capability regarding advertisement metadata.
  • advertising matching service 116 includes metadata matching block 110 and ads filtering block 112 .
  • metadata matching block 110 matches video content metadata to advertisement metadata, and ads filtering block 112 filters the matched metadata according to a user preference model. For example, matching service 116 selects related advertisements for a given IPTV program via a metadata matching algorithm and generates a global ad play table. Matching service 116 then filters the global ad play table based on user preference data.
  • service provider 104 also provides computation capability to match metadata associated with advertisements with metadata associated with video programming, and filter the matched metadata according to a user profile.
  • advertisement metadata is generated based on an embodiment advertisement metadata schema.
  • the service provider can receive advertising metadata from another source or service.
  • the processor for matching metadata can be separate from the computing resources that store, process and transmit the actual advertising content.
  • the computation resources or server that performs metadata matching 110 can be separate from the computation resources or server that performs ads filtering 112 .
  • user device 106 receives video programming from IPTV provider 102 and a list of filtered advertisements from service provider 104 . In some embodiments, user device 106 further provides requests, such as IPTV requests, and feedback data from user 108 with respect to the provided advertisements. In an embodiment, user device 106 includes video reception equipment such as, but not limited to a computer, a high-definition television (HDTV), a set-top box, a hand-held video device, and the like.
  • HDTV high-definition television
  • FIG. 2 illustrates a block diagram of embodiment metadata matching function 150 .
  • a matching function matches advertisement metadata 154 against the metadata 152 of a given IPTV or video program to select content related advertisements and generate Global Ads_Play_Table 156 .
  • Global Ads_Play_Table includes fields such as VideoId, VideoSegmentId, VideoSegmentTime, and RelatedAdsIdList. In further embodiments, greater, fewer or different fields can be used.
  • VideoID represents an ID of a video
  • VideoSegmentId represents an ID of a video segment in the video
  • VideoSegmentTime represents the time stamp of the video segment
  • RelatedAdsIdList represents a list of ads related to the video segment.
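A row of the Global Ads_Play_Table with the four fields named above might be modeled as follows; the field types (str/int/float) are assumptions, since the patent does not specify them.

```python
# Sketch of one Global Ads_Play_Table row using the four fields named in
# the text: VideoId, VideoSegmentId, VideoSegmentTime, RelatedAdsIdList.
# Field types are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AdsPlayRow:
    video_id: str                  # VideoId: ID of a video
    video_segment_id: int          # VideoSegmentId: ID of a segment in the video
    video_segment_time: float      # VideoSegmentTime: time stamp of the segment
    related_ads_id_list: List[str] = field(default_factory=list)

row = AdsPlayRow("video42", 3, 127.5, ["ad7", "ad19"])
```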
  • FIG. 3 illustrates a block diagram of embodiment filtering block 160 , which filters Global Ads_Play_Table 156 according to user preference data to produce Filtered Ads_Play_Table 162 .
  • Filtered Ads_Play_Table 162 describes the subset of Global Ads_Play_Table that most closely matches the user preference data.
  • service provider 104 sends Filtered_Ads_Play_Table to user device 106 to specify which ads are inserted into the video content played by user device 106 .
  • user device 106 provides user profile and/or user preference data to service provider 104 for use in ad filtering.
  • the user's preference data can be learned or can be manually specified at the user device, for example, with the help of the user's feedback.
  • FIG. 4 illustrates a block diagram illustrating embodiment ad insertion method 170 .
  • After user device 106 receives Filtered Ads_Play_Table 162 and video content from IPTV provider 102 , user device 106 inserts advertisements 174 within video content 172 .
  • User device 106 uses the Filtered Ads_Play_Table, together with advertisement locating information, to pop up advertisements while the IPTV program plays.
  • FIG. 5 illustrates video content 180 having advertisement 182 inserted in the lower right hand corner.
  • advertising locating information is information for locating the advertisements in a video or a video frame.
  • FIG. 6 illustrates a block diagram of embodiment ad insertion method 200 .
  • the video program provider sends the video program's metadata (IPTV Metadata) to the matching service.
  • the video program provider is an IPTV provider.
  • other types of video providers such as Web TV service providers, can be used.
  • the advertising provider sends the advertisement metadata to the matching service.
  • steps 202 and 203 are performed at the same time.
  • the matching service uses the video metadata to search against the Ads metadata based on several similarity criteria to generate the Global Ads_Play_Table.
  • Global similarity means that the chosen ads metadata matches the video metadata at a global level.
  • For example, the leading actor of the video content is the same person as the actor in the advertisement, or the topics of the video content and the advertisement content are similar (e.g., both about Christmas).
  • Local similarity means that the matched ads metadata matches the video metadata within one segment.
  • For example, one segment of the video content is related to or shows a particular location of a chain of home products stores, and an advertisement for the chain of home products stores can be matched with the corresponding video segment.
  • advertisements selected based on local similarity are assigned a particular “popup-time” based on the length of the video shot.
  • a “popup-time” is not defined, and can be decided by other factors in some embodiments.
  • a “popup-time” can also be defined for advertisements based on global similarity.
  • the matching service sends video content data to the user device.
  • the video program provider sends the video program content data as well as metadata that describe the video content to the consumer's user device.
  • the user device sends the user's preference and profile data to the matching service. In some embodiments, steps 206 and 208 can be performed at the same time.
  • the matching service filters the matched advertisement list according to the user's preference and profile data.
  • the user preference data is used to filter out advertisements from the Global Ads_Play_Table to generate a Filtered Ads_Play_Table.
  • the removed advertisements are those matching a low user interest according to the user preference and profile data.
  • the advertisements corresponding to a high user preference are retained.
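The filtering step described above can be sketched as a per-segment threshold test against the user preference data; the threshold value, score scale and table shapes are assumptions, not specified in the patent.

```python
# Hypothetical per-segment filtering of the Global Ads_Play_Table against
# user preference data. Threshold, score scale and data shapes are assumed.

def filter_ads_play_table(global_table, prefs, ad_category, threshold=0.5):
    """global_table: {segment_id: [ad_id, ...]}
    prefs: {category: score}, ad_category: {ad_id: category}.
    Keeps only ads whose category score meets the threshold."""
    return {seg: [a for a in ads
                  if prefs.get(ad_category.get(a), 0.0) >= threshold]
            for seg, ads in global_table.items()}

global_table = {1: ["ad1", "ad2"], 2: ["ad3"]}
prefs = {"travel": 0.9, "finance": 0.2}          # low interest in finance
ad_category = {"ad1": "travel", "ad2": "finance", "ad3": "travel"}
filtered = filter_ads_play_table(global_table, prefs, ad_category)
# filtered == {1: ["ad1"], 2: ["ad3"]}  (the low-interest ad is removed)
```

Advertisements matching a low user interest are removed, while those corresponding to a high preference score are retained, mirroring the behavior described above.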
  • the Global Ads_Play_Table is saved at the server side to be used for other consumers, and the Filtered Ads_Play_Table is transmitted to the consumer's user device to help play the advertisements in step 212 .
  • the consumer's user device sends a request to the advertising provider to retrieve advertisements.
  • these advertisements are played based on the time slots specified in the Filtered Ads_Play_Table.
  • the advertising provider sends the advertisements to the user device in response to the request.
  • the advertisements are inserted with the video content on the user device.
  • the advertisements are displayed using different ads insertion schemes.
  • the insertion schemes selected are those that maximize user experience, according to user feedback as well as the advertiser's monetary goals.
  • the ads can be put on the bottom of a video frame or can be inserted as whole frames after video frames.
  • a survey regarding the advertisements that were played in the video content is displayed on the user device when the video program is completed in some embodiments.
  • the user survey is displayed at other times, for example, after several videos are viewed.
  • the user provides feedback about the combinations of the video and advertisements.
  • an embodiment learning mechanism updates the user's preference data based on the feedback.
  • the feedback is in the format of a list of Ads_Feedback_Tuple, in which each entry of Ads_Feedback_Tuple includes three feedback elements: int shotId, int AdsId, and enum remark.
  • int shotId represents the ID of a shot
  • int AdsId represents the ID of an ad segment
  • enum remark represents a number indicating the user's preference level.
  • other feedback fields can be used, for example, a field specifying a user's location.
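The Ads_Feedback_Tuple described above could be modeled as follows; the concrete Remark values are assumptions, since the patent only says the enum encodes a preference indicator.

```python
# Sketch of Ads_Feedback_Tuple (int shotId, int AdsId, enum remark).
# The specific Remark values are illustrative assumptions.
from enum import Enum
from typing import NamedTuple

class Remark(Enum):
    DISLIKE = -1
    NEUTRAL = 0
    LIKE = 1

class AdsFeedbackTuple(NamedTuple):
    shot_id: int    # int shotId: ID of a shot
    ads_id: int     # int AdsId: ID of an ad segment
    remark: Remark  # enum remark: preference indicator

# feedback is sent as a list of tuples, one per rated ad
feedback = [AdsFeedbackTuple(3, 17, Remark.LIKE),
            AdsFeedbackTuple(4, 21, Remark.DISLIKE)]
```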
  • FIG. 7 illustrates embodiment 5-layer system structure 300 .
  • the first layer is data source layer 302 that includes metadata file 312 , ads_play_table 314 , user preference data 316 and video file 318 .
  • Metadata manager layer 304 performs the function of storing and managing metadata, and includes IPTV metadata 320 , ads metadata 322 and preference model 324 .
  • Algorithm layer 306 includes matching algorithm 326 , preference learning 328 and ads filtering algorithm 330 .
  • Media layer 308 includes ads player 332 and media player 334 .
  • GUI 310 includes TV MD panel 336 that shows the video metadata, advertisement MD panel 338 that shows the ad metadata, adsPlay panel 340 that plays the ad segment, control panel 342 that performs certain control functions, such as stop or fast forward, and media player panel 344 that plays the video.
  • matching algorithm 326 finds matches between IPTV metadata 320 and Ads metadata 322 .
  • This module deals with metadata matching in order to find content related advertisements at the metadata level.
  • the input is a TV metadata segment that describes the TV content, and a list of advertisement metadata segments that describe the advertisements.
  • advertising matching can be specified as having a first input as a video segment metadata instance (VIDEO_METADATA) and a second input as an advertisement metadata instance (ADS_METADATA).
  • a preference and profile (PNP) model is used as a filtering mechanism.
  • In one embodiment, the “Global Ads_Play_Table” initially includes all possible content related advertisements for one particular video segment; in a next step, PNP-irrelevant advertisements (i.e., ads that are not relevant to the video content and user PNPs) are filtered out by the preference matrix. Alternatively, the Global Ads_Play_Table can be initialized with a smaller set of initial advertisements depending on the system and its specifications.
  • both the video metadata and the advertisement metadata have a keyword and synopsis description, and two methods are used to match advertisements to one scene or shot.
  • a first matching method is keyword matching in which a keyword in the video content metadata is matched with a keyword in the advertisement metadata.
  • in one embodiment, keywords are matched according to pseudo code that compares the keyword set extracted from the video metadata against the keyword set extracted from the advertisement metadata.
  • the functions getVideoKeywords and getAdsKeywords extract keywords from the video segment metadata file and advertisement metadata file by analyzing related nodes (for example “tva:Synopsis”, “tva:Keyword”, “AdvertisementKeyword” and “AdvertisementCategory”).
  • tva:Synopsis is a synopsis of the video content that may contain a phrase and/or one or more sentences
  • tva:Keyword is a keyword that corresponds to the video content
  • AdvertisementKeyword is a keyword that corresponds to advertising content
  • AdvertisementCategory is a category corresponding to the advertising content.
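One plausible form of the keyword-matching step is sketched below. getVideoKeywords and getAdsKeywords are the function names given in the text; their bodies here (plain dict lookups) are stand-ins for the real metadata-node parsing.

```python
# Plausible form of the keyword-matching pseudo code. The real
# getVideoKeywords/getAdsKeywords parse metadata nodes such as
# tva:Keyword, tva:Synopsis, AdvertisementKeyword and
# AdvertisementCategory; dict lookups stand in for that here.

def getVideoKeywords(video_md):
    return set(video_md.get("keywords", []))

def getAdsKeywords(ads_md):
    return set(ads_md.get("keywords", []))

def match_ads_to_segment(video_md, ads_md_list):
    """Return IDs of ads sharing at least one keyword with the segment."""
    vkw = getVideoKeywords(video_md)
    return [ads_md["id"] for ads_md in ads_md_list
            if vkw & getAdsKeywords(ads_md)]

video_md = {"keywords": ["travel", "beach"]}
ads = [{"id": "ad1", "keywords": ["beach", "resort"]},
       {"id": "ad2", "keywords": ["finance"]}]
matches = match_ads_to_segment(video_md, ads)
# matches == ["ad1"]: only ad1 shares a keyword with the segment
```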
  • an ontological matching strategy is used to match the video metadata to the advertising metadata.
  • Using an ontological matching strategy reduces noise and the possibility of mismatch.
  • an ontological strategy can help identify pertinent keywords.
  • under the ontological strategy, singular and plural words are treated as similar keywords.
  • the word “family” and “families” are treated as the same keyword in one embodiment.
  • morphologically similar words are treated similarly; for example, the words “politics” and “politician” are treated as the same keyword in one embodiment.
  • synonyms are treated as the same keyword.
  • in one embodiment, the ontological matching strategy is implemented using a lexical database, for example, a WordNet database accessed through a library such as WordNet Boost.
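The three normalization rules above (singular/plural folding, morphological variants, synonyms) can be illustrated with a toy canonicalizer; the word table below is a tiny assumption, and the patent instead relies on a full lexical database such as WordNet.

```python
# Toy canonicalizer illustrating the ontological matching rules: map
# plural, morphological and synonymous variants to one canonical keyword.
# The SYNONYMS table is an illustrative assumption; a real system would
# consult a lexical database such as WordNet.

SYNONYMS = {
    "families": "family",      # plural folded to singular
    "politician": "politics",  # morphological variant
    "politics": "politics",
    "auto": "automobile",      # synonym
    "car": "automobile",
}

def canonical(word):
    """Map a keyword to its canonical form (identity if unknown)."""
    w = word.lower()
    return SYNONYMS.get(w, w)

# "family" and "families" now match as the same keyword:
same = canonical("Families") == canonical("family")  # True
```

Running both metadata keyword sets through such a canonicalizer before matching reduces noise and the possibility of mismatch, as described above.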
  • the matching algorithm is based on the tva:Keyword of the video shot metadata and the AdvertisementKeyword of the advertisement metadata.
  • the matching algorithm also uses the tva:Synopsis of the video shot metadata or the AdvertisementDescription of the advertisement or both fields.
  • a natural language processing method can be used to find key grammar functional units within a description sentence of tva:Synopsis and AdvertisementDescription. After these grammar functional units are found, for example, by using existing grammar parsing software tools, another keyword matching between the key grammar functional units is performed to find matching pairs of video shot metadata and advertisement metadata.
  • video content and advertising content are categorized according to embodiment ontological techniques, using, for example, Upper Ontology to ensure that the video content metadata and the advertising metadata are covered by one or several categories.
  • in one embodiment, once the categories are established, no further changes are made to the defined categories.
  • a flexible categorization scheme can be used in which categories are updateable.
  • FIG. 8 illustrates a hierarchy of top-level categories according to the book, Knowledge Representation: Logical, Philosophical, and Computational Foundations, by John F. Sowa, Brooks Cole (Pacific Grove, Calif., 2000) and described at http://www.jfsowa.com/ontology/toplevel.htm.
  • categories are derived by combining top levels of the FIG. 8 with Basic Formal Ontology (BFO), which was developed and formulated by Barry Smith and Pierre Grenon and described online at http://www.ifomis.org/bfo.
  • BFO Basic Formal Ontology
  • the first level is Things, which includes everything in the world.
  • the category of Things is further divided into 7 Level 2 categories: Independent, Physical, Relative, Abstract, Mediating, Continuant and Occurrent. These seven Level 2 categories are further divided, through a middle level, to a level containing the Object category, yielding 12 categories that are also referred to as central categories. Table 1 shows how the 12 central categories are derived from the Level 2 categories according to Knowledge Representation.
  • these 12 central categories are adjusted to be suitable for video clip and advertisement categories.
  • some of the twelve categories, for example structure, situation, object, history, process, description and purpose, are divided into subcategories. Other categories, in some embodiments, remain in their original form. Alternatively, different groupings of categories can be subdivided depending on the particular application and its specifications.
  • the object and participation categories are combined because they have similar or the same sub-categories in a video database.
  • A description of one embodiment category structure is illustrated in FIGS. 9 a and 9 b .
  • FIG. 9 a illustrates the top-level categories juncture, structure, script, situation, object, schema, history, process, description, purpose and reason and their subcategories, if applicable.
  • FIG. 9 b illustrates further subcategories of “artificial inanimate object,” which, itself, is a subcategory of the object category.
  • the artificial inanimate object subcategory includes a movie, music and games category that pertains to movies, video, music, television, games, and related objects and products.
  • the books and magazine subcategory pertains to books, newspaper, magazine and digital publications
  • the computer subcategory pertains to, for example, computer hardware, software, pc games and peripheral devices
  • the electronics subcategory includes, for example, consumer electronic devices such as cameras, televisions, and the like.
  • the embodiment home and garden subcategory pertains to home and garden products, for example, furniture
  • the grocery subcategory pertains to groceries such as food and wine.
  • the embodiment health and beauty subcategory includes, for example, medicine, natural and organic foods, and beauty products, and the embodiment toys, children and baby subcategory covers toys and baby products including, but not limited to, food and clothing.
  • the embodiment apparel and shoes subcategory pertains to, for example, clothes, shoes and accessories, and the embodiment sports and outdoor subcategory pertains to, for example, sports products and products for outdoor activities.
  • the tools and auto subcategory relates to objects such as power tools, hand tools, equipment, automobiles and related products.
  • the embodiment jewelry and watch subcategory pertains to jewelry and watches, and the embodiment travel subcategory covers travel related objects such as hotels and travel products.
  • the embodiment arts subcategory relates to art related objects such as painting and sculptures.
  • the other artificial inanimate objects subcategory pertains to objects that do not fit into the artificial inanimate object categories described hereinabove.
  • the juncture category describes a prehending entity that is an object in a stable relationship to some prehended entity during that interval.
  • An example of a juncture is the relationship between two adjacent stones in an arch.
  • the structure category refers to that which mediates multiple objects whose junctures constitute the structure.
  • the structure category is divided into an artificial structure subcategory and a natural structure subcategory.
  • the artificial subcategory describes, for example, human built structures
  • the natural structure subcategory describes, for example, structures in nature.
  • the script category describes an abstract form that represents time sequences. Such sequences can include, for example, a computer program, a recipe for baking a cake, a sheet of music to be played on a piano, or a differential equation that governs the evolution of a physical process.
  • the situation category describes something that occurs in a region of time and space. The situation category is subdivided into a state category and a phenomenon category. The state category describes a situation that does not change during a given period of time, and the phenomenon category describes a state or process known through the senses rather than by intuition or reasoning.
  • the embodiment object category describes an entity that retains its identity over some interval of time.
  • Subcategories of the object category include natural inanimate object, artificial inanimate objects, wild animals, human, pets, plant and livestock.
  • the natural inanimate object category pertains to non-living physical entities such as a rock or a mountain.
  • the artificial inanimate object category pertains to a large number of further subcategories as described in FIGS. 9 a and 9 b.
  • Artificial inanimate objects can include, for example, such objects as vehicles, desks and chairs.
  • the wild animal subcategory includes animals in the wild such as tigers, lions, monkeys, and the like.
  • the human subcategory includes human beings, and the pet subcategory includes domesticated or tamed animals kept as companions.
  • the plant subcategory includes members of the kingdom Plantae, and the livestock category includes, for example, horses, cattle, sheep, and other useful animals kept or raised, for example on a farm or a ranch.
  • the schema category represents an abstract form whose structure does not specify time or time-like relationships. Examples include geometric forms, the syntactic structures of sentences in some language, or the encodings of pictures in a multimedia system.
  • the history category represents a proposition that relates some script to the stages of some occurrent, which is an entity that does not have a stable identity during any interval of time.
  • Embodiment subcategories of the history category include human in history, event in history, and thing in history.
  • the human in history subcategory describes people in history
  • the event in history subcategory describes a historic event
  • the thing in history category describes, for example, an object in history.
  • the process category represents a thing that makes a change during some period of time.
  • Embodiment subcategories of the process category include event, human action—other activity, human action—economics, human action—sports and outdoors, human action—language, human action—movie, music and games, human action—home and garden, human action—social, problem solving, video start, video end, travel, and arts creating.
  • the event subcategory describes, for example, a process that makes a change during a short period of time. In one embodiment, a short period of time is about two seconds. Alternatively, greater or lesser time periods can be considered a short period of time depending on the environment and particular embodiment.
  • the human action—other activity subcategory includes things that people do or people cause to happen.
  • the human action—economics subcategory includes the science that deals with the production, distribution, and consumption of goods and services, or the material welfare of humankind
  • the human action—sports and outdoors subcategory includes sports and outdoor activities, for example, football games, baseball games, etc.
  • the human action—language subcategory includes, for example, language related activities such as speaking and talking
  • the human action—movie, music, games subcategory includes activities, such as, but not limited to watching movies and television, listening to music and playing video games.
  • the human action—home and garden subcategory includes home and garden related activities such as housekeeping
  • the human action—social subcategory includes social activities such as going to parties and other social gatherings.
  • the embodiment problem solving subcategory includes a cognitive activity made by an agent for solving a problem.
  • the video start subcategory denotes the starting of a video clip and the video end subcategory denotes the ending of a video clip.
  • the embodiment travel subcategory pertains to travel, and the arts creating subcategory pertains to creating artistic objects.
  • the description category is subdivided into the proposition, narration, exposition, description for argumentation, abstraction and property subcategories.
  • the proposition subcategory includes descriptions, and the narration subcategory includes, for example, reports, stories, biographies, etc.
  • the embodiment exposition subcategory includes operational or locations plans such as a meeting agenda, and the description for argumentation subcategory includes arguments, issues, positions and facts.
  • the abstraction subcategory pertains to a concept that abstracts some data, such as certain computer data structures, and the property subcategory pertains to descriptions of things. It should be appreciated that in alternative embodiments, different categories can be used. For example, a user defined category can be used to provide a more specific categorization.
  • the purpose category pertains to an intention that explains a situation.
  • Embodiment subcategories include time sequence, contingency, and success or failure.
  • the time sequence subcategory describes sequences in time. For example, if an agent x performs an act y whose purpose is a situation z, the start of y occurs before the start of z.
  • the contingency subcategory describes contingent purposes. For example, if an agent x performs an act y whose purpose is a situation z described by a proposition p, then it is possible that z might not occur or that p might not be true of z.
  • the success or failure subcategory describes purposes that result in success or failure.
  • the reason category, unlike a simple description, pertains to an entity in terms of an intention.
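The ontological category structure described above forms a tree rooted at the twelve top-level categories. The following Python sketch is illustrative only: the category names follow the description, but the dict layout, the abbreviated subcategory set, and the `find_path` helper are assumptions, not part of the disclosed embodiment.

```python
# Illustrative sketch of the embodiment category tree (abbreviated).
# The dict layout and the find_path helper are assumptions, not part
# of the disclosed embodiment.
CATEGORY_TREE = {
    "juncture": {},
    "structure": {"artificial structure": {}, "natural structure": {}},
    "script": {},
    "situation": {"state": {}, "phenomenon": {}},
    "object": {
        "natural inanimate object": {},
        "artificial inanimate object": {
            "movie, music and games": {},
            "books and magazine": {},
            "computer": {},
            "electronics": {},
            # remaining subcategories of FIG. 9b omitted for brevity
        },
        "wild animal": {},
        "human": {},
        "pet": {},
        "plant": {},
        "livestock": {},
    },
    "schema": {},
    "history": {"human in history": {}, "event in history": {}, "thing in history": {}},
    "process": {},
    "description": {},
    "purpose": {"time sequence": {}, "contingency": {}, "success or failure": {}},
    "reason": {},
}

def find_path(tree, name, path=()):
    """Depth-first search returning the root-to-category path, or None."""
    for key, subtree in tree.items():
        if key == name:
            return path + (key,)
        found = find_path(subtree, name, path + (key,))
        if found is not None:
            return found
    return None
```

For example, `find_path(CATEGORY_TREE, "electronics")` returns the chain object → artificial inanimate object → electronics, which a matching engine could use to compare an advertisement's category with a video segment's category at any level of the hierarchy.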
  • FIGS. 10 a and 10 b illustrate an embodiment ontological user profile structure.
  • FIG. 10 a illustrates an embodiment user profile having three major categories: internal attributes, external attributes and scene, and each of the three major categories have subcategories.
  • the internal information subcategory includes information that contains internal attributes of the user that can include gender, age, height, weight, ethnicity, language, nationality and religion.
  • the external information category includes external attributes of the user that can include location, family status, occupation, education, spirituality, family goals, communication style, emotional management style, and conflict resolution style.
  • the scene category includes information about a user's present activities such as being at home or on a trip.
  • FIG. 10 b illustrates extensions to some of the subcategories shown in FIG. 10 a.
  • the height, ethnicity, language, weight and religion subcategories are extended to provide ranges and classifications.
  • greater or fewer categories and classifications can be used.
  • the classifications can be modified, in some embodiments, to more directly address regional needs and differences.
  • the subcategories under the language subcategories can be modified to address different dialects.
  • Other subcategories that can be similarly extended are, for example, the location, user scene subcategories, and external information subcategories 19 to 23.
  • additional attributes can be added if necessary in some embodiments.
  • attributes that are not used to assist with advertisement filtering are ignored in the user ontology, such as the user's name, address, phone number, and other privacy related attributes.
  • these privacy related attributes can be stored in the ontological structure, for example, if the user gives authorization to use this information.
  • some address data can be used to assist with targeting advertisements to specific geographical locations.
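As a sketch of how privacy-related attributes might be excluded from the user ontology unless the user gives authorization, consider the following illustrative Python fragment. The attribute names loosely follow FIGS. 10 a and 10 b; the profile layout and the `strip_private` helper are assumptions.

```python
# Illustrative user profile following FIGS. 10a-10b; the layout and the
# strip_private helper are assumptions, not part of the embodiment.
PRIVATE_ATTRIBUTES = {"name", "address", "phone_number"}

def strip_private(profile, authorized=False):
    """Drop privacy-related attributes unless the user authorized their use."""
    if authorized:
        return dict(profile)
    return {k: v for k, v in profile.items() if k not in PRIVATE_ATTRIBUTES}

profile = {
    "internal": {"gender": "F", "age": 34, "language": "en"},      # internal attributes
    "external": {"location": "Dallas", "occupation": "engineer"},  # external attributes
    "scene": "at home",        # user's present activity
    "name": "Jane Doe",        # privacy-related: ignored unless authorized
}
```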
  • FIGS. 11 a - 11 d illustrate embodiment table and matrix structures used during operation of an embodiment system.
  • FIG. 11 a illustrates the structure of the Global Ads_Play_Table.
  • Each column of the table is denoted by VS i , which indicates the video segment to be played.
  • Each column contains ad identifiers Ads ij , each of which identifies a specific ad and an accompanying category.
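A minimal sketch of the Global Ads_Play_Table, assuming a mapping from each video segment VS i to its candidate (ad identifier, category) pairs. The concrete names and the helper function are hypothetical.

```python
# Hypothetical sketch of the Global Ads_Play_Table of FIG. 11a: one
# column per video segment VS_i, each holding (ad identifier Ads_ij,
# ontological category) pairs.
global_ads_play_table = {
    "VS_1": [("Ads_11", "electronics"), ("Ads_12", "tools and auto")],
    "VS_2": [("Ads_21", "grocery"), ("Ads_22", "health and beauty")],
}

def candidates_for(table, segment):
    """Return the candidate (ad id, category) pairs for one video segment."""
    return table.get(segment, [])
```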
  • FIG. 11 b illustrates an embodiment user preference matrix having n rows denoted by video segment category VSc i and m columns denoted by advertising category Ads j .
  • Each element a ij in the user preference matrix is a vector containing a like value a ij l , a dislike value a ij d , and a history field h ij .
  • (1 ≤ i ≤ n, 1 ≤ j ≤ m) where n is the number of rows and m is the number of columns of the user preference matrix.
  • video segment category VSc i and advertising category Ads j correspond to the embodiment ontological categories described hereinabove.
  • FIG. 11 c illustrates an embodiment user profile matrix that is divided into two portions.
  • the first portion of the user profile matrix has n rows denoted by user profile items UPC i .
  • these user profile items correspond to the ontological user profile categories described hereinabove.
  • the user profile matrix has m columns denoted by advertising category Ads j .
  • Each element in the first part of the user profile matrix, denoted by a ij , contains user preference data for each (UPc i , Ads j ) pair
  • a ij l denotes the number of "like" choices
  • a ij d denotes the number of "dislike" choices
  • h ij stores a history of like or dislike based on user feedback. In an embodiment, this history is kept for a certain period of time or over a certain number of user feedback events.
  • the second portion of the user profile matrix has k rows denoted by video categories USc r .
  • these video categories correspond to the ontological video categories described hereinabove.
  • Each element in the second portion of the user profile matrix, denoted by a rj , contains user preference data for each (USc r , Ads j ) pair
  • a rj l denotes the number of "like" choices
  • a rj d denotes the number of "dislike" choices
  • h rj stores a history of like or dislike based on user feedback. In an embodiment, this history is also kept for a certain period of time or over a certain number of user feedback events.
  • FIG. 11 d illustrates an embodiment user feedback matrix.
  • the user feedback matrix contains N rows and three columns. Each row contains a video segment category VSc, an advertisement category ADc, and a user preference value chosen from 0, 1, and −1.
  • −1 indicates a negative response
  • 1 indicates a positive response
  • 0 indicates a default value and/or a neutral response.
  • different feedback values can be used and/or feedback values with more granularity.
  • these user feedback values are derived from a user feedback survey; however, in alternative embodiments, these values can be derived by other means.
  • the user preference matrix and user profile matrix are maintained for each person and/or each user.
  • the user preference matrix is used for storing this user's preference about the combinations of each video segment category and advertisement category
  • the profile matrix is used for storing the user's profile and scene preference. Both matrices are updated by using the feedback from when video content is viewed.
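One way to sketch the per-user matrices is as a mapping from (video segment category, advertisement category) pairs to cells holding the like value a ij l , the dislike value a ij d , and the history h ij . The class and field names below are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Illustrative cell of the user preference matrix of FIG. 11b: a like
# value a_ij^l, a dislike value a_ij^d, and a history field h_ij.
@dataclass
class PreferenceCell:
    likes: int = 0                                   # a_ij^l
    dislikes: int = 0                                # a_ij^d
    history: list = field(default_factory=list)      # h_ij

    def record(self, feedback):
        """Apply one feedback value: 1 for like, -1 for dislike."""
        if feedback == 1:
            self.likes += 1
        elif feedback == -1:
            self.dislikes += 1
        self.history.append(feedback)

# Matrix indexed by (video segment category VSc_i, ad category Ads_j).
preference_matrix = {("sports", "apparel and shoes"): PreferenceCell()}
```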
  • In an embodiment, a Bayesian Engine is used. One or several columns are selected from the user profile matrix according to the categories of the current segment and a user scene category.
  • the user selects the scene category during login.
  • the Bayesian Engine then uses the user profile to adjust the preference data from the preference matrix by multiplying it by a weight that is determined by the user profile matrix. The adjusted preference data is then used to compute the user preference value. Next, the ads filter uses this value to filter out unsuitable advertisements from the Global Ads_Play_Table. Once this step is done, one or more advertisements are selected based on other conditions, such as the priority of each advertisement, and those selected advertisements are sent to the user device for insertion into the video content.
  • FIG. 12 illustrates a block diagram showing an embodiment workflow of Bayesian Engine-based user preference and profile (PNP) update system 400 .
  • system 400 is used for constructing and updating user preference matrix 404 .
  • preference data 406 from preference matrix 404 is selected according to the video category.
  • Advertisement data is stored in Ads Pool 418 .
  • preference data from profile matrix 414 is selected.
  • Bayesian Engine 408 performs preference adjustment 410 and adjusts a value pair in each cell 407 of selected preference data row 406 from preference matrix 404 .
  • preference values are adjusted by multiplying either a ij l or a ij d of each preference value pair by a weight value.
  • the weight is a number between −1 and 1, and is calculated from the selected data from profile matrix 414 according to the following expression:
  • Weight = ( a ij l − a ij d ) / ( ( a ij l + 1 ) × ( a ij d + 1 ) )
  • profile weight factors from user profile matrix 414 are not used to adjust user preference value 416 directly, to prevent user profile values from having too large an influence on user preference value 416 . Therefore, in one embodiment, an average of 17 weight factors is used to adjust user preference value 416 once. Furthermore, the user scene category is used to adjust the preference values one more time, for a total of two adjustments. Alternatively, greater or fewer adjustments to the user preference value can be made depending on the system and its specifications. In an embodiment, if the weight is greater than 0, the adjustment is applied to a ij l ; otherwise, the adjustment is applied to a ij d .
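The weight expression, Weight = (a ij l − a ij d ) / ((a ij l + 1) × (a ij d + 1)), can be sketched as follows. The `apply_weight` helper reflects one plausible reading of applying the adjustment to a ij l for positive weights and to a ij d otherwise; the multiplicative form is an assumption, not a formula from the disclosure.

```python
# The embodiment weight expression; the result always lies in (-1, 1).
def profile_weight(likes, dislikes):
    """Weight = (a_ij^l - a_ij^d) / ((a_ij^l + 1) * (a_ij^d + 1))."""
    return (likes - dislikes) / ((likes + 1) * (dislikes + 1))

def apply_weight(likes, dislikes, weight):
    """Apply the adjustment to a_ij^l when the weight is positive,
    otherwise to a_ij^d. The multiplicative form here is an assumed
    reading of the description, not a disclosed formula."""
    if weight > 0:
        return likes * (1 + weight), dislikes
    return likes, dislikes * (1 - weight)
```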
  • Bayesian Engine 408 calculates preference value according to:
  • the weight adjustment is applied before the calculation of preference value to ensure linearity of the preference value between −1 and 1.
  • the weight adjustment can be applied after the preference value calculation and scaling applied afterward.
  • both the preference matrix and the user ontology or user profile matrix are updated according to the user's feedback. If the user's feedback indicates a positive response, then the corresponding cell in the preference matrix is updated according to a ij l +1. In one embodiment, an entire column of the user profile category in the first section of the user profile matrix, and one cell of the corresponding scene category in the second section of the user profile matrix, are updated with a ij l +1. On the other hand, if the feedback is negative, the corresponding cells in the preference matrix are updated according to a ij d +1, and the entire column of the profile category plus one cell of the corresponding scene category are updated according to a ij d +1.
  • the feedback is appended at the end of the array h ij in each corresponding cell.
  • update equation a ij l +1 can be replaced by other update rules; for example, instead of incrementing by 1, the cell can be incremented by another specified constant or by non-constant values.
  • some of the fields in the user profile are not used and/or not updated. Such unused fields, however, can be used and/or updated in a future version of the system depending on the specific embodiment and its specifications.
  • a sliding window is used to store the value of a ij l and a ij d in each cell.
  • the sliding window prevents new incoming feedback values from having a disproportionate effect on the calculation of the preference value, and prevents the values of a ij l and a ij d from growing too large due to accumulation.
  • each time a feedback value arrives, either a 1 or a −1 is appended to the first available cell in the sliding window. If the sliding window is full, the first element is removed. By doing this, the total count in each cell of the preference matrix and ontology-category matrix does not exceed a certain number.
  • the structure of sliding window is shown in FIG. 13 .
  • a ij l and a ij d track the number of 1 and −1 values in the sliding window so that these counts are available each time the cell is accessed. In some embodiments, the number of 1 and −1 values does not need to be counted in real time, so each time a new feedback value arrives, either a ij l +1 is applied if the feedback value is 1, or a ij d +1 is applied if the feedback value is −1. If the sliding window is full, the leftmost element is removed, in one embodiment, and either a ij l −1 or a ij d −1 is applied depending on the removed value. In an embodiment, the window size also determines how much history is tracked for a particular video and advertisement combination.
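The sliding-window bookkeeping above can be sketched as follows, keeping the counters a ij l and a ij d consistent with the window contents as elements arrive and are evicted. The class and method names are illustrative.

```python
from collections import deque

# Illustrative sliding feedback window (FIG. 13): caps the total count
# per cell and keeps the a_ij^l / a_ij^d counters in step with the
# window contents.
class SlidingFeedbackWindow:
    def __init__(self, size):
        self.window = deque(maxlen=size)
        self.likes = 0      # a_ij^l: number of +1 values in the window
        self.dislikes = 0   # a_ij^d: number of -1 values in the window

    def add(self, feedback):
        """Append a feedback value (1 or -1), evicting the leftmost
        element first when the window is full."""
        if len(self.window) == self.window.maxlen:
            removed = self.window.popleft()
            if removed == 1:
                self.likes -= 1     # a_ij^l - 1
            else:
                self.dislikes -= 1  # a_ij^d - 1
        self.window.append(feedback)
        if feedback == 1:
            self.likes += 1         # a_ij^l + 1
        else:
            self.dislikes += 1      # a_ij^d + 1
```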
  • FIG. 14 illustrates Bayesian network model 510 according to an embodiment of the present invention.
  • the Bayesian network is constructed and applied to user preference model 514 .
  • Global Ads_Play_Table 516 is filtered according to user preference model 514 to produce Filtered Ads_Play_Table 520 .
  • filtering operation 518 discards advertisements that are likely not preferred by the customer according to the output of the Bayesian model.
  • a user survey is administered in step 522 to provide feedback 524 .
  • survey step 522 is composed of playing a video segment, an accompanying advertisement, and a preference choice.
  • the survey is given to a user or customer to capture their preference after the whole video is shown to the user or customer.
  • the user preference model is updated according to feedback 524 in step 526 .
  • FIG. 15 illustrates flow diagram 500 of the construction of an embodiment Bayesian network based on user preference model.
  • the Bayesian network calculates the preference probability of a certain video segment and advertisement combination.
  • three nodes are used.
  • Node V 502 denotes a video segment
  • node Ads 506 denotes an Advertisement
  • node Pref 504 denotes a user preference (like or dislike), and the probability distribution tables defined below are constructed temporally during the operation of the Bayesian model.
  • FIG. 16 illustrates pseudo code corresponding to embodiment algorithm for the construction of a Bayesian model.
  • the first table is a V-Ads table (P(Ads|V)) that denotes the probability that a particular advertisement is selected to accompany a given video segment.
  • this parameter is calculated, for example, according to the amount of financial support from the advertiser. It is assumed that there are n Ads in a candidate set of a video segment, and the financial sponsorship is listed as (f 1 , f 2 , . . . , f n ), which is also used as the header of the V-Ads table.
  • the second embodiment table is a Pref-V-Ads table (P(Pref|Ads,V)) that denotes the probability that a user prefers a particular video segment and advertisement combination. This parameter is calculated according to the feedback from the user.
  • the header of the Pref-V-Ads table is (video segment, advertisement, like, dislike), and the conditional probability is calculated by:
  • the third embodiment table is a V table (P(V)) that denotes the probability that a certain video segment was displayed according to the general user preferences, which may be obtained from popular video websites, such as YouTube.com and Hulu.com, in some embodiments.
  • the probability P(V,Ads,Pref) = P(Pref|Ads,V) × P(Ads|V) × P(V) is calculated using a Bayesian formula.
  • user preference probability is predicted according to:
  • advertisements are filtered according to the output of this model.
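As an illustrative sketch of the filtering step, P(Pref = like | V, Ads) can be estimated from the like/dislike feedback counts and used to drop low-preference advertisements from a candidate list. The add-one smoothing and the 0.5 threshold are assumptions here, not taken from the disclosure.

```python
# Illustrative Bayesian filtering sketch: P(Pref = like | V, Ads) is
# estimated from like/dislike counts with add-one smoothing (an
# assumption), and low-preference ads are dropped. The threshold value
# is likewise hypothetical.
def pref_like_probability(likes, dislikes):
    """Estimate P(Pref = like | V, Ads) from feedback counts."""
    return (likes + 1) / (likes + dislikes + 2)

def bayesian_filter(candidates, feedback_counts, threshold=0.5):
    """Keep only ads whose predicted like-probability meets the threshold."""
    kept = []
    for ad in candidates:
        likes, dislikes = feedback_counts.get(ad, (0, 0))
        if pref_like_probability(likes, dislikes) >= threshold:
            kept.append(ad)
    return kept
```

An advertisement with no feedback history defaults to probability 0.5 under this smoothing, so it survives the default threshold rather than being filtered out before the user has had a chance to react to it.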
  • FIG. 17 illustrates pseudo code corresponding to an algorithm for an embodiment Bayesian Filtering Algorithm
  • FIG. 18 illustrates pseudo code corresponding to an algorithm for an embodiment Bayesian model updating algorithm.
  • FIG. 19 illustrates computer system 600 adapted to use embodiments of the present invention, e.g., storing and/or executing software associated with the embodiments.
  • Central processing unit (CPU) 601 is coupled to system bus 602 .
  • CPU 601 may be any general purpose CPU. However, embodiments of the present invention are not restricted by the architecture of CPU 601 as long as CPU 601 supports the inventive operations as described herein.
  • Bus 602 is coupled to random access memory (RAM) 603 , which may be SRAM, DRAM, or SDRAM.
  • Read-only memory (ROM) 604 is also coupled to bus 602 ; ROM 604 may be PROM, EPROM, or EEPROM.
  • RAM 603 and ROM 604 hold user and system data and programs as is well known in the art.
  • Bus 602 is also coupled to input/output (I/O) adapter 605 , communications adapter 611 , user interface 608 , and display adaptor 609 .
  • the I/O adapter 605 connects storage devices 606 , such as one or more of a hard drive, a CD drive, a floppy disk drive, or a tape drive, to computer system 600 .
  • Communications adapter 611 is configured to interface with network 612
  • the I/O adapter 605 is also connected to a printer (not shown), which allows the system to print paper copies of information such as documents, photographs, articles, and the like. Note that the printer may be, e.g., a dot matrix or laser printer, a fax machine, a scanner, or a copier machine.
  • User interface adapter 608 is coupled to keyboard 613 and mouse 607 , as well as other devices.
  • Display adapter 609 , which can be a display card in some embodiments, is connected to display device 610 .
  • Display device 610 can be a CRT, flat panel display, or other type of display device.
  • system 600 can correspond to a server at the service provider, a server with the video provider, or a user device.
  • Advantages of embodiments include an ability to provide scene specific targeted advertisement capability that takes into account specific video content and user preferences.
  • a further advantage of some embodiments includes enabling advertisements to be received by different users and different user terminals, in different time frames, and in different locations according to the specific context and semantic information, and according to each user's preference and profile.
  • An advantage of some embodiments includes the ability to quickly and efficiently determine advertisements for video insertion for a particular user. Furthermore, in some embodiments, the user preferences are updated to track a user's changing preferences.
  • Advantages of embodiments that employ ontologically based categorization methods include the ability to quickly match advertisements to specific video segments, as well as to quickly and efficiently filter a candidate list of advertisements according to a specific user's preferences.
  • Advantages of embodiments that process video metadata, rather than raw video content, include efficient transmission and reception of advertising lists. Furthermore, in embodiments that process advertisement lists separately from the advertisements themselves, the need for a user device to download a large amount of data devoted to potentially unwatched videos is alleviated.

Abstract

In accordance with an embodiment, a method of inserting advertisements into video content includes electronically filtering a first list of advertisements according to user preference data to determine a second list of advertisements. The video content has a plurality of segments, each segment of which is associated with a category from the plurality of categories. Furthermore, each advertisement in the first list of advertisements is associated with a video category from a plurality of categories, and electronically filtering includes filtering the first list of advertisements for the plurality of video segments on a segment by segment basis. The method further includes transmitting the second list of advertisements to a user device for insertion with the video content.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This patent application claims priority to U.S. Provisional Application No. 61/299,223 filed on Jan. 28, 2010, entitled "System and Method for Target Advertisement," which application is incorporated by reference herein in its entirety. This patent application further relates to the following co-pending and commonly assigned U.S. patent applications: Ser. No. ______, filed on ______, entitled "System and Method for Matching Targeted Advertisements for Video Content Delivery," and Ser. No. ______, filed on ______, entitled "System and Method for Targeted Advertisements for Video Content Delivery," which applications are hereby incorporated herein by reference in their entirety.
  • TECHNICAL FIELD
  • The present invention relates generally to data communication systems, and more particularly to a system and method for filtering targeted advertisements for video content delivery.
  • BACKGROUND
  • As video content delivery has progressed from a mass broadcast model to a more personalized narrowcast model, the modes and methods of advertising have changed accordingly. In the past, advertisers had to rely on demographic studies to determine the makeup of their advertising audience before committing large sums of money to mass broadcast and print advertising.
  • Already, advertisers can target their desired demographic on the web by placing ads according to user search terms and web browsing history. For example, if a user performs a search for "luxury automobiles" on a web-based search engine, the search engine will often return advertisements from luxury automobile manufacturers and dealers.
  • With respect to video programming, some resources for targeted advertising also exist. As television channels and programming become more localized, advertisers can target their potential demographic based on program content. For example, an advertisement for a local automobile dealer can be inserted into a cable television show about automobiles at a local CATV head end. Furthermore, video-on-demand (VOD) services available at the set-top box from cable and telephony service providers, and video services available directly on the Internet, have brought with them the possibility for advertisers to directly target their desired demographic using targeted advertisements for video content. According to some market studies, targeted advertisements will account for between 40% and 60% of the total revenue for Internet protocol television (IPTV) and other Internet based video services.
  • SUMMARY OF THE INVENTION
  • In accordance with an embodiment, a method of inserting advertisements into video content includes electronically filtering a first list of advertisements according to user preference data to determine a second list of advertisements. The video content has a plurality of segments, each segment of which is associated with a category from the plurality of categories. Furthermore, each advertisement in the first list of advertisements is associated with a video category from a plurality of categories, and electronically filtering includes filtering the first list of advertisements for the plurality of video segments on a segment by segment basis. The method further includes transmitting the second list of advertisements to a user device for insertion with the video content.
  • In accordance with a further embodiment, a method of inserting advertisements into video content includes electronically filtering a first list of advertisements according to user preference data to determine a second list of advertisements. Each advertisement in the first list of advertisements comprises a category from a plurality of categories, and the user preference data includes a plurality of preference fields indexed by the plurality of categories. In an embodiment, electronically filtering includes updating the user preference data according to a user profile matrix that includes a plurality of attributes according to a hierarchical user ontology. The method further includes transmitting the second list of advertisements to a user device for insertion with the video content.
  • In accordance with a further embodiment, a system for inserting advertisements into video content includes a filtering block configured to filter a first list of advertisements according to user preference data to determine a second list of advertisements. Each advertisement in the first list of advertisements is associated with a video category from a plurality of categories. The video content includes a plurality of video segments, and each video segment is associated with a category from the plurality of categories. In an embodiment, the filtering block filters the first list of advertisements on a segment by segment basis.
  • In accordance with a further embodiment, a non-transitory computer readable medium has an executable program stored thereon. The program instructs a microprocessor to filter a first list of advertisements according to user preference data to determine a second list of advertisements. Each advertisement in the first list of advertisements is associated with a video category from a plurality of categories. Furthermore, the video content includes a plurality of video segments, and each video segment is associated with a category from the plurality of categories. The filtering block filters the first list of advertisements on a segment by segment basis.
  • The foregoing has outlined rather broadly the features of an embodiment of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of embodiments of the invention will be described hereinafter, which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and specific embodiments disclosed may be readily utilized as a basis for modifying or designing other structures or processes for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a more complete understanding of the embodiments, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
  • FIG. 1 illustrates an embodiment video transmission and advertisement insertion system;
  • FIG. 2 illustrates an embodiment metadata matching system;
  • FIG. 3 illustrates an embodiment advertisement filtering system;
  • FIG. 4 illustrates an embodiment advertisement insertion system;
  • FIG. 5 illustrates an embodiment advertisement insertion example;
  • FIG. 6 illustrates a flow chart of an embodiment advertisement determination and insertion method;
  • FIG. 7 illustrates an embodiment 5-layer advertisement determination system structure;
  • FIG. 8 illustrates an ontological structure according to the prior art;
  • FIGS. 9 a and 9 b illustrate an embodiment category structure;
  • FIGS. 10 a and 10 b illustrate an embodiment user profile structure;
  • FIGS. 11 a-11 d illustrate embodiment table and matrix structures;
  • FIG. 12 illustrates an embodiment PNP update system;
  • FIG. 13 illustrates an embodiment sliding history window;
  • FIG. 14 illustrates an embodiment Bayesian network model;
  • FIG. 15 illustrates a flow diagram of the construction of an embodiment Bayesian network;
  • FIG. 16 illustrates an embodiment Bayesian network construction algorithm;
  • FIG. 17 illustrates an embodiment Bayesian filtering algorithm;
  • FIG. 18 illustrates an embodiment Bayesian model updating algorithm; and
  • FIG. 19 illustrates an embodiment computer system that implements embodiment algorithms.
  • DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
  • The making and using of the embodiments are discussed in detail below. It should be appreciated, however, that the present invention provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed are merely illustrative of specific ways to make and use the invention, and do not limit the scope of the invention.
  • The present invention will be described with respect to various embodiments in a specific context, a system and method for inserting advertisements into video content. Embodiments of the invention may also be applied to other applications that require advertising insertion or applications that match content to user preferences or profiles.
  • Embodiments of the invention address providing content based targeted advertisement capability. In some embodiments, it is assumed that metadata relating to both the video/IPTV program and the advertisements is available or generated using metadata techniques known in the art.
  • Embodiments of the present invention match advertisements to video content and/or IPTV programming. Some embodiments further customize advertisement matching by learning a user's preference, profile and scene information. Such user profile information can include categories and ontology that include user internal information such as age and gender, user external information such as family status, occupation and education, and user scene information such as whether the user is with his or her family or alone at home, on a vacation or on a business trip. Embodiments assume that, at some point, the user's profile information will affect the user's preference for particular types of video content and video content/advertisement combinations. For example, if a user is on vacation with his family, the user may be more receptive to advertisements directed toward restaurants, attractions and discounts than toward more business related advertisements, such as advertisements directed toward business staffing firms. On the other hand, if a user is on a business trip, the user may be more receptive to business staffing firm advertisements.
  • Besides user scene information, embodiments use information such as age, gender, family status, occupation and education to initialize a new user preference matrix. In some embodiments, such an initialization accelerates the learning of user preference and provides better selection of advertisements for viewers.
  • FIG. 1 illustrates IPTV transmission system 100 according to an embodiment of the present invention, which includes IPTV provider and metadata server 102, service provider 104 and user device 106. User device 106 provides video content and advertising content for user 108. IPTV provider 102 provides video programming to user device 106 and provides video content metadata that describes the video programming to service provider 104. In one embodiment, the metadata description is in a TVAnytime format, which is a format developed by the TV Anytime Forum. Alternatively, other metadata formats can be used, for example, MPEG-7 and MPEG-21. In embodiments, the video programming sent to user device 106 can be in any video format, such as an MPG or an AVI format. In an embodiment, service provider 104 provides advertisements and advertisement metadata associated with the advertisements to user device 106. Alternatively, user device 106 can receive advertisements from another source.
  • In a further embodiment, service provider 104 has advertising provider 114 and advertising matching service 116. Advertising provider 114 provides advertisement content and advertisement metadata, and advertising matching service 116 provides computation capability for advertisement metadata. In one embodiment, advertising matching service 116 includes metadata matching block 110 and ads filtering block 112. In one embodiment, metadata matching block 110 matches advertisement metadata to video content metadata, and ads filtering block 112 filters the matched metadata according to a user preference model. For example, matching service 116 selects related advertisements for a given IPTV program via a metadata matching algorithm and generates a global ad play table. Matching service 116 then filters the global ad play table based on user preference data.
  • In an embodiment, service provider 104 also provides computation capability to match metadata associated with advertisements with metadata associated with video programming, and filter the matched metadata according to a user profile. In an embodiment, advertisement metadata is generated based on an embodiment advertisement metadata schema. In alternative embodiments, the service provider can receive advertising metadata from another source or service. In further embodiments, the processor for matching metadata can be separate from the computing resources that store, process and transmit the actual advertising content. Furthermore, the computation resources or server that performs metadata matching 110 can be separate from the computation resources or server that performs ads filtering 112.
  • In an embodiment, user device 106 receives video programming from IPTV provider 102 and a list of filtered advertisements from service provider 104. In some embodiments, user device 106 further provides requests, such as IPTV requests, and feedback data from user 108 with respect to the provided advertisements. In an embodiment, user device 106 includes video reception equipment such as, but not limited to, a computer, a high-definition television (HDTV), a set-top box, a hand-held video device, and the like.
  • FIG. 2 illustrates a block diagram of embodiment metadata matching function 150. In an embodiment, a matching function selects content related advertisements by matching advertisement metadata 154 against IPTV or video program metadata 152 of a given IPTV or video program to generate Global Ads_Play_Table 156. Global Ads_Play_Table 156 includes fields such as VideoId, VideoSegmentId, VideoSegmentTime and RelatedAdsIdList. In further embodiments, greater, fewer or different fields can be used. In one embodiment, VideoId represents an ID of a video, VideoSegmentId represents an ID of a video segment in the video, VideoSegmentTime represents the time stamp of the video segment, and RelatedAdsIdList represents a list of ads related to the video.
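  • The row layout described above can be sketched as a small data structure. This is a minimal illustration: only the field names come from the description; the class name, types and example values are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AdsPlayTableRow:
    """One row of the Global Ads_Play_Table; field names follow the text above."""
    VideoId: str
    VideoSegmentId: str
    VideoSegmentTime: float              # time stamp of the segment (unit assumed to be seconds)
    RelatedAdsIdList: List[str] = field(default_factory=list)

# A Global Ads_Play_Table is then simply a list of such rows:
table = [
    AdsPlayTableRow("video-1", "seg-1", 0.0, ["ad-17", "ad-42"]),
    AdsPlayTableRow("video-1", "seg-2", 30.0, ["ad-42"]),
]
```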
  • FIG. 3 illustrates a block diagram of embodiment filtering block 160, which filters Global Ads_Play_Table 156 according to user preference data to produce Filtered Ads_Play_Table 162. In an embodiment, Filtered Ads_Play_Table 162 describes the subset of Global Ads_Play_Table 156 that most closely matches the user preference data. In an embodiment, service provider 104 sends the Filtered Ads_Play_Table to user device 106 to specify which ads are inserted into the video content played by user device 106. In addition, in some embodiments, user device 106 provides user profile and/or user preference data to service provider 104 for use in ad filtering. In some embodiments, the user's preference data can be learned, for example, with the help of the user's feedback, or can be manually specified at the user device.
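  • The filtering step can be sketched as follows. The per-ad preference scores and the retention threshold are illustrative assumptions; the description only requires that ads matching a low user preference be removed while high-preference ads are retained.

```python
def filter_ads_play_table(global_table, preference, threshold=0.5):
    """Keep, per video segment, only the ads whose preference score
    meets the threshold; drop segments left with no ads."""
    filtered = []
    for row in global_table:
        kept = [ad for ad in row["RelatedAdsIdList"]
                if preference.get(ad, 0.0) >= threshold]
        if kept:
            filtered.append({**row, "RelatedAdsIdList": kept})
    return filtered

global_table = [
    {"VideoSegmentId": "s1", "RelatedAdsIdList": ["a1", "a2", "a3"]},
]
prefs = {"a1": 0.9, "a2": 0.1, "a3": 0.7}
filtered_table = filter_ads_play_table(global_table, prefs)  # "a2" is filtered out
```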
  • FIG. 4 illustrates a block diagram of embodiment ad insertion method 170. After user device 106 receives Filtered Ads_Play_Table 162 and video content from IPTV provider 102, user device 106 inserts advertisements 174 within video content 172. In one embodiment, the Filtered Ads_Play_Table is used by user device 106 to pop up advertisements during playback of the IPTV program by using advertisement locating information. For example, FIG. 5 illustrates video content 180 having advertisement 182 inserted in the lower right hand corner. In an embodiment, advertisement locating information is information for locating the advertisements in a video or a video frame.
  • FIG. 6 illustrates a block diagram of embodiment ad insertion method 200. In step 202, the video program provider sends the video program's metadata (IPTV metadata) to the matching service. In an embodiment, the video program provider is an IPTV provider. Alternatively, other types of video providers, such as Web TV service providers, can be used. In step 203, the advertising provider sends the advertisement metadata to the matching service. In some embodiments, steps 202 and 203 are performed at the same time.
  • In step 204, the matching service uses the video metadata to search against the ads metadata based on several similarity criteria to generate the Global Ads_Play_Table. In one embodiment, two kinds of similarity are used: global similarity and local or shot similarity. Global similarity means that the chosen ads metadata matches the video metadata at a global level. For example, the leading role of the video content is the same person as the role in the advertisement, or the topics of the video content and the advertisement content are similar (e.g., both about Christmas). Local similarity means that the matched ads metadata matches the video metadata in one segment. For example, if one segment of the video content is related to or shows a particular location of a chain of home products stores, an advertisement for the chain of home products stores can be matched with the corresponding video segment. In some embodiments, advertisements selected based on local similarity are assigned a particular "popup-time" based on the length of the video shot. For advertisements selected based on global similarity, however, a "popup-time" is not defined, and can be decided by other factors in some embodiments. Alternatively, a "popup-time" can also be defined for advertisements selected based on global similarity.
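  • The two similarity scopes can be sketched as follows, with metadata reduced to keyword sets for illustration. The rule that only local matches receive a popup-time follows the description above; the field names and the set-intersection test are assumptions.

```python
def match_ads(video_meta, segments, ads_pool):
    """Match ads at a global (whole-video) and a local (per-shot) level."""
    matches = []
    for ad in ads_pool:
        if ad["keywords"] & video_meta["keywords"]:
            # Global similarity: no popup-time is assigned at this stage.
            matches.append({"ad": ad["id"], "scope": "global", "popup_time": None})
        for seg in segments:
            if ad["keywords"] & seg["keywords"]:
                # Local similarity: the popup-time comes from the matched shot.
                matches.append({"ad": ad["id"], "scope": "local",
                                "popup_time": seg["start"]})
    return matches

video_meta = {"keywords": {"christmas"}}
segments = [{"id": "s1", "start": 12.0, "keywords": {"home", "store"}}]
ads_pool = [{"id": "adA", "keywords": {"christmas", "gifts"}},
            {"id": "adB", "keywords": {"store"}}]
matches = match_ads(video_meta, segments, ads_pool)
```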
  • In step 206, the matching service sends video content data to the user device. For example, when a user watches a video program, the video program provider sends the video program content data as well as metadata that describe the video content to the consumer's user device. In step 208, the user device sends the user's preference and profile data to the matching service. In some embodiments, steps 206 and 208 can be performed at the same time.
  • In step 210, the matching service filters the matched advertisement list according to the user's preference and profile data. In one embodiment, the user preference data is used to filter out advertisements from the Global Ads_Play_Table to generate a Filtered Ads_Play_Table. In one embodiment, the removed advertisements are those matching a low user interest according to the user preference and profile data. In some embodiments, the advertisements corresponding to a high user preference are retained. In one embodiment, the Global Ads_Play_Table is saved at the server side to be used by other consumers, and the Filtered Ads_Play_Table is transmitted to the consumer's user device to help play the advertisements in step 212.
  • In step 214, based on the returned Filtered Ads_Play_Table, the consumer's user device sends a request to the advertising provider to retrieve advertisements. In an embodiment, these advertisements are played based on the time slots specified in the Filtered Ads_Play_Table. In step 216, the advertising provider sends the advertisements to the user device in response to the request. In step 218, the advertisements are inserted with the video content on the user device. In some embodiments, the advertisements are displayed using different ads insertion schemes. In some embodiments, the insertion schemes selected are those that maximize user experience, according to user feedback as well as the advertiser's monetary goals. For example, the ads can be put on the bottom of a video frame or can be inserted as whole frames after video frames.
  • In step 220, in some embodiments, a survey regarding the advertisements that were played in the video content is displayed on the user device when the video program is completed. Alternatively, the user survey is displayed at other times, for example, after several videos are viewed. Here, the user provides feedback about the combinations of the video and advertisements. In step 222, an embodiment learning mechanism updates the user's preference data based on the feedback. In one embodiment, the feedback is in the format of a list of Ads_Feedback_Tuple, in which each entry of Ads_Feedback_Tuple includes three feedback elements: int shotId, int AdsId, and enum remark. In an embodiment, int shotId represents the ID of a shot, int AdsId represents the ID of an ad segment, and enum remark represents a number indicating a preference. Alternatively, other feedback fields can be used, for example, a field specifying a user's location.
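  • The feedback tuple and a toy version of the preference update can be sketched as follows. Only the three field names come from the text; the numeric remark scale and the additive update rule are illustrative assumptions standing in for the embodiment learning mechanism.

```python
from dataclasses import dataclass
from enum import IntEnum

class Remark(IntEnum):
    """Numeric preference remark (illustrative scale)."""
    DISLIKE = -1
    NEUTRAL = 0
    LIKE = 1

@dataclass
class AdsFeedbackTuple:
    shotId: int
    AdsId: int
    remark: Remark

def update_preferences(preferences, feedback):
    """Accumulate remark values per ad id as a stand-in learning mechanism."""
    for fb in feedback:
        preferences[fb.AdsId] = preferences.get(fb.AdsId, 0) + int(fb.remark)
    return preferences

feedback = [AdsFeedbackTuple(1, 7, Remark.LIKE),
            AdsFeedbackTuple(2, 7, Remark.LIKE),
            AdsFeedbackTuple(1, 9, Remark.DISLIKE)]
prefs = update_preferences({}, feedback)
```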
  • FIG. 7 illustrates embodiment 5-layer system structure 300. The first layer is data source layer 302, which includes metadata file 312, ads_play_table 314, user preference data 316 and video file 318. Metadata manager layer 304 performs the function of storing and managing metadata, and includes IPTV metadata 320, ads metadata 322 and preference model 324. Algorithm layer 306 includes matching algorithm 326, preference learning 328 and ads filtering algorithm 330. Media layer 308 includes ads player 332 and media player 334. Finally, graphical user interface (GUI) 310 includes TV MD panel 336 that shows the video metadata, advertisement MD panel 338 that shows the ad metadata, adsPlay panel 340 that plays the ad segment, control panel 342 that performs certain control functions, such as stop or fast forward, and media player panel 344 that plays the video.
  • In an embodiment, matching algorithm 326 finds matches between IPTV metadata 320 and ads metadata 322. This module deals with metadata matching in order to find content related advertisements at a metadata level. In one embodiment, the input is a TV metadata segment that describes TV content and a list of advertisement metadata segments that describe the advertisements. Alternatively, other objects and formats can be used. In an embodiment, advertising matching can be specified as having a first input as a video segment metadata instance (VIDEO_METADATA) and a second input as an advertisement metadata instance (ADS_METADATA).
  • In an embodiment, a preference and profile (PNP) model is used as a filtering mechanism. Here, the Global Ads_Play_Table initially includes all content related advertisements for one particular video segment; then, in a next step, PNP-irrelevant advertisements (i.e., the ads that are not relevant to the video content and user PNPs) are filtered out by the preference matrix. Under this filtering mechanism, in one embodiment, the Global Ads_Play_Table initially includes all possible content related advertisements for one particular video segment. Alternatively, the Global Ads_Play_Table can be initialized with a smaller set of initial advertisements depending on the system and its specifications.
  • In an embodiment, both the video metadata and the advertisement metadata have a keyword and synopsis description, and two methods are used to match advertisements to one scene or shot. A first matching method is keyword matching, in which a keyword in the video content metadata is matched with a keyword in the advertisement metadata. In one example, given a video segment instance (Video.VIDEO_METADATA) and an advertisement pool in which each advertisement has metadata (Ads.ADS_METADATA), keywords are matched according to the following pseudo code:
  • VideoKeywordList = getVideoKeywords(Video.VIDEO_METADATA);
    Foreach Ads in Advertisement_Pool:
     AdsKeywordList=getAdsKeywords(Ads.ADS_METADATA);
     Foreach keyword in AdsKeywordList:
        If VideoKeywordList.contains(keyword):
          Associate(Video.VIDEO_METADATA, Ads);
          Break.

    The functions getVideoKeywords and getAdsKeywords extract keywords from the video segment metadata file and advertisement metadata file by analyzing related nodes (for example “tva:Synopsis”, “tva:Keyword”, “AdvertisementKeyword” and “AdvertisementCategory”). Here, tva:Synopsis is a synopsis of the video content that may contain a phrase and/or one or more sentences, tva:Keyword is a keyword that corresponds to the video content, AdvertisementKeyword is a keyword that corresponds to advertising content and AdvertisementCategory is a category corresponding to the advertising content.
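  • The pseudo code above can be rendered as runnable Python. Metadata is modeled here as plain dictionaries rather than parsed TVAnytime nodes, which is an assumption made for brevity.

```python
def get_video_keywords(video_metadata):
    # In practice parsed from nodes such as tva:Keyword and tva:Synopsis.
    return set(video_metadata.get("keywords", []))

def get_ads_keywords(ads_metadata):
    # In practice parsed from AdvertisementKeyword / AdvertisementCategory.
    return set(ads_metadata.get("keywords", []))

def associate_ads(video_metadata, advertisement_pool):
    """Associate every ad that shares at least one keyword with the video."""
    video_keywords = get_video_keywords(video_metadata)
    associated = []
    for ads in advertisement_pool:
        if get_ads_keywords(ads["metadata"]) & video_keywords:
            associated.append(ads["id"])
    return associated

video_metadata = {"keywords": ["christmas", "family"]}
pool = [{"id": "ad1", "metadata": {"keywords": ["christmas"]}},
        {"id": "ad2", "metadata": {"keywords": ["cars"]}}]
related = associate_ads(video_metadata, pool)
```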
  • In an embodiment, an ontological matching strategy is used to match the video metadata to the advertising metadata. Using an ontological matching strategy reduces noise and the possibility of mismatch. For example, when using a synopsis having a value of one or more sentences, an ontological strategy can help identify pertinent keywords. In an embodiment ontological strategy, singular and plural words are treated as similar keywords. For example, the words "family" and "families" are treated as the same keyword in one embodiment. Furthermore, in one embodiment, morphologically similar words are treated similarly; for example, the words "politics" and "politician" are treated as the same keyword in one embodiment. In one embodiment, synonyms are treated as the same keyword. Furthermore, if the matching criteria are further relaxed, word pairs such as "dog and cat" and "theater and bar" are treated as the same keyword. In one embodiment, the ontological matching strategy is implemented using a lexical database, for example, a WordNet database, also known as WordNet Boost.
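  • A very small stand-in for such a strategy is sketched below. The plural rules and the one-entry synonym table are crude illustrative assumptions; a real system would consult a lexical database such as WordNet instead.

```python
SYNONYMS = {"film": "movie"}  # illustrative single entry

def normalize(word):
    """Crude keyword normalization: lower-case, reduce a plural form,
    then map through a synonym table."""
    w = word.lower()
    if w.endswith("ies"):
        w = w[:-3] + "y"      # "families" -> "family"
    elif w.endswith("s") and len(w) > 3:
        w = w[:-1]            # "gifts" -> "gift"
    return SYNONYMS.get(w, w)

def ontological_match(video_keywords, ads_keywords):
    """Intersect the two keyword sets after normalization."""
    return ({normalize(v) for v in video_keywords}
            & {normalize(a) for a in ads_keywords})
```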
  • In one synopsis matching embodiment, the matching algorithm is based on the tva:Keyword of the video shot metadata and the AdvertisementKeyword of the advertisement metadata. In a further embodiment, the matching algorithm also uses the tva:Synopsis of the video shot metadata, the AdvertisementDescription of the advertisement metadata, or both fields. Furthermore, a natural language processing method can be used to find key grammar functional units within a description sentence of tva:Synopsis and AdvertisementDescription. After these grammar functional units are found, for example, by using existing grammar parsing software tools, another keyword matching between the key grammar functional units is performed to find matching pairs of video shot metadata and advertisement metadata.
  • In an embodiment, video content and advertising content are categorized according to embodiment ontological techniques, using, for example, an Upper Ontology to ensure that the video content metadata and the advertising metadata are covered by one or several categories. In some embodiments, once the categories are established, no further changes are made to the defined categories. Alternatively, a flexible categorization scheme can be used in which categories are updateable.
  • FIG. 8 illustrates a hierarchy of top-level categories according to the book Knowledge Representation: Logical, Philosophical, and Computational Foundations, by John F. Sowa, Brooks Cole (Pacific Grove, Calif., 2000), described at http://www.jfsowa.com/ontology/toplevel.htm. In one embodiment, categories are derived by combining the top levels of FIG. 8 with Basic Formal Ontology (BFO), which was developed and formulated by Barry Smith and Pierre Grenon and is described online at http://www.ifomis.org/bfo. In FIG. 8, the first level is Things, which includes everything in the world. The category of Things is further divided into seven Level 2 categories: Independent, Physical, Relative, Abstract, Mediating, Continuant and Occurrent. These seven Level 2 categories are further divided, through a middle level, into a level containing the Object category and eleven other categories. These 12 categories are also referred to as central categories. Table 1 shows how the 12 central categories are derived from the Level 2 categories according to Knowledge Representation.
  • TABLE 1
    Matrix of the twelve central categories

                 Physical                      Abstract
                 Continuant   Occurrent       Continuant    Occurrent
    Independent  Object       Process         Schema        Script
    Relative     Juncture     Participation   Description   History
    Mediating    Structure    Situation       Reason        Purpose
  • In an embodiment, these 12 central categories are adjusted to be suitable for video clip and advertisement categories. In particular, some of the twelve categories, for example, structure, situation, object, history, process, description and purpose, are divided into subcategories. Other categories, in some embodiments, remain in their original form. Alternatively, different groupings of categories can be subdivided depending on the particular application and its specifications. In one embodiment, the object and participation categories are combined because they have similar or the same sub-categories in a video database. A description of one embodiment category structure is illustrated in FIGS. 9 a and 9 b. FIG. 9 a illustrates the top-level categories juncture, structure, script, situation, object, schema, history, process, description, purpose and reason and their subcategories, if applicable. FIG. 9 b illustrates further subcategories of "artificial inanimate object," which, itself, is a subcategory of the object category.
  • In an embodiment, the artificial inanimate object subcategory, as shown in FIG. 9 b, includes a movie, music and games subcategory that pertains to movies, video, music, television, games, and related objects and products. The books and magazine subcategory pertains to books, newspapers, magazines and digital publications, the computer subcategory pertains to, for example, computer hardware, software, PC games and peripheral devices, and the electronics subcategory includes, for example, consumer electronic devices such as cameras, televisions, and the like. The embodiment home and garden subcategory pertains to home and garden products, for example, furniture, and the grocery subcategory pertains to groceries such as food and wine. The embodiment health and beauty subcategory includes, for example, medicine, natural and organic foods, and beauty products, and the embodiment toys, children and baby subcategory covers toys and baby products including, but not limited to, food and clothing. The embodiment apparel and shoes subcategory pertains to clothes, shoes and accessories, for example, and the embodiment sports and outdoor subcategory pertains to, for example, sports products and products for outdoor activities.
  • In an embodiment, the tools and auto subcategory relates to objects such as power tools, hand tools, equipment, automobiles and related products. The embodiment jewelry and watch subcategory pertains to jewelry and watches, and the embodiment travel subcategory covers travel related objects such as hotels and travel products. The embodiment arts subcategory relates to art related objects such as painting and sculptures. Finally, the other artificial inanimate objects subcategory pertains to objects that do not fit into the artificial inanimate object categories described hereinabove.
  • In an embodiment, the juncture category describes a prehending entity that is an object in a stable relationship to some prehended entity during some interval of time. An example of a juncture is the relationship between two adjacent stones in an arch. In an embodiment, the structure category refers to that which mediates multiple objects whose junctures constitute the structure. In an embodiment, the structure category is divided into an artificial structure subcategory and a natural structure subcategory. The artificial structure subcategory describes, for example, human built structures, and the natural structure subcategory describes, for example, structures in nature.
  • In an embodiment, the script category describes an abstract form that represents time sequences. Such sequences can include, for example, a computer program, a recipe for baking a cake, a sheet of music to be played on a piano, or a differential equation that governs the evolution of a physical process. In an embodiment, the situation category describes something that occurs in a region of time and space. The situation category is subdivided into a state category and a phenomenon category. The state category describes a situation that does not change during a given period of time, and the phenomenon category describes a state or process known through the senses rather than by intuition or reasoning.
  • The embodiment object category is an entity that retains its identity over some interval of time. Subcategories of the object category include natural inanimate object, artificial inanimate object, wild animal, human, pet, plant and livestock. In an embodiment, the natural inanimate object subcategory pertains to non-living physical entities such as a rock or a mountain. The artificial inanimate object subcategory pertains to a large number of further subcategories as described in FIGS. 9 a and 9 b. Artificial inanimate objects can include, for example, such objects as vehicles, desks and chairs. The wild animal subcategory includes animals in the wild such as tigers, lions, monkeys, and the like. In one embodiment, the human subcategory includes human beings, and the pet subcategory includes domesticated or tamed animals kept as companions. The plant subcategory includes members of the kingdom Plantae, and the livestock subcategory includes, for example, horses, cattle, sheep, and other useful animals kept or raised, for example, on a farm or a ranch.
  • In an embodiment, the schema category represents an abstract form whose structure does not specify time or time-like relationships. Examples include geometric forms, the syntactic structures of sentences in some language, or the encodings of pictures in a multimedia system.
  • In an embodiment, the history category represents a proposition that relates some script to the stages of some occurrent, which is an entity that does not have a stable identity during any interval of time. Embodiment subcategories of the history category include human in history, event in history, and thing in history. The human in history subcategory describes people in history, the event in history subcategory describes a historic event, and the thing in history category describes, for example, an object in history.
  • In an embodiment, the process category represents a thing that makes a change during some period of time. Embodiment subcategories of the process category include event, human action—other activity, human action—economics, human action—sports and outdoors, human action—language, human action—movie, music and games, human action—home and garden, human action—social, problem solving, video start, video end, travel, and arts creating. The event subcategory describes, for example, a process that makes a change during a very short period of time. In one embodiment, a very short period of time is about two seconds. Alternatively, greater or lesser time periods can be considered a very short period of time depending on the environment and particular embodiment. The human action—other activity subcategory includes things that people do or people cause to happen. The human action—economics subcategory includes the science that deals with the production, distribution, and consumption of goods and services, or the material welfare of humankind. The human action—sports and outdoors subcategory includes sports and outdoor activities, for example, football games, baseball games, etc. The human action—language subcategory includes, for example, language related activities such as speaking and talking, and the human action—movie, music and games subcategory includes activities such as, but not limited to, watching movies and television, listening to music and playing video games.
  • In an embodiment, the human action—home and garden subcategory includes home and garden related activities such as housekeeping, and the human action—social subcategory includes social activities such as going to parties and other social gatherings. The embodiment problem solving subcategory includes a cognitive activity made by an agent for solving a problem.
  • In an embodiment, the video start subcategory denotes the starting of a video clip and the video end subcategory denotes the ending of a video clip. The embodiment travel subcategory pertains to travel, and the arts creating subcategory pertains to creating artistic objects.
  • In an embodiment, the description category is subdivided into the proposition, narration, exposition, description for argumentation, abstraction and property subcategories. The proposition subcategory includes descriptions, and the narration subcategory includes, for example, reports, stories, biographies, etc. The embodiment exposition subcategory includes operational or location plans such as a meeting agenda, and the description for argumentation subcategory includes arguments, issues, positions and facts. The abstraction subcategory pertains to a concept that abstracts some data, such as certain computer data structures, and the property subcategory pertains to descriptions of things. It should be appreciated that in alternative embodiments, different categories can be used. For example, a user defined category can be used to provide a more specific categorization.
  • In an embodiment, the purpose category pertains to an intention that explains a situation. Embodiment subcategories include time sequence, contingency, and success or failure. The time sequence subcategory describes sequences in time. For example, if an agent x performs an act y whose purpose is a situation z, the start of y occurs before the start of z. The contingency subcategory describes contingent purposes. For example, if an agent x performs an act y whose purpose is a situation z described by a proposition p, then it is possible that z might not occur or that p might not be true of z. Lastly, the success or failure subcategory describes purposes that result in success or failure. For example, if an agent x performs an act y whose purpose is a situation z described by a proposition p, then x is said to be successful if z occurs and p is true of z; otherwise, x is said to have failed.
  • In an embodiment, the reason category, unlike a simple description, explains an entity in terms of an intention.
  • FIGS. 10 a and 10 b illustrate an embodiment ontological user profile structure. FIG. 10 a illustrates an embodiment user profile having three major categories: internal attributes, external attributes and scene, and each of the three major categories has subcategories. In an embodiment, the internal information category includes internal attributes of the user such as gender, age, height, weight, ethnicity, language, nationality and religion. The external information category includes external attributes of the user such as location, family status, occupation, education, spirituality, family goals, communication style, emotional management style, and conflict resolution style. The scene category includes information about a user's present activities, such as being at home or on a trip.
  • FIG. 10 b illustrates extensions to some of the subcategories shown in FIG. 10 a. For example, the height, ethnicity, language, weight and religion subcategories are extended to provide ranges and classifications. It should be appreciated that in alternative embodiments, more or fewer categories and classifications can be used. Furthermore, the classifications can be modified, in some embodiments, to more directly address regional needs and differences. For example, in some embodiments that service regions with diverse dialects, the subcategories under the language subcategory can be modified to address different dialects. Other subcategories that can be similarly extended are, for example, the location and user scene subcategories, and external information subcategories 19 to 23. Furthermore, additional attributes can be added if necessary in some embodiments.
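  As an illustration, the three-category profile structure described above can be sketched as a nested Python dictionary. The category and attribute names follow the text of FIGS. 10 a and 10 b; the Python representation itself is an assumption for illustration, not part of the disclosure:

```python
# Sketch of the hierarchical user profile ontology (FIGS. 10a-10b).
# Attribute names follow the text; values start unset.
user_profile_ontology = {
    "internal_attributes": {
        "gender": None, "age": None, "height": None, "weight": None,
        "ethnicity": None, "language": None, "nationality": None, "religion": None,
    },
    "external_attributes": {
        "location": None, "family_status": None, "occupation": None,
        "education": None, "spirituality": None, "family_goals": None,
        "communication_style": None, "emotional_management_style": None,
        "conflict_resolution_style": None,
    },
    "scene": {"current_activity": None},  # e.g. "at_home" or "on_a_trip"
}

def set_attribute(profile, category, attribute, value):
    """Set one attribute; attributes outside the ontology (e.g. privacy
    related ones such as name or phone number) are ignored, as in the text."""
    if category in profile and attribute in profile[category]:
        profile[category][attribute] = value
        return True
    return False
```

  The guard in `set_attribute` mirrors the embodiment in which privacy related attributes that are not authorized for use are simply not stored in the ontological structure.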
  • In one embodiment, attributes that are not used to assist with advertisement filtering are ignored in the user ontology, such as the user's name, address, phone number, and other privacy related attributes. Alternatively, these privacy related attributes can be stored in the ontological structure, for example, if the user gives authorization to use this information. In some embodiments, some address data can be used to assist with targeting advertisements to specific geographical locations.
  • FIGS. 11 a-11 d illustrate embodiment table and matrix structures used during operation of an embodiment system. FIG. 11 a illustrates the structure of the Global Ads_Play_Table. Each column of the table is denoted by VSi, which indicates the video segment to be played. Each column contains an ad identifier Adsij that identifies the specific ad and an accompanying category.
  • FIG. 11 b illustrates an embodiment user preference matrix having n rows denoted by video segment category VSci and m columns denoted by advertising category Adsj. Each element aij in the user preference matrix is a vector containing like value aij l, dislike value aij d and history field hij. In an embodiment, 1≦i≦n and 1≦j≦m, where n is the number of rows and m is the number of columns of the user preference matrix. In an embodiment, segment category VSci and advertising category Adsj correspond to embodiment ontological categories described hereinabove.
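  A minimal sketch of the FIG. 11 b structure in Python, assuming each element aij holds a like count (aij l), a dislike count (aij d), and a history list (hij). The class and variable names are illustrative, not taken from the figures:

```python
from dataclasses import dataclass, field

# One element a_ij of the user preference matrix (FIG. 11b).
@dataclass
class PreferenceCell:
    like: int = 0                                  # a_ij^l, "like" count
    dislike: int = 0                               # a_ij^d, "dislike" count
    history: list = field(default_factory=list)    # h_ij, feedback history

# n video segment categories (rows) by m advertising categories (columns).
n, m = 3, 4
preference_matrix = [[PreferenceCell() for _ in range(m)] for _ in range(n)]
```

  The same cell layout also serves for the two-portion user profile matrix of FIG. 11 c, whose elements carry the identical like/dislike/history triple.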
  • FIG. 11 c illustrates an embodiment user profile matrix that is divided into two portions. The first portion of the user profile matrix has n rows denoted by user profile items UPCi. In one embodiment, these user profile items correspond to the ontological user profile categories described hereinabove. The user profile matrix has m columns denoted by advertising category Adsj. Each element in the first part of the user profile matrix is denoted by aij and contains user preference data on each (UPci|category, Adcj|category) combination. Element aij l denotes a number of "like" choices, aij d denotes a number of "dislike" choices, and hij stores a history of like or dislike based on user feedback. In an embodiment, this history is kept for a certain period of time or over a certain number of user feedback events.
  • The second portion of the user profile matrix has k rows denoted by video categories UScr. In one embodiment these video categories correspond to the ontological video categories described hereinabove. Each element in the second part of the user profile matrix is denoted by arj and contains user preference data on each (UScr|category, Adcj|category) combination. Element arj l denotes a number of "like" choices, arj d denotes a number of "dislike" choices, and hrj stores a history of like or dislike based on user feedback. In an embodiment, this history is also kept for a certain period of time or over a certain number of user feedback events.
  • FIG. 11 d illustrates an embodiment user feedback matrix. In an embodiment, the user feedback matrix contains N rows and three columns. Each row contains a video segment category VSc, an advertisement category ADc, and a user preference value chosen from 0, 1, and −1. In one embodiment, −1 indicates a negative response, 1 indicates a positive response, and 0 indicates a default value and/or a neutral response. Alternatively, different feedback values and/or feedback values with more granularity can be used. In an embodiment, these user feedback values are derived from a user feedback survey; however, in alternative embodiments, these values can be derived by other means.
  • In an embodiment, the user preference matrix and user profile matrix are maintained for each person and/or each user. In one embodiment, the user preference matrix is used for storing the user's preference about the combinations of each video segment category and advertisement category, and the profile matrix is used for storing the user's profile and scene preference. Both matrices are updated using feedback collected when video content is viewed. During the playing of the video, once a segment is reached, one or several rows are selected from the user preference matrix according to the categories of the current segment and sent to a Bayesian Engine. One or several columns are selected from the user profile matrix according to the categories of the current segment and a user scene category. In one embodiment, the user selects the scene category during login. The Bayesian Engine then uses the user profile to adjust the preference data from the preference matrix by multiplying by a weight that is determined by the user profile matrix. The adjusted preference data is then used to compute the user preference value. Next, the ads filter uses this value to filter out unsuitable advertisements from the Global Ads_Play_Table. Once this step is done, one or more advertisements are selected based on other conditions, such as the priority of each advertisement, and those advertisements selected are sent to the user device for insertion into the video content.
  • FIG. 12 illustrates a block diagram showing an embodiment workflow of Bayesian Engine-based user preference and profile (PNP) update system 400. In an embodiment, system 400 is used for constructing and updating user preference matrix 404. Once a new video segment 422 with segment category information VSc i 402 arrives, preference data 406 from preference matrix 404 is selected according to the video category. Advertisement data is stored in Ads Pool 418. In addition, preference data from profile matrix 414 is selected. In a preference adjustment step, Bayesian Engine 408 performs preference adjustment 410 and adjusts a value pair in each cell 407 of selected preference data row 406 from preference matrix 404. In an embodiment, preference values are adjusted by multiplying either aij l or aij d of each preference value pair by a weight value. In an embodiment, the weight is a number between −1 and 1, and is calculated from the selected data from profile matrix 414 according to the following expression:
  • Weight = (aij l − aij d) / ((aij l + 1)(aij d + 1)).
  • In one embodiment, profile weight factors from user profile matrix 414 are not used to adjust user preference value 416 directly, to prevent user profile values from having too large an influence on user preference value 416. Therefore, in one embodiment, an average of 17 weight factors is used to adjust user preference value 416 once. Furthermore, the user scene category is also used to adjust the preference values one more time, for a total of two adjustments. Alternatively, greater or fewer adjustments to the user preference value can be made depending on the system and its specifications. In an embodiment, if the weight is greater than 0, the adjustment is applied to aij l; otherwise, the adjustment is applied to aij d. The weight value is between −1 and 1, so the adjustment has the form aij l=aij l*(1+weight) if weight>0 or aij d=aij d*(1−weight) if weight<0.
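  The weight expression and the two-sided adjustment rule above can be sketched as follows. Python is used for illustration only; aij l and aij d are passed as plain numbers, and the function names are assumptions:

```python
def weight(like, dislike):
    """Weight = (a_ij^l - a_ij^d) / ((a_ij^l + 1)(a_ij^d + 1)).
    The +1 terms keep the denominator nonzero and bound the result in [-1, 1]."""
    return (like - dislike) / ((like + 1) * (dislike + 1))

def adjust(like, dislike, w):
    """Apply the weight to one preference value pair:
    a_ij^l = a_ij^l * (1 + w) if w > 0, a_ij^d = a_ij^d * (1 - w) if w < 0."""
    if w > 0:
        like = like * (1 + w)
    elif w < 0:
        dislike = dislike * (1 - w)
    return like, dislike
```

  Note that a negative weight increases the dislike value, so both branches move the pair in the direction indicated by the profile data.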
  • In an embodiment preference calculation step 412, Bayesian Engine 408 calculates a preference value according to:
  • P = (aij l − aij d) / ((aij l + 1)(aij d + 1)).
  • In one embodiment, the weight adjustment is applied before the calculation of preference value to ensure linearity of the preference value between −1 and 1. Alternatively, the weight adjustment can be applied after the preference value calculation and scaling applied afterward.
  • After the preference values of the combinations of this video category and all possible advertisement categories Adc1 . . . Adcm are calculated, the advertisement category with the highest score Adcj is selected in an embodiment. Alternatively, lower scoring advertisement categories are omitted. Bayesian Engine 408 then chooses those advertisements in category Adcj from row VSci of the global Ads_play_table. After this filtering step, the remaining advertisements are sent to the user device for playing according to advertisement related information such as priority, playing times, etc.
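  The selection of the highest-scoring advertisement category can be sketched as follows, applying the preference expression P to each (video category, advertisement category) pair in the selected row. The function names are illustrative assumptions:

```python
def preference(like, dislike):
    """P = (a_ij^l - a_ij^d) / ((a_ij^l + 1)(a_ij^d + 1))."""
    return (like - dislike) / ((like + 1) * (dislike + 1))

def best_ad_category(row):
    """row: list of (like, dislike) pairs for one video category, indexed by
    advertisement category j.  Returns the index of the highest-scoring
    advertisement category, as in the selection step described above."""
    scores = [preference(l, d) for l, d in row]
    return max(range(len(scores)), key=scores.__getitem__)
```

  In the alternative described above, categories scoring below a threshold would simply be dropped from the candidate list instead of keeping only the single best one.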
  • In an embodiment, both the user preference matrix and the user profile matrix are updated according to the user's feedback. If the user's feedback indicates a positive response, then the corresponding cell in the preference matrix is updated according to aij l+1. In one embodiment, an entire column of the user profile category in the first section of the user profile matrix, and one cell of the corresponding scene category in the second section of the user profile matrix, are updated with aij l+1. On the other hand, if the feedback is negative, the corresponding cells in the preference matrix are updated according to aij d+1, and the entire column of the profile category plus one cell of the corresponding scene category are updated according to aij d+1. In an embodiment, the feedback is appended at the end of the array hij in each corresponding cell. In alternative embodiments, the update equation aij l+1 can be replaced by other update rules; for example, instead of incrementing by 1, the value can be incremented by other specified constant or non-constant values. In some embodiments, some of the fields in the user profile are not used and/or not updated. Such unused fields, however, can be used and/or updated in a future version of the system depending on the specific embodiment and its specifications.
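  A sketch of the per-cell feedback update described above: the matching count is incremented by 1 and the raw feedback value is appended to hij. Updating the full profile column would apply the same rule to every cell in that column; the tuple layout here is an illustrative assumption:

```python
def apply_feedback(cell, feedback):
    """cell: (like, dislike, history) triple for one matrix element.
    feedback: 1 for a positive response, -1 for a negative response.
    Increments a_ij^l or a_ij^d by 1 and appends the feedback to h_ij."""
    like, dislike, history = cell
    if feedback == 1:
        like += 1
    elif feedback == -1:
        dislike += 1
    return like, dislike, history + [feedback]
```

  Replacing the `+= 1` with another constant or a non-constant step implements the alternative update rules mentioned in the text.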
  • In an embodiment, a sliding window is used to store the values of aij l and aij d in each cell. In some embodiments, the sliding window prevents new incoming feedback values from having a disproportionate effect on the calculation of the preference value, and prevents the values of aij l and aij d from growing too large due to accumulation. In an embodiment, each time a feedback value arrives, either a 1 or a −1 is appended to the first available cell in the sliding window. If the sliding window is full, the first element is removed. By doing this, the total count in each cell of the preference matrix and ontology-category matrix does not exceed a certain number. The structure of the sliding window is shown in FIG. 13. Here, aij l and aij d track the number of 1s and −1s in the sliding window so that the counts are available each time the cell is accessed. In some embodiments, the number of 1s and −1s does not need to be recounted in real time, so each time a new feedback value arrives, either aij l+1 is applied if the feedback value is 1, or aij d+1 is applied if the feedback value is −1. If the sliding window is full, the leftmost element is removed in one embodiment, and either aij l−1 or aij d−1 is applied depending on the removed value. In an embodiment, the window size also determines how much history is tracked for a particular video advertisement combination.
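  The sliding-window bookkeeping of FIG. 13 can be sketched as follows. The counts aij l and aij d are maintained incrementally so the window never has to be rescanned; the class name and window size are illustrative assumptions:

```python
from collections import deque

class SlidingWindowCell:
    """Bounded window of +1/-1 feedback values with running like/dislike
    counts, so the counts never grow without bound (FIG. 13 sketch)."""
    def __init__(self, window_size):
        self.window = deque()
        self.window_size = window_size
        self.like = 0     # number of 1s currently in the window  (a_ij^l)
        self.dislike = 0  # number of -1s currently in the window (a_ij^d)

    def add_feedback(self, value):
        if len(self.window) == self.window_size:
            removed = self.window.popleft()   # evict the leftmost element
            if removed == 1:
                self.like -= 1                # a_ij^l - 1
            else:
                self.dislike -= 1             # a_ij^d - 1
        self.window.append(value)
        if value == 1:
            self.like += 1                    # a_ij^l + 1
        else:
            self.dislike += 1                 # a_ij^d + 1
```

  Because eviction and insertion each adjust one counter, `like + dislike` never exceeds the window size, matching the bounded-history behavior described above.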
  • FIG. 14 illustrates Bayesian network model 510 according to an embodiment of the present invention. In step 512, the Bayesian network is constructed and applied to user preference model 514. In step 518, Global Ads_Play_Table 516 is filtered according to user preference model 514 to produce Filtered Ads_Play_Table 520. In one embodiment, filtering operation 518 discards advertisements that are likely not preferred by the customer according to the output of the Bayesian model. Next, after the user views the video, a user survey is administered in step 522 to provide feedback 524. In one embodiment, survey step 522 is composed of playing a video segment, an accompanying advertisement, and a preference choice. In an embodiment, the survey is given to a user or customer to capture their preference after the whole video is shown to the user or customer. The user preference model is updated according to feedback 524 in step 526.
  • FIG. 15 illustrates flow diagram 500 of the construction of an embodiment Bayesian network based user preference model. In an embodiment, the Bayesian network calculates the preference probability of a certain video segment and advertisement combination. In an embodiment, three nodes are used. Node V 502 denotes a video segment, node Ads 506 denotes an advertisement, and node Pref 504 denotes a user preference (Like or Dislike). The probability distribution tables defined below are constructed temporarily during the operation of the Bayesian model.
  • FIG. 16 illustrates pseudo code corresponding to an embodiment algorithm for the construction of a Bayesian model. In an embodiment, three probability distribution tables are defined. The first table is a V-Ads table (P(Ads|V)) that denotes the probability of choosing an advertisement when a specific video segment is selected. In an embodiment, this parameter is calculated, for example, according to the amount of financial support from the advertiser. It is assumed that there are n ads in a candidate set for a video segment, and the financial sponsorships are listed as (f1, f2, . . . , fn), which is also used as the header of the V-Ads table. The kth ad's conditional probability is calculated according to fk/(f1+f2+ . . . +fn).
  • The second embodiment table is a Pref-V-Ads table (P(Pref|Ads,V)) that denotes the probability that a user prefers a particular video segment and advertisement combination. This parameter is calculated according to the feedback from the user. In an embodiment, the header of the Pref-V-Ads table is (video segment, advertisement, like, dislike), and the conditional probability is calculated by:
  • P(Pref|Ads,V) = aij l / (aij l + aij d),
  • using the data of the user preference matrix.
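  The two conditional-probability tables above can be sketched as follows. This is a hypothetical Python rendering; the patent specifies the construction as pseudo code in FIG. 16, and the function names are assumptions:

```python
def v_ads_probabilities(sponsorships):
    """P(Ads_k | V) = f_k / (f_1 + ... + f_n): the k-th ad's conditional
    probability is its financial sponsorship divided by the total sponsorship
    of the candidate set for the video segment."""
    total = sum(sponsorships)
    return [f / total for f in sponsorships]

def pref_given_ads_v(like, dislike):
    """P(Pref | Ads, V) = a_ij^l / (a_ij^l + a_ij^d), computed from the
    like/dislike counts of the user preference matrix."""
    return like / (like + dislike)
```

  The remaining factor, P(V), would come from general viewing statistics (e.g. popular video websites) rather than from per-user data.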
  • The third embodiment table is a V table (P(V)) that denotes the probability that a certain video segment was displayed according to the general user preferences, which may be obtained from popular video websites, such as YouTube.com and Hulu.com, in some embodiments.
  • In an embodiment, during the construction of the Bayesian model, the probability of P(V,Ads,Pref)=P(Pref|Ads,V)·P(Ads|V)·P(V) is calculated using a Bayesian formula. After the model is constructed, user preference probability is predicted according to:
  • P(Ads,Pref|V) = P(Ads,V,Pref) / P(V).
  • In other words, this is the conditional probability that a certain kind of advertisement is preferred by the customer, given the category of the video segment. In an embodiment, advertisements are filtered according to the output of this model.
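  The construction and prediction steps above reduce to two one-line calculations, sketched here with illustrative names; the three probabilities are assumed to come from the tables described above:

```python
def joint_probability(p_pref_given_ads_v, p_ads_given_v, p_v):
    """P(V, Ads, Pref) = P(Pref|Ads,V) * P(Ads|V) * P(V), the chain-rule
    factorization used during construction of the Bayesian model."""
    return p_pref_given_ads_v * p_ads_given_v * p_v

def ads_pref_given_v(p_joint, p_v):
    """P(Ads, Pref | V) = P(Ads, V, Pref) / P(V), the prediction used to
    filter advertisements for a given video segment category."""
    return p_joint / p_v
```

  Note that dividing the joint by P(V) cancels the P(V) factor, so the prediction equals P(Pref|Ads,V)·P(Ads|V), which is what the filter ranks.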
  • FIG. 17 illustrates pseudo code corresponding to an algorithm for an embodiment Bayesian Filtering Algorithm, and FIG. 18 illustrates pseudo code corresponding to an algorithm for an embodiment Bayesian model updating algorithm.
  • FIG. 19 illustrates computer system 600 adapted to use embodiments of the present invention, e.g., storing and/or executing software associated with the embodiments. Central processing unit (CPU) 601 is coupled to system bus 602. CPU 601 may be any general purpose CPU. However, embodiments of the present invention are not restricted by the architecture of CPU 601 as long as CPU 601 supports the inventive operations as described herein. Bus 602 is coupled to random access memory (RAM) 603, which may be SRAM, DRAM, or SDRAM. ROM 604, which may be PROM, EPROM, or EEPROM, is also coupled to bus 602. RAM 603 and ROM 604 hold user and system data and programs as is well known in the art.
  • Bus 602 is also coupled to input/output (I/O) adapter 605, communications adapter 611, user interface adapter 608, and display adapter 609. The I/O adapter 605 connects storage devices 606, such as one or more of a hard drive, a CD drive, a floppy disk drive, or a tape drive, to computer system 600. Communications adapter 611 is configured to interface with network 612. The I/O adapter 605 is also connected to a printer (not shown), which would allow the system to print paper copies of information such as documents, photographs, articles, and the like. Note that the printer may be a printer, e.g., dot matrix, laser, and the like, a fax machine, scanner, or a copier machine. User interface adapter 608 is coupled to keyboard 613 and mouse 607, as well as other devices. Display adapter 609, which can be a display card in some embodiments, is connected to display device 610. Display device 610 can be a CRT, flat panel display, or other type of display device. In embodiments, system 600 can correspond to a server at the service provider, a server with the video provider, or a user device.
  • Advantages of embodiments include an ability to provide scene specific targeted advertisement capability that takes into account specific video content and user preferences. A further advantage of some embodiments includes enabling advertisements to be received by different users and different user terminals, in different time frames, and in different location according to the specific context and semantic information according to each user's preference and profile.
  • An advantage of some embodiments includes the ability to quickly and efficiently determine advertisements for video insertion for a particular user. Furthermore, in some embodiments, the user preferences are updated to track a user's changing preferences.
  • Advantages of embodiments that employ ontologically based categorization methods include the ability to quickly match advertisements to specific video segments, as well as quickly and efficiently filter a candidate list of advertisements according to a specific user's preferences.
  • Advantages of embodiments that process video metadata, rather than raw video content, include efficient transmission and reception of advertising lists. Furthermore, in embodiments that process advertisement lists separately from the actual advertisements themselves, the need for having a user device download a large amount of data devoted to potentially unwatched videos is alleviated.
  • Although the embodiments and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

Claims (29)

1. A method of inserting advertisements into video content, the method comprising:
electronically filtering a first list of advertisements according to user preference data to determine a second list of advertisements, wherein
each advertisement in the first list of advertisements is associated with a video category from a plurality of categories;
the video content comprises a plurality of video segments, and each video segment is associated with a category from the plurality of categories,
electronically filtering comprises filtering the first list of advertisements for the plurality of video segments on a segment by segment basis; and
transmitting the second list of advertisements to a user device for insertion with the video content.
2. The method of claim 1, wherein electronically filtering comprises
adjusting the user preference data according to user profile data;
calculating a preference value based on the adjusted user preference data; and
selecting an advertisement from the list of advertisements according to the calculated preference value.
3. The method of claim 2, further comprising
receiving user feedback info from the user device; and
adjusting the user preference data based on the user feedback.
4. The method of claim 3, wherein adjusting the user preference data comprises adjusting the user preference data according to a history of user preference data.
5. The method of claim 4, further comprising applying a sliding window to the history of user preference data.
6. The method of claim 2, wherein adjusting the user preference data according to the user profile data and calculating the preference value comprises using a preference and profile (PNP) based Bayesian engine.
7. The method of claim 2, wherein:
the user preference data comprises a user preference matrix comprising preference values according to rows indexed by video categories and columns indexed by advertising categories; and
the user profile data comprises a user profile matrix comprising rows indexed by user profile categories and columns indexed by advertising categories.
8. The method of claim 7, wherein:
adjusting the user preference data according to user profile data comprises:
selecting a row of the user preference matrix corresponding to a present video category,
selecting at least one row of the user profile matrix,
calculating weights from the at least one row of the user profile matrix,
applying the weights to the row of the user preference matrix to form an adjusted row of the user preference matrix; and
calculating the preference value comprises calculating preference values for the adjusted row of the user preference matrix.
9. The method of claim 8, wherein:
each element in the user profile matrix and the user preference matrix comprises an aij l value denoting a positive user disposition and an aij d value denoting a negative user disposition; and
calculating weights comprises calculating a weight according to:
Weight = (aij l − aij d) / ((aij l + 1)(aij d + 1));
applying the weights to the user preference matrix comprises
if the weight is greater than 0, then the weight is applied to aij l of the user preference matrix such that aij l=aij l(1+Weight), and
if the weight is not greater than 0, then the weight is applied to aij d of the user preference matrix such that aij d=aij d(1−Weight).
10. The method of claim 9, wherein calculating the preference value comprises applying the following expression to elements in the row of the user preference matrix:
P = (aij l − aij d) / ((aij l + 1)(aij d + 1)).
11. The method of claim 10, wherein the weight and the preference value are between −1 and 1.
12. The method of claim 10, wherein:
each element in the user profile matrix and the user preference matrix further comprises a history of a last N user values; and
the method further comprises calculating the aij l and aij d values according to the history of the last N user values.
13. The method of claim 7, wherein the user profile categories comprise a first set of categories, the first set of categories comprising the video categories.
14. The method of claim 7, wherein the user profile categories comprise:
a first set of categories comprising the video categories; and
a second set of categories comprising user profile data.
15. A method of inserting advertisements into video content, the method comprising:
electronically filtering a first list of advertisements according to user preference data to determine a second list of advertisements, wherein
each advertisement in the first list of advertisements comprises a category from a plurality of categories,
the user preference data comprises a plurality of preference fields indexed by the plurality of categories,
electronically filtering comprises updating the user preference data according to a user profile matrix comprising a plurality of attributes according to a hierarchical user ontology; and
transmitting the second list of advertisements to a user device for insertion with the video content.
16. The method of claim 15, wherein:
video content comprises a plurality of segments;
each video segment is associated with a category from the plurality of categories; and
the electronically filtering further comprises filtering the advertisements for insertion on a segment by segment basis.
17. The method of claim 15, wherein the hierarchical user ontology comprises a first plurality of top level categories, the first plurality of top level categories comprising:
a first category comprising subcategories relating to an individual's internal attributes;
a second category comprising subcategories relating to an individual's social status; and
a third category comprising subcategories relating to an individual's immediate situation.
18. The method of claim 17, wherein:
subcategories relating to the individual's internal attributes comprise subcategories pertaining to age and gender;
subcategories relating to the individual's social status comprises subcategories pertaining to occupation and education; and
subcategories relating to the individual's immediate situation comprise subcategories pertaining to whether the individual is at home or whether the individual is on a trip.
19. A system for inserting advertisements into video content, the system comprising:
a filtering block configured to filter a first list of advertisements according to user preference data to determine a second list of advertisements, wherein
each advertisement in the first list of advertisements is associated with a video category from a plurality of categories,
the video content comprises a plurality of video segments, and each video segment is associated with a category from the plurality of categories, and
the filtering block filters the first list of advertisements on a segment by segment basis.
20. The system of claim 19, further comprising a communications adaptor transmitting the second list of advertisements to a user device for insertion with the video content.
21. The system of claim 19, wherein the filtering block is further configured to:
adjust the user preference data according to user profile data;
calculate a preference value based on the adjusted user preference data; and
select an advertisement from the list of advertisements according to the calculated preference value.
22. The system of claim 21, wherein:
the system further comprises a communications adaptor receiving user feedback info from a user device; and
the filtering block is further configured to adjust the user preference data based on the user feedback.
23. The system of claim 21, wherein the filtering block comprises a Bayesian engine configured to adjust the user preference data and calculate the preference value.
24. The system of claim 21, wherein:
the user preference data comprises a user preference matrix comprising preference values according to rows indexed by video categories and columns indexed by advertising categories; and
the user profile data comprises a user profile matrix comprising rows indexed by user profile categories and columns indexed by advertising categories.
25. The system of claim 19, wherein filtering block is further configured to update the user preference data according to a user profile matrix comprising a plurality of attributes according to a hierarchical user ontology.
26. A non-transitory computer readable medium with an executable program stored thereon, wherein the program instructs a microprocessor to perform the following steps:
filtering a first list of advertisements according to user preference data to determine a second list of advertisements, wherein
each advertisement in the first list of advertisements is associated with a video category from a plurality of categories,
the video content comprises a plurality of video segments, and each video segment is associated with a category from the plurality of categories, and
the filtering block filters the first list of advertisements on a segment by segment basis.
27. The non-transitory computer readable medium of claim 26, wherein the program further instructs the microprocessor to perform the following steps:
adjusting the user preference data according to user profile data;
calculating a preference value based on the adjusted user preference data; and
selecting an advertisement from the list of advertisements according to the calculated preference value.
28. The non-transitory computer readable medium of claim 26, wherein the program further instructs the microprocessor to transmit the second list of advertisements to a user device for insertion with the video content.
29. The non-transitory computer readable medium of claim 26, wherein the program further instructs the microprocessor to update the user preference data according to a user profile matrix comprising a plurality of attributes according to a hierarchical user ontology.
US12/957,972 2010-01-28 2010-12-01 System and Method for Filtering Targeted Advertisements for Video Content Delivery Abandoned US20110184807A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/957,972 US20110184807A1 (en) 2010-01-28 2010-12-01 System and Method for Filtering Targeted Advertisements for Video Content Delivery
US12/958,102 US9473828B2 (en) 2010-01-28 2010-12-01 System and method for matching targeted advertisements for video content delivery

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US29922310P 2010-01-28 2010-01-28
US12/957,972 US20110184807A1 (en) 2010-01-28 2010-12-01 System and Method for Filtering Targeted Advertisements for Video Content Delivery

Publications (1)

Publication Number Publication Date
US20110184807A1 true US20110184807A1 (en) 2011-07-28

Family

ID=44309671

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/958,102 Active 2032-05-05 US9473828B2 (en) 2010-01-28 2010-12-01 System and method for matching targeted advertisements for video content delivery
US12/958,072 Abandoned US20110185384A1 (en) 2010-01-28 2010-12-01 System and Method for Targeted Advertisements for Video Content Delivery
US12/957,972 Abandoned US20110184807A1 (en) 2010-01-28 2010-12-01 System and Method for Filtering Targeted Advertisements for Video Content Delivery

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US12/958,102 Active 2032-05-05 US9473828B2 (en) 2010-01-28 2010-12-01 System and method for matching targeted advertisements for video content delivery
US12/958,072 Abandoned US20110185384A1 (en) 2010-01-28 2010-12-01 System and Method for Targeted Advertisements for Video Content Delivery

Country Status (1)

Country Link
US (3) US9473828B2 (en)


Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120166275A1 (en) * 2010-12-23 2012-06-28 Yahoo! Inc Privacy enhancing display advertisement recommendation using tagging
US8566156B2 (en) * 2011-07-05 2013-10-22 Yahoo! Inc. Combining segments of users into vertically indexed super-segments
US8849095B2 (en) * 2011-07-26 2014-09-30 Ooyala, Inc. Goal-based video delivery system
GB2493696A (en) * 2011-08-02 2013-02-20 Qatar Foundation A method of matching video to advertising content
US9009083B1 (en) * 2012-02-15 2015-04-14 Google Inc. Mechanism for automatic quantification of multimedia production quality
US20130347032A1 (en) * 2012-06-21 2013-12-26 Ebay Inc. Method and system for targeted broadcast advertising
US10002206B2 (en) * 2012-10-26 2018-06-19 Saturn Licensing Llc Information processing device and information processing method
US9069765B2 (en) * 2012-10-26 2015-06-30 Nbcuniversal Media, Llc Method and system for matching objects having symmetrical object profiling
US9332284B1 (en) * 2013-02-28 2016-05-03 Amazon Technologies, Inc. Personalized advertisement content
CN103399917B (en) * 2013-07-31 2017-07-14 小米科技有限责任公司 Data file insertion, device and system
CN104063799A (en) * 2014-06-16 2014-09-24 百度在线网络技术(北京)有限公司 Promotion message pushing method and device
US9853950B2 (en) * 2014-08-13 2017-12-26 Oath Inc. Systems and methods for protecting internet advertising data
US9727566B2 (en) * 2014-08-26 2017-08-08 Nbcuniversal Media, Llc Selecting adaptive secondary content based on a profile of primary content
US9235385B1 (en) * 2015-01-20 2016-01-12 Apollo Education Group, Inc. Dynamic software assembly
EP3086273A1 (en) * 2015-04-20 2016-10-26 Spoods GmbH A method for data communication between a data processing unit and an end device as well as a system for data communication
US10075755B2 (en) * 2015-09-18 2018-09-11 Sorenson Media, Inc. Digital overlay offers on connected media devices
US20180096018A1 (en) 2016-09-30 2018-04-05 Microsoft Technology Licensing, Llc Reducing processing for comparing large metadata sets
US20180211177A1 (en) 2017-01-25 2018-07-26 Pearson Education, Inc. System and method of bayes net content graph content recommendation
GB2553247B (en) * 2017-10-21 2018-08-15 Bruce Kelman Alistair Apparatus and method for protecting the privacy of viewers of commercial television
CN108540853A (en) * 2018-05-04 2018-09-14 科大讯飞股份有限公司 Advertisement placement method and device
CN108921221B (en) * 2018-07-04 2022-11-18 腾讯科技(深圳)有限公司 User feature generation method, device, equipment and storage medium
CN111629273B (en) * 2020-04-14 2022-02-11 北京奇艺世纪科技有限公司 Video management method, device, system and storage medium
US11475668B2 (en) 2020-10-09 2022-10-18 Bank Of America Corporation System and method for automatic video categorization
CN114390342B (en) * 2021-12-10 2023-08-29 阿里巴巴(中国)有限公司 Video music distribution method, device, equipment and medium


Patent Citations (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7050988B2 (en) * 1993-09-09 2006-05-23 Realnetworks, Inc. Method and apparatus for recommending selections based on preferences in a multi-user system
US20120102523A1 (en) * 1994-11-29 2012-04-26 Frederick Herz System and method for providing access to data using customer profiles
US7370342B2 (en) * 1998-06-12 2008-05-06 Metabyte Networks, Inc. Method and apparatus for delivery of targeted video programming
US6804659B1 (en) * 2000-01-14 2004-10-12 Ricoh Company Ltd. Content based web advertising
US20010049620A1 (en) * 2000-02-29 2001-12-06 Blasko John P. Privacy-protected targeting system
US20010042249A1 (en) * 2000-03-15 2001-11-15 Dan Knepper System and method of joining encoded video streams for continuous play
US20050216516A1 (en) * 2000-05-02 2005-09-29 Textwise Llc Advertisement placement method and system using semantic analysis
US20030093792A1 (en) * 2000-06-30 2003-05-15 Labeeb Ismail K. Method and apparatus for delivery of television programs and targeted de-coupled advertising
US20040054572A1 (en) * 2000-07-27 2004-03-18 Alison Oldale Collaborative filtering
US20020059399A1 (en) * 2000-11-14 2002-05-16 Itt Manufacturing Enterprises, Inc. Method and system for updating a searchable database of descriptive information describing information stored at a plurality of addressable logical locations
US7260823B2 (en) * 2001-01-11 2007-08-21 Prime Research Alliance E., Inc. Profiling and identification of television viewers
US7136875B2 (en) * 2002-09-24 2006-11-14 Google, Inc. Serving advertisements based on content
US20050165782A1 (en) * 2003-12-02 2005-07-28 Sony Corporation Information processing apparatus, information processing method, program for implementing information processing method, information processing system, and method for information processing system
US20070067297A1 (en) * 2004-04-30 2007-03-22 Kublickis Peter J System and methods for a micropayment-enabled marketplace with permission-based, self-service, precision-targeted delivery of advertising, entertainment and informational content and relationship marketing to anonymous internet users
US20070245379A1 (en) * 2004-06-17 2007-10-18 Koninklijke Phillips Electronics, N.V. Personalized summaries using personality attributes
US20060029093A1 (en) * 2004-08-09 2006-02-09 Cedric Van Rossum Multimedia system over electronic network and method of use
US20060074769A1 (en) * 2004-09-17 2006-04-06 Looney Harold F Personalized marketing architecture
US20060242016A1 (en) * 2005-01-14 2006-10-26 Tremor Media Llc Dynamic advertisement system and method
WO2007048432A1 (en) * 2005-10-28 2007-05-03 Telecom Italia S.P.A. Method of providing selected content items to a user
US20090234784A1 (en) * 2005-10-28 2009-09-17 Telecom Italia S.P.A. Method of Providing Selected Content Items to a User
US7526722B2 (en) * 2005-12-29 2009-04-28 Sap Ag System and method for providing user help according to user category
US20070204310A1 (en) * 2006-02-27 2007-08-30 Microsoft Corporation Automatically Inserting Advertisements into Source Video Content Playback Streams
US8321889B2 (en) * 2006-03-08 2012-11-27 Kamfu Wong Method and system for personalized and localized TV ad delivery
WO2008056358A2 (en) * 2006-11-10 2008-05-15 Media Layers Ltd Method and computer program product for providing advertisements to a mobile user device
US20090007195A1 (en) * 2007-06-26 2009-01-01 Verizon Data Services Inc. Method And System For Filtering Advertisements In A Media Stream
US20090006375A1 (en) * 2007-06-27 2009-01-01 Google Inc. Selection of Advertisements for Placement with Content
US20090037262A1 (en) * 2007-07-30 2009-02-05 Yahoo! Inc. System for contextual matching of videos with advertisements
US20090164419A1 (en) * 2007-12-19 2009-06-25 Google Inc. Video quality measures
US20100161441A1 (en) * 2008-12-24 2010-06-24 Comcast Interactive Media, Llc Method and apparatus for advertising at the sub-asset level
US20100235220A1 (en) * 2009-03-10 2010-09-16 Google Inc. Category similarities
US20110078723A1 (en) * 2009-09-29 2011-03-31 Verizon Patent and Licensing. Inc. Real time television advertisement shaping
US20110082824A1 (en) * 2009-10-06 2011-04-07 David Allison Method for selecting an optimal classification protocol for classifying one or more targets
US20110185384A1 (en) * 2010-01-28 2011-07-28 Futurewei Technologies, Inc. System and Method for Targeted Advertisements for Video Content Delivery

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100191689A1 (en) * 2009-01-27 2010-07-29 Google Inc. Video content analysis for automatic demographics recognition of users and videos
US20110185381A1 (en) * 2010-01-28 2011-07-28 Futurewei Technologies, Inc. System and Method for Matching Targeted Advertisements for Video Content Delivery
US20110185384A1 (en) * 2010-01-28 2011-07-28 Futurewei Technologies, Inc. System and Method for Targeted Advertisements for Video Content Delivery
US9473828B2 (en) * 2010-01-28 2016-10-18 Futurewei Technologies, Inc. System and method for matching targeted advertisements for video content delivery
US20130124310A1 (en) * 2010-07-20 2013-05-16 Koninklijke Philips Electronics N.V. Method and apparatus for creating recommendations for a user
US20130151340A1 (en) * 2010-08-27 2013-06-13 Axel Springer Digital Tv Guide Gmbh Coordinated automatic ad placement for personal content channels
US8924993B1 (en) 2010-11-11 2014-12-30 Google Inc. Video content analysis for automatic demographics recognition of users and videos
US10210462B2 (en) 2010-11-11 2019-02-19 Google Llc Video content analysis for automatic demographics recognition of users and videos
US9860790B2 (en) 2011-05-03 2018-01-02 Cisco Technology, Inc. Mobile service routing in a network environment
US20120331169A1 (en) * 2011-06-22 2012-12-27 Mcintire John P Method and apparatus for automatically associating media segments with broadcast media streams
US9681160B2 (en) * 2011-06-22 2017-06-13 Tout Inc. Method and apparatus for automatically associating media segments with broadcast media streams
US20130238757A1 (en) * 2012-03-06 2013-09-12 Limelight Networks, Inc. Distributed playback session customization file management
US8266246B1 (en) * 2012-03-06 2012-09-11 Limelight Networks, Inc. Distributed playback session customization file management
US11709889B1 (en) * 2012-03-16 2023-07-25 Google Llc Content keyword identification
US9355157B2 (en) 2012-07-20 2016-05-31 Intertrust Technologies Corporation Information targeting systems and methods
EP2875459A4 (en) * 2012-07-20 2015-07-29 Intertrust Tech Corp Information targeting systems and methods
US10061847B2 (en) 2012-07-20 2018-08-28 Intertrust Technologies Corporation Information targeting systems and methods
US11482192B2 (en) 2012-12-20 2022-10-25 Arris Enterprises Llc Automated object selection and placement for augmented reality
US9767768B2 (en) 2012-12-20 2017-09-19 Arris Enterprises, Inc. Automated object selection and placement for augmented reality
US20140278983A1 (en) * 2013-03-15 2014-09-18 Microsoft Corporation Using entity repository to enhance advertisement display
US10417025B2 (en) 2014-11-18 2019-09-17 Cisco Technology, Inc. System and method to chain distributed applications in a network environment
US20160189236A1 (en) * 2014-12-29 2016-06-30 Yahoo! Inc. Techniques for reducing irrelevant ads
US20160189239A1 (en) * 2014-12-30 2016-06-30 Yahoo!, Inc. Advertisement generator
US9825769B2 (en) 2015-05-20 2017-11-21 Cisco Technology, Inc. System and method to facilitate the assignment of service functions for service chains in a network environment
US9762402B2 (en) 2015-05-20 2017-09-12 Cisco Technology, Inc. System and method to facilitate the assignment of service functions for service chains in a network environment
CN106658094A (en) * 2015-10-29 2017-05-10 北京国双科技有限公司 Video advertisement putting method, client and server
US11044203B2 (en) 2016-01-19 2021-06-22 Cisco Technology, Inc. System and method for hosting mobile packet core and value-added services using a software defined network and service chains
US10361969B2 (en) 2016-08-30 2019-07-23 Cisco Technology, Inc. System and method for managing chained services in a network environment
CN106600343A (en) * 2016-12-30 2017-04-26 中广热点云科技有限公司 Method and system for managing online video advertisement associated with video content
US10735795B2 (en) 2017-01-11 2020-08-04 Invidi Technologies Corporation Managing addressable asset campaigns across multiple devices
US11689758B2 (en) 2017-01-11 2023-06-27 Invidi Technologies Corporation Managing addressable asset campaigns across multiple devices
WO2018132602A1 (en) * 2017-01-11 2018-07-19 Invidi Technologies Corporation Managing addressable asset campaigns across multiple devices
US11363352B2 (en) 2017-09-29 2022-06-14 International Business Machines Corporation Video content relationship mapping
US10587919B2 (en) 2017-09-29 2020-03-10 International Business Machines Corporation Cognitive digital video filtering based on user preferences
US10587920B2 (en) 2017-09-29 2020-03-10 International Business Machines Corporation Cognitive digital video filtering based on user preferences
US11395051B2 (en) 2017-09-29 2022-07-19 International Business Machines Corporation Video content relationship mapping
US11087369B1 (en) * 2018-03-16 2021-08-10 Facebook, Inc. Context-based provision of media content
CN109408670A (en) * 2018-10-23 2019-03-01 聚好看科技股份有限公司 Kinsfolk's attribute forecast method, apparatus and intelligent terminal based on topic model
JP2020156001A (en) * 2019-03-22 2020-09-24 株式会社三井住友銀行 Advertisement recommendation method utilizing ai, program, and computer
WO2021147460A1 (en) * 2020-01-22 2021-07-29 天窗智库文化传播(苏州)有限公司 Media information release management method and system
US11252461B2 (en) 2020-03-13 2022-02-15 Google Llc Media content casting in network-connected television devices
WO2021183146A1 (en) * 2020-03-13 2021-09-16 Google Llc Mixing of media content items for display on a focus area of a network-connected television device
US11683564B2 (en) 2020-03-13 2023-06-20 Google Llc Network-connected television devices with knowledge-based media content recommendations and unified user interfaces
CN112866748A (en) * 2021-01-19 2021-05-28 北京锐马视讯科技有限公司 AI-based video advertisement implanting method, AI-based video advertisement implanting device, AI-based video advertisement implanting equipment and AI-based video advertisement implanting storage medium

Also Published As

Publication number Publication date
US20110185381A1 (en) 2011-07-28
US9473828B2 (en) 2016-10-18
US20110185384A1 (en) 2011-07-28

Similar Documents

Publication Publication Date Title
US9473828B2 (en) System and method for matching targeted advertisements for video content delivery
US11188573B2 (en) Character based media analytics
Kehoe et al. The impact of digital technology on the distribution value chain model of independent feature films in the UK
Tewksbury et al. News on the Internet: Information and Citizenship in the 21st Century
US8335714B2 (en) Identification of users for advertising using data with missing values
JP5735087B2 (en) Providing personalized resources on demand to consumer device applications over a broadband network
US8626752B2 (en) Broadcast network platform system
US20100161424A1 (en) Targeted advertising system and method
US20120254301A1 (en) Broadcast Network Platform System
US20120078725A1 (en) Method and system for contextual advertisement recommendation across multiple devices of content delivery
US20090164442A1 (en) Interactive hybrid recommender system
US20090172727A1 (en) Selecting advertisements to present
US8060498B2 (en) Broadcast network platform system
US20090024470A1 (en) Vertical clustering and anti-clustering of categories in ad link units
US20130124310A1 (en) Method and apparatus for creating recommendations for a user
US20160055183A1 (en) Binary Media Broadcast Distribution System
Seely et al. Email Newsletters: An Analysis of Content From Nine Top News Organizations
EP2652945A2 (en) Processes and systems for creating idiomorphic media and delivering granular media suitable for interstitial channels
US20080300976A1 (en) Identification of users for advertising purposes
US20160294885A1 (en) Live Video Communications System
Manko Video advertising: Using YouTube analytics for the target audience
US20110125570A1 (en) Method and apparatus for selecting content
US20160050389A1 (en) Live Video Communications System
EP2275984A1 (en) Automatic information selection based on involvement classification
US20170200190A1 (en) Dynamically served digital content based on real time event updates

Legal Events

Date Code Title Description
AS Assignment

Owner name: FUTUREWEI TECHNOLOGIES, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, QI;WANG, SHU;HUANG, YU;AND OTHERS;SIGNING DATES FROM 20101130 TO 20101201;REEL/FRAME:025437/0371

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION