US20110167445A1 - Audiovisual content channelization system - Google Patents


Info

Publication number
US20110167445A1
Authority
US
United States
Prior art keywords
content
run
length
advertising
audiovisual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/701,300
Inventor
Robert W. Reams
Evan C. Johnson
Current Assignee
Streaming Appliances LLC
Original Assignee
Streaming Appliances LLC
Priority date
Filing date
Publication date
Application filed by Streaming Appliances LLC
Priority to US12/701,300
Publication of US20110167445A1
Assigned to STREAMING APPLIANCES, LLC. Assignors: REAMS, ROBERT
Legal status: Abandoned


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/14Coding unit complexity, e.g. amount of activity or edge presence estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Definitions

  • The invention relates to delivery of audiovisual content over a packet-switched network, and more particularly to a system and process that channelizes audiovisual content for delivery based on semantic filtering.
  • A system and method for delivering video content is provided that channelizes on-demand content to simulate broadcast programming.
  • The system and method use semantic filtering to combine content into channels.
  • A run-length audiovisual content system provides run-length audiovisual content.
  • An advertising content system provides advertising content.
  • A semantic filter system processes the run-length audiovisual content and the advertising content and matches the run-length audiovisual content to the advertising content based on semantic filter output.
  • FIG. 1 is a diagram of a system for providing video compression in accordance with an exemplary embodiment of the present invention.
  • FIG. 2 is a diagram of a system for processing video data in accordance with an exemplary embodiment of the present invention.
  • FIG. 3 is a diagram of a system for filtering video signals in accordance with an exemplary embodiment of the present invention.
  • FIG. 4 is a diagram of a system for channelizing run-length audiovisual content, matching the channelized run-length audiovisual content with advertising, and for determining the effectiveness of the matched advertising and run-length audiovisual content in accordance with an exemplary embodiment of the present invention.
  • FIG. 5 is a diagram of a semantic filter in accordance with an exemplary embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for improving the image quality of audiovisual data in accordance with an exemplary embodiment of the present invention.
  • FIG. 7 is a flowchart of a method for providing advertising data and run-length content for on-demand channels in accordance with an exemplary embodiment of the present invention.
  • FIG. 8 is a diagram of a screen display in accordance with an exemplary embodiment of the present invention.
  • FIG. 9 is a flowchart of a method for monitoring video content delivery to determine whether brand information has been requested in accordance with an exemplary embodiment of the present invention.
  • FIG. 10 is a diagram of a system and method for filtering video signals in accordance with an exemplary embodiment of the present invention.
  • FIG. 11 is a diagram of a system and method for generating offset data in accordance with an exemplary embodiment of the present invention.
  • FIG. 1 is a diagram of a system 100 for providing video compression in accordance with an exemplary embodiment of the present invention.
  • System 100 allows variable rate video data to be compressed without noticeable loss of quality.
  • System 100 includes variable rate video encoder 102, which can be implemented in hardware or a suitable combination of hardware and software, and which can be one or more software systems operating on a general purpose processing platform.
  • Variable rate video encoder 102 can be an MPEG-4 Part 10-compliant variable rate video encoder operating on a general purpose processing platform, an application-specific integrated circuit, or other suitable platform.
  • Variable rate video encoder 102 receives video data and generates a variable rate video output.
  • the variable rate video output can be at a variable bandwidth that is higher than a target bandwidth.
  • further processing of the output of variable rate video encoder 102 can be required in order to provide a bandwidth at a desired level.
  • When the bandwidth of the video is low, no additional processing is required.
  • An indicator of whether additional processing is required can be obtained based on the entropy of the video signal being processed, where a high level of entropy generates a high level of quantization noise.
  • Variable rate video encoder 102 generates a quantization noise output, such as an indication of the amount of variation in macroblocks, blocks, pixels, or other video data.
  • The quantization noise output can be the variable bit rate mode output of the MPEG-4 Part 10 encoder, which can also be characterized as an indication of the entropy of the video that is being encoded.
  • System 100 includes variable jitter filter 104 , which can be implemented in hardware or a suitable combination of hardware and software, and which can be one or more software systems operating on a general purpose processing platform.
  • Variable jitter filter 104 provides a controllable amount of jitter reduction based on an input. Threshold input and quantization noise output are used to determine whether a threshold has been exceeded for activation of variable jitter filter 104 . In one exemplary embodiment, when the quantization noise output is below a threshold input, variable jitter filter 104 will not be activated. Likewise, when the quantization noise output exceeds the threshold input, the variable jitter filter 104 will reduce jitter by a predetermined amount related to the difference between the quantization noise output and the threshold input.
  • Variable aliasing filter 106 receives the filtered output from variable jitter filter 104 and performs anti-aliasing filtering based upon an input quantization noise level and a threshold input.
  • variable aliasing filter 106 can receive the quantization noise output and the threshold input and can deactivate aliasing filtering if the quantization noise output is below the threshold level, otherwise, variable aliasing filter 106 performs aliasing filtering based on the difference between the threshold input level and the quantization noise output level.
  • system 100 can be used to reduce the bandwidth of a variable rate video signal without creating video artifacts, such as blurriness or lack of picture quality.
  • System 100 activates a variable jitter filter 104 and variable aliasing filter 106 when quantization noise levels exceed threshold inputs to the filters.
  • the threshold inputs to both filters can be matched, such that they are symmetric. By making the threshold level symmetric, the video artifacts generated by processing the video signal through variable jitter filter 104 and variable aliasing filter 106 can be minimized.
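The threshold-gated behavior described above can be sketched in Python. This is a minimal illustration, not the patent's implementation: the stand-in filters and the proportional-strength rule are assumptions made for the example.

```python
def filter_amount(quant_noise: float, threshold: float) -> float:
    # Filter is inactive below the threshold; above it, strength is
    # proportional to how far quantization noise exceeds the threshold.
    return max(0.0, quant_noise - threshold)

# Stand-in filters (hypothetical): a real jitter or anti-aliasing filter
# would attenuate the offending signal energy, not just scale samples.
def jitter_filter(sample: float, strength: float) -> float:
    return sample / (1.0 + strength)

def alias_filter(sample: float, strength: float) -> float:
    return sample / (1.0 + strength)

def process(sample: float, quant_noise: float, threshold: float) -> float:
    # Symmetric thresholds: the same level gates both filter stages.
    strength = filter_amount(quant_noise, threshold)
    if strength > 0.0:
        sample = jitter_filter(sample, strength)
        sample = alias_filter(sample, strength)
    return sample
```

With quantization noise below the threshold the sample passes unmodified; above it, both stages engage with matched strength, which is the symmetry the text describes.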
  • FIG. 2 is a diagram of a system 200 for processing video data in accordance with an exemplary embodiment of the present invention.
  • System 200 includes variable rate video encoder 202, such as an MPEG-4 Part 10 variable rate video encoder, which can be implemented in hardware or a suitable combination of hardware and software, and which can be one or more software systems operating on a general purpose processing platform.
  • Quantization noise output from variable rate video encoder 202, such as the variable bit rate mode output of the MPEG-4 Part 10 encoder, is provided as an input to variable filter stage 1 204 and variable filter stage 2 206.
  • Variable filter stage 1 204 receives a threshold 1 input, and variable filter stage 2 206 receives a threshold 2 input.
  • variable filter stage 1 204 and variable filter stage 2 206 can include a variable jitter filter 104 and a variable aliasing filter 106 , or other suitable filters.
  • the thresholds provided to the filters that comprise variable filter stage 1 204 and variable filter stage 2 206 can be symmetric threshold levels.
  • system 200 provides multiple stage filtering of a video signal to further reduce the bandwidth of the video signal and improve signal quality.
  • Threshold 1 input can have a first size or speed, such as a slow speed, and threshold 2 input can have a second, higher speed, so as to stage the filtering of the variable rate video data signal and reduce the signal bandwidth without affecting signal quality.
  • FIG. 3 is a diagram of a system 300 for filtering video signals in accordance with an exemplary embodiment of the present invention.
  • System 300 can be implemented in hardware or a suitable combination of hardware and software, and can be one or more software systems operating on a general purpose processing platform.
  • Fast RMS processor 302 and slow RMS processor 304 each compute the root-mean-square value of the input signal, f_RMS = lim_{T→∞} √( (1/(2T)) ∫_{−T}^{T} [f(t)]² dt ), with fast RMS processor 302 having a shorter integration time than slow RMS processor 304.
  • The output from fast RMS processor 302 is processed by log processor 306, and the output from slow RMS processor 304 is processed by log processor 308.
  • The outputs from log processors 306 and 308 are provided to summer 310, which subtracts the output of log processor 308 from the output of log processor 306 and provides the result to threshold 312.
  • Threshold 312 can receive a user-programmable threshold level and outputs a signal if the input exceeds the threshold.
  • The output is then provided to ratio multiplier 314, which receives a predetermined ratio and multiplies the output by the predetermined ratio.
  • The output from ratio multiplier 314 is fed into log⁻¹ processor 316, whose output is provided as an input to jitter filter 320.
  • The video input is also provided to Z⁻ⁿ processor 318, which performs a Z⁻ⁿ transform on the input video data.
  • The output of Z⁻ⁿ processor 318 is provided to jitter filter 320, which performs jitter filtering based on the setting received from log⁻¹ processor 316.
  • The output from jitter filter 320 is provided to fast RMS processor 322 and slow RMS processor 324, which perform processing of the video signal as previously discussed.
  • The output from fast RMS processor 322 is provided to log processor 326, and the output from slow RMS processor 324 is provided to log processor 328.
  • The outputs from log processors 326 and 328 are provided to summer 330, which subtracts the output of log processor 328 from the output of log processor 326 and provides the difference to threshold 332, which passes the unmodified signal if the input is below a predetermined threshold and passes a threshold-modified signal if the input is above the predetermined threshold.
  • Multiplier 334 multiplies the output from threshold 332 by a ratio, and the result is provided to log⁻¹ processor 336.
  • The output of log⁻¹ processor 336 is provided to alias filter 340.
  • The output of jitter filter 320 is provided to Z⁻ⁿ processor 338, and is then filtered by alias filter 340 in response to the output of log⁻¹ processor 336.
  • the video output signal that is generated has a lower bandwidth at higher quality than video processed using other processes.
  • the threshold levels set by threshold 312 and 332 can be symmetric thresholds.
  • system 300 performs compression of a variable rate video signal to provide improved video quality at lower bandwidth.
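The fast/slow RMS sidechain of FIG. 3 can be sketched as follows. The window lengths, the dB conversion, and the gain law are illustrative assumptions; the patent's processors operate on video data, while this sketch uses a simple sample list.

```python
import math

def rms(window):
    # Root-mean-square over a finite window, the discrete analogue of
    # the integral definition of f_RMS above.
    return math.sqrt(sum(x * x for x in window) / len(window))

def sidechain(samples, fast_n, slow_n, threshold_db, ratio):
    # Fast RMS uses a shorter integration time than slow RMS (302 vs 304).
    fast = rms(samples[-fast_n:])
    slow = rms(samples[-slow_n:])
    # Log processors and summer: difference of the two levels in dB.
    diff_db = 20.0 * math.log10(fast / slow)
    # Threshold: below it, the signal passes unmodified (unity control).
    if diff_db <= threshold_db:
        return 1.0
    # Ratio multiplier, then log^-1 (antilog) back to a linear control value.
    return 10.0 ** ((diff_db - threshold_db) * ratio / 20.0)
```

A steady signal yields equal fast and slow RMS, so the control value stays at unity; a sudden burst pushes the fast RMS above the slow RMS and drives the jitter or alias filter harder, which is the adaptive behavior the figure describes.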
  • FIG. 4 is a diagram of a system 400 for channelizing run-length audiovisual content, matching the channelized run-length audiovisual content with advertising, and for determining the effectiveness of the matched advertising and run-length audiovisual content in accordance with an exemplary embodiment of the present invention.
  • run-length audiovisual content refers to programmatic audiovisual content having a duration, theme, characters, plot, focus (such as scientific programs or sporting events), or other characteristics that distinguish audiovisual content that is viewed for entertainment or information purposes from advertising.
  • “Advertising” can include programs such as “infomercials,” but more typically includes content that can be included within the “run-length audiovisual content,” such as icons that can be selected by a user, headers or footers, watermarks or other content that is not the selecting factor in a user's decision to view the “run-length audiovisual content,” but rather which is usually included with the “run-length audiovisual content” for a fee, for a public service, or for reasons other than the purpose for the “run-length audiovisual content.”
  • System 400 can be implemented in hardware or a suitable combination of hardware and software, and can be one or more software systems operating on a general purpose processing platform.
  • “hardware” can include a combination of discrete components, an integrated circuit, an application-specific integrated circuit, a field programmable gate array, a digital signal processor, or other suitable hardware.
  • “software” can include one or more objects, agents, threads, lines of code, subroutines, separate software applications, two or more lines of code or other suitable software structures operating in two or more software applications or on two or more processors, or other suitable software structures.
  • software can include one or more lines of code or other suitable software structures operating in a general purpose software application, such as an operating system, and one or more lines of code or other suitable software structures operating in a specific purpose software application.
  • Run-length audiovisual content system 402 provides run-length audiovisual content programs for analysis.
  • run-length audiovisual content system 402 can include a plurality of sources of run-length audiovisual content that are processed to extract text for semantic filtering.
  • Semantic filter 404 filters content, such as by analyzing the relationship of words, phrases and terms.
  • Semantic filter 404 is used to distinguish first content from second content, where the first and second content contain exactly the same words, but where the order of the words gives them substantially different meanings.
  • Semantic filter 404 uses a database of semantic metrics to score content, where the database is configured based on a flexible set of user-defined rules.
  • a semantic filter can be used to identify content relating to certain topics, such as “food,” by identifying text strings that are associated with food topics, such as “cooking,” “dining,” and “groceries.” These text strings can be characterized as a “lexicon manifold,” where the words, terms or phrases that are associated with a particular category of the semantic filter form a lexicon, and the set of categories and associated lexicons form a manifold that can be used to perform semantic filtering.
  • Text strings that may appear to be associated with food but which represent other topics, such as "brain food" in the current example, can be identified as excluded strings that should be ignored.
  • Food is only provided as an exemplary topic, and any other suitable topic can also or alternatively be processed or identified using semantic filter 404 .
  • semantic filter 404 will be used to generate metrics for content that can be used to group the content with related content, to match the content with advertising that is related to the content or of potential interest to content viewers, or for other suitable purposes.
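A minimal sketch of the lexicon-manifold idea, using the "food" example and its excluded string. The specific words and the naive substring matching are illustrative assumptions, not the patent's filter.

```python
# Each category maps to a lexicon of associated strings plus excluded
# strings that look related but should be ignored ("brain food").
MANIFOLD = {
    "food": {
        "lexicon": {"cooking", "dining", "groceries", "food"},
        "excluded": {"brain food"},
    },
}

def semantic_scores(text, manifold=MANIFOLD):
    lowered = text.lower()
    scores = {}
    for category, spec in manifold.items():
        cleaned = lowered
        for phrase in spec["excluded"]:
            # Strip excluded phrases before counting lexicon hits.
            cleaned = cleaned.replace(phrase, "")
        # Naive substring counts; a production filter would respect word
        # boundaries and weight terms.
        scores[category] = sum(cleaned.count(term) for term in spec["lexicon"])
    return scores
```

The resulting per-category scores are the kind of metrics that can group content with related content or match it to advertising, as the surrounding text describes.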
  • Content channelization system 406 receives filtered run-length audiovisual content from semantic filter 404 and assembles the run-length audiovisual content into channels.
  • content channelization system 406 can use semantic filter settings or output and can select run-length audiovisual content based on the semantic filter settings or output.
  • content channelization system 406 can have predetermined run-length audiovisual content, which can be processed using a semantic filter 404 to ensure that it complies with the parameters for a predetermined channel in content channelization system 406 .
  • Advertising content system 408 provides advertising content.
  • Advertising content can be provided by advertisers, such as standard advertising content developed by the advertisers for use with other media.
  • advertising content system 408 can be used in conjunction with a semantic filter to develop advertising scripts that match semantic filter settings or output for channels of audiovisual content.
  • Semantic filter 410 processes advertising content from advertising content system 408 to generate a plurality of semantic filter settings or outputs.
  • the semantic filter settings or outputs can be selected to match advertising content from advertising content system 408 with run-length audiovisual content from run-length audiovisual content system 402 .
  • Advertising preprocessing system 412 allows a user to modify run-length audiovisual content to add advertising content.
  • advertising preprocessing system 412 allows a user to insert markers into audiovisual content so as to cause the advertising from advertising content system 408 to be automatically linked to the audiovisual content, such as by including tags in the audiovisual content.
  • systems or processes such as those provided or used by Princeton Video Image, http://www.pvi.tv/pvi/index.asp, can be used to process run-length audiovisual content to insert tags into the run-length audiovisual content to allow advertising to be merged into the run-length audiovisual content.
  • a billboard in run-length audiovisual content such as a sporting event can be processed to remove the advertising on the billboard and to replace it with a tag, to allow advertising to be dynamically inserted into the location of the billboard in the audiovisual content.
  • advertising preprocessing can be used to associate advertising content from advertising content system 408 with run-length audiovisual content from run-length audiovisual content system 402 , such as by placing tags on cans, bottles, boxes, or other props.
  • Advertising insertion system 414 receives advertising content from semantic filter 410 and inserts the advertising where available and where suitable.
  • advertising insertion system 414 can insert advertising on a space available basis, such as backdrops on sports fields, billboards and programming, or any other suitable locations.
  • Advertising insertion system 414 outputs the channelized content.
  • Advertising delivery metrics system 416 generates one or more advertising delivery metrics, such as a number of requests for additional information, a number of follow-ups on requests received from users/viewers, or other suitable metrics.
  • advertising triggers inserted in the audiovisual content can be selected by viewers.
  • Advertising delivery metrics system 416 can count the number of times that viewers requested additional information or otherwise indicated an interest in an advertised product or service, can determine the number of times that viewers followed up on such requests, can determine whether users purchased the advertised products or services, can determine whether free samples of the advertised products were provided, or can otherwise provide advertising delivery metrics.
  • Social media content collection system 418 monitors social media that is responsive to advertising for associated content.
  • Certain types of social media, such as Twitter, Facebook, and blogs, attract feedback on predetermined audiovisual content programs, such as blogs or discussion groups that are identified by the name of the program, or that may be the subject of certain interest groups.
  • Social media content collection system 418 collects text from social media associated with such predetermined social media groups.
  • Semantic filter 420 filters the text data from social media content collection system 418 based on advertising inserted into the run-length audiovisual content by advertising preprocessing system 412 or advertising insertion system 414 .
  • the advertising provided to run-length audiovisual content can be used to set settings of semantic filter 420 , such as to look for predetermined phrases that are related to the advertising or for other suitable purposes.
  • the output of semantic filter 420 can thus be used to determine an indication of the success or failure of the advertising.
  • advertising success metrics can be determined based on an expected semantic filtering output. Where the semantic filtering output differs from the expected output, the output can be determined to have an error that may be corrected in future iterations.
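The expected-versus-observed comparison above can be sketched as a per-category error check. The category names, tolerance value, and dictionary shape are assumptions for illustration only.

```python
def filtering_error(expected, observed):
    # Per-category difference between the expected semantic-filter output
    # for an advertisement and the output observed after delivery.
    return {k: observed.get(k, 0.0) - v for k, v in expected.items()}

def within_tolerance(expected, observed, tolerance=0.1):
    # Success metric: every category lands within tolerance of the
    # expected output; otherwise the error is flagged for correction
    # in a future iteration.
    errors = filtering_error(expected, observed)
    return all(abs(e) <= tolerance for e in errors.values())
```

Categories whose error exceeds the tolerance identify where the next iteration of the campaign or the filter settings should be corrected.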
  • Advertising quality metrics system 422 generates one or more advertising quality metrics, such as number of successful deliveries per program view, success of a relationship between semantic filtering from social media and advertising targets, or other suitable quality metrics.
  • The semantic filtering performed on text data from social media content collection system 418 can include positive and negative indicators, and can be correlated with advertisement scoring data and advertising metrics data to provide an indication of advertising quality.
  • an advertisement can include an offer for a free sample of a product, where the user receives a message that can be printed and exchanged for the free sample.
  • Text from social media that is associated with the free sample and advertisement can be monitored and collected, and the semantic filter can score the words and phrases associated with the free sample based on positive or negative connotations.
  • the effectiveness of the advertising can be determined by monitoring social media to detect key words associated with an advertisement and to perform semantic filtering of the social media data to determine whether the response to the advertisement and the product that is the focus of the advertisement is positive, neutral or negative.
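Scoring social-media responses for positive or negative connotation, as described above, can be sketched with small connotation lexicons. The word lists here are hypothetical stand-ins; a deployed filter would use much richer lexicons tied to the specific advertisement.

```python
# Illustrative connotation lexicons (assumed, not from the patent).
POSITIVE = {"love", "great", "recommend", "delicious"}
NEGATIVE = {"hate", "awful", "waste"}

def connotation_score(text):
    # Positive minus negative word hits in a social-media post.
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def response(text):
    # Classify the post as positive, neutral, or negative toward the
    # advertised product, per the effectiveness check in the text.
    score = connotation_score(text)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Aggregating these classifications across posts that mention the advertisement's key words yields the near-real-time effectiveness signal the passage describes.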
  • system 400 provides a new paradigm for audiovisual content delivery and advertising.
  • system 400 can allow run-length audiovisual content to have advertising inserted in a manner that does not interrupt the run-length audiovisual content, and provides for immediate monitoring of indicators of consumer interest in advertised products.
  • system 400 can monitor social media for feedback on advertising trends, so as to give near real time feedback on the success or failure of a single advertisement to an entire advertising campaign. For example, a new soft drink producer can initiate an advertising campaign at a predetermined time by offering free samples of its new soft drink. Social media can then be monitored to determine if discussion of the offer is being circulated in certain social media outlets, such as recommendations for others to watch content or to obtain free samples of the product. In this manner, system 400 can provide real time or near real time metrics of run-length audiovisual content and advertising content that does not currently exist.
  • FIG. 5 is a diagram of a semantic filter 500 in accordance with an exemplary embodiment of the present invention.
  • Semantic filter 500 provides an exemplary semantic filter that can be used to process text to classify the text for ranking with associated content, such as to match run-length audiovisual content with interest groups and to match advertising content with run-length audiovisual content.
  • Semantic filter 500 includes behavior metrics and metric weightings.
  • a three by three metric matrix can be used, where three exemplary behavior metrics are shown.
  • a behavior metric ranking behavioral attributes as adult, teen and child is shown, but additional gradations can be provided. For example, preteen, senior, young adult, or other suitable gradations can be provided to improve the sensitivity of the first behavior metric along the metric weighting axis.
  • a second behavior metric having weightings of efficient, average and wasteful is provided.
  • additional metric weightings can be provided such as professional, competent, careless, obsolete, or other similar subjective measures, which can be associated with concepts, terms, or other information that can be used to filter text to identify whether the text associates to concepts along the weighting metric.
  • a third exemplary behavior metric is shown with weightings of energetic, active and inactive.
  • additional weighting metrics can be provided, such as vigorous, brisk, idle, sluggish or other suitable gradations along the metric weighting axis, to provide as much detail as desired in quantifying and filtering semantic content.
  • semantic filter 500 can be used to create a grading paradigm for textual content to classify the textual content automatically and based upon predetermined metrics. For example, content can be determined to fall within the metrics of adult, average and inactive based upon textual content or phrases within the content. For example, adult content may have phrases or other semantic measures associated with adults and disassociated from teens or children. Likewise, content can be associated with an average behavior metric, relative to an efficient behavior metric and a wasteful behavior metric, such as in regards to environmental efficiency or wastefulness. Likewise, content can be semantically filtered based on semantic tags associated with energetic activity, active activity, and inactive activity.
  • frequency of occurrence can be normalized (e.g., where the metrics and weights having the greatest number of occurrences of text from semantic filtering have a value of 1.0 and where the other metrics range from 1.0 to 0.0), frequency of occurrence can be absolute (e.g. where one metric has X occurrences, a second metric has Y occurrences, etc., where X and Y are whole numbers), frequency of occurrence can be tracked on a log scale, or other suitable manners for tracking frequency of occurrence can be used. For example, if a 30 minute program has over 500 adult content occurrences from semantic filtering, that show might be classified as adult content regardless of whether there are a greater absolute number of “teen” or “child” content occurrences.
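The three-by-three metric matrix and the normalization rule above can be sketched as follows. The metric names and occurrence counts are hypothetical, chosen to echo the adult/average/inactive example.

```python
# Hypothetical occurrence counts for a 3x3 behavior-metric matrix,
# e.g. from semantic filtering of a 30-minute program's text.
counts = {
    "audience":   {"adult": 500, "teen": 120, "child": 30},
    "efficiency": {"efficient": 10, "average": 60, "wasteful": 25},
    "activity":   {"energetic": 5, "active": 40, "inactive": 200},
}

def normalize(metric_counts):
    # The most frequent weighting maps to 1.0; the rest scale to [0, 1].
    top = max(metric_counts.values())
    return {k: v / top for k, v in metric_counts.items()}

def classify(all_counts):
    # Pick the dominant weighting along each behavior metric's axis.
    return {metric: max(c, key=c.get) for metric, c in all_counts.items()}
```

With these counts the program grades as adult, average, and inactive, matching the grading paradigm described for semantic filter 500.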
  • This process can also be performed in an iterative manner to “tune” the semantic filter, such as where the semantic filter settings are tested on content that has a predetermined desired semantic filtering profile (or a benchmark), and where the semantic filter settings are adjusted if groups of content do not match their benchmarks.
  • content that is intended to be benchmarked in different categories such as live sports and pre-recorded cooking shows, can be processed by the semantic filters to confirm that it is graded or ranked by the semantic filters into different and predetermined categories. If benchmarked content is graded into categories that are different from the expected categories, the semantic filter settings can be reviewed and modified to correct such misplacement.
  • a television program can be processed to extract the text from the television program, and upon processing can be classified based on that text.
  • the television program may be a serial drama dealing with adult content that is typically shown after 9:00 pm, where the programming is not geared towards environmental efficiency or wastefulness, and where the typical viewers of the content would be associated with people who are inactive as opposed to people who are physically energetic.
  • The semantic filtering processes for making these determinations are based on real-world sampling of individuals, and how those individuals would react to the programming content. However, once the associations are generated, other content can be processed to determine if the semantic filtering is effective. In this manner, a recursive process can be used where additional programming and content is added to the semantic filtering database to verify and improve the accuracy of the semantic filtering.
  • semantic filtering can be used on advertising content, and can be used based on the persons reviewing the content and the advertising content. For example, a reviewer may be asked to classify themselves with the behavior metrics and metric weighting, and others may be asked to classify the reviewer based on behavior metric and metric weighting.
  • a semantic filter map for a reviewer can be determined, and the reviewer's input on content, such as audiovisual run-length, audiovisual programming content or advertising content can also be determined.
  • a number of metrics can be automatically generated from text that can be used to compare run-length audiovisual content with other run-length audiovisual content (such as for channelization), to match run-length audiovisual content with advertising content (such as to automatically insert advertising into run-length audiovisual content), to detect content in social media (for example, to determine the effectiveness of advertising content), or for other suitable purposes.
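As an illustration of the kind of matching that such automatically generated metrics enable, the following sketch compares a run-length content metric vector with advertising content metric vectors by cosine similarity. This is not part of the disclosed system; the function names, metric categories and values are all hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length metric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def best_advertising_match(program_metrics, ad_metrics_by_id):
    """Return the id of the advertising content whose metric vector
    best matches the run-length content's metric vector."""
    return max(ad_metrics_by_id,
               key=lambda ad_id: cosine_similarity(program_metrics,
                                                   ad_metrics_by_id[ad_id]))

# Hypothetical metric vectors, e.g. per-category hit frequencies in the text.
program = [4, 0, 2]                                  # food, sports, travel
ads = {"grocery_ad": [5, 0, 1], "sneaker_ad": [0, 6, 0]}
print(best_advertising_match(program, ads))          # -> grocery_ad
```

The same similarity measure could serve any of the comparisons listed above: content-to-content for channelization, content-to-advertising for insertion, or advertising-to-social-media for effectiveness detection.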
  • FIG. 6 is a flowchart of a method 600 for improving the image quality of audiovisual data in accordance with an exemplary embodiment of the present invention.
  • Method 600 can be implemented as one or more algorithms running on a general purpose processor so as to create a special purpose machine or in other suitable manners.
  • Method 600 begins at 602 where a quantization noise metric is generated, such as using an algorithm operating on a processor that measures quantization noise and that outputs a relative number or other suitable data.
  • a quantization noise metric can determine the variations in macroblocks, blocks, individual pixels, or other suitable data that comprises frames of video. More particularly, the quantization noise metric can determine changes in subsequent adjacent frames of video data and can generate a figure of merit that is used to determine whether there is a significant amount of quantization noise, a low level of quantization noise, or other levels of quantization noise.
  • the method then proceeds to 604 .
  • the metric is compared to a jitter threshold, such as using a compare algorithm operating on a processor. In one exemplary embodiment, it may be desirable to avoid filtering for jitter if the quantization noise is below a predetermined threshold.
  • the method then proceeds to 606 where it is determined whether or not to filter the video data based on the jitter threshold comparison. If it is determined to filter the audiovisual data, the method proceeds to 608, such as by generating suitable control data from a control algorithm that transfers control to a suitable programming point, where a jitter filter is applied at a level equal to the difference between the quantization noise metric and the threshold setting. Otherwise, the method proceeds to 610.
  • the quantization noise metric is compared to an aliasing threshold, such as using a compare algorithm operating on a processor.
  • the aliasing threshold can be symmetric with the jitter threshold, such that aliasing filtering is only applied when jitter filtering is applied, and at the same threshold level.
  • the method then proceeds to 612 where it is determined whether to apply an aliasing filter, such as by using an algorithm that determines whether or not the threshold was met. If it is determined not to apply an aliasing filter at 612 the method proceeds to 616 , such as by using an algorithm that generates control data transferring control to a suitable programming point. Otherwise, the method proceeds to 614 where the aliasing filter is applied. The method then proceeds to 616 .
  • the filtering can be provided in a number of stages, such as where a first threshold is used for slower jitter or aliasing changes, and second, third or additional thresholds are used for faster metrics. In this manner, using a recursive process, the filtering can help to reduce the entropy or quantization noise and associated bandwidth of video programming. If it is determined at 616 that there are more stages, such as by using a detector algorithm for detecting control data, a loop counter or other suitable algorithms, the method returns to 604. Otherwise, the method proceeds to 618, where the processed video is output for delivery.
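A minimal sketch of the staged thresholding logic of method 600, assuming a scalar quantization noise metric and treating the filter level as the simple difference between metric and threshold. The function names and numeric values are illustrative assumptions, not part of the disclosure.

```python
def filter_level(quant_noise, threshold):
    """Filter at a level equal to the difference between the quantization
    noise metric and the threshold setting; no filtering below threshold."""
    return max(0.0, quant_noise - threshold)

def multistage_filter_plan(quant_noise, stage_thresholds):
    """Build per-stage jitter/alias filter levels. Thresholds are symmetric:
    the alias filter engages only when the jitter filter does, at the
    same threshold level."""
    plan = []
    for threshold in stage_thresholds:
        level = filter_level(quant_noise, threshold)
        plan.append({"threshold": threshold,
                     "jitter_level": level,
                     "alias_level": level})
    return plan

# A first threshold for slower changes, a second for faster metrics.
plan = multistage_filter_plan(quant_noise=0.8, stage_thresholds=[0.5, 0.9])
# Stage one filters at level 0.8 - 0.5 = 0.3; stage two does not engage.
```

The loop over `stage_thresholds` corresponds to the return from 616 to 604 when more stages remain.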
  • FIG. 7 is a flowchart of a method 700 for providing advertising data and run-length content for on-demand channels in accordance with an exemplary embodiment of the present invention.
  • Method 700 can be implemented as an algorithm running on a general purpose processor or in other suitable manners, such as in the manner previously discussed for the various types of processes of method 600.
  • Method 700 begins at 702 where a semantic filter is applied to run-length content.
  • the semantic filter can be used to classify run-length content based on one or more behavior metrics and one or more weighting metrics, where a frequency in each of the different behavior and weighting metric classes is measured based on the text of the run-length content. The method then proceeds to 704 .
  • the run-length content is channelized.
  • a number of different channels of channelized content can be created.
  • the channelized content thus tracks the semantic filter to provide content of interest to viewers having predetermined behavioral preferences that match the semantic filter settings. The method then proceeds to 706 .
  • semantic filtering is applied to advertising.
  • the semantic filter can be applied to advertising either after processing of run-length content, or in parallel to the processing of run-length content.
  • the application of the semantic filter to advertising can be performed as part of the advertising development, such that the advertising is modified to match predetermined semantic filter characteristics. The method then proceeds to 708.
  • the run-length content is preprocessed for advertising.
  • certain advertisers may pay to have their products advertised in run-length content regardless of the semantic filtering correlation.
  • the advertising can be pre-associated with run-length content. The method then proceeds to 710 .
  • advertising is inserted into additional advertising spaces based on space availability.
  • advertising can be inserted into certain marked areas, such as billboards or backdrops in the audiovisual content, or other areas where advertising is expected. The method then proceeds to 712.
  • the run-length content is output.
  • the channelized content can be provided to a plurality of server locations or head-ends throughout the country, where it can be provided in virtual real time, with the appearance of being channelized. Likewise, on demand programming can be accommodated. The method then proceeds to 714 .
  • advertising metrics are received.
  • the advertising metrics can include the number of times advertising information was requested, the number of times advertising information was viewed, additional information pertaining to the specific types of ads, or other suitable advertising metrics. The method then proceeds to 716 .
  • social media data is received based on the advertising in the run-length audiovisual content.
  • the provision of certain types of advertising can be used to select semantic filter data for filtering social media data, such as to determine whether the advertising has had an effect on the discussions in social media. The method then proceeds to 718 .
  • method 700 allows run-length audiovisual content, advertising content, social media content, and other suitable content to be filtered using a semantic filter to determine correlations between the content for various purposes.
  • the correlations between run-length audiovisual content from semantic filtering can be used to channelize the run-length audiovisual content.
  • Correlations between the run-length audiovisual content and advertising content from semantic filtering can be used to associate the advertising content with the run-length audiovisual content.
  • Correlations between the social media data content and the advertising content can be used to determine whether the advertising was effective, reached the intended audience, or had the intended effect on the audience. In this manner, the provision of run-length audiovisual data and advertising can be automated and optimized.
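The channelization step of method 700 can be illustrated with a toy sketch, assuming each item of run-length content has already been reduced by the semantic filter to a vector of metric frequencies. All names and profile values below are hypothetical.

```python
def channelize(content_profiles, channel_profiles):
    """Assign each item of run-length content to the channel whose semantic
    profile it matches most closely (smallest squared distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    channels = {name: [] for name in channel_profiles}
    for item, profile in content_profiles.items():
        best = min(channel_profiles,
                   key=lambda ch: sqdist(profile, channel_profiles[ch]))
        channels[best].append(item)
    return channels

channels = channelize(
    {"cooking_show": [5, 0], "match_highlights": [0, 7]},   # content metrics
    {"food_channel": [6, 0], "sports_channel": [0, 6]},     # channel targets
)
# -> {'food_channel': ['cooking_show'], 'sports_channel': ['match_highlights']}
```

The same profile-matching step could be reused at 706 and 708 to correlate advertising content with each assembled channel.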
  • FIG. 8 is a diagram of a screen display 800 in accordance with an exemplary embodiment of the present invention.
  • Screen display 800 is an exemplary portion of a run-length audiovisual content program showing an automobile, a user or person on the screen, and a gas pump.
  • the person can be getting out of their automobile to fuel the automobile, such that there is a brand opportunity indicated as brand 1 for the automobile and brand 2 for the fuel pump.
  • advertisers can be given the opportunity to associate a brand with either brand 1 or brand 2, such that an automobile manufacturer may wish to have their automobile superimposed on brand 1, a gasoline manufacturer may elect to have their gasoline superimposed on brand 2, or other suitable combinations can be used.
  • where brand 1 and brand 2 are not identified or tagged in the run-length audiovisual content, such branding can be identified using advertising preprocessing system 412, such as by using the Princeton Video Image processes described at http://www.pvi.tv/pvi/index.asp or other suitable processes.
  • brand icon 1 and brand icon 2 are user selectable icons that allow a user to request additional information on the associated brand shown on the screen. In this manner, the user viewing the automobile identified as brand 1 can select the brand icon 1 and request additional information. As a result, the effectiveness of advertising can be determined in real time, and without interruption of programming. Advertising delivery is performed by inserting the advertising into the programming so as to create advertising that is associated with the programming. In this manner, recording a video or otherwise removing the programming from real time delivery, such as by pirating the content, does not result in the loss of advertising content.
  • FIG. 9 is a flowchart of a method 900 for monitoring audiovisual content delivery to determine whether brand information has been requested in accordance with an exemplary embodiment of the present invention.
  • Method 900 can be implemented as software operating on a general purpose processor so as to create a special purpose machine.
  • Method 900 begins at 902 where a brand type identifier is received.
  • the brand type identifier can be associated with a type of product, type of service, or other suitable information. The method then proceeds to 904 .
  • the association can be added in advertising preprocessing system 412 or in other manners, so that predetermined advertising content is refused. If an associated brand exists, the method proceeds to 906 where the associated brand is used with the brand type identifier. Otherwise, the method proceeds to 908, where a brand is selected using semantic filtering metrics, such as based on the product type, consumer class and other suitable information. The method then proceeds to 910.
  • brand icons are generated.
  • brand icons can be generated based on new occurrences of brand icons in a frame of video, such that a continuing occurrence of the brand icon within successive frames of video does not generate a new brand icon occurrence.
  • brand icons can be generated in order, such that the most recent brand icons will replace the least recent brand icon. The method then proceeds to 914 .
  • if it is determined that brand information has not been requested, the method returns to 902. Otherwise, the method proceeds to 916 where a notification is generated.
  • the notification can be a message in a user's account, an email message, or other suitable notifications. The method then returns to 902.
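The brand icon occurrence logic described above, where a continuing occurrence across successive frames does not generate a new icon and the most recent icon replaces the least recent when space runs out, could plausibly be implemented as follows. This is a sketch; the data structures and the icon limit are assumptions, not from the disclosure.

```python
from collections import deque

def update_brand_icons(active_icons, frame_brands, max_icons=3):
    """Update the on-screen brand icon list for one frame of video.

    A brand generates a new icon only when it was absent from the previous
    frame; continuing occurrences keep their existing icon. When the icon
    area is full, the most recent icon replaces the least recent one."""
    # Drop icons whose brand has left the frame.
    for brand in list(active_icons):
        if brand not in frame_brands:
            active_icons.remove(brand)
    # Add icons for newly appearing brands, evicting the least recent if full.
    for brand in frame_brands:
        if brand not in active_icons:
            if len(active_icons) == max_icons:
                active_icons.popleft()
            active_icons.append(brand)
    return active_icons

icons = deque()                                   # oldest icon first
update_brand_icons(icons, {"brand1", "brand2"})   # both brands appear
update_brand_icons(icons, {"brand1", "brand2"})   # continuing: no new icons
print(sorted(icons))                              # -> ['brand1', 'brand2']
```

Calling the function once per decoded frame gives the per-frame occurrence counting described at 912, with user selections of an icon then driving the notification path at 914-916.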
  • FIG. 10 is a diagram of a system 1000 and method for filtering video signals in accordance with an exemplary embodiment of the present invention.
  • System 1000 can be implemented in hardware or a suitable combination of software and hardware, such as one or more known software tools, such as in the NVIDIA CUDA software package or other suitable software packages for performing the associated function operating on a general purpose processing platform.
  • System 1000 can also be implemented as a method performed by executing algorithms associated with the software tools or other well known algorithms associated with the disclosed functional blocks on a general purpose processing platform.
  • System 1000 receives a horizontal offset signal and a vertical offset signal, such as from system 1100 of FIG. 11 or other suitable horizontal offset signals and vertical offset signals.
  • the derivatives of the offset signals are then determined, the fast RMS values of the derivatives are determined, and a Z−n transform is performed on the derivatives to generate a limit.
  • a threshold is applied to the RMS values, and the outputs are multiplied by respective ratios.
  • the ratio-multiplied outputs are further multiplied by the respective Z−n transform outputs, and the signal is subtracted from a summer for derivative removal.
  • the horizontal offset and vertical offset are time corrected and processed by a Z−n transform, and are added to the corresponding summers.
  • the horizontal and vertical derivative values are then squared and added, and the square root of the sum is subtracted as velocity correction from an anti-alias frequency setting.
  • the output frequency is then applied to an anti-aliasing filter to generate a corrected horizontal offset and a corrected vertical offset, to improve image quality at low data rates for video data.
  • System 1000 provides phase correlation for video data that is resilient to noise and other defects typical of audiovisual content or advertising content. During repetitive images, phase correlation can generate results with several peaks in the resulting output. If the shift of the centroid exceeds a predetermined spatial threshold, such as by exceeding a predetermined velocity noise limit, the spatial jitter correction ratio should approach 1:1.
  • Each displacement axis is input independently into system 1000, where the result is directed into two paths. The first path compensates for the derivative phase and prediction delay, and the second path determines the derivative of the motion offset axis, confined within a predictive limit function. The path results are then subtracted, resulting in a reduction of the entropy of that axis.
  • the resultant is then integrated to reduce aliasing.
  • the anti-aliasing filter spectrum is predetermined to effect best results based on how aggressive the compression is anticipated to be.
  • the aliasing filter is further modified to compensate for increases in vector velocity resulting from spatial (multi-axis) change.
  • the spatial offset resultants are then applied to the next frame.
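The velocity correction described above, where the squared horizontal and vertical derivative values are summed and the square root of the sum is subtracted from the anti-alias frequency setting, can be sketched as follows. The zero floor is an added practical guard, not stated in the disclosure.

```python
import math

def velocity_corrected_cutoff(antialias_setting, d_horizontal, d_vertical):
    """Square and add the horizontal and vertical derivative values, then
    subtract the square root of the sum (the vector velocity) from the
    anti-alias frequency setting."""
    velocity = math.sqrt(d_horizontal ** 2 + d_vertical ** 2)
    return max(0.0, antialias_setting - velocity)

print(velocity_corrected_cutoff(10.0, 3.0, 4.0))  # -> 5.0 (velocity is 5.0)
```

The resulting frequency would then drive the anti-aliasing filter that produces the corrected horizontal and vertical offsets.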
  • FIG. 11 is a diagram of a system 1100 and method for generating offset data in accordance with an exemplary embodiment of the present invention.
  • System 1100 can be implemented in hardware or a suitable combination of software and hardware, such as one or more known software tools, such as in the NVIDIA CUDA software package or other suitable software packages for performing the associated function operating on a general purpose processing platform.
  • System 1100 can also be implemented as a method performed by executing algorithms associated with the software tools or other well known algorithms associated with the disclosed functional blocks on a general purpose processing platform.
  • Motion estimation can be accomplished by processing serial input image data sets g_a(x, y) and g_b(x, y) with window systems 1102 and 1104, respectively, which apply a suitable two-dimensional window function (such as a Hamming window) to the sets of image data to reduce edge effects. Then, a discrete two-dimensional Fourier transform of both sets of image data is generated using Fourier transform systems 1106 and 1108: G_a(u, v) = F{g_a(x, y)} and G_b(u, v) = F{g_b(x, y)}.
  • the cross-power spectrum is then calculated by normalization system 1110, such as by taking the complex conjugate of the Fourier transform of the second set of image data, multiplying the complex conjugate with the Fourier transform of the first set of image data element-wise, and normalizing the product element-wise: R(u, v) = G_a(u, v) G_b*(u, v) / |G_a(u, v) G_b*(u, v)|.
  • the normalized cross-correlation r(x, y) can be obtained by applying the inverse Fourier transform, r = F−1{R}, or in other suitable manners.
  • the location of the peak in r is generated using peak detector 1112 , such as by using quarter pixel edge detection or other suitable processes.
  • a single peak value can be generated that represents the frame centroid shift expressed as a Cartesian (horizontal and vertical) offset, which can then be input to system 1000 .
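The window, Fourier transform, normalization and peak detection stages of system 1100 correspond to standard phase correlation, which can be sketched as follows. This is an illustrative NumPy implementation with whole-pixel peak detection rather than the quarter-pixel edge detection referenced above, and the small epsilon in the normalization is an added numerical guard.

```python
import numpy as np

def phase_correlation_offset(ga, gb, window=True):
    """Estimate the (dy, dx) shift such that ga is approximately
    np.roll(gb, (dy, dx), axis=(0, 1)), by phase correlation."""
    if window:
        # Two-dimensional Hamming window to reduce edge effects.
        win = np.outer(np.hamming(ga.shape[0]), np.hamming(ga.shape[1]))
        ga, gb = ga * win, gb * win
    Ga = np.fft.fft2(ga)
    Gb = np.fft.fft2(gb)
    # Element-wise product with the conjugate, normalized element-wise.
    cross = Ga * np.conj(Gb)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    # The peak location gives the frame centroid shift (whole pixels here).
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    dy = dy - ga.shape[0] if dy > ga.shape[0] // 2 else dy
    dx = dx - ga.shape[1] if dx > ga.shape[1] // 2 else dx
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ga = rng.standard_normal((64, 64))
gb = np.roll(ga, (2, 3), axis=(0, 1))                   # gb is ga shifted
print(phase_correlation_offset(ga, gb, window=False))   # -> (-2, -3)
```

The recovered Cartesian offset is what would be fed into system 1000 as the horizontal and vertical offset signals.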

Abstract

A system for providing audiovisual data. A run-length audiovisual data system for providing run-length audiovisual content. An advertising content system for providing advertising content. A semantic filter system for processing the run-length audiovisual content and the advertising content and matching the run-length audiovisual content to the advertising content based on semantic filter output matching.

Description

    RELATED APPLICATION
  • This application claims priority to U.S. provisional application 61/292,703, filed Jan. 6, 2010, entitled “Audiovisual Content Delivery System,” which is hereby incorporated by reference for all purposes.
  • FIELD OF THE INVENTION
  • The invention relates to delivery of audiovisual content over a packet-switched network, and more particularly to a system and process that channelizes audiovisual content for delivery based on semantic filtering.
  • BACKGROUND OF THE INVENTION
  • Delivery of audiovisual content over a packet-switched network, such as an Internet protocol network, is known in the art. Many obstacles persist in the commercialization of services for provision of audiovisual content in this manner. For example, the amount of content that is available to be downloaded on demand makes the selection of content very difficult and obscures content that may be of interest to a viewer.
  • SUMMARY OF THE INVENTION
  • A system and method for delivering video content is provided that channelizes on demand content to simulate broadcast programming. The system and method use semantic filtering to combine content into channels.
  • A system for providing audiovisual data. A run-length audiovisual data system for providing run-length audiovisual content. An advertising content system for providing advertising content. A semantic filter system for processing the run-length audiovisual content and the advertising content and matching the run-length audiovisual content to the advertising content based on semantic filter output matching.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • FIG. 1 is a diagram of a system for providing video compression in accordance with an exemplary embodiment of the present invention;
  • FIG. 2 is a diagram of a system for processing video data in accordance with an exemplary embodiment of the present invention;
  • FIG. 3 is a diagram of a system for filtering video signals in accordance with an exemplary embodiment of the present invention;
  • FIG. 4 is a diagram of a system for channelizing run-length audiovisual content, matching the channelized run-length audiovisual content with advertising, and for determining the effectiveness of the matched advertising and run-length audiovisual content in accordance with an exemplary embodiment of the present invention;
  • FIG. 5 is a diagram of a semantic filter in accordance with an exemplary embodiment of the present invention;
  • FIG. 6 is a flowchart of a method for improving the image quality of audiovisual data in accordance with an exemplary embodiment of the present invention;
  • FIG. 7 is a flowchart of a method for providing advertising data and run-length content for on-demand channels in accordance with an exemplary embodiment of the present invention;
  • FIG. 8 is a diagram of a screen display in accordance with an exemplary embodiment of the present invention;
  • FIG. 9 is a flowchart of a method for monitoring video content delivery to determine whether brand information has been requested in accordance with an exemplary embodiment of the present invention;
  • FIG. 10 is a diagram of a system and method for filtering video signals in accordance with an exemplary embodiment of the present invention; and
  • FIG. 11 is a diagram of a system and method for generating offset data in accordance with an exemplary embodiment of the present invention.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • In the description that follows, like parts are marked throughout the specification and drawings with the same reference numerals, respectively. The drawing figures might not be to scale and certain components can be shown in generalized or schematic form and identified by commercial designations in the interest of clarity and conciseness.
  • FIG. 1 is a diagram of a system 100 for providing video compression in accordance with an exemplary embodiment of the present invention. System 100 allows variable rate video data to be compressed without noticeable loss of quality.
  • System 100 includes variable rate video encoder 102, which can be implemented in hardware or a suitable combination of hardware and software, and which can be one or more software systems operating on a general purpose processing platform. In one exemplary embodiment, variable rate video encoder 102 can be an MPEG4 part 10-compliant variable rate video encoder operating on a general purpose processing platform, an application specific integrated circuit, or other suitable platforms. Variable rate video encoder 102 receives video data and generates a variable rate video output. In one exemplary embodiment, the variable rate video output can be at a variable bandwidth that is higher than a target bandwidth. Thus, further processing of the output of variable rate video encoder 102 can be required in order to provide a bandwidth at a desired level. Likewise, where the bandwidth of the video is low, no additional processing is required. An indicator of whether additional processing is required can be obtained based on the entropy of the video signal being processed, where a high level of entropy generates a high level of quantization noise.
  • Variable rate video encoder 102 generates a quantization noise output, such as an indication of the amount of variation in macroblocks, blocks, pixels, or other video data. In one exemplary embodiment, the quantization noise output can be a variable bit rate mode output of the MPEG 4 part 10 encoder, which can also be characterized as an indication of the entropy of the video that is being encoded.
  • System 100 includes variable jitter filter 104, which can be implemented in hardware or a suitable combination of hardware and software, and which can be one or more software systems operating on a general purpose processing platform. Variable jitter filter 104 provides a controllable amount of jitter reduction based on an input. Threshold input and quantization noise output are used to determine whether a threshold has been exceeded for activation of variable jitter filter 104. In one exemplary embodiment, when the quantization noise output is below a threshold input, variable jitter filter 104 will not be activated. Likewise, when the quantization noise output exceeds the threshold input, the variable jitter filter 104 will reduce jitter by a predetermined amount related to the difference between the quantization noise output and the threshold input.
  • Variable aliasing filter 106 receives the filtered output from variable jitter filter 104 and performs anti-aliasing filtering based upon an input quantization noise level and a threshold input. In one exemplary embodiment, variable aliasing filter 106 can receive the quantization noise output and the threshold input and can deactivate aliasing filtering if the quantization noise output is below the threshold level, otherwise, variable aliasing filter 106 performs aliasing filtering based on the difference between the threshold input level and the quantization noise output level.
  • In operation, system 100 can be used to reduce the bandwidth of a variable rate video signal without creating video artifacts, such as blurriness or lack of picture quality. System 100 activates a variable jitter filter 104 and variable aliasing filter 106 when quantization noise levels exceed threshold inputs to the filters. In one exemplary embodiment, the threshold inputs to both filters can be matched, such that they are symmetric. By making the threshold level symmetric, the video artifacts generated by processing the video signal through variable jitter filter 104 and variable aliasing filter 106 can be minimized.
  • FIG. 2 is a diagram of a system 200 for processing video data in accordance with an exemplary embodiment of the present invention. System 200 includes variable rate video encoder 202, which can be implemented in hardware or a suitable combination of hardware and software, and which can be one or more software systems operating on a general processing platform, such as an MPEG 4 part 10 variable rate video encoder. Quantization noise output from variable rate video encoder 202, such as the variable bit rate mode output of the MPEG 4 part 10 encoder is provided as an input to variable filter stage 1 204 and variable filter stage 2 206. Likewise, variable filter stage 1 204 receives a threshold 1 input, and variable filter stage 2 206 receives a threshold 2 input. In one exemplary embodiment, variable filter stage 1 204 and variable filter stage 2 206 can include a variable jitter filter 104 and a variable aliasing filter 106, or other suitable filters. Likewise, the thresholds provided to the filters that comprise variable filter stage 1 204 and variable filter stage 2 206 can be symmetric threshold levels.
  • In operation, system 200 provides multiple stage filtering of a video signal to further reduce the bandwidth of the video signal and improve signal quality. In this exemplary embodiment, threshold 1 input can have a first size or speed, such as a slow speed, and threshold 2 input can have a second speed, such as a higher speed, so as to stage the filtering of the variable rate video data signal and reduce the signal bandwidth without affecting signal quality.
  • FIG. 3 is a diagram of a system 300 for filtering video signals in accordance with an exemplary embodiment of the present invention. System 300 can be implemented in hardware or a suitable combination of hardware and software, and can be one or more software systems operating on a general purpose processing platform.
  • System 300 receives a video input signal at fast RMS processor 302 and slow RMS processor 304. RMS processors 302 and 304 satisfy the equation:
  • f_RMS = lim_{T→∞} √( (1/(2T)) ∫_{−T}^{+T} [f(t)]² dt ),
  • with fast RMS processor 302 having a shorter integration time than slow RMS processor 304.
  • The output from fast RMS processor 302 is processed by log processor 306, and the output from slow RMS processor 304 is processed by log processor 308. The outputs from log processors 306 and 308 are provided to summer 310, which subtracts the output of log processor 308 from the output of log processor 306, and provides the output to threshold 312. Threshold 312 can receive a user programmable threshold level and outputs a signal if the input exceeds the threshold. The output is then provided to ratio multiplier 314, which receives a predetermined ratio and multiplies the output by the predetermined ratio. The output from ratio multiplier 314 is fed into log−1 processor 316, which is provided as an input to jitter filter 320.
  • The video input is also provided to Z−n processor 318, which performs a Z−n transform on input video data. The output of Z−n processor 318 is provided to jitter filter 320, which performs jitter filtering based on the setting received from log−1 processor 316.
  • The output from jitter filter 320 is provided to fast RMS processor 322 and slow RMS processor 324, which perform processing of the video signal as previously discussed. The output from fast RMS processor 322 is provided to log processor 326 and the output from slow RMS processor 324 is provided to log processor 328.
  • The outputs from log processors 326 and 328 are provided to summer 330, which subtracts the output of log processor 328 from the output of log processor 326 and provides the difference to threshold 332, which passes the unmodified signal if the input is below a predetermined threshold and which passes a threshold modified signal if the input is above the predetermined threshold. Multiplier 334 multiplies the output from threshold 332 by a ratio, which is provided to log−1 processor 336. The output of log−1 processor 336 is provided to alias filter 340.
  • The output of jitter filter 320 is provided to Z−n processor 338, and is then filtered by alias filter 340 in response to the output of log−1 processor 336. The video output signal that is generated has a lower bandwidth at higher quality than video processed using other processes. Likewise, the threshold levels set by thresholds 312 and 332 can be symmetric thresholds.
  • In operation, system 300 performs compression of a variable rate video signal to provide improved video quality at lower bandwidth.
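A discrete-time sketch of the fast/slow RMS sidechain of system 300, using exponential smoothing of the squared signal as the RMS integrator and a log-domain difference, threshold and ratio to derive a jitter filter control value. All parameter values are illustrative assumptions, and the final antilog stage (log−1 processor 316) is omitted for brevity.

```python
import math

def running_rms(samples, alpha):
    """Discrete running RMS: exponential smoothing of the squared signal.
    A larger alpha means a shorter integration time (a 'faster' RMS)."""
    mean_square, out = 0.0, []
    for s in samples:
        mean_square = (1.0 - alpha) * mean_square + alpha * s * s
        out.append(math.sqrt(mean_square))
    return out

def jitter_drive(samples, fast_alpha=0.5, slow_alpha=0.05,
                 threshold_db=3.0, ratio=0.5):
    """Sidechain sketch: the log of the fast RMS minus the log of the slow
    RMS (compare summer 310) is thresholded (312) and scaled by a ratio
    (314), yielding a control value for the jitter filter."""
    fast = running_rms(samples, fast_alpha)
    slow = running_rms(samples, slow_alpha)
    drive = []
    for f, s in zip(fast, slow):
        if f <= 0.0 or s <= 0.0:
            drive.append(0.0)
            continue
        diff_db = 20.0 * math.log10(f / s)       # log-domain difference
        drive.append(ratio * max(0.0, diff_db - threshold_db))
    return drive

# A sudden level change drives the filter; steady passages do not.
drive = jitter_drive([0.01] * 20 + [1.0] * 20)
```

Because the fast RMS reacts to transients before the slow RMS does, the drive value rises only when the signal changes quickly, which matches the intent of gating the jitter filter on quantization noise activity.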
  • FIG. 4 is a diagram of a system 400 for channelizing run-length audiovisual content, matching the channelized run-length audiovisual content with advertising, and for determining the effectiveness of the matched advertising and run-length audiovisual content in accordance with an exemplary embodiment of the present invention. As used herein, “run-length audiovisual content” refers to programmatic audiovisual content having a duration, theme, characters, plot, focus (such as scientific programs or sporting events), or other characteristics that distinguish audiovisual content that is viewed for entertainment or information purposes from advertising. “Advertising” can include programs such as “infomercials,” but more typically includes content that can be included within the “run-length audiovisual content,” such as icons that can be selected by a user, headers or footers, watermarks or other content that is not the selecting factor in a user's decision to view the “run-length audiovisual content,” but rather which is usually included with the “run-length audiovisual content” for a fee, for a public service, or for reasons other than the purpose for the “run-length audiovisual content.”
  • System 400 can be implemented in hardware or a suitable combination of hardware and software, and can be one or more software systems operating on a general purpose processing platform. As used herein, “hardware” can include a combination of discrete components, an integrated circuit, an application-specific integrated circuit, a field programmable gate array, a digital signal processor, or other suitable hardware. As used herein, “software” can include one or more objects, agents, threads, lines of code, subroutines, separate software applications, two or more lines of code or other suitable software structures operating in two or more software applications or on two or more processors, or other suitable software structures. In one exemplary embodiment, software can include one or more lines of code or other suitable software structures operating in a general purpose software application, such as an operating system, and one or more lines of code or other suitable software structures operating in a specific purpose software application.
  • Run-length audiovisual content system 402 provides run-length audiovisual content programs for analysis. In one exemplary embodiment, run-length audiovisual content system 402 can include a plurality of sources of run-length audiovisual content that are processed to extract text for semantic filtering.
  • Semantic filter 404 filters content, such as by analyzing the relationship of words, phrases and terms. In one exemplary embodiment, semantic filter 404 is used to distinguish first content from second content, where the first and second content contain the exact same words, but where the different ordering of the words gives the two contents substantially different meanings. Semantic filter 404 uses a database of semantic metrics to score content, where the database is configured based on a flexible set of user-defined rules. For example, a semantic filter can be used to identify content relating to certain topics, such as “food,” by identifying text strings that are associated with food topics, such as “cooking,” “dining,” and “groceries.” These text strings can be characterized as a “lexicon manifold,” where the words, terms or phrases that are associated with a particular category of the semantic filter form a lexicon, and the set of categories and associated lexicons form a manifold that can be used to perform semantic filtering. Likewise, text strings that may appear to be associated with food but which represent other topics, such as “brain food” in the current example, can be identified as excluded strings that should be ignored. Food is only provided as an exemplary topic, and any other suitable topic can also or alternatively be processed or identified using semantic filter 404. Typically, semantic filter 404 will be used to generate metrics for content that can be used to group the content with related content, to match the content with advertising that is related to the content or of potential interest to content viewers, or for other suitable purposes.
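The lexicon-manifold scoring described above can be sketched as follows, assuming a simple keyword-counting interpretation of the semantic filter; the category, lexicon words, and excluded string come from the "food" example in the text, while the function and variable names are illustrative.

```python
# Minimal sketch of lexicon-manifold scoring; the "food" category,
# its lexicon, and the excluded string follow the example in the text.
import re

LEXICON_MANIFOLD = {
    "food": {
        "lexicon": {"cooking", "dining", "groceries", "food"},
        # strings that appear food-related but represent other topics
        "excluded": {"brain food"},
    },
}

def score_text(text, manifold=LEXICON_MANIFOLD):
    """Count lexicon hits per category, ignoring excluded strings."""
    lowered = text.lower()
    scores = {}
    for category, entry in manifold.items():
        working = lowered
        for phrase in entry["excluded"]:
            working = working.replace(phrase, "")  # drop excluded strings
        words = re.findall(r"[a-z']+", working)
        scores[category] = sum(1 for word in words if word in entry["lexicon"])
    return scores
```

Scoring "We enjoy cooking and dining, but brain food is different" against this manifold counts "cooking" and "dining" but ignores the "food" inside the excluded phrase "brain food."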
  • Content channelization system 406 receives filtered run-length audiovisual content from semantic filter 404 and assembles the run-length audiovisual content into channels. In one exemplary embodiment, content channelization system 406 can use semantic filter settings or output and can select run-length audiovisual content based on the semantic filter settings or output. Likewise, content channelization system 406 can have predetermined run-length audiovisual content, which can be processed using a semantic filter 404 to ensure that it complies with the parameters for a predetermined channel in content channelization system 406.
  • Advertising content system 408 provides advertising content. In one exemplary embodiment, advertising content can be provided by advertisers, such as standard advertising content developed by the advertisers and produced for other media. In another exemplary embodiment, advertising content system 408 can be used in conjunction with a semantic filter to develop advertising scripts that match semantic filter settings or output for channels of audiovisual content.
  • Semantic filter 410 processes advertising content from advertising content system 408 to generate a plurality of semantic filter settings or outputs. In one exemplary embodiment, the semantic filter settings or outputs can be selected to match advertising content from advertising content system 408 with run-length audiovisual content from run-length audiovisual content system 402.
  • Advertising preprocessing system 412 allows a user to modify run-length audiovisual content to add advertising content. In one exemplary embodiment, advertising preprocessing system 412 allows a user to insert markers into audiovisual content so as to cause the advertising from advertising content system 408 to be automatically linked to the audiovisual content, such as by including tags in the audiovisual content. In another exemplary embodiment, systems or processes such as those provided or used by Princeton Video Image, http://www.pvi.tv/pvi/index.asp, can be used to process run-length audiovisual content to insert tags into the run-length audiovisual content to allow advertising to be merged into the run-length audiovisual content. For example, a billboard in run-length audiovisual content such as a sporting event can be processed to remove the advertising on the billboard and to replace it with a tag, to allow advertising to be dynamically inserted into the location of the billboard in the audiovisual content. Likewise, advertising preprocessing can be used to associate advertising content from advertising content system 408 with run-length audiovisual content from run-length audiovisual content system 402, such as by placing tags on cans, bottles, boxes, or other props.
  • Advertising insertion system 414 receives advertising content from semantic filter 410 and inserts the advertising where available and where suitable. In one exemplary embodiment, advertising insertion system 414 can insert advertising on a space available basis, such as backdrops on sports fields, billboards and programming, or any other suitable locations. Advertising insertion system 414 outputs the channelized content.
  • Advertising delivery metrics system 416 generates one or more advertising delivery metrics, such as a number of requests for additional information, a number of follow-ups on requests received from users/viewers, or other suitable metrics. In one exemplary embodiment, during presentation of run-length audiovisual content, advertising triggers inserted in the audiovisual content can be selected by viewers. Advertising delivery metrics system 416 can count the number of times that viewers requested additional information or otherwise indicated an interest in an advertised product or service, can determine the number of times that viewers followed up on such requests, can determine whether or not users purchased the advertised products or services, can determine whether free samples of the advertised products were provided, or can otherwise provide advertising delivery metrics.
  • Social media content collection system 418 monitors social media that is responsive to advertising for associated content. In one exemplary embodiment, certain types of social media such as Twitter, Facebook, blogs or other social media, can be identified that track certain types of advertising. For example, some types of social media attract feedback on predetermined audiovisual content programs, such as blogs or discussion groups that are identified by the name of the program, or that may be the subject of certain interest groups. Social media content collection system 418 collects text from social media associated with such predetermined social media groups.
  • Semantic filter 420 filters the text data from social media content collection system 418 based on advertising inserted into the run-length audiovisual content by advertising preprocessing system 412 or advertising insertion system 414. In one exemplary embodiment, the advertising provided to run-length audiovisual content can be used to set settings of semantic filter 420, such as to look for predetermined phrases that are related to the advertising or for other suitable purposes. The output of semantic filter 420 can thus be used to determine an indication of the success or failure of the advertising. In one exemplary embodiment, advertising success metrics can be determined based on an expected semantic filtering output. Where the semantic filtering output differs from the expected output, the output can be determined to have an error that may be corrected in future iterations.
  • Advertising quality metrics system 422 generates one or more advertising quality metrics, such as number of successful deliveries per program view, success of a relationship between semantic filtering from social media and advertising targets, or other suitable quality metrics. In one exemplary embodiment, the semantic filtering performed on text data from social media content collection 418 can include positive and negative indicators, and can be correlated to advertisement scoring data and advertising metrics data to provide an indication of advertising quality. For example, an advertisement can include an offer for a free sample of a product, where the user receives a message that can be printed and exchanged for the free sample. Text from social media that is associated with the free sample and advertisement can be monitored and collected, and the semantic filter can score the words and phrases associated with the free sample based on positive or negative connotations. In this manner, the effectiveness of the advertising can be determined by monitoring social media to detect key words associated with an advertisement and to perform semantic filtering of the social media data to determine whether the response to the advertisement and the product that is the focus of the advertisement is positive, neutral or negative.
  • In operation, system 400 provides a new paradigm for audiovisual content delivery and advertising. In one exemplary embodiment, system 400 can allow run-length audiovisual content to have advertising inserted in a manner that does not interrupt the run-length audiovisual content, and provides for immediate monitoring of indicators of consumer interest in advertised products. Likewise, system 400 can monitor social media for feedback on advertising trends, so as to give near real time feedback on the success or failure of anything from a single advertisement to an entire advertising campaign. For example, a new soft drink producer can initiate an advertising campaign at a predetermined time by offering free samples of its new soft drink. Social media can then be monitored to determine if discussion of the offer is being circulated in certain social media outlets, such as recommendations for others to watch content or to obtain free samples of the product. In this manner, system 400 can provide real time or near real time metrics of run-length audiovisual content and advertising content that do not currently exist.
  • FIG. 5 is a diagram of a semantic filter 500 in accordance with an exemplary embodiment of the present invention. Semantic filter 500 provides an exemplary semantic filter that can be used to process text to classify the text for ranking with associated content, such as to match run-length audiovisual content with interest groups and to match advertising content with run-length audiovisual content.
  • Semantic filter 500 includes behavior metrics and metric weightings. In one exemplary embodiment, a three by three metric matrix can be used, where three exemplary behavior metrics are shown. In this exemplary embodiment, a behavior metric ranking behavioral attributes as adult, teen and child is shown, but additional gradations can be provided. For example, preteen, senior, young adult, or other suitable gradations can be provided to improve the sensitivity of the first behavior metric along the metric weighting axis.
  • Likewise, a second behavior metric having weightings of efficient, average and wasteful is provided. As previously discussed, additional metric weightings can be provided such as professional, competent, careless, extravagant, or other similar subjective measures, which can be associated with concepts, terms, or other information that can be used to filter text to identify whether the text associates to concepts along the weighting metric.
  • A third exemplary behavior metric is shown with weightings of energetic, active and inactive. Likewise, as previously discussed, additional weighting metrics can be provided, such as vigorous, brisk, idle, sluggish or other suitable gradations along the metric weighting axis, to provide as much detail as desired in quantifying and filtering semantic content.
  • In operation, semantic filter 500 can be used to create a grading paradigm for textual content to classify the textual content automatically and based upon predetermined metrics. For example, content can be determined to fall within the metrics of adult, average and inactive based upon textual content or phrases within the content. For example, adult content may have phrases or other semantic measures associated with adults and disassociated from teens or children. Likewise, content can be associated with an average behavior metric, relative to an efficient behavior metric and a wasteful behavior metric, such as in regards to environmental efficiency or wastefulness. Likewise, content can be semantically filtered based on semantic tags associated with energetic activity, active activity, and inactive activity. The different metrics and their associated weightings are used to classify the text for semantic filtering, and the frequency of occurrence is tracked as a third dimension. For example, frequency of occurrence can be normalized (e.g., where the metrics and weights having the greatest number of occurrences of text from semantic filtering have a value of 1.0 and where the other metrics range from 1.0 to 0.0), frequency of occurrence can be absolute (e.g., where one metric has X occurrences, a second metric has Y occurrences, etc., where X and Y are whole numbers), frequency of occurrence can be tracked on a log scale, or other suitable manners for tracking frequency of occurrence can be used. For example, if a 30 minute program has over 500 adult content occurrences from semantic filtering, that show might be classified as adult content regardless of whether there are a greater absolute number of “teen” or “child” content occurrences.
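An illustrative sketch of the metric matrix with normalized frequency of occurrence follows; the weighting labels come from the text, but the metric axis names ("maturity," "efficiency," "activity") and all occurrence counts are assumptions invented for the example.

```python
# Illustrative metric matrix: (behavior metric, weighting) -> occurrence
# count from semantic filtering; counts are invented for the example.
counts = {
    ("maturity", "adult"): 520, ("maturity", "teen"): 120,
    ("maturity", "child"): 40,
    ("efficiency", "efficient"): 30, ("efficiency", "average"): 300,
    ("efficiency", "wasteful"): 90,
    ("activity", "energetic"): 15, ("activity", "active"): 60,
    ("activity", "inactive"): 210,
}

def normalize(counts):
    """Scale occurrence counts so the most frequent cell is 1.0."""
    peak = max(counts.values())
    return {cell: n / peak for cell, n in counts.items()}

def classify(counts):
    """Pick the dominant weighting along each behavior metric axis."""
    best = {}
    for (metric, weighting), n in counts.items():
        if metric not in best or n > counts[(metric, best[metric])]:
            best[metric] = weighting
    return best
```

With these counts the content classifies as adult, average and inactive, matching the example in the text.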
  • This process can also be performed in an iterative manner to “tune” the semantic filter, such as where the semantic filter settings are tested on content that has a predetermined desired semantic filtering profile (or a benchmark), and where the semantic filter settings are adjusted if groups of content do not match their benchmarks. For example, content that is intended to be benchmarked in different categories, such as live sports and pre-recorded cooking shows, can be processed by the semantic filters to confirm that it is graded or ranked by the semantic filters into different and predetermined categories. If benchmarked content is graded into categories that are different from the expected categories, the semantic filter settings can be reviewed and modified to correct such misplacement.
  • In another exemplary embodiment, a television program can be processed to extract the text from the television program, and upon processing can be classified based on that text. In an example where the behavior metrics having the highest semantic filter incidence are adult, average, and inactive, the television program may be a serial drama dealing with adult content that is typically shown after 9:00 pm, where the programming is not geared towards environmental efficiency or wastefulness, and where the typical viewers of the content would be associated with people who are inactive as opposed to people who are physically energetic. The semantic filtering processes for making these determinations are based on real world sampling of individuals, and how those individuals would react to the programming content. However, once the associations are generated, other content can be processed to determine if the semantic filtering is effective. In this manner, a recursive process can be used where additional programming and content are added to the semantic filtering database to verify the accuracy of the semantic filtering.
  • Likewise, semantic filtering can be used on advertising content, and can be used based on the persons reviewing the content and the advertising content. For example, a reviewer may be asked to classify themselves with the behavior metrics and metric weighting, and others may be asked to classify the reviewer based on behavior metric and metric weighting. In this manner, a semantic filter map for a reviewer can be determined, and the reviewer's input on content, such as run-length audiovisual programming content or advertising content, can also be determined. The Cogito Intelligence Platform semantic filter available at http://expertsystem.net/page.asp?id=1521&idd=25 receives such information and can automatically process run-length audiovisual content, advertising content or other suitable content such as content from social media. In this manner, a number of metrics can be automatically generated from text that can be used to compare run-length audiovisual content with other run-length audiovisual content (such as for channelization), to match run-length audiovisual content with advertising content (such as to automatically insert advertising into run-length audiovisual content), to detect content in social media (for example to determine the effectiveness of advertising content) or for other suitable purposes.
  • FIG. 6 is a flowchart of a method 600 for improving the image quality of audiovisual data in accordance with an exemplary embodiment of the present invention. Method 600 can be implemented as one or more algorithms running on a general purpose processor so as to create a special purpose machine or in other suitable manners.
  • Method 600 begins at 602 where a quantization noise metric is generated, such as using an algorithm operating on a processor that measures quantization noise and that outputs a relative number or other suitable data. In one exemplary embodiment, a quantization noise metric can determine the variations in macro blocks, blocks, individual pixels, or other suitable data that comprises frames and video. More particularly, the quantization noise metric can determine changes in subsequent adjacent frames of video data and can generate a figure of merit that is used to determine whether there is a significant amount of quantization noise, a low level of quantization noise, or other levels of quantization noise. The method then proceeds to 604.
  • At 604 the metric is compared to a jitter threshold, such as using a compare algorithm operating on a processor. In one exemplary embodiment, it may be desirable to avoid filtering for jitter if the quantization noise is below a predetermined threshold. The method then proceeds to 606 where it is determined whether or not to filter the video data based on the jitter threshold comparison. If it is determined to filter the audiovisual data, the method proceeds to 608, such as by generating suitable control data from a control algorithm that transfers control to a suitable programming point, where a jitter filter is applied at a level equal to the difference between the quantization noise metric and the threshold setting. Otherwise the method proceeds to 610.
  • At 610 the quantization noise metric is compared to an aliasing threshold, such as using a compare algorithm operating on a processor. In one exemplary embodiment, the aliasing threshold can be symmetric with the jitter threshold, such that aliasing filtering is only used when jitter filtering is applied, and at the same threshold level. The method then proceeds to 612 where it is determined whether to apply an aliasing filter, such as by using an algorithm that determines whether or not the threshold was met. If it is determined not to apply an aliasing filter at 612 the method proceeds to 616, such as by using an algorithm that generates control data transferring control to a suitable programming point. Otherwise, the method proceeds to 614 where the aliasing filter is applied. The method then proceeds to 616.
  • At 616 it is determined whether more stages in the filter are present. In one exemplary embodiment, the filtering can be provided in a number of stages, such as where a first threshold is used for slower jitter or aliasing changes, and a second or third or additional thresholds are used for faster metrics. In this manner, using a recursive process, the amount of filtering can help to reduce the entropy or quantization noise and associated bandwidth of video programming. If it is determined at 616 that there are more stages, such as by using a detector algorithm for detecting control data, a loop counter or other suitable algorithms, the method returns to 604; otherwise the method proceeds to 618, where the processed video is output for delivery.
  • In operation, method 600 allows audiovisual data to be processed in a series of stages to reduce the bandwidth of the audiovisual data. In one exemplary embodiment, the jitter filtering and alias filtering are based on a threshold, where the quantization noise metric used to determine whether the threshold is met is symmetric for each filter.
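The staged threshold comparison of method 600 can be sketched as follows; the noise metric and the two filters are passed in as placeholder functions, since the text does not specify their implementations, and the jitter filter level follows the difference rule of step 608.

```python
# Hedged sketch of the staged filtering loop of method 600; measure_noise,
# jitter_filter, and alias_filter are placeholder callables supplied by
# the caller, and the aliasing filter shares the symmetric threshold.
def process_stages(frame, thresholds, measure_noise, jitter_filter, alias_filter):
    """Apply jitter and aliasing filtering stage by stage (602-616)."""
    for threshold in thresholds:  # e.g. slower changes first, faster later
        metric = measure_noise(frame)          # 602: quantization noise metric
        if metric > threshold:                 # 604/606: jitter comparison
            # 608: filter level equals metric minus threshold
            frame = jitter_filter(frame, metric - threshold)
            frame = alias_filter(frame)        # 610-614: symmetric threshold
    return frame                               # 618: output for delivery
```

For example, with a toy metric that simply returns the frame value and a jitter filter that subtracts its level, two stages with thresholds 5 and 2 reduce a "frame" of 10 to 2.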
  • FIG. 7 is a flowchart of a method 700 for providing advertising data and run-length content for on-demand channels in accordance with an exemplary embodiment of the present invention. Method 700 can be implemented as an algorithm running on a general purpose processor or other suitable embodiments, such as in the manner previously discussed for various types of processes in regards to method 600.
  • Method 700 begins at 702 where a semantic filter is applied to run-length content. In one exemplary embodiment, the semantic filter can be used to classify run-length content based on one or more behavior metrics and one or more weighting metrics, where a frequency in each of the different behavior and weighting metric classes is measured based on the text of the run-length content. The method then proceeds to 704.
  • At 704, the run-length content is channelized. In one exemplary embodiment, depending on the number of behavior metrics and weighting metrics, as well as the frequency measures for run-length content, a number of different channels of channelized content can be created. The channelized content thus tracks the semantic filter to provide content of interest to viewers having predetermined behavioral preferences that match the semantic filter settings. The method then proceeds to 706.
  • At 706 semantic filtering is applied to advertising. In one exemplary embodiment, the semantic filter can be applied to advertising either after processing of run-length content, or in parallel to the processing of run-length content. Likewise, the application of semantic filter to advertising can be performed as part of the advertising development, such that the advertising is modified to match predetermined semantic filter characteristics. The method then proceeds to 708.
  • At 708 the run-length content is preprocessed for advertising. In one exemplary embodiment, certain advertisers may pay to have their products advertised in run-length content regardless of the semantic filtering correlation. In this exemplary embodiment, the advertising can be pre-associated with run-length content. The method then proceeds to 710.
  • At 710 advertising is inserted into additional advertising spaces based on space availability. In one exemplary embodiment, advertising can be inserted into certain marked areas, such as billboards in audiovisual content, backdrops, or other areas where advertising is expected. The method then proceeds to 712.
  • At 712 the run-length content is output. In one exemplary embodiment, the channelized content can be provided to a plurality of server locations or head-ends throughout the country, where it can be provided in virtual real time, with the appearance of being channelized. Likewise, on demand programming can be accommodated. The method then proceeds to 714.
  • At 714 advertising metrics are received. In one exemplary embodiment, the advertising metrics can include the number of times advertising information was requested, the number of times advertising information was viewed, additional information pertaining to the specific types of ads, or other suitable advertising metrics. The method then proceeds to 716.
  • At 716 social media data is received based on the advertising in the run-length audiovisual content. In one exemplary embodiment, the provision of certain types of advertising can be used to select semantic filter data for filtering social media data, such as to determine whether the advertising has had an effect on the discussions in social media. The method then proceeds to 718.
  • At 718 it is determined whether there is correlation between the advertising content and the social media data. If it is determined that there is a correlation then the method proceeds to 720 where positive advertising quality metrics are generated. Otherwise the method proceeds to 722 where negative advertising quality metrics are generated.
  • In operation, method 700 allows run-length audiovisual content, advertising content, social media content, and other suitable content to be filtered using a semantic filter to determine correlations between the content for various purposes. For example, the correlations between run-length audiovisual content from semantic filtering can be used to channelize the run-length audiovisual content. Correlations between the run-length audiovisual content and advertising content from semantic filtering can be used to associate the advertising content with the run-length audiovisual content. Correlations between the social media data content and the advertising content can be used to determine whether the advertising was effective, reached the intended audience, or had the intended effect on the audience. In this manner, the provision of run-length audiovisual data and advertising can be automated and optimized.
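The correlation test at steps 718 through 722 can be sketched as a simple sentiment tally over social media posts that mention the campaign; the positive and negative word lists and the function name are illustrative assumptions.

```python
# Simplified sketch of steps 718-722: posts that correlate with the
# campaign terms are scored on positive/negative connotation words.
# The word lists are illustrative placeholders.
POSITIVE = {"love", "great", "recommend", "free"}
NEGATIVE = {"awful", "waste", "avoid"}

def advertising_quality(posts, campaign_terms):
    """Return a positive or negative advertising quality metric."""
    score = 0
    for post in posts:
        words = set(post.lower().split())
        if words & campaign_terms:  # 718: post correlates with the advertising
            score += len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative"  # 720 / 722
```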
  • FIG. 8 is a diagram of a screen display 800 in accordance with an exemplary embodiment of the present invention. Screen display 800 is an exemplary portion of a run-length audiovisual content program showing an automobile, a user or person on the screen, and a gas pump. In this portion of run-length audiovisual content, the person can be getting out of their automobile to fuel the automobile, such that there is a brand opportunity indicated as brand 1 for the automobile and brand 2 for the fuel pump. In this manner, advertisers can be given the opportunity to associate a brand with either brand 1 or brand 2, such that an automobile manufacturer may wish to have its automobile superimposed on brand 1, a gasoline manufacturer may elect to have its gasoline superimposed on brand 2, or other suitable combinations can be used. Because brand 1 and brand 2 may not be identified or tagged in the run-length audiovisual content, such branding can be identified using advertising preprocessing system 412, such as by using the Princeton Video Image processes described at http://www.pvi.tv/pvi/index.asp or other suitable processes.
  • In addition, on the side of video display 800 are brand icon 1 and brand icon 2. These brand icons are user selectable icons that allow a user to request additional information on the associated brand shown on the screen. In this manner, the user viewing the automobile identified as brand 1 can select brand icon 1 and request additional information. In this manner, the effectiveness of advertising can be determined in real time, and without interruption of programming. Advertising delivery is performed by inserting the advertising into the programming so as to create advertising that is associated with programming. As a result, recording a video or otherwise removing the programming from real-time delivery, such as by pirating the content, does not result in the loss of advertising content.
  • FIG. 9 is a flowchart of a method 900 for monitoring audiovisual content delivery to determine whether brand information has been requested in accordance with an exemplary embodiment of the present invention. Method 900 can be implemented as software operating on a general purpose processor so as to create a special purpose machine.
  • Method 900 begins at 902 where a brand type identifier is received. In one exemplary embodiment the brand type identifier can be associated with a type of product, type of service, or other suitable information. The method then proceeds to 904.
  • At 904 it is determined whether there is an association with the brand type identifier. In one exemplary embodiment, the association can be added in advertising preprocessing system 412 or in other manners so that predetermined advertising content is used. If there is an association, the method proceeds to 906 where the associated brand is used with the brand type identifier. Otherwise, the method proceeds to 908 where a brand is selected using semantic filtering metrics, such as based on the product type, consumer class and other suitable information. The method then proceeds to 910.
  • At 910 it is determined whether there are more brand identifiers on the screen. If more brand identifiers are on the screen, the method returns to 902, otherwise the method proceeds to 912.
  • At 912 brand icons are generated. In one exemplary embodiment, brand icons can be generated based on new occurrences of brand icons in a frame of video, such that a continuing occurrence of the brand icon within successive frames of video does not generate a new brand icon occurrence. Likewise, brand icons can be generated in order, such that the most recent brand icons will replace the least recent brand icon. The method then proceeds to 914.
  • At 914 it is determined whether a brand icon has been selected, such as by a user selection of the brand icon. If it is determined that no brand icon has been selected, the method returns to 902. Otherwise, the method proceeds to 916 where a notification is generated. In one exemplary embodiment, the notification can be a message in a user's account, an email message, or other suitable notifications. The method then returns to 902.
  • FIG. 10 is a diagram of a system 1000 and method for filtering video signals in accordance with an exemplary embodiment of the present invention. System 1000 can be implemented in hardware or a suitable combination of software and hardware, such as one or more known software tools, such as in the NVIDIA CUDA software package or other suitable software packages for performing the associated function operating on a general purpose processing platform. System 1000 can also be implemented as a method performed by executing algorithms associated with the software tools or other well known algorithms associated with the disclosed functional blocks on a general purpose processing platform.
  • System 1000 receives a horizontal offset signal and a vertical offset signal, such as from system 1100 of FIG. 11 or other suitable horizontal offset signals and vertical offset signals. The derivatives of the offset signals are then determined, the fast RMS values of the derivatives are determined, and a Z−n transform is performed on the derivatives to generate a limit. A threshold is applied to the RMS values, and the outputs are multiplied by respective ratios. The ratio-multiplied outputs are further multiplied by the respective Z−n transform outputs, and the signal is subtracted from a summer for derivative removal.
  • In parallel, the horizontal offset and vertical offset are time corrected and processed by a Z−n transform, and are added to the corresponding summers. The horizontal and vertical derivative values are then squared and added, and the square root of the sum is subtracted as velocity correction from an anti-alias frequency setting. The output frequency is then applied to an anti-aliasing filter to generate a corrected horizontal offset and a corrected vertical offset, to improve image quality at low data rates for video data.
  • System 1000 provides phase correlation for video data that is resilient to noise and other defects typical of audiovisual content or advertising content. During repetitive images, phase correlation can generate results with several peaks in the resulting output. If the shift of the centroid exceeds a predetermined spatial threshold, such as by exceeding a predetermined velocity noise limit, the spatial jitter correction ratio should approach 1:1. Each displacement axis is input independently into system 1000, where the result is directed into two paths. The first path compensates for the derivative phase and prediction delay, and the second path determines the derivative of the motion offset axis, as confined within a predictive limit function. The path results are then subtracted, resulting in a reduction of the entropy of that axis. To compensate for the artifacts resulting from the removal of the motion derivative, the resultant is then integrated to reduce aliasing. The anti-aliasing filter spectrum is predetermined to effect the best results based on how aggressive the compression is anticipated to be. The aliasing filter is further modified to compensate for increases in vector velocity resulting from spatial (multi-axis) change. The spatial offset resultants are then applied to the next frame.
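A loose per-axis sketch of the jitter reduction in system 1000 follows, under a simplified reading: the derivative of a motion-offset axis is compared against a fast RMS limit, and small (jitter-like) derivatives are ratio-scaled and subtracted. The window length, ratio, and threshold rule are assumptions, and the Z−n delay compensation and anti-aliasing stages are omitted.

```python
# Simplified per-axis jitter reduction: derivative vs. fast-RMS limit,
# with small derivatives subtracted; parameters are assumed values.
import math

def reduce_axis_jitter(offsets, ratio=1.0, rms_window=8):
    """Reduce the entropy of one displacement axis of offset samples."""
    corrected = [float(offsets[0])]
    recent = []
    for i in range(1, len(offsets)):
        d = offsets[i] - offsets[i - 1]              # derivative of the offset
        # fast RMS of recent derivatives acts as the predictive limit
        rms = math.sqrt(sum(recent) / len(recent)) if recent else abs(d)
        recent = (recent + [d * d])[-rms_window:]
        if abs(d) > rms:
            d = 0.0   # large shift: treat as real motion, do not remove
        corrected.append(offsets[i] - ratio * d)     # subtract small jitter
    return corrected
```

A constant offset track passes through unchanged, while a small one-sample excursion is smoothed out.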
  • FIG. 11 is a diagram of a system 1100 and method for generating offset data in accordance with an exemplary embodiment of the present invention. System 1100 can be implemented in hardware or a suitable combination of software and hardware, such as one or more known software tools, such as the NVIDIA CUDA software package or other suitable software packages for performing the associated function, operating on a general purpose processing platform. System 1100 can also be implemented as a method performed by executing algorithms associated with the software tools, or other well known algorithms associated with the disclosed functional blocks, on a general purpose processing platform.
  • Motion estimation can be accomplished by processing serial input image data sets g_a(x, y) and g_b(x, y) with window systems 1102 and 1104, respectively, which apply a suitable two-dimensional window function (such as a Hamming window) to the sets of image data to reduce edge effects. A discrete two-dimensional Fourier transform of both sets of image data is then generated using Fourier transform systems 1106 and 1108:

  • G_a = F(g_a); and

  • G_b = F(g_b)
  • The cross-power spectrum is then calculated by normalization system 1110, such as by taking the complex conjugate of the Fourier transform of the second set of image data, multiplying it element-wise with the Fourier transform of the first set of image data, and normalizing the product element-wise:
  • R = (G_a · G_b*) / |G_a · G_b*|
  • The normalized cross-correlation can be obtained by applying the inverse Fourier transform or in other suitable manners.

  • r = F^(-1){R}
  • The location of the peak in r is generated using peak detector 1112, such as by using quarter pixel edge detection or other suitable processes.
  • (Δx, Δy) = arg max_(x, y) {r}
  • A single peak value can be generated that represents the frame centroid shift expressed as a Cartesian (horizontal and vertical) offset, which can then be input to system 1000.
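The windowing, transform, normalization, and peak-location steps above can be sketched end-to-end as follows. This is an illustrative integer-pixel version with our own function and variable names; the quarter-pixel refinement mentioned for peak detector 1112 is omitted for brevity:

```python
import numpy as np

def phase_correlation_shift(ga, gb):
    """Estimate the (dy, dx) translation between two frames via phase
    correlation: 2-D Hamming windowing, Fourier transforms, normalized
    cross-power spectrum, inverse transform, and peak location."""
    ga = np.asarray(ga, dtype=float)
    gb = np.asarray(gb, dtype=float)
    # Two-dimensional Hamming window to reduce edge effects.
    window = np.outer(np.hamming(ga.shape[0]), np.hamming(ga.shape[1]))
    Ga = np.fft.fft2(ga * window)
    Gb = np.fft.fft2(gb * window)
    # Normalized cross-power spectrum R = (Ga · Gb*) / |Ga · Gb*|.
    cross = Ga * np.conj(Gb)
    R = cross / np.maximum(np.abs(cross), 1e-12)
    # Inverse transform yields the correlation surface r.
    r = np.real(np.fft.ifft2(R))
    # (dy, dx) = arg max over r, unwrapped to signed offsets.
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    if dy > ga.shape[0] // 2:
        dy -= ga.shape[0]
    if dx > ga.shape[1] // 2:
        dx -= ga.shape[1]
    return int(dy), int(dx)
```

Because the cross-power spectrum is normalized element-wise, only phase information survives, which is what makes the peak location robust to frame-wide brightness changes and broadband noise.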
  • While certain exemplary embodiments have been described in detail and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of, and not restrictive on, the broad invention. It will thus be recognized by those skilled in the art that various modifications may be made to the illustrated and other embodiments of the invention described above without departing from the broad inventive scope thereof. It will be understood, therefore, that the invention is not limited to the particular embodiments or arrangements disclosed, but is rather intended to cover any changes, adaptations or modifications which are within the scope and spirit of the invention defined by the appended claims.

Claims (18)

1. A system for providing audiovisual data comprising:
a run-length audiovisual data system for providing run-length audiovisual content;
an advertising content system for providing advertising content; and
a semantic filter system for processing the run-length audiovisual content and the advertising content and matching the run-length audiovisual content to the advertising content based on semantic filter output matching.
2. The system of claim 1 further comprising a content channelization system for receiving the run-length audiovisual content and combining the run-length audiovisual content into a plurality of channels based on semantic filter output matching.
3. The system of claim 1 further comprising an advertising insertion system for receiving the run-length audiovisual content and the advertising content and associating the advertising content with one or more tags in the run-length audiovisual content.
4. The system of claim 1 further comprising an advertising pre-processing system for receiving the run-length audiovisual content and designating one or more tags within the run-length audiovisual content on one or more props in the run-length audiovisual content.
5. The system of claim 1 further comprising an advertising delivery metrics system for receiving a plurality of advertising delivery metrics as a function of user-activated responses to advertising content and generating advertising metrics data.
6. The system of claim 1 further comprising a social media content collection system for receiving run-length audiovisual content delivery data and collecting text content from social media.
7. The system of claim 6 further comprising a semantic filter system for receiving the text content from the social media and the run-length audiovisual content delivery data and performing semantic filtering of the text content from the social media as a function of the run-length audiovisual content delivery data to generate advertisement scoring data.
8. The system of claim 7 further comprising:
an advertising quality metrics system for receiving the advertisement scoring data and the advertising metrics data and generating positive/negative advertising quality data; and
a screen display having a plurality of brand icons associated with one or more brands as they appear on the screen display, wherein the screen display can displace old brand icons with new brand icons.
9. A method for providing audiovisual data comprising:
processing run-length audiovisual data with a semantic filter to generate a plurality of run-length audiovisual data semantic filtering factors;
processing advertising audiovisual data with a semantic filter to generate a plurality of advertising audiovisual data semantic filtering factors; and
matching the run-length audiovisual content to the advertising content based on the plurality of run-length audiovisual data semantic filtering factors and the plurality of advertising audiovisual data semantic filtering factors.
10. The method of claim 9 wherein processing the run-length audiovisual data with the semantic filter to generate the plurality of run-length audiovisual data semantic filtering factors further comprises generating a plurality of behavior metric factors as a function of a plurality of metric weightings for each of the plurality of behavior metric factors.
11. The method of claim 10 wherein generating the plurality of behavior metric factors as the function of the plurality of metric weightings for each of the plurality of behavior metric factors further comprises generating a frequency of occurrence for each metric weighting of each behavior metric factor.
12. The method of claim 9 further comprising receiving the run-length audiovisual content and combining the run-length audiovisual content into one of a plurality of channels based on semantic filter output matching.
13. The method of claim 9 further comprising receiving the run-length audiovisual content and the advertising content and associating the advertising content with one or more tags in the run-length audiovisual content.
14. The method of claim 9 further comprising receiving the run-length audiovisual content and designating one or more tags within the run-length audiovisual content on one or more props in the run-length audiovisual content.
15. The method of claim 9 further comprising receiving a plurality of advertising delivery metrics as a function of user-activated responses to advertising content and generating advertising metrics data.
16. The method of claim 9 further comprising receiving run-length audiovisual content delivery data and collecting text content from social media.
17. The method of claim 16 further comprising receiving the text content from the social media and the run-length audiovisual content delivery data and performing semantic filtering of the text content from the social media as a function of the run-length audiovisual content delivery data to generate advertisement scoring data.
18. The method of claim 17 further comprising receiving the advertisement scoring data and the advertising metrics data and generating positive/negative advertising quality data.
US12/701,300 2010-01-06 2010-02-05 Audiovisual content channelization system Abandoned US20110167445A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/701,300 US20110167445A1 (en) 2010-01-06 2010-02-05 Audiovisual content channelization system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US29270310P 2010-01-06 2010-01-06
US12/701,300 US20110167445A1 (en) 2010-01-06 2010-02-05 Audiovisual content channelization system

Publications (1)

Publication Number Publication Date
US20110167445A1 true US20110167445A1 (en) 2011-07-07

Family

ID=44224726

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/701,223 Expired - Fee Related US8559749B2 (en) 2010-01-06 2010-02-05 Audiovisual content delivery system
US12/701,300 Abandoned US20110167445A1 (en) 2010-01-06 2010-02-05 Audiovisual content channelization system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/701,223 Expired - Fee Related US8559749B2 (en) 2010-01-06 2010-02-05 Audiovisual content delivery system

Country Status (1)

Country Link
US (2) US8559749B2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110276513A1 (en) * 2010-05-10 2011-11-10 Avaya Inc. Method of automatic customer satisfaction monitoring through social media
US20130298170A1 (en) * 2009-06-12 2013-11-07 Cygnus Broadband, Inc. Video streaming quality of experience recovery using a video quality metric
US8671019B1 (en) * 2011-03-03 2014-03-11 Wms Gaming, Inc. Controlling and rewarding gaming socialization
US20140081954A1 (en) * 2010-11-30 2014-03-20 Kirill Elizarov Media information system and method
US20160381435A1 (en) * 2008-11-26 2016-12-29 Ashwin Navin Annotation of metadata through capture infrastructure
US9538220B2 (en) 2009-06-12 2017-01-03 Wi-Lan Labs, Inc. Video streaming quality of experience degradation control using a video quality metric
USRE49890E1 (en) * 2013-05-21 2024-03-26 Samsung Electronics Co., Ltd. Method and apparatus for providing information by using messenger

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8442265B1 (en) * 2011-10-19 2013-05-14 Facebook Inc. Image selection from captured video sequence based on social components
US8437500B1 (en) * 2011-10-19 2013-05-07 Facebook Inc. Preferred images from captured video sequence

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6330537B1 (en) * 1999-08-26 2001-12-11 Matsushita Electric Industrial Co., Ltd. Automatic filtering of TV contents using speech recognition and natural language
US20050166224A1 (en) * 2000-03-23 2005-07-28 Michael Ficco Broadcast advertisement adapting method and apparatus
US20070204310A1 (en) * 2006-02-27 2007-08-30 Microsoft Corporation Automatically Inserting Advertisements into Source Video Content Playback Streams
US20080147487A1 (en) * 2006-10-06 2008-06-19 Technorati Inc. Methods and apparatus for conversational advertising
US20090171787A1 (en) * 2007-12-31 2009-07-02 Microsoft Corporation Impressionative Multimedia Advertising
US20100004975A1 (en) * 2008-07-03 2010-01-07 Scott White System and method for leveraging proximity data in a web-based socially-enabled knowledge networking environment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69434237T2 (en) * 1993-11-18 2005-12-08 Digimarc Corp., Tualatin Video with hidden in-band digital data
US5473376A (en) * 1994-12-01 1995-12-05 Motorola, Inc. Method and apparatus for adaptive entropy encoding/decoding of quantized transform coefficients in a video compression system
US6782132B1 (en) * 1998-08-12 2004-08-24 Pixonics, Inc. Video coding and reconstruction apparatus and methods
US6993199B2 (en) * 2001-09-18 2006-01-31 Nokia Mobile Phones Ltd. Method and system for improving coding efficiency in image codecs
US8078474B2 (en) * 2005-04-01 2011-12-13 Qualcomm Incorporated Systems, methods, and apparatus for highband time warping
CA2544459A1 (en) * 2006-04-21 2007-10-21 Evertz Microsystems Ltd. Systems and methods for synchronizing audio and video data signals

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160381435A1 (en) * 2008-11-26 2016-12-29 Ashwin Navin Annotation of metadata through capture infrastructure
US10074108B2 (en) * 2008-11-26 2018-09-11 Free Stream Media Corp. Annotation of metadata through capture infrastructure
US20130298170A1 (en) * 2009-06-12 2013-11-07 Cygnus Broadband, Inc. Video streaming quality of experience recovery using a video quality metric
US9538220B2 (en) 2009-06-12 2017-01-03 Wi-Lan Labs, Inc. Video streaming quality of experience degradation control using a video quality metric
US20110276513A1 (en) * 2010-05-10 2011-11-10 Avaya Inc. Method of automatic customer satisfaction monitoring through social media
US20140081954A1 (en) * 2010-11-30 2014-03-20 Kirill Elizarov Media information system and method
US8671019B1 (en) * 2011-03-03 2014-03-11 Wms Gaming, Inc. Controlling and rewarding gaming socialization
US9286759B2 (en) 2011-03-03 2016-03-15 Bally Gaming, Inc. Controlling and rewarding gaming socialization
USRE49890E1 (en) * 2013-05-21 2024-03-26 Samsung Electronics Co., Ltd. Method and apparatus for providing information by using messenger

Also Published As

Publication number Publication date
US20110164827A1 (en) 2011-07-07
US8559749B2 (en) 2013-10-15

Similar Documents

Publication Publication Date Title
US8559749B2 (en) Audiovisual content delivery system
KR101741352B1 (en) Attention estimation to control the delivery of data and audio/video content
US10075742B2 (en) System for social media tag extraction
US9043860B2 (en) Method and apparatus for extracting advertisement keywords in association with situations of video scenes
JP4865811B2 (en) Viewing tendency management apparatus, system and program
US8151194B1 (en) Visual presentation of video usage statistics
US20170132659A1 (en) Potential Revenue of Video Views
CN107483982B (en) Anchor recommendation method and device
US20170055014A1 (en) Processing video usage information for the delivery of advertising
US20140052740A1 (en) Topic and time based media affinity estimation
US20150134460A1 (en) Method and apparatus for selecting an advertisement for display on a digital sign
WO2011130564A1 (en) Platform-independent interactivity with media broadcasts
MX2011001959A (en) Supplemental information delivery.
US20110217022A1 (en) System and method for enriching video data
US10798425B1 (en) Personalized key object identification in a live video stream
US20140325055A1 (en) System and method for automatic selection of a content format
US20230269436A1 (en) Systems and methods for blending interactive applications with television programs
Tian et al. Intelligent advertising framework for digital signage
JP2018032252A (en) Viewing user log accumulation system, viewing user log accumulation server, and viewing user log accumulation method
US11741364B2 (en) Deep neural networks modeling
Liu The impact of consumer multi-homing behavior on ad prices: Evidence from an online marketplace
US20120116879A1 (en) Automatic information selection based on involvement classification
CN116521974A (en) Media content recommendation method and device, electronic equipment and readable storage medium
JP2022179571A (en) Moving image viewing providing device, moving image viewing providing method, and moving image viewing providing program
CN117241072A (en) Full-platform video data analysis system, method and storage medium based on big data

Legal Events

Date Code Title Description
AS Assignment

Owner name: STREAMING APPLIANCES, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:REAMS, ROBERT;REEL/FRAME:026705/0742

Effective date: 20110418

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION