WO2007120716A2 - Method and apparatus for automatically summarizing video - Google Patents
Method and apparatus for automatically summarizing video
- Publication number
- WO2007120716A2 (PCT/US2007/008951)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- scenes
- frame
- sampled
- frames
- Prior art date
- 2006-04-12
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/738—Presentation of query results
- G06F16/739—Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7834—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using audio features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7847—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
- G06F16/785—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using colour or luminescence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
Abstract
A method for automatically producing a summary of a video, comprising: receiving the video at a computer system; partitioning the video into scenes; determining similarities between the scenes; selecting representative scenes from the video based on the determined similarities; and combining the selected scenes to produce the summary for the video, wherein partitioning the video into scenes involves: extracting feature vectors for sampled frames in the video; detecting shot boundaries based on distances between feature vectors for successive sampled frames; producing a frame-similarity matrix, wherein each element in the frame-similarity matrix represents a distance between feature vectors for a corresponding pair of sampled frames; using the frame-similarity matrix, the detected shot boundaries and a dynamic-programming technique to compute a shot-similarity matrix, wherein each element in the shot-similarity matrix represents a similarity between a corresponding pair of shots; and determining scene boundaries by selectively merging successive shots together based on the computed similarities between the successive shots and also based on audio breaks in the video.
Description
METHOD AND APPARATUS FOR AUTOMATICALLY
SUMMARIZING VIDEO
Inventor: Jay N. Yagnik
BACKGROUND
Field of the Invention [0001] The present invention relates to computer-based techniques for manipulating video data. More specifically, the present invention relates to a computer-based technique for automatically summarizing a video.
Related Art
[0002] The recent proliferation of high-bandwidth Internet connections and associated developments in content-distribution technologies presently make it possible for millions of users to efficiently access video content on the Internet. These developments have led to a tremendous increase in the amount of video content that is being downloaded from the Internet. Internet users routinely view video clips from numerous web sites and portals to obtain various types of information and entertainment. At the same time, a number of video- sharing web sites have been recently launched, which are dedicated to sharing and distributing video clips.
[0003] Unlike other distribution channels for video content, the Internet enables consumers to preview short summaries of videos. This enables a consumer to obtain more information about a video before viewing and/or buying the entire video.
[0004] However, generating an effective summary for a video is a challenging task. A summary should ideally be an interesting and representative version of the original video, so that the viewer is motivated to view or buy the original video. At present, the process of generating summaries is an extremely time-consuming manual process, which is impractical for more than a small number of videos.
[0005] Hence, what is needed is a method and an apparatus for automatically summarizing a video without the above-described problems.
SUMMARY [0006] One embodiment of the present invention provides a system that automatically produces a summary of a video. During operation, the system partitions the video into scenes and then determines similarities between the scenes. Next, the system selects representative scenes from the video based on the determined similarities, and combines the selected scenes to produce the summary for the video. [0007] In a variation on this embodiment, while partitioning the video into scenes, the system first extracts feature vectors for sampled frames in the video. Next, the system detects shot boundaries based on distances between feature vectors for successive sampled frames. The system also produces a frame-similarity matrix, wherein each element in the frame- similarity matrix represents a distance between feature vectors for a corresponding pair of sampled frames. Next, the system uses the frame-similarity matrix, the detected shot boundaries and a dynamic-programming technique to compute a shot-similarity matrix, wherein each element in the shot-similarity matrix represents a similarity between a corresponding pair of shots. Finally, the system determines the scene boundaries by selectively merging successive shots together based on the computed similarities between the successive shots and also based on audio breaks in the video.
[0008] In a further variation, extracting the feature vector for a sampled frame involves producing a color histogram for the sampled frame.
[0009] In a further variation, the distance between two feature vectors $F_A$ and $F_B$ is

$$d(F_A, F_B) = 1 - \frac{F_A \cdot F_B}{\|F_A\| \, \|F_B\|}$$
[0010] In a further variation, while detecting the shot boundaries, the system uses an adaptive-threshold technique, which computes a distance between feature vectors for successive frames divided by a maximum distance between successive feature vectors in a preceding window of frames.
[0011] In a further variation, determining the similarities between the scenes involves: using the frame-similarity matrix, the determined scene boundaries and a dynamic- programming technique to produce a scene-similarity matrix. It also involves scoring the
scenes based on a metric that rewards scenes which are different from other scenes in the video, and that also rewards scenes which are similar to other scenes in the video.
[0012] In a further variation, selecting the representative scenes involves selecting the representative scenes based on a total score for the selected scenes subject to a time constraint.
[0013] In a variation on this embodiment, selecting the representative scenes involves clustering similar scenes together and selecting at most one scene from each cluster.
[0014] In a variation on this embodiment, selecting the representative scenes involves using a dynamic-programming technique to select the representative scenes. [0015] One embodiment of the present invention provides a system that automatically selects a frame from a video to represent the video. During operation, the system extracts feature vectors for sampled frames in the video. Next, the system determines similarities between sampled frames by determining distances between feature vectors for the sampled frames. The system uses the determined similarities to select a sampled frame to represent the video, wherein the selected frame is most similar to the other sampled frames in the video.
BRIEF DESCRIPTION OF THE FIGURES
[0016] FIG. 1 illustrates the process of summarizing a video in accordance with an embodiment of the present invention. [0017] FIG. 2 presents a detailed flow diagram illustrating the process of summarizing a video in accordance with an embodiment of the present invention.
[0018] FIG. 3 illustrates a system for summarizing a video in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
[0019] The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
[0020] The data structures and code described in this detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. This includes, but is not limited to, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or any device capable of storing data usable by a computer system.
Overview
[0021] One embodiment of the present invention provides a technique for automatically summarizing a video, wherein the technique considers both "events" in the video and the "flow" of the video.
[0022] Referring to FIG. 1, the general process first extracts features from frames 104-107 in video 102 and uses these features to detect transitions between features. These transitions are used along with audio breaks to divide the video into "shots" 108-110. For example, in one embodiment of the present invention, the features are color-value histograms for frames of the video and the shot boundaries are defined by abrupt transitions between features for successive frames in the video. These abrupt transitions are likely to be associated with "cuts" made during the shooting process for the video. In contrast, moving objects or camera motion within the video are not likely to cause such abrupt transitions, and are hence unlikely to be detected as shot boundaries.
[0023] Next, successive shots 108-110 are selectively merged into scenes 112-114 based on similarities between the shots and the audio breaks in the video 102.
[0024] Finally, similarities are computed between the scenes, and scenes are automatically selected for summary 118 based on the computed similarities and a time constraint. This entire process is described in more detail below with reference to FIG. 2.
Detailed Process
[0025] FIG. 2 presents a more-detailed flow diagram illustrating the process of summarizing a video in accordance with an embodiment of the present invention. The system starts by receiving a video 102, which is comprised of a number of frames.
[0026] Next, a sampling mechanism 202 samples frames from video 102. These sampled frames feed into a feature-extraction mechanism 204, which extracts a "feature" for
each sampled frame. In one embodiment of the present invention, each frame is partitioned into a 4x6 array of tiles and the system extracts color histogram features for each of these tiles. The histogram provides 8 bins for each color, making the total feature vector length 4 × 6 × 8 × 8 × 8 = 12288. (Note that the terms "feature" and "feature vector" are used interchangeably throughout this specification.)
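For concreteness, the following is a minimal Python sketch of this tiled color-histogram extraction; the function name, the NumPy frame representation, and the per-tile normalization are illustrative assumptions, not part of the specification:

```python
import numpy as np

def frame_feature(frame: np.ndarray, grid=(4, 6), bins=8) -> np.ndarray:
    """Tiled joint-RGB histogram: 4x6 tiles x 8^3 bins = 12288 dimensions."""
    h, w, _ = frame.shape  # expects an 8-bit RGB frame of shape (h, w, 3)
    th, tw = h // grid[0], w // grid[1]
    feats = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            tile = frame[r * th:(r + 1) * th, c * tw:(c + 1) * tw].reshape(-1, 3)
            q = (tile // (256 // bins)).astype(np.int64)       # quantize each channel to 8 levels
            idx = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]  # joint bin index in [0, 512)
            hist = np.bincount(idx, minlength=bins ** 3).astype(np.float64)
            feats.append(hist / max(hist.sum(), 1.0))          # per-tile normalization (assumed)
    return np.concatenate(feats)                               # length 4*6*512 = 12288
```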
[0027] The features feed into feature matrix 206 for further processing as is described below. They also feed into shot-boundary-detection mechanism 208, which detects shot boundaries by comparing features for consecutive frames and by considering audio breaks 222 (which are detected by audio-break detection mechanism 221). These shot boundaries are compiled into a shot-boundary list 210. In one embodiment of the present invention, shot-boundary-detection mechanism 208 uses an adaptive thresholding technique. This technique compares the variations of features within a causal window of length w to accurately localize the beginning of fades and dissolves. A shot change is detected if the following holds:
$$\frac{d(F_c, F_{c-1})}{\max_{2 \le j \le w} d(F_{c-j+1}, F_{c-j})} \ge \mathit{thresh}$$
[0028] This technique effectively provides an adaptive threshold which is raised for a sequence of frames containing motion and which is lowered for a sequence of static frames. This adaptive threshold makes it unlikely for the system to classify fast motion as a shot change.
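A sketch of how this adaptive-threshold test might be implemented, assuming the cosine frame distance defined below; the window length and numeric threshold here are illustrative, since the patent does not give values:

```python
import numpy as np

def cos_dist(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two feature vectors (the metric defined below)."""
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def detect_shot_boundaries(feats, w=10, thresh=3.0):
    """Flag frame c as a shot change when its successive-frame distance is at
    least `thresh` times the maximum successive distance in the preceding
    causal window of w frames; w and thresh are assumed values."""
    d = [cos_dist(feats[i], feats[i - 1]) for i in range(1, len(feats))]
    cuts = []
    for c in range(w + 1, len(feats)):
        window_max = max(d[c - 1 - w:c - 1]) + 1e-12  # adaptive denominator
        if d[c - 1] / window_max >= thresh:
            cuts.append(c)
    return cuts
```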
[0029] Note that video 102 can be sampled at different intervals for different purposes. For example, video 102 can be sampled once every 30 frames to produce features for feature matrix 206, while video 102 is sampled every 5 frames for shot-boundary detection mechanism 208. Note that more samples are needed to accurately detect shot boundaries, but the corresponding features do not need to be saved because the shot boundaries are detected based on a small window of preceding frames.
[0030] Similarity-determining mechanism 212 compares features from feature matrix 206 to produce a frame-similarity matrix 214, which contains values representing similarities between pairs of features. In one embodiment of the present invention, these similarities are expressed as "cosine distances" between the feature vectors for the images. More specifically, we can define the following distance metric between two frames A and B,
where $F_A$ and $F_B$ are feature vectors corresponding to frames A and B:

$$d(A, B) = 1 - \frac{F_A \cdot F_B}{\|F_A\| \, \|F_B\|}$$
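Under this metric, the frame-similarity matrix 214 can be computed in one vectorized step; a minimal sketch (names are illustrative):

```python
import numpy as np

def frame_similarity_matrix(feats) -> np.ndarray:
    """All-to-all cosine distances: element (A, B) is 1 - cos(F_A, F_B)."""
    F = np.stack(feats).astype(np.float64)
    F /= np.linalg.norm(F, axis=1, keepdims=True) + 1e-12  # unit-normalize rows
    return 1.0 - F @ F.T
```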
[0031] Next, shot-boundary list 210 is used to delineate shots in video 102, and then a dynamic-programming mechanism 216 is used to compute similarities between shots. These computed similarities are then used to populate a shot-similarity matrix 218.
[0032] Computing similarities between shots can be a complicated problem. Shots can vary in speed while capturing the same content, or parts of shots can be same. To account for such speed variations and similarities, one embodiment of the present invention uses the following recurrence relationship to compute the similarity between two shots si and s2 based on the best alignment between the shots:
$$S(s_1[\mathit{start}_1:\mathit{end}_1],\, s_2[\mathit{start}_2:\mathit{end}_2]) = \max\begin{cases} S(s_1[\mathit{start}_1{+}1:\mathit{end}_1],\; s_2[\mathit{start}_2:\mathit{end}_2]) \\ S(s_1[\mathit{start}_1:\mathit{end}_1],\; s_2[\mathit{start}_2{+}1:\mathit{end}_2]) \\ S(s_1[\mathit{start}_1{+}1:\mathit{end}_1],\; s_2[\mathit{start}_2{+}1:\mathit{end}_2]) + \dfrac{1 - d(F_{s_1,\mathit{start}_1}, F_{s_2,\mathit{start}_2})}{\min(l_1, l_2)} \end{cases}$$

where $x:y$ denotes all the frames from frame $x$ through frame $y$.
[0033] The above recurrence relationship can be solved by using a dynamic programming technique which has a computational complexity of $O(l_1 l_2)$, where $l_1$ and $l_2$ are the lengths of the two shots.
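A direct rendering of this recurrence with memoization, which visits each of the $O(l_1 l_2)$ states once; the frame-distance helper and naming are assumptions carried over from the earlier sketch:

```python
import numpy as np
from functools import lru_cache

def cos_dist(a, b):
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def shot_similarity(shot1, shot2) -> float:
    """Best-alignment similarity between two shots, each a list of sampled
    frame feature vectors; matched frames are rewarded, normalized by the
    shorter shot's length."""
    l1, l2 = len(shot1), len(shot2)
    norm = min(l1, l2)

    @lru_cache(maxsize=None)
    def S(i: int, j: int) -> float:
        if i == l1 or j == l2:
            return 0.0
        match = (1.0 - cos_dist(shot1[i], shot2[j])) / norm
        return max(S(i + 1, j), S(i, j + 1), S(i + 1, j + 1) + match)

    return S(0, 0)
```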
[0034] Next, scene-detection mechanism 223 uses the shot-similarity matrix 218 along with audio breaks 222 to determine scene boundaries 224. This generally involves using the audio breaks and the similarities between shots to selectively merge successive shots into scenes. Given the shot-similarity matrix 218, there are many ways to detect scenes from it. One embodiment of the present invention treats consecutive shots whose similarity is in the top 5% of all-to-all similarity values as part of the same scene. However, many alternative techniques exist.
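One way this merging rule might look in code, using only the top-5% similarity criterion; the audio-break condition, which the patent also applies, is omitted here for brevity:

```python
import numpy as np

def shots_to_scenes(shot_sim: np.ndarray):
    """Return the index of the first shot of each scene: a new scene starts
    wherever consecutive shots are NOT in the top 5% of all-to-all similarities."""
    cutoff = np.percentile(shot_sim, 95)
    starts = [0]
    for i in range(1, shot_sim.shape[0]):
        if shot_sim[i - 1, i] < cutoff:
            starts.append(i)
    return starts
```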
[0035] The scene boundaries are used to delineate scenes within video 102, and the same dynamic-programming mechanism 216 is used to compute similarities between scenes (instead of between shots as was done previously). These computed similarities are then used to populate a scene-similarity matrix 226.
[0036] A scene-scoring mechanism 232 uses information from scene-similarity matrix
226 to compute scene scores 234 based on similarities with other scenes. Once we have an all-to-all shot similarity matrix 218, we calculate a score for each scene, which is defined as:
$$G(s) = w_{\mathit{rel}} \sum_i \left| S(s, s_i) - \mu \right| + w_{\mathit{motion}} M(s) + w_{\mathit{audio}} A(s)$$

where $\mu$ is the mean of the all-to-all scene similarities (the "cohesiveness" of the video), $M(s)$ and $A(s)$ measure the motion and audio content of scene $s$, and the $w$ terms are weights.
[0037] This score captures the relative importance of the scene along with its motion and audio content. The basic idea is that important scenes are either very representative of the video or completely distinct from other scenes in the video. Intuitively, this means that in a story revolving around one set (or a game or talk show) the system will pick scenes from that set as well as some high-motion, high-audio-content shots from elsewhere. The score for a scene is affected by how far it is from the mean cohesiveness of the video. So for movies and other video material that does not stay in one setting, the average cohesiveness is low, and scenes of higher similarity to others are more likely to be picked.
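A vectorized sketch of this scoring, assuming per-scene motion and audio measures are supplied; the weight values are illustrative, since the patent does not specify them:

```python
import numpy as np

def scene_scores(scene_sim, motion, audio, w_rel=1.0, w_mot=0.5, w_aud=0.5):
    """G(s) = w_rel * sum_i |S(s, s_i) - mu| + w_mot * M(s) + w_aud * A(s),
    where mu is the mean of the all-to-all scene similarities."""
    mu = scene_sim.mean()
    rel = np.abs(scene_sim - mu).sum(axis=1)
    return w_rel * rel + w_mot * np.asarray(motion) + w_aud * np.asarray(audio)
```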
[0038] Information from scene-similarity matrix 226 is also used by scene-clustering mechanism 228 to produce scene clusters 230 containing similar scenes. One embodiment of the present invention performs an agglomerative-clustering step which bundles similar scenes into clusters, so that at most one scene from each cluster is selected to appear in the summary. For example, the threshold for clustering can be derived from the scene-similarity matrix as the fourth quartile of the distribution of similarities. At the end of clustering, we have a set of clusters each having one or more scenes.
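A possible realization using SciPy's hierarchical clustering, with the cut threshold derived from the similarity distribution as described; reading "fourth quartile" as the 75th percentile is an interpretation on our part:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_scenes(scene_sim: np.ndarray) -> np.ndarray:
    """Agglomerative clustering of scenes; returns a cluster label per scene."""
    dist = 1.0 - scene_sim                     # similarity -> distance
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)
    Z = linkage(condensed, method="average")
    threshold = 1.0 - np.percentile(scene_sim, 75)
    return fcluster(Z, t=threshold, criterion="distance")
```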
[0039] The scene scores 234 and the scene clusters 230 feed into another dynamic-programming mechanism 236, which selects scenes to include in the summary. In doing so, the system selects scenes in a manner that optimizes the total scene score while meeting the time limit of the summary, and also does not include very similar scenes together (since similar scenes are likely to have similar high scores). Hence, the problem is to pick zero or one scene from each cluster to form the final summary such that the sum total of the scores is maximized.
$$\max \sum_i G(C_i(\mathit{bestshot}))$$

where $C_i(\mathit{bestshot})$ is the shot selected from cluster $i$, and the value of $G$ is zero if no shot is selected from that cluster.
[0040] Because trying all possible combinations is computationally intractable, one embodiment of the present invention uses a dynamic programming technique to perform this optimization. We need a finite state space for the dynamic program to find an optimal solution, so we divide our summary time into divisions of 0.5 seconds each. The dynamic program is as follows:

$$\mathit{Score}(\mathit{stime}, i) = \max\left( \max_{j\,:\,\mathit{time}(j) \le \mathit{stime}} \left[ G(C_i(j)) + \mathit{Score}(\mathit{stime} - \mathit{time}(j),\, i - 1) \right],\; \mathit{Score}(\mathit{stime},\, i - 1) \right)$$

where $\mathit{stime}$ is the remaining summary time, $C_i(j)$ is the $j$-th scene in cluster $i$, and the second alternative covers selecting no scene from cluster $i$.
[0041] This solves for the best combination of scenes in the given time constraint that maximizes our weighted score and also suppresses scenes from the same clusters from appearing together.
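A knapsack-style sketch of this selection on a 0.5-second grid; the patent's exact state bookkeeping differs, and the cluster, score, and duration inputs are assumed to come from the earlier steps:

```python
def select_scenes(clusters, scores, durations, budget, step=0.5):
    """Pick zero or one scene per cluster maximizing total score within
    `budget` seconds. clusters: list of lists of scene indices;
    scores/durations: per-scene values."""
    T = int(budget / step)
    cost = [max(1, int(round(d / step))) for d in durations]
    best = [(0.0, [])] * (T + 1)      # best[t]: (score, scenes) using time <= t
    for members in clusters:
        new = list(best)               # default: take nothing from this cluster
        for s in members:
            for t in range(cost[s], T + 1):
                cand = best[t - cost[s]][0] + scores[s]
                if cand > new[t][0]:   # candidates read `best`, so at most
                    new[t] = (cand, best[t - cost[s]][1] + [s])  # one per cluster
        best = new
    return max(best, key=lambda x: x[0])[1]
```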
[0042] Finally, combining mechanism 240 combines the selected scenes together to form summary 118.
[0043] Another embodiment of the present invention selects a single frame from the video to represent the video. In this embodiment, the system similarly extracts feature vectors for sampled frames in the video, and similarly determines similarities between sampled frames by determining distances between feature vectors for the sampled frames. The system then uses the determined similarities to select a sampled frame to represent the video, wherein the selected frame is most similar to the other sampled frames in the video.
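This selection reduces to a row-sum over the frame-similarity matrix (cf. claim 23); a self-contained sketch under the same cosine-distance assumption:

```python
import numpy as np

def representative_frame(feats) -> int:
    """Index of the sampled frame most similar to all others: the frame whose
    row in the all-to-all cosine-distance matrix has the smallest sum."""
    F = np.stack(feats).astype(np.float64)
    F /= np.linalg.norm(F, axis=1, keepdims=True) + 1e-12
    D = 1.0 - F @ F.T
    return int(np.argmin(D.sum(axis=1)))
```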
System
[0044] FIG. 3 illustrates a computer-based system for summarizing a video in accordance with an embodiment of the present invention. This computer-based system operates within a computer system 300, which can generally include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a
mainframe computer, a digital signal processor, a portable computing device, a personal organizer, a device controller, and a computational engine within an appliance.
[0045] As is illustrated in FIG. 3, computer system 300 includes a number of software modules that implement: sampling mechanism 202, feature-extraction mechanism 204, shot-boundary-detection mechanism 208, similarity-determining mechanism 212, dynamic-programming mechanism 216, scene-detection mechanism 220, scene-clustering mechanism 228, scene-scoring mechanism 232, dynamic-programming mechanism 236 and combining mechanism 240. These mechanisms operate collectively to produce a summary 118 for a video 102.
Applications
[0046] The summaries produced by the present invention can be used in a number of different ways, some of which are listed below.
(1) Video summaries can be used to provide previews for paid videos. There are typically restrictions on how much of a paid video can be shown to potential buyers. Within these restrictions, the system can automatically generate video previews to be shown as trailers to potential buyers of the paid videos.
(2) Video summaries can be useful to a user who missed a whole series or some episodes of their favorite show. The system can generate a summary for those episodes that fits within an amount of time that the user is willing to spend to catch up with the series.
(3) A video summary can comprise a single frame, which is displayed as a representative of the video within search results. The above-described techniques can be used to select a frame which best reflects the content of the video. In a related application, while the video is being played back, the system can display key frames from the interesting parts of the video to serve as anchor points for fast browsing through the video.
[0047] The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present invention. The scope of the present invention is defined by the appended claims.
Claims
1. A method for automatically producing a summary of a video, comprising: receiving the video at a computer system; partitioning the video into scenes; determining similarities between the scenes; selecting representative scenes from the video based on the determined similarities; and combining the selected scenes to produce the summary for the video.
2. The method of claim 1, wherein partitioning the video into scenes involves: extracting feature vectors for sampled frames in the video; detecting shot boundaries based on distances between feature vectors for successive sampled frames; producing a frame-similarity matrix, wherein each element in the frame-similarity matrix represents a distance between feature vectors for a corresponding pair of sampled frames; using the frame-similarity matrix, the detected shot boundaries and a dynamic- programming technique to compute a shot-similarity matrix, wherein each element in the shot-similarity matrix represents a similarity between a corresponding pair of shots; and determining scene boundaries by selectively merging successive shots together based on the computed similarities between the successive shots and also based on audio breaks in the video.
3. The method of claim 2, wherein extracting the feature vector for a sampled frame involves producing a color histogram for the sampled frame.
4. The method of claim 2, wherein the distance between two feature vectors $F_A$ and $F_B$ is $1 - \frac{F_A \cdot F_B}{\|F_A\| \, \|F_B\|}$.
5. The method of claim 2, wherein detecting the shot boundaries involves using an adaptive-threshold technique which computes a distance between feature vectors for successive frames divided by a maximum distance between successive feature vectors in a preceding window of frames.
6. The method of claim 2, wherein determining the similarities between the scenes involves: using the frame-similarity matrix, the determined scene boundaries and a dynamic-programming technique to produce a scene-similarity matrix; and scoring the scenes based on a metric that rewards scenes which are different from other scenes in the video, and that also rewards scenes which are similar to other scenes in the video.
7. The method of claim 6, wherein selecting the representative scenes involves selecting the representative scenes based on a total score for the selected scenes subject to a time constraint.
8. The method of claim 1, wherein selecting the representative scenes involves clustering similar scenes together and selecting at most one scene from each cluster.
9. The method of claim 1, wherein selecting the representative scenes involves using a dynamic-programming technique to select the representative scenes.
10. A computer-readable storage medium storing instructions that when executed by a computer cause the computer to perform a method for automatically producing a summary of a video, the method comprising: receiving the video at a computer system; partitioning the video into scenes; determining similarities between the scenes;
selecting representative scenes from the video based on the determined similarities; and combining the selected scenes to produce the summary for the video.
11. The computer-readable storage medium of claim 10, wherein partitioning the video into scenes involves: extracting feature vectors for sampled frames in the video;
detecting shot boundaries based on distances between feature vectors for successive sampled frames; producing a frame-similarity matrix, wherein each element in the frame-similarity matrix represents a distance between feature vectors for a corresponding pair of sampled frames; using the frame-similarity matrix, the detected shot boundaries and a dynamic-programming technique to compute a shot-similarity matrix, wherein each element in the shot-similarity matrix represents a similarity between a corresponding pair of shots; and determining scene boundaries by selectively merging successive shots together based on the computed similarities between the successive shots and also based on audio breaks in the video.
12. The computer-readable storage medium of claim 11, wherein extracting the feature vector for a sampled frame involves producing a color histogram for the sampled frame.
14. The computer-readable storage medium of claim 11, wherein detecting the shot boundaries involves using an adaptive-threshold technique which computes a distance between feature vectors for successive frames divided by a maximum distance between successive feature vectors in a preceding window of frames.
15. The computer-readable storage medium of claim 11, wherein determining the similarities between the scenes involves: using the frame-similarity matrix, the determined scene boundaries and a dynamic-programming technique to produce a scene-similarity matrix; and scoring the scenes based on a metric that rewards scenes which are different from other scenes in the video, and that also rewards scenes which are similar to other scenes in the video.
16. The computer-readable storage medium of claim 15, wherein selecting the representative scenes involves selecting the representative scenes based on a total score for the selected scenes subject to a time constraint.
17. The computer-readable storage medium of claim 10, wherein selecting the representative scenes involves clustering similar scenes together and selecting at most one scene from each cluster.
18. The computer-readable storage medium of claim 10, wherein selecting the representative scenes involves using a dynamic-programming technique to select the representative scenes.
19. An apparatus that automatically produces a summary of a video, comprising: a partitioning mechanism configured to partition the video into scenes; a similarity-determining mechanism configured to determine similarities between the scenes; a selection mechanism configured to select representative scenes from the video based on the determined similarities; and a combining mechanism configured to combine the selected scenes to produce the summary for the video.
20. The apparatus of claim 19, wherein the partitioning mechanism is configured to: extract feature vectors for sampled frames in the video; detect shot boundaries based on distances between feature vectors for successive sampled frames; produce a frame-similarity matrix, wherein each element in the frame-similarity matrix represents a distance between feature vectors for a corresponding pair of sampled frames; use the frame-similarity matrix, the shot boundaries and a dynamic-programming technique to compute a shot-similarity matrix, wherein each element in the shot-similarity matrix represents a similarity between a corresponding pair of shots; and determine scene boundaries by selectively merging successive shots together based on the computed similarities between the successive shots and also based on audio breaks in the video.
21. A method for automatically selecting a frame from a video to represent the video, comprising: receiving the video at a computer system; determining similarities between sampled frames in the video; and using the determined similarities to select a sampled frame to represent the video, wherein the selected frame is most similar to the other sampled frames in the video.
22. The method of claim 21, wherein determining the similarities between the sampled frames involves: extracting feature vectors for sampled frames; determining distances between feature vectors for the sampled frames; and using the distances as a measure of similarity.
23. The method of claim 22, wherein determining the similarities between the sampled frames also involves: producing a frame-similarity matrix, wherein each element in the frame-similarity matrix represents a distance between feature vectors for a corresponding pair of sampled frames; and adding up rows in the frame-similarity matrix to determine how similar each sampled frame is to the other sampled frames in the video.
24. The method of claim 22, wherein extracting a feature vector for a sampled frame involves producing a color histogram for the sampled frame.
25. A computer-readable storage medium storing instructions that when executed by a computer cause the computer to perform a method for automatically selecting a frame from a video to represent the video, the method comprising: receiving the video at a computer system; determining similarities between sampled frames in the video; and using the determined similarities to select a sampled frame to represent the video, wherein the selected frame is most similar to the other sampled frames in the video.
26. The computer-readable storage medium of claim 25, wherein determining the similarities between the sampled frames involves: extracting feature vectors for sampled frames; determining distances between feature vectors for the sampled frames; and using the distances as a measure of similarity.
27. The computer-readable storage medium of claim 26, wherein determining the similarities between the sampled frames also involves: producing a frame-similarity matrix, wherein each element in the frame-similarity matrix represents a distance between feature vectors for a corresponding pair of sampled frames; and adding up rows in the frame-similarity matrix to determine how similar each sampled frame is to the other sampled frames in the video.
28. The computer-readable storage medium of claim 26, wherein extracting a feature vector for a sampled frame involves producing a color histogram for the sampled frame.
29. An apparatus that automatically selects a frame from a video to represent the video, comprising: a similarity-determining mechanism configured to determine similarities between sampled frames; and a selection mechanism configured to use the determined similarities to select a sampled frame to represent the video, wherein the selected frame is most similar to the other sampled frames in the video.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US79186906P | 2006-04-12 | 2006-04-12 | |
US60/791,869 | 2006-04-12 | ||
US11/454,386 | 2006-06-15 | ||
US11/454,386 US8699806B2 (en) | 2006-04-12 | 2006-06-15 | Method and apparatus for automatically summarizing video |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2007120716A2 true WO2007120716A2 (en) | 2007-10-25 |
WO2007120716A3 WO2007120716A3 (en) | 2008-04-17 |
Family
ID=38606286
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2007/008951 WO2007120716A2 (en) | 2006-04-12 | 2007-04-09 | Method and apparatus for automatically summarizing video |
Country Status (2)
Country | Link |
---|---|
US (2) | US8699806B2 (en) |
WO (1) | WO2007120716A2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015184768A1 (en) * | 2014-10-23 | 2015-12-10 | 中兴通讯股份有限公司 | Method and device for generating video abstract |
Families Citing this family (79)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8316081B2 (en) | 2006-04-13 | 2012-11-20 | Domingo Enterprises, Llc | Portable media player enabled to obtain previews of a user's media collection |
KR20090045376A (en) * | 2006-08-25 | 2009-05-07 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Method and apparatus for automatically generating a summary of a multimedia content item |
US20080066107A1 (en) | 2006-09-12 | 2008-03-13 | Google Inc. | Using Viewing Signals in Targeted Video Advertising |
US8214374B1 (en) * | 2011-09-26 | 2012-07-03 | Limelight Networks, Inc. | Methods and systems for abridging video files |
US8396878B2 (en) | 2006-09-22 | 2013-03-12 | Limelight Networks, Inc. | Methods and systems for generating automated tags for video files |
US9015172B2 (en) | 2006-09-22 | 2015-04-21 | Limelight Networks, Inc. | Method and subsystem for searching media content within a content-search service system |
US8966389B2 (en) | 2006-09-22 | 2015-02-24 | Limelight Networks, Inc. | Visual interface for identifying positions of interest within a sequentially ordered information encoding |
US20080276266A1 (en) * | 2007-04-18 | 2008-11-06 | Google Inc. | Characterizing content for identification of advertising |
US8667532B2 (en) * | 2007-04-18 | 2014-03-04 | Google Inc. | Content recognition for targeting video advertisements |
US8433611B2 (en) | 2007-06-27 | 2013-04-30 | Google Inc. | Selection of advertisements for placement with content |
US9064024B2 (en) | 2007-08-21 | 2015-06-23 | Google Inc. | Bundle generation |
US20090083790A1 (en) * | 2007-09-26 | 2009-03-26 | Tao Wang | Video scene segmentation and categorization |
US9824372B1 (en) | 2008-02-11 | 2017-11-21 | Google Llc | Associating advertisements with videos |
US20100037149A1 (en) * | 2008-08-05 | 2010-02-11 | Google Inc. | Annotating Media Content Items |
US8718404B2 (en) * | 2009-02-06 | 2014-05-06 | Thomson Licensing | Method for two-step temporal video registration |
US9167189B2 (en) * | 2009-10-15 | 2015-10-20 | At&T Intellectual Property I, L.P. | Automated content detection, analysis, visual synthesis and repurposing |
US9152708B1 (en) | 2009-12-14 | 2015-10-06 | Google Inc. | Target-video specific co-watched video clusters |
US20140033006A1 (en) * | 2010-02-18 | 2014-01-30 | Adobe Systems Incorporated | System and method for selection preview |
JP5553152B2 (en) | 2010-04-09 | 2014-07-16 | ソニー株式会社 | Image processing apparatus and method, and program |
US20120151343A1 (en) * | 2010-12-13 | 2012-06-14 | Deep Tags, LLC | Deep tags classification for digital media playback |
US9557885B2 (en) | 2011-08-09 | 2017-01-31 | Gopro, Inc. | Digital media editing |
CN103226586B (en) * | 2013-04-10 | 2016-06-22 | 中国科学院自动化研究所 | Video summarization method based on Energy distribution optimal strategy |
CN105612554B (en) | 2013-10-11 | 2019-05-10 | 冒纳凯阿技术公司 | Method for characterizing the image obtained by video-medical equipment |
US9225879B2 (en) * | 2013-12-27 | 2015-12-29 | TCL Research America Inc. | Method and apparatus for video sequential alignment |
US9652667B2 (en) | 2014-03-04 | 2017-05-16 | Gopro, Inc. | Automatic generation of video from spherical content using audio/visual analysis |
US9685194B2 (en) | 2014-07-23 | 2017-06-20 | Gopro, Inc. | Voice-based video tagging |
US10074013B2 (en) | 2014-07-23 | 2018-09-11 | Gopro, Inc. | Scene and activity identification in video summary generation |
US9639762B2 (en) * | 2014-09-04 | 2017-05-02 | Intel Corporation | Real time video summarization |
CN104394422B (en) * | 2014-11-12 | 2017-11-17 | 华为软件技术有限公司 | A kind of Video segmentation point acquisition methods and device |
US9734870B2 (en) | 2015-01-05 | 2017-08-15 | Gopro, Inc. | Media identifier generation for camera-captured media |
KR102306538B1 (en) * | 2015-01-20 | 2021-09-29 | 삼성전자주식회사 | Apparatus and method for editing content |
US9679605B2 (en) | 2015-01-29 | 2017-06-13 | Gopro, Inc. | Variable playback speed template for video editing application |
KR101650153B1 (en) * | 2015-03-19 | 2016-08-23 | 네이버 주식회사 | Cartoon data modifying method and cartoon data modifying device |
US10074015B1 (en) | 2015-04-13 | 2018-09-11 | Google Llc | Methods, systems, and media for generating a summarized video with video thumbnails |
WO2016187235A1 (en) | 2015-05-20 | 2016-11-24 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
CN105007433B (en) * | 2015-06-03 | 2020-05-15 | 南京邮电大学 | Moving object arrangement method based on energy constraint minimization of object |
JP2017045374A (en) * | 2015-08-28 | 2017-03-02 | 富士ゼロックス株式会社 | Information processing device and program |
US9894393B2 (en) | 2015-08-31 | 2018-02-13 | Gopro, Inc. | Video encoding for reduced streaming latency |
US10204273B2 (en) | 2015-10-20 | 2019-02-12 | Gopro, Inc. | System and method of providing recommendations of moments of interest within video clips post capture |
US9721611B2 (en) | 2015-10-20 | 2017-08-01 | Gopro, Inc. | System and method of generating video from video clips based on moments of interest within the video clips |
US10229324B2 (en) * | 2015-12-24 | 2019-03-12 | Intel Corporation | Video summarization using semantic information |
US10095696B1 (en) | 2016-01-04 | 2018-10-09 | Gopro, Inc. | Systems and methods for generating recommendations of post-capture users to edit digital media content field |
US10109319B2 (en) | 2016-01-08 | 2018-10-23 | Gopro, Inc. | Digital media editing |
US9812175B2 (en) | 2016-02-04 | 2017-11-07 | Gopro, Inc. | Systems and methods for annotating a video |
KR20170098079A (en) * | 2016-02-19 | 2017-08-29 | 삼성전자주식회사 | Electronic device method for video recording in electronic device |
US9972066B1 (en) | 2016-03-16 | 2018-05-15 | Gopro, Inc. | Systems and methods for providing variable image projection for spherical visual content |
US10402938B1 (en) | 2016-03-31 | 2019-09-03 | Gopro, Inc. | Systems and methods for modifying image distortion (curvature) for viewing distance in post capture |
US9838730B1 (en) | 2016-04-07 | 2017-12-05 | Gopro, Inc. | Systems and methods for audio track selection in video editing |
US9794632B1 (en) | 2016-04-07 | 2017-10-17 | Gopro, Inc. | Systems and methods for synchronization based on audio track changes in video editing |
US9838731B1 (en) | 2016-04-07 | 2017-12-05 | Gopro, Inc. | Systems and methods for audio track selection in video editing with audio mixing option |
US9998769B1 (en) | 2016-06-15 | 2018-06-12 | Gopro, Inc. | Systems and methods for transcoding media files |
US10250894B1 (en) | 2016-06-15 | 2019-04-02 | Gopro, Inc. | Systems and methods for providing transcoded portions of a video |
US9922682B1 (en) | 2016-06-15 | 2018-03-20 | Gopro, Inc. | Systems and methods for organizing video files |
US10045120B2 (en) | 2016-06-20 | 2018-08-07 | Gopro, Inc. | Associating audio with three-dimensional objects in videos |
US10185891B1 (en) | 2016-07-08 | 2019-01-22 | Gopro, Inc. | Systems and methods for compact convolutional neural networks |
US10469909B1 (en) | 2016-07-14 | 2019-11-05 | Gopro, Inc. | Systems and methods for providing access to still images derived from a video |
US10395119B1 (en) | 2016-08-10 | 2019-08-27 | Gopro, Inc. | Systems and methods for determining activities performed during video capture |
US9836853B1 (en) | 2016-09-06 | 2017-12-05 | Gopro, Inc. | Three-dimensional convolutional neural networks for video highlight detection |
US10268898B1 (en) | 2016-09-21 | 2019-04-23 | Gopro, Inc. | Systems and methods for determining a sample frame order for analyzing a video via segments |
US10282632B1 (en) | 2016-09-21 | 2019-05-07 | Gopro, Inc. | Systems and methods for determining a sample frame order for analyzing a video |
US10002641B1 (en) | 2016-10-17 | 2018-06-19 | Gopro, Inc. | Systems and methods for determining highlight segment sets |
US10284809B1 (en) | 2016-11-07 | 2019-05-07 | Gopro, Inc. | Systems and methods for intelligently synchronizing events in visual content with musical features in audio content |
US10262639B1 (en) | 2016-11-08 | 2019-04-16 | Gopro, Inc. | Systems and methods for detecting musical features in audio content |
US10534966B1 (en) | 2017-02-02 | 2020-01-14 | Gopro, Inc. | Systems and methods for identifying activities and/or events represented in a video |
US10339443B1 (en) | 2017-02-24 | 2019-07-02 | Gopro, Inc. | Systems and methods for processing convolutional neural network operations using textures |
US10127943B1 (en) | 2017-03-02 | 2018-11-13 | Gopro, Inc. | Systems and methods for modifying videos based on music |
US10185895B1 (en) | 2017-03-23 | 2019-01-22 | Gopro, Inc. | Systems and methods for classifying activities captured within images |
US10083718B1 (en) | 2017-03-24 | 2018-09-25 | Gopro, Inc. | Systems and methods for editing videos based on motion |
US10187690B1 (en) | 2017-04-24 | 2019-01-22 | Gopro, Inc. | Systems and methods to detect and correlate user responses to media content |
US10395122B1 (en) | 2017-05-12 | 2019-08-27 | Gopro, Inc. | Systems and methods for identifying moments in videos |
US10402698B1 (en) | 2017-07-10 | 2019-09-03 | Gopro, Inc. | Systems and methods for identifying interesting moments within videos |
US10614114B1 (en) | 2017-07-10 | 2020-04-07 | Gopro, Inc. | Systems and methods for creating compilations based on hierarchical clustering |
US10402656B1 (en) | 2017-07-13 | 2019-09-03 | Gopro, Inc. | Systems and methods for accelerating video analysis |
US10929945B2 (en) | 2017-07-28 | 2021-02-23 | Google LLC | Image capture devices featuring intelligent use of lightweight hardware-generated statistics |
US10445586B2 (en) | 2017-12-12 | 2019-10-15 | Microsoft Technology Licensing, LLC | Deep learning on image frames to generate a summary |
CN110321799B (en) * | 2019-06-04 | 2022-11-18 | Wuhan University | Scene number selection method based on SBR and average inter-class distance |
CN113453040B (en) * | 2020-03-26 | 2023-03-10 | Huawei Technologies Co., Ltd. | Short video generation method and apparatus, related device, and medium |
US20230205815A1 (en) | 2020-05-26 | 2023-06-29 | Nec Corporation | Information processing device, control method and storage medium |
CN117459665A (en) * | 2023-10-25 | 2024-01-26 | Hangzhou Youyi Culture Media Co., Ltd. | Video editing method, system and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6263507B1 (en) * | 1996-12-05 | 2001-07-17 | Interval Research Corporation | Browser for use in navigating a body of information, with particular application to browsing information represented by audiovisual data |
US6535639B1 (en) * | 1999-03-12 | 2003-03-18 | Fuji Xerox Co., Ltd. | Automatic video summarization using a measure of shot importance and a frame-packing method |
US7016540B1 (en) * | 1999-11-24 | 2006-03-21 | Nec Corporation | Method and system for segmentation, classification, and summarization of video images |
US7305389B2 (en) * | 2004-04-15 | 2007-12-04 | Microsoft Corporation | Content propagation for enhanced document retrieval |
US7809722B2 (en) | 2005-05-09 | 2010-10-05 | Like.Com | System and method for enabling search and retrieval from image files based on recognized information |
US7551234B2 (en) * | 2005-07-28 | 2009-06-23 | Seiko Epson Corporation | Method and apparatus for estimating shot boundaries in a digital video sequence |
- 2006
- 2006-06-15: US application US11/454,386 filed; granted as US8699806B2 (legal status: Active)
- 2007
- 2007-04-09: PCT application PCT/US2007/008951 filed; published as WO2007120716A2 (legal status: Application Filing)
- 2014
- 2014-02-18: US application US14/183,070 filed; granted as US8879862B2 (legal status: Active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004040480A1 (en) * | 2002-11-01 | 2004-05-13 | Mitsubishi Denki Kabushiki Kaisha | Method for summarizing unknown content of video |
Non-Patent Citations (4)
Title |
---|
Aner-Wolf, A. et al.: "Video summaries and cross-referencing through mosaic-based representation", Computer Vision and Image Understanding, Academic Press, Elsevier Inc., vol. 95, no. 2, August 2004, pages 201-237, XP004520274, ISSN: 1077-3142 * |
Lu, S. et al.: "A Novel Video Summarization Framework for Document Preparation and Archival Applications", 2005 Proceedings IEEE Aerospace Conference, Big Sky, MT, 5 March 2005, pages 1-10, XP010864565, ISBN: 0-7803-8870-4 * |
Uchihashi, S. et al.: "Video Manga: Generating Semantically Meaningful Video Summaries", ACM Multimedia, Proceedings of the International Conference, New York, NY, US, 1999, pages 383-392, XP001148132 * |
Zhu, X. et al.: "Hierarchical Video Content Description and Summarization Using Unified Semantic and Visual Similarity", Multimedia Systems, ACM, Springer Verlag, vol. 9, no. 1, July 2003, pages 31-53, XP001178581, ISSN: 0942-4962 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015184768A1 (en) * | 2014-10-23 | 2015-12-10 | ZTE Corporation | Method and device for generating video abstract |
Also Published As
Publication number | Publication date |
---|---|
US20070245242A1 (en) | 2007-10-18 |
US8879862B2 (en) | 2014-11-04 |
WO2007120716A3 (en) | 2008-04-17 |
US20140161351A1 (en) | 2014-06-12 |
US8699806B2 (en) | 2014-04-15 |
Similar Documents
Publication | Title |
---|---|
US8879862B2 (en) | Method and apparatus for automatically summarizing video |
US10867183B2 (en) | Selecting and presenting representative frames for video previews | |
US8804999B2 (en) | Video recommendation system and method thereof | |
US20190066732A1 (en) | Video Skimming Methods and Systems | |
Guan et al. | Keypoint-based keyframe selection | |
Petrovic et al. | Adaptive video fast forward | |
Hanjalic | Adaptive extraction of highlights from a sport video based on excitement modeling | |
US8467610B2 (en) | Video summarization using sparse basis function combination | |
JP4580183B2 (en) | Generation of visually representative video thumbnails | |
US8676030B2 (en) | Methods and systems for interacting with viewers of video content | |
US8467611B2 (en) | Video key-frame extraction using bi-level sparsity | |
US20060048191A1 (en) | Method and apparatus for use in video searching | |
WO2000045603A1 (en) | Signal processing method and video/voice processing device | |
Sreeja et al. | Towards genre-specific frameworks for video summarisation: A survey | |
US11853357B2 (en) | Method and system for dynamically analyzing, modifying, and distributing digital images and video | |
Papadopoulos et al. | Automatic summarization and annotation of videos with lack of metadata information | |
Wang et al. | Real-time summarization of user-generated videos based on semantic recognition | |
Silva et al. | Making a long story short: A multi-importance fast-forwarding egocentric videos with the emphasis on relevant objects | |
JP2006217046A (en) | Video index image generator and generation program | |
Böhm et al. | ProVeR: Probabilistic video retrieval using the Gauss-tree |
Tapu et al. | DEEP-AD: a multimodal temporal video segmentation framework for online video advertising | |
Ciocca et al. | Supervised and unsupervised classification post-processing for visual video summaries | |
Apostolidis et al. | Video fragmentation and reverse search on the web | |
Jiang et al. | Trends and opportunities in consumer video content navigation and analysis | |
Benini et al. | Identifying video content consistency by vector quantization |
Legal Events
Code | Title | Description |
---|---|---|
NENP | Non-entry into the national phase | Ref country code: DE |
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 07755278; Country of ref document: EP; Kind code of ref document: A2 |
122 | EP: PCT application non-entry in European phase | Ref document number: 07755278; Country of ref document: EP; Kind code of ref document: A2 |