US20120210219A1 - Keywords and dynamic folder structures - Google Patents
- Publication number
- US20120210219A1 (U.S. application Ser. No. 13/115,970)
- Authority
- US
- United States
- Prior art keywords
- keyword
- clip
- media
- collection
- clips
- Legal status: Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/7867—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
Definitions
- media editing applications for creating media presentations by compositing several pieces of media content such as video, audio, animation, still image, etc.
- Such applications give users the ability to edit, combine, transition, overlay, and piece together different media content in a variety of manners to create a resulting composite presentation.
- Examples of media editing applications include Final Cut Pro® and iMovie®, both sold by Apple Inc.
- Some media editing applications provide bins or folder-like structures to organize media content.
- a user typically imports media content and creates several bins. The user then names the bins and organizes the content by moving different pieces of the content into them. In other words, the user sorts the pieces of content into separate areas where he or she can easily access them later.
- To later locate a piece of content, the user searches for it in one of the bins. For instance, a video editor might search a bin called “People” in order to find a video clip having a wide camera shot of a group of extras. After finding the video clip, the video editor may move or copy it into another bin. If the video editor cannot locate the video clip, the editor may import the clip again.
- some media editing applications provide keyword-tagging functionality.
- keyword tagging a user selects one or more pieces of content and associates the selected content with a keyword.
- the user associates the selected content through a keyword display area that lists several user-specified keywords.
- the user initiates a keyword filtering operation on a particular keyword in order to display only those pieces of content that have been associated with the particular keyword.
- keyword tagging a user is limited to filtering down a display area to find content associated with a particular keyword. In some cases, the user has no recollection of which pieces of content are associated with which keywords. Furthermore, in most cases, an application's keyword tagging functionality is a secondary organizational feature to supplement folder-type organization.
- Some embodiments of the invention provide a novel keyword association tool for organizing media content.
- Each keyword can be associated with an entire clip or a portion of the clip.
- the keyword association tool creates a collection (e.g., bin, folder, etc.) in a dynamic collection structure.
- a keyword collection is dynamically added to the collection structure each time a new keyword is associated with a media clip.
- a user can drag and drop a clip onto a keyword collection that corresponds to the keyword. The same technique can be used to associate multiple clips with the keyword by simultaneously dragging and dropping the clips onto the keyword collection.
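The drag-and-drop association described above can be modeled as a minimal sketch: a mapping from keywords to sets of clip identifiers, where dropping one or more clips onto a collection tags them all. The class and method names here are illustrative, not drawn from the patent or any shipping application.

```python
from collections import defaultdict

class KeywordLibrary:
    """Minimal sketch of a dynamic keyword-collection structure.

    Collection, class, and method names are illustrative assumptions.
    """

    def __init__(self):
        # keyword -> set of clip ids tagged with that keyword
        self.collections = defaultdict(set)

    def tag(self, keyword, *clip_ids):
        # Dropping one or more clips onto a keyword collection
        # associates every dropped clip with that keyword; the
        # collection is created on first use ("dynamically added").
        self.collections[keyword].update(clip_ids)

    def clips_for(self, keyword):
        # Selecting a collection filters the display to its clips.
        return sorted(self.collections[keyword])

lib = KeywordLibrary()
lib.tag("People", "clip1")            # single drag-and-drop
lib.tag("Outdoor", "clip2", "clip3")  # multi-clip drag-and-drop
print(lib.clips_for("Outdoor"))       # ['clip2', 'clip3']
```

Using a `defaultdict` makes new collections appear automatically the first time a keyword is used, mirroring the dynamic collection structure.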
- the dynamic collection structure may be represented in a sidebar display area with a list of different keyword collections. Accordingly, a user does not have to search for a separate keyword association tool to associate one or more clips with different keywords. Moreover, these keyword collections operate similar to what many computer users have come to believe as bins or folders. For example, a user can (1) create different keyword collections, (2) drag and drop items onto them, and (3) select any one of them to view its keyword associated content.
- the keyword collections either replace or supplement folder-type organization.
- a group of items can be organized into different keyword collections.
- a group of items in a folder or bin can be organized into different keyword collections. Accordingly, the keyword collection feature provides a new model for replacing or supplementing traditional folder-type organization.
- each keyword can be associated with an entire clip or a portion of a clip in some embodiments.
- the user can select the portion of the clip (e.g., using a range selector), and drag and drop the selected portion onto a keyword collection.
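Range-level tagging can be sketched by storing (clip, start, end) triples per keyword; tagging a whole clip is then just the degenerate range covering its full length. Names and the frame-based units are illustrative assumptions.

```python
class RangeTagger:
    """Sketch: a keyword may apply to a whole clip or a sub-range.

    Ranges are (start, end) frame pairs chosen with a hypothetical
    range selector; all names here are illustrative.
    """

    def __init__(self):
        # keyword -> list of (clip_id, start, end) tagged ranges
        self.ranges = {}

    def tag_range(self, keyword, clip_id, start, end):
        self.ranges.setdefault(keyword, []).append((clip_id, start, end))

    def tag_clip(self, keyword, clip_id, clip_len):
        # Tagging an entire clip is the degenerate range [0, clip_len).
        self.tag_range(keyword, clip_id, 0, clip_len)

    def filter(self, keyword):
        # Selecting the keyword collection shows each tagged range.
        return self.ranges.get(keyword, [])

tagger = RangeTagger()
tagger.tag_range("Interview", "clip1", 10, 20)  # tag frames 10-20 only
tagger.tag_clip("Interview", "clip2", 30)       # tag the whole clip
print(tagger.filter("Interview"))  # [('clip1', 10, 20), ('clip2', 0, 30)]
```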
- the user can also filter a display area to display each clip or portion of a clip associated with the keyword by selecting the keyword collection.
- each media clip associated with a keyword is displayed with a graphical indication of the association. This allows a user to quickly assess a large group of media clips and determine which clips or ranges of the clips have been tagged with one or more keywords.
- the graphical indication, in some embodiments, spans horizontally across a portion of a clip's representation (e.g., a thumbnail representation, filmstrip representation, waveform representation, etc.).
- the keyword collections are provided in a hierarchical structure (e.g., of a sidebar display area) with different types of collections.
- each keyword collection may be a part of another collection such as a media collection, disk collection, etc.
- some embodiments provide filter or smart collections in the hierarchical structure.
- a user can create a smart collection and customize the smart collection to include or exclude each item associated with one or more different keywords.
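The include/exclude behavior of a smart collection can be sketched as a predicate over each item's keyword set: keep items tagged with any included keyword, then drop items tagged with any excluded keyword. The function name, item ids, and keywords below are illustrative assumptions.

```python
def smart_collection(items, include=(), exclude=()):
    """Sketch of a keyword-driven smart collection.

    `items` maps an item id to its set of keywords.  An empty
    `include` list means "match everything not excluded".
    """
    inc, exc = set(include), set(exclude)
    return sorted(
        item for item, keywords in items.items()
        if (not inc or keywords & inc) and not (keywords & exc)
    )

items = {
    "clip1": {"Interview", "Indoor"},
    "clip2": {"B-roll", "Outdoor"},
    "clip3": {"Interview", "Outdoor"},
}
print(smart_collection(items, include=["Interview"], exclude=["Outdoor"]))
# ['clip1']
```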
- a keyword collection When a keyword collection is deleted, some embodiments automatically disassociate or untag each item associated with a keyword of the keyword collection. This allows a user to quickly remove keyword associations from a large group of tagged items. The user can also disassociate a portion of a range associated with a keyword. In some embodiments, when multiple keywords are selected, a display area displays only a union of items associated with keywords of the keyword collections. In some embodiments, a keyword collection can be renamed to quickly associate its contents with another keyword (e.g., with the new name of the keyword collection) or to quickly merge or combine two keyword collections.
- Some embodiments provide a keyword tagging tool for creating keyword associations.
- a user can select a clip or a portion of a clip, and select a keyword from the keyword tagging tool.
- the keyword tagging tool may provide suggested keywords for an auto-fill operation.
- the keyword tagging tool, in some embodiments, includes several fields for inputting keyword shortcuts. A user can populate one or more of these fields and use shortcut keys to quickly tag different items.
- one or more different types of analysis are performed on a set of items to automatically organize the set into different keyword collections.
- a clip may be organized into different keyword collections based on an analysis of the number of people (e.g., one person, two persons, group, etc.) in the clip and/or a type of shot (e.g., a close-up, medium, or wide shot).
- Other types of analysis may include image stabilization analysis (e.g., camera movement), color balance analysis, audio analysis (e.g., mono, stereo, silent channels), metadata analysis, etc.
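The analysis-driven organization above can be sketched as running a set of analyzer functions over each clip and filing the clip into a collection named after each result. The analyzer shown is a stand-in: a real implementation would run people detection, shot-type classification, stabilization analysis, and so on; every name and label here is an illustrative assumption.

```python
def auto_collections(clips, analyzers):
    """Sketch: file each clip into a keyword collection named after
    each analyzer's result label (None means "no label")."""
    collections = {}
    for clip_id, clip in clips.items():
        for analyze in analyzers:
            label = analyze(clip)   # e.g. "One Person", "Wide Shot"
            if label:
                collections.setdefault(label, set()).add(clip_id)
    return collections

# Illustrative analyzer: maps a (pretend) detected people count to a
# collection label.  Real analysis would inspect the clip's frames.
def people_label(clip):
    n = clip.get("people", 0)
    return {0: None, 1: "One Person", 2: "Two Persons"}.get(n, "Group")

clips = {"c1": {"people": 1}, "c2": {"people": 5}, "c3": {"people": 1}}
print(auto_collections(clips, [people_label]))
```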
- the analysis operations are performed when one or more items are imported into an application (e.g., media editing application).
- the application identifies a source directory in which a set of items is located.
- the application (1) associates the name of the source directory with the set of items and (2) creates a keyword collection that contains the set of items. Accordingly, the imported items do not have to be manually organized into the keyword collection.
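Directory-based auto-tagging at import time can be sketched by taking each item's parent directory name as its keyword, so imported items land in a matching collection without manual filing. The paths and names below are illustrative assumptions.

```python
from pathlib import Path

def import_items(paths, collections):
    """Sketch: on import, tag each item with the name of its source
    directory and file it into a keyword collection of that name."""
    for p in map(Path, paths):
        keyword = p.parent.name or "Untitled"
        collections.setdefault(keyword, set()).add(p.name)
    return collections

cols = import_items(
    ["/media/Vacation/beach.mov", "/media/Vacation/surf.mov",
     "/media/Interviews/ceo.mov"],
    {})
print(sorted(cols))   # ['Interviews', 'Vacation']
```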
- Some embodiments provide a novel list view that displays a list of media clips and, for each media clip, displays each keyword associated with the media clip.
- the list view includes a list area for displaying the list of media clips and keywords. For example, when a clip is associated with one or more keywords, the list area displays the clip with each associated keyword.
- the list view includes a preview section for displaying a representation of a clip selected from the list view's list area. For example, the preview section may display a filmstrip representation or a sequence of thumbnail images corresponding to a set of frames in a video clip.
- Some embodiments allow a user to associate a keyword with an entire clip or a portion of the clip using the list view. For example, the user can (1) select a video clip from the list area to display the clip's filmstrip representation in the preview section, (2) paint over an area of the representation to select a range of the clip, and (3) tag the range with a keyword. Alternatively, the user can select different ranges of a clip by selecting one or more keywords from the list area. The user can also filter the list area to display each clip or portion of a clip associated with a keyword.
- each media clip associated with a keyword is displayed in the preview section of the list view with a graphical indication of the association.
- the graphical indication, in some embodiments, spans horizontally across a portion of a clip's representation. For example, multiple graphical indications may be shown on a video clip's filmstrip representation to indicate different portions that are associated with one or more keywords.
- the list view displays information related to a keyword range.
- the list area may list a starting point and an ending point for each keyword associated with a clip's range.
- the list area displays a duration of the keyword range. In this manner, a user of the list view can quickly see in detail which portions of the clip are associated with one or more keywords.
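The list-area rows described above can be sketched as one record per keyword range, carrying the start point, end point, and the derived duration. Times are in seconds and the field names are illustrative assumptions.

```python
def keyword_rows(tagged_ranges):
    """Sketch of the list-area rows for one clip: each keyword range
    is listed with its start point, end point, and duration.

    `tagged_ranges` is a list of (keyword, start_sec, end_sec)."""
    return [
        {"keyword": kw, "start": s, "end": e, "duration": e - s}
        for kw, s, e in sorted(tagged_ranges, key=lambda r: r[1])
    ]

rows = keyword_rows([("Interview", 12.0, 30.5), ("B-roll", 0.0, 4.0)])
print(rows[0]["duration"])   # 4.0
```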
- the list area displays other items associated with a clip. These items include at least one of (1) a marker, (2) a filter or smart collection, and (3) a ratings marker.
- a marker marks a point in time along a clip's range. For example, a user can mark a point with a marker, and specify a note or make the marker a to-do item.
- the filter or smart collection may indicate in the list view a range of a clip that, based on an analysis, includes people (e.g., one person, two persons, a group, etc.) and/or a type of shot (e.g., a close-up, medium, or wide shot).
- other types of analysis may include image stabilization analysis (e.g., camera movement), color balance analysis, audio analysis (e.g., mono, stereo, silent channels), metadata analysis, etc.
- the list view is an editing tool that can be used to perform a number of different editing operations.
- the list view allows an editor to input notes for media clips or keyword ranges. These notes can be notes that an editor makes regarding the contents of an entire media clip or a range of the media clip associated with a keyword.
- the editing operations entail any one of (1) creating a composite or nested clip, (2) creating markers (e.g., to-do items, completed items), (3) adding clips to a timeline for defining a composite presentation, etc.
- the list view is a playback tool that allows a user to play through one or more clips in the list. For example, when a user selects a clip from a list of clips and inputs a playback command, several clips from the list may be played without interruption starting from the selected clip. In some such embodiments, the user can jump to a different marked section (e.g., a keyword range) or different clip and continue playback starting from the marked section or clip.
- Some embodiments provide a timeline search tool that includes a search field that allows a user to search for clips.
- the timeline search tool may display each clip in a list of clips.
- the timeline search tool may filter this list to display only each clip that satisfies or matches the search parameter.
- each clip in the list of clips is selectable such that a selection of the clip causes the timeline to navigate to the position of the clip in the timeline. For example, when a composite presentation includes many clips that make up a sequence or composite presentation, an editor can easily search the timeline to identify a particular clip and navigate the timeline to the particular clip. Accordingly, the timeline search tool allows the editor to search and navigate the timeline to identify clips.
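The search-and-navigate behavior can be sketched as an index of (position, name, kind) entries: filtering matches the search term (optionally restricted to a clip type), and selecting a result yields the timeline position to jump to. Class, field, and kind names are illustrative assumptions.

```python
class TimelineIndex:
    """Sketch of a timeline search tool over (position, name, kind)
    clip entries; all names here are illustrative."""

    def __init__(self, entries):
        self.entries = entries   # list of (position_sec, name, kind)

    def search(self, term="", kinds=None):
        # Filter the clip list by search term and, optionally, by
        # clip type (e.g. video, audio, title).
        term = term.lower()
        return [e for e in self.entries
                if term in e[1].lower()
                and (kinds is None or e[2] in kinds)]

    def position_of(self, name):
        # Selecting a result navigates the timeline to the clip.
        for pos, clip_name, _ in self.entries:
            if clip_name == name:
                return pos
        return None

idx = TimelineIndex([
    (0.0, "Opening Title", "title"),
    (5.0, "Interview A", "video"),
    (5.0, "Room Tone", "audio"),
    (42.0, "Interview B", "video"),
])
print(idx.search("interview", kinds={"video"}))
print(idx.position_of("Interview B"))   # 42.0
```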
- the timeline search tool displays different types of clips. These different types of clips include audio clips, video clips, and title clips.
- the timeline search tool allows a user to granularly search and navigate the timeline by specifying the type of clips for which to search.
- the timeline search tool, in some embodiments, provides a search function for searching all types of clips.
- the timeline search tool allows a user to search for a clip or a portion of a clip associated with one or more keywords.
- the timeline search tool displays a list of each keyword associated with a clip or a portion of a clip.
- the timeline search tool filters this list to display only each keyword that satisfies or matches the search parameter. For example, when a composite presentation includes many clips tagged with different actors' names, an editor can easily search and navigate the timeline to identify ranges of clips tagged with a particular actor's name.
- a media clip in a timeline may be associated with different types of items. These types of items include at least one of (1) an analysis keyword, (2) a marker, (3) a filter or smart collection, and (4) a ratings marker.
- the timeline search tool allows a user to granularly search and navigate the timeline by specifying the type of items to search.
- the timeline search tool provides a search function for searching all types of items, in some embodiments.
- the timeline search tool allows an editor to search for markers that are placed in the timeline.
- markers can have “to do” notes associated with them, in some embodiments. These notes can be notes that an editor makes as reminders to himself or others regarding tasks that have to be performed.
- the method of some embodiments displays (1) the notes associated with the marker and/or (2) a check box to indicate whether the task associated with the marker has been completed.
- the editor can check the box for a marker in the search view in order to indicate that the marker has been completed.
- the timeline search tool displays each item (e.g., keyword, clip) in chronological order, starting from a first item along the timeline to a last item.
- the timeline search tool includes its own playhead that moves along the list of items. This playhead moves synchronously with the timeline's playhead, in some embodiments.
- the timeline search tool provides a search function for finding missing clips.
- a missing clip is a clip imported into an application that does not link back to its source. For example, a user might have moved or deleted a source file on a hard disk to break the link between the application's file entry and the source file.
- the timeline search tool displays each missing clip in a list. When the missing clip is selected from the list, some embodiments provide a set of options to re-establish the link for the missing clip.
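Detecting missing clips can be sketched as checking whether each clip's recorded source path still resolves to a file on disk. The function name and the clip-id-to-path mapping are illustrative assumptions.

```python
import os

def find_missing(clips):
    """Sketch: a clip is "missing" when its recorded source path no
    longer exists (the user moved or deleted the source file).

    `clips` maps a clip id to its source file path."""
    return sorted(clip_id for clip_id, path in clips.items()
                  if not os.path.exists(path))

print(find_missing({"clip1": "/no/such/file.mov"}))   # ['clip1']
```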
- the timeline search tool displays a total time for the selected items.
- the timeline search tool may display a total time for multiple clips, multiple ranges of clips associated with one or more keywords, etc. Displaying the total time can be useful in a number of different ways. For example, an editor may be restricted to adding only 30 seconds of stock footage. When the stock footage is tagged as such, the editor can select those items corresponding to the stock footage in the timeline search tool and know whether the total duration exceeds 30 seconds.
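The total-time display reduces to summing the durations of the selected items, whether whole clips or tagged sub-ranges; the 30-second stock-footage check in the example above is then a simple comparison. Items here are (start, end) pairs in seconds, an illustrative assumption.

```python
def total_selected_time(selection):
    """Sketch: sum the durations of selected index items, each given
    as a (start_sec, end_sec) pair."""
    return sum(end - start for start, end in selection)

# Selected ranges tagged as stock footage, checked against a
# hypothetical 30-second usage budget.
stock = [(0.0, 12.5), (40.0, 58.0), (90.0, 95.0)]
total = total_selected_time(stock)
print(total, total <= 30.0)   # 35.5 False
```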
- FIG. 1 illustrates a graphical user interface (“GUI”) of a media editing application with a keyword association tool.
- FIG. 2 illustrates the GUI after associating a video clip with a keyword.
- FIG. 3 illustrates specifying a range of a video clip to associate with a keyword.
- FIG. 4 illustrates an example GUI of a media-editing application of some embodiments.
- FIG. 5 conceptually illustrates several example data structures of several objects of the media editing application.
- FIG. 6 illustrates creating a keyword association by dragging and dropping a range of a clip from one keyword collection to another.
- FIG. 7 illustrates creating a compound clip and associating the compound clip with a keyword.
- FIG. 8 illustrates deleting a clip range from a keyword collection.
- FIG. 9 illustrates removing a keyword from a portion of a clip range.
- FIG. 10 provides an illustrative example of disassociating multiple ranges of video clips by deleting a keyword collection.
- FIG. 11 illustrates combining two keyword collections.
- FIG. 12 provides an illustrative example of selecting multiple keyword collections from the event library.
- FIG. 13 provides an illustrative example of selecting a video clip range.
- FIG. 14 provides an illustrative example of dragging and dropping clips from one event collection to another event collection.
- FIG. 15 provides an illustrative example of dragging and dropping keyword collections from one event collection to another event collection.
- FIG. 16 provides an illustrative example of merging two event collections.
- FIG. 17 conceptually illustrates a process for associating a range of a media clip with a keyword.
- FIG. 18 illustrates an example of a tagging tool according to some embodiments.
- FIG. 19 illustrates the media editing application automatically assigning a shortcut key for a previously used keyword.
- FIG. 20 illustrates an example of using the auto-complete feature of the tagging tool.
- FIG. 21 illustrates an example of using the keyword association tool to perform an auto-apply operation.
- FIG. 22 illustrates removing a keyword from a video clip using the tagging tool.
- FIG. 23 conceptually illustrates a state diagram of a media-editing application of some embodiments.
- FIG. 24 provides an illustrative example of creating a keyword collection by analyzing content.
- FIG. 25 illustrates an example of different groupings that are created based on an analysis of video clips.
- FIG. 26 provides an illustrative example of different groupings that are created after the media editing application has analyzed and fixed image stabilization problems.
- FIG. 27 illustrates automatically importing media clips from different folders of the file system.
- FIG. 28 conceptually illustrates a process for automatically organizing media clips into different keyword collections by analyzing the media clips.
- FIG. 29 provides an illustrative example of creating a smart collection.
- FIG. 30 provides an illustrative example of filtering the smart collection based on keyword.
- FIG. 31 illustrates filtering the event browser based on keywords.
- FIG. 32 illustrates an example of rating a media clip.
- FIG. 33 illustrates an example of filtering an event collection based on ratings or keywords.
- FIG. 34 illustrates the media editing application with a list view according to some embodiments.
- FIG. 35 illustrates expanding a media clip in the list view.
- FIG. 36 illustrates an example of simultaneously expanding multiple different clips in the list view.
- FIG. 37 illustrates the list view with several notes fields for adding notes.
- FIG. 38 illustrates selecting different ranges of a media clip using the list view.
- FIG. 39 illustrates selecting multiple ranges of a media clip using the list view.
- FIG. 40 conceptually illustrates a process for displaying and selecting items in a list view.
- FIG. 41 conceptually illustrates a process for playing items in a list view.
- FIG. 42 illustrates adding a marker to a clip using the list view.
- FIG. 43 provides an illustrative example of editing a marker.
- FIG. 44 provides an illustrative example of defining a marker as a to-do item.
- FIG. 45 provides an illustrative example of adding a video clip to a timeline.
- FIG. 46 provides an illustrative example of a timeline search tool according to some embodiments.
- FIG. 47 provides an illustrative example of the association between the timeline playhead and the index playhead.
- FIG. 48 provides an illustrative example of filtering the timeline search tool.
- FIG. 49 provides an illustrative example of filtering the timeline search tool based on video, audio, and titles.
- FIG. 50 provides an illustrative example of navigating the timeline using the search tool.
- FIG. 51 provides an example workflow for searching the timeline for a to-do marker using the search tool and checking the to-do marker as a completed item.
- FIG. 52 provides an illustrative example of using the timeline search tool to search a list of keywords and markers.
- FIG. 53 provides an illustrative example of using the timeline search tool to search a list of clips.
- FIG. 54 provides an illustrative example of using the timeline search tool to display a time duration for ranges of clips.
- FIG. 55 provides an illustrative example of displaying the total time of selected clip items in the index area of the timeline search tool.
- FIG. 56 provides an illustrative example of using the timeline search tool to find missing clips.
- FIG. 57 conceptually illustrates a process for searching and navigating a timeline of a media editing application.
- FIG. 58 conceptually illustrates several example data structures for a searchable and navigable timeline.
- FIG. 59 conceptually illustrates a software architecture of a media editing application of some embodiments.
- FIG. 60 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.
- Some embodiments of the invention provide a novel keyword association tool for organizing media content.
- the keyword association tool is integrated into a sidebar display area as a keyword collection.
- a user can create different keyword collections for different keywords.
- To associate a media clip with a keyword, the user can drag and drop the clip onto a corresponding keyword collection.
- the same technique can be used to associate multiple clips with one keyword by simultaneously dragging and dropping the clips onto a keyword collection.
- the keyword association tool is provided as a set of components of a media editing application.
- the media editing application automatically associates keywords with the media clips based on an analysis of the clips (e.g., based on a people detection operation). Each keyword can be associated with the entire clip or a portion of the clip.
- the media editing application of some embodiments includes a first display area for displaying different keyword collections and a second display area for displaying media content.
- the first display area is referred to as an event library and the second display area is referred to as an event browser. This is because the keyword collections are hierarchically organized under an event category in these examples. However, the keyword collections may exist in their own hierarchy or as part of a different hierarchy.
- FIG. 1 illustrates a graphical user interface (“GUI”) 100 of a media editing application with such a keyword association tool.
- This figure illustrates the GUI 100 at four different stages 105 , 110 , 115 , and 120 .
- these stages show how an event library 125 and an event browser 130 can be used to associate a video clip with a keyword.
- the GUI 100 includes the event library 125 , the event browser 130 , and a set of controls 155 - 175 .
- the event library 125 is a sidebar area of the GUI 100 that displays several selectable items representing different collections.
- the collections are listed hierarchically starting with a storage collection 102 , followed by a “year” collection 104 , and an event collection 106 .
- Each particular collection may have multiple other child collections.
- each particular collection includes a corresponding UI item for collapsing or expanding the particular collection in the event library 125 .
- a user of the GUI 100 can select a UI item 108 to hide or reveal each event collection that is associated with the “year” collection 104 .
- when media content is imported into the application's library, the application automatically organizes the content into one or more collections. For example, a selectable item representing a new event may be listed in the event library 125 when a set of video clips is imported from a camcorder, digital camera, or hard drive.
- the application may also automatically specify a name for each collection. For example, in FIG. 1 , the names of collections 104 and 106 are specified by the application based on the dates associated with the imported clips and the import date. As will be described in detail by reference to FIG. 27 below, some embodiments automatically create different keyword collections for imported content.
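The automatic naming described above can be sketched as follows. This is a minimal illustration; the function name, the fallback rule, and the exact name format are assumptions modeled on the "New Event 2-5-11" example that appears later in the document, not the application's actual logic:

```python
from datetime import date

def default_event_name(clip_dates, import_date):
    # Illustrative rule (assumed): use the earliest clip date if the
    # imported clips carry date metadata; otherwise fall back to the
    # date of the import itself.
    d = min(clip_dates) if clip_dates else import_date
    return f"New Event {d.month}-{d.day}-{d:%y}"
```

For example, importing undated clips on February 5, 2011 would yield the default name "New Event 2-5-11", which the user could then rename to something descriptive such as "European Vacation".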
- the event browser 130 is an area in the GUI 100 through which the application's user can organize media content into different collections. To allow the user to easily find content, the event browser 130 may be sorted (e.g., by creation date, reel, scene, clip duration, media type, etc.).
- video clips are represented as thumbnail images. However, depending on the user's preference, the clips may be represented differently. For instance, a video clip may be represented as a filmstrip with several images of the clip displayed as a sequence of thumbnail images.
- audio clips are represented differently from video clips in the event browser 130 .
- an audio clip may be represented as a waveform. That is, a representation of the audio clip may indicate the clip's signal strength at one or more instances in time.
- a video clip representation may include a representation of its associated audio.
- the representation 140 includes a waveform 112 . This waveform 112 spans horizontally across the representation 140 to graphically indicate signal strength of the video clip's audio.
- the set of controls 155 - 175 includes selectable UI items for modifying the display or view of the event library 125 and event browser 130 .
- a user of the GUI 100 can select the control 155 to hide or reveal the event library 125 .
- the user can also select the control 160 to show a drop down list of different sorting or grouping options for collections in the event library 125 and representations in the event browser 130 .
- a user's selection of the control 175 reveals UI items for (1) adjusting the size of clip representations and (2) adding/removing waveforms to/from representations of video clips.
- a selection of the control 165 causes the event browser 130 to switch to a list view from the displayed thumbnails view (e.g., clips view, filmstrip view). Several examples of such a list view are described below in Section VIII.
- the control 170 includes a duration slider that controls how many clips are displayed or how much detail appears in the event browser 130 .
- the duration control includes a slider bar with a knob; different positions along the bar represent different amounts of time.
- the knob can be moved to expand or contract the amount of detail (e.g., the number of thumbnails representing different frames) shown in each clip's filmstrip representation. Showing more thumbnails for each clip decreases the overall number of clips shown in the event browser 130 . In some embodiments, a shorter time duration displays more detail or more thumbnails, thereby lengthening each clip's filmstrip representation.
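The inverse relationship between the slider's time setting and filmstrip detail can be sketched as an illustrative calculation (the function and parameter names are hypothetical, not the application's API):

```python
import math

def thumbnails_for_clip(clip_duration_s, seconds_per_thumbnail):
    # Each thumbnail covers a fixed window of time, so a shorter window
    # (a shorter duration setting on the slider) yields more thumbnails
    # and a longer filmstrip representation for the same clip.
    return max(1, math.ceil(clip_duration_s / seconds_per_thumbnail))
```

For a 60-second clip, moving the knob from 10 seconds per thumbnail down to 2 seconds per thumbnail grows the filmstrip from 6 to 30 thumbnails, which in turn reduces how many clips fit in the event browser at once.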
- the operations of associating a video clip with a keyword will now be described by reference to the state of this GUI during the four stages 105 , 110 , 115 , and 120 that are illustrated in FIG. 1 .
- the event library 125 lists several different collections. The user selects the event collection 106 to display representations 135 - 150 in the event browser 130 . As mentioned above, the representations 135 - 150 represent different video clips imported into the application's library.
- the second stage 110 shows the GUI 100 after the user selects an area of the event library 125 .
- the selection causes a context menu 118 to appear.
- the context menu 118 displays several menu items related to the event browser 130 .
- the context menu 118 displays menu items for creating and deleting an event, and a menu item 114 for creating a new keyword collection.
- the context menu 118 includes an option to create folders. In some embodiments, these folders are for storing one or more keyword collections or keyword folders.
- when the user selects the menu item 114 in the context menu 118, the user is presented with a keyword collection 116, as illustrated in the third stage 115.
- the keyword collection 116 is displayed in the event library 125 .
- the keyword collection 116 is integrated into the sidebar area and categorically listed under the event collection 106 .
- the keyword collection 116 includes graphical and textual elements.
- the graphical element indicates to the user (e.g., through a key symbol) that the collection 116 represents a keyword. Also, to distinguish the keyword collection 116 from other collections, its graphical element is displayed in a different color from the graphical elements of those other collections.
- the textual element of the keyword collection 116 represents the keyword.
- the textual element represents a word, term (several words), phrase, or characters (e.g., string, alphanumeric symbols) that the user can use to associate with any media content represented in the event browser 130 .
- the application has specified a default keyword name for the collection. Also, the textual element is highlighted to indicate that a more meaningful keyword can be inputted for the collection 116 .
- the fourth stage 120 shows one way of associating a piece of media content with a keyword.
- the user selects the thumbnail representation 135 of a video clip.
- the selection causes the representation 135 to be highlighted in the event browser 130 .
- the user then drags and drops the representation 135 onto the keyword collection 116 to associate the video clip with the keyword.
- the keyword collection 116 is integrated in a sidebar area that has traditionally been reserved for listing bins or folders.
- a user of the GUI does not have to search for a separate keyword tool to use the application's keyword functionality.
- the keyword collection operates in a manner similar to what many computer users have come to know as a bin or a folder. In other words, the keyword collection acts as a virtual bin or virtual folder that the user can drag and drop items onto in order to create keyword associations.
- FIG. 2 illustrates the GUI 100 after associating a video clip with a keyword. Specifically, in two operational stages 205 and 210 , this figure illustrates how the event library 125 can be used to filter the event browser 130 to only display content of the keyword collection 116 .
- the event library 125 and the event browser 130 are the same GUI elements as those described in FIG. 1 .
- the first stage 205 illustrates the contents of the event collection 106 . Specifically, it illustrates that the representation 135 is not removed from the event browser 130 after the drag and drop operation as illustrated in FIG. 1 . As shown in the first stage 205 , the representation 135 remains in the event collection 106 .
- the second stage 210 shows a user's selection of the keyword collection 116. The selection causes the event browser 130 to display the content of the keyword collection 116. Specifically, the selection causes the event browser to be filtered down to the video clip associated with the keyword.
- the media editing application displays clips differently based on their association with one or more keywords. This allows users to quickly assess a large group of media clips and see which ones are associated or not associated with any keywords. For example, in FIG. 2 , a bar 220 is displayed across each of the representations 135 and 225. The color of the bar (i.e., blue) corresponds to the color of the keyword collection in the event library 125. Also, the representations 140 - 150 are displayed without any bars. This indicates to a user that the video clips associated with these representations are not marked with any keywords.
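The filtering behavior behind selecting a keyword collection can be sketched as follows. This is a minimal sketch using an assumed dict-based clip structure, not the application's actual data model:

```python
def clips_with_keyword(clips, keyword):
    # A keyword collection does not contain clips; it filters the event's
    # clips down to those whose keyword tags include the given keyword.
    # Clips with no keyword tags are simply filtered out.
    return [c for c in clips if keyword in c.get("keywords", ())]

# Hypothetical event contents mirroring the FIG. 2 example: two tagged
# clips and one untagged clip.
event_clips = [
    {"name": "clip 135", "keywords": {"kids"}},
    {"name": "clip 140", "keywords": set()},
    {"name": "clip 225", "keywords": {"kids"}},
]
```

Selecting the "kids" keyword collection would then display only "clip 135" and "clip 225", while the event collection itself continues to display all three clips.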
- the media editing application allows a user to mark a range of a clip or the entire clip.
- a user may specify a time duration or several different time durations in which one or more keywords are applicable.
- a user may specify that an audio clip includes crowd noise starting at one point in time and ending at another point, and then tag that range as “crowd noise”.
- the entire range of the video clip associated with the representation 135 is marked with the keyword. This is indicated by the bar 220 , as it spans horizontally across the representation 135 in the event browser 130 .
- FIG. 3 illustrates specifying a range of a video clip to associate with a keyword. Specifically, this figure shows how a representation 320 of the video clip can be used to specify the range.
- Four operational stages 300 - 315 are shown in FIG. 3 .
- This figure includes the event browser 130 , the representation 320 , and a preview display area 325 .
- the event browser 130 is the same as the one described above by reference to FIG. 1 .
- a video clip representation displays a thumbnail of an image in the video clip.
- a media clip's representation is an interactive UI item that dynamically displays different images.
- the representation in some embodiments, can be used to preview audio of a media clip.
- One example of such a representation is shown in FIG. 3 as the representation 320.
- the representation includes a playhead 335 and a range selector 340 .
- the representation 320 can be selected to display different thumbnails and play different audio samples.
- the width of the representation 320 represents a virtual timeline.
- the user can select an interior location within the representation.
- the interior location corresponds to a particular time on the virtual timeline.
- the selection causes the representation to display a thumbnail image of the video clip at that particular time instance.
- an audio sample that corresponds to the particular time instance is played when the user selects the interior location.
- the playhead 335 moves along the representation's virtual timeline. When a user selects an interior location, the playhead 335 moves along the virtual timeline to the selected interior location. The user can use this playhead 335 as a reference point to display different images and play different audio samples associated with the video clip.
- the range selector 340 allows the user to define a range of a clip to be marked with a keyword.
- the range selector 340 allows the user to specify a range to add to a timeline.
- the user can activate the range selector 340 by selecting a representation. The selection causes the range selector 340 to appear. The user can then move the selector's edges along the representation's virtual timeline to specify a range.
- the preview display area 325 displays a preview of a composite presentation that the media editing application creates by compositing several media clips (e.g., audio clips, video clips, etc.). As shown in FIG. 3 , the preview display area 325 displays a preview of a clip selected from the event browser 130 . For example, when a user selects a representation's interior location that corresponds to a particular time instance, the preview display area presents a preview of the representation's associated video clip at that particular instance in time.
- the first stage 300 of FIG. 3 shows the event browser 130 and the preview display area 325 .
- the user has selected an interior location within the representation 320 .
- the selection causes the playhead 335 to move along the representation's virtual timeline to the selected interior location.
- the selection also causes the preview display area 325 to display a preview of the representation's associated video clip at a time instance corresponding to the playhead 335.
- the second stage 305 illustrates selection and movement of an edge of the range selector 340 .
- the left edge of the range selector 340 is moved along the virtual timeline to about mid-point. This left edge represents a starting point of a range of the video clip.
- the third stage 310 shows selection and movement of the opposite edge of the range selector 340 .
- the right edge of the range selector 340 is moved towards the left edge.
- the right edge represents an ending point of the range of the video clip.
- the fourth stage 315 shows the event browser 130 after a keyword is associated with the range of the video clip.
- a bar 330 is displayed across only a portion of the representation. This portion represents the range of the video clip that is marked or associated with the keyword.
- a keyword range is specified using the range selector 340 .
- the media editing application allows a user to modify a defined keyword range. For example, when a keyword is applied to a particular range of a clip, the media application may provide UI items and/or shortcut keys to modify the particular range. In this way, the user of the media editing application can define a keyword collection to include specific ranges of one or more clips.
- when a range of a clip is marked with multiple keywords, only one keyword representation (e.g., a keyword bar) is displayed.
- a filmstrip representation in an event browser may only display one keyword bar over a range that is associated with multiple keywords.
- some embodiments provide a keyword list (e.g., a popup list) for choosing among the multiple keywords. By default, the keyword with the shortest range is selected, but the user can select a different range by selecting its corresponding keyword in the keyword list.
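The default-selection rule can be sketched as follows (the function name and the range encoding are assumptions for illustration):

```python
def default_keyword(tagged_ranges):
    # tagged_ranges maps each keyword applied over a clip range to its
    # (start, end) pair in seconds. The keyword covering the shortest
    # span is selected by default; the user may override this choice
    # from the keyword list.
    return min(tagged_ranges,
               key=lambda k: tagged_ranges[k][1] - tagged_ranges[k][0])
```

For instance, if "crowd noise" spans 30 seconds of a clip and "kids" spans only 7 seconds of the same range, "kids" would be the default selection.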
- An exemplary media editing application that implements the keyword association features of some embodiments will be described below in Section I.
- Section II describes example keyword operations performed with the media editing application.
- Section IV describes several example operations performed with a keyword tagging tool.
- Section V describes several operations performed by the media editing application to automatically create different keyword collections.
- Section VI describes creating smart collections using keywords.
- Section VII describes marking media content with different ratings.
- Section VIII describes a list view showing keywords associated with media content.
- Section IX describes markers.
- Section X describes a timeline search and index tool for searching and navigating a timeline.
- Section XI describes a software architecture of a media editing application of some embodiments.
- Section XII describes a computer system which implements some embodiments of the invention.
- FIG. 4 illustrates a graphical user interface (GUI) 400 of a media-editing application of some embodiments.
- the GUI 400 includes several display areas which may be adjusted in size, opened or closed, replaced with other display areas, etc.
- the GUI 400 includes a clip library 405 (also referred to as an event library), a clip browser 410 (also referred to as an event browser), a timeline 415 , a preview display area 420 , the timeline search tool 445 , an inspector display area 425 , an additional media display area 430 , and a toolbar 435 .
- the event library 405 includes a set of folder-like or bin-like representations through which a user accesses media clips that have been imported into the media-editing application. Some embodiments organize the media clips according to the device (e.g., physical storage device such as an internal or external hard drive, virtual storage device such as a hard drive partition, etc.) on which the media represented by the clips are stored. Some embodiments also enable the user to organize the media clips based on the date the media represented by the clips was created (e.g., recorded by a camera).
- users may group the media clips into “events”, or organized folders of media clips. For instance, a user might give the events descriptive names that indicate what kind of media is stored in the event (e.g., the “New Event 2-5-11” event shown in clip library 405 might be renamed “European Vacation” as a descriptor of the content).
- the media files corresponding to these clips are stored in a file storage structure that mirrors the folders shown in the clip library.
- each keyword collection is represented as a type of bin or folder that can be selected to reveal each media clip associated with a keyword of the particular keyword collection.
- some embodiments enable a user to perform various clip management actions. These clip management actions may include moving clips between events, creating new events, merging two events together, duplicating events (which, in some embodiments, creates a duplicate copy of the media to which the clips in the event correspond), deleting events, etc.
- some embodiments allow a user to create sub-folders or sub-collections of an event. These sub-folders may include media clips filtered based on tags (e.g., keyword tags). For instance, in the “New Event 2-5-11” event, all media clips showing children might be tagged by the user with a “kids” keyword. Then these particular media clips could be displayed in a sub-folder or keyword collection of the event that filters clips in the event to only display media clips tagged with the “kids” keyword.
- the clip browser 410 allows the user to view clips from a selected folder or collection (e.g., an event, a sub-folder, etc.) of the clip library 405 .
- the collection “New Event 2-5-11” is selected in the clip library 405 , and the clips belonging to that folder are displayed in the clip browser 410 .
- Some embodiments display the clips as thumbnail filmstrips, as shown in this example. By moving a cursor (or a finger on a touchscreen) over one of the thumbnails (e.g., with a mouse, a touchpad, a touchscreen, etc.), the user can skim through the clip.
- the media-editing application associates that horizontal location with a time in the associated media file, and displays the image from the media file for that time.
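The skimming behavior maps a horizontal cursor position over a thumbnail filmstrip to a time in the associated media file; a minimal sketch (names are hypothetical):

```python
def time_at_cursor(cursor_x, thumb_left, thumb_width, clip_duration_s):
    # The thumbnail's width acts as a virtual timeline: the cursor's
    # fractional position across the thumbnail selects a proportional
    # time in the associated media file, clamped to the clip's bounds.
    frac = (cursor_x - thumb_left) / thumb_width
    frac = min(max(frac, 0.0), 1.0)
    return frac * clip_duration_s
```

The image displayed for that time (and, in some embodiments, the audio sample played) then tracks the cursor as the user moves it across the thumbnail.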
- the user can command the application to play back the media file in the thumbnail filmstrip.
- the thumbnails for the clips in the browser display an audio waveform underneath the clip that represents the audio of the media file.
- the audio plays as well.
- Many of the features of the clip browser are user-modifiable. For instance, in some embodiments, the user can modify one or more of the thumbnail size, the percentage of the thumbnail occupied by the audio waveform, whether audio plays back when the user skims through the media files, etc.
- some embodiments enable the user to view the clips in the clip browser 410 in a list view.
- the clips are presented as a list (e.g., with clip name, duration, etc.).
- Some embodiments also display a selected clip from the list in a filmstrip view at the top of the clip browser 410 so that the user can skim through or playback the selected clip.
- the list view displays different ranges of media associated with keywords. The list view in some embodiments allows users to select different ranges of a media clip and/or navigate to different sections of the media clip.
- the timeline 415 provides a visual representation of a composite presentation (or project) being created by the user of the media-editing application. Specifically, it displays one or more geometric shapes that represent one or more media clips that are part of the composite presentation.
- the timeline 415 of some embodiments includes a primary lane (also called a “spine”, “primary compositing lane”, or “central compositing lane”) as well as one or more secondary lanes (also called “anchor lanes”).
- the spine represents a primary sequence of media which, in some embodiments, does not have any gaps.
- the clips in the anchor lanes are anchored to a particular position along the spine (or along a different anchor lane).
- Anchor lanes may be used for compositing (e.g., removing portions of one video and showing a different video in those portions), B-roll cuts (i.e., cutting away from the primary video to a different video whose clip is in the anchor lane), audio clips, or other composite presentation techniques.
- the user can add media clips from the clip browser 410 to the timeline 415 in order to include the clips in a presentation represented in the timeline.
- the user can perform further edits to the media clips (e.g., move the clips around, split the clips, trim the clips, apply effects to the clips, etc.).
- the length (i.e., horizontal expanse) of a clip in the timeline is a function of the length of the media represented by the clip.
- a media clip occupies a particular length of time in the timeline.
- the clips within the timeline are shown as a series of images. The number of images displayed for a clip varies depending on the length of the clip in the timeline, as well as the size of the clips (as the aspect ratio of each image will stay constant).
- the user can skim through the timeline or play back the timeline (either a portion of the timeline or the entire timeline).
- the playback (or skimming) is not shown in the timeline clips, but rather in the preview display area 420 .
- the preview display area 420 (also referred to as a “viewer”) displays images from media files which the user is skimming through, playing back, or editing. These images may be from a composite presentation in the timeline 415 or from a media clip in the clip browser 410 . In this example, the user has been skimming through the beginning of clip 440 , and therefore an image from the start of this media file is displayed in the preview display area 420 . As shown, some embodiments will display the images as large as possible within the display area while maintaining the aspect ratio of the image.
- the inspector display area 425 displays detailed properties about a selected item and allows a user to modify some or all of these properties.
- the selected item might be a clip, a composite presentation, an effect, etc.
- the clip that is shown in the preview display area 420 is also selected, and thus the inspector displays information about media clip 440 .
- This information about the selected media clip includes duration, file format, file location, frame rate, date created, audio information, etc. In some embodiments, different information is displayed depending on the type of item selected.
- the additional media display area 430 displays various types of additional media, such as video effects, transitions, still images, titles, audio effects, standard audio clips, etc.
- the set of effects is represented by a set of selectable UI items, in which each selectable UI item represents a particular effect.
- each selectable UI item also includes a thumbnail image with the particular effect applied.
- the display area 430 is currently displaying a set of effects for the user to apply to a clip.
- the toolbar 435 includes various selectable items for editing, modifying items that are displayed in one or more display areas, etc.
- the toolbar 435 includes various selectable items for modifying the type of media that is displayed in the additional media display area 430 .
- the illustrated toolbar 435 includes items for video effects, visual transitions between media clips, photos, titles, generators and backgrounds, etc.
- the toolbar 435 includes a selectable inspector item that causes the display of the inspector display area 425 as well as items for applying a retiming operation to a portion of the timeline, adjusting color, and other functions.
- the toolbar 435 also includes selectable items for media management and editing. Selectable items are provided for adding clips from the clip browser 410 to the timeline 415 .
- different selectable items may be used to add a clip to the end of the spine, add a clip at a selected point in the spine (e.g., at the location of a playhead), add an anchored clip at the selected point, perform various trim operations on the media clips in the timeline, etc.
- the media management tools of some embodiments allow a user to mark selected clips as favorites, among other options.
- the timeline search tool 445 allows a user to search and navigate a timeline.
- the timeline search tool 445 of some embodiments includes a search field for searching for clips in the timeline 415 based on their names or associated keywords.
- the timeline search tool 445 includes a display area for displaying search results.
- each result is user-selectable such that a selection of the result causes the timeline to navigate to the position of the clip in the timeline. Accordingly, the timeline search tool 445 allows a content editor to navigate the timeline to identify clips.
- the set of display areas shown in the GUI 400 is one of many possible configurations for the GUI of some embodiments.
- the presence or absence of many of the display areas can be toggled through the GUI (e.g., the inspector display area 425 , additional media display area 430 , and clip library 405 ).
- some embodiments allow the user to modify the size of the various display areas within the UI. For instance, when the display area 430 is removed, the timeline 415 can increase in size to include that area. Similarly, the preview display area 420 increases in size when the inspector display area 425 is removed.
- FIG. 5 conceptually illustrates example data structures for several objects associated with a media editing application. Specifically, the figure illustrates relationships between the objects that facilitate the organization of media clips into different keyword collections. As shown, the figure illustrates (1) an event object 505 , (2) a clip object 510 , (3) a component object 515 , (4) an asset object 525 , (5) a keyword collection object 545 , and (6) a keyword set object 520 . In some embodiments, one or more of the objects in this figure are subclasses of other objects. For example, in some embodiments, the clip object 510 (i.e., collection object), component object 515 , and keyword set object 520 are all subclasses of a general clip object.
- the event object 505 includes an event ID and a number of different clip collections (including the clip object 510 ).
- the event object 505 is also associated with a number of keyword collection objects (including the keyword collection object 545 ).
- the event ID is a unique identifier for the event object 505 .
- the data structure of the event object 505 may include additional fields in some embodiments, such as the event name, event date (which may be derived from an imported clip), etc.
- the event data structure may be a Core Data (SQLite) database file that includes the assets and clips as objects defined within the file, an XML file that includes the assets and clips as objects defined within the file, etc.
- the clip object 510 or collection object in some embodiments, is an ordered array of clip objects.
- the clip object stores one or more component clips (e.g., the component object 515 ) in the array.
- the clip object 510 stores a clip ID that is a unique identifier for the clip object.
- the clip object 510 is a collection object that can include component clip objects as well as additional collection objects.
- the clip object 510 or collection object only stores the video component clip in the array, and any additional components (generally one or more audio components) are then anchored to that video component.
- the component object 515 includes a component ID, a set of clip attributes, and an asset reference.
- the component ID identifies the component.
- the asset reference of some embodiments stores an event ID and an asset ID, and uniquely identifies a particular asset object (e.g., the asset object 525 ).
- the asset reference is not a direct reference to the asset but rather is used to locate the asset when needed. For example, when the media-editing application needs to identify a particular asset, the application uses the event ID to locate the event that contains the asset, and then the asset ID to locate the particular desired asset.
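The two-step, indirect lookup can be sketched as follows. Plain dicts stand in here for the stored objects (the document describes Core Data or XML storage), and all identifiers and paths are illustrative:

```python
def resolve_asset(library, asset_ref):
    # asset_ref is an (event_id, asset_id) pair. The reference is not a
    # direct pointer: the event ID first locates the containing event,
    # and the asset ID then locates the particular asset within it.
    event_id, asset_id = asset_ref
    return library[event_id]["assets"][asset_id]

# Hypothetical library contents.
library = {
    "event-505": {
        "assets": {"asset-525": {"source_file": "/media/clip.mov"}},
    },
}
```

Because the reference stores identifiers rather than a pointer, the asset can be relocated (e.g., when events are moved or merged) without invalidating the component objects that refer to it.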
- each component that is anchored to another clip or collection stores an anchor offset that indicates a particular instance in time along the range of the other clip or collection. That is, the anchor offset may indicate that the component is anchored x number of seconds and/or frames into the other clip or collection. These times refer to the trimmed ranges of the clips in some embodiments.
- the asset object 525 includes an asset ID, reference to a source file, and a set of source file metadata.
- the asset ID identifies the asset, while the source file reference is a pointer to the original media file.
- the source file metadata 530 includes the file type (e.g., audio, video, movie, still image, etc.), the file format (e.g., “.mov”, “.avi”, etc), a set of video properties 535 , a set of audio properties 540 , and additional metadata.
- the set of audio properties 540 includes a sample rate, a number of channels, and additional metadata. Some embodiments include additional properties, such as the file creation date (i.e., the date and/or time at which the media was captured (e.g., filmed, photographed, recorded, etc.)).
- a set of metadata from the source file metadata 530 is displayed in the event browser (e.g., as part of a list view as will be described in detail below in Section VIII).
- the data structure of the asset object 525 may be populated when the source file is imported into the media editing application.
- the asset object 525 additionally stores override data that modifies one or more of the video or audio properties. For instance, a user might enter that a media file is actually 1080p, even though the file's metadata, stored in the asset object, indicates that the video is 1080i. When presented to the user, or used within the application, the override will be used and the media file will be treated as 1080p.
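The override behavior can be sketched briefly; the `AssetMetadata` class, its fields, and the `scan` key are illustrative assumptions, not names from the application:

```python
# Sketch of metadata overrides: user-entered corrections take precedence
# over the properties read from the source file (the 1080i -> 1080p case).

class AssetMetadata:
    def __init__(self, video_properties):
        self.video_properties = dict(video_properties)  # as read from the file
        self.overrides = {}                             # user-entered corrections

    def set_override(self, key, value):
        self.overrides[key] = value

    def effective(self, key):
        # The override, when present, wins over the file's own metadata.
        return self.overrides.get(key, self.video_properties.get(key))
```

With `scan` stored as `"1080i"` from the file, `set_override("scan", "1080p")` makes every later read of the property return `"1080p"` while the original file metadata is left untouched.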
- the keyword collection object 545 includes a reference to a keyword.
- a keyword collection references a keyword to identify or filter a group of files (e.g., media clips) to display only those that have been tagged with the keyword.
- a folder or bin-like object references (e.g., directly or indirectly) a file that it contains.
- This difference between a keyword collection and a folder-type or bin-type collection will be further illustrated in several of the examples described below. For example, in FIG. 14 , when two media clips tagged with a keyword are moved from one event collection to another event collection, the media clips' keyword associations are carried over to the other event collection. This keyword association causes a keyword collection for the keyword to be created in the other event collection.
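The contrast between the two collection types can be sketched as follows: a folder stores direct references to its contents, while a keyword collection stores only a keyword and computes its contents by filtering. The class and field names are hypothetical:

```python
# Hypothetical contrast between a folder-like collection (direct references)
# and a keyword collection (membership computed by filtering on a tag).

class Folder:
    def __init__(self):
        self.clips = []            # direct references to contained clips

class KeywordCollection:
    def __init__(self, keyword):
        self.keyword = keyword     # the only thing the collection stores

    def contents(self, all_clips):
        # Membership is computed on demand, never stored.
        return [c for c in all_clips if self.keyword in c["keywords"]]
```

Because membership is derived from the tags, moving a tagged clip to another event can cause a matching keyword collection to appear there automatically, as in the FIG. 14 example above.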
- the keyword collection object 545 includes a reference to a keyword of the keyword set object 520 .
- the relationship between the keyword collection and the keyword set object may be expressed differently, in some embodiments.
- the keyword collection object 545 is associated or is a part of the event object 505 . This is because the keyword collections are hierarchically organized under an event collection in the media editing application. However, the keyword collections may exist in their own hierarchy or as part of a different hierarchy.
- the keyword collection object 545 includes other attributes.
- these attributes include attributes similar to a folder or bin, such as a creation date.
- These attributes may include other collection objects (e.g., filter or smart collection objects).
- the keyword set object 520 includes a keyword set 550 , range attributes, note, and other keyword attributes.
- the keyword set 550 is a set of one or more keywords that are associated with a range of the clip object 510 .
- the keyword set may be specified by a user of the media editing application. Alternatively, as will be described below, the keyword set may be automatically specified by the media editing application. Several examples of automatically assigning one or more keywords by analyzing media clips will be described below by reference to FIGS. 24-27 .
- the keyword set object 520 is a type of anchored object.
- the keyword set object may include an anchor offset that indicates that it is anchored to the clip object 510 at x number of seconds and/or frames into the range of the clip (e.g., the trimmed range of the clip).
- the keyword object's range attribute indicates a starting point and an ending point of the range of a clip that is associated with the keyword set. This may include the actual start time and end time.
- the range attributes may be expressed differently. For example, instead of a start time and an end time, the range may be expressed as a start time and duration (from which the end time can be derived).
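The equivalence between the two range representations amounts to simple arithmetic; a minimal sketch (times in seconds, function names illustrative):

```python
# The same range expressed two ways: (start, end) versus (start, duration).
# As noted above, either representation can be derived from the other.

def end_from_duration(start, duration):
    # Recover the end time when the range is stored as start + duration.
    return start + duration

def duration_from_range(start, end):
    # Recover the duration when the range is stored as start + end.
    return end - start
```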
- a keyword set object 520 is a type of anchored object.
- the anchor offset is associated with or indicates a starting point of the range of a clip associated with the keyword. Accordingly, the keyword set object 520 may only store the starting point or the anchor offset, in some embodiments.
- the note attribute, in some embodiments, is a field into which the user can enter text for the range of the media clip associated with the keyword set.
- a similar note attribute is also shown for the clip object 510 . This allows a clip object or a collection object to be associated with a note. Several examples of specifying a note for a clip or a keyword range will be described below by reference to FIG. 37 .
- the objects and data structures shown in FIG. 5 are just a few of the many different possible configurations for implementing the keyword organization features of some embodiments.
- instead of the clip object indirectly referencing a source file, the clip object may directly reference the source file.
- the keyword collection may not be a part of an event object but may be part of a different dynamic collection structure (e.g., folder structure, bin structure) or hierarchical structure.
- a keyword set object, in some embodiments, is a keyword object representing only one keyword instead of a set of one or more keywords.
- each keyword object includes its own range attribute, note attribute, etc.
- additional information regarding data structures is described in U.S. patent application Ser. No. 13/111,912, entitled “Data Structures for a Media-Editing Application”. This application is incorporated in the present application by reference.
- FIG. 6 illustrates creating a keyword association by dragging and dropping a range of a clip from one keyword collection to another.
- Five operational stages 605 - 625 are shown in this figure.
- the event library 125 and the event browser 130 are the same as those illustrated in FIG. 1 .
- the first stage 605 shows the event library 125 and the event browser 130 .
- the user has selected a keyword collection 630 that is displayed in the event library 125 .
- the selection causes the event browser 130 to display contents of the keyword collection 630 .
- two video clip representations 655 and 660 are displayed in the event browser 130 .
- the representations are similar to the ones described above by reference to FIG. 3 .
- the representation 660 shows multiple thumbnail images. These thumbnail images represent a sequence of different images of the video clip at different instances in time.
- the second stage 610 shows a selection of the representation 660 .
- the selection causes a range selector 640 to appear.
- the third stage 615 shows the user interacting with the range selector to select a range of the video clip. Specifically, the left edge of the range selector 640 is moved along the virtual timeline to a third thumbnail image 665 .
- the fourth stage 620 shows a drag and drop operation to associate the range of the video clip with a keyword.
- the user drags and drops the range from the keyword collection 630 to a keyword collection 650 . This in turn causes the range of the video clip to be marked with a keyword associated with the keyword collection 650 .
- the fifth stage 625 shows the GUI 100 after the drag and drop operation.
- the user selects the keyword collection 650 from the event library 125 .
- the selection causes the event browser 130 to display only the range of the video clip marked with the keyword of the keyword collection 650 .
- a range of a clip is associated with a keyword through a drag and drop operation.
- Some embodiments allow a user to (1) create a compound clip from multiple different clips and (2) tag a range that spans one or more of the multiple clips in the compound clip.
- a compound clip is any combination of clips (e.g., in a timeline, in an event browser) that nests clips within other clips.
- Compound clips, in some embodiments, can contain video and audio clip components, clips, and other compound clips. As such, each compound clip can be considered a mini project or a mini composite presentation, with its own distinct project settings.
- compound clips function just like other clips; a user can add them to a project or timeline, trim them, retime them, and add effects and transitions.
- each compound clip is defined by data structures of the clip object or the collection object similar to those described above by reference to FIG. 5 .
- compound clips can be opened (e.g., in the timeline, in the event browser) to view or edit their contents.
- a visual indication or an icon appears on each compound clip representation. This visual indication indicates to the user that the contents of the compound clip can be viewed or edited.
- FIG. 7 illustrates creating a compound clip and associating the compound clip with a keyword.
- Eight operational stages 705 - 740 of the GUI 100 are shown in this figure.
- the first stage 705 shows two video clips ( 790 and 795 ) in an event browser 130 .
- the user selects the video clip 790 .
- the second stage 710 shows the selection of the video clip 795 along with the video clip 790 .
- the user might have selected an area of the event browser covering both of the clips ( 790 and 795 ) in order to select them.
- the user might have first selected the video clip 790 and then selected the video clip 795 while holding down a hotkey that facilitates multiple selections.
- the third stage 715 shows the activation of a context menu 750 .
- This menu includes an option 745 to create a compound clip from the selected clips 790 and 795 .
- the selection of the option 745 causes a compound clip options window 755 to appear.
- the window 755 includes (1) a text field 760 for inputting a name for the compound clip, (2) a selection box 765 for selecting a default event collection for the compound clip, (3) a set of radio buttons 770 for specifying video properties (e.g., automatically based on the properties of the first video clip, custom), and (4) a set of radio buttons 775 for specifying audio properties (e.g., default settings, custom).
- the user inputs a name for the compound clip.
- the fifth stage 725 shows selection of the button 780 to create the compound clip based on the settings specified through the compound clip options window 755 .
- the selection causes a compound clip 704 to appear.
- the compound clip 704 includes a marking 702 , which provides an indication to a user that it is a compound clip.
- the marking 702 is a user-selectable item that, when selected, reveals both clips and/or provides an option to view or edit the individual clips in the compound clip 704 .
- the user selects the compound clip 704 .
- the compound clip is dragged and dropped onto a keyword collection 785 .
- the drag and drop operation causes the compound clip to be associated with a keyword of the keyword collection 785 .
- the association spans the entire ranges of all the clips that define the compound clip 704 .
- the eighth stage 740 shows selection of the keyword collection 785 .
- the selection causes the event browser 130 to be filtered down to the compound clip 704 associated with the keyword of the keyword collection 785 .
- FIG. 8 illustrates deleting a clip range from a keyword collection.
- Five operational stages 805 - 825 of the GUI 100 are shown in this figure. Specifically, the first stage 805 shows three representations of video clips ( 860 , 845 , and 870 ) in an event collection 830 .
- the second stage 810 shows a selection of a keyword collection 835 from the event library 125 . The selection causes the event browser 130 to display contents of the keyword collection 835 .
- the keyword collection 835 includes two video clip representations 860 and 845 . These representations represent ranges of video clips marked with a keyword that is associated with the keyword collection 835 .
- the third stage 815 shows the selection of the representation 845 (e.g., through a control click operation).
- the selection causes a context menu 840 to appear.
- the context menu 840 displays several selectable menu items related to the representation's video clip.
- the context menu 840 displays a menu item for copying the video clip (e.g., the clip range) and a menu item 850 for removing all keywords associated with the range of the video clip, among other menu items.
- the user can select any of these menu items.
- the media editing application disassociates the keyword from the range of the video clip.
- the representation 845 is removed from the event browser 130 , as shown in the fourth stage 820 .
- the fifth stage 825 illustrates the contents of the event collection 830 . Specifically, it illustrates that deleting the range of the video clip from the keyword collection 835 did not remove the video clip from the event collection 830 .
- FIG. 9 illustrates removing a keyword from a portion of a clip range.
- Five operational stages 905 - 925 of the GUI 100 are shown in this figure. This figure is similar to the previous example. However, instead of selecting an entire range of a clip from a keyword collection, the user selects a portion of the range.
- a keyword collection 930 includes one video clip representation 935 .
- in stages two and three ( 910 and 915 ), the user selects a portion of the clip range using the range selector 340 .
- the fourth stage 920 shows the selection of the representation 935 (e.g., through a control click operation). The selection causes the context menu 840 to appear.
- the media editing application disassociates the keyword from the portion of the range of the video clip.
- the fifth stage 925 illustrates the contents of the keyword collection 930 after disassociating the keyword from the portion of the range.
- two separate representations 940 and 945 are displayed in the event browser 130 .
- the representations 940 and 945 represent the outer ranges that remain associated with the keyword.
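The splitting behavior in stages four and five can be sketched as interval arithmetic; modeling ranges as simple (start, end) tuples is an assumption for illustration:

```python
# Removing a keyword from an interior portion of a tagged range leaves the
# two outer sub-ranges tagged, producing the two separate representations
# described above. Ranges are (start, end) tuples in an assumed time unit.

def remove_portion(tagged_range, removed):
    start, end = tagged_range
    r_start, r_end = removed
    remaining = []
    if r_start > start:
        remaining.append((start, r_start))   # left outer range stays tagged
    if r_end < end:
        remaining.append((r_end, end))       # right outer range stays tagged
    return remaining
```

Removing an interior portion yields two ranges; removing the whole range yields none, matching the FIG. 8 case where the clip disappears from the keyword collection entirely.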
- FIG. 10 provides an illustrative example of disassociating multiple ranges of video clips by deleting a keyword collection.
- Four operational stages 1005 - 1020 are illustrated in this figure. Specifically, the first stage 1005 shows the contents of the event collection 1025 . As shown, the event collection 1025 includes two video clips that are associated with a keyword. The second stage 1010 shows the contents of the keyword collection 1030 that includes ranges of the two video clips in the event collection 1025 .
- the third stage 1015 shows the GUI 100 after the user selects the keyword collection 1030 in the event library 125 .
- the selection causes a context menu 1035 to appear.
- the context menu 1035 includes a selectable menu item 1040 for deleting the keyword collection.
- the user is presented with the GUI 100 as illustrated in the fourth stage 1020 .
- this fourth stage 1020 illustrates that the video clips in the event collection 1025 are not associated with any keywords.
- the multiple ranges of the different video clips are disassociated from the keyword of the keyword collection. This allows a user to quickly remove keyword associations from a large group of tagged items.
- FIG. 11 illustrates combining two keyword collections. Specifically, it illustrates how two keyword collections 1135 and 1140 are combined when one of the collections is renamed to have the same name as the other collection. This renaming operation allows a group of items tagged with a particular keyword to be quickly tagged with another keyword.
- Four operational stages 1105 - 1120 of the GUI 100 are shown in this figure.
- the first stage 1105 shows that two different keyword collections 1135 and 1140 are listed in event library 125 .
- the event browser 130 displays only the range of each media clip in the collection.
- the event browser 130 displays a video clip representation 1125 .
- the second stage 1110 shows selection of a keyword collection 1140 . The selection causes the event browser 130 to display a video clip representation 1130 .
- the third stage 1115 shows renaming of the collection 1140 .
- the user selects the collection 1140 (e.g., through a double click operation).
- the user can rename the collection through a menu item (e.g., in a context menu).
- the selection of the collection 1140 causes the collection's name field to be highlighted. This indicates to the user that a new collection name can be inputted in this name field.
- the fourth stage 1120 shows the event browser 130 after renaming the keyword collection 1140 to a same name as the keyword collection 1135 .
- the renaming causes the media editing application to associate the range of the video clip with the keyword of the keyword collection 1135 .
- the user's selection of the keyword collection causes the event browser 130 to display a representation 1130 .
- This representation 1130 represents the range of the video clip that was previously associated with the keyword of the keyword collection 1140 .
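The merge-by-rename behavior can be sketched as retagging every clip range that carried the old keyword; representing clips as dicts with a `keywords` set is an assumption for illustration:

```python
# Renaming a keyword collection to another collection's name merges the two:
# every clip range tagged with the old keyword is retagged with the new one,
# so it thereafter appears in the surviving collection.

def rename_keyword(clips, old_keyword, new_keyword):
    for clip in clips:
        if old_keyword in clip["keywords"]:
            clip["keywords"].discard(old_keyword)
            clip["keywords"].add(new_keyword)
```

Because keyword collections are just filters on the tags, no collection contents need to be copied; retagging the clips is sufficient for them to show up under the other collection.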
- FIG. 12 provides an illustrative example of selecting multiple keyword collections 1220 and 1225 from the event library 125 .
- the first stage 1205 shows contents of the keyword collection 1220 .
- the keyword collection 1220 includes two video clip ranges 1230 and 1235 .
- the second stage 1210 shows contents of the keyword collection 1225 .
- the keyword collection 1225 includes two video clip ranges 1230 and 1240 .
- the video clip range 1230 is the same as the video clip range in the keyword collection 1220 .
- the third stage 1215 shows the selections of multiple collections. Specifically, when the collections 1220 and 1225 are both selected, the event browser 130 displays the union of these collections. This is illustrated in the third stage 1215 as the video clip range 1230 shared between the two collections is displayed only once in the event browser 130 .
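The union behavior can be sketched as a single filtering pass in which each clip is tested once, so a range shared between the selected collections is shown only once; the data model here is assumed:

```python
# Selecting multiple keyword collections displays the union of their
# contents; a clip tagged with several of the selected keywords still
# appears exactly once, as in the third stage above.

def union_of_collections(all_clips, selected_keywords):
    shown = []
    for clip in all_clips:
        if clip["keywords"] & set(selected_keywords):
            shown.append(clip["name"])   # each clip appears at most once
    return shown
```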
- FIG. 13 provides an illustrative example of selecting a video clip range.
- Four operational stages 1305 - 1320 are shown in this figure. Specifically, this figure illustrates that selecting a range of a clip from a keyword collection selects that range in a corresponding event collection.
- the first stage 1305 shows the contents of the event collection 1325 .
- the event collection 1325 includes three video clips.
- the second stage 1310 shows the contents of the keyword collection 1330 .
- the keyword collection 1330 includes two video clip ranges. The video clip ranges of keyword collection 1330 are different ranges of the video clips from those of the event collection 1325 .
- the third stage 1315 shows a selection of a video clip range in the keyword collection 1330 .
- the selection causes the range of the video clip to be highlighted in the keyword collection 1330 .
- the user navigates to the event collection 1325 after selecting the range in the keyword collection 1330 .
- the range or portion of the video clip that corresponds to the range in the keyword collection 1330 is also selected in the event collection 1325 .
- in the example described above, a range of a video clip is selected from a keyword collection; however, the range may similarly be selected from another collection (e.g., an event collection).
- FIG. 14 provides an illustrative example of dragging and dropping clips from one event collection to another event collection. Specifically, this figure illustrates keyword collections that are automatically created by the media editing application when several clips that are associated with a keyword are dragged and dropped from an event collection 1420 to an event collection 1425 . Three operational stages 1405 - 1415 of the GUI 100 are shown in this figure.
- the first stage 1405 shows selection of multiple clips from the event collection 1420 . These clips are associated with a keyword of a keyword collection 1455 .
- the user drags the selected clips to the event collection 1425 . When the user drops the selected clips into the event collection 1425 , the event collection 1425 is associated with the selected clips.
- the third stage 1415 shows that the keyword associations of the clips are carried over from one collection to another.
- the video clips are associated with the same keyword as they were in the event collection 1420 .
- the event browser indicates this by displaying a bar 1435 over each of the two representations 1430 of the video clips.
- the keyword associations of these clips are shown by a keyword collection 1440 that is listed in the event collection 1425 .
- FIG. 15 provides an illustrative example of dragging and dropping keyword collections from one event collection to another event collection. Specifically, this figure illustrates how keyword collections are reusable between different collections. Three operational stages 1505 - 1515 of the GUI 100 are shown in FIG. 15 .
- the user selects multiple keyword collections 1530 from the event library 125 .
- the keyword collections are dragged and dropped onto the event collection 1525 .
- the third stage 1515 shows the GUI 100 after the drag and drop operation.
- the same keyword collections 1535 are listed under the event collection 1525 .
- the contents of the keyword collections 1530 are not copied to the event collection 1525 . That is, the structure of the event collection 1520 is copied without its contents. This allows a user to easily reuse the structure of one collection without having to rebuild it in another.
- a photographer may create multiple event collections for different weddings. To recreate a structure of a first wedding collection in a second wedding collection, the photographer can simply copy keyword collections from the first collection to the second collection.
- FIG. 16 provides an illustrative example of merging two event collections.
- Four operational stages 1605 - 1620 of the GUI 100 are shown in this figure.
- the user selects an event collection 1625 from the event library 125 .
- the event collection 1625 is dragged and dropped onto the event collection 1630 .
- the user is presented with a “merge events” window 1635 as illustrated in the third stage 1615 .
- the “merge events” window 1635 includes a text field 1640 and a pull-down list 1645 .
- the text field 1640 allows the user to specify a name for the merged event.
- the pull-down list 1645 allows the user to select a location for the merged event. In the example illustrated in FIG. 16 , a hard disk is selected as the location.
- the “merge events” window 1635 also displays a notification indicating that, when two events are merged, all media will be merged into one event.
- the fourth stage 1620 shows the GUI 100 after the merge operation. Here, the contents of the event collection 1625 , including a keyword collection 1650 , are merged with the event collection 1630 .
- FIG. 17 conceptually illustrates a process 1700 for associating a range of a media clip with a keyword.
- the process 1700 is performed by a media editing application.
- the process 1700 starts when it displays (at 1705 ) a dynamic collection structure.
- a dynamic collection structure is the event library described above by reference to FIG. 1 .
- the process 1700 then receives (at 1710 ) a selection of a range of a media clip. For example, a user of the media editing application might select an entire range of a clip or a portion of a media clip.
- the process 1700 associates (at 1715 ) the range of the media clip with a keyword.
- the process 1700 determines (at 1720 ) whether a keyword collection exists for the keyword. When the keyword collection exists, the process ends. Otherwise, the process 1700 creates (at 1725 ) a keyword collection for the keyword. The process 1700 then adds (at 1730 ) the keyword collection to the dynamic collection structure. The process 1700 then ends.
- the new keyword collection is added to a display area or dynamic collection structure without a user having to manually create the new keyword collection. That is, upon association of a new keyword with one or more portions of one or more media clips, a new keyword collection is automatically created and added to the dynamic collection structure.
- a new keyword represents a keyword that is used at a particular level of the hierarchy and that does not collide with a same keyword that exists at the particular level.
- different folder or collections may include their own set of media clips associated with one particular keyword (e.g., “architecture”).
- the new keyword represents one keyword that is unique to a particular collection or folder and not necessarily to the overall dynamic collection structure or sub-collection.
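Process 1700 (operations 1715 through 1730) can be sketched as follows; the dict-based event and clip-range structures are assumptions for illustration:

```python
# Sketch of process 1700: associate a clip range with a keyword and, when no
# keyword collection for that keyword exists yet in the event, create one
# automatically and add it to the dynamic collection structure.

def tag_range(event, clip_range, keyword):
    clip_range["keywords"].add(keyword)               # operation 1715
    if keyword not in event["keyword_collections"]:   # operation 1720
        # operations 1725 and 1730: create the collection and add it
        event["keyword_collections"][keyword] = {"keyword": keyword}
```

Since uniqueness is checked per event, two different events can each hold their own "architecture" collection without colliding, matching the hierarchy-level behavior described above.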
- the media editing application provides a tagging tool for associating media content with keywords.
- FIG. 18 illustrates an example of a tagging tool 1865 according to some embodiments. Specifically, this figure illustrates using the tagging tool 1865 to associate a keyword with a media clip. Five operational stages 1805 - 1825 of the GUI 100 are shown in FIG. 18 .
- the event library 125 and the event browser 130 are the same as those described above by reference to FIG. 1 .
- the first stage 1805 shows the tagging tool 1865 that is displayed over the GUI 100 .
- a user might have activated the tagging tool 1865 by selecting a shortcut key, a menu item, or a toolbar item.
- a user selection of a control 1870 causes the tagging tool to appear and hover over the GUI 100 .
- the event collection 1860 is selected, which causes the event browser 130 to display representations of clips.
- in the second stage 1810 , to input text, the user selects a text field 1850 of the tagging tool 1865 . The user then inputs a keyword into this text field 1850 .
- the third stage 1815 shows a selection of the representation 1830 .
- the selection causes the representation to be highlighted. This provides an indication to the user that the entire range of the representation's video clip is selected.
- the fourth stage 1820 illustrates how a keyword association is created using the tagging tool 1865 .
- the user selects the video clip's representation and selects a key (e.g., an enter key).
- the selections cause the keyword in the text field 1850 to be associated with the video clip.
- the user filters the event browser 130 to display only the associated video clip by selecting the keyword collection 1855 .
- the media editing application provides keyboard access when the tagging tool 1865 is displayed.
- the user can select different hotkeys to perform operations such as playing/pausing a media clip, selecting a range of a clip, inserting a clip into a timeline, etc. This allows the user to play and preview different pieces of content while keyword tagging, without having to activate and de-activate the tagging tool.
- the media editing application does not place any restriction on accessing other parts of the GUI 100 , such as the event library 125 and event browser 130 . For example, in FIG. 18 , the user can select the representation or the keyword collection while the tagging tool 1865 is activated.
- FIG. 19 illustrates the media editing application automatically assigning a shortcut key for a previously used keyword.
- Two operational stages 1905 and 1910 of the GUI 100 are shown in FIG. 19 .
- the event library 125 and the event browser 130 are the same as those described above by reference to FIG. 1 .
- the first stage 1905 shows the tagging tool 1865 of the GUI 100 . Also, the user has created the keyword collection 1855 using this tagging tool 1865 . As shown, the tagging tool 1865 includes a selectable item 1915 . When the user selects the selectable item 1915 , the tagging tool 1865 expands to reveal several input fields, as illustrated in the second stage 1910 .
- the second stage 1910 shows that the tagging tool 1865 includes a number of input fields. Each input field is associated with a particular key combination. A user of the GUI 100 can input keywords in these fields to create different keyword shortcuts.
- the text field 1920 of the tagging tool 1865 includes a keyword.
- the media editing application populated this field after the user used the keyword to tag a video clip.
- when the user tags a clip with another keyword, a subsequent input field may be populated with that other keyword.
- the user can also input text into any one of the text fields to create custom keyword shortcuts or reassign a previously assigned keyword shortcut.
- the tagging tool 1865 includes nine different shortcut slots.
- the media editing application may provide more or fewer shortcuts in some embodiments.
- the media editing application automatically populates a shortcut key slot with a keyword when the keyword is used to mark one or more clips.
- a user can fill up the keyword slots with keywords in order to quickly tag clips using the tool's shortcut feature.
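The slot-filling behavior can be sketched as follows; the `ShortcutSlots` class is a hypothetical model of the nine-slot tool described above:

```python
# Hypothetical model of the shortcut slots: each newly used keyword fills
# the next free slot, and an already-assigned keyword keeps its slot.

class ShortcutSlots:
    def __init__(self, count=9):
        self.slots = [None] * count

    def record_use(self, keyword):
        if keyword in self.slots:
            return                       # keyword already has a shortcut
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = keyword  # auto-assign the next free slot
                return
```

Once all slots are filled, further keywords are simply left without a shortcut; as noted above, a user can also reassign a slot manually by typing over it.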
- FIG. 20 illustrates an example of using the auto-complete feature of the tagging tool 1865 .
- the first stage 2005 illustrates a user typing a keyword into a shortcut field of the tagging tool.
- the second stage 2010 shows the tagging tool 1865 displaying suggested keywords based on user input.
- a previously used keyword, which the user can choose to auto-complete the phrase, is displayed below the text field 1850 .
- the media editing application builds a custom dictionary of potential keywords.
- the media editing application may store terms or phrases that one or more users have entered (e.g., more than a certain number of times).
- the suggested keyword is based on a previously used keyword.
- the tagging bar may provide other suggestions based on the user's interaction with the media editing application.
- the user may replace the keyword in the input field 2020 without marking any clips with the keyword.
- the media editing application might suggest the keyword that has been replaced in the field 2020 .
- some embodiments provide a built-in dictionary of common production and editing terms from which the user can choose when tagging an item.
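A minimal sketch of the suggestion step, assuming suggestions come from prefix-matching previously used keywords (entries from a built-in dictionary of production terms could be merged into the same candidate set):

```python
# Assumed auto-complete behavior: match the partial input as a
# case-insensitive prefix against the set of candidate keywords.

def suggest(prefix, used_keywords):
    p = prefix.lower()
    return sorted(k for k in used_keywords if k.lower().startswith(p))
```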
- FIG. 21 illustrates an example of using the keyword tagging tool 1865 to perform an auto-apply operation. Specifically, this figure illustrates how a user can paint over or select a range of a clip and automatically apply one or more keywords to the selected range.
- One benefit of the auto-apply feature is that it allows the user to quickly paint over or select many different ranges of different clips to quickly tag them.
- the keyword tagging tool 1865 displays two input keywords in the text field 1850 .
- the user activates the auto-apply mode by selecting a user interface item 2125 .
- the selection causes the keyword tagging tool 1865 to display an indication 2155 that the media editing application is in an auto-apply mode, as illustrated in the second stage 2110 .
- the user selects an interior location of a clip representation 2140 .
- the selection causes a range selector 2130 to appear.
- the third stage 2115 illustrates a selection of a range of the clip represented by the clip representation 2140 .
- the user uses the range selector 2130 to paint over an area of the clip representation 2140 . This causes a corresponding range to be associated with the two keywords in the text field 1850 .
- the selection of the range causes the media editing application to display an indication 2160 that the range is associated with the two keywords.
- the two associated keywords are displayed over the clip representation 2140 for a set period of time.
- two corresponding keyword collections 2145 and 2150 are also created by the media editing application.
- the fourth stage 2120 illustrates the selection of the keyword collection 2145 . Specifically, the selection causes the event browser 130 to be filtered down to the range of the clip associated with the keyword of the keyword collection 2145 .
- FIG. 22 illustrates removing a keyword from a video clip using the tagging tool 1865 .
- Three operational stages 2205 - 2215 of the GUI 100 are shown in this figure.
- the event browser displays the contents of keyword collection 2225 .
- the user selects the keyword collection 2225 from the event library to display a representation 2220 of a clip range.
- the user selects the representation 2220 .
- the selection causes the text field to display each keyword associated with the clip.
- because the clip range in this example is associated with only one keyword, the text field 2230 displays only that keyword.
- the third stage 2215 illustrates removing the keyword from the text field 2230 .
- the user removes the keyword from the text field 2230 .
- This causes the range of the video clip to be disassociated from the keyword.
- This is illustrated in this third stage 2215 as the representation of the range of the video clip is removed from the keyword collection 2225 .
- the user can select a remove button or a shortcut key to remove all keywords that are associated with the range of the video clip.
- FIG. 23 conceptually illustrates a state diagram 2300 of a media-editing application of some embodiments.
- the state diagram 2300 does not describe all states of the media-editing application, but instead specifically pertains to several example operations that can be performed with the keyword tagging tool 1865 that is described above by reference to FIGS. 18-22 .
- the keyword tagging tool (at state 2305 ) is in a deactivated state.
- the media-editing application may be performing (at state 2310 ) other tasks including import- or editing-related tasks, organizing, playback operations, etc.
- the application could be performing a wide variety of background tasks (e.g., transcoding, analysis, etc.).
- the keyword tagging tool 1865 is in an active state based on an input to activate the tool. For example, a user might have selected a toolbar item, selected a hotkey, etc. Similar to the state 2305 , the application may be (at state 2310 ) performing other tasks. As mentioned above, the media editing application, in some embodiments, provides keyboard access when the keyword tagging tool 1865 is activated or displayed. In other words, the user can select different hotkeys to perform operations such as playing/pausing a media clip, selecting a range of a clip, inserting a clip into the timeline, etc. This allows the user to play and preview different pieces of content while keyword tagging, without having to activate and de-activate the keyword tagging tool. Also, when the keyword tagging tool is activated, the media editing application, in some embodiments, does not place any restriction on accessing other parts of the application's GUI, such as the event library and event browser.
- In response to receiving a user's selection of an item associated with a set of one or more keywords, the keyword tagging tool 1865 (at state 2320 ) displays the set of keywords.
- the keyword associated with a clip is displayed in an input text field. However, in some embodiments, the keyword may be displayed elsewhere in the application's GUI.
- a keyword that is associated with an entire range is displayed differently from a keyword that is applied to a portion of a range. For example, keywords or comments that apply only to a range within the clip may be colored differently (e.g., dimmed) unless the playhead is within the range.
- the application transitions to state 2325 .
- the media editing application disassociates the keyword from the tagged item.
- some embodiments allow a user to add additional keywords to further mark the selected item. For example, in some embodiments, a new keyword may be inputted in a same field in which the associated keyword is displayed by separating the keywords (e.g., by using a semi-colon).
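The multi-keyword entry described above, assuming semicolons as the separator per the example, can be sketched as a small parsing helper (the function name is illustrative):

```python
# Split a keyword text field on semicolons, dropping empty entries.
def parse_keyword_field(text):
    return [kw.strip() for kw in text.split(";") if kw.strip()]

keywords = parse_keyword_field("interview; b-roll ;")
```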
- the media editing application transitions to state 2330 .
- the keyword tagging tool 1865 displays one or more of them.
- the media editing application may display a group of slots for a user to input keyword shortcuts. An example of such keyword slots or fields is described above by reference to FIG. 20 .
- each shortcut key in some embodiments, can be used to associate a selected item with a corresponding keyword regardless of whether the keyword tagging tool 1865 is deactivated or activated.
- a user of the media editing application may play through a list of clips (e.g., a group of clips displayed in the event browser) and quickly tag one or more of the clips that are being played.
- the keyword tagging tool 1865 displays the keyword input (e.g., in the tool's input field).
- the keyword tagging tool 1865 may provide suggestions for an auto-complete operation.
- the media editing application, in some embodiments, maintains a database of previous user inputs or interactions. For example, as a user adds comments, the media editing application builds a dictionary of potential keywords (e.g., any term or phrase that the user has entered more than X number of times). When the user types a commonly-used phrase, the media editing application may highlight the phrase (e.g., in the keyword tagging tool 1865 ).
- hovering over the highlighted text reveals a pop-up offering to create a new keyword with the string.
- the media editing application comes with a built-in dictionary of common production and editing terms, which the user can choose from when tagging an item.
- the media editing application may provide a command or option to add a last typed string as a keyword.
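The keyword-suggestion dictionary described above can be sketched as a frequency counter: phrases the user has entered more than some threshold X become suggestion candidates. The class name and threshold are illustrative assumptions, not the patent's.

```python
from collections import Counter

class KeywordSuggester:
    """Track typed phrases; promote frequent ones to suggestions."""
    def __init__(self, threshold=3):
        self.threshold = threshold  # the "X number of times" above
        self.counts = Counter()

    def record(self, phrase):
        self.counts[phrase.strip().lower()] += 1

    def is_suggested(self, phrase):
        return self.counts[phrase.strip().lower()] > self.threshold

    def complete(self, prefix):
        """Return candidates for auto-complete that start with the prefix."""
        p = prefix.strip().lower()
        return sorted(kw for kw, n in self.counts.items()
                      if n > self.threshold and kw.startswith(p))

s = KeywordSuggester(threshold=2)
for _ in range(3):
    s.record("b-roll")
s.record("blooper")
```

A built-in dictionary of production terms, as mentioned above, could simply pre-seed the counter past the threshold.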
- the media application associates the input keyword with an item based on a user's input. For example, a user might have selected a video clip from the event browser and selected a key (e.g., an enter key). Alternatively, the user can tag one or more clips in an auto-apply mode. An example of automatically applying keywords to a range of a clip is described above by reference to FIG. 21 .
- FIG. 24 provides an illustrative example of creating a keyword collection by analyzing content.
- Four operational stages 2405 - 2420 of the GUI 100 are shown in FIG. 24 .
- the event library 125 and the event browser 130 are the same as those described above by reference to FIG. 1 .
- the first stage 2405 shows a user selection of an event collection 2425 from the event library 125 .
- the selection causes the event browser 130 to display representations 2430 , 2435 , and 2440 for three different video clips.
- a bar 2445 is displayed over a portion of the representation 2440 . This indicates to a user that a range of the representation's video clip is marked with a keyword.
- the second stage 2410 shows the GUI 100 after the user selects an area of the event library 125 .
- the selection causes a context menu 2450 to appear.
- the context menu 2450 includes a selectable menu item 2455 for analyzing and fixing content.
- the user is presented with a dialog box 2460 as illustrated in the third stage 2415 .
- the dialog box 2460 lists several different analysis options.
- the different analysis options are categorized into either video or audio.
- the list of video options includes options for (1) analyzing and fixing image stabilization problems, (2) analyzing for balance color, and (3) finding people.
- the list of audio options includes options for (1) analyzing and fixing audio problems, (2) separating mono and group stereo audio, and (3) removing silent channels.
- the image stabilization operation of some embodiments identifies portions of the video in a media file in which the camera appears to be shaking, and tags the media file (or a portion of the media file with the shaky video) with a keyword.
- the color balancing of some embodiments automatically balances the color of each image in a media file and saves the color balancing information in a color balance file for each media file analyzed.
- the color balancing operation adjusts the colors of an image to give the image a more realistic appearance (e.g., reducing tint due to indoor lighting).
- Different embodiments may use different color balancing algorithms.
- the person detection algorithm identifies locations of people in the images of a media file and saves the person identification information in a person detection file for each media file analyzed.
- the person detection operation of some embodiments identifies faces using a face detection algorithm (e.g., an algorithm that searches for particular groups of pixels that are identified as faces, and extrapolates the rest of a person from the faces).
- Some embodiments provide the ability to differentiate between a single person (e.g., in an interview shot), pairs of people, groups of people, etc.
- Other embodiments use different person detection algorithms.
- some embodiments include audio analysis operations at the point of import as well. As shown, these operations may include analysis for audio problems, separation of mono audio channels and identification of stereo pairs, and removal of latent audio channels (i.e., channels of audio that are encoded in the imported file or set of files but do not include any actual recorded audio). Other embodiments may make available at import additional or different audio or video analysis operations, as well as additional transcode options.
- the people analysis operation entails detecting the number of people in a range of a clip and the type of shot. For example, the analysis operation may determine the number of people (e.g., one person, two persons, a group) in a range of a video clip. The analysis operation may also determine whether the identified range of the video clip is a close-up, medium, or wide shot of the person or people.
- the people detection operation entails identifying a face or faces and determining how much space each identified face takes up in frames of the video clip. For example, if a face takes up 80% of the frame, the shot may be classified as a close-up shot.
- the people analysis operation entails identifying faces, shoulders, and torsos.
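The shot-type classification described above can be sketched as a rule over face counts and face-to-frame area fractions. The thresholds here are hypothetical illustrations (the patent's only concrete example is that a face taking up 80% of the frame reads as a close-up):

```python
# Classify a shot from per-face fractions of frame area (assumed thresholds).
def classify_shot(face_fractions):
    """face_fractions: list of per-face fractions of the frame area.
    Returns (shot_type, people_tag)."""
    count = len(face_fractions)
    if count == 0:
        return ("wide", "no people")
    people = {1: "one person", 2: "two persons"}.get(count, "group")
    largest = max(face_fractions)
    if largest >= 0.5:       # e.g., a face taking up 80% of the frame
        shot = "close-up"
    elif largest >= 0.1:
        shot = "medium"
    else:
        shot = "wide"
    return (shot, people)
```

The resulting `(shot, people)` pair maps naturally onto the sub-collections (one person, group, close-up, medium shot, wide shot) shown later in FIG. 25.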
- the user selects the find people option 2465 and a button 2470 .
- the selections initiate an automatic analysis of the video clips.
- the analysis is done as a background task. This allows users to continue interacting with the application's GUI 100 to perform other tasks while the application performs the analysis.
- the fourth stage 2420 illustrates the GUI after the application has performed the analysis operations on the video clips. Specifically, this stage shows that the application analyzed each of the three video clips and found people in two of the three video clips. Similar to a piece of media content marked with a keyword, each video clip with people is marked with a bar ( 2475 or 2480 ) over a range of the video clip's representation. The range indicates the portion of the video clip with people, as determined by the application based on the people analysis operation.
- the media application displays two different representations for a user-specified keyword and an analysis keyword.
- the media editing application displays each analysis keyword representation ( 2475 or 2480 ) in a color that is different from a color of a keyword representation 2445 for a user-specified keyword.
- the user-specified keyword is represented as a blue bar and the analysis keyword is represented as a purple bar.
- different types of keywords may be represented differently in some embodiments.
- the fourth stage 2420 shows the automatic organization of these ranges into a keyword collection 2485 .
- these keyword collections are dynamically generated. For example, when the media editing application does not find any people in a video clip, the event browser may not list a keyword collection for people.
- the media editing application performs additional groupings based on the analysis.
- FIG. 25 illustrates an example of different groupings that were created based on an analysis of video clips.
- Three operational stages 2505 - 2515 of the GUI 100 are shown in FIG. 25 .
- the first stage 2505 shows a user selection of the keyword collection 2520 .
- the selection causes the keyword collection 2520 to be expanded to reveal other sub-collections.
- the second stage 2510 shows different groupings that are created based on the analysis of the video clip.
- the media editing application grouped the ranges into different sub-collections.
- the event library 125 lists sub-collections for group, medium shot, one person, and wide shot.
- the media editing application may group the ranges of clips into other sub-collections.
- the media editing application provides options for defining different sub-collections. For example, instead of having separate sub-collections for one person and close-up shots, the media editing application may provide one or more selectable items for creating a sub-collection that contains the one person and close-up shots.
- the event browser is filtered to display only a representation 2530 .
- This representation represents a range of the video clip that includes a one-person shot based on the analysis of the video clips.
- the analyzed content is grouped into different sub-collections. Specifically, the ranges of clips are grouped into different smart collections.
- smart collections are different from keyword collections in that a user cannot drag and drop items into them.
- Some embodiments allow the user to create and organize content into different smart collections based on filtering operations. Several examples of creating a smart collection will be described in detail by reference to FIGS. 29 and 30 below.
- the analyzed content may be grouped into other collections.
- the media editing application may create multiple different keyword collections and organize content into these keyword collections.
- a people analysis operation is performed to automatically organize content into a keyword collection and a number of different sub-collections.
- the media editing application (1) analyzes media clips and (2) performs correction operation on one or more ranges of the media clips, and (3) organizes the corrected ranges in a keyword collection.
- FIG. 26 provides an illustrative example of different groupings created after the media editing application has analyzed and fixed image stabilization problems.
- Two operational stages 2605 and 2610 of the GUI 100 are shown in this figure. This example is similar to the previous examples described above. However, in this example, a user selects an option 2625 for analyzing and fixing image stabilization problems in the first stage 2605 .
- the second stage 2610 shows different groupings that are created based on an analysis of image stabilization.
- the media editing application grouped the clip ranges into different sub-collections.
- the event library 125 lists a sub-collection 2630 for clip ranges that are corrected (e.g., stabilized).
- Another sub-collection 2635 is created for other clip ranges that are not corrected or do not need to be corrected.
- a sub-collection, in some embodiments, is a filter or smart collection that a user cannot drag and drop items onto.
- a higher level collection or an analysis keyword folder that contains the smart collection is in itself a smart collection.
- a user cannot drag and drop other items onto these smart collections.
- the user can perform an analysis operation to add additional items to these smart collections.
- a media clip in an event browser and an analysis option may be selected to initiate an analysis operation on the selected media clip in order to add one or more ranges of the media clip to one or more analysis keyword collections.
- FIG. 27 illustrates automatically importing media clips from different folders of the file system. Specifically, this figure illustrates how the media editing application (1) imports media content from different folders, (2) creates keywords based on the names of the folders, (3) associates keywords with the corresponding pieces of media content, and (4) creates keyword collections for the keywords. Three operational stages 2705 , 2710 , and 2715 of the GUI 100 are illustrated in FIG. 27 .
- the user selects an import control 2720 .
- the selection causes an import options window 2725 to be displayed.
- the import options window includes a set of controls 2730 for specifying different import options.
- the set of controls 2730 includes an option for adding the imported content to an existing event collection or creating a new event collection.
- the set of controls 2730 includes options for analyzing audio or video to create keyword collections based on the analysis.
- the list of analysis options includes options for (1) analyzing and fixing image stabilization problems, (2) analyzing for balance color, and (3) finding people.
- the list of audio options includes options for (1) analyzing and fixing audio problems, (2) separating mono and group stereo audio, and (3) removing silent channels. These are similar to the ones mentioned above by reference to FIG. 24 .
- the import options window 2725 allows a user to specify one or more of these analysis options during the import session.
- the import options window 2725 includes a control 2735 for specifying whether to import the clips in the different folders as keyword collections.
- the user selects the option 2735 , selects two folders having different media clips, and selects the import button 2740 .
- the third stage 2715 shows the GUI 100 after the user selects the import button 2740 in the import options window 2725 . Specifically, this stage illustrates that the media editing application associated each imported media clip with a corresponding keyword based on the name of the source folder of the media clip. For each folder, the media editing application also creates keyword collections that contain the associated clips. As shown in the third stage 2715 , the imported media clips are represented by representations 2750 . Each of these representations 2750 includes a bar 2745 that indicates that the corresponding video clips are associated with a keyword.
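The folder-based import of FIG. 27 can be sketched as a mapping from each file's source-folder name to a keyword collection. Paths and the function name are illustrative; a real implementation would walk the file system and also run any selected analysis options.

```python
from pathlib import PurePath

def import_as_keyword_collections(file_paths):
    """Tag each imported clip with its source folder's name and
    group the clips into one keyword collection per folder."""
    collections = {}
    for path in file_paths:
        keyword = PurePath(path).parent.name  # folder name becomes the keyword
        collections.setdefault(keyword, []).append(PurePath(path).name)
    return collections

collections = import_as_keyword_collections([
    "/media/Vacation/beach.mov",
    "/media/Vacation/sunset.mov",
    "/media/Birthday/cake.mov",
])
```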
- FIG. 28 conceptually illustrates a process for automatically organizing media clips into different keyword collections by analyzing the media clips.
- the process 2800 is performed by a media editing application.
- the process 2800 starts when it receives (at 2805 ) an input to analyze one or more media clips.
- An example of receiving input during an import operation is described above by reference to FIG. 27 .
- Several other examples of receiving input to analyze a group of media clips are described above by reference to FIGS. 24 and 26 .
- the process 2800 then identifies (at 2810 ) a media clip to analyze.
- the process 2800 analyzes the media clip.
- video analysis operations include (1) analyzing for image stabilization problems, (2) analyzing for balance color, and (3) finding people.
- audio analysis operations include (1) analyzing audio problems, (2) analyzing for mono and group stereo audio, and (3) analyzing for silent channels.
- some embodiments perform other types of analysis to tag the media clip. This may entail analyzing the metadata of a clip and/or identifying a source directory or folder from which the clip originates.
- the process 2800 then (at 2820 ) associates the media clip with one or more keywords based on the analysis.
- the process 2800 then creates a keyword collection for each keyword.
- Several examples of creating such keyword collections are described above by reference to FIGS. 24-26 .
- some embodiments also create one or more smart collections or filter collections for each keyword collection. For example, based on a people analysis, a keyword collection may include other collections such as group, medium shot, one person, wide shot, etc.
- the process 2800 determines (at 2830 ) whether there are any other media clips to analyze. When the determination is made that there is another media clip to analyze, the process 2800 returns to 2810 . Otherwise, the process 2800 ends.
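The steps of process 2800 can be sketched as a loop over clips: identify a clip, analyze it, associate keywords, and build a keyword collection per keyword. The analyzer below is a stand-in; the real analyses (stabilization, color, people) are far more involved.

```python
def organize_by_analysis(clips, analyze):
    """analyze(clip) returns the set of keywords found in the clip."""
    keyword_collections = {}
    for clip in clips:                    # 2810: identify clip; 2830: loop
        for keyword in analyze(clip):     # 2815: analyze the media clip
            # 2820/2825: associate keyword and create its collection
            keyword_collections.setdefault(keyword, []).append(clip)
    return keyword_collections

# Hypothetical analyzer: flag clips whose metadata says they are shaky.
clips = [{"name": "a.mov", "shaky": True},
         {"name": "b.mov", "shaky": False}]
result = organize_by_analysis(
    clips, lambda c: {"Excessive Shake"} if c["shaky"] else set())
```

Note that collections are created only for keywords that the analysis actually finds, matching the dynamic-generation behavior described for FIG. 24 above.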
- FIG. 29 provides an illustrative example of creating a smart collection.
- Five operational stages 2905 - 2925 are shown in this figure.
- the first stage 2905 shows the GUI 100 after the user selects an area of the event library 125 .
- the selection causes a context menu 2930 to appear.
- the context menu 2930 includes a selectable menu item 2935 for creating a new smart collection.
- the user is presented with a smart collection 2940 as illustrated in the second stage 2910 .
- the smart collection 2940 is displayed in the event library 125 .
- the smart collection 2940 is categorized under an event collection 2945 at a same hierarchical level as keyword collections 2950 .
- the smart collection 2940 includes graphical and textual elements.
- the graphical element provides a visual indication (e.g., through different color, through symbol) that the collection 2940 is different from the event collection 2945 and the keyword collections 2950 .
- the textual element 2955 of the smart collection represents a name of the smart collection 2940 .
- the application has specified a default name for the collection 2940 .
- the textual element is highlighted to indicate that a more meaningful name can be inputted for the collection 2940 .
- the third stage 2915 shows the GUI 100 after the user inputs a name for the smart collection 2940 . Specifically, after inputting the name, the user then selects the collection 2940 to define one or more filter operations. When the user selects the collection 2940 (e.g., through a double-click operation), the GUI 100 displays a filter tool 2960 as illustrated in the fourth stage 2920 .
- the filter tool 2960 includes a filter display area 2965 and a selectable item 2970 .
- the filter display area 2965 is empty, which indicates to the user that no filter is applied for the smart collection 2940 .
- the event browser provides the user with the same indication, as each of the video clips from the event collection is in the smart collection.
- the fifth stage 2925 shows the selection of the selectable item 2970 .
- the selection causes a list 2975 of different filters to be displayed.
- the list 2975 includes (1) a text filter for filtering a smart collection based on text associated with the content, (2) a ratings filter for filtering based on ratings (e.g., favorite, reject), (3) an excessive shakes filter for filtering based on shakes (e.g., from camera movements), (4) a people filter for filtering based on people (e.g., one person, two persons, group, close-up shot, medium shot, wide shot, etc.), (5) a media type filter for filtering based on media type (e.g., video with audio, audio only, etc.), (6) a format filter for filtering based on format of the content, and (7) a keyword filter for filtering based on keywords.
- FIG. 30 provides an illustrative example of filtering the smart collection 2940 based on keyword.
- Four operational stages 3005 - 3020 are shown in this figure.
- the first stage 3005 shows the GUI 100 prior to applying a keyword filter.
- the filter display area 2965 is empty, which indicates to the user that no filter is applied for the smart collection 2940 .
- the event browser provides the user with the same indication, as each of the video clips from the event collection is in the smart collection.
- the filter list is activated to display a list 2975 of different filters.
- the keyword filter is added to the filter display area 2965 as illustrated in the second stage 3010 .
- the media editing application provides a list of existing keywords from which a user can choose for the keyword filter. This is illustrated in the second stage 3010 as the filter display area 2965 lists several existing keywords.
- the keywords in the filter display area 2965 correspond to keyword collections 3025 in the event library 125 .
- the second stage 3010 shows the contents of the smart collection 2940 after applying the keyword filter operation.
- the event browser is filtered such that only ranges of media that are marked with the keywords are shown.
- the keyword filter operation removes representations 3030 and 3035 from the event browser, as the video clips associated with these representations are not marked with any keyword.
- the user can filter the smart collection 2940 further to include only ranges of media that include all keywords. For example, by selecting a control 3040 , the smart collection 2940 can be filtered to display ranges of media that are associated with both keywords. In some embodiments, when a media clip is marked with different keywords in different ranges, a smart collection includes only the portions of those ranges that overlap. Alternatively, the smart collection 2940 may include all the different ranges of the media clip, in some embodiments.
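The match-all-keywords behavior described above, in the embodiment that keeps only overlapping ranges, reduces to standard interval intersection. Ranges are modeled here as `(start, end)` tuples; the function names are illustrative.

```python
def intersect(r1, r2):
    """Overlap of two (start, end) ranges, or None if disjoint."""
    start, end = max(r1[0], r2[0]), min(r1[1], r2[1])
    return (start, end) if start < end else None

def ranges_matching_all(clip_ranges, keywords):
    """clip_ranges: keyword -> list of (start, end) ranges.
    Returns the ranges that carry every requested keyword at once."""
    result = clip_ranges.get(keywords[0], [])
    for kw in keywords[1:]:
        next_result = []
        for r1 in result:
            for r2 in clip_ranges.get(kw, []):
                overlap = intersect(r1, r2)
                if overlap:
                    next_result.append(overlap)
        result = next_result
    return result

tags = {"interview": [(0, 30)], "close-up": [(10, 40)]}
```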
- each keyword in the filter display area 2965 includes a selectable item 3045 for including or excluding the corresponding keyword from the filtering operation.
- the fourth stage 3020 shows a selection of a selectable item 3045 .
- the selection causes the smart collection 2940 to be filtered to exclude each media range associated with a keyword of the selectable item 3045 .
- This is illustrated in the fourth stage 3020 as the event browser displays the representation 3055 that is marked with a keyword corresponding to a selectable item 3050 in the filter display area 2965 .
- only the keywords that are in one event collection are displayed in the filter display area 2965 . This is because the smart collection 2940 is created at a same hierarchical level as a keyword collection. In some embodiments, when a smart collection is created at a higher level in a hierarchy (e.g., above multiple different collections at a disk level above or at the event level) all the keywords at the same level or below may be displayed in the filter tool as selectable filtering options.
- the media editing application allows a user to perform filtering operations without having to create a smart collection.
- FIG. 31 illustrates filtering the event browser based on keywords. Specifically, this figure illustrates searching for different ranges of clips associated with one or more keywords.
- Each event collection includes two keyword collections.
- the user selects a filter tool 3120 to search for clips at a level above the event collection level (e.g., disk level). Specifically, in this example, the user selects the tool 3120 without selecting any collection. Alternatively, the user might have selected a collection that is at a higher level than the event collection prior to selecting the filter tool 3120 .
- the selection of the filter tool 3120 causes a filter display area 2965 to be displayed.
- the filter display area 2965 displays several selectable items for different keywords. These keywords correspond to the keyword collections of the event collections 3125 and 3130 .
- the selection also causes the event browser 130 to display each clip range associated with the keywords.
- the third stage 3115 shows the GUI 100 after selecting the selectable item 3135 . As shown, the selection causes the event browser to be filtered to exclude each clip range associated with the keyword that corresponds to the selectable item 3135 .
- An event collection may contain media clips or ranges of clips that a user likes or dislikes. For example, there might be several frames where the image is blurry or chaotic, or frames where the imagery is not particularly captivating.
- the media editing application provides a marking tool to rate clips or ranges of clips.
- FIG. 32 illustrates an example of rating a media clip.
- Three operational stages 3205 - 3215 of the GUI 100 are shown in this figure. Specifically, in the first stage 3205 , the user selects a representation 3220 of a clip. In the second stage 3210 , the user selects a UI item 3225 to mark the clip associated with the representation 3220 as a favorite. Alternatively, the user can hit a shortcut key to mark the clip. The user can also select another shortcut key or user interface item 3235 to mark the clip as a reject. Further, when a clip is marked with a rating, the user can select yet another shortcut key or user interface item 3230 to remove the rating.
- some embodiments display an indication of the rating. This is illustrated in the third stage 3215 , as a line or bar 3245 is displayed across the representation 3220 .
- the color of the indication corresponds to a color of the user interface item 3225 .
- FIG. 33 illustrates an example of filtering an event collection 3322 based on ratings or keywords.
- a filtering operation allows a user to quickly identify clips that are tagged, marked, rejected, not rated, or not tagged.
- Four operational stages 3305 - 3320 of the GUI 100 are shown in this figure. Specifically, in the first stage 3305 , the user selects a UI item 3325 . The selection causes a drop down list 3330 to appear, as illustrated in the second stage 3310 . The drop down list 3330 displays several selectable options related to filtering the event collection 3322 through ratings or keywords. For example, the drop down list 3330 displays a selectable option for hiding rejected clips, and a selectable option 3335 for only displaying clips that have no ratings or keywords. The user can select any of these selectable options.
- When the user selects the selectable option 3335 in the drop down list 3330 , the user is presented with an event browser 130 as shown in the fourth stage 3320 . Specifically, the selection of the selectable option 3335 causes the event browser 130 to display clips that do not have any associated ratings or keywords.
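The filtering options of FIG. 33 can be sketched as predicates over a clip's rating and keywords. The field names (`rating`, `keywords`) are assumptions for illustration, not the patent's.

```python
def hide_rejected(clips):
    """Drop-down option: hide clips marked with a reject rating."""
    return [c for c in clips if c.get("rating") != "reject"]

def no_ratings_or_keywords(clips):
    """Option 3335: show only clips with no ratings and no keywords."""
    return [c for c in clips
            if c.get("rating") is None and not c.get("keywords")]

clips = [
    {"name": "a", "rating": "favorite", "keywords": ["interview"]},
    {"name": "b", "rating": "reject", "keywords": []},
    {"name": "c", "rating": None, "keywords": []},
]
```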
- the media editing application provides a novel list view that displays different ranges of media associated with keywords.
- the list view in some embodiments allows users to select different ranges of a media clip and/or navigate to different sections of the media clip.
- the list view is another view of the clip browser or event browser. Accordingly, all of the operations described above in relation to the thumbnails view (e.g., clips view, filmstrip view) can be performed in this list view. These operations include creating different keyword collections, associating a clip or a portion of the clip with a keyword, creating compound clips, disassociating a keyword, performing different operations with the keyword tagging tool, etc.
- FIG. 34 illustrates the GUI 100 of the media editing application with such a list view.
- FIG. 34 illustrates the GUI 100 at two different stages 3405 and 3410 .
- the GUI 100 includes the event library 125 and the event browser 130 .
- the event library 125 and event browser 130 are the same as those described above by reference to FIG. 1 .
- the event browser 130 displays different media content items in the list view 3415 .
- the list view includes a list section 3420 and a preview section 3425 . Different from a filmstrip view that displays filmstrip representations of different clips, the list view displays each clip's name and media type along with other information.
- the list section 3420 displays the name of the clip, the start and end times, clip duration, and creation date.
- the information is displayed in different columns with a corresponding column heading (e.g., name, start, end, duration, date created).
- the user can sort the clips in the list by selecting any one of the different column headings.
- Each column can also be resized (e.g., by moving column dividers in between the columns).
- the columns may be rearranged by selecting a column heading and moving it to a new position.
- the list view 3415 allows a user to choose what type of information is displayed in list view. For example, when a column heading is selected (e.g., through a control click operation), the list view 3415 may display a list of different types of information that the user can choose from.
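The sortable, user-configurable columns described above can be sketched as two small operations over rows keyed by column name. The helper names are illustrative; column names follow the ones listed above (name, start, end, duration, date created).

```python
def sort_by_column(rows, column, descending=False):
    """Reorder rows when the user selects a column heading."""
    return sorted(rows, key=lambda r: r[column], reverse=descending)

def project_columns(rows, visible):
    """Keep only the columns the user chose to display."""
    return [{col: r[col] for col in visible} for r in rows]

rows = [
    {"name": "b.mov", "duration": 12.0, "start": 0.0},
    {"name": "a.mov", "duration": 30.0, "start": 5.0},
]
by_duration = sort_by_column(rows, "duration", descending=True)
```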
- the preview section 3425 in some embodiments displays a filmstrip representation of a media clip selected from the list section 3420 .
- the filmstrip representation is an interactive UI item.
- the user can select an interior location within the representation to display a preview of the representation's associated clip in a preview display area.
- a playhead 3430 moves along a virtual timeline of the filmstrip representation. The user can use the playhead 3430 as a reference point to display different images and play different audio samples associated with the video clip.
- the first stage 3405 shows the event browser in the list view.
- the user might have changed the view of event browser 130 by selecting a menu item or a toolbar button.
- the selection of a media clip 3440 in the list section 3420 causes the preview section 3425 to display a filmstrip representation 3435 .
- the representation 3435 includes several bars that indicate that representation's associated video clip is marked. Specifically, a bar 3445 having a first visual representation (e.g., red bar) indicates that a first range of the video clip is marked with a reject rating, a bar 3455 having a different second visual representation (e.g., blue bar) indicates that a second range is marked with a keyword, and a bar 3450 having a third visual representation different than the first or second visual representations (e.g., green bar) indicates that a third range is marked with a favorite rating.
- the keyword bar 3455 (e.g., blue bar) is displayed below the ratings bar 3450 .
- the media editing application may display the ranges differently in other embodiments. For example, instead of different bars, the media editing application may display other indications or other colors to distinguish different ranges associated with keywords and/or ratings markers.
- the second stage 3410 shows the selection of a column heading of the list section 3420 (e.g., through a control click operation).
- the selection causes the GUI 100 to display a list 3460 that allows a user to choose the type of information that is displayed in list section 3420 .
- the type of information or metadata includes start time, end time, duration, content creation date, notes, reel, scene, shot/take, audio role, media start, media end, frame size, video frame rate, audio channel count, audio sample rate, file type, date imported, and codec.
- the list 3460 may include other types of information.
- FIG. 35 illustrates expanding a media clip in a list view.
- Two operational stages of the GUI 100 are shown in this figure.
- a media clip 3515 is selected from the list section 3420 to display the filmstrip representation 3435 in the preview section 3425 .
- the user selects the UI item 3520 adjacent to the media clip information 3515 in the list.
- the selection causes the list view to display additional information related to the media clip in an expanded list 3525 .
- the user can re-select the UI item 3520 to hide the expanded list 3525 .
- the media editing application allows a user to quickly expand or collapse a selected clip by selecting a hotkey. For instance, in the example illustrated in FIG. 35 , the user can expand the selected media clip by selecting a key (e.g., right arrow key) and collapse the clip by selecting another key (e.g., left arrow key).
- the expanded list 3525 displays information related to marked ranges of the media clip. Specifically, for each range associated with a keyword, the expanded list includes a name of the keyword, the start and end times, and range duration. The expanded list 3525 displays the same information for each ratings marker. In some embodiments, the media editing application may display other information (e.g., creation date, notes on different ranges, etc.). In some embodiments, the media clip information in the list may only be expanded when the corresponding media clip is marked with a keyword or rating. For example, when a media clip is not marked, the media clip information 3515 may not have a corresponding UI item to display an expanded list.
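- A hypothetical data model for the expandable list entries described above is sketched below: a clip row can be expanded only when it carries keyword or rating ranges, and expansion reveals one sub-row per marked range. All class and attribute names are illustrative assumptions.

```python
# Assumed model of an expandable list entry and its marked ranges.
from dataclasses import dataclass, field

@dataclass
class MarkedRange:
    label: str       # keyword name or rating (e.g., "favorite")
    start: float
    end: float

    @property
    def duration(self):
        return self.end - self.start

@dataclass
class ClipEntry:
    name: str
    ranges: list = field(default_factory=list)
    expanded: bool = False

    def can_expand(self):
        # An unmarked clip has no UI item to display an expanded list.
        return bool(self.ranges)

    def toggle(self):
        if self.can_expand():
            self.expanded = not self.expanded

    def rows(self):
        yield self.name
        if self.expanded:
            for r in self.ranges:
                yield f"  {r.label}: {r.start}-{r.end} ({r.duration}s)"

clip = ClipEntry("scene_01", [MarkedRange("sunset", 2.0, 6.0)])
clip.toggle()
print(list(clip.rows()))
```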
- the different sections of the list view allow a user to quickly assess a group of media clips and see which ranges are marked with one or more markings (e.g., keywords, markers).
- the preview section 3425 is displayed above the list section 3420 .
- This example layout of the different sections allows the user to view a detailed representation of a media clip (e.g., one that includes different visual indications representing different marked ranges), and simultaneously view detailed information regarding the media clip (e.g., media clip metadata) and its marked ranges (e.g., marking or range metadata).
- the list view, in some embodiments, can be used to associate one or more portions of one or more media clips with different markings.
- the list section 3420 is dynamically updated with the marked ranges. For example, when a user drags a selected range of the media clip to a keyword collection, a keyword entry is dynamically added to the list section 3420 .
- when the association is created, the entry for the marking is also selected in the list section 3420 .
- FIG. 36 illustrates an example of simultaneously expanding multiple different clips in the list view 3415 .
- Two operational stages 3605 and 3610 are shown in this figure. Specifically, in the first stage 3605 , the user selects all the media clips in the list view 3415 . The user might have selected these items in a number of different ways (e.g., by selecting a first item in the list and selecting a last item while holding a modifier, by using a select all shortcut, by selecting an area with these items, etc.).
- the user selects a hotkey (e.g., right arrow key) to expand each media clip that can be expanded.
- the selection causes (1) a media clip 3615 to expand and reveal a ratings marker and (2) a media clip 3620 to expand and reveal two ratings markers and two keywords.
- some embodiments provide one or more selectable user interface items for expanding multiple media clips.
- the list view 3415 allows a user to input notes for (1) media clips and (2) ranges of media clips. For example, a user can add a note to an entire clip or only a portion of the clip associated with a keyword.
- FIG. 37 illustrates the list view 3415 with several fields for adding notes.
- the list section 3420 of the list view 3415 displays information about several clips. Specifically, the list section 3420 displays additional information regarding a keyword 3715 and two markers ( 3720 and 3730 ) related to a media clip 3710 in an expanded list.
- the list section 3420 includes a “Notes” column 3725 . As shown, the user can add notes to the entire clip 3710 using the notes field 3735 . The user can also add notes to the different ranges using notes fields 3740 - 3750 .
- FIG. 38 illustrates selecting different ranges of a media clip using the list view 3415 .
- this figure illustrates how the list view 3415 can be used to granularly select different ranges of the clip that are marked with a rating or associated with a keyword.
- this allows a user to easily select a marked or tagged range, and modify the selected range. For example, the user can trim or expand the range associated with a particular keyword. When one or more ranges are selected, the user can associate the range with a keyword, add the range to a timeline, etc.
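- The trim-or-expand operation described above can be sketched as dragging one edge of a marked range, clamped to the clip's bounds. This is an assumed sketch for illustration; the function and parameter names are not from the patent.

```python
# Assumed sketch: adjust a marked range by dragging one of its edges.

def move_edge(range_, edge, new_time, clip_duration):
    """Return a new (start, end) after dragging one edge of a range."""
    start, end = range_
    if edge == "start":
        start = max(0.0, min(new_time, end))        # cannot cross the end
    elif edge == "end":
        end = min(clip_duration, max(new_time, start))
    return (start, end)

# Expand a keyword range's end from 6s to 8s on a 10-second clip.
print(move_edge((2.0, 6.0), "end", 8.0, 10.0))  # (2.0, 8.0)
```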
- Three operational stages 3805 - 3815 of the GUI 100 are shown in this figure.
- the GUI 100 includes the preview display area 3855 and the event browser 130 .
- the preview display area 3855 is described above by reference to FIG. 3 .
- the first stage 3805 shows the event browser 130 displaying the list view 3415 .
- the video clip information 3820 in the list section 3420 is selected and expanded.
- the selection of the video clip information 3820 in the list section causes a preview of the video clip to be displayed in the preview display area 3855 .
- the selection also causes a filmstrip representation 3835 of the video clip to be displayed in the preview section 3425 .
- the second stage 3810 shows a selection of a keyword 3825 from the expanded list 3830 .
- the selection causes the range of the video clip associated with the keyword to be highlighted.
- the filmstrip representation 3835 is highlighted with a range selector 3840 .
- the user can specify a different range by selecting and moving either edge of the range selector 3840 .
- the selection of the keyword 3825 causes the preview display area 3855 to display a preview of the range.
- the preview display area 3855 displays an image associated with the starting point of the keyword range. The user can play a preview of the video clip starting from this position.
- the third stage 3815 shows selection of a ratings marker 3845 from the expanded list 3830 .
- the selection causes the range of the video clip associated with the marker to be highlighted. Similar to the second stage 3810 , the media clip is highlighted with the range selector 3840 . Also, the preview display area 3855 displays an image associated with the starting point of the range associated with the ratings marker 3845 .
- the media editing application allows a user to navigate during playback. For example, in the list view illustrated in FIG. 38 , the user can start playback (e.g., by selecting a space key) and play different clips in the list.
- the playback is uninterrupted in that multiple clips are played one after another in the preview display area. For example, the user can start playback for a clip and select another clip or a hotkey to jump to a next clip. In this case, the preview display area of the media editing application will continue playback starting from the next clip without interruption.
- the user can navigate between the range items (e.g., markers, keywords). For example, a user might start playback of a clip that corresponds to the clip information 3820 . The playback would move past the different ranges. During playback, the user can select any one of the ranges to continue the playback starting from the selected range.
- FIG. 39 illustrates selecting multiple ranges of a media clip using the list view 3415 .
- Two operational stages 3905 - 3910 of the GUI 100 are shown in this figure. Specifically, the first stage 3905 shows the selection of a ratings marker 3920 from the list section 3420 . The selection causes the range of the video clip associated with the marker to be selected.
- the preview section 3425 provides an indication of the selection as the range corresponding to the ratings marker 3920 is highlighted in a filmstrip representation 3925 of the video clip.
- the second stage 3910 shows the selection of the ratings marker 3920 and a keyword 3930 from the list section 3420 .
- the selection causes the range of the video clip associated with the ratings marker 3920 and the keyword 3930 to be selected.
- the range between the endpoint of the marker 3920 and start point of the keyword 3930 is also selected.
- the preview section 3425 provides an indication of the selection of this composite range. Specifically, a composite range starting from the marker's range and ending at the keyword's range is highlighted in the preview section 3425 .
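- The composite-range behavior described above amounts to selecting a single span from the earliest start among the selected markings to the latest end, including any unmarked gap in between. A minimal sketch, with assumed names:

```python
# Assumed sketch: compute the composite range covered by multiple
# selected markings (ratings markers, keywords).

def composite_range(selected_ranges):
    """Return (start, end) spanning all selected (start, end) ranges."""
    starts = [s for s, _ in selected_ranges]
    ends = [e for _, e in selected_ranges]
    return (min(starts), max(ends))

marker_range = (1.0, 3.0)   # e.g., the ratings marker 3920
keyword_range = (5.0, 8.0)  # e.g., the keyword 3930
print(composite_range([marker_range, keyword_range]))  # (1.0, 8.0)
```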
- some embodiments allow the user to add the selected composite range to a timeline to create a composite presentation.
- the user can add the range to the timeline by selecting a hotkey or by dragging the selected range from the preview section 3425 to the timeline.
- adding clips to the timeline are described below by reference to FIG. 45 .
- markings (e.g., markers, keywords) are selected from the list view to select corresponding ranges in the preview section.
- conversely, when the user selects a portion of the representation (e.g., the filmstrip representation), the corresponding ranges or items are selected in the list view.
- FIG. 40 conceptually illustrates a process 4000 for displaying and selecting items (e.g., different ranges of media) in a list view.
- the list view in some embodiments allows users to select different ranges of a media clip and/or navigate to different sections of the media clip.
- the process 4000 is performed by a media editing application in some embodiments. As shown in this figure, the process 4000 begins by identifying (at 4005 ) media clips to display in the list. Next, the process 4000 displays (at 4010 ) identified media clips in a list view (e.g., in the event browser as described above).
- the process 4000 determines (at 4015 ) whether a selection of a media clip in the list has been received. When the determination is made that a selection of a media clip has been received, the process 4000 identifies (at 4030 ) items (e.g., keywords, markers) associated with the selected media clip. The process 4000 then displays (at 4035 ) a clip representation based on the identified items with the media clip range as being selected. The process then provides (at 4040 ) a preview of the media clip. Next, the process 4000 moves on to 4070 .
- when no such selection has been received, the process 4000 proceeds to 4020 .
- the process 4000 determines (at 4020 ) whether it has received a selection to expand a media clip in the list. If it is determined that the process has received a selection to expand a media clip, the process identifies (at 4050 ) each keyword associated with the media clip. The process then displays (at 4055 ) each identified keyword in the list. Afterwards, the process goes on to 4058 . If the process 4000 determines (at 4020 ) that it did not receive a selection to expand any media clip in the list, it moves on to 4070 .
- the process 4000 determines whether it has received a selection of a keyword in the list. When the determination is made that the process has received such a selection, the process displays (at 4060 ) a corresponding clip representation with keyword range selected. The process then provides (at 4065 ) a preview of the media clip starting from the selected keyword range.
- the process 4000 determines (at 4070 ) whether there is additional user input for the list view. If it is determined that there is additional user input for the list view, it returns to 4015 . Otherwise, the process 4000 terminates.
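- One possible rendering of process 4000 as an event-handling loop is sketched below. The handler names and event shapes are assumptions made for illustration; the reference numerals in the comments point back to the operations of FIG. 40.

```python
# Assumed event-loop sketch of process 4000 (FIG. 40).

def run_list_view(events, clips):
    """Handle list-view input events per the flow of FIG. 40."""
    log = []
    for event in events:               # 4015/4020/4058: classify each input
        kind, target = event
        if kind == "select_clip":      # 4030-4040: show clip, its range, preview
            log.append(f"preview {target} with its full range selected")
        elif kind == "expand_clip":    # 4050-4055: list each associated keyword
            for kw in clips.get(target, []):
                log.append(f"list keyword {kw} under {target}")
        elif kind == "select_keyword": # 4060-4065: preview from keyword range
            log.append(f"preview starting at range of {target}")
    return log                         # 4070: no more input, terminate

clips = {"clip_a": ["sunset", "beach"]}
print(run_list_view(
    [("select_clip", "clip_a"), ("expand_clip", "clip_a"),
     ("select_keyword", "sunset")], clips))
```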
- FIG. 41 conceptually illustrates a process 4100 for playing items (e.g., clips, keyword ranges) in a list view.
- the process 4100 is performed by a media editing application.
- the process 4100 starts when it receives (at 4105 ) a selection of a list view item. Examples of such list view items include media clips, keywords, smart collections, markers, etc.
- the process 4100 then receives (at 4110 ) a playback input (e.g., a selection of a play button or a hotkey such as the space key).
- the process 4100 starts (at 4115 ) the playback of the items in the list view starting from a range of a selected item. For example, when the selected item is a marker, the playback may start at a time associated with the marker.
- the process 4100 determines (at 4120 ) whether an item in the list view has been selected.
- the process 4100 continuously monitors user input during playback to make this determination.
- when an item has been selected, the process 4100 jumps (at 4140 ) to a starting point of a range of the selected item and continues playback from the starting point. For example, during playback, a user might select a keyword. In this case, the playback continues starting from a starting point of a range of a clip associated with the keyword. When a clip is selected, the playback continues from a starting point of the clip.
- the user can alternatively select another item in the list view by selecting a hotkey (e.g., directional keys) for a next or previous item in the list.
- the selection of a next or previous item skips any inner range items and moves to the next or previous item.
- the user selection of the next item causes the playback to continue from the next clip.
- when the clip is expanded in the list view to reveal its associated keywords, the user selection of the next item causes the playback to continue starting from a range of a next keyword.
- the process 4100 determines (at 4125 ) whether an input to stop playback has been received. In some embodiments, the process 4100 continuously monitors user input during playback to make this determination.
- when an input to stop playback has been received, the process 4100 ends. Otherwise, the process 4100 determines (at 4130 ) whether there are any other ranges to play back. That is, the process 4100 may have reached the end of the list. In this example, when there are no more clips or ranges to play, the process 4100 ends. Otherwise, the process 4100 continues (at 4135 ) playback starting from a range of a next item. In some embodiments, when the process 4100 finishes playing a last item in the list view, the playback continues from the first item in the list.
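- The playback-navigation behavior of FIG. 41 can be sketched as follows: selecting an item during playback jumps to that item's start, and finishing one range continues with the next, wrapping back to the first item at the end of the list in some embodiments. The class and method names are assumptions.

```python
# Assumed sketch of playback navigation across list-view items (FIG. 41).

class Playback:
    def __init__(self, items):
        # items: ordered (name, start_time) pairs from the list view
        self.items = items
        self.index = 0

    def start(self, selected_name):
        """4115: begin playback from the selected item's range."""
        self.index = next(i for i, (n, _) in enumerate(self.items)
                          if n == selected_name)
        return self.items[self.index][1]

    def jump_to(self, selected_name):
        """4140: continue playback from a newly selected item."""
        return self.start(selected_name)

    def next_range(self):
        """4135: continue from the next item, wrapping at the end."""
        self.index = (self.index + 1) % len(self.items)
        return self.items[self.index][1]

pb = Playback([("clip_a", 0.0), ("keyword_1", 4.0), ("clip_b", 10.0)])
pb.start("clip_a")
print(pb.jump_to("keyword_1"))  # playback continues from 4.0
print(pb.next_range())          # then from clip_b at 10.0
```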
- markers for marking different media clips are reference points that a user can place within media clips to identify specific frames or samples. The user can use these markers to flag different locations on a clip with editing notes or other descriptive information.
- a user can use the markers for task management.
- the markers may have “to do” notes associated with them. These notes can be notes that an editor makes as reminders to himself or others regarding tasks that have to be performed. Accordingly, some embodiments display (1) the notes associated with the marker and (2) a check box to indicate whether the task associated with the marker has been completed.
- markers are classified by appearance. For example, an informational marker may appear in one color while a to-do marker may appear in another color.
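- The marker classification described above can be modeled as follows. The specific colors follow the examples given in the text (blue informational, red to-do, green completed); the class itself is an illustrative assumption.

```python
# Assumed model of marker kinds distinguished by appearance.

class Marker:
    def __init__(self, name, time, to_do=False):
        self.name = name
        self.time = time
        self.to_do = to_do
        self.completed = False

    def color(self):
        if not self.to_do:
            return "blue"            # informational marker
        return "green" if self.completed else "red"

m = Marker("fix audio here", 12.5, to_do=True)
print(m.color())      # red: incomplete to-do item
m.completed = True
print(m.color())      # green: task completed
```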
- markers are added to a clip in a list view of the event browser. However, the markers may be added in a different view or in a timeline. For example, the markers may be added in a filmstrip view that displays filmstrip representations of different clips.
- FIG. 42 illustrates adding a marker to a clip using the list view 3415 .
- Three operational stages 4205 - 4215 of the GUI 100 are shown in this figure.
- the preview section 3425 of the list view 3415 displays a filmstrip representation 4240 of a video clip.
- a user has selected a video clip information item 4220 from the list section 3420 of the list view.
- the user has selected a UI item 4225 in the list section 3420 of the list view 3415 to display information regarding a keyword associated with the video clip.
- the keyword information 4230 indicates a range of the video clip associated with the keyword.
- the association of the keyword to the range of the video clip is represented in the preview section 3425 with a bar 4235 that spans horizontally across the filmstrip representation 4240 .
- the user selects an upper edge of the filmstrip representation 4240 .
- a line 4245 moves along a virtual timeline to the selected location. The user can drag the line along the virtual timeline and use it as a reference point to specify a location for the marker.
- the third stage 4215 illustrates associating a marker with a video clip.
- the user selects a menu item for adding a marker or selects a hotkey.
- the marker is associated with the video clip at a specific point in the duration of the video clip. This is indicated by the list section that lists information 4255 related to the marker.
- the marker information 4255 indicates that the name of the marker is “Marker 1”.
- the marker information 4255 also indicates that a range (i.e., one second) of the video clip is associated with the marker.
- a marker representation is also added to the filmstrip representation 4240 in the preview section 3425 . Specifically, a marker 4250 is added to a position corresponding to the selected location described in the list view.
- the user can reposition or delete the marker. For example, a user can reposition the marker 4250 in the preview section 3425 by dragging the marker to a new location. Alternatively, the user can delete the marker by selecting and removing the marker (e.g., by pressing a delete key).
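- The marker operations described above (adding a marker at a chosen location, repositioning it by dragging, deleting it, and navigating to the next marker) can be sketched as follows. The class and method names are assumptions for illustration.

```python
# Assumed sketch of per-clip marker operations.

class ClipMarkers:
    def __init__(self):
        self.markers = {}            # marker name -> time in seconds

    def add(self, name, time):
        self.markers[name] = time

    def reposition(self, name, new_time):
        self.markers[name] = new_time

    def delete(self, name):
        del self.markers[name]

    def next_after(self, time):
        """Navigate to the next marker after the given time, if any."""
        later = sorted(t for t in self.markers.values() if t > time)
        return later[0] if later else None

clip = ClipMarkers()
clip.add("Marker 1", 3.0)
clip.reposition("Marker 1", 4.5)   # drag to a new location
print(clip.next_after(0.0))        # 4.5
clip.delete("Marker 1")            # e.g., via the delete key
print(clip.markers)                # {}
```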
- the media editing application may allow the user to navigate between the markers. For example, the media editing application may provide a hotkey or a selectable UI item for navigating to the next/previous marker.
- the marker 4250 is added with the user specifying a location along the duration using the filmstrip representation 4240 in a list view.
- these markers can also be added, deleted, or modified in a different view (e.g., thumbnail view, filmstrip view).
- These markers can also be added, deleted, or modified in the timeline. Several examples of modifying markers in the timeline are described below by reference to FIG. 51 .
- the user can add a marker during playback of the video clip associated with the filmstrip.
- the user can select the filmstrip representation and play the video clip (e.g., by selecting a play button or a hotkey).
- the line 4245 moves horizontally across the virtual timeline of the filmstrip representation 4240 .
- the user can identify a location within the clip and pause the playback (e.g., by selecting a pause button or a pause hotkey).
- the user can then mark the location.
- the user may simply mark a location as the video clip plays (e.g., by selecting a menu item for marking a clip or by selecting a hotkey).
- FIG. 43 provides an illustrative example of editing a marker.
- Four operational stages 4305 - 4320 of the GUI 100 are shown in this figure.
- a user selects a marker 4325 .
- the selection causes a marker editor 4330 to appear as illustrated in the second stage 4310 .
- the marker editor includes a text field 4335 for specifying a name or description of the marker, a control 4340 for deleting the marker, a control 4345 for defining the marker as a to-do item, and a control 4350 for applying changes to the marker or closing the marker editor 4330 .
- the user types in the text field 4335 to provide a descriptive name or note for the marker 4325 .
- the third stage 4315 illustrates the marker editor after the user inputs a different name for the marker.
- the fourth stage 4320 illustrates the event browser 130 after the user selects the control 4350 .
- the marker information 4355 in the list section 3420 indicates that the name of the marker has been changed from “Marker 1” to “Scene 1 Start”.
- FIG. 44 provides an illustrative example of defining a marker as a to-do item.
- Two operational stages 4405 and 4410 of the GUI 100 are shown in this figure.
- the first stage 4405 illustrates a selection of the control 4345 for defining the marker as a to-do item.
- the second stage 4410 illustrates the GUI 100 after the user selects the control 4345 .
- the selection causes the marker to change its appearance.
- the marker changes color (e.g., from blue to red).
- the control 4345 is replaced with a control 4415 or check box for indicating whether the to-do item is a completed item.
- a selection of this control causes the marker to appear differently. For example, the marker may change from a red color to a green color to indicate that the task is completed.
- FIG. 45 provides an illustrative example of adding a video clip to a timeline.
- the GUI 100 includes the preview display area 325 , the event library 125 , the event browser 130 , and the timeline 4525 .
- Two operational stages 4505 and 4510 of the GUI 100 are illustrated in this figure.
- the preview display area 325 , the event library 125 , and the event browser 130 are the same as those described above (e.g., FIGS. 1 , 3 , 34 ).
- the first stage 4505 illustrates a selection of a video clip to add to the timeline 4525 .
- the user selects the video clip from the list view by selecting the video clip information 4530 .
- the user can drag the video clip information 4530 in the list section or the representation 4535 in the preview section 3425 to the timeline.
- the user can also select a hotkey to add the video clip to the timeline.
- a range of the clip may be added to the timeline.
- a range of a clip may be added by selecting a filmstrip representation in a keyword collection that represents a range of a video clip associated with a keyword.
- a range of a clip can also be selected from any one or more of the keywords or other items (e.g., ratings marker) displayed in the list view.
- the user can use a range selector to define a range of a clip to add to the timeline.
- Some embodiments provide a novel timeline search tool for searching and navigating a timeline.
- the search tool includes a search field for searching for clips in the timeline based on their names or associated keywords.
- the search tool includes a display area for displaying search results.
- each result is user-selectable such that a selection of the result causes the timeline to navigate to the position of the clip in the timeline. Accordingly, the timeline search tool allows a content editor to navigate the timeline to identify clips.
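- The core idea of the timeline search tool can be sketched as an index whose entries record where each clip sits in the timeline, so that selecting a search result navigates the timeline to that position. The function names and data shapes are assumptions made for illustration.

```python
# Assumed sketch of the timeline search tool's index and search.

def build_index(timeline_clips):
    """timeline_clips: (name, start_time, keywords) tuples in timeline order."""
    return [{"name": n, "position": p, "keywords": kws}
            for n, p, kws in timeline_clips]

def search(index, query):
    """Match clips by name or by an associated keyword."""
    q = query.lower()
    return [e for e in index
            if q in e["name"].lower()
            or any(q in k.lower() for k in e["keywords"])]

index = build_index([
    ("opening title", 0.0, []),
    ("beach walk", 12.0, ["sunset", "wide shot"]),
])
results = search(index, "sunset")
print(results[0]["position"])   # navigate the timeline to 12.0
```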
- FIG. 46 provides an illustrative example of a timeline search tool 4630 according to some embodiments. Two operational stages 4605 and 4610 are shown in this figure. As shown in FIG. 46 , the timeline search tool 4630 includes (1) a search field 4615 for specifying one or more search parameters, (2) a control 4660 for entering a clip view, (3) a control 4635 for entering a keyword view, (4) an index area 4620 , and (5) an index playhead 4625 .
- a timeline 4650 displays one of several different clips that are in a composite presentation.
- a user or content editor might have added these clips to the timeline in a current editing session or by opening a composite project (alternatively may be referred to as a “project”) that was defined in a previous editing session.
- the timeline search tool 4630 is displayed adjacent to the timeline 4650 .
- the timeline search tool 4630 may be displayed elsewhere in some embodiments.
- the timeline search tool may be provided in its own window separate from the timeline 4650 .
- the timeline search tool 4630 may be closed or opened (e.g., by selecting a toolbar button, menu item, shortcut key, etc.).
- the first stage 4605 shows the timeline search tool 4630 in a clip view.
- the user can switch to a keyword view by selecting the control 4635 .
- the index area 4620 lists each clip (e.g., a range of a clip) that is added to the timeline 4650 .
- One or more scrollbars may be displayed when the list of clips does not fit in the index area 4620 .
- Each particular clip listed in the index area 4620 represents an index to the particular clip in the timeline 4650 .
- the user can select any one of the indices to navigate to a position of a corresponding clip in the timeline 4650 . For example, when the composite presentation is for a two-hour program with many ranges of different clips, the user can select an index for a clip range and quickly navigate to the clip range in the timeline 4650 .
- each clip includes (1) a clip icon that indicates the type of clip (e.g., video, audio, title), (2) a clip name, and (3) time duration.
- a user can choose what types of clips are listed in the index area 4620 by selecting one or more controls from a set of controls 4640 . For example, the user can specify whether only video clips, audio clips, or title clips are displayed in the index area 4620 .
- the index area 4620 displays the clips differently. For example, each clip may be represented by one or more thumbnail images, waveform, etc.
- the second stage 4610 shows the timeline search tool 4630 in a keyword view.
- the user can switch to a clip view by selecting the control 4660 .
- the index area 4620 lists each keyword that is associated with one or more ranges of a clip in the timeline. These keywords may be user-specified keywords or analysis keywords in some embodiments. In addition to keywords, some embodiments list markers (e.g., ratings marker, to-do markers, etc.). In some embodiments, the index area 4620 lists smart collections. For example, the index area 4620 may list different smart collections related to an analysis keyword such as one person, two persons, a group of people, wide shot, close-up, etc. Similar to the clip view, one or more scrollbars may be displayed when the list of items does not fit in the index area 4620 .
- each item represents an index to the item in the timeline 4650 .
- the user can select any one of the indices to navigate to a position of a corresponding item in the timeline 4650 .
- each item in the index area 4620 includes an icon that indicates its type, name, and time duration. Also, the items are listed in the index area 4620 in chronological order starting with a first item in the timeline 4650 and ending with a last item in the timeline.
- a user can choose which types of items are displayed in the index area 4620 by selecting one or more controls of a set of controls 4645 below the index area. For example, the user can specify that only markers, keywords, incomplete to-do markers, or completed to-do markers be displayed in the index area.
- the index playhead 4625 is positioned at the top of index area 4620 above any other items (e.g., clip, keyword, and marker in both views).
- the position of the index playhead 4625 provides a reference point to one or more clips that are displayed in the timeline 4650 .
- the position of the index playhead 4625 indicates that the timeline is displaying a first clip in the composite presentation that the user is creating.
- the position of the index playhead 4625 provides a reference point to one or more items (e.g., keywords, markers) that are associated with a particular clip in the timeline 4650 .
- the position of the index playhead 4625 also corresponds to a timeline playhead 4655 in the timeline. This index playhead 4625 moves synchronously with the timeline playhead 4655 , in some embodiments.
- FIG. 47 provides an illustrative example of the association between the timeline playhead 4655 and the index playhead 4625 . Specifically, in three operational stages 4705 - 4715 , this figure illustrates how the index playhead 4625 is moved when a user selects and moves the timeline playhead 4655 .
- the timeline playhead 4655 is situated at a position on the timeline 4650 that corresponds to a starting point of the composite presentation.
- the timeline search tool 4630 is in a keyword view and displays a list of keywords and markers.
- the position of the index playhead 4625 corresponds to the position of the timeline playhead 4655 . This is shown in the first stage 4705 as the index playhead is situated at the top of the index area 4620 above the keywords and markers.
- the second stage 4710 shows a selection and movement of the timeline playhead 4655 past a first marker 4720 .
- the movement causes the index playhead 4625 to be moved down by following the chronological order of the indices in the index area 4620 .
- the index playhead 4625 is moved to a position below a first marker item 4725 corresponding to the first marker 4720 in the timeline 4650 .
- the third stage 4715 shows a selection and movement of the timeline playhead 4655 past a second marker 4730 . As shown, the movement causes the index playhead 4625 to be moved down in the list of markers and keywords. Specifically, in the third stage 4715 , the index playhead 4625 is moved to a position below a second marker item 4735 corresponding to the second marker 4730 in the timeline 4650 .
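- The index playhead's movement in the stages above can be computed from the timeline playhead alone: the index playhead sits below every index item whose timeline start the playhead has passed. This is one possible implementation, sketched under assumed names.

```python
# Assumed sketch: derive the index playhead's row from the timeline
# playhead's time (FIG. 47).

def index_playhead_row(item_starts, playhead_time):
    """Return how many index items the playhead has moved past."""
    return sum(1 for start in item_starts if playhead_time >= start)

item_starts = [5.0, 20.0, 40.0]   # markers/keywords in timeline order
print(index_playhead_row(item_starts, 0.0))   # 0: above all items
print(index_playhead_row(item_starts, 25.0))  # 2: below the second item
```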
- FIG. 48 provides an illustrative example of filtering the timeline search tool 4630 . Specifically, this figure illustrates in six operational stages 4805 - 4830 how the set of controls 4835 - 4860 can be used to filter the index area 4620 . In this example, as the search tool is in a keyword view, only the set of controls 4835 - 4860 related to keyword search is shown below the index area 4620 .
- the index area lists all keywords and markers associated with clips in the timeline. Specifically, the control 4835 for showing all items is activated, which causes the index area 4620 to list each item.
- the second stage 4810 shows selection of a control 4840 , which causes the index area 4620 to display only markers.
- the third stage 4815 shows selection of a control 4845 for displaying only keywords. Accordingly, in the third stage 4815 , only keywords are listed in the index area 4620 . Specifically, the index area 4620 lists two selectable keyword items. The first item corresponds to a range of a clip associated with both first and second keywords. The second item corresponds to a range associated with the first keyword. In some embodiments, the time (e.g., time code) listed for each item (e.g., clip, keyword, marker, etc.) in the index area 4620 corresponds to a starting point of the range of the item along the sequence or composite presentation. For example, in the third stage 4815 , the first range associated with the two keywords is around 35 seconds into the sequence.
- the fourth stage 4820 shows selection of a control 4850 for displaying only analysis keywords. This causes the index area 4620 to display only two selectable items for analysis keywords associated with the clips in the sequence.
- the fifth stage 4825 shows selection of a control 4855 that causes the index area to display only to-do markers.
- the sixth stage 4830 shows selection of a control 4860 that causes the index area 4620 to only list completed to-do markers. In the sixth stage 4830 , the index area 4620 is empty because the clips in the timeline are not associated with any completed to-do markers.
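- the filtering behavior of the controls 4835 - 4860 can be sketched as predicates over the indexed items. The following sketch is illustrative only; the item layout, field names, and control labels are assumptions rather than the application's actual data structures:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndexItem:
    name: str
    kind: str                         # "keyword", "analysis_keyword", or "marker"
    start: float                      # starting point along the sequence, in seconds
    completed: Optional[bool] = None  # only meaningful for to-do markers

# One predicate per filter control; "show all" keeps every item.
FILTERS = {
    "all":      lambda it: True,                                           # control 4835
    "markers":  lambda it: it.kind == "marker",                            # control 4840
    "keywords": lambda it: it.kind in ("keyword", "analysis_keyword"),     # control 4845
    "analysis": lambda it: it.kind == "analysis_keyword",                  # control 4850
    "todo":     lambda it: it.kind == "marker" and it.completed is False,  # control 4855
    "done":     lambda it: it.kind == "marker" and it.completed is True,   # control 4860
}

def filter_index(items, control):
    """Return the items the index area would list, sorted by start time."""
    keep = FILTERS[control]
    return sorted((it for it in items if keep(it)), key=lambda it: it.start)
```

- with no completed to-do markers among the items, the "done" filter yields an empty list, matching the empty index area of the sixth stage 4830 .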
- FIG. 49 provides an illustrative example of filtering the timeline search tool 4630 based on video, audio, and titles. Four operational stages 4905 - 4920 are illustrated in this figure. In this example, as the timeline search tool 4630 is in a clips view mode, only the set of controls 4925 - 4940 relating to clips search is shown below the index area 4620 .
- the index area 4620 lists each clip in the timeline because the control 4925 corresponding to all clips is activated. Specifically, the index area 4620 lists a title clip 4945 , an audio clip 4950 , and a video clip 4955 .
- title clips are synthesized clips generated by a media editing application. For example, a user might add one or more title clips to a composite presentation using a title effects tool. This title effects tool may provide different options for defining a title clip to add to the composite presentation.
- title clips do not reference any source media on a disk, in some embodiments.
- titles may play a critical role in movies, providing important bookends (e.g., opening titles and closing credits), and conveying time and dates within the movie.
- Titles, especially in the lower third of the screen, are also used in documentaries and informational videos to convey details about onscreen subjects or products.
- the selection of the control 4930 for showing only video clips causes the index area 4620 to only list a video clip 4955 .
- the third stage 4915 shows selection of a control 4935 for displaying only audio clips.
- the selection of the control 4935 causes the index area 4620 to only list the audio clip 4950 .
- the fourth stage 4920 shows selection of a control 4940 for displaying only title clips, which causes the index area 4620 to list only the title clip 4945 .
- FIG. 50 provides an illustrative example of navigating the timeline using the timeline search tool 4630 .
- Three operational stages 5005 - 5015 are shown in this figure.
- the timeline playhead 4655 is situated at a position on the timeline that corresponds to a starting point of the composite presentation.
- the timeline search tool 4630 is in a keyword search mode and displays a list of keywords and markers.
- the second stage 5010 shows a selection of a to-do marker item 5020 in the index area 4620 .
- the selection causes the timeline playhead 4655 to move to a position of a marker 5035 corresponding to the to-do marker item 5020 in the index area 4620 .
- the third stage 5015 shows a selection of a keyword item 5025 in the index area 4620 .
- the selection causes the playhead to be moved to a starting point of a range 5030 associated with the keyword corresponding to the keyword item 5025 .
- the selection also causes the clip range 5030 associated with the keyword to be selected in the timeline 4650 . This provides a visual indication to the user of the range of the sequence that is tagged with the keyword.
- the ranges of the keywords are selected in the timeline and the timeline may move such that the beginning or starting point of a range associated with a first keyword in the index area 4620 is aligned with the timeline's playhead.
- the selection mechanism allows users to inspect the timeline and perform a number of different operations. These operations include removing items from the timeline (e.g., clips, tags, and markers), editing operations (e.g., adding effects), etc.
- the user selects a keyword and marker in the timeline search tool 4630 to navigate the timeline 4650 .
- This is particularly useful when the timeline is densely populated with multiple different clips (e.g., ranges of clips).
- the timeline search tool 4630 can be used to locate a particular item and navigate to the particular item in the timeline 4650 .
- the timeline search tool 4630 allows the user to navigate to the items (e.g., clips, keywords, markers) in a similar manner as navigating to items in a list. For example, a user can select an item through a directional key (e.g., up key, down key), which causes the timeline to navigate to the position of the item.
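- the navigation behavior can be sketched as follows. The `start` attribute and the function names are hypothetical stand-ins for the application's internal structures; selecting an item moves the playhead to the start of that item's range, and directional keys step through the sorted index list:

```python
def navigate(items, index):
    """Return the playhead position for the selected index item.

    Each item is assumed to expose a `start` attribute giving the starting
    point of its range along the sequence (a hypothetical layout).
    """
    return items[index].start

def step(items, index, key):
    """Directional-key navigation (up key, down key), clamped to the list ends."""
    if key == "down":
        index = min(index + 1, len(items) - 1)
    elif key == "up":
        index = max(index - 1, 0)
    return index, navigate(items, index)
```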
- FIG. 51 provides an example workflow for searching the timeline 4650 for a to-do marker using the timeline search tool 4630 and specifying the to-do marker as a completed item (e.g., by selecting a checkbox).
- Three operational stages 5105 - 5115 are shown in this figure.
- the control 4855 for displaying only to-do markers with an incomplete flag is selected.
- the selection causes the index area 4620 to display only a marker item 5120 corresponding to a to-do marker 5125 in the timeline.
- the user selects this marker item 5120 , which causes navigation across the timeline to the to-do marker 5125 .
- the second stage 5110 shows the timeline 4650 after the user selects (e.g., through a double click operation) the to-do marker 5125 on the timeline. As shown, the selection causes a popup window 5130 to appear.
- the popup window 5130 includes information related to the to-do marker and a check box 5135 for flagging the to-do marker as being a completed item.
- the third stage 5115 shows the timeline 4650 after the user selects a check box 5135 to flag the to-do marker 5125 as a completed item.
- the appearance of the marker 5125 is different from its appearance in the first and second stages 5105 and 5110 .
- the marker changes color (e.g., from red to green) to indicate that the to-do task is completed.
- the index area 4620 that displays incomplete to-do markers is cleared as the to-do marker 5125 has been flagged as being a completed item.
- the control 4860 can be selected to display each completed task in the index area 4620 .
- the to-do marker 5125 is checked as being completed using the timeline.
- the to-do marker may be flagged as being completed using the timeline search tool 4630 .
- the marker item 5120 may be selected to mark the to-do marker 5125 as a completed item.
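- the completion workflow of FIG. 51 can be sketched as follows. The dict layout is a simplified assumption, not the application's actual marker structure; the color change mirrors the described red-to-green appearance change:

```python
def complete_todo(marker):
    """Flag a to-do marker as a completed item and update its appearance."""
    marker["completed"] = True
    marker["color"] = "green"  # e.g., changes from red to green
    return marker

def incomplete_todos(markers):
    """What the index area lists while the incomplete-only control 4855 is active."""
    return [m for m in markers if m.get("todo") and not m.get("completed")]
```

- flagging the marker clears the incomplete-only index, as in the third stage 5115 .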
- FIG. 52 provides an illustrative example of using the timeline search tool 4630 to search a list of keywords and markers. Specifically, this figure illustrates in three operational stages 5205 - 5215 how the search field 4615 can be used to filter the index area 4620 of the timeline search tool 4630 . In this example, as the search tool is in a keyword view mode, only the set of controls related to keyword search is shown below the index area 4620 .
- the index area 4620 lists all keywords or markers that are associated with the clips in the timeline.
- the user inputs a letter “s” into the search field 4615 in the second stage 5210 . This causes the index area 4620 to only display keywords and markers that include the letter “s”.
- the third stage 5215 illustrates inputting an additional letter into the search field 4615 . Specifically, the user inputs the letter “t” in addition to the previous input of the letter “s”. This causes the index area 4620 to only display each keyword or marker that includes the sequence of letters “st”.
- FIG. 53 provides an illustrative example of using the timeline search tool 4630 to search a list of clips. Specifically, this figure illustrates in three operational stages 5305 - 5315 how the search field 4615 can be used to filter the index area 4620 of the timeline search tool 4630 based on clips. In this example, as the timeline search tool 4630 is in a clip view mode, only the set of controls related to clips searches is shown below the index area 4620 .
- the index area 4620 lists all clips in the timeline.
- the user inputs a letter “a” into the search field 4615 . This causes the index area 4620 to only display clips that include the letter “a”.
- the third stage 5315 illustrates inputting an additional letter into the search field 4615 . Specifically, the user inputs the letter “b” in addition to the previous input of the letter “a”. This causes the index area 4620 to only display each clip that includes the sequence of letters “ab”.
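- the incremental search of FIGS. 52 and 53 amounts to substring filtering over the names of the listed items. A minimal sketch follows; case-insensitive matching is an implementation choice here, as the text does not specify case handling:

```python
def search_filter(names, query):
    """Keep the listed items whose name contains the query as a substring."""
    q = query.lower()
    return [n for n in names if q in n.lower()]
```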
- FIG. 54 provides an illustrative example of using the timeline search tool 4630 to display time duration for ranges of clips (e.g., that are associated with one or more keywords). Three operational stages 5410 - 5420 of the timeline search tool 4630 are illustrated in this figure.
- a user selects a first keyword item 5430 in the index area 4620 of the timeline search tool 4630 .
- the selection causes the timeline search tool 4630 to display a total time for the range of a clip associated with a keyword corresponding to the first keyword item 5430 .
- the total time is displayed in a display area 5425 .
- the user selects a second keyword item 5435 while keeping the first keyword item 5430 selected.
- This causes the total time of the two ranges of clips associated with the keywords to be displayed in the display area 5425 .
- the third stage 5420 shows the total time of three clip ranges in the display area 5425 .
- the total duration includes a duration for a clip range that is associated with a set of analysis keywords that correspond to an analysis keyword item 5440 .
- a total duration is displayed when multiple items corresponding to one or more keywords are selected from the index area 4620 of the timeline search tool 4630 .
- Displaying the total time can be useful in a number of different ways. For example, an editor may be restricted to adding only 30 seconds of stock footage. Here, when the stock footage is tagged as such, the editor can select those items corresponding to the stock footage in the index area 4620 and know whether the total duration exceeds 30 seconds.
- FIG. 55 provides an illustrative example of displaying the total time of several clips in the timeline search tool 4630 .
- This figure is similar to the previous example.
- the user selects multiple items corresponding to different clips. Specifically, in the first stage 5505 , the user selects a first item 5515 to display a total duration for a first clip in the display area 5425 of the timeline search tool 4630 . In the second stage 5510 , the user selects a second item 5520 corresponding to a second clip while keeping the first item 5515 selected. This causes the display area 5425 to display the total duration of both the first and second clips.
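- the duration display of FIGS. 54 and 55 reduces to summing the selected ranges. The (start, end) pair representation and the simple summation (no overlap handling) are assumptions for illustration; the result would be shown formatted as a time code in the display area 5425 :

```python
def total_duration(selected_ranges):
    """Sum the durations of the selected clip ranges, in seconds.

    Each range is a (start, end) pair along the sequence.
    """
    return sum(end - start for start, end in selected_ranges)
```

- this directly supports the stock-footage budget check described above: the editor selects the tagged items and compares the total against the 30-second limit.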
- the timeline search tool 4630 allows a user to find missing clips.
- a missing clip is a clip imported into the media editing application that does not link back to its source. For example, a user might have moved or deleted a source file on a hard disk to break the link between the application's file entry and the source file.
- FIG. 56 provides an illustrative example of using the timeline search tool 4630 to find missing clips.
- the timeline 4650 includes a number of different clips. This is indicated by the listing of clips in the index area 4620 of the timeline search tool 4630 .
- the second stage 5610 shows the timeline 4650 after a search parameter for finding missing clips is inputted by the user into the search field 4615 of the timeline search tool 4630 .
- the search parameter is a predefined search parameter or keyword to search for missing clips in the timeline.
- the user types the word “missing” into the search field 4615 .
- a different word or parameter can be used, in some embodiments.
- the input causes the index area 4620 of the timeline search tool 4630 to display an index item 5620 for a missing or offline clip.
- the user selects the index item 5620 to navigate to the missing clip.
- the third stage 5615 shows the timeline 4650 after the user selects the index item 5620 .
- the selection causes the timeline to be navigated to the missing clip.
- the user can select the index item 5620 or the clip representation 5625 to delete the clip from the project.
- the user can reestablish the broken link.
- a selection of the index item 5620 may cause a clip inspector to be displayed. This clip inspector allows the user to identify the location of the missing clip in order to reestablish the broken link.
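- the missing-clip search can be sketched as a check that each clip's link back to its source file is still valid. The "source_path" entry is a hypothetical field standing in for the application's asset reference:

```python
import os

def find_missing_clips(clips):
    """Return clips whose link back to their source media file is broken.

    A clip is "missing" when its on-disk source file no longer exists,
    e.g., because the user moved or deleted it.
    """
    return [c for c in clips if not os.path.exists(c["source_path"])]
```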
- FIG. 57 conceptually illustrates a process 5700 for searching and navigating a timeline of a media editing application.
- the process is performed through a timeline search tool of the media editing application.
- the process 5700 begins by identifying (at 5705 ) clips in the timeline.
- the process identifies (at 5710 ) items (e.g., keywords, markers) associated with the clips in the timeline. Examples of such associated items include media clips, keywords, smart collections, markers, etc.
- the process 5700 determines (at 5715 ) whether it is in a clip view mode. When the determination is made that the process is in a clip view mode, the process 5700 displays (at 5720 ) identified clips as indices in an index display area. Next, the process 5700 determines (at 5725 ) whether it has received a selection of a listed index item. In some embodiments, the process 5700 continuously monitors user actions in the clip view mode to make this determination.
- the process 5700 navigates (at 5730 ) to the position of the selected clip in the timeline.
- the process 5700 determines whether it has received any search parameter.
- the process filters (at 5740 ) the indices displayed in the index display area based on the received search parameters.
- the process then goes on to 5770 .
- the process 5700 continuously monitors a search field to determine whether the user has inputted a search parameter (e.g., a letter, a number).
- when the process 5700 determines that it is not in a clip view mode (i.e., it is in a keyword view mode), the process displays (at 5745 ) the identified items (e.g., keywords, markers) as indices in the index display area.
- the process 5700 determines (at 5750 ) whether it has received a selection of a listed index item.
- the process 5700 transitions to 5760 .
- the process navigates (at 5755 ) to the position of the selected item in the timeline.
- the process 5700 determines (at 5760 ) whether any search parameter has been received. When the determination is made that a search parameter has not been received, the process transitions to 5770 . In contrast, when the determination is made that a search parameter has been received, the process 5700 filters (at 5765 ) the indices displayed in the index display area based on the received search parameters. The process then goes on to 5770 .
- the process 5700 determines whether there is any additional input for the timeline search tool. If it is determined that there is additional input for the timeline search tool, the process 5700 returns to 5715 to continue its navigation and filtering. Otherwise, the process 5700 terminates.
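- one pass of process 5700 can be sketched as a single function. The function signature and dict layout are hypothetical; the body mirrors the flowchart's branches (5715 chooses the view, 5720 / 5745 display the indices, 5725 - 5730 / 5750 - 5755 navigate on a selection, and 5735 - 5740 / 5760 - 5765 filter on a search parameter):

```python
def run_search_tool(view_mode, keyword_items, clip_items, selection=None, query=None):
    """One pass of process 5700, sketched as a function (hypothetical API)."""
    # 5715/5720/5745: choose what to display based on the view mode.
    indices = list(clip_items) if view_mode == "clip" else list(keyword_items)

    # 5725/5750: a selection navigates the timeline to the item's position.
    playhead = None
    if selection is not None:
        playhead = indices[selection]["start"]

    # 5735/5760: any received search parameter filters the displayed indices.
    if query:
        indices = [i for i in indices if query.lower() in i["name"].lower()]

    return indices, playhead
```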
- FIG. 58 conceptually illustrates several example data structures for a searchable and navigable timeline.
- the data structures are all contained within a project data structure that contains a single sequence for generating a composite presentation.
- FIG. 58 illustrates a timeline sequence 5805 that includes a primary collection data structure 5810 .
- the primary collection data structure 5810 is itself an array of one or more clip objects or collection objects. Several examples of such clip objects are described above by reference to FIG. 5 .
- the sequence 5805 includes (1) a sequence ID, (2) sequence attributes, and (3) the primary collection 5810 .
- the sequence ID identifies the timeline sequence 5805 .
- a user sets the sequence attributes for the project in the timeline. For example, the user might have specified several settings that correspond to these sequence attributes when creating the project.
- the primary collection 5810 includes the collection ID and the array of clips.
- the collection ID identifies the primary collection.
- the array includes several clips (i.e., clip 1 to clip N). These represent clips or collections that have been added to the timeline.
- the array is ordered based on the locations of media clips in the timeline and only includes clips in the primary lane of the primary collection. The application assumes that there is no gap between these items, and thus no timing data is needed between the items.
- some embodiments remove a sequence container data structure and copy the rest of the data structure (e.g., the clip and its components) into the data structure for the clip in the timeline.
- the clip 5815 includes (1) a clip ID, (2) range attributes, (3) a set of keywords, and (4) a set of markers.
- the clip ID uniquely identifies the clip 5815 .
- the range attributes indicate a total range and/or trimmed ranges associated with the clip 5815 .
- the clip 5815 is a compound clip that includes multiple clips. An example of a compound clip is described above by reference to FIG. 7 .
- the clip 5815 includes a set of anchored items. Some embodiments include a set of anchored items for each clip or collection object. For example, each first clip that is anchored to a second clip may store an anchor offset that indicates a particular instance in time along the range of the second clip. That is, the anchor offset may indicate that the first clip is anchored x number of seconds and/or frames into the second clip. These times refer to the trimmed ranges of the clips in some embodiments.
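- the anchor offset arithmetic can be sketched as follows. The function name and the frame rate are assumptions for illustration; the offset indicates how far into the second clip's trimmed range the first clip is anchored, expressed in seconds and/or frames:

```python
def anchored_position(parent_start_sec, offset_sec=0.0, offset_frames=0, fps=24):
    """Absolute position (in seconds) of a clip anchored to a second clip.

    The anchored clip's position is the anchoring clip's start plus the
    anchor offset; frames are converted to seconds at the given frame rate.
    """
    return parent_start_sec + offset_sec + offset_frames / fps
```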
- the timeline search tool displays the list of clips and provides a selectable link to each clip based on the array of clips. For example, the ordering of the clips in the array and the range attributes provide indications of starting and ending points along the timeline of each clip.
- each clip can include other clip attributes such as one or more components, clip objects, notes, etc.
- the keyword set 5820 represents keywords associated with the clip 5815 .
- An example of such a keyword set is described above by reference to FIG. 5 .
- the keyword set 5820 includes one or more keywords that are associated with a particular range of the clip 5815 .
- the keyword's range attributes indicate a starting point and an ending point of the range of a clip that is associated with the keyword. This may include the actual start time and end time.
- the range attributes may be expressed differently. For example, instead of a start time and an end time, the range may be expressed as a start time and duration (from which the end time can be derived).
- the marker 5825 includes a marker ID and range attributes.
- the marker ID identifies the marker 5825 .
- the range attributes of the marker 5825 only indicate a single instance in time, in some embodiments.
- the marker 5825 may include attributes such as a note field, an attribute which indicates whether the marker is a to-do marker, etc.
- the timeline search tool displays the list of keywords and markers, and provides a selectable link to each of these items based on the marker and keyword associations of the clips in the array of clips. That is, the ordering of the clips, each clip's range attributes, and each marker's or keyword's range attributes all provide an indication of where each associated keyword or marker is located along the timeline.
- a keyword set may be represented as a single keyword instead of a set of one or more keywords.
- each keyword is associated with its own range attribute.
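- the data structures of FIG. 58 and the position derivation described above can be sketched as follows. The class and field names are simplified assumptions; because the array of clips is ordered and gap-free, a clip's offset along the sequence is the sum of the durations of the clips before it, and each keyword's or marker's absolute start follows from that offset plus its clip-relative range attributes:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Keyword:
    name: str
    start: float      # range start, relative to the clip
    duration: float   # end time is derived as start + duration

@dataclass
class Marker:
    marker_id: str
    time: float       # a single instant in time, relative to the clip
    todo: bool = False

@dataclass
class Clip:
    clip_id: str
    duration: float   # trimmed duration of the clip
    keywords: List[Keyword] = field(default_factory=list)
    markers: List[Marker] = field(default_factory=list)

def timeline_positions(clips):
    """Absolute start of each keyword and marker along the sequence."""
    positions, offset = [], 0.0
    for clip in clips:
        for kw in clip.keywords:
            positions.append((kw.name, offset + kw.start))
        for mk in clip.markers:
            positions.append((mk.marker_id, offset + mk.time))
        offset += clip.duration  # no gap between items in the primary lane
    return sorted(positions, key=lambda p: p[1])
```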
- additional information regarding data structures is described in U.S. patent application Ser. No. 13/111,912, entitled “Data Structures for a Media-Editing Application”. That application is incorporated into the present application by reference.
- FIG. 59 conceptually illustrates the software architecture of a media editing application 5900 of some embodiments.
- the media editing application is a stand-alone application or is integrated into another application, while in other embodiments the application might be implemented within an operating system.
- the application is provided as part of a server-based solution.
- the application is provided via a thin client. That is, the application runs on a server while a user interacts with the application via a separate machine remote from the server.
- the application is provided via a thick client. That is, the application is distributed from the server to the client machine and runs on the client machine.
- the media editing application 5900 includes a user interface (UI) interaction and generation module 5905 , a media ingest module 5910 , editing modules 5915 , a rendering engine 5920 , a playback module 5925 , analysis modules 5940 , a keyword association module 5935 , a keyword collection module 5930 , and a timeline search module 5995 .
- the user interface interaction and generation module 5905 generates a number of different UI elements, including a keyword tagging tool 5906 , a timeline 5945 , a timeline search tool 5904 , a thumbnails view 5908 , a list view 5902 , a preview display area 5912 , and a set of analysis and import tools 5990 .
- the figure also illustrates stored data associated with the media-editing application: source files 5950 , event data 5955 , project data 5960 , and other data 5965 .
- the source files 5950 store media files (e.g., video files, audio files, combined video and audio files, etc.) imported into the application.
- the source files 5950 of some embodiments also store transcoded versions of the imported files as well as analysis data (e.g., people detection data, shake detection data, color balance data, etc.).
- the event data 5955 stores the events information used by some embodiments to populate the thumbnails view 5908 (e.g., filmstrip view) and the list view 5902 .
- the event data 5955 may be a set of clip object data structures stored as one or more SQLite database (or other format) files in some embodiments.
- the project data 5960 stores the project information used by some embodiments to specify a composite presentation in the timeline 5945 .
- the project data 5960 may also be a set of clip object data structures stored as one or more SQLite database (or other format) files in some embodiments.
- the four sets of data 5950 - 5965 are stored in a single physical storage (e.g., an internal hard drive, external hard drive, etc.).
- the data may be split between multiple physical storages.
- the source files might be stored on an external hard drive with the event data, project data, and other data on an internal drive.
- Some embodiments store event data with their associated source files and render files in one set of folders, and the project data with associated render files in a separate set of folders.
- FIG. 59 also illustrates an operating system 5970 that includes input device driver(s) 5975 , display module 5980 , and media import module 5985 .
- the device drivers 5975 , display module 5980 , and media import module 5985 are part of the operating system 5970 even when the media editing application 5900 is an application separate from the operating system 5970 .
- the input device drivers 5975 may include drivers for translating signals from a keyboard, mouse, touchpad, tablet, touchscreen, etc. A user interacts with one or more of these input devices, each of which sends signals to its corresponding device driver. The device driver then translates the signals into user input data that is provided to the UI interaction and generation module 5905 .
- the present application describes a graphical user interface that provides users with numerous ways to perform different sets of operations and functionalities. In some embodiments, these operations and functionalities are performed based on different commands that are received from users through different input devices (e.g., keyboard, trackpad, touchpad, mouse, etc.). For example, the present application illustrates the use of a cursor in the graphical user interface to control (e.g., select, move) objects in the graphical user interface. However, in some embodiments, objects in the graphical user interface can also be controlled or manipulated through other controls, such as touch control. In some embodiments, touch control is implemented through an input device that can detect the presence and location of touch on a display of the device. An example of such a device is a touch screen device.
- a user can directly manipulate objects by interacting with the graphical user interface that is displayed on the display of the touch screen device. For instance, a user can select a particular object in the graphical user interface by simply touching that particular object on the display of the touch screen device.
- touch control can be used to control the cursor in some embodiments.
- the display module 5980 translates the output of a user interface for a display device. That is, the display module 5980 receives signals (e.g., from the UI interaction and generation module 5905 ) describing what should be displayed and translates these signals into pixel information that is sent to the display device.
- the display device may be an LCD, plasma screen, CRT monitor, touchscreen, etc.
- the media import module 5985 receives media files (e.g., audio files, video files, etc.) from storage devices (e.g., external drives, recording devices, etc.) through one or more ports (e.g., a USB port, Firewire port, etc.) of the device on which the application 5900 operates and translates this media data for the media-editing application or stores the data directly onto a storage of the device.
- the UI interaction and generation module 5905 of the media editing application 5900 interprets the user input data received from the input device drivers 5975 and passes it to various modules, including the timeline search module 5995 , the editing modules 5915 , the rendering engine 5920 , the playback module 5925 , the analysis modules 5940 , the keyword association module 5935 , and the keyword collection module 5930 .
- the UI interaction and generation module 5905 also manages the display of the UI, and outputs this display information to the display module 5980 . This UI display information may be based on information from the editing modules 5915 , the playback module 5925 , and the data 5950 - 5965 .
- the UI interaction and generation module 5905 generates a basic GUI and populates the GUI with information from the other modules and stored data.
- the UI interaction and generation module 5905 , in some embodiments, generates a number of different UI elements. These elements, in some embodiments, include the keyword tagging tool 5906 , the timeline 5945 , the timeline search tool 5904 , the thumbnails view 5908 , the list view 5902 , the preview display area 5912 , and the set of analysis/import tools 5990 . All of these UI elements are described in many different examples above. For example, several operations performed with the thumbnails view 5908 are described above by reference to FIGS. 1-3 and 6 - 16 . Several example operations performed with the list view 5902 are described above by reference to FIGS. 34-45 .
- the media editing application, in some embodiments, maintains a database of previous user input or interactions to provide an auto-complete feature.
- the media editing application, in some embodiments, maintains a list of common production and/or editing terms. In some embodiments, these data items are stored in the storage 5965 .
- the media ingest module 5910 manages the import of source media into the media-editing application 5900 . Some embodiments, as shown, receive source media from the media import module 5985 of the operating system 5970 . The media ingest module 5910 receives instructions through the UI interaction and generation module 5905 as to which files should be imported, then instructs the media import module 5985 to enable this import (e.g., from an external drive, from a camera, etc.). The media ingest module 5910 stores these source files 5950 in specific file folders associated with the application. In some embodiments, the media ingest module 5910 also manages the creation of event data structures upon import of source files and the creation of the clip and asset data structures contained in the events.
- the editing modules 5915 include a variety of modules for editing media in the clip browser as well as in the timeline.
- the editing modules 5915 handle the creation of projects, addition and subtraction of clips from projects, trimming or other editing processes within the timeline, application of effects and transitions, or other editing processes.
- the editing modules 5915 create and modify project and clip data structures in both the event data 5955 and the project data 5960 .
- the rendering engine 5920 handles the rendering of images for the media-editing application.
- the rendering engine 5920 manages the creation of images for the media-editing application.
- the rendering engine 5920 outputs the requested image according to the project or event data.
- the rendering engine 5920 retrieves the project data or event data that identifies how to create the requested image and generates a render graph that is a series of nodes indicating either images to retrieve from the source files or operations to perform on the source files.
- the rendering engine 5920 schedules the retrieval of the necessary images through disk read operations and the decoding of those images.
- the render engine 5920 performs various operations to generate an output image.
- these operations include blend operations, effects (e.g., blur or other pixel value modification operations), color space conversions, resolution transforms, etc.
- these processing operations are actually part of the operating system and are performed by a GPU or CPU of the device on which the application 5900 operates.
- the output of the rendering engine (a rendered image) may be stored as render files in storage 5965 or sent to a destination for additional processing or output (e.g., playback).
- the playback module 5925 handles the playback of images (e.g., in a preview display area 5912 of the user interface). Some embodiments do not include a playback module and the rendering engine directly outputs its images for integration into the GUI, or directly to the display module 5980 for display at a particular portion of the display device.
- the analysis modules 5940 perform analysis on clips. Each module may perform a particular type of analysis. Examples of such analysis include analysis of the number of people in the clip (e.g., one person, two persons, group) and/or a type of shot (e.g., a close-up, medium, or wide shot). Other types of analysis may include image stabilization analysis (e.g., camera movement), color balance analysis, audio analysis (e.g., mono, stereo, silent channels), metadata analysis, etc. As shown, the analysis modules 5940 , in some embodiments, utilize the rendering engine 5920 to create copies of corrected media clips. For example, when excessive shake is detected in a portion of a clip, the rendering engine 5920 may create a corrected version of the clip.
- the analysis modules 5940 operate in conjunction with the keyword association module 5935 to associate each analyzed clip (e.g., a portion of a clip or an entire clip) with one or more keywords.
- the keyword association module 5935 may receive range attributes from the analysis modules 5940 to associate a range of a clip with a keyword.
- the keyword association module 5935 associates a clip object or a collection object with a keyword set. The association of a keyword set with a clip object or collection object is described above by reference to FIG. 5 .
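The clip-to-keyword association described above (a keyword tied either to an entire clip or, via range attributes, to a sub-range of the clip) might be modeled as follows. This is only an illustrative sketch, not the application's actual data structures; the names `Clip`, `KeywordRange`, and `associate` are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class KeywordRange:
    """A keyword applied to a range of a clip (times in seconds)."""
    keyword: str
    start: float
    end: float

@dataclass
class Clip:
    """A media clip object; keyword associations are stored as ranges."""
    name: str
    duration: float
    keyword_ranges: list = field(default_factory=list)

def associate(clip, keyword, start=None, end=None):
    """Associate a keyword with a clip range; defaults to the whole clip."""
    start = 0.0 if start is None else start
    end = clip.duration if end is None else end
    clip.keyword_ranges.append(KeywordRange(keyword, start, end))

clip = Clip("interview", duration=120.0)
associate(clip, "close-up", start=10.0, end=45.0)  # tag only a range
associate(clip, "audio-ok")                        # tag the entire clip
```

A whole-clip tag is thus just a range spanning the full duration, which is one plausible way to unify the two cases the specification describes.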
- the keyword collection module 5930 facilitates creation and deletion of keyword collections.
- the keyword collection module 5930 may operate in conjunction with the keyword association module 5935 to create a keyword collection for each clip or portion of a clip associated with a keyword.
- the keyword collection module 5930 allows a user to create or delete a keyword collection for a particular keyword prior to the particular keyword being associated with any clips. For example, the user can create different keyword collections, and then drag and drop different portions of clips to create the keyword association.
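The keyword collection behavior described above — collections that can be pre-created empty, that are added dynamically when a new keyword is first used, and that untag their contents when deleted — could be sketched as below. The class and method names are hypothetical, and the real keyword collection module 5930 is certainly more involved:

```python
class KeywordCollectionStore:
    """Keyword collections keyed by keyword; each collection lists
    (clip, range) entries, where range (None, None) means the whole clip."""
    def __init__(self):
        self.collections = {}

    def create(self, keyword):
        # a user may create an empty collection before tagging any clip
        self.collections.setdefault(keyword, [])

    def drop_clip(self, keyword, clip, rng=(None, None)):
        # dragging a clip (or a selected range) onto a collection creates
        # the association, adding the collection if it does not yet exist
        self.collections.setdefault(keyword, []).append((clip, rng))

    def delete(self, keyword):
        # deleting a collection disassociates every clip tagged with it
        return self.collections.pop(keyword, [])

store = KeywordCollectionStore()
store.create("b-roll")                # empty collection, no clips yet
store.drop_clip("b-roll", "clip_01")  # drag-and-drop association
store.drop_clip("sunset", "clip_02")  # collection created on first use
```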
- the timeline search module 5995 facilitates the search and navigation of the timeline 5945 .
- the search and navigation is based on a sequence associated with the timeline 5945 .
- a sequence in the timeline may include multiple different clips.
- Each clip may include range attributes indicating its position along the sequence.
- the timeline search module 5995 , based on the sequence and the range attributes, provides links to clip or collection objects that allow the timeline 5945 to be navigated.
- the timeline search module 5995 provides a list of other items (e.g., keywords, markers) and a selectable link to each of these items based on associations of the items with the clips or collections in the sequence.
- the timeline search module 5995 provides a search result by filtering the list of items in the timeline search tool 5904 .
- Filtering the list of items in a timeline search tool is described above by reference to FIGS. 53 and 54 .
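The filtering performed by the timeline search module might look something like the sketch below, which narrows a list of timeline items to those matching a search parameter. The matching rule (case-insensitive substring) and the item shape are assumptions for illustration only:

```python
def filter_timeline_items(items, search_param):
    """Return only the items whose name matches the search parameter
    (case-insensitive substring match, one plausible matching rule)."""
    needle = search_param.lower()
    return [item for item in items if needle in item["name"].lower()]

items = [
    {"name": "Beach wide shot", "kind": "video"},
    {"name": "beach interview", "kind": "video"},
    {"name": "Title card", "kind": "title"},
]
matches = filter_timeline_items(items, "beach")  # both beach clips match
```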
- While many of the features of the media-editing application 5900 have been described as being performed by one module (e.g., the UI interaction and generation module 5905 , the media ingest module 5910 , etc.), one of ordinary skill in the art will recognize that the functions described herein might be split up into multiple modules. Similarly, functions described as being performed by multiple different modules might be performed by a single module in some embodiments (e.g., the playback module 5925 might be part of the UI interaction and generation module 5905 ).
- Many of the above-described features are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more computational element(s) (such as processors or other computational elements like ASICs and FPGAs), they cause the computational element(s) to perform the actions indicated in the instructions. Computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.
- the term “software” includes firmware residing in read-only memory or applications stored in magnetic storage which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs when installed to operate on one or more computer systems define one or more specific machine implementations that execute and perform the operations of the software programs.
- FIG. 60 illustrates a computer system with which some embodiments of the invention are implemented.
- a computer system includes various types of computer readable media and interfaces for various other types of computer readable media.
- Computer system 6000 includes a bus 6005 , at least one processing unit (e.g., a processor) 6010 , a graphics processing unit (GPU) 6020 , a system memory 6025 , a read-only memory 6030 , a permanent storage device 6035 , input devices 6040 , and output devices 6045 .
- the bus 6005 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 6000 .
- the bus 6005 communicatively connects the processor 6010 with the read-only memory 6030 , the GPU 6020 , the system memory 6025 , and the permanent storage device 6035 .
- the processor 6010 retrieves instructions to execute and data to process in order to execute the processes of the invention.
- the processor comprises a Field Programmable Gate Array (FPGA), an ASIC, or various other electronic components for executing instructions. Some instructions are passed to and executed by the GPU 6020 .
- the GPU 6020 can offload various computations or complement the image processing provided by the processor 6010 .
- the read-only-memory (ROM) 6030 stores static data and instructions that are needed by the processor 6010 and other modules of the computer system.
- the permanent storage device 6035 is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 6000 is off. Some embodiments of the invention use a mass storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 6035 .
- the system memory 6025 is a read-and-write memory device. However, unlike storage device 6035 , the system memory is a volatile read-and-write memory such as a random access memory.
- the system memory stores some of the instructions and data that the processor needs at runtime.
- the invention's processes are stored in the system memory 6025 , the permanent storage device 6035 , and/or the read-only memory 6030 .
- the various memory units include instructions for processing multimedia items in accordance with some embodiments. From these various memory units, the processor 6010 retrieves instructions to execute and data to process in order to execute the processes of some embodiments.
- the bus 6005 also connects to the input and output devices 6040 and 6045 .
- the input devices enable the user to communicate information and commands to the computer system.
- the input devices 6040 include alphanumeric keyboards and pointing devices (also called “cursor control devices”).
- the output devices 6045 display images generated by the computer system.
- the output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD).
- bus 6005 also couples the computer 6000 to a network 6065 through a network adapter (not shown).
- the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), an intranet, or a network of networks such as the Internet). Any or all components of computer system 6000 may be used in conjunction with the invention.
- Some embodiments include electronic components, such as microprocessors, storage, and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media).
- computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks.
- the computer-readable media may store a computer program that is executable by a device such as an electronics device, a microprocessor, a processor, a multi-processor (e.g., a chip with several processing units on it) and includes sets of instructions for performing various operations.
- the computer program excludes any wireless signals, wired download signals, and/or any other ephemeral signals.
- Examples of hardware devices configured to store and execute sets of instructions include, but are not limited to, application specific integrated circuits (ASICs), field programmable gate arrays (FPGA), programmable logic devices (PLDs), ROM, and RAM devices.
- Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
- the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people.
- the terms “display” or “displaying” mean displaying on an electronic device.
- the terms “computer readable medium” and “computer readable media” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
- the present application describes a graphical user interface that provides users with numerous ways to perform different sets of operations and functionalities. In some embodiments, these operations and functionalities are performed based on different commands that are received from users through different input devices (e.g., keyboard, track pad, touchpad, mouse, etc.). For example, the present application describes the use of a cursor in the graphical user interface to control (e.g., select, move) objects in the graphical user interface. However, in some embodiments, objects in the graphical user interface can also be controlled or manipulated through other controls, such as touch control. In some embodiments, touch control is implemented through an input device that can detect the presence and location of touch on a display of the device. An example of such a device is a touch screen device.
- a user can directly manipulate objects by interacting with the graphical user interface that is displayed on the display of the touch screen device. For instance, a user can select a particular object in the graphical user interface by simply touching that particular object on the display of the touch screen device.
- touch control can be used to control the cursor in some embodiments.
- FIGS. 17 , 40 , 41 , 28 , and 57 conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. Specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process.
Abstract
Some embodiments of the invention provide a keyword association tool for organizing media content. Each keyword can be associated with an entire clip or a portion of the clip. For each specified keyword, the keyword association tool creates a collection (e.g., bin, folder, etc.) in a dynamic collection structure. In some embodiments, a keyword collection is dynamically added to the collection structure each time a new keyword is associated with a media clip. To associate a clip with a keyword, a user can drag and drop a clip onto a keyword collection that corresponds to the keyword. The same technique can be used to associate multiple clips with the keyword by simultaneously dragging and dropping the clips onto the keyword collection.
Description
- This application claims the benefit of U.S. Provisional Application 61/443,709, entitled “Keywords and Dynamic Folder Structures”, filed Feb. 16, 2011. U.S. Provisional Application 61/443,709 is incorporated herein by reference.
- To date, many media editing applications exist for creating media presentations by compositing several pieces of media content such as video, audio, animation, still image, etc. Such applications give users the ability to edit, combine, transition, overlay, and piece together different media content in a variety of manners to create a resulting composite presentation. Examples of media editing applications include Final Cut Pro® and iMovie®, both sold by Apple Inc.
- Some media editing applications provide bins or folder-like structures to organize media content. In these applications, a user typically imports media content and creates several bins. The user then provides names for the bins and organizes the content by moving different pieces of the content into these bins. In other words, the user clears one or more areas that contain pieces of the content by placing the pieces in other areas where he or she can easily access them later. When a user needs a piece of content, the user searches for it in one of the bins. For instance, a video editor might search a bin called “People” in order to find a video clip having a wide camera shot of a group of extras. After finding the video clip, the video editor may move or copy the video clip into another bin. If the video editor cannot locate the video clip, the editor may import the clip again.
- In addition to bins, some media editing applications provide keyword-tagging functionality. With keyword tagging, a user selects one or more pieces of content and associates the selected content with a keyword. Typically, the user associates the selected content through a keyword display area that lists several user-specified keywords. To make use of the keyword association, the user initiates a keyword filtering operation on a particular keyword in order to display only those pieces of content that have been associated with the particular keyword.
- There are a number of shortcomings associated with the organization approaches mentioned above. For example, with bin organization, a user must search through different bins to find the right pieces of content. In addition, the user must move or copy the pieces of content in between different bins to organize them.
- With keyword tagging, a user is limited to filtering down a display area to find content associated with a particular keyword. In some cases, the user has no recollection of which pieces of content are associated with which keywords. Furthermore, in most cases, an application's keyword tagging functionality is a secondary organizational feature to supplement folder-type organization.
- Some embodiments of the invention provide a novel keyword association tool for organizing media content. Each keyword can be associated with an entire clip or a portion of the clip. For each specified keyword, the keyword association tool creates a collection (e.g., bin, folder, etc.) in a dynamic collection structure. In some embodiments, a keyword collection is dynamically added to the collection structure each time a new keyword is associated with a media clip. To associate a clip with a keyword, a user can drag and drop a clip onto a keyword collection that corresponds to the keyword. The same technique can be used to associate multiple clips with the keyword by simultaneously dragging and dropping the clips onto the keyword collection.
- In some embodiments, the dynamic collection structure may be represented in a sidebar display area with a list of different keyword collections. Accordingly, a user does not have to search for a separate keyword association tool to associate one or more clips with different keywords. Moreover, these keyword collections operate similar to what many computer users have come to believe as bins or folders. For example, a user can (1) create different keyword collections, (2) drag and drop items onto them, and (3) select any one of them to view its keyword associated content.
- In some embodiments, the keyword collections either replace or supplement folder-type organization. For example, without creating any folders, a group of items can be organized into different keyword collections. Alternatively, a group of items in a folder or bin can be organized into different keyword collections. Accordingly, the keyword collection feature provides a new model for replacing or supplementing traditional folder-type organization.
- As mentioned, each keyword can be associated with an entire clip or a portion of a clip in some embodiments. To associate a keyword with a portion of a clip, the user can select the portion of the clip (e.g., using a range selector), and drag and drop the selected portion onto a keyword collection. The user can also filter a display area to display each clip or portion of a clip associated with the keyword by selecting the keyword collection.
- In some embodiments, each media clip associated with a keyword is displayed with a graphical indication of the association. This allows a user to quickly assess a large group of media clips and determine which clips or ranges of the clips have been tagged with one or more keywords. To provide an indication of a range, the graphical indication, in some embodiments, spans horizontally across a portion of a clip's representation (e.g., a thumbnail representation, filmstrip representation, waveform representation, etc.).
- In some embodiments, the keyword collections are provided in a hierarchical structure (e.g., of a sidebar display area) with different types of collections. For example, each keyword collection may be a part of another collection such as a media collection, disk collection, etc. Alternatively, or conjunctively, some embodiments provide filter or smart collections in the hierarchical structure. In some such embodiments, a user can create a smart collection and customize the smart collection to include or exclude each item associated with one or more different keywords.
- When a keyword collection is deleted, some embodiments automatically disassociate or untag each item associated with a keyword of the keyword collection. This allows a user to quickly remove keyword associations from a large group of tagged items. The user can also disassociate a portion of a range associated with a keyword. In some embodiments, when multiple keywords are selected, a display area displays only a union of items associated with keywords of the keyword collections. In some embodiments, a keyword collection can be renamed to quickly associate its contents with another keyword (e.g., with the new name of the keyword collection) or to quickly merge or combine two keyword collections.
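The rename-to-merge behavior described above could be sketched as follows: renaming a keyword collection re-tags its contents with the new keyword, and renaming it onto an existing keyword combines the two collections. The function name and the dictionary representation are hypothetical:

```python
def rename_collection(collections, old, new):
    """Renaming a keyword collection re-tags its contents with the new
    keyword; renaming onto an existing keyword merges the two collections."""
    entries = collections.pop(old, [])
    collections.setdefault(new, []).extend(entries)

# merging two collections by renaming one onto the other
collections = {"intvw": ["clip_01"], "interview": ["clip_02"]}
rename_collection(collections, "intvw", "interview")
```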
- Some embodiments provide a keyword tagging tool for creating keyword associations. To associate a clip with a keyword, a user can select a clip or a portion of a clip, and select a keyword from the keyword tagging tool. When inputting a keyword, the keyword tagging tool may provide suggested keywords for an auto-fill operation. The keyword tagging tool, in some embodiments, includes several fields for inputting keyword shortcuts. A user can populate one or more of these fields and use shortcut keys to quickly tag different items.
- In some embodiments, one or more different types of analysis are performed on a set of items to automatically organize the set into different keyword collections. For example, a clip may be organized into different keyword collections based on an analysis of the number of people (e.g., one person, two persons, group, etc.) in the clip and/or a type of shot (e.g., a close-up, medium, or wide shot). Other types of analysis may include image stabilization analysis (e.g., camera movement), color balance analysis, audio analysis (e.g., mono, stereo, silent channels), metadata analysis, etc.
- The analysis operations, in some embodiments, are performed when one or more items are imported into an application (e.g., media editing application). Alternatively, or conjunctively with the analysis operations, the application, in some embodiments, identifies a source directory in which a set of items is located. When the set of items are imported, the application (1) associates the name of the source directory with the set of items and (2) creates a keyword collection that contains the set of items. Accordingly, the imported items do not have to be manually organized into the keyword collection.
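The import-time behavior described above — tagging each imported item with the name of its source directory and placing it in a matching keyword collection — might be sketched as below. This is an assumption-laden illustration (the function name is invented, and the real application also runs content analysis at import):

```python
import os

def import_with_folder_keyword(collections, source_paths):
    """On import, tag each item with the name of its source directory and
    place it in a keyword collection of that name."""
    for path in source_paths:
        folder = os.path.basename(os.path.dirname(path))
        collections.setdefault(folder, []).append(path)

collections = {}
import_with_folder_keyword(collections, [
    "/media/Vacation/beach.mov",
    "/media/Vacation/hotel.mov",
    "/media/Wedding/vows.mov",
])
```

The imported items thus land in "Vacation" and "Wedding" collections without any manual organization, which is the effect the specification describes.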
- Some embodiments provide a novel list view that displays a list of media clips and, for each media clip, displays each keyword associated with the media clip. In some embodiments, the list view includes a list area for displaying the list of media clips and keywords. For example, when a clip is associated with one or more keywords, the list area displays the clip with each associated keyword. In some embodiments, the list view includes a preview section for displaying a representation of a clip selected from the list view's list area. For example, the preview section may display a filmstrip representation or a sequence of thumbnail images corresponding to a set of frames in a video clip.
- Some embodiments allow a user to associate a keyword with an entire clip or a portion of the clip using the list view. For example, the user can (1) select a video clip from the list area to display the clip's filmstrip representation in the preview section, (2) paint over an area of the representation to select a range of the clip, and (3) tag the range with a keyword. Alternatively, the user can select different ranges of a clip by selecting one or more keywords from the list area. The user can also filter the list area to display each clip or portion of a clip associated with a keyword.
- In some embodiments, each media clip associated with a keyword is displayed in the preview section of the list view with a graphical indication of the association. To provide an indication of the clip's range that is associated with the keyword, the graphical indication, in some embodiments, spans horizontally across a portion of a clip's representation. For example, multiple graphical indications may be shown on a video clip's filmstrip representation to indicate different portions that are associated with one or more keywords.
- Alternatively, or conjunctively with the graphical indication, the list view, in some embodiments, displays information related to a keyword range. For example, the list area may list a starting point and an ending point for each keyword associated with a clip's range. In some embodiments, the list area displays a duration of the keyword range. In this manner, a user of the list view can quickly see in detail which portions of the clip are associated with one or more keywords.
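The keyword-range information shown in the list area (starting point, ending point, and duration per keyword) could be derived as in this sketch; the row format and function name are hypothetical:

```python
def keyword_range_rows(clip_name, keyword_ranges):
    """Build list-view rows: one per keyword range, with start, end,
    and duration columns (times in seconds)."""
    return [
        {"clip": clip_name, "keyword": kw, "start": start, "end": end,
         "duration": end - start}
        for kw, start, end in keyword_ranges
    ]

rows = keyword_range_rows("interview", [("close-up", 10.0, 45.0),
                                        ("two-shot", 50.0, 80.0)])
```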
- In some embodiments, the list area displays other items associated with a clip. These items include at least one of (1) a marker, (2) a filter or smart collection, and (3) a ratings marker. In some embodiments, a marker marks a point in time along a clip's range. For example, a user can mark a point with a marker, and specify a note or make the marker a to-do item. In some embodiments, the filter or smart collection may indicate in the list view a range of a clip that, based on an analysis, includes people (e.g., one person, two persons, a group, etc.) and/or a type of shot (e.g., a close-up, medium, or wide shot). As mentioned, other types of analysis may include image stabilization analysis (e.g., camera movement), color balance analysis, audio analysis (e.g., mono, stereo, silent channels), metadata analysis, etc.
- The list view, in some embodiments, is an editing tool that can be used to perform a number of different editing operations. In some such embodiments, the list view allows an editor to input notes for media clips or keyword ranges. These notes can be notes that an editor makes regarding the contents of an entire media clip or a range of the media clip associated with a keyword. In some embodiments, the editing operations entail any one of (1) creating a composite or nested clip, (2) creating markers (e.g., to-do items, completed items), (3) adding clips to a timeline for defining a composite presentation, etc.
- In some embodiments, the list view is a playback tool that allows a user to play through one or more clips in the list. For example, when a user selects a clip from a list of clips and inputs a playback command, several clips from the list may be played without interruption starting from the selected clip. In some such embodiments, the user can jump to a different marked section (e.g., a keyword range) or different clip and continue playback starting from the marked section or clip.
- Some embodiments provide a novel timeline search tool for searching and navigating a timeline. In some embodiments, the timeline search tool includes a search field that allows a user to search for clips. For example, the timeline search tool may display each clip in a list of clips. When a user inputs a search parameter, the timeline search tool may filter this list to display only each clip that satisfies or matches the search parameter.
- In some embodiments, each clip in the list of clips is selectable such that a selection of the clip causes the timeline to navigate to the position of the clip in the timeline. For example, when a composite presentation includes many clips that make up a sequence or composite presentation, an editor can easily search the timeline to identify a particular clip and navigate the timeline to the particular clip. Accordingly, the timeline search tool allows the editor to search and navigate the timeline to identify clips.
- In some embodiments, the timeline search tool displays different types of clips. These different types of clips include audio clips, video clips, and title clips. In some such embodiments, the timeline search tool allows a user to granularly search and navigate the timeline by specifying the type of clips for which to search. Alternatively, the timeline search tool, in some embodiments, provides a search function for searching all types of clips.
- Alternatively, or conjunctively with the clips search, the timeline search tool allows a user to search for a clip or a portion of a clip associated with one or more keywords. In some such embodiments, the timeline search tool displays a list of each keyword associated with a clip or a portion of a clip. When a user inputs a search parameter, the timeline search tool filters this list to display only each keyword that satisfies or matches the search parameter. For example, when a composite presentation includes many clips tagged with different actors' names, an editor can easily search and navigate the timeline to identify ranges of clips tagged with a particular actor's name.
- In some embodiments, a media clip in a timeline may be associated with different types of items. These types of items include at least one of (1) an analysis keyword, (2) a marker, (3) a filter or smart collection, and (4) a ratings marker. In some such embodiments, the timeline search tool allows a user to granularly search and navigate the timeline by specifying the type of items to search. Alternatively, the timeline search tool provides a search function for searching all types of items, in some embodiments.
- In some embodiments, the timeline search tool allows an editor to search for markers that are placed in the timeline. As mentioned, such markers can have “to do” notes associated with them, in some embodiments. These notes can be notes that an editor makes as reminders to himself or others regarding tasks that have to be performed. Accordingly, when retrieving markers for display after a search, the method of some embodiments displays (1) the notes associated with the marker and/or (2) a check box to indicate whether the task associated with the marker has been completed. In some embodiments, the editor can check the box for a marker in the search view in order to indicate that the marker has been completed.
- The timeline search tool, in some embodiments, displays each item (e.g., keyword, clip) in chronological order, starting from a first item along the timeline to a last item. To provide an indication of the location of a playhead along the timeline, the timeline search tool, in some embodiments, includes its own playhead that moves along the list of items. This playhead moves synchronously with the timeline's playhead, in some embodiments.
- In some embodiments, the timeline search tool provides a search function for finding missing clips. A missing clip is a clip imported into an application that does not link back to its source. For example, a user might have moved or deleted a source file on a hard disk to break the link between the application's file entry and the source file. In some such embodiments, when the user inputs a pre-defined search parameter or keyword into the tool's search field, the timeline search tool displays each missing clip in a list. When the missing clip is selected from the list, some embodiments provide a set of options to re-establish the link for the missing clip.
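Detecting missing clips as described above amounts to checking whether each imported entry still links back to an existing source file. A minimal sketch, assuming a simple entry dictionary (the entry shape and function name are invented for illustration):

```python
import os

def find_missing_clips(clip_entries):
    """A clip is 'missing' when its application entry no longer links back
    to its source file on disk (e.g., the file was moved or deleted)."""
    return [e for e in clip_entries if not os.path.exists(e["source"])]

entries = [
    {"name": "intro", "source": os.getcwd()},                # link intact
    {"name": "b_roll", "source": "/no/such/path/clip.mov"},  # broken link
]
missing = find_missing_clips(entries)
```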
- In some embodiments, when one or more items are selected from the list of items, the timeline search tool displays a total time for the selected items. For example, the timeline search tool may display a total time for multiple clips, multiple ranges of clips associated with one or more keywords, etc. Displaying the total time can be useful in a number of different ways. For example, an editor may be restricted to adding only 30 seconds of stock footage. When the stock footage is tagged as such, the editor can select those items corresponding to the stock footage in the timeline search tool and know whether the total duration exceeds 30 seconds.
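The total-time display described above reduces to summing the durations of the selected items, whether they are whole clips or keyword ranges. A sketch, using the stock-footage example from the paragraph (all names hypothetical):

```python
def total_selected_time(selected_items):
    """Sum the durations of the selected items (each a (start, end) pair
    in seconds), e.g., to check a 30-second stock-footage limit."""
    return sum(end - start for start, end in selected_items)

# two tagged stock-footage ranges: 12.5 s and 18.0 s
total = total_selected_time([(0.0, 12.5), (40.0, 58.0)])
assert total > 30.0  # exceeds a 30-second restriction
```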
- The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description and the Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description, and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.
- The novel features of the invention are set forth in the appended claims. However, for purpose of explanation, several embodiments of the invention are set forth in the following figures.
-
FIG. 1 illustrates a graphical user interface (“GUI”) of a media editing application with a keyword association tool. -
FIG. 2 illustrates the GUI after associating a video clip with a keyword. -
FIG. 3 illustrates specifying a range of a video clip to associate with a keyword. -
FIG. 4 illustrates an example GUI of a media-editing application of some embodiments. -
FIG. 5 conceptually illustrates several example data structures of several objects of the media editing application. -
FIG. 6 illustrates creating a keyword association by dragging and dropping a range of a clip from one keyword collection to another. -
FIG. 7 illustrates creating a compound clip and associating the compound clip with a keyword. -
FIG. 8 illustrates deleting a clip range from a keyword collection. -
FIG. 9 illustrates removing a keyword from a portion of a clip range. -
FIG. 10 provides an illustrative example of disassociating multiple ranges of video clips by deleting a keyword collection. -
FIG. 11 illustrates combining two keyword collections. -
FIG. 12 provides an illustrative example of selecting multiple keyword collections from the event library. -
FIG. 13 provides an illustrative example of selecting a video clip range. -
FIG. 14 provides an illustrative example of dragging and dropping clips from one event collection to another event collection. -
FIG. 15 provides an illustrative example of dragging and dropping keyword collections from one event collection to another event collection. -
FIG. 16 provides an illustrative example of merging two event collections. -
FIG. 17 conceptually illustrates a process for associating a range of a media clip with a keyword. -
FIG. 18 illustrates an example of a tagging tool according to some embodiments. -
FIG. 19 illustrates the media editing application automatically assigning a shortcut key for a previously used keyword. -
FIG. 20 illustrates an example of using the auto-complete feature of the tagging tool. -
FIG. 21 illustrates an example of using the keyword association tool to perform an auto-apply operation. -
FIG. 22 illustrates removing a keyword from a video clip using the tagging tool. -
FIG. 23 conceptually illustrates a state diagram of a media-editing application of some embodiments. -
FIG. 24 provides an illustrative example of creating a keyword collection by analyzing content. -
FIG. 25 illustrates an example of different groupings that are created based on an analysis of video clips. -
FIG. 26 provides an illustrative example of different groupings that are created after the media editing application has analyzed and fixed image stabilization problems. -
FIG. 27 illustrates automatically importing media clips from different folders of the file system. -
FIG. 28 conceptually illustrates a process for automatically organizing media clips into different keyword collections by analyzing the media clips. -
FIG. 29 provides an illustrative example of creating a smart collection. -
FIG. 30 provides an illustrative example of filtering the smart collection based on keyword. -
FIG. 31 illustrates filtering the event browser based on keywords. -
FIG. 32 illustrates an example of rating a media clip. -
FIG. 33 illustrates an example of filtering an event collection based on ratings or keywords. -
FIG. 34 illustrates the media editing application with a list view according to some embodiments. -
FIG. 35 illustrates expanding a media clip in the list view. -
FIG. 36 illustrates an example of simultaneously expanding multiple different clips in the list view. -
FIG. 37 illustrates the list view with several notes fields for adding notes. -
FIG. 38 illustrates selecting different ranges of a media clip using the list view. -
FIG. 39 illustrates selecting multiple ranges of a media clip using the list view. -
FIG. 40 conceptually illustrates a process for displaying and selecting items in a list view. -
FIG. 41 conceptually illustrates a process for playing items in a list view. -
FIG. 42 illustrates adding a marker to a clip using the list view. -
FIG. 43 provides an illustrative example of editing a marker. -
FIG. 44 provides an illustrative example of defining a marker as a to-do item. -
FIG. 45 provides an illustrative example of adding a video clip to a timeline. -
FIG. 46 provides an illustrative example of a timeline search tool according to some embodiments. -
FIG. 47 provides an illustrative example of the association between the timeline playhead and the index playhead. -
FIG. 48 provides an illustrative example of filtering the timeline search tool. -
FIG. 49 provides an illustrative example of filtering the timeline search tool based on video, audio, and titles. -
FIG. 50 provides an illustrative example of navigating the timeline using the search tool. -
FIG. 51 provides an example workflow for searching the timeline for a to-do marker using the search tool and checking the to-do marker as a completed item. -
FIG. 52 provides an illustrative example of using the timeline search tool to search a list of keywords and markers. -
FIG. 53 provides an illustrative example of using the timeline search tool to search a list of clips. -
FIG. 54 provides an illustrative example of using the timeline search tool to display a time duration for ranges of clips. -
FIG. 55 provides an illustrative example of displaying the total time of selected clip items in the index area of the timeline search tool. -
FIG. 56 provides an illustrative example of using the timeline search tool to find missing clips. -
FIG. 57 conceptually illustrates a process for searching and navigating a timeline of a media editing application. -
FIG. 58 conceptually illustrates several example data structures for a searchable and navigable timeline. -
FIG. 59 conceptually illustrates a software architecture of a media editing application of some embodiments. -
FIG. 60 conceptually illustrates an electronic system with which some embodiments of the invention are implemented. - In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
- Some embodiments of the invention provide a novel keyword association tool for organizing media content. In some embodiments, the keyword association tool is integrated into a sidebar display area as a keyword collection. A user can create different keyword collections for different keywords. To associate a media clip with a keyword, the user can drag and drop the clip onto a corresponding keyword collection. The same technique can be used to associate multiple clips with one keyword by simultaneously dragging and dropping the clips onto a keyword collection.
- In some embodiments, the keyword association tool is provided as a set of components of a media editing application. In some such embodiments, the media editing application automatically associates keywords with the media clips based on an analysis of the clips (e.g., based on a people detection operation). Each keyword can be associated with the entire clip or a portion of the clip.
- The media editing application of some embodiments includes a first display area for displaying different keyword collections and a second display area for displaying media content. In many of the examples described below, the first display area is referred to as an event library, and the second display area is referred to as an event browser. This is because the keyword collections are hierarchically organized under an event category in these examples. However, the keyword collections may exist in their own hierarchy or as part of a different hierarchy.
- For some embodiments of the invention,
FIG. 1 illustrates a graphical user interface (“GUI”) 100 of a media editing application with such a keyword association tool. This figure illustrates the GUI 100 at four different stages 105, 110, 115, and 120 to show how an event library 125 and an event browser 130 can be used to associate a video clip with a keyword. Each of these stages will be described in detail below after an introduction of the elements of GUI 100. As shown in FIG. 1, the GUI 100 includes the event library 125, the event browser 130, and a set of controls 155-175. - In some embodiments, the
event library 125 is a sidebar area of the GUI 100 that displays several selectable items representing different collections. In the example illustrated in FIG. 1, the collections are listed hierarchically starting with a storage collection 102, followed by a “year” collection 104, and an event collection 106. Each particular collection may have multiple other child collections. Also, each particular collection includes a corresponding UI item for collapsing or expanding the particular collection in the event library 125. For instance, a user of the GUI 100 can select a UI item 108 to hide or reveal each event collection that is associated with the “year” collection 104. - In some embodiments, when media content is imported into the application's library, the application automatically organizes the content into one or more collections. For example, a selectable item representing a new event may be listed in the
event library 125 when a set of video clips is imported from a camcorder, digital camera, or hard drive. The application may also automatically specify a name for each collection. For example, in FIG. 1, the names of collections 104 and 106 were automatically specified when the corresponding content was imported. As will be described by reference to FIG. 27 below, some embodiments automatically create different keyword collections for imported content. - The
event browser 130 is an area in the GUI 100 through which the application's user can organize media content into different collections. To allow the user to easily find content, the event browser 130 may be sorted (e.g., by creation date, reel, scene, clip duration, media type, etc.). In the example illustrated in FIG. 1, video clips are represented as thumbnail images. However, depending on the user's preference, the clips may be represented differently. For instance, a video clip may be represented as a filmstrip with several images of the clip displayed as a sequence of thumbnail images. - In some embodiments, audio clips are represented differently from video clips in the
event browser 130. For instance, an audio clip may be represented as a waveform. That is, a representation of the audio clip may indicate the clip's signal strength at one or more instances in time. In some embodiments, a video clip representation may include a representation of its associated audio. For instance, in FIG. 1, the representation 140 includes a waveform 112. This waveform 112 spans horizontally across the representation 140 to graphically indicate the signal strength of the video clip's audio. - The set of controls 155-175 includes selectable UI items for modifying the display or view of the
event library 125 and event browser 130. For instance, a user of the GUI 100 can select the control 155 to hide or reveal the event library 125. The user can also select the control 160 to show a drop-down list of different sorting or grouping options for collections in the event library 125 and representations in the event browser 130. In addition, a user's selection of the control 175 reveals UI items for (1) adjusting the size of clip representations and (2) adding/removing waveforms to/from representations of video clips. Also, a selection of the control 165 causes the event browser 130 to switch to a list view from the displayed thumbnails view (e.g., clips view, filmstrip view). Several examples of such a list view are described below in Section VIII. - The
control 170 includes a duration slider that controls how many clips are displayed or how much detail appears in the event browser 130. The duration control includes a slider bar that represents different amounts of time. The knob can be moved to expand or contract the amount of detail (e.g., the number of thumbnails representing different frames) shown in each clip's filmstrip representation. Showing more thumbnails for each clip decreases the overall number of clips shown in the event browser 130. In some embodiments, a shorter time duration displays more detail or more thumbnails, thereby lengthening each clip's filmstrip representation. - Having described the elements of the
GUI 100, the operations of associating a video clip with a keyword will now be described by reference to the state of this GUI during the four stages 105, 110, 115, and 120 of FIG. 1. In the first stage 105, the event library 125 lists several different collections. The user selects the event collection 106 to display representations 135-150 in the event browser 130. As mentioned above, the representations 135-150 represent different video clips imported into the application's library. - The
second stage 110 shows the GUI 100 after the user selects an area of the event library 125. The selection causes a context menu 118 to appear. The context menu 118 displays several menu items related to the event browser 130. For example, the context menu 118 displays menu items for creating and deleting an event, and a menu item 114 for creating a new keyword collection. The context menu 118 includes an option to create folders. In some embodiments, these folders are for storing one or more keyword collections or keyword folders. When the user selects the menu item 114 in the context menu 118, the user is presented with a keyword collection 116, as illustrated in the third stage 115. - As shown in the
third stage 115, the keyword collection 116 is displayed in the event library 125. Specifically, the keyword collection 116 is integrated into the sidebar area and categorically listed under the event collection 106. The keyword collection 116 includes graphical and textual elements. In the example illustrated in FIG. 1, the graphical element indicates to the user (e.g., through a key symbol) that the collection 116 represents a keyword. Also, to distinguish the keyword collection 116 from other collections, the graphical element displays color differently from graphical elements of those other collections. - The textual element of the
keyword collection 116 represents the keyword. In other words, the textual element represents a word, term (several words), phrase, or characters (e.g., string, alphanumeric symbols) that the user can use to associate with any media content represented in the event browser 130. As shown in the third stage 115, the application has specified a default keyword name for the collection. Also, the textual element is highlighted to indicate that a more meaningful keyword can be inputted for the collection 116. - The
fourth stage 120 shows one way of associating a piece of media content with a keyword. Here, the user selects the thumbnail representation 135 of a video clip. The selection causes the representation 135 to be highlighted in the event browser 130. The user then drags and drops the representation 135 onto the keyword collection 116 to associate the video clip with the keyword. - In the example described above, the
keyword collection 116 is integrated in a sidebar area that has traditionally been reserved for listing bins or folders. A user of the GUI does not have to search for a separate keyword tool to use the application's keyword functionality. Moreover, the keyword collection operates in a manner similar to what many computer users have come to know as a bin or a folder. In other words, the keyword collection acts as a virtual bin or virtual folder that the user can drag and drop items onto in order to create keyword associations. -
FIG. 2 illustrates the GUI 100 after associating a video clip with a keyword. Specifically, in two operational stages 205 and 210, it shows how the event library 125 can be used to filter the event browser 130 to only display content of the keyword collection 116. The event library 125 and the event browser 130 are the same GUI elements as those described in FIG. 1. - The
first stage 205 illustrates the contents of the event collection 106. Specifically, it illustrates that the representation 135 is not removed from the event browser 130 after the drag-and-drop operation illustrated in FIG. 1. As shown in the first stage 205, the representation 135 remains in the event collection 106. The second stage 210 shows a user's selection of the keyword collection 116. The selection causes the event browser 130 to display the content of the keyword collection 116. Specifically, the selection causes the event browser to be filtered down to the video clip associated with the keyword. - In some embodiments, the media editing application displays media clips differently based on their association with one or more keywords. This allows users to quickly assess a large group of media clips and see which ones are associated or not associated with any keywords. For example, in
FIG. 2, a bar 220 is displayed across each representation of a clip that is associated with a keyword collection in the event library 125. Also, the representations 140-150 are displayed without any bars. This indicates to a user that the video clips associated with these representations are not marked with any keywords. - In some embodiments, the media editing application allows a user to mark a range of a clip or the entire clip. In other words, a user may specify a time duration or several different time durations in which one or more keywords are applicable. For example, a user may specify that an audio clip includes crowd noise starting at one point in time and ending at another point, and then tag that range as “crowd noise”. In the example illustrated in
FIG. 2, the entire range of the video clip associated with the representation 135 is marked with the keyword. This is indicated by the bar 220, as it spans horizontally across the representation 135 in the event browser 130. -
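- The range tagging described above can be sketched with a simple data structure. This is an illustrative Python sketch only; the names and structures are hypothetical, as the patent does not specify an implementation.

```python
# Minimal sketch of keyword tagging with ranges, as described above: a keyword
# may apply to the whole clip or to a (start, end) portion of it. All names
# and structures are illustrative, not taken from the patent.

def tag_range(tags, keyword, start, end):
    """Associate a keyword with the [start, end] range of a clip, in seconds."""
    tags.setdefault(keyword, []).append((start, end))

clip_duration = 60.0
tags = {}  # keyword -> list of tagged (start, end) ranges

tag_range(tags, "crowd noise", 10.0, 25.0)       # a partial range
tag_range(tags, "Favorite", 0.0, clip_duration)  # the entire clip

# A keyword bar spans the full representation only when its range covers
# the whole clip, as with the bar 220 in FIG. 2.
full_clip = [kw for kw, ranges in tags.items()
             if any(s == 0.0 and e == clip_duration for s, e in ranges)]
```

Under this model, a bar drawn for “crowd noise” would span only the middle portion of the clip's representation, while the “Favorite” bar would span its entire width.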
FIG. 3 illustrates specifying a range of a video clip to associate with a keyword. Specifically, this figure shows how a representation 320 of the video clip can be used to specify the range. Four operational stages 300-315 are shown in FIG. 3. This figure includes the event browser 130, the representation 320, and a preview display area 325. The event browser 130 is the same as the one described above by reference to FIG. 1. - As mentioned above, a video clip representation displays a thumbnail of an image in the video clip. In some embodiments, a media clip's representation is an interactive UI item that dynamically displays different images. Also, the representation, in some embodiments, can be used to preview audio of a media clip. One example of such representation is shown in
FIG. 3 with the representation 320. As shown, the representation includes a playhead 335 and a range selector 340. - In the example illustrated in
FIG. 3, the representation 320 can be selected to display different thumbnails and play different audio samples. In particular, the width of the representation 320 represents a virtual timeline. The user can select an interior location within the representation. The interior location corresponds to a particular time on the virtual timeline. The selection causes the representation to display a thumbnail image of the video clip at that particular time instance. Similarly, an audio sample that corresponds to the particular time instance is played when the user selects the interior location. - The playhead 335 moves along the representation's virtual timeline. When a user selects an interior location, the
playhead 335 moves along the virtual timeline to the selected interior location. The user can use this playhead 335 as a reference point to display different images and play different audio samples associated with the video clip. - The
range selector 340 allows the user to define a range of a clip to be marked with a keyword. In some embodiments, the range selector 340 allows the user to specify a range to add to a timeline. The user can activate the range selector 340 by selecting a representation. The selection causes the range selector 340 to appear. The user can then move the selector's edges along the representation's virtual timeline to specify a range. - In some embodiments, the
preview display area 325 displays a preview of a composite presentation that the media editing application creates by compositing several media clips (e.g., audio clips, video clips, etc.). As shown in FIG. 3, the preview display area 325 displays a preview of a clip selected from the event browser 130. For example, when a user selects a representation's interior location that corresponds to a particular time instance, the preview display area presents a preview of the representation's associated video clip at that particular instance in time. - The
first stage 300 of FIG. 3 shows the event browser 130 and the preview display area 325. The user has selected an interior location within the representation 320. The selection causes the playhead 335 to be moved along the representation's virtual timeline to the selected interior location. The selection also causes the preview display area 325 to display a preview of the representation's associated video clip at a time instance corresponding to the playhead 335. - The
second stage 305 illustrates selection and movement of an edge of the range selector 340. Specifically, in this example, the left edge of the range selector 340 is moved along the virtual timeline to about the mid-point. This left edge represents a starting point of a range of the video clip. Similarly, the third stage 310 shows selection and movement of the opposite edge of the range selector 340. In particular, the right edge of the range selector 340 is moved towards the left edge. The right edge represents an ending point of the range of the video clip. - The
fourth stage 315 shows the event browser 130 after a keyword is associated with the range of the video clip. Here, a bar 330 is displayed across only a portion of the representation. This portion represents the range of the video clip that is marked or associated with the keyword. - In the example described above, a keyword range is specified using the
range selector 340. In some embodiments, the media editing application allows a user to modify a defined keyword range. For example, when a keyword is applied to a particular range of a clip, the media application may provide UI items and/or shortcut keys to modify the particular range. In this way, the user of the media editing application can define a keyword collection to include specific ranges of one or more clips. - In some embodiments, when a range of a clip is marked with multiple keywords, only one keyword representation (e.g., keyword bar) is displayed. For example, a filmstrip representation in an event browser may only display one keyword bar over a range that is associated with multiple keywords. In some embodiments, when the user selects the keyword representation (e.g., keyword bar), a keyword list (e.g., a popup list) appears showing all the keywords represented by the bar. In some such embodiments, by default the keyword with the shortest range is selected, but the user can select a different range by selecting its corresponding keyword in the keyword list.
- Several more example operations of the media editing application and the keyword association tool are described below. However, before describing these examples, an exemplary media editing application that implements the keyword association features of some embodiments will be described below in Section I. To differentiate keyword tagging from bin-type or folder-type collections, an example keyword data structure will be described in Section II. Section III describes example keyword operations performed with the media editing application. Section IV describes several example operations performed with a keyword tagging tool. Section V describes several operations performed by the media editing application to automatically create different keyword collections. Section VI describes creating smart collections using keywords. Section VII describes marking media content with different ratings. Section VIII describes a list view showing keywords associated with media content. Section IX describes markers. Section X describes a timeline search and index tool for searching and navigating a timeline. Section XI describes a software architecture of a media editing application of some embodiments. Finally, Section XII describes a computer system which implements some embodiments of the invention.
-
FIG. 4 illustrates a graphical user interface (GUI) 400 of a media-editing application of some embodiments. One of ordinary skill will recognize that the graphical user interface 400 is only one of many possible GUIs for such a media-editing application. In fact, the GUI 400 includes several display areas which may be adjusted in size, opened or closed, replaced with other display areas, etc. The GUI 400 includes a clip library 405 (also referred to as an event library), a clip browser 410 (also referred to as an event browser), a timeline 415, a preview display area 420, the timeline search tool 445, an inspector display area 425, an additional media display area 430, and a toolbar 435. - The
event library 405 includes a set of folder-like or bin-like representations through which a user accesses media clips that have been imported into the media-editing application. Some embodiments organize the media clips according to the device (e.g., physical storage device such as an internal or external hard drive, virtual storage device such as a hard drive partition, etc.) on which the media represented by the clips are stored. Some embodiments also enable the user to organize the media clips based on the date the media represented by the clips was created (e.g., recorded by a camera). - Within a storage device and/or date, users may group the media clips into “events”, or organized folders of media clips. For instance, a user might give the events descriptive names that indicate what kind of media is stored in the event (e.g., the “New Event 2-5-11” event shown in
clip library 405 might be renamed “European Vacation” as a descriptor of the content). In some embodiments, the media files corresponding to these clips are stored in a file storage structure that mirrors the folders shown in the clip library. - As will be described in detail in Sections III-IV below, some embodiments enable users to organize media clips into different keyword collections. In some such embodiments, each keyword collection is represented as a type of bin or folder that can be selected to reveal each media clip associated with a keyword of the particular keyword collection.
- Within the clip library, some embodiments enable a user to perform various clip management actions. These clip management actions may include moving clips between events, creating new events, merging two events together, duplicating events (which, in some embodiments, creates a duplicate copy of the media to which the clips in the event correspond), deleting events, etc. In addition, some embodiments allow a user to create sub-folders or sub-collections of an event. These sub-folders may include media clips filtered based on tags (e.g., keyword tags). For instance, in the “New Event 2-5-11” event, all media clips showing children might be tagged by the user with a “kids” keyword. Then these particular media clips could be displayed in a sub-folder or keyword collection of the event that filters clips in the event to only display media clips tagged with the “kids” keyword.
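- The keyword-filtered sub-collection described above can be sketched as follows. This is an illustrative Python sketch; the data model (clip name mapped to a set of keyword tags) is hypothetical, as the patent does not specify an implementation.

```python
# Minimal sketch of a keyword sub-collection as described above: the "kids"
# collection of an event filters the event's clips down to those tagged with
# the "kids" keyword. The data model is illustrative only.

def keyword_collection(event_clips, keyword):
    """Return the names of clips in the event that carry the given keyword."""
    return [name for name, tag_set in event_clips.items() if keyword in tag_set]

new_event = {
    "beach.mov": {"kids", "outdoor"},
    "dinner.mov": set(),
    "park.mov": {"kids"},
}
kids_clips = keyword_collection(new_event, "kids")
```

Selecting the “kids” sub-folder in the clip library would then display only the tagged clips, leaving the untagged clip out of view.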
- The
clip browser 410 allows the user to view clips from a selected folder or collection (e.g., an event, a sub-folder, etc.) of the clip library 405. As shown in this example, the collection “New Event 2-5-11” is selected in the clip library 405, and the clips belonging to that folder are displayed in the clip browser 410. Some embodiments display the clips as thumbnail filmstrips, as shown in this example. By moving a cursor (or a finger on a touchscreen) over one of the thumbnails (e.g., with a mouse, a touchpad, a touchscreen, etc.), the user can skim through the clip. That is, when the user places the cursor at a particular horizontal location within the thumbnail filmstrip, the media-editing application associates that horizontal location with a time in the associated media file, and displays the image from the media file for that time. In addition, the user can command the application to play back the media file in the thumbnail filmstrip. - In addition, the thumbnails for the clips in the browser display an audio waveform underneath the clip that represents the audio of the media file. In some embodiments, as a user skims through or plays back the thumbnail filmstrip, the audio plays as well. Many of the features of the clip browser are user-modifiable. For instance, in some embodiments, the user can modify one or more of the thumbnail size, the percentage of the thumbnail occupied by the audio waveform, whether audio plays back when the user skims through the media files, etc.
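- The skimming behavior described above, mapping a horizontal cursor location onto a time in the media file, can be sketched as follows. This is an illustrative Python sketch; the function and parameter names are hypothetical, as the patent does not specify an implementation.

```python
# Minimal sketch of skimming as described above: a horizontal cursor position
# inside a thumbnail filmstrip maps linearly onto a time in the clip's media
# file. All names here are hypothetical.

def skim_time(cursor_x, strip_left, strip_width, clip_duration):
    """Map a cursor x-coordinate within the filmstrip to a media time (seconds)."""
    fraction = (cursor_x - strip_left) / strip_width
    fraction = min(max(fraction, 0.0), 1.0)  # clamp to the strip's bounds
    return fraction * clip_duration

# A 200-pixel-wide filmstrip starting at x=100 that represents a 60-second
# clip: a cursor a quarter of the way across maps to 15 seconds.
t = skim_time(150, strip_left=100, strip_width=200, clip_duration=60.0)
```

The application would then display the image (and play the audio sample) corresponding to the returned time.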
- In addition, some embodiments enable the user to view the clips in the
clip browser 410 in a list view. In this view, the clips are presented as a list (e.g., with clip name, duration, etc.). Some embodiments also display a selected clip from the list in a filmstrip view at the top of the clip browser 410 so that the user can skim through or play back the selected clip. As will be described in detail in Section VIII below, the list view displays different ranges of media associated with keywords. The list view in some embodiments allows users to select different ranges of a media clip and/or navigate to different sections of the media clip. - The
timeline 415 provides a visual representation of a composite presentation (or project) being created by the user of the media-editing application. Specifically, it displays one or more geometric shapes that represent one or more media clips that are part of the composite presentation. The timeline 415 of some embodiments includes a primary lane (also called a “spine”, “primary compositing lane”, or “central compositing lane”) as well as one or more secondary lanes (also called “anchor lanes”). The spine represents a primary sequence of media which, in some embodiments, does not have any gaps. The clips in the anchor lanes are anchored to a particular position along the spine (or along a different anchor lane). Anchor lanes may be used for compositing (e.g., removing portions of one video and showing a different video in those portions), B-roll cuts (i.e., cutting away from the primary video to a different video whose clip is in the anchor lane), audio clips, or other composite presentation techniques. - The user can select media clips from the
clip browser 410 and add them to the timeline 415 in order to include the clips in a presentation represented in the timeline. Within the timeline, the user can perform further edits to the media clips (e.g., move the clips around, split the clips, trim the clips, apply effects to the clips, etc.). The length (i.e., horizontal expanse) of a clip in the timeline is a function of the length of the media represented by the clip. As the timeline is broken into increments of time, a media clip occupies a particular length of time in the timeline. As shown, in some embodiments the clips within the timeline are shown as a series of images. The number of images displayed for a clip varies depending on the length of the clip in the timeline, as well as the size of the clips (as the aspect ratio of each image will stay constant). - As with the clips in the clip browser, the user can skim through the timeline or play back the timeline (either a portion of the timeline or the entire timeline). In some embodiments, the playback (or skimming) is not shown in the timeline clips, but rather in the
preview display area 420. - The preview display area 420 (also referred to as a “viewer”) displays images from media files which the user is skimming through, playing back, or editing. These images may be from a composite presentation in the
timeline 415 or from a media clip in the clip browser 410. In this example, the user has been skimming through the beginning of clip 440, and therefore an image from the start of this media file is displayed in the preview display area 420. As shown, some embodiments will display the images as large as possible within the display area while maintaining the aspect ratio of the image. - The
inspector display area 425 displays detailed properties about a selected item and allows a user to modify some or all of these properties. The selected item might be a clip, a composite presentation, an effect, etc. In this case, the clip that is shown in the preview display area 420 is also selected, and thus the inspector displays information about media clip 440. This information about the selected media clip includes duration, file format, file location, frame rate, date created, audio information, etc. In some embodiments, different information is displayed depending on the type of item selected. - The additional
media display area 430 displays various types of additional media, such as video effects, transitions, still images, titles, audio effects, standard audio clips, etc. In some embodiments, the set of effects is represented by a set of selectable UI items, in which each selectable UI item represents a particular effect. In some embodiments, each selectable UI item also includes a thumbnail image with the particular effect applied. The display area 430 is currently displaying a set of effects for the user to apply to a clip. - The
toolbar 435 includes various selectable items for editing, modifying items that are displayed in one or more display areas, etc. The toolbar 435 includes various selectable items for modifying the type of media that is displayed in the additional media display area 430. The illustrated toolbar 435 includes items for video effects, visual transitions between media clips, photos, titles, generators and backgrounds, etc. In addition, the toolbar 435 includes a selectable inspector item that causes the display of the inspector display area 425 as well as items for applying a retiming operation to a portion of the timeline, adjusting color, and other functions. The toolbar 435 also includes selectable items for media management and editing. Selectable items are provided for adding clips from the clip browser 410 to the timeline 415. In some embodiments, different selectable items may be used to add a clip to the end of the spine, add a clip at a selected point in the spine (e.g., at the location of a playhead), add an anchored clip at the selected point, perform various trim operations on the media clips in the timeline, etc. The media management tools of some embodiments allow a user to mark selected clips as favorites, among other options. - The
timeline search tool 445 allows a user to search and navigate a timeline. The timeline search tool 445 of some embodiments includes a search field for searching for clips in the timeline 415 based on their names or associated keywords. The timeline search tool 445 includes a display area for displaying search results. In some such embodiments, each result is user-selectable such that a selection of the result causes the timeline to navigate to the position of the clip in the timeline. Accordingly, the timeline search tool 445 allows a content editor to navigate the timeline to identify clips. Several examples of the timeline search tool will be described in Section X below. - One of ordinary skill will also recognize that the set of display areas shown in the
GUI 400 is one of many possible configurations for the GUI of some embodiments. For instance, in some embodiments, the presence or absence of many of the display areas can be toggled through the GUI (e.g., the inspector display area 425, additional media display area 430, and clip library 405). In addition, some embodiments allow the user to modify the size of the various display areas within the UI. For instance, when the display area 430 is removed, the timeline 415 can increase in size to include that area. Similarly, the preview display area 420 increases in size when the inspector display area 425 is removed. -
FIG. 5 conceptually illustrates example data structures for several objects associated with a media editing application. Specifically, the figure illustrates relationships between the objects that facilitate the organization of media clips into different keyword collections. As shown, the figure illustrates (1) an event object 505, (2) a clip object 510, (3) a component object 515, (4) an asset object 525, (5) a keyword collection object 545, and (6) a keyword set object 520. In some embodiments, one or more of the objects in this figure are subclasses of other objects. For example, in some embodiments, the clip object 510 (i.e., collection object), component object 515, and keyword set object 520 are all subclasses of a general clip object. - In the example illustrated in
FIG. 5, the event object 505 includes an event ID and a number of different clip collections (including the clip object 510). The event object 505 is also associated with a number of keyword collection objects (including the keyword collection object 545). The event ID is a unique identifier for the event object 505. The data structure of the event object 505 may include additional fields in some embodiments, such as the event name, event date (which may be derived from an imported clip), etc. The event data structure may be a Core Data (SQLite) database file that includes the assets and clips as objects defined within the file, an XML file that includes the assets and clips as objects defined within the file, etc. - The
clip object 510 or collection object, in some embodiments, is an ordered array of clip objects. The clip object stores one or more component clips (e.g., the component object 515) in the array. In addition, the clip object 510 stores a clip ID that is a unique identifier for the clip object. In some embodiments, the clip object 510 is a collection object that can include component clip objects as well as additional collection objects. In some embodiments, the clip object 510 or collection object only stores the video component clip in the array, and any additional components (generally one or more audio components) are then anchored to that video component. - The
component object 515 includes a component ID, a set of clip attributes, and an asset reference. The component ID identifies the component. The asset reference of some embodiments stores an event ID and an asset ID, and uniquely identifies a particular asset object (e.g., the asset object 525). In some embodiments, the asset reference is not a direct reference to the asset but rather is used to locate the asset when needed. For example, when the media-editing application needs to identify a particular asset, the application uses the event ID to locate the event that contains the asset, and then the asset ID to locate the particular desired asset. As mentioned, the clip object 510 only stores the video component clip in its array, and any additional components (generally one or more audio components) are then anchored to that video component. This is illustrated in FIG. 5 as the component object 515 includes a set of one or more anchored components 555 (e.g., audio components). In some embodiments, each component that is anchored to another clip or collection stores an anchor offset that indicates a particular instance in time along the range of the other clip or collection. That is, the anchor offset may indicate that the component is anchored x number of seconds and/or frames into the other clip or collection. These times refer to the trimmed ranges of the clips in some embodiments. - The
asset object 525, as shown, includes an asset ID, a reference to a source file, and a set of source file metadata. The asset ID identifies the asset, while the source file reference is a pointer to the original media file. As shown, the source file metadata 530 includes the file type (e.g., audio, video, movie, still image, etc.), the file format (e.g., “.mov”, “.avi”, etc.), a set of video properties 535, a set of audio properties 540, and additional metadata. The set of audio properties 540 includes a sample rate, a number of channels, and additional metadata. Some embodiments include additional properties, such as the file creation date (i.e., the date and/or time at which the media was captured (e.g., filmed, photographed, recorded, etc.)). - In some embodiments, a set of metadata from the
source file metadata 530 is displayed in the event browser (e.g., as part of a list view as will be described in detail below in Section VIII). The data structure of the asset object 525, as well as several other objects, may be populated when the source file is imported into the media editing application. In some embodiments, the asset object 525 additionally stores override data that modifies one or more of the video or audio properties. For instance, a user might enter that a media file is actually 1080p, even though the file's metadata, stored in the asset object, indicates that the video is 1080i. When presented to the user, or used within the application, the override will be used and the media file will be treated as 1080p. - Different from a folder or bin-like object that stores direct or indirect references to each file that it contains, the
keyword collection object 545 includes a reference to a keyword. As such, a keyword collection references a keyword to identify or filter a group of files (e.g., media clips) to display only those that have been tagged with the keyword. In comparison, a folder or bin-like object references (e.g., directly or indirectly) a file that it contains. This difference between a keyword collection and a folder-type or bin-type collection will be further illustrated in several of the examples described below. For example, in FIG. 14, when two media clips tagged with a keyword are moved from one event collection to another event collection, the media clips' keyword associations are carried over to the other event collection. This keyword association causes a keyword collection for the keyword to be created in the other event collection. - In the example illustrated in
FIG. 5, the keyword collection object 545 includes a reference to a keyword of the keyword set object 520. However, the relationship between the keyword collection and the keyword set object may be expressed differently, in some embodiments. Here, the keyword collection object 545 is associated with or is a part of the event object 505. This is because the keyword collections are hierarchically organized under an event collection in the media editing application. However, the keyword collections may exist in their own hierarchy or as a part of a different hierarchy. - As shown in
FIG. 5, the keyword collection object 545 includes other attributes. In some embodiments, these attributes include attributes similar to those of a folder or bin, such as a creation date. These attributes may include other collection objects (e.g., filter or smart collection objects). Several examples of keyword collections that are associated with or contain other collections are described below by reference to FIGS. 25 and 26. - As shown in
FIG. 5, the keyword set object 520 includes a keyword set 550, range attributes, a note, and other keyword attributes. In some embodiments, the keyword set 550 is a set of one or more keywords that are associated with a range of the clip object 510. The keyword set may be specified by a user of the media editing application. As will be described below, the keyword set may also be automatically specified by the media editing application. Several examples of automatically assigning one or more keywords by analyzing media clips will be described below by reference to FIGS. 24-27. - In some embodiments, the keyword set
object 520 is a type of anchored object. For example, in the example illustrated in FIG. 5, the keyword set object may include an anchor offset that indicates that it is anchored to the clip object 510 at x number of seconds and/or frames into the range of the clip (e.g., the trimmed range of the clip). - In some embodiments, the keyword set object's range attribute indicates a starting point and an ending point of the range of a clip that is associated with the keyword set. This may include the actual start time and end time. In some embodiments, the range attributes may be expressed differently. For example, instead of a start time and an end time, the range may be expressed as a start time and a duration (from which the end time can be derived). As mentioned above, in some embodiments, a keyword set
object 520 is a type of anchored object. In some such embodiments, the anchor offset is associated with or indicates a starting point of the range of a clip associated with the keyword. Accordingly, the keyword set object 520 may only store the starting point or the anchor offset, in some embodiments. - The note attribute, in some embodiments, is a field that the user can enter for the range of the media clip associated with the keyword set. A similar note attribute is also shown for the
clip object 510. This allows a clip object or a collection object to be associated with a note. Several examples of specifying a note for a clip or a keyword range will be described below by reference to FIG. 37. - One of ordinary skill will also recognize that the objects and data structures shown in
FIG. 5 are just a few of the many different possible configurations for implementing the keyword organization features of some embodiments. For instance, in some embodiments, instead of the clip object indirectly referencing a source file, the clip object may directly reference the source file. Also, the keyword collection may not be a part of an event object but may be part of a different dynamic collection structure (e.g., folder structure, bin structure) or hierarchical structure. In addition, a keyword set object, in some embodiments, is a keyword object representing only one keyword instead of a set of one or more keywords. In some such embodiments, each keyword object includes its own range attribute, note attribute, etc. Also, additional information regarding data structures is described in U.S. patent application Ser. No. 13/111,912, entitled “Data Structures for a Media-Editing Application”. This application is incorporated in the present application by reference. -
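The object relationships described above for FIG. 5 can be sketched in code. The sketch below is a hypothetical simplification (all class and field names are invented, and the component and asset objects are omitted); it illustrates the key point that a keyword collection stores a keyword and derives its contents by filtering, rather than storing file references the way a folder or bin would:

```python
from dataclasses import dataclass, field

@dataclass
class KeywordSet:                  # cf. keyword set object 520
    keywords: set                  # one or more keywords for a range of a clip
    start: float                   # range attributes: start of the tagged range
    end: float                     # (equally expressible as start + duration)
    note: str = ""

@dataclass
class Clip:                        # cf. clip/collection object 510
    clip_id: str
    keyword_sets: list = field(default_factory=list)  # anchored keyword sets
    note: str = ""

@dataclass
class KeywordCollection:           # cf. keyword collection object 545
    keyword: str                   # stores a keyword, not file references

    def contents(self, event):
        # Contents are computed by filtering the event's clips on the keyword.
        return [c for c in event.clips
                if any(self.keyword in ks.keywords for ks in c.keyword_sets)]

@dataclass
class Event:                       # cf. event object 505
    event_id: str
    clips: list = field(default_factory=list)
    keyword_collections: list = field(default_factory=list)

event = Event("e1", clips=[
    Clip("c1", keyword_sets=[KeywordSet({"wedding"}, 0.0, 12.0)]),
    Clip("c2"),
])
print([c.clip_id for c in KeywordCollection("wedding").contents(event)])  # ['c1']
```

Because contents() filters rather than stores, moving a tagged clip between events (as in FIG. 14) requires no bookkeeping in the collection itself; the clip's own keyword sets carry the association.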
FIG. 6 illustrates creating a keyword association by dragging and dropping a range of a clip from one keyword collection to another. Five operational stages 605-625 are shown in this figure. The event library 125 and the event browser 130 are the same as those illustrated in FIG. 1. - The
first stage 605 shows the event library 125 and the event browser 130. The user has selected a keyword collection 630 that is displayed in the event library 125. The selection causes the event browser 130 to display contents of the keyword collection 630. As shown in this first stage 605, two video clip representations are displayed in the event browser 130. The representations are similar to the ones described above by reference to FIG. 3. However, in the example illustrated in FIG. 6, the representation 660 shows multiple thumbnail images. These thumbnail images represent a sequence of different images of the video clip at different instances in time. - The
second stage 610 shows a selection of the representation 660. The selection causes a range selector 640 to appear. The third stage 615 shows the user interacting with the range selector to select a range of the video clip. Specifically, the left edge of the range selector 640 is moved along the virtual timeline to a third thumbnail image 665. - The
fourth stage 620 shows a drag and drop operation to associate the range of the video clip with a keyword. As shown, the user drags and drops the range from the keyword collection 630 to a keyword collection 650. This in turn causes the range of the video clip to be marked with a keyword associated with the keyword collection 650. - The
fifth stage 625 shows the GUI 100 after the drag and drop operation. Here, the user selects the keyword collection 650 from the event library 125. The selection causes the event browser 130 to display only the range of the video clip marked with the keyword of the keyword collection 650. - In the previous example, a range of a clip is associated with a keyword through a drag and drop operation. Some embodiments allow a user to (1) create a compound clip from multiple different clips and (2) tag a range that spans one or more of the multiple clips in the compound clip. In some embodiments, a compound clip is any combination of clips (e.g., in a timeline, in an event browser) and nests clips within other clips. Compound clips, in some embodiments, can contain video and audio clip components, clips, and other compound clips. As such, each compound clip can be considered a mini project or a mini composite presentation, with its own distinct project settings. In some embodiments, compound clips function just like other clips; a user can add them to a project or timeline, trim them, retime them, and add effects and transitions. In some embodiments, each compound clip is defined by data structures of the clip object or the collection object similar to those described above by reference to
FIG. 5 . - In some embodiments, compound clips can be opened (e.g., in the timeline, in the event browser) to view or edit their contents. In some such embodiments, a visual indication or an icon appears on each compound clip representation. This visual indication indicates to the user that the contents of the compound clip can be viewed or edited.
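The nesting described above, with compound clips containing clips, components, and other compound clips, is naturally recursive. A hypothetical sketch, representing a compound clip as a plain Python list of its children, shows how a tag that spans a compound clip reaches every nested leaf clip:

```python
def flatten(clip):
    """Yield the leaf clips contained in a (possibly compound) clip."""
    if isinstance(clip, list):      # a compound clip: any combination of clips
        for child in clip:
            yield from flatten(child)
    else:                           # a leaf clip, represented here by its name
        yield clip

# A compound clip that itself contains a nested compound clip:
compound = ["clip-A", ["clip-B", "clip-C"]]
print(list(flatten(compound)))      # ['clip-A', 'clip-B', 'clip-C']
```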
-
FIG. 7 illustrates creating a compound clip and associating the compound clip with a keyword. Eight operational stages 705-740 of the GUI 100 are shown in this figure. The first stage 705 shows two video clips (790 and 795) in an event browser 130. Here, the user selects the video clip 790. The second stage 710 shows the selection of the video clip 795 along with the video clip 790. The user might have selected an area of the event browser covering both of the clips (790 and 795) in order to select them. Alternatively, the user might have first selected the video clip 790 and then selected the video clip 795 while holding down a hotkey that facilitates multiple selections. - The
third stage 715 shows the activation of a context menu 750. This menu includes an option 745 to create a compound clip from the selected clips 790 and 795. As shown in the fourth stage 720, the selection of the option 745 causes a compound clip options window 755 to appear. The window 755 includes (1) a text field 760 for inputting a name for the compound clip, (2) a selection box 765 for selecting a default event collection for the compound clip, (3) a set of radio buttons 770 for specifying video properties (e.g., automatically based on the properties of the first video clip, custom), and (4) a set of radio buttons 775 for specifying audio properties (e.g., default settings, custom). - In the
fourth stage 720, the user inputs a name for the compound clip. The fifth stage 725 shows selection of the button 780 to create the compound clip based on the settings specified through the compound clip options window 755. As shown in the sixth stage 730, the selection causes a compound clip 704 to appear. The compound clip 704 includes a marking 702, which provides an indication to a user that it is a compound clip. In some embodiments, the marking 702 is a user-selectable item that, when selected, reveals both clips and/or provides an option to view or edit the individual clips in the compound clip 704. As shown in the sixth stage 730, the user selects the compound clip 704. In the seventh stage 735, the compound clip is dragged and dropped onto a keyword collection 785. The drag and drop operation causes the compound clip to be associated with a keyword of the keyword collection 785. As the entire clip was dragged and dropped onto the keyword collection 785, the association spans the entire ranges of the clips that define the compound clip 704. - Lastly, the
eighth stage 740 shows selection of the keyword collection 785. The selection causes the event browser 130 to be filtered down to the compound clip 704 associated with the keyword of the keyword collection 785. - In the previous two examples, one or more media clips are associated with a particular keyword.
FIG. 8 illustrates deleting a clip range from a keyword collection. Five operational stages 805-825 of the GUI 100 are shown in this figure. Specifically, the first stage 805 shows three representations of video clips (860, 845, and 870) in an event collection 830. The second stage 810 shows a selection of a keyword collection 835 from the event library 125. The selection causes the event browser 130 to display contents of the keyword collection 835. The keyword collection 835 includes two video clip representations that are associated with the keyword of the keyword collection 835. - The
third stage 815 shows the selection of the representation 845 (e.g., through a control click operation). The selection causes a context menu 840 to appear. The context menu 840 displays several selectable menu items related to the representation's video clip. In the third stage 815, the context menu 840 displays a menu item for copying the video clip (e.g., the clip range) and a menu item 850 for removing all keywords associated with the range of the video clip, among other menu items. The user can select any of these menu items. When the user selects the menu item 850 in the context menu 840, the media editing application disassociates the keyword from the range of the video clip. As a result, the representation 845 is removed from the event browser 130, as shown in the fourth stage 820. - The
fifth stage 825 illustrates the contents of the event collection 830. Specifically, it illustrates that deleting the range of the video clip from the keyword collection 835 did not remove the video clip from the event collection 830. -
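The behavior in FIG. 8 follows directly from the keyword-collection model: removing a clip range from a keyword collection deletes only the keyword association, so the clip itself stays in its event collection. A hypothetical sketch, with clips as dictionaries and the collection's contents derived by filtering:

```python
def remove_all_keywords(clip):
    """Disassociate every keyword from the clip range (cf. menu item 850)."""
    clip["keywords"].clear()

event_clips = [{"name": "clip-845", "keywords": {"kw"}},
               {"name": "clip-860", "keywords": set()}]
remove_all_keywords(event_clips[0])

# The keyword collection, being a filter on the keyword, is now empty...
print([c["name"] for c in event_clips if "kw" in c["keywords"]])  # []
# ...but the event collection still contains both clips.
print([c["name"] for c in event_clips])  # ['clip-845', 'clip-860']
```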
FIG. 9 illustrates removing a keyword from a portion of a clip range. Five operational stages 905-925 of the GUI 100 are shown in this figure. This figure is similar to the previous example. However, instead of selecting an entire range of a clip from a keyword collection, the user selects a portion of the range. As shown in the first stage 905, a keyword collection 930 includes one video clip representation 935. In stages two and three (910 and 915), the user selects a portion of the clip range using the range selector 340. The fourth stage 920 shows the selection of the representation 935 (e.g., through a control click operation). The selection causes the context menu 840 to appear. When the user selects the menu item 850 in the context menu 840, the media editing application disassociates the keyword from the portion of the range of the video clip. - The
fifth stage 925 illustrates the contents of the keyword collection 930 after disassociating the keyword from the portion of the range. Here, two separate representations are displayed in the event browser 130. As the middle range of the video clip is disassociated from the keyword, the representations correspond to the two remaining tagged ranges of the video clip. - In the previous two examples, a user navigates to a keyword collection to disassociate a keyword from a range of a video clip.
FIG. 10 provides an illustrative example of disassociating multiple ranges of video clips by deleting a keyword collection. Four operational stages 1005-1020 are illustrated in this figure. Specifically, the first stage 1005 shows the contents of the event collection 1025. As shown, the event collection 1025 includes two video clips that are associated with a keyword. The second stage 1010 shows the contents of the keyword collection 1030 that includes ranges of the two video clips in the event collection 1025. - The
third stage 1015 shows the GUI 100 after the user selects the keyword collection 1030 in the event library 125. The selection causes a context menu 1035 to appear. The context menu 1035 includes a selectable menu item 1040 for deleting the keyword collection. When the user selects the menu item 1040, the user is presented with the GUI 100 as illustrated in the fourth stage 1020. In particular, this fourth stage 1020 illustrates that the video clips in the event collection 1025 are not associated with any keywords. In other words, by deleting the keyword collection 1030, the multiple ranges of the different video clips are disassociated from the keyword of the keyword collection. This allows a user to quickly remove keyword associations from a large group of tagged items. -
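The partial-removal behavior of FIG. 9, where untagging the middle of a tagged range leaves two separately tagged ranges, amounts to interval subtraction. A hypothetical sketch using (start, end) pairs in seconds:

```python
def remove_portion(tagged, removed):
    """Subtract `removed` from `tagged`, returning the 0-2 surviving ranges."""
    (ts, te), (rs, re) = tagged, removed
    pieces = []
    if rs > ts:                        # tagged material survives before the cut
        pieces.append((ts, min(rs, te)))
    if re < te:                        # tagged material survives after the cut
        pieces.append((max(re, ts), te))
    return pieces

# Removing the keyword from the middle of a 30-second tagged range leaves
# two ranges, mirroring the two separate representations in FIG. 9:
print(remove_portion((0, 30), (10, 20)))   # [(0, 10), (20, 30)]
```

Deleting a whole keyword collection, as in FIG. 10, corresponds to the degenerate case where the removed interval covers the entire tagged range and nothing survives.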
FIG. 11 illustrates combining two keyword collections. Specifically, it illustrates how two keyword collections 1135 and 1140 are combined into one. Four operational stages 1105-1120 of the GUI 100 are shown in this figure. - The
first stage 1105 shows that two different keyword collections 1135 and 1140 are displayed in the event library 125. As a keyword collection 1135 is selected, the event browser 130 displays only each range of the media clip in the collection. Here, the event browser 130 displays a video clip representation 1125. The second stage 1110 shows selection of a keyword collection 1140. The selection causes the event browser 130 to display a video clip representation 1130. - The
third stage 1115 shows renaming of the collection 1140. To rename the keyword collection 1140, the user selects the collection 1140 (e.g., through a double click operation). Alternatively, the user can rename the collection through a menu item (e.g., in a context menu). Here, the selection of the collection 1140 causes the collection's name field to be highlighted. This indicates to the user that a new collection name can be inputted in this name field. - The
fourth stage 1120 shows the event browser 130 after renaming the keyword collection 1140 to the same name as the keyword collection 1135. Specifically, the renaming causes the media editing application to associate the range of the video clip with the keyword of the keyword collection 1135. As a result, the user's selection of the keyword collection causes the event browser 130 to display a representation 1130. This representation 1130 represents the range of the video clip that was previously associated with the keyword of the keyword collection 1140. -
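Because a keyword collection is identified by its keyword, renaming one collection to another's name (as in FIG. 11) is equivalent to retagging its ranges with the surviving keyword, which merges the two collections. A hypothetical sketch:

```python
def rename_collection(ranges, old_keyword, new_keyword):
    """Retag every range carrying old_keyword with new_keyword."""
    for r in ranges:
        if old_keyword in r["keywords"]:
            r["keywords"].discard(old_keyword)
            r["keywords"].add(new_keyword)

ranges = [{"name": "range-1125", "keywords": {"kw-A"}},
          {"name": "range-1130", "keywords": {"kw-B"}}]
rename_collection(ranges, "kw-B", "kw-A")

# Both ranges now appear when filtering on "kw-A":
print([r["name"] for r in ranges if "kw-A" in r["keywords"]])
# ['range-1125', 'range-1130']
```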
FIG. 12 provides an illustrative example of selecting multiple keyword collections 1220 and 1225 from the event library 125. The first stage 1205 shows contents of the keyword collection 1220. As shown, the keyword collection 1220 includes two video clip ranges 1230 and 1235. The second stage 1210 shows contents of the keyword collection 1225. The keyword collection 1225 includes two video clip ranges 1230 and 1240. The video clip range 1230 is the same as the video clip range in the keyword collection 1220. - The
third stage 1215 shows the selections of multiple collections. Specifically, when the collections 1220 and 1225 are both selected, the event browser 130 displays the union of these collections. This is illustrated in the third stage 1215 as the video clip range 1230 shared between the two collections is displayed only once in the event browser 130. -
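Displaying the union of several selected keyword collections, with each shared clip range appearing only once, is simple de-duplication. A hypothetical sketch using the range labels from FIG. 12:

```python
def union_of_collections(collections):
    """Combine collections in order, keeping the first occurrence of a range."""
    seen, result = set(), []
    for collection in collections:
        for clip_range in collection:
            if clip_range not in seen:
                seen.add(clip_range)
                result.append(clip_range)
    return result

collection_1220 = ["range-1230", "range-1235"]
collection_1225 = ["range-1230", "range-1240"]
print(union_of_collections([collection_1220, collection_1225]))
# ['range-1230', 'range-1235', 'range-1240'] — the shared range appears once
```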
FIG. 13 provides an illustrative example of selecting a video clip range. Four operational stages 1305-1320 are shown in this figure. Specifically, this figure illustrates that selecting a range of a clip from a keyword collection selects that range in a corresponding event collection. - The
first stage 1305 shows the contents of the event collection 1325. Here, the event collection 1325 includes three video clips. The second stage 1310 shows the contents of the keyword collection 1330. The keyword collection 1330 includes two video clip ranges. The video clip ranges of the keyword collection 1330 are portions of the video clips from the event collection 1325. - The
third stage 1315 shows a selection of a video clip range in the keyword collection 1330. The selection causes the range of the video clip to be highlighted in the keyword collection 1330. In the fourth stage 1320, the user navigates to the event collection 1325 after selecting the range in the keyword collection 1330. As shown, the range or portion of the video clip that corresponds to the range in the keyword collection 1330 is also selected in the event collection 1325. In the example described above, a range of a video clip is selected from a keyword collection. In some embodiments, when a user selects multiple ranges of different clips and then navigates to another collection (e.g., an event collection) with those same clips, the ranges remain selected in the other collection. -
FIG. 14 provides an illustrative example of dragging and dropping clips from one event collection to another event collection. Specifically, this figure illustrates keyword collections that are automatically created by the media editing application when several clips that are associated with a keyword are dragged and dropped from an event collection 1420 to an event collection 1425. Three operational stages 1405-1415 of the GUI 100 are shown in this figure. - The
first stage 1405 shows selection of multiple clips from the event collection 1420. These clips are associated with a keyword of a keyword collection 1455. In the second stage 1410, the user drags the selected clips to the event collection 1425. When the user drops the selected clips into the event collection 1425, the event collection 1425 is associated with the selected clips. - The
third stage 1415 shows that the keyword associations of the clips are carried over from one collection to another. In the example illustrated in the third stage 1415, the video clips are associated with the same keyword as they were in the event collection 1420. The event browser indicates this by displaying a bar 1435 over each of the two representations 1430 of the video clips. In addition, the keyword associations of these clips are shown by a keyword collection 1440 that is listed in the event collection 1425. - In the previous example, several clips are dragged and dropped from one event collection to another event collection.
FIG. 15 provides an illustrative example of dragging and dropping keyword collections from one event collection to another event collection. Specifically, this figure illustrates how keyword collections are reusable between different collections. Three operational stages 1505-1515 of the GUI 100 are shown in FIG. 15. - As shown in the
first stage 1505, the user selects multiple keyword collections 1530 from the event library 125. In the second stage 1510, the keyword collections are dragged and dropped onto the event collection 1525. The third stage 1515 shows the GUI 100 after the drag and drop operation. As shown, the same keyword collections 1535 are listed under the event collection 1525. However, the contents of the keyword collections 1530 are not copied to the event collection 1525. That is, the structure of the event collection 1520 is copied without its contents. This allows a user to easily reuse the structure of one collection without having to rebuild it in another. For example, a photographer may create multiple event collections for different weddings. To recreate the structure of a first wedding collection in a second wedding collection, the photographer can simply copy keyword collections from the first collection to the second collection. -
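Copying structure without contents, as in FIG. 15, falls out of the same model: since a keyword collection stores only its keyword, copying collections between events copies the keywords, and each copied collection then filters the destination event's own clips. A hypothetical sketch:

```python
def copy_keyword_collections(src_event, dst_event):
    """Copy the keyword-collection structure, never the source event's clips."""
    for keyword in src_event["keyword_collections"]:
        if keyword not in dst_event["keyword_collections"]:
            dst_event["keyword_collections"].append(keyword)

wedding_1 = {"keyword_collections": ["ceremony", "reception"],
             "clips": ["clip-a", "clip-b"]}
wedding_2 = {"keyword_collections": [], "clips": []}
copy_keyword_collections(wedding_1, wedding_2)

print(wedding_2["keyword_collections"])  # ['ceremony', 'reception']
print(wedding_2["clips"])                # [] — contents are not copied
```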
FIG. 16 provides an illustrative example of merging two event collections. Four operational stages 1605-1620 of the GUI 100 are shown in this figure. As shown in the first stage 1605, the user selects an event collection 1625 from the event library 125. In the second stage 1610, the event collection 1625 is dragged and dropped onto the event collection 1630. When the event collection 1625 is dropped onto the event collection 1630, the user is presented with a “merge events” window 1635 as illustrated in the third stage 1615. - As shown in the
third stage 1615, the “merge events” window 1635 includes a text field 1640 and a pull-down list 1645. The text field 1640 allows the user to specify a name for the merged event. The pull-down list 1645 allows the user to select a location for the merged event. In the example illustrated in FIG. 16, a hard disk is selected as the location. The “merge events” window 1635 also displays a notification indicating that when two events are merged all media will be merged into one event. The fourth stage 1620 shows the GUI 100 after the merge operation. Here, the contents of the event collection 1625, including a keyword collection 1650, are merged with the event collection 1630. -
FIG. 17 conceptually illustrates a process 1700 for associating a range of a media clip with a keyword. In some embodiments, the process 1700 is performed by a media editing application. The process 1700 starts when it displays (at 1705) a dynamic collection structure. An example of a dynamic collection structure is the event library described above by reference to FIG. 1. - The
process 1700 then receives (at 1710) a selection of a range of a media clip. For example, a user of the media editing application might select an entire range of a clip or a portion of a media clip. The process 1700 associates (at 1715) the range of the media clip with a keyword. - The
process 1700 then determines (at 1720) whether a keyword collection exists for the keyword. When the keyword collection exists, the process ends. Otherwise, the process 1700 creates (at 1725) a keyword collection for the keyword. The process 1700 then adds (at 1730) the keyword collection to the dynamic collection structure. The process 1700 then ends. Here, the new keyword collection is added to a display area or dynamic collection structure without a user having to manually create the new keyword collection. That is, upon association of a new keyword with one or more portions of one or more media clips, a new keyword collection is automatically created and added to the dynamic collection structure. - In some embodiments, when there are multiple collections or folders in a hierarchy, a new keyword represents a keyword that is used at a particular level of the hierarchy and that does not collide with a same keyword that exists at the particular level. For example, different folders or collections may include their own sets of media clips associated with one particular keyword (e.g., “architecture”). In this instance, the new keyword represents one keyword that is unique to a particular collection or folder and not necessarily to the overall dynamic collection structure or sub-collection.
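The flow of the process 1700 (tag a range, then create the keyword collection only on first use) can be sketched as follows. This is a purely illustrative Python sketch; the class and method names are hypothetical and not part of the described embodiments.

```python
# Illustrative sketch of process 1700: associating a clip range with a
# keyword and auto-creating the keyword collection when it does not exist.
class DynamicCollectionStructure:
    def __init__(self):
        # keyword -> list of (clip_id, start, end) tagged ranges
        self.keyword_collections = {}

    def tag_range(self, clip_id, start, end, keyword):
        # 1715: associate the range with the keyword;
        # 1720-1730: create and add the collection only if it is new
        if keyword not in self.keyword_collections:
            self.keyword_collections[keyword] = []
        self.keyword_collections[keyword].append((clip_id, start, end))

library = DynamicCollectionStructure()
library.tag_range("clip01", 0.0, 12.5, "wedding")  # first use creates the collection
library.tag_range("clip02", 3.0, 8.0, "wedding")   # later uses reuse it
```

The user never creates the collection manually; the first tagging operation with a new keyword does it as a side effect.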
- In some embodiments, the media editing application provides a tagging tool for associating media content with keywords.
FIG. 18 illustrates an example of a tagging tool 1865 according to some embodiments. Specifically, this figure illustrates using the tagging tool 1865 to associate a keyword with a media clip. Five operational stages 1805-1825 of the GUI 100 are shown in FIG. 18. The event library 125 and the event browser 130 are the same as those described above by reference to FIG. 1. - The
first stage 1805 shows the tagging tool 1865 that is displayed over the GUI 100. A user might have activated the tagging tool 1865 by selecting a shortcut key, a menu item, or a toolbar item. In the example illustrated in FIG. 18, a user selection of a control 1870 causes the tagging tool to appear and hover over the GUI 100. In the first stage 1805, the event collection 1860 is selected, which causes the event browser 130 to display representations of clips. In the second stage 1810, to input text, the user selects a text field 1850 of the tagging tool 1865. The user then inputs a keyword into this text field 1850. - The
third stage 1815 shows a selection of the representation 1830. The selection causes the representation to be highlighted. This provides an indication to the user that the entire range of the representation's video clip is selected. The fourth stage 1820 illustrates how a keyword association is created using the tagging tool 1865. Specifically, to associate the keyword with the video clip, the user selects the video clip's representation and selects a key (e.g., an enter key). The selections cause the keyword in the text field 1850 to be associated with the video clip. This in turn causes a keyword collection 1855 to be displayed in the event library 125. In the fifth stage 1825, the user filters the event browser 130 to display only the associated video clip by selecting the keyword collection 1855. - In some embodiments, the media editing application provides keyboard access when the
tagging tool 1865 is displayed. In other words, the user can select different hotkeys to perform operations such as playing/pausing a media clip, selecting a range of a clip, inserting a clip into the timeline, etc. This allows the user to play and preview different pieces of content while keyword tagging, without having to activate and de-activate the tagging tool. Also, when the tagging tool 1865 is activated, the media editing application does not place any restriction on accessing other parts of the GUI 100, such as the event library 125 and the event browser 130. For example, in FIG. 18, the user can select the representation or the keyword collection while the tagging tool 1865 is activated. -
FIG. 19 illustrates the media editing application automatically assigning a shortcut key for a previously used keyword. Two operational stages 1905 and 1910 of the GUI 100 are shown in FIG. 19. The event library 125 and the event browser 130 are the same as those described above by reference to FIG. 1. - The
first stage 1905 shows the tagging tool 1865 of the GUI 100. Also, the user has created the keyword collection 1855 using this tagging tool 1865. As shown, the tagging tool 1865 includes a selectable item 1915. When the user selects the selectable item 1915, the tagging tool 1865 expands to reveal several input fields, as illustrated in the second stage 1910. - The
second stage 1910 shows that the tagging tool 1865 includes a number of input fields. Each input field is associated with a particular key combination. A user of the GUI 100 can input keywords in these fields to create different keyword shortcuts. - As shown in the
second stage 1910, the text field 1920 of the tagging tool 1865 includes a keyword. Here, the media editing application populated this field after the user used the keyword to tag a video clip. When the user uses another keyword, a subsequent input field may be populated with the other keyword. The user can also input text into any one of the text fields to create custom keyword shortcuts or reassign a previously assigned keyword shortcut. In the example illustrated in FIG. 19, the tagging tool 1865 includes nine different shortcut slots. However, the media editing application may provide more or fewer shortcuts in some embodiments. - In the example described above, the media editing application automatically populates a shortcut key slot with a keyword when the keyword is used to mark one or more clips. Instead of having the application populate the tagging tool, a user can fill the keyword slots with keywords in order to quickly tag clips using the tool's shortcut feature.
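The automatic population of shortcut slots might behave as sketched below. The nine-slot count comes from the example above; the class name, method name, and next-free-slot policy are assumptions made for illustration only.

```python
# Hypothetical sketch: when a keyword is first used to tag a clip, the next
# unpopulated shortcut slot is filled with it automatically.
class ShortcutSlots:
    def __init__(self, count=9):
        self.slots = [None] * count  # slot i corresponds to some key combination

    def record_use(self, keyword):
        if keyword in self.slots:          # already assigned to a slot
            return self.slots.index(keyword)
        try:
            free = self.slots.index(None)  # next unpopulated slot
        except ValueError:
            return None                    # all slots taken; user reassigns manually
        self.slots[free] = keyword
        return free

slots = ShortcutSlots()
slots.record_use("interview")  # fills slot 0
slots.record_use("b-roll")     # fills slot 1
slots.record_use("interview")  # already in slot 0; nothing new is filled
```

A user could also write directly into `slots.slots` to create custom shortcuts or reassign an existing one, mirroring the manual reassignment described above.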
-
FIG. 20 illustrates an example of using the auto-complete feature of the tagging tool 1865. Specifically, the first stage 2005 illustrates a user typing a keyword into a shortcut field of the tagging tool. The second stage 2010 shows the tagging tool 1865 displaying suggested keywords based on user input. Here, as the user types, a previously used keyword, which the user can choose to auto-complete a phrase, is displayed below the text field 1850. - In some embodiments, the media editing application builds a custom dictionary of potential keywords. For instance, the media editing application may store terms or phrases that one or more users have entered, e.g., more than a certain number of times. In the example illustrated in
FIG. 20, the suggested keyword is based on a previously used keyword. However, the tagging bar may provide other suggestions based on the user's interaction with the media editing application. For example, the user may replace the keyword in the input field 2020 without marking any clips with the keyword. However, when the user types a keyword in the text field 1850, the media editing application might suggest the keyword that has been replaced in the field 2020. In conjunction with this learning capability, or instead of it, some embodiments provide a built-in dictionary of common production and editing terms from which the user can choose when tagging an item. - In the previous example, the media editing application provides a set of suggested keywords for an auto-fill operation.
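The frequency-threshold dictionary described above can be sketched as follows. The threshold value, prefix matching, and all names are illustrative assumptions; the embodiments do not specify how the dictionary is stored or queried.

```python
# Illustrative sketch of the custom keyword dictionary: a term becomes an
# auto-complete suggestion once it has been entered a threshold number of times.
from collections import Counter

class KeywordDictionary:
    def __init__(self, threshold=2):
        self.counts = Counter()
        self.threshold = threshold

    def record(self, term):
        self.counts[term.lower()] += 1

    def suggest(self, prefix):
        p = prefix.lower()
        return sorted(t for t, n in self.counts.items()
                      if n >= self.threshold and t.startswith(p))

d = KeywordDictionary()
for term in ["sunset", "sunset", "sunrise"]:
    d.record(term)
d.suggest("sun")  # "sunrise" has not yet crossed the threshold
```

A built-in dictionary of production terms, as mentioned above, could simply be merged into the same suggestion list.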
FIG. 21 illustrates an example of using the keyword tagging tool 1865 to perform an auto-apply operation. Specifically, this figure illustrates how a user can paint over or select a range of a clip and automatically apply one or more keywords to the selected range. One benefit of the auto-apply feature is that it allows the user to quickly paint over or select many different ranges of different clips to quickly tag them. - In the
first stage 2105, the keyword tagging tool 1865 displays two input keywords in the text field 1850. Here, the user activates the auto-apply mode by selecting a user interface item 2125. The selection causes the keyword tagging tool 1865 to display an indication 2155 that the media editing application is in an auto-apply mode, as illustrated in the second stage 2110. - In the
second stage 2110, the user selects an interior location of a clip representation 2140. The selection causes a range selector 2130 to appear. The third stage 2115 illustrates a selection of a range of the clip represented by the clip representation 2140. In particular, the user uses the range selector 2130 to paint over an area of the clip representation 2140. This causes a corresponding range to be associated with the two keywords in the text field 1850. - As shown in the
third stage 2115, the selection of the range causes the media editing application to display an indication 2160 that the range is associated with the two keywords. Specifically, the two associated keywords are displayed over the clip representation 2140 for a set period of time. As the range of the clip is associated with the two keywords, two corresponding keyword collections are added to the event library 125. - The
fourth stage 2120 illustrates the selection of the keyword collection 2145. Specifically, the selection causes the event browser 130 to be filtered down to the range of the clip associated with the keyword of the keyword collection 2145. -
FIG. 22 illustrates removing a keyword from a video clip using the tagging tool 1865. Three operational stages 2205-2215 of the GUI 100 are shown in this figure. In the first stage 2205, the event browser displays the contents of the keyword collection 2225. Specifically, the user selects the keyword collection 2225 from the event library to display a representation 2220 of a clip range. - In the
second stage 2210, the user selects the representation 2220. The selection causes the text field to display each keyword associated with the clip. Here, as the range of the video clip is associated with only one keyword, the text field 2230 displays only that keyword. The third stage 2215 illustrates removing the keyword from the text field 2230. Specifically, the user removes the keyword from the text field 2230. This causes the range of the video clip to be disassociated from the keyword. This is illustrated in this third stage 2215 as the representation of the range of the video clip is removed from the keyword collection 2225. Alternatively, the user can select a remove button or a shortcut key to remove all keywords that are associated with the range of the video clip. -
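The disassociation step of FIG. 22 might be sketched as below. The function name and data layout are hypothetical; dropping a collection when it becomes empty is an assumption that mirrors the dynamically generated collections described elsewhere in this document.

```python
# Illustrative sketch: clearing a keyword disassociates the clip range from it,
# which removes the range's representation from the keyword's collection.
def remove_keyword(keyword_collections, keyword, clip_range):
    ranges = keyword_collections.get(keyword, [])
    if clip_range in ranges:
        ranges.remove(clip_range)
    if not ranges:  # assumption: an empty collection is no longer listed
        keyword_collections.pop(keyword, None)

keyword_collections = {"interview": [("clip03", 0.0, 10.0)]}
remove_keyword(keyword_collections, "interview", ("clip03", 0.0, 10.0))
```

The alternative described above (a remove button clearing every keyword on the range) would simply call this once per associated keyword.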
FIG. 23 conceptually illustrates a state diagram 2300 of a media-editing application of some embodiments. One of ordinary skill will recognize that the state diagram 2300 does not describe all states of the media-editing application, but instead specifically pertains to several example operations that can be performed with the keyword tagging tool 1865 that is described above by reference to FIGS. 18-22. As shown, the keyword tagging tool (at state 2305) is in a deactivated state. At this state 2305, the media-editing application may be performing (at state 2310) other tasks including import- or editing-related tasks, organizing, playback operations, etc. In addition, at many of the other states, the application could be performing a wide variety of background tasks (e.g., transcoding, analysis, etc.). - At
state 2315, the keyword tagging tool 1865 is in an active state based on an input to activate the tool. For example, a user might have selected a toolbar item, selected a hotkey, etc. Similar to the state 2305, the application may be (at state 2310) performing other tasks. As mentioned above, the media editing application, in some embodiments, provides keyboard access when the keyword tagging tool 1865 is activated or displayed. In other words, the user can select different hotkeys to perform operations such as playing/pausing a media clip, selecting a range of a clip, inserting a clip into the timeline, etc. This allows the user to play and preview different pieces of content while keyword tagging, without having to activate and de-activate the keyword tagging tool. Also, when the keyword tagging tool is activated, the media editing application, in some embodiments, does not place any restriction on accessing other parts of the application's GUI, such as the event library and event browser. - In response to receiving a user's selection of an item associated with a set of one or more keywords, the keyword tagging tool 1865 (at state 2320) displays the set of keywords. In the example described above by reference to
FIG. 22, the keyword associated with a clip is displayed in an input text field. However, in some embodiments, the keyword may be displayed elsewhere in the application's GUI. - In some embodiments, a keyword that is associated with an entire range is displayed differently from a keyword that is applied to a portion of a range. For example, keywords or comments that apply only to a range within the clip may be colored differently (e.g., dimmed) unless the playhead is within the range.
- When the displayed keyword is removed from the
keyword tagging tool 1865, the application transitions to state 2325. At this state, the media editing application disassociates the keyword from the tagged item. Instead of removing the displayed keyword, some embodiments allow a user to add additional keywords to further mark the selected item. For example, in some embodiments, a new keyword may be inputted in the same field in which the associated keyword is displayed by separating the keywords (e.g., with a semi-colon). - When the media editing application receives input to show keyword shortcuts, the media editing application transitions to
state 2330. Here, if there are any keyword shortcuts, the keyword tagging tool 1865 displays one or more of them. Alternatively, the media editing application may display a group of slots for a user to input keyword shortcuts. An example of such keyword slots or fields is described above by reference to FIG. 20. - As mentioned above, some embodiments automatically populate the keyword shortcuts based on a user's previous interaction with the media editing application. When one or more shortcut keys are specified, each shortcut key, in some embodiments, can be used to associate a selected item with a corresponding keyword regardless of whether the
keyword tagging tool 1865 is deactivated or activated. For example, a user of the media editing application may play through a list of clips (e.g., a group of clips displayed in the event browser) and quickly tag one or more of the clips that are being played. Several examples of playing and navigating through clips in the event browser are described below by reference to FIG. 41. - When the media editing application receives a keyword input, the keyword tagging tool 1865 (at state 2335) displays the keyword input (e.g., in the tool's input field). In receiving the input, the
keyword tagging tool 1865 may provide suggestions for an auto-complete operation. To provide the suggestions, the media editing application, in some embodiments, maintains a database of previous user inputs or interactions. For example, as a user adds comments, the media editing application builds a dictionary of potential keywords (e.g., any term or phrase that the user has entered more than X number of times). When the user types a commonly-used phrase, the media editing application may highlight the phrase (e.g., in the keyword tagging tool 1865). In some embodiments, hovering over the highlighted text reveals a pop-up offering to create a new keyword with the string. In some embodiments, the media editing application comes with a built-in dictionary of common production and editing terms, which the user can choose from when tagging an item. In some embodiments, the media editing application may provide a command or option to add a last typed string as a keyword. - At
state 2340, the media editing application associates the input keyword with an item based on a user's input. For example, a user might have selected a video clip from the event browser and selected a key (e.g., an enter key). Alternatively, the user can tag one or more clips in an auto-apply mode. An example of automatically applying keywords to a range of a clip is described above by reference to FIG. 21. - Some embodiments of the invention automatically organize content into different keyword collections. In some such embodiments, the media editing application analyzes content, creates one or more keyword collections, and associates the content with the keyword collections.
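The tagging-tool states of the state diagram 2300, described above by reference to FIG. 23, can be encoded as a small transition table. The state numbers follow the figure; the event names and the table itself are illustrative assumptions, since the diagram itself is not reproduced here.

```python
# Hypothetical transition table for part of state diagram 2300.
TRANSITIONS = {
    (2305, "activate_tool"):  2315,  # deactivated -> tool active
    (2315, "select_tagged"):  2320,  # display keywords of a selected item
    (2320, "remove_keyword"): 2325,  # disassociate keyword from item
    (2315, "show_shortcuts"): 2330,  # reveal shortcut slots
    (2315, "type_keyword"):   2335,  # display keyword input
    (2335, "confirm"):        2340,  # associate keyword with item
}

def step(state, event):
    # Unknown events leave the state unchanged (e.g., playback hotkeys,
    # which the tool permits without deactivating).
    return TRANSITIONS.get((state, event), state)

state = 2305
for event in ["activate_tool", "type_keyword", "confirm"]:
    state = step(state, event)
```

Leaving the state unchanged on unrecognized events loosely models the keyboard access described above, where playback and navigation hotkeys work without leaving the tagging tool.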
FIG. 24 provides an illustrative example of creating a keyword collection by analyzing content. Four operational stages 2405-2420 of the GUI 100 are shown in FIG. 24. The event library 125 and the event browser 130 are the same as those described above by reference to FIG. 1. - The
first stage 2405 shows a user selection of an event collection 2425 from the event library 125. The selection causes the event browser 130 to display representations of the video clips. As shown, a bar 2445 is displayed over a portion of the representation 2440. This indicates to a user that a range of the representation's video clip is marked with a keyword. - The
second stage 2410 shows the GUI 100 after the user selects an area of the event library 125. The selection causes a context menu 2450 to appear. The context menu 2450 includes a selectable menu item 2455 for analyzing and fixing content. When the user selects the menu item 2455, the user is presented with a dialog box 2460 as illustrated in the third stage 2415. - As shown in the
third stage 2415, the dialog box 2460 lists several different analysis options. In the example illustrated in FIG. 24, the different analysis options are categorized into either video or audio. The list of video options includes options for (1) analyzing and fixing image stabilization problems, (2) analyzing for balance color, and (3) finding people. The list of audio options includes options for (1) analyzing and fixing audio problems, (2) separating mono and group stereo audio, and (3) removing silent channels. - The image stabilization operation of some embodiments identifies portions of the video in a media file in which the camera appears to be shaking, and tags the media file (or a portion of the media file with the shaky video) with a keyword. The color balancing of some embodiments automatically balances the color of each image in a media file and saves the color balancing information in a color balance file for each media file analyzed.
- The color balancing operation adjusts the colors of an image to give the image a more realistic appearance (e.g., reducing tint due to indoor lighting). Different embodiments may use different color balancing algorithms.
- The person detection algorithm identifies locations of people in the images of a media file and saves the person identification information in a person detection file for each media file analyzed. The person detection operation of some embodiments identifies faces using a face detection algorithm (e.g., an algorithm that searches for particular groups of pixels that are identified as faces, and extrapolates the rest of a person from the faces). Some embodiments provide the ability to differentiate between a single person (e.g., in an interview shot), pairs of people, groups of people, etc. Other embodiments use different person detection algorithms.
- In addition to the video operations, some embodiments include audio analysis operations at the point of import as well. As shown, these operations may include analysis for audio problems, separation of mono audio channels and identification of stereo pairs, and removal of latent audio channels (i.e., channels of audio that are encoded in the imported file or set of files but do not include any actual recorded audio). Other embodiments may make available at import additional or different audio or video analysis operations, as well as additional transcode options.
- Returning to
FIG. 24, in the third stage 2415, the user selects the find people option 2465 to perform a people analysis operation on the video clips. In some embodiments, the people analysis operation entails detecting the number of persons in a range of a clip and the type of shot. For example, the analysis operation may determine that there are a certain number of persons (e.g., one person, two persons, a group) in a range of a video clip. The analysis operation may determine whether the identified range of the video clip is a close-up, medium, or wide shot of the person or people. In some embodiments, the people detection operation entails identifying a face or faces and determining how much space each identified face takes up in frames of the video clip. For example, if a face takes up 80% of the frame, the shot may be classified as a close-up shot. In some embodiments, the people analysis operation entails identifying faces, shoulders, and torsos. - In the example illustrated in the
third stage 2415, the user selects the find people option 2465 and a button 2470. The selections initiate an automatic analysis of the video clips. In some embodiments, the analysis is done as a background task. This allows users to continue interacting with the application's GUI 100 to perform other tasks while the application performs the analysis. - The
fourth stage 2420 illustrates the GUI after the application has performed the analysis operations on the video clips. Specifically, this stage shows that the application analyzed each of the three video clips and found people in two of the three video clips. Similar to a piece of media content marked with a keyword, each video clip with people is marked with a bar (2475 or 2480) over a range of the video clip's representation. The range indicates the portion of the video clip with people, as determined by the application based on the people analysis operation. - In some embodiments, the media editing application displays two different representations for a user-specified keyword and an analysis keyword. For example, in the example illustrated in FIG. 24, the media editing application displays each analysis keyword representation (2475 or 2480) in a color that is different from a color of a
keyword representation 2445 for a user-specified keyword. Specifically, in the fourth stage 2420, the user-specified keyword is represented as a blue bar and the analysis keyword is represented as a purple bar. However, different types of keywords may be represented differently in some embodiments. - In addition to associating an analysis keyword to the ranges of the video clips, the
fourth stage 2420 shows the automatic organization of these ranges into a keyword collection 2485. In some embodiments, these keyword collections are dynamically generated. For example, when the media editing application does not find any people in a video clip, the event browser may not list a keyword collection for people. - In some embodiments, the media editing application performs additional groupings based on the analysis.
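The shot-type heuristic described above, which classifies a shot by how much of the frame a detected face occupies, might look as follows. Only the 80% close-up figure comes from the text; the other thresholds, the group cutoff, and all names are assumptions for illustration.

```python
# Illustrative sketch of shot classification from face-detection results.
def classify_shot(face_area, frame_area, face_count):
    if face_count == 0:
        return None            # no people found; no analysis keyword applied
    if face_count >= 3:
        return "group"         # assumed cutoff for a group shot
    fraction = face_area / frame_area
    if fraction >= 0.8:        # the 80% close-up threshold from the text
        return "close-up"
    if fraction >= 0.2:        # assumed medium-shot threshold
        return "medium shot"
    return "wide shot"

classify_shot(face_area=1_600_000, frame_area=2_000_000, face_count=1)
```

Ranges classified this way could then be tagged with an analysis keyword and sorted into the sub-collections (group, medium shot, one person, wide shot) described below.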
FIG. 25 illustrates an example of different groupings that were created based on an analysis of video clips. Three operational stages 2505-2515 of the GUI 100 are shown in FIG. 25. Specifically, the first stage 2505 shows a user selection of the keyword collection 2520. The selection causes the keyword collection 2520 to be expanded to reveal other sub-collections. - The
second stage 2510 shows different groupings that are created based on the analysis of the video clip. Here, the media editing application grouped the ranges into different sub-collections. For example, the event library 125 lists a sub-collection for group, medium shot, one person, and wide shot. However, depending on the analysis, the media editing application may group the ranges of clips into other sub-collections. In some embodiments, the media editing application provides options for defining different sub-collections. For example, instead of having separate sub-collections for one person and close-up shots, the media editing application may provide one or more selectable items for creating a sub-collection that contains the one person and close-up shots. - As shown in
FIG. 25, when the user selects the sub-collection 2525, the user is presented with the event browser as illustrated in the third stage 2515. As shown in this stage, the event browser is filtered to display only a representation 2530. This representation represents a range of the video clip that includes a one-person shot based on the analysis of the video clips. - In the example described above, the analyzed content is grouped into different sub-collections. Specifically, the ranges of clips are grouped into different smart collections. In some embodiments, smart collections are different from keyword collections in that a user cannot drag and drop items into them. Some embodiments allow the user to create and organize content into different smart collections based on filtering operations. Several examples of creating a smart collection will be described in detail by reference to
FIGS. 29 and 30 below. - In some embodiments, the analyzed content may be grouped into other collections. For example, instead of creating one keyword collection, the media editing application may create multiple different keyword collections and organize content into these keyword collections.
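The grouping of analyzed clip ranges into sub-collections, as in FIG. 25, amounts to bucketing ranges by their analysis result. The sketch below is illustrative only; the sample data and names are hypothetical.

```python
# Illustrative sketch: bucket analyzed ranges into sub-collections of a
# "People" keyword collection, keyed by shot type.
from collections import defaultdict

analyzed_ranges = [
    ("clip01", (0.0, 4.0), "one person"),
    ("clip01", (4.0, 9.0), "group"),
    ("clip02", (1.0, 6.0), "wide shot"),
    ("clip03", (0.0, 2.5), "one person"),
]

sub_collections = defaultdict(list)
for clip_id, clip_range, shot_type in analyzed_ranges:
    sub_collections[shot_type].append((clip_id, clip_range))

# Selecting a sub-collection filters the browser down to its ranges.
one_person = sub_collections["one person"]
```

Because the buckets are derived entirely from analysis results, they behave like the smart collections described above: a user cannot drop items into them directly, only re-run the analysis to add more.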
- In the previous example, a people analysis operation is performed to automatically organize content into a keyword collection and a number of different sub-collections. In some embodiments, the media editing application (1) analyzes media clips, (2) performs a correction operation on one or more ranges of the media clips, and (3) organizes the corrected ranges in a keyword collection.
-
FIG. 26 provides an illustrative example of different groupings created after the media editing application has analyzed and fixed image stabilization problems. Two operational stages 2605 and 2610 of the GUI 100 are shown in this figure. This example is similar to the previous examples described above. However, in this example, a user selects an option 2625 for analyzing and fixing image stabilization problems in the first stage 2605. The second stage 2610 shows different groupings that are created based on an analysis of image stabilization. Specifically, the media editing application grouped the clip ranges into different sub-collections. For example, the event library 125 lists a sub-collection 2630 for clip ranges that are corrected (e.g., stabilized). Another sub-collection 2635 is created for other clip ranges that are not corrected or do not need to be corrected. As mentioned above, a sub-collection, in some embodiments, is a filter or smart collection that a user cannot drag and drop items onto. - In some embodiments, a higher level collection or an analysis keyword folder that contains the smart collection is in itself a smart collection. For example, once the media editing application creates different smart collections based on the analysis, a user cannot drag and drop other items onto these smart collections. In some such embodiments, the user can perform an analysis operation to add additional items to these smart collections. For example, a media clip in an event browser and an analysis option may be selected to initiate an analysis operation on the selected media clip in order to add one or more ranges of the media clip to one or more analysis keyword collections.
-
FIG. 27 illustrates automatically importing media clips from different folders of the file system. Specifically, this figure illustrates how the media editing application (1) imports media content from different folders, (2) creates keywords based on the names of the folders, (3) associates keywords with the corresponding pieces of media content, and (4) creates keyword collections for the keywords. Three operational stages 2705-2715 of the GUI 100 are illustrated in FIG. 27. - In the
first stage 2705, the user selects an import control 2720. The selection causes an import options window 2725 to be displayed. As shown, the import options window includes a set of controls 2730 for specifying different import options. The set of controls 2730 includes an option for adding the imported content to an existing event collection or creating a new event collection. The set of controls 2730 includes options for analyzing audio or video to create keyword collections based on the analysis. The list of analysis options includes options for (1) analyzing and fixing image stabilization problems, (2) analyzing for balance color, and (3) finding people. The list of audio options includes options for (1) analyzing and fixing audio problems, (2) separating mono and group stereo audio, and (3) removing silent channels. These are similar to the ones mentioned above by reference to FIG. 24. However, the import options window 2725 allows a user to specify one or more of these analysis options during the import session. - As shown in the
second stage 2710, the import options window 2725 includes a control 2735 for specifying whether to import the clips in the different folders as keyword collections. In this second stage 2710, the user selects the option 2735, selects two folders having different media clips, and selects the import button 2740. - The
third stage 2715 shows the GUI 100 after the user selects the import button 2740 in the import options window 2725. Specifically, this stage illustrates that the media editing application associated each imported media clip with a corresponding keyword based on the name of the source folder of the media clip. For each folder, the media editing application also creates a keyword collection that contains the associated clips. As shown in the third stage 2715, the imported media clips are represented by representations 2750. Each of these representations 2750 includes a bar 2745 that indicates that the corresponding video clips are associated with a keyword. -
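The folder-to-keyword mapping described above can be sketched as follows. The folder layout, file names, and function name are hypothetical; the only behavior taken from the text is that each clip's keyword is the name of its source folder.

```python
# Illustrative sketch of the "import folders as keyword collections" option:
# tag each imported clip with the name of the folder it came from.
import os

def import_with_folder_keywords(file_paths):
    keyword_collections = {}
    for path in file_paths:
        keyword = os.path.basename(os.path.dirname(path))  # source folder name
        keyword_collections.setdefault(keyword, []).append(path)
    return keyword_collections

clips = [
    "/media/Beach/clip01.mov",
    "/media/Beach/clip02.mov",
    "/media/Hiking/clip03.mov",
]
import_with_folder_keywords(clips)
```

Each resulting key corresponds to one keyword collection in the event library, holding the clips imported from that folder.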
FIG. 28 conceptually illustrates a process for automatically organizing media clips into different keyword collections by analyzing the media clips. In some embodiments, the process 2800 is performed by a media editing application. The process 2800 starts when it receives (at 2805) an input to analyze one or more media clips. An example of receiving input during an import operation is described above by reference to FIG. 27. Several other examples of receiving input to analyze a group of media clips are described above by reference to FIGS. 24 and 26. - The
process 2800 then identifies (at 2810) a media clip to analyze. At 2815, the process 2800 analyzes the media clip. Several example video analysis operations include (1) analyzing for image stabilization problems, (2) analyzing for balance color, and (3) finding people. Several example audio analysis operations include (1) analyzing audio problems, (2) analyzing for mono and group stereo audio, and (3) analyzing for silent channels. In addition to the media clip analysis or instead of it, some embodiments perform other types of analysis to tag the media clip. This may entail analyzing the metadata of a clip and/or identifying a source directory or folder from which the clip originates. - The
process 2800 then associates (at 2820) the media clip with one or more keywords based on the analysis. At 2825, the process 2800 then creates a keyword collection for each keyword. Several examples of creating such keyword collections are described above by reference to FIGS. 24-26. In addition to keyword collections, some embodiments also create one or more smart collections or filter collections for each keyword collection. For example, based on a people analysis, a keyword collection may include other collections such as group, medium shot, one person, wide shot, etc. - The
process 2800 then determines (at 2830) whether there are any other media clips to analyze. When the determination is made that there is another media clip to analyze, the process 2800 returns to 2810. Otherwise, the process 2800 ends. - Some embodiments allow a user to create smart collections using keywords.
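The overall loop of the process 2800 (identify a clip, analyze it, associate keywords, create collections, repeat) can be sketched as below. The analyzer here is a deliberately trivial stand-in; the real analyses (stabilization, balance color, finding people) are far more involved, and all names are illustrative.

```python
# Illustrative sketch of process 2800's loop over clips.
def organize_by_analysis(clips, analyze):
    keyword_collections = {}
    for clip in clips:                 # 2810 / 2830: iterate until no clips remain
        for keyword in analyze(clip):  # 2815: analysis yields zero or more keywords
            # 2820 + 2825: associate the clip and create the collection on first use
            keyword_collections.setdefault(keyword, []).append(clip)
    return keyword_collections

# Hypothetical stand-in analyzer: flag clips whose metadata marks shaky footage.
def toy_analyze(clip):
    return ["shaky"] if clip.get("shaky") else []

clips = [{"name": "a", "shaky": True}, {"name": "b", "shaky": False}]
organize_by_analysis(clips, toy_analyze)
```

Because collections are created only for keywords the analysis actually produces, the result mirrors the dynamically generated collections described earlier: a clip set with no shaky footage yields no such collection at all.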
FIG. 29 provides an illustrative example of creating a smart collection. Five operational stages 2905-2925 are shown in this figure. The first stage 2905 shows the GUI 100 after the user selects an area of the event library 125. The selection causes a context menu 2930 to appear. The context menu 2930 includes a selectable menu item 2935 for creating a new smart collection. When the user selects the menu item 2935, the user is presented with a smart collection 2940 as illustrated in the second stage 2910. - As shown in the
second stage 2910, the smart collection 2940 is displayed in the event library 125. In this example, the smart collection 2940 is categorized under an event collection 2945 at the same hierarchical level as the keyword collections 2950. The smart collection 2940 includes graphical and textual elements. In the example illustrated in FIG. 29, the graphical element provides a visual indication (e.g., through a different color or symbol) that the collection 2940 is different from the event collection 2945 and the keyword collections 2950. - The
textual element 2955 of the smart collection represents the name of the smart collection 2940. In the second stage 2910, the application has specified a default name for the collection 2940. Also, the textual element is highlighted to indicate that a more meaningful name can be inputted for the collection 2940. - The
third stage 2915 shows the GUI 100 after the user inputs a name for the smart collection 2940. Specifically, after inputting the name, the user selects the collection 2940 to define one or more filter operations. When the user selects the collection 2940 (e.g., through a double-click operation), the GUI 100 displays a filter tool 2960 as illustrated in the fourth stage 2920. - The
filter tool 2960 includes a filter display area 2965 and a selectable item 2970. Here, the filter display area 2965 is empty, which indicates to the user that no filter is applied to the smart collection 2940. The event browser provides the user with the same indication, as each of the video clips from the event collection appears in the smart collection. - The
fifth stage 2925 shows the selection of the selectable item 2970. The selection causes a list 2975 of different filters to be displayed. In this example, the list 2975 includes (1) a text filter for filtering a smart collection based on text associated with the content, (2) a ratings filter for filtering based on ratings (e.g., favorite, reject), (3) an excessive shakes filter for filtering based on shakes (e.g., from camera movements), (4) a people filter for filtering based on people (e.g., one person, two persons, group, close-up shot, medium shot, wide shot, etc.), (5) a media type filter for filtering based on media type (e.g., video with audio, audio only, etc.), (6) a format filter for filtering based on the format of the content, and (7) a keyword filter for filtering based on keywords. -
FIG. 30 provides an illustrative example of filtering the smart collection 2940 based on a keyword. Four operational stages 3005-3020 are shown in this figure. The first stage 3005 shows the GUI 100 prior to applying a keyword filter. The filter display area 2965 is empty, which indicates to the user that no filter is applied to the smart collection 2940. The event browser provides the user with the same indication, as each of the video clips from the event collection appears in the smart collection. In the first stage 3005, the filter list is activated to display the list 2975 of different filters. - When the user selects the keyword filter from the
list 2975, the keyword filter is added to the filter display area 2965 as illustrated in the second stage 3010. In some embodiments, when a keyword filtering option is activated, the media editing application provides a list of existing keywords from which a user can choose for the keyword filter. This is illustrated in the second stage 3010, as the filter display area 2965 lists several existing keywords. The keywords in the filter display area 2965 correspond to keyword collections 3025 in the event library 125. - The
third stage 3015 shows the contents of the smart collection 2940 after applying the keyword filter operation. In particular, the event browser is filtered such that only ranges of media that are marked with the keywords are shown. For instance, the keyword filter operation removes representations - In the example illustrated in
FIG. 30, the user can filter the smart collection 2940 further to include only ranges of media that include all keywords. For example, by selecting a control 3040, the smart collection 2940 can be filtered to display ranges of media that are associated with both keywords. In some embodiments, when a media clip is marked with different keywords in different ranges, a smart collection includes only the one or more ranges that overlap. Alternatively, the smart collection 2940 may include all the different ranges of the media clip, in some embodiments. - As shown in
FIG. 30, each keyword in the filter display area 2965 includes a selectable item 3045 for including or excluding the corresponding keyword from the filtering operation. The fourth stage 3020 shows a selection of a selectable item 3045. The selection causes the smart collection 2940 to be filtered to exclude each media range associated with the keyword of the selectable item 3045. This is illustrated in the fourth stage 3020, as the event browser displays the representation 3055 that is marked with a keyword corresponding to a selectable item 3050 in the filter display area 2965. - In the example described above, only the keywords that are in one event collection are displayed in the
filter display area 2965. This is because the smart collection 2940 is created at the same hierarchical level as a keyword collection. In some embodiments, when a smart collection is created at a higher level in a hierarchy (e.g., at a disk level above multiple different event collections), all the keywords at the same level or below may be displayed in the filter tool as selectable filtering options. - In some embodiments, the media editing application allows a user to perform filtering operations without having to create a smart collection.
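Under the "all keywords" behavior described above for FIG. 30, keyword filtering amounts to interval intersection. The following is a sketch under the assumption that each clip stores keyword ranges as (start, end) tuples; this data model is hypothetical, not specified by the patent.

```python
def intersect(a, b):
    # Intersection of two (start, end) ranges, or None if they do not overlap.
    start, end = max(a[0], b[0]), min(a[1], b[1])
    return (start, end) if start < end else None

def ranges_with_all_keywords(clip_ranges, keywords):
    """Sub-ranges of a clip that carry *every* listed keyword.
    clip_ranges maps keyword -> list of (start, end) tuples in seconds."""
    result = clip_ranges.get(keywords[0], [])
    for kw in keywords[1:]:
        overlaps = []
        for r in result:
            for s in clip_ranges.get(kw, []):
                o = intersect(r, s)
                if o:
                    overlaps.append(o)
        result = overlaps
    return result

ranges = {"beach": [(0, 10)], "sunset": [(5, 15)]}
print(ranges_with_all_keywords(ranges, ["beach", "sunset"]))  # [(5, 10)]
```

A clip tagged with different keywords in non-overlapping ranges yields an empty result here, matching the embodiment that keeps only the overlapping ranges; the alternative embodiment that keeps all ranges would instead union the lists.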
FIG. 31 illustrates filtering the event browser based on keywords. Specifically, this figure illustrates searching for different ranges of clips associated with one or more keywords. - Three operational stages 3105-3115 of the
GUI 100 are shown in this figure. In the first stage 3105, two event collections 3125 and 3130 are listed in the event library 125. Each event collection includes two keyword collections. Here, the user selects a filter tool 3120 to search for clips at a level above the event collection level (e.g., disk level). Specifically, in this example, the user selects the tool 3120 without selecting any collection. Alternatively, the user might have selected a collection that is at a higher level than the event collection prior to selecting the filter tool 3120. - As illustrated in the
second stage 3110, the selection of the filter tool 3120 causes the filter display area 2965 to be displayed. The filter display area 2965 displays several selectable items for different keywords. These keywords correspond to the keyword collections of the event collections. Selecting any of these items causes the event browser 130 to display each clip range associated with the corresponding keywords. The third stage 3115 shows the GUI 100 after selecting the selectable item 3135. As shown, the selection causes the event browser to be filtered to exclude each clip range associated with the keyword that corresponds to the selectable item 3135. - An event collection may contain media clips or ranges of clips that a user likes or dislikes. For example, there might be several frames where the image is blurry or chaotic, or frames where the imagery is not particularly captivating. In some embodiments, the media editing application provides a marking tool to rate clips or ranges of clips.
-
FIG. 32 illustrates an example of rating a media clip. Three operational stages 3205-3215 of the GUI 100 are shown in this figure. Specifically, in the first stage 3205, the user selects a representation 3220 of a clip. In the second stage 3210, the user selects a UI item 3225 to mark the clip associated with the representation 3220 as a favorite. Alternatively, the user can hit a shortcut key to mark the clip. The user can also select another shortcut key or user interface item 3235 to mark the clip as a reject. Further, when a clip is marked with a rating, the user can select yet another shortcut key or user interface item 3230 to remove the rating. - When a clip range is marked with a rating, some embodiments display an indication of the rating. This is illustrated in the
third stage 3215, as a line or bar 3245 is displayed across the representation 3220. Here, the color of the indication corresponds to a color of the user interface item 3225. -
FIG. 33 illustrates an example of filtering an event collection 3322 based on ratings or keywords. Such a filtering operation allows a user to quickly identify clips that are tagged, marked, rejected, not rated, or not tagged. Four operational stages 3305-3320 of the GUI 100 are shown in this figure. Specifically, in the first stage 3305, the user selects a UI item 3325. The selection causes a drop-down list 3330 to appear, as illustrated in the second stage 3310. The drop-down list 3330 displays several selectable options related to filtering the event collection 3322 through ratings or keywords. For example, the drop-down list 3330 displays a selectable option for hiding rejected clips, and a selectable option 3335 for only displaying clips that have no ratings or keywords. The user can select any of these selectable options. - When the user selects the
selectable option 3335 in the drop-down list 3330, the user is presented with an event browser 130 as shown in the fourth stage 3320. Specifically, the selection of the selectable option 3335 causes the event browser 130 to display clips that do not have any associated ratings or keywords. - In some embodiments, the media editing application provides a novel list view that displays different ranges of media associated with keywords. The list view in some embodiments allows users to select different ranges of a media clip and/or navigate to different sections of the media clip. In some embodiments, the list view is another view of the clip browser or event browser. Accordingly, all of the operations described above in relation to the thumbnails view (e.g., clips view, filmstrip view) can be performed in this list view. These operations include creating different keyword collections, associating a clip or a portion of the clip with a keyword, creating compound clips, disassociating a keyword, performing different operations with the keyword tagging tool, etc.
FIG. 34 illustrates the GUI 100 of the media editing application with such a list view. This figure illustrates the GUI 100 at two different stages. As shown in FIG. 34, the GUI 100 includes the event library 125 and the event browser 130. The event library 125 and event browser 130 are the same as those described above by reference to FIG. 1. - As shown in
FIG. 34, the event browser 130 displays different media content items in the list view 3415. The list view includes a list section 3420 and a preview section 3425. Different from a filmstrip view that displays filmstrip representations of different clips, the list view displays each clip's name and media type along with other information. - In the example illustrated in
FIG. 34, the list section 3420 displays the name of the clip, the start and end times, clip duration, and creation date. The information is displayed in different columns with corresponding column headings (e.g., name, start, end, duration, date created). The user can sort the clips in the list by selecting any one of the different column headings. Each column can also be resized (e.g., by moving the column dividers between the columns). - In some embodiments, the columns may be rearranged by selecting a column heading and moving it to a new position. In some embodiments, the
list view 3415 allows a user to choose what type of information is displayed in the list view. For example, when a column heading is selected (e.g., through a control-click operation), the list view 3415 may display a list of different types of information that the user can choose from. - The
preview section 3425 in some embodiments displays a filmstrip representation of a media clip selected from the list section 3420. Similar to the examples described above, the filmstrip representation is an interactive UI item. For example, the user can select an interior location within the representation to display a preview of the representation's associated clip in a preview display area. In the example illustrated in FIG. 34, when a user selects an interior location of the filmstrip representation 3435, a playhead 3430 moves along a virtual timeline of the filmstrip representation. The user can use the playhead 3430 as a reference point to display different images and play different audio samples associated with the video clip. - Having described the elements of the
list view 3415, the operations will now be described by reference to the state of the GUI 100 during the two stages. The first stage 3405 shows the event browser in the list view. The user might have changed the view of the event browser 130 by selecting a menu item or a toolbar button. - In the
first stage 3405, the selection of a media clip 3440 in the list section 3420 causes the preview section 3425 to display a filmstrip representation 3435. Similar to the examples described above, the representation 3435 includes several bars that indicate that the representation's associated video clip is marked. Specifically, a bar 3445 having a first visual representation (e.g., red bar) indicates that a first range of the video clip is marked with a reject rating, a bar 3455 having a different second visual representation (e.g., blue bar) indicates that a second range is marked with a keyword, and a bar 3450 having a third visual representation different from the first and second visual representations (e.g., green bar) indicates that a third range is marked with a favorite rating. - In the example illustrated in
FIG. 34, the keyword bar 3455 (e.g., blue bar) is displayed below the ratings bar 3450. However, the media editing application may display the ranges differently in other embodiments. For example, instead of different bars, the media editing application may display other indications or other colors to distinguish different ranges associated with keywords and/or ratings markers. - The
second stage 3410 shows the selection of a column heading of the list section 3420 (e.g., through a control-click operation). The selection causes the GUI 100 to display a list 3460 that allows a user to choose the type of information that is displayed in the list section 3420. In the example illustrated in FIG. 34, the types of information or metadata include start time, end time, duration, content creation date, notes, reel, scene, shot/take, audio role, media start, media end, frame size, video frame rate, audio channel count, audio sample rate, file type, date imported, and codec. However, depending on the type of content (e.g., image, document), the list 3460 may include other types of information. -
FIG. 35 illustrates expanding a media clip in a list view. Two operational stages of the GUI 100 are shown in this figure. In the first stage 3505, a media clip 3515 is selected from the list section 3420 to display the filmstrip representation 3435 in the preview section 3425. The user then selects the UI item 3520 adjacent to the media clip information 3515 in the list. As shown in the second stage 3510, the selection causes the list view to display additional information related to the media clip in an expanded list 3525. The user can re-select the UI item 3520 to hide the expanded list 3525. In some embodiments, the media editing application allows a user to quickly expand or collapse a selected clip by selecting a hotkey. For instance, in the example illustrated in FIG. 35, the user can expand the selected media clip by selecting a key (e.g., the right arrow key) and collapse the clip by selecting another key (e.g., the left arrow key). - As shown in
FIG. 35, the expanded list 3525 displays information related to marked ranges of the media clip. Specifically, for each range associated with a keyword, the expanded list includes the name of the keyword, the start and end times, and the range duration. The expanded list 3525 displays the same information for each ratings marker. In some embodiments, the media editing application may display other information (e.g., creation date, notes on different ranges, etc.). In some embodiments, the media clip information in the list may only be expanded when the corresponding media clip is marked with a keyword or rating. For example, when a media clip is not marked, the media clip information 3515 may not have a corresponding UI item to display an expanded list. - As shown in
FIG. 35, the different sections of the list view allow a user to quickly assess a group of media clips and see which ranges are marked with one or more markings (e.g., keywords, markers). In the example illustrated in FIG. 35, the preview section 3425 is displayed above the list section 3420. This example layout of the different sections allows the user to view a detailed representation of a media clip (e.g., one that includes different visual indications representing different marked ranges), and simultaneously view detailed information regarding the media clip (e.g., media clip metadata) and its marked ranges (e.g., marking or range metadata). - As mentioned above, the list view, in some embodiments, can be used to associate one or more portions of one or more media clips with different markings. In some embodiments, when users are marking ranges using a representation (e.g., filmstrip representation) in the
preview section 3425, the list section 3420 is dynamically updated with the marked ranges. For example, when a user drags a selected range of the media clip to a keyword collection, a keyword entry is dynamically added to the list section 3420. In some embodiments, when the association is created, the entry for the marking is also selected in the list section 3420. - In the previous example, one media clip is expanded in the list view to display a list of associated keywords and ratings.
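The rows of such an expanded list (name, start, end, duration for each keyword or ratings marker) can be derived from a simple range record. The following sketch uses hypothetical field names; the patent does not specify a data model.

```python
from dataclasses import dataclass

@dataclass
class MarkedRange:
    # One keyword or rating range on a clip; field names are illustrative.
    kind: str     # "keyword" or "rating"
    name: str     # e.g. "beach" or "Favorite"
    start: float  # seconds
    end: float    # seconds

    @property
    def duration(self):
        return self.end - self.start

def expanded_rows(ranges):
    """Rows shown under a clip when it is expanded: name, start, end, duration."""
    return [(r.name, r.start, r.end, r.duration)
            for r in sorted(ranges, key=lambda r: r.start)]

rows = expanded_rows([MarkedRange("keyword", "beach", 2.0, 6.0),
                      MarkedRange("rating", "Favorite", 0.0, 1.5)])
print(rows)  # [('Favorite', 0.0, 1.5, 1.5), ('beach', 2.0, 6.0, 4.0)]
```

Sorting by start time mirrors the order in which the marked ranges appear along the clip's filmstrip representation.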
FIG. 36 illustrates an example of simultaneously expanding multiple different clips in the list view 3415. Two operational stages are shown in this figure. In the first stage 3605, the user selects all the media clips in the list view 3415. The user might have selected these items in a number of different ways (e.g., by selecting a first item in the list and selecting a last item while holding a modifier, by using a select-all shortcut, by selecting an area with these items, etc.). - In the
second stage 3610, the user selects a hotkey (e.g., the right arrow key) to expand each media clip that can be expanded. As shown, the selection causes (1) a media clip 3615 to expand and reveal a ratings marker and (2) a media clip 3620 to expand and reveal two ratings markers and two keywords. Alternatively, or in conjunction with the hotkey, some embodiments provide one or more selectable user interface items for expanding multiple media clips. - In some embodiments, the
list view 3415 allows a user to input notes for (1) media clips and (2) ranges of media clips. For example, a user can add a note to an entire clip or only to a portion of the clip associated with a keyword. FIG. 37 illustrates the list view 3415 with several fields for adding notes. In this figure, the list section 3420 of the list view 3415 displays information about several clips. Specifically, the list section 3420 displays additional information regarding a keyword 3715 and two markers (3720 and 3730) related to a media clip 3710 in an expanded list. - In this example, the
list section 3420 includes a “Notes” column 3725. As shown, the user can add notes to the entire clip 3710 using the notes field 3735. The user can also add notes to the different ranges using notes fields 3740-3750. -
FIG. 38 illustrates selecting different ranges of a media clip using the list view 3415. Specifically, this figure illustrates how the list view 3415 can be used to granularly select different ranges of the clip that are marked with a rating or associated with a keyword. In some embodiments, this allows a user to easily select a marked or tagged range, and modify the selected range. For example, the user can trim or expand the range associated with a particular keyword. When one or more ranges are selected, the user can associate the range with a keyword, add the range to a timeline, etc. Three operational stages 3805-3815 of the GUI 100 are shown in this figure. As shown in FIG. 38, the GUI 100 includes the preview display area 3855 and the event browser 130. The preview display area 3855 is described above by reference to FIG. 3. - The
first stage 3805 shows the event browser 130 displaying the list view 3415. The video clip information 3820 in the list section 3420 is selected and expanded. The selection of the video clip information 3820 in the list section causes a preview of the video clip to be displayed in the preview display area 3855. The selection also causes a filmstrip representation 3835 of the video clip to be displayed in the preview section 3425. - The
second stage 3810 shows a selection of a keyword 3825 from the expanded list 3830. The selection causes the range of the video clip associated with the keyword to be highlighted. Here, the filmstrip representation 3835 is highlighted with a range selector 3840. The user can specify a different range by selecting and moving either edge of the range selector 3840. - As shown in the
second stage 3810, the selection of the keyword 3825 causes the preview display area 3855 to display a preview of the range. Specifically, the preview display area 3855 displays an image associated with the starting point of the keyword range. The user can play a preview of the video clip starting from this position. - The
third stage 3815 shows the selection of a ratings marker 3845 from the expanded list 3830. The selection causes the range of the video clip associated with the marker to be highlighted. Similar to the second stage 3810, the media clip is highlighted with the range selector 3840. Also, the preview display area 3855 displays an image associated with the starting point of the range associated with the ratings marker 3845. - In some embodiments, the media editing application allows a user to navigate during playback. For example, in the list view illustrated in
FIG. 38, the user can start playback (e.g., by selecting the space key) and play different clips in the list. In some embodiments, the playback is uninterrupted in that multiple clips are played one after another in the preview display area. For example, the user can start playback for a clip and select another clip or a hotkey to jump to a next clip. In this case, the preview display area of the media editing application will continue playback starting from the next clip without interruption. - In some embodiments, when media clip information is expanded to reveal range items (e.g., marker, keyword), the user can navigate between the range items. For example, a user might start playback of a clip that corresponds to the
clip information 3820. The playback would move past the different ranges. During playback, the user can select any one of the ranges to continue the playback starting from the selected range. Several examples of these playback operations are described by reference to FIG. 41 below. -
FIG. 39 illustrates selecting multiple ranges of a media clip using the list view 3415. Two operational stages 3905-3910 of the GUI 100 are shown in this figure. Specifically, the first stage 3905 shows the selection of a ratings marker 3920 from the list section 3420. The selection causes the range of the video clip associated with the marker to be selected. The preview section 3425 provides an indication of the selection, as the range corresponding to the ratings marker 3920 is highlighted in a filmstrip representation 3925 of the video clip. - The
second stage 3910 shows the selection of the ratings marker 3920 and a keyword 3930 from the list section 3420. The selection causes the ranges of the video clip associated with the ratings marker 3920 and the keyword 3930 to be selected. In the example illustrated in FIG. 39, the range between the end point of the marker 3920 and the start point of the keyword 3930 is also selected. The preview section 3425 provides an indication of the selection of this composite range. Specifically, a composite range starting from the marker's range and ending at the keyword's range is highlighted in the preview section 3425. When a composite range is selected, some embodiments allow the user to add the selected composite range to a timeline to create a composite presentation. For example, when a clip range spanning two keywords is selected, the user can add the range to the timeline by selecting a hotkey or by dragging the selected range from the preview section 3425 to the timeline. Several examples of adding clips to the timeline are described below by reference to FIG. 45. - In the examples described above, several different markings (e.g., marker, keyword) are selected from the list view to select corresponding ranges in the preview section. In some embodiments, when a user selects a portion of the representation (e.g., filmstrip representation) that matches marked ranges in the list view, the corresponding ranges or items are selected in the list view.
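The composite range of FIG. 39 amounts to spanning from the earliest start to the latest end of the selected ranges, including any gap between them. A sketch with hypothetical (start, end) tuples:

```python
def composite_range(selected):
    """Span from the earliest start to the latest end of the selected
    (start, end) ranges, including any gap between them (as in FIG. 39)."""
    return (min(s for s, _ in selected), max(e for _, e in selected))

# A ratings-marker range and a keyword range with a gap between them:
print(composite_range([(1.0, 3.0), (7.0, 9.0)]))  # (1.0, 9.0)
```

The resulting single contiguous span is what would be added to the timeline when the user drags the composite selection there.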
-
FIG. 40 conceptually illustrates a process 4000 for displaying and selecting items (e.g., different ranges of media) in a list view. As mentioned above, the list view in some embodiments allows users to select different ranges of a media clip and/or navigate to different sections of the media clip. The process 4000 is performed by a media editing application in some embodiments. As shown in this figure, the process 4000 begins by identifying (at 4005) media clips to display in the list. Next, the process 4000 displays (at 4010) the identified media clips in a list view (e.g., in the event browser as described above). - The
process 4000 then determines (at 4015) whether a selection of a media clip in the list has been received. When the determination is made that a selection of a media clip has been received, the process 4000 identifies (at 4030) items (e.g., keywords, markers) associated with the selected media clip. The process 4000 then displays (at 4035) a clip representation based on the identified items, with the media clip range selected. The process then provides (at 4040) a preview of the media clip. Next, the process 4000 moves on to 4070. - At 4015, when the determination is made that the received input is not a selection of a media clip in the list, the
process 4000 proceeds to 4020. The process 4000 determines (at 4020) whether it has received a selection to expand a media clip in the list. If it is determined that the process has received a selection to expand a media clip, the process identifies (at 4050) each keyword associated with the media clip. The process then displays (at 4055) each identified keyword in the list. Afterwards, the process goes on to 4058. If the process 4000 determines (at 4020) that it did not receive a selection to expand any media clip in the list, it moves on to 4070. - At 4058, the
process 4000 determines whether it has received a selection of a keyword in the list. When the determination is made that the process has received such a selection, the process displays (at 4060) a corresponding clip representation with the keyword range selected. The process then provides (at 4065) a preview of the media clip starting from the selected keyword range. - Next, the
process 4000 determines (at 4070) whether there is additional user input for the list view. If it is determined that there is additional user input for the list view, it returns to 4015. Otherwise, the process 4000 terminates. -
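The branching of process 4000 can be summarized as an input-dispatch loop. This is a schematic sketch only: the event shapes and state fields are hypothetical, and display and preview side effects are reduced to selection-state updates.

```python
def handle_list_view_input(event, state):
    """Dispatch one list-view input (operations 4015-4065 of process 4000).
    Event shapes and state fields are illustrative, not from the patent."""
    if event["type"] == "select_clip":                   # 4015
        state["selection"] = ("clip", event["name"])     # 4030-4040
    elif event["type"] == "expand_clip":                 # 4020
        state["expanded"].add(event["name"])             # 4050-4055
    elif event["type"] == "select_keyword":              # 4058
        state["selection"] = ("keyword", event["name"])  # 4060-4065
    return state

state = {"selection": None, "expanded": set()}
for event in [{"type": "select_clip", "name": "clip1"},
              {"type": "expand_clip", "name": "clip1"},
              {"type": "select_keyword", "name": "beach"}]:  # 4070: loop
    state = handle_list_view_input(event, state)
print(state)  # {'selection': ('keyword', 'beach'), 'expanded': {'clip1'}}
```

The loop over events corresponds to operation 4070, which returns to 4015 while additional user input remains.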
FIG. 41 conceptually illustrates a process 4100 for playing items (e.g., clips, keyword ranges) in a list view. In some embodiments, the process 4100 is performed by a media editing application. The process 4100 starts when it receives (at 4105) a selection of a list view item. Examples of such list view items include media clips, keywords, smart collections, markers, etc. - The
process 4100 then receives (at 4110) a playback input. For example, a user of the media editing application might select a play button or a hotkey (e.g., the space key). When the process 4100 receives the playback input, the process 4100 starts (at 4115) the playback of the items in the list view, starting from a range of the selected item. For example, when the selected item is a marker, the playback may start at a time associated with the marker. - The
process 4100 then determines (at 4120) whether an item in the list view has been selected. As mentioned above, examples of such list view items include media clips, keywords, smart collections, markers, etc. In some embodiments, the process 4100 continuously monitors user input during playback to make this determination. - When the determination is made that an item has been selected, the
process 4100 jumps (at 4140) to a starting point of a range of the selected item and continues playback from that starting point. For example, during playback, a user might select a keyword. In this case, the playback continues starting from a starting point of a range of a clip associated with the keyword. When a clip is selected, the playback continues from a starting point of the clip. - The user can alternatively select another item in the list view by selecting a hotkey (e.g., directional keys) for a next or previous item in the list. In some embodiments, when an item in the list is not expanded, the selection of a next or previous item skips any inner range items and moves to the next or previous item. For example, when a clip tagged with keywords is not expanded in the list view to reveal the associated keywords, the user selection of the next item causes the playback to continue from the next clip. However, when the clip is expanded in the list view to reveal the associated keywords, the user selection of the next item causes the playback to continue starting from a range of a next keyword.
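The jump behavior described above can be sketched as follows. This simplification ignores stop inputs and wrap-around, and the step-indexed selection map is a hypothetical stand-in for real-time user input during playback.

```python
def playback_sequence(items, start, selections):
    """Order in which list-view items play: playback proceeds item by item
    from the selected start, and a user selection mid-playback jumps to the
    selected item (operation 4140). `selections` maps step -> item index."""
    order, i, step = [], start, 0
    while i < len(items):
        order.append(items[i])
        step += 1
        i = selections.get(step, i + 1)  # jump on selection, else next item
    return order

items = ["clip1", "clip2", "clip3", "clip4"]
# After the first item plays, the user selects the fourth item:
print(playback_sequence(items, 0, {1: 3}))  # ['clip1', 'clip4']
```

With no selections, the sketch plays every item in order, matching the uninterrupted playback behavior described for the list view.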
- The
process 4100 then determines (at 4125) whether an input to stop playback has been received. In some embodiments, the process 4100 continuously monitors user input during playback to make this determination. - When the determination is made that an input to stop playback has been received, the
process 4100 ends. Otherwise, the process 4100 determines (at 4130) whether there are any other ranges to play back. That is, the process 4100 may have reached the end of the list. In this example, when there are no more clips or ranges to play, the process 4100 ends. Otherwise, the process 4100 continues (at 4135) playback starting from a range of a next item. In some embodiments, when the process 4100 finishes playing the last item in the list view, the playback continues from the first item in the list. - Some embodiments of the media editing application provide markers for marking different media clips. In some embodiments, the markers are reference points that a user can place within media clips to identify specific frames or samples. The user can use these markers to flag different locations on a clip with editing notes or other descriptive information.
- In some embodiments, a user can use the markers for task management. For example, the markers may have "to do" notes associated with them. These notes can be notes that an editor makes as reminders to himself or others regarding tasks that have to be performed. Accordingly, some embodiments display (1) the notes associated with the marker and (2) a check box to indicate whether the task associated with the marker has been completed.
- In some embodiments, markers are classified by appearance. For example, an informational marker may appear in one color while a to-do marker may appear in another color. In several of the examples described below, markers are added to a clip in a list view of the event browser. However, the markers may be added in a different view or in a timeline. For example, the markers may be added in a filmstrip view that displays filmstrip representations of different clips.
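The marker records described above can be sketched as follows. This is a hypothetical, minimal model; the field names and the exact colors shown are illustrative assumptions, not the application's actual schema.

```python
# Hypothetical sketch of a to-do marker: a reference point within a clip
# that carries a note and a completion check box. The color scheme follows
# the examples in the text (informational / to-do / completed).
from dataclasses import dataclass

@dataclass
class Marker:
    name: str
    position: float      # seconds into the clip (illustrative)
    todo: bool = False
    completed: bool = False

    def display_color(self):
        # Informational, to-do, and completed to-do markers are
        # distinguished by appearance (e.g., blue / red / green).
        if not self.todo:
            return "blue"
        return "green" if self.completed else "red"

m = Marker("Scene 1 Start", 1.0, todo=True)
assert m.display_color() == "red"       # an incomplete task
m.completed = True
assert m.display_color() == "green"     # the task is checked off
assert Marker("note", 0.5).display_color() == "blue"
```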
-
FIG. 42 illustrates adding a marker to a clip using the list view 3415. Three operational stages 4205-4215 of the GUI 100 are shown in this figure. As shown in the first stage 4205, the preview section 3425 of the list view 3415 displays a filmstrip representation 4240 of a video clip. To display the representation 4240, a user has selected a video clip information item 4220 from the list section 3420 of the list view. Also, the user has selected a UI item 4225 in the list section 3420 of the list view 3415 to display information regarding a keyword associated with the video clip. Specifically, the keyword information 4230 indicates a range of the video clip associated with the keyword. The association of the keyword to the range of the video clip is represented in the preview section 3425 with a bar 4235 that spans horizontally across the filmstrip representation 4240. - In the
second stage 4210, the user selects an upper edge of the filmstrip representation 4240. When the user selects the upper edge, a line 4245 moves along a virtual timeline to the selected location. The user can drag the line along the virtual timeline and use it as a reference point to specify a location for the marker. - The
third stage 4215 illustrates associating a marker with a video clip. Here, to associate the marker with the video clip, the user selects a menu item for adding a marker or selects a hotkey. The marker is associated with the video clip at a specific point in the duration of the video clip. This is indicated by the list section that lists information 4255 related to the marker. In the example illustrated in FIG. 42, the marker information 4255 indicates that the name of the marker is “Marker 1”. The marker information 4255 also indicates that a range (i.e., one second) of the video clip is associated with the marker. A marker representation is also added to the filmstrip representation 4240 in the preview section 3425. Specifically, a marker 4250 is added to a position corresponding to the selected location described in the list view. - In some embodiments, once a marker is added, the user can reposition or delete the marker. For example, a user can reposition the
marker 4250 in the preview section 3425 by dragging the marker to a new location. Alternatively, the user can delete the marker by selecting and removing the marker (e.g., by pressing a delete key). When there are multiple markers, the media editing application may allow the user to navigate between the markers. For example, the media editing application may provide a hotkey or a selectable UI item for navigating to the next/previous marker. - In the example described above, the
marker 4250 is added with the user specifying a location along the duration using the filmstrip representation 4240 in a list view. In some embodiments, these markers can also be added, deleted, or modified in a different view (e.g., thumbnail view, filmstrip view). These markers can also be added, deleted, or modified in the timeline. Several examples of modifying markers in the timeline are described below by reference to FIG. 51. - In some embodiments, the user can add a marker during playback of the video clip associated with the filmstrip. For example, the user can select the filmstrip representation and play the video clip (e.g., by selecting a play button or a hotkey). As the preview of the video clip plays (e.g., in a preview display area), the
line 4245 moves horizontally across the virtual timeline of the filmstrip representation 4240. The user can identify a location within the clip and pause the playback (e.g., by selecting a pause button or a pause hotkey). The user can then mark the location. Instead of pausing the video clip, the user may simply mark a location as the video clip plays (e.g., by selecting a menu item for marking a clip or by selecting a hotkey). -
FIG. 43 provides an illustrative example of editing a marker. Four operational stages 4305-4320 of the GUI 100 are shown in this figure. In the first stage 4305, a user selects a marker 4325. The selection causes a marker editor 4330 to appear as illustrated in the second stage 4310. The marker editor includes a text field 4335 for specifying a name or description of the marker, a control 4340 for deleting the marker, a control 4345 for defining the marker as a to-do item, and a control 4350 for applying changes to the marker or closing the marker editor 4330. - In the
second stage 4310, the user types in the text field 4335 to provide a descriptive name or note for the marker 4325. The third stage 4315 illustrates the marker editor after the user inputs a different name for the marker. Lastly, the fourth stage 4320 illustrates the event browser 130 after the user selects the control 4350. Here, the marker information 4355 in the list section 3420 indicates that the name of the marker has been changed from “Marker 1” to “Scene 1 Start”. -
FIG. 44 provides an illustrative example of defining a marker as a to-do item. Two operational stages 4405 and 4410 of the GUI 100 are shown in this figure. The first stage 4405 illustrates a selection of the control 4345 for defining the marker as a to-do item. The second stage 4410 illustrates the GUI 100 after the user selects the control 4345. As shown in the second stage 4410, the selection causes the marker to change its appearance. In the example illustrated in FIG. 44, the marker changes color (e.g., from blue to red). Also, in the marker editor 4330, the control 4345 is replaced with a control 4415 or check box for indicating whether the to-do item is a completed item. In some embodiments, a selection of this control causes the marker to appear differently. For example, the marker may change from a red color to a green color to indicate that the task is completed. -
FIG. 45 provides an illustrative example of adding a video clip to a timeline. In this example, the GUI 100 includes the preview display area 325, the event library 125, the event browser 130, and the timeline 4525. Two operational stages of the GUI 100 are illustrated in this figure. The preview display area 325, the event library 125, and the event browser 130 are the same as those described above (e.g., FIGS. 1, 3, and 34). - The
first stage 4505 illustrates a selection of a video clip to add to the timeline 4525. Here, the user selects the video clip from the list view by selecting the video clip information 4530. To add the video clip, the user can drag the video clip information 4530 in the list section or the representation 4535 in the preview section 3425 to the timeline. The user can also select a hotkey to add the video clip to the timeline. As mentioned above, a range of the clip may be added to the timeline. For example, a range of a clip may be added by selecting a filmstrip representation in a keyword collection that represents a range of a video clip associated with a keyword. A range of a clip can also be selected from any one or more of the keywords or other items (e.g., ratings markers) displayed in the list view. Alternatively, the user can use a range selector to define a range of a clip to add to the timeline. - Some embodiments provide a novel timeline search tool for searching and navigating a timeline. In some embodiments, the search tool includes a search field for searching for clips in the timeline based on their names or associated keywords. The search tool includes a display area for displaying search results. In some such embodiments, each result is user-selectable such that a selection of the result causes the timeline to navigate to the position of the clip in the timeline. Accordingly, the timeline search tool allows a content editor to navigate the timeline to identify clips.
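The search-and-navigate behavior just described can be sketched as a single pass of a hypothetical model: filter the indexed names by the search parameter, then resolve a selected result to a timeline position. The function and field names are illustrative assumptions, not the application's actual interfaces.

```python
# Hypothetical sketch of the timeline search tool: clips (or associated
# items) map to their start positions in the timeline; a search string
# filters the listed indices, and selecting a result yields the position
# the timeline should navigate to.

def run_search_tool(clips, items, view_mode, search="", selection=None):
    """clips/items map a displayed name -> start time (seconds) in the
    timeline. Returns (names shown in the index area, playhead position
    for the selection, or None if nothing matches)."""
    source = clips if view_mode == "clip" else items
    shown = {name: pos for name, pos in source.items()
             if search.lower() in name.lower()}
    return list(shown), shown.get(selection)

clips = {"Beach": 0.0, "Sunset": 40.0}
items = {"Marker 1": 35.0, "Scene 1 Start": 62.5}
shown, pos = run_search_tool(clips, items, "keyword",
                             search="scene", selection="Scene 1 Start")
assert shown == ["Scene 1 Start"]
assert pos == 62.5   # the timeline navigates to the item's start point
```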
-
FIG. 46 provides an illustrative example of a timeline search tool 4630 according to some embodiments. Two operational stages 4605 and 4610 are illustrated in this figure. As shown in FIG. 46, the timeline search tool 4630 includes (1) a search field 4615 for specifying one or more search parameters, (2) a control 4660 for entering a clip view, (3) a control 4635 for entering a keyword view, (4) an index area 4620, and (5) an index playhead 4625. - In the
first stage 4605, a timeline 4650 displays one of several different clips that are in a composite presentation. A user or content editor might have added these clips to the timeline in a current editing session or by opening a composite project (alternatively referred to as a “project”) that was defined in a previous editing session. As shown in FIG. 46, the timeline search tool 4630 is displayed adjacent to the timeline 4650. However, the timeline search tool 4630 may be displayed elsewhere in some embodiments. For example, the timeline search tool may be provided in its own window separate from the timeline 4650. In the example illustrated in FIG. 46, the timeline search tool 4630 may be closed or opened (e.g., by selecting a toolbar button, menu item, shortcut key, etc.). - The
first stage 4605 shows the timeline search tool 4630 in a clip view. At any time, the user can switch to a keyword view by selecting the control 4635. In the clip view, the index area 4620 lists each clip (e.g., a range of a clip) that is added to the timeline 4650. One or more scrollbars may be displayed when the list of clips does not fit in the index area 4620. - Each particular clip listed in the
index area 4620 represents an index to the particular clip in the timeline 4650. The user can select any one of the indices to navigate to a position of a corresponding clip in the timeline 4650. For example, when the composite presentation is for a two-hour program with many ranges of different clips, the user can select an index for a clip range and quickly navigate to the clip range in the timeline 4650. - In the example illustrated in
FIG. 46, the clips are listed in chronological order starting with a first clip in the timeline 4650 and ending with a last clip in the timeline. Also, each clip includes (1) a clip icon that indicates the type of clip (e.g., video, audio, title), (2) a clip name, and (3) a time duration. A user can choose what types of clips are listed in the index area 4620 by selecting one or more controls from a set of controls 4640. For example, the user can specify whether only video clips, audio clips, or title clips are displayed in the index area 4620. In some embodiments, instead of different icons, the index area 4620 displays the clips differently. For example, each clip may be represented by one or more thumbnail images, a waveform, etc. - The
second stage 4610 shows the timeline search tool 4630 in a keyword view. At any time, the user can switch to a clip view by selecting the control 4660. In the keyword view, the index area 4620 lists each keyword that is associated with one or more ranges of a clip in the timeline. These keywords may be user-specified keywords or analysis keywords in some embodiments. In addition to keywords, some embodiments list markers (e.g., ratings markers, to-do markers, etc.). In some embodiments, the index area 4620 lists smart collections. For example, the index area 4620 may list different smart collections related to an analysis keyword, such as one person, two persons, a group of people, wide shot, close-up, etc. Similar to the clip view, one or more scrollbars may be displayed when the list of items does not fit in the index area 4620. - As with the listing of clips in the clip view, each item (e.g., keyword, marker, smart collection) represents an index to the item in the
timeline 4650. The user can select any one of the indices to navigate to a position of a corresponding item in the timeline 4650. In the example illustrated in the second stage 4610, each item in the index area 4620 includes an icon that indicates its type, its name, and a time duration. Also, the items are listed in the index area 4620 in chronological order starting with a first item in the timeline 4650 and ending with a last item in the timeline. - In the
second stage 4610, a user can choose which types of items are displayed in the index area 4620 by selecting one or more controls of a set of controls 4645 below the index area. For example, the user can specify that only markers, keywords, incomplete to-do markers, or completed to-do markers be displayed in the index area. - As shown in
FIG. 46, the index playhead 4625 is positioned at the top of the index area 4620 above any other items (e.g., clips, keywords, and markers in both views). In the clip view, the position of the index playhead 4625 provides a reference point to one or more clips that are displayed in the timeline 4650. For example, in the first stage 4605, the position of the index playhead 4625 indicates that the timeline is displaying a first clip in the composite presentation that the user is creating. Similarly, in the keyword view, the position of the index playhead 4625 provides a reference point to one or more items (e.g., keywords, markers) that are associated with a particular clip in the timeline 4650. The position of the index playhead 4625 also corresponds to a timeline playhead 4655 in the timeline. This index playhead 4625 moves synchronously with the timeline playhead 4655, in some embodiments. -
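The synchronization between the two playheads can be sketched as follows; this is a hypothetical, simplified model in which the index-playhead position is derived from how many chronologically ordered items the timeline playhead has passed. The slot convention (0 = above all items) is an assumption drawn from the figures.

```python
# Hypothetical sketch of index-playhead synchronization: given the
# timeline playhead's time and the ordered start times of the index
# items, compute how many items the playhead has passed. A slot of 0
# means the index playhead sits at the top of the index area, above
# all items.
import bisect

def index_playhead_slot(item_start_times, playhead_time):
    # Items whose start time is at or before the playhead have been passed.
    return bisect.bisect_right(item_start_times, playhead_time)

starts = [10.0, 25.0, 40.0]    # marker/keyword start times in seconds
assert index_playhead_slot(starts, 0.0) == 0    # at the top of the index
assert index_playhead_slot(starts, 12.0) == 1   # below the first marker item
assert index_playhead_slot(starts, 41.0) == 3   # below all items
```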
FIG. 47 provides an illustrative example of the association between the timeline playhead 4655 and the index playhead 4625. Specifically, in three operational stages 4705-4715, this figure illustrates how the index playhead 4625 is moved when a user selects and moves the timeline playhead 4655. - In the
first stage 4705, the timeline playhead 4655 is situated at a position on the timeline 4650 that corresponds to a starting point of the composite presentation. The timeline search tool 4630 is in a keyword view and displays a list of keywords and markers. The position of the index playhead 4625 corresponds to the position of the timeline playhead 4655. This is shown in the first stage 4705 as the index playhead is situated at the top of the index area 4620 above the keywords and markers. - The
second stage 4710 shows a selection and movement of the timeline playhead 4655 past a first marker 4720. As shown, the movement causes the index playhead 4625 to be moved down by following the chronological order of the indices in the index area 4620. Specifically, in the second stage 4710, the index playhead 4625 is moved to a position below a first marker item 4725 corresponding to the first marker 4720 in the timeline 4650. - The
third stage 4715 shows a selection and movement of the timeline playhead 4655 past a second marker 4730. As shown, the movement causes the index playhead 4625 to be moved down in the list of markers and keywords. Specifically, in the third stage 4715, the index playhead 4625 is moved to a position below a second marker item 4735 corresponding to the second marker 4730 in the timeline 4650. -
FIG. 48 provides an illustrative example of filtering the timeline search tool 4630. Specifically, this figure illustrates in six operational stages 4805-4830 how the set of controls 4835-4860 can be used to filter the index area 4620. In this example, as the search tool is in a keyword view, only the set of controls 4835-4860 related to keyword search is shown below the index area 4620. - In the
first stage 4805, the index area lists all keywords and markers associated with clips in the timeline. Specifically, the control 4835 for showing all items is activated, which causes the index area 4620 to list each item. The second stage 4810 shows selection of a control 4840, which causes the index area 4620 to display only markers. - The
third stage 4815 shows selection of a control 4845 for displaying only keywords. Accordingly, in the third stage 4815, only keywords are listed in the index area 4620. Specifically, the index area 4620 lists two selectable keyword items. The first item corresponds to a range of a clip associated with both first and second keywords. The second item corresponds to a range associated with the first keyword. In some embodiments, the time (e.g., time code) listed for each item (e.g., clip, keyword, marker, etc.) in the index area 4620 corresponds to a starting point of the range of the item along the sequence or composite presentation. For example, in the third stage 4815, the first range associated with the two keywords is around 35 seconds into the sequence. - The
fourth stage 4820 shows selection of a control 4850 for displaying only analysis keywords. This causes the index area 4620 to display only two selectable items for analysis keywords associated with the clips in the sequence. The fifth stage 4825 shows selection of a control 4855 that causes the index area to display only to-do markers. The sixth stage 4830 shows selection of a control 4860 that causes the index area 4620 to list only completed to-do markers. In the sixth stage 4830, the index area 4620 is empty because the clips in the timeline are not associated with any completed to-do markers. - In the previous example, the
timeline search tool 4630 is filtered based on keywords, markers, analysis keywords, to-do markers, and completed to-do markers. FIG. 49 provides an illustrative example of filtering the timeline search tool 4630 based on video, audio, and titles. Four operational stages 4905-4920 are illustrated in this figure. In this example, as the timeline search tool 4630 is in a clip view mode, only the set of controls 4925-4940 relating to clip searches is shown below the index area 4620. - In the
first stage 4905, the index area 4620 lists each clip in the timeline because the control 4925 corresponding to all clips is activated. Specifically, the index area 4620 lists a title clip 4945, an audio clip 4950, and a video clip 4955. In some embodiments, title clips are synthesized clips generated by a media editing application. For example, a user might add one or more title clips to a composite presentation using a title effects tool. This title effects tool may provide different options for defining a title clip to add to the composite presentation. - In contrast to audio and video clips, title clips do not reference any source media on a disk, in some embodiments. In general, titles may play a critical role in movies, providing important bookends (e.g., opening titles and closing credits), and conveying time and dates within the movie. Titles, especially in the lower third of the screen, are also used in documentaries and informational videos to convey details about onscreen subjects or products.
- As shown in the
second stage 4910, the selection of the control 4930 for showing only video clips causes the index area 4620 to list only the video clip 4955. The third stage 4915 shows selection of a control 4935 for displaying only audio clips. The selection of the control 4935 causes the index area 4620 to list only the audio clip 4950. The fourth stage 4920 shows selection of a control 4940 for displaying only title clips, which causes the index area 4620 to list only the title clip 4945. -
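The filter controls of the keyword view (all items, markers, keywords, analysis keywords, to-do markers, completed to-do markers) and the clip view (all clips, video, audio, titles) can be sketched as predicates over the indexed items. The item fields below are illustrative assumptions, not the application's actual schema.

```python
# Hypothetical sketch of the index-area filter controls: each control
# selects a predicate applied to the items associated with the clips
# in the timeline; only matching items are listed.

ITEMS = [
    {"name": "Scene 1 Start", "kind": "marker",           "todo": False, "done": False},
    {"name": "Fix audio",     "kind": "marker",           "todo": True,  "done": False},
    {"name": "beach",         "kind": "keyword",          "todo": False, "done": False},
    {"name": "one person",    "kind": "analysis_keyword", "todo": False, "done": False},
]

FILTERS = {
    "all":       lambda it: True,
    "markers":   lambda it: it["kind"] == "marker",
    "keywords":  lambda it: it["kind"] in ("keyword", "analysis_keyword"),
    "analysis":  lambda it: it["kind"] == "analysis_keyword",
    "todo":      lambda it: it["todo"] and not it["done"],
    "completed": lambda it: it["todo"] and it["done"],
}

def filter_index(items, control):
    """Return the names listed in the index area for the active control."""
    return [it["name"] for it in items if FILTERS[control](it)]

assert filter_index(ITEMS, "markers") == ["Scene 1 Start", "Fix audio"]
assert filter_index(ITEMS, "analysis") == ["one person"]
assert filter_index(ITEMS, "completed") == []  # no completed to-dos yet
```

The clip-view controls (video/audio/title) would work the same way, with predicates over a clip's type field.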
FIG. 50 provides an illustrative example of navigating the timeline using the timeline search tool 4630. Three operational stages 5005-5015 are shown in this figure. In the first stage 5005, the timeline playhead 4655 is situated at a position on the timeline that corresponds to a starting point of the composite presentation. The timeline search tool 4630 is in a keyword search mode and displays a list of keywords and markers. - The
second stage 5010 shows a selection of a to-do marker item 5020 in the index area 4620. As shown, the selection causes the timeline playhead 4655 to move to a position of a marker 5035 corresponding to the to-do marker item 5020 in the index area 4620. The third stage 5015 shows a selection of a keyword item 5025 in the index area 4620. The selection causes the playhead to be moved to a starting point of a range 5030 associated with the keyword corresponding to the keyword item 5025. The selection also causes the clip range 5030 associated with the keyword to be selected in the timeline 4650. This provides a visual indication to the user of the range of the sequence that is tagged with the keyword. In some embodiments, when multiple keywords are selected from the index area 4620, the ranges of the keywords are selected in the timeline, and the timeline may move such that the beginning or starting point of a range associated with a first keyword in the index area 4620 is aligned with the timeline's playhead. In some embodiments, the selection mechanism allows users to inspect the timeline and perform a number of different operations. These operations include removing items from the timeline (e.g., clips, tags, and markers), editing operations (e.g., adding effects), etc. - In the example described above, the user selects a keyword and marker in the
timeline search tool 4630 to navigate the timeline 4650. This is particularly useful when the timeline is densely populated with multiple different clips (e.g., ranges of clips). For example, when the composite presentation is a long presentation, there may be many clips (e.g., audio clips, subtitles, video clips, titles, images). In such a situation, the timeline search tool 4630 can be used to locate a particular item and navigate to the particular item in the timeline 4650. In some embodiments, the timeline search tool 4630 allows the user to navigate to the items (e.g., clips, keywords, markers) in a similar manner as navigating to items in a list. For example, a user can select an item through a directional key (e.g., an up key or a down key), which causes the timeline to navigate to the position of the item. -
FIG. 51 provides an example workflow for searching the timeline 4650 for a to-do marker using the timeline search tool 4630 and specifying the to-do marker as a completed item (e.g., by selecting a checkbox). Three operational stages 5105-5115 are shown in this figure. In the first stage 5105, the control 4855 for displaying only to-do markers with an incomplete flag is selected. The selection causes the index area 4620 to display only a marker item 5120 corresponding to a to-do marker 5125 in the timeline. The user then selects this marker item 5120, which causes navigation across the timeline to the to-do marker 5125. - The
second stage 5110 shows the timeline 4650 after the user selects (e.g., through a double-click operation) the to-do marker 5125 on the timeline. As shown, the selection causes a pop-up window 5130 to appear. The pop-up window includes information related to the to-do marker and a check box 5135 for flagging the to-do marker as a completed item. - The
third stage 5115 shows the timeline 4650 after the user selects the check box 5135 to flag the to-do marker 5125 as a completed item. In the third stage 5115, the appearance of the marker 5125 is different from its appearance in the first and second stages 5105 and 5110. In addition, the index area 4620 that displays incomplete to-do markers is cleared, as the to-do marker 5125 has been flagged as a completed item. Also, the control 4860 can be selected to display each completed task in the index area 4620. - In the example described above, the to-
do marker 5125 is checked as being completed using the timeline. In some embodiments, the to-do marker may be flagged as completed using the timeline search tool 4630. For example, instead of flagging the marker using the pop-up window 5130, the marker item 5120 may be selected to mark the to-do marker 5125 as a completed item. -
FIG. 52 provides an illustrative example of using the timeline search tool 4630 to search a list of keywords and markers. Specifically, this figure illustrates in three operational stages 5205-5215 how the search field 4615 can be used to filter the index area 4620 of the timeline search tool 4630. In this example, as the search tool is in a keyword view mode, only the set of controls related to keyword search is shown below the index area 4620. - In the
first stage 5205, the index area 4620 lists all keywords or markers that are associated with the clips in the timeline. The user inputs a letter “s” into the search field 4615 in the second stage 5210. This causes the index area 4620 to display only the keywords and markers that include the letter “s”. The third stage 5215 illustrates inputting an additional letter into the search field 4615. Specifically, the user inputs the letter “t” in addition to the previous input of the letter “s”. This causes the index area 4620 to display only the keywords and markers that include the sequence of letters “st”. -
FIG. 53 provides an illustrative example of using the timeline search tool 4630 to search a list of clips. Specifically, this figure illustrates in three operational stages 5305-5315 how the search field 4615 can be used to filter the index area 4620 of the timeline search tool 4630 based on clips. In this example, as the timeline search tool 4630 is in a clip view mode, only the set of controls related to clip searches is shown below the index area 4620. - In the
first stage 5305, the index area 4620 lists all clips in the timeline. In the second stage 5310, the user inputs a letter “a” into the search field 4615. This causes the index area 4620 to display only the clips that include the letter “a”. The third stage 5315 illustrates inputting an additional letter into the search field 4615. Specifically, the user inputs the letter “b” in addition to the previous input of the letter “a”. This causes the index area 4620 to display only the clips that include the sequence of letters “ab”. -
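The keystroke-by-keystroke narrowing shown in FIGS. 52 and 53 can be sketched as a simple filter; a case-insensitive substring match is an assumption here, since the text only states that matching items "include the sequence of letters".

```python
# Hypothetical sketch of the search-field filter: each keystroke narrows
# the index area to the items whose names contain the typed sequence.

def filter_by_search(names, query):
    q = query.lower()
    return [n for n in names if q in n.lower()]

clips = ["Beach", "Cab ride", "Sunset"]
assert filter_by_search(clips, "a") == ["Beach", "Cab ride"]
assert filter_by_search(clips, "ab") == ["Cab ride"]   # narrowed by the "b"
assert filter_by_search(clips, "") == clips            # empty query shows all
```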
FIG. 54 provides an illustrative example of using the timeline search tool 4630 to display the time duration for ranges of clips (e.g., that are associated with one or more keywords). Three operational stages 5410-5420 of the timeline search tool 4630 are illustrated in this figure. - As shown in the
first stage 5410, a user selects a first keyword item 5430 in the index area 4620 of the timeline search tool 4630. The selection causes the timeline search tool 4630 to display a total time for the range of a clip associated with a keyword corresponding to the first keyword item 5430. Specifically, the total time is displayed in a display area 5425. - In the
second stage 5415, the user selects a second keyword item 5435 while selecting the first keyword item 5430. This causes the total time of the two ranges of clips associated with the keywords to be displayed in the display area 5425. The third stage 5420 shows the total time of three clip ranges in the display area 5425. However, in this third stage, the total duration includes a duration for a clip range that is associated with a set of analysis keywords that correspond to an analysis keyword item 5440. - In the example described above, a total duration is displayed when multiple items corresponding to one or more keywords are selected from the
index area 4620 of the timeline search tool 4630. Displaying the total time can be useful in a number of different ways. For example, an editor may be restricted to adding only 30 seconds of stock footage. Here, when the stock footage is tagged as such, the editor can select those items corresponding to the stock footage in the index area 4620 and know whether the total duration exceeds 30 seconds. -
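The total-duration readout, including the 30-second stock-footage check above, can be sketched as a sum over the selected items' ranges. The range representation is an illustrative assumption.

```python
# Hypothetical sketch of the total-duration display: selecting multiple
# index items sums the durations of their clip ranges.

def total_duration(ranges, selected):
    """ranges maps an item name -> (start, end) in seconds along the
    sequence; selected is the list of currently selected item names."""
    return sum(ranges[name][1] - ranges[name][0] for name in selected)

ranges = {
    "stock: city":  (35.0, 47.0),
    "stock: crowd": (60.0, 75.0),
    "beach":        (0.0, 10.0),
}
picked = ["stock: city", "stock: crowd"]
assert total_duration(ranges, picked) == 27.0
assert total_duration(ranges, picked) <= 30.0  # within the stock-footage budget
```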
FIG. 55 provides an illustrative example of displaying the total time of several clips in the timeline search tool 4630. This figure is similar to the previous example. However, in the example illustrated in FIG. 55, the user selects multiple items corresponding to different clips. Specifically, in the first stage 5505, the user selects a first item 5515 to display a total duration for a first clip in the display area 5425 of the timeline search tool 4630. In the second stage 5510, the user selects a second item 5520 corresponding to a second clip while selecting the first item 5515. This causes the display area 5425 to display the total duration of both the first and second clips. - In some embodiments, the
timeline search tool 4630 allows a user to find missing clips. A missing clip is a clip imported into the media editing application that does not link back to its source. For example, a user might have moved or deleted a source file on a hard disk, breaking the link between the application's file entry and the source file. FIG. 56 provides an illustrative example of using the timeline search tool 4630 to find missing clips. - In the
first stage 5605, the timeline 4650 includes a number of different clips. This is indicated by the listing of clips in the index area 4620 of the timeline search tool 4630. The second stage 5610 shows the timeline 4650 after a search parameter for finding missing clips is inputted by the user into the search field 4615 of the timeline search tool 4630. In some embodiments, the search parameter is a predefined search parameter or keyword to search for missing clips in the timeline. In this example, the user types the word “missing” into the search field 4615. However, a different word or parameter can be used, in some embodiments. - As shown in the
second stage 5610, the input causes the index area 4620 of the timeline search tool 4630 to display an index item 5620 for a missing or offline clip. The user then selects the index item 5620 to navigate to the missing clip. - The
third stage 5615 shows the timeline 4650 after the user selects the index item 5620. Specifically, the selection causes the timeline to be navigated to the missing clip. Here, the user can select the index item 5620 or the clip representation 5625 to delete the clip from the project. Alternatively, the user can reestablish the broken link. For example, a selection of the index item 5620 may cause a clip inspector to be displayed. This clip inspector allows the user to identify the location of the missing clip in order to reestablish the broken link. -
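The missing-clip search can be sketched as a check of each clip's link back to its source file. The clip fields are illustrative assumptions; the reserved word “missing” comes from the example in the text.

```python
# Hypothetical sketch of the "missing" search: a clip is missing or
# offline when its imported file entry no longer links to an existing
# source file on disk.
import os

def find_missing_clips(clips):
    """clips: list of dicts with 'name' and 'source' (a path, or None)."""
    return [c["name"] for c in clips
            if c["source"] is None or not os.path.exists(c["source"])]

clips = [
    {"name": "intro",  "source": os.devnull},         # a file that exists
    {"name": "b-roll", "source": "/no/such/file.mov"},
]
assert find_missing_clips(clips) == ["b-roll"]
```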
FIG. 57 conceptually illustrates a process 5700 for searching and navigating a timeline of a media editing application. In some embodiments, the process is performed through a timeline search tool of the media editing application. As shown in this figure, the process 5700 begins by identifying (at 5705) clips in the timeline. Next, the process identifies (at 5710) items (e.g., keywords, markers) associated with the clips in the timeline. Examples of such associated items include media clips, keywords, smart collections, markers, etc. - The
process 5700 then determines (at 5715) whether it is in a clip view mode. When the determination is made that the process is in a clip view mode, the process 5700 displays (at 5720) the identified clips as indices in an index display area. Next, the process 5700 determines (at 5725) whether it has received a selection of a listed index item. In some embodiments, the process 5700 continuously monitors user actions in the clip view mode to make this determination. - When the determination is made that the
process 5700 has received a selection of a listed index item, the process navigates (at 5730) to the position of the selected clip in the timeline. At 5735, the process 5700 determines whether it has received any search parameter. - When the determination is made that the process has received some search parameters, the process filters (at 5740) the indices displayed in the index display area based on the received search parameters. The process then goes on to 5770. In some embodiments, the
process 5700 continuously monitors a search field to determine whether the user has inputted a search parameter (e.g., a letter, a number). - Back at 5715, when the
process 5700 determines that it is not in a clip view mode but is in a keyword view mode, the process displays (at 5745) the identified items (e.g., keywords, markers) as indices in the index display area. The process 5700 then determines (at 5750) whether it has received a selection of a listed index item. When the determination is made that the process 5700 has not received a selection of a listed index item, the process 5700 transitions to 5760. In contrast, when the determination is made that the process 5700 has received a selection of a listed index item, the process navigates (at 5755) to the position of the selected item in the timeline. - The
process 5700 then determines (at 5760) whether any search parameter has been received. When the determination is made that a search parameter has not been received, the process transitions to 5770. In contrast, when the determination is made that a search parameter has been received, the process 5700 filters (at 5765) the indices displayed in the index display area based on the received search parameters. The process then proceeds to 5770. - At 5770, the
process 5700 determines whether there is any additional input for the timeline search tool. If it is determined that there is additional input for the timeline search tool, the process 5700 returns to 5715 to continue its navigation and filtering. Otherwise, the process 5700 terminates. - For some embodiments of the invention,
FIG. 58 conceptually illustrates several example data structures for a searchable and navigable timeline. In some embodiments, the data structures are all contained within a project data structure that contains a single sequence for generating a composite presentation. FIG. 58 illustrates a timeline sequence 5805 that includes a primary collection data structure 5810. Here, the primary collection data structure 5810 is itself an array of one or more clip objects or collection objects. Several examples of such clip objects are described above by reference to FIG. 5 . - As shown in
FIG. 58 , the sequence 5805 includes (1) a sequence ID, (2) sequence attributes, and (3) the primary collection 5810. The sequence ID identifies the timeline sequence 5805. In some embodiments, a user sets the sequence attributes for the project in the timeline. For example, the user might have specified several settings that correspond to these sequence attributes when creating the project. - The
primary collection 5810 includes the collection ID and the array of clips. The collection ID identifies the primary collection. The array includes several clips (i.e., clip 1 to clip N). These represent clips or collections that have been added to the timeline. In some embodiments, the array is ordered based on the locations of media clips in the timeline and only includes clips in the primary lane of the primary collection. The application assumes that there is no gap between these items, and thus no timing data is needed between the items. When a clip collection stored in an event is added to a project in a timeline, some embodiments remove a sequence container data structure and copy the rest of the data structure (e.g., the clip and its components) into the data structure for the clip in the timeline. - As shown in
FIG. 58 , the clip 5815 includes (1) a clip ID, (2) range attributes, (3) a set of keywords, and (4) a set of markers. The clip ID uniquely identifies the clip 5815. In some embodiments, the range attributes indicate a total range and/or trimmed ranges associated with the clip 5815. In some embodiments, the clip 5815 is a compound clip that includes multiple clips. An example of a compound clip is described above by reference to FIG. 7 . - In some embodiments, the
clip 5815 includes a set of anchored items. Some embodiments include a set of anchored items for each clip or collection object. For example, each first clip that is anchored to a second clip may store an anchor offset that indicates a particular instance in time along the range of the second clip. That is, the anchor offset may indicate that the first clip is anchored x number of seconds and/or frames into the second clip. These times refer to the trimmed ranges of the clips in some embodiments. - In some embodiments, the timeline search tool displays the list of clips and provides a selectable link to each clip based on the array of clips. For example, the ordering of the clips in the array and the range attributes provide indications of starting and ending points along the timeline of each clip. As mentioned above by reference to
FIG. 5 , each clip can include other clip attributes such as one or more components, clip objects, notes, etc. - The
keyword set 5820 represents keywords associated with the clip 5815. An example of such a keyword set is described above by reference to FIG. 5 . As mentioned, the keyword set 5820 includes one or more keywords that are associated with a particular range of the clip 5815. In some embodiments, the keyword's range attributes indicate a starting point and an ending point of the range of a clip that is associated with the keyword. This may include the actual start time and end time. In some embodiments, the range attributes may be expressed differently. For example, instead of a start time and an end time, the range may be expressed as a start time and a duration (from which the end time can be derived). - The
marker 5825 includes a marker ID and range attributes. The marker ID identifies the marker 5825. In contrast to a keyword's range attributes, which may indicate a range or duration of time, the range attributes of the marker 5825 only indicate a single instance in time, in some embodiments. In addition, the marker 5825 may include attributes related to a note field, an attribute that indicates whether the marker is a to-do marker, etc. - In some embodiments, the timeline search tool displays the list of keywords and markers, and provides a selectable link to each of these items based on the marker and keyword associations of the clips in the array of clips. That is, the ordering of the clips, each clip's range attributes, and each marker's or keyword's range attributes all provide an indication of where each associated keyword or marker is located along the timeline.
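The containment just described (a sequence holding a primary collection, whose ordered clips each carry ranged keyword sets and single-instant markers) can be pictured loosely as nested records. The following Python sketch is illustrative only; the type and field names paraphrase the attributes discussed above and are not the application's actual data structures.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative stand-ins for the FIG. 58 structures; names are paraphrases,
# not the application's actual types.

@dataclass
class KeywordSet:
    keywords: List[str]       # one or more keywords sharing a range
    start: float              # start of the tagged range within the clip
    duration: float           # range kept as start/duration in this sketch

    @property
    def end(self) -> float:
        # The end time is derived rather than stored.
        return self.start + self.duration

@dataclass
class Marker:
    marker_id: str
    time: float               # a single instant, unlike a keyword's range
    note: str = ""
    is_todo: bool = False

@dataclass
class Clip:
    clip_id: str
    duration: float           # stands in for the clip's range attributes
    keyword_sets: List[KeywordSet] = field(default_factory=list)
    markers: List[Marker] = field(default_factory=list)

@dataclass
class PrimaryCollection:
    collection_id: str
    clips: List[Clip] = field(default_factory=list)  # ordered, assumed gap-free

@dataclass
class Sequence:
    sequence_id: str
    attributes: dict
    primary: PrimaryCollection
```

As noted above, keeping a keyword's range as a start time and a duration lets the end time be derived on demand instead of being stored.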
- One of ordinary skill will also recognize that the objects and data structures shown in
FIG. 58 are just a few of many different possible configurations for a timeline search tool of some embodiments. For example, a keyword set may be represented as a single keyword instead of a set of one or more keywords. In some such embodiments, each keyword is associated with its own range attribute. Also, additional information regarding data structures is described in U.S. patent application Ser. No. 13/111,912, entitled “Data Structures for a Media-Editing Application”, which is incorporated in the present application by reference. - In some embodiments, the processes described above are implemented as software running on a particular machine, such as a computer or a handheld device, or stored in a machine readable medium.
FIG. 59 conceptually illustrates the software architecture of a media editing application 5900 of some embodiments. In some embodiments, the media editing application is a stand-alone application or is integrated into another application, while in other embodiments the application might be implemented within an operating system. Furthermore, in some embodiments, the application is provided as part of a server-based solution. In some such embodiments, the application is provided via a thin client. That is, the application runs on a server while a user interacts with the application via a separate machine remote from the server. In other such embodiments, the application is provided via a thick client. That is, the application is distributed from the server to the client machine and runs on the client machine. - The
media editing application 5900 includes a user interface (UI) interaction and generation module 5905, a media ingest module 5910, editing modules 5915, a rendering engine 5920, a playback module 5925, analysis modules 5940, a keyword association module 5935, a keyword collection module 5930, and a timeline search module 5995. As shown, the user interface interaction and generation module 5905 generates a number of different UI elements, including a keyword tagging tool 5906, a timeline 5945, a timeline search tool 5904, a thumbnails view 5908, a list view 5902, a preview display area 5912, and a set of analysis and import tools 5990. - The figure also illustrates stored data associated with the media-editing application: source files 5950,
event data 5955, project data 5960, and other data 5965. In some embodiments, the source files 5950 store media files (e.g., video files, audio files, combined video and audio files, etc.) imported into the application. The source files 5950 of some embodiments also store transcoded versions of the imported files as well as analysis data (e.g., people detection data, shake detection data, color balance data, etc.). The event data 5955 stores the events information used by some embodiments to populate the thumbnails view 5908 (e.g., filmstrip view) and the list view 5902. The event data 5955 may be a set of clip object data structures stored as one or more SQLite database (or other format) files in some embodiments. The project data 5960 stores the project information used by some embodiments to specify a composite presentation in the timeline 5945. The project data 5960 may also be a set of clip object data structures stored as one or more SQLite database (or other format) files in some embodiments. - In some embodiments, the four sets of data 5950-5965 are stored in a single physical storage (e.g., an internal hard drive, external hard drive, etc.). In some embodiments, the data may be split between multiple physical storages. For instance, the source files might be stored on an external hard drive with the event data, project data, and other data on an internal drive. Some embodiments store event data with their associated source files and render files in one set of folders, and the project data with associated render files in a separate set of folders.
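The event and project data are described above as clip object data structures that may be stored in SQLite database files. As a hedged illustration of that idea, such a store might be sketched as follows; the table layout and column names are invented for illustration and are not the application's actual schema.

```python
import sqlite3

# Invented schema for illustration; the actual SQLite layout used by the
# application is not specified here.
conn = sqlite3.connect(":memory:")   # stand-in for an event database file
conn.execute("CREATE TABLE clips (clip_id TEXT PRIMARY KEY, name TEXT, duration REAL)")
conn.execute("CREATE TABLE keywords (clip_id TEXT, keyword TEXT, start REAL, dur REAL)")

# Store one clip object and a keyword range covering part of it.
conn.execute("INSERT INTO clips VALUES ('c1', 'beach take', 12.0)")
conn.execute("INSERT INTO keywords VALUES ('c1', 'b-roll', 2.0, 5.0)")

# A keyword collection's contents can then be recovered by querying its keyword.
rows = conn.execute(
    "SELECT c.name, k.start, k.start + k.dur "
    "FROM clips c JOIN keywords k ON k.clip_id = c.clip_id "
    "WHERE k.keyword = ?",
    ("b-roll",),
).fetchall()
# rows now holds the tagged range for each clip carrying the queried keyword
```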
-
FIG. 59 also illustrates an operating system 5970 that includes input device driver(s) 5975, a display module 5980, and a media import module 5985. In some embodiments, as illustrated, the device drivers 5975, display module 5980, and media import module 5985 are part of the operating system 5970 even when the media editing application 5900 is an application separate from the operating system 5970. - The
input device drivers 5975 may include drivers for translating signals from a keyboard, mouse, touchpad, tablet, touchscreen, etc. A user interacts with one or more of these input devices, each of which sends signals to its corresponding device driver. The device driver then translates the signals into user input data that is provided to the UI interaction and generation module 5905. - The present application describes a graphical user interface that provides users with numerous ways to perform different sets of operations and functionalities. In some embodiments, these operations and functionalities are performed based on different commands that are received from users through different input devices (e.g., keyboard, trackpad, touchpad, mouse, etc.). For example, the present application illustrates the use of a cursor in the graphical user interface to control (e.g., select, move) objects in the graphical user interface. However, in some embodiments, objects in the graphical user interface can also be controlled or manipulated through other controls, such as touch control. In some embodiments, touch control is implemented through an input device that can detect the presence and location of touch on a display of the device. An example of such a device is a touch screen device. In some embodiments, with touch control, a user can directly manipulate objects by interacting with the graphical user interface that is displayed on the display of the touch screen device. For instance, a user can select a particular object in the graphical user interface by simply touching that particular object on the display of the touch screen device. As such, when touch control is utilized, a cursor may not even be provided for enabling selection of an object of a graphical user interface in some embodiments. However, when a cursor is provided in a graphical user interface, touch control can be used to control the cursor in some embodiments.
- The
display module 5980 translates the output of a user interface for a display device. That is, the display module 5980 receives signals (e.g., from the UI interaction and generation module 5905) describing what should be displayed and translates these signals into pixel information that is sent to the display device. The display device may be an LCD, plasma screen, CRT monitor, touchscreen, etc. - The
media import module 5985 receives media files (e.g., audio files, video files, etc.) from storage devices (e.g., external drives, recording devices, etc.) through one or more ports (e.g., a USB port, FireWire port, etc.) of the device on which the application 5900 operates and translates this media data for the media-editing application or stores the data directly onto a storage of the device. - The UI interaction and generation module 5905 of the
media editing application 5900 interprets the user input data received from the input device drivers 5975 and passes it to various modules, including the timeline search module 5995, the editing modules 5915, the rendering engine 5920, the playback module 5925, the analysis modules 5940, the keyword association module 5935, and the keyword collection module 5930. The UI interaction and generation module 5905 also manages the display of the UI, and outputs this display information to the display module 5980. This UI display information may be based on information from the editing modules 5915, the playback module 5925, and the data 5950-5965. In some embodiments, the UI interaction and generation module 5905 generates a basic GUI and populates the GUI with information from the other modules and stored data. - As shown, the UI interaction and generation module 5905, in some embodiments, generates a number of different UI elements. These elements, in some embodiments, include the
keyword tagging tool 5906, the timeline 5945, the timeline search tool 5904, the thumbnails view 5908, the list view 5902, the preview display area 5912, and the set of analysis/import tools 5990. All of these UI elements are described in many different examples above. For example, several operations performed with the thumbnails view 5908 are described above by reference to FIGS. 1-3 and 6-16. Several example operations performed with the list view 5902 are described above by reference to FIGS. 34-45 . Also, several example operations performed with the set of analysis/import tools 5990 are described above by reference to FIGS. 24-28 . In addition, several example operations performed with the timeline 5945 and the timeline search tool 5904 are described above by reference to FIGS. 46-57 . Further, several example operations performed with the keyword tagging tool 5906 are described above by reference to FIGS. 18-23 . As mentioned, the media editing application, in some embodiments, maintains a database of previous user input or interactions to provide an auto-complete feature. The media editing application, in some embodiments, maintains a list of common production and/or editing terms. In some embodiments, these data items are stored in the storage 5965. - The media ingest
module 5910 manages the import of source media into the media-editing application 5900. Some embodiments, as shown, receive source media from the media import module 5985 of the operating system 5970. The media ingest module 5910 receives instructions through the UI interaction and generation module 5905 as to which files should be imported, then instructs the media import module 5985 to enable this import (e.g., from an external drive, from a camera, etc.). The media ingest module 5910 stores these source files 5950 in specific file folders associated with the application. In some embodiments, the media ingest module 5910 also manages the creation of event data structures upon import of source files and the creation of the clip and asset data structures contained in the events. - The
editing modules 5915 include a variety of modules for editing media in the clip browser as well as in the timeline. The editing modules 5915 handle the creation of projects, addition and subtraction of clips from projects, trimming or other editing processes within the timeline, application of effects and transitions, or other editing processes. In some embodiments, the editing modules 5915 create and modify project and clip data structures in both the event data 5955 and the project data 5960. - The
rendering engine 5920 handles the rendering of images for the media-editing application. In some embodiments, the rendering engine 5920 manages the creation of images for the media-editing application. When an image is requested by a destination within the application (e.g., the playback module 5925), the rendering engine 5920 outputs the requested image according to the project or event data. The rendering engine 5920 retrieves the project data or event data that identifies how to create the requested image and generates a render graph that is a series of nodes indicating either images to retrieve from the source files or operations to perform on the source files. In some embodiments, the rendering engine 5920 schedules the retrieval of the necessary images through disk read operations and the decoding of those images. - In some embodiments, the render
engine 5920 performs various operations to generate an output image. In some embodiments, these operations include blend operations, effects (e.g., blur or other pixel value modification operations), color space conversions, resolution transforms, etc. In some embodiments, one or more of these processing operations are actually part of the operating system and are performed by a GPU or CPU of the device on which the application 5900 operates. The output of the rendering engine (a rendered image) may be stored as render files in storage 5965 or sent to a destination for additional processing or output (e.g., playback). - The
playback module 5925 handles the playback of images (e.g., in a preview display area 5912 of the user interface). Some embodiments do not include a playback module, and the rendering engine directly outputs its images for integration into the GUI, or directly to the display module 5980 for display at a particular portion of the display device. - The
analysis modules 5940 perform analysis on clips. Each module may perform a particular type of analysis. Examples of such analysis include analysis of the number of people in the clip (e.g., one person, two persons, group) and/or a type of shot (e.g., a close-up, medium, or wide shot). Other types of analysis may include image stabilization analysis (e.g., camera movement), color balance analysis, audio analysis (e.g., mono, stereo, silent channels), metadata analysis, etc. As shown, the analysis modules 5940, in some embodiments, utilize the rendering engine 5920 to create copies of corrected media clips. For example, when excessive shake is detected in a portion of a clip, the rendering engine 5920 may create a corrected version of the clip. - In some embodiments, the
analysis modules 5940 operate in conjunction with the keyword association module 5935 to associate each analyzed clip (e.g., a portion of a clip or an entire clip) with one or more keywords. For example, the keyword association module 5935 may receive range attributes from the analysis modules 5940 to associate a range of a clip with a keyword. In some embodiments, the keyword association module 5935 associates a clip object or a collection object with a keyword set. The association of a keyword set with a clip object or collection object is described above by reference to FIG. 5 . - In some embodiments, the
keyword collection module 5930 facilitates the creation and deletion of keyword collections. For example, the keyword collection module 5930 may operate in conjunction with the keyword association module 5935 to create a keyword collection for each clip or portion of a clip associated with a keyword. The keyword collection module 5930, in some embodiments, allows a user to create or delete a keyword collection for a particular keyword prior to the particular keyword being associated with any clips. For example, the user can create different keyword collections, and then drag and drop different portions of clips to create the keyword association. - The
timeline search module 5995 facilitates the search and navigation of the timeline 5945. In some embodiments, the search and navigation is based on a sequence associated with the timeline 5945. For example, a sequence in the timeline may include multiple different clips. Each clip may include range attributes indicating its position along the sequence. In some embodiments, based on the sequence and the range attributes, the timeline search module 5995 provides links to clip or collection objects that allow the timeline 5945 to be navigated. In some embodiments, the timeline search module 5995 provides a list of other items (e.g., keywords, markers) and a selectable link to each of these items based on the associations of the items with the clips or collections in the sequence. That is, the ordering of the clips, each clip's range attributes, and each item's range attributes all provide an indication of the location along the timeline of each item. In some embodiments, the timeline search module 5995 provides a search result by filtering the list of items in the timeline search tool 5904. Several examples of filtering the list of items in a timeline search tool are described above by reference to FIGS. 53 and 54 . - While many of the features of the media-
editing application 5900 have been described as being performed by one module (e.g., the UI interaction and generation module 5905, the media ingest module 5910, etc.), one of ordinary skill in the art will recognize that the functions described herein might be split up into multiple modules. Similarly, functions described as being performed by multiple different modules might be performed by a single module in some embodiments (e.g., the playback module 5925 might be part of the UI interaction and generation module 5905). - Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as a computer readable medium). When these instructions are executed by one or more computational element(s) (such as processors or other computational elements like ASICs and FPGAs), they cause the computational element(s) to perform the actions indicated in the instructions. “Computer” is meant in its broadest sense, and can include any electronic device with a processor. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.
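As one purely illustrative sketch of such a software process, consider the bookkeeping attributed above to the timeline search module 5995: deriving each item's absolute position from the ordered, gap-free clip array, and filtering the index list against a search string. The data shapes and function names below are invented for illustration and are not the application's actual implementation.

```python
# Toy sketch of timeline search: clip order fixes absolute positions (no gaps
# between clips), and a search string filters the index list.

def index_timeline(clips, view_mode="keyword"):
    """List (name, absolute_position) pairs for clips or their tagged items."""
    entries, pos = [], 0.0
    for clip in clips:
        if view_mode == "clip":
            entries.append((clip["name"], pos))
        else:
            for item in clip.get("items", []):    # keywords and markers
                entries.append((item["name"], pos + item["offset"]))
        pos += clip["duration"]                   # next clip abuts this one
    return entries

def filter_entries(entries, search):
    """Keep only index entries whose name matches the search text."""
    if not search:
        return entries
    return [e for e in entries if search.lower() in e[0].lower()]
```

A selection of a filtered entry would then navigate the timeline to the entry's derived position, mirroring operations 5730 and 5755 of process 5700.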
- In this specification, the term “software” includes firmware residing in read-only memory or applications stored in magnetic storage which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs when installed to operate on one or more computer systems define one or more specific machine implementations that execute and perform the operations of the software programs.
-
FIG. 60 illustrates a computer system with which some embodiments of the invention are implemented. Such a computer system includes various types of computer readable media and interfaces for various other types of computer readable media. Computer system 6000 includes a bus 6005, at least one processing unit (e.g., a processor) 6010, a graphics processing unit (GPU) 6020, a system memory 6025, a read-only memory 6030, a permanent storage device 6035, input devices 6040, and output devices 6045. - The
bus 6005 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 6000. For instance, the bus 6005 communicatively connects the processor 6010 with the read-only memory 6030, the GPU 6020, the system memory 6025, and the permanent storage device 6035. - From these various memory units, the
processor 6010 retrieves instructions to execute and data to process in order to execute the processes of the invention. In some embodiments, the processor comprises a Field Programmable Gate Array (FPGA), an ASIC, or various other electronic components for executing instructions. Some instructions are passed to and executed by the GPU 6020. The GPU 6020 can offload various computations or complement the image processing provided by the processor 6010. - The read-only memory (ROM) 6030 stores static data and instructions that are needed by the
processor 6010 and other modules of the computer system. The permanent storage device 6035, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 6000 is off. Some embodiments of the invention use a mass storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 6035. - Other embodiments use a removable storage device (such as a floppy disk, flash drive, or ZIP® disk, and its corresponding disk drive) as the permanent storage device. Like the
permanent storage device 6035, the system memory 6025 is a read-and-write memory device. However, unlike the storage device 6035, the system memory is a volatile read-and-write memory such as a random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 6025, the permanent storage device 6035, and/or the read-only memory 6030. For example, the various memory units include instructions for processing multimedia items in accordance with some embodiments. From these various memory units, the processor 6010 retrieves instructions to execute and data to process in order to execute the processes of some embodiments. - The
bus 6005 also connects to the input and output devices. The input devices 6040 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 6045 display images generated by the computer system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). - Finally, as shown in
FIG. 60 , bus 6005 also couples the computer 6000 to a network 6065 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), an intranet, or a network of networks such as the Internet). Any or all components of computer system 6000 may be used in conjunction with the invention. - Some embodiments include electronic components, such as microprocessors, storage, and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by a device such as an electronics device, a microprocessor, a processor, a multi-processor (e.g., a chip with several processing units on it) and includes sets of instructions for performing various operations. The computer program excludes any wireless signals, wired download signals, and/or any other ephemeral signals.
- Examples of hardware devices configured to store and execute sets of instructions include, but are not limited to, application specific integrated circuits (ASICs), field programmable gate arrays (FPGA), programmable logic devices (PLDs), ROM, and RAM devices. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
- As used in this specification and any claims of this application, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification and any claims of this application, the terms “computer readable medium” and “computer readable media” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
- While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the Figures (including
FIGS. 17 , 40, 41, 28, and 57) conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. Specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process.
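The keyword-folder behavior recited in the claims below (renaming a folder retags its clips, folders merge on a name collision, and deleting a folder untags its clips) can be illustrated with a toy sketch; all names here are invented and are not the application's actual implementation.

```python
# Toy sketch of dynamic keyword folders: a folder's name is its keyword, so
# renaming, merging, or deleting a folder retags or untags its clips.

class KeywordFolders:
    def __init__(self):
        self.folders = {}                     # keyword -> set of clip ids

    def tag(self, keyword, clip_id):
        self.folders.setdefault(keyword, set()).add(clip_id)

    def rename(self, old, new):
        """Renaming retags every clip; same-name folders are merged."""
        clips = self.folders.pop(old, set())
        self.folders.setdefault(new, set()).update(clips)

    def delete(self, keyword):
        """Deleting a folder untags every clip carrying its keyword."""
        self.folders.pop(keyword, None)

    def keywords_of(self, clip_id):
        return {kw for kw, clips in self.folders.items() if clip_id in clips}
```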
Claims (34)
1. A non-transitory machine readable medium storing a program having a user interface (UI), the program for execution by at least one processing unit, the UI comprising:
a first display area for displaying a plurality of media clips;
a marking tool for tagging media clips with keywords in order to tag different media clips with different keywords; and
a second display area for creating different keyword folders representing different keywords, wherein a modification to a particular keyword folder results in the retagging of any media clip that was previously tagged with the particular keyword folder's keyword.
2. The non-transitory machine readable medium of claim 1 , wherein each keyword folder has a name corresponding to the folder's keyword, wherein the modification is a modification to the name of a particular keyword folder that changes the particular folder's keyword from a first keyword to a second keyword, and retags with the second keyword any media clip that was previously tagged with the particular keyword folder's first keyword.
3. The non-transitory machine readable medium of claim 2 , wherein the particular keyword folder is a first keyword folder, wherein the modification combines the first keyword folder with a second keyword folder when the first keyword folder is renamed to a same name as the second keyword folder.
4. The non-transitory machine readable medium of claim 1, wherein the modification is a deletion operation of the particular keyword folder that results in any media clip that was previously tagged with the particular keyword folder's keyword being untagged.
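Claims 1-4 describe keyword folders whose modifications propagate to the clips they represent: renaming a folder retags its clips, renaming it to an existing folder's name combines the two, and deleting it untags its clips. A minimal Python sketch of that behavior (class and method names are illustrative, not from the patent):

```python
class MediaLibrary:
    """Minimal model of keyword folders whose modifications retag media clips."""

    def __init__(self):
        self.tags = {}       # clip name -> set of keywords tagged on that clip
        self.folders = set()  # one keyword folder per keyword

    def tag(self, clip, keyword):
        """Tag a clip; the keyword's folder exists as long as the keyword does."""
        self.tags.setdefault(clip, set()).add(keyword)
        self.folders.add(keyword)

    def rename_folder(self, old, new):
        """Renaming a folder retags every clip carrying the old keyword (claim 2).

        If a folder named `new` already exists, the two folders are
        effectively combined (claim 3), since both sets of clips now
        carry the same keyword."""
        self.folders.discard(old)
        self.folders.add(new)
        for keywords in self.tags.values():
            if old in keywords:
                keywords.discard(old)
                keywords.add(new)

    def delete_folder(self, keyword):
        """Deleting a folder untags every clip that carried its keyword (claim 4)."""
        self.folders.discard(keyword)
        for keywords in self.tags.values():
            keywords.discard(keyword)
```

Keeping the clip-to-keyword mapping as the single source of truth is what makes folder operations cheap: a rename or delete touches the tag sets directly rather than moving clips between containers.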
5. A non-transitory machine readable medium storing a program having a user interface (UI) for organizing a plurality of media clips, the program for execution by at least one processing unit, the UI comprising:
a keyword marking tool for associating one or more keywords with one or more portions of one or more media clips; and
a display area for dynamically displaying different keyword collections, said display area dynamically adding a new keyword collection each time a new keyword is associated with one of said media clips, wherein a selection of any one of the dynamically created keyword collections results in the display of a set of media clips associated with a keyword of the keyword collection.
6. The non-transitory machine readable medium of claim 5 , wherein the display area is further for displaying a hierarchical structure that includes the different keyword collections, wherein the new keyword collection is added to the hierarchical structure.
7. The non-transitory machine readable medium of claim 6 , wherein each particular keyword collection associated with a particular keyword is displayed in the hierarchical structure as a folder or bin to provide an indication that the particular keyword collection contains each media clip associated with the particular keyword.
8. The non-transitory machine readable medium of claim 6 , wherein the hierarchical structure comprises a media collection for storing the plurality of media clips and the different keyword collections.
9. The non-transitory machine readable medium of claim 5 , wherein the different keyword collections are part of a hierarchical structure having different hierarchical levels, wherein the new keyword represents a keyword that is used at a particular level of the hierarchical structure and that does not collide with a same keyword that exists at the particular level.
10. The non-transitory machine readable medium of claim 5, wherein each particular keyword collection is for associating a particular keyword of the particular keyword collection with one or more portions of one or more media clips that are dragged and dropped onto the particular keyword collection.
11. The non-transitory machine readable medium of claim 5, wherein the selection of the keyword collection results in a filtering operation to display each portion of each media clip associated with the keyword.
12. The non-transitory machine readable medium of claim 5, wherein the display area is a first display area, wherein the UI further comprises a second display area for (i) displaying the plurality of media clips and (ii) displaying only the set of media clips associated with the keyword in response to the selection of the keyword collection.
13. The non-transitory machine readable medium of claim 12, wherein the second display area displays each media clip associated with the keyword with a graphical indication of the association with the keyword.
14. The non-transitory machine readable medium of claim 5, wherein the new keyword collection is added to the display area without a user having to create the new keyword collection upon association of the new keyword with said one media clip.
15. The non-transitory machine readable medium of claim 5 , wherein a selection of two keyword collections results in a display of a union of media clips associated with two keywords corresponding to the two keyword collections.
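Claims 5-15 describe keyword collections that appear automatically as keywords are applied, where selecting a collection filters the display to its clips and selecting two collections shows the union of both. A minimal sketch of that mechanism (names are illustrative, not the patent's implementation):

```python
from collections import defaultdict


class KeywordCollections:
    """Keyword collections that are created dynamically as keywords are applied."""

    def __init__(self):
        # keyword -> set of clips; a new collection springs into existence
        # the first time its keyword is used, with no explicit user action
        # to create it (claim 14).
        self.collections = defaultdict(set)

    def apply_keyword(self, clip, keyword):
        """Associate a keyword with a clip, adding its collection if new."""
        self.collections[keyword].add(clip)

    def select(self, *keywords):
        """Filter to the clips of the selected collection(s).

        Selecting two collections yields the union of the clips
        associated with either keyword (claim 15)."""
        clips = set()
        for keyword in keywords:
            clips |= self.collections.get(keyword, set())
        return clips
```

Because the collection is just a view over the keyword-to-clip index, "displaying a collection" reduces to a filtering operation rather than maintaining duplicate copies of clips per folder.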
16. A non-transitory machine readable medium storing a program having a user interface (UI) for organizing a plurality of media clips, the program for execution by at least one processing unit, the UI comprising:
a first display area for displaying a plurality of media clips;
a second display area for creating different keyword folders, each keyword folder representing a keyword;
a first tool for adding a new keyword folder to the second display area; and
a second marking tool for tagging media clips with keywords in order to associate different media clips with different keywords,
wherein a selection of a particular keyword folder results in the display of any media clip that is associated with the particular keyword of the particular keyword folder.
17. The non-transitory machine readable medium of claim 16 , wherein the tagging comprises tagging portions of the media clips with the different keywords.
18. The non-transitory machine readable medium of claim 17 , wherein the selection of the particular keyword folder results in the display of a visual representation of each portion of any media clip that is associated with the particular keyword of the particular keyword folder.
19. A non-transitory machine readable medium storing a program having a user interface (UI) for organizing a plurality of media clips, the program for execution by at least one processing unit, the UI comprising:
a dynamic folder structure for displaying different folders that are associated with different keywords, wherein a new folder is dynamically added to the folder structure each time a new keyword is associated with a media clip; and
a display area for displaying each media clip associated with a particular keyword when a folder associated with the particular keyword is selected.
20. The non-transitory machine readable medium of claim 19 , wherein each particular folder is displayed in the dynamic folder structure with a folder representation to provide an indication that the particular folder contains each media clip associated with a particular keyword of the particular folder.
21. The non-transitory machine readable medium of claim 19, wherein the dynamic folder structure is further for displaying different media folders for storing a set of keyword folders and a set of media clips, wherein merging two media folders into one media folder causes each media clip and keyword folder in the two media folders to be merged into the one media folder.
22. The non-transitory machine readable medium of claim 19 , wherein the UI further comprises a keyword association tool for associating media clips with keywords.
23. The non-transitory machine readable medium of claim 22 , wherein the keyword association tool allows a user to specify different keywords as shortcut keys.
24. The non-transitory machine readable medium of claim 22 , wherein the keyword association tool provides suggested keywords for an auto-complete operation based on previous user interactions with the UI.
25. The non-transitory machine readable medium of claim 22, wherein the keyword association tool is further for (i) displaying a particular keyword associated with a particular media clip when the particular media clip is selected, and (ii) disassociating the particular keyword from the particular media clip.
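Claim 24 describes a keyword association tool that suggests auto-complete candidates based on previous user interactions. One simple way to realize this, sketched here as an assumption rather than the patent's actual method, is prefix matching against the user's keyword history, preferring more recently used keywords:

```python
def suggest_keywords(prefix, history):
    """Suggest auto-complete candidates from keywords the user applied before.

    `history` is a list of previously applied keywords, most recent last.
    Matching is case-insensitive, and more recently used keywords are
    suggested first; both choices are illustrative assumptions.
    """
    suggestions = []
    for keyword in reversed(history):  # walk history from most recent
        if keyword.lower().startswith(prefix.lower()) and keyword not in suggestions:
            suggestions.append(keyword)
    return suggestions
```

A production tool might instead rank by frequency of use or restrict history to the current media collection; the claim only requires that suggestions derive from prior interactions with the UI.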
26. A non-transitory machine readable medium storing a program having a user interface (UI), the program for execution by at least one processing unit, the UI comprising:
a first display area for displaying a plurality of media clips; and
a second display area for creating different keyword folders representing different keywords, wherein a drag and drop operation of a portion of a particular media clip selected from the first display area to a particular keyword folder associates the portion of the particular media clip with a particular keyword of the particular keyword folder, wherein a selection of the particular keyword folder results in the display of any portion of any media clip that is associated with the particular keyword.
27. The non-transitory machine readable medium of claim 26 , wherein the selection of the particular keyword folder results in the display of each associated portion based on a filtering operation on the particular keyword.
28. The non-transitory machine readable medium of claim 26 , wherein the second display area is further for creating different smart collections, each smart collection containing any media clip that satisfies one or more filtering parameters.
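Claim 28 distinguishes smart collections from keyword folders: a smart collection stores filtering parameters and contains any clip that satisfies them, rather than clips explicitly tagged into it. A minimal sketch, where the metadata field names are illustrative assumptions:

```python
class SmartCollection:
    """A collection defined by stored filter parameters (claim 28).

    `params` maps clip-metadata fields to required values; the field
    names used by callers (e.g. `rating`) are hypothetical examples,
    not fields named in the patent.
    """

    def __init__(self, **params):
        self.params = params

    def contents(self, clips):
        """Return every clip (a dict of metadata) satisfying all parameters.

        Contents are computed on demand, so a clip enters or leaves the
        smart collection automatically as its metadata changes."""
        return [clip for clip in clips
                if all(clip.get(field) == value
                       for field, value in self.params.items())]
```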
29. A non-transitory machine readable medium storing a program for execution by at least one processing unit, the program comprising:
a set of instructions for analyzing contents of a plurality of media clips;
a set of instructions for associating a set of media clips with a set of keywords based on the analysis of the plurality of media clips;
a set of instructions for auto-organizing the set of media clips by creating a set of keyword collections, each particular keyword collection associated with a particular keyword in the set of keywords; and
a set of instructions for displaying each media clip associated with the particular keyword in response to a selection of the particular keyword collection.
30. The non-transitory machine readable medium of claim 29 , wherein the program further comprises a set of instructions for displaying the set of keyword collections in a dynamic collection structure.
31. The non-transitory machine readable medium of claim 30 , wherein the program further comprises a set of instructions for creating one or more inner collections for each keyword collection based on the analysis.
32. The non-transitory machine readable medium of claim 29 , wherein the set of instructions for analyzing the plurality of media clips comprises a set of instructions for performing at least one of a people analysis, color balance analysis, image stabilization analysis, and audio analysis.
33. The non-transitory machine readable medium of claim 29 , wherein the set of instructions for analyzing the plurality of media clips comprises a set of instructions for analyzing metadata of the plurality of media clips.
34. The non-transitory machine readable medium of claim 29 , wherein the program further comprises a set of instructions for analyzing, during an import operation, a source directory of each particular media clip to use a name of the source directory as a keyword for the particular media clip.
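Claim 34 has the import operation analyze each clip's source directory and use the directory's name as a keyword for the clip. A minimal sketch of that step (function name is illustrative):

```python
import os


def keyword_from_source(clip_path):
    """During import, derive a keyword from the clip's source directory (claim 34).

    E.g. a clip imported from a "Vacation" directory would be
    auto-tagged with the keyword "Vacation"."""
    return os.path.basename(os.path.dirname(os.path.abspath(clip_path)))
```

In the auto-organization flow of claims 29-31, this keyword would then feed the same keyword-collection machinery as manually applied keywords, so a collection named after the directory appears without user action.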
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/115,966 US9026909B2 (en) | 2011-02-16 | 2011-05-25 | Keyword list view |
US13/115,970 US20120210219A1 (en) | 2011-02-16 | 2011-05-25 | Keywords and dynamic folder structures |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161443709P | 2011-02-16 | 2011-02-16 | |
US13/115,970 US20120210219A1 (en) | 2011-02-16 | 2011-05-25 | Keywords and dynamic folder structures |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120210219A1 true US20120210219A1 (en) | 2012-08-16 |
Family
ID=46637856
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/115,970 Abandoned US20120210219A1 (en) | 2011-02-16 | 2011-05-25 | Keywords and dynamic folder structures |
US13/115,966 Active 2033-02-10 US9026909B2 (en) | 2011-02-16 | 2011-05-25 | Keyword list view |
US13/115,973 Active 2031-09-27 US8745499B2 (en) | 2011-01-28 | 2011-05-25 | Timeline search and index |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/115,966 Active 2033-02-10 US9026909B2 (en) | 2011-02-16 | 2011-05-25 | Keyword list view |
US13/115,973 Active 2031-09-27 US8745499B2 (en) | 2011-01-28 | 2011-05-25 | Timeline search and index |
Country Status (1)
Country | Link |
---|---|
US (3) | US20120210219A1 (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120290937A1 (en) * | 2011-05-12 | 2012-11-15 | Lmr Inventions, Llc | Distribution of media to mobile communication devices |
US20140156656A1 (en) * | 2012-11-30 | 2014-06-05 | Apple Inc. | Managed Assessment of Submitted Digital Content |
US9026909B2 (en) | 2011-02-16 | 2015-05-05 | Apple Inc. | Keyword list view |
US9128994B2 (en) | 2013-03-14 | 2015-09-08 | Microsoft Technology Licensing, Llc | Visually representing queries of multi-source data |
US9240215B2 (en) | 2011-09-20 | 2016-01-19 | Apple Inc. | Editing operations facilitated by metadata |
US20160034559A1 (en) * | 2014-07-31 | 2016-02-04 | Samsung Electronics Co., Ltd. | Method and device for classifying content |
US9536564B2 (en) | 2011-09-20 | 2017-01-03 | Apple Inc. | Role-facilitated editing operations |
US9870802B2 (en) | 2011-01-28 | 2018-01-16 | Apple Inc. | Media clip management |
US10324605B2 (en) | 2011-02-16 | 2019-06-18 | Apple Inc. | Media-editing application with novel editing tools |
US10705715B2 (en) * | 2014-02-06 | 2020-07-07 | Edupresent Llc | Collaborative group video production system |
US20220050810A1 (en) * | 2019-03-14 | 2022-02-17 | Rovi Guides, Inc. | Automatically assigning application shortcuts to folders with user-defined names |
EP3996092A1 (en) * | 2020-11-09 | 2022-05-11 | Blackmagic Design Pty Ltd | Video editing or media management system |
GB2609706A (en) * | 2021-05-26 | 2023-02-15 | Adobe Inc | Interacting with semantic video segments through interactive tiles |
US11747972B2 (en) | 2011-02-16 | 2023-09-05 | Apple Inc. | Media-editing application with novel editing tools |
US11942117B2 (en) | 2019-04-01 | 2024-03-26 | Blackmagic Design Pty Ltd | Media management system |
Families Citing this family (82)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102176731A (en) * | 2010-12-27 | 2011-09-07 | 华为终端有限公司 | Method for intercepting audio file or video file and mobile phone |
US9161073B2 (en) | 2011-02-11 | 2015-10-13 | Sony Corporation | System and method to remove outdated or erroneous assets from favorites or recently-viewed lists |
US10200756B2 (en) | 2011-02-11 | 2019-02-05 | Sony Interactive Entertainment LLC | Synchronization of favorites and/or recently viewed lists between registered content playback devices |
US9955202B2 (en) | 2011-02-11 | 2018-04-24 | Sony Network Entertainment International Llc | Removal of unavailable services and/or content items from a list of favorite and/or recently viewed services and/or content items associated with a user account |
US20120210224A1 (en) * | 2011-02-11 | 2012-08-16 | Sony Network Entertainment International Llc | System and method to add an asset as a favorite for convenient access or sharing on a second display |
US9098611B2 (en) * | 2012-11-26 | 2015-08-04 | Intouch Technologies, Inc. | Enhanced video interaction for a user interface of a telepresence network |
US9946429B2 (en) * | 2011-06-17 | 2018-04-17 | Microsoft Technology Licensing, Llc | Hierarchical, zoomable presentations of media sets |
USD717813S1 (en) * | 2011-07-25 | 2014-11-18 | Facebook, Inc. | Display panel of a programmed computer system with a graphical user interface |
US9128581B1 (en) * | 2011-09-23 | 2015-09-08 | Amazon Technologies, Inc. | Providing supplemental information for a digital work in a user interface |
USD731507S1 (en) * | 2011-11-17 | 2015-06-09 | Axell Corporation | Display screen with animated graphical user interface |
USD731504S1 (en) * | 2011-11-17 | 2015-06-09 | Axell Corporation | Display screen with graphical user interface |
KR20130056583A (en) * | 2011-11-22 | 2013-05-30 | 삼성전자주식회사 | Method and apparatus for managing of time limited contents in electric device |
KR20130096978A (en) * | 2012-02-23 | 2013-09-02 | 삼성전자주식회사 | User terminal device, server, information providing system based on situation and method thereof |
US11847300B2 (en) * | 2012-03-12 | 2023-12-19 | Comcast Cable Communications, Llc | Electronic information hierarchy |
US10389779B2 (en) | 2012-04-27 | 2019-08-20 | Arris Enterprises Llc | Information processing |
US10277933B2 (en) * | 2012-04-27 | 2019-04-30 | Arris Enterprises Llc | Method and device for augmenting user-input information related to media content |
KR101964348B1 (en) * | 2012-05-18 | 2019-04-01 | 삼성전자주식회사 | Method for line up contents of media equipment, apparatus thereof, and medium storing program source thereof |
US9552414B2 (en) * | 2012-05-22 | 2017-01-24 | Quixey, Inc. | Dynamic filtering in application search |
US9646394B2 (en) | 2012-06-14 | 2017-05-09 | Ntrepid Corporation | Case data visualization application |
US9767110B2 (en) * | 2012-06-14 | 2017-09-19 | Ntrepid Corporation | Case data visualization application |
US9075895B2 (en) * | 2012-06-14 | 2015-07-07 | Ntrepid Corporation | Case data visualization application |
US20140012574A1 (en) * | 2012-06-21 | 2014-01-09 | Maluuba Inc. | Interactive timeline for presenting and organizing tasks |
US20130344468A1 (en) * | 2012-06-26 | 2013-12-26 | Robert Taaffe Lindsay | Obtaining Structured Data From Freeform Textual Answers in a Research Poll |
EP2687967A1 (en) * | 2012-07-19 | 2014-01-22 | Hewlett-Packard Development Company, L.P. | Editing information with slider edit tools |
US9348512B2 (en) * | 2012-08-08 | 2016-05-24 | Nuance Communications, Inc. | Methods for facilitating text entry |
USD738894S1 (en) * | 2012-08-29 | 2015-09-15 | Samsung Electronics Co., Ltd. | Portable electronic device with a graphical user interface |
KR101328199B1 (en) * | 2012-11-05 | 2013-11-13 | 넥스트리밍(주) | Method and terminal and recording medium for editing moving images |
US9912713B1 (en) | 2012-12-17 | 2018-03-06 | MiMedia LLC | Systems and methods for providing dynamically updated image sets for applications |
KR101978216B1 (en) * | 2013-01-04 | 2019-05-14 | 엘지전자 주식회사 | Mobile terminal and method for controlling thereof |
US8537983B1 (en) * | 2013-03-08 | 2013-09-17 | Noble Systems Corporation | Multi-component viewing tool for contact center agents |
US9298758B1 (en) | 2013-03-13 | 2016-03-29 | MiMedia, Inc. | Systems and methods providing media-to-media connection |
US9465521B1 (en) * | 2013-03-13 | 2016-10-11 | MiMedia, Inc. | Event based media interface |
US9886173B2 (en) | 2013-03-15 | 2018-02-06 | Ambient Consulting, LLC | Content presentation and augmentation system and method |
US9460057B2 (en) | 2013-03-15 | 2016-10-04 | Filmstrip, Inc. | Theme-based media content generation system and method |
US9626365B2 (en) | 2013-03-15 | 2017-04-18 | Ambient Consulting, LLC | Content clustering system and method |
US10365797B2 (en) | 2013-03-15 | 2019-07-30 | Ambient Consulting, LLC | Group membership content presentation and augmentation system and method |
US9183232B1 (en) | 2013-03-15 | 2015-11-10 | MiMedia, Inc. | Systems and methods for organizing content using content organization rules and robust content information |
US10257301B1 (en) | 2013-03-15 | 2019-04-09 | MiMedia, Inc. | Systems and methods providing a drive interface for content delivery |
USD750666S1 (en) * | 2013-09-10 | 2016-03-01 | Samsung Electronics Co., Ltd. | Display screen or portion thereof with icon |
GB2519537A (en) * | 2013-10-23 | 2015-04-29 | Life On Show Ltd | A method and system of generating video data with captions |
USD746845S1 (en) * | 2013-10-25 | 2016-01-05 | Microsoft Corporation | Display screen with graphical user interface |
USD746846S1 (en) * | 2013-10-25 | 2016-01-05 | Microsoft Corporation | Display screen with graphical user interface |
USD748121S1 (en) * | 2013-10-25 | 2016-01-26 | Microsoft Corporation | Display screen with animated graphical user interface |
USD748120S1 (en) * | 2013-10-25 | 2016-01-26 | Microsoft Corporation | Display screen with animated graphical user interface |
US20150121224A1 (en) * | 2013-10-28 | 2015-04-30 | At&T Intellectual Property I, L.P. | Method and System to Control a Seek Position of Multimedia Content Using a Rotatable Video Frame Graphic |
KR102106920B1 (en) * | 2013-11-26 | 2020-05-06 | 엘지전자 주식회사 | Mobile terminal and method for controlling of the same |
TWD163530S (en) * | 2014-02-20 | 2014-10-11 | 優力勤股份有限公司 | Graphical user interface on display screen |
US9363568B2 (en) | 2014-03-31 | 2016-06-07 | Rovi Guides, Inc. | Systems and methods for receiving product data |
CN105100922B (en) * | 2014-04-24 | 2018-10-23 | 海信集团有限公司 | A kind of data information localization method and device applied to smart television |
US9767101B2 (en) * | 2014-06-20 | 2017-09-19 | Google Inc. | Media store with a canonical layer for content |
US10102285B2 (en) | 2014-08-27 | 2018-10-16 | International Business Machines Corporation | Consolidating video search for an event |
USD780203S1 (en) * | 2014-10-02 | 2017-02-28 | Deere & Company | Display screen with a graphical user interface |
US10146799B2 (en) * | 2014-11-02 | 2018-12-04 | International Business Machines Corporation | Saving events information in image metadata |
CN105653248A (en) * | 2014-11-14 | 2016-06-08 | 索尼公司 | Control device, method and electronic equipment |
USD774077S1 (en) * | 2015-02-09 | 2016-12-13 | Express Scripts, Inc. | Display screen with graphical user interface |
TWI550421B (en) * | 2015-03-06 | 2016-09-21 | Video search method and system | |
WO2016171874A1 (en) * | 2015-04-22 | 2016-10-27 | Google Inc. | Providing user-interactive graphical timelines |
CN104954848A (en) * | 2015-05-12 | 2015-09-30 | 乐视致新电子科技(天津)有限公司 | Intelligent terminal display graphic user interface control method and device |
US9940746B2 (en) | 2015-06-18 | 2018-04-10 | Apple Inc. | Image fetching for timeline scrubbing of digital media |
TW201710646A (en) * | 2015-09-02 | 2017-03-16 | 湯姆生特許公司 | Method, apparatus and system for facilitating navigation in an extended scene |
USD791155S1 (en) * | 2015-09-30 | 2017-07-04 | Cognitive Scale, Inc. | Display screen with cognitive commerce personal shopper trainer access graphical user interface |
USD781329S1 (en) * | 2015-10-07 | 2017-03-14 | Biogen Ma Inc. | Display screen with graphical user interface |
USD773513S1 (en) * | 2015-10-07 | 2016-12-06 | Biogen Ma, Inc. | Display screen with graphical user interface |
USD781330S1 (en) * | 2015-10-07 | 2017-03-14 | Biogen Ma Inc. | Display screen with graphical user interface |
USD787530S1 (en) * | 2015-10-14 | 2017-05-23 | Patentcloud Corporation | Display screen with graphical user interface |
USD786889S1 (en) * | 2015-10-14 | 2017-05-16 | Patentcloud Corporation | Display screen with graphical user interface |
USD787529S1 (en) * | 2015-10-14 | 2017-05-23 | Patentcloud Corporation | Display screen with graphical user interface |
US10402062B2 (en) * | 2016-04-16 | 2019-09-03 | Apple Inc. | Organized timeline |
TWI698773B (en) * | 2016-04-29 | 2020-07-11 | 姚秉洋 | Method for displaying an on-screen keyboard, computer program product thereof, and non-transitory computer-readable medium thereof |
GB2552689A (en) | 2016-08-03 | 2018-02-07 | Nec Corp | Communication system |
USD819057S1 (en) * | 2017-01-19 | 2018-05-29 | Patentcloud Corporation | Display screen with graphical user interface |
USD864226S1 (en) | 2017-02-22 | 2019-10-22 | Samsung Electronics Co., Ltd. | Display screen or portion thereof with graphical user interface |
US11018884B2 (en) | 2017-04-24 | 2021-05-25 | Microsoft Technology Licensing, Llc | Interactive timeline that displays representations of notable events based on a filter or a search |
CN109429093B (en) * | 2017-08-31 | 2022-08-19 | 中兴通讯股份有限公司 | Video editing method and terminal |
US11856315B2 (en) | 2017-09-29 | 2023-12-26 | Apple Inc. | Media editing application with anchored timeline for captions and subtitles |
USD861023S1 (en) * | 2017-10-27 | 2019-09-24 | Canva Pty Ltd. | Display screen or portion thereof with a graphical user interface |
USD829239S1 (en) | 2017-12-08 | 2018-09-25 | Technonet Co., Ltd. | Video player display screen or portion thereof with graphical user interface |
US11243996B2 (en) | 2018-05-07 | 2022-02-08 | Apple Inc. | Digital asset search user interface |
US10853417B2 (en) * | 2018-08-17 | 2020-12-01 | Adobe Inc. | Generating a platform-based representative image for a digital video |
US20200201904A1 (en) * | 2018-12-21 | 2020-06-25 | AdLaunch International Inc. | Generation of a video file |
US20210216053A1 (en) * | 2020-01-10 | 2021-07-15 | Johnson Controls Technology Company | Building automation systems with automatic metadata tagging and management |
US20240070197A1 (en) * | 2022-08-30 | 2024-02-29 | Twelve Labs, Inc. | Method and apparatus for providing user interface for video retrieval |
Citations (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040046804A1 (en) * | 2002-09-11 | 2004-03-11 | Chang Peter H. | User-driven menu generation system with multiple submenus |
US20040268224A1 (en) * | 2000-03-31 | 2004-12-30 | Balkus Peter A. | Authoring system for combining temporal and nontemporal digital media |
US20050120127A1 (en) * | 2000-04-07 | 2005-06-02 | Janette Bradley | Review and approval system |
US20050183041A1 (en) * | 2004-02-12 | 2005-08-18 | Fuji Xerox Co., Ltd. | Systems and methods for creating and interactive 3D visualization of indexed media |
US20060242164A1 (en) * | 2005-04-22 | 2006-10-26 | Microsoft Corporation | Systems, methods, and user interfaces for storing, searching, navigating, and retrieving electronic information |
US20070022159A1 (en) * | 2002-03-28 | 2007-01-25 | Webex Communications, Inc. | conference recording system |
US20070136656A1 (en) * | 2005-12-09 | 2007-06-14 | Adobe Systems Incorporated | Review of signature based content |
US20070204238A1 (en) * | 2006-02-27 | 2007-08-30 | Microsoft Corporation | Smart Video Presentation |
US20070240072A1 (en) * | 2006-04-10 | 2007-10-11 | Yahoo! Inc. | User interface for editing media assests |
US20070266304A1 (en) * | 2006-05-15 | 2007-11-15 | Microsoft Corporation | Annotating media files |
US20080126191A1 (en) * | 2006-11-08 | 2008-05-29 | Richard Schiavi | System and method for tagging, searching for, and presenting items contained within video media assets |
US20080172399A1 (en) * | 2007-01-17 | 2008-07-17 | Liang-Yu Chi | System and method for automatically organizing bookmarks through the use of tag data |
US20080184121A1 (en) * | 2007-01-31 | 2008-07-31 | Kulas Charles J | Authoring tool for providing tags associated with items in a video playback |
US20080222170A1 (en) * | 2002-02-20 | 2008-09-11 | Microsoft Corporation | Computer system architecture for automatic context associations |
US20080306921A1 (en) * | 2000-01-31 | 2008-12-11 | Kenneth Rothmuller | Digital Media Management Apparatus and Methods |
US20090006475A1 (en) * | 2007-06-29 | 2009-01-01 | Microsoft Corporation | Collecting and Presenting Temporal-Based Action Information |
US20090031239A1 (en) * | 2007-07-17 | 2009-01-29 | Gridiron Software Inc. | Asset browser for computing environment |
US7561160B2 (en) * | 2004-07-15 | 2009-07-14 | Olympus Corporation | Data editing program, data editing method, data editing apparatus and storage medium |
US20090182644A1 (en) * | 2008-01-16 | 2009-07-16 | Nicholas Panagopulos | Systems and methods for content tagging, content viewing and associated transactions |
US20090249185A1 (en) * | 2006-12-22 | 2009-10-01 | Google Inc. | Annotation Framework For Video |
US20100083173A1 (en) * | 2008-07-03 | 2010-04-01 | Germann Stephen R | Method and system for applying metadata to data sets of file objects |
US20100082585A1 (en) * | 2008-09-23 | 2010-04-01 | Disney Enterprises, Inc. | System and method for visual search in a video media player |
US20100088295A1 (en) * | 2008-10-03 | 2010-04-08 | Microsoft Corporation | Co-location visual pattern mining for near-duplicate image retrieval |
US20100158471A1 (en) * | 2006-04-24 | 2010-06-24 | Sony Corporation | Image processing device and image processing method |
US7779358B1 (en) * | 2006-11-30 | 2010-08-17 | Adobe Systems Incorporated | Intelligent content organization based on time gap analysis |
US20100241962A1 (en) * | 2009-03-23 | 2010-09-23 | Peterson Troy A | Multiple content delivery environment |
US20100274673A1 (en) * | 2008-11-01 | 2010-10-28 | Bitesize Media, Inc. | Non-Intrusive Media Linked and Embedded Information Delivery |
US20110010624A1 (en) * | 2009-07-10 | 2011-01-13 | Vanslette Paul J | Synchronizing audio-visual data with event data |
US7889946B1 (en) * | 2005-02-28 | 2011-02-15 | Adobe Systems Incorporated | Facilitating computer-assisted tagging of object instances in digital images |
US7925669B2 (en) * | 2004-10-13 | 2011-04-12 | Sony Corporation | Method and apparatus for audio/video attribute and relationship storage and retrieval for efficient composition |
US20110116769A1 (en) * | 2007-08-03 | 2011-05-19 | Loilo Inc | Interface system for editing video data |
US20110145428A1 (en) * | 2009-12-10 | 2011-06-16 | Hulu Llc | Method and apparatus for navigating a media program via a transcript of media program dialog |
US20110161348A1 (en) * | 2007-08-17 | 2011-06-30 | Avi Oron | System and Method for Automatically Creating a Media Compilation |
US20120017153A1 (en) * | 2010-07-15 | 2012-01-19 | Ken Matsuda | Dynamic video editing |
US20130132839A1 (en) * | 2010-11-30 | 2013-05-23 | Michael Berry | Dynamic Positioning of Timeline Markers for Efficient Display |
US8826117B1 (en) * | 2009-03-25 | 2014-09-02 | Google Inc. | Web-based system for video editing |
Family Cites Families (105)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5241671C1 (en) | 1989-10-26 | 2002-07-02 | Encyclopaedia Britannica Educa | Multimedia search system using a plurality of entry path means which indicate interrelatedness of information |
AU4279893A (en) | 1992-04-10 | 1993-11-18 | Avid Technology, Inc. | A method and apparatus for representing and editing multimedia compositions |
US5682326A (en) | 1992-08-03 | 1997-10-28 | Radius Inc. | Desktop digital video processing system |
US5659792A (en) | 1993-01-15 | 1997-08-19 | Canon Information Systems Research Australia Pty Ltd. | Storyboard system for the simultaneous timing of multiple independent video animation clips |
US5664216A (en) | 1994-03-22 | 1997-09-02 | Blumenau; Trevor | Iconic audiovisual data editing environment |
US5521841A (en) | 1994-03-31 | 1996-05-28 | Siemens Corporate Research, Inc. | Browsing contents of a given video sequence |
JP3837746B2 (en) | 1995-04-08 | 2006-10-25 | ソニー株式会社 | Editing system |
US5659539A (en) | 1995-07-14 | 1997-08-19 | Oracle Corporation | Method and apparatus for frame accurate access of digital audio-visual information |
US5732184A (en) | 1995-10-20 | 1998-03-24 | Digital Processing Systems, Inc. | Video and audio cursor video editing system |
US6154601A (en) | 1996-04-12 | 2000-11-28 | Hitachi Denshi Kabushiki Kaisha | Method for editing image information with aid of computer and editing system |
US6161115A (en) | 1996-04-12 | 2000-12-12 | Avid Technology, Inc. | Media editing system with improved effect management |
US5781188A (en) | 1996-06-27 | 1998-07-14 | Softimage | Indicating activeness of clips and applying effects to clips and tracks in a timeline of a multimedia work |
US6628303B1 (en) | 1996-07-29 | 2003-09-30 | Avid Technology, Inc. | Graphical user interface for a motion video planning and editing system for a computer |
US6154600A (en) | 1996-08-06 | 2000-11-28 | Applied Magic, Inc. | Media editor for non-linear editing system |
CA2257316C (en) | 1997-04-12 | 2006-06-13 | Sony Corporation | Editing device and editing method |
US6686918B1 (en) | 1997-08-01 | 2004-02-03 | Avid Technology, Inc. | Method and system for editing or modifying 3D animations in a non-linear editing environment |
US6134380A (en) | 1997-08-15 | 2000-10-17 | Sony Corporation | Editing apparatus with display of prescribed information on registered material |
JP3944807B2 (en) | 1998-04-02 | 2007-07-18 | ソニー株式会社 | Material selection device and material selection method |
US20020023103A1 (en) | 1998-04-21 | 2002-02-21 | Rejean Gagne | System and method for accessing and manipulating time-based data using meta-clip objects |
JP4131300B2 (en) | 1998-06-26 | 2008-08-13 | ソニー株式会社 | Edit list creation device |
US6144375A (en) | 1998-08-14 | 2000-11-07 | Praja Inc. | Multi-perspective viewer for content-based interactivity |
US6366296B1 (en) | 1998-09-11 | 2002-04-02 | Xerox Corporation | Media browser using multimodal analysis |
JP4103192B2 (en) | 1998-09-17 | 2008-06-18 | ソニー株式会社 | Editing system and editing method |
JP4129657B2 (en) | 1999-02-25 | 2008-08-06 | ソニー株式会社 | Editing apparatus and editing method |
WO2000052645A1 (en) | 1999-03-01 | 2000-09-08 | Matsushita Electric Industrial Co., Ltd. | Document image processor, method for extracting document title, and method for imparting document tag information |
US6539163B1 (en) | 1999-04-16 | 2003-03-25 | Avid Technology, Inc. | Non-linear editing system and method employing reference clips in edit sequences |
AUPQ464099A0 (en) | 1999-12-14 | 2000-01-13 | Canon Kabushiki Kaisha | Emotive editing system |
US6970859B1 (en) | 2000-03-23 | 2005-11-29 | Microsoft Corporation | Searching and sorting media clips having associated style and attributes |
US20010036356A1 (en) | 2000-04-07 | 2001-11-01 | Autodesk, Inc. | Non-linear video editing system |
US20010056434A1 (en) | 2000-04-27 | 2001-12-27 | Smartdisk Corporation | Systems, methods and computer program products for managing multimedia content |
US20040125124A1 (en) | 2000-07-24 | 2004-07-01 | Hyeokman Kim | Techniques for constructing and browsing a hierarchical video structure |
KR20040041082A (en) | 2000-07-24 | 2004-05-13 | 비브콤 인코포레이티드 | System and method for indexing, searching, identifying, and editing portions of electronic multimedia files |
US6476826B1 (en) | 2000-08-22 | 2002-11-05 | Vastvideo, Inc. | Integrated system and method for processing video |
US7444593B1 (en) | 2000-10-04 | 2008-10-28 | Apple Inc. | Disk space management and clip remainder during edit operations |
US7325199B1 (en) | 2000-10-04 | 2008-01-29 | Apple Inc. | Integrated time line for editing |
US6629104B1 (en) | 2000-11-22 | 2003-09-30 | Eastman Kodak Company | Method for adding personalized metadata to a collection of digital images |
WO2002052565A1 (en) | 2000-12-22 | 2002-07-04 | Muvee Technologies Pte Ltd | System and method for media production |
US6741996B1 (en) | 2001-04-18 | 2004-05-25 | Microsoft Corporation | Managing user clips |
GB2374719A (en) | 2001-04-20 | 2002-10-23 | Discreet Logic Inc | Image data processing apparatus with two timelines |
US7432940B2 (en) | 2001-10-12 | 2008-10-07 | Canon Kabushiki Kaisha | Interactive animation of sprites in a video production |
US7480864B2 (en) | 2001-10-12 | 2009-01-20 | Canon Kabushiki Kaisha | Zoom editor |
US6928613B1 (en) | 2001-11-30 | 2005-08-09 | Victor Company Of Japan | Organization, selection, and application of video effects according to zones |
US7289132B1 (en) | 2003-12-19 | 2007-10-30 | Apple Inc. | Method and apparatus for image acquisition, organization, manipulation, and publication |
US7035435B2 (en) | 2002-05-07 | 2006-04-25 | Hewlett-Packard Development Company, L.P. | Scalable video summarization and navigation system and method |
US8238718B2 (en) | 2002-06-19 | 2012-08-07 | Microsoft Corporation | System and method for automatically generating video cliplets from digital video |
US7073127B2 (en) | 2002-07-01 | 2006-07-04 | Arcsoft, Inc. | Video editing GUI with layer view |
US20040098379A1 (en) | 2002-11-19 | 2004-05-20 | Dan Huang | Multi-indexed relationship media organization system |
US7298930B1 (en) | 2002-11-29 | 2007-11-20 | Ricoh Company, Ltd. | Multimodal access of meeting recordings |
US7082572B2 (en) | 2002-12-30 | 2006-07-25 | The Board Of Trustees Of The Leland Stanford Junior University | Methods and apparatus for interactive map-based analysis of digital video content |
US7117453B2 (en) | 2003-01-21 | 2006-10-03 | Microsoft Corporation | Media frame object visualization system |
US20040151469A1 (en) | 2003-01-31 | 2004-08-05 | Engholm Kathryn A. | Video editing timeline with measurement results |
US8392834B2 (en) | 2003-04-09 | 2013-03-05 | Hewlett-Packard Development Company, L.P. | Systems and methods of authoring a multimedia file |
US20040212637A1 (en) | 2003-04-22 | 2004-10-28 | Kivin Varghese | System and Method for Marking and Tagging Wireless Audio and Video Recordings |
US7818658B2 (en) | 2003-12-09 | 2010-10-19 | Yi-Chih Chen | Multimedia presentation system |
JP4061285B2 (en) | 2004-03-31 | 2008-03-12 | 英特維數位科技股份有限公司 | Image editing apparatus, program, and recording medium |
JP4385974B2 (en) * | 2004-05-13 | 2009-12-16 | ソニー株式会社 | Image display method, image processing apparatus, program, and recording medium |
US7975062B2 (en) | 2004-06-07 | 2011-07-05 | Sling Media, Inc. | Capturing and sharing media content |
US7903927B2 (en) | 2004-07-08 | 2011-03-08 | Sony Corporation | Editing apparatus and control method thereof, and program and recording medium |
JP2006031292A (en) * | 2004-07-14 | 2006-02-02 | Fuji Xerox Co Ltd | Document processing apparatus, document processing method, and document processing program |
JP4727342B2 (en) | 2004-09-15 | 2011-07-20 | ソニー株式会社 | Image processing apparatus, image processing method, image processing program, and program storage medium |
US20060078288A1 (en) | 2004-10-12 | 2006-04-13 | Huang Jau H | System and method for embedding multimedia editing information in a multimedia bitstream |
US20070124282A1 (en) | 2004-11-25 | 2007-05-31 | Erland Wittkotter | Video data directory |
US20060136556A1 (en) | 2004-12-17 | 2006-06-22 | Eclips, Llc | Systems and methods for personalizing audio data |
US7548936B2 (en) | 2005-01-12 | 2009-06-16 | Microsoft Corporation | Systems and methods to present web image search results for effective image browsing |
US7434155B2 (en) | 2005-04-04 | 2008-10-07 | Leitch Technology, Inc. | Icon bar display for video editing system |
US20060233514A1 (en) | 2005-04-14 | 2006-10-19 | Shih-Hsiung Weng | System and method of video editing |
US7313755B2 (en) | 2005-04-20 | 2007-12-25 | Microsoft Corporation | Media timeline sorting |
US9648281B2 (en) | 2005-05-23 | 2017-05-09 | Open Text Sa Ulc | System and method for movie segment bookmarking and sharing |
US20070079321A1 (en) * | 2005-09-30 | 2007-04-05 | Yahoo! Inc. | Picture tagging |
US20070203945A1 (en) | 2006-02-28 | 2007-08-30 | Gert Hercules Louw | Method for integrated media preview, analysis, purchase, and display |
US7668869B2 (en) | 2006-04-03 | 2010-02-23 | Digitalsmiths Corporation | Media access system |
US7890867B1 (en) | 2006-06-07 | 2011-02-15 | Adobe Systems Incorporated | Video editing functions displayed on or near video sequences |
US20080072166A1 (en) | 2006-09-14 | 2008-03-20 | Reddy Venkateshwara N | Graphical user interface for creating animation |
US20080104127A1 (en) | 2006-11-01 | 2008-05-01 | United Video Properties, Inc. | Presenting media guidance search results based on relevancy |
US20080120328A1 (en) | 2006-11-20 | 2008-05-22 | Rexee, Inc. | Method of Performing a Weight-Based Search |
JP4905103B2 (en) | 2006-12-12 | 2012-03-28 | 株式会社日立製作所 | Movie playback device |
US20080288869A1 (en) | 2006-12-22 | 2008-11-20 | Apple Inc. | Boolean Search User Interface |
US9142253B2 (en) | 2006-12-22 | 2015-09-22 | Apple Inc. | Associating keywords to media |
US20100050080A1 (en) | 2007-04-13 | 2010-02-25 | Scott Allan Libert | Systems and methods for specifying frame-accurate images for media asset management |
US7539659B2 (en) | 2007-06-15 | 2009-05-26 | Microsoft Corporation | Multidimensional timeline browsers for broadcast media |
US20090089690A1 (en) | 2007-09-28 | 2009-04-02 | Yahoo! Inc. | System and method for improved tag entry for a content item |
WO2009046324A2 (en) | 2007-10-05 | 2009-04-09 | Flickbitz Corporation | Online search, storage, manipulation, and delivery of video content |
US20090204894A1 (en) | 2008-02-11 | 2009-08-13 | Nikhil Bhatt | Image Application Performance Optimization |
WO2009114134A2 (en) | 2008-03-13 | 2009-09-17 | United Video Properties, Inc. | Systems and methods for synchronizing time-shifted media content and related communications |
US8091033B2 (en) | 2008-04-08 | 2012-01-03 | Cisco Technology, Inc. | System for displaying search results along a timeline |
US20090259623A1 (en) | 2008-04-11 | 2009-10-15 | Adobe Systems Incorporated | Systems and Methods for Associating Metadata with Media |
WO2010028169A2 (en) | 2008-09-05 | 2010-03-11 | Fotonauts, Inc. | Reverse tagging of images in system for managing and sharing digital images |
US20100077289A1 (en) | 2008-09-08 | 2010-03-25 | Eastman Kodak Company | Method and Interface for Indexing Related Media From Multiple Sources |
US8270815B2 (en) | 2008-09-22 | 2012-09-18 | A-Peer Holding Group Llc | Online video and audio editing |
WO2010068740A2 (en) * | 2008-12-10 | 2010-06-17 | Simple One Media, Llc | Statistical and visual sports analysis system |
JP2012054619A (en) | 2009-03-19 | 2012-03-15 | Grass Valley Co Ltd | Editing apparatus, editing method, editing program and data structure |
US8407596B2 (en) | 2009-04-22 | 2013-03-26 | Microsoft Corporation | Media timeline interaction |
US8522144B2 (en) | 2009-04-30 | 2013-08-27 | Apple Inc. | Media editing application with candidate clip management |
US20100281371A1 (en) | 2009-04-30 | 2010-11-04 | Peter Warner | Navigation Tool for Video Presentations |
US8566721B2 (en) | 2009-04-30 | 2013-10-22 | Apple Inc. | Editing key-indexed graphs in media editing applications |
US8612858B2 (en) | 2009-05-01 | 2013-12-17 | Apple Inc. | Condensing graphical representations of media clips in a composite display area of a media-editing application |
US8856655B2 (en) | 2009-05-01 | 2014-10-07 | Apple Inc. | Media editing application with capability to focus on graphical composite elements in a media compositing area |
US20110072037A1 (en) | 2009-09-18 | 2011-03-24 | Carey Leigh Lotzer | Intelligent media capture, organization, search and workflow |
US10324605B2 (en) | 2011-02-16 | 2019-06-18 | Apple Inc. | Media-editing application with novel editing tools |
US20120210219A1 (en) | 2011-02-16 | 2012-08-16 | Giovanni Agnoli | Keywords and dynamic folder structures |
US9536564B2 (en) | 2011-09-20 | 2017-01-03 | Apple Inc. | Role-facilitated editing operations |
US20130073964A1 (en) | 2011-09-20 | 2013-03-21 | Brian Meaney | Outputting media presentations using roles assigned to content |
US20130073962A1 (en) | 2011-09-20 | 2013-03-21 | Colleen Pendergast | Modifying roles assigned to media content |
US20130073960A1 (en) | 2011-09-20 | 2013-03-21 | Aaron M. Eppolito | Audio meters and parameter controls |
US20130073961A1 (en) | 2011-09-20 | 2013-03-21 | Giovanni Agnoli | Media Editing Application for Assigning Roles to Media Content |
- 2011-05-25 US US13/115,970 patent/US20120210219A1/en not_active Abandoned
- 2011-05-25 US US13/115,966 patent/US9026909B2/en active Active
- 2011-05-25 US US13/115,973 patent/US8745499B2/en active Active
Patent Citations (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080306921A1 (en) * | 2000-01-31 | 2008-12-11 | Kenneth Rothmuller | Digital Media Management Apparatus and Methods |
US20040268224A1 (en) * | 2000-03-31 | 2004-12-30 | Balkus Peter A. | Authoring system for combining temporal and nontemporal digital media |
US20050120127A1 (en) * | 2000-04-07 | 2005-06-02 | Janette Bradley | Review and approval system |
US20080222170A1 (en) * | 2002-02-20 | 2008-09-11 | Microsoft Corporation | Computer system architecture for automatic context associations |
US20070022159A1 (en) * | 2002-03-28 | 2007-01-25 | Webex Communications, Inc. | Conference recording system |
US20040046804A1 (en) * | 2002-09-11 | 2004-03-11 | Chang Peter H. | User-driven menu generation system with multiple submenus |
US20050183041A1 (en) * | 2004-02-12 | 2005-08-18 | Fuji Xerox Co., Ltd. | Systems and methods for creating an interactive 3D visualization of indexed media |
US7561160B2 (en) * | 2004-07-15 | 2009-07-14 | Olympus Corporation | Data editing program, data editing method, data editing apparatus and storage medium |
US7925669B2 (en) * | 2004-10-13 | 2011-04-12 | Sony Corporation | Method and apparatus for audio/video attribute and relationship storage and retrieval for efficient composition |
US7889946B1 (en) * | 2005-02-28 | 2011-02-15 | Adobe Systems Incorporated | Facilitating computer-assisted tagging of object instances in digital images |
US20060242164A1 (en) * | 2005-04-22 | 2006-10-26 | Microsoft Corporation | Systems, methods, and user interfaces for storing, searching, navigating, and retrieving electronic information |
US20070136656A1 (en) * | 2005-12-09 | 2007-06-14 | Adobe Systems Incorporated | Review of signature based content |
US20070204238A1 (en) * | 2006-02-27 | 2007-08-30 | Microsoft Corporation | Smart Video Presentation |
US20070240072A1 (en) * | 2006-04-10 | 2007-10-11 | Yahoo! Inc. | User interface for editing media assets |
US20100158471A1 (en) * | 2006-04-24 | 2010-06-24 | Sony Corporation | Image processing device and image processing method |
US20070266304A1 (en) * | 2006-05-15 | 2007-11-15 | Microsoft Corporation | Annotating media files |
US20080126191A1 (en) * | 2006-11-08 | 2008-05-29 | Richard Schiavi | System and method for tagging, searching for, and presenting items contained within video media assets |
US7779358B1 (en) * | 2006-11-30 | 2010-08-17 | Adobe Systems Incorporated | Intelligent content organization based on time gap analysis |
US20090249185A1 (en) * | 2006-12-22 | 2009-10-01 | Google Inc. | Annotation Framework For Video |
US20080172399A1 (en) * | 2007-01-17 | 2008-07-17 | Liang-Yu Chi | System and method for automatically organizing bookmarks through the use of tag data |
US20080184121A1 (en) * | 2007-01-31 | 2008-07-31 | Kulas Charles J | Authoring tool for providing tags associated with items in a video playback |
US20090006475A1 (en) * | 2007-06-29 | 2009-01-01 | Microsoft Corporation | Collecting and Presenting Temporal-Based Action Information |
US20090031239A1 (en) * | 2007-07-17 | 2009-01-29 | Gridiron Software Inc. | Asset browser for computing environment |
US20110116769A1 (en) * | 2007-08-03 | 2011-05-19 | Loilo Inc | Interface system for editing video data |
US20110161348A1 (en) * | 2007-08-17 | 2011-06-30 | Avi Oron | System and Method for Automatically Creating a Media Compilation |
US20090182644A1 (en) * | 2008-01-16 | 2009-07-16 | Nicholas Panagopulos | Systems and methods for content tagging, content viewing and associated transactions |
US20100083173A1 (en) * | 2008-07-03 | 2010-04-01 | Germann Stephen R | Method and system for applying metadata to data sets of file objects |
US20100082585A1 (en) * | 2008-09-23 | 2010-04-01 | Disney Enterprises, Inc. | System and method for visual search in a video media player |
US20100088295A1 (en) * | 2008-10-03 | 2010-04-08 | Microsoft Corporation | Co-location visual pattern mining for near-duplicate image retrieval |
US20100274673A1 (en) * | 2008-11-01 | 2010-10-28 | Bitesize Media, Inc. | Non-Intrusive Media Linked and Embedded Information Delivery |
US20100241962A1 (en) * | 2009-03-23 | 2010-09-23 | Peterson Troy A | Multiple content delivery environment |
US8826117B1 (en) * | 2009-03-25 | 2014-09-02 | Google Inc. | Web-based system for video editing |
US20110010624A1 (en) * | 2009-07-10 | 2011-01-13 | Vanslette Paul J | Synchronizing audio-visual data with event data |
US20110145428A1 (en) * | 2009-12-10 | 2011-06-16 | Hulu Llc | Method and apparatus for navigating a media program via a transcript of media program dialog |
US20120017153A1 (en) * | 2010-07-15 | 2012-01-19 | Ken Matsuda | Dynamic video editing |
US20130132839A1 (en) * | 2010-11-30 | 2013-05-23 | Michael Berry | Dynamic Positioning of Timeline Markers for Efficient Display |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9870802B2 (en) | 2011-01-28 | 2018-01-16 | Apple Inc. | Media clip management |
US11747972B2 (en) | 2011-02-16 | 2023-09-05 | Apple Inc. | Media-editing application with novel editing tools |
US11157154B2 (en) | 2011-02-16 | 2021-10-26 | Apple Inc. | Media-editing application with novel editing tools |
US9026909B2 (en) | 2011-02-16 | 2015-05-05 | Apple Inc. | Keyword list view |
US10324605B2 (en) | 2011-02-16 | 2019-06-18 | Apple Inc. | Media-editing application with novel editing tools |
US20120290937A1 (en) * | 2011-05-12 | 2012-11-15 | Lmr Inventions, Llc | Distribution of media to mobile communication devices |
US9240215B2 (en) | 2011-09-20 | 2016-01-19 | Apple Inc. | Editing operations facilitated by metadata |
US9536564B2 (en) | 2011-09-20 | 2017-01-03 | Apple Inc. | Role-facilitated editing operations |
US10489734B2 (en) | 2012-11-30 | 2019-11-26 | Apple Inc. | Managed assessment of submitted digital content |
US8990188B2 (en) * | 2012-11-30 | 2015-03-24 | Apple Inc. | Managed assessment of submitted digital content |
US20140156656A1 (en) * | 2012-11-30 | 2014-06-05 | Apple Inc. | Managed Assessment of Submitted Digital Content |
US9128994B2 (en) | 2013-03-14 | 2015-09-08 | Microsoft Technology Licensing, Llc | Visually representing queries of multi-source data |
US10705715B2 (en) * | 2014-02-06 | 2020-07-07 | Edupresent Llc | Collaborative group video production system |
US20160034559A1 (en) * | 2014-07-31 | 2016-02-04 | Samsung Electronics Co., Ltd. | Method and device for classifying content |
US20220050810A1 (en) * | 2019-03-14 | 2022-02-17 | Rovi Guides, Inc. | Automatically assigning application shortcuts to folders with user-defined names |
US11755533B2 (en) * | 2019-03-14 | 2023-09-12 | Rovi Guides, Inc. | Automatically assigning application shortcuts to folders with user-defined names |
US11942117B2 (en) | 2019-04-01 | 2024-03-26 | Blackmagic Design Pty Ltd | Media management system |
EP3996092A1 (en) * | 2020-11-09 | 2022-05-11 | Blackmagic Design Pty Ltd | Video editing or media management system |
US11721365B2 (en) | 2020-11-09 | 2023-08-08 | Blackmagic Design Pty Ltd | Video editing or media management system |
GB2609706A (en) * | 2021-05-26 | 2023-02-15 | Adobe Inc | Interacting with semantic video segments through interactive tiles |
Also Published As
Publication number | Publication date |
---|---|
US20120210220A1 (en) | 2012-08-16 |
US8745499B2 (en) | 2014-06-03 |
US20120210218A1 (en) | 2012-08-16 |
US9026909B2 (en) | 2015-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8745499B2 (en) | Timeline search and index | |
US11157154B2 (en) | Media-editing application with novel editing tools | |
US8875025B2 (en) | Media-editing application with media clips grouping capabilities | |
US20130073964A1 (en) | Outputting media presentations using roles assigned to content | |
US8966367B2 (en) | Anchor override for a media-editing application with an anchored timeline | |
US8555170B2 (en) | Tool for presenting and editing a storyboard representation of a composite presentation | |
JP6214619B2 (en) | Generating multimedia clips | |
US9536564B2 (en) | Role-facilitated editing operations | |
US20130073961A1 (en) | Media Editing Application for Assigning Roles to Media Content | |
US9997196B2 (en) | Retiming media presentations | |
US7917550B2 (en) | System and methods for enhanced metadata entry | |
US10015463B2 (en) | Logging events in media files including frame matching | |
Myers et al. | A multi-view intelligent editor for digital video libraries | |
US20130073962A1 (en) | Modifying roles assigned to media content | |
US20070162857A1 (en) | Automated multimedia authoring | |
US11747972B2 (en) | Media-editing application with novel editing tools | |
US20110307526A1 (en) | Editing 3D Video | |
WO2013177476A1 (en) | Systems and methods involving creation of information modules, including server, media searching. user interface and/or other features | |
US20140006978A1 (en) | Intelligent browser for media editing applications | |
EP2742599A1 (en) | Logging events in media files including frame matching | |
Dixon | How to Use Adobe Premiere 6.5 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: APPLE INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AGNOLI, GIOVANNI;PENDERGAST, COLLEEN;OLSHAVSKY, RYAN M.;AND OTHERS;REEL/FRAME:026339/0916
Effective date: 20110523
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |