US20100077289A1 - Method and Interface for Indexing Related Media From Multiple Sources - Google Patents

Method and Interface for Indexing Related Media From Multiple Sources

Info

Publication number
US20100077289A1
Authority
US
United States
Prior art keywords
digital content
record
records
capture
content records
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/206,319
Inventor
Madirakshi Das
Cathleen D. Cerosaletti
Alexander C. Loui
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intellectual Ventures Fund 83 LLC
Original Assignee
Eastman Kodak Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eastman Kodak Co filed Critical Eastman Kodak Co
Priority to US12/206,319 priority Critical patent/US20100077289A1/en
Assigned to EASTMAN KODAK COMPANY reassignment EASTMAN KODAK COMPANY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CEROSALETTI, CATHLEEN D., DAS, MADIRAKSHI, LOUI, ALEXANDER C.
Priority to PCT/US2009/004990 priority patent/WO2010027481A1/en
Publication of US20100077289A1 publication Critical patent/US20100077289A1/en
Assigned to CITICORP NORTH AMERICA, INC., AS AGENT reassignment CITICORP NORTH AMERICA, INC., AS AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: EASTMAN KODAK COMPANY, PAKON, INC.
Priority to US13/463,183 priority patent/US9218367B2/en
Assigned to KODAK REALTY, INC., LASER-PACIFIC MEDIA CORPORATION, NPEC INC., KODAK AVIATION LEASING LLC, KODAK PORTUGUESA LIMITED, PAKON, INC., FPC INC., QUALEX INC., EASTMAN KODAK COMPANY, CREO MANUFACTURING AMERICA LLC, KODAK AMERICAS, LTD., KODAK PHILIPPINES, LTD., EASTMAN KODAK INTERNATIONAL CAPITAL COMPANY, INC., FAR EAST DEVELOPMENT LTD., KODAK (NEAR EAST), INC., KODAK IMAGING NETWORK, INC. reassignment KODAK REALTY, INC. PATENT RELEASE Assignors: CITICORP NORTH AMERICA, INC., WILMINGTON TRUST, NATIONAL ASSOCIATION
Assigned to INTELLECTUAL VENTURES FUND 83 LLC reassignment INTELLECTUAL VENTURES FUND 83 LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: EASTMAN KODAK COMPANY
Assigned to MONUMENT PEAK VENTURES, LLC reassignment MONUMENT PEAK VENTURES, LLC RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: INTELLECTUAL VENTURES FUND 83 LLC

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2477 Temporal data queries

Abstract

The invention relates generally to the field of digital image processing, and in particular to a method for associating and viewing related video and still images. More particularly, the present invention is directed to methods for associating and/or viewing digital content records comprising ordering a first set of digital content records and a second set of digital content records based upon information associated with each of the digital content records.

Description

    FIELD OF THE INVENTION
  • The invention relates generally to the field of digital image processing, and in particular to a method for associating and viewing related video and still images.
  • BACKGROUND OF THE INVENTION
  • The proliferation of digital image and video capture devices has led to multiple modalities of capture being present at any picture-taking occasion. For example, it is possible to have both videos and still images, since most digital cameras now support capture of video clips and digital camcorders can capture still images. At an important family event or a public event, such as a wedding or a sports match, there are usually multiple still and video capture devices capturing the scene simultaneously. This scenario results in videos and stills that overlap in time. For instance, multiple stills may be captured during the duration of a video clip, and multiple video sequences may overlap to various degrees. The current state of the art in consumer image management software, such as Google Picasa, Adobe Photo Album and Kodak EasyShare, displays still images and videos in chronological order with no ability to indicate overlapping captures. In some cases, the date/time of file creation (not the capture date/time) is used for video, which effectively removes video clips from the natural timeline and places them at one end of a batch of media transferred from capture device to storage device. In the best cases, videos are inserted at the point in the timeline indicated by the start of capture. Still images or video captured during the duration of a longer video clip appear after its thumbnail representation, with no indication of possible overlap, where overlaps could be in time or in another relevant dimension such as location or event.
  • This mode of display makes it difficult to pick the best representation of a given moment, choose between different modalities, or create composites of different modalities. An alternative is to provide browsing mechanisms that explicitly show overlaps between captures of one or more modalities and also allow the user to switch between them on a UI display.
  • In U.S. Pat. No. 6,950,989, Rosenzweig et al describe a timeline-based browsing view for image collections. The images in the collection can be viewed at different time granularities (year-by-year, month-by-month, etc.), and also by other metadata such as capture location and the people in the picture. However, it is assumed that all media in the collection can be placed in order on the timeline, and overlaps in time between media are not handled.
  • A few patents discuss some aspects of media overlapping in time or media captured at the same event, but in very limited circumstances and in contexts other than browsing a consumer image collection. In U.S. Pat. No. 6,701,014, Syeda-Mahmood describes a way to associate slides (say, in Microsoft PowerPoint) with the slides being shown on a screen in a video of the presentation. In U.S. Pat. No. 7,102,644, Hoddie et al describe a way to embed movies within a movie in cases where there is overlap in content between them. The intention is to allow video editors to edit all the related clips at the same time, so that any changes made in one stream can be reflected in the other related ones. In U.S. Pat. No. 7,028,264, Santoro et al describe an interface that shows multiple sources on the same screen, but these sources are not related to each other and are not linked in any way. For example, the sources could be different television channels covering news, sports, weather, and stocks. In U.S. Pat. No. 6,978,047, Montgomery describes storing multiple views of the same event for surveillance applications, but in this case the video cameras are synchronized. This system does not provide means for relating the asynchronous captures that occur in consumer event capture, and no browsing interface is provided. In U.S. Pat. No. 7,158,689, Valleriano et al handle asynchronously captured images of an event, but the event type is a special case of a timed event, such as a race, and contestants are tracked at various fixed stations. These methods are specific to the applications being described and provide no framework for handling the generalized problem of browsing multiple sources of media captured asynchronously at the same event.
  • SUMMARY OF THE INVENTION
  • In accordance with one aspect of the present invention there is provided a method for organizing digital content records including: receiving a first set of digital content records captured from a first digital-content capture device; receiving a second set of digital content records captured from a second digital-content capture device; ordering the first set of digital content records and the second set of digital content records along a common capture timeline; and storing results of the ordering step in a processor-accessible memory system.
  • In accordance with another aspect of the present invention there is provided a method for organizing digital content records including: receiving a first set of digital content records captured from a first digital-content capture device, each digital content record in the first set having associated therewith time/date of capture information defining when the associated digital content record was captured, wherein the capture information associated with a particular digital content record from the first set defines that its associated digital content record was captured over a contiguous span of time; receiving a second set of digital content records captured from a second digital-content capture device, each digital content record in the second set having associated therewith time/date of capture information defining when the associated digital content record was captured; ordering the first set of digital content records and the second set of digital content records along a common capture timeline based at least upon the time/date of capture information, or a derivative thereof associated with each of the digital content records in the first and second sets, wherein the ordering step causes the particular digital content record and at least one other digital content record to be associated with a same time/date within the span of time in the capture timeline; and storing results of the ordering step in a processor-accessible memory system.
  • In accordance with another aspect of the present invention there is provided a method for displaying digital content records including: receiving a set of digital content records organized along a timeline, each digital content record being associated with a point on or segment of the timeline based at least upon its time/date of capture and, optionally, span of capture, at least two digital content records being associated with at least a same point on the timeline; identifying a current point on the timeline; displaying a digital content record of the set of digital content records as a focus record, the focus record associated with the current point on the timeline and being displayed prominently on a display; displaying first other digital content records of the set of digital content records on the display, the first other digital content records having time/dates of capture or spans of capture temporally adjacent to the current point on the timeline and being displayed less prominently than the focus record on the display; and displaying second other digital content records of the set of digital content records on the display, the second other digital content records having a time/date of capture or a span of capture equal to or including the current point on the timeline.
  • In accordance with yet another aspect of the present invention there is provided a method for presenting digital content records including: instructing presentation of a first digital content record on an output device, wherein the first digital content record is a video or audio digital content record; identifying a second digital content record having an association with the first digital content record, wherein the association is based at least upon adjacency in time, a common object represented therein, a common event during which the first and second digital content records were captured, or a common location at which the digital content records were captured; and instructing presentation of the second digital content record on the output device while the first digital content record is being presented.
  • In accordance with a further aspect of the present invention there is provided a system for indexing media from different sources including: a means for receiving a first set of digital content records captured from a first digital-content capture device; a means for receiving a second set of digital content records captured from a second digital-content capture device; a means for ordering the first set of digital content records and the second set of digital content records along a common capture timeline; and a means for storing results of the ordering step in a processor-accessible memory system.
  • These and other aspects, objects, features and advantages of the present invention will be more clearly understood and appreciated from a review of the following detailed description of the preferred embodiments and appended claims and by reference to the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention will be more readily understood from the detailed description of exemplary embodiments presented below considered in conjunction with the attached drawings, of which:
  • FIG. 1 illustrates a system for automatically indexing media from different sources, according to an embodiment of the present invention;
  • FIG. 2 illustrates a method for indexing multiple media from different sources, according to an embodiment of the present invention;
  • FIG. 3 illustrates media from different sources is aligned according to the time of capture, according to an embodiment of the present invention;
  • FIG. 4 illustrates an example of the input image sequence from different sources, according to an embodiment of the present invention;
  • FIG. 5 illustrates an example of the viewing window, according to an embodiment of the present invention;
  • FIG. 6 illustrates a possible use scenario that further illustrates the concept of the present invention;
  • FIG. 7 illustrates a method for displaying digital content records, according to an embodiment of the present invention;
  • FIG. 8 illustrates a method for presenting digital content records, according to an embodiment of the present invention.
  • It is to be understood that the attached drawings are for purposes of illustrating the concepts of the invention and may not be to scale.
  • DETAILED DESCRIPTION OF THE INVENTION
  • FIG. 1 illustrates a system 100 for automatically indexing media from different sources, according to an embodiment of the present invention. The system 100 includes a data processing system 110, a peripheral system 120, a user interface system 130, and a processor-accessible memory system 140. The processor-accessible memory system 140, the peripheral system 120, and the user interface system 130 are communicatively connected to the data processing system 110.
  • The data processing system 110 includes one or more data processing devices that implement the processes of the various embodiments of the present invention, including the example processes of FIGS. 2, 3, 4, 5, 6, 7, and 8 described herein. The phrases “data processing device” or “data processor” are intended to include any data processing device, such as a central processing unit (“CPU”), a desktop computer, a laptop computer, a mainframe computer, a personal digital assistant, a Blackberry™, a digital camera, a cellular phone, or any other device for processing data, managing data, or handling data, whether implemented with electrical, magnetic, optical, biological components, or otherwise.
  • The processor-accessible memory system 140 includes one or more processor-accessible memories configured to store information, including the information needed to execute the processes of the various embodiments of the present invention, including the example processes of FIGS. 2, 3, 4, 5, 6, 7, and 8 described herein. The processor-accessible memory system 140 may be a distributed processor-accessible memory system including multiple processor-accessible memories communicatively connected to the data processing system 110 via a plurality of computers and/or devices. On the other hand, the processor-accessible memory system 140 need not be a distributed processor-accessible memory system and, consequently, may include one or more processor-accessible memories located within a single data processor or device.
  • The phrase “processor-accessible memory” is intended to include any processor-accessible data storage device, whether volatile or nonvolatile, electronic, magnetic, optical, or otherwise, including but not limited to, floppy disks, hard disks, Compact Discs, DVDs, flash memories, ROMs, and RAMs.
  • The phrase “communicatively connected” is intended to include any type of connection, whether wired or wireless, between devices, data processors, or programs in which data may be communicated. Further, the phrase “communicatively connected” is intended to include a connection between devices or programs within a single data processor, a connection between devices or programs located in different data processors, and a connection between devices not located in data processors at all. In this regard, although the processor-accessible memory system 140 is shown separately from the data processing system 110, one skilled in the art will appreciate that the processor-accessible memory system 140 may be stored completely or partially within the data processing system 110. Further in this regard, although the peripheral system 120 and the user interface system 130 are shown separately from the data processing system 110, one skilled in the art will appreciate that one or both of such systems may be stored completely or partially within the data processing system 110.
  • The peripheral system 120 may include one or more devices configured to provide digital content records to the data processing system 110. For example, the peripheral system 120 may include digital video cameras, cellular phones, regular digital cameras, or other data processors. The data processing system 110, upon receipt of digital content records from a device in the peripheral system 120, may store such digital content records in the processor-accessible memory system 140.
  • The user interface system 130 may include a mouse, a keyboard, another computer, or any device or combination of devices from which data is input to the data processing system 110. In this regard, although the peripheral system 120 is shown separately from the user interface system 130, the peripheral system 120 may be included as part of the user interface system 130.
  • The user interface system 130 also may include a display device, a processor-accessible memory, or any device or combination of devices to which data is output by the data processing system 110. In this regard, if the user interface system 130 includes a processor-accessible memory, such memory may be part of the processor-accessible memory system 140 even though the user interface system 130 and the processor-accessible memory system 140 are shown separately in FIG. 1.
  • The main steps in automatically indexing media from different sources are shown in FIG. 2. The phrase “digital content record”, as used herein, refers to any digital content record, such as a digital still image, a digital audio file, a digital video file, etc. Note that the phrases “digital content record” and “media” are used interchangeably in this invention. If text input is enabled on the device (such as text messaging on a cell phone), such inputs can also be included in the broad category of captured “digital content records.” In this scenario, multiple users can upload their digital content records to a common location. The digital content records can be captured with different cameras as well as by different picture takers. The collection may not be owned by any one user.
  • Referring to FIG. 2, the first step 210 is receiving a first set of digital content records captured from a first digital-content capture device, each of the digital content records having associated therewith time/date of capture information defining when the associated digital content record was captured, and wherein the capture information associated with a particular digital content record from the first set defines that its associated digital content record was captured over a contiguous span of time. The second step 220 is receiving a second set of digital content records captured from a second digital-content capture device, each digital content record in the second set also having associated therewith time/date of capture information.
  • Next, step 230 is to place the digital content records on a common capture timeline. Media from digital sources contain the time of capture as part of the metadata associated with the digital content record. The digital content records from different sources are aligned according to the time of capture as shown in FIG. 3. The time settings on the devices capturing a scene may not be synchronized, leading to different time stamps on digital content records captured at the same instant. In this case, the user can modify the time/date of capture on each device at any given instant, allowing the system to slide the timeline for each device until they are aligned 100 based on the modified time/date of capture information. Alternatively, the user can manually align a single digital content record capture from each capture stream, and the system can align the timelines based on the time differences between the aligned digital content records 120. The user may also provide time correspondences between different sources even when they have correct time settings, if they wish to combine the digital content records for some special purpose. For example, they may intend to combine correctly time-stamped digital content records taken in different time zones to show events that were occurring concurrently in different locations. User input may also be used to keep digital content records from overlapping time-frames separate. For example, the user may choose to keep the timelines separate when the media streams are from unrelated events. In this case, all subsequent steps are applied to the separated streams individually. A sketch of this alignment step appears below.
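  • By way of illustration only, the following Python sketch shows one way such an alignment could work (this code is not part of the patent; all record names and timestamps are hypothetical). A clock offset is derived from one pair of records the user marks as simultaneous, and the rest of that device's stream is slid onto the common timeline:

    from datetime import datetime, timedelta

    def alignment_offset(time_on_a: datetime, time_on_b: datetime) -> timedelta:
        # Offset to add to device B's timestamps so they agree with device A's
        # clock, given one pair of records the user marked as simultaneous.
        return time_on_a - time_on_b

    def shift_stream(records: list, offset: timedelta) -> list:
        # Slide a whole capture stream onto the common timeline.
        return [{**r, "capture_time": r["capture_time"] + offset} for r in records]

    # Usage: the user marks stills "A17" and "B03" as showing the same instant.
    offset = alignment_offset(datetime(2008, 6, 14, 15, 2, 10),
                              datetime(2008, 6, 14, 14, 59, 55))
    stream_b = [{"id": "B03", "capture_time": datetime(2008, 6, 14, 14, 59, 55)}]
    stream_b = shift_stream(stream_b, offset)  # "B03" now reads 15:02:10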
  • Alignment of the digital content may also be based on user-provided annotations at user-defined points along the common timeline, where the user-provided annotations include text data, audio data, video data, or graphical data; text data may include text messages, web links, or web logs.
  • Automated time alignment of the capture devices based on image similarity is another alternative. A method for aligning media streams when the capture date-time information is unavailable is described in commonly assigned U.S. Patent Application 20060200475 entitled “Additive clustering of images lacking individual date/time information.”
  • The digital content records are then ordered chronologically based on their relative position on the common timeline, wherein the ordering step causes the particular digital content record and at least one other digital content record to be associated with a same time/date within the span of time in the capture timeline. For video clips, the start time of the video is used for the ordering step. Note that the end time of a video clip can also be computed, if it is not available in the metadata inserted by the capturing device, by dividing the total number of frames by the frame rate of the capture device and adding the result to the known start time. The end time is needed to determine the time difference from the next digital content record, as described later. A minimal sketch of this step follows.
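  • The sketch below assumes each record carries a start time, frame count, and frame rate (the field names are illustrative, not from the patent); a still image is treated as a zero-duration capture:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class ContentRecord:
        record_id: str
        start: datetime
        frame_count: int = 1      # 1 for a still image
        frame_rate: float = 0.0   # 0 for a still image

        @property
        def end(self) -> datetime:
            # End time = start + total frames / frame rate; stills are instantaneous.
            if self.frame_rate <= 0:
                return self.start
            return self.start + timedelta(seconds=self.frame_count / self.frame_rate)

    def order_on_timeline(records: list) -> list:
        # Records are ordered by their start of capture, as described above.
        return sorted(records, key=lambda r: r.start)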
  • Referring to FIG. 2, key-frames are extracted from video clips 240. There are many published methods for extracting key-frames from video. As an example, Calic and Izquierdo propose a real-time method for scene change detection and key-frame extraction by analyzing statistics of the macro-block features extracted from the MPEG compressed stream in “Efficient Key-Frame Extraction and Video Analysis” published in IEEE International Conference on Information Technology: Coding and Computing, 2002. The time of capture of each key-frame is computed by dividing the frame number by the frame-rate of the capture device, and adding this to the known start time.
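  • Key-frame detection itself (e.g., the scene-change analysis cited above) is beyond the scope of a short example, but assuming key-frame indices are already available and OpenCV is installed, the capture-time formula in the text translates directly:

    from datetime import datetime, timedelta
    import cv2  # OpenCV, assumed available

    def keyframe_capture_times(video_path: str, keyframe_indices: list,
                               video_start: datetime) -> list:
        # Capture time of a key-frame = video start + frame number / frame rate.
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        cap.release()
        return [video_start + timedelta(seconds=i / fps) for i in keyframe_indices]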
  • Referring to FIG. 2, the digital content records on the merged timeline are clustered into events 250. A method for automatically grouping images into events and sub-events is described in U.S. Pat. No. 6,606,411 B1, to Loui and Pavie (which is hereby incorporated herein by reference). Date and time of capture provided by digital camera metadata and block-level color histogram similarity are used to determine events and sub-events. First, time intervals between adjacent digital content records (time differences) are computed. A histogram of the time differences vs. number of digital content records is then prepared. If desired, the histogram can then be mapped to a scaled histogram using a time difference scaling function. This mapping substantially maintains small time differences and compresses large time differences. A two-means clustering is then performed on the mapped time-difference histogram, separating the mapped histogram into two clusters based on the time difference. Normally, events are separated by large time differences. The cluster having the larger time differences is considered to represent the time differences that correspond to boundaries between events. A sketch of this boundary detection appears below.
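  • The scaling function is not specified here, so the sketch below assumes a logarithmic mapping, which preserves small time differences while compressing large ones, and implements two-means clustering directly in one dimension:

    import numpy as np

    def event_boundaries(capture_times_sec) -> np.ndarray:
        # capture_times_sec: capture times (in seconds) of all stills and
        # key-frames from all sources, in any order.
        times = np.sort(np.asarray(capture_times_sec, dtype=float))
        if times.size < 2:
            return np.array([], dtype=int)
        diffs = np.diff(times)
        scaled = np.log1p(diffs)  # assumed scaling: keep small gaps, compress large ones

        # Two-means clustering in 1-D: alternate assignment and centroid update.
        lo, hi = scaled.min(), scaled.max()
        for _ in range(100):
            is_large = np.abs(scaled - hi) < np.abs(scaled - lo)
            new_lo = scaled[~is_large].mean() if (~is_large).any() else lo
            new_hi = scaled[is_large].mean() if is_large.any() else hi
            if np.isclose(new_lo, lo) and np.isclose(new_hi, hi):
                break
            lo, hi = new_lo, new_hi

        # Gaps assigned to the large-difference cluster are event boundaries:
        # a boundary at index i falls between record i and record i + 1.
        return np.where(is_large)[0]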
  • In the scenario of this invention, the “image” set that is provided to the event clustering algorithm (described in U.S. Pat. No. 6,606,411) includes still images as well as key-frames from video clips (along with their computed times of capture) from all sources combined. For example, referring to FIG. 4, the input set of images would be B1-A1-B2-C1-B3-B4a-A2-B4b-C2a-B4c-B5-C2b-C2c-A3a-A3b-B6, where the first letter refers to the source, the number refers to the order within that source, and the last letter, if present, indicates the key-frame's order within the video clip. The algorithm produces event groups based on time differences, and sub-events based on image similarity. Since block-level color histogram similarity is used to determine sub-events, each sub-event extracted using U.S. Pat. No. 6,606,411 has a consistent color distribution, and therefore these pictures are likely to be of the same scene. It is to be noted that digital content records from different sources may be part of the same event, since no distinction is made based on the source of the digital content record.
  • Referring to FIG. 2, links are created 260 between digital content record segments contained within a single event as follows: (a) still images and other one-time digital inputs, such as text/voice annotations, are linked to other stills and video key-frames from all sources that are within a threshold (typically, a few minutes) of their capture time; (b) video clips and other continuous captures, such as sound recordings, are linked to still images and key-frames from all sources that fall within their duration of capture. A sketch of these two rules follows.
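  • The sketch below uses a 5-minute threshold standing in for the unspecified “few minutes” and an illustrative tuple layout for records:

    from datetime import timedelta

    LINK_THRESHOLD = timedelta(minutes=5)  # illustrative value for "a few minutes"

    def link_records(event_records: list) -> set:
        # event_records: (record_id, start, end, is_continuous) tuples, where
        # end == start for stills and other one-time inputs.
        links = set()
        for rid_a, start_a, end_a, continuous_a in event_records:
            for rid_b, start_b, _end_b, _cont_b in event_records:
                if rid_a == rid_b:
                    continue
                if not continuous_a:
                    # (a) one-time input: link to captures near it in time.
                    if abs(start_a - start_b) <= LINK_THRESHOLD:
                        links.add((rid_a, rid_b))
                elif start_a <= start_b <= end_a:
                    # (b) continuous capture: link to captures inside its span.
                    links.add((rid_a, rid_b))
        return links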
  • Referring to FIG. 2, step 270 involves the ability to view the linked content. FIG. 5 provides an example of the viewing window. The digital content record (still or video) currently being viewed appears in the main viewing area. If there are any sound/text snippets that link to the content currently being viewed, an icon appears as shown, which the user can click to access them. All digital content records linked to the digital content record being viewed appear as thumbnails in the lower panel. The digital content record closest in time to the digital content record being viewed appears in the picture-in-picture area. Clicking on this area swaps the content in the main area with this content. Clicking on a thumbnail moves that record to the main viewing area.
  • In another embodiment, the links between digital content records are created based on semantic object matches. For example, links are generated between images containing a particular person and video segments that contain the same person. This allows a user to view still images taken of people who appear in videos, or to view a video clip of what a person is doing or saying at the instant a particular still image was taken. In commonly assigned patent application Ser. No. 11/559,544, filed Nov. 14, 2006, entitled “User Interface for Face Recognition”, Gallagher et al describe a method for clustering faces into groups of similar faces that are likely to represent distinct individuals using available face recognition technology. Since all the digital content records in our application are from the same event, further refinement of people recognition is possible as described in commonly assigned patent application Ser. No. 11/755,343, filed May 30, 2007 by Lawther et al, entitled “Composite person model from image collections”. In that application, clothing and other contextual information that are likely to remain the same during the event are used to improve recognition of individuals.
  • Another example of links based on semantic objects is to link images and video frames where similar objects present in the background indicate that the two captures were taken against the same backdrop. This allows the user to view still images captured of the same scene that is seen in a video clip, or to view the same scene captured from different viewpoints. In commonly assigned application Ser. No. 11/960,800, filed Dec. 20, 2007, entitled “Grouping images by location”, a method for determining groups of images captured at the same location is described. This method uses SIFT features, described by Lowe in the International Journal of Computer Vision, Vol. 60, No. 2, 2004, to match image backgrounds after filtering the features to retain only those that correspond to potentially unique objects in the image. A simplified sketch of such background matching follows.
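  • The cited method also filters the SIFT features to keep only those corresponding to potentially unique objects, which the OpenCV-based sketch below omits; the ratio-test and match-count thresholds are illustrative:

    import cv2  # OpenCV with SIFT support, assumed available

    def same_backdrop(path_a: str, path_b: str,
                      ratio: float = 0.75, min_matches: int = 25) -> bool:
        sift = cv2.SIFT_create()
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
        _, desc_a = sift.detectAndCompute(img_a, None)
        _, desc_b = sift.detectAndCompute(img_b, None)
        if desc_a is None or desc_b is None:
            return False
        pairs = cv2.BFMatcher().knnMatch(desc_a, desc_b, k=2)
        # Lowe's ratio test keeps only distinctive correspondences.
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        return len(good) >= min_matches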
  • FIG. 6 shows a possible use scenario that further illustrates the concept. In window 3, the software automatically places all of the digital content records on a common timeline and groups the pictures by event, people, and scene. The user's ability to move from one type of digital content record to another and to choose the best capture of a particular moment is demonstrated in windows 5 and 6. FIG. 6 also shows an instance where links to other digital content records containing the same person captured during a neighboring time interval are useful.
  • The present invention also embodies a method for displaying digital content records. Related media may be displayed based on the current location along a timeline. Referring now to FIG. 7, the method for displaying digital content records begins with step 710, receiving a set of digital content records organized along a timeline, each digital content record being associated with a point on or segment of the timeline based at least upon its time/date of capture and, optionally, span of capture, with at least two digital content records being associated with at least a same point on the timeline. Step 720 requires identifying a current point on the timeline. A digital content record of the set of digital content records is displayed as a focus record 730, the focus record being associated with the current point on the timeline and being displayed prominently on a display. The first other digital content records of the set of digital content records are then displayed, preferably as a scroll-bar of content at the bottom of the display, the first other digital content records having times/dates of capture or spans of capture temporally adjacent to the current point on the timeline and being displayed less prominently than the focus record on the display 740. The second other digital content records, or the overlapping media, of the set of digital content records are then displayed, the second other digital content records having a time/date of capture or a span of capture equal to or including the current point on the timeline 750. The overlapping media may be displayed in a region on the display that overlaps with the focus record, creating a “picture-in-picture” in which the focus record occupies a prominent part of the display region and the overlapping media occupies a smaller region. This display may allow the user to swap the focus record with the overlapping media by selecting the overlapping media being displayed. The scroll bar and overlapping media may occupy different parts of the display; thus, the first other digital content records may be displayed in a first region of the display that does not overlap with a second region of the display in which the second other digital content records are displayed. The related digital content may also be limited to a non-overlapping location or region on the display, whereby the focus record is displayed in a main region of the display that does not overlap with the first region or the second region. The display may also include a user-selectable visual representation, or icon, configured to cause, when selected, display of text or audio comments associated with the focus record and the current point on the timeline. A minimal sketch of this selection logic appears below.
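A minimal sketch of the selection logic behind steps 720-750, reusing the Record type assumed earlier; the ten-minute adjacency window is an assumption, and the actual layout (main area, scroll bar, picture-in-picture) is left to display code:

```python
from datetime import timedelta

def select_display_records(records, current_time, window=timedelta(minutes=10)):
    """Choose the focus, scroll-bar, and picture-in-picture records."""
    def covers(rec):
        # a record covers the current point when its capture time or span includes it
        return rec.start <= current_time <= (rec.end or rec.start)

    # focus record: closest in time to the current point (step 730)
    focus = min(records, key=lambda r: abs(r.start - current_time))
    # first other records: temporally adjacent, shown in the scroll bar (step 740)
    adjacent = [r for r in records
                if r is not focus and not covers(r)
                and abs(r.start - current_time) <= window]
    # second other records: overlapping media for picture-in-picture (step 750)
    overlapping = [r for r in records if r is not focus and covers(r)]
    return focus, adjacent, overlapping
```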
  • Referring now to FIG. 8, the present invention further embodies a method for presenting digital content records. The initial step is the presentation of a first digital content record on an output device 810, wherein the first digital content record is a video or audio digital content record. A second digital content record having an association with the first digital content record is then identified 820, wherein the association is based at least upon adjacency in time, a common object represented therein, a common event during which the first and second digital content records were captured, or a common location at which the digital content records were captured. Finally, the second digital content record is presented on the output device while the first digital content record is being presented 830. The association test might be approximated as sketched below.
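A hedged approximation of the association test of step 820; the `objects`, `event`, and `location` attributes are assumed metadata fields introduced for the sketch, not terms from the patent itself:

```python
from datetime import timedelta

def associated(a, b, time_gap=timedelta(minutes=5)):
    """True when record `b` may accompany record `a` (step 820)."""
    # adjacency in time, measured from a's capture span to b's capture time
    end_a = a.end or a.start
    adjacent = min(abs(b.start - a.start), abs(b.start - end_a)) <= time_gap
    # common object represented in both records (assumed `objects` label sets)
    common_object = bool(getattr(a, "objects", set()) & getattr(b, "objects", set()))
    # common event or common capture location (assumed metadata fields)
    ev_a, ev_b = getattr(a, "event", None), getattr(b, "event", None)
    loc_a, loc_b = getattr(a, "location", None), getattr(b, "location", None)
    return (adjacent or common_object
            or (ev_a is not None and ev_a == ev_b)
            or (loc_a is not None and loc_a == loc_b))
```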
  • It is to be understood that the exemplary embodiment(s) is/are merely illustrative of the present invention and that many variations of the above-described embodiment(s) can be devised by one skilled in the art without departing from the scope of the invention. It is therefore intended that all such variations be included within the scope of the following claims and their equivalents.

Claims (20)

1. A method implemented at least in part by a data processing system, the method for organizing digital content records, and the method comprising the steps of:
receiving a first set of digital content records captured from a first digital-content capture device, each digital content record in the first set having associated therewith time/date of capture information defining when the associated digital content record was captured, wherein the capture information associated with a particular digital content record from the first set defines that its associated digital content record was captured over a contiguous span of time;
receiving a second set of digital content records captured from a second digital-content capture device, each digital content record in the second set having associated therewith time/date of capture information defining when the associated digital content record was captured;
ordering the first set of digital content records and the second set of digital content records along a common capture timeline based at least upon the time/date of capture information, or a derivative thereof, associated with each of the digital content records in the first and second sets, wherein the ordering step causes the particular digital content record and at least one other digital content record to be associated with a same time/date within the span of time in the capture timeline; and
storing results of the ordering step in a processor-accessible memory system.
2. The method of claim 1, further comprising the step of modifying the time/date of capture information for both sets of digital content records to accord with each other, wherein the ordering step orders the digital content records based at least upon the modified time/date of capture information.
3. The method of claim 1, wherein the digital content records include digital video records, digital audio records, or digital still image records.
4. The method of claim 1, wherein the ordering step orders the digital content records along the common timeline also based upon (a) objects identified in, (b) scenery identified in, (c) events associated with, or (d) locations associated with the digital content records.
5. The method of claim 1, further comprising the step of associating user-provided annotations at user-defined points along the common timeline.
6. The method of claim 5, wherein the user-provided annotations include text data, audio data, video data, or graphical data.
7. The method of claim 6, wherein the user-provided annotations include text data, and wherein the text data includes text messages, web links, or web logs.
8. The method of claim 1, further comprising the step of providing the time/date of capture information for at least some of the digital content records in the first set or the second set.
9. A method implemented at least in part by a data processing system, the method for displaying digital content records, and the method comprising the steps of:
receiving a set of digital content records organized along a timeline, each digital content record being associated with a point on or segment of the timeline based at least upon its time/date of capture and, optionally, span of capture, at least two digital content records being associated with at least a same point on the timeline;
identifying a current point on the timeline;
displaying a digital content record of the set of digital content records as a focus record, the focus record associated with the current point on the timeline and being displayed prominently on a display;
displaying first other digital content records of the set of digital content records on the display, the first other digital content records having time/dates of capture or spans of capture temporally adjacent to the current point on the timeline and being displayed less prominently than the focus record on the display; and
displaying second other digital content records of the set of digital content records on the display, the second other digital content records having a time/date of capture or a span of capture equal to or including the current point on the timeline.
10. The method of claim 9, wherein the first other digital content records are displayed in a first region of the display that does not overlap with a second region of the display in which the second other digital content records are displayed.
11. The method of claim 10, wherein the focus record is displayed in a main region of the display that does not overlap with the first region or the second region.
12. The method of claim 9, further comprising the step of receiving a selection of the focus record, wherein the identifying step identifies the current point on the timeline based upon the time/date of capture of the focus record.
13. The method of claim 9, wherein one of the second other digital content records is displayed in a manner overlapping only a portion of the focus record.
14. The method of claim 9, wherein the focus record is a video, and wherein at least one of the first other digital content records or the second other digital content records is a still image.
15. The method of claim 9, further comprising the step of displaying on the display a user-selectable visual representation configured to cause, when selected, display of text or audio comments associated with the focus record and the current point on the timeline.
16. A method implemented at least in part by a data processing system, the method for presenting digital content records, and the method comprising the steps of:
instructing presentation of a first digital content record on an output device, wherein the first digital content record is a video or audio digital content record;
identifying a second digital content record having an association with the first digital content record, wherein the association is based at least upon adjacency in time, a common object represented therein, a common event during which the first and second digital content records were captured, or a common location at which the digital content records were captured; and
instructing presentation of the second digital content record on the output device while the first digital content record is being presented.
17. The method of claim 16, wherein the first digital content record and the second digital content record are presented in a picture-in-picture manner.
18. The method of claim 16,
wherein the first digital content record is instructed to be presented in a more prominent manner than the second digital content record, and wherein the method further comprises the steps of:
receiving user-input pertaining to the second digital content record;
in response to the received user-input, instructing presentation of the second digital content record in a more prominent manner than the first digital content record.
19. A method for organizing digital content records comprising the steps of:
receiving a first set of digital content records captured from a first digital-content capture device;
receiving a second set of digital content records captured from a second digital-content capture device;
ordering the first set of digital content records and the second set of digital content records along a common capture timeline; and
storing results of the ordering step in a processor-accessible memory system.
20. A system for indexing media from different sources comprising:
a means for receiving a first set of digital content records captured from a first digital-content capture device;
a means for receiving a second set of digital content records captured from a second digital-content capture device;
a means for ordering the first set of digital content records and the second set of digital content records along a common capture timeline; and
a means for storing results of the ordering step in a processor-accessible memory system.
US12/206,319 2008-09-08 2008-09-08 Method and Interface for Indexing Related Media From Multiple Sources Abandoned US20100077289A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US12/206,319 US20100077289A1 (en) 2008-09-08 2008-09-08 Method and Interface for Indexing Related Media From Multiple Sources
PCT/US2009/004990 WO2010027481A1 (en) 2008-09-08 2009-09-04 Indexing related media from multiple sources
US13/463,183 US9218367B2 (en) 2008-09-08 2012-05-03 Method and interface for indexing related media from multiple sources

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/206,319 US20100077289A1 (en) 2008-09-08 2008-09-08 Method and Interface for Indexing Related Media From Multiple Sources

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/463,183 Division US9218367B2 (en) 2008-09-08 2012-05-03 Method and interface for indexing related media from multiple sources

Publications (1)

Publication Number Publication Date
US20100077289A1 true US20100077289A1 (en) 2010-03-25

Family

ID=41508805

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/206,319 Abandoned US20100077289A1 (en) 2008-09-08 2008-09-08 Method and Interface for Indexing Related Media From Multiple Sources
US13/463,183 Expired - Fee Related US9218367B2 (en) 2008-09-08 2012-05-03 Method and interface for indexing related media from multiple sources

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/463,183 Expired - Fee Related US9218367B2 (en) 2008-09-08 2012-05-03 Method and interface for indexing related media from multiple sources

Country Status (2)

Country Link
US (2) US20100077289A1 (en)
WO (1) WO2010027481A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140164988A1 (en) * 2012-12-06 2014-06-12 Microsoft Corporation Immersive view navigation

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10304313A (en) * 1997-04-25 1998-11-13 Sony Corp Television receiver having recording/reproducing function and recording/reproducing method therefor
US6384868B1 (en) * 1997-07-09 2002-05-07 Kabushiki Kaisha Toshiba Multi-screen display apparatus and video switching processing apparatus
US6307550B1 (en) * 1998-06-11 2001-10-23 Presenter.Com, Inc. Extracting photographic images from video
JP3389545B2 (en) * 1999-12-27 2003-03-24 シャープ株式会社 Recording device, reproducing device, and recording / reproducing device connecting these devices
US6901207B1 (en) * 2000-03-30 2005-05-31 Lsi Logic Corporation Audio/visual device for capturing, searching and/or displaying audio/visual material
US7697815B2 (en) * 2001-02-28 2010-04-13 Kddi Corporation Video playback unit, video delivery unit and recording medium
US6910191B2 (en) * 2001-11-02 2005-06-21 Nokia Corporation Program guide data selection device
KR100806873B1 (en) * 2002-08-08 2008-02-22 삼성전자주식회사 A/V program recording/playing apparatus and displaying method for recording program list thereof
US8087054B2 (en) * 2002-09-30 2011-12-27 Eastman Kodak Company Automated event content processing method and system
KR100998899B1 (en) * 2003-08-30 2010-12-09 엘지전자 주식회사 Method for service of thumbnail image and broadcasting receiver
JP4035497B2 (en) * 2003-09-26 2008-01-23 キヤノン株式会社 Image display system, image display apparatus, image display method, and program
EP1671483B1 (en) * 2003-10-06 2014-04-09 Disney Enterprises, Inc. System and method of playback and feature control for video players
US20050108643A1 (en) * 2003-11-17 2005-05-19 Nokia Corporation Topographic presentation of media files in a media diary application
US7681141B2 (en) * 2004-05-11 2010-03-16 Sony Computer Entertainment America Inc. Fast scrolling in a graphical user interface
JP2006163948A (en) * 2004-12-08 2006-06-22 Canon Inc Information processor and its method
WO2007038612A2 (en) * 2005-09-26 2007-04-05 Cognisign, Llc Apparatus and method for processing user-specified search image points
KR101240261B1 (en) * 2006-02-07 2013-03-07 엘지전자 주식회사 The apparatus and method for image communication of mobile communication terminal
US20070192729A1 (en) * 2006-02-10 2007-08-16 Microsoft Corporation Document overview scrollbar
JP2008134866A (en) * 2006-11-29 2008-06-12 Sony Corp Content browsing method, content browsing device and content browsing program
US8276098B2 (en) * 2006-12-22 2012-09-25 Apple Inc. Interactive image thumbnails
US20080298643A1 (en) 2007-05-30 2008-12-04 Lawther Joel S Composite person model from image collection
US8122378B2 (en) * 2007-06-08 2012-02-21 Apple Inc. Image capture and manipulation
US20100281370A1 (en) * 2007-06-29 2010-11-04 Janos Rohaly Video-assisted margin marking for dental models
US8150098B2 (en) 2007-12-20 2012-04-03 Eastman Kodak Company Grouping images by location
US8875023B2 (en) * 2007-12-27 2014-10-28 Microsoft Corporation Thumbnail navigation bar for video

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7102644B2 (en) * 1995-12-11 2006-09-05 Apple Computer, Inc. Apparatus and method for storing a movie within a movie
US6606411B1 (en) * 1998-09-30 2003-08-12 Eastman Kodak Company Method for automatically classifying images into events
US7028264B2 (en) * 1999-10-29 2006-04-11 Surfcast, Inc. System and method for simultaneous display of multiple information sources
US6701014B1 (en) * 2000-06-14 2004-03-02 International Business Machines Corporation Method and apparatus for matching slides in video
US6978047B2 (en) * 2000-11-29 2005-12-20 Etreppid Technologies Llc Method and apparatus for storing digital video content provided from a plurality of cameras
US6950989B2 (en) * 2000-12-20 2005-09-27 Eastman Kodak Company Timeline-based graphical user interface for efficient image database browsing and retrieval
US20060090141A1 (en) * 2001-05-23 2006-04-27 Eastman Kodak Company Method and system for browsing large digital multimedia object collections
US20030231198A1 (en) * 2002-06-18 2003-12-18 Koninklijke Philips Electronics N.V. System and method for providing videomarks for a video program
US7158689B2 (en) * 2002-11-25 2007-01-02 Eastman Kodak Company Correlating captured images and timed event data
US20060200475A1 (en) * 2005-03-04 2006-09-07 Eastman Kodak Company Additive clustering of images lacking individualized date-time information
US20080044155A1 (en) * 2006-08-17 2008-02-21 David Kuspa Techniques for positioning audio and video clips
US20080112621A1 (en) * 2006-11-14 2008-05-15 Gallagher Andrew C User interface for face recognition
US20080309647A1 (en) * 2007-06-15 2008-12-18 Blose Andrew C Determining presentation effects for a sequence of digital content records
US20090265647A1 (en) * 2008-04-22 2009-10-22 Apple Inc. Modifying Time Associated With Digital Media Items

Cited By (114)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8558953B2 (en) 2003-04-05 2013-10-15 Apple Inc. Method and apparatus for synchronizing audio and video streams
US20110013084A1 (en) * 2003-04-05 2011-01-20 David Robert Black Method and apparatus for synchronizing audio and video streams
US8810728B2 (en) 2003-04-05 2014-08-19 Apple Inc. Method and apparatus for synchronizing audio and video streams
US9773525B2 (en) * 2007-08-16 2017-09-26 Adobe Systems Incorporated Timeline management
US10210179B2 (en) * 2008-11-18 2019-02-19 Excalibur Ip, Llc Dynamic feature weighting
US9564173B2 (en) 2009-04-30 2017-02-07 Apple Inc. Media editing application for auditioning different types of media clips
US20100329635A1 (en) * 2009-06-25 2010-12-30 Hiromi Nishiura Video reproducing apparatus
US20110264700A1 (en) * 2010-04-26 2011-10-27 Microsoft Corporation Enriching online videos by content detection, searching, and information aggregation
US9443147B2 (en) * 2010-04-26 2016-09-13 Microsoft Technology Licensing, Llc Enriching online videos by content detection, searching, and information aggregation
US9600164B2 (en) 2010-07-15 2017-03-21 Apple Inc. Media-editing application with anchored timeline
US9323438B2 (en) 2010-07-15 2016-04-26 Apple Inc. Media-editing application with live dragging and live editing capabilities
US8875025B2 (en) 2010-07-15 2014-10-28 Apple Inc. Media-editing application with media clips grouping capabilities
US8819557B2 (en) 2010-07-15 2014-08-26 Apple Inc. Media-editing application with a free-form space for organizing or compositing media clips
US8910046B2 (en) 2010-07-15 2014-12-09 Apple Inc. Media-editing application with anchored timeline
US8555170B2 (en) 2010-08-10 2013-10-08 Apple Inc. Tool for presenting and editing a storyboard representation of a composite presentation
US8380039B2 (en) 2010-11-09 2013-02-19 Eastman Kodak Company Method for aligning different photo streams
WO2012064494A1 (en) 2010-11-09 2012-05-18 Eastman Kodak Company Aligning and annotating different photo streams
US8805165B2 (en) 2010-11-09 2014-08-12 Kodak Alaris Inc. Aligning and summarizing different photo streams
WO2012064532A1 (en) 2010-11-09 2012-05-18 Eastman Kodak Company Aligning and summarizing different photo streams
US8745499B2 (en) 2011-01-28 2014-06-03 Apple Inc. Timeline search and index
US9870802B2 (en) 2011-01-28 2018-01-16 Apple Inc. Media clip management
US8621355B2 (en) 2011-02-02 2013-12-31 Apple Inc. Automatic synchronization of media clips
US8966367B2 (en) 2011-02-16 2015-02-24 Apple Inc. Anchor override for a media-editing application with an anchored timeline
US9026909B2 (en) 2011-02-16 2015-05-05 Apple Inc. Keyword list view
US9997196B2 (en) 2011-02-16 2018-06-12 Apple Inc. Retiming media presentations
US11747972B2 (en) 2011-02-16 2023-09-05 Apple Inc. Media-editing application with novel editing tools
US11157154B2 (en) 2011-02-16 2021-10-26 Apple Inc. Media-editing application with novel editing tools
US10324605B2 (en) 2011-02-16 2019-06-18 Apple Inc. Media-editing application with novel editing tools
US9536564B2 (en) 2011-09-20 2017-01-03 Apple Inc. Role-facilitated editing operations
US9437247B2 (en) 2011-11-14 2016-09-06 Apple Inc. Preview display for multi-camera media clips
US9792955B2 (en) 2011-11-14 2017-10-17 Apple Inc. Automatic generation of multi-camera media clips
US9111579B2 (en) 2011-11-14 2015-08-18 Apple Inc. Media editing with multi-camera media clips
US9143742B1 (en) 2012-01-30 2015-09-22 Google Inc. Automated aggregation of related media content
US8645485B1 (en) * 2012-01-30 2014-02-04 Google Inc. Social based aggregation of related media content
US8612517B1 (en) * 2012-01-30 2013-12-17 Google Inc. Social based aggregation of related media content
US20130250139A1 (en) * 2012-03-22 2013-09-26 Trung Tri Doan Method And System For Tagging And Organizing Images Generated By Mobile Communications Devices
US9595015B2 (en) * 2012-04-05 2017-03-14 Nokia Technologies Oy Electronic journal link comprising time-stamped user event image content
US20130268828A1 (en) * 2012-04-05 2013-10-10 Nokia Corporation User event content, associated apparatus and methods
US10228810B2 (en) * 2012-07-26 2019-03-12 Samsung Electronics Co., Ltd. Method of transmitting inquiry message, display device for the method, method of sharing information, and mobile terminal
US20140033050A1 (en) * 2012-07-26 2014-01-30 Samsung Electronics Co., Ltd. Method of transmitting inquiry message, display device for the method, method of sharing information, and mobile terminal
US9514367B2 (en) * 2013-02-26 2016-12-06 Alticast Corporation Method and apparatus for playing contents
US20140245145A1 (en) * 2013-02-26 2014-08-28 Alticast Corporation Method and apparatus for playing contents
CN105612758A (en) * 2013-11-20 2016-05-25 纳宝株式会社 Video-providing method and video-providing system
US20160261930A1 (en) * 2013-11-20 2016-09-08 Naver Corporation Video-providing method and video-providing system
US11095954B2 (en) * 2013-11-20 2021-08-17 Naver Corporation Video-providing method and video-providing system
US9715630B2 (en) * 2014-03-18 2017-07-25 Vivotek Inc. Monitoring system and related image searching method
US20150269442A1 (en) * 2014-03-18 2015-09-24 Vivotek Inc. Monitoring system and related image searching method
US10741220B2 (en) 2014-07-21 2020-08-11 Avigilon Corporation Timeline synchronization control method for multiple display views
US9659598B2 (en) 2014-07-21 2017-05-23 Avigilon Corporation Timeline synchronization control method for multiple display views
US10269393B2 (en) 2014-07-21 2019-04-23 Avigilon Corporation Timeline synchronization control method for multiple display views
US10038657B2 (en) 2014-08-18 2018-07-31 Nightlight Systems Llc Unscripted digital media message generation
US10735360B2 (en) 2014-08-18 2020-08-04 Nightlight Systems Llc Digital media messages and files
US10992623B2 (en) 2014-08-18 2021-04-27 Nightlight Systems Llc Digital media messages and files
US20160050172A1 (en) * 2014-08-18 2016-02-18 KnowMe Systems, Inc. Digital media message generation
US10735361B2 (en) 2014-08-18 2020-08-04 Nightlight Systems Llc Scripted digital media message generation
US11082377B2 (en) 2014-08-18 2021-08-03 Nightlight Systems Llc Scripted digital media message generation
US10728197B2 (en) 2014-08-18 2020-07-28 Nightlight Systems Llc Unscripted digital media message generation
US10037185B2 (en) 2014-08-18 2018-07-31 Nightlight Systems Llc Digital media message generation
US10691408B2 (en) 2014-08-18 2020-06-23 Nightlight Systems Llc Digital media message generation
US9973459B2 (en) * 2014-08-18 2018-05-15 Nightlight Systems Llc Digital media message generation
US11847163B2 (en) 2014-08-27 2023-12-19 International Business Machines Corporation Consolidating video search for an event
US9870800B2 (en) * 2014-08-27 2018-01-16 International Business Machines Corporation Multi-source video input
US20160063103A1 (en) * 2014-08-27 2016-03-03 International Business Machines Corporation Consolidating video search for an event
US10332561B2 (en) 2014-08-27 2019-06-25 International Business Machines Corporation Multi-source video input
US10713297B2 (en) 2014-08-27 2020-07-14 International Business Machines Corporation Consolidating video search for an event
US10102285B2 (en) * 2014-08-27 2018-10-16 International Business Machines Corporation Consolidating video search for an event
US10268751B2 (en) * 2015-03-18 2019-04-23 Naver Corporation Methods, systems, apparatuses, and/or non-transitory computer readable media for providing event-related data over a network
US10304493B2 (en) * 2015-03-19 2019-05-28 Naver Corporation Cartoon content editing method and cartoon content editing apparatus
US20180184138A1 (en) * 2015-06-15 2018-06-28 Piksel, Inc. Synchronisation of streamed content
US10791356B2 (en) * 2015-06-15 2020-09-29 Piksel, Inc. Synchronisation of streamed content
US20160379058A1 (en) * 2015-06-26 2016-12-29 Canon Kabushiki Kaisha Method, system and apparatus for segmenting an image set to generate a plurality of event clusters
US10318816B2 (en) * 2015-06-26 2019-06-11 Canon Kabushiki Kaisha Method, system and apparatus for segmenting an image set to generate a plurality of event clusters
US20190026366A1 (en) * 2016-01-07 2019-01-24 Mfu Co., Inc Method and device for playing video by each segment of music
CN108476343A (en) * 2016-01-07 2018-08-31 Mfu股份有限公司 The video broadcasting method and device that each of music is segmented
US11240542B2 (en) 2016-01-14 2022-02-01 Avigilon Corporation System and method for multiple video playback
US10721545B2 (en) * 2016-07-25 2020-07-21 Lenovo (Beijing) Co., Ltd. Method and device for combining videos
US20180027308A1 (en) * 2016-07-25 2018-01-25 Lenovo (Beijing) Co., Ltd. Method and device for combining videos
US20180053531A1 (en) * 2016-08-18 2018-02-22 Bryan Joseph Wrzesinski Real time video performance instrument
US10978187B2 (en) 2017-08-10 2021-04-13 Nuance Communications, Inc. Automated clinical documentation system and method
US11853691B2 (en) * 2017-08-10 2023-12-26 Nuance Communications, Inc. Automated clinical documentation system and method
US11074996B2 (en) 2017-08-10 2021-07-27 Nuance Communications, Inc. Automated clinical documentation system and method
US11043288B2 (en) 2017-08-10 2021-06-22 Nuance Communications, Inc. Automated clinical documentation system and method
US10957428B2 (en) 2017-08-10 2021-03-23 Nuance Communications, Inc. Automated clinical documentation system and method
US11101022B2 (en) 2017-08-10 2021-08-24 Nuance Communications, Inc. Automated clinical documentation system and method
US11101023B2 (en) 2017-08-10 2021-08-24 Nuance Communications, Inc. Automated clinical documentation system and method
US11114186B2 (en) 2017-08-10 2021-09-07 Nuance Communications, Inc. Automated clinical documentation system and method
US10957427B2 (en) * 2017-08-10 2021-03-23 Nuance Communications, Inc. Automated clinical documentation system and method
US11295838B2 (en) 2017-08-10 2022-04-05 Nuance Communications, Inc. Automated clinical documentation system and method
US20190066821A1 (en) * 2017-08-10 2019-02-28 Nuance Communications, Inc. Automated clinical documentation system and method
US10546655B2 (en) 2017-08-10 2020-01-28 Nuance Communications, Inc. Automated clinical documentation system and method
US11605448B2 (en) 2017-08-10 2023-03-14 Nuance Communications, Inc. Automated clinical documentation system and method
US11482308B2 (en) 2017-08-10 2022-10-25 Nuance Communications, Inc. Automated clinical documentation system and method
US11482311B2 (en) 2017-08-10 2022-10-25 Nuance Communications, Inc. Automated clinical documentation system and method
US11404148B2 (en) 2017-08-10 2022-08-02 Nuance Communications, Inc. Automated clinical documentation system and method
US11257576B2 (en) 2017-08-10 2022-02-22 Nuance Communications, Inc. Automated clinical documentation system and method
US11322231B2 (en) 2017-08-10 2022-05-03 Nuance Communications, Inc. Automated clinical documentation system and method
US11316865B2 (en) 2017-08-10 2022-04-26 Nuance Communications, Inc. Ambient cooperative intelligence system and method
US11295839B2 (en) 2017-08-10 2022-04-05 Nuance Communications, Inc. Automated clinical documentation system and method
US11270261B2 (en) 2018-03-05 2022-03-08 Nuance Communications, Inc. System and method for concept formatting
US11515020B2 (en) 2018-03-05 2022-11-29 Nuance Communications, Inc. Automated clinical documentation system and method
US11222716B2 (en) 2018-03-05 2022-01-11 Nuance Communications System and method for review of automated clinical documentation from recorded audio
US11250383B2 (en) 2018-03-05 2022-02-15 Nuance Communications, Inc. Automated clinical documentation system and method
US11295272B2 (en) 2018-03-05 2022-04-05 Nuance Communications, Inc. Automated clinical documentation system and method
US11250382B2 (en) 2018-03-05 2022-02-15 Nuance Communications, Inc. Automated clinical documentation system and method
US10809970B2 (en) 2018-03-05 2020-10-20 Nuance Communications, Inc. Automated clinical documentation system and method
US11494735B2 (en) 2018-03-05 2022-11-08 Nuance Communications, Inc. Automated clinical documentation system and method
US11227679B2 (en) 2019-06-14 2022-01-18 Nuance Communications, Inc. Ambient clinical intelligence system and method
US11043207B2 (en) 2019-06-14 2021-06-22 Nuance Communications, Inc. System and method for array data simulation and customized acoustic modeling for ambient ASR
US11216480B2 (en) 2019-06-14 2022-01-04 Nuance Communications, Inc. System and method for querying data points from graph data structures
US11531807B2 (en) 2019-06-28 2022-12-20 Nuance Communications, Inc. System and method for customized text macros
US11670408B2 (en) 2019-09-30 2023-06-06 Nuance Communications, Inc. System and method for review of automated clinical documentation
US20220264053A1 (en) * 2019-10-30 2022-08-18 Beijing Bytedance Network Technology Co., Ltd. Video processing method and device, terminal, and storage medium
US11222103B1 (en) 2020-10-29 2022-01-11 Nuance Communications, Inc. Ambient cooperative intelligence system and method
WO2023042166A1 (en) * 2021-09-19 2023-03-23 Glossai Ltd Systems and methods for indexing media content using dynamic domain-specific corpus and model generation

Also Published As

Publication number Publication date
US20120260175A1 (en) 2012-10-11
WO2010027481A1 (en) 2010-03-11
US9218367B2 (en) 2015-12-22

Similar Documents

Publication Publication Date Title
US9218367B2 (en) Method and interface for indexing related media from multiple sources
US10714145B2 (en) Systems and methods to associate multimedia tags with user comments and generate user modifiable snippets around a tag time for efficient storage and sharing of tagged items
US8687941B2 (en) Automatic static video summarization
US8879890B2 (en) Method for media reliving playback
US9082452B2 (en) Method for media reliving on demand
Boreczky et al. An interactive comic book presentation for exploring video
JP4228320B2 (en) Image processing apparatus and method, and program
Truong et al. Video abstraction: A systematic review and classification
Chen et al. Tiling slideshow
Lee et al. Constructing a SenseCam visual diary as a media process
US20160189414A1 (en) Autocaptioning of images
Chen et al. Visual storylines: Semantic visualization of movie sequence
US20110080424A1 (en) Image processing
US20180025215A1 (en) Anonymous live image search
WO2008014408A1 (en) Method and system for displaying multimedia content
JP2004080750A (en) System and method for whiteboard and audio capture
KR20070118635A (en) Summarization of audio and/or visual data
US20110052086A1 (en) Electronic Apparatus and Image Processing Method
US20110304644A1 (en) Electronic apparatus and image display method
JP2008067334A (en) Information processor, method and program
El-Bendary et al. PCA-based home videos annotation system
US20110304779A1 (en) Electronic Apparatus and Image Processing Method
Saravanan Segment based indexing technique for video data file
JP5180052B2 (en) Image evaluation apparatus and image evaluation program
Chu et al. Enabling portable animation browsing by transforming animations into comics

Legal Events

Date Code Title Description
AS Assignment

Owner name: EASTMAN KODAK COMPANY,NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DAS, MADIRAKSHI;CEROSALETTI, CATHLEEN D.;LOUI, ALEXANDER C.;SIGNING DATES FROM 20080820 TO 20080825;REEL/FRAME:021500/0514

AS Assignment

Owner name: CITICORP NORTH AMERICA, INC., AS AGENT, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNORS:EASTMAN KODAK COMPANY;PAKON, INC.;REEL/FRAME:028201/0420

Effective date: 20120215

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: KODAK PHILIPPINES, LTD., NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: EASTMAN KODAK INTERNATIONAL CAPITAL COMPANY, INC.,

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: FAR EAST DEVELOPMENT LTD., NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: PAKON, INC., INDIANA

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK REALTY, INC., NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: FPC INC., CALIFORNIA

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK AMERICAS, LTD., NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: NPEC INC., NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: LASER-PACIFIC MEDIA CORPORATION, NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK IMAGING NETWORK, INC., CALIFORNIA

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: QUALEX INC., NORTH CAROLINA

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: CREO MANUFACTURING AMERICA LLC, WYOMING

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK (NEAR EAST), INC., NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK PORTUGUESA LIMITED, NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: EASTMAN KODAK COMPANY, NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK AVIATION LEASING LLC, NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

AS Assignment

Owner name: INTELLECTUAL VENTURES FUND 83 LLC, NEVADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EASTMAN KODAK COMPANY;REEL/FRAME:031108/0430

Effective date: 20130201

AS Assignment

Owner name: MONUMENT PEAK VENTURES, LLC, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:INTELLECTUAL VENTURES FUND 83 LLC;REEL/FRAME:064599/0304

Effective date: 20230728