WO1998002827A1 - Digital video system having a data base of coded data for digital audio and video information - Google Patents


Info

Publication number
WO1998002827A1
Authority
WO
WIPO (PCT)
Prior art keywords
digital
video
coding
information
video system
Prior art date
Application number
PCT/US1997/012061
Other languages
French (fr)
Inventor
James Stigler
Ken Mendoza
Rodney D. Kent
Original Assignee
Lava, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lava, Inc. filed Critical Lava, Inc.
Priority to AU37244/97A priority Critical patent/AU3724497A/en
Priority to MXPA99000549A priority patent/MXPA99000549A/en
Priority to EP97934108A priority patent/EP1027660A1/en
Priority to JP10506161A priority patent/JP2001502858A/en
Publication of WO1998002827A1 publication Critical patent/WO1998002827A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8405Generation or processing of descriptive data, e.g. content descriptors represented by keywords
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8545Content authoring for generating interactive applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/858Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Definitions

  • This invention relates to a digital video system and method for manipulating digital video information.
  • U.S. Patent No. 5,467,288 to Fasciano et al. issued November 14, 1995, is directed to a digital audio workstation for the audio portions of video programs.
  • the Fasciano workstation combines audio editing capability with the ability to immediately display video images associated with the audio program. An operator's indication of a point or segment of audio information is detected and used to retrieve and display the video images that correspond to the indicated audio programming.
  • the workstation includes a labeling and notation system for recording digitized audio or video information. It provides a means for storing, in association with a particular point of the audio or video information, a digitized voice or textual message for later reference regarding that information.
  • U.S. Patent No. 5,045,940 to Peters et al. issued September 3, 1991 is directed to a data pipeline system which synchronizes the display of digitized audio and video data regardless of the speed with which the data was recorded on its linear medium.
  • the video data is played at a constant speed, synchronized by the audio speed.
  • the above systems do not provide for the need to analyze, index, annotate, store and retrieve large amounts of video information. They cannot support an unlimited quantity of video. They do not permit a transcript to be displayed simultaneously with video or permit ease of subtitling. Subtitling is a painstaking and labor intensive process for the film industry and an impediment to entry into foreign markets.
  • a digital video system comprising coding and control means, adapted to receive digital reference video information, for coding the digital reference video information to generate coded data; and coded data storing means for storing the coded data from the coding and control means.
  • FIG. 1A is a functional block diagram of a preferred embodiment of the present invention.
  • FIG. 1B is a functional block diagram of the coding and control means shown in FIG. 1A.
  • FIG. 1C is a chart showing the structure of the coded data store of FIG. 1A for indexing data.
  • FIG. 1D is a software flowchart of the preferred embodiment of the present invention.
  • FIG. 1E is a map of time reference information.
  • FIG. 2A is a drawing of the main button bar of the present invention.
  • FIG. 2B is a diagram of the manager button bar of the present invention.
  • FIG. 2C is a diagram of the application tool bar of the present invention.
  • FIG. 3 is a diagram of the user list window of the user module of the present invention.
  • FIG. 4 is a diagram of the user detail window of the user module of the present invention.
  • FIG. 5 is a table showing the coding and transcription rights of the user detail window of the user module of the present invention.
  • FIG. 6 is a table of the system management rights of the user detail window of the user module of the present invention.
  • FIG. 7 is a diagram of the module sub-menu of the study module of the present invention.
  • FIG. 8 is a diagram of the study list window of the study module of the present invention.
  • FIGS. 9A and 9B are diagrams of the study detail window of the study module of the present invention.
  • FIG. 10A is a diagram of the study outline of the study detail window of the study module of the present invention before dragging a characteristic.
  • FIG. 10B is a diagram of the study outline of the study detail window of the study module of the present invention after dragging a characteristic.
  • FIG. 11 is a diagram of the select an event/sampling method choice menu for creating a new event type and opening the event type detail window of the present invention
  • FIG. 12 is a diagram illustrating creating a new pass in the study outline of the study detail window of the study module of the present invention.
  • FIG. 13 is a diagram of the event type detail window of the study module of the present invention.
  • FIG. 14 is a diagram of the characteristic detail window of the study module of the present invention.
  • FIG. 15 is a diagram of the unit selection window of the study module of the present invention.
  • FIG. 16 is a diagram of the use units from other study window of the study module of the present invention.
  • FIG. 17 is a diagram of the unit list window of the unit module of the present invention.
  • FIG. 18 is a diagram of the unit detail window of the unit module of the present invention.
  • FIG. 19 is a table of the palettes which may be opened over the video window of the present invention.
  • FIG. 20 is a diagram of the video window of the present invention
  • FIG. 21A is a diagram of the title area of the video window of the present invention.
  • FIG. 21B is a diagram of the video area of the video window of the present invention.
  • FIG. 21C is a diagram of the mark area of the video window of the present invention.
  • FIG. 21D is a diagram of the instance area of the video window of the present invention.
  • FIG. 22 is a diagram of the mark area of the video window of the present invention.
  • FIG. 23 is a diagram of the instance area of the video window of the present invention.
  • FIG. 24 is a diagram of the List Area of the video window
  • FIG. 25 is a diagram of the List Area with two transcripts displayed
  • FIG. 26 is a diagram of the Select an Outline window
  • FIG. 27 is a diagram of the outline description window
  • FIG. 28 is a diagram of the outline palette
  • FIG. 29 is a diagram of the outline item window
  • FIG. 30 is a diagram of the sample definition window
  • FIG. 31 is a diagram of the sample palette
  • FIG. 32 is a diagram of the sample information window
  • FIG. 33 is a diagram of the unit analysis window
  • FIG. 34 is a diagram of the define unit variable window
  • FIG. 35 is a diagram of the define event variable window
  • FIG. 36 is a diagram of the instance analysis window
  • FIG. 37 is a diagram of the define analysis variable window
  • FIG. 38 is a diagram of the search window contents common for text and event instance searches.
  • FIG. 39 is a diagram of the event instance search window; and FIG. 40 is a diagram of the text search window.
  • in FIG. 1A there is shown a digital video system in accordance with the preferred embodiment of the present invention, including coding and control means 1 for coding digital reference video information and generating coded data, and coded data store 2 for storing the coded data from the coding and control means.
  • the coding and control means 1 is adapted to receive digital reference video information from video reference source 3.
  • the coding and control means 1 is connected via a databus 5 to the coded data store 2.
  • the coding and control means 1 includes a general multipurpose computer which operates in accordance with an operations program and an applications program.
  • an output 6 which may be a display connected to an input/output interface.
  • the video reference source 3 may be a video cassette recorder such as a SONY model EV-9850.
  • the coding and control means 1 may be an Apple Macintosh 8500/132 computer system.
  • the coded data store 2 may be a hard disk such as a Quantum XP32150 and a CD-ROM drive such as a SONY CPU 75.5-25.
  • the output 6 may be a display monitor such as an Apple Multiple Scan 17 M2A94.
  • video information from a video reference source 3 may be digitized by digital encoder 9 and compressed by compressor 10.
  • the digital video information may be stored in digital storage means 11. Alternatively, if the video information is already digitized, it may be directly stored in digital storage means 11.
  • Digital video information from digital storage means 11 may be decoded and decompressed by decode/decompression means 12 and input to the coding and control means 1.
  • the video reference source 3 may be an analog video tape, a camera, or a video broadcast.
  • the coding and control means 1 may generate coded data automatically, by interactive operation with a user, by interactive operation with a user in real time, or semi-automatically. For semi-automatic control, the user inputs parameters.
  • the coding and control means performs the function of indexing only. Indexing is the process through which derivative information is added to the reference video information or stored separately. This derivative information provides the ability to encode instances of events and/or conduct searches based on event criteria.
  • Reference information is video or audio information, such as a video tape and its corresponding audio sound track.
  • Derivative information is information generated during the coding process such as indices of events in the video, attributes, characteristics, choices, selected choices and time reference values associated with the above. Derivative information also includes linking data generated during the coding process which includes time reference values, and unit and segment designations.
  • Additional “information” is information that is input to the video system in addition to reference information. It includes digital or analog information such as a transcript of audio reference information, notes, annotations, a static picture, graphics, a document such as an exhibit, or input from an oscilloscope.
  • the coding and control means 1 may be used interactively by a user to mark the start point of a video clip, and a time reference value representing a mark in point is generated as coded data and stored in the coded data store 2. Further, the user may optionally interactively mark the end point of a video clip, and a time reference value representing the mark out point is generated as coded data and stored in the coded data store 2. The user may interactively mark an event type in one pass through the digital reference video information. The user may make plural passes through the reference video information to mark plural event types. The mark in and mark out points are stored in indices for event types.
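  • A minimal sketch of this marking flow follows; the names (Instance, EventIndex, mark) are illustrative assumptions, not taken from the patent. As stated above, the mark out point is optional.

```python
# Illustrative sketch only: how interactive marking might be captured as
# coded data. All names here are assumptions, not the patent's.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Instance:
    event_type: str
    mark_in: float                    # time reference value of the mark in point
    mark_out: Optional[float] = None  # optional mark out point

@dataclass
class EventIndex:
    """One index per event type, built up over one pass through the video."""
    event_type: str
    instances: List[Instance] = field(default_factory=list)

    def mark(self, t_in: float, t_out: Optional[float] = None) -> Instance:
        inst = Instance(self.event_type, t_in, t_out)
        self.instances.append(inst)   # would be persisted to the coded data store
        return inst

# One pass through the video marking the event type "Questions":
questions = EventIndex("Questions")
questions.mark(12.4, 18.9)   # question asked, then finished
questions.mark(44.0)         # mark out point omitted
```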
  • the coded data that is added may be codes of data that are transparent to a standard player of video but which are capable of interpretation by a modified player.
  • the coded data may be a time reference value indicating the unit of digital reference video information. Additionally the coded data may be a time reference value indicating the segment within a unit of digital reference video information. Thus, unlimited quantities of digital reference video information may be identified and accessed with the added codes. There may be more than one source of reference video information in the invention.
  • an audio reference source 4 which is optional.
  • the digital system of the present invention may operate with simply a source of video reference information 3.
  • a source of audio reference information 4, a source of digital additional information XD 13, or a source of analog additional information XA 14 may be added.
  • the audio reference information is input to digital storage means 11. If the audio reference information from source 4 is already digital, it may be directly input and stored in digital storage means 11. Alternatively, if the audio reference information from source 4 is analog, the information may be digitized and compressed by digital encoder 7 and compression means 8 before being stored in digital storage means 11. The digitized audio reference information is output from digital storage means 11 to coding and control means 1 via decode/decompression means 12. The compression and decompression means 8 and 12 are optional.
  • the audio reference sources 4 may be separate tracks of a stereo recording. Each track is considered a separate source of audio reference information.
  • the video reference source 3 and the audio reference source 4 may be a video cassette recorder such as a SONY EVO-9850.
  • the digital video encoder 9 and compressor 10 may be an MPEG-1 encoder such as the Future Tel Prime View II.
  • the digital audio encoder 7 and compressor 8 may be a sound encoder such as the Sound Blaster 16.
  • a PC-compatible computer system such as a Gateway P5-133 stores the data to a digital storage means 11 such as a compact disc recording system like a Hyundai CDR00 or a hard disk like a Seagate ST72430N.
  • the coding and control means 1 codes the reference video and audio information to generate coded data. Whenever there is more than one source of information, such as an audio reference source 4, a source of additional digital information 13 or a source of additional analog information 14, the coding and control means 1 performs a linking function. Linking is the process by which information from different sources is synchronized or correlated. This is accomplished through the use of time reference data. Linking provides the ability to play and view video, audio and additional information in a synchronized manner. The linking data permits instant random access of information. The coding and control means 1 performs the linking function in addition to the indexing function discussed above. Linking and indexing are together referred to as "coding."
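  • As a hedged sketch of what such linking data might look like: each row ties a normalized time reference value to the corresponding positions in the video, audio and transcript sources, so any one of them can be reached by random access. The row layout and all names are assumptions for illustration.

```python
# Illustrative linking table: normalized time -> per-source positions.
from bisect import bisect_right

links = [
    # (normalized_time, video_frame, audio_sample, transcript_utterance_id)
    (0.00,     0,       0, 1),
    (0.25,  4500,  661500, 2),
    (0.50,  9000, 1323000, 3),
]

def positions_at(t):
    """Random access: locate the source positions for normalized time t."""
    keys = [row[0] for row in links]
    i = max(bisect_right(keys, t) - 1, 0)
    return links[i]

print(positions_at(0.3))   # -> (0.25, 4500, 661500, 2)
```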
  • when there is more than one source of information, the coding and control means 1 performs linking and/or indexing.
  • the linking data, which comprises time reference values, is stored as coded data in coded data store 2. Additionally, the indices of data added by the process of coding are stored in coded data store 2.
  • the digital video system may include a source of additional information which may be analog or digital.
  • the additional information from source 14 may be digitized by digital encoder 15.
  • the additional information from source 13 or 14 may be the transcript of the audio reference information, notes or annotations regarding the audio or video reference information, a static picture, graphics, or a document such as an exhibit for a videotaped deposition with or without comments.
  • the source of the additional information may be a scanner or stored digital information or a transcript of a deposition being produced in real time by a stenographer.
  • the annotations or notes may be produced in real time also.
  • the coding and control means codes the reference video information, reference audio information, and additional analog or digital information to generate coded data which includes linking data and indexing data.
  • the coded data which is generated is attribute data.
  • the attribute data may be an event type.
  • event types may be "Questions," "Pause Sounds," or "Writing on Board" for a study of a video of a teacher's teaching methods. These are events which take place in the video.
  • the attribute data may regard a characteristic associated with an event type. This creates an index of characteristics.
  • characteristics for the event type "Questions" may be "administrative questions," "questions regarding discipline," or "content of questions."
  • the attribute data may include a plurality of choices for a characteristic.
  • choices for the characteristic "administrative questions" may include "administrative questions regarding attendance," "administrative questions regarding grades," or "administrative questions regarding homework."
  • a fourth table designates a selected choice of a plurality of possible choices. Thus, for example, the selection may be "administrative questions regarding grades."
  • a fifth table is created which includes time reference values associated with each instance of the event type. So, for example, an index is created of time reference values associated with each time a question is asked for the event type "Questions."
  • the user interactively marks the mark in point of the video reference information that designates each instance of a question being asked. Additionally, the user may optionally mark the mark out point when the question is finished being asked.
  • the digital video system of the invention also permits automatic or semi-automatic coding and control.
  • the coding and control means 1 may create an index of the time reference values corresponding to each time the video scene changes.
  • the user may input the parameter N. For example, the user may change N to 5 and change the operation of the system so that the coding and control means 1 compares five frames to determine if a scene has been changed.
  • the user may change the threshold amount T, from 50% to 20% for example.
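  • A minimal sketch of how such scene-change indexing might work, assuming frames arrive as flat lists of pixel intensities. The difference measure, the per-pixel threshold of 16, and the exact role of N are assumptions; T defaults to 50% as in the example above.

```python
# Semi-automatic scene-change indexing (illustrative only).
def frame_diff(a, b):
    """Fraction of pixels that changed noticeably between two frames."""
    changed = sum(1 for pa, pb in zip(a, b) if abs(pa - pb) > 16)
    return changed / len(a)

def scene_changes(frames, times, N=2, T=0.5):
    """Store a time reference value when the comparisons spanning N
    consecutive frames each exceed the threshold amount T."""
    index, run = [], 0
    for i in range(1, len(frames)):
        run = run + 1 if frame_diff(frames[i - 1], frames[i]) > T else 0
        if run >= max(N - 1, 1):
            index.append(times[i])   # stored in the coded data store
            run = 0
    return index
```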
  • the coding and control means 1 includes the ability to search for instances of an event type.
  • the coding and control means 1 may search for instances of one event type occurring within a time interval Y of instances of a second event type.
  • the system can determine each instance when one event occurred within a time interval Y of a second event.
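  • A sketch of this proximity search, assuming each event type's index is simply a list of mark in time reference values (all names illustrative):

```python
# Find instances of one event type within time interval Y of a second type.
def within_interval(a_times, b_times, Y):
    return [ta for ta in a_times
            if any(abs(ta - tb) <= Y for tb in b_times)]

questions = [12.4, 44.0, 130.2]   # mark in points for "Questions"
writing   = [45.5, 210.0]         # mark in points for "Writing on Board"
print(within_interval(questions, writing, Y=5.0))   # -> [44.0]
```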
  • the coding and control means 1 includes an alarm feature.
  • An alarm may be set at each instance of an event type.
  • the coding and control means 1 controls a system action.
  • the system may position the video and play. Other system actions such as stopping the video, highlighting text of a transcript or subtitling may occur.
  • the coded data store 2 may be a relational database, an object database or a hierarchical database.
  • the coding and control means 1 performs the linking function when there is more than one source of information.
  • Linking data is stored to relate digital video and digital audio information.
  • Linking data may also link digital video or digital audio information to additional information from sources 13 and 14.
  • Linking data includes time reference values. Correlation and synchronization may occur automatically, semi-automatically or interactively. Synchronization is the addition of time reference information to data which has no time reference. Correlation is the translation or transformation of information with one time base to information with another time base to make sure that they occur at the same time.
  • the digital system of the present invention operates on time reference values that are normalized unitless values.
  • time reference values are added to information that includes no time reference such as a document which is an exhibit for a videotaped deposition.
  • both sources of information include time reference information
  • the correlation process transforms one or both to the time reference normalized unitless values employed by the system.
  • One or both sources of information may be transformed or points may be chosen that are synched together.
  • the time reference information of one source can be transformed to a different time scale by a transformation function.
  • the transformation function may be linear, non-linear, continuous, or not continuous. Additionally, the transformation function may be a simple offset. The transformation function may disregard blocks of video between time reference values, for skipping advertising commercials, for example.
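  • The transformation functions named above might look like the following sketch; the function names and the (start, end) block representation are assumptions for illustration.

```python
# Correlating an external time base to the system's normalized time scale.
def offset_transform(t, offset):
    return t + offset                  # simple offset

def linear_transform(t, scale, offset):
    return scale * t + offset          # linear transformation

def skip_blocks(t, blocks):
    """Discontinuous transform: disregard blocks of video between time
    reference values, e.g., advertising commercials, given as
    (start, end) pairs on the source time base."""
    skipped = sum(min(end, t) - start for start, end in blocks if t > start)
    return t - skipped

print(skip_blocks(120.0, [(30.0, 60.0)]))   # -> 90.0 (30-unit commercial skipped)
```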
  • Time codes with hour, minute, second and frame designations are frequently used in the film industry.
  • the coding and control means 1 correlates these designations to the normalized unitless time reference values employed by the system.
  • the coding and control means 1 may transform a time scale to the time code designation with hour, minute, second and frame designations.
  • the coding and control means 1 may correlate two sources of information by simply checking the drift over a time interval and selecting points to synch the two information sources together.
  • the coding function of the digital system of the present invention is not just an editing function. Information is added. Indices are created. Further, a database of linking data is created. The original reference data is not necessarily modified.
  • the coded data store may be in any format including edit decision list (EDL) which is the industry standard, or any other binary form.
  • the coded data store 2 stores the data base indices which are created, linking data, and data from the additional sources 13 and 14, which may include static pictures, graphics, documents such as deposition exhibits, and text which may include transcripts, translations, annotations, or closed-captioned data.
  • Subtitles are stored as a transcript. There may be multiple transcripts or translations or annotations or documents. This permits multiple subtitles.
  • FIG. IB illustrates the coding and control means 1 of FIG. 1A.
  • the coding and control means includes controller 16. Controller 16 is connected to derivative data coding means 17 and correlation and synch means 18. Controller 16 is also connected to the coded data store 2 and to the output 6. Digital information from the digital storage means 11 is input to the derivative data coding means 17. If information from one source only is input to the derivative data coding means 17, only the indexing function is performed. If information from two sources is input to the derivative data coding means 17, indexing and linking are performed.
  • the coding and control means 1 may further include correlation and synch means 18 for receiving additional data XD and XA.
  • the correlation and synch means 18 correlates data with a time reference to the video information from the digital storage means 11 and synchronizes data without a time reference base to the digital video information from the digital storage means 11.
  • Control loop 19 illustrates the control operation of the controller 16. The user may be part of control loop 19 in interactive or semiautomatic operation.
  • Control loop 20 illustrates the control function of controller 16 over correlation and synch means 18. The user may be a part of control loop 20 in interactive and semi-automatic operation.
  • control loops 19 and 20 also include input/output interface devices which may include a keyboard, mouse, stylus, tablet, touchscreen, scanner or printer.
  • FIG. 1C is a chart showing the structure of the coded data store 2 of FIG. 1A for indexing data.
  • FIG. 1D is a software flowchart. The following define the indices of the coded data store 2.
  • Characteristics: Variables which are applicable to a particular event type. An example would be the Event Type "Teacher
  • CharChoices: Contains valid values of the parent Characteristics variable. For example, in the example of the Characteristic
  • CharChoices serves as a data validation tool to confine user data entry to a known input that is statistically analyzable.
  • Event Types: Stores model information about the event code, such as whether the code can have an in and out point. Serves as a parent to the characteristics table, which includes possible variables to characterize the event type.
  • Instances: Contains instances of particular event types with time reference information.
  • InstCharChoice: Stores the actual value attributed to a characteristic of a particular event instance. For example, one instance of the teacher question might have a value in the characteristic "Difficulty Level" of
  • OutlineHdng: Stores the major headings for a particular outline.
  • OutlineSubHdng: Stores actual instances that are contained in an outline. These instances were originally coded and stored in the instance table, but once they are copied to an outline they are completely independent of the original instance.
  • Pass Filters: Stores filter records which are created by the sampling process.
  • Samples: Stores samples for the purposes of further characterization. These instances are either a random sample of previously coded instances or computer-generated time slices created using sampling methodologies.
  • Segments: Corresponds to the physical media where the video is stored. This table is a "many" to the parent Units table.
  • SeqNums: Stores sequence numbers for all tables.
  • Sessions: Keeps track of coding for each study down to the pass and unit level. Therefore, a user may go back to previous work and resume where he or she left off.
  • Studies Pass: Stores information for a particular pass in the study, such as pointers to filters and locked status for sampling.
  • StudyUnits: Contains references to units that are attached to a particular study. Since there may be multiple units for each study and multiple studies that utilize a particular unit, this table functions as a join table in the many-to-many relationship.
  • Study Event: Stores information relevant to the use of a particular event type in a particular study and a particular pass. Since there may be multiple Event Types for each study and multiple studies that utilize a particular Event Type, this table functions as a join table in the many-to-many relationship.
  • Transcribe: Stores the textual transcript, notes, and time reference values for each utterance corresponding to a Unit.
  • Units: Parent table of the videos that are viewable.
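  • Read as a relational database (one of the store formats contemplated above), part of these indices could be sketched roughly as follows; all table and column names are illustrative guesses based on the definitions above, not the patent's actual schema.

```python
# Minimal relational sketch of part of the coded data store.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE event_types     (id INTEGER PRIMARY KEY, name TEXT,
                              has_in_out INTEGER);            -- model info
CREATE TABLE characteristics (id INTEGER PRIMARY KEY,
                              event_type_id INTEGER REFERENCES event_types,
                              name TEXT);
CREATE TABLE char_choices    (id INTEGER PRIMARY KEY,
                              characteristic_id INTEGER REFERENCES characteristics,
                              value TEXT);                    -- valid values only
CREATE TABLE instances       (id INTEGER PRIMARY KEY,
                              event_type_id INTEGER REFERENCES event_types,
                              mark_in REAL, mark_out REAL);   -- time references
CREATE TABLE inst_char_choice (instance_id INTEGER REFERENCES instances,
                              characteristic_id INTEGER,
                              choice_id INTEGER);             -- selected choice
CREATE TABLE study_units     (study_id INTEGER, unit_id INTEGER);  -- join table
""")
```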
  • the coded data store 2 stores data representing time reference values relating the digital audio information to the digital video information and vice versa. Accordingly, for any point in the video information, the corresponding audio information may be instantly and randomly accessed with no time delay. Additionally, for any point in the audio information, the corresponding video frame information may be instantly and randomly accessed with no time delay.
  • the coded data store 2 stores attribute data.
  • the attribute data is stored in an index and is derivative data that is added during the coding process.
  • the attribute data may be an event type, such as any action shown in the video, for example a person raising his hand, standing up, or making a field goal. Attribute data may be time reference data indicating instances of an event type.
  • the attribute data may also include a characteristic associated with an event, such as directness or acting shy.
  • the attribute data may also include a plurality of choices of characteristics such as being succinct or being vague. It may be the chosen choice of plural possible choices.
  • the coded data store 2 stores time reference data corresponding to the attribute data.
  • the coded data store 2 stores data representing the text of a transcript of the digitized audio information.
  • a video deposition can be digitized.
  • the video information originates at reference source 3 and the audio information originates at reference source 4.
  • the video and audio information may be digitized and/or compressed via digital encoders 7 and 9 and compressors 8 and 10 and stored in a digital storage means 11.
  • a transcript of the deposition may be stored in coded data store 2. More than one transcript, foreign language translations, for example, may be stored.
  • Coding and control means 1 accesses video information from digital storage means 11, audio information from digital storage means 11 and the transcript information from coded data store 2, and simultaneously displays the video and the text of the transcript on output display 6. Additionally, the audio is played.
  • the video is displayed in one area of a Video Window called the video area and the text of the transcript is displayed in a transcript area. More than one transcript may be displayed.
  • the Video Window is illustrated in FIG. 20 and is described in detail later.
  • subtitles can be added to the video information and displayed on output display 6 in the same area as the video.
  • the viewer can view the video information with subtitles and simultaneously watch the text of the transcript on output display 6.
  • the attribute data that is stored may be regarding video scene changes.
  • the time reference data of the scene change is stored in the coded data store 2. This may be performed interactively or automatically or semi- automatically. If there are a number of times that an event occurs the time reference values associated with each occurrence of the event are stored in the coded data store 2.
  • the present invention has a presentation ability where a presentation may be displayed on output display 6.
  • the video associated with each stored time reference value is displayed in sequence to create a presentation. For example, in an application dealing with legal services and videotaped depositions, each time a witness squints his eyes may be tracked by storing a time reference value associated with each occurrence of the event during the coding process.
  • the time reference values represent the times at which the pertinent video portion starts and finishes.
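  • A brief sketch of this presentation playback loop, assuming a player object exposing a play(start, end) call (an assumption, not the patent's interface):

```python
# Play each coded clip back-to-back to form a presentation.
def present(player, clips):
    for mark_in, mark_out in clips:
        player.play(mark_in, mark_out)   # random access, so no seek delay

# e.g., every occurrence of the witness squinting, as coded above:
squint_clips = [(101.2, 103.0), (245.7, 247.1)]
# present(my_player, squint_clips)
```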
  • the digital system of the invention includes search abilities where a word or a phrase may be searched in the text of the transcript of the digitized audio information. A search of notes, annotations or a digitized document for a word or phrase may also be performed. Additionally, the present invention includes the ability to perform statistical analysis on the attribute data. Random sampling of instances of an event type can be performed. Coding and control means 1 accesses coded data store 2 and analyzes the data in accordance with standard statistical analysis.
  • the invention includes a method of analyzing video information including storing digital video information, storing digital audio information, storing coded data linking the digital video and digital audio information, storing coded data regarding events in indices, and computing statistical quantities based on the coded data.
  • the present invention results in a video analysis file for a multimedia spreadsheet containing time-dependent information linked to video information.
  • the video information and textual (transcript, annotations or digitized documents) information can be searched.
  • the video information may be stored on a CD-ROM disk employing the MPEG-1 video standard. Other video standards may be employed. Additionally, other storage media may be employed.
  • the coded data store 2 and digital storage means 11 illustrated in FIG. 1A may actually be parts of the same memory.
  • Analog videotapes may be converted into digital video format by a standard digitized video transfer service that is fast, inexpensive, and handles high volume.
  • the digital video service digitizes the video, compresses it and synchronizes it with the audio.
  • the system may digitize the video and audio information from reference sources 3 and 4.
  • the source of information may be a commercial, broadcast or analog video tape.
  • the present invention permits video analysis so that the user may view, index, link, organize, mark, annotate and analyze video information. This is referred to as "coding."
  • buttons and controls permit the marking, coding and annotation of the video.
  • a transcription module permits synchronized subtitles. Multiple subtitles are possible, which is important for the foreign film market, where subtitles in different languages may be required.
  • the present invention has note- taking abilities. Searches may be performed for the video information, notes, the transcript of the audio information, coded annotations or digitized documents.
  • a presentation feature permits the selection and organization of video segments into an outline to present them sequentially on a display or to record them to a VCR or a computer file.
  • Complex coding and annotations are performed in several passes such that multiple users may code and annotate the digitized information.
  • One user may make several passes through the video for coding, marking and annotating or several users may each make a pass coding, marking and annotating for separate reasons.
  • Information may be stored and displayed in a spreadsheet format and/or transferred to a statistical analysis program, and/or to a graphics program. Types of statistical analyses which may be conducted, for example, are random sampling, sequential analysis, cluster analysis and linear regression. Standard algorithms for statistical analysis are well known. Additionally, the information may be input to a project tracking program or standard reports may be prepared. Spreadsheets and graphs may be displayed and printed.
  • the present invention has use in video analysis for research and high end analysis, the legal field and the sports market.
  • the present invention would be useful in research in fields of behavior, education, psychology, science, product marketing, market research and focus groups, and the medical fields.
  • teaching practices may be researched.
  • Verbal utterances are transcribed, multiple analysts mark and code the events and annotate the video information for verbal and nonverbal events, lesson content and teacher behavior.
  • the transcribed utterances, marks, codes, and annotations are linked to the video and stored. The information may be consolidated, organized, presented or input for statistical analysis and interpretation.
  • Other fields of research where the invention has application are industrial process improvement, quality control, human factors analysis, software usability testing, industrial engineering, and human/computer interactions evaluations.
  • videos of operators at a computer system can be analyzed to determine if the computer system and software are user friendly.
  • the present invention would be useful in legal services where videotaped depositions may be annotated and analyzed. Deposition exhibits may be stored in the coded data store with or without notes on the documents.
  • the present invention includes applications and operations software, firmware, and functional hardware modules such as a User Module, a Menu Manager, a Unit Module, a Study Module, a Video Window, a Transcribe Mode, a List Manager, an Outline Presentation Feature, a Sampling Feature, an Analysis Module and a Search Module. Reports may be created and output.
  • a unit is composed of a video and transcript data.
  • a unit may span several tapes, CDs or disks. These media are referred to as segments, and a unit has at least one segment.
  • the present invention may handle multiple segments per unit. This permits the present invention to accommodate an unlimited quantity of video information.
  • a unit may include plural transcripts stored in memory.
  • a transcript is the text of speech in the video, foreign language translation, subtitles or description or comments about the video.
  • a study includes a collection of units.
  • a study is defined to specify coding rules for the units associated with it, for example, what event types and characteristics are to be recorded.
  • a unit may be associated with one or more studies.
  • a session is a specific coding pass for a specific unit by one user.
  • the number of sessions that are created for a study is equal to the number of units included in the study multiplied by the number of coding passes defined for the study. For example, a study with four units and three coding passes yields twelve sessions.
  • a session must be open in order to go into code mode on the coding window. If no session is open, the user is prompted to open one.
  • the User Module includes all windows, controls, and areas that are used to define users, control security, logon, and do primary navigation through the interactive digital system.
  • the User Module is briefly mentioned here for the purpose of describing logon and is explained in more detail later.
  • the interactive video analysis program of the present invention requires a logon before granting access to the program functions and data.
  • the purpose of the logon process is not only to secure the database content, but also to identify the user, assign access privileges, and track information such as how long a user has been logged on.
  • the user is assigned access privileges and presented with the program's main button bar, which contains icons that allow entry to various parts of the program. The number and type of icons that appear on the button bar for a given user depend on the privileges granted in his user record.
  • the main button bar, or alternatively the application tool bar, is part of the Menu Manager.
  • the main button bar is illustrated in FIG. 2A.
  • the manage button bar of FIG. 2B is accessed from the main button bar of FIG. 2A and is an extension of the main button bar. Access to commonly accessed modules is provided by the main and manage button bars.
  • FIG. 2C replaces the main button bar and manage button bar of FIGS. 2A and 2B.
  • icon 21 represents "View."
  • icon 22 represents "Code."
  • Area 28 displays the units, for example, which unit is current or permits selection of previously defined units.
  • Area 29 represents the "outline" feature and area 30 is directed to "Sessions" selection.
  • the application wide tool bar provides access to the most commonly accessed modules including Video-View Mode, Video-Code Mode, Video-Transcribe Mode, Search Module, Unit Module, Analysis Module, Help Module, Session Selection, Unit Selection, and Outline Selection.
  • the Video-View Mode opens the Video Window, making the view mode the active module. If the user has never accessed a unit record, the user will be presented with a unit selection dialog.
  • the Video-Code Mode opens the Video Window, making the code mode the active module. If the user has never accessed a session, the user will be presented with a session selection dialog.
  • the Video-Transcribe Mode opens the Video Window, making the transcribe mode the active module.
  • when the transcribe mode is activated, the transcription looping palette is displayed automatically.
  • the Search Module opens the search window, making it the current module.
  • the Unit Module opens the Unit Module, making it the current module.
  • the Study Module opens the Study Module, making it the current window.
  • the Analysis Module opens the Analysis Module, making it the current window.
  • the Help Module opens the interactive video analysis help system.
  • the session selection popup provides the ability to change the current session when in Video-Code Mode.
  • the unit selection popup provides the ability to change the current unit when in Video-View Mode.
  • the outline selection popup provides the ability to change the current outline when in Video-Transcribe Mode.
  • FIG. 3 illustrates the user list window.
  • the user list window lists the users.
  • the user detail window of the User Module is illustrated in FIG. 4. It is the primary window that contains all data needed to define a user, including information about the user and security privileges. This window is presented when adding a new user, or when editing an existing user.
  • the fields and controls in the window include the window name "User Detail," the first name, the last name, the user code, phone number, e-mail address, department, custom fields, whether logged on now, last logon date, number of logons since, logged hours reset count, comments, logon id, set password, and login enabled.
  • the user detail window includes coding and transcription rights area 31. This is a field of four check boxes that grant privileges to code video (create instances) or edit the transcription text as shown in the table of FIG. 5.
  • the user detail window also includes system management rights area 32. This area is a field of five check boxes that grant privileges to manage setup of the study and various other resources as shown in the table of FIG. 6.
  • the user detail window further includes the make-same-as button, navigation controls, a print user detail report button and a cancel /save user button.
  • the collection of windows and procedures that together allow definition of studies, event types, characteristics, choices, units and samples comprises the "Study Module."
  • the Study Module is reached from the main button bar or the applications tool bar that is presented when the interactive video analysis program is initiated.
  • a study can be thought of as a plan for marking events that are seen in the video or in the transcription text of the audio information.
  • a study contains one or more event types, which are labels for the events that are to be marked. Each event type may also have one or more characteristics, which are values recorded about the event.
  • when an event is marked in the video or transcript text, it is formally called an "event instance."
  • when the project is first initialized, one study is created. A default study is used when the user does not choose to create a predefined coding plan (study), but rather wishes to use the program in a mode where event types can be assigned at will.
  • the Study Module may be accessed by selecting the study button from the application tool bar or by selecting study from the module submenu.
  • when the module is first opened, the user is presented with a standard find dialog whereby he can search for the specific records he wishes to work with.
  • the find dialog screen is illustrated in FIG. 7.
  • Double-clicking on a list item results in opening that item for edit. For example, double-clicking on a study in the studies list window, as illustrated in FIG. 8, results in opening a study for edit in the study detail window.
  • the ok/cancel button has the action of returning to the original window.
  • the First control goes to the first record in the selection displayed in the list.
  • the Prev button goes to the record immediately before the current record in the selection displayed in the list.
  • the Next button goes to the record immediately after the current record in the selection displayed in the list.
  • a study can be constrained to be a subset of another study. This means that the study can only contain units that were specified in the other study (either all the units, or a subset of the units). If additional units are added to the "parent study," they become available to the constrained study but are not automatically added. Constraining a study to be a subset of another study also means that the event types for the parent study are available as event filters in the sample definition for the constrained study. As explained in detail below, a study is constrained when the "constrain unit selection to be a subset of the specified study" button is checked on the "use units from other study" window as illustrated in FIG. 16. The constraint cannot be added after any sessions have been created for the study. The constraint can be removed at any time as long as the constrained study does not include any event types from the parent study in its sample criteria.
  • Every project contains a default study that is created when the project is first created.
  • the default study allows entry into code mode of the Video Window shown in FIG. 20 if no formal studies have been defined. Event types and characteristics may be added to the default study at will from the Video Window.
  • the default study is maintained from the video window and not from the Study Module, hence, it does not appear in the study listing window shown in FIG. 8. It does appear whenever studies are listed in all other modules.
  • a session is always open for the default study, which is called the default session. If no other studies have been created in the project, the default session is opened without prompting when the user goes into code mode on the study window. The following rules apply.
  • Units may be added to a study.
  • a unit cannot be added to a study unless it is locked.
  • the purpose of the lock is to ensure that the set of units for a specific study does not change once a pass has been locked.
  • Studies may be deleted. A study cannot be deleted if it is constrained by another study or if the study itself is locked. A study should not be allowed to be constrained to another study that is not locked yet.
  • the studies list window shown in FIG. 8 presents all the studies defined for the project.
  • the window displays only three fields: study name, description, and author. Double-clicking on a study record opens the study detail window for the selected study.
  • the study detail window is the primary window that contains all data needed to define a study. This window is presented when creating a new study or when editing an existing study.
  • the study detail window is illustrated in FIGS. 9A and 9B.
  • the study detail window includes a number of fields and controls.
  • Field 41 is the window title. The name of this window is "Study Detail" .
  • Field 42 is the study name. In this field the name of the study may be entered.
  • Field 43 is the author. This is a non-enterable area that is filled by the program using login data.
  • Field 44 is the create date area which includes the date and time when the study was initially created. This is a non-enterable area that is filled by the program when the study record is created.
  • Field 45 is the study description which is a scrollable enterable area for text to describe the study.
  • Field 46 is the study outline which is a scrollable area that shows the event types, characteristics, and choices created for the study. Event types are shown in bold in FIG. 9A.
  • Characteristics are displayed in plain text under each event type. Choices are displayed in italics under each characteristic. Thus, as shown in FIG. 9A the event type is "Question Asked", the characteristic is a “Directness” and the choices are "Succinct” and "Vague”.
  • FIG. 9A illustrates a study detail window for a study for video analysis of teaching practices.
  • teaching practices may be analyzed by video taping teachers interacting with students in the school room.
  • Various event types such as asking questions or raising one's hand or answering a question are analyzed by marking the events in the video.
  • event type 46a is displayed in bold with the event code and event type name (e.g., "Questions Asked"); the type of marking associated with the event type (for example, "Vi/T" means "mark Video In Point and text" for each instance); and the pass in which the event type is to be marked (e.g., "1").
  • the event type detail window is shown in FIG. 13.
  • V: Video In and Out points are to be marked
  • Vi: Video In point only
  • T: text
  • E: Event
  • Vi/T means the Video In point and the text are to be marked for the event type. If no marking is turned on, then nothing is displayed (for example, see “Answer” in Pass 3 in the illustration).
  • when an event type is double-clicked, the action is to open that event type in the event type detail window.
  • the characteristic label 46b as shown in FIG. 9A is displayed in plain text with the characteristic code (e.g., "DI"), the name of the characteristic, and the data entry type (e.g., "select one"). Characteristics are displayed immediately under the event type to which they are associated. When a characteristic is double-clicked, the action is to open that characteristic in the characteristic detail window as shown in FIG. 14.
  • the order in which the characteristics are displayed under the event type is also the order in which they are displayed on the Video Window.
  • the user can change the order by clicking on a characteristic and dragging it to a point above or below another characteristic belonging to the same event type.
  • Characteristics cannot be dragged from one event type to a different event type (for example: the user cannot drag characteristic "Directness” from event type "Question Asked” to event type "Answer”), but characteristics can be dragged from one event type to the same event type that belongs to a different pass through the video (for example: the user can drag characteristic "Effectiveness" from "Answer” in pass 3 to "Answer” in pass 2).
  • When a characteristic is moved all associated choice values are moved with the characteristic and retain their same order.
  • FIGS. 10A and 10B illustrate dragging a characteristic.
  • FIG. 10A illustrates the before condition.
  • the characteristic "Appropriateness” in pass 1 will be dragged to pass 2.
  • FIG. 10B illustrates the after condition.
  • the characteristic "Appropriateness” was dragged from pass 1 to pass 2. The action is to create a new appearance in the event type "Question Asked” in pass 2, with “Appropriateness” underneath it.
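  • The following Python sketch is illustrative only and is not part of the disclosed system: it shows one way the study outline hierarchy (event types, characteristics, choices) and the drag rule described above might be modeled. All class and function names are assumptions of this sketch.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Choice:
        value: str              # user-defined value, e.g. "1"
        name: str               # e.g. "Succinct"

    @dataclass
    class Characteristic:
        code: str               # e.g. "DI"
        name: str               # e.g. "Directness"
        choices: List[Choice] = field(default_factory=list)

    @dataclass
    class EventType:
        code: str
        name: str               # e.g. "Question Asked"
        pass_number: int        # coding pass in which the event type appears
        characteristics: List[Characteristic] = field(default_factory=list)

    def move_characteristic(src: EventType, dst: EventType,
                            ch: Characteristic) -> None:
        # A characteristic may only be dragged to the same event type in a
        # different pass, never to a different event type.
        if src.code != dst.code:
            raise ValueError("cannot drag to a different event type")
        src.characteristics.remove(ch)
        dst.characteristics.append(ch)   # choices travel with it, order kept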
  • the choice value 46c illustrated in FIG. 9A is displayed in plain text with a user-defined value (e.g., "1") and choice name. Choices are displayed immediately under the characteristic to which they are associated. The user can change the order of choices by clicking on a choice value and dragging it above or below another choice value belonging to the same characteristic. Choice values cannot be dragged from one characteristic to another or between passes.
  • the pass separator line 46d shown in FIG. 9A separates the passes through the video being analyzed. If more than one pass has been created, a pass separator line is drawn between the event types of each pass. The pass separator line cannot be dragged or selected.
  • Button 47 is the add event type button. The action of this button is to create a new event type and open the event type detail window shown in FIG. 13. The "select an event/sampling method" menu for creating a new event type and opening an event type detail window is illustrated in FIG. 11.
  • Button 48 of the study detail window of FIG. 9A is the "remove from study" button. The action of this button is to remove the highlighted item from the study along with all indented items under it.
  • removing an event type also removes the associated characteristics and choice values directly under it. If the last event type is removed from a pass, the pass is automatically deleted and removed from the "passes and sampling" area 55 of the study detail window. Pass 1 may not be deleted.
  • the pass display area 49 displays the pass to which the highlighted event is assigned. It is also a control tool to select the pass.
  • the pass display 49a is a non-enterable area which displays the pass of the currently highlighted event type.
  • the pass selector area 49b is a control tool that works only when an event type is selected. Clicking the up-arrow moves the selected event type to the next higher pass. Similarly, clicking the down-arrow has the action of moving the selected event to the next lower pass. If the pass number is set to a value greater than any existing pass, the action is to create a new pass. Each pass must contain at least one event type.
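  • As an illustrative sketch only (names are assumptions, not the disclosed implementation), the pass invariants described above can be expressed as follows: moving an event type to a pass number above any existing pass creates the new pass, and an emptied pass is automatically deleted, except pass 1.

    from collections import defaultdict

    passes = defaultdict(list)   # pass number -> event type codes in the pass

    def move_event_type(code: str, old_pass: int, new_pass: int) -> None:
        passes[old_pass].remove(code)
        passes[new_pass].append(code)   # an unused pass number is created here
        if not passes[old_pass] and old_pass != 1:
            del passes[old_pass]        # emptied pass deleted; pass 1 remains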
  • the show characteristics checkbox 50, when checked, displays all characteristics under the appropriate event type in the study outline area 46 and enables the "show choices" checkbox.
  • the show choices checkbox 51, when checked, displays all choice values under the appropriate characteristic in the study outline area.
  • the add pass button 52 has the action of creating a new pass.
  • FIG. 12 illustrates a newly created pass represented by a separator line and a pass number. New event types will be added to the pass, and existing event types can be dragged to the pass.
  • the specified units button 53 of the study detail window of FIG. 9A has the action of presenting the unit selection window shown in FIG. 15.
  • a non-enterable text area to the right of the button displays the number of units selected for the study. The button is disabled when the checkbox titled "Include all units in project" is checked.
  • Area 54 includes a unit constraint message. If a constraint is in effect that affects the selection of units, the text describing the constraint is displayed in this area. There are two possible values of the message: "Constrained to the subset of [study]" and "Constrained to include all units in the project". The second constraint is imposed when the checkbox "Include all units in project" 60 is chosen.
  • Area 56 is the unit selection description. This area is a scrollable enterable area for text to describe the units selected for the study.
  • Area 55 is the "passes and sampling” area. This is a scrollable non-enterable area that displays an entry for each pass with the pass number and its sample mode.
  • Area 57 includes navigation controls: First, Prev, Next and Last.
  • Button 58 is the print study button which is used to print the study detail report.
  • Buttons 59 are the cancel/save study buttons.
  • the save button saves all the study data and returns to the studies list window shown in FIG. 8.
  • the cancel button returns to the studies list window of FIG. 8 without saving the changes to the study data.
  • Checkbox 60 is the "Include all units in project" checkbox which has the action of setting behavior of the study so that all units in the project are automatically included in the study. Units may be added any time to the project, and they are automatically added to the study.
Event Type Detail Window
  • the event type detail window is illustrated in FIG. 13. This window is for entry of all attributes for an event type.
  • the window is reached through the study detail window of FIG. 9A when either an event type is double-clicked in the study outline 46 or when a new event is added employing button 47.
  • a number of fields and controls of the event type detail window are described below.
  • the window title area 61 gives the name of the window which is "Event Type Detail" .
  • the event code area 62 is an area for the code that uniquely identifies this event type when analysis data is created or exported.
  • the event name area 63 is the area for the name of the event type.
  • the saved search area 64 is a non-enterable text area which appears only if this event type was created by the Search Module to mark instances retrieved from a search. The area provides information only. An event type created to be a saved search can have characteristics, but cannot have video marking or text marking turned on. No new instances can be coded for a saved search event type.
  • the coding instruction area 65 is a scrollable text area for entry of instructions on how to mark this event type. This text area is presented when help is selected on the Video Window.
  • the event instance coding area 66 contains checkboxes for specifying the rules identified at areas 67, 68 and 69 for how event instances are to be coded.
  • instances will be marked using video, text or both. This means that "video marking”, “text marking” or both will be checked. Instances can be marked for this event type in all passes in which the event type occurs, unless checkbox 69 entitled “restrict instance coding to earlier pass only” is checked. In this case, new instances can only be marked in the first pass in which the event type appears in the coding plan. For example, the same event type may appear in pass 2 and in pass 3. If the event instance coding is "mark video” and checkbox 69 "restrict to the earliest pass only” is checked, new instances may be marked in pass 2, but not in pass 3. An example of where this would be done is when one pass is for instance hunting (such as pass 2) and another pass is reserved for just characterizing (pass 3).
  • the event instance coding requirement determines what will be saved in the database for each instance. If an event type is defined as "Video In” only, then any "Video Out” or text marking is ignored when instances are created.
  • the "mark video" checkbox 67 specifies whether instances are to be marked using Video In or Out Points.
  • the checked condition means that new instances are to be marked using the video mark controls.
  • The unchecked condition means that no video is to be marked for this event type.
  • Three choices are presented for how the video is to be marked for an event type. This governs the behavior of the mark controls on the Video Window when an instance of this event type is marked. The choices are:
  • the text marking checkbox 68 specifies whether instances are to be marked using text.
  • the checked condition means that new instances are to be marked using the text mark control.
  • the unchecked condition means that no text is to be marked for this event type.
  • the "restrict instance coding to earlier pass only" checkbox 69 specifies whether instances can be marked in all passes in which the event type appears in the coding plan, or only in one pass.
  • the checked condition means that event instances can only be marked in the first pass (first means the first sequential pass, not necessarily pass 1) in which the event type appears. If the event type appears in other passes in the coding plan, it behaves as if "mark video" and "mark text" are both unchecked. That is, event types in other passes can only be used for entering characteristic values, not for marking new instances.
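  • The instance-coding rules above can be summarized in a short sketch (illustrative only; parameter names are assumptions of this sketch):

    def may_mark_new_instance(mark_video: bool, mark_text: bool,
                              restrict_to_earliest: bool,
                              current_pass: int, appearances: list) -> bool:
        # appearances: the passes in which this event type occurs, e.g. [2, 3]
        if not (mark_video or mark_text):
            return False   # no marking turned on: characterizing only
        if restrict_to_earliest and current_pass != min(appearances):
            return False   # e.g. markable in pass 2 but not in pass 3
        return True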
  • the characteristics outline area 70 is a scrollable area that shows all the characteristics and choices associated with an event type for all passes. Characteristics are displayed in plain text. Choices are displayed under each characteristic in italics. If a characteristic in the characteristics outline area 70 is double-clicked, the item is opened for edit in the characteristic detail window illustrated in FIG. 14. If a choice value is double-clicked, its parent characteristic is opened for edit in the characteristic detail window. The order in which the characteristics are displayed in the outline is also the order in which they are displayed on the Video Window. The user can change the order by clicking on a characteristic and dragging it to a point above or below another characteristic within the same pass. When a characteristic is moved, all associated choices are moved within the characteristic and retain their same order. A characteristic can belong to only one pass.
  • the add characteristic button 71 has the action of creating a new characteristic and displaying the characteristic detail window illustrated in FIG. 14.
  • the delete characteristic/choice button 72 has the action of deleting what is selected in the characteristics outline and all indented items under it. For example, deleting a characteristic also deletes all of its associated choice values.
  • the print event type button 73 has the action of printing the event detail report.
  • the cancel/save event buttons 74 includes a save button which has the action of saving all the event type data and returning to the study detail window and a cancel button which has the action of returning to the study detail window without saving the changes in the event type data.
  • the characteristic detail window as illustrated in FIG. 14 is for entry of all attributes for a characteristic. This window is reached either through the study detail window illustrated in FIG. 9A or the event type detail window illustrated in FIG. 13 when either a characteristic is double-clicked in the outline or when a new characteristic is added.
  • the fields and controls of the characteristic detail window are described below.
  • the window title area 81 gives the name of this window which is "Characteristic Detail" .
  • the characteristic code area 82 is an enterable area for the code that identifies this characteristic when analysis data is created or exported.
  • the characteristic name area 83 is an enterable area for the name of the characteristic.
  • the coding instruction area 84 is a scrollable text area for entry of instructions on how to mark the characteristic. This text is available when help is selected on the Video Window.
  • the data entry type area 85 presents four options on how data is to be collected for this characteristic. This governs the behavior of the mark controls on the Video Window when values for this characteristic are recorded. The options are:
  • each choice has a choice value that is programmatically determined; the first choice value is 1, then 2, then 4, then 8, etc.
  • the data entry type can not be changed once a session has been created for the pass in which this characteristic appears, nor can choices be added, changed, or deleted.
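  • One plausible reading of the 1, 2, 4, 8 value scheme above (an assumption of this sketch, not stated in the specification) is that power-of-two values let any combination of selected choices be recorded as a single sum and recovered later:

    def assign_choice_values(choice_names: list) -> dict:
        # programmatically determined values: 1, 2, 4, 8, ...
        return {name: 1 << i for i, name in enumerate(choice_names)}

    values = assign_choice_values(["Succinct", "Vague", "Leading"])   # 1, 2, 4
    selection_code = values["Succinct"] + values["Leading"]           # 5
    assert selection_code & values["Leading"]    # membership is recoverable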
  • the choice list area 86 is a scrollable area that shows the choices associated with this characteristic. Choices can be added and deleted using add and delete choice buttons 87 and 88. Drag action allows the choices to be arranged in any order.
  • the add choice button 87 has the action of creating a new line in the characteristic outline for entry of a new choice.
  • the new line is enterable.
  • the delete choice button 88 has the action of deleting the selected choice after confirmation from the user.
  • the print characteristic button 89 has the action of printing the characteristic detail report.
  • the cancel/save characteristic buttons 90 can return to the study detail window or the event type detail window without saving the changes for cancel or with saving the changes for save.
  • the unit selection window is illustrated in FIG. 15.
  • the unit selection window allows specification of the units to be included in the study.
  • the window is presented when the specified unit button is clicked on the study detail window illustrated in FIG. 9A.
  • the "units selected for study" area 102 is filled with the units that have already been selected for the study. No units are displayed in the "unit list” area 97 unless the study is constrained to be a subset of another study. In this case this area is filled with all the units in the parent study.
  • the window title area 91 gives the name of this window which is "Unit Selection for Study: " followed by the name of the current study such as "Math Lessons” .
  • the "unit selection description” area 92 is a scrollable text area for a description of this selection of units. This is the same text as appears in the "unit selection description” area on the study detail window of FIG. 9A.
  • the "show all units" button 93 has action which depends on the constraint condition. If the unit selection is constrained to be a subset of another study, the button action is to display all the units specified for the parent study. Otherwise, the button action is to display all the units in the project in the video list.
  • the "find units" button 94 has the action of presenting a search enabling the user to search for video units that will be displayed in the units list 97.
  • the search allows search on any of the unit fields, using the user-defined custom fields.
  • the units found as a result of this search are displayed in the unit listing area 97. If the unit selection is not constrained to be a subset of another study, the find action is to search all the units in the project. If the unit selection is constrained to be a subset of another study, the find action is to limit search to the units specified for the parent study.
  • the "copy from study” button 95 has the action of presenting the "use units from other study” window, prompting the user to select a study.
  • the units from the selected study are displayed in the unit listing area 97. If the checkbox entitled “constrain to be a subset of the specified study” is checked on the "use units from other study” window, the constraint message 96 is displayed on the window.
  • Area 96 is the constraint message and checkbox. The message and checkbox only appear when a unit selection constraint is in effect. There are two possible values for the message:
  • If the unit selection was constrained to be a subset of another study, the message appears as "Constrained to be a subset of [study]". If the unit selection was constrained to include all the units in the project from the units menu on the study detail window of FIG. 9A, the message appears as "Include all units in the project".
  • the unit listing area 97 is a scrollable area which lists video units by unit ID and name. This area is filled by action of the "all", "find", and "copy study" buttons 93-95. Units in this area 97 are copied to the "units selected for study" area 102 by clicking and dragging. When a unit is dragged to the "units selected for study" list 102, the unit appears grayed in the list. Units are removed from this list by clicking and dragging to the remove video icon 101.
  • the clear unit listing button 98 has the action of clearing the contents of the unit listing area 97.
  • the copy to study button 99 has the action of copying the highlighted unit in the unit listing area 97 to the unit selected for study listing area 102.
  • the checkbox 100 entitled "Randomly select units and add to the study” has the action of creating a random sample if the checkbox is checked.
  • the action of checkbox 100 is to change the behavior of the copy to study button. When checked, the sample number area 100a becomes enterable. When unchecked, the sample number area is non-enterable. If checked, and a sample number is given, the copy to study button has the following action:
  • a random sample consisting of (sample number) units is selected from the unit listing area and added to the existing selection in the "units selected for study" listing area. Units in the unit listing area that already appear in the "units selected for study" listing area are ignored for purposes of creating the sample.
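  • An illustrative sketch of this random-sampling behavior (function and variable names are assumptions of this sketch):

    import random

    def copy_random_sample(unit_listing, selected_for_study, sample_number):
        # units already selected for the study are ignored when sampling
        candidates = [u for u in unit_listing if u not in selected_for_study]
        k = min(sample_number, len(candidates))
        selected_for_study.extend(random.sample(candidates, k))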
  • the "remove video from list” icon 101 is a drag destination.
  • the action is to remove videos from the list from which they were dragged. For example, if a video is dragged from the unit listing 97 to this icon, it is removed from the unit listing area. This is not the same action as deleting the unit.
  • the "units selected for study" area 102 is a scrollable area which lists units associated with the study. Units are listed by unit ID and name. Units can be added to this list from the unit list 97 by clicking and dragging.
  • the "clear units selected for study” button 103 has the action of clearing the contents of the "units selected for study” listing area 102 after confirmation with the user.
  • the print unit selection information button 104 has the action of printing the units in study detail report.
  • the cancel/save choice buttons 105 include the save button which saves the units selected for study selection and returns to the window that made the call to this window. The cancel button returns to the window that made the call without saving the changes.
  • the "use units from other study” window is illustrated in FIG. 16.
  • This window is used to fill the units list 97 of the window shown in FIG. 15 with all the units that belong to another study.
  • the window also contains a checkbox that imposes the constraint that only units from the selected study, the parent study, can be used in the current study.
  • This window is opened when the "copy from study" button 95 is clicked on the unit selection window shown in FIG. 15.
  • the "units from other study” window shown in FIG. 16 includes a number of fields and controls.
  • the study list area 111 is a scrollable area which contains all the studies in the project, except for studies constrained to "Include all units in project" and the current study.
  • the study description area 112 is a non-enterable scrollable area of text that contains the description of the highlighted study shown in area 111. This text is from the unit selection description area on the study detail window illustrated in FIG. 9A.
  • the button 113 labeled "Replace current selection in unit list” causes the action of displaying the units for the highlighted study in the unit list on the unit selection window.
  • the checkbox entitled “Constrain unit selection to be a subset of the specified study” 114 imposes a constraint on the study so that only units from the selected study in area 111 can be used for the current study. Action is to constrain the contents of the units listing area so it only contains the units specified for the selected study.
  • the button entitled "Add units to current selection in unit list" adds the units for the highlighted study to the current contents of the unit list on the unit selection window.
  • the purpose of the unit module is to include all the windows, controls and areas that are used to define video units, open and work with sessions, and manage sessions in the interactive video analysis system.
  • the unit module includes a unit list window.
  • the unit list window is illustrated in FIG. 17.
  • the unit list window presents all the units defined for the project. For example, this includes all the units in the database.
  • the unit list window displays the unit ID and the unit name.
  • nine custom unit fields may also appear. Double-clicking on a record presents the unit detail window.
  • the unit detail window is illustrated in FIG. 18.
  • the unit detail window is the primary window that contains all data needed to define a unit, including creation of the transcript. This window is presented when adding a new unit or when editing an existing unit.
  • the fields and controls of the unit detail window are described below:
  • the window title area 121 gives the name of this window which is "Unit Detail".
  • the unit name area 122 is an enterable area for the name of the unit. This must be a unique name. Internally, all attributes for the unit are associated with an internal unit identifier, so the unit name can be changed.
  • the unit ID area 123 is an enterable area for code to identify the unit.
  • the create date area 124 gives the date and time when the unit was initially created. This is a non-enterable area that is filled by the program when the unit record is created.
  • the description area 125 is a scrollable text area for entry of text to describe the unit. Nine custom unit fields are an optional feature. Each field is an enterable area for storing data up to 30 characters long. The field names are customizable.
  • the segment list area 126 is a scrollable area that displays all the segments for this unit. Each segment is displayed with its name (file name from the path), length (determined by reading the media on which the segment is recorded), start time (calculated using the sum of the previous segment length), and end time (calculated using the sum of the previous segment length plus the length of this segment).
  • the sequence number is by default the order in which the segments were created. The sequence number determines the order in which the segments are to be viewed. Order of the segments can be changed by dragging. Dragging is only supported for a new record. When a segment is moved by dragging, the start and end times of all other segments are recalculated.
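  • Because start and end times are running sums of segment lengths in sequence order, they can be recalculated after any add, delete, or drag, as in this illustrative sketch (names are assumptions of this sketch):

    def recalculate_times(segments):
        # segments: dicts with a "length" key, in sequence (viewing) order
        start = 0
        for seg in segments:
            seg["start"] = start                   # sum of previous lengths
            seg["end"] = start + seg["length"]     # plus this segment's length
            start = seg["end"]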
  • the add segment button 128 has the action of presenting the open-file dialog, prompting the user to select the volume and file that contains the segment.
  • the file name is entered as the segment name in the segment list 126.
  • the length is also determined and written into the segment list.
  • the first frame of the video is displayed in the video view area.
  • the delete segment button 129 has the action of prompting the user to confirm that the highlighted segment in the segment list 126 is to be deleted. Upon confirmation, the segment is deleted and the start and end times of all other segments are recalculated.
  • the study membership area 130 is a scrollable area that lists all studies in which this unit is included. Units are assigned to a study on a study detail window on FIG. 9A. When such an assignment is made, the study is included in this area 130.
  • the transcript information area 131 is a non-enterable area which displays the size of each transcript (the number of characters).
  • the import transcript button 132 prompts the user for which transcript to import, and then presents the standard open file dialog prompting the file name to import. When the file is selected, the file is imported using tab-delimited fields. The size of the transcript is written to the transcript size area 135.
  • the edit transcript button 133 opens the Video Window in transcribe mode so that the transcript can be edited.
  • the export transcript button 134 has the action of prompting the user for which transcript to export, then presents the standard new-file dialog prompting for the file name of the export file.
  • Navigation controls 138 operate as previously described.
  • the print button 137 has the action of printing the unit detail report.
  • the cancel/save unit buttons 136 include the save button which prompts the user for confirmation that the segment sequence is correct. After confirmation, the unit data is saved; otherwise the user is returned to the window. For an existing record, the save button action is to save the unit data and return to the unit list window of FIG. 17. If the cancel button is used, any changes to segments or to the transcript are rolled back after confirmation.
  • a video position indicator/control 139 has the same operation as the video position indicator/control of the Video Window. It indicates the relative position of the current frame in the segment.
Session Handling And Management
  • Coding a unit for a specific study takes place in a session.
  • When a user goes into code mode on the video window, a session must be opened that sets the coding parameters. The progress of coding can be tracked by monitoring the status of the sessions for a study.
  • the present invention includes various windows to open and close sessions during coding and management windows that give detailed session information. If "Code" is selected on the main button bar, and if the user has no other currently opened sessions, the user is prompted to open a session for the current study on the create session window. If the user has previous sessions that are still open, the resume session window is presented and the user may open an existing session or create a new session. After a session is active, the user may move freely back and forth between view mode and code mode on the video window. While in code mode, the user may open the session info window to display information about the session.
  • Session management is performed from the session listing window which is accessed by clicking session on the manage button bar. Double-clicking on a session in the session listing window opens the session detail window which provides information similar to the session info window.
  • the session info window presents information about the current session including the name of the current study, the number of this particular pass, with the total number of passes in the study, a text description of the study, information about the unit including the unit name and unit I.D., the number of the segments that make up the unit and the total length in hours, minutes, seconds and frames for the entire unit, including all segments. Additionally, the session info window gives information about the sample that is in effect for the current pass, a pass outline which contains all the indented event types, characteristics and choice values for the current pass, a print button and a button to close the window.
  • a session placemark saves a time code with the session so that when the session is resumed the video is automatically positioned at the placemark. This occurs when a session is ended without closing it.
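  • An illustrative sketch of the placemark behavior (the session dictionary and the video object's seek method are assumptions of this sketch):

    def suspend_session(session, current_time_code):
        # ending a session without closing it records a placemark
        session["placemark"] = current_time_code

    def resume_session(session, video):
        # on resume, the video is repositioned at the saved placemark
        video.seek(session.get("placemark", 0))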
  • the select a study window appears.
  • a select button chooses a selected study.
  • the session list window is opened, listing all the sessions for the selected study. Clicking on a record listed presents the session detail window.
  • the session detail window gives information about the session. The information includes the name of the study, the pass number for the particular session, along with the total number of passes defined for the study, the name of the video unit being coded, the unit I.D.
  • the session status such as "never opened," "opened," "reopened," and "closed," the name of the user who opened the session, the length of the unit in hours, minutes, and seconds, the total elapsed time in the code mode between when the unit was opened and closed, the number of events that have been coded in the session, and the number of characteristics recorded for event instances.
  • Sample information such as the sample method that is in effect for the pass in the session and the sample size created for the session is displayed.
  • the Video Window is used to: (i) play the video belonging to a unit; (ii) display, edit, and/or synchronize transcription text belonging to the unit; (iii) create event types and structure characteristics under them (for the default study only); (iv) mark and characterize event instances; (v) retrieve previously marked event instances for editing or viewing.
  • An "event instance" is the marked occurrence of a predefined event ("event type") within video or transcription text.
  • the video and/or text is associated with an event type and characteristic to create a specific instance.
  • the Video Window may be opened through one of several actions:
  • the window is opened to display a specified unit (including video and transcription text).
  • the Video Window supports three modes of operation: view mode, transcribe mode, and code mode.
  • Event instances are marked only in code mode. During the coding process, when an event instance is observed, the following steps are performed:
  • the event type listing displays all the event types that can be coded in a particular pass; no other event types may be coded.
  • Clicking the save instance button completes the marking of an instance.
  • the instance can only be edited by recalling it by clicking on it in the instance listing, editing it using the frame controls, selecting a different event type or characteristic values, and clicking save instance to save the updates.
  • the instance is sorted into the instance listing if the event type is checked and is displayed in a different color to distinguish it from previously created instances.
  • buttons that mark or change the In/Out Points and text selection are disabled.
  • the event type listing displays all the event types defined for the current study rather than for the current session and allows event types to be checked so instances for the event type are displayed in the instance listing. Characteristic values may be viewed for each instance, but not changed. If there is no current study, nothing appears in the event type listing.
  • initialization depends on the mode in which it is to be opened.
  • FIG. 19 includes a table of the palettes that may be opened over the Video Window.
  • the palettes include the sample palette, the outline palette, search results palette, and transcribe video loop palette.
  • the current video segment may be changed in a number of ways: (1) by selecting the segment switching buttons on the sides of the progress bar, (2) when the video plays to the end of the current segment, and (3) when an instance is clicked in the instance listing, or a time code is clicked in any palette that is not the current segment.
  • the path of the required segment is retrieved from the unit file. If the path does not exist because the segment is on a removable volume, the user is prompted to open the file containing the segment. If an invalid path is entered, an error is given and the user is prompted again. If cancel is clicked, the user is returned to the Video Window in the current segment.
  • the Video Window has five major areas: the title area, the video area, the mark area, the instance area, and the list area.
  • the title area is illustrated in FIG. 21 A.
  • the video area is illustrated in FIG. 21B and contains the video display area, play controls, relative position indicators, zoom and sound controls and drawing tools.
  • the mark area is illustrated in FIG. 21C and contains controls to Mark Instances, refine In and Out Points on the video, and save marked instances.
  • the instance area is illustrated in FIG. 21D and contains listings of event types, characteristic labels, characteristic choices, and events instances that have already been marked.
  • the list area contains the transcript text and controls to change the mode of operation.
  • the video position indicator/control 142 acts like a thermometer. As the video plays, the grey area moves from left to right, filling up the thermometer. It displays the relative position of the current frame in the current segment. At the end of the segment, the thermometer is completely filled with grey. Increments on the control indicate tenths of the segment. The end of the grey area can be dragged back and forth. When released, the action is to move the current video frame to the location in the video corresponding to the relative position of the control. The video resumes the current play condition. A small amount of grey is always displayed on the thermometer, even when the current frame is the first frame of the segment. This is so that the end of the grey can be picked up using the click and drag action even when the first frame of the video is the current location.
  • a subtitle area 143 displays the transcription text that corresponds to the video. Two lines of the text are displayed.
  • Button 144 is the zoom tool. The action is to zoom the selected area to fill the frame of the video display.
  • Button 145 is the unzoom tool which restores the video display to 1x magnification.
  • Button 146 is the volume control. Click action pops a thermometer used to control the volume.
  • Button 147 is the mute control. The button toggles the sound on or off.
  • Area 148 gives the current video frame.
  • Button 149 moves the video back five seconds.
  • Button 150 goes to the beginning of the current video segment and resumes the current play condition.
  • Button 151 is the pause button and button 152 is the play button.
  • Button 153 is the subtitle control which toggles the video subtitle through three modes: 1) Display subtitles from transcript one; 2) Display subtitles from transcript two; and 3) Display no subtitles.
  • Button 154 is the draw tool which enables drawing on the video display. The cursor becomes a pencil and drawing starts upon mouse down and continues as the mouse is moved until mouse up. The draw tool can only be selected when the video is paused.
  • Button 155 is the eraser tool which enables erasure of lines created using the draw tool.
  • Button 156 is the scissor tool which copies the currently displayed frame to the clipboard. Drawings made over the video using the draw tool are copied as well. The scissors tool can only be selected when the video is paused.
  • Button 157 is the frame advance which advances the video by one frame.
  • Button 158 is the open video dialogue which opens a window to display the video in a larger area.
  • the link control 159 controls the link between the video and transcript area. When "on" the video is linked with the transcript. In other words, when the video is moved the closest utterance is highlighted in the transcript area. When the link control button is "off," moving the video has no effect on the transcript area.
  • With respect to the mark area of the Video Window, reference is made to FIG. 21C and FIG. 22.
  • the action of the controls in the mark area is dependent on the current video mode (view, code, and transcribe).
  • the Mark In button 161 is disabled in the view mode.
  • In the code mode, the button action is to "grab" the time code of the current video frame regardless of the play condition and display it in the In Point area 162.
  • In the transcribe mode, the button action is to "grab" the time code of the current video frame regardless of play condition and display it in the In Point area 162 and in the time code area for the utterance in which the insertion point is positioned.
  • Button action is to overwrite any previous contents in the In Point area and the utterance time code area with the time code of the current video frame.
  • the In Point area 162 is a non-enterable area which displays the time code of the frame that is the beginning of the instance. This area is updated by one of five actions: (1) clicking the Mark In button in the code and transcribe modes such that the area gets the time code for the current frame; (2) manipulating the In Point frame control in the code and transcribe modes so that the area gets the time code for the current frame; (3) clicking an instance in the instance listing in the code and view modes for an event type that requires a video-in or exhaustive segmentation coding so that the area gets the In Point of the instance; (4) highlighting an utterance in the view and transcribe modes so the area gets the time code of the utterance; and (5) clicking an outline item on the outline palette so that the area gets the In Point of the outline item.
  • the In Point frame control button 163 has identical action in the code and transcribe modes. Control is disabled in the view mode. Control action is to incrementally move the video forwards or backwards a few frames to "fine tune" the In Point.
  • the Mark Out button 164 is enabled in code mode only.
  • the button action is exactly analogous to the Mark In button 161 , except the Out Point is set and displayed in the Out Point area 165.
  • the Out Point area 165 is a non-enterable area which displays the time code of the frame that is the end of the instance. If there is no Out Point for the instance, the area is blank. This area is updated by one of four actions: (1) clicking the Mark Out button in the code mode so that the area gets the time code for the current frame; (2) manipulating the Out Point frame control in the code mode so the area gets the time code for the current frame; (3) clicking an instance in the instance listing in the code and view modes for an event type that requires Video Out coding so that the area gets the Out Point of the instance or becomes a blank; and (4) highlighting an utterance in the view and transcribe modes so that the area becomes blank.
  • the Out Point frame control button 166 is only enabled in the code mode.
  • the control is analogous to the In Point frame control 163 except the Out
  • Point is adjusted.
  • the mark text button 167 is enabled only in the code mode.
  • the button action is to register the position of the highlighted text as the instance marking.
  • the button appearance changes to signify that text has been marked. Internally, the time code of the beginning of the utterance in which the highlighted text begins is retained, along with the position of the first and last characters of the highlighted text.
  • the event type listing area 170 is a scrollable area in which the action and contents depend on the mode.
  • the area is blank in the transcribe mode.
  • the scrollable area contains a row for each event type that can be coded in the current pass. Only event types that are listed here can be coded in a particular session. In code mode with the outline palette open, this area is blank. In view mode, the area contains a row for each event type defined in the study. If there is no current study, the area is blank.
  • the event type listing contains four columns.
  • the first column is the checkmark that indicates that instances of this event type are to be displayed in the instance listing area.
  • the second column is the unique event type code.
  • the third column is the event type name.
  • the fourth column is the event instance coding requirement. In both modes, if an event type is double-clicked the action is to place a checkmark next to it or to remove the checkmark.
  • the checkmark indicates that event instances with this event type are to be listed in the "previously marked instances” area. In the illustration the event type "Question Asked” is checked. All the instances of questions being asked in this unit are listed in the "previously marked instances” area.
  • clicking an event type has the action of refreshing the characteristic labels popup 171 to contain all the characteristics structured under the highlighted event type for the current pass.
  • In the view mode, the action is to refresh the characteristics label popup to contain all the characteristics structured under the highlighted event type in the study.
  • the characteristics labels area 171 is a popup that contains the labels of the characteristics structured under the highlighted event type.
  • the next/previous characteristic buttons 172 are a two-button cluster that have the action of selecting the next item in the characteristic label popup, or selecting the previous item in the popup.
  • the characteristic count area 173 is a non-enterable text display of the sequence number of the currently displayed characteristic label and the total number of characteristic labels for the current pass.
  • the characteristic value area 174 is either a scrollable area or an enterable area.
  • the clear button 175 has the action of clearing the In Point and Out Point areas and resetting the Mark In, Mark Out, and mark text buttons to normal display (for example, removing any reverse video).
  • the save instance button 176 only has action in the code mode and is disabled in the other modes.
  • the button name is "save instance” unless an event instance is selected in the event instance listing, in which case the button name is "save changes”.
  • the action of the button is to validate data entry.
  • An event type must be selected. All characteristics must be given values. All the points must be marked to satisfy the event instance coding rules for the selected event type.
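  • The save-instance validation above can be sketched as follows (illustrative only; field names are assumptions of this sketch):

    def validate_instance(instance, event_type, required_characteristics):
        if event_type is None:
            return "an event type must be selected"
        for ch in required_characteristics:
            if instance["values"].get(ch) is None:
                return f"characteristic {ch!r} must be given a value"
        if event_type["mark_video"] and instance.get("in_point") is None:
            return "a Video In point must be marked"
        if event_type.get("requires_out") and instance.get("out_point") is None:
            return "a Video Out point must be marked"
        if event_type["mark_text"] and instance.get("text_range") is None:
            return "text must be marked"
        return None   # all coding rules satisfied; the instance may be saved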
  • the event type help button 177 only applies to the code and view modes.
  • the action is to present a dialog containing the coding instruction for the highlighted event type.
  • the show/add event type button 178 is only visible in the code mode for the default study.
  • the action is to present a window to select one or more event types previously created for the default study to be included in the event type listing area.
  • a button on this window allows users to create a new event type for the default study.
  • the button is provided so that the user may select which of the previously defined event types for the default study are to be included in the event type listing area. This allows the user to select just those event types of immediate interest for addition to the listing.
  • the user also has the option of creating new event types for the default study using the event type detail window.
  • the edit event type button 179 is only visible in the code mode for the default study. The action of the button is to allow the user to edit the highlighted event type.
  • the remove/delete event type button 180 is only visible in the code mode for the default study. The action of the button is to prompt the user for whether the highlighted event type is to be removed from the event type listing or is to be deleted permanently with all its instances.
  • the instance area provides a listing of instances that have been marked for selected event types and controls to retrieve an instance for viewing or editing, to add instances to an outline, and delete instances. This area is active only in the code and view modes. The area is disabled in code mode when the outline window is open.
  • the instance listing area 181 is a scrollable area that contains all the instances marked in the current session for the event types that are checked in the event type listing. Each instance is listed with a time code and event type code. The meaning of the time code depends on the event type. If the video is marked, the In Point is displayed. If only text is marked, the time code of the beginning of the utterance is displayed. A symbol is placed after the time code to indicate that the time code corresponds to the video frame closest to the beginning of the utterance in the event of marked text. Clicking an instance moves the video to the beginning of the instance and resumes the playing condition.
  • the delete instance button 182 is enabled in the code mode only. The action of the button is to delete the highlighted instance.
  • the add to outline button 183 is enabled in the code and view modes only. Action is to add the instance to the current outline.
  • the return to In Point button 184 is enabled in the code and view modes only.
  • the action of the button is to move the video to the first frame of the highlighted event instance. The video resumes the prior play condition.
  • the pause button 185 is enabled in the code and view modes only. The action is to pause the video at the current frame.
  • the play to Out Point button 186 is enabled in the code and view modes only.
  • the action of the button is to play the video starting at the current frame and stop at the Out Point for the highlighted event instance.
  • the go to Out Point button 187 is enabled in the code and view modes only.
  • the action of the button is to move the video to three seconds before the Out Point of the highlighted event instance, play the video to the Out Point, and stop.
  • the transcribe mode has two operations: (i) transcribing the spoken words or actions on the video into text; and (ii) assigning time reference values to each of the utterances in the video.
  • the first operation transcribing video content into text, is largely accomplished by watching the video and entering text into the list area. This process is aided by the Transcribe- Video Loop palette.
  • the palette provides a control that enables the user to play a short segment of video over and over without touching any controls. The user sets the loop start point and end point. When the contents of the loop have been successfully transcribed, a "leap" button moves the loop to the next increment of video.
  • the list manager is used to display and work with the text associated with the video.
  • this text is the transcription of what is being said in the video, though the text may actually be anything - observations about the video, translation, etc.
  • the text is referred to as the 'transcription' or transcript.
  • each speaker takes turns speaking; the transcription of each turn is an 'utterance'; e.g. an utterance is the transcription of one speaker's turn at speech.
  • Utterances are records in the database; each utterance has a time reference value (In point), two transcription text fields, and a speaker field.
  • the area on the screen that the list manager controls is called the 'List Area' .
  • the List Area is shown in FIG. 24. It is the right side of the Video Window of FIG. 20.
  • the list manager gets its name because it is not a conventional text area; it displays text from utterance records in the transcript so that the text looks like a contiguous block. Actions on the text block update the utterance records.
  • Each utterance is associated with a time reference value that synchronizes it with the video; an In point is marked that identifies where the utterance begins in the video. (Note: there is no Out point associated with an utterance; the out point is assumed to be the In point of the next consecutive utterance.)
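  • Since an utterance carries only an In point, the utterance playing at any video time code can be found by binary search over the sorted In points, as in this illustrative sketch (names are assumptions of this sketch):

    import bisect

    def utterance_index_at(in_points, time_code):
        # in_points: sorted utterance In points; an utterance's Out point is
        # implicitly the next utterance's In point
        i = bisect.bisect_right(in_points, time_code) - 1
        return max(i, 0)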
  • Each utterance is also associated with a speaker.
  • Utterances in the list area are always displayed in the order as entered or specified (in case of an insertion) by the user.
  • the list area supports three modes of operation: View Mode, Transcribe Mode and Code Mode.
  • the area behaves differently in each of the three modes. For instance, the action of clicking in the area to create an insertion point is the same in all three modes, but a subsequent action of typing characters would have the effect of inserting characters into the text area only in Transcribe mode; it would have no effect at all in View mode or Code mode.
  • the List Area in View Mode displays the transcript text next to the video. Clicking on the text has the action of moving the video to the point closest to the utterance. Moving the video using other controls on the Video Window has the effect of highlighting the utterance closest to the video.
  • the text can not be changed in any manner, nor may the time reference values associated with it be changed.
  • View mode affects the other controls on the Video window as well: new event instances can not be marked or edited, and characteristic values can not be recorded or changed.
  • the purpose of the Transcribe Mode is to allow text entry and edit, and to provide controls for marking the text to synchronize it with the video.
  • the marking process is limited to marking the video In point for each utterance; event instances can not be marked or edited, and characteristic values can not be recorded or changed.
  • Code Mode The purpose of Code Mode is to mark event instances and enter characteristic values.
  • the coding process typically starts only after the entire Unit is transcribed and time reference values are associated with every utterance, as the time reference value is used during coding.
  • the list area has a header area 191 with the mode icons 195.
  • the time column 192 displays the time reference value associated with each utterance. This is the point on the video that was marked to correspond with the beginning of the utterance (e.g. the time reference value is the In point for when this utterance is made in the video). If the utterance has not been marked, the time reference value is displayed as 00:00:00.
  • the speaker column 193 identifies the speaker.
  • the transcript 1 column 194 displays the text of the first transcript. This area is enterable in the Transcribe Mode.
  • Area splitter 196 allows the user to split the transcript text area into two halves so that a second transcript is displayed. This is shown in FIG. 25.
  • a video may span more than one media unit (disk, tape, etc.); each media unit corresponds to a segment. Segment boundaries are identified in the list area as a heavy horizontal line that goes across all four columns.
  • the click action is to move the video to the time reference value of the utterance, or of the closest previous utterance that has a time reference value.
  • the text is fully editable and selectable.
  • all key actions have the effect of highlighting the entire utterance in the Code Mode or navigating between highlighted utterances.
  • In the Code Mode, instances are marked. A full set of actions is supported to select text so it can be marked. Highlighted text can not be changed.
  • the list area is updated to scroll to the marked utterance, and highlight the marked selection within the utterance.
  • the list area is updated to scroll to the closest utterance, and highlight the utterance.
  • the list area is updated to scroll to the closest utterance to the current video frame and highlight the utterance.
  • the Video Window menubar contains commands for Find and Find Again. The effect on the list area is identical for each of these commands.
  • the user is prompted for a text value and/or speaker name; the list manager searches for the next instance starting at the current insertion point position.
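  • An illustrative sketch of the Find/Find Again search over utterance records (names are assumptions of this sketch):

    def find_next(utterances, start_index, text=None, speaker=None):
        # search forward from the current insertion point
        for i in range(start_index, len(utterances)):
            u = utterances[i]
            if text is not None and text not in u["text"]:
                continue
            if speaker is not None and u["speaker"] != speaker:
                continue
            return i
        return None   # no further match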
  • each utterance is marked to identify the time reference value on the video to which it belongs.
  • the Mark In button and controls are enabled to allow exact video positioning of the In point of each utterance.
  • the list area tracks the current insertion position and/or highlight range in Code Mode: the utterance ID, time reference value, and character offset is available to the mark controls so the exact insertion point position or highlight range can be recorded with the instance.
  • the outline presentation feature allows the user to select and structure the video and transcript text from event instances.
  • the intended use of this feature is to prepare presentations that include selected instances.
  • the outline palette for the current outline is opened when Show Outline is requested anywhere. If no current outline is active, the user is prompted to select one by the Select An Outline window shown in FIG. 26. It displays outlines that have been created. The author of each outline is displayed in the scrollable area. The user may select an outline, or push the plus button to create a new outline. The minus button deletes the selected outline if the user is the author.
  • the outline description window is displayed when an outline is created. It has two enterable areas as shown in FIG. 27: the outline name and the description.
  • the outline palette is shown in FIG. 28.
  • Event instances dragged to the Outline icon on the Video Window of FIG. 20 become part of the current outline. If there is no current outline, the user is prompted to specify one, or create a new one. The current outline remains in effect until a different outline is selected.
  • When the event instance is dropped on the Outline icon, the Outline Item window, shown in FIG. 29, is opened to prompt the user for a description of the item.
  • the Outline Item window displays all the headers for the current outline (in the same order as specified in the outline) so a header for the item can be specified as an optional step.
  • If an outline header is specified, the item is added as the last item under the header. If no outline header is specified, the item is added as the first item in the orphan area.
  • an Outline Item is created from the unit, time reference value, event type, and text selection of the instance.
  • the outline item is completely independent of the instance.
  • the outline item may be edited for In/Out point, text selection, or deleted entirely, without affecting the event instance, and vice-versa.
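  • The independence of outline items can be modeled by copying the instance data rather than referencing it, as in this illustrative sketch (field names are assumptions of this sketch):

    def make_outline_item(instance, header=None):
        # values are copied, not referenced, so later edits to the outline
        # item never affect the event instance, and vice-versa
        return {
            "unit": instance["unit"],
            "in_point": instance["in_point"],
            "out_point": instance.get("out_point"),
            "event_type": instance["event_type"],
            "text_selection": instance.get("text_selection"),
            "header": header,   # None places the item in the orphan area
        }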
  • After outline items have been created, they can be structured in the Outline Palette. Outline items can be structured under and moved between headers, and the order of headers can be changed. Once the outline is complete, it can be printed and the video portion can be exported to an MPEG file.
  • When the outline palette is active, it can be used to control the video display. Clicking an outline item moves the video to the associated time reference value.
  • the outline item's time reference value can be edited for In and Out points.
  • the outline item's transcript marking may also be edited.
  • Outline items retain their association (by time reference value) with the corresponding utterances in Transcript 1 and Transcript 2. The user may specify whether these are to be printed with the outline.
  • the outline area 200 is a scrollable area that contains all outline items and outline headers. Outline items are indented under outline headers. Drag and drop action is supported in this area to allow headers and outline items to be moved freely through the outline area. Outline headers appear in bold and are numbered with whole numbers. When a header is moved, the outline items move with it. A header may be clicked and dragged anywhere in the outline area. Outline items appear in plain text and are numbered with decimal numbers that begin with the header number. Outline items appear with the event code that went along with the event instance from which the item was created. Items may be clicked and dragged anywhere in the outline area - under the same header, under a different header, or to the orphan area.
  • If an item is clicked, the video in the video window is moved to the In point of the outline item, the utterance closest to the current video frame is highlighted, and the current play condition is resumed. If the outline item points to video from a unit or segment not currently mounted, the user is prompted to insert it.
  • the In and Out points of the outline item appear in the Mark controls.
  • the Mark controls are enabled when the Outline window is displayed, so the In and/or Out points of the outline item can be edited. This has no effect whatsoever on the instance from which the outline item was created. If an item is not associated with a header, it is displayed at the top of the outline area 200a and is called an "orphan".
  • the study area 201 displays the study from which the event instance was taken to create the highlighted outline item.
  • the unit area 202 displays the name of the video unit associated with the highlighted outline item.
  • the In point area 203 displays the In point of the video associated with the highlighted outline item.
  • the duration area 204 displays the duration of the video associated with the outline item.
  • the Play Outline button 205 plays the video starting at the In point of the first outline item and continues playing each outline item in its order of appearance in the outline. Play stops at the Out point of the last outline item.
  • the system supports the creation of a new MPEG file based on the instances that have been moved into an outline. That is, given marked video in and video out points, the system can create a new MPEG file which contains only the marked video content.
  • the new MPEG file also contains the relevant additional information such as transcript text, and derivative information such as event, characteristic and instance information.
  • the exported MPEG file is viewable, for example in the LAVA MPEG viewer made by LAVA, L.L.C.
  • not only is the MPEG file viewable, but all of the relevant additional and derivative information, such as the transcript text and event, characteristic and instance information, is viewable and accessible for random positioning, searching, subtitling and manipulation.
  • two types of output can be produced from an outline: a printed Outline Report, and an MPEG file containing the video from the outline items in the order specified on the outline. A sketch of the export step appears below.
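(The patent does not disclose the export mechanics. The following is a minimal modern sketch in Python that shells out to ffmpeg, which is not part of the disclosed system; time reference values are assumed to be in seconds, and each outline item is cut to one numbered clip, to be concatenated in outline order.)

    import subprocess

    def export_outline(items, source, out_prefix):
        """Cut each outline item's marked span from the source video."""
        # items: (t_in, t_out) pairs in the order they appear in the outline
        for i, (t_in, t_out) in enumerate(items):
            subprocess.run([
                "ffmpeg", "-y",
                "-ss", str(t_in),          # seek to the item's In point
                "-t", str(t_out - t_in),   # keep only the marked duration
                "-i", source,
                "-c", "copy",              # copy streams without re-encoding
                "%s_%03d.mpg" % (out_prefix, i),
            ], check=True)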
  • Sampling is the creation of a specific subset of video that can be used for new instance hunting, or of a specific subset of event instances that can be characterized. There are five methods for creating samples.
  • the sample method is specified on the sample definition window and displayed on the study definition window for each coding pass.
  • the samples are presented to the coder in the Sample Palette so they can be visited one by one.
  • the samples are saved in the database so they can be retrieved into the Sample Palette anytime.
  • FIG. 30 shows the sample definition window. Area 210 permits a choice of sampling method.
  • the first sample method means that no samples will be created; the coder can use all the video in the search for event instances.
  • the Fractional Event Sample method means the sample is to be created from a specified percentage of the total event instances that occur in the Unit and belong to the event selected in the "Specify Event" area. The default value for the percentage is 100%. An event must be selected from the "Specify Event" popup if this sample method is chosen.
  • the Quantitative Event Sample method means the sample is to be created from a specified number of the event instances that occur in the Unit and belong to the event selected in the "Specify Event" area. An event must be selected from the "Specify Event" popup if this sample method is chosen.
  • the Random Time Sample method means the sample is to be created from a specified number of video clips from the Unit, each with a specified duration. Two parameters are required for this option: the number of samples to be created from the Unit, and the duration in seconds of each sample.
  • the number of clips refers to the entire video, not to each event.
  • sample periods may not overlap.
  • sample periods may not span from one instance to another; i.e. each sample period must be wholly contained within a single event instance.
  • the Proportional Random Time Sample method means the sample is to be created from randomly selected video clips of a given duration from the Unit.
  • the number of samples is given in terms of "samples per minute of video". Three parameters are required for this option: the number of samples desired, the interval of time over which the samples are to be chosen, and the duration in seconds of each sample. A sketch of this sampling procedure appears below.
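(The patent does not specify how non-overlapping clips are placed. The following is a minimal sketch of one standard placement scheme in Python, assuming the video length and clip duration are given in seconds.)

    import random

    def random_time_sample(video_len, n_samples, dur, seed=None):
        """Draw n non-overlapping clips of dur seconds from [0, video_len)."""
        rng = random.Random(seed)
        slack = video_len - n_samples * dur
        if slack < 0:
            raise ValueError("clips cannot fit without overlapping")
        # Sort random gap offsets, then lay the clips end to end after them;
        # this guarantees the sample periods never overlap.
        offsets = sorted(rng.uniform(0, slack) for _ in range(n_samples))
        return [(off + i * dur, off + i * dur + dur)
                for i, off in enumerate(offsets)]

For example, random_time_sample(600, 5, 10) returns five non-overlapping 10-second spans from a 10-minute unit; for the proportional method, n_samples would be derived from the requested samples per minute multiplied by the chosen interval.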
  • the event filter area 219 allows restriction of the selection of event instances or time samples to periods within another event type.
  • time samples restrict the creation of new instances to the sample periods, according to the Event Coding Constraint specified in the sample definition.
  • instance samples allow the retrieval of selected instances, typically for characterization.
  • a time sample is created by specifying one of the time sample methods (Random Time Sample or Proportional Random Time Sample).
  • An instance sample is created by specifying one of the event sample methods (Fractional Event Sample or Quantitative Event Sample).
  • the instance listing on the Video window limits the display of existing instances to only instances with an In point within the time period of the highlighted sample in the Sample Palette. For example, if five samples are listed in the Sample Palette and one is highlighted, only event instances with an In point within the time period (e.g. from Video In to Video Out) of the highlighted sample would be listed in the instance listing (subject to the other controls that specify what event types are to be displayed in the instance listing).
  • the sample palette is shown in FIG. 31.
  • Checkmarks 223 next to the sample list area 224 may be set.
  • the sample list area contains an entry for each sample with time reference values for the In point and Out point of the sample.
  • FIG. 32 is the sample information window, which is opened by choosing the Show Sample Info button 222 on the sample palette.
  • the event filter area is a non-enterable scrollable area that contains text describing the event filter in effect for the current pass.
  • the illustration shows the format for how the filter is to be described; it follows the same conventions as the "Within" area in the Sample Definition Window.
  • the analysis module is used to gather statistics about event instances across video units.
  • the module provides functions for defining variables, searching for and retrieving information about event instances, displaying the results, and exporting the data. Typically the results of an analysis will be exported for further analysis in a statistical program.
  • a window requests the user to designate a unit analysis or an instance analysis.
  • the analysis module allows the user to produce statistical information about event instances on either a Unit by Unit basis or an Instance by Instance basis.
  • the results can be displayed or exported for further analysis.
  • Unit analysis aggregates information about the instances found in a unit and returns statistics such as count, mean, and standard deviation about the event instances found in the unit.
  • Event Instance analysis returns characteristic values directly for each instance found in the units included in the analysis.
  • FIG. 33 shows the unit analysis window.
  • Area 232 is the analysis description area.
  • Area 236 is the variable definition area. There are four columns: the sequence number, the variable description, the short variable name, and the statistic that will be calculated for the variable, such as count, mean, or SD (standard deviation). Variables may be dragged to change their order, and may be added or deleted.
  • the execute analysis button 242 executes the analysis.
  • the analysis results area 243 has a column for each variable defined in variable listing area 237 and a row for each unit in the analysis.
  • a unit variable may be added and defined.
  • the unit value will be returned for each unit in the analysis.
  • An event variable may be added and defined.
  • a calculated value will be returned for each unit in the analysis.
  • the calculated variable is a statistic about instances matching a description.
  • FIG. 34 shows the define unit variable window and
  • FIG. 35 shows the define event variable window.
  • the event criteria area 255 specifies event instances to be found for analysis. Event instances are found for the event type in area 254 that occur within other instances and/or have specific characteristic values.
  • Area 256 sets additional criteria.
  • the event variable is calculated using the attribute designated in area 257.
  • Area 258 indicates the calculation to perform (mean, count instances, total, standard deviation, total number, sum, minimum, maximum, range, or count before/after for exhaustive segmentation). A sketch of such an aggregation appears below.
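(A minimal sketch of such an aggregation for one event variable, in Python; the dict-based instance records and attribute name are illustrative, not the patent's storage format.)

    from statistics import mean, stdev

    def unit_statistics(instances, attribute):
        """Aggregate one numeric characteristic over the instances in a unit."""
        values = [inst[attribute] for inst in instances if attribute in inst]
        return {
            "count": len(values),
            "mean": mean(values) if values else None,
            "sd": stdev(values) if len(values) > 1 else None,
            "min": min(values) if values else None,
            "max": max(values) if values else None,
        }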
  • FIG. 36 illustrates the instance analysis window.
  • Area 262 describes the analysis.
  • Area 264 specifies the event type and is analogous to the define event variable window of FIG. 35 for unit analysis.
  • Area 265 is the variable listing area. It has four columns. The first three are the same as for unit analysis. The fourth column is "origin". The origin is
  • Variables may be added and deleted.
  • Area 269 gives the analysis results with a column for each variable in variable listing area 265 and a row for each event instance in the analysis.
  • FIG. 37 is the define analysis variable window.
  • the search module is used to perform ad-hoc searches for text or event instances, display the results, and allow the results to be used to control the Video Window.
  • the Search Module allows the user to search for text or event instances across multiple video units.
  • the results can be displayed in a palette over the Video Window so each "find" can be viewed.
  • the Search Window is designed to allow multiple iterative searches. Each search can begin with the results of the previous search: the new search results can be added to or subtracted from the previous search results.
  • there are two types of searches: searches for text strings within the transcript text ("Text Search"), and searches for event instances that match a given event type and other criteria ("Instance Search"). Each search has its own window, but most of the controls in each window are identical.
  • the search module is accessed from the main button bar for a text search or an instance search.
  • FIG. 38 is a search window with features common to text and instance searches.
  • Area 271 indicates if it is a text or instance search.
  • Area 272 shows the relationship to a previous search.
  • Area 277 designates units to search.
  • Area 281 specifies what is being searched for: the event instance or word or phrase. Multiple criteria may be set to identify the characteristic or position.
  • Button 282 executes the search.
  • Area 283 lists the results. Button 284 will add the result to an outline.
  • Area 285 gives the instance count.
  • when the search within a study button is selected on the search window, a unit selection for search window permits the user to select individual units within a study to limit the search.
  • a results palette permits the search results to be examined and there is a checkmark that may be set for each result.
  • FIG. 39 shows the event instance search window.
  • a search may be done for an event type occurring within another event type where a particular characteristic has a valid characteristic value.
  • search results may be saved as a "Saved Search" event type. This provides several capabilities:
  • Characteristic values can be applied to the event instances, and a later pass can be created to record other characteristics.
  • FIG. 40 is the text search window.
  • the text search can search the text of multiple units of video. It finds all instances of a word or phrase.
  • the search term is input in area 291.
  • the speaker is input in area 292.
  • Area 293 indicates which transcripts are searched.
  • Area 294 permits searching text within an event type having a given characteristic and a selected characteristic choice. A sketch of such a transcript search appears below.
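(A minimal sketch of the transcript search in Python; the utterance records, with a speaker, text, and an In-point time reference, are illustrative.)

    def text_search(units, term, speaker=None):
        """Find every utterance containing term across multiple units."""
        term = term.lower()
        hits = []
        for unit, utterances in units.items():
            for utt in utterances:
                if speaker and utt["speaker"] != speaker:
                    continue
                if term in utt["text"].lower():
                    # Each hit keeps its time reference value so the
                    # Video Window can be positioned to the utterance.
                    hits.append((unit, utt["t_in"], utt["text"]))
        return hits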
  • the study listing report lists studies in the current selection, sorted in the current sort order.
  • the study detail report details one study, giving all details about it.
  • the event detail report details one event type belonging to a study, giving all details about it.
  • the characteristic detail report details one characteristic belonging to the study, giving all details about it.
  • the units in study detail report lists all the units that have been selected for a single study.
  • the unit listing report lists all units in the current selection, sorted in the current sort order.
  • the unit detail report gives all details about a unit.
  • the session listing report prints the contents of the current session list window.
  • the session detail report prints the contents of the current session detail window.
  • the user listing report lists all users in the current selection, sorted in the current sort order.
  • the user detail report details one user.
  • the system settings report prints all the system settings.
  • the outline report is printed from the outline palette.
  • the search report gives results of an event instance search or a text search.
  • the search criteria report gives the search criteria.
  • the analysis results report prints the data created for the analysis that is displayed.
  • the analysis variable definition report prints the description of all the variables defined in the analysis.
  • the sample detail report describes the sample and lists the time reference values in the sample.

Abstract

A digital video system having a coding and control station adapted to receive digital reference video information (3) and to code the digital reference video information to generate coded data (9), and a coded data store (2) for storing the coded data. The coded data may include time reference data, attribute data, multiple transcript data, annotations or static documents. Subtitling is performed with ease. Video and transcripts of audio are displayed simultaneously. Slow motion and fast action are permitted. Search and analysis of video and text are available.

Description

DIGITAL VIDEO SYSTEM HAVING A DATA BASE OF CODED DATA FOR DIGITAL AUDIO AND VIDEO INFORMATION
FIELD OF THE INVENTION
This invention relates to a digital video system and method for manipulating digital video information.
BACKGROUND OF THE INVENTION
Recent advances in computer hardware and software are advancing the digital revolution by making it possible to economically store video, in a digital format, on a personal computer. The software applications that use this technology are only now beginning to emerge, but they are expected to become as commonplace in the next decade as spreadsheet and text manipulation programs are today. In June of 1995, the Multimedia PC Working Group and the SPA, an influential organization comprised of leading PC software publishers, approved the new multimedia PC standard platform. It employs MPEG, a video compression industry standard that allows for the efficient storage and VHS-quality playback of digital video on a personal computer. The use of digital audio and video information has the advantage of nearly instant access to any point of information without a time delay. Popular digital video applications include multimedia publishing and digital video editing. Multimedia publishing includes desktop delivery of games, reference materials, computer-based training courses, advertising, interactive music CDs and electronic books. The user can view and browse the digital information. Digital video editing is used to edit video and audio clips.
U.S. Patent No. 5,467,288 to Fasciano et al. issued November 14, 1995, is directed to a digital audio workstation for the audio portions of video programs. The Fasciano workstation combines audio editing capability with the ability to immediately display video images associated with the audio program. An operator's indication of a point or segment of audio information is detected and used to retrieve and display the video images that correspond to the indicated audio programming. The workstation includes a labeling and notation system for recording digitized audio or video information. It provides a means for storing in association with a particular point of the audio or video information, a digitized voice or textual message for later reference regarding that information.
U.S. Patent No. 5,045,940 to Peters et al. issued September 3, 1991 is directed to a data pipeline system which synchronizes the display of digitized audio and video data regardless of the speed with which the data was recorded on its linear medium. The video data is played at a constant speed, synchronized by the audio speed. The above systems do not provide for the need to analyze, index, annotate, store and retrieve large amounts of video information. They cannot support an unlimited quantity of video. They do not permit a transcript to be displayed simultaneously with video or permit ease of subtitling. Subtitling is a painstaking and labor intensive process for the film industry and an impediment to entry into foreign markets. These systems do not permit searches of video or text for words or events or permit real time coding of video. Additionally, these systems do not permit changing the time domain during which video is displayed. They do not permit viewing video clips sequentially in the form of a presentation. They do not have an alarm feature which can designate the time to perform a system action.
OBJECTS OF THE INVENTION
It is therefore an object of the present invention to overcome the above-mentioned shortcomings of the background art.
It is another object of the present invention to provide an interactive digital video system which can accommodate an unlimited quantity of video information.
It is yet another object of the present invention to provide a digital video system relating digitized video information to digitized audio information.
It is an additional object of the present invention to provide a digital video system relating digitized video information to additional information such as a transcript, annotations, scanned documents or exhibits, or waveforms from an oscilloscope.
It is still an additional object of the present invention to provide a digital video system which permits ease of subtitling.
It is a further object of the present invention to provide a digital video system which permits multiple subtitles, for example in different languages.
It is still another object of the present invention to provide a digital system which permits analysis of video and audio information.
It is yet a further object of the present invention to provide a digital video system which permits searches of video information for an event or word or phrase.
It is an additional object of the present invention to permit viewing video information and a textual transcript of audio information simultaneously.
It is yet an additional object of the present invention to provide a digital video system that permits searches of a transcript of the audio information.
It is one more object of the present invention to provide a digital video system that provides multiple transcripts.
It is still a further object of the present invention to provide a digital video system that permits real time coding of video.
It is an object of the present invention to provide a digital video system which permits changing the time domain during which video is played from slow motion to real time to fast motion.
It is another object of the present invention to provide a digital video system which has an alarm feature.
It is yet another object of the present invention to provide a digital video system which has a presentation mode.
In order to achieve these and other objects of the invention, there is provided a digital video system comprising coding and control means, adapted to receive digital reference video information, for coding the digital reference video information to generate coded data; and coded data storing means for storing the coded data from the coding and control means.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention may be better understood and further advantages and uses thereof more readily apparent, when considered in view of the following detailed description of exemplary embodiments, taken with the accompanying drawings in which:
FIG. 1 A is a functional block diagram of a preferred embodiment of the present invention;
FIG. 1B is a functional block diagram of the coding and control means shown in FIG. 1A;
FIG. 1C is a chart showing the structure of the coded data store of FIG. 1A for indexing data;
FIG. 1D is a software flowchart of the preferred embodiment of the present invention;
FIG. 1E is a map of time reference information;
FIG. 2A is a drawing of the main button bar of the present invention;
FIG. 2B is a diagram of the manager button bar of the present invention;
FIG. 2C is a diagram of the application tool bar of the present invention;
FIG. 3 is a diagram of the user list window of the user module of the present invention;
FIG. 4 is a diagram of the user detail window of the user module of the present invention;
FIG. 5 is a table showing the coding and transcription rights of the user detail window of the user module of the present invention;
FIG. 6 is a table of the system management rights of the user detail window of the user module of the present invention;
FIG. 7 is a diagram of the module sub-menu of the study module of the present invention;
FIG. 8 is a diagram of the study list window of the study module of the present invention;
FIGS. 9A and 9B are diagrams of the study detail window of the study module of the present invention;
FIG. 10A is a diagram of the study outline of the study detail window of the study module of the present invention before dragging a characteristic;
FIG. 10B is a diagram of the study outline of the study detail window of the study module of the present invention after dragging a characteristic;
FIG. 11 is a diagram of the select an event/sampling method choice menu for creating a new event type and opening the event type detail window of the present invention;
FIG. 12 is a diagram illustrating creating a new pass in the study outline of the study detail window of the study module of the present invention;
FIG. 13 is a diagram of the event type detail window of the study module of the present invention;
FIG. 14 is a diagram of the characteristic detail window of the study module of the present invention;
FIG. 15 is a diagram of the unit selection window of the study module of the present invention;
FIG. 16 is a diagram of the use units from other study window of the study module of the present invention;
FIG. 17 is a diagram of the unit list window of the unit module of the present invention;
FIG. 18 is a diagram of the unit detail window of the unit module of the present invention;
FIG. 19 is a table of the palettes which may be opened over the video window of the present invention;
FIG. 20 is a diagram of the video window of the present invention;
FIG. 21A is a diagram of the title area of the video window of the present invention;
FIG. 21B is a diagram of the video area of the video window of the present invention;
FIG. 21C is a diagram of the mark area of the video window of the present invention;
FIG. 21D is a diagram of the instance area of the video window of the present invention;
FIG. 22 is a diagram of the mark area of the video window of the present invention;
FIG. 23 is a diagram of the instance area of the video window of the present invention;
FIG. 24 is a diagram of the List Area of the video window;
FIG. 25 is a diagram of the List Area with two transcripts displayed;
FIG. 26 is a diagram of the Select an Outline window;
FIG. 27 is a diagram of the outline description window;
FIG. 28 is a diagram of the outline palette;
FIG. 29 is a diagram of the outline item window;
FIG. 30 is a diagram of the sample definition window;
FIG. 31 is a diagram of the sample palette;
FIG. 32 is a diagram of the sample information window;
FIG. 33 is a diagram of the unit analysis window;
FIG. 34 is a diagram of the define unit variable window;
FIG. 35 is a diagram of the define event variable window;
FIG. 36 is a diagram of the instance analysis window;
FIG. 37 is a diagram of the define analysis variable window;
FIG. 38 is a diagram of the search window contents common for text and event instance searches;
FIG. 39 is a diagram of the event instance search window; and
FIG. 40 is a diagram of the text search window.
DETAILED DESCRIPTION OF THE INVENTION
With reference to FIG. 1A, there is shown a digital video system in accordance with the preferred embodiment of the present invention including coding and control means 1 for coding digital reference video information and generating coded data and coded data store 2 for storing the coded data from the coding and control means. The coding and control means 1 is adapted to receive digital reference video information from video reference source 3. The coding and control means 1 is connected via a databus 5 to the coded data store 2. The coding and control means 1 includes a general multipurpose computer which operates in accordance with an operations program and an applications program. Also illustrated in FIG. 1A is an output 6 which may be a display connected to an input/output interface.
The video reference source 3 may be a video cassette recorder such as a SONY model EV-9850. The coding and control means 1 may be an Apple Macintosh 8500/132 computer system. The coded data store 2 may be a hard disk such as a Quantum XP32150 and a CD-ROM drive such as a SONY CPU 75.5-25. The output 6 may be a display monitor such as an Apple Multiple Scan 17 M2A94.
As illustrated in FIG. 1A video information from a video reference source 3 may be digitized by digital encoder 9 and compressed by compressor 10. The digital video information may be stored in digital storage means 11. Alternatively, if the video information is already digitized, it may be directly stored in digital storage means 11. Digital video information from digital storage means 11 may be decoded and decompressed by decode/decompression means 12 and input to the coding and control means 1. The video reference source 3 may be an analog video tape, a camera, or a video broadcast. The coding and control means 1 may generate coded data automatically, or by interactive operation with a user, by interactive operation with a user in real time, or semi-automatically. For semiautomatic control, the user inputs parameters. When the only source of information is video information, the coding and control means performs the function of indexing only. Indexing is the process through which derivative information is added to the reference video information or stored separately. This derivative information provides the ability to encode instances of events and/or conduct searches based on event criteria.
Terms
"Reference" information is video or audio information such as a video tape and its corresponding audio sound track.
"Derivative" information is information generated during the coding process such as indices of events in the video, attributes, characteristics, choices, selected choices and time reference values associated with the above. Derivative information also includes linking data generated during the coding process which includes time reference values, and unit and segment designations.
"Additional" information is information that is input to the video system in addition to reference information. It includes digital or analog information such as a transcript of audio reference information, notes, annotations, a static picture, graphics, a document such as an exhibit, or input from an oscilloscope.
The coding and control means 1 may be used interactively by a user to mark the start point of a video clip, and a time reference value representing a mark in point is generated as coded data and stored in the coded data store 2. Further, the user may optionally interactively mark the end point of a video clip, and a time reference value representing the mark out point is generated as coded data and stored in the coded data store 2. The user may interactively mark an event type in one pass through the digital reference video information. The user may make plural passes through the reference video information to mark plural event types. The mark in and mark out points are stored in indices for event types.
The coded data that is added may be codes of data that are transparent to a standard player of video but which are capable of interpretation by a modified player. The coded data may be a time reference value indicating the unit of digital reference video information. Additionally, the coded data may be a time reference value indicating the segment within a unit of digital reference video information. Thus, unlimited quantities of digital reference video information may be identified and accessed with the added codes. There may be more than one source of reference video information in the invention.
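(The patent does not give a concrete record layout. The following is a minimal sketch of the interactive marking step in Python; the field names are illustrative, and time reference values are shown in seconds although the system's values are normalized and unitless.)

    from typing import List, Optional

    class Instance:
        """One event instance: an In point and an optional Out point."""
        def __init__(self, event_type: str, unit: str, mark_in: float):
            self.event_type = event_type
            self.unit = unit
            self.mark_in = mark_in                 # time reference value, In point
            self.mark_out: Optional[float] = None  # Out point is optional

    index: List[Instance] = []                     # index of marked instances

    def mark_in_point(event_type: str, unit: str, t: float) -> Instance:
        inst = Instance(event_type, unit, t)
        index.append(inst)                         # stored in the coded data store
        return inst

    def mark_out_point(inst: Instance, t: float) -> None:
        inst.mark_out = t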
Also illustrated in FIG. 1A is an audio reference source 4 which is optional. The digital system of the present invention may operate with simply a source of video reference information 3. Optionally, however, a source of audio reference information 4, a source of digital additional information XD 13 or a source of analog additional information XA 14 may be added. There may be plural sources 4 of audio reference information, plural sources of digital additional information 13 or plural sources of additional analog information 14. Any combination of sources of information may be included.
When there is a source of audio reference information 4, the audio reference information is input to digital storage means 11. If the audio reference information from source 4 is already digital, it may be directly input and stored in digital storage means 11. Alternatively, if the audio reference information from source 4 is analog, the information may be digitized and compressed by digital encoder 7 and compression means 8 before being stored in digital storage means 11. The digitized audio reference information is output from digital storage means 11 to coding and control means 1 via decode/decompression means 12. The compression and decompression means 8 and 12 are optional. The audio reference sources 4 may be separate tracks of a stereo recording. Each track is considered a separate source of audio reference information.
The video reference source 3 and the audio reference source 4 may be a video cassette recorder such as SONY EVO-9850. The digital video encoder 9 and compressor 10 may be a MPEG-1 Encoder such as the Future Tel Prime View II. The digital audio encoder 7 and compressor 8 may be a sound encoder such as the Sound Blaster 16. A PC-compatible computer system such as a Gateway P5-133 stores the data to a digital storage means 11 such as a compact disc recording system like a Yamaha CDR00 or a hard disk like a Seagate ST72430N.
The coding and control means 1 codes the reference video and audio information to generate coded data. Whenever there is more than one source of information such as an audio reference source 4, a source of additional digital information 13 or a source of additional analog information 14, the coding and control means 1 performs a linking function. Linking is the process by which information from different sources is synchronized or correlated. This is accomplished through the use of time reference data. Linking provides the ability to play and view video, audio and additional information in a synchronized manner. The linking data permits instant random access of information. The coding and control means 1 performs the linking function in addition to the indexing function discussed above. Linking and indexing are together referred to as "coding". When there is more than one source of information the coding and control means 1 performs linking and/or indexing. The linking data, which comprises time reference values, is stored as coded data in coded data store 2. Additionally, the indices of data added by the process of coding are stored in coded data store 2.
In addition to audio reference and video reference information, the digital video system may include a source of additional information which may be analog or digital. In the event that the additional information is analog, the additional information from source 14 may be digitized by digital encoder 15. The additional information from source 13 or 14 may be the transcript of the audio reference information, notes or annotations regarding the audio or video reference information, a static picture, graphics, or a document such as an exhibit for a videotaped deposition with or without comments. The source of the additional information may be a scanner, stored digital information, or a transcript of a deposition being produced in real time by a stenographer. The annotations or notes may be produced in real time also. The coding and control means codes the reference video information, reference audio information, and additional analog or digital information to generate coded data which includes linking data and indexing data.
During the coding process, when indexing is performed interactively by a user, the coded data which is generated is attribute data. The attribute data may be an event type. This creates a first table which is an index of event types. For example, event types may be "Questions," "Pause Sounds," or "Writing on Board" for a study of a video of a teacher's teaching methods. These are events which take place in the video. The attribute data may regard a characteristic associated with an event type. This creates an index of characteristics. In the example mentioned, characteristics for the event type of "Questions" may be "administrative questions," "questions regarding discipline," or "content of questions." This creates a second table which is an index of characteristics associated with each event type. The attribute data may include a plurality of choices for a characteristic. In the above example, choices for the characteristic of "administrative questions" may include "administrative questions regarding attendance," "administrative questions regarding grades," or "administrative questions regarding homework." This creates a third table which is an index of choices of a characteristic. A fourth table designates a selected choice of a plurality of possible choices. Thus for example, the selection may be "administrative questions regarding grades." A fifth table is created which includes time reference values associated with each instance of the event type. So for example, an index is created of time reference values associated with each time a question is asked for the event type "Questions". During the coding process, the user interactively marks the mark in point of the video reference information that designates each instance of a question being asked. Additionally, the user may optionally mark the mark out point when the question is finished being asked.
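(A minimal sketch of these five tables as illustrative Python rows; the column names are assumptions, not the patent's schema.)

    # One illustrative row per table, following the example above.
    event_types     = [{"id": 1, "name": "Questions"}]
    characteristics = [{"id": 10, "event_type": 1,
                        "name": "administrative questions"}]
    choices         = [{"id": 100, "characteristic": 10,
                        "value": "administrative questions regarding grades"}]
    selected_choice = [{"instance": 1000, "characteristic": 10, "choice": 100}]
    instances       = [{"id": 1000, "event_type": 1,
                        "mark_in": 81.4, "mark_out": 93.0}]  # one row per question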
The digital video system of the invention also permits automatic or semi-automatic coding and control. For example, the coding and control means 1 may create an index of the time reference values corresponding to each time the video scene changes. The coding and control means 1 may generate a time reference value for a scene change by comparing the digitized data of a number of frames N to determine a scene change. So for example, where N = 3 the coding and control means 1 may compare three frames to determine if a scene has been changed. The user may input the parameter N. For example, the user may change N to 5 and change the operation of the system so that the coding and control means 1 compares five frames to determine if a scene has been changed. Depending upon the type of events being shown in the video, it may be necessary to determine a threshold amount of changed data to determine if a scene has changed. For example, a camera cut can be more easily determined than fading. Further, if the video is of a sports event, there may be a lot of dynamic action in the video even though no scene change has occurred. Thus, the user may input the threshold amount T, for example T=50%, of changed data which is necessary to determine if a scene has changed. The user may change the threshold amount T, from 50% to 20% for example.
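(A minimal sketch of such a frame comparison, in Python with NumPy, assuming grayscale frames; N and T are the user parameters described above, while the per-pixel difference cutoff of 16 is an added assumption.)

    import numpy as np

    def scene_changes(frames, n=3, t=0.5):
        """Flag frame i as a scene change when the fraction of pixels that
        differ from each of the previous n frames exceeds threshold t."""
        changes = []
        for i in range(n, len(frames)):
            cur = frames[i].astype(int)
            diffs = [np.mean(np.abs(cur - frames[j].astype(int)) > 16)
                     for j in range(i - n, i)]
            if min(diffs) > t:   # changed relative to all n prior frames
                changes.append(i)
        return changes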
The coding and control means 1 includes the ability to search for instances of an event type. The coding and control means 1 may search for instances of one event type occurring within a time interval Y of instances of a second event type. Thus, the system can determine each instance when one event occurred within a time interval Y of a second event.
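(A minimal sketch of that interval search in Python, reusing the illustrative instance rows from above; a sort-based scan would scale better than this nested loop.)

    def within_interval(first, second, y):
        """Return instances of the first event type whose In point lies
        within y seconds of an instance of the second event type."""
        return [a for a in first
                if any(abs(a["mark_in"] - b["mark_in"]) <= y for b in second)]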
The coding and control means 1 includes an alarm feature. An alarm may be set at each instance of an event type. When the alarm occurs, the coding and control means 1 controls a system action. Thus, for example, each time a question is asked in the video, the system may position the video and play. Other system actions such as stopping the video, highlighting text of a transcript or subtitling may occur.
The coded data store 2 may be a relational database, an object database or a hierarchical database.
The coding and control means 1 performs the linking function when there is more than one source of information. Linking data is stored to relate digital video and digital audio information. Linking data may also link digital video or digital audio information to additional information from sources 13 and 14. Linking data includes time reference values. Correlation and synchronization may occur automatically, semi-automatically or interactively. Synchronization is the addition of time reference information to data which has no time reference. Correlation is the translation or transformation of information with one time base to information with another time base to make sure that they occur at the same time.
The digital system of the present invention operates on time reference values that are normalized unitless values. During synchronization, time reference values are added to information that includes no time reference such as a document which is an exhibit for a videotaped deposition. If both sources of information include time reference information, the correlation process transforms one or both to the time reference normalized unitless values employed by the system. One or both sources of information may be transformed or points may be chosen that are synched together. The time reference information of one source can be transformed to a different time scale by a transformation function. The transformation function may be linear, non-linear, continuous, or not continuous. Additionally, the transformation function may be a simple offset. The transformation function may disregard blocks of video between time reference values, for skipping advertising commercials, for example.
Time codes with hour, minute, second and frame designations are frequently used in the film industry. The coding and control means 1 correlates these designations to the normalized unitless time reference values employed by the system.
Likewise, the coding and control means 1 may transform a time scale to the time code designation with hour, minute, second and frame designations. The coding and control means 1 may correlate two sources of information by simply checking the drift over a time interval and selecting points to synch the two information sources together. The coding function of the digital system of the present invention is not just an editing function. Information is added. Indices are created. Further, a database of linking data is created. The original reference data is not necessarily modified.
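(A minimal sketch of the correlation step in Python; the synch points and frame rate are assumptions, and a real transform could equally be a simple offset, non-linear, or discontinuous, as described above.)

    def timecode_to_seconds(hh, mm, ss, ff, fps=30):
        """Convert an hour/minute/second/frame designation to seconds."""
        return hh * 3600 + mm * 60 + ss + ff / fps

    def make_correlator(point1, point2):
        """Build a linear transform mapping one time base onto another
        from two synch points (a1, b1) and (a2, b2)."""
        (a1, b1), (a2, b2) = point1, point2
        scale = (b2 - b1) / (a2 - a1)   # corrects steady clock drift
        return lambda t: b1 + scale * (t - a1)

For example, make_correlator((0.0, 10.0), (60.0, 70.5)) maps a source that drifts by half a second per minute onto the system's time base.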
New data is created, though the system can be used for editing. The coded data store may be in any format, including edit decision list (EDL), which is the industry standard, or any other binary form. The coded data store 2 stores the database indices which are created, linking data, and data from the additional sources 13 and 14, which may include static pictures, graphics, documents such as deposition exhibits, and text which may include transcripts, translations, annotations, or closed captioned data. Subtitles are stored as a transcript. There may be multiple transcripts or translations or annotations or documents. This permits multiple subtitles.
FIG. 1B illustrates the coding and control means 1 of FIG. 1A. The coding and control means includes controller 16. Controller 16 is connected to derivative data coding means 17 and correlation and synch means 18. Controller 16 is also connected to the coded data store 2 and to the output 6. Digital information from the digital storage means 11 is input to the derivative data coding means 17. If information from one source only is input to the derivative data coding means 17, only the indexing function is performed. If information from two sources is input to the derivative data coding means 17, indexing and linking are performed. The coding and control means 1 may further include correlation and synch means 18 for receiving additional data XD and XA. The correlation and synch means 18 correlates data with a time reference to the video information from the digital storage means 11 and synchronizes data without a time reference base to the digital video information from the digital storage means 11. Control loop 19 illustrates the control operation of the controller 16. The user may be part of control loop 19 in interactive or semi-automatic operation. Control loop 20 illustrates the control function of controller 16 over correlation and synch means 18. The user may be a part of control loop 20 in interactive and semi-automatic operation.
For interactive or semi-automatic operation, the control loops 19 and 20 also include input/output interface devices which may include a keyboard, mouse, stylus, tablet, touchscreen, scanner or printer. FIG. 1C is a chart showing the structure of the coded data store 2 of FIG. 1A for indexing data. FIG. 1D is a software flowchart. The following define the indices of the coded data store 2.
DEFINITIONS
Characteristics Characteristics are variables which are applicable to a particular event type. An example would be the event type "Teacher Question", where a characteristic of the question might be "Difficulty Level."
CharChoices CharChoices contains valid values of the parent Characteristics variable. For example, for the Characteristic "Difficulty Level" the CharChoices might be "High," "Medium" and "Low." CharChoices serves as a data validation tool to confine user data entry to a known input that is statistically analyzable.
Event Types Stores model information of the event code such as whether the code can have an in and out point. Serves as a parent to the characteristic table which includes possible variables to characterize the event type.
Instances Contains instances of particular event types with time reference information.
InstCharChoice Stores actual value attributes to a characteristic of a particular event instance. For example, one instance of the teacher question might have a value in the characteristic "Difficulty Level" of "Medium."
OutlineHdng Stores the major headings for a particular outline.
OutlineSubHdng Stores actual instances that are contained in an outline. These instances were originally coded and stored in the instance table, but when they are copied to an outline are completely independent of the original instance.
Pass Filters Stores filter records which are created by the sampling process. These records are used to screen areas for coding.
Samples Stores samples for the purposes of further characterization. These instances are either a random sample of previously coded instances or computer generated time slices created using sampling methodologies.
Segments Corresponds to physical media where the video is stored. This table is a "many" to the parent Units table.
SeqNums Stores sequence numbers for all tables.
Sessions Keeps track of coding for each study down to the pass and unit level. Therefore, a user may go back to his/her previous work and resume from where they left off.
Studies Parent table to all information used to code, view and analyze units. The following tables are children: Study Units, Study Events, Study Pass.
Studies Pass Stores information for a particular pass in the study such as pointers to filters and locked status for sampling.
StudyUnits Contains references to units that are attached to a particular study. Since there may be multiple units for each study and there may be multiple studies that utilize a particular unit, this table functions as a join table in the many-to-many relationship.
Study Event This table stores particular information relevant to the use of a particular event type in a particular study and a particular pass. Since there may be multiple Event Types for each study and there may be multiple studies that utilize a particular Event Type, this table functions as a join table in the many-to-many relationship.
Transcribe Stores textual transcript and notes and time reference values for each utterance which correspond to a Unit.
Units Parent table of videos viewable.
Users Contains all valid users of the system.
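(A minimal sketch of the many-to-many join that StudyUnits describes, using illustrative Python structures; the ids and names are invented for the example.)

    studies     = {1: "Teaching Methods"}
    units       = {7: "Lesson 12"}
    study_units = [(1, 7)]   # (study_id, unit_id) join rows

    def units_in_study(study_id):
        return [units[u] for s, u in study_units if s == study_id]

    def studies_using_unit(unit_id):
        return [studies[s] for s, u in study_units if u == unit_id]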
The coded data store 2 stores data representing time reference values relating the digital audio information to the digital video information and vice versa. Accordingly, for any point in the video information, the corresponding audio information may be instantly and randomly accessed with no time delay. Additionally, for any point in the audio information, the corresponding video frame information may be instantly and randomly accessed with no time delay. The coded data store 2 stores attribute data. The attribute data is stored in an index and is derivative data that is added during the coding process. The attribute data may be an event type, i.e. any action shown in the video, such as a person raising his hand, standing up or making a field goal. Attribute data may be time reference data indicating instances of an event type. The attribute data may also include a characteristic associated with an event, such as directness or acting shy. The attribute data may also include a plurality of choices of characteristics, such as being succinct or being vague. It may be the chosen choice of plural possible choices. The coded data store 2 stores time reference data corresponding to the attribute data.
Additionally in the invention, the coded data store 2 stores data representing the text of a transcript of the digitized audio information. For use in legal services such as recording and analyzing depositions, a video deposition can be digitized. The video information originates at reference source 3 and the audio information originates at reference source 4. The video and audio information may be digitized and/or compressed via digital encoders 7 and 9 and compressors 8 and 10 and stored in a digital storage means 11. Additionally, a transcript of the deposition may be stored in coded data store 2. More than one transcript, foreign language translations, for example, may be stored. Coding and control means 1 accesses video information from digital storage means 11, audio information from digital storage means 11 and the transcript information from coded data store 2 and simultaneously displays the video and the text of the transcript on output display 6. Additionally, the audio is played. The video is displayed in one area of a Video Window called the video area and the text of the transcript is displayed in a transcript area. More than one transcript may be displayed.
Notes and annotations and static documents in the form of text or pictures/graphics may be stored in the coded data store 2 and may be simultaneously displayed in a second transcript area of the Video Window. The Video Window is illustrated in FIG. 20 and is described in detail later.
Additionally or alternatively, subtitles can be added to the video information and displayed on output display 6 in the same area as the video. When the digital video system is operated employing subtitles, the viewer can view the video information with subtitles and simultaneously watch the text of the transcript on output display 6.
The attribute data that is stored may be regarding video scene changes.
Every time the scene in the video changes, the time reference data of the scene change is stored in the coded data store 2. This may be performed interactively, automatically or semi-automatically. If an event occurs a number of times, the time reference values associated with each occurrence of the event are stored in the coded data store 2. The present invention has a presentation ability where a presentation may be displayed on output display 6. The video associated with each stored time reference value is displayed in sequence to create a presentation. For example, in an application dealing with legal services and videotaped depositions, every occurrence of a witness squinting his eyes may be tracked by storing a time reference value for each occurrence during the coding process. The time reference values represent the times at which the pertinent video portion starts and finishes. Then a presentation may be made of each occurrence of the event one after the next. The digital system of the invention includes search abilities where a word or a phrase may be searched in the text of the transcript of the digitized audio information. A search of notes, annotations or a digitized document for a word or phrase may also be performed. Additionally, the present invention includes the ability to perform statistical analysis on the attribute data. Random sampling of instances of an event type can be performed. Coding and control means 1 accesses coded data store 2 and analyzes the data in accordance with standard statistical analysis. The invention includes a method of analyzing video information including storing digital video information, storing digital audio information, storing coded data linking the digital video and digital audio information, storing coded data regarding events in indices, and computing statistical quantities based on the coded data.
The present invention results in a video analysis file for a multimedia spreadsheet containing time-dependent information linked to video information. The video information and textual (transcript, annotations or digitized documents) information can be searched. The video information may be stored on a CD-ROM disk employing the MPEG-1 video standard. Other video standards may be employed. Additionally, other storage media may be employed. The coded data store 2 and digital storage means 11 illustrated in FIG. 1A may actually be parts of the same memory.
Analog videotapes may be converted into digital video format by a standard digitized video transfer service that is fast and inexpensive, and deals with high volume at a low cost. The digital video service digitizes the video, compresses it and synchronizes it with the audio. Alternatively, the system may digitize the video and audio information from reference sources 3 and 4. The source of information may be a commercial, broadcast or analog video tape.
The present invention permits video analysis so that the user may view, index, link, organize, mark, annotate and analyze video information. This is referred to as "coding." On screen buttons and controls permit the marking, coding and annotation of the video. A transcription module permits synchronized subtitles. Multiple subtitles are possible, which is of importance to the foreign market for films which may require subtitles in different languages. The present invention has note-taking abilities. Searches may be performed over the video information, notes, the transcript of the audio information, coded annotations or digitized documents. A presentation feature permits the selection and organization of video segments into an outline to present them sequentially on a display or to record them to a VCR or a computer file.
Complex coding and annotations are performed in several passes such that multiple users may code and annotate the digitized information. One user may make several passes through the video for coding, marking and annotating or several users may each make a pass coding, marking and annotating for separate reasons. Information may be stored and displayed in a spreadsheet format and/or transferred to a statistical analysis program, and/or to a graphics program. Types of statistical analyses which may be conducted, for example, are random sampling, sequential analysis, cluster analysis and linear regression. Standard algorithms for statistical analysis are well known. Additionally, the information may be input to a project tracking program or standard reports may be prepared. Spreadsheets and graphs may be displayed and printed.
The present invention has use in video analysis for research and high end analysis, the legal field and the sports market. The present invention would be useful in research in fields of behavior, education, psychology, science, product marketing, market research and focus groups, and the medical fields. For example, teaching practices may be researched. Verbal utterances are transcribed, multiple analysts mark and code the events and annotate the video information for verbal and nonverbal events, lesson content and teacher behavior. The transcribed utterances, marks, codes, and annotations are linked to the video and stored. The information may be consolidated, organized, presented or input for statistical analysis and interpretation. Other fields of research where the invention has application are industrial process improvement, quality control, human factors analysis, software usability testing, industrial engineering, and human/computer interactions evaluations. For example, videos of operators at a computer system can be analyzed to determine if the computer system and software are user friendly. The present invention would be useful in legal services where videotaped depositions may be annotated and analyzed. Deposition exhibits may be stored in the coded data store with or without notes on the documents. Additionally, there is an application for the present invention in the sports market where sports videos may be annotated and coded for analysis by coaches and athletes. The present invention includes applications and operations software, firmware, and functional hardware modules such as a User Module, a Menu Manager, a Unit Module, a Study Module, a Video Window, a Transcribe Mode, a List Manager, an Outline Presentation Feature, a Sampling Feature, an Analysis Module and a Search Module. Reports may be created and output.
A unit, as used herein, is composed of a video and transcript data. A unit may span several tapes, CD's or disks. These media are referred to as segments, and a unit has at least one segment. The present invention may handle multiple segments per unit. This permits the present invention to accommodate an unlimited quantity of video information. A unit may include plural transcripts stored in memory. A transcript is the text of speech in the video, foreign language translation, subtitles or description or comments about the video.
A study includes a collection of units. A study is defined to specify coding rules for the units associated with it, for example, what event types and characteristics are to be recorded. A unit may be associated with one or more studies.
When an analyst starts coding for a study, the basic unit of work is called a session. A session is a specific coding pass for a specific unit by one user. The number of sessions that are created for a study is equal to the number of units included in the study multiplied by the number of coding passes defined for the study. A session must be open in order to go into code mode on the coding window. If no session is open, the user is prompted to open one.
THE USER MODULE
The User Module includes all windows, controls, and areas that are used to define users, control security, logon, and do primary navigation through the interactive digital system. The User Module is briefly mentioned here for the purpose of describing logon and is explained in more detail later.
The interactive video analysis program of the present invention requires a logon before granting access to the program functions and data. The purpose of the logon process is not only to secure the database content, but also to identify the user, assign access privileges, and track information such as how long a user has been logged on. After a successful logon, the user is assigned access privileges and presented with the program's main button bar which contains icons that allow entry to various parts of the program. The number and type of icons that appear on the button bar for a given user are dependent on the privileges granted to him in his user record. The main button bar, or alternatively the application tool bar, is part of the Menu Manager.
THE MENU MANAGER
The main button bar is illustrated in FIG. 2A. The manage button bar of FIG. 2B is accessed from the main button bar of FIG. 2A and is an extension of the main button bar. Access to commonly accessed modules is provided by the main and manage button bars. In another preferred embodiment, the application tool bar of FIG. 2C replaces the main button bar and manage button bar of FIGS. 2A and 2B.
With respect to FIG. 2C, icon 21 represents "View," icon 22 represents "Code," icon 23 represents "Transcribe," icon 24 represents "Study," icon 25 represents "Unit" for defining new units, icon 26 represents "Search," and icon 27 represents "Analysis."
Area 28 displays the units, for example, which unit is current, or permits selection of previously defined units. Area 29 represents the "outline" feature, and area 30 is directed to "Sessions" selection. The application-wide tool bar provides access to the most commonly accessed modules, including Video-View Mode, Video-Code Mode, Video-Transcribe Mode, Search Module, Unit Module, Analysis Module, Help Module, Session Selection, Unit Selection, and Outline Selection.
Video-View Mode
The Video-View Mode opens the Video Window, making the view mode the active module. If the user has never accessed a unit record, the user will be presented with a unit selection dialog.
Video-Code Mode
The Video-Code Mode opens the Video Window, making the code mode the active module. If the user has never accessed a session, the user will be presented with a session selection dialog.
Video-Transcribe Mode
The Video-Transcribe Mode opens the Video Window, making the transcribe mode the active module. When the transcribe mode is activated, the transcription looping palette will be displayed automatically.
Search Module
The Search Module opens the search window, making it the current module.
Unit Module
The Unit Module opens the Unit Module, making it the current module.

Study Module
The Study Module opens the Study Module, making it the current window.
Analysis Module
The Analysis Module opens the Analysis Module, making it the current window.
Help Module
The Help Module opens the interactive video analysis help system.
Session Selection Popup
The session selection popup provides the ability to change the current session when in Video-Code Mode.
Unit Selection Popup
The unit selection popup provides the ability to change the current unit when in Video-View Mode.
Outline Selection Popup
The outline selection popup provides the ability to change the current outline when in Video-Transcribe Mode.
THE USER MODULE
The User Module is now described in more detail. Users are added via an application preferences dialog.

User List Window
FIG. 3 illustrates the user list window. The user list window lists the users.
User Detail Window
The user detail window of the User Module is illustrated in FIG. 4. It is the primary window that contains all data needed to define a user, including information about the user and security privileges. This window is presented when adding a new user or when editing an existing user. The fields and controls in the window include the window name "User Detail," the first name, the last name, the user code, phone number, e-mail address, department, custom fields, whether logged on now, last logon date, number of logons since, logged hours reset count, comments, logon id, set password, and login enabled. The user detail window includes coding and transcription rights area 31. This is a field of four checkboxes that grant privileges to code video (create instances) or edit the transcription text, as shown in the table of FIG. 5. The user detail window also includes system management rights area 32. This area is a field of five checkboxes that grant privileges to manage setup of the study and various other resources, as shown in the table of FIG. 6. The user detail window further includes the make-same-as button, navigation controls, a print user detail report button, and a cancel/save user button.
THE STUDY MODULE
The collection of windows and procedures that together allow definition of studies, event types, characteristics, choices, units, and samples comprises the "Study Module". The Study Module is reached from the main button bar or the application tool bar that is presented when the interactive video analysis program is initiated. A study can be thought of as a plan for marking events that are seen in the video or in the transcription text of the audio information. A study contains one or more event types, which are labels for the events that are to be marked. Each event type may also have one or more characteristics, which are values recorded about the event. When an event is marked in the video or transcript text it is formally called an "event instance". When the project is first initialized, one study is created. A default study is used when the user does not choose to create a predefined coding plan (study), but rather wishes to use the program in a mode in which event types can be assigned at will.
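By way of illustration, the study/event type/characteristic/choice hierarchy described above might be modeled roughly as follows (a hypothetical Python sketch; the names are assumed, not taken from the patent):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Choice:
        value: str   # e.g. "1"
        name: str    # e.g. "Succinct"

    @dataclass
    class Characteristic:
        # A value recorded about an event, e.g. "Directness".
        code: str
        name: str
        choices: List[Choice] = field(default_factory=list)

    @dataclass
    class EventType:
        # A label for the events to be marked, e.g. "Question Asked".
        code: str
        name: str
        characteristics: List[Characteristic] = field(default_factory=list)

    @dataclass
    class Study:
        # A coding plan: one or more event types, each with characteristics.
        name: str
        event_types: List[EventType] = field(default_factory=list)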
General Navigation
The Study Module may be accessed by selecting the study button from the application tool bar or by selecting study from the module submenu. When the module is first opened the user is presented with a standard find dialog whereby he can search for specific records which he wishes to work with. The find dialog screen is illustrated in FIG. 7.
Generic Actions
Double-clicking on a list item results in opening that item for edit. For example, double-clicking on a study in the studies list window, as illustrated in FIG. 8, results in opening a study for edit in the study detail window. There is a close box in the title bar or a save button whereby the record is saved and the user is returned to the previous window. The ok/cancel button has the action of returning to the original window.
Navigation Controls
General navigation controls such as First, Prev, Next and Last are included. The First control goes to the first record in the selection displayed in the list. The Prev button goes to the record immediately before the current record in the selection displayed in the list. The Next button goes to the record immediately after
the current record in the selection displayed in the list. The Last button goes to the last record in the selection displayed in the list.
Constrained Studies
A study can be constrained to be a subset of another study. This means that the study can only contain units that were specified in the other study (either all the units, or a subset of the units). If additional units are added to the "parent study", they become available to the constrained study but are not automatically added. Constraining a study to be a subset of another study also means that the event types for the parent study are available as event filters in the sample definition for the constrained study. As explained in detail below, a study is constrained when the "constrain unit selection to be a subset of the specified study" button is checked on the "use units from other study" window as illustrated in FIG. 16. The constraint cannot be added after any sessions have been created for the study. The constraint can be removed at any time as long as the constrained study does not include any event types from the parent study in its sample criteria.
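These two rules can be expressed as simple checks. The following is a hypothetical Python sketch; the data layout is assumed, not specified by the patent.

    def can_add_constraint(study: dict) -> bool:
        # The constraint cannot be added after any sessions have been
        # created for the study.
        return len(study.get("sessions", [])) == 0

    def can_remove_constraint(study: dict, parent_event_types: set) -> bool:
        # The constraint can be removed only while the constrained study's
        # sample criteria use no event types from the parent study.
        used = set(study.get("sample_event_types", []))
        return used.isdisjoint(parent_event_types)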
The Default Study
Every project contains a default study that is created when the project is first created. The default study allows entry into code mode of the Video Window shown in FIG. 20 if no formal studies have been defined. Event types and characteristics may be added to the default study at will from the Video Window. The default study is maintained from the video window and not from the Study Module; hence, it does not appear in the study listing window shown in FIG. 8. It does appear whenever studies are listed in all other modules. A session, called the default session, is always open for the default study. If no other studies have been created in the project, the default session is opened without prompting when the user goes into code mode on the study window.

Applied Rules
Units may be added to a study. A unit cannot be added to a study unless it is locked. The purpose of the lock is to ensure that the set of units for a specific study does not change once a pass has been locked.
Studies may be deleted. A study cannot be deleted if it is constrained by another study or if the study itself is locked. A study should not be allowed to be constrained to another study that is not locked yet.
Studies List Window
The studies list window shown in FIG. 8 presents all the studies defined for the project. The window displays only the three fields: study name, description, and author. Double-clicking on a study record opens the study detail window for the selected study.
Study Detail Window
The study detail window is the primary window that contains all data needed to define a study. This window is presented when creating a new study or when editing an existing study. The study detail window is illustrated in FIGS. 9A and 9B.
Referring to FIG. 9A, the study detail window includes a number of fields and controls. Field 41 is the window title. The name of this window is "Study Detail". Field 42 is the study name. In this field the name of the study may be entered. Field 43 is the author. This is a non-enterable area that is filled by the program using login data. Field 44 is the create date area, which includes the date and time when the study was initially created. This is a non-enterable area that is filled by the program when the study record is created. Field 45 is the study description, which is a scrollable enterable area for text to describe the study. Field 46 is the study outline, which is a scrollable area that shows the event types, characteristics, and choices created for the study. Event types are shown in bold in FIG. 9A. Characteristics are displayed in plain text under each event type. Choices are displayed in italics under each characteristic. Thus, as shown in FIG. 9A, the event type is "Question Asked", the characteristic is "Directness", and the choices are "Succinct" and "Vague".
A line separates each pass; the pass separation line is designated 46d. If an event type or characteristic is double-clicked, that item is opened for edit in its appropriate detail window. If a choice value is double-clicked, its parent characteristic is opened for edit in the characteristic detail window.
FIG. 9A illustrates a study detail window for a study for video analysis of teaching practices. In the field of research for education, teaching practices may be analyzed by videotaping teachers interacting with students in the classroom. Various event types such as asking questions, raising one's hand, or answering a question are analyzed by marking the events in the video. As shown in FIG. 9A, event type 46a is displayed in bold with the event code and event type name (e.g., "Questions Asked"); the type of marking associated with the event type (for example, "Vi/T" means "mark Video In Point and text" for each instance); and the pass in which the event type is to be marked (e.g., "1"). Detailed descriptions of the meaning of each of these are given under the "event type detail window" which is shown in FIG. 13.
Allowable mark values are:
V = Video In and Out points are to be marked
Vi = Video In point is to be marked
E = Exhaustive segmentation
T = Text is to be marked
"V" , "Vi", and "E" are mutually exclusive. "T" may be used by itself or combined with "V" or "Vi" . For example, "Vi/T" means the Video In point and the text are to be marked for the event type. If no marking is turned on, then nothing is displayed (for example, see "Answer" in Pass 3 in the illustration).
When an event type is double-clicked, the action is to open that event type in the event type detail window.
The characteristic label 46b as shown in FIG. 9A is displayed in plain text with the characteristic code (e.g., "DI"), the name of the characteristic, and the data entry type (e.g., "select one"). Characteristics are displayed immediately under the event type to which they are associated. When a characteristic is double-clicked, the action is to open that characteristic in the characteristic detail window as shown in FIG. 14.
The order in which the characteristics are displayed under the event type is also the order in which they are displayed on the Video Window. The user can change the order by clicking on a characteristic and dragging it to a point above or below another characteristic belonging to the same event type. Characteristics cannot be dragged from one event type to a different event type (for example: the user cannot drag characteristic "Directness" from event type "Question Asked" to event type "Answer"), but characteristics can be dragged from one event type to the same event type that belongs to a different pass through the video (for example: the user can drag characteristic "Effectiveness" from "Answer" in pass 3 to "Answer" in pass 2). When a characteristic is moved all associated choice values are moved with the characteristic and retain their same order. FIGS. 10A and 10B illustrate dragging a characteristic. FIG. 10A illustrates the before condition. The characteristic "Appropriateness" in pass 1 will be dragged to pass 2. FIG. 10B illustrates the after condition. The characteristic "Appropriateness" was dragged from pass 1 to pass 2. The action is to create a new appearance in the event type "Question Asked" in pass 2, with "Appropriateness" underneath it.
The choice value 46c illustrated in FIG. 9A is displayed in plain text with a user-defined value (e.g., "1") and choice name. Choices are displayed immediately under the characteristic to which they are associated. The user can change the order of choices by clicking on a choice value and dragging it above or below another choice value belonging to the same characteristic. Choice values cannot be dragged from one characteristic to another or between passes.
The pass separator line 46d shown in FIG. 9A separates the passes through the video being analyzed. If more than one pass has been created, a pass separator line is drawn between the event types of each pass. The pass separator line cannot be dragged or selected. Button 47 is the add event type button. The action of this button is to create a new event type and open the event type detail window shown in FIG. 13. The "select an event/sampling method" menu for creating a new event type and opening an event type detail window is illustrated in FIG. 11. Button 48 of the study detail window of FIG. 9A is the "remove from study" button. The action of this button is to remove the highlighted item from the study along with all indented items under it. For example, removing an event type also removes the associated characteristics and choice values directly under it. If the last event type is removed from a pass, the pass is automatically deleted and removed from the "passes and sampling" area 55 of the study detail window. Pass 1 may not be deleted.
The pass display area 49 displays the pass to which the highlighted event is assigned. It is also a control tool to select the pass. The pass display 49a is a non-enterable area which displays the pass of the currently highlighted event type. The pass selector area 49b is a control tool that works only when an event type is selected. Clicking the up-arrow moves the selected event type to the next higher pass. Similarly, clicking the down-arrow has the action of moving the selected event to the next lower pass. If the pass number is set to a value greater than any existing pass, the action is to create a new pass. Each pass must contain at least one event type.
The show characteristics checkbox 50, when checked, is for displaying all characteristics under the appropriate event type in the study outline area 46 and for enabling the "show choices" checkbox. The show choices checkbox 51, when checked, displays all choice values under the appropriate characteristic in the study outline area 46. The add pass button 52 has the action of creating a new pass. FIG. 12 illustrates a newly created pass represented by a separator line and a pass number. New event types will be added to the pass, and existing event types can be dragged to the pass.
The specified units button of the study detail window of FIG. 9A has the action of presenting the unit selection window shown in FIG. 15. The specified units area 53 is a non-enterable text area to the right of the button which displays the number of units selected for the study. The button is disabled when the checkbox titled "Include all units in project" is checked.
Area 54 includes a unit constraint message. If a constraint is in effect that affects the selection of units, the text describing the constraint is displayed in this area. There are two possible values of the message: "Constrained to the subset of [study]" and "Constrained to include all units in the project". The second constraint is imposed when the checkbox "Include all units in project" 60 is chosen. Area 56 is the unit selection description. This area is a scrollable enterable area for text to describe the units selected for the study. Area 55 is the "passes and sampling" area. This is a scrollable non-enterable area that displays an entry for each pass with the pass number and its sample mode. Area 57 includes navigation controls: First, Prev, Next and Last.
Button 58 is the print study button which is used to print the study detail report. Buttons 59 are the cancel/save study buttons. The save button saves all the study data and returns to the studies list window shown in FIG. 8. The cancel button returns to the studies list window of FIG. 8 without saving the changes to the study data. Checkbox 60 is the "Include all units in project" checkbox which has the action of setting the behavior of the study so that all units in the project are automatically included in the study. Units may be added at any time to the project, and they are automatically added to the study.

The Event Type Detail Window
The event type detail window is illustrated in FIG. 13. This window is for entry of all attributes for an event type. The window is reached through the study detail window of FIG. 9A when either an event type is double-clicked in the study outline 46 or when a new event is added employing button 47. A number of fields and controls of the event type detail window are described below.
The window title area 61 gives the name of the window which is "Event Type Detail" . The event code area 62 is an area for the code that uniquely identifies this event type when analysis data is created or exported. The event name area 63 is the area for the name of the event type. The saved search area 64 is a non-enterable text area which appears only if this event type was created by the Search Module to mark instances retrieved from a search. The area provides information only. An event type created to be a saved search can have characteristics, but cannot have video marking or text marking turned on. No new instances can be coded for a saved search event type.
The coding instruction area 65 is a scrollable text area for entry of instructions on how to mark this event type. This text area is presented when help is selected on the Video Window. The event instance coding area 66 contains checkboxes for specifying the rules identified at areas 67, 68 and 69 for how event instances are to be coded.
Typically instances will be marked using video, text, or both. This means that "video marking", "text marking", or both will be checked. Instances can be marked for this event type in all passes in which the event type occurs, unless checkbox 69 entitled "restrict instance coding to earlier pass only" is checked. In this case, new instances can only be marked in the first pass in which the event type appears in the coding plan. For example, the same event type may appear in pass 2 and in pass 3. If the event instance coding is "mark video" and checkbox 69 "restrict instance coding to earlier pass only" is checked, new instances may be marked in pass 2, but not in pass 3. An example of where this would be done is when one pass is for instance hunting (such as pass 2) and another pass is reserved for just characterizing (pass 3).
The event instance coding requirement determines what will be saved in the database for each instance. If an event type is defined as "Video In" only, then any "Video Out" or text marking is ignored when instances are created.
The "mark video" checkbox 67 specifies whether instances are to be marked using Video In or Out Points. The checked condition means that new instances are to be marked using the video mark controls. Th unchecked condition means that no video is to be marked for this event type. Three choices are presented for how the video is to be marked for an event type. This governs the behavior of the mark controls on the Video Window when an instance of this event type is marked. The choices are:
(1) mark with In Point only; only the Video In Point is to be marked for this event type;
(2) mark with In and Out Points: both the Video In Point and Out Point are to be marked for this event type; and
(3) exhaustive segmentation: only the Video In Point is to be marked for this event type; the program automatically assigns an Out Point equal to the time code of the next In Point (i.e., every In Point is also the Out Point for the prior In Point), as illustrated in the sketch below.
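A hypothetical Python sketch of the exhaustive segmentation rule in choice (3) (function name and data layout assumed; the handling of the final instance, which has no following In Point, is an assumption noted in the comments):

    from typing import List, Optional, Tuple

    def exhaustive_segments(in_points: List[int],
                            unit_end: Optional[int] = None
                            ) -> List[Tuple[int, Optional[int]]]:
        # Only In Points are marked; each instance's Out Point is the time
        # code of the next In Point. The last instance is assumed here to
        # run to the end of the unit (None if the end is not known).
        marks = sorted(in_points)
        outs = marks[1:] + [unit_end]
        return list(zip(marks, outs))

    # In Points at frames 0, 120, and 300 yield back-to-back instances:
    assert exhaustive_segments([0, 120, 300], unit_end=450) == \
        [(0, 120), (120, 300), (300, 450)]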
The text marking checkbox 68 specifies whether instances are to be marked using text. The checked condition means that new instances are to be marked using the text mark control. The unchecked condition means that no text is to be marked for this event type.
The "restrict instance coding to earlier pass only" checkbox 69 specifies whether instances can be marked in all passes in which the event type appears in the coding plan, or only in one pass. The checked condition means that event instances can only be marked in the first pass (first means the first sequential pass, not necessarily pass 1) in which the event type appears. If the event type appears in other passes in the coding plan, it behaves as if "mark video" and "mark text" are both unchecked. For example, event types in other passes can only be for entering characteristic values, not for marking new instances.
The characteristics outline area 70 is a scrollable area that shows all the characteristics and choices associated with an event type for all passes. Characteristics are displayed in plain text. Choices are displayed under each characteristic in italics. If a characteristic in the characteristics outline area 70 is double-clicked, the item is opened for edit in the characteristic detail window illustrated in FIG. 14. If a choice value is double-clicked, its parent characteristic is opened for edit in the characteristic detail window. The order in which the characteristics are displayed in the outline is also the order in which they are displayed on the Video Window. The user can change the order by clicking on a characteristic and dragging it to a point above or below another characteristic within the same pass. When a characteristic is moved, all associated choices are moved within the characteristic and retain their same order. A characteristic can belong to only one pass.
The add characteristic button 71 has the action of creating a new characteristic and displaying the characteristic detail window illustrated in FIG. 14.
The delete characteristic/choice button 72 has the action of deleting what is selected in the characteristics outline and all indented items under it. For example, deleting a characteristic also deletes all of its associated choice values.
The print event type button 73 has the action of printing the event detail report. The cancel/save event buttons 74 include a save button which has the action of saving all the event type data and returning to the study detail window, and a cancel button which has the action of returning to the study detail window without saving the changes in the event type data.

Characteristic Detail Window
The characteristic detail window as illustrated in FIG. 14 is for entry of all attributes for a characteristic. This window is reached either through the study detail window illustrated in FIG. 9A or the event type detail window illustrated in FIG. 13 when either a characteristic is double-clicked in the outline or when a new characteristic is added. The fields and controls of the characteristic detail window are described below. The window title area 81 gives the name of this window which is "Characteristic Detail" . The characteristic code area 82 is an enterable area for the code that identifies this characteristic when analysis data is created or exported. The characteristic name area 83 is an enterable area for the name of the characteristic. The coding instruction area 84 is a scrollable text area for entry of instructions on how to mark the characteristic. This text is available when help is selected on the Video Window.
The data entry type area 85 presents four options on how data is to be collected for this characteristic. This governs the behavior of the mark controls on the Video Window when values for this characteristic are recorded. The options are:
(1) Enter Numeric Value: the value to be entered must be a real number.
(2) Enter Text Value: the value can be any keyboard-entered data.
(3) Select One from Choice List: the choices in the choice list are presented, and only one may be chosen. In this case, each choice must have a choice value (enterable by the user).
(4) Select All Applicable Choices: the choices in the choice list are presented, and all that apply may be chosen. In this case, each choice has a choice value that is programmatically determined; the first choice value is 1, then 2, then 4, then 8, etc. The data entry type cannot be changed once a session has been created for the pass in which this characteristic appears, nor can choices be added, changed, or deleted.
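The programmatically determined values (1, 2, 4, 8, ...) are successive powers of two, so any combination of applicable choices can be stored as a single sum. A hypothetical Python sketch (the choice names are illustrative only):

    from typing import Dict, List

    def choice_values(choices: List[str]) -> Dict[str, int]:
        # The i-th choice receives the value 2**i: 1, 2, 4, 8, ...
        # Each combination of choices then sums to a unique integer.
        return {name: 2 ** i for i, name in enumerate(choices)}

    values = choice_values(["Succinct", "Vague", "Leading", "Rhetorical"])
    assert values == {"Succinct": 1, "Vague": 2, "Leading": 4, "Rhetorical": 8}
    # Recording "Succinct" and "Leading" together stores 1 + 4 = 5.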
The choice list area 86 is a scrollable area that shows the choices associated with this characteristic. Choices can be added and deleted using add and delete choice buttons 87 and 88. Drag action allows the choices to be arranged in any order.
The add choice button 87 has the action of creating a new line in the characteristic outline for entry of a new choice. The new line is enterable. The delete choice button 88 has the action of deleting the selected choice after confirmation from the user. The print characteristic button 89 has the action of printing the characteristic detail report. The cancel/save characteristic buttons 90 return to the study detail window or the event type detail window, without saving the changes for cancel or after saving the changes for save.
Unit Selection Window
The unit selection window is illustrated in FIG. 15. The unit selection window allows specification of the units to be included in the study. The window is presented when the specified unit button is clicked on the study detail window illustrated in FIG. 9A. When the window is first opened, the "units selected for study" area 102 is filled with the units that have already been selected for the study. No units are displayed in the "unit list" area 97 unless the study is constrained to be a subset of another study. In this case this area is filled with all the units in the parent study.
The fields and controls of the unit selection window are described below:
The window title area 91 gives the name of this window which is "Unit Selection for Study: " followed by the name of the current study such as "Math Lessons" . The "unit selection description" area 92 is a scrollable text area for a description of this selection of units. This is the same text as appears in the "unit selection description" area on the study detail window of FIG. 9A.
The "show all units" button 93 has action which depends on the constraint condition. If the unit selection is constrained to be a subset of another study, the button action is to display all the units specified for the parent study. Otherwise, the button action is to display all the units in the project in the video list.
The "find units" button 94 has the action of presenting a search enabling the user to search for video units that will be displayed in the units list 97. The search allows search on any of the unit fields, using the user-defined custom fields. The units found as a result of this search are displayed in the unit listing area 97. If the unit selection is not constrained to be a subset of another study, the find action is to search all the units in the project. If the unit selection is constrained to be a subset of another study, the find action is to limit search to the units specified for the parent study.
The "copy from study" button 95 has the action of presenting the "use units from other study" window, prompting the user to select a study. When a study is selected, the units from the selected study are displayed in the unit listing area 97. If the checkbox entitled "constrain to be a subset of the specified study" is checked on the "use units from other study" window, the constraint message 96 is displayed on the window. Area 96 is the constraint message and checkbox. The message and checkbox only appear when a unit selection constraint is in effect. There are two possible values for the message:
If the unit selection was constrained to be a subset of another study in the "use units from other study" window, the message appears as "constrained to be a subset of [study]". If the unit selection was constrained to include all the units in the project from the units menu on the study detail window of FIG. 9A, the message appears as "Include all units in the project".
The unit listing area 97 is a scrollable area which lists video units by unit ID and name. This area is filled by action of the "all", "find", and "copy study" buttons 93-95. Units in this area 97 are copied to the "units selected for study" area 102 by clicking and dragging. When a unit is dragged to the "units selected for study" list 102, the unit appears grayed in the list. Units are removed from this list by clicking and dragging to the remove video icon 101.
The clear unit listing button 98 has the action of clearing the contents of the unit listing area 97. The copy to study button 99 has the action of copying the highlighted unit in the unit listing area 97 to the "units selected for study" listing area 102.
The checkbox 100 entitled "Randomly select units and add to the study" has the action of creating a random sample if the checkbox is checked. The action of checkbox 100 is to change the behavior of the copy to study button. When checked, the sample number area 100a becomes enterable. When unchecked, the sample number area is non-enterable. If checked, and a sample number is given, the copy to study button has the following action:
A random sample consisting of (sample number) units is selected from the unit listing area and added to the existing selection in the "units selected for study" listing area. Units in the unit listing area that already appear in the "units selected for study" listing area are ignored for the purpose of creating the sample.
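A hypothetical Python sketch of this random-add behavior (the function and parameter names are assumed):

    import random
    from typing import List

    def random_add(unit_listing: List[str],
                   selected: List[str],
                   sample_number: int) -> List[str]:
        # Units already in the "units selected for study" list are ignored
        # when the random sample is drawn from the unit listing area.
        candidates = [u for u in unit_listing if u not in selected]
        sample = random.sample(candidates, min(sample_number, len(candidates)))
        return selected + sample

    lessons = ["Math 01", "Math 02", "Math 03", "Math 04", "Math 05"]
    print(random_add(lessons, selected=["Math 02"], sample_number=2))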
The "remove video from list" icon 101 is a drag destination. The action is to remove videos from the list from which they were dragged. For example, if a video is dragged from the unit listing 97 to this icon, it is removed from the unit listing area. This is not the same action as deleting the unit.
The "units selected for study" area 102 is a scrollable area which lists units associated with the study. Units are listed by unit ID and name. Units can be added to this list from the unit list 97 by clicking and dragging.
The "clear units selected for study" button 103 has the action of clearing the contents of the "units selected for study" listing area 102 after confirmation with the user. The print unit selection information button 104 has the action of printing the units in study detail report. The cancel/save choice buttons 105 include the save button which saves the units selected for study selection and returns to the window that made the call to this window. The cancel button returns to the window that made the call without saving the changes.
Use Units From Other Study Window
The "use units from other study" window is illustrated in FIG. 16. This window is used to fill the units list 97 of the window shown in FIG. 15 with all the units that belong to another study. The window also contains a checkbox that imposes the constraint that only units from the selected study, the parent study, can be used in the current study. This window is opened when the "copy from study" button 95 of the window shown in FIG. 15 is clicked on the unit selection window. The "units from other study" window shown in FIG. 16 includes a number of fields and controls. The study list area 111 is a scrollable area which contains all the studies in the project, except for studies constrained to "Include all units in project" and the current study. The study description area 112 is in a non-enterable scrollable area of text that contains the description of the highlighted study shown in area 111. This text is from the unit selection description area on the study detail window illustrated in FIG. 9A. The button 113 labeled "Replace current selection in unit list" causes the action of displaying the units for the highlighted study in the unit list on the unit selection window. The checkbox entitled "Constrain unit selection to be a subset of the specified study" 114 imposes a constraint on the study so that only units from the selected study in area 111 can be used for the current study. Action is to constrain the contents of the units listing area so it only contains the units specified for the selected study. The button entitled "Add units to current selection in unit list" displays the units for the highlighted study in the unit list on the unit selection window.
THE UNIT MODULE
The Unit Module includes all the windows, controls, and areas that are used to define video units, open and work with sessions, and manage sessions in the interactive video analysis system. The Unit Module includes a unit list window.
Unit List Window
The unit list window is illustrated in FIG. 17. The unit list window presents all the units defined for the project. For example, this includes all the units in the database. The unit list window displays the unit ID and the unit name. Optionally, nine custom unit fields may also appear. Double-clicking on a record presents the unit detail window.
Unit Detail Window
The unit detail window is illustrated in FIG. 18. The unit detail window is the primary window that contains all data needed to define a unit, including creation of the transcript. This window is presented when adding a new unit or when editing an existing unit. The fields and controls of the unit detail window are described below:
The window title area 121 gives the name of this window which is "Unit Detail". The unit name area 122 is an enterable area for the name of the unit. This must be a unique name. Internally, all attributes for the unit are associated with an internal unit identifier, so the unit name can be changed. The unit ID area 123 is an enterable area for a code to identify the unit. The create date area 124 gives the date and time when the unit was initially created. This is a non-enterable area that is filled by the program when the unit record is created. The description area 125 is a scrollable text area for entry of text describing the unit. Nine custom unit fields are an optional feature. Each field is an enterable area for storing data up to 30 characters long. The field names are customizable.
The segment list area 126 is a scrollable area that displays all the segments for this unit. Each segment is displayed with its name (file name from the path), length (determined by reading the media on which the segment is recorded), start time (calculated as the sum of the previous segment lengths), and end time (calculated as the sum of the previous segment lengths plus the length of this segment). The sequence number is by default the order in which the segments were created. The sequence number determines the order in which the segments are to be viewed. The order of the segments can be changed by dragging. Dragging is only supported for a new record. When a segment is moved by dragging, the start and end times of all other segments are recalculated.
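The start and end times are thus running sums of segment lengths. A hypothetical Python sketch:

    from typing import List, Tuple

    def segment_times(lengths: List[int]) -> List[Tuple[int, int]]:
        # Each segment starts where the previous one ended: its start time
        # is the sum of all prior lengths, and its end time adds its own.
        times, start = [], 0
        for length in lengths:
            times.append((start, start + length))
            start += length
        return times

    # Three segments of 100, 250, and 80 time units:
    assert segment_times([100, 250, 80]) == [(0, 100), (100, 350), (350, 430)]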
The add segment button 128 has the action of presenting the open-file dialog, prompting the user to select the volume and file that contains the segment. When the user selects a file, the file name is entered as the segment name in the segment list 126. The length is also determined and written into the segment list. The first frame of the video is displayed in the video view area.
The delete segment button 129 has the action of prompting the user to confirm that the highlighted segment in the segment list 126 is to be deleted. Upon confirmation, the segment is deleted and the start and end times of all other segments are recalculated. The study membership area 130 is a scrollable area that lists all studies in which this unit is included. Units are assigned to a study on a study detail window on FIG. 9A. When such an assignment is made, the study is included in this area 130.
The transcript information area 131 is a non-enterable area which displays the size of each transcript (the number of characters). The import transcript button 132 prompts the user for which transcript to import, and then presents the standard open file dialog prompting for the file name to import. When the file is selected, the file is imported using tab-delimited fields. The size of the transcript is written to the transcript size area 135. The edit transcript button 133 opens the Video Window with the transcript text. The export transcript button 134 has the action of prompting the user for which transcript to export, then presents the standard new file dialog prompting for the file name of the export file. Navigation controls 138 operate as previously described. The print button 137 has the action of printing the unit detail report. The cancel/save unit buttons 136 include the save button which prompts the user for confirmation that the segment sequence is correct. After confirmation, the user is either returned to the window or the unit data is saved. For an existing record, the save button action is to save the unit data and return to the unit list window of FIG. 17. If the cancel button is used, any changes to segments or to the transcript are rolled back after confirmation. A video position indicator/control 139 has the same operation as the video position indicator/control of the Video Window. It indicates the relative position of the current frame in the segment.

Session Handling And Management
Coding a unit for a specific study takes place in a session. When a user goes into code mode on the video window, a session must be opened that sets the coding parameters. The progress of coding can be tracked by monitoring the status of the sessions for a study. The present invention includes various windows to open and close sessions during coding and management windows that give detailed session information. If "code" is selected on the main button bar, and if the user has no other currently opened sessions, the user is prompted to open a session for the current study on the create session window. If the user has previous sessions that are still open, the resume session window is presented and the user may open an existing session or create a new session. After a session is active, the user may move freely back and forth between view mode and code mode on the video window. While in code mode, the user may open the session info window to display information about the session.
Session management is performed from the session listing window which is accessed by clicking session on the manage button bar. Double-clicking on a session in the session listing window opens the session detail window which provides information similar to the session info window.
The session info window presents information about the current session including the name of the current study, the number of this particular pass, with the total number of passes in the study, a text description of the study, information about the unit including the unit name and unit I.D., the number of the segments that make up the unit and the total length in hours, minutes, seconds and frames for the entire unit, including all segments. Additionally, the session info window gives information about the sample that is in effect for the current pass, a pass outline which contains all the indented event types, characteristics and choice values for the current pass, a print button and a button to close the window. A session placemark saves a time code with the session so that when the session is resumed the video is automatically positioned at the placemark. This occurs when a session is ended without closing it.
When the user clicks sessions on the manage button bar, the select a study window appears. A select button chooses a selected study. The session list window is opened, listing all the sessions for the selected study. Clicking on a record listed presents the session detail window. The session detail window gives information about the session. The information includes the name of the study, the pass number for the particular session, along with the total number of passes defined for the study, the name of the video unit being coded, the unit I.D. for the video unit, the session status such as "never opened," "opened," "reopened," and "closed," the name of the user who opened the session, the length of the unit in hours, minutes, and seconds, the total elapsed time in the code mode between when the unit was opened and closed, the number of events that have been coded in the session, and the number of characteristics recorded for event instances. Sample information such as the sample method that is in effect for the pass in the session and the sample size created for the session is displayed.
THE VIDEO WINDOW
The Video Window is used to: (i) play the video belonging to a unit; (ii) display, edit, and/or synchronize transcription text belonging to the unit; (iii) create event types and structure characteristics under them (for the default study only); (iv) mark and characterize event instances; (v) retrieve previously marked event instances for editing or viewing.
An "event instance" is the marked occurrence 01 a predefined event ("event type") within video or transcription text. The video and/or text is associated with an event type and characteristic to create a specific instance.
The Video Window may be opened through one of several actions:
(1) By selecting the code button on the application tool bar or selecting code from the view menu.
(2) By selecting the transcriber button on the application tool bar or selecting transcribe from the view menu.
(3) By selecting the view button on the application tool bar or selecting view from the view menu.
(4) By clicking the show in palette button on the search window, so that the window is opened in view mode.
In each case, the window is opened to display a specified unit (including video and transcription text).
The Video Window supports three modes of operation: view mode, transcribe mode, and code mode.
Code Mode
Event instances are marked only in code mode. During the coding process, when an event instance is observed, the following steps are performed:
(1) Click the Mark In button to mark an initial In Point, then "fine tune" the video position using the "In Point" frame control to incrementally position the video to precisely zero in on the video frame where the event instance begins. The In Point time code becomes part of the instance record.
(2) Highlight a text selection and click the Mark Text button. The location of the text selection becomes part of the instance record; if the event type requires text only, the time code of the beginning of the utterance in which the highlighted text begins also becomes part of the instance record.
(3) Click the Mark Out button to mark the Out Point of the instance, then "fine tune" the video position using the "Out Point" frame control.
(4) Click to select an event type. The event type listing displays all the event types that can be coded in a particular pass; no other event types may be coded.
(5) Scroll through the characteristics and enter or select values for each one.
These steps can be done in any order. Clicking the save instance button completes the marking of an instance. After save instance is clicked, the instance can only be edited by recalling it by clicking on it in the instance listing, editing it using the frame controls, selecting a different event type or characteristic values, and clicking save instance to save the updates. When a new instance is saved, the instance is sorted into the instance listing if the event type is checked and is displayed in a different color to distinguish it from previously created instances.
Transcribe Mode
In the transcribe mode, only the Mark In button and the In Point frame control are enabled. This allows marking the In Point of each utterance. This is not the same as marking an instance. Event types, event instances, and characteristics are not displayed in the transcribe mode.
View Mode
In the view mode, the buttons that mark or change the In/Out Points and text selection are disabled. The event type listing displays all the event types defined for the current study rather than for the current session and allows event types to be
checked so instances for the event type are displayed in the instance listing. Characteristic values may be viewed for each instance, but not changed. If there is no current study, nothing appears in the event type listing.
Pre-Processing
When the Video Window is first opened, initialization depends on the mode in which it is to be opened.
Various palettes may be opened over the Video Window. These palettes may only appear in certain modes. FIG. 19 includes a table of the palettes that may be opened over the Video Window. The palettes include the sample palette, the outline palette, the search results palette, and the transcribe video loop palette.
Segment Switching
The current video segment may be changed in a number of ways: (1) by selecting the segment switching buttons on the sides of the progress bar, (2) when the video plays to the end of the current segment, and (3) when an instance that is not in the current segment is clicked in the instance listing, or a time code that is not in the current segment is clicked in any palette.
The path of the required segment is retrieved from the unit file. If the path does not exist because the segment is on removable media, the user is prompted to open the file containing the segment. If an invalid path is entered, an error is given and the user is prompted again. If cancel is clicked, the user is returned to the Video Window in the current segment.
Areas of the Video Window
The Video Window has five major areas: the title area, the video area, the mark area, the instance area, and the list area.
The title area is illustrated in FIG. 21A. The video area is illustrated in FIG. 21B and contains the video display area, play controls, relative position indicators, zoom and sound controls, and drawing tools. The mark area is illustrated in FIG. 21C and contains controls to mark instances, refine In and Out Points on the video, and save marked instances. The instance area is illustrated in FIG. 21D and contains listings of event types, characteristic labels, characteristic choices, and event instances that have already been marked. The list area contains the transcript text and controls to change the mode of operation.
With respect to the video area 141 illustrated in FIG. 21B, the video position indicator/control 142 acts like a thermometer. As the video plays, the grey area moves from left to right, filling up the thermometer. It displays the relative position of the current frame in the current segment. At the end of the segment, the thermometer is completely filled with grey. Increments on the control indicate tenths of the segment. The end of the grey area can be dragged back and forth. When released, the action is to move the current video frame to the location in the video corresponding to the relative position of the control. The video resumes the current play condition. A small amount of grey is always displayed on the thermometer, even when the current frame is the first frame of the segment. This is so that the end of the grey can be picked up using the click and drag action even when the first frame of the video is the current location.
A subtitle area 143 displays the transcription text that corresponds to the video. Two lines of the text are displayed. Button 144 is the zoom tool. The action is to zoom the selected area to fill the frame of the video display. Button 145 is the unzoom tool which restores the video display to 1x magnification. Button 146 is the volume control. Click action pops up a thermometer used to control the volume. Button 147 is the mute control. The button toggles the sound on or off. Area 148 gives the current video frame. Button 149 moves the video back five seconds. Button 150 goes to the beginning of the current video segment and resumes the current play condition. Button 151 is the pause button and button 152 is the play button. Button 153 is the subtitle control which toggles the video subtitle through three modes: 1) Display subtitles from transcript one;
2) Display subtitles from transcript two; and
3) Do not display subtitles.
Button 154 is the draw tool which enables drawing on the video display. The cursor becomes a pencil and drawing starts upon mouse down and continues as the mouse is moved until mouse up. The draw tool can only be selected when the video is paused. Button 155 is the eraser tool which enables erasure of lines created using the draw tool. Button 156 is the scissor tool which copies the currently displayed frame to the clipboard. Drawings made over the video using the draw tool are copied as well. The scissors tool can only be selected when the video is paused. Button 157 is the frame advance which advances the video by one frame. Button 158 is the open video dialogue which opens a window to display the video in a larger area. The link control 159 controls the link between the video and transcript area. When "on," the video is linked with the transcript. In other words, when the video is moved, the closest utterance is highlighted in the transcript area. When the link control button is "off," moving the video has no effect on the transcript area.
With respect to the mark area of the Video Window, reference is made to FIG. 21C and FIG. 22. The action of the controls in the mark area is dependent on the current video mode (view, code, and transcribe).
The Mark In button 161 is disabled in the view mode. In the code mode the button action is to "grab" the time code of the current video frame regardless of the play condition and display it in the In Point area 162. In the transcribe mode, the button action is to "grab" the time code of the current video frame regardless of play condition and display it in the In Point area 162 and in the time code area for the utterance in which the insertion point is positioned. Button action is to overwrite any previous contents in the In Point area and the utterance time code area with the time code of the current video frame.
The In Point area 162 is a non-enterable area which displays the time code of the frame that is the beginning of the instance. This area is updated by one of five actions: (1) clicking the Mark In button in the code and transcribe modes such that the area gets the time code for the current frame; (2) manipulating the In Point frame control in the code and transcribe modes so that the area gets the time code for the current frame; (3) clicking an instance in the instance listing in the code and view modes for an event type that requires a video-in or exhaustive segmentation coding so that the area gets the In Point of the instance; (4) highlighting an utterance in the view and transcribe modes so the area gets the time code of the utterance; and (5) clicking an outline item on the outline palette so that the area gets the In Point of the outline item.
The In Point frame control button 163 has identical action in the code and transcribe modes. Control is disabled in the view mode. Control action is to incrementally move the video forwards or backwards a few frames to "fine tune" the In Point.
The Mark Out button 164 is enabled in code mode only. The button action is exactly analogous to the Mark In button 161, except the Out Point is set and displayed in the Out Point area 165.
The Out Point area 165 is a non-enterable area which displays the time code of the frame that is the end of the instance. If there is no Out Point for the instance, the area is blank. This area is updated by one of four actions: (1) clicking the Mark Out button in the code mode so that the area gets the time code for the current frame; (2) manipulating the Out Point frame control in the code mode so the area gets the time code for the current frame; (3) clicking an instance in the instance listing in the code and view modes for an event type that requires Video Out coding so that the area gets the Out Point of the instance or becomes a blank; and (4) highlighting an utterance in the view and transcribe modes so that the area becomes blank.
The Out Point frame control button 166 is only enabled in the code mode. The control is analogous to the In Point frame control 163 except the Out Point is adjusted. The mark text button 167 is enabled only in the code mode. The button action is to register the position of the highlighted text as the instance marking. The button appearance changes to signify that text has been marked. Internally, the time code of the beginning of the utterance in which the highlighted text begins is retained, along with the position of the first and last characters of the highlighted text.
The event type listing area 170 is a scrollable area in which the action and contents depend on the mode. The area is blank in the transcribe mode. In the code mode, the scrollable area contains a row for each event type that can be coded in the current pass. Only event types that are listed here can be coded in a particular session. In code mode with the outline palette open, this area is blank. In view mode, the area contains a row for each event type defined in the study. If there is no current study, the area is blank.
The event type listing contains four columns. The first column is the checkmark that indicates that instances of this event type are to be displayed in the instance listing area. The second column is the unique event type code. The third column is the event type name. The fourth column is the event instance coding requirement. In both modes, if an event type is double-clicked the action is to place a checkmark next to it or to remove the checkmark. The checkmark indicates that event instances with this event type are to be listed in the "previously marked instances" area. In the illustration the event type "Question Asked" is checked. All the instances of questions being asked in this unit are listed in the "previously marked instances" area.
In code mode, clicking an event type has the action of refreshing the characteristic labels popup 171 to contain all the characteristics structured under the highlighted event type for the current pass. In the view mode, the action is to refresh the characteristics label popup to contain all the characteristics structured under the highlighted event type in the study.
The characteristics labels area 171 is a popup that contains labels
(names) of all the characteristics structured under the highlighted event type in area 170 for the current pass. Selecting an item from the popup has the action of refreshing the characteristic value area 174 to display choices for the selected characteristic, or making it an enterable area, if the selected characteristic has a data entry type of "text" or "numeric" . Selecting an item from the popup also has the action of refreshing the characteristic count 173 to display the sequence number of the selected characteristic.
The next/previous characteristic buttons 172 are a two-button cluster that have the action of selecting the next item in the characteristic label popup, or selecting the previous item in the popup. The characteristic count area 173 is a non-enterable text display of the sequence number of the currently displayed characteristic label and the total number of characteristic labels for the current pass. The characteristic value area 174 is either a scrollable area or an enterable area. The clear button 175 has the action of clearing the In Point and Out Point areas and resetting the Mark In, Mark Out, and mark text buttons to normal display (for example, removing any reverse video).
The save instance button 176 only has action in the code mode and is disabled in the other modes. The button name is "save instance" unless an event instance is selected in the event instance listing, in which case the button name is "save changes". The action of the button is to validate data entry. An event type must be selected. All characteristics must be given values. All the points must be marked to satisfy the event instance coding rules for the selected event type.
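By way of illustration only, the validation just described could be sketched as follows in Python; the field names and the rule encodings ("video", "text") are assumptions made for the sketch, not part of the disclosed system.

    # Hypothetical sketch of the save-instance validation described above.
    def validate_instance(event_type, characteristic_values, in_point, out_point, text_mark):
        """Return a list of errors; an empty list means the instance may be saved."""
        errors = []
        if event_type is None:
            errors.append("An event type must be selected.")
            return errors
        # All characteristics structured under the event type must be given values.
        for name, value in characteristic_values.items():
            if value in (None, ""):
                errors.append("Characteristic '%s' has no value." % name)
        # Marked points must satisfy the event instance coding rules.
        rule = event_type.get("coding_requirement")
        if rule == "video" and (in_point is None or out_point is None):
            errors.append("Video In and Out points must both be marked.")
        if rule == "text" and text_mark is None:
            errors.append("A text selection must be marked.")
        return errors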
The event type help button 177 only applies to the code and view modes. The action is to present a dialog containing the coding instruction for the highlighted event type. The show/add event type button 178 is only visible in the code mode for the default study. The action is to present a window in which the user may select which of the previously defined event types for the default study are to be included in the event type listing area, so that just those event types of immediate interest are added to the listing. A button on this window also allows the user to create a new event type for the default study using the event type detail window.
The edit event type button 179 is only visible in the code mode for the default study. The action of the button is to allow the user to edit the highlighted event type.
The remove/delete event type button 180 is only visible in the code mode for the default study. The action of the button is to prompt the user for whether the highlighted event type is to be removed from the event type listing or is to be deleted permanently with all its instances.
Instance Area
With reference to FIG. 21 D and 23 there is shown the instance area of the Video Window. The instance area provides a listing of instances that have been marked for selected event types and controls to retrieve an instance for viewing or editing, to add instances to an outline, and delete instances. This area is active only in the code and view modes. The area is disabled in code mode when the outline window is open.
The instance listing area 181 is a scrollable area that contains all the instances marked in the current session for the event types that are checked in the event type listing. Each instance is listed with a time code and event type code. The meaning of the time code depends on the event type. If the video is marked, the In Point is displayed. If only text is marked, the time code of the beginning of the utterance is displayed, and a symbol is placed after the time code to indicate that it corresponds to the video frame closest to the beginning of the utterance. Clicking an instance moves the video to the beginning of the instance and resumes the playing condition.
After selecting an instance, the following controls can be used. The delete instance button 182 is enabled in the code mode only. The action of the button is to delete the highlighted instance after confirmation with the user. The add to outline button 183 is enabled in the code and view modes only. Action is to add the instance to the current outline. The return to In Point button 184 is enabled in the code and view modes only. The action of the button is to move the video to the first frame of the highlighted event instance. The video resumes the prior play condition. The pause button 185 is enabled in the code and view modes only. The action is to pause the video at the current frame. The play to Out Point button 186 is enabled in the code and view modes only. The action of the button is to play the video starting at the current frame and stop at the Out Point for the highlighted event instance. The go to Out Point button 187 is enabled in the code and view modes only. The action of the button is to move the video to three seconds before the Out Point of the highlighted event instance, play the video to the Out Point, and stop.
TRANSCRIBE MODE
The transcribe mode has two operations: (i) transcribing the spoken words or actions on the video into text; and (ii) assigning time reference values to each of the utterances in the video.
The first operation, transcribing video content into text, is largely accomplished by watching the video and entering text into the list area. This process is aided by the Transcribe-Video Loop palette. The palette provides a control that enables the user to play a short segment of video over and over without touching any controls. The user sets the loop start point and end point. When the contents of the loop have been successfully transcribed, a 'leap' button moves the loop to the next increment of video.
The second operation, assigning time reference values to utterances, is accomplished using the same frame controls and the Mark-In control as described in the "Video Window".

LIST MANAGER
The list manager is used to display and work with the text associated with the video. Typically this text is the transcription of what is being said in the video, though the text may actually be anything - observations about the video, a translation, etc. Because it is anticipated that its most common use will be to hold transcription of speech in the video, the text is referred to as the 'transcription' or transcript. In speech, each speaker takes turns speaking; the transcription of each turn is an 'utterance'; i.e., an utterance is the transcription of one speaker's turn at speech.
Utterances are records in the database; each utterance has a time reference value (In point), two transcription text fields, and a speaker field.
The area on the screen that the list manager controls is called the 'List Area'. The List Area is shown in FIG. 24. It is the right side of the Video Window of FIG. 20. The list manager gets its name because it is not a conventional text area; it displays text from utterance records in the transcript so that the text looks like a contiguous block. Actions on the text block update the utterance records.
During transcription the video is transcribed and synchronized with the transcript. Each utterance is associated with a time reference value that synchronizes it with the video; an In point is marked that identifies where the utterance begins in the video. (Note: there is no Out point associated with an utterance; the out point is assumed to be the In point of the next consecutive utterance.) Each utterance is also associated with a speaker.
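Purely as an illustration of this record structure, a sketch in Python follows; the field names are assumptions made for the sketch, not the patented schema. Note how the Out point is derived rather than stored.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Utterance:
        in_point: str        # time reference value marking where the utterance begins
        transcript1: str     # first transcription text field
        transcript2: str     # second transcription text field
        speaker: str         # speaker field

    def implicit_out_point(utterances: List[Utterance], i: int) -> Optional[str]:
        """The Out point of utterance i is the In point of the next utterance."""
        return utterances[i + 1].in_point if i + 1 < len(utterances) else None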
Utterances in the list area are always displayed in the order as entered or specified (in case of an insertion) by the user.
Mode of Operation
The list area supports three modes of operation: View Mode, Transcribe Mode and Code Mode. The area behaves differently in each of the three modes. For instance, the action of clicking in the area to create an insertion point is the same in all three modes, but a subsequent action of typing characters would have the effect of inserting characters into the text area only in Transcribe mode; it would have no effect at all in View mode or Code mode.
View Mode
The List Area in View Mode displays the transcript text next to the video. Clicking on the text has the action of moving the video to the point closest to the utterance. Moving the video using other controls on the Video Window has the effect of highlighting the utterance closest to the video. The text cannot be changed in any manner, nor may the time reference values associated with it be changed. View mode affects the other controls on the Video Window as well: new event instances cannot be marked or edited, and characteristic values cannot be recorded or changed.
Transcribe Mode
The purpose of the Transcribe Mode is to allow text entry and editing, and to provide controls for marking the text to synchronize it with the video. The marking process is limited to marking the video In point for each utterance; event instances cannot be marked or edited, and characteristic values cannot be recorded or changed.
Code Mode
The purpose of Code Mode is to mark event instances and enter characteristic values. The coding process typically starts only after the entire Unit is transcribed and time reference values are associated with every utterance, as the time reference value is used during coding.
There are icons which can be clicked to change the mode. The list area has a header area 191 with the mode icons 195. The time column 192 displays the time reference value associated with each utterance. This is the point on the video that was marked to correspond with the beginning of the utterance (i.e., the time reference value is the In point for when this utterance is made in the video). If the utterance has not been marked, the time reference value is displayed as 00:00:00.
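A sketch of how such a time column display might be produced; the frame rate and the helper itself are assumptions made for illustration.

    def time_column_text(in_point_frames, fps=30):
        """Format a marked In point as HH:MM:SS; unmarked utterances show 00:00:00."""
        if in_point_frames is None:
            return "00:00:00"
        seconds = in_point_frames // fps
        hours, rest = divmod(seconds, 3600)
        minutes, secs = divmod(rest, 60)
        return "%02d:%02d:%02d" % (hours, minutes, secs)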
The speaker column 193 identifies the speaker. The transcript 1 column 194 displays the text of the first transcript. This area is enterable in the Transcribe Mode.
Area splitter 196 allows the user to split the transcript text area into two halves so that a second transcript is displayed. This is shown in FIG. 25. A video may span more than one media unit (disk, tape, etc.), called segments. Segment boundaries are identified in the list area as a heavy horizontal line that goes across all four columns.
Whenever an utterance is highlighted, the action is to move the video to the beginning time reference value of the utterance, or of the closest previous utterance that has a time reference value. In the transcribe mode the text is fully editable and selectable. In the view mode, key actions have the effect of highlighting an entire utterance or navigating between highlighted utterances. In the Code Mode, instances are marked: a full set of actions is supported to select text so it can be marked, but highlighted text cannot be changed.
Whenever an event instance that has text coding (as determined by the event type) is selected on the Video Window, the list area is updated to scroll to the marked utterance, and highlight the marked selection within the utterance. Whenever an event instance that does not have text coding is selected on the Video Window, the list area is updated to scroll to the closest utterance, and highlight the utterance. As the video plays or is moved, the list area is updated to scroll to the closest utterance to the current video frame and highlight the utterance.

Find/Find Again
The Video Window menubar contains commands for Find and Find Again. The effect on the list area is identical for each of these commands. The user is prompted for a text value and/or speaker name; the list manager searches for the next instance starting at the current insertion point position.
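A minimal sketch of such a search, assuming the Utterance records sketched earlier; the matching rules shown (case-insensitive substring for text, exact match for speaker) are assumptions, not disclosed behavior.

    def find_next(utterances, start_index, text=None, speaker=None):
        """Return the index of the next utterance at or after start_index that
        matches the prompted text value and/or speaker name, or None."""
        for i in range(start_index, len(utterances)):
            u = utterances[i]
            if text is not None and text.lower() not in u.transcript1.lower():
                continue
            if speaker is not None and u.speaker != speaker:
                continue
            return i
        return None

Find Again would simply repeat the call, starting one position past the previous find.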
Marking Time Reference Values
After the transcript text has been entered, each utterance is marked to identify the time reference value on the video to which it belongs. In Transcribe Mode, the Mark In button and controls are enabled to allow exact video positioning of the In point of each utterance.
When in Code Mode, the list area tracks the current insertion position and/or highlight range: the utterance ID, time reference value, and character offset are available to the mark controls so the exact insertion point position or highlight range can be recorded with the instance.
OUTLINE PRESENTATION FEATURE
The outline presentation feature allows the user to select and structure the video and transcript text from event instances. The intended use of this feature is to prepare presentations that include selected instances.
The outline palette for the current outline is opened when Show Outline is requested anywhere. If no current outline is active, the user is prompted to select one by the Select An Outline window shown in FIG. 26. It displays outlines that have been created. The author of each outline is displayed in the scrollable area. The user may select an outline, or push the plus button to create a new outline. Pushing the negative button deletes the selected outline if the user is the author.
The outline description window is displayed when an outline is created. It has two enterable areas as shown in FIG. 27: the outline name and the description. The outline palette is shown in FIG. 28.
Event instances dragged to the Outline icon on the Video Window of FIG. 20 become part of the current outline. If there is no current outline, the user is prompted to specify one, or create a new one. The current outline remains in effect until a different outline is selected.
When the event instance is dropped on the Outline icon, the Outline Item window, shown in FIG. 29, is opened to prompt the user for a description of the item. The Outline Item window displays all the headers for the current outline (in the same order as specified in the outline) so a header for the item can be specified as an optional step.
If the outline header is specified on the Outline Item window, the item is added as the last item under the header. If no outline header is specified, the item is added as the first item in the orphan area.
When an event instance is added to the outline, an Outline Item is created from the unit, time reference value, event type, and text selection of the instance. After creation, the outline item is completely independent of the instance. The outline item may be edited for In/Out point, text selection, or deleted entirely, without affecting the event instance, and vice-versa.
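The independence described above amounts to copying at creation time. A sketch follows; the dictionary keys are assumptions made for illustration, not the disclosed data layout.

    import copy

    def make_outline_item(instance):
        """Create an outline item by copying the instance's unit, time reference
        value, event type, and text selection; later edits to either side do
        not affect the other."""
        return {
            "unit": instance["unit"],
            "in_point": instance["in_point"],
            "out_point": instance.get("out_point"),
            "event_type": instance["event_type"],
            "text_selection": copy.deepcopy(instance.get("text_selection")),
        }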
If the event instance has already been used to create an outline item for the current outline, the user is warned of this and prompted for whether the action should be ignored, or whether a second outline item should be created.
After outline items have been created, they can be structured in the Outline Palette. Outline items can be structured under and moved between headers, and the order of headers can be changed. Once the outline is complete, it can be printed and the video portion can be exported to an MPEG file.
When the outline palette is active, it can be used to control the video display. Clicking an outline item moves the video to the associated time reference value. The outline item's time reference value can be edited for In and Out points. The outline item's transcript marking may also be edited. Outline items retain association with the utterances (Transcript 1 and Transcript 2) associated with the outline item (by time reference value) corresponding to the video. The user may specify whether these are to be printed with the outline.
The outline area 200 is a scrollable area that contains all outline items and outline headers. Outline items are indented under outline headers. Drag and drop action is supported in this area to allow headers and outline items to be moved freely through the outline area. Outline headers appear in bold and are numbered with whole numbers. When a header is moved the outline items move with it. A header may be clicked and dragged anywhere in the outline area. Outline items appear in plain text and are numbered with decimal numbers that begin with the header number. Outline items appear with the event code that went along with the event instance from which the item was created. Items may be clicked and dragged anywhere in the outline area - under the same header, under a different header, or to the orphan area.
If an item is clicked, the video in the Video Window is moved to the In point of the outline item, the utterance closest to the current video frame is highlighted, and the current play condition is resumed. If the outline item points to video from a unit or segment not currently mounted, the user is prompted to insert it.
In the Video Window, the In and Out points of the outline item appear in the Mark controls. The Mark controls are enabled when the Outline window is displayed, so the In and/or Out points of the outline item can be edited. This has no effect whatsoever on the instance from which the outline item was created. If an item is not associated with a header, it is displayed at the top of the outline area 200a and is called an 'orphan'.
If an outline item is highlighted, areas 201, 202, 203 and 204 of the outline palette are filled. The study area 201 displays the study from which the event instance was taken to create the highlighted outline item. The unit area 202 displays the name of the video unit associated with the highlighted outline item. The In point area 203 displays the In point of the video associated with the highlighted outline item. The duration area 204 displays the duration of the video associated with the outline item. The Play Outline button 205 plays the video starting at the In Point of the first outline item and continues playing each outline item in the order of appearance in the outline. Play stops at the Out Point of the last outline item.
Export Mode
The system supports the creation of a new MPEG file based on the instances that have been moved into an outline. That is, given marked video in and video out points, the system can create a new MPEG file which contains only the marked video content. The new MPEG file also contains the relevant additional information such as transcript text, and derivative information such as event, characteristic and instance information. When viewed with one of the generally available MPEG viewers, the exported MPEG file is viewable. However, when viewed with a LAVA MPEG viewer (made by LAVA, L.L.C.), not only is the MPEG file viewable, but all of the relevant additional and derivative information such as the transcript text, event, characteristic and instance information is viewable and accessible for random positioning, searching, subtitling and manipulation.
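The specification does not disclose a particular toolchain for building the new file. Purely to illustrate cutting marked In/Out spans and concatenating them, here is a sketch that shells out to the ffmpeg command line; ffmpeg, the file names, and the span format are all assumptions made for the sketch, not the disclosed implementation, and the embedded transcript and coding data described above are not handled here.

    import os
    import subprocess

    def export_outline(spans, source, output):
        """Cut each (in_seconds, out_seconds) span from source, in outline
        order, then concatenate the cuts into a single output file.
        Note: with stream copy ('-c copy'), cuts may snap to keyframes."""
        clips = []
        for n, (t_in, t_out) in enumerate(spans):
            clip = "clip_%d.mpg" % n
            subprocess.run(["ffmpeg", "-y", "-i", source,
                            "-ss", str(t_in), "-to", str(t_out),
                            "-c", "copy", clip], check=True)
            clips.append(clip)
        with open("concat.txt", "w") as f:
            for c in clips:
                f.write("file '%s'\n" % os.path.abspath(c))
        subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                        "-i", "concat.txt", "-c", "copy", output], check=True)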
Two types of output can be produced from an Outline: a printed Outline Report, and an MPEG file containing the video of the outline items in the order specified in the outline.
SAMPLING
Sampling is the creation of a specific subset of video that can be used for new instance hunting, or of a specific subset of event instances that can be characterized. There are five methods for creating samples.
The sample method is specified on the sample definition window and displayed on the study definition window for each coding pass. The samples are presented to the coder in the Sample Palette so they can be visited one by one. The samples are saved in the database so they can be retrieved into the Sample Palette anytime. FIG. 30 shows the sample definition window. Area 210 permits a choice of sampling method.
No Sampling
This sample method means that no samples will be created. The coder can use all the video in the search for event instances.
Fractional Event Sample
This method means the sample is to be created from a specified percentage of the total event instances that occur in the Unit that belong to the 'Specify Event' area. The default value for percentage is 100%. An event must be selected from the 'Specify Event' popup if this sample method is chosen.
Quantitative Event Sample
This method means the sample is to be created from a specified number of the event instances that occur in the Unit that belong to the 'Specify Event' area. An event must be selected from the 'Specify Event' popup if this sample method is chosen.
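Both event sample methods reduce to random selection from the unit's instances. By way of illustration only (the function names are assumptions made for the sketch):

    import random

    def fractional_event_sample(instances, percentage=100.0):
        """Randomly keep the given percentage of the event instances in a unit."""
        k = round(len(instances) * percentage / 100.0)
        return random.sample(instances, k)

    def quantitative_event_sample(instances, count):
        """Randomly keep a fixed number of the event instances in a unit."""
        return random.sample(instances, min(count, len(instances)))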
Quantitative Time Sample
This method means the sample is to be created from a specified number of video clips from the Unit with a specified duration. Two parameters are required for this option: the number of samples to be created from the Unit, and the duration in seconds of each sample.
If 'Occurring within Event' is specified, the number of clips refers to the entire video, not to each event.
If an 'Occurring within Event' event filter is specified, the random selection of video is from the set of all instances of the event type that are at least as long as the value entered; that is, if the criterion were to randomly select 12 clips of 15 seconds each using 'Teacher Questions' as the event filter, then the first processing pass would be to find all instances of 'Teacher Questions' at least 15 seconds long. The second pass would be to randomly select twelve 15-second intervals within the selection, so that every possible 15-second period within the selection has an equal probability of being selected for the sample. Additional constraints apply (a sketch of the two-pass selection follows the list):
- Sample periods may not overlap.
- Sample periods may not span from one instance to another; i.e., the sample period must be wholly contained within a single event instance.
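By way of illustration only, in Python; the weighting and rejection details are assumptions made for the sketch, not disclosed in the specification. Instances and windows are (start, end) pairs in seconds.

    import random

    def quantitative_time_sample(instances, n_clips, clip_len, max_tries=10000):
        """Pass 1: keep instances at least clip_len long. Pass 2: draw
        non-overlapping clip_len windows wholly inside single instances."""
        eligible = [(a, b) for (a, b) in instances if b - a >= clip_len]
        # Weight each instance by its span of possible start positions so every
        # candidate window is (approximately) equally likely to be drawn.
        weights = [(b - a) - clip_len + 1e-9 for (a, b) in eligible]
        chosen = []
        tries = 0
        while len(chosen) < n_clips and tries < max_tries:
            tries += 1
            a, b = random.choices(eligible, weights=weights)[0]
            start = random.uniform(a, b - clip_len)   # wholly inside one instance
            window = (start, start + clip_len)
            if all(window[1] <= s or window[0] >= e for (s, e) in chosen):
                chosen.append(window)                 # keep only non-overlapping windows
        return sorted(chosen)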
Proportional Time Sample
This method means the sample is to be created from randomly selected video clips of a given duration from the Unit. The number of samples is given in terms of 'samples per minute of video'. Three parameters are required for this option: the number of samples desired, the interval of time over which the samples are to be chosen, and the duration in seconds of each sample.
The Event Filter
The event filter area 219 allows restriction of the selection of event instances or time samples to periods within another event type.
- Time samples restrict the creation of new instances to the sample periods, according to the Event Coding Constraint specified in the sample definition.
- Instance samples allow the retrieval of selected instances, typically for characterization.
A time sample is created by specifying one of the time sample methods (Quantitative Time Sample or Proportional Time Sample). An instance sample is created by specifying one of the event sample methods (Fractional Event Sample or Quantitative Event Sample).
If an 'Occurring within Event' event filter is in effect, the instance listing on the Video Window limits the display of existing instances to only instances with an In point within the time period of the highlighted sample in the Sample Palette. For example, if five event instances are listed in the Sample Palette and one is highlighted, only event instances with an In point within the time period (i.e., from Video In to Video Out) of the highlighted instance would be listed in the event listing (subject to the other controls that specify what event types are to be displayed in the instance listing).
The sample palette is shown in FIG. 31. Checkmarks 223 next to the sample list area 224 may be set. The sample list area contains an entry for each sample with time reference values for the In point and Out point of the sample.
FIG. 32 is the sample information window, which is opened by choosing the Show Sample Info button 222 on the sample palette. The event filter area is a non-enterable scrollable area that contains text describing the event filter in effect for the current pass. The illustration shows the format for how the filter is to be described - it follows the same conventions as the 'Within' area in the Sample Definition Window.
ANALYSIS MODULE
The analysis module is used to gather statistics about event instances across video units. The module provides functions for defining variables, searching for and retrieving information about event instances, displaying the results, and exporting the data. Typically the results of an analysis will be exported for further analysis in a statistical program.
From the main button bar the user may choose the analysis module. A window requests the user to designate a unit analysis or an instance analysis.
The analysis module allows the user to produce statistical information about event instances on either a Unit by Unit basis or an Instance by Instance basis. The results can be displayed or exported for further analysis.
There are two 'flavors' of analysis - Unit analysis and Event analysis. Unit analysis aggregates information about the instances found in a unit and returns statistics such as count, mean, and standard deviation about the event instances found in the unit. Event Instance analysis returns characteristic values directly for each instance found in the units included in the analysis.
In unit analysis the user specifies the event variables. In instance analysis the user specifies characteristics. FIG. 33 shows the unit analysis window. Area 232 is the analysis description area. Area 236 is the variable definition area. There are four columns: the sequence number, the variable description, the short variable name, and the statistic that will be calculated for the variable, such as count, mean, or SD (standard deviation). Variables may be dragged to change order, and may be added or deleted. The execute analysis button 242 executes the analysis.
The analysis results area 243 has a column for each variable defined in variable listing area 237 and a row for each unit in the analysis. A unit variable may be added and defined. The unit value will be returned for each unit in the analysis. An event variable may be added and defined. A calculated value will be returned for each unit in the analysis. The calculated variable is a statistic about instances matching a description. FIG. 34 shows the define unit variable window and FIG. 35 shows the define event variable window. The event criteria area 255 specifies event instances to be found for analysis. Event instances are found for the event type in area 254 that occur within other instances and/or have specific characteristic values. Area 256 sets additional criteria. The event variable is calculated using the attribute designated in area 257. Area 258 indicates the calculation to perform (mean, count instances, total, standard deviation, total number, sum, minimum, maximum, range, or count before/after for exhaustive segmentation).
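For illustration, the per-unit statistics could be computed with Python's statistics module; the value extraction for the designated attribute is assumed to have happened already, and the dictionary keys simply mirror the calculations named above.

    import statistics

    def unit_statistics(values):
        """Aggregate a unit's matching-instance values for one event variable."""
        return {
            "count": len(values),
            "mean": statistics.mean(values) if values else 0.0,
            "SD": statistics.stdev(values) if len(values) > 1 else 0.0,
            "total": sum(values),
            "minimum": min(values) if values else None,
            "maximum": max(values) if values else None,
        }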
FIG. 36 illustrates the instance analysis window. Area 262 describes the analysis. Area 264 specifies the event type and is analogous to the define event variable window of FIG. 35 for unit analysis. Area 265 is the variable listing area. It has four columns. The first three are the same as for unit analysis. The fourth column is 'origin'. The origin is:
- 'Unit' for unit variables
- 'Inst.' for instance properties
- 'Char' for characteristic values
Variables may be added and deleted. There is a button 268 to execute analysis. Area 269 gives the analysis results with a column for each variable in variable listing area 265 and a row for each event instance in the analysis. FIG. 37 is the define analysis variable window.
SEARCH MODULE
The search module is used to perform ad-hoc searches for text or event instances, display the results, and allow the results to be used to control the Video Window.
The Search Module allows the user to search for text or event instances across multiple video units. The results can be displayed in a palette over the Video Window so each 'find' can be viewed.
The Search Window is designed to allow multiple iterative searches. Each search can begin with the results of the previous search: the new search results can be added to or subtracted from the previous search results.
There are two types of searches: searches for text strings within the transcript text ('Text Search'), and searches for event instances that match a given event type and other criteria ('Instance Search'). Each search has its own window, but most of the controls in each window are identical.
The search module is accessed from the main button bar for a text search or an instance search. FIG. 38 is a search window with features common to text and instance searches. Area 271 indicates if it is a text or instance search. Area 272 shows the relationship to a previous search. Area 277 designates units to search. Area 281 specifies what is being searched for: the event instance or word or phrase. Multiple criteria may be set to identify the characteristic or position. Button 282 executes the search. Area 283 lists the results. Button 284 will add the result to an outline. Area 285 gives the instance count.
If the search-within-a-study button is selected on the search window, a unit selection window permits the user to select individual units within the study to limit the search.
When the Show In Palette button is pushed, a results palette permits the search results to be examined; a checkmark may be set for each result.
For event searching the results are event instances. FIG. 39 shows the event instance search window. A search is done for an event type occurring within an event type where a particular characteristic has a valid characteristic value. The operator area 290 may be =, <, >, ≤, ≥, ≠, contains, or includes.
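A sketch of evaluating one such criterion; the ASCII spellings of the comparison operators and the distinction drawn between 'contains' and 'includes' are assumptions of the sketch, not disclosed semantics.

    def apply_operator(value, op, target):
        """Evaluate a single characteristic criterion from the search window."""
        if op == "=":
            return value == target
        if op == "<":
            return value < target
        if op == ">":
            return value > target
        if op == "<=":
            return value <= target
        if op == ">=":
            return value >= target
        if op == "!=":
            return value != target
        if op == "contains":
            return str(target) in str(value)   # substring match
        if op == "includes":
            return target in value             # membership in a multi-valued field
        raise ValueError("unknown operator: %s" % op)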
Marking Results as Instances
After instances have been found, they may be marked as instances of a special class of event type (called a 'Saved Search' event type). This provides several capabilities:
- The user can quickly retrieve the instances in a future search by specifying the Saved Search event type instead of complex criteria;
- Characteristic values can be applied to the event instances, and a later pass can be created to record other characteristics.
FIG. 40 is the text search window. The text search can search the text of multiple units of video. It finds all instances of a word or phrase. The search term is input in area 291. The speaker is input in area 292. Area 293 indicates which transcripts are searched. Area 294 permits searching text within an event type with a characteristic and a selected characteristic choice.

REPORTS
Study Module Reports
The study listing report lists studies in the current selection, sorted in the current sort order. The study detail report details one study, giving all details about it. The event detail report details one event type belonging to a study, giving all details about it. The characteristic detail report details one characteristic belonging to the study, giving all details about it. The units in study detail report lists all the units that have been selected for a single study.
Unit and Session Reports
The unit listing report lists all units in the current selection, sorted in the current sort order. The unit detail report gives all details about a unit. The session listing report prints the contents of the current session list window. The session detail report prints the contents of the current session detail window.
User Reports
The user listing report lists all users in the current selection, sorted in the current sort order. The user detail report details one user. The system settings report prints all the system settings.
Outline Report
The outline report is printed from the outline palette.
Search Report
The search report gives results of an event instance search or a text search. The search criteria report gives the search criteria.

Analysis Reports
The analysis results report prints the data created for the analysis that is displayed. The analysis variable definition report prints the description of all the variables defined in the analysis.
Sample Reports
The sample detail report describes the sample and lists the time reference values in the sample.
Transcript Report
The transcript report details the contents of the list manager.
The above description is included to illustrate the operation of the preferred embodiments of the present invention and is not meant to limit the scope of the invention. The scope of the invention is to be limited only by the following claims. From the above discussion, many variations will be apparent to one skilled in the art that would yet be encompassed by the spirit and scope of the invention.

Claims

We claim:
1. A digital video system comprising: coding and control means, adapted to receive digital reference video information, for coding said digital reference video information to generate coded data; and coded data storing means for storing said coded data from said coding and control means.
2. The digital video system of Claim 1, wherein said coding and control means include derivative data coding means and a controller in a control loop.
3. The digital video system of Claim 2, wherein said control loop includes the user.
4. The digital video system of Claim 2, wherein said derivative data coding means includes means for creating indices of coded data derived from said digital reference video information.
5. A digital video system comprising: digital storage means for storing digital reference video information; coding and control means for coding said digital reference video information to generate coded data; and coded data storing means for storing said coded data from said coding and control means.
6. The digital video system of Claim 5, further comprising output means.
7. The digital video system of Claim 5, wherein said digital reference video information is encoded and compressed.
8. The digital video system of Claim 7, further comprising means for decoding and decompressing said encoded and compressed digital reference video information.
9. The digital video system of Claim 6, wherein said output means further comprises display means.
10. The digital video system of Claim 5, wherein said digital storage means receives digital reference video information from more than one video source.
11. The digital video system of Claim 5, wherein said coding and control means include derivative data coding means and a controller in a control loop.
12. The digital video system of Claim 11, wherein said control loop includes the user.
13. The digital video system of Claim 11, wherein said derivative data coding means includes means for creating indices of coded data derived from said digital reference video information.
14. The digital video system of Claim 10, wherein said coding and control means include derivative data coding means and a controller in a control loop, and said derivative data coding means includes means for linking said digital reference video information from more than one video source together.
15. A digital video system comprising: digital storage means for storing digital reference video information and digital reference audio information; coding and control means for coding said digital reference video and audio information to generate coded data; and coded data storing means for storing said coded data from said coding and control means.
16. The digital video system of Claim 15, further comprising output means.
17. The digital video system of Claim 15, wherein said digital reference video and audio information is encoded and compressed.
18. The digital video system of Claim 17, further comprising means for decoding and decompressing said encoded and compressed digital reference video and audio information.
19. The digital video system of Claim 16, wherein said output means further comprises display means.
20. The digital video system of Claim 15, wherein said digital storage means receives digital reference video information from more than one video source.
21. The digital video system of Claim 15, wherein said digital storage means receives digital reference audio information from more than one audio source.
22. The digital video system of Claim 15, wherein said coding and control means include derivative data coding means and a controller in a control loop.
23. The digital video system of Claim 22, wherein said control loop includes the user.
24. The digital video system of Claim 22, wherein said derivative data coding means includes means for creating indices of coded data derived from at least one of said digital reference video and audio information.
25. The digital video system of Claim 22, wherein said derivative data coding means includes means for linking said digital reference video and audio information.
26. The digital video system of Claim 20, wherein said coding and control means include derivative data coding means and a controller in a control loop, and said derivative data coding means includes means for linking said digital reference video information from more than one video source together.
27. The digital video system of Claim 21, wherein said coding and control means includes derivative data coding means and a controller in a control loop, and said derivative data coding means includes means for linking said digital reference audio information from more than one audio source together.
28. A digital video system comprising: digital storage means for storing digital reference video information; coding and control means, having an input means for receiving digital additional information, for coding said digital reference video information and said digital additional information to generate coded data; and coded data storing means for storing said coded data from said coding and control means.
29. The digital video system of Claim 28, further comprising a digital encoder for receiving analog additional information from an analog source and outputting digitally encoded analog additional information to said coding and control means.
30. The digital video system of Claim 28, wherein said coding and control means receives digital additional information from more than one source.
31. The digital video system of Claim 28, further comprising output means.
32. The digital video system of Claim 28, wherein said digital reference video information is encoded and compressed.
33. The digital video system of Claim 32, further comprising means for decoding and decompressing said encoded and compressed digital reference video information.
34. The digital video system of Claim 31, wherein said output means further comprises display means.
35. The digital video system of Claim 28, wherein said coding and control means include derivative data coding means and a controller in a first control loop and correlation and synch means in a second control loop with said controller.
36. The digital video system of Claim 35, wherein at least one of said control loops includes the user.
37. The digital video system of Claim 35, wherein said derivative data coding means includes means for creating indices of coded data derived from said digital reference video information.
38. The digital video system of Claim 35, wherein said correlation and synch means links said digital reference video information and said digital additional information together.
39. The digital video system of Claim 29, wherein said coding and control means include derivative data coding means and a controller in a first control loop and correlation and synch means in a second control loop with said controller.
40. The digital video system of Claim 39, wherein said correlation and synch means links said digital reference video information and said digitally encoded analog additional information together.
41. A digital video system comprising: digital storage means for storing digital reference video information and digital reference audio information; a digital encoder for receiving analog additional information from an analog source and outputting digitally encoded analog additional information to a coding and control means; said coding and control means, having an input means for receiving digital additional information and digitally encoded analog additional information, for coding said reference video and audio information and said digital additional information and digitally encoded analog additional information to generate coded data; and coded data storing means for storing said coded data from said coding and control means.
42. The digital video system of Claim 41, further comprising output means.
43. The digital video system of Claim 41, wherein said digital reference video and audio information is encoded and compressed.
44. The digital video system of Claim 43, further comprising means for decoding and decompressing said encoded and compressed digital reference video and audio information.
45. The digital video system of Claim 42, wherein said output means further comprises display means.
46. The digital video system of Claim 41, wherein said digital storage means receives digital reference video information from more than one video source.
47. The digital video system of Claim 41, wherein said digital storage means receives digital reference audio information from more than one audio source.
48. The digital video system of Claim 41, wherein said input means of said coding and control means receives digital additional information from a digital source.
49. The digital video system of Claim 41, wherein said coding and control means include derivative data coding means and a controller in a first control loop and correlation and synch means in a second control loop with said controller.
50. The digital video system of Claim 49, wherein at least one of said control loops includes the user.
51. The digital video system of Claim 49, wherein said derivative data coding means includes means for creating indices of coded data derived from at least one of said digital reference video and audio information and means for linking said digital reference video and audio information together.
52. The digital video system of Claim 49, wherein said correlation and synch means links said digital reference video and audio information and said digital additional information and said digitally encoded analog additional information together.
53. The digital video system of Claim 41, wherein said digital storage means receives digital additional information from more than one source.
54. The digital video system of Claim 41, wherein said digital storage means receives digitally encoded analog additional information from more than one source.
PCT/US1997/012061 1996-07-12 1997-07-11 Digital video system having a data base of coded data for digital audio and video information WO1998002827A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
AU37244/97A AU3724497A (en) 1996-07-12 1997-07-11 Digital video system having a data base of coded data for digital audio and video information
MXPA99000549A MXPA99000549A (en) 1996-07-12 1997-07-11 Digital video system having a data base of coded data for digital audio and video information.
EP97934108A EP1027660A1 (en) 1996-07-12 1997-07-11 Digital video system having a data base of coded data for digital audio and video information
JP10506161A JP2001502858A (en) 1996-07-12 1997-07-11 Digital image system having a database of digital audio and image information coded data.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US67856396A 1996-07-12 1996-07-12
US08/678,563 1996-07-12

Publications (1)

Publication Number Publication Date
WO1998002827A1 true WO1998002827A1 (en) 1998-01-22

Family

ID=24723323

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/012061 WO1998002827A1 (en) 1996-07-12 1997-07-11 Digital video system having a data base of coded data for digital audio and video information

Country Status (6)

Country Link
EP (1) EP1027660A1 (en)
JP (1) JP2001502858A (en)
AU (1) AU3724497A (en)
CA (1) CA2260077A1 (en)
MX (1) MXPA99000549A (en)
WO (1) WO1998002827A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101426978B1 (en) * 2007-01-31 2014-08-07 톰슨 라이센싱 Method and apparatus for automatically categorizing potential shot and scene detection information

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5625833A (en) * 1988-05-27 1997-04-29 Wang Laboratories, Inc. Document annotation & manipulation in a data processing system
US5524193A (en) * 1991-10-15 1996-06-04 And Communications Interactive multimedia annotation method and apparatus
US5600775A (en) * 1994-08-26 1997-02-04 Emotion, Inc. Method and apparatus for annotating full motion video and other indexed data structures
US5559949A (en) * 1995-03-20 1996-09-24 International Business Machine Corporation Computer program product and program storage device for linking and presenting movies with their underlying source information
US5596705A (en) * 1995-03-20 1997-01-21 International Business Machines Corporation System and method for linking and presenting movies with their underlying source information

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6573907B1 (en) 1997-07-03 2003-06-03 Obvious Technology Network distribution and management of interactive video and multi-media containers
USRE45594E1 (en) 1997-07-03 2015-06-30 Sony Corporation Network distribution and management of interactive video and multi-media containers
USRE42728E1 (en) 1997-07-03 2011-09-20 Sony Corporation Network distribution and management of interactive video and multi-media containers
JP2000262479A (en) * 1999-03-17 2000-09-26 Hitachi Ltd Health examination method, executing device therefor, and medium with processing program recorded thereon
WO2001026377A1 (en) * 1999-10-04 2001-04-12 Obvious Technology, Inc. Network distribution and management of interactive video and multi-media containers
US6771657B1 (en) 1999-12-09 2004-08-03 General Instrument Corporation Non real-time delivery of MPEG-2 programs via an MPEG-2 transport stream
WO2001047281A3 (en) * 1999-12-09 2002-03-07 Gen Instrument Corp Non real-time delivery of mpeg-2 programs via an mpeg-2 transport stream
WO2001047281A2 (en) * 1999-12-09 2001-06-28 General Instrument Corporation Non real-time delivery of mpeg-2 programs via an mpeg-2 transport stream
WO2002041634A3 (en) * 2000-11-14 2003-11-20 Koninkl Philips Electronics Nv Summarization and/or indexing of programs
WO2002041634A2 (en) * 2000-11-14 2002-05-23 Koninklijke Philips Electronics N.V. Summarization and/or indexing of programs
WO2002094321A1 (en) * 2001-05-23 2002-11-28 Tanabe Seiyaku Co., Ltd. Compositions for promoting healing of bone fracture
EP1262881A1 (en) * 2001-05-31 2002-12-04 Project Automation S.p.A. Method for the management of data originating from procedural statements
US7756393B2 (en) 2001-10-23 2010-07-13 Thomson Licensing Frame advance and slide show trick modes
CN115471780A (en) * 2022-11-11 2022-12-13 荣耀终端有限公司 Method and device for testing sound-picture time delay

Also Published As

Publication number Publication date
EP1027660A1 (en) 2000-08-16
CA2260077A1 (en) 1998-01-22
JP2001502858A (en) 2001-02-27
MXPA99000549A (en) 2003-09-11
AU3724497A (en) 1998-02-09

Similar Documents

Publication Publication Date Title
US7739255B2 (en) System for and method of visual representation and review of media files
JP3185505B2 (en) Meeting record creation support device
US6332147B1 (en) Computer controlled display system using a graphical replay device to control playback of temporal data representing collaborative activities
US6789109B2 (en) Collaborative computer-based production system including annotation, versioning and remote interaction
US6938029B1 (en) System and method for indexing recordings of observed and assessed phenomena using pre-defined measurement items
US9348829B2 (en) Media management system and process
US5717869A (en) Computer controlled display system using a timeline to control playback of temporal data representing collaborative activities
US7506262B2 (en) User interface for creating viewing and temporally positioning annotations for media content
EP0774719B1 (en) A multimedia based reporting system with recording and playback of dynamic annotation
US6366296B1 (en) Media browser using multimodal analysis
US6571054B1 (en) Method for creating and utilizing electronic image book and recording medium having recorded therein a program for implementing the method
US20050160113A1 (en) Time-based media navigation system
US20150378544A1 (en) Automated Content Detection, Analysis, Visual Synthesis and Repurposing
US20050080789A1 (en) Multimedia information collection control apparatus and method
JP3574606B2 (en) Hierarchical video management method, hierarchical management device, and recording medium recording hierarchical management program
WO2010073695A1 (en) Edited information provision device, edited information provision method, program, and storage medium
EP1027660A1 (en) Digital video system having a data base of coded data for digital audio and video information
US20040056881A1 (en) Image retrieval system
US9817829B2 (en) Systems and methods for prioritizing textual metadata
JPH06208780A (en) Image material management system
US20070240058A1 (en) Method and apparatus for displaying multiple frames on a display screen
WO2006030995A9 (en) Index-based authoring and editing system for video contents
JP2565048B2 (en) Scenario presentation device
JPH07334523A (en) Information processor
Benedetti et al. A structured video browsing tool

Legal Events

Date Code Title Description
AK Designated states; Kind code of ref document: A1; Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN AM AZ BY KG KZ MD RU TJ TM
AL Designated countries for regional patents; Kind code of ref document: A1; Designated state(s): GH KE LS MW SD SZ UG ZW AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase; Ref document number: 2260077; Country of ref document: CA; Kind code of ref document: A
ENP Entry into the national phase; Ref document number: 1998 506161; Country of ref document: JP; Kind code of ref document: A
WWE Wipo information: entry into national phase; Ref document number: PA/A/1999/000549; Country of ref document: MX
WWE Wipo information: entry into national phase; Ref document number: 1997934108; Country of ref document: EP
REG Reference to national code; Ref country code: DE; Ref legal event code: 8642
WWP Wipo information: published in national office; Ref document number: 1997934108; Country of ref document: EP
WWW Wipo information: withdrawn in national office; Ref document number: 1997934108; Country of ref document: EP