WO2010118528A1 - Visual structure for creating multimedia works - Google Patents
Visual structure for creating multimedia works
- Publication number
- WO2010118528A1 (PCT/CA2010/000586)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- medium
- timeline
- outline
- media work
- visual structure
- Prior art date
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
Definitions
- the present invention relates to the field of software tools for use in creating and manipulating multimedia works, and more particularly to tools that allow an author to structure a narrative or informational discourse.
- the single or completed multi-media work can be a movie, a website, an interactive multi-media experience, a presentation, etc.
- the hierarchical structure provides a navigation and selection tool so that users can identify, select, and modify segments of the multi-media work.
- the tool can also be used as a template to represent a conventional structure, to be fleshed-out by the author. Outline and timeline aspects of the multi-media work are provided together in a single visual structure.
- a system for creating a multi-media work comprising: an output device; an input device for receiving a user input; and a processing device for executing instructions relating at least to a first medium and a second medium, wherein the instructions, when executed by the processing device, implement the processing device to: render the multi-media work from the at least first medium and second medium, as a function of the user input; and generate a visual structure for output on the output device by combining an outline of the multi-media work with a timeline of the multi-media work, the outline and the timeline each for extending across distinct, respective axes of the output device, the outline combined with the timeline jointly organizing the first and the second medium together in accordance with respective, distinct organizational parameters.
- a method for creating a multi-media work comprising: receiving a user input relating to at least a first medium and a second medium; rendering the multi-media work from at least the first medium and the second medium, based on the user input; generating a visual structure for output on a display device, by combining an outline of the multi-media work with a timeline of the multi-media work, the outline and the timeline each for extending across distinct, respective axes of the display device,
- the outline combined with the timeline jointly organizing the first and the second medium together in accordance with respective, distinct organizational parameters; and outputting the visual structure for display on the display device.
- a computer-readable medium storing instructions for execution in a processing device, to implement the processing device to: receive a user input relating to at least a first medium and a second medium; render the multi-media work from at least the first medium and the second medium, based on the user input; generate a visual structure for output on a display device, by combining an outline of the multi-media work with a timeline of the multi-media work, the outline and the timeline each for extending across distinct, respective axes of the display device, the outline combined with the timeline jointly organizing the first and the second medium together in accordance with respective, distinct organizational parameters; and output the visual structure for display on the display device.
- a graphical user interface for display on a display device, the graphical user interface comprising: a first display area for displaying a multi-media work on the display device, the multimedia work being rendered from at least a first and second medium; a second display area for displaying a timeline of the multi-media work across a first axis of the display device, to represent at least one of: a chronological organization and a spatial organization of elements forming part of the multi-media work; and wherein the second display area is further for displaying an outline of the multi-media work, the outline providing a hierarchical organization of the elements organized in accordance with the timeline.
- the term "timeline" should be understood to define either a chronological or a spatial organization, for example one defining a schedule of events to be followed.
- "Timeline" therefore includes both a conventional timeline, where time is measured and shown in equal increments, and a visual structure timeline, which is a chronological ordering of the scenes but is not linearly related to time, as the length of time of the scenes is not shown.
- Fig. 1a is a schematic illustrating a generic three level visual structure, in accordance with an embodiment
- Fig. 1b is a schematic illustrating a three level visual structure with scenes, shots, and shot details, in accordance with an embodiment
- FIG. 2 is a screen shot of a graphical user interface with a visual structure, in accordance with an embodiment
- Fig. 3 is a screen shot of the graphical user interface of Fig. 2, which shows details of an element in the visual structure in accordance with a timeline view mode;
- Fig. 4 is a screen shot of the graphical user interface of Fig. 2, which shows details of an element in the visual structure in accordance with a script view mode;
- FIG. 5 is a flow chart of a method for creating a multi-media work in accordance with an embodiment
- FIG. 6 is a schematic illustration of a system for creating a multi-media work in accordance with an embodiment.
- a software tool that provides a hierarchical, multilevel representation of a multi-media work, through an interactive visual structure which makes it possible to visualize the organizational complexity of the multimedia elements of the work, and to then modify these elements via an interaction therewith.
- the elements can either be visually represented in accordance with a detailed (low level) representation or a broad (high level) representation.
- a low level representation refers to a detailed view of the work
- a high level representation refers to a broad overview representation of the work where the overall discursive structure is apparent.
- the tool can visually represent a hierarchy of grouped elements; each level in the hierarchy corresponding to the different levels of the total structure.
- the demarcations between levels are not fixed or limited in number, since there are many ways in which a discursive work can be structured.
- the highest level represents the large scale structure of the discursive work, for example acts in a play.
- the middle level represents a smaller scale structure embedded within the higher level, such as scenes for each act.
- the lowest level represents the most detailed structure, also embedded within the level directly above it, such as shots in a scene.
- the different elements within each level are positioned in time in accordance with a timeline to represent their position chronologically or spatially in the multi-media work.
- each level in the hierarchy is itself structured with discrete units of the discursive structure defined at that level. Take for example the level containing the set of scenes. This level may contain multiple scenes which can be ordered and grouped in arbitrary ways within that level.
- the unit structure on one level is related in some way to the unit structure in adjacent levels. For example, multiple units, or group of units, in a given level can be grouped together within a single unit in the next higher level.
- the lowest level contains the largest amount of detailed information on the subject matter of the project, but yet it encompasses the smallest amount of scope. Subsequent higher levels group sections from lower levels in a nested fashion, thus hiding some detail but gaining scope as the levels rise.
- FIG. 1a An example of such a visual three level structure 20 is shown in Fig. 1a.
- "Structure level 3" 22 includes one unit 24 under “Section 3.1".
- "Structure level 2" 26 is a breakdown of "Structure level 3" 22 into two separate units 28 and 32, under “Section 2.1” and “Section 2.2" respectively.
- each unit of "Structure Level 2" 26 is further broken down into two more units to form units 36, 38, 40 and 42, under "Section 1.1", "Section 1.2", "Section 1.3" and "Section 1.4", respectively.
- each of the three hierarchical levels 22, 26, and 34 is displayed as separate, stacked, horizontal bands 44, while the discrete units (also referred to as elements) within each level are organized within the given level in accordance with a timeline 46 extending horizontally along those bands 44.
- the horizontal bands 44 illustrating the three hierarchical levels 22, 26, and 34 together provide an outline 48 of the multi-media work which extends vertically.
- the units within one level might represent the different scenes of the movie which are displayed along the timeline 46, in this case horizontally on an output device.
- the units in the next lower level are also displayed along the timeline 46, in this case horizontally, in accordance with their occurrence in time.
- the shot details provided in the third level are also ordered sequentially along the timeline 46 to match their position in time within the multi-media work.
- all the elements displayed in accordance with the timeline 46 are themselves organized hierarchically along the outline 48.
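The combined outline-and-timeline organization described above can be sketched as a simple data structure. This is an illustrative sketch only, not the patent's implementation; the names `Unit` and `VisualStructure` are assumptions, and the levels and section labels reproduce Fig. 1a.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-level structure of Figs. 1a/1b:
# each hierarchical level is a horizontal band (outline axis), and the
# units within a band are kept in timeline order (timeline axis).

@dataclass
class Unit:
    label: str                                    # e.g. "Section 2.1"
    children: list = field(default_factory=list)  # nested units on the level below

@dataclass
class VisualStructure:
    root: Unit                                    # single unit on the highest level

    def bands(self):
        """Return the levels as stacked bands, each band listing its
        units in chronological order along the timeline."""
        result, current = [], [self.root]
        while current:
            result.append([u.label for u in current])
            current = [c for u in current for c in u.children]
        return result

# Reproduce Fig. 1a: Section 3.1 -> Sections 2.1/2.2 -> Sections 1.1..1.4
s11, s12, s13, s14 = (Unit(f"Section 1.{i}") for i in range(1, 5))
s21 = Unit("Section 2.1", [s11, s12])
s22 = Unit("Section 2.2", [s13, s14])
work = VisualStructure(Unit("Section 3.1", [s21, s22]))

for band in work.bands():
    print(band)
```

Each printed list corresponds to one horizontal band 44 of Fig. 1b, with nesting made explicit rather than drawn.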
- Fig. 2 illustrates a graphical user interface (GUI) 50 for display on an output device to a user, in accordance with one embodiment.
- the multimedia work is an animation.
- a first display area 52 here an upper portion of the screen is for displaying a rendering of the animation, with all mediums being combined together, such as three-dimensional (3D) graphics and audio in this case.
- a second display area 54 in this case a lower portion of the screen, illustrates a visual structure 56 which comprises a combined outline 58 and timeline 60.
- the outline 58 is shown across the vertical axis 62 of the screen and represents a hierarchical organization of the work via multiple levels (here three). In this case, the work is meant to be based on a narrative.
- the outline 58 illustrates the narrative as being formed by three separate acts 66 at a highest "Act level" 70; each one of these acts 66 further being formed by a given number of grouped scenes 67 (at a mid "scene level" 72), and each one of these grouped scenes 67 further including a certain number of scenes 68 (at a lowest "scene level" 74).
- the timeline 60 is shown across the horizontal axis 64 of the screen, whereby the elements in the multiple levels of the outline 58, here the acts 66, the grouped scenes 67, and the scenes 68, are each placed in their respective chronological order along the horizontal axis 64, while remaining within their respective levels (70, 72 and 74).
- each level 70, 72, 74 and each element are each visually represented by an image thumbnail, a descriptive label, or any other similar icon.
- a flag or an information bubble can be associated with an element in order to qualify it with additional information. The colour and/or length of a flag is used to convey some information about a scene (its relative importance, for example), while the bubble is used to add extra information that would not otherwise fit on the screen. The bubble may pop up when the user passes his mouse over the scene thumbnail.
- the user is also able to add or modify such descriptive labels (either visual or semantic) in an attempt to characterize the contents within that element.
- Fig. 2 also shows a dramatic curve visual indicator (not numbered) such as Freytag's Pyramid and the like.
- the dramatic curve indicator is the curve which is shown to rise to a climax near the end of Act 2 and to fall in Act 3.
- the GUI 50 provides a navigation and selection tool allowing users to interact with the visual structure 56 to identify, select, or modify any element displayed and forming part of the work.
- the navigation tool allows the user to move between elements at a given level, and up or down through the various levels 70, 72, 74.
- one or more of the levels 70, 72, 74 are kept hidden by the GUI 50 until they are selected by the user, upon which they "explode" into view to display details such as any sub-levels (also referred to as nested levels). This allows the user to keep the perspective to a limited view and easily manage the amount of detail displayed.
- such a navigation tool allows entire units to be selected and displaced in the timeline, thereby rearranging the story.
- the "Lost in the City” label represents a grouping of scenes, which comprises scenes 9 through 13.
- Scene 11 is labeled "The realization" as shown in the information bubble and may be moved in the story.
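Displacing a unit along the timeline to rearrange the story amounts to reordering the units within their level. The sketch below is illustrative, not the patent's code; `move_unit` and the index arguments are assumed names, and the scene numbers follow the "Lost in the City" example (scenes 9 through 13, with scene 11 being moved).

```python
# Hypothetical sketch of displacing a unit along the timeline, as when
# scene 11 ("The realization") is dragged to a new position within its
# grouping. The list order *is* the timeline order.

def move_unit(units, from_idx, to_idx):
    """Remove the unit at from_idx and reinsert it at to_idx,
    shifting the other units while preserving their relative order."""
    units = list(units)          # work on a copy; caller's list is untouched
    unit = units.pop(from_idx)
    units.insert(to_idx, unit)
    return units

grouping = [f"Scene {n}" for n in range(9, 14)]          # scenes 9..13
rearranged = move_unit(grouping, from_idx=2, to_idx=0)   # move Scene 11 first
print(rearranged)  # ['Scene 11', 'Scene 9', 'Scene 10', 'Scene 12', 'Scene 13']
```

In the GUI this reordering would also trigger a re-render of the work (first display area 52), so the narrative change is immediately visible.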
- once an element is selected via interaction with the GUI 50, the user is able to view, access and modify the contents of that selected element. For example, if a scene is selected, then the contents of that scene (i.e. the elements of that scene at a lower level, also referred to as sub-elements) are displayed for being accessed and modified.
- the content of a selected element is viewed in accordance with various possible viewing modes, such as a timeline view mode, a script view mode, or any other format that allows the user to view and modify the contents of the selected element.
- Fig. 3 illustrates the contents of a given scene in accordance with a timeline view mode.
- the contents are text (above line D), voice (below line D), animation triggers (line A), camera control (line C), sound (line S) and music (line M).
- Such elements forming part of that scene are aligned one on top of the other and positioned in time along the horizontal axis of the screen (timeline).
- icons 75 are placed along the timeline to indicate start positions for a given content.
- an animation action of walking is represented as a walking man, and smiling as a smiley face.
- the 'walking man' is meant to represent a control icon for a plurality of walking/running animations with many degrees of control over the style and manner of the action; similarly the 'smiley face' icons are meant to represent advanced control over the facial features and looks of the 3D agent to which they are linked.
- a camera control is illustrated by a camera icon, and a given sound in a scene by a speakerphone icon.
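The timeline view of Fig. 3 can be sketched as a set of tracks, one per line (A, C, S, M), each carrying icons anchored at the start time of their content. This is an assumed data model for illustration; the `Icon` and `Track` classes and their fields are not from the patent, only the line letters and icon kinds are.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the timeline view mode: each track holds
# icons 75 positioned at the start time of the content they trigger.

@dataclass
class Icon:
    start: float   # position along the timeline, e.g. in seconds
    kind: str      # e.g. "walking man", "smiley face", "camera"

@dataclass
class Track:
    line: str                                  # "A" animation, "C" camera, "S" sound, "M" music
    icons: list = field(default_factory=list)

    def add(self, start, kind):
        self.icons.append(Icon(start, kind))
        self.icons.sort(key=lambda i: i.start)  # keep icons in timeline order

animation = Track("A")
animation.add(4.0, "smiley face")
animation.add(1.5, "walking man")
print([(i.start, i.kind) for i in animation.icons])
# [(1.5, 'walking man'), (4.0, 'smiley face')]
```

Stacking several such tracks one on top of the other, aligned on the same horizontal axis, reproduces the layout described for Fig. 3.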
- Figure 4 illustrates a script view mode, whereby similar icons 75 are displayed within narrative text (also referred to as a script), which is then used by the tool to generate the animation.
- referring to Figs. 2 to 4, as the user creates the multi-media work, or interacts with the GUI 50 to modify and update the work, he/she may switch between the various view modes illustrated therein.
- the visual structure view mode is typically set as the default mode.
- in Fig. 2, it is possible to view and modify the high level structure of the work (overall organization, themes, sequence, etc.).
- the user may input various content to the story, such as the script, the cast, the camera positions, the voices, the sound track (music), and any parameters or further content which will have an impact on the rendering of the multi-media work.
- the tool uses such input to update the rendering of the work, such that any changes made are viewed in real time via the rendered animation shown in the first display area 52 of the GUI 50.
- a pre-defined structural template is provided to represent a discursive structure.
- the above-described GUI 50, with the herein-detailed system and method implementing a multi-media creation tool, provides a guide for users to create their own work based on the given pre-defined structural template.
- the template defines a structure for a highest level, while structures and details of lower levels are to be defined by the user via an input device.
- in Fig. 5, there is illustrated a method 80 for creating a multimedia work in accordance with an embodiment.
- a user input is received.
- the user input relates to at least a first medium and a second medium of the animation.
- by "medium" it is intended to refer to a type of media, such as a sound data stream and an image data stream.
- This step involves, in one embodiment, the reception of any type of user instructions (such as a text input), user modifications, or user interaction data occurring as the user navigates among, selects and displaces elements as described hereinabove in reference to Figs. 1 to 4.
- in step 84, the multi-media work is rendered based on the user input, and from the first medium and the second medium.
- a visual structure is generated for output on a display device.
- This step involves combining an outline of the multi-media work with a timeline of the multi-media work.
- the outline and the timeline are each for extending across distinct, respective axes of the display device.
- the combining of the outline with the timeline jointly organizes the first and the second medium together in accordance with respective, distinct organizational parameters.
- the visual structure is generated as above described in relation to Figures 1 to 4.
- the outline and timeline are provided along distinct axes of the output device, which are, in one embodiment, a vertical and a horizontal axis respectively. In another embodiment, these are reversed. In another example, any of these axes are non-longitudinal or have a curvature.
- in step 88, the visual structure generated in step 86 is outputted for display on a display device.
- in step 90, which is optional, the generating of the visual structure in step 86 and the rendering in step 84 are dependent on user modifications entered via a user input in relation to the at least two mediums or to an initial visual structure. Alternatively, or additionally, any one of those steps is re-performed to update the visual structure and the rendered animation based on user inputs.
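The steps of method 80 can be sketched as a simple pipeline that repeats on each user modification. This is an illustrative skeleton under assumed names (`create_multimedia_work` and the callable parameters are placeholders, not the patent's implementation); only the step numbering follows Fig. 5.

```python
# Hypothetical sketch of method 80: receive input (82), render (84),
# generate the visual structure (86), output it (88), and repeat on
# user modification (90). The callables stand in for the real
# rendering and display machinery.

def create_multimedia_work(user_inputs, render, generate_structure, display):
    work, structure = None, None
    for user_input in user_inputs:              # step 82: receive user input
        work = render(user_input)               # step 84: render from the media
        structure = generate_structure(work)    # step 86: combine outline + timeline
        display(structure)                      # step 88: output the visual structure
    return work, structure                      # step 90 is the loop itself

# Minimal stand-ins to exercise the flow:
shown = []
work, structure = create_multimedia_work(
    user_inputs=["add scene 1", "move scene 1"],
    render=lambda inp: {"last_input": inp},
    generate_structure=lambda w: ("outline+timeline of", w["last_input"]),
    display=shown.append,
)
print(structure)  # ('outline+timeline of', 'move scene 1')
```

Each pass through the loop corresponds to one update of both the rendered work (first display area 52) and the visual structure (second display area 54).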
- in Fig. 6, there is illustrated a system 100 for creating a multimedia work in accordance with an embodiment.
- the system 100 comprises an input device 102, a processing device 104, a memory 106, a user interface 108, an output device 110 and a database 112.
- the input device 102 is any type of input unit such as a mouse, a keyboard, a touch screen, a microphone, a camera, and the like, for inputting information in the form of a data signal readable by the processing device.
- Input data can be, for example, user settings for the various media in the work, user instructions reflective of user interaction with a graphical user interface implemented via the user interface 108, user instructions reflective of user modifications to the visual structure outputted, scripts or dialogs.
- the processing device 104 is any type of processor, or processor assembly comprising multiple processing elements (not shown), having access to a memory 106 to retrieve instructions stored thereon and execute such instructions. Upon execution, the instructions implement the processing device 104 to perform a series of tasks as described above in reference to Fig. 5.
- the memory 106 can be any kind of memory device, such as random access memory, read only or rewritable memory, internal processor caches, and the like.
- the user interface 108 comprises components for implementing the GUI as described hereinabove in reference to Figs. 2 to 4.
- the output device 110 is any type of output component such as a display device, a touch screen, or a speaker.
- the database 112 is optional and is used to store multi-media works, their various media, related elements and their parameters, and templates such as described above, which are usable to create the multi-media work using the system.
- FIG. 6 While illustrated in Fig. 6 as groups of discrete components communicating with each other via distinct data signal connections, it should be noted that such components are in one embodiment, provided by a combination of hardware and software components, with some components being implemented by a given function or operation of a hardware or software system. In addition, many of the data paths illustrated are implementable by data communication occurring within a computer application or an operating system. The structure of the system illustrated in Fig. 6 is thus provided for efficiency of teaching.
Abstract
There is described herein a system for creating a multi-media work, comprising: an output device; an input device for receiving a user input; and a processing device for executing instructions relating at least to a first medium and a second medium. The instructions, when executed by the processing device, implement the processing device to: render the multi-media work from the at least first medium and second medium, as a function of the user input; and generate a visual structure for output on the output device by combining an outline of the multi-media work with a timeline of the multi-media work. The outline and the timeline are each for extending across distinct, respective axes of the output device. The outline combined with the timeline jointly organizes the first and the second medium together in accordance with respective, distinct organizational parameters.
Description
VISUAL STRUCTURE FOR CREATING MULTIMEDIA WORKS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority of US provisional patent application 61/169,895 filed April 16, 2009 and entitled VISUAL STRUCTURE FOR CREATING MULTIMEDIA WORKS.
TECHNICAL FIELD
[0002] The present invention relates to the field of software tools for use in creating and manipulating multimedia works, and more particularly to tools that allow an author to structure a narrative or informational discourse.
BACKGROUND OF THE INVENTION
[0003] Various tools exist to allow an individual who wants to create a discursive work to structure the work as it is being created. An outline is used to help structure a piece of work. A timeline is used to organize various elements in time and space, and establish cross-relationships between elements.
[0004] In the world of multi-media, a single work can easily contain multiple different types of mediums, such as sound, images, text, etc. The difficulty encountered when creating multi-media work is that there is no macroscopic representation of the work that allows the author to visualize the complexity of the organized multimedia elements. Outlines are very text-focused and can only provide structural help in the context of word-processing. Timelines are limited in the information that they provide since they deal mainly with the spatial aspect of events as they occur over time.
[0005] Therefore, there is a need to provide a tool that will allow authors to create multi-media works by addressing the broader structural aspects of the work.
SUMMARY
[0006] There is described herein an arbitrarily-deep hierarchical structure that allows dissimilar multi-media elements needed for a single work to be assembled. The single or completed multi-media work can be a movie, a website, an interactive multi-media experience, a presentation, etc. The hierarchical structure provides a navigation and selection tool so that users can identify, select, and modify segments of the multi-media work. The tool can also be used as a template to represent a conventional structure, to be fleshed-out by the author. Outline and timeline aspects of the multi-media work are provided together in a single visual structure.
[0007] In accordance with an embodiment, there is provided a system for creating a multi-media work, comprising: an output device; an input device for receiving a user input; and a processing device for executing instructions relating at least to a first medium and a second medium, wherein the instructions, when executed by the processing device, implement the processing device to: render the multi-media work from the at least first medium and second medium, as a function of the user input; and generate a visual structure for output on the output device by combining an outline of the multi-media work with a timeline of the multi-media work, the outline and the timeline each for extending across distinct, respective axes of the output device, the outline combined with the timeline jointly organizing the first and the second medium together in accordance with respective, distinct organizational parameters.
[0008] In accordance with an embodiment, there is provided a method for creating a multi-media work, the method comprising: receiving a user input relating to at least a first medium and a second medium; rendering the multi-media work from at least the first medium and the second medium, based on the user input; generating a visual structure for output on a display device, by combining an outline of the multi-media work with a timeline of the multi-media work, the outline and the timeline each for extending across distinct, respective axes of the display device,
the outline combined with the timeline jointly organizing the first and the second medium together in accordance with respective, distinct organizational parameters; and outputting the visual structure for display on the display device.
[0009] In accordance with an embodiment, there is provided a computer-readable medium storing instructions for execution in a processing device, to implement the processing device to: receive a user input relating to at least a first medium and a second medium; render the multi-media work from at least the first medium and the second medium, based on the user input; generate a visual structure for output on a display device, by combining an outline of the multi-media work with a timeline of the multi-media work, the outline and the timeline each for extending across distinct, respective axes of the display device, the outline combined with the timeline jointly organizing the first and the second medium together in accordance with respective, distinct organizational parameters; and output the visual structure for display on the display device.
[0010] In accordance with an embodiment, there is provided a graphical user interface for display on a display device, the graphical user interface comprising: a first display area for displaying a multi-media work on the display device, the multimedia work being rendered from at least a first and second medium; a second display area for displaying a timeline of the multi-media work across a first axis of the display device, to represent at least one of: a chronological organization and a spatial organization of elements forming part of the multi-media work; and wherein the second display area is further for displaying an outline of the multi-media work, the outline providing a hierarchical organization of the elements organized in accordance with the timeline.
[0011] The term "timeline" should be understood to define either a chronological or a spatial organization, for example one defining a schedule of events to be followed. Timeline therefore includes both a conventional timeline, where time is measured and shown in equal increments, and a visual structure timeline, which is a chronological
ordering of the scenes, but is not linearly related to time as there is no mention of the length of time of scenes (see Fig. 2, for example).
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] Further features and advantages of the present invention will become apparent from the following detailed description, taken in combination with the appended drawings, in which:
[0013] Fig. 1a is a schematic illustrating a generic three level visual structure, in accordance with an embodiment;
[0014] Fig. 1b is a schematic illustrating a three level visual structure with scenes, shots, and shot details, in accordance with an embodiment;
[0015] Fig. 2 is a screen shot of a graphical user interface with a visual structure, in accordance with an embodiment;
[0016] Fig. 3 is a screen shot of the graphical user interface of Fig. 2, which shows details of an element in the visual structure in accordance with a timeline view mode; and
[0017] Fig. 4 is a screen shot of the graphical user interface of Fig. 2, which shows details of an element in the visual structure in accordance with a script view mode;
[0018] Fig. 5 is a flow chart of a method for creating a multi-media work in accordance with an embodiment; and
[0019] Fig. 6 is a schematic illustration of a system for creating a multi-media work in accordance with an embodiment.
[0020] It will be noted that throughout the appended drawings, like features are identified by like reference numerals.
DETAILED DESCRIPTION
[0021] There is described herein a software tool that provides a hierarchical, multilevel representation of a multi-media work, through an interactive visual structure which makes it possible to visualize the organizational complexity of the multimedia elements of the work, and to then modify these elements via an interaction therewith. The elements can either be visually represented in accordance with a detailed (low level) representation or a broad (high level) representation. For the purposes of this description, a low level representation refers to a detailed view of the work, while a high level representation refers to a broad overview representation of the work where the overall discursive structure is apparent. These high level and low level representations are also referred herein as respective degrees of detail.
[0022] In one embodiment, the tool can visually represent a hierarchy of grouped elements; each level in the hierarchy corresponding to the different levels of the total structure. The demarcations between levels are not fixed or limited in number, since there are many ways in which a discursive work can be structured.
[0023] For illustrative purposes and in reference to Figs. 1a and 1b, an example is provided herein with three levels in the hierarchy. In this example, the highest level represents the large scale structure of the discursive work. For example, acts in a play. The middle level represents a smaller scale structure embedded within the higher level, such as scenes for each act. The lowest level represents the most detailed structure, also embedded within the level directly above it, such as shots in a scene. In addition to providing the different hierarchical levels, in the form of an outline but for multi-media, the different elements within each level are positioned in time in accordance with a timeline to represent their position chronologically or spatially in the multi-media work.
[0024] In one embodiment, each level in the hierarchy is itself structured with discrete units of the discursive structure defined at that level. Take for example the level containing the set of scenes. This level may contain multiple scenes which
can be ordered and grouped in arbitrary ways within that level. The positioning of these units with respect to each other is relevant, since it is meant to denote some logical order; for example, the scenes might be in chronological order for a narrative work.
[0025] There is a nesting aspect, such that the unit structure on one level is related in some way to the unit structure in adjacent levels. For example, multiple units, or groups of units, in a given level can be grouped together within a single unit in the next higher level. The lowest level contains the largest amount of detailed information on the subject matter of the project, yet it encompasses the smallest scope. Subsequent higher levels group sections from lower levels in a nested fashion, thus hiding some detail but gaining scope as the levels rise.
[0026] An example of such a visual three level structure 20 is shown in Fig. 1a. "Structure level 3" 22 includes one unit 24 under "Section 3.1". "Structure level 2" 26 is a breakdown of "Structure level 3" 22 into two separate units 28 and 32, under "Section 2.1" and "Section 2.2" respectively. In "Structure Level 1" 34, each unit of "Structure Level 2" 26 is further broken down into two more units to form units 36, 38, 40 and 42, under "Section 1.1", "Section 1.2", "Section 1.3" and "Section 1.4", respectively.
[0027] The number of units on any given level is arbitrary, and can be modified by the user at will while the work is being created.
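The arbitrarily deep, user-modifiable unit structure described above can be sketched as a simple recursive data model. This is an illustrative sketch only; the `Unit` class and its fields are hypothetical and not part of the disclosed tool:

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    """One discrete unit at a given level (e.g. an act, scene, or shot)."""
    label: str
    children: list = field(default_factory=list)

    def depth(self) -> int:
        # A leaf counts as one level; each enclosing group adds one.
        return 1 + max((c.depth() for c in self.children), default=0)

# The three-level structure of Fig. 1a: one level-3 section split into
# two level-2 sections, each split into two level-1 units.
section_3_1 = Unit("Section 3.1", [
    Unit("Section 2.1", [Unit("Section 1.1"), Unit("Section 1.2")]),
    Unit("Section 2.2", [Unit("Section 1.3"), Unit("Section 1.4")]),
])

print(section_3_1.depth())  # 3
```

Because each `children` list is an ordinary mutable list, units can be added, removed, or regrouped at will, matching the arbitrary number of units per level described above.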
[0028] In another example of a three-level visual structure 20, as illustrated in Fig. 1b, each of the three hierarchical levels 22, 26, and 34 is displayed as a separate, stacked, horizontal band 44, while the discrete units (also referred to as elements) within each level are organized within the given level in accordance with a timeline 46 extending horizontally along those bands 44. In this way, the horizontal bands 44 illustrating the three hierarchical levels 22, 26, and 34 together provide an outline 48 of the multi-media work which extends vertically.
[0029] For example, when creating a movie with this tool, the units within one level might represent the different scenes of the movie, which are displayed along the timeline 46, in this case horizontally on an output device. Similarly, the units in the next lower level (the shot sequences) are also displayed along the timeline 46, in this case horizontally, in accordance with their occurrence in time. The third level, which provides the shot details, is also ordered sequentially along the timeline 46 to match its position in time within the multi-media work. Similarly, all the elements displayed in accordance with the timeline 46 are themselves organized hierarchically along the outline 48.
[0030] Fig. 2 illustrates a graphical user interface (GUI) 50 for display on an output device to a user, in accordance with one embodiment. In this example, the multimedia work is an animation. A first display area 52, here an upper portion of the screen, is for displaying a rendering of the animation, with all mediums being combined together, such as three-dimensional (3D) graphics and audio in this case. As the multi-media work is being created by the user interacting with the interface 50, the user is able to visualize in real-time any updates that are made to the work, whether related to story, character animation, sound, and the like, as per the rendering displayed in this area 52.
[0031] A second display area 54, in this case a lower portion of the screen, illustrates a visual structure 56 which comprises a combined outline 58 and timeline 60. The outline 58 is shown across the vertical axis 62 of the screen and represents a hierarchical organization of the work via multiple levels (here three). In this case, the work is meant to be based on a narrative. As seen, the outline 58 illustrates the narrative as being formed by three separate acts 66 at a highest "Act level" 70; each one of these acts 66 further being formed by a given number of grouped scenes 67 (at a mid "scene level" 72), and each one of these grouped scenes 67 further including a certain number of scenes 68 (at a lowest "scene level" 74).
[0032] The timeline 60 is shown across the horizontal axis 64 of the screen, whereby the elements in the multiple levels of the outline 58, here the acts 66, the grouped scenes 67, and the scenes 68, are each placed in their respective chronological order along the horizontal axis 64, while remaining within their respective levels (70, 72 and 74).
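One possible way to compute this joint placement is a depth-first walk that assigns each element a vertical band (its hierarchy level, for the outline axis) and a horizontal span (its chronological extent, for the timeline axis). The sketch below is illustrative only; the node structure, the `layout` function, and the one-slot-per-leaf assumption are all hypothetical:

```python
# Each node is (label, children); a leaf occupies one slot of timeline.
act = ("Act 1", [
    ("Scenes 1-3", [("Scene 1", ()), ("Scene 2", ()), ("Scene 3", ())]),
    ("Scenes 4-5", [("Scene 4", ()), ("Scene 5", ())]),
])

def layout(node, level, start, boxes):
    """Append (label, level, start, end) boxes; children sit one band lower."""
    label, children = node
    if not children:
        boxes.append((label, level, start, start + 1))
        return start + 1
    end = start
    for child in children:
        end = layout(child, level - 1, end, boxes)
    boxes.append((label, level, start, end))
    return end

boxes = []
total = layout(act, level=3, start=0, boxes=boxes)
# The act spans the whole timeline at the top band; each scene occupies
# one slot at the bottom band, within its parent group's span.
```

Note how a parent's horizontal span is simply the union of its children's spans, so elements remain chronologically ordered within their own level while staying nested along the outline.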
[0033] Still referring to Fig. 2, in one embodiment, each level 70, 72, 74 and each element (66, 67, 68) is visually represented by an image thumbnail, a descriptive label, or any other similar icon. In an embodiment, a flag or an information bubble can be associated with an element in order to qualify it with additional information. The colour and/or length of the flags is used to convey some information about a scene (its relative importance, for example), while the bubble is used to add extra information that would not otherwise fit on the screen. The bubble may pop up when the user passes his mouse over the scene thumbnail. In one embodiment, the user is also able to add or modify such descriptive labels (either visual or semantic) in an attempt to characterize the contents within that element. This offers an additional visual tool to the user to help organize the elements and recognize features of each element. Fig. 2 also shows a dramatic curve visual indicator (not numbered), such as Freytag's Pyramid and the like: the curve which rises to a climax near the end of Act 2 and falls in Act 3.
[0034] In one embodiment, the GUI 50 provides a navigation and selection tool allowing users to interact with the visual structure 56 to identify, select, or modify any element displayed and forming part of the work. For example, the navigation tool allows the user to move between elements at a given level, and up or down through the various levels 70, 72, 74.
[0035] In one embodiment, one or more of the levels 70, 72, 74 are kept hidden by the GUI 50 until they are selected by the user, upon which they "explode" into view to display details such as any sub-levels (also referred to as nested levels). This allows the user to keep a limited view in perspective and easily manage the
elements of the work. Alternatively, all levels are shown to the user, as the navigation tool is used to move between levels and between elements within a same level.
[0036] In one embodiment, such a navigation tool allows entire units to be selected and displaced in the timeline, thereby rearranging the story. For example, in Fig. 2, the "Lost in the City" label represents a grouping of scenes, which comprises scenes 9 through 13. Scene 11 is labeled "The realization", as shown in the information bubble, and may be moved in the story.
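Displacing a unit in the timeline amounts to removing it from its current position and reinserting it elsewhere, with the sibling units shifting accordingly. A minimal sketch, with the scene labels taken from the Fig. 2 example and the `move` helper being a hypothetical name:

```python
# The "Lost in the City" grouping of Fig. 2: scenes 9 through 13.
scenes = ["Scene 9", "Scene 10", "Scene 11 (The realization)",
          "Scene 12", "Scene 13"]

def move(units, src, dst):
    """Return a new ordering with the unit at index src displaced to dst."""
    units = list(units)              # keep the original ordering intact
    units.insert(dst, units.pop(src))
    return units

# Displace "The realization" to the start of the grouping.
reordered = move(scenes, 2, 0)
print(reordered[0])  # Scene 11 (The realization)
```

The same displacement applies at any level: moving a whole act or scene grouping would reorder its entire nested contents at once.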
[0037] Still referring to Fig. 2, in one embodiment, once an element is selected via interaction with the GUI 50, the user is able to view, access and modify the contents of that selected element. For example, if a scene is selected, then the contents of that scene (i.e. the elements of that scene at a lower level, also referred to as sub-elements) are displayed for being accessed and modified. The contents of a selected element are viewed in accordance with various possible viewing modes, such as a timeline view mode, a script view mode, or any other format that will allow the user to view and modify the contents of the selected element.
[0038] Fig. 3 illustrates the contents of a given scene in accordance with a timeline view mode. In this view, the contents are text (above line D), voice (below line D), animation triggers (line A), camera control (line C), sound (line S) and music (line M). Such elements forming part of that scene are aligned one on top of the other and positioned in time along the horizontal axis of the screen (timeline). In one embodiment, as shown in Fig. 3, icons 75 are placed along the timeline to indicate start positions for a given content. For example, an animation action of walking is represented as a walking man, and smiling as a smiley face. The 'walking man' is meant to represent a control icon for a plurality of walking/running animations with many degrees of control over the style and manner of the action; similarly, the 'smiley face' icons are meant to represent advanced control over the facial features and looks of the 3D agent to which they are linked. Similarly, a camera control is illustrated by a camera icon, a given sound in a scene as a speakerphone, and
music as musical notes. Any other icon can be used to provide additional content information in time.
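The timeline view can be modelled as a set of named tracks, each holding (start position, content) pairs whose start positions are what the icons mark. A sketch under illustrative assumptions: the track keys follow the line letters of Fig. 3, but the timings and contents are invented:

```python
# One track per line of Fig. 3: voice (D), animation triggers (A),
# camera control (C), sound (S) and music (M).
tracks = {
    "D": [(0.0, "voice: greeting"), (3.0, "voice: warning")],
    "A": [(0.5, "walk"), (3.5, "smile")],        # icon start positions
    "C": [(0.0, "wide shot"), (4.0, "close-up")],
    "S": [(2.0, "door slam")],
    "M": [(0.0, "main theme")],
}

def started_by(tracks, t):
    """Contents whose icons start at or before time t, per track."""
    return {line: [c for start, c in events if start <= t]
            for line, events in tracks.items()}

snapshot = started_by(tracks, 3.0)
```

Querying the tracks at a given playback time is the kind of lookup a renderer could use to decide which contents have begun by that point in the scene.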
[0039] Fig. 4 illustrates a script view mode, whereby similar icons 75 are displayed within narrative text (also referred to as a script), which is then used by the tool to generate the animation. The method by which such icons are generated and provided within the text, so as to be used in generating the animation, is fully described in International Patent Application No. 12/482,784, which will soon be published.
[0040] Referring to Figs. 2 to 4, as the user creates the multi-media work, or interacts with the GUI 50 to modify and update the work, he/she may switch between the various view modes illustrated in Figs. 2 to 4. When in the visual structure view mode of Fig. 2 (typically set as the default mode), it is possible to view and modify the high level structure of the work (overall organization, themes, sequence, etc.). In the other modes (Figs. 3 and 4), the user may input various content to the story, such as the script, the cast, the camera positions, the voices, the sound track (music), and any parameters or further content which will have an impact on the rendering of the multi-media work. Subsequently, the tool uses such input to update the rendering of the work, such that any changes made are viewed in real time via the rendered animation shown in the first display area 52 of the GUI 50.
[0041] In one embodiment, a pre-defined structural template is provided to represent a discursive structure. In this way, the above described GUI 50, with the herein detailed system and method implementing a multi-media creation tool, provides a guide for users to create their own work based on the given pre-defined structural template. In one embodiment, the template defines a structure for a highest level, while structures and details of lower levels are to be defined by the user via an input device.
[0042] For example consider a template for a dramatic play where the highest level structure is provided, such as a five act structure, and within each act there are
provided a defined number of scenes, and the major themes of each scene are given. All the user has to do is choose characters, define the sets, and write the dialog in each scene. The amount of detail provided in each template is arbitrary. In another embodiment, all levels are predetermined, and the user only fills in the details. In one example, libraries of templates are created with different amounts of detail. Users are also able to create their own templates using the tool.
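A pre-defined structural template of this kind can be represented as a fixed top-level structure plus empty slots for the user to fill. A minimal sketch of a five-act template; the act themes, scene counts, and slot fields are invented for illustration:

```python
# Highest level is fixed; scene counts and themes per act are given.
template = {
    "Act 1": {"scenes": 3, "theme": "exposition"},
    "Act 2": {"scenes": 4, "theme": "rising action"},
    "Act 3": {"scenes": 2, "theme": "climax"},
    "Act 4": {"scenes": 3, "theme": "falling action"},
    "Act 5": {"scenes": 2, "theme": "resolution"},
}

def instantiate(template):
    """Expand the template into empty scene slots awaiting user detail."""
    return {
        act: [{"theme": spec["theme"], "characters": [], "dialog": ""}
              for _ in range(spec["scenes"])]
        for act, spec in template.items()
    }

work = instantiate(template)
```

A library of such templates would vary only in how many of the slots are pre-filled, from a bare act structure down to fully predetermined levels where the user supplies only the details.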
[0043] In reference to Fig. 5, there is illustrated a method 80 for creating a multimedia work in accordance with an embodiment.
[0044] In step 82, a user input is received. In one embodiment, the user input relates to at least a first medium and a second medium of the animation. By "medium", it is intended to refer to a type of media, such as a sound data stream or an image data stream. This step involves, in one embodiment, the reception of any type of user instructions (such as a text input), user modifications, or user interaction data occurring upon the user navigating among, selecting and displacing elements as described hereinabove in reference to Figs. 1 to 4.
[0045] In step 84, the multi-media work is rendered based on the user input, and from the first medium and the second medium.
[0046] In step 86, a visual structure is generated for output on a display device. This step involves combining an outline of the multi-media work with a timeline of the multi-media work. The outline and the timeline each extend across distinct, respective axes of the display device. The combining of the outline with the timeline jointly organizes the first and the second medium together in accordance with respective, distinct organizational parameters. The visual structure is generated as described above in relation to Figs. 1 to 4. The outline and timeline are provided along distinct axes of the display device, which are, in one embodiment, a vertical and a horizontal axis respectively. In another embodiment, these are reversed. In another example, any of these axes are non-longitudinal or have a curvature.
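The two distinct organizational parameters can be sketched as follows: the same set of media elements is keyed once by hierarchy level (for the outline axis) and once by start time (for the timeline axis). All element names and fields below are illustrative assumptions, not part of the disclosed method:

```python
# Two media of the work: a sound stream and an image stream.
sound_medium = [{"id": "music-1", "level": "scene", "start": 0.0}]
image_medium = [{"id": "shot-1", "level": "shot", "start": 0.0},
                {"id": "shot-2", "level": "shot", "start": 4.0}]

def visual_structure(*media):
    """Organize all elements along two distinct parameters."""
    elements = [e for medium in media for e in medium]
    # Outline axis: hierarchical organization, grouped by level.
    outline = {}
    for e in elements:
        outline.setdefault(e["level"], []).append(e["id"])
    # Timeline axis: chronological organization, ordered by start time.
    timeline = [e["id"] for e in sorted(elements, key=lambda e: e["start"])]
    return outline, timeline

outline, timeline = visual_structure(sound_medium, image_medium)
```

The point of the sketch is that both axes are derived jointly from the same elements, so the two media stay synchronized however the user navigates or rearranges the structure.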
[0047] In step 88, the visual structure generated in step 86 is outputted for display on a display device.
[0048] In step 90, which is optional, the generating of the visual structure in step 86 and the rendering in step 84 are dependent on user modifications entered via a user input in relation to the at least two media or to an initial visual structure. Alternatively, or additionally, any one of those steps is re-performed to update the visual structure and the rendered animation based on user inputs.
[0049] In reference to Fig. 6, there is illustrated a system 100 for creating a multimedia work in accordance with an embodiment.
[0050] The system 100 comprises an input device 102, a processing device 104, a memory 106, a user interface 108, an output device 110 and one or more databases 112.
[0051] The input device 102 is any type of input unit, such as a mouse, a keyboard, a touch screen, a microphone, a camera, and the like, for inputting information in the form of a data signal readable by the processing device. Input data can be, for example, user settings for the various media in the work, user instructions reflective of user interaction with a graphical user interface implemented via the user interface 108, user instructions reflective of user modifications to the visual structure outputted, and scripts or dialogs.
[0052] The processing device 104 is any type of processor, or processor assembly comprising multiple processing elements (not shown), having access to the memory 106 to retrieve instructions stored thereon and execute such instructions. Upon execution, such instructions implement the processing device 104 to perform a series of tasks as described above in reference to Fig. 5.
[0053] The memory 106 can be any kind of memory device, such as random access memory, read only or rewritable memory, internal processor caches, and the like.
[0054] The user interface 108 comprises components for implementing the GUI as described hereinabove in reference to Figs. 2 to 4.
[0055] The output device 110 is any type of output component, such as a display device, a touch screen, or a speaker.
[0056] The database 112 is optional and is used to store multi-media works, their various media, related elements and their parameters, and templates such as described above, which are usable to create the multi-media work using the system.
[0057] While illustrated in Fig. 6 as groups of discrete components communicating with each other via distinct data signal connections, it should be noted that such components are, in one embodiment, provided by a combination of hardware and software components, with some components being implemented by a given function or operation of a hardware or software system. In addition, many of the data paths illustrated are implementable by data communication occurring within a computer application or an operating system. The structure of the system illustrated in Fig. 6 is thus provided for efficiency of teaching.
[0058] It should be noted that the herein described subject matter, in addition to being carried out as a method or embodied in a system, may also be embodied as a computer readable medium, or as an electrical or electro-magnetic signal.
[0059] While preferred embodiments have been described above and illustrated in the accompanying drawings, it will be evident to those skilled in the art that modifications may be made therein without departing from the scope of this disclosure. Such modifications are considered as possible variants comprised in the scope of the disclosure.
Claims
1. A system for creating a multi-media work, comprising:
an output device;
an input device for receiving a user input; and
a processing device for executing instructions relating at least to a first medium and a second medium, wherein the instructions, when executed by the processing device, implement the processing device to:
render the multi-media work from the at least first medium and second medium, as a function of the user input; and
generate a visual structure for output on the output device by combining an outline of the multi-media work with a timeline of the multi-media work, the outline and the timeline each for extending across distinct, respective axes of the output device, the outline combined with the timeline jointly organizing the first and the second medium together in accordance with respective, distinct organizational parameters.
2. The system of claim 1, wherein the distinct, respective axes comprise a vertical axis and a horizontal axis of the output device.
3. The system of claim 1, wherein the output device comprises a display device.
4. The system of claim 1, wherein the input device comprises an input device for receiving user settings of at least one of the first medium and the second medium.
5. The system of claim 1, wherein the input device comprises an input device for receiving user instructions for at least one of interacting with and modifying the visual structure, and wherein the instructions implement the
processing device to render the multi-media work as a function of the user instructions.
6. The system of claim 1, wherein the outline comprises multiple levels, each one of the multiple levels illustrating an element, the element comprising at least one type of medium.
7. The system of claim 6, wherein the processing device is implemented to generate the visual structure with the element for being visually represented on the output device in accordance with a user-selected degree of detail.
8. The system of claim 6, wherein at least one of the multiple levels is nested in another one of the multiple levels.
9. The system of claim 8, wherein the multiple levels comprise at least one of: an order of acts, an order of scenes in one of the acts, and an order of shots in one of the scenes.
10. The system of claim 6, wherein the element comprises at least one of: an act, a scene in the act, and a shot in the scene.
11. The system of claim 1, wherein one of the respective, distinct organizational parameters associated with the timeline provides at least one of: a chronological and a spatial organization of elements forming part of the visual structure.
12. The system of claim 1, wherein one of the respective, distinct organizational parameters associated with the outline provides a hierarchical organization of elements forming part of the visual structure.
13. A method for creating a multi-media work, the method comprising:
receiving a user input relating to at least a first medium and a second medium;
rendering the multi-media work from at least the first medium and the second medium, based on the user input;
generating a visual structure for output on a display device, by combining an outline of the multi-media work with a timeline of the multi-media work, the outline and the timeline each for extending across distinct, respective axes of the display device, the outline combined with the timeline jointly organizing the first and the second medium together in accordance with respective, distinct organizational parameters; and
outputting the visual structure for display on the display device.
14. The method of claim 13, wherein the receiving comprises receiving user instructions for at least one of navigating among, selecting and displacing elements forming part of the visual structure, and wherein the rendering comprises rendering the multi-media work as a function of the user instructions.
15. The method of claim 13, wherein the generating the visual structure comprises generating and organizing multiple levels, each one of the multiple levels illustrating an element composed of at least one type of medium.
16. The method of claim 15, wherein the generating comprises generating the visual structure with the element for being visually represented on the display device in accordance with a user-selected degree of detail.
17. The method of claim 15, wherein the generating comprises nesting at least one of the multiple levels in another one of the multiple levels.
18. The method of claim 13, wherein the generating the visual structure comprises setting one of the respective, distinct organizational parameters
associated with the timeline to provide at least one of: a chronological and a spatial organization of elements forming part of the visual structure.
19. The method of claim 18, wherein the setting to provide the at least one of a chronological and a spatial organization comprises defining at least one of: an order of acts; an order of scenes in one of the acts; and an order of shots in one of the scenes.
20. The method of claim 13, wherein the generating the visual structure comprises setting one of the respective, distinct organizational parameters associated with the outline to provide a hierarchical organization of elements forming part of the visual structure.
21. The method of claim 20, wherein the elements comprise at least one of: an act, a scene in the act, and a shot in the scene.
22. The method of claim 13, wherein the generating the visual structure comprises respectively selecting a vertical axis and a horizontal axis of the display device as the distinct, respective axes of the outline and the timeline.
23. The method of claim 13, wherein the rendering the multi-media work comprises synchronizing at least the first medium and the second medium together in accordance with at least one of the outline, the timeline and the user input.
24. The method of claim 13, wherein the generating a visual structure comprises generating the visual structure based on the user input and a pre-defined structural template.
25. A computer-readable medium storing instructions for execution in a processing device, to implement the processing device to:
receive a user input relating to at least a first medium and a second medium;
render the multi-media work from at least the first medium and the second medium, based on the user input;
generate a visual structure for output on a display device, by combining an outline of the multi-media work with a timeline of the multimedia work, the outline and the timeline each for extending across distinct, respective axes of the display device, the outline combined with the timeline jointly organizing the first and the second medium together in accordance with respective, distinct organizational parameters; and
output the visual structure for display on the display device.
26. A graphical user interface for display on a display device, the graphical user interface comprising:
a first display area for displaying a multi-media work on the display device, the multi-media work being rendered from at least a first and second medium;
a second display area for displaying a timeline of the multi-media work across a first axis of the display device, to represent at least one of: a chronological organization and a spatial organization of elements forming part of the multi-media work; and
wherein the second display area is further for displaying an outline of the multi-media work, the outline providing a hierarchical organization of the elements organized in accordance with the timeline.
27. The graphical user interface of claim 26, wherein the first band is horizontal and the second band is vertical on the display device.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16989509P | 2009-04-16 | 2009-04-16 | |
| US61/169,895 | 2009-04-16 | | |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2010118528A1 (en) | 2010-10-21 |

Family ID: 42982092

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CA2010/000586 (WO2010118528A1) | Visual structure for creating multimedia works | 2009-04-16 | 2010-04-16 |

Country Status (1)

| Country | Link |
|---|---|
| WO | WO2010118528A1 (en) |
Citations (7)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2000073875A2 * | 1999-05-27 | 2000-12-07 | Screenplay Systems, Inc. | Method and apparatus for creating, editing, printing, and displaying works containing presentation metric components |
| WO2002050657A1 * | 2000-12-19 | 2002-06-27 | Coolernet, Inc. | System and method for multimedia authoring and playback |
| US20040125133A1 * | 2002-12-30 | 2004-07-01 | The Board of Trustees of the Leland Stanford Junior University | Methods and apparatus for interactive network sharing of digital video content |
| US20040125124A1 * | 2000-07-24 | 2004-07-01 | Hyeokman Kim | Techniques for constructing and browsing a hierarchical video structure |
| WO2006056901A1 * | 2004-11-25 | 2006-06-01 | Koninklijke Philips Electronics N.V. | User interface for content authoring |
| US20070070066A1 * | 2005-09-13 | 2007-03-29 | Bakhash E E | System and method for providing three-dimensional graphical user interface |
| WO2007102862A1 * | 2006-03-09 | 2007-09-13 | Thomson Licensing | Content access tree |
Cited By (10)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8937620B1 * | 2011-04-07 | 2015-01-20 | Google Inc. | System and methods for generation and control of story animation |
| WO2013074992A3 * | 2011-11-18 | 2013-10-10 | Lucasfilm Entertainment Company Ltd. | Interaction between 3D animation and corresponding script |
| US9003287B2 | 2011-11-18 | 2015-04-07 | Lucasfilm Entertainment Company Ltd. | Interaction between 3D animation and corresponding script |
| US8731339B2 * | 2012-01-20 | 2014-05-20 | Elwha Llc | Autogenerating video from text |
| US9036950B2 | 2012-01-20 | 2015-05-19 | Elwha Llc | Autogenerating video from text |
| US9189698B2 | 2012-01-20 | 2015-11-17 | Elwha Llc | Autogenerating video from text |
| US9552515B2 | 2012-01-20 | 2017-01-24 | Elwha Llc | Autogenerating video from text |
| US10402637B2 | 2012-01-20 | 2019-09-03 | Elwha Llc | Autogenerating video from text |
| WO2014207563A3 * | 2013-06-27 | 2015-04-23 | Plotagon Ab | System, method and apparatus for generating hand gesture animation determined on dialogue length and emotion |
| WO2014207565A3 * | 2013-06-27 | 2015-04-30 | Plotagon Ab | System, apparatus and method for movie camera placement based on a manuscript |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10764026; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 10764026; Country of ref document: EP; Kind code of ref document: A1 |