US20110154197A1 - System and method for algorithmic movie generation based on audio/video synchronization - Google Patents

System and method for algorithmic movie generation based on audio/video synchronization

Info

Publication number
US20110154197A1
US20110154197A1 (application US12/642,135)
Authority
US
United States
Prior art keywords
user
content
content items
engine
movie
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/642,135
Inventor
Louis Hawthorne
d'Armond Lee Speers
Michael Renn Neal
Abigail Betsy Wright
Spencer Stuart McCall
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SACRED AGENT Inc
Original Assignee
SACRED AGENT Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SACRED AGENT Inc
Priority to US12/642,135
Assigned to SACRED AGENT, INC. (ASSIGNMENT OF ASSIGNORS INTEREST; SEE DOCUMENT FOR DETAILS). Assignors: NEAL, MICHAEL R.; SPEERS, D'ARMOND L.; WRIGHT, ABIGAIL BETSY; HAWTHORNE, LOUIS; MCCALL, SPENCER STUART
Priority to PCT/US2010/060086 (published as WO2011075440A2)
Publication of US20110154197A1
Legal status: Abandoned

Classifications

    • G06Q30/02 Marketing; Price estimation or determination; Fundraising (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES > G06Q30/00 Commerce)
    • G06F16/41 Indexing; Data structures therefor; Storage structures (G06F ELECTRIC DIGITAL DATA PROCESSING > G06F16/00 Information retrieval; Database structures therefor; File system structures therefor > G06F16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data)
    • G06F16/435 Querying; Filtering based on additional data, e.g. user or group profiles
    • G06F16/437 Administration of user profiles, e.g. generation, initialisation, adaptation, distribution
    • G06F16/4387 Presentation of query results by the use of playlists
    • G06F16/4393 Multimedia presentations, e.g. slide shows, multimedia albums
    • G06F16/489 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using time information
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals (G11 INFORMATION STORAGE > G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER > G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel > G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers)
    • G11B27/034 Electronic editing of digitised analogue information signals on discs

Definitions

  • a multimedia experience is a movie-like presentation of a script of content created for and presented to an online user, preferably based on his/her current context.
  • the content may include one or more content items of a text, an image, a video, or audio clip.
  • the user's context may include the user's profile, characteristics, desires, his/her rating of content items, and history of the user's interactions with an online content vendor/system (e.g., the number of visits by the user).
  • a new approach is proposed that contemplates systems and methods to create a film-quality, personalized multimedia experience (MME)/movie composed of one or more highly targeted and customized content items using algorithmic filmmaking techniques.
  • Such a personalized movie making approach has numerous potential commercial applications that include but are not limited to advertising, self-help, entertainment, and education.
  • the capability to automatically create a movie from content items in a content database personalized to a user can also be used, for a non-limiting example, to generate video essays for a topic such as a news event or a short history lesson to replace the manual and less-compelling photo essays currently used on many Internet news sites.
  • FIG. 1 depicts an example of a system diagram to support algorithmic movie generation.
  • Although the diagram depicts components as functionally separate, such depiction is merely for illustrative purposes. It will be apparent that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware components. Furthermore, it will also be apparent that such components, regardless of how they are combined or divided, can execute on the same host or multiple hosts, wherein the multiple hosts can be connected by one or more networks.
  • the system 100 includes a user interaction engine 102, which includes at least a user interface 104 and a display component 106; an event generation engine 108, which includes at least an event component 110; a profile engine 112, which includes at least a profiling component 114; a profile library (database) 116 coupled to the event generation engine 108 and the profile engine 112; a filmmaking engine 118, which includes at least a content component 120, a script generating engine 122, and a director component 124; a script template library (database) 126, a content library (database) 128, and a rules library (database) 130, all coupled to the filmmaking engine 118; and a network 132.
  • the term engine refers to software, firmware, hardware, or other component that is used to effectuate a purpose.
  • the engine will typically include software instructions that are stored in non-volatile memory (also referred to as secondary memory).
  • the processor executes the software instructions in memory.
  • the processor may be a shared processor, a dedicated processor, or a combination of shared or dedicated processors.
  • a typical program will include calls to hardware components (such as I/O devices), which typically requires the execution of drivers.
  • the drivers may or may not be considered part of the engine, but the distinction is not critical.
  • library or database is used broadly to include any known or convenient means for storing data, whether centralized or distributed, relational or otherwise.
  • each of the engines and libraries can run on one or more hosting devices (hosts).
  • a host can be a computing device, a communication device, a storage device, or any electronic device capable of running a software component.
  • a computing device can be but is not limited to a laptop PC, a desktop PC, a tablet PC, an iPod, an iPhone, a PDA, or a server machine.
  • a storage device can be but is not limited to a hard disk drive, a flash memory drive, or any portable storage device.
  • a communication device can be but is not limited to a mobile phone.
  • each of the user interaction engine 102, the event generation engine 108, the profile engine 112, and the filmmaking engine 118 has a communication interface (not shown), which is a software component that enables the engines to communicate with each other following certain communication protocols, such as the TCP/IP protocol.
  • the communication protocols between two devices are well known to those of skill in the art.
  • the network 132 enables the user interaction engine 102 , the event generation engine 108 , the profile engine 112 , and the filmmaking engine 118 to communicate and interact with each other.
  • the network 132 can be a communication network based on certain communication protocols, such as TCP/IP protocol.
  • Such a network can be, but is not limited to, the Internet, an intranet, a wide area network (WAN), a local area network (LAN), a wireless network, Bluetooth, WiFi, or a mobile communication network.
  • the physical connections of the network and the communication protocols are well known to those of skill in the art.
  • the user interaction engine 102 is configured to enable a user to submit, via the user interface 104, a topic or situation on which the user intends to seek help or counseling or to have a related movie created, and to present to the user, via the display component 106, a script of content relevant to the topic or movie request submitted by the user.
  • the topic may be a problem, question, interest, issue, event, condition, or concern (hereinafter referred to as a topic), and can be related to one or more of personal, emotional, psychological, relational, physical, practical, or any other need of the user.
  • the creative situation can be derived from databases of specific content.
  • a wildlife conservation organization may create a specific database of images of wildlife and landscapes with motivational and conservation messages.
  • the user interface 104 can be a Web-based browser, which allows the user to access the system 100 remotely via the network 132 .
  • the event generation engine 108 determines an event that is relevant to the user and/or the user's current context, wherein such event would trigger the generation of a movie by the filmmaking engine 118 even without an explicit inquiry from the user via the user interaction engine 102 .
  • the triggering event can be but is not limited to a birthday, a tradition, or a holiday (such as Christmas, Ramadan, Easter, Yom Kippur).
  • Such triggering event can be identified by the event component 110 of the event generation engine 108 based on a published calendar as well as information of the user's profile and history maintained in the profile library 116 discussed below.
  • the event component 110 of the event generation engine 108 may be alerted by a news feed such as RSS to an event of interest to the user and may in turn inform the filmmaking engine 118 to create a movie or specific content in a movie for the user.
  • the filmmaking engine 118 receives such notification from the event generation engine 108 whenever an event that might have an impact on the automatically generated movie occurs.
  • the event component 110 may notify the filmmaking engine 118 of important observances such as Ramadan for a Muslim, wherein the filmmaking engine 118 may decide to use such information or not when composing a movie.
  • the most recent exciting win by a sports team of a university may trigger the event component 110 to provide notification to the filmmaking engine 118 to include relevant text, imagery or video clips of such win into a sports highlight movie of the university being specifically created for the user.
  • the profile engine 112 establishes and maintains a profile of the user in the profile library 116 via the profiling component 114 for the purpose of identifying user-context for generating and customizing the content to be presented to the user.
  • the profile may contain at least the following information about the user: gender and date of birth, parental status, marital status, universities attended, relationship status, as well as his/her current interests, hobbies, income level, and habits; psycho-emotional information such as his/her current issues and concerns, psychological, emotional, and religious traditions, belief system, degree of adherence, and influences; community information that defines how the user interacts with the online community of experts and professionals; and other information the user is willing to share.
  • FIG. 2 illustrates an example of various information that may be included in a user profile.
  • the profile engine 112 may establish the profile of the user by initiating one or more questions during pseudo-conversational interactions with the user via the user interaction engine 102 for the purpose of soliciting and gathering at least part of the information for the user profile listed above.
  • questions focus on the aspects of the user's life that are not available through other means.
  • the questions initiated by the profile engine 112 may focus on the personal interests or the emotional and/or psychological dimensions as well as dynamic and community profiles of the user.
  • the questions may focus on the user's personal interest, which may not be truly obtained by simply observing the user's purchasing habits.
  • the profile engine 112 updates the profile of the user via the profiling component 114 based on the prior history/record of the user's content viewing and the dates of those interactions.
  • the profile library 116 is embedded in a computer readable medium and, in operation, maintains a set of profiles of the users.
  • the profile of the user stored in the profile library 116 can be updated to include the topic submitted by the user as well as the content presented to him/her as part of the user history. If the user optionally provides feedback on the content, the profile of the user can also be updated to include the user's feedback on the content.
  • FIG. 3 depicts a flowchart of an example of a process to establish the user's profile. Although this figure depicts functional steps in a particular order for purposes of illustration, the process is not limited to any particular order or arrangement of steps. One skilled in the relevant art will appreciate that the various steps portrayed in this figure could be omitted, rearranged, combined and/or adapted in various ways.
  • the flowchart 300 starts at block 302 where identity of the user submitting a topic for help or counseling is established. If the user is a first time visitor, the flowchart 300 continues to block 304 where the user is registered, and the flowchart 300 continues to block 306 where a set of interview questions are initiated to solicit information from the user for the purpose of establishing the user's profile. The flowchart 300 ends at block 308 where the profile of the user is provided to the filmmaking engine 118 for the purpose of retrieving and customizing the content relevant to the topic.
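As an illustration only, the FIG. 3 flow could be sketched as follows; the names profile_library, interview_questions, and ask are hypothetical stand-ins, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    answers: dict = field(default_factory=dict)   # interview answers keyed by question
    history: list = field(default_factory=list)   # topics submitted, content viewed, feedback

def establish_profile(user_id, profile_library, interview_questions, ask):
    """Sketch of blocks 302-308 of FIG. 3: identify the user, register and
    interview a first-time visitor, then hand the profile to the filmmaking engine."""
    profile = profile_library.get(user_id)          # block 302: establish identity
    if profile is None:                             # first-time visitor
        profile = UserProfile(user_id=user_id)      # block 304: register the user
        for question in interview_questions:        # block 306: solicit profile information
            profile.answers[question] = ask(question)
        profile_library[user_id] = profile
    return profile                                  # block 308: provide the profile to the filmmaking engine
```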
  • the content library 128, serving as a media "book shelf," maintains a collection of multimedia content items as well as definitions, tags, resources, and presentation scripts of the content items.
  • the content items are appropriately tagged, categorized, and organized in a content library 128 in a richly described taxonomy with numerous tags and properties by the content component 120 of the filmmaking engine 118 to enable access and browsing of the content library 128 in order to make intelligent and context-aware selections.
  • the content items in the content library 128 can be organized by a flexible emotional and/or psychological-oriented taxonomy for classification and identification, including terms such as Christianity, Islam, Malawiism, and secular beliefs.
  • the content items can also be tagged with an issue such as relationship breakup, job loss, death, or depression. Note that the tagging of traditions and issues are not mutually exclusive. There may also be additional tags for additional filtering such as gender and humor.
  • each content item in the content library 128 can be, but is not limited to, a media type of a (displayed or spoken) text (for non-limiting examples, an article, a short text item for quote, a contemplative text such as a personal story or essay, a historical reference, sports statistics, a book passage, or a medium reading or longer quote), a still or moving image (for a non-limiting example, component imagery capable of inducing a shift in the emotional state of the viewer), a video clip (including clips from videos that can be integrated into or shown as part of the movie), an audio clip (for a non-limiting example, a piece of music or sounds from nature or a university sports song), and other types of content items from which a user can learn information or be emotionally impacted, ranging from five thousand years of sacred scripts and emotional and/or psychological texts to modern self-help and non-religious content such as rational thought and secular content.
  • each content item can be provided by another party or created or uploaded by the user him/herself.
  • each of a text, image, video, and audio item can include one or more elements of: title, author (name, unknown, or anonymous), body (the actual item), source, type, and location.
  • a text item can include a source element of one of literary, personal experience, psychology, self help, and religious, and a type element of one of essay, passage, personal story, poem, quote, sermon, speech, historical event description, sports statistic, and summary.
  • a video, an audio, and an image item can all include a location element that points to the location (e.g., file path or URL) or access method of the video, audio, or image item.
  • an audio item may also include elements on album, genre, musician, or track number of the audio item as well as its audio type (music or spoken word).
  • FIG. 4 illustrates an example of various types of content items and the potential elements in each of them.
  • a text item can be used for displaying quotes, which are generally short extracts from a longer text or a short text such as an observation someone has made.
  • Non-limiting examples include Khan: "Be the change you wish to see in the world," and/or extracts from sacred texts such as the Book of Psalms from the Bible.
  • Quotes can be displayed in a multimedia movie for a short period of time to allow contemplation, comfort, or stimulation. For a non-limiting example, statistics on American Football Super Bowls can be displayed while a user is watching a compilation of sporting highlights for his or her favorite team.
  • a text item can also be used in a long format for contemplation, or to assume a voice for communication with the user to, as non-limiting examples, explain or instruct a practice.
  • long format represents more information (e.g., exceeding 200 words) than can be delivered on a single screen when the multimedia movie is in motion.
  • Examples of long format text include but are not limited to personal essays on a topic or the description of or instructions for an activity such as a meditation or yoga practice.
  • a text item can be used to create a conversational text (e.g., a script dialog) between the user and the director component 124 .
  • the dialog can be used with meta-tags to insert personal, situation-related, or time-based information into the movie.
  • a dialog can include a simple greeting with the user's name (e.g., Hello Mike, Welcome Back to the System), a happy holiday message for a specific holiday related to a user's spiritual or religious tradition (e.g., Happy Hanukah), or recognition of a particular situation of the user (e.g., that the user's brother is ill).
  • an audio item can include music, sound effects, or spoken word.
  • an entire song can be used as the soundtrack for a shorter movie.
  • the sound effects may include items such as nature sounds, water, and special effects audio support tracks such as breaking glass or machine sounds.
  • Spoken word may include speeches, audio books (entire or passages), and spoken quotes.
  • image items in the content library 128 can be characterized and tagged, either manually or automatically, with a number of psychoactive properties ("Ψ-tags") for their inherent characteristics that are known, or presumed, to affect the emotional state of the viewer.
  • Ψ-tag is an abbreviated form of "psychoactive tag," since it is psychologically active, i.e., pertinent for association between tag values and psychological properties.
  • These Ψ-tagged image items can be subsequently used to create emotional responses or connections with the user via a meaningful image progression as discussed later.
  • These psychoactive properties mostly depend on the visual qualities of an image rather than its content qualities.
  • the visual qualities may include but are not limited to Color (e.g., Cool-to-Warm), Energy, Abstraction, Luminance, Lushness, Moisture, Urbanity, Density, and Degree of Order, while the content qualities may include but are not limited to Age, Altitude, Vitality, Season and Time of Day.
  • images may contain energy or calmness. When a movie is meant to lead to calmness and tranquility, imagery can be selected to transition with the audio or music track accordingly. Likewise, if an inspirational movie is made to show athletes preparing for the Winter Olympics, imagery of excellent performances, teamwork, and success is important.
  • the content component 120 may tag a night image from a city, with automobile lights forming patterns across the entire image, differently from a sunset image over a desert scene with flowing sand and subtle differences in color and light. Note that dominant colors can be part of image assessment and analysis, as color transitions can provide soothing or sharply contrasting reactions depending on the requirements of the movie.
  • numerical values of the psychoactive properties can be assigned to a range of emotional issues as well as a user's current context and emotional state gathered and known by the content component 120 . These properties can be tagged along numerical scales that measure the degree or intensity of the quality being measured.
  • FIG. 5 depicts examples of sliders that can be used to set values of the psychoactive tags on the image items.
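As a sketch only (the patent does not fix a numeric scale for the sliders of FIG. 5), a Ψ-tag record for an image might store each visual quality as a 0-10 slider value; the property names come from the visual qualities listed above, while the 0-10 scale and the function names are assumptions.

```python
# Visual qualities listed above; the 0-10 slider range is an assumption.
PSY_TAG_NAMES = ("color_warmth", "energy", "abstraction", "luminance",
                 "lushness", "moisture", "urbanity", "density", "order")

def make_psy_tags(**values):
    """Return a dict of psychoactive (Ψ) tag values clamped to the assumed 0-10 scale.
    Properties that do not apply to an image are simply left out of the dict."""
    tags = {}
    for name, value in values.items():
        if name not in PSY_TAG_NAMES:
            raise ValueError(f"unknown psychoactive property: {name}")
        tags[name] = max(0.0, min(10.0, float(value)))
    return tags

# e.g. a dark, calm, night-time city image
night_city = make_psy_tags(luminance=1.2, energy=3.0, urbanity=9.5, density=8.0)
```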
  • the content component 120 of the filmmaking engine 118 associates each content item in the content library 128 with one or more tags for the purpose of easy identification, organization, retrieval, and customization.
  • the assignment of tags/meta data and definition of fields for descriptive elements provides flexibility at implementation for the director component 124 .
  • a content item can be tagged as generic (default value assigned) or humorous (which should be used only when humor is appropriate).
  • a particular nature image may be tagged for all traditions and multiple issues.
  • a pair of (sports preference, country) can be used to tag a content item as football preferred for Italians.
  • the content component 120 will only retrieve a content item for the user where the tag of the content item matches the user's profile.
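A minimal sketch of that matching rule, assuming both the item tags and the user profile are simple key/value dictionaries (the field names are illustrative, not the patent's schema):

```python
def matches_profile(item_tags, user_profile):
    """Return True only when every tag on the content item is compatible with the
    user's profile, e.g. item_tags = {"tradition": "Islam", "humor": False}.
    Profile fields absent from the item's tags do not block retrieval."""
    return all(user_profile.get(key, value) == value for key, value in item_tags.items())

def retrieve_items(content_library, user_profile):
    """content_library is a stand-in for library 128: an iterable of (item, tags) pairs."""
    return [item for item, tags in content_library if matches_profile(tags, user_profile)]
```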
  • the content component 120 of the filmmaking engine 118 may tag and organize the content items in content library 128 using a content management system (CMS) with meta-tags and customized vocabularies.
  • the content component 120 may utilize the CMS terms and vocabularies to create its own meta-tags for content items and define content items through these meta-tags so that it may perform instant addition, deletion, or modification of tags.
  • the content component 120 may add a Dominant Color tag to an image when it was discovered during MME research that the dominant color of an image is important for smooth transitions between images.
  • the content component 120 of the filmmaking engine 118 may browse and retrieve the content items by one or more of topics, types of content items, dates collected, and by certain categories such as belief systems to build the content based on the user's profile and/or understanding of the items' “connections” with a topic or movie request submitted by the user.
  • the user's history of prior visits and/or community ratings may also be used as a filter to provide final selection of content items. For a non-limiting example, a sample music clip might be selected to be included in the content because it was encoded for a user who prefers motivational music in the morning.
  • the content component 120 may retrieve content items either from the content library 128 or, in case the relevant content items are not available there, identify content items with the appropriate properties over the Web and save them in the content library 128 so that these content items will be readily available for future use.
  • the content component 120 of the filmmaking engine 118 may retrieve and customize the content based on the user's profile or context in order to create personalized content tailored for the user's current need or request.
  • a content item can be selected based on many criteria including the ratings of the content item from users with profiles similar to the current user, recurrence (how long ago, if ever, did the user see this item), how similar is this item to other items the user has previously rated, and how well does the item fit the issue or purpose of the movie. For a non-limiting example, content items that did not appeal to the user in the past based on his/her feedback will likely be excluded.
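One way to combine those selection criteria (peer ratings, recurrence, similarity to previously rated items, and fit to the movie's purpose) is a weighted score; the linear form and the weights below are assumptions for illustration, not the patent's formula.

```python
def score_item(peer_rating, days_since_last_seen, similarity_to_liked, purpose_fit,
               weights=(0.4, 0.2, 0.2, 0.2)):
    """All inputs except days_since_last_seen are assumed normalized to 0..1 by the
    caller; days_since_last_seen is converted into a 0..1 recurrence factor here."""
    w_rating, w_recurrence, w_similarity, w_fit = weights
    recurrence = min(days_since_last_seen / 365.0, 1.0)   # favor items the user has not seen recently
    return (w_rating * peer_rating + w_recurrence * recurrence
            + w_similarity * similarity_to_liked + w_fit * purpose_fit)

# e.g. a well-rated item the user has never seen that fits the movie's issue closely
print(score_item(peer_rating=0.9, days_since_last_seen=365,
                 similarity_to_liked=0.7, purpose_fit=0.8))   # -> 0.86 (up to floating-point rounding)
```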
  • the user may simply choose “Get me through the day” from the topic list and the content component 120 will automatically retrieve and present content to the user based on the user's profile.
  • the content component 120 may automatically identify and retrieve content items relevant to the topic.
  • the director component 124 of the filmmaking engine 118 selects a multimedia script template from the script library 126 and creates a movie-like multimedia experience (a movie) by populating it with content items retrieved and customized by the content component 120.
  • each multimedia script template defines a timeline, which is a sequence of timing information for the corresponding content items to be composed as part of the multimedia content.
  • the multimedia script template provides guidelines for the times and content items in the multimedia experience and it can be authored by administrators with experience in filmmaking.
  • the director component 124 parses through the template to add in filmmaking techniques such as transition points tied to music track beat changes. Progression for images to achieve the desired result in the user's emotional state can also be effected in this stage.
  • the script template can be created either in the form of a template specified by an expert in movie creation or automatically by a script generating component 122 based on one or more rules from a rules library 130 .
  • the script generating component 122 generates a script template with content item placeholders for insertion of actual content items personalized by the content component 120 , wherein the content items inserted can be images, short text quotes, music or audio, and script dialogs.
  • the expert-authored script template may specify the start time, end time, and duration of the content item, whether the content item is repeatable or non-repeatable, how many times it should be repeated (if repeatable) as part of the script, or what the delay should be between repeats.
  • each template item (denoted by a number) indicates a position on the timeline at which a content item must be provided.
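A minimal sketch of what an expert-authored template item might look like, carrying the fields named above (start time, duration, repeatability); the class and field names are assumptions, not the patent's schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TemplateItem:
    """One numbered placeholder position on the script timeline."""
    position: int              # the template item number identifying the slot
    media_type: str            # e.g. "image", "text", "audio", "video", or "dialog"
    start: float               # start time, in seconds, on the timeline
    duration: float            # how long the content item is presented
    repeatable: bool = False   # whether the item may repeat within the script
    repeat_count: int = 0      # how many times to repeat, if repeatable
    repeat_delay: float = 0.0  # delay, in seconds, between repeats

@dataclass
class ScriptTemplate:
    name: str
    items: List[TemplateItem] = field(default_factory=list)   # ordered placeholder slots
```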
  • the multimedia script template is created by the script generating component 122 automatically based on rules from the rules library 130 .
  • An advantage of rule-based script template generation is that it can be easily modified by changing a rule. The rule change can then propagate to existing templates in order to generate new templates. For rules-based auto-generation of the script, or for occasions when audio files are selected dynamically (e.g., a viewer uploads his or her own song), the audio files will be analyzed and synchronization will be performed by the director component 124 as discussed below.
  • the director component 124 of the filmmaking engine 118 needs to create appropriately timed music, sound effects, and background audio.
  • it is assumed, for a non-limiting example, that the sounds of nature will occur when the scene is in the wilderness, and that subtle or dramatic changes in the soundtrack, such as a shift in tempo or beat, will be timed to a change in scenery (imagery) or dialog (text).
  • the director component 124 of the filmmaking engine 118 enables audio-driven timeline adjustment of transitions and presentations of content items for the template. More specifically, the director component 124 dynamically synchronizes the retrieved and/or customized multimedia content items such as images or video clips with an audio clip/track to create a script of movie-like content based on audio analysis and script timeline marking, before presenting the movie-like content to the user via the display component 106 of the user interaction engine 102 . First, the director component 124 analyzes the audio clip/file and identifies various audio markers in the file, wherein the markers mark the time where music transition points exist on a timeline of a script template.
  • the director component 124 then synchronizes the audio markers representing music tempo and beat change in the audio clip with images/videos, image/video color, and text items retrieved and identified by the content component 120 for overlay.
  • the director component 124 may apply audio/music analysis in multiple stages, first as a programmatic modification to existing script template timelines, and second as a potential rule criterion in the rule-based approach for script template generation.
  • the director component 124 of the filmmaking engine 118 identifies various points in a timeline of the script template, wherein the points can be adjusted based on the time or duration of a content item.
  • examples of such adjustment points along the timeline of the script template are depicted in FIGS. 6(a)-(b).
  • the director component 124 of the filmmaking engine 118 performs beat detection to identify the point in time (time index) at which each beat occurs in an audio file. Such detection is resilient to changes in tempo in the audio file and it identifies a series of time indexes, where each time index represents, in seconds, the time at which a beat occurs. The director component 124 may then use the time indexes to modify the item transition time, within a given window, which is a parameter that can be set by the director component 124 .
  • a script template specifies that an image begins at time index 15.5 with a window of ±2 seconds
  • the director component 124 may find the closest beat to 15.5 within the range of 13.5-17.5, and adjust the start time of the image to that time index as shown in FIG. 7 . The same adjustment may apply to each item transition time. If no beat is found within the window, the item transition time will not be adjusted.
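A sketch of that beat-snapping rule: move a scheduled transition to the closest detected beat inside the ±window and leave it unchanged when no beat falls in the window. The 15.5 s start time and ±2 s window mirror the example above; the beat times are made up.

```python
def snap_to_beat(start_time, beat_times, window):
    """Adjust an item transition time to the nearest detected beat within ±window
    seconds, or return the original time when no beat falls inside the window."""
    candidates = [beat for beat in beat_times if abs(beat - start_time) <= window]
    if not candidates:
        return start_time
    return min(candidates, key=lambda beat: abs(beat - start_time))

beats = [13.1, 14.0, 14.9, 15.8, 16.7]          # hypothetical beat time indexes, in seconds
print(snap_to_beat(15.5, beats, window=2.0))    # -> 15.8, the closest beat within 13.5-17.5
```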
  • the director component 124 of the filmmaking engine 118 performs tempo change detection to identify discrete segments of music in the audio file based upon the tempo of the segments. For a non-limiting example, a song with one tempo throughout, with no tempo changes, will have one segment. On the other hand, a song that alternates between 45 BPM and 60 BPM will have multiple segments as shown below, where segment A occurs from 0:00 seconds to 30:00 seconds into the song, and has a tempo of 45 BPM. Segment B begins at 30:01 seconds, when the tempo changes to 60 BPM, and continues until 45:00 seconds.
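A hedged sketch of how such tempo segments could be derived from detected beats: estimate the instantaneous tempo from inter-beat intervals and start a new segment whenever it shifts by more than a threshold. This is an illustrative approach under stated assumptions, not the patent's specific detection algorithm.

```python
def tempo_segments(beat_times, threshold_bpm=5.0):
    """Split a track into (start_time, end_time, bpm) segments from detected beat times.
    A new segment begins when the instantaneous tempo moves by more than threshold_bpm."""
    if len(beat_times) < 2:
        return []
    segments, seg_start, seg_bpm = [], beat_times[0], None
    for prev, curr in zip(beat_times, beat_times[1:]):
        bpm = 60.0 / (curr - prev)                  # instantaneous tempo from one inter-beat interval
        if seg_bpm is None:
            seg_bpm = bpm
        elif abs(bpm - seg_bpm) > threshold_bpm:    # tempo change detected: close the current segment
            segments.append((seg_start, prev, round(seg_bpm)))
            seg_start, seg_bpm = prev, bpm
    segments.append((seg_start, beat_times[-1], round(seg_bpm)))
    return segments
```

A song alternating between 45 BPM and 60 BPM would yield segments such as (0.0, 30.0, 45) and (30.0, 45.0, 60), matching the segment A/segment B example above.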
  • the director component 124 of the filmmaking engine 118 performs measure detection, which attempts to extend the notion of beat detection to determine when each measure begins in the audio file. For a non-limiting example, if a piece of music is in 4/4 time, then each measure contains four beats, where the beat that occurs first in the measure is more significant than a beat that occurs intra-measure.
  • the duration of a measure can be used to set the item transition duration. FIG. 9 shows the adjustment of the item beginning transition to coincide with the duration of a measure. A similar adjustment would occur with the ending transition.
  • the director component 124 of the filmmaking engine 118 performs key change detection to identify the time index at which a song changes key in the audio file, for a non-limiting example, from G-major to D-minor. Typically such key change may coincide with the beginning of a measure. The time index of a key change can then be used to change the item transition time as shown in FIG. 10 .
  • the director component 124 of the filmmaking engine 118 performs dynamics change detection to determine how loudly a section of music in the audio file is played (FIG. 11 depicts an example of rules-based synchronization based on dynamics change detection).
  • the director component 124 of the filmmaking engine 118 specifies an order of precedence for audio markers to avoid potential for conflict, as many of the audio markers described above can affect the same adjustment points. In the case where two or more markers apply in the same situation, one marker will take precedence over the others according to a defined precedence schedule.
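The markers themselves can be represented uniformly once the audio analysis is done. The sketch below keeps the marker kinds named in this description but treats the precedence order as a parameter, since the patent's specific schedule is not reproduced here; the example order is purely hypothetical.

```python
from dataclasses import dataclass

MARKER_KINDS = ("beat", "measure", "tempo_change", "key_change", "dynamics_change")

@dataclass
class AudioMarker:
    kind: str           # one of MARKER_KINDS
    time_index: float   # seconds into the audio file where the transition point occurs

def pick_marker(markers, precedence):
    """When two or more markers would drive the same adjustment point, keep the one
    whose kind appears earliest in `precedence`."""
    return min(markers, key=lambda marker: precedence.index(marker.kind))

competing = [AudioMarker("beat", 15.8), AudioMarker("key_change", 15.6)]
order = ["key_change", "tempo_change", "dynamics_change", "measure", "beat"]   # hypothetical schedule
print(pick_marker(competing, order).kind)   # -> "key_change"
```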
  • the director component 124 of the filmmaking engine 118 adopts techniques to take advantage of encoded meta-information in images to create a quality movie experience, wherein such techniques include but are not limited to, transitioning, zooming in to a point, panning to a point (such as panning to a seashell on a beach), panning in a direction, linkages to music, sound, and other psychological cues, and font treatment to set default values for text display such as font treatments including font family, size, color, shadow, and background color for each type of text displayed. Certain images may naturally lend themselves to be zoomed into a specific point to emphasize its psychoactive tagging.
  • the director component 124 may slowly zoom into a still pond by a meadow. Note that the speed of movement and start-end times may be configurable or calculated by the director component 124 to ensure the timing markers for the audio track transitions are smooth and consistent.
  • the director component 124 of the filmmaking engine 118, replicating a plurality of decisions made by a human film editor, generates and inserts one or more progressions of images from the content library 128 during creation of the movie to effectuate an emotional state-change in the user.
  • the images used for the progressions are tagged for their psychoactive properties as discussed above.
  • Such progression of images (the “Narrative”) in quality filmmaking tells a parallel story which the viewer may or may not be consciously aware of and enhances either the plot (in fiction films) or the sequence of information (in non-fiction films or news reports).
  • the director component 124 of the filmmaking engine 118 also adopts techniques which, although often subtle and not necessarily recognizable by the viewer, contribute to the overall feel of the movie and engender a view of quality and polish.
  • the director component 124 of the filmmaking engine 118 creates a progression of images that mimics the internal workings of the psyche rather than the external workings of concrete reality.
  • the logic of a dream state varies from the logic of a chronological sequence since dream states may be non-linear and make intuitive associations between images while chronological sequences are explicit in their meaning and purpose.
  • the director component 124 enables the user to "drive" the construction of the image progressions by identifying his/her current and desired feeling state as discussed in detail below. Compared to explicit designation of a specific image progression to use, such an approach allows multiple progressions of images to be tailored specifically to the feeling-state of each user, which gives the user a unique and meaningful experience with each piece of movie-like content.
  • FIG. 12 depicts a flowchart of an example of a process to create an image progression in a movie based on psychoactive properties of the images.
  • Although FIG. 12 depicts functional steps in a particular order for purposes of illustration, the process is not limited to any particular order or arrangement of steps.
  • One skilled in the relevant art will appreciate that the various steps portrayed in this figure could be omitted, rearranged, combined and/or adapted in various ways.
  • the flowchart 1200 starts at block 1202 where psychoactive properties and their associated numerical values are tagged and assigned to images in the content library 128 . Such assignment can be accomplished by adjusting the sliders of psychoactive tags shown in FIG. 5 .
  • the flowchart 1200 continues to block 1204 where two images are selected by a user as starting and ending points respectively of a range for an image progression based on the psychoactive values of the images.
  • the first (starting) image selected from a group of sample images best represents the user's current feeling/emotional state, while the second (ending) image selected from a different set of images best represents the user's desired feeling/emotional state.
  • a user may select a dark image that has psychoactive value of luminance of 1.2 as the starting point and a light image that has psychoactive value of luminance of 9.8 as the ending point.
  • Examples of image progressions based on psychoactive tagging include rural to urban, ambiguous to concrete, static to kinetic, micro to macro, barren to lush, seasons (from winter to spring), and time of day (from morning to late night).
  • the flowchart 1200 continues to block 1206 where the numeric values of the psychoactive properties (Ψ-tags) of the two selected images, beginning with the current feeling state and ending with the desired feeling state, are evaluated to set a range.
  • the flowchart 1200 continues to block 1208 where a set of images whose psychoactive properties have numeric values progressing smoothly within the range from the beginning to the end is selected.
  • the images progress from one with Ψ-tags representing the user's current feeling state through a gradual progression of images whose Ψ-tags move closer and closer to the user's desired feeling state.
  • the number of images selected for the progression may be any number larger than two, but should be enough to ensure a smooth gradation of the progression from the starting point to the ending point.
  • the flowchart 1200 ends at block 1210 where the selected images are inserted into the image progression in the movie.
  • the director component 124 of the filmmaking engine 118 detects whether there is a gap in the progression of images where some images with the desired psychoactive properties are missing. If such a gap does exist, the director component 124 then proceeds to research, mark, and collect more images, either from the content library 128 or over the Internet, in order to fill the gap. For a non-limiting example, if the director component 124 tries to build a progression of images that is both morning-to-night and barren-to-lush, but there are not any (or many) sunset-over-the-rainforest images, the director component 124 will detect such an image gap and include more images in the content library 128 in order to fill it.
  • the director component 124 of the filmmaking engine 118 builds a vector of psychoactive values (Ψ-tags) for each image tagged along multiple psychoactive properties.
  • the Ψ-tag vector is a list of numbers serving as a numeric representation of that image, where each number in the vector is the value of one of the Ψ-tags of the image.
  • the Ψ-tag vector of an image chosen by the user corresponds to the user's emotional state. For a non-limiting example, if the user is angry and selects an image with a Ψ-tag vector of [2, 8, 8.5, 2 . . . ], other images with Ψ-tag vectors of similar Ψ-tag values may also reflect his/her emotional state of anger.
  • the director component 124 determines a series of "goal" intermediate Ψ-tag vectors representing the ideal set of Ψ-tags desired in the image progression from the user's current state to the target state. Images that match these intermediate Ψ-tag vectors will correspond, for this specific user, to a smooth progression from his/her current emotional state to his/her target emotional state (e.g., from angry to peaceful).
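One simple way to derive such a series of goal vectors is linear interpolation between the Ψ-tag vector of the user's chosen starting image and that of the ending image; this is a sketch of that assumption, not necessarily the patent's exact method.

```python
def goal_vectors(start_vec, end_vec, steps):
    """Linearly interpolate between the starting and ending Ψ-tag vectors to produce
    `steps` intermediate goal vectors (the start and end vectors themselves are excluded)."""
    assert len(start_vec) == len(end_vec)
    goals = []
    for k in range(1, steps + 1):
        t = k / (steps + 1)
        goals.append([s + t * (e - s) for s, e in zip(start_vec, end_vec)])
    return goals

# e.g. a luminance-only progression from the dark (1.2) to the light (9.8) image mentioned earlier
print(goal_vectors([1.2], [9.8], steps=3))   # -> [[3.35], [5.5], [7.65]] (up to rounding)
```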
  • the director component 124 identifies at least two types of "significant" Ψ-tags in a Ψ-tag vector as measured by change in values during image progressions: (1) a Ψ-tag of the images changes significantly (e.g., a change in value >50%) where, e.g., the images progress from morning → noon → night, or from high altitude → low altitude, etc.; (2) a Ψ-tag of the images remains constant (e.g., a change in value <10%) where, e.g., the images are all equally luminescent or equally urban, etc.
  • in some cases, a Ψ-tag is regarded as "not applicable to this image."
  • a picture of a clock has no relevance for season (unless it is in a field of daisies).
  • that Ψ-tag is left out of the vector of the image since it is not relevant for this image and thus it will not be relevant for the progression.
  • the Ψ-tags that remain in the Ψ-tag vector are "active" (and may or may not be "significant").
  • the director component 124 selects the series of images from the content library 128 by comparing their Ψ-tag vectors with the "goal" intermediate Ψ-tag vectors. For the selection of each image, the comparison can be based on a measure of Euclidean distance between two Ψ-tag vectors, the Ψ-tag vector (p1, p2, . . . , pn) of a candidate image and one of the goal Ψ-tag vectors (q1, q2, . . . , qn), in an n-dimensional vector space of multiple Ψ-tags, in order to identify the image whose Ψ-tag vector is closest along all dimensions to the goal Ψ-tag vector.
  • the Euclidean distance between the two vectors can be calculated as:
  • $\sqrt{\sum_{i=1}^{n} (p_i - q_i)^2}$
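A minimal sketch of the selection step under the distance above: for each goal vector, choose the library image whose Ψ-tag vector is closest in Euclidean distance (the library structure here is an illustrative assumption).

```python
import math

def euclidean(p, q):
    """Distance between two equal-length Ψ-tag vectors: the square root of the sum of squared differences."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def build_progression(goal_vecs, library):
    """For each goal Ψ-tag vector, pick the image in `library` (a list of
    (image_id, psy_vector) pairs) whose vector has the smallest Euclidean distance to it."""
    progression = []
    for goal in goal_vecs:
        image_id, _vector = min(library, key=lambda entry: euclidean(entry[1], goal))
        progression.append(image_id)
    return progression
```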
  • the director component 124 selects the images by comparing the entire Ψ-tag vectors in unison even though each of the Ψ-tags in the vectors can be evaluated individually.
  • an image can be evaluated for “high energy” or “low energy” independently from “high density” or “low density”.
  • the association between the image and an emotional state is made based on the entire vector of Ψ-tags, not just each of the individual Ψ-tags, since "anger" is not only associated with "high energy" but also associated with the values of all Ψ-tags considered in unison.
  • the association between an emotional state and a Ψ-tag vector is specific to each individual user based on how he/she reacts to images, as one user's settings for Ψ-tags at his/her emotional state of peacefulness do not necessarily correspond to another user's settings for Ψ-tags at his/her emotional state of peacefulness.
  • the user interaction engine 102 enables the user to login and submit a topic or situation via the user interface 104 to have a related movie created.
  • the event generation engine 108 identifies a triggering event for movie generation based on a published calendar and/or the user's profile. If the user is visiting for the first time, the profile engine 112 may interview the user with a set of questions in order to establish a profile of the user that accurately reflects the user's interests or concerns.
  • Upon receiving the topic/situation from the user interaction engine 102 or a notification of a triggering event from the event generation engine 108, the filmmaking engine 118 identifies, retrieves, and customizes content items appropriately tagged and organized in the content library 128 based on the profile of the user. The filmmaking engine 118 then selects a multimedia script template from the script library 126 and creates a movie-like multimedia experience (the movie) by populating the script template with the retrieved and customized content items. The filmmaking engine 118 first analyzes an audio clip/file to identify various audio markers in the file, wherein the markers mark the time where music transition points exist on the timeline of the script template.
  • the filmmaking engine 118 then generates movie-like content by synchronizing the audio markers representing adjustment points and changes in beat, music tempo, measure, key, and dynamics in the audio clip with the images/videos, image/video color, and text items retrieved and customized by the filmmaking engine 118 for overlay.
  • the filmmaking engine 118 adopts various techniques including transitioning, zooming in to a point, panning to a point, panning in a direction, font adjustment, and image progression.
  • FIG. 13 depicts a flowchart of an example of a process to support algorithmic movie generation. Although this figure depicts functional steps in a particular order for purposes of illustration, the process is not limited to any particular order or arrangement of steps. One skilled in the relevant art will appreciate that the various steps portrayed in this figure could be omitted, rearranged, combined and/or adapted in various ways.
  • the flowchart 1300 starts at block 1302 where a triggering event is identified or a user is enabled to submit a topic or situation to which the user intends to seek help or counseling and have a related movie created.
  • the submission process can be done via a user interface and be standardized via a list of pre-defined topics/situations organized by categories.
  • the flowchart 1300 continues to block 1304 where a profile of the user is established and maintained if the user is visiting for the first time or the user's current profile is otherwise thin. At least a portion of the profile can be established by initiating interview questions to the user targeted at soliciting information on his/her personal interests and/or concerns.
  • the profile of the user can be continuously updated with the topics raised by the user and the scripts of content presented to him/her.
  • the flowchart 1300 continues to block 1306 where a set of multimedia content items are maintained, tagged, and organized properly in a content library for easy identification, retrieval, and customization.
  • the flowchart 1300 continues to block 1308 where one or more multimedia items are identified, retrieved, and customized based on the profile and/or current context of the user in order to create personalized content tailored for the user's current need or situation.
  • the flowchart 1300 continues to block 1310 where a multimedia script template is selected to be populated with the retrieved and customized content items.
  • the flowchart 1300 continues to block 1312 where an audio file is analyzed to identify various audio markers representing the time where music transition points exist along a timeline of a script template.
  • the audio markers can be identified by identifying adjustment points in the timeline, beats, tempo changes, measures, key changes, and dynamics changes in the audio file.
  • the flowchart 1300 ends at block 1314 where the movie-like content is generated by synchronizing the audio markers of the audio file with retrieved and customized content items.
  • One embodiment may be implemented using a conventional general purpose or a specialized digital computer or microprocessor(s) programmed according to the teachings of the present disclosure, as will be apparent to those skilled in the computer art.
  • Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
  • the invention may also be implemented by the preparation of integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be readily apparent to those skilled in the art.
  • One embodiment includes a computer program product which is a machine readable medium (media) having instructions stored thereon/in which can be used to program one or more hosts to perform any of the features presented herein.
  • the machine readable medium can include, but is not limited to, one or more types of disks including floppy disks, optical discs, DVD, CD-ROMs, micro drive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs) or any type of media or device suitable for storing instructions and/or data.
  • the present invention includes software for controlling both the hardware of the general purpose/specialized computer or microprocessor, and for enabling the computer or microprocessor to interact with a human viewer or other mechanism utilizing the results of the present invention.
  • software may include, but is not limited to, device drivers, operating systems, execution environments/containers, and applications.

Abstract

A new approach is proposed that contemplates systems and methods to combine highly targeted and customized content items with algorithmic filmmaking techniques to create a film-quality, personalized multimedia experience (MME)/movie for a user. First, a rich content database is created and embellished with meaningful, accurate, and properly organized multimedia content items tagged with meta-information. Second, a software agent interacts with the user to create, learn, and exploit the user's context to determine which content items need to be retrieved and how they should be customized in order to create a script of content to meet the user's current need. Finally, retrieved and/or customized multimedia content items such as text, images, or video clips are utilized to create a script of movie-like content using automatic filmmaking techniques such as audio synchronization, image control and manipulation, and appropriately customized dialog and content.

Description

    RELATED APPLICATIONS
  • This application is related to U.S. patent application Ser. No. 12/460,522, filed Jul. 20, 2009 and entitled "A system and method for identifying and providing user-specific psychoactive content," by Hawthorne et al., which is hereby incorporated herein by reference.
  • BACKGROUND
  • With the growing volume of content available over the Internet, people are increasingly seeking content online for useful information to address their problems as well as for a meaningful emotional and/or psychological experience. A multimedia experience (MME) is a movie-like presentation of a script of content created for and presented to an online user, preferably based on his/her current context. Here, the content may include one or more content items of a text, an image, a video, or audio clip. The user's context may include the user's profile, characteristics, desires, his/her rating of content items, and history of the user's interactions with an online content vendor/system (e.g., the number of visits by the user).
  • Due to the multimedia nature of the content, it is often desirable for the online content vendor to simulate the qualities found in motion pictures and create "movie-like" content, so that the user can enjoy an MME with content items including music, text, images, and videos as a backdrop. While creating simple Adobe Flash files and making "movies" with minimal filmmaking techniques from a content database is straightforward, applying these movies to a context of personal interaction is complex. To create a movie that connects with the user on a deeply personal, emotional, and psychological level, or an advertising application that seeks to evoke other emotions in the user, traditional and advanced filmmaking techniques/effects need to be developed and exploited. Such techniques include, but are not limited to, transitions tied to image changes such as a fade in or out, gently scrolling text and/or images to a defined point of interest, color transitions in imagery, and transitions on changes in the music's beat or tempo. While many users may not consciously notice these effects, they can be profound in creating a personal or emotional reaction by the user to the generated MME.
  • The foregoing examples of the related art and limitations related therewith are intended to be illustrative and not exclusive. Other limitations of the related art will become apparent upon a reading of the specification and a study of the drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 depicts an example of a system diagram to support algorithmic movie generation.
  • FIG. 2 illustrates an example of various information that may be included in a user's profile.
  • FIG. 3 depicts a flowchart of an example of a process to establish the user's profile.
  • FIG. 4 illustrates an example of various types of content items and the potential elements in each of them.
  • FIG. 5 depicts examples of sliders that can be used to set values of psychoactive tags on image items.
  • FIGS. 6( a)-(b) depict examples of adjustment points along a timeline of a content script template.
  • FIG. 7 depicts an example of adjusting the start time of a content item based on beat detection.
  • FIG. 8 depicts an example of rules-based synchronization based on tempo detection.
  • FIG. 9 depicts an example of adjustment of the item beginning transition to coincide with the duration of a measure.
  • FIG. 10 depicts an example of change of item transition time based on key change detection.
  • FIG. 11 depicts an example of rules-based synchronization based on dynamics change detection.
  • FIG. 12 depicts a flowchart of an example of a process to create an image progression in a movie based on psychoactive properties of the images.
  • FIG. 13 depicts a flowchart of an example of a process to support algorithmic movie generation.
  • DETAILED DESCRIPTION OF EMBODIMENTS
  • The approach is illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” or “some” embodiment(s) in this disclosure are not necessarily to the same embodiment, and such references mean at least one.
  • A new approach is proposed that contemplates systems and methods to create a film-quality, personalized multimedia experience (MME)/movie composed of one or more highly targeted and customized content items using algorithmic filmmaking techniques. Here, each of the content items can be individually identified, retrieved, composed, and presented to a user online as part of the movie. First, a rich content database is created and embellished with meaningful, accurate, and properly organized multimedia content items tagged with meta-information. Second, a software agent interacts with the user to create, learn, and explore the user's context to determine which content items need to be retrieved and how they should be customized in order to create a script of content to meet the user's current need. Finally, the retrieved and/or customized multimedia content items such as text, images, or video clips are utilized by the software agent to create a script of movie-like content via automatic filmmaking techniques such as audio synchronization, image control and manipulation, and appropriately customized dialog and content. Additionally, one or more progressions of images can also be generated and inserted during creation of the movie-like content to effectuate an emotional state-change in the user. Under this approach, the audio and visual (images and videos) content items are the two key elements of the content, each having specific appeals to create a deep personal, emotional, and psychological experience for a user in need. Such experience can be amplified for the user with the use of filmmaking techniques so that the user can have an experience that helps him/her focus on interaction with the content instead of distractions he/she may encounter at the moment.
  • Such a personalized movie making approach has numerous potential commercial applications that include but are not limited to advertising, self-help, entertainment, and education. The capability to automatically create a movie from content items in a content database personalized to a user can also be used, for a non-limiting example, to generate video essays for a topic such as a news event or a short history lesson to replace the manual and less-compelling photo essays currently used on many Internet news sites.
  • FIG. 1 depicts an example of a system diagram to support algorithmic movie generation. Although the diagrams depict components as functionally separate, such depiction is merely for illustrative purposes. It will be apparent that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware components. Furthermore, it will also be apparent that such components, regardless of how they are combined or divided, can execute on the same host or multiple hosts, and wherein the multiple hosts can be connected by one or more networks.
  • In the example of FIG. 1, the system 100 includes a user interaction engine 102, which includes at least a user interface 104 and a display component 106; an event generation engine 108, which includes at least an event component 110; a profile engine 112, which includes at least a profiling component 114; a profile library (database) 116 coupled to the event generation engine 108 and the profile engine 112; a filmmaking engine 118, which includes at least a content component 120, a script generating component 122, and a director component 124; a script template library (database) 126, a content library (database) 128, and a rules library (database) 130, all coupled to the filmmaking engine 118; and a network 132.
  • As used herein, the term engine refers to software, firmware, hardware, or other component that is used to effectuate a purpose. The engine will typically include software instructions that are stored in non-volatile memory (also referred to as secondary memory). When the software instructions are executed, at least a subset of the software instructions is loaded into memory (also referred to as primary memory) by a processor. The processor then executes the software instructions in memory. The processor may be a shared processor, a dedicated processor, or a combination of shared or dedicated processors. A typical program will include calls to hardware components (such as I/O devices), which typically requires the execution of drivers. The drivers may or may not be considered part of the engine, but the distinction is not critical.
  • As used herein, the term library or database is used broadly to include any known or convenient means for storing data, whether centralized or distributed, relational or otherwise.
  • In the example of FIG. 1, each of the engines and libraries can run on one or more hosting devices (hosts). Here, a host can be a computing device, a communication device, a storage device, or any electronic device capable of running a software component. For non-limiting examples, a computing device can be but is not limited to a laptop PC, a desktop PC, a tablet PC, an iPod, an iPhone, a PDA, or a server machine. A storage device can be but is not limited to a hard disk drive, a flash memory drive, or any portable storage device. A communication device can be but is not limited to a mobile phone.
  • In the example of FIG. 1, the user interaction engine 102, the event generation engine 108, the profile engine 112, and the filmmaking engine 118 each has a communication interface (not shown), which is a software component that enables the engines to communicate with each other following certain communication protocols, such as TCP/IP protocol. The communication protocols between two devices are well known to those of skill in the art.
  • In the example of FIG. 1, the network 132 enables the user interaction engine 102, the event generation engine 108, the profile engine 112, and the filmmaking engine 118 to communicate and interact with each other. Here, the network 132 can be a communication network based on certain communication protocols, such as TCP/IP protocol. Such network can be but is not limited to, internet, intranet, wide area network (WAN), local area network (LAN), wireless network, Bluetooth, WiFi, and mobile communication network. The physical connections of the network and the communication protocols are well known to those of skill in the art.
  • In the example of FIG. 1, the user interaction engine 102 is configured to enable a user to submit, via the user interface 104, a topic or situation for which the user intends to seek help or counseling or to have a related movie created, and to present to the user, via the display component 106, a script of content relevant to addressing the topic or the movie request submitted by the user. Here, the topic (problem, question, interest, issue, event, condition, or concern, hereinafter referred to as a topic) of the user provides the context for the content that is to be presented to him/her. The topic can be related to one or more of a personal, emotional, psychological, relational, physical, practical, or any other need of the user. The creative situation can be derived from databases of specific content. For example, a wildlife conservation organization may create a specific database of images of wildlife and landscapes with motivational and conservation messages. In some embodiments, the user interface 104 can be a Web-based browser, which allows the user to access the system 100 remotely via the network 132.
  • In an alternate embodiment in the example of FIG. 1, the event generation engine 108 determines an event that is relevant to the user and/or the user's current context, wherein such event would trigger the generation of a movie by the filmmaking engine 118 even without an explicit inquiry from the user via the user interaction engine 102. Here, the triggering event can be but is not limited to a birthday, a tradition, or a holiday (such as Christmas, Ramadan, Easter, Yom Kippur). Such triggering event can be identified by the event component 110 of the event generation engine 108 based on a published calendar as well as information of the user's profile and history maintained in the profile library 116 discussed below.
  • In some embodiments, the event component 110 of the event generation engine 108 may be alerted by a news feed such as RSS to an event of interest to the user and may in turn inform the filmmaking engine 118 to create a movie or specific content in a movie for the user. The filmmaking engine 118 receives such notification from the event generation engine 108 whenever an event that might have an impact on the automatically generated movie occurs. For a non-limiting example, if the user is seeking wisdom and is strongly identified with a tradition, then the event component 110 may notify the filmmaking engine 118 of important observances such as Ramadan for a Muslim, wherein the filmmaking engine 118 may decide to use such information or not when composing a movie. For another non-limiting example, the most recent exciting win by a sports team of a university may trigger the event component 110 to provide notification to the filmmaking engine 118 to include relevant text, imagery or video clips of such win into a sports highlight movie of the university being specifically created for the user.
  • In the example of FIG. 1, the profile engine 112 establishes and maintains a profile of the user in the profile library 116 via the profiling component 114 for the purpose of identifying user context for generating and customizing the content to be presented to the user. The profile may contain at least the following information about the user: gender and date of birth, parental status, marital status, universities attended, and relationship status, as well as his/her current interests, hobbies, income level, and habits; psycho-emotional information such as his/her current issues and concerns, psychological, emotional, and religious traditions, belief system, and degree of adherence and influences; community information that defines how the user interacts with the online community of experts and professionals; and other information the user is willing to share. FIG. 2 illustrates an example of various information that may be included in a user profile.
  • In some embodiments, the profile engine 112 may establish the profile of the user by initiating one or more questions during pseudo-conversational interactions with the user via the user interaction engine 102 for the purpose of soliciting and gathering at least part of the information for the user profile listed above. Here, such questions focus on the aspects of the user's life that are not available through other means. The questions initiated by the profile engine 112 may focus on the personal interests or the emotional and/or psychological dimensions as well as dynamic and community profiles of the user. For a non-limiting example, the questions may focus on the user's personal interest, which may not be truly obtained by simply observing the user's purchasing habits.
  • In some embodiments, the profile engine 112 updates the profile of the user via the profiling component 114 based on the prior history/record and dates of content viewing for one or more of:
      • topics that have been raised by the user;
      • relevant content that has been presented to the user;
      • script templates that have been used to generate and present the content to the user;
      • feedback from the user and other users about the content that has been presented to the user.
  • In the example of FIG. 1, the profile library 116 is embedded in a computer readable medium and, in operation, maintains a set of user profiles of the users. Once the content has been generated and presented to a user, the profile of the user stored in the profile library 116 can be updated to include the topic submitted by the user as well as the content presented to him/her as part of the user history. If the user optionally provides feedback on the content, the profile of the user can also be updated to include the user's feedback on the content.
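  • For illustration only, the following is a minimal sketch of how such a user profile record and its viewing history might be represented in software; the field names, types, and history structure are assumptions made for this example and are not prescribed by the system described above.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class HistoryEntry:
    """One prior interaction: topic raised, content presented, and optional feedback."""
    viewed_on: date
    topic: str
    content_item_ids: List[str]
    script_template_id: Optional[str] = None
    feedback_rating: Optional[int] = None   # e.g., 1-5; None if no feedback was given

@dataclass
class UserProfile:
    user_id: str
    gender: Optional[str] = None
    date_of_birth: Optional[date] = None
    marital_status: Optional[str] = None
    traditions: List[str] = field(default_factory=list)      # e.g., ["Buddhism"]
    interests: List[str] = field(default_factory=list)
    current_issues: List[str] = field(default_factory=list)  # e.g., ["job loss"]
    history: List[HistoryEntry] = field(default_factory=list)

    def record_presentation(self, entry: HistoryEntry) -> None:
        """Append a viewing record so later selections can use recurrence and feedback."""
        self.history.append(entry)
```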
  • FIG. 3 depicts a flowchart of an example of a process to establish the user's profile. Although this figure depicts functional steps in a particular order for purposes of illustration, the process is not limited to any particular order or arrangement of steps. One skilled in the relevant art will appreciate that the various steps portrayed in this figure could be omitted, rearranged, combined and/or adapted in various ways.
  • In the example of FIG. 3, the flowchart 300 starts at block 302 where identity of the user submitting a topic for help or counseling is established. If the user is a first time visitor, the flowchart 300 continues to block 304 where the user is registered, and the flowchart 300 continues to block 306 where a set of interview questions are initiated to solicit information from the user for the purpose of establishing the user's profile. The flowchart 300 ends at block 308 where the profile of the user is provided to the filmmaking engine 118 for the purpose of retrieving and customizing the content relevant to the topic.
  • In the example of FIG. 1, the content library 128, serving as a media "book shelf", maintains a collection of multimedia content items as well as definitions, tags, resources, and presentation scripts of the content items. The content items are appropriately tagged, categorized, and organized in the content library 128 in a richly described taxonomy with numerous tags and properties by the content component 120 of the filmmaking engine 118 to enable access and browsing of the content library 128 in order to make intelligent and context-aware selections. For a non-limiting example, the content items in the content library 128 can be organized by a flexible, emotionally and/or psychologically oriented taxonomy for classification and identification, including terms such as Christianity, Islam, Hinduism, Buddhism, and secular beliefs. The content items can also be tagged with an issue such as relationship breakup, job loss, death, or depression. Note that the tags for traditions and issues are not mutually exclusive. There may also be additional tags for additional filtering, such as gender and humor.
  • Here, each content item in the content library 128 can be, but is not limited to, a media type of a (displayed or spoken) text (for non-limiting examples, an article, a short text item for quote, a contemplative text such as a personal story or essay, a historical reference, sports statistics, a book passage, or a medium reading or longer quote), a still or moving image (for a non-limiting example, component imagery capable of inducing a shift in the emotional state of the viewer), a video clip (including clips from videos that can be integrated into or shown as part of the movie), an audio clip (for a non-limiting example, a piece of music or sounds from nature or a university sports song), and other types of content items from which a user can learn information or be emotionally impacted, ranging from five thousand years of sacred scripts and emotional and/or psychological texts to modern self-help and non-religious content such as rational thought and secular content. Here, each content item can be provided by another party or created or uploaded by the user him/herself.
  • In some embodiments, each of a text, image, video, and audio item can include one or more elements of: title, author (name, unknown, or anonymous), body (the actual item), source, type, and location. For a non-limiting example, a text item can include a source element of one of literary, personal experience, psychology, self help, and religious, and a type element of one of essay, passage, personal story, poem, quote, sermon, speech, historical event description, sports statistic, and summary. For another non-limiting example, a video, an audio, and an image item can all include a location element that points to the location (e.g., file path or URL) or access method of the video, audio, or image item. In addition, an audio item may also include elements on album, genre, musician, or track number of the audio item as well as its audio type (music or spoken word). FIG. 4 illustrates an example of various types of content items and the potential elements in each of them.
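  • As an illustrative sketch only, the element structure described above could be modeled roughly as follows; the class and attribute names are assumptions made for this example rather than a required schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ContentItem:
    """Elements common to text, image, video, and audio items."""
    item_id: str
    title: str
    author: str                       # a name, "unknown", or "anonymous"
    media_type: str                   # "text", "image", "video", or "audio"
    tags: List[str] = field(default_factory=list)   # traditions, issues, humor, etc.

@dataclass
class TextItem(ContentItem):
    body: str = ""                    # the actual text
    source: Optional[str] = None      # e.g., "literary", "self help", "religious"
    text_type: Optional[str] = None   # e.g., "quote", "essay", "personal story"

@dataclass
class AudioItem(ContentItem):
    location: str = ""                # file path or URL of the audio
    album: Optional[str] = None
    genre: Optional[str] = None
    track_number: Optional[int] = None
    audio_type: str = "music"         # "music" or "spoken word"
```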
  • In some embodiments, a text item can be used for displaying quotes, which are generally short extracts from a longer text or a short text such as an observation someone has made. Non-limiting examples include Gandhi: "Be the change you wish to see in the world," and/or extracts from sacred texts such as the Book of Psalms from the Bible. Quotes can be displayed in a multimedia movie for a short period of time to allow contemplation, comfort, or stimulation. For a non-limiting example, Super Bowl statistics from American football can be displayed while a user is watching a compilation of sporting highlights for his or her favorite team.
  • In some embodiments, a text item can be used in a long format for contemplation or for assuming a voice in communication with the user, for non-limiting examples, to explain or instruct a practice. Here, long format represents more information (e.g., exceeding 200 words) than can be delivered on a single screen while the multimedia movie is in motion. Examples of long format text include but are not limited to personal essays on a topic or the description of or instructions for an activity such as a meditation or yoga practice.
  • In some embodiments, a text item can be used to create a conversational text (e.g., a script dialog) between the user and the director component 124. The dialog can be used with meta-tags to insert personal, situation-related, or time-based information into the movie. For non-limiting examples, a dialog can include a simple greeting with the user's name (e.g., Hello Mike, Welcome Back to the System), a happy holiday message for a specific holiday related to a user's spiritual or religious tradition (e.g., Happy Hanukah), or recognition of a particular situation of the user (e.g., sorry your brother is ill).
  • In some embodiments, an audio item can include music, sound effects, or spoken word. For a non-limiting example, an entire song can be used as the soundtrack for a shorter movie. The sound effects may include items such as nature sounds, water, and special-effects audio support tracks such as breaking glass or machine sounds. Spoken word may include speeches, audio books (entire or passages), and spoken quotes.
  • In some embodiments, image items in the content library 128 can be characterized and tagged, either manually or automatically, with a number of psychoactive properties ("Ψ-tags") for their inherent characteristics that are known, or presumed, to affect the emotional state of the viewer. Here, the term "Ψ-tag" is an abbreviated form of "psychoactive tag," since it is psychologically active, i.e., pertinent for association between tag values and psychological properties. These Ψ-tagged image items can subsequently be used to create emotional responses or connections with the user via a meaningful image progression as discussed later. These psychoactive properties mostly depend on the visual qualities of an image rather than its content qualities. Here, the visual qualities may include but are not limited to Color (e.g., Cool-to-Warm), Energy, Abstraction, Luminance, Lushness, Moisture, Urbanity, Density, and Degree of Order, while the content qualities may include but are not limited to Age, Altitude, Vitality, Season, and Time of Day. For a non-limiting example, images may convey energy or calmness. When a movie is meant to lead to calmness and tranquility, imagery can be selected to transition with the audio or music track accordingly. Likewise, if an inspirational movie is made to show athletes preparing for the winter Olympics, imagery of excellent performances, teamwork, and success is important. Thus, the content component 120 may tag a night image of a city, with automobile lights forming patterns across the entire image, differently from a sunset image over a desert scene with flowing sand and subtle differences in color and light. Note that dominant colors can be part of image assessment and analysis, as color transitions can provide soothing or sharply contrasting reactions depending on the requirements of the movie.
  • In some embodiments, numerical values of the psychoactive properties can be assigned to a range of emotional issues as well as a user's current context and emotional state gathered and known by the content component 120. These properties can be tagged along numerical scales that measure the degree or intensity of the quality being measured. FIG. 5 depicts examples of sliders that can be used to set values of the psychoactive tags on the image items.
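  • Purely as an illustration, the slider-set values might be stored per image as a simple mapping; the 0-10 scale and the exact property names below are assumptions based on the examples in this description.

```python
# Hypothetical psychoactive (Ψ-tag) values for one image, on an assumed 0-10 scale.
# A value of 0 is treated as "this property is not applicable to the image"
# (see the image-progression discussion later in this description).
psy_tags = {
    "color_cool_to_warm": 3.5,
    "energy": 2.0,
    "abstraction": 6.0,
    "luminance": 8.5,
    "lushness": 7.0,
    "moisture": 4.5,
    "urbanity": 1.0,
    "density": 4.0,
    "degree_of_order": 5.5,
}
```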
  • In some embodiments, the content component 120 of the filmmaking engine 118 associates each content item in the content library 128 with one or more tags for the purpose of easy identification, organization, retrieval, and customization. The assignment of tags/meta data and definition of fields for descriptive elements provides flexibility at implementation for the director component 124. For a non-limiting example, a content item can be tagged as generic (default value assigned) or humorous (which should be used only when humor is appropriate). For another non-limiting example, a particular nature image may be tagged for all traditions and multiple issues. For yet another non-limiting example, a pair of (sports preference, country) can be used to tag a content item as football preferred for Italians. Thus, the content component 120 will only retrieve a content item for the user where the tag of the content item matches the user's profile.
  • In some embodiments, the content component 120 of the filmmaking engine 118 may tag and organize the content items in the content library 128 using a content management system (CMS) with meta-tags and customized vocabularies. The content component 120 may utilize the CMS terms and vocabularies to create its own meta-tags for content items and define content items through these meta-tags so that it may perform instant addition, deletion, or modification of tags. For a non-limiting example, the content component 120 may add a Dominant Color tag to an image after it was discovered during MME research that the dominant color of an image is important for smooth transitions between images.
  • Once the content items in the content library 128 are tagged, the content component 120 of the filmmaking engine 118 may browse and retrieve the content items by one or more of topics, types of content items, dates collected, and certain categories such as belief systems to build the content based on the user's profile and/or an understanding of the items' "connections" with a topic or movie request submitted by the user. The user's history of prior visits and/or community ratings may also be used as a filter to make the final selection of content items. For a non-limiting example, a sample music clip might be selected to be included in the content because it was encoded for a user who prefers motivational music in the morning. The content component 120 may retrieve content items either from the content library 128 or, in case the relevant content items are not available there, identify content items with the appropriate properties over the Web and save them in the content library 128 so that these content items will be readily available for future use.
  • In some embodiments, the content component 120 of the filmmaking engine 118 may retrieve and customize the content based on the user's profile or context in order to create personalized content tailored to the user's current need or request. A content item can be selected based on many criteria, including the ratings of the content item from users with profiles similar to the current user, recurrence (how long ago, if ever, the user saw this item), how similar the item is to other items the user has previously rated, and how well the item fits the issue or purpose of the movie. For a non-limiting example, content items that did not appeal to the user in the past based on his/her feedback will likely be excluded. In some situations when the user is not sure what he/she is looking for, the user may simply choose "Get me through the day" from the topic list, and the content component 120 will automatically retrieve and present content to the user based on the user's profile. When the user is a first-time visitor or his/her profile is otherwise thin, the content component 120 may automatically identify and retrieve content items relevant to the topic.
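  • The following sketch combines the selection criteria just listed (similar-user ratings, recurrence, and fit to the topic) into a single score; the weighting, the 90-day recurrence horizon, and the function names are illustrative assumptions, not the system's actual selection logic.

```python
from typing import Iterable, Optional, Set

def selection_score(item_tags: Set[str],
                    profile_tags: Set[str],
                    similar_user_rating: float,           # 0-5, from users with similar profiles
                    days_since_last_seen: Optional[int],  # None if the user never saw the item
                    topic_fit: float) -> float:           # 0-1, how well the item fits the topic
    """Combine the selection criteria into one score; higher is better."""
    # Hard filter: only items whose tags match the user's profile are eligible at all.
    if not item_tags & profile_tags:
        return float("-inf")
    recurrence_penalty = 0.0
    if days_since_last_seen is not None:
        # Items seen recently are penalized; the penalty fades over ~90 days.
        recurrence_penalty = max(0.0, 1.0 - days_since_last_seen / 90.0)
    return 2.0 * topic_fit + 0.5 * similar_user_rating - 1.5 * recurrence_penalty

def pick_best(candidates: Iterable[dict]) -> dict:
    """Each candidate dict holds the arguments of selection_score plus an 'item' key."""
    return max(candidates, key=lambda c: selection_score(
        c["item_tags"], c["profile_tags"], c["similar_user_rating"],
        c["days_since_last_seen"], c["topic_fit"]))
```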
  • In the example of FIG. 1, the director component 124 of the filmmaking engine 118 selects a multimedia script template from the script library 126 and creates a movie-like multimedia experience (a movie) by populating the template with content items retrieved and customized by the content component 120. Here, each multimedia script template defines a timeline, which is a sequence of timing information for the corresponding content items to be composed as part of the multimedia content. The multimedia script template provides guidelines for the timing and content items in the multimedia experience, and it can be authored by administrators with experience in filmmaking. Once the script template is populated with the appropriate content, the director component 124 parses through the template to add filmmaking techniques such as transition points tied to music track beat changes. Progressions of images to achieve the desired change in the user's emotional state can also be effected at this stage.
  • In the example of FIG. 1, the script template can be created either in the form of a template specified by an expert in movie creation or automatically by a script generating component 122 based on one or more rules from a rules library 130. In both cases, the script generating component 122 generates a script template with content item placeholders for insertion of actual content items personalized by the content component 120, wherein the content items inserted can be images, short text quotes, music or audio, and script dialogs.
  • In some embodiments, for each content item, the expert-authored script template may specify the start time, end time, and duration of the content item, whether the content item is repeatable or non-repeatable, how many times it should be repeated (if repeatable) as part of the script, or what the delay should be between repeats. The table below represents an example of a multimedia script template, where there is a separate track for each type of content item in the template: Audio, Image, Text, Video, etc. There are a total of 65 seconds in this script and the time row represents the time (start=:00 seconds) that a content item starts or ends. For each content type, there is a template item (denoted by a number) that indicates a position at which a content item must be provided. In this example:
    Time       Content type   Template item
    :00-:65    Audio          #1
    :00-:35    Image          #2
    :05-:30    Text           #3
    :35-:65    Image          #4
    :40-:60    Video          #5

    While this approach provides a flexible and consistent method to author multimedia script templates, synchronization to audio requires the development of a script template for each audio item (e.g., a song or wilderness sound effect) that is selected by the user for a template-based implementation.
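  • For illustration only, the 65-second expert-authored template above could be expressed as data roughly as follows; the class and field names are assumptions made for this sketch.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TemplateItem:
    slot: int        # placeholder number within the template
    track: str       # "audio", "image", "text", or "video"
    start: float     # seconds from the beginning of the movie
    end: float       # seconds from the beginning of the movie

# The example template above, expressed as placeholders to be filled with content items.
example_template: List[TemplateItem] = [
    TemplateItem(slot=1, track="audio", start=0.0,  end=65.0),
    TemplateItem(slot=2, track="image", start=0.0,  end=35.0),
    TemplateItem(slot=3, track="text",  start=5.0,  end=30.0),
    TemplateItem(slot=4, track="image", start=35.0, end=65.0),
    TemplateItem(slot=5, track="video", start=40.0, end=60.0),
]
```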
  • In an alternate embodiment, the multimedia script template is created by the script generating component 122 automatically based on rules from the rules library 130. The script generating component 122 may utilize an XML format with a defined schema to design rules that include, for a non-limiting example, <Initial Music=30>, which means that the initial music clip for this script template will run 30 minutes. The advantage of rule-based script template generation is that it can be easily modified by changing a rule. The rule change can then propagate to existing templates in order to generate new templates. For rules-based auto generation of the script or for occasions when audio files are selected dynamically (e.g., a viewer uploads his or her own song), the audio files will be analyzed and synchronization will be performed by the director component 124 as discussed below.
  • For filmmaking, the director component 124 of the filmmaking engine 118 needs to create appropriately timed music, sound effects, and background audio. As non-limiting examples of the types of techniques that may be employed to create a high-end viewer experience, the sounds of nature are expected to occur when the scene is in the wilderness, and subtle or dramatic changes in the soundtrack, such as a shift in tempo or beat, are timed to a change in scenery (imagery) or dialog (text).
  • For both the expert-authored and the rules-generated script templates, the director component 124 of the filmmaking engine 118 enables audio-driven timeline adjustment of transitions and presentations of content items for the template. More specifically, the director component 124 dynamically synchronizes the retrieved and/or customized multimedia content items such as images or video clips with an audio clip/track to create a script of movie-like content based on audio analysis and script timeline marking, before presenting the movie-like content to the user via the display component 106 of the user interaction engine 102. First, the director component 124 analyzes the audio clip/file and identifies various audio markers in the file, wherein the markers mark the time where music transition points exist on a timeline of a script template. These markers include but are not limited to adjustment points for the following audio events: key change, dynamics change, measure change, tempo change, and beat detection. The director component 124 then synchronizes the audio markers representing music tempo and beat change in the audio clip with images/videos, image/video color, and text items retrieved and identified by the content component 120 for overlay. In some embodiments, the director component 124 may apply audio/music analysis in multiple stages, first as a programmatic modification to existing script template timelines, and second as a potential rule criterion in the rule-based approach for script template generation.
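  • A minimal sketch of the marker-driven step described above is shown below: audio markers produced by an assumed analysis pass are matched against a target time on the template timeline. The marker kinds mirror the list above; the analysis itself (beat, tempo, measure, key, and dynamics detection) is not shown, and the data layout is an assumption.

```python
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class AudioMarker:
    time: float   # seconds into the audio file where a music transition point exists
    kind: str     # "key_change", "dynamics_change", "measure", "tempo_change", or "beat"

def nearest_marker(markers: List[AudioMarker], target: float, window: float,
                   kinds: Optional[Set[str]] = None) -> Optional[AudioMarker]:
    """Return the marker closest to `target` within ±window seconds,
    optionally restricted to certain marker kinds; None if no marker qualifies."""
    candidates = [m for m in markers
                  if abs(m.time - target) <= window and (kinds is None or m.kind in kinds)]
    if not candidates:
        return None
    return min(candidates, key=lambda m: abs(m.time - target))
```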
  • In some embodiments, the director component 124 of the filmmaking engine 118 identifies various points in a timeline of the script template, wherein the points can be adjusted based on the time or duration of a content item. For non-limiting examples, such adjustment points include but are not limited to:
      • Item transition time, which is a single point in time that can be moved forward or back along the timeline. The item transition time further includes:
        • a. Item start time (same as the item beginning transition start time)
        • b. Item beginning transition end time
        • c. Item ending transition start time
        • d. Item end time (same as the item ending transition end time) as shown in FIG. 6( a).
      • Durations, which are spans of time, either for the entire item or for a transition. A duration may further include:
        • a. Item duration
        • b. Item beginning transition duration
        • c. Item ending transition duration, as shown in FIG. 6( b).
        Here, the adjustment points can apply to content items such as images, text, and messages that can be synchronized with an audio file.
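  • As a minimal sketch, the adjustment points and durations listed above could be grouped per content item as follows; the class and property names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ItemTiming:
    """Adjustment points for one content item on the template timeline, in seconds."""
    start: float                   # item start time (= beginning transition start time)
    begin_transition_end: float    # item beginning transition end time
    end_transition_start: float    # item ending transition start time
    end: float                     # item end time (= ending transition end time)

    @property
    def duration(self) -> float:
        return self.end - self.start

    @property
    def begin_transition_duration(self) -> float:
        return self.begin_transition_end - self.start

    @property
    def end_transition_duration(self) -> float:
        return self.end - self.end_transition_start
```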
  • In some embodiments, the director component 124 of the filmmaking engine 118 performs beat detection to identify the point in time (time index) at which each beat occurs in an audio file. Such detection is resilient to changes in tempo in the audio file and it identifies a series of time indexes, where each time index represents, in seconds, the time at which a beat occurs. The director component 124 may then use the time indexes to modify the item transition time, within a given window, which is a parameter that can be set by the director component 124. For a non-limiting example, if a script template specifies that an image begins at time index 15.5 with a window of ±2 seconds, the director component 124 may find the closest beat to 15.5 within the range of 13.5-17.5, and adjust the start time of the image to that time index as shown in FIG. 7. The same adjustment may apply to each item transition time. If no beat is found within the window, the item transition time will not be adjusted.
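  • The beat-snapping rule in the example (an image starting at 15.5 seconds with a ±2-second window) can be sketched as follows; the beat time indexes are assumed to come from a prior beat-detection pass, and the sample values are hypothetical.

```python
from typing import List

def snap_to_beat(item_start: float, beat_times: List[float], window: float = 2.0) -> float:
    """Move item_start to the closest detected beat within ±window seconds;
    if no beat falls inside the window, leave the time unchanged."""
    in_window = [b for b in beat_times if abs(b - item_start) <= window]
    if not in_window:
        return item_start
    return min(in_window, key=lambda b: abs(b - item_start))

# Example from the text: an image scheduled at 15.5 s with a window of ±2 s.
beats = [13.1, 14.2, 15.3, 16.4, 17.6]   # hypothetical beat time indexes
print(snap_to_beat(15.5, beats))         # -> 15.3, the closest beat within 13.5-17.5
```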
  • In some embodiments, the director component 124 of the filmmaking engine 118 performs tempo change detection to identify discrete segments of music in the audio file based upon the tempo of the segments. For a non-limiting example, a song with one tempo throughout, with no tempo changes, will have one segment. On the other hand, a song that alternates between 45 BPM and 60 BPM will have multiple segments as shown below, where segment A occurs from 0:00 seconds to 30:00 seconds into the song, and has a tempo of 45 BPM. Segment B begins at 30:01 seconds, when the tempo changes to 60 BPM, and continues until 45:00 seconds.
      • A: 00:00-30:00: 45 BPM
      • B: 30:01-45:00: 60 BPM
      • C: 45:01-72:00: 45 BPM
      • D: 72:01-90:00: 60 BPM
        One application of tempo change detection is to perform the same function as beat detection, with a higher priority, e.g., the item transition times can be modified to occur at a time index at which a tempo change is detected, within a given window. Another application of tempo detection is for a rules-based synchronization approach where, for a non-limiting example, a rule could be defined as: when a tempo change occurs and the tempo is <N, select an image with these parameters (tags or other metadata) as shown in FIG. 8.
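  • A sketch of the segment representation above, together with an illustrative rules-based use ("when a tempo change occurs and the tempo is < N, select an image with these parameters"); the rule encoding, the 50 BPM threshold, and the tag names are assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TempoSegment:
    start: float   # seconds
    end: float     # seconds
    bpm: float

# The alternating 45/60 BPM example from the text.
segments: List[TempoSegment] = [
    TempoSegment(0.0,   30.0, 45.0),   # A
    TempoSegment(30.01, 45.0, 60.0),   # B
    TempoSegment(45.01, 72.0, 45.0),   # C
    TempoSegment(72.01, 90.0, 60.0),   # D
]

def tags_for_tempo(bpm: float, threshold: float = 50.0) -> List[str]:
    """Illustrative rule: slower segments call for calm imagery, faster ones for energetic imagery."""
    return ["calm", "nature"] if bpm < threshold else ["energetic"]

for seg in segments:
    print(f"{seg.start:>5.2f}-{seg.end:>5.2f}s  {seg.bpm} BPM  ->  {tags_for_tempo(seg.bpm)}")
```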
  • In some embodiments, the director component 124 of the filmmaking engine 118 performs measure detection, which attempts to extend the notion of beat detection to determine when each measure begins in the audio file. For a non-limiting example, if a piece of music is in 4/4 time, then each measure contains four beats, where the beat that occurs first in the measure is more significant than a beat that occurs intra-measure. The duration of a measure can be used to set the item transition duration. FIG. 9 shows the adjustment of the item beginning transition to coincide with the duration of a measure. A similar adjustment would occur with the ending transition.
  • In some embodiments, the director component 124 of the filmmaking engine 118 performs key change detection to identify the time index at which a song changes key in the audio file, for a non-limiting example, from G-major to D-minor. Typically such key change may coincide with the beginning of a measure. The time index of a key change can then be used to change the item transition time as shown in FIG. 10.
  • In some embodiments, the director component 124 of the filmmaking engine 118 performs dynamics change detection to determine how loudly a section of music in the audio file is played. For non-limiting examples:
      • pianissimo—very quiet
      • piano—quiet
      • mezzo piano—moderately quiet
      • mezzo forte—moderately loud
      • forte—loud
      • fortissimo—very loud
        The objective of dynamics change detection is not to associate such labels with sections of music, but to detect sections of music with different dynamics, and their relative differences. For a non-limiting example, different sections in the music can be marked as:
    • A: 00:00-00:30: 1
    • B: 00:31-00:45: 3
    • C: 00:46-01:15: 4
    • D: 01:16-01:45: 2
    • E: 01:46-02:00: 4
      where 1 represents the quietest segments in this audio file and 4 represents the loudest. Furthermore, segment C should have the same relative loudness as section E, as they are both marked as 4. One application of dynamics change detection is similar to beat detection, where the item transition times can be adjusted to coincide with changes in dynamics within a given window. Another application of dynamics change detection is a rules-based approach, where specific item tags or other metadata can be associated with segments that have a given relative or absolute dynamic. For a non-limiting example, a rule could specify that for a segment with dynamic level 4, only images with dominant color [255-0-0] (red), ±65, and image category=nature can be selected as shown in FIG. 11.
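  • A sketch of the dynamics-based rule in the example: each segment carries a relative loudness level, and for level-4 segments only near-red nature images (dominant color [255-0-0] ±65) are eligible. The segment format and the color-matching logic are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DynamicsSegment:
    start: float   # seconds
    end: float     # seconds
    level: int     # relative loudness within this audio file; 1 = quietest

def image_allowed_for_dynamics(level: int,
                               dominant_color: Tuple[int, int, int],
                               category: str,
                               target: Tuple[int, int, int] = (255, 0, 0),
                               tolerance: int = 65) -> bool:
    """Example rule: only the loudest (level 4) segments are constrained here."""
    if level != 4:
        return True
    near_red = all(abs(c - t) <= tolerance for c, t in zip(dominant_color, target))
    return near_red and category == "nature"

# Segment C (00:46-01:15) is marked level 4, so a deep-red sunset image qualifies:
print(image_allowed_for_dynamics(4, (230, 40, 30), "nature"))   # True
print(image_allowed_for_dynamics(4, (30, 90, 200), "nature"))   # False (blue-dominant)
```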
  • In some embodiments, when multiple audio markers exist in the audio file, the director component 124 of the filmmaking engine 118 specifies an order of precedence for audio markers to avoid potential for conflict, as many of the audio markers described above can affect the same adjustment points. In the case where two or more markers apply in the same situation, one marker will take precedence over others according to the following schedule:
      • 1. Key change
      • 2. Dynamics change
      • 3. Measure change
      • 4. Tempo change
      • 5. Beat detection
        Under such precedence, if both a change in measure and a change in dynamics occur within the same window, the change in dynamics will take precedence over the change in measure when the director component 124 considers a change in an adjustment point.
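  • The precedence schedule above could be applied roughly as follows when several markers fall inside the same window; the dictionary-based marker format is an assumption for this sketch.

```python
from typing import List, Optional

# Lower number = higher precedence, following the schedule above.
PRECEDENCE = {"key_change": 1, "dynamics_change": 2, "measure": 3,
              "tempo_change": 4, "beat": 5}

def resolve_markers(markers_in_window: List[dict]) -> Optional[dict]:
    """markers_in_window: dicts such as {"time": 20.4, "kind": "dynamics_change"}.
    Return the single marker that should drive the adjustment point, or None."""
    if not markers_in_window:
        return None
    return min(markers_in_window, key=lambda m: PRECEDENCE[m["kind"]])

# Example from the text: a measure change and a dynamics change in the same window.
print(resolve_markers([{"time": 20.1, "kind": "measure"},
                       {"time": 20.4, "kind": "dynamics_change"}]))
# -> {'time': 20.4, 'kind': 'dynamics_change'}: the dynamics change takes precedence
```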
  • In some embodiments, the director component 124 of the filmmaking engine 118 adopts techniques that take advantage of encoded meta-information in images to create a quality movie experience, wherein such techniques include but are not limited to transitioning, zooming in to a point, panning to a point (such as panning to a seashell on a beach), panning in a direction, linkages to music, sound, and other psychological cues, and font treatment that sets default values for text display, including font family, size, color, shadow, and background color for each type of text displayed. Certain images may naturally lend themselves to being zoomed into a specific point to emphasize their psychoactive tagging. For a non-limiting example, for an image that is rural, the director component 124 may slowly zoom into a still pond by a meadow. Note that the speed of movement and the start and end times may be configurable or calculated by the director component 124 to ensure the timing markers for the audio track transitions are smooth and consistent.
  • In some embodiments, the director component 124 of the filmmaking engine 118, replicating a plurality of decisions made by a human film editor, generates and inserts one or more progressions of images from the content library 128 during creation of the movie to effectuate an emotional state change in the user. Here, the images used for the progressions are tagged for their psychoactive properties as discussed above. Such a progression of images (the "Narrative") in quality filmmaking tells a parallel story, which the viewer may or may not be consciously aware of, and enhances either the plot (in fiction films) or the sequence of information (in non-fiction films or news reports). For a non-limiting example, if a movie needs to transition a user from one emotional state to another, a progression of images can move slowly from a barren landscape to a lush and vibrant landscape. While some image progressions may not be this overt, subtle progressions may be desired for a wide variety of movie scenes. In some embodiments, the director component 124 of the filmmaking engine 118 also adopts techniques which, although often subtle and not necessarily recognizable by the viewer, contribute to the overall feel of the movie and engender a view of quality and polish.
  • In some embodiments, the director component 124 of the filmmaking engine 118 creates a progression of images that mimics the internal workings of the psyche rather than the external workings of concrete reality. By way of a non-limiting illustration, the logic of a dream state varies from the logic of a chronological sequence, since dream states may be non-linear and make intuitive associations between images while chronological sequences are explicit in their meaning and purpose. Instead of explicitly designating which progression of images to employ, the director component 124 enables the user to "drive" the construction of the image progressions by identifying his/her current and desired feeling states, as discussed in detail below. Compared to explicit designation of a specific image progression to use, such an approach allows multiple progressions of images to be tailored specifically to the feeling state of each user, which gives the user a unique and meaningful experience with each piece of movie-like content.
  • FIG. 12 depicts a flowchart of an example of a process to create an image progression in a movie based on psychoactive properties of the images. Although this figure depicts functional steps in a particular order for purposes of illustration, the process is not limited to any particular order or arrangement of steps. One skilled in the relevant art will appreciate that the various steps portrayed in this figure could be omitted, rearranged, combined and/or adapted in various ways.
  • In the example of FIG. 12, the flowchart 1200 starts at block 1202 where psychoactive properties and their associated numerical values are tagged and assigned to images in the content library 128. Such assignment can be accomplished by adjusting the sliders of psychoactive tags shown in FIG. 5. The flowchart 1200 continues to block 1204 where two images are selected by a user as the starting and ending points, respectively, of a range for an image progression based on the psychoactive values of the images. The first (starting) image, selected from a group of sample images, best represents the user's current feeling/emotional state, while the second (ending) image, selected from a different set of images, best represents the user's desired feeling/emotional state. For a non-limiting example, a user may select a dark image that has a psychoactive luminance value of 1.2 as the starting point and a light image that has a psychoactive luminance value of 9.8 as the ending point. Other non-limiting examples of image progressions based on psychoactive tagging include rural to urban, ambiguous to concrete, static to kinetic, micro to macro, barren to lush, seasons (winter to spring), and time (morning to late night). The flowchart 1200 continues to block 1206 where the numeric values of the psychoactive properties (Ψ-tags) of the two selected images, beginning with the current feeling state and ending with the desired feeling state, are evaluated to set a range. The flowchart 1200 continues to block 1208 where a set of images whose psychoactive properties have numeric values progressing smoothly within the range from the beginning to the end is selected. Here, the images progress from one with Ψ-tags representing the user's current feeling state through a gradual progression of images whose Ψ-tags move closer and closer to the user's desired feeling state. The number of images selected for the progression may be any number larger than two, but should be enough to ensure a smooth, gradual progression from the starting point to the ending point. The flowchart 1200 ends at block 1210 where the selected images are filled into the image progression in the movie.
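  • Block 1208 can be sketched for a single Ψ-tag as follows: intermediate goal values are interpolated between the starting image's luminance (1.2 in the example) and the ending image's (9.8), and the closest library image is chosen for each goal. The linear interpolation, the sample library, and the function names are assumptions made for this sketch.

```python
from typing import Dict, List

def goal_values(start: float, end: float, steps: int) -> List[float]:
    """Evenly spaced intermediate Ψ-tag values between the starting and ending images."""
    return [start + (end - start) * i / (steps + 1) for i in range(1, steps + 1)]

def pick_progression(library: Dict[str, float], start: float, end: float, steps: int) -> List[str]:
    """library maps image ids to their luminance Ψ-tag; pick the closest image for each goal value."""
    return [min(library, key=lambda image_id: abs(library[image_id] - goal))
            for goal in goal_values(start, end, steps)]

# Dark (luminance 1.2) to light (luminance 9.8) progression with four intermediate images.
sample_library = {"cave": 1.5, "dusk": 3.1, "forest": 4.9, "meadow": 6.6,
                  "beach": 8.2, "snowfield": 9.5}
print(pick_progression(sample_library, 1.2, 9.8, steps=4))
# -> ['dusk', 'forest', 'meadow', 'beach']
```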
  • In some embodiments, the director component 124 of the filmmaking engine 118 detects whether there is a gap in the progression of images where some images with the desired psychoactive properties are missing. If such a gap does exist, the director component 124 then proceeds to research, mark, and collect more images either from the content library 128 or over the Internet in order to fill the gap. For a non-limiting example, if the director component 124 tries to build a progression of images that is both morning-to-night and barren-to-lush, but there are not any (or many) sunset-over-the-rainforest images, the director component 124 will detect such an image gap and include more images in the content library 128 in order to fill it.
  • In some embodiments, the director component 124 of the filmmaking engine 118 builds a vector of psychoactive values (Ψ-tags) for each image tagged along multiple psychoactive properties. Here, the Ψ-tag vector is a list of numbers serving as a numeric representation of that image, where each number in the vector is the value of one of the Ψ-tags of the image. The Ψ-tag vector of an image chosen by the user corresponds to the user's emotional state. For a non-limiting example, if the user is angry and selects an image with a Ψ-tag vector of [2, 8, 8.5, 2 . . . ], other images with Ψ-tag vectors of similar Ψ-tag values may also reflect his/her emotional state of anger. Once the Ψ-tag vectors of two images representing the user's current state and target state are chosen, the director component 124 then determines a series of "goal" intermediate Ψ-tag vectors representing the ideal set of Ψ-tags desired in the image progression from the user's current state to the target state. Images that match these intermediate Ψ-tag vectors will correspond, for this specific user, to a smooth progression from his/her current emotional state to his/her target emotional state (e.g., from angry to peaceful).
  • In some embodiments, the director component 124 identifies at least two types of “significant” Ψ-tags in a Ψ-tag vector as measured by change in values during image progressions: (1) a Ψ-tag of the images changes significantly (e.g., a change in value >50%) where, e.g., the images progress from morning→noon→night, or high altitude→low altitude, etc.; (2) a Ψ-tag of the images remains constant (a change in value <10%) where, e.g., the images are all equally luminescent or equally urban, etc. If the image of the current state or the target state of the user has a value of zero for a Ψ-tag, that Ψ-tag is regarded as “not applicable to this image.” For a non-limiting example, a picture of a clock has no relevance for season (unless it is in a field of daisies). If the image that the user selected for his/her current state has a zero for one of the Ψ-tags, that Ψ-tag is left out of the vector of the image since it is not relevant for this image and thus it will not be relevant for the progression. The Ψ-tags that remain in the Ψ-tag vector are “active” (and may or may not be “significant”).
  • In some embodiments, the director component 124 selects the series of images from the content library 128 by comparing their Ψ-tag vectors with the "goal" intermediate Ψ-tag vectors. For the selection of each image, the comparison can be based on the Euclidean distance between two Ψ-tag vectors—the Ψ-tag vector (p_1, p_2, . . . , p_n) of a candidate image and one of the goal Ψ-tag vectors (q_1, q_2, . . . , q_n)—in an n-dimensional vector space of multiple Ψ-tags, in order to identify the image whose Ψ-tag vector is closest along all dimensions to the goal Ψ-tag vector. The Euclidean distance between the two vectors can be calculated as
  •   √( Σ_{i=1}^{n} (p_i − q_i)² )
  • which yields a similarity score between the two Ψ-tag vectors; the candidate image whose vector is most similar to the goal vector (the lowest score) is selected. If a candidate image has a value of zero for a significant Ψ-tag, that image is excluded, since zero means that the Ψ-tag does not apply to the image and hence the image is not applicable to a progression for which that Ψ-tag is significant. Under such an approach, no random or incongruous image is selected by the director component 124 for the Ψ-tags that are included and "active" in the progression.
  • Note that the director component 124 selects the images by comparing the entire Ψ-tag vectors in unison even though each of the Ψ-tags in the vectors can be evaluated individually. For a non-limiting example, an image can be evaluated for “high energy” or “low energy” independently from “high density” or “low density”. However, the association between the image and an emotional state is made based on the entire vector of Ψ-tags, not just each of the individual Ψ-tags, since “anger” is not only associated with “high energy” but also associated with values of all Ψ-tags considered in unison. Furthermore, the association between an emotional state and a Ψ-tag vector is specific to each individual user based on how he/she reacts to images, as one user's settings for Ψ-tags at his/her emotional state of peacefulness does not necessarily correspond to another user's settings for Ψ-tags at his/her emotional state of peacefulness.
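  • A sketch of this whole-vector comparison is shown below: the Euclidean distance is computed over the Ψ-tags active in the goal vector, candidates with a zero value on any significant Ψ-tag are excluded, and the lowest-distance image is chosen. The tag names, sample values, and function names are assumptions for illustration.

```python
import math
from typing import Dict, List, Optional

def psy_distance(candidate: Dict[str, float], goal: Dict[str, float]) -> float:
    """Euclidean distance over the Ψ-tags that are active in the goal vector."""
    return math.sqrt(sum((candidate.get(tag, 0.0) - value) ** 2
                         for tag, value in goal.items()))

def select_image(candidates: Dict[str, Dict[str, float]],
                 goal: Dict[str, float],
                 significant_tags: List[str]) -> Optional[str]:
    """Pick the candidate closest to the goal vector, skipping any image whose value
    is zero on a significant Ψ-tag (zero meaning the tag is not applicable)."""
    eligible = {name: vec for name, vec in candidates.items()
                if all(vec.get(tag, 0.0) != 0.0 for tag in significant_tags)}
    if not eligible:
        return None
    return min(eligible, key=lambda name: psy_distance(eligible[name], goal))

goal = {"luminance": 6.0, "energy": 3.0, "urbanity": 1.0}          # one intermediate goal vector
candidates = {
    "meadow": {"luminance": 6.5, "energy": 2.5, "urbanity": 1.0},
    "city":   {"luminance": 5.5, "energy": 7.0, "urbanity": 9.0},
    "clock":  {"luminance": 6.0, "energy": 3.0, "urbanity": 0.0},  # 0 = not applicable
}
print(select_image(candidates, goal, significant_tags=["urbanity"]))   # -> "meadow"
```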
  • While the system 100 depicted in FIG. 1 is in operation, the user interaction engine 102 enables the user to log in and submit a topic or situation via the user interface 104 to have a related movie created. Alternatively, the event generation engine 108 identifies a triggering event for movie generation based on a published calendar and/or the user's profile. If the user is visiting for the first time, the profile engine 112 may interview the user with a set of questions in order to establish a profile of the user that accurately reflects the user's interests or concerns. Upon receiving the topic/situation from the user interaction engine 102 or a notification of a triggering event from the event generation engine 108, the filmmaking engine 118 identifies, retrieves, and customizes content items appropriately tagged and organized in the content library 128 based on the profile of the user. The filmmaking engine 118 then selects a multimedia script template from the script library 126 and creates a movie-like multimedia experience (the movie) by populating the script template with the retrieved and customized content items. The filmmaking engine 118 first analyzes an audio clip/file to identify various audio markers in the file, wherein the markers mark the times where music transition points exist on the timeline of the script template. The filmmaking engine 118 then generates the movie-like content by synchronizing the audio markers, which represent adjustment points and changes in beat, music tempo, measure, key, and dynamics in the audio clip, with the images/videos, image/video color, and text items retrieved and customized by the content component 120 for overlay. In making the movie, the filmmaking engine 118 adopts various techniques including transitioning, zooming in to a point, panning to a point, panning in a direction, font adjustment, and image progression. Once the movie is generated, the user interaction engine 102 presents it to the user via the display component 106 and enables the user to rate or provide feedback on the content presented.
  • FIG. 13 depicts a flowchart of an example of a process to support algorithmic movie generation. Although this figure depicts functional steps in a particular order for purposes of illustration, the process is not limited to any particular order or arrangement of steps. One skilled in the relevant art will appreciate that the various steps portrayed in this figure could be omitted, rearranged, combined and/or adapted in various ways.
  • In the example of FIG. 13, the flowchart 1300 starts at block 1302 where a triggering event is identified or a user is enabled to submit a topic or situation to which the user intends to seek help or counseling and have a related movie created. The submission process can be done via a user interface and be standardized via a list of pre-defined topics/situations organized by categories. The flowchart 1300 continues to block 1304 where a profile of the user is established and maintained if the user is visiting for the first time or the user's current profile is otherwise thin. At least a portion of the profile can be established by initiating interview questions to the user targeted at soliciting information on his/her personal interests and/or concerns. In addition, the profile of the user can be continuously updated with the topics raised by the user and the scripts of content presented to him/her. The flowchart 1300 continues to block 1306 where a set of multimedia content items are maintained, tagged, and organized properly in a content library for easy identification, retrieval, and customization. The flowchart 1300 continues to block 1308 where one or more multimedia items are identified, retrieved, and customized based on the profile and/or current context of the user in order to create personalized content tailored for the user's current need or situation. The flowchart 1300 continues to block 1310 where a multimedia script template is selected to be populated with the retrieved and customized content items. The flowchart 1300 continues to block 1312 where an audio file is analyzed to identify various audio markers representing the time where music transition points exist along a timeline of a script template. Here, the audio markers can be identified by identifying adjustment points in the timeline, beats, tempo changes, measures, key changes, and dynamics changes in the audio file. Finally, the flowchart 1300 ends at block 1314 where the movie-like content is generated by synchronizing the audio markers of the audio file with retrieved and customized content items.
  • One embodiment may be implemented using a conventional general purpose or a specialized digital computer or microprocessor(s) programmed according to the teachings of the present disclosure, as will be apparent to those skilled in the computer art. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art. The invention may also be implemented by the preparation of integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be readily apparent to those skilled in the art.
• One embodiment includes a computer program product which is a machine readable medium (media) having instructions stored thereon/in which can be used to program one or more hosts to perform any of the features presented herein. The machine readable medium can include, but is not limited to, one or more types of disks including floppy disks, optical discs, DVDs, CD-ROMs, microdrives, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data. Stored on any one of the computer readable media, the present invention includes software both for controlling the hardware of the general purpose/specialized computer or microprocessor and for enabling the computer or microprocessor to interact with a human viewer or other mechanism utilizing the results of the present invention. Such software may include, but is not limited to, device drivers, operating systems, execution environments/containers, and applications.
• The foregoing description of various embodiments of the claimed subject matter has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the claimed subject matter to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art. Particularly, while the concept “interface” is used in the embodiments of the systems and methods described above, it will be evident that such concept can be interchangeably used with equivalent software concepts such as class, method, type, module, component, bean, object model, process, thread, and other suitable concepts. While the concept “component” is used in the embodiments of the systems and methods described above, it will be evident that such concept can be interchangeably used with equivalent concepts such as class, method, type, interface, module, object model, and other suitable concepts. Embodiments were chosen and described in order to best describe the principles of the invention and its practical application, thereby enabling others skilled in the relevant art to understand the claimed subject matter, the various embodiments, and the various modifications that are suited to the particular use contemplated.

Claims (58)

1. A system, comprising:
a content library, which in operation, maintains a plurality of multimedia content items as well as definitions, tags, and sources of the content items;
a filmmaking engine, which in operation,
identifies, retrieves, and customizes one or more multimedia content items from the content library based on a profile of a user;
selects a multimedia script template to be populated with the retrieved and customized content items, wherein the template defines a timeline for the content items to be composed as part of a content;
analyzes an audio file to identify a plurality of audio markers representing where music transition points exist along the timeline of the script template;
generates movie-like content comprising the one or more identified, retrieved, and customized content items by synchronizing the one or more content items with the plurality of audio markers of the audio file.
2. The system of claim 1, wherein:
each of the one or more multimedia content items is a text item, an image item, an audio item, a video item, or another type of content item from which the user can learn information or be emotionally impacted.
3. The system of claim 2, wherein:
the text item is used for displaying quotes, which are short extracts from a longer text or a short text.
4. The system of claim 2, wherein:
the text item is in a long format for contemplation or assuming a voice for communication with the user to explain or instruct a practice.
5. The system of claim 2, wherein:
the text item is used to create a conversational text or script dialog with the user.
6. The system of claim 2, wherein:
the audio item includes music, sound effects, or spoken word.
7. The system of claim 2, wherein:
the image item is characterized and tagged with a number of psychoactive properties for its inherent characteristics that are known, or presumed, to affect the emotional state of the user.
8. The system of claim 7, wherein:
numerical values of the psychoactive properties are assigned to a range of emotional issues to the image item as well as the user's current context and emotional state.
9. The system of claim 1, further comprising:
a user interaction engine, which in operation, performs one or more of:
enabling the user to submit a topic or situation for which the user intends to seek help or counseling;
enabling the user to submit a request for the movie-like content related to the topic or situation;
presenting the movie-like content to the user.
10. The system of claim 9, wherein:
the user interaction engine enables the user to rate or provide feedback on the content presented.
11. The system of claim 1, further comprising:
an event generation engine, which in operation, determines an event that is relevant to the user, wherein such event triggers the generation of the movie-like content.
12. The system of claim 11, wherein:
the event is determined by an alert of a news feed.
13. The system of claim 1, further comprising:
a profile engine, which in operation, establishes and maintains the profile of the user.
14. The system of claim 13, wherein:
the profile engine establishes the profile of the user by initiating one or more questions during pseudo-conversational interactions with the user for the purpose of soliciting and gathering at least part of the information for the user profile.
15. The system of claim 13, wherein:
the profile engine updates the user profile with a history of topics raised by the user, the content presented to the user, and feedback and ratings of the content from the user.
16. The system of claim 1, wherein:
the filmmaking engine tags and organizes each of the content items in the content library in a richly described taxonomy with one or more tags and properties to enable intelligent and context-aware selections.
17. The system of claim 16, wherein:
the filmmaking engine tags and organizes the content items in the content library using a content management system (CMS) with meta-tags and customized vocabularies.
18. The system of claim 1, wherein:
the filmmaking engine browses and retrieves the content items by one or more of topics, types of content items, dates collected, and certain categories.
19. The system of claim 1, wherein:
the script template is created either in the form of a template specified by an expert in movie creation or automatically based on one or more rules.
20. The system of claim 1, wherein:
the filmmaking engine specifies an order of precedence for the plurality of audio markers to avoid potential for conflict.
21. The system of claim 1, wherein:
the filmmaking engine identifies various points in the timeline of the script wherein the points can be adjusted based on the time or duration of a content item.
22. The system of claim 1, wherein:
the filmmaking engine performs beat detection to identify the point in time at which each beat occurs in the audio file.
23. The system of claim 1, wherein:
the filmmaking engine performs tempo change detection to identify discrete segments of music in the audio file based upon the tempo of the segment.
24. The system of claim 1, wherein:
the filmmaking engine performs measure detection to determine when each measure begins in the audio file.
25. The system of claim 1, wherein:
the filmmaking engine performs key change detection to identify the time at which a song changes key in the audio file.
26. The system of claim 1, wherein:
the filmmaking engine performs dynamics change detection to determine sections of music in the audio file with different dynamics.
27. The system of claim 1, wherein:
the filmmaking engine adopts one or more techniques of transitioning, zooming in to a point, panning to a point, panning in a direction, and adjusting fonts to create the movie-like content.
28. The system of claim 1, wherein:
the filmmaking engine generates and inserts one or more progressions of images during creation of the movie-like content to effectuate an emotional state-change in the user.
29. The system of claim 28, wherein:
the filmmaking engine creates a progression of images that mimics the internal workings of the psyche rather than the external workings of concrete reality.
30. The system of claim 28, wherein:
the filmmaking engine enables the user to drive construction of the one or more image progressions by identifying his/her current and desired feeling state.
31. The system of claim 28, wherein:
the filmmaking engine detects if there is a gap in one of the progressions of images where some images with desired psychoactive properties are missing.
32. The system of claim 31, wherein:
the filmmaking engine proceeds to research, mark, and collect more images to fill the gap if such gap exists.
33. A computer-implemented method, comprising:
maintaining, tagging, and organizing a plurality of multimedia content items as well as definitions, tags, and sources of the content items;
identifying, retrieving, and customizing one or more of the multimedia content items based on a profile of a user;
selecting a multimedia script template to be populated with the retrieved and customized content items, wherein the template defines a timeline for the content items to be composed as part of a content;
analyzing an audio file to identify a plurality of audio markers representing where music transition points exist along the timeline of the script template;
generating movie-like content comprising the one or more identified, retrieved, and customized content items by synchronizing the one or more content items with the plurality of audio markers of the audio file.
34. The method of claim 33, further comprising:
performing one or more of:
enabling the user to submit a topic or situation for which the user intends to seek help or counseling;
enabling the user to submit a request for the movie-like content related to the topic or situation;
presenting the movie-like content to the user.
35. The method of claim 33, further comprising:
enabling the user to rate or provide feedback on the content presented.
36. The method of claim 33, further comprising:
identifying an event that is relevant to the user, wherein such event triggers the generation of the movie-like content.
37. The method of claim 33, further comprising:
establishing and maintaining the profile of the user.
38. The method of claim 33, further comprising:
updating the user profile with history of topics raised by the user, the content presented to the user, and feedback and ratings of the content from the user.
39. The method of claim 33, further comprising:
characterizing and tagging an image item with a number of psychoactive properties for its inherent characteristics that are known, or presumed, to affect the emotional state of the user.
40. The method of claim 39, further comprising:
assigning numerical values of the psychoactive properties to a range of emotional issues to the image item as well as the user's current context and emotional state.
41. The method of claim 33, further comprising:
tagging and organizing each of the content items in a richly described taxonomy with one or more tags and properties to enable intelligent and context-aware selections.
42. The method of claim 33, further comprising:
tagging and organizing the content items using a content management system (CMS) with meta-tags and customized vocabularies.
43. The method of claim 33, further comprising:
browsing and retrieving the content items by one or more of topics, types of content items, dates collected, and certain categories.
44. The method of claim 33, further comprising:
creating the script template either in the form of a template specified by an expert in movie creation or automatically based on one or more rules.
45. The method of claim 33, further comprising:
specifying an order of precedence for the plurality of audio markers to avoid potential for conflict.
46. The method of claim 33, further comprising:
identifying various points in the timeline of the script wherein the points can be adjusted based on the time or duration of a content item.
47. The method of claim 33, further comprising:
performing beat detection to identify the point in time at which each beat occurs in the audio file.
48. The method of claim 33, further comprising:
performing tempo change detection to identify discrete segments of music in the audio file based upon the tempo of the segment.
49. The method of claim 33, further comprising:
performing measure detection to determine when each measure begins in the audio file.
50. The method of claim 33, further comprising:
performing key change detection to identify the time at which a song changes key in the audio file.
51. The method of claim 33, further comprising:
performing dynamics change detection to determine sections of music in the audio file with different dynamics.
52. The method of claim 33, further comprising:
adopting one or more techniques of transitioning, zooming in to a point, panning to a point, panning in a direction, and adjusting fonts to create the movie-like content.
53. The method of claim 33, further comprising:
generating and inserting one or more progressions of images during creation of the movie-like content to effectuate an emotional state-change in the user.
54. The method of claim 53, further comprising:
creating a progression of images that mimics the internal workings of the psyche rather than the external workings of concrete reality.
55. The method of claim 53, further comprising:
enabling the user to drive construction of the one or more image progressions by identifying his/her current and desired feeling state.
56. The method of claim 53, further comprising:
detecting if there is a gap in one of the progressions of images where some images with desired psychoactive properties are missing.
57. The method of claim 56, further comprising:
proceeding to research, mark, and collect more images to fill the gap if such gap exists.
58. A machine readable medium having software instructions stored thereon that when executed cause a system to:
maintain, tag, and organize a plurality of multimedia content items as well as definitions, tags, and sources of the content items;
enable a user to submit a topic for which the user intends to seek help or counseling;
establish and maintain a profile of the user;
identify, retrieve, and customize one or more of the multimedia content items based on the topic and the profile of the user;
select a multimedia script template to be populated with the retrieved and customized content items, wherein the template defines a timeline for the content items to be composed as part of a content;
analyze an audio file to identify a plurality of audio markers representing where music transition points exist along the timeline of the script template;
generate movie-like content comprising the one or more identified, retrieved, and customized content items by synchronizing the one or more content items with the plurality of audio markers of the audio file;
present the movie-like content to the user.
US12/642,135 2009-12-18 2009-12-18 System and method for algorithmic movie generation based on audio/video synchronization Abandoned US20110154197A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/642,135 US20110154197A1 (en) 2009-12-18 2009-12-18 System and method for algorithmic movie generation based on audio/video synchronization
PCT/US2010/060086 WO2011075440A2 (en) 2009-12-18 2010-12-13 A system and method for algorithmic movie generation based on audio/video synchronization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/642,135 US20110154197A1 (en) 2009-12-18 2009-12-18 System and method for algorithmic movie generation based on audio/video synchronization

Publications (1)

Publication Number Publication Date
US20110154197A1 true US20110154197A1 (en) 2011-06-23

Family

ID=44152914

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/642,135 Abandoned US20110154197A1 (en) 2009-12-18 2009-12-18 System and method for algorithmic movie generation based on audio/video synchronization

Country Status (2)

Country Link
US (1) US20110154197A1 (en)
WO (1) WO2011075440A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049537A (en) * 2012-12-25 2013-04-17 国云科技股份有限公司 Network information collection method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070038717A1 (en) * 2005-07-27 2007-02-15 Subculture Interactive, Inc. Customizable Content Creation, Management, and Delivery System
CN101421723A (en) * 2006-04-10 2009-04-29 雅虎公司 Client side editing application for optimizing editing of media assets originating from client and server
US20090240736A1 (en) * 2008-03-24 2009-09-24 James Crist Method and System for Creating a Personalized Multimedia Production

Patent Citations (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5064410A (en) * 1984-12-12 1991-11-12 Frenkel Richard E Stress control system and method
US5717923A (en) * 1994-11-03 1998-02-10 Intel Corporation Method and apparatus for dynamically customizing electronic information to individual end users
US5875265A (en) * 1995-06-30 1999-02-23 Fuji Xerox Co., Ltd. Image analyzing and editing apparatus using psychological image effects
US6314420B1 (en) * 1996-04-04 2001-11-06 Lycos, Inc. Collaborative/adaptive search engine
US5884282A (en) * 1996-04-30 1999-03-16 Robinson; Gary B. Automated collaborative filtering system
US5862223A (en) * 1996-07-24 1999-01-19 Walker Asset Management Limited Partnership Method and apparatus for a cryptographically-assisted commercial network system designed to facilitate and support expert-based commerce
US5732232A (en) * 1996-09-17 1998-03-24 International Business Machines Corp. Method and apparatus for directing the expression of emotion for a graphical user interface
US6853982B2 (en) * 1998-09-18 2005-02-08 Amazon.Com, Inc. Content personalization based on actions performed during a current browsing session
US6363154B1 (en) * 1998-10-28 2002-03-26 International Business Machines Corporation Decentralized systems methods and computer program products for sending secure messages among a group of nodes
US7003792B1 (en) * 1998-11-30 2006-02-21 Index Systems, Inc. Smart agent based on habit, statistical inference and psycho-demographic profiling
US20030195872A1 (en) * 1999-04-12 2003-10-16 Paul Senn Web-based information content analyzer and information dimension dictionary
US6477272B1 (en) * 1999-06-18 2002-11-05 Microsoft Corporation Object recognition with co-occurrence histograms and false alarm probability analysis for choosing optimal object recognition process parameters
US20030163356A1 (en) * 1999-11-23 2003-08-28 Cheryl Milone Bab Interactive system for managing questions and answers among users and experts
US6434549B1 (en) * 1999-12-13 2002-08-13 Ultris, Inc. Network-based, human-mediated exchange of information
US7117224B2 (en) * 2000-01-26 2006-10-03 Clino Trini Castelli Method and device for cataloging and searching for information
US20060288023A1 (en) * 2000-02-01 2006-12-21 Alberti Anemometer Llc Computer graphic display visualization system and method
US6468210B2 (en) * 2000-02-14 2002-10-22 First Opinion Corporation Automated diagnostic system and method including synergies
US20020023132A1 (en) * 2000-03-17 2002-02-21 Catherine Tornabene Shared groups rostering system
US6539395B1 (en) * 2000-03-22 2003-03-25 Mood Logic, Inc. Method for creating a database for comparing music
US6801909B2 (en) * 2000-07-21 2004-10-05 Triplehop Technologies, Inc. System and method for obtaining user preferences and providing user recommendations for unseen physical and information goods and services
US20020059378A1 (en) * 2000-08-18 2002-05-16 Shakeel Mustafa System and method for providing on-line assistance through the use of interactive data, voice and video information
US7890374B1 (en) * 2000-10-24 2011-02-15 Rovi Technologies Corporation System and method for presenting music to consumers
US7162443B2 (en) * 2000-10-30 2007-01-09 Microsoft Corporation Method and computer readable medium storing executable components for locating items of interest among multiple merchants in connection with electronic shopping
US6629104B1 (en) * 2000-11-22 2003-09-30 Eastman Kodak Company Method for adding personalized metadata to a collection of digital images
US6970883B2 (en) * 2000-12-11 2005-11-29 International Business Machines Corporation Search facility for local and remote interface repositories
US20030055614A1 (en) * 2001-01-18 2003-03-20 The Board Of Trustees Of The University Of Illinois Method for optimizing a solution set
US20020147619A1 (en) * 2001-04-05 2002-10-10 Peter Floss Method and system for providing personal travel advice to a user
US20020191775A1 (en) * 2001-06-19 2002-12-19 International Business Machines Corporation System and method for personalizing content presented while waiting
US20030060728A1 (en) * 2001-09-25 2003-03-27 Mandigo Lonnie D. Biofeedback based personal entertainment system
US7665024B1 (en) * 2002-07-22 2010-02-16 Verizon Services Corp. Methods and apparatus for controlling a user interface based on the emotional state of a user
US20060236241A1 (en) * 2003-02-12 2006-10-19 Etsuko Harada Usability evaluation support method and system
US20040237759A1 (en) * 2003-05-30 2004-12-02 Bill David S. Personalizing content
US20050010599A1 (en) * 2003-06-16 2005-01-13 Tomokazu Kake Method and apparatus for presenting information
US20050240580A1 (en) * 2003-09-30 2005-10-27 Zamir Oren E Personalization of placed content ordering in search results
US20050079474A1 (en) * 2003-10-14 2005-04-14 Kenneth Lowe Emotional state modification method and system
US20050096973A1 (en) * 2003-11-04 2005-05-05 Heyse Neil W. Automated life and career management services
US20050108031A1 (en) * 2003-11-17 2005-05-19 Grosvenor Edwin S. Method and system for transmitting, selling and brokering educational content in streamed video form
US20060200434A1 (en) * 2003-11-28 2006-09-07 Manyworlds, Inc. Adaptive Social and Process Network Systems
US20060106793A1 (en) * 2003-12-29 2006-05-18 Ping Liang Internet and computer information retrieval and mining with intelligent conceptual filtering, visualization and automation
US20050216457A1 (en) * 2004-03-15 2005-09-29 Yahoo! Inc. Systems and methods for collecting user annotations
US20050209890A1 (en) * 2004-03-17 2005-09-22 Kong Francis K Method and apparatus creating, integrating, and using a patient medical history
US20070067297A1 (en) * 2004-04-30 2007-03-22 Kublickis Peter J System and methods for a micropayment-enabled marketplace with permission-based, self-service, precision-targeted delivery of advertising, entertainment and informational content and relationship marketing to anonymous internet users
US7496567B1 (en) * 2004-10-01 2009-02-24 Terril John Steichen System and method for document categorization
US20060095474A1 (en) * 2004-10-27 2006-05-04 Mitra Ambar K System and method for problem solving through dynamic/interactive concept-mapping
US20060143563A1 (en) * 2004-12-23 2006-06-29 Sap Aktiengesellschaft System and method for grouping data
US20070255674A1 (en) * 2005-01-10 2007-11-01 Instant Information Inc. Methods and systems for enabling the collaborative management of information based upon user interest
US20060242554A1 (en) * 2005-04-25 2006-10-26 Gather, Inc. User-driven media system in a computer network
US20060265268A1 (en) * 2005-05-23 2006-11-23 Adam Hyder Intelligent job matching system and method including preference ranking
US20070179351A1 (en) * 2005-06-30 2007-08-02 Humana Inc. System and method for providing individually tailored health-promoting information
US20090307629A1 (en) * 2005-12-05 2009-12-10 Naoaki Horiuchi Content search device, content search system, content search system server device, content search method, computer program, and content output device having search function
US20070150281A1 (en) * 2005-12-22 2007-06-28 Hoff Todd M Method and system for utilizing emotion to search content
US20070183354A1 (en) * 2006-02-03 2007-08-09 Nec Corporation Method and system for distributing contents to a plurality of users
US20070201086A1 (en) * 2006-02-28 2007-08-30 Momjunction, Inc. Method for Sharing Documents Between Groups Over a Distributed Network
US20070233622A1 (en) * 2006-03-31 2007-10-04 Alex Willcock Method and system for computerized searching and matching using emotional preference
US20070294225A1 (en) * 2006-06-19 2007-12-20 Microsoft Corporation Diversifying search results for improved search and personalization
US20080059447A1 (en) * 2006-08-24 2008-03-06 Spock Networks, Inc. System, method and computer program product for ranking profiles
US20080215568A1 (en) * 2006-11-28 2008-09-04 Samsung Electronics Co., Ltd Multimedia file reproducing apparatus and method
US20080172363A1 (en) * 2007-01-12 2008-07-17 Microsoft Corporation Characteristic tagging
US20100131534A1 (en) * 2007-04-10 2010-05-27 Toshio Takeda Information providing system
US20080320037A1 (en) * 2007-05-04 2008-12-25 Macguire Sean Michael System, method and apparatus for tagging and processing multimedia content with the physical/emotional states of authors and users
US20080306871A1 (en) * 2007-06-08 2008-12-11 At&T Knowledge Ventures, Lp System and method of managing digital rights
US20090006442A1 (en) * 2007-06-27 2009-01-01 Microsoft Corporation Enhanced browsing experience in social bookmarking based on self tags
US20090063475A1 (en) * 2007-08-27 2009-03-05 Sudhir Pendse Tool for personalized search
US20090132593A1 (en) * 2007-11-15 2009-05-21 Vimicro Corporation Media player for playing media files by emotion classes and method for the same
US20090132526A1 (en) * 2007-11-19 2009-05-21 Jong-Hun Park Content recommendation apparatus and method using tag cloud
US20090144254A1 (en) * 2007-11-29 2009-06-04 International Business Machines Corporation Aggregate scoring of tagged content across social bookmarking systems
US20100262597A1 (en) * 2007-12-24 2010-10-14 Soung-Joo Han Method and system for searching information of collective emotion based on comments about contents on internet
US20090327422A1 (en) * 2008-02-08 2009-12-31 Rebelvox Llc Communication application for conducting conversations including multiple media types in either a real-time mode or a time-shifted mode
US20090279869A1 (en) * 2008-04-16 2009-11-12 Tomoki Ogawa Recording medium, recording device, recording method, and playback device
US20090271740A1 (en) * 2008-04-25 2009-10-29 Ryan-Hutton Lisa M System and method for measuring user response
US20090307207A1 (en) * 2008-06-09 2009-12-10 Murray Thomas J Creation of a multi-media presentation
US20090312096A1 (en) * 2008-06-12 2009-12-17 Motorola, Inc. Personalizing entertainment experiences based on user profiles
US20090327266A1 (en) * 2008-06-27 2009-12-31 Microsoft Corporation Index Optimization for Ranking Using a Linear Model
US20100049851A1 (en) * 2008-08-19 2010-02-25 International Business Machines Corporation Allocating Resources in a Distributed Computing Environment
US20100083320A1 (en) * 2008-10-01 2010-04-01 At&T Intellectual Property I, L.P. System and method for a communication exchange with an avatar in a media communication system
US20100114901A1 (en) * 2008-11-03 2010-05-06 Rhee Young-Ho Computer-readable recording medium, content providing apparatus collecting user-related information, content providing method, user-related information providing method and content searching method
US20100145892A1 (en) * 2008-12-10 2010-06-10 National Taiwan University Search device and associated methods

Cited By (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090089677A1 (en) * 2007-10-02 2009-04-02 Chan Weng Chong Peekay Systems and methods for enhanced textual presentation in video content presentation on portable devices
US20120259788A1 (en) * 2007-10-24 2012-10-11 Microsoft Corporation Non-destructive media presentation derivatives
US9047593B2 (en) * 2007-10-24 2015-06-02 Microsoft Technology Licensing, Llc Non-destructive media presentation derivatives
US20110283172A1 (en) * 2010-05-13 2011-11-17 Tiny Prints, Inc. System and method for an online memories and greeting service
US20140058828A1 (en) * 2010-06-07 2014-02-27 Affectiva, Inc. Optimizing media based on mental state analysis
US20120011272A1 (en) * 2010-07-08 2012-01-12 Panasonic Corporation Electronic device and computer program
US8521849B2 (en) * 2010-07-08 2013-08-27 Panasonic Corporation Transmission control device and computer program controlling transmission of selected content file
US8808088B1 (en) * 2010-10-21 2014-08-19 Wms Gaming, Inc. Coordinating media content in wagering game systems
US10068412B2 (en) 2010-10-21 2018-09-04 Bally Gaming, Inc. Coordinating media content in wagering game systems
US20130132839A1 (en) * 2010-11-30 2013-05-23 Michael Berry Dynamic Positioning of Timeline Markers for Efficient Display
US8677242B2 (en) * 2010-11-30 2014-03-18 Adobe Systems Incorporated Dynamic positioning of timeline markers for efficient display
US10380647B2 (en) * 2010-12-20 2019-08-13 Excalibur Ip, Llc Selection and/or modification of a portion of online content based on an emotional state of a user
US9774747B2 (en) * 2011-04-29 2017-09-26 Nexidia Inc. Transcription system
US20120278071A1 (en) * 2011-04-29 2012-11-01 Nexidia Inc. Transcription system
US20130330062A1 (en) * 2012-06-08 2013-12-12 Mymusaic Inc. Automatic creation of movie with images synchronized to music
US10417314B2 (en) * 2012-06-14 2019-09-17 Open Text Sa Ulc Systems and methods of a script generation engine
US10971191B2 (en) 2012-12-12 2021-04-06 Smule, Inc. Coordinated audiovisual montage from selected crowd-sourced content with alignment to audio baseline
US11166000B1 (en) 2013-03-05 2021-11-02 Google Llc Creating a video for an audio file
US10122983B1 (en) * 2013-03-05 2018-11-06 Google Llc Creating a video for an audio file
EP2860731A1 (en) * 2013-10-14 2015-04-15 Thomson Licensing Movie project scrutineer
US20150161249A1 (en) * 2013-12-05 2015-06-11 Lenovo (Singapore) Ptd. Ltd. Finding personal meaning in unstructured user data
WO2015192130A1 (en) * 2014-06-13 2015-12-17 Godfrey Mark T Coordinated audiovisual montage from selected crowd-sourced content with alignment to audio baseline
US20150379098A1 (en) * 2014-06-27 2015-12-31 Samsung Electronics Co., Ltd. Method and apparatus for managing data
US10691717B2 (en) * 2014-06-27 2020-06-23 Samsung Electronics Co., Ltd. Method and apparatus for managing data
US9659394B2 (en) 2014-06-30 2017-05-23 Microsoft Technology Licensing, Llc Cinematization of output in compound device environment
US9773070B2 (en) 2014-06-30 2017-09-26 Microsoft Technology Licensing, Llc Compound transformation chain application across multiple devices
US9356913B2 (en) 2014-06-30 2016-05-31 Microsoft Technology Licensing, Llc Authorization of joining of transformation chain instances
US9396698B2 (en) 2014-06-30 2016-07-19 Microsoft Technology Licensing, Llc Compound application presentation across multiple devices
US10424283B2 (en) 2015-06-03 2019-09-24 Smule, Inc. Automated generation of coordinated audiovisual work based on content captured from geographically distributed performers
GB2554322A (en) * 2015-06-03 2018-03-28 Smule Inc Automated generation of coordinated audiovisual work based on content captured geographically distributed performers
GB2554322B (en) * 2015-06-03 2021-07-14 Smule Inc Automated generation of coordinated audiovisual work based on content captured from geographically distributed performers
US11488569B2 (en) 2015-06-03 2022-11-01 Smule, Inc. Audio-visual effects system for augmentation of captured performance based on content thereof
US9911403B2 (en) 2015-06-03 2018-03-06 Smule, Inc. Automated generation of coordinated audiovisual work based on content captured geographically distributed performers
WO2016196987A1 (en) * 2015-06-03 2016-12-08 Smule, Inc. Automated generation of coordinated audiovisual work based on content captured geographically distributed performers
US10664520B2 (en) * 2015-06-05 2020-05-26 Apple Inc. Personalized media presentation templates
US20160357864A1 (en) * 2015-06-05 2016-12-08 Apple Inc. Personalized music presentation templates
US20170003828A1 (en) * 2015-06-30 2017-01-05 Marketing Technology Limited On-the-fly generation of online presentations
US10269035B2 (en) * 2015-06-30 2019-04-23 Marketing Technology Limited On-the-fly generation of online presentations
US11321385B2 (en) 2016-03-15 2022-05-03 Google Llc Visualization of image themes based on image content
US10127945B2 (en) 2016-03-15 2018-11-13 Google Llc Visualization of image themes based on image content
WO2017183015A1 (en) * 2016-04-20 2017-10-26 Muvix Media Networks Ltd. Methods and systems for independent, personalized, video-synchronized, cinema-audio delivery and tracking
US20180005157A1 (en) * 2016-06-30 2018-01-04 Disney Enterprises, Inc. Media Asset Tagging
US10642893B2 (en) 2016-09-05 2020-05-05 Google Llc Generating theme-based videos
US11328013B2 (en) 2016-09-05 2022-05-10 Google Llc Generating theme-based videos
US10347025B2 (en) * 2017-02-09 2019-07-09 International Business Machines Corporation Personalized word cloud embedded emblem generation service
US11310538B2 (en) 2017-04-03 2022-04-19 Smule, Inc. Audiovisual collaboration system and method with latency management for wide-area broadcast and social media-type user interface mechanics
US11683536B2 (en) 2017-04-03 2023-06-20 Smule, Inc. Audiovisual collaboration system and method with latency management for wide-area broadcast and social media-type user interface mechanics
US11553235B2 (en) 2017-04-03 2023-01-10 Smule, Inc. Audiovisual collaboration method with latency management for wide-area broadcast
US11032602B2 (en) 2017-04-03 2021-06-08 Smule, Inc. Audiovisual collaboration method with latency management for wide-area broadcast
US20210241739A1 (en) * 2017-06-29 2021-08-05 Dolby International Ab Methods, Systems, Devices and Computer Program Products for Adapting External Content to a Video Stream
CN113724744A (en) * 2017-06-29 2021-11-30 杜比国际公司 Method, system, and computer-readable medium for adapting external content to a video stream
US10891930B2 (en) * 2017-06-29 2021-01-12 Dolby International Ab Methods, systems, devices and computer program products for adapting external content to a video stream
US11610569B2 (en) * 2017-06-29 2023-03-21 Dolby International Ab Methods, systems, devices and computer program products for adapting external content to a video stream
US20190019322A1 (en) * 2017-07-17 2019-01-17 At&T Intellectual Property I, L.P. Structuralized creation and transmission of personalized audiovisual data
US11062497B2 (en) * 2017-07-17 2021-07-13 At&T Intellectual Property I, L.P. Structuralized creation and transmission of personalized audiovisual data
CN110135355A (en) * 2019-05-17 2019-08-16 吉林大学 A method of utilizing color and audio active control driver's mood
US11562128B2 (en) * 2020-03-30 2023-01-24 Bank Of America Corporation Data extraction system for targeted data dissection

Also Published As

Publication number Publication date
WO2011075440A2 (en) 2011-06-23
WO2011075440A3 (en) 2011-10-06

Similar Documents

Publication Publication Date Title
US20110154197A1 (en) System and method for algorithmic movie generation based on audio/video synchronization
US9213705B1 (en) Presenting content related to primary audio content
US11831939B2 (en) Personalized digital media file generation
US9753925B2 (en) Systems, methods, and apparatus for generating an audio-visual presentation using characteristics of audio, visual and symbolic media objects
EP3475848B1 (en) Generating theme-based videos
JP5009305B2 (en) How to create a communication material
CN101180870B (en) Method of automatically editing media recordings
O'Halloran et al. Multimodal analysis within an interactive software environment: critical discourse perspectives
US8937620B1 (en) System and methods for generation and control of story animation
TWI514171B (en) System and methods for dynamic page creation
US20130297599A1 (en) Music management for adaptive distraction reduction
US20050069225A1 (en) Binding interactive multichannel digital document system and authoring tool
Marshall et al. Representing popular music stardom on screen: the popular music biopic
US20160299914A1 (en) Creative arts recommendation systems and methods
Radovanović TikTok and sound: Changing the ways of creating, promoting, distributing and listening to music
McCabe Conversations with a killer: the Ted Bundy tapes and affective responses to the true crime documentary
Hu The KTV aesthetic: popular music culture and contemporary Hong Kong cinema
Cortez Museums as sites for displaying sound materials: a five-use framework
Venkatesh et al. “You Tube and I Find”—Personalizing multimedia content access
Luo The omnivore turn in cultural production: Case study of China’s Rainbow Chamber Singers
Huelin Soundtracking the city break: Library music in travel television
WO2002059799A1 (en) A multimedia system
Collares et al. Personalizing self-organizing music spaces with anchors: design and evaluation
Feisthauer Reconsidering Contemporary Music Videos
WO2010045607A2 (en) A system and method for rule-based content customization for user presentation

Legal Events

Date Code Title Description
AS Assignment

Owner name: SACRED AGENT, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HAWTHORNE, LOUIS;MCCALL, SPENCER STUART;NEAL, MICHAEL R.;AND OTHERS;SIGNING DATES FROM 20091216 TO 20091217;REEL/FRAME:023678/0598

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION