WO2005122134A1 - Musical sound generation device, musical sound generation method, musical sound generation program, and storage medium - Google Patents
Musical sound generation device, musical sound generation method, musical sound generation program, and storage medium
- Publication number
- WO2005122134A1 PCT/JP2004/008037 JP2004008037W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- musical sound
- tone
- image
- motion
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/155—User input interfaces for electrophonic musical instruments
- G10H2220/201—User input interfaces for electrophonic musical instruments for movement interpretation, i.e. capturing and recognizing a gesture or a specific kind of movement, e.g. to control a musical instrument
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/155—User input interfaces for electrophonic musical instruments
- G10H2220/441—Image sensing, i.e. capturing images or optical patterns for musical purposes or musical control purposes
- G10H2220/455—Camera input, e.g. analyzing pictures from a video camera and using the analysis results as control data
Definitions
- the present invention relates to a musical sound generation device, a musical sound generation method, a musical sound generation program, and a storage medium.
- the present invention relates to a musical sound generating device, a musical sound generating method, a musical sound generating program, and a storage medium for automatically generating musical sound data corresponding to image data.
- Japanese Patent No. 2629740 discloses a technique for controlling a tempo or the like using a contour of a subject.
- R (red), G (green), and B (blue) color signals are separated from an input video signal, and gradation data representing gradation is generated as digital data for each color.
- the subject is specified based on the gradation data of each color and the predetermined threshold data, the contour of the subject is detected, and the performance is controlled according to the detected “complexity of the contour”.
- Japanese Patent Laid-Open Publication No. 2000-276138 discloses a technique for detecting a position of a moving operation object and generating a musical tone.
- the technique detects the position of a specific operation object having a predetermined shape, and generates a tone based on both the time taken for the object to move from an arbitrary position to its current position and on the current position itself.
- more specifically, the position of a specific part of the subject is detected, and a tone to be emitted is assigned to each sound generation area set on the image display surface. When the specific part is determined to no longer exist in one area on the image display surface and, after a lapse of a predetermined period of time, is determined to exist in another area, and that other area belongs to a sound generation area, the tone assigned to that sound generation area is generated.
- Japanese Patent Application Laid-Open No. 2000-276139 discloses a technique of extracting a plurality of motion vectors from each block of a supplied image. There is disclosed a technique of calculating one control vector from the plurality of motion vectors and generating a musical tone based on the calculated control vector.
- the method of extracting a plurality of motion vectors from the blocks of an image is to find, for each corresponding block (16 x 16 pixels) of a specific image frame and a subsequent image frame, the pixels with the least difference in color, and to define the difference between their positions as the motion vector.
- in the technique of Japanese Patent No. 2629740, the color signals of a still image are decomposed, the subject is specified by a threshold test for each color, the contour is detected, and its complexity must be judged, so the processing load is large.
- a further disadvantage is that modifying existing sound data based on the complexity of contours to generate musical sounds is not contemplated.
- it is an object of the present invention to provide a musical sound generating apparatus, a musical sound generating method, a musical sound generating program, and a storage medium that automatically generate musical sound data in a simple manner by calculating motion data from input image data, without preparing performance information or the like.

Disclosure of the invention
- the invention according to claim 1 of the present application is a tone generation device comprising: operating-section specifying means that takes the image data of each frame as input and extracts motion data indicating motion from the difference between corresponding pixels of the image data of a plurality of frames; tone generating means for generating tone data having a sound source, a scale, and a volume corresponding to the motion data specified by the operating-section specifying means; and output means for outputting the tone data generated by the tone generating means,
- wherein the tone generating means is provided with tone synthesizing means, and the tone synthesizing means generates tone data synthesized with other tone data.
- the invention according to claim 2 of the present application is characterized in that the musical sound generating means according to claim 1 is provided with a rhythm control means, and the rhythm control means processes the musical sound data.
- the invention according to claim 3 of the present application is characterized in that the musical tone generating means according to claim 1 is provided with repetition control means, and the musical tone data is processed by the repetition control means.
- the invention according to claim 4 of the present application is characterized in that the musical sound generating means according to claim 1 is provided with an image database (hereinafter abbreviated as image DB) in which patterns are registered, and with image matching means.
- the invention according to claim 5 of the present application is characterized in that a light emitting means is provided in the musical sound generating device according to claim 1, and the light emitting means emits light based on the musical sound data.
- the invention according to claim 6 of the present application is characterized in that an image processing means is provided in the musical sound generating device according to claim 1, and the image processing means performs image processing based on the musical sound data.
- the invention according to claim 7 of the present application is a tone generation method that takes the image data of each frame as input, calculates motion data indicating motion from the difference between corresponding pixels of the image data of a plurality of frames, and generates tone data having a sound source, a scale, and a volume corresponding to the motion data,
- wherein a tone synthesis step is provided, and in the tone synthesis step tone data synthesized with other tone data is generated.
- the invention according to claim 8 of the present application is a music generation program comprising an operation-section specifying step of taking the image data of each frame as input and extracting motion data indicating motion from the difference between corresponding pixels of the image data of a plurality of frames,
- wherein a tone synthesis step is provided in the tone generation step, and the tone data is synthesized with other tone data in the tone synthesis step.
- the invention according to claim 9 of the present application is characterized in that it is a computer-readable recording medium recording the program according to claim 8.
- FIG. 1 is a configuration diagram of a musical sound generation device according to the present invention.
- FIG. 2 is a flowchart for specifying an operation of a musical sound generation program according to the present invention.
- FIG. 3 is a flowchart of a matching process according to the present invention.
- FIG. 4 is a flowchart of a sound task according to the present invention.
- FIG. 5 is a flowchart of a diagram task according to the present invention.
- FIG. 6 is a flowchart of an optical task according to the present invention.
- FIG. 7 is a diagram showing a configuration example of a difference list and a history stack.
- FIG. 8 is a diagram of a storage medium for storing a musical sound generation program according to the present invention.
- FIG. 1 shows a first embodiment according to the present invention, and is a configuration diagram of a musical sound generation device.
- reference numeral 100 denotes a tone generation device as tone generation means according to the present invention.
- Reference numeral 110 denotes an imaging unit which inputs continuous image data to the musical sound generation device 100 as a frame.
- Reference numeral 120 denotes continuous frame-based image data from another device, for example a moving image output in frame units from a camera, a personal computer, a storage medium, or the like.
- Reference numeral 10 provided in the musical sound generation device 100 denotes operation specifying means, which takes as its targets the image data sent from the imaging means 110 and the image data 120 from other devices, and has the function of storing the input image data and detecting movement. At present, it is common for continuous moving images to be input at a frame rate of 10 to 30 frames/sec.
- the operation specifying means 10 includes a first buffer 12 into which successive frames are read, and a second buffer 13 that stores the previously read frame. First, a frame of moving image data is read into the first buffer 12; its contents are then sent to the second buffer 13, and the next frame is read into the first buffer. By repeating this, the image frame following the one in the second buffer is always read into the first buffer, and the two frames in the first and second buffers are continuously compared.
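The double-buffer pipeline described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; `frame_stream` is a hypothetical name, and frames are represented as opaque values.

```python
def frame_stream(frames):
    """Yield (second_buffer, first_buffer) pairs: the newest frame paired
    with the frame read immediately before it, as in buffers 12 and 13."""
    first_buffer = None
    for frame in frames:
        # Shift the previous newest frame into the second buffer,
        # then read the incoming frame into the first buffer.
        second_buffer, first_buffer = first_buffer, frame
        if second_buffer is not None:
            yield second_buffer, first_buffer

frames = ["frame1", "frame2", "frame3"]
pairs = list(frame_stream(frames))
# pairs now holds every consecutive frame pair, ready for pixel comparison.
```

Each yielded pair corresponds to one comparison of the second and first buffers.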
- the frame information of the image data read into the first buffer 12 is sent to the second buffer 13 after the matching means 11 has checked whether a registered figure is included.
- the matching means 11 extracts by matching whether there is a figure registered in a pattern database (hereinafter abbreviated as pattern DB) in the first buffer 12, and sends the figure to the musical sound generating means 60.
- the matching means 11 first extracts a contour by analyzing the image data in the first buffer 12, generates patterns obtained by enlarging, reducing, and rotating the contour figure, and checks whether any of them matches a figure registered in the pattern DB.
- the image data in the first buffer 12 and the image data in the second buffer 13 are consecutive frames; the difference between each pixel of the two images is extracted into the difference buffer 14, and the motion detection means 15 extracts the motion data between them.
- when every pixel value of the image data in the first buffer differs from that in the second buffer, it cannot be distinguished whether the whole image has been illuminated, everything has moved, or the images are unrelated; since no motion can be distinguished, the process simply advances to the next frame. Likewise, even if all the pixel differences are zero, the image is still and no motion is detected, so frames are advanced until the next motion.
- pixels whose R, G, and B color value differences between the two frames are equal to or greater than a certain threshold are extracted as having a difference, and each group of such differing pixels is extracted as an "island".
- the size of each island is treated as an area value, approximated by the number of differing pixels, and islands whose area value is equal to or less than the threshold are ignored.
- since there is a color difference in addition to the brightness difference, motion can be picked up for each color by obtaining the color difference.
- the motion detection means 15 outputs to the tone generating means 60 a list of the X and Y coordinates of the center of gravity and the area value of each island representing the difference between the two frames.
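The per-island output described above (center of gravity plus area value) might be computed as follows; the function name and dictionary layout are illustrative.

```python
def island_entry(pixels):
    """Centre of gravity (X, Y) and area value for one island given as a
    list of (y, x) pixel coordinates, as output by the motion detection means."""
    area = len(pixels)
    x = sum(px for _, px in pixels) / area  # mean column = X of centre of gravity
    y = sum(py for py, _ in pixels) / area  # mean row = Y of centre of gravity
    return {"x": x, "y": y, "area": area}

entry = island_entry([(0, 0), (0, 2), (2, 1)])
```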
- the tone generating means 60 has a sound database (hereinafter referred to as sound DB) 40 in which sound sources, scales, and chords are registered; the sound corresponding to the position and area of each island in the frame data sent from the operation specifying means 10 is extracted from the sound DB, and the parameters of the musical sound data are output in accordance with MIDI (Musical Instrument Digital Interface), the standard for exchanging music data.
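One plausible mapping from an island to MIDI parameters, in the spirit of the selection keys described later (X for instrument, Y for scale, area for volume). The specific formulas and frame dimensions are assumptions for illustration; the patent leaves the concrete mapping to the sound DB.

```python
def island_to_midi(entry, width=320, height=240):
    """Map one island entry (x, y, area) to illustrative MIDI parameters:
    X selects the program (instrument), Y the note, area the velocity."""
    program = int(entry["x"] / width * 128) % 128       # program change 0-127
    note = 127 - int(entry["y"] / height * 128) % 128   # higher on screen = higher pitch
    velocity = min(127, entry["area"])                  # bigger motion = louder
    return {"program": program, "note": note, "velocity": velocity}

msg = island_to_midi({"x": 160.0, "y": 0.0, "area": 40})
```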
- the synthesizing means 61 of the musical sound generating means 60 reads analog or digital data from a music database (hereinafter abbreviated as music DB) 50 storing existing measures, melodies, music, and the like. Analog data is first converted to digital data, while digital data is extracted as it is; it is then combined with the musical sound data based on the MIDI data output from the motion detection means, and the combined digital data is generated as MIDI parameters.
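At the sample level, the combination performed by the synthesizing means 61 amounts to mixing two digital signals; a toy sketch (the `gain` parameter and equal-length trimming are assumptions):

```python
def mix(tone, accompaniment, gain=0.5):
    """Blend generated tone samples with samples from existing music.
    Both inputs are digital sample sequences (e.g. floats in -1.0..1.0)."""
    n = min(len(tone), len(accompaniment))
    return [tone[i] * gain + accompaniment[i] * (1.0 - gain) for i in range(n)]

mixed = mix([1.0, 1.0, 0.0], [0.0, 1.0, 1.0])
```

In practice this kind of blending would typically be delegated to a digital signal processor, as the sound-task description later notes.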
- the rhythm control means 62 in the musical sound generating means 60 is used when the rhythm or tempo of a song or the like is modified or changed by the generated musical sound data. It is a function that extracts the time element from the motion data expressed in MIDI by the operation specifying means 10 and uses the repetition cycle between frames to speed up or slow down the rhythm and tempo.
- the repetition control means 63 in the musical sound generating means 60 is a function that extracts a time element from the motion data expressed in MIDI by the operation specifying means 10 and repeatedly emits the generated musical sound data using the repetition period between frames.
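The "repetition period between frames" used by both control means can be derived from frame time stamps; a minimal sketch (averaging successive deltas is an assumed, simple choice):

```python
def repetition_period(timestamps):
    """Average interval between successive frame time stamps. The rhythm
    control means could scale tempo by this value, and the repetition
    control means could reuse it as the repeat cycle."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(deltas) / len(deltas)

period = repetition_period([0.0, 0.5, 1.0, 1.5])  # seconds
```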
- the above data is output as sound by the sound output means 65, as a specific image generated and output by the image processing means 80, or as output such as blinking of light by the light emitting means 90.
- FIGS. 2 to 7 show a second embodiment using a program according to the present invention, and a description will be given below of a tone generation program.
- FIG. 2 is a flowchart of the entire program processing.
- the program in FIG. 2 is an embodiment executed as one task under the control of the operating system.
- in step P210, the tasks for audio output, image output, and light output are started.
- each output task is generated separately and waits, in a "waiting for event" state, to receive the tone data based on subsequent differences.
- child tasks such as the sound task, image task, and light task, which perform processing independently and in parallel, are activated separately, but each is in a state of waiting for the specific event it processes, in this case an event in the music data.
- the program specifying the operation, as the parent task, generates the tone data, and when a processing event occurs the child tasks are activated with the tone data for each child task. The tone data is thus sent to each child task as soon as it is generated, and each task performs its output processing in parallel. However, when an effect that synchronizes sound, image, and light is desired, such as adding a slightly delayed sound to the motion of the image, the outputs can be handled in a single task, or each task can synchronize its output by processing or by using a synchronization instruction. In addition, each task may be activated as needed at some other initial setting, or activated separately.
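The parent/child task structure described above can be sketched with threads and queues; all names here are illustrative, and a real system would use operating-system event notification rather than Python queues.

```python
import queue
import threading

def child_task(name, events, log):
    """Child task: block waiting for tone-data events and 'output' them
    (here, simply record them)."""
    while True:
        tone_data = events.get()   # blocks: the 'waiting for event' state
        if tone_data is None:      # shutdown sentinel from the parent
            break
        log.append((name, tone_data))

def run_parent(tone_frames):
    """Parent task: fan each generated tone datum out to the sound,
    image, and light child tasks, then shut them down."""
    log = []
    channels = {name: queue.Queue() for name in ("sound", "image", "light")}
    workers = [threading.Thread(target=child_task, args=(n, q, log))
               for n, q in channels.items()]
    for w in workers:
        w.start()
    for tone in tone_frames:           # event occurrence: notify every child
        for q in channels.values():
            q.put(tone)
    for q in channels.values():        # tell each child to finish
        q.put(None)
    for w in workers:
        w.join()
    return log

log = run_parent(["note-on C4"])
```

Each of the three tasks receives the same tone datum and processes it in parallel, mirroring the fan-out at event time.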
- in step P211, the first frame for generating a musical sound is read into the first buffer.
- in step P212, to read the second frame continuously, the contents of the first buffer are moved to the second buffer, and in step P214 the next new frame is read into the first buffer.
- the above is the procedure by which the latest frame is always stored in the first buffer and the immediately preceding frame in the second buffer, so that the pixels of the two can be compared.
- in the process of calculating the difference between the two frames in step P216, a difference for each color of each corresponding pixel is first calculated, and groups of pixels differing from their surroundings by a certain value or more are taken as "islands". An island is a group of pixels whose values fall within a certain range, not only pixels of exactly the same value. The number of pixels constituting each island is then counted as its area value.
- in step P218, when the color values of all pixels of the two compared images differ by no more than a certain value, the image is a still image or a continuous frame without motion, and the difference between all pixels is zero. In this case the process proceeds to step P240, the matching process for determining whether a registered figure is included. If there is a difference of more than a certain value between pixels of the compared images, it is determined in step P220 whether the values of all pixels differ by more than a certain value; this is the case when the two images are completely different, when no pixel has the same color value because of overall lighting, or when a finely patterned figure moves at high speed and cannot be tracked.
- the condition for reaching step P222 is that, after the above selection, the corresponding pixels in the frame contain both parts whose color values differ by more than a certain value and identical parts whose color values differ by less than a certain value; on this basis it is determined whether there is any movement.
- step P222 detects, one after another, groups formed by pixels having similar difference values as "islands". If there are no more islands to be taken out, the process proceeds from step P224 to P232. When an island is taken out, its area and the center of gravity of its constituent pixels are calculated in step P226. Islands whose area value does not reach a certain threshold are detected in step P228, ignored as trivial, and the process returns to step P222 to take out and inspect the next island. If the area of the island exceeds the threshold, an entry holding the center of gravity of the island is registered in the difference list for tone generation, the area and the average color value of its dots are added, and the process returns to step P222 to extract the next island.
- FIG. 7 is a configuration diagram of an embodiment of the history stack 80 and the difference list 70.
- Each detected island is registered in the difference list 70.
- the history stack 80 stacks them chronologically.
- the difference list 70 has an entry number column 71 for recording the number of islands detected for each frame to be analyzed, and a time stamp column 72 for recording the time at that time.
- an entry is created for each island with the X coordinate column 73 and the Y coordinate column 74 of the island as a pair, and the area and average color value of the island are stored in its area column 75 and average color value column 76 in step P230.
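The difference list 70 and history stack 80 described above can be modeled as simple data structures; the class and field names are illustrative stand-ins for the numbered columns and fields.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class IslandEntry:
    """One island entry: X/Y columns 73-74, area column 75, average colour column 76."""
    x: float
    y: float
    area: int
    avg_color: Tuple[int, int, int]

@dataclass
class DifferenceList:
    """Difference list 70: time stamp column 72 plus the island entries."""
    timestamp: float
    islands: List[IslandEntry] = field(default_factory=list)

    @property
    def entry_count(self) -> int:
        """Entry number column 71: islands detected for this frame."""
        return len(self.islands)

@dataclass
class HistoryStack:
    """History stack 80: difference lists stacked chronologically (field 82)
    and the registered figures found in them (field 83)."""
    frames: List[DifferenceList] = field(default_factory=list)
    registered_figures: List[str] = field(default_factory=list)

    def push(self, diff: DifferenceList) -> None:
        self.frames.append(diff)

stack = HistoryStack()
stack.push(DifferenceList(timestamp=0.1,
                          islands=[IslandEntry(12.0, 8.0, 25, (200, 40, 40))]))
```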
- Step P240 is a pattern matching process for determining whether or not a registered pattern exists in the contents of the first buffer, which will be described in detail with reference to FIG.
- in the pattern matching process of step P240, if a registered figure is found, it is recorded in the registered figure field 83 of the history stack 80, or the registered figure is returned as a visible figure list with a parameter value.
- the history stack 80 includes an end display field 81 indicating the last entry, a difference list field 82 holding the difference list 70 of each island as an entry, and a registered figure field 83 into which a registered figure is entered when an island is determined to be a registered figure.
- step P246 is a process of transferring data to each output task; it sends an event occurrence notification command to the operating system with the latest column of the history stack 80, including the difference list indicating movement, as a parameter.
- the output processing for each task is shown in Figs.
- in step P248, if there is a next frame, the process returns to step P212, where that frame is read as a new frame. If the judgment of step P248 finds the last frame, the series of differences stored in the history stack 80, the detected figures, and the figure list, if any, are deleted in step P250; in step P252 each output task is deleted, and the operation specifying process ends.
- in a repetitive mode, processing continues even after the input image stops, and a warning output can be produced, for example when an emergency situation is detected.
- the output task can be freely configured.
- FIG. 3 is a flowchart of the matching process shown in Step P240 of FIG.
- step P300 takes in the contents of the first buffer and prepares for access to the pattern DB in which the figures to be matched are registered.
- the outline of the figure is extracted by a general method, for example by calculating differences in color value from the contents of the first buffer.
- whether a closed loop exists in the extracted contours is determined one after another, and matching for similar figures is performed.
- if no matching data is found in the inspection of step P340, the process returns to step P320, where the next closed figure is extracted. If matching data is found, the name (shape ID) of the matched figure is extracted in step P350. Next, in step P360, the center position and the color of the figure are extracted in addition to its name and added to a figure list (not shown).
- the figure list is a list storing information on the registered figures included in the frame, and is added to the registered figure column 83 of the history stack 80.
- in step P370, an end display is added to the last column of the figure list in the history stack 80, and the processing time is stored in the time stamp column; this is returned to the caller as a parameter list.
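A toy version of the figure matching in FIG. 3: contours are matched against registered patterns after normalizing for position and scale. This is a deliberately simplified stand-in; the patent's matching also covers rotation, which this sketch omits, and all names are illustrative.

```python
def normalize(points):
    """Translation- and scale-invariant signature of a contour point set:
    shift to the origin and scale to a unit bounding box."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    s = max(max(xs) - min(xs), max(ys) - min(ys)) or 1
    return sorted({(round((x - min(xs)) / s, 2), round((y - min(ys)) / s, 2))
                   for x, y in points})

def match_figure(contour, pattern_db):
    """Return the shape ID of the first registered figure whose normalized
    signature equals the contour's, else None (no match found)."""
    sig = normalize(contour)
    for shape_id, pattern in pattern_db.items():
        if normalize(pattern) == sig:
            return shape_id
    return None

square = [(0, 0), (0, 2), (2, 0), (2, 2)]
big_square = [(0, 0), (0, 4), (4, 0), (4, 4)]  # same shape, enlarged
found = match_figure(big_square, {"square": square})
```

The enlarged contour matches the registered figure because both normalize to the same signature, echoing the enlarge/reduce matching in the text.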
- FIG. 4 is a flowchart of the sound task.
- the sound task generated in step P210 of FIG. 2 first issues an event wait command to the operating system in step P410, and waits to be called with sound data from step P246 shown in FIG. 2.
- the call parameters point to the history list or figure list, and in step P412 the difference list 70 and the registered figure are taken out, using the end display field 81 of the history stack or the last entry of the figure list as the end condition.
- in step P414, the sound DB is first read, and based on the extracted difference list 70 and registered figures, the instrument type is selected using the X coordinate as a key, the scale using the Y coordinate as a key, the volume balance using the XY coordinates as a key, the sound effector type using the area as a key, and a special sound using the registered figure as a key. The parameters are then adjusted in accordance with the MIDI standard in step P416.
- in step P418, it is determined whether there is a request to synthesize the generated sound data with other sound data. If there is, the music, bars, melodies, and the like to be synthesized are read from the music DB and synthesized in step P420; a digital signal processor may be used for this synthesis.
- in step P422, it is determined whether there is a request to change the tempo of the generated music, bars, melodies, and the like.
- if there is a request to change the tempo, a process is performed in which, for example, the time stamps of the same registered figure are extracted and the interval of the target music is gradually adjusted to the repetition interval.
- in step P426, it is determined whether there is a repetition request. If repetition is specified, a repetition cycle and an end condition for the repetition are set in step P428. Here, if the values of the time stamps 72 of the difference lists 70 registered in the history stack 80 are taken out and their differences computed, the period of the change of the figure can be obtained.
- Step P430 is an audio output process in which the digital audio signal is converted into an analog audio signal and output from a speaker or the like.
- in step P432, it is determined whether the repetition condition set in step P428 is satisfied. If not, the process returns to step P430 and the audio output process resumes; otherwise the process returns to the event waiting step P410 to generate a sound according to the next movement of the frame.
- FIG. 5 is a flowchart of the diagram task.
- the diagram task generated in step P210 of FIG. 2 first issues an event wait command to the operating system in step P510, and then waits to be called with sound data from step P246 shown in FIG. 2.
- the call parameter points to the history list or figure list, and in step P512 the difference list 70 and the registered figure are taken out, using the end display field 81 of the history stack or the last entry display of the figure list as the end condition.
- in step P514, an image database (hereinafter abbreviated as image DB) in which patterns are registered is first read, and based on the extracted difference list 70 and registered figures, the type of figure is selected using the X coordinate as a key, further attributes using the Y coordinate as a key, the color scheme using the XY coordinates as a key, the type of figure effector using the area as a key, and a special figure using the registered figure as a key. In step P516, it is determined whether a registered figure is in the history list.
- if there is a registered figure, the figure or its color is changed in step P518 in accordance with the convention for drawing the various figures corresponding to the registered figure.
- in step P520, it is determined whether there is a request to synthesize the generated image data with other image data; if so, the patterns, photographs, and the like to be synthesized are read from the image DB and synthesized in step P522.
- this synthesis can be performed using various image processing application programs.
- Step P524 is an image output process in which image data is displayed on various display devices.
- FIG. 6 is a flowchart of the optical task.
- the optical task generated in step P210 of FIG. 2 first issues an event wait command in step P610, and waits to be called with sound data from step P246 shown in FIG. 2.
- the calling parameter points to the history list or figure list, and the difference list 70 and the registered figure are taken out in step P612.
- in step P614, a light database (hereinafter abbreviated as light DB) in which a list of light sources, colors, and brightness and selection rules are registered is first read, and light is emitted based on the extracted difference list and the registered figure, using the X coordinate as a key.
- in step P616, it is determined whether a registered figure is in the history list. If there is a registered figure, effects such as changing the intensity of the light beam in a wave shape or moving the locus of the light beam can be produced in step P618.
- in step P620, it is determined whether there is a request for repetition of the generated light data. If so, the repetition time is set in step P622, and lighting of the light emitting device is output in step P624. In step P626, it is determined whether the repetition condition set in step P622 is satisfied; if not, the process returns to step P624 and the light output process resumes; otherwise the process returns to the event waiting step P610 to generate light corresponding to the next movement.
- the elements selected corresponding to the coordinate values and the like described above, and the elements drawn from the various DBs, are examples; the selections are not limited to these, and the objects to be selected may be varied in many ways.
- various elements can be registered in the DBs, and different selections are possible according to the application target and purpose. Replacement, change, and combination of the selected elements and the various DB registration elements shall all be included in the scope of rights of this application.
- the present invention is not limited to this; based on the motion data detected from the frame difference, it can be widely applied as a frame analysis sensor, and the use of oscillating means, power generating means, and various driving means as output means is also included in the scope of the present application.
- FIG. 8 is an explanatory diagram of a storage medium storing a musical sound generation program according to the present invention.
- Reference numeral 900 denotes a terminal device on which the present invention is to be implemented.
- Reference numeral 910 denotes a bus, to which a logical operation device (CPU) 920, a main storage device 930, and input/output means 940 are connected.
- the input / output means 940 includes a display means 941 and a keyboard 942.
- a program based on the present invention is stored in the storage medium (CD) 990 as an executable tone generation program (GP) 932, and a loader 931 for installing this program in the main storage 930 is also stored in the storage medium (CD) 990.
- the loader 931 is read from the storage medium (CD) 990 into the main storage device 930, and the tone generation program (GP) 932 is installed into the main storage device 930 by the loader 931.
- the terminal device 900 functions as the musical sound generation device 100 shown in FIG.
- the tone generation program (GP) 932 according to the present invention can also be loaded into the terminal device 900 from a large storage device 973 built into the server 971 connected to the LAN 950, via the LAN interface (LAN I/F) 911.
- in this case, the loader 931 for installing the tone generation program (GP) 932 stored in the server 971 is first loaded into the main storage device 930 via the LAN 950, and this loader then installs the executable tone generation program (GP) 932 in the large storage device 973 into the main storage device 930.
- alternatively, the tone generation program (GP) 932 according to the present invention stored in the storage 983 can be installed directly by the remote loader 982 using the work area of the main storage device 930.
- alternatively, this may take the same form as the loader 931 used with the large storage device 973 connected to the LAN 950.
- the invention according to claim 1 extracts motion data indicating motion from the differences between corresponding pixels of a plurality of frames of image data, and generates music data in which musical sound data generated on the basis of the motion data is synthesized with other sound data, so that, for example, an existing song can be changed by a dance gesture or by the change of scenery seen from a car.
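The claim-1 flow (motion data → tone data → synthesis with other sound data) can be sketched as follows. This is an illustrative assumption, not the patented method: the frequency mapping and the weighted-sum mixing law are invented for the example, and samples are plain floats in [-1, 1].

```python
# Hedged sketch: tone data generated from a motion value is mixed
# ("synthesized") with existing sound data such as a song.
import math

def tone_from_motion(motion, n_samples=8, rate=8000):
    """Generate a sine tone whose frequency rises with motion (0.0-1.0)."""
    freq = 220.0 + 660.0 * motion          # 220 Hz at rest, up to 880 Hz
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n_samples)]

def mix(tone, song, tone_gain=0.5):
    """Weighted sum of the generated tone and the existing sound data."""
    return [tone_gain * t + (1.0 - tone_gain) * s for t, s in zip(tone, song)]
```

With `tone_gain` near 0 the existing song dominates; raising it as motion increases would make gestures audibly reshape the music.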
- the invention according to claim 2 is the invention according to claim 1, wherein the musical sound generating means is provided with a rhythm control means; since the rhythm control means processes the musical sound data, one can, for example, listen to music in a comfortably fluctuating rhythm matched to the movement of a carp streamer fluttering in the wind.
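One plausible reading of the claim-2 rhythm control is scaling note durations by the motion value, so the music speeds up as the scene moves. The linear scaling law and parameter names below are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: rhythm control that shortens note durations
# (faster rhythm) as the motion value grows.

def apply_rhythm(durations, motion, depth=0.5):
    """Scale each note duration by a factor driven by motion (0.0-1.0)."""
    scale = 1.0 - depth * max(0.0, min(1.0, motion))
    return [d * scale for d in durations]
```

Feeding a slowly varying motion signal (such as a flag or carp streamer in the wind) produces the gently fluctuating tempo described above.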
- the invention according to claim 3 provides the musical sound generating means of claim 1 with a repetition control means; since the repetition control means processes the musical sound data, an echo can, for example, be added to the musical sound, or a warning sound can be sounded repeatedly when a dangerous or unusual movement is detected.
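The echo effect mentioned above is commonly realized as a feedback delay line; the minimal sketch below shows one such formulation. The delay length and decay factor are illustrative assumptions.

```python
# Hedged sketch: repetition control as a feedback delay line that
# adds a decayed copy of each earlier sample (an echo).

def add_echo(samples, delay=2, decay=0.5):
    """Mix each sample with a decayed copy from `delay` samples earlier."""
    out = list(samples)
    for i in range(delay, len(out)):
        out[i] += decay * out[i - delay]
    return out
```

Because the feedback reads from `out`, each echo spawns further, quieter echoes; with `decay` near 1.0 the same sound repeats almost indefinitely, which suits the repeated warning use case.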
- the invention according to claim 4 provides the musical sound generating means according to claim 1 with an image matching means, wherein musical sound data is generated from a matching pattern extracted from a registered image database using a figure in the image data as a key. Since different musical sound data is generated depending on the degree of matching, it becomes possible, for example, to detect by simple calculation a situation in which a similar object, monitored for safety by equipment mounted on a car, an automatic machine, or an appliance, becomes dangerous through unexpected movement.
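A simple way to picture the claim-4 image matching is nearest-pattern lookup: compare the extracted figure against each registered pattern and pick the tone associated with the closest one. The mean-absolute-difference metric and the tiny database below are illustrative assumptions.

```python
# Hedged sketch: select different musical sound data depending on
# which registered image pattern best matches the extracted figure.

def match_score(figure, pattern):
    """Mean absolute pixel difference; lower means a closer match."""
    diffs = [abs(f - p)
             for f_row, p_row in zip(figure, pattern)
             for f, p in zip(f_row, p_row)]
    return sum(diffs) / len(diffs)

def select_tone(figure, database):
    """Return the tone label of the best-matching registered pattern."""
    return min(database, key=lambda name: match_score(figure, database[name]))
```

For example, a database keyed by labels such as "warning" and "calm" would let an unexpected figure in a safety camera select an alert tone.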
- in the invention according to claim 5, a light emitting means is provided in the musical sound generating apparatus according to claim 1, and the light emitting means emits light based on the motion data; for example, stage lighting can be varied, or light can be emitted when a car or the like detects a dangerous movement.
- in the invention according to claim 6, an image processing means is provided in the musical sound generating device according to claim 1; since the image processing means performs image processing based on the musical sound data, one can view images in which the movement of an object is deformed, and enjoy images that emphasize the movements of actors and animals.
- in the invention according to claim 7, motion data indicating motion is calculated from the differences between corresponding pixels of the image data, and musical sound data synthesized with other sound data is generated based on the motion data, so that an existing song can be changed by a dance gesture or by the change of scenery seen from a car.
- the invention according to claim 8 likewise calculates motion data indicating motion from the differences between corresponding pixels of the image data and generates musical sound data synthesized from the motion data and other sound data, so that an existing song can be changed by a dance gesture or by the change of scenery seen from a car.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/629,235 US7655856B2 (en) | 2004-06-09 | 2004-06-09 | Musical sounding producing apparatus, musical sound producing method, musical sound producing program, and recording medium |
EP04745712.2A EP1760689B1 (en) | 2004-06-09 | 2004-06-09 | Musical sound producing apparatus and musical sound producing method |
PCT/JP2004/008037 WO2005122134A1 (ja) | 2004-06-09 | 2004-06-09 | 楽音生成装置、楽音生成方法、楽音生成プログラムおよび記憶媒体 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2004/008037 WO2005122134A1 (ja) | 2004-06-09 | 2004-06-09 | 楽音生成装置、楽音生成方法、楽音生成プログラムおよび記憶媒体 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005122134A1 true WO2005122134A1 (ja) | 2005-12-22 |
Family
ID=35503306
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/008037 WO2005122134A1 (ja) | 2004-06-09 | 2004-06-09 | 楽音生成装置、楽音生成方法、楽音生成プログラムおよび記憶媒体 |
Country Status (3)
Country | Link |
---|---|
US (1) | US7655856B2 (ja) |
EP (1) | EP1760689B1 (ja) |
WO (1) | WO2005122134A1 (ja) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7655856B2 (en) * | 2004-06-09 | 2010-02-02 | Toyota Motor Kyushu Inc. | Musical sounding producing apparatus, musical sound producing method, musical sound producing program, and recording medium |
WO2009052032A1 (en) * | 2007-10-19 | 2009-04-23 | Sony Computer Entertainment America Inc. | Scheme for providing audio effects for a musical instrument and for controlling images with same |
US7939742B2 (en) * | 2009-02-19 | 2011-05-10 | Will Glaser | Musical instrument with digitally controlled virtual frets |
KR101394306B1 (ko) * | 2012-04-02 | 2014-05-13 | 삼성전자주식회사 | 효과 음향을 출력하는 휴대용 단말기의 장치 및 방법 |
US9281793B2 (en) | 2012-05-29 | 2016-03-08 | uSOUNDit Partners, LLC | Systems, methods, and apparatus for generating an audio signal based on color values of an image |
US20170177181A1 (en) * | 2015-12-18 | 2017-06-22 | Facebook, Inc. | User interface analysis and management |
US20180295317A1 (en) * | 2017-04-11 | 2018-10-11 | Motorola Mobility Llc | Intelligent Dynamic Ambient Scene Construction |
KR102390951B1 (ko) * | 2020-06-09 | 2022-04-26 | 주식회사 크리에이티브마인드 | 영상기반 음악작곡방법 및 그 장치 |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6491188A (en) * | 1987-10-02 | 1989-04-10 | Yamaha Corp | Performance tempo controller |
JPH04174696A (ja) * | 1990-11-08 | 1992-06-22 | Yamaha Corp | 演奏環境に対応する電子楽器 |
JPH06102877A (ja) * | 1992-09-21 | 1994-04-15 | Sony Corp | 音響構成装置 |
JPH08314462A (ja) * | 1996-04-22 | 1996-11-29 | Kawai Musical Instr Mfg Co Ltd | 電子楽器 |
JPH1026978A (ja) * | 1996-07-10 | 1998-01-27 | Yoshihiko Sano | 楽音自動発生装置 |
JPH11175061A (ja) * | 1997-12-09 | 1999-07-02 | Yamaha Corp | 制御装置およびカラオケ装置 |
JP2000276139A (ja) * | 1999-03-23 | 2000-10-06 | Yamaha Corp | 楽音生成方法および電子機器の制御方法 |
JP2001083969A (ja) * | 1999-09-17 | 2001-03-30 | Yamaha Corp | 再生制御装置と媒体 |
JP2002311949A (ja) * | 2001-04-12 | 2002-10-25 | Mitsubishi Electric Corp | 楽音制御装置および楽音制御方法 |
JP3098423U (ja) * | 2003-06-09 | 2004-03-04 | 新世代株式会社 | 自動演奏装置及び自動演奏システム |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2537755A1 (fr) * | 1982-12-10 | 1984-06-15 | Aubin Sylvain | Dispositif de creation sonore |
AU571674B2 (en) * | 1984-03-06 | 1988-04-21 | Practel Pty Ltd | Vision/reaction system |
US5159140A (en) * | 1987-09-11 | 1992-10-27 | Yamaha Corporation | Acoustic control apparatus for controlling musical tones based upon visual images |
JP2629740B2 (ja) | 1987-10-02 | 1997-07-16 | ヤマハ株式会社 | 音響処理装置 |
JPH086549A (ja) * | 1994-06-17 | 1996-01-12 | Hitachi Ltd | 旋律合成方法 |
US5689078A (en) * | 1995-06-30 | 1997-11-18 | Hologramaphone Research, Inc. | Music generating system and method utilizing control of music based upon displayed color |
JP4174696B2 (ja) | 1998-11-11 | 2008-11-05 | ソニー株式会社 | 記録装置および方法、並びに記録媒体 |
JP3637802B2 (ja) | 1999-03-23 | 2005-04-13 | ヤマハ株式会社 | 楽音制御装置 |
US7655856B2 (en) * | 2004-06-09 | 2010-02-02 | Toyota Motor Kyushu Inc. | Musical sounding producing apparatus, musical sound producing method, musical sound producing program, and recording medium |
US7606375B2 (en) * | 2004-10-12 | 2009-10-20 | Microsoft Corporation | Method and system for automatically generating world environmental reverberation from game geometry |
US7525034B2 (en) * | 2004-12-17 | 2009-04-28 | Nease Joseph L | Method and apparatus for image interpretation into sound |
-
2004
- 2004-06-09 US US11/629,235 patent/US7655856B2/en not_active Expired - Fee Related
- 2004-06-09 EP EP04745712.2A patent/EP1760689B1/en not_active Not-in-force
- 2004-06-09 WO PCT/JP2004/008037 patent/WO2005122134A1/ja not_active Application Discontinuation
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6491188A (en) * | 1987-10-02 | 1989-04-10 | Yamaha Corp | Performance tempo controller |
JPH04174696A (ja) * | 1990-11-08 | 1992-06-22 | Yamaha Corp | 演奏環境に対応する電子楽器 |
JPH06102877A (ja) * | 1992-09-21 | 1994-04-15 | Sony Corp | 音響構成装置 |
JPH08314462A (ja) * | 1996-04-22 | 1996-11-29 | Kawai Musical Instr Mfg Co Ltd | 電子楽器 |
JPH1026978A (ja) * | 1996-07-10 | 1998-01-27 | Yoshihiko Sano | 楽音自動発生装置 |
JPH11175061A (ja) * | 1997-12-09 | 1999-07-02 | Yamaha Corp | 制御装置およびカラオケ装置 |
JP2000276139A (ja) * | 1999-03-23 | 2000-10-06 | Yamaha Corp | 楽音生成方法および電子機器の制御方法 |
JP2001083969A (ja) * | 1999-09-17 | 2001-03-30 | Yamaha Corp | 再生制御装置と媒体 |
JP2002311949A (ja) * | 2001-04-12 | 2002-10-25 | Mitsubishi Electric Corp | 楽音制御装置および楽音制御方法 |
JP3098423U (ja) * | 2003-06-09 | 2004-03-04 | 新世代株式会社 | 自動演奏装置及び自動演奏システム |
Also Published As
Publication number | Publication date |
---|---|
EP1760689B1 (en) | 2016-03-09 |
US7655856B2 (en) | 2010-02-02 |
EP1760689A1 (en) | 2007-03-07 |
US20080289482A1 (en) | 2008-11-27 |
EP1760689A4 (en) | 2010-07-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0306602B1 (en) | Self-controlled vision system | |
US5005459A (en) | Musical tone visualizing apparatus which displays an image of an animated object in accordance with a musical performance | |
US6225545B1 (en) | Musical image display apparatus and method storage medium therefor | |
US10482856B2 (en) | Automatic performance system, automatic performance method, and sign action learning method | |
EP1020843A1 (en) | Automatic musical composition method | |
JP2012194525A (ja) | 発音制御装置、特定装置、発音制御システム、プログラムおよび発音制御方法 | |
JP2005025778A (ja) | 画像処理方法および装置 | |
WO2005122134A1 (ja) | 楽音生成装置、楽音生成方法、楽音生成プログラムおよび記憶媒体 | |
JP3452783B2 (ja) | 振り付け採点機能を有するカラオケ装置 | |
US20200365123A1 (en) | Information processing method | |
JP2020046500A (ja) | 情報処理装置、情報処理方法および情報処理プログラム | |
US7297860B2 (en) | System and method for determining genre of audio | |
JP3077192B2 (ja) | 演奏環境に対応する電子楽器 | |
JP3643829B2 (ja) | 楽音生成装置、楽音生成プログラムおよび楽音生成方法 | |
US20220189200A1 (en) | Information processing system and information processing method | |
JPH06301475A (ja) | 位置検出装置 | |
JP2993867B2 (ja) | 観客情報から多様な対応をするロボットシステム | |
JP4765705B2 (ja) | 楽音制御装置 | |
JP2010194232A (ja) | 舞台装置 | |
Bering et al. | Virtual Drum Simulator Using Computer Vision | |
JP2005321514A (ja) | ゲーム装置および音程付効果音生成プログラムならびに方法 | |
JP4257300B2 (ja) | カラオケ端末装置 | |
CN116740234A (zh) | 信息处理方法、信息处理装置、演奏数据显示系统以及程序 | |
WO2023037956A1 (ja) | 演奏収録方法、演奏収録システムおよびプログラム | |
JP2008165098A (ja) | 電子楽器 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2004745712 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11629235 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 2004745712 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: JP |