WO2005125210A1 - Methods and apparatus for motion capture - Google Patents


Info

Publication number
WO2005125210A1
WO2005125210A1 (PCT/US2005/020965)
Authority
WO
WIPO (PCT)
Prior art keywords
data
marker
motion capture
computer system
image data
Prior art date
Application number
PCT/US2005/020965
Other languages
French (fr)
Inventor
Prem Kuchi
Raghu Ram Hiremagalur
Sethuraman Panchanathan
Original Assignee
Arizona Board Of Regents For And On Behalf Of Arizona State University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Arizona Board Of Regents For And On Behalf Of Arizona State University filed Critical Arizona Board Of Regents For And On Behalf Of Arizona State University
Publication of WO2005125210A1 publication Critical patent/WO2005125210A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • G06V40/25Recognition of walking or running movements, e.g. gait recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/245Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning

Definitions

  • Motion capture is the process by which the movement information of various objects is quantized to be stored and/or processed.
  • the advancement of motion capture technologies has enabled applications in a wide range of fields, including medical rehabilitation sciences, sports sciences, gaming and animation, entertainment, animal research, and industrial usability and product development.
  • Figure 1 is a block diagram of a motion capture system according to various aspects of the present invention.
  • Figure 2 is a flow diagram of an exemplary motion capture process.
  • Figure 3 is an illustration of a subject in an exemplary initial pose.
  • Figure 4 is a flow diagram of an image data analysis process.
  • Figure 5 is a diagram of a motion capture implementation. Elements and steps in the figures are illustrated for simplicity and clarity and have not necessarily been rendered according to any particular sequence.
  • a motion capture system 100 comprises one or more cameras 110 and a computer system 112.
  • the cameras 110 record images and transfer corresponding image information to the computer system 112.
  • the computer system 112 analyzes the image information and generates motion capture data.
  • the motion capture process may be performed for any suitable purpose, for example rehabilitative medicine, performance enhancement, motion research, animation, security, or industrial processes.
  • the motion capture system uses standard, commercially available, relatively low-cost equipment, such as standard consumer video cameras and a conventional personal computer.
  • the cameras 110 may comprise any suitable systems for generating data corresponding to images.
  • the camera 110 may comprise a conventional video camera using a charge-coupled device (CCD) that generates image information, such as analog or digital signals, corresponding to the viewed environment.
  • CCD charge-coupled device
  • the camera 110 suitably responds to visual light, though the camera 110 may also or alternatively generate signals in response to other spectral elements, such as infrared, ultraviolet, x-ray, or polarized light.
  • the motion capture system 100 comprises one or multiple conventional color video cameras.
  • the cameras may comprise high-speed cameras for capturing quick motion.
  • the cameras 110 provide the generated information to the computer system 112.
  • the computer system 112 may receive the image information from another source, such as a storage medium or other source of image data.
  • the computer system 112 analyzes the data to generate motion capture data, such as data for use in the creation of a two- or three-dimensional representation of a live performance.
  • the computer system 112 suitably generates, in real-time, following a delay, or using pre-recorded information, motion capture data comprising, for example, a recording of body movement (or other movement) for immediate or delayed analysis and playback.
  • the motion capture data may be used for any suitable purpose, such as to map human motion onto a computer-generated character, for example to replicate human arm motion for a character's arm motion, or human hand and finger patterns controlling a character's skin color or emotional state.
  • the computer system 112 may generate the motion capture data according to any appropriate process and/or algorithm.
  • the computer system 112 may be configured to establish selected points associated with one or more targets contained in the image data. As the image data is collected, the computer system 112 tracks the movement of the selected points. In addition, the computer system 112 correlates the movement of the selected points as tracked by different cameras 110 to establish two- or three-dimensional tracks for the selected points.
  • the computer system 112 may also perform post-processing on the image data to prepare the data for playback.
  • the computer system executes a motion capture program.
  • the motion capture program suitably comprises a plurality of threads operating on the computer system 112 relating to different tasks, such as a capture thread 512 , a process thread 514, and a display thread 516.
  • the capture thread 512 captures image information as it is received from the source
  • the process thread 514 processes the captured data to generate the motion capture data
  • the display thread 516 provides the information for display.
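The three-thread arrangement described above can be sketched as a small producer-consumer pipeline. This is an illustrative sketch only, not the patent's implementation: the function and variable names (`run_pipeline`, `global_buffer`, `processed_buffer`) are hypothetical, and Python's standard `queue` and `threading` modules stand in for whatever buffering the actual system uses.

```python
import queue
import threading

def run_pipeline(frames, process_fn, results):
    """Minimal capture -> process -> display pipeline; the two queues
    stand in for the global buffer 518 and processed buffer 520."""
    global_buffer = queue.Queue()     # filled by the capture thread
    processed_buffer = queue.Queue()  # filled by the process thread
    SENTINEL = None                   # marks the end of the stream

    def capture_thread():
        for frame in frames:          # e.g. frames read from a camera
            global_buffer.put(frame)
        global_buffer.put(SENTINEL)

    def process_thread():
        while True:
            frame = global_buffer.get()
            if frame is SENTINEL:
                processed_buffer.put(SENTINEL)
                break
            processed_buffer.put(process_fn(frame))

    def display_thread():
        while True:
            item = processed_buffer.get()
            if item is SENTINEL:
                break
            results.append(item)      # stand-in for rendering/display

    threads = [threading.Thread(target=t)
               for t in (capture_thread, process_thread, display_thread)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

The queues decouple the threads so that capture can proceed at the camera frame rate even if processing momentarily lags.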
  • a motion capture system 100 initially prepares a subject 114, such as a person, animal, or other moving entity or item, for acquiring image data (210).
  • the subject 114 may be prepared in any suitable manner (212).
  • the markers may be attached to relevant places on the subject 114.
  • the subject 114 environment may be initially situated to facilitate generation of the motion capture data.
  • the lighting for the environment may be adjusted to provide proper contrast for generating the image data and/or the subject 114 placed before a suitable background.
  • the subject 114 is placed in an initial pose that may be used for initially identifying the selected points on the subject 114.
  • the subject 114 may be positioned so that at least one leg and one arm are bent to more clearly define the relative locations of the shoulder 1, hip 2, knee 3, ankle 4, and toe 5.
  • the motion capture system 100 is initialized by activating at least one camera 110 and observing a target.
  • the computer system 112 also loads the motion capture program from a medium, such as a hard drive or other storage medium.
  • the computer system 112 may perform any appropriate steps to prepare the computer system 112 and/or the remaining elements of the motion capture system 100. For example, the computer system 112 may check the contrast level in the signals received from the cameras 110. If the contrast is inadequate, the computer system 112 may notify the operator to correct the condition, such as requiring additional light, less light, different colored markers, or the like.
  • the capture thread 512 captures the image information from the cameras, data files, or other source of image information.
  • the computer system 112 may use any suitable technology to capture the image information, such as conventional DirectX functionalities.
  • the capture thread 512 may transfer the data to a memory, such as a global buffer 518 ( Figure 5).
  • the computer system 112 may identify the markers or other selected points to be tracked in the image data.
  • the computer system 112 may acquire the selected point in any suitable manner, and may operate with any suitable selected points.
  • the motion capture system 100 may track unilateral markers, bilateral markers, or other configurations.
  • the motion capture system 100 may use physical markers, virtual markers, anatomical landmarks, or other appropriate points.
  • the process thread 514 of the motion capture system 100 tracks the movement of physical markers, such as visible or otherwise optically responsive markers attached to selected points on the subject 114.
  • the markers may comprise colored markers attached to the subject 114, such as colored paper or plastic discs or rectangles attached via an adhesive, like conventional Post It® notes.
  • the computer system 112 may detect the marker in any suitable manner, such as using color filtering.
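Colour filtering of the kind described can be sketched as a per-channel threshold followed by a centroid computation. This is a minimal stand-in, assuming frames represented as nested lists of RGB tuples; the function names are hypothetical and real systems would typically operate on camera image buffers.

```python
def find_marker_pixels(frame, target_rgb, tol=30):
    """Return (row, col) positions whose colour is within `tol` of the
    marker colour on every channel - a simple colour filter."""
    hits = []
    for r, row in enumerate(frame):
        for c, (R, G, B) in enumerate(row):
            if (abs(R - target_rgb[0]) <= tol and
                    abs(G - target_rgb[1]) <= tol and
                    abs(B - target_rgb[2]) <= tol):
                hits.append((r, c))
    return hits

def marker_centroid(hits):
    """Centroid of the detected pixels, used as the marker position."""
    n = len(hits)
    return (sum(r for r, _ in hits) / n, sum(c for _, c in hits) / n)
```
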
  • the various body segments of the subject may then be identified, for example using anthropomorphic information.
  • the markers comprise virtual markers, such as markers that are designated by the user or automatically selected by the computer system 112.
  • the computer system 112 may display a frame of an image to the user. The user may then select one or more points in the image for tracking, such as by selecting the points using a tracking device.
  • the user may "point-and-click" to designate the selected virtual markers as desired locations, such as the hip, knee, ankle, toe, or the like.
  • Color and shape models may then be built around the designated virtual markers, and the body segments may be identified, such as by using anthropomorphic information.
  • the computer system 112 may automatically identify and select virtual markers for tracking.
  • the computer system 112 may use trained and pre-stored shape models, anthropomorphic approximations, and/or a pattern recognition process to identify appropriate points for the hip, knee, ankle, toe, or the like.
  • Color models and shape models may then be generated around the identified virtual markers.
  • the locations of the virtual markers may also be refined based on the image data as the motion information is received.
  • the computer system 112 may also associate designations with each marker.
  • the designations may comprise any suitable designations, such as descriptions of the anatomical area associated with the marker.
  • the various markers may be designated, either automatically or manually, as LEFT HIP, RIGHT KNEE, HEAD, or the like.
  • the designations may also be associated with other identifying information, such as the name of the subject 114, the time of the motion capture session, and other relevant information.
  • the markers are designated using preselected designations.
  • the preselected designations are associated with various rules and/or characteristics.
  • the process thread 514 manages various marker data.
  • the process thread 514 receives the captured image information and returns the tracked point.
  • the process thread 514 suitably iteratively processes the marker data and computes the relevant markers' positions in the current frame.
  • the marker data may comprise any suitable information associated with one or more markers.
  • marker data may be modeled as a dynamic graph with each marker as a vertex.
  • Marker data may comprise: i) Markers: individual positions of the markers; ii) Connectors: joining two markers to generate a line; and/or iii) Joints: joining of two lines to generate a joint.
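The marker/connector/joint structure described above could be represented as simple record types, with each marker a vertex of the dynamic graph. This is only one possible encoding, assuming Python dataclasses; the class and field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Marker:
    name: str           # e.g. "LEFT HIP"
    x: float
    y: float

@dataclass
class Connector:
    a: Marker           # a connector joins two markers to form a line
    b: Marker

@dataclass
class Joint:
    first: Connector    # a joint is formed where two lines meet
    second: Connector

@dataclass
class FrameData:
    """Per-frame marker structure, as the text suggests."""
    frame_no: int
    markers: list = field(default_factory=list)
    connectors: list = field(default_factory=list)
    joints: list = field(default_factory=list)
```

A per-frame `FrameData` record could then be serialized to secondary storage as the session file the text describes.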
  • each of the markers, lines, and joints are suitably selected and connected before the subject begins moving.
  • the markers, lines, and joints may be added at any time and can be deleted at any time during the duration of the motion capture data.
  • the marker data may be modeled as a structure that is associated per frame.
  • the marker data for each frame is suitably written to a secondary storage as a file of the current session. Any future reference to this session may display the frames and the marker data available in the corresponding file.
  • the data present in the secondary storage can be interfaced or exported with other applications or otherwise utilized.
  • the computer system 112 may also analyze the marker data and associate the data with known models to facilitate generation and analysis of the motion capture data. For example, the computer system 112 may use the marker data to estimate the height of the subject 114.
  • the marker data may also assist in the selection of other model information that may be applied to the subject 114, such as a predicted gait cycle, approximate body structure, and the like.
  • the process thread 514 may proceed with receiving and analyzing successive frames of image data from the capture thread 512 (214). The successive frames are suitably synchronized so that the data from the various cameras 110 or other sources relate to substantially identical times.
  • the computer system 112 identifies the marker locations in the image data and accordingly generates data for the marker locations in two- or three-dimensional space (216).
  • the computer system 112 may employ any appropriate algorithms or processes to identify the locations of the markers and determine their locations.
  • the process thread 514 may predict the location of the marker in a particular frame, search the frame for the marker, identify the marker location, and refine the resulting motion capture data.
  • the process thread 514 may also perform any other desired computations relating to the image data and/or motion capture, such as calculating joint angle trajectories, stresses on joints, accelerations and velocities of body parts, or other relevant data.
  • the process thread 514 may receive the image data for successive frames of image data (410).
  • the frames may be received at any appropriate rate, such as at the cameras' 110 frame rate for real-time processing.
  • the computer system 112 may perform any suitable analysis, including filtering, correlation, assimilation, adaptation, and the like.
  • the computer system 112 may apply particle filters, Kalman filters, non-Gaussian and Gaussian filters, adaptive forecasting algorithms, or other statistical signal processing techniques to the image data.
  • the present embodiment applies multiple analyses to the image data, including particle filtering and mean shift tracking, to track the movement of the markers.
  • the computer system 112 may initially apply a prediction model using image data or motion capture data from one or more preceding frames to select a likely area for searching for the marker in the current frame (412).
  • the computer system 112 may apply a conventional prediction model using positions, velocities, accelerations, and/or angles from preceding frames to predict the current location of each marker.
  • the prediction model may generate any suitable prediction data, such as an area or set of pixels most likely to currently include the marker.
  • the prediction model establishes a selected number of locations for further analysis, such as 10 to 1000 locations, for example 75-150 potential locations. A greater number of locations may increase the likelihood of accurately finding the location of the marker, but may also contribute complexity and duration to the processing.
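A conventional prediction model of the kind mentioned can be sketched as a constant-velocity extrapolation plus a candidate neighbourhood around the predicted point. This is an assumed, simplified form (the patent does not specify the model); the function names are hypothetical.

```python
import itertools

def predict(prev, curr):
    """Constant-velocity prediction of the next marker position from
    the two preceding frame positions."""
    return (2 * curr[0] - prev[0], 2 * curr[1] - prev[1])

def candidate_locations(center, radius=2):
    """Candidate pixel locations around the prediction to search; a
    (2*radius+1)^2 neighbourhood here, though the text suggests on the
    order of 75-150 candidates in practice."""
    cx, cy = center
    return [(cx + dx, cy + dy)
            for dx, dy in itertools.product(range(-radius, radius + 1),
                                            repeat=2)]
```
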
  • the image data, such as the areas identified by the prediction model may be analyzed to identify the current location of the marker.
  • the present computer system 112 may perform particle filtering (variously known as condensation filtering, sequential Monte Carlo methods, or stochastic filtering methods) on the areas identified by the prediction model (414).
  • particle filtering calculates probabilities for the marker being located in each candidate area. Particle filtering tends to minimize error.
  • the particle filtering may process any suitable characteristics in the image data, such as color, texture, or shape of the target.
  • the particle filter analyzes the relevant areas based on color to determine the various probabilities for the location of the marker within the analyzed areas.
  • the particle filtering is suitably constrained or bounded by various rules, such as anatomic distances between markers.
  • the particle filtering suitably generates a set of candidate particles most likely to include a particular marker.
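The measurement step of a colour-based particle filter can be sketched as weighting each candidate particle by a Gaussian colour likelihood and taking the weighted mean as the estimate. This is a minimal stand-in for one update of a particle filter, with hypothetical names and an assumed Gaussian colour model; a full filter would also resample and propagate the particles.

```python
import math

def particle_weights(particles, frame, target_rgb, sigma=20.0):
    """Weight each candidate particle (row, col) by a Gaussian colour
    likelihood against the marker colour, then normalise."""
    weights = []
    for (r, c) in particles:
        R, G, B = frame[r][c]
        d2 = ((R - target_rgb[0]) ** 2 +
              (G - target_rgb[1]) ** 2 +
              (B - target_rgb[2]) ** 2)
        weights.append(math.exp(-d2 / (2 * sigma ** 2)))
    total = sum(weights)
    return [w / total for w in weights]

def estimate(particles, weights):
    """Weighted mean of the particles as the marker position estimate."""
    r = sum(p[0] * w for p, w in zip(particles, weights))
    c = sum(p[1] * w for p, w in zip(particles, weights))
    return (r, c)
```

Anatomic constraints such as inter-marker distances, mentioned in the text, would be applied by zeroing the weights of particles that violate them.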
  • the computer system 112 also performs mean shift tracking on the image data (416).
  • the computer system 112 of the present embodiment performs a standard mean shift analysis on the candidate particles identified by the particle filter.
  • Mean shift tracking performs global optimization for one frame to estimate the location of the marker in a subsequent frame.
  • Mean shift tracking tends to provide smoother tracking of the marker movement than particle filtering alone, and more effectively tolerates variations in position and features, such as light variation or shadows.
  • the mean shift tracking generates an expected area that is designated as the marker location.
  • the combination of particle filtering and mean shift tracking tends to statistically minimize error and maintain stability in the marker tracking.
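A mean shift step over the weighted candidate particles can be sketched as repeatedly moving a window centre to the weighted mean of the points inside it. This is an illustrative flat-kernel version with hypothetical names, not the patent's specific formulation.

```python
def mean_shift(points, weights, start, radius=3.0, iters=10):
    """Flat-kernel mean shift: move the window centre to the weighted
    mean of the points inside the window until it stops moving."""
    cx, cy = start
    for _ in range(iters):
        inside = [(p, w) for p, w in zip(points, weights)
                  if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= radius ** 2]
        total = sum(w for _, w in inside)
        if total == 0:
            break                      # no support inside the window
        nx = sum(p[0] * w for p, w in inside) / total
        ny = sum(p[1] * w for p, w in inside) / total
        if abs(nx - cx) < 1e-6 and abs(ny - cy) < 1e-6:
            break                      # converged
        cx, cy = nx, ny
    return (cx, cy)
```

Seeding the window at the particle-filter estimate and letting mean shift converge is one way the two techniques can be combined, which tends to smooth the frame-to-frame track.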
  • the resulting data may be further processed to correct, clean, or smooth the data.
  • any suitable processes or techniques may be applied to the data for any desired purpose, for example to more accurately identify the location of the marker, smooth the data, or improve the marker tracking process.
  • the present motion capture system 100 performs one or more different processes to correct and clean the motion capture data (418).
  • the computer system 112 allows occlusion management and correction of data loss, for example due to occlusion of markers or other failure to detect the marker.
  • the data may be corrected in any suitable manner. For example, an operator may review any frame that is missing data or has incorrect data and manually adjust the data to insert the correct marker location.
  • the computer system 112 may automatically correct the data, such as by interpolating between prior and subsequent data.
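Interpolation between prior and subsequent data, as just described, can be sketched as a linear fill over a bounded gap in a marker track. The representation (a list of (x, y) tuples with `None` for occluded frames) and the function name are assumptions for illustration.

```python
def interpolate_gap(track):
    """Fill None entries in a marker track by linear interpolation
    between the nearest known positions before and after the gap."""
    out = list(track)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                       # find the end of the gap
            if 0 < i and j < len(out):       # gap bounded on both sides
                (x0, y0), (x1, y1) = out[i - 1], out[j]
                span = j - i + 1
                for k in range(i, j):
                    t = (k - i + 1) / span
                    out[k] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
            i = j
        else:
            i += 1
    return out
```
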
  • the computer system 112 may perform additional searching for missing data, for example using different search techniques and/or supplemental data. For example, if a left wrist marker location is missing but the location of the left elbow marker is known, then the computer system 112 may search a substantially spherical area having a radius corresponding to the length of the subject's forearm.
  • the computer system 112 is also suitably configured to perform region growing to reduce error. In the present embodiment, if the full area of the marker is not detected, the computer system 112 may apply region growing to the detected marker to encompass the entire marker. Region growing may be performed using any suitable characteristics of the marker, such as the shape, texture, and/or color of the marker, and may use any suitable region growing techniques, such as conventional region growing processes.
  • the computer system 112 may then calculate the centroid of the grown region, and the centroid is then designated as the marker.
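The region growing and centroid steps can be sketched as a 4-connected flood fill over a mask of marker-coloured pixels. This is one conventional region-growing formulation, assumed for illustration; the patent leaves the specific technique open.

```python
from collections import deque

def grow_region(mask, seed):
    """4-connected region growing from a seed over a boolean mask of
    marker-coloured pixels."""
    rows, cols = len(mask), len(mask[0])
    region, frontier = set(), deque([seed])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if not mask[r][c]:
            continue                   # pixel does not match the marker
        region.add((r, c))
        frontier.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

def centroid(region):
    """Centroid of the grown region, designated as the marker location."""
    n = len(region)
    return (sum(r for r, _ in region) / n, sum(c for _, c in region) / n)
```
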
  • the computer system 112 may refine the marker locations according to known information.
  • the computer system 112 may refine the shape of the markers according to shape-based parameters, motion information, and/or anthropomorphic information.
  • the computer system 112 may also perform data smoothing on the motion capture data, for example to minimize errors and correct locations of markers.
  • the data smoothing may be applied to any data at any suitable point in the process.
  • the computer system 112 may perform Monte Carlo Markov Chain (MCMC) smoothing on the data generated by the particle filter to smooth transitions between frames and minimize the number of particles.
  • MCMC Monte Carlo Markov Chain
  • the computer system 112 may also apply a calibration algorithm to the data to compensate for three- dimensional movement of the subject 114. If the subject 114 moves other than in a plane pe ⁇ endicular to the camera 110, the calibration algorithm may adjust the data to compensate accordingly.
  • the calibration algorithm may comprise any suitable calibration algorithm, such as a conventional calibration algorithm.
  • the computer system 112 may also perform any other desired calculations on the motion capture data, such as calculation of different joint angles for each frame, joint angle trajectories, and timing differences between different parts of the body (420).
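A per-frame joint angle of the kind mentioned can be computed from three marker positions with the standard dot-product formula. This is a generic 2-D sketch (the names are illustrative), not the patent's specific calculation.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by markers a-b-c, e.g. the
    knee angle from the hip, knee, and ankle markers."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
    return math.degrees(math.acos(cos))
```

Evaluating this per frame yields the joint angle trajectory; differencing trajectories of different joints gives the timing differences the text mentions.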
  • the computer system 112 may store any appropriate data for later use, such as the original video record, the motion capture data, and the calculated information, at any appropriate location, such as in a processed buffer 520 ( Figure 5). All or parts of the process may be repeated until the last frame of the original image data is processed and the motion capture data is complete.
  • the computer system 112 may also perform additional output processing to enhance the usefulness of the motion capture data or for any other suitable purpose.
  • the output processing may comprise any suitable processing of the image capture data for later use.
  • the display thread 516 may organize the records, including any appropriate data such as the original image data, the motion capture data, and other information, according to each subject's name and a session identifier.
  • the display thread 516 may also generate additional information for analysis, such as graphs showing time and amplitude of various motions, comparisons to previous motion capture sessions, overlays showing the motion capture information overlaid on the original image data, and the like.
  • the display thread 516 may also facilitate entry of a report, for example via a word processing program, so that the report may be stored and associated with the subject 114, session, or other information and data.
  • the display thread 516 may also include export functions for transmitting the records or storing them on a medium, such as a DVD.

Abstract

A motion capture system (100) according to various aspects of the present invention includes one or more data sources, such as conventional video cameras (110), and a computer system (112). The computer system (112) analyzes data from the data sources to generate motion capture data. The computer system (112) uses various algorithms, such as particle filtering and mean shift tracking, to generate the motion capture data.

Description

UTILITY PATENT APPLICATION METHODS AND APPARATUS FOR MOTION CAPTURE INVENTORS: Prem Kuchi, Raghu Ram Hiremagalur, and Sethuraman Panchanathan
CROSS-REFERENCES TO RELATED APPLICATIONS This application claims the benefit of U.S. Provisional Patent Application Serial No. 60/579,962, filed June 14, 2004, entitled Markerless Motion Capture System using Conventional Visual Range Video Cameras, and incorporates the disclosure of the application by reference.
BACKGROUND OF THE INVENTION A great deal may be learned through the careful observation and study of bodies in motion. When applied to human motion, the concept of motion analysis enables an evaluation of how humans interact with the environment and how human bodies respond under certain circumstances or as a result of specific activity. Detailed study of human motion also facilitates better emulation and modeling of human motion. Motion capture is the process by which the movement information of various objects is quantized to be stored and/or processed. The advancement of motion capture technologies has enabled applications in a wide range of fields, including medical rehabilitation sciences, sports sciences, gaming and animation, entertainment, animal research, and industrial usability and product development. BRIEF DESCRIPTION OF THE DRAWING FIGURES A more complete understanding of the present invention may be derived by referring to the detailed description when considered in connection with the following illustrative figures. In the following figures, like reference numbers refer to similar elements and steps. Figure 1 is a block diagram of a motion capture system according to various aspects of the present invention. Figure 2 is a flow diagram of an exemplary motion capture process. Figure 3 is an illustration of a subject in an exemplary initial pose. Figure 4 is a flow diagram of an image data analysis process. Figure 5 is a diagram of a motion capture implementation. Elements and steps in the figures are illustrated for simplicity and clarity and have not necessarily been rendered according to any particular sequence. For example, steps that may be performed concurrently or in different order are illustrated in the figures to help to improve understanding of embodiments of the present invention. DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS The present invention is described partly in terms of functional components and various processing steps. 
Such functional components may be realized by any number of components configured to perform the specified functions and achieve the various results. For example, the present invention may employ various elements, materials, signal sources, signal types, integrated components, recording devices, image data sources, processing components, filters, and the like, which may carry out a variety of functions. In addition, although the invention is described in the motion capture environment, the present invention may be practiced in conjunction with any number of applications, environments, data processing, image and motion analysis, therapeutic and diagnostic systems, and entertainment systems, and the systems described are merely exemplary applications for the invention. Further, the present invention may employ any number of techniques for manufacturing, assembling, testing, and the like. Referring now to Figure 1, a motion capture system 100 according to various aspects of the present invention comprises one or more cameras 110 and a computer system 112. The cameras 110 record images and transfer corresponding image information to the computer system 112. The computer system 112 analyzes the image information and generates motion capture data. The motion capture process may be performed for any suitable purpose, for example rehabilitative medicine, performance enhancement, motion research, animation, security, or industrial processes. In one embodiment, the motion capture system uses standard, commercially available, relatively low-cost equipment, such as standard consumer video cameras and a conventional personal computer. The cameras 110 may comprise any suitable systems for generating data corresponding to images. For example, the camera 110 may comprise a conventional video camera using a charge-coupled device (CCD) that generates image information, such as analog or digital signals, corresponding to the viewed environment. 
The camera 110 suitably responds to visual light, though the camera 110 may also or alternatively generate signals in response to other spectral elements, such as infrared, ultraviolet, x-ray, or polarized light. In one embodiment, the motion capture system 100 comprises one or multiple conventional color video cameras. Alternatively, the cameras may comprise high-speed cameras for capturing quick motion. The cameras 110 provide the generated information to the computer system 112. Alternatively, the computer system 112 may receive the image information from another source, such as a storage medium or other source of image data. The computer system 112 analyzes the data to generate motion capture data, such as data for use in the creation of a two- or three-dimensional representation of a live performance. The computer system 112 suitably generates, in real-time, following a delay, or using pre-recorded information, motion capture data comprising, for example, a recording of body movement (or other movement) for immediate or delayed analysis and playback. The motion capture data may be used for any suitable purpose, such as to map human motion onto a computer-generated character, for example to replicate human arm motion for a character's arm motion, or human hand and finger patterns controlling a character's skin color or emotional state. The computer system 112 may generate the motion capture data according to any appropriate process and/or algorithm. For example, the computer system 112 may be configured to establish selected points associated with one or more targets contained in the image data. As the image data is collected, the computer system 112 tracks the movement of the selected points. In addition, the computer system 112 correlates the movement of the selected points as tracked by different cameras 110 to establish two- or three-dimensional tracks for the selected points. 
The computer system 112 may also perform post-processing on the image data to prepare the data for playback. In the present embodiment, the computer system executes a motion capture program. For example, referring to Figure 5, the motion capture program suitably comprises a plurality of threads operating on the computer system 112 relating to different tasks, such as a capture thread 512 , a process thread 514, and a display thread 516. Generally, the capture thread 512 captures image information as it is received from the source, the process thread 514 processes the captured data to generate the motion capture data, and the display thread 516 provides the information for display. Referring to Figure 2, a motion capture system 100 according to various aspects of the present invention initially prepares a subject 114, such as a person, animal, or other moving entity or item, for acquiring image data (210). The subject 114 may be prepared in any suitable manner (212). For example, if the motion capture system 100 uses physical markers, the markers may be attached to relevant places on the subject 114. In addition, the subject 114 environment may be initially situated to facilitate generation of the motion capture data. For example, the lighting for the environment may be adjusted to provide proper contrast for generating the image data and/or the subject 114 placed before a suitable background. In the present embodiment, the subject 114 is placed in an initial pose that may be used for initially identifying the selected points on the subject 114. For example, referring to Figure 3, the subject 114 may be positioned so that at least one leg and one arm are bent to more clearly define the relative locations of the shoulder 1, hip 2, knee 3, ankle 4, and toe 5. In the present embodiment, the motion capture system 100 is initialized by activating at least one camera 110 and observing a target. 
The computer system 112 also loads the motion capture program from a medium, such as a hard drive or other storage medium. The computer system 112 may perform any appropriate steps to prepare the computer system 112 and/or the remaining elements of the motion capture system 100. For example, the computer system 112 may check the contrast level in the signals received from the cameras 110. If the contrast is inadequate, the computer system 112 may notify the operator to correct the condition, such as requiring additional light, less light, different colored markers, or the like. When the motion capture system 100 is ready, the capture thread 512 captures the image information from the cameras, data files, or other source of image information. The computer system 112 may use any suitable technology to capture the image information, such as conventional DirectX functionalities. As each frame of data is collected, the capture thread 512 may transfer the data to a memory, such as a global buffer 518 (Figure 5). The computer system 112 may identify the markers or other selected points to be tracked in the image data. The computer system 112 may acquire the selected points in any suitable manner, and may operate with any suitable selected points. For example, the motion capture system 100 may track unilateral markers, bilateral markers, or other configurations. The motion capture system 100 may use physical markers, virtual markers, anatomical landmarks, or other appropriate points. In one embodiment, the process thread 514 of the motion capture system 100 tracks the movement of physical markers, such as visible or otherwise optically responsive markers attached to selected points on the subject 114. For example, the markers may comprise colored markers attached to the subject 114, such as colored paper or plastic discs or rectangles attached via an adhesive, like conventional Post It® notes.
The computer system 112 may detect the marker in any suitable manner, such as using color filtering. The various body segments of the subject may then be identified, for example using anthropomorphic information. In another embodiment, the markers comprise virtual markers, such as markers that are designated by the user or automatically selected by the computer system 112. For example, the computer system 112 may display a frame of an image to the user. The user may then select one or more points in the image for tracking, such as by selecting the points using a tracking device. Thus, the user may "point-and-click" to designate the selected virtual markers at desired locations, such as the hip, knee, ankle, toe, or the like. Color and shape models may then be built around the designated virtual markers, and the body segments may be identified, such as by using anthropomorphic information. Alternatively, the computer system 112 may automatically identify and select virtual markers for tracking. For example, the computer system 112 may use trained and pre-stored shape models, anthropomorphic approximations, and/or a pattern recognition process to identify appropriate points for the hip, knee, ankle, toe, or the like. Color models and shape models may then be generated around the identified virtual markers. The locations of the virtual markers may also be refined based on the image data as the motion information is received. Upon identification of the markers, the computer system 112 may also associate designations with each marker. The designations may comprise any suitable designations, such as descriptions of the anatomical area associated with the marker. For example, the various markers may be designated, either automatically or manually, as LEFT HIP, RIGHT KNEE, HEAD, or the like. The designations may also be associated with other identifying information, such as the name of the subject 114, the time of the motion capture session, and other relevant information.
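A minimal form of the color filtering mentioned above might locate a marker by thresholding a color distance and taking the centroid of the matching pixels. This is an illustrative sketch only; the function name, the nested-list image representation, and the city-block color metric are assumptions, not details from the specification.

```python
def detect_marker_by_color(image, target_color, tolerance):
    """Return the centroid of pixels close to target_color, or None.

    image: list of rows of (r, g, b) tuples -- a stand-in for a video frame.
    """
    matches = []
    tr, tg, tb = target_color
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            # Simple color filter: city-block distance in RGB space.
            if abs(r - tr) + abs(g - tg) + abs(b - tb) <= tolerance:
                matches.append((x, y))
    if not matches:
        return None
    # Centroid of the matching region approximates the marker position.
    cx = sum(x for x, _ in matches) / len(matches)
    cy = sum(y for _, y in matches) / len(matches)
    return (cx, cy)
```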
In the present embodiment, the markers are designated using preselected designations. The preselected designations are associated with various rules and/or characteristics. For example, if a marker is designated as RIGHT ELBOW, the marker may be spatially associated with a RIGHT SHOULDER marker and a RIGHT WRIST marker, such as to assist in reconstruction of the motion, generation of the motion capture data, prevention of inappropriate reconstructions (such as connecting the RIGHT KNEE marker to the LEFT ANKLE marker), and the like. In the present exemplary embodiment, the process thread 514 manages various marker data. The process thread 514 receives the captured image information and returns the tracked point. The process thread 514 suitably iteratively processes the marker data and computes the relevant markers' positions in the current frame. The marker data may comprise any suitable information associated with one or more markers. For example, marker data may be modeled as a dynamic graph with each marker as a vertex. Marker data may comprise: i) Markers: individual positions of the markers; ii) Connectors: join two markers to generate a line; and/or iii) Joints: join two lines to generate a joint. In real-time applications, each of the markers, lines, and joints is suitably selected and connected before the subject begins moving. In non-real time applications, the markers, lines, and joints may be added or deleted at any time during the duration of the motion capture data. The marker data may be modeled as a structure that is associated with each frame. The marker data for each frame is suitably written to a secondary storage as a file of the current session. Any future reference to this session may display the frames and the marker data available in the corresponding file. The data present in the secondary storage can be interfaced or exported with other applications or otherwise utilized.
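The dynamic-graph model of markers, connectors, and joints described above might be represented as follows. The class and method names are hypothetical; a per-frame instance of such a structure could then be serialized to secondary storage as described.

```python
class MarkerGraph:
    """Sketch of the per-frame marker data: markers as vertices,
    connectors as lines between markers, and joints where two lines meet."""

    def __init__(self):
        self.markers = {}      # designation -> (x, y) position
        self.connectors = []   # (marker_a, marker_b) pairs forming lines
        self.joints = []       # (connector_i, connector_j) index pairs

    def add_marker(self, name, position):
        self.markers[name] = position

    def connect(self, a, b):
        # A connector joins two markers to generate a line.
        if a not in self.markers or b not in self.markers:
            raise ValueError("both markers must exist before connecting")
        self.connectors.append((a, b))
        return len(self.connectors) - 1  # connector index

    def add_joint(self, connector_i, connector_j):
        # A joint is formed where two lines meet, e.g. thigh + shank -> knee.
        self.joints.append((connector_i, connector_j))
```

A usage example with the designations mentioned in the description:

```python
g = MarkerGraph()
g.add_marker("RIGHT HIP", (0, 0))
g.add_marker("RIGHT KNEE", (0, 1))
g.add_marker("RIGHT ANKLE", (0, 2))
thigh = g.connect("RIGHT HIP", "RIGHT KNEE")
shank = g.connect("RIGHT KNEE", "RIGHT ANKLE")
g.add_joint(thigh, shank)
```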
The computer system 112 may also analyze the marker data and associate the data with known models to facilitate generation and analysis of the motion capture data. For example, the computer system 112 may use the marker data to estimate the height of the subject 114. The marker data may also assist in the selection of other model information that may be applied to the subject 114, such as a predicted gait cycle, approximate body structure, and the like. When the motion capture system 100 has been initialized and the subject 114 prepared, the process thread 514 may proceed with receiving and analyzing successive frames of image data from the capture thread 512 (214). The successive frames are suitably synchronized so that the data from the various cameras 110 or other sources relate to substantially identical times. As the image data is received, the computer system 112 identifies the marker locations in the image data and accordingly generates data for the marker locations in two- or three-dimensional space (216). The computer system 112 may employ any appropriate algorithms or processes to identify the markers and determine their locations. For example, the process thread 514 may predict the location of the marker in a particular frame, search the frame for the marker, identify the marker location, and refine the resulting motion capture data. The process thread 514 may also perform any other desired computations relating to the image data and/or motion capture, such as calculating joint angle trajectories, stresses on joints, accelerations and velocities of body parts, or other relevant data. For example, referring to Figure 4, the process thread 514 may receive the image data for successive frames of image data (410). The frames may be received at any appropriate rate, such as at the cameras' 110 frame rate for real-time processing.
The computer system 112 may perform any suitable analysis, including filtering, correlation, assimilation, adaptation, and the like. For example, the computer system 112 may apply particle filters, Kalman filters, non-Gaussian and Gaussian filters, adaptive forecasting algorithms, or other statistical signal processing techniques to the image data. The present embodiment applies multiple analyses to the image data, including particle filtering and mean shift tracking, to track the movement of the markers. To determine the location of a particular marker in a frame of data, the computer system 112 may initially apply a prediction model using image data or motion capture data from one or more preceding frames to select a likely area for searching for the marker in the current frame (412). For example, the computer system 112 may apply a conventional prediction model using positions, velocities, accelerations, and/or angles from preceding frames to predict the current location of each marker. The prediction model may generate any suitable prediction data, such as an area or set of pixels most likely to currently include the marker. In the present embodiment, the prediction model establishes a selected number of locations for further analysis, such as 10 to 1000 locations, for example 75 to 150 potential locations. A greater number of locations may increase the likelihood of accurately finding the marker, but may also add complexity and processing time. The image data, such as the areas identified by the prediction model, may be analyzed to identify the current location of the marker. The present computer system 112 may perform particle filtering (variously known as condensation filtering, sequential Monte Carlo methods, or stochastic filtering methods) on the areas identified by the prediction model (414). In particular, particle filtering calculates probabilities for the marker being located in each candidate area.
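One plausible realization of the prediction step is a constant-velocity model that extrapolates a marker's position from its two preceding frames, then spreads a grid of candidate search locations around the prediction. Both functions are illustrative assumptions; the specification does not fix a particular prediction model or sampling pattern.

```python
def predict_position(prev, prev2, dt=1.0):
    """Constant-velocity prediction: one plausible form of the prediction
    model, using marker positions from the two preceding frames."""
    vx = (prev[0] - prev2[0]) / dt
    vy = (prev[1] - prev2[1]) / dt
    return (prev[0] + vx * dt, prev[1] + vy * dt)

def candidate_locations(center, spread, count):
    """Generate a square grid of candidate search locations around a
    predicted center, in the spirit of the 75 to 150 potential locations
    described above."""
    side = int(count ** 0.5)
    step = (2 * spread) / max(side - 1, 1)
    return [
        (center[0] - spread + i * step, center[1] - spread + j * step)
        for i in range(side)
        for j in range(side)
    ]
```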
Particle filtering tends to minimize error. The particle filtering may process any suitable characteristics in the image data, such as the color, texture, or shape of the target. In the present embodiment, the particle filter analyzes the relevant areas based on color to determine the various probabilities for the location of the marker within the analyzed areas. The particle filtering is suitably constrained or bounded by various rules, such as anatomic distances between markers. The particle filtering suitably generates a set of candidate particles most likely to include a particular marker. The computer system 112 also performs mean shift tracking on the image data (416). For example, the computer system 112 of the present embodiment performs a standard mean shift analysis on the candidate particles identified by the particle filter. Mean shift tracking performs global optimization for one frame to estimate the location of the marker in a subsequent frame. Mean shift tracking tends to provide smoother tracking of the marker movement than particle filtering alone, and more effectively tolerates variations in position and features, such as light variation or shadows. The mean shift tracking generates an expected area that is designated as the marker location. The combination of particle filtering and mean shift tracking tends to statistically minimize error and maintain stability in the marker tracking. The resulting data may be further processed to correct, clean, or smooth the data. Any suitable processes or techniques may be applied to the data for any desired purpose, for example to more accurately identify the location of the marker, smooth the data, or improve the marker tracking process. The present motion capture system 100 performs one or more different processes to correct and clean the motion capture data (418).
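The two-stage tracking described above, particle filtering to score candidate locations followed by mean-shift-style refinement, might be sketched as follows. The color-based likelihood, the `keep` cutoff, and the single weighted-mean step standing in for full mean shift iteration are all simplifying assumptions made for illustration.

```python
def color_likelihood(position, image, target_color):
    """Likelihood that the marker sits at a pixel position, based on color
    similarity -- an illustrative stand-in for the color analysis above."""
    x, y = int(round(position[0])), int(round(position[1]))
    if not (0 <= y < len(image) and 0 <= x < len(image[0])):
        return 0.0
    r, g, b = image[y][x]
    tr, tg, tb = target_color
    dist = abs(r - tr) + abs(g - tg) + abs(b - tb)
    return max(0.0, 1.0 - dist / 765.0)  # 765 = maximum city-block distance

def particle_filter_step(candidates, image, target_color, keep=10):
    """Score every candidate area and keep the most probable particles."""
    scored = [(color_likelihood(c, image, target_color), c) for c in candidates]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [c for score, c in scored[:keep] if score > 0.0]

def mean_shift_estimate(particles, image, target_color):
    """One weighted-mean step over the surviving particles: a rough
    analogue of mean shift refinement toward the marker location."""
    weights = [color_likelihood(p, image, target_color) for p in particles]
    total = sum(weights)
    if total == 0.0:
        return None
    x = sum(w * p[0] for w, p in zip(weights, particles)) / total
    y = sum(w * p[1] for w, p in zip(weights, particles)) / total
    return (x, y)
```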
In one embodiment, the computer system 112 allows occlusion management and correction of data loss, for example due to occlusion of markers or other failure to detect the marker. The data may be corrected in any suitable manner. For example, an operator may review any frame from which data is missing or that contains incorrect data and manually adjust the data to insert the correct marker location. Alternatively, the computer system 112 may automatically correct the data, such as by interpolating between prior and subsequent data. In addition, the computer system 112 may perform additional searching for missing data, for example using different search techniques and/or supplemental data. For example, if a left wrist marker location is missing but the location of the left elbow marker is known, then the computer system 112 may search a substantially spherical area having a radius corresponding to the length of the subject's forearm. The computer system 112 is also suitably configured to perform region growing to reduce error. In the present embodiment, if the full area of the marker is not detected, the computer system 112 may apply region growing to the detected marker to encompass the entire marker. Region growing may be performed using any suitable characteristics of the marker, such as the shape, texture, and/or color of the marker, and may use any suitable region growing techniques, such as conventional region growing processes. The computer system 112 may then calculate the centroid of the grown region, and the centroid is then designated as the marker. In addition, the computer system 112 may refine the marker locations according to known information. For example, the computer system 112 may refine the shape of the markers according to shape-based parameters, motion information, and/or anthropomorphic information. The computer system 112 may also perform data smoothing on the motion capture data, for example to minimize errors and correct locations of markers. The data smoothing may be applied to any data at any suitable point in the process.
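The interpolation-based correction mentioned above, filling a gap in a marker track from the nearest known positions before and after the gap, might look like this. The track-as-list representation (with `None` marking occluded frames) and linear interpolation are illustrative choices, not requirements of the description.

```python
def interpolate_missing(track):
    """Fill None gaps in a marker track by linear interpolation between
    the nearest known positions before and after each gap."""
    filled = list(track)
    for i, p in enumerate(filled):
        if p is not None:
            continue
        # Locate the nearest known samples on either side of the gap.
        prev_i = next((j for j in range(i - 1, -1, -1)
                       if filled[j] is not None), None)
        next_i = next((j for j in range(i + 1, len(filled))
                       if filled[j] is not None), None)
        if prev_i is None or next_i is None:
            continue  # cannot interpolate at the ends of the track
        t = (i - prev_i) / (next_i - prev_i)
        (x0, y0), (x1, y1) = filled[prev_i], filled[next_i]
        filled[i] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return filled
```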
For example, the computer system 112 may perform Markov chain Monte Carlo (MCMC) smoothing on the data generated by the particle filter to smooth transitions between frames and minimize the number of particles. MCMC smoothing tends to reduce erratic motion that may be produced by the particle filter. If the motion capture system 100 is generating two-dimensional data, the computer system 112 may also apply a calibration algorithm to the data to compensate for three-dimensional movement of the subject 114. If the subject 114 moves other than in a plane perpendicular to the camera 110, the calibration algorithm may adjust the data to compensate accordingly. The calibration algorithm may comprise any suitable calibration algorithm, such as a conventional calibration algorithm. The computer system 112 may also perform any other desired calculations on the motion capture data, such as calculation of different joint angles for each frame, joint angle trajectories, and timing differences between different parts of the body (420). As the motion capture data is generated, the computer system 112 may store any appropriate data for later use, such as the original video record, the motion capture data, and the calculated information, at any appropriate location, such as in a processed buffer 520 (Figure 5). All or parts of the process may be repeated until the last frame of the original image data is processed and the motion capture data is complete. The computer system 112 may also perform additional output processing to enhance the usefulness of the motion capture data or for another suitable purpose. The output processing may comprise any suitable processing of the image capture data for later use. For example, the display thread 516 may organize the records, including any appropriate data such as the original image data, the motion capture data, and other information, according to each subject's name and a session identifier.
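The joint angle calculation mentioned above can be illustrated with a small helper that computes the angle at a middle marker, for example the knee angle from hip, knee, and ankle positions. The 2D representation and the function name are assumptions made for the sketch.

```python
import math

def joint_angle(a, b, c):
    """Angle at marker b formed by segments b->a and b->c, in degrees --
    e.g. the knee angle from hip (a), knee (b), and ankle (c) markers."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        raise ValueError("coincident markers give an undefined angle")
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))
```

Computed per frame, a sequence of such angles forms the joint angle trajectory described above.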
The display thread 516 may also generate additional information for analysis, such as graphs showing time and amplitude of various motions, comparisons to previous motion capture sessions, overlays showing the motion capture information overlaid on the original image data, and the like. The display thread 516 may also facilitate entry of a report, such as using a word processing program, to store a report that may be associated with the subject 114, session, or other information and data. The display thread 516 may also include export functions for transmitting the records or storing them on a medium, such as a DVD. The particular implementations shown and described are illustrative of the invention and its best mode and are not intended to otherwise limit the scope of the present invention in any way. Indeed, for the sake of brevity, conventional manufacturing, connection, preparation, and other functional aspects of the system may not be described in detail. Furthermore, the connecting lines shown in the various figures are intended to represent exemplary functional relationships and/or physical couplings between the various elements. Many alternative or additional functional relationships or physical connections may be present in a practical system. The present invention has been described above with reference to a preferred embodiment. However, changes and modifications may be made to the preferred embodiment without departing from the scope of the present invention. These and other changes or modifications are intended to be included within the scope of the present invention.

Claims

1. A motion capture system, comprising: a data source configured to generate image data; and a computer system configured to receive the image data and generate motion capture data, wherein the computer system applies particle filtering and mean shift tracking to generate the motion capture data.
2. A motion capture system according to claim 1, wherein the data source comprises a camera responsive to visible light.
3. A motion capture system according to claim 1, wherein the computer system comprises a personal computer.
4. A motion capture system according to claim 1, wherein the computer system generates the motion capture data substantially simultaneously with receiving the image data.
5. A motion capture system according to claim 1, wherein the motion capture data is based on markers, and wherein the marker comprises a virtual marker.
6. A motion capture system according to claim 1, wherein the computer system is further configured to apply region growing to a marker position in the image data.
7. A motion capture system according to claim 1, wherein the computer system is further configured to identify a marker in the image data using an anthropomorphic model of a subject.
8. A motion capture system, comprising: a data source configured to generate image data having a marker; and a computer system configured to receive the image data and generate motion capture data, wherein the computer system is configured to: apply particle filtering to the image data for a first set of candidate positions for the marker to identify a subset of candidate positions for the marker; and apply mean shift tracking to the image data for the identified subset of candidate positions to identify a more likely position for the marker.
9. A motion capture system according to claim 8, wherein the data source comprises a camera responsive to visible light.
10. A motion capture system according to claim 8, wherein the computer system comprises a personal computer.
11. A motion capture system according to claim 8, wherein the computer system generates the motion capture data substantially simultaneously with receiving the image data.
12. A motion capture system according to claim 8, wherein the marker comprises a virtual marker.
13. A motion capture system according to claim 8, wherein the computer system is further configured to apply region growing to the more likely position of the marker.
14. A motion capture system according to claim 8, wherein the computer system is further configured to initially identify the marker in the image data using an anthropomorphic model of a subject.
15. A method of generating motion data, comprising: applying particle filtering to a frame of image data for a first set of candidate positions for a marker to identify a subset of candidate positions for the marker; and applying mean shift tracking to the image data for the identified subset of candidate positions to identify a more likely position for the marker.
16. A method of generating motion data according to claim 15, further comprising generating the image data with a camera responsive to visible light.
17. A method of generating motion data according to claim 16, wherein generating the image data, applying particle filtering, and applying mean shift tracking are performed substantially at the time the image data is generated.
18. A method of generating motion data according to claim 15, wherein the marker comprises a virtual marker.
19. A method of generating motion data according to claim 15, further comprising applying region growing to the more likely position for the marker.
20. A method of generating motion data according to claim 15, further comprising initially identifying the marker in the image data using an anthropomorphic model of a subject.
PCT/US2005/020965 2004-06-14 2005-06-14 Methods and apparatus for motion capture WO2005125210A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US57996204P 2004-06-14 2004-06-14
US60/579,962 2004-06-14

Publications (1)

Publication Number Publication Date
WO2005125210A1 (en) 2005-12-29

Family

ID=35510131

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/020965 WO2005125210A1 (en) 2004-06-14 2005-06-14 Methods and apparatus for motion capture

Country Status (1)

Country Link
WO (1) WO2005125210A1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6115052A (en) * 1998-02-12 2000-09-05 Mitsubishi Electric Information Technology Center America, Inc. (Ita) System for reconstructing the 3-dimensional motions of a human figure from a monocularly-viewed image sequence

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1870038A1 (en) * 2006-06-19 2007-12-26 Sony Corporation Motion capture apparatus and method, and motion capture program
US8355529B2 (en) 2006-06-19 2013-01-15 Sony Corporation Motion capture apparatus and method, and motion capture program
WO2016081994A1 (en) * 2014-11-24 2016-06-02 Quanticare Technologies Pty Ltd Gait monitoring system, method and device
WO2017066278A1 (en) * 2015-10-13 2017-04-20 Elateral, Inc. In-situ previewing of customizable communications
CN107705321A (en) * 2016-08-05 2018-02-16 南京理工大学 Moving object detection and tracking method based on embedded system

Similar Documents

Publication Publication Date Title
US8639020B1 (en) Method and system for modeling subjects from a depth map
Mundermann et al. Accurately measuring human movement using articulated ICP with soft-joint constraints and a repository of articulated models
D’Antonio et al. Validation of a 3D markerless system for gait analysis based on OpenPose and two RGB webcams
US20030215130A1 (en) Method of processing passive optical motion capture data
US20100208038A1 (en) Method and system for gesture recognition
CN110544301A (en) Three-dimensional human body action reconstruction system, method and action training system
CA3162163A1 (en) Real-time system for generating 4d spatio-temporal model of a real world environment
CN113658211B (en) User gesture evaluation method and device and processing equipment
Elaoud et al. Skeleton-based comparison of throwing motion for handball players
Michel et al. Markerless 3d human pose estimation and tracking based on rgbd cameras: an experimental evaluation
WO2005125210A1 (en) Methods and apparatus for motion capture
Nouredanesh et al. Chasing feet in the wild: a proposed egocentric motion-aware gait assessment tool
Almasi et al. Investigating the Application of Human Motion Recognition for Athletics Talent Identification using the Head-Mounted Camera
Abd Shattar et al. Experimental Setup for Markerless Motion Capture and Landmarks Detection using OpenPose During Dynamic Gait Index Measurement
Kanis et al. Improvements in 3D hand pose estimation using synthetic data
Xu Single-view and multi-view methods in marker-less 3d human motion capture
Kim et al. Tracking 3D human body using particle filter in moving monocular camera
Zhu et al. Kinematic Motion Analysis with Volumetric Motion Capture
Castresana et al. Goniometry-based glitch-correction algorithm for optical motion capture data
Ravi et al. ODIN: An OmniDirectional INdoor dataset capturing Activities of Daily Living from multiple synchronized modalities
Wong et al. Multi-person vision-based head detector for markerless human motion capture
Templin et al. The Effect of Synthetic Training Data on the Performance of a Deep Learning Based Markerless Biomechanics System
Xing et al. Markerless motion capture of human body using PSO with single depth camera
Sun et al. Analyzing and Recognizing Pedestrian Motion Using 3D Sensor Network and Machine Learning
CN117836819A (en) Method and system for generating training data set for keypoint detection, and method and system for predicting 3D position of virtual marker on non-marker object

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase