US5936671A - Object-based video processing using forward-tracking 2-D mesh layers - Google Patents

Object-based video processing using forward-tracking 2-D mesh layers

Info

Publication number
US5936671A
Authority
US
United States
Prior art keywords
mesh
boundary
video
video object
motion data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US08/886,871
Inventor
Petrus J. L. Van Beek
Ahmet M. Tekalp
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rakuten Group Inc
Original Assignee
Sharp Laboratories of America Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Laboratories of America Inc filed Critical Sharp Laboratories of America Inc
Priority to US08/886,871
Assigned to SHARP LABORATORIES OF AMERICA, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TEKALP, AHMET M.; VAN BEEK, PETRUS J.L.
Priority to PCT/JP1998/002957, published as WO1999001986A1
Application granted
Publication of US5936671A
Assigned to SHARP KABUSHIKI KAISHA, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SHARP LABORATORIES OF AMERICA, INCORPORATED
Assigned to RAKUTEN, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SHARP KABUSHIKI KAISHA

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/537 Motion estimation other than block-based
    • H04N 19/54 Motion estimation other than block-based using feature points or meshes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/20 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability


Abstract

The invented method involves the object-based processing of parts of video frames referred to as Video Object Planes using 2-D meshes, wherein the color and shape information associated with the Video Object Planes are assumed to be known at every frame and wherein each video object is processed independently. The invented method more particularly involves utilization of the Alpha Planes, which contain the shape information, in object-based design of an initial 2-D mesh, wherein an Alpha Plane is used to form a constraining polygonal mesh boundary, as well as in object-based tracking of mesh node points, wherein motion vectors of nodes on the mesh boundary are constrained so that these node points always lie along the Alpha Plane boundary, by means of restriction of the search space or back-projection, and mesh-based Video Object Plane mapping takes into account any differences between the mesh boundary and the Video Object Plane boundary. Such invented methods may be computer-implemented or computer-assisted, as by being coded as software within any coding system as memory-based instructions executed by a microprocessor, PC or mainframe computer, or may be implemented in hardware such as a state machine.

Description

CROSS REFERENCE TO RELATED APPLICATION
This application claims priority from U.S. provisional patent application Ser. No. 60/021,093, filed on Jul. 2, 1996, the disclosure of which is incorporated herein by this reference.
TECHNICAL FIELD
The present invention relates generally to object-based video processing techniques. More particularly, it concerns a method of video processing that enhances video data representation, storage and transmission in systems utilizing, for example, chroma-keying to extract meaningful parts from video data. The method preferably is hardware- or computer-implemented or hardware- or computer-assisted, and may for example be coded as software or firmware into existing system software executed by a microprocessor, personal computer (PC) or mainframe computer or may be implemented in hardware such as a state machine or application-specific integrated circuit (ASIC) or other device or devices.
BACKGROUND ART
Known background publications include the following references, familiarity with which is assumed, which references are incorporated herein by this reference.
[1] Y. Altunbasak, A. M. Tekalp and G. Bozdagi, "Two-dimensional object-based coding using a content-based mesh and affine motion parameterization," IEEE Int. Conference on Image Processing, Washington D.C., October 1995.
[2] Y. Altunbasak and A. M. Tekalp, "Occlusion-adaptive 2-D mesh tracking," Proc. ICASSP '96, Atlanta, Ga., May 1996.
[3] Y. Altunbasak and A. M. Tekalp, "Very-low bitrate video coding using object-based mesh design and tracking," Proc. SPIE/IS&T Electronic Imaging, Science and Technology, San Jose, Calif., February 1996.
[4] P. J. L. van Beek and A. M. Tekalp, "Object-based video coding using forward tracking 2-D mesh layers," Visual Communications and Image Processing '97, San Jose, Calif., February 1997.
[5] L. Chiariglione, "MPEG and multimedia communications," IEEE Trans. on Circ. and Syst. for Video Technology, vol. 7, no. 1, pp. 5-18, February 1997.
[6] D. Hearn and M. P. Baker, "Computer Graphics," second edition, Prentice Hall, 1997.
[7] B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," Proc. DARPA Image Understanding Workshop, pp. 121-130, 1981.
[8] Y. Nakaya and H. Harashima, "Motion compensation based on spatial transformations," IEEE Trans. on Circuits and Systems for Video Technology, vol. 4, no. 3, pp. 339-356, June 1994.
[9] J. Nieweglowski, T. G. Campbell and P. Haavisto, "A novel video coding scheme based on temporal prediction using digital image warping," IEEE Transactions on Consumer Electronics, vol. 39, no. 3, pp. 141-150, August 1993.
[10] J. R. Shewchuk, "Triangle: Engineering a 2D quality mesh generator and Delaunay triangulator," First Workshop on Applied Computational Geometry, pp. 124-133, ACM, Philadelphia, May 1996.
[11] T. Sikora, "The MPEG-4 Video Standard Verification Model," IEEE Trans. on Circ. and Syst. for Video Technology, vol. 7, no. 1, pp. 19-31, February 1997.
[12] G. J. Sullivan and R. L. Baker, "Motion compensation for video compression using control grid interpolation," Proc. ICASSP '91, vol. 4, pp. 2713-2716, May 1991.
[13] A. M. Tekalp, "Digital Video Processing," Prentice Hall, 1995.
[14] C. Toklu, A. T. Erdem, M. I. Sezan and A. M. Tekalp, "Tracking motion and intensity variations using hierarchical 2-D mesh modeling," Graphical Models and Image Processing, vol. 58, no. 6, pp. 553-573, November 1996.
[15] C. Toklu, A. M. Tekalp, and A. T. Erdem, "2-D Triangular mesh-based mosaicking for object tracking in the presence of occlusion," Visual Communication and Image Processing '97, San Jose, Calif., February 1997.
[16] C. Toklu, A. T. Erdem, and A. M. Tekalp, "2-D Mesh-based synthetic transfiguration of an object with occlusion," Proc. ICASSP '97, Munich, Germany, April 1997.
[17] K. Wall and P. E. Danielsson, "A fast sequential method for polygonal approximation of digitized curves," Computer Vision, Graphics, and Image Processing, vol. 28, pp. 220-227, 1984.
[18] J. Y. A. Wang and E. H. Adelson, "Representing moving images with layers," IEEE Transactions on Image Processing, vol. 3, no. 5, pp. 625-638, September 1994.
[19] Y. Wang and O. Lee, "Active mesh--A feature seeking and tracking image sequence representation scheme," IEEE Transactions on Image Processing, vol. 3, no. 5, pp. 610-624, September 1994.
These references may be referred to herein by their bracketed number, e.g. the Nakaya, et al. article is referred to herein simply as [8].
DISCLOSURE OF THE INVENTION
Briefly summarized, the invented method involves the object-based processing of parts of video frames referred to as Video Object Planes using 2-D meshes, wherein the color and shape information associated with the Video Object Planes are assumed to be known at every frame and wherein each video object is processed independently. The invented method more particularly involves utilization of the Alpha Planes, which contain the shape information, in object-based design of an initial 2-D mesh, wherein an Alpha Plane is used to form a constraining polygonal mesh boundary, as well as in object-based tracking of mesh node points, wherein motion vectors of nodes on the mesh boundary are constrained so that these node points always lie along the Alpha Plane boundary, by means of restriction of the search space or back-projection, and mesh-based Video Object Plane mapping takes into account any differences between the mesh boundary and the Video Object Plane boundary. Such invented methods may be computer-implemented or computer-assisted, as by being coded as software within any coding system as memory-based instructions executed by a microprocessor, PC or mainframe computer, or may be implemented in hardware such as a state machine.
These and additional objects and advantages of the present invention will be more readily understood after consideration of the drawings and the detailed description of the preferred embodiment which follows.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates the video data structure in block diagram form.
FIG. 2 illustrates object-based forward motion modeling versus frame-based forward motion modeling and frame-based backward motion modeling.
FIG. 3 depicts an overview of the object-based mesh design and tracking algorithm in block diagram form.
FIG. 4 illustrates the selection of mesh node points in object-based mesh design.
FIG. 5 illustrates details of object-based motion estimation and motion compensation with a forward tracking mesh layer.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT AND BEST MODE OF CARRYING OUT THE INVENTION
BACKGROUND
Object-based video representations allow for object-based compression, storage and transmission, in addition to object-based video manipulation, such as editing. Object-based video compression methods are currently being developed in the context of the MPEG-4 standardization process [5, 11]. This disclosure describes methods for object-based video motion representation using forward tracking 2-D mesh layers, where one mesh layer is used for each object.
Following MPEG-4 terminology [11], a "Video Object" (VO) refers to spatio-temporal data pertinent to a particular object and a "Video Object Plane" (VOP) refers to a two-dimensional (2-D) snapshot of a Video Object at a particular time instant (similar to a video frame). Each VOP consists of a number of color components, for instance a Y, U and V component, as well as a shape component or "Alpha Plane", describing its shape and opacity. This data structure is depicted in FIG. 1. VOPs can be I, P or B type as in MPEG-1 and -2, which are previously adopted and published standards that are precursors to the developing MPEG-4 standard. Those of skill in the art will appreciate that such VOP types will be referred to herein as I-VOPs, P-VOPs and B-VOPs, respectively, corresponding to I-frames, P-frames and B-frames in the case of MPEG-1 or -2. Note that the Alpha Planes are herein assumed to be known for every VOP in the VO. In practice, the Alpha Planes can be obtained using, for example, chroma-keying. Note further that different video objects may have been acquired with different cameras. On the other hand, different video objects may have been obtained from a single camera shot, by partitioning each frame into the constituent video object planes. A layered video representation similar to the data structure described above was discussed in [18].
In brief summary, FIG. 1 may be seen to be an illustration of the data structure used in accordance with the invention. Depicted are three different video object planes (VOPs), each consisting of three color components (Y, U and V planes) and one shape component (A plane). The A- or Alpha plane represents the shape and opacity of a video object plane, i.e., it describes in which parts of the frame the color data is defined and it describes the visibility of that color data. The color data of a video object plane is fully visible in areas where the Alpha plane is white and invisible or undefined in areas where the Alpha plane is black. An Alpha plane can have other shades of gray to denote partially transparent video object planes. Each video object is processed independently from other video objects; after processing and possible encoding and transmission of each video object, they may be overlaid so as to form a composited video frame. For instance, VOP 1 may be overlaid onto VOP 0, and VOP 2 may be overlaid onto the result of the first overlay.
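By way of illustration only, the layered data structure and the overlay compositing just described can be sketched in a few lines of code. This is a minimal sketch, not the patented implementation; the class name VOP, its fields, and the treatment of the Alpha Plane as a single full-resolution opacity map in [0, 255] are assumptions made for the example.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class VOP:
    """One Video Object Plane: three color components plus an Alpha Plane."""
    y: np.ndarray      # luma component, shape (H, W)
    u: np.ndarray      # chroma U (kept at full resolution here for simplicity)
    v: np.ndarray      # chroma V
    alpha: np.ndarray  # opacity: 0 = invisible/undefined, 255 = fully visible

def overlay(bottom: VOP, top: VOP) -> VOP:
    """Composite `top` onto `bottom`, e.g. VOP 1 onto VOP 0."""
    a = top.alpha.astype(np.float64) / 255.0          # per-pixel opacity of top
    blend = lambda b, t: (1.0 - a) * b + a * t        # gray alpha = partial transparency
    return VOP(blend(bottom.y, top.y),
               blend(bottom.u, top.u),
               blend(bottom.v, top.v),
               np.maximum(bottom.alpha, top.alpha))
```

Composing a frame then mirrors the text: overlay(overlay(vop0, vop1), vop2).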
Mesh-based Representation of Video Object Motion
In type-1 sources, also referred to herein as type-1 sequences, it is assumed that the intensities at all pixels within each VOP are available for all time instances (see FIG. 1). An example of a type-1 sequence is one where the VOPs (other than the background) are shot by chroma-keying techniques. It will be appreciated that there are covered and uncovered VOP regions resulting from object-to-object interactions as the VOPs move independently of each other.
It is assumed that the boundary of each VOP is approximated by a polygon or a spline with a finite number of vertex or control points, respectively. We investigate object-to-object interactions and tracking of the node points along the VOP boundaries (vertex or control points) under what will be referred to herein as Case I.
Case I, as herein defined, is concerned with processing of type-1 sequences, where all VOPs, their alpha planes and composition orders are known. Here, all interaction between the VOPs, such as one VOP covering another, can be handled by using the information in the alpha planes. In reference to FIG. 5, estimation of the motion vectors for node points along the boundary of VOP1 is constrained, such that motion vectors at the boundary of VOP1 in frame k must point to the boundary of VOP1 in frame k+1. This can be achieved by restricting the search space of these motion vectors during motion estimation (e.g., block-matching or hexagonal matching). The tracking of the nodes in the background (VOP0) is performed as in [2].
Prior techniques for frame-based video processing using 2-D mesh models include [1,2,3,8,9,12,14,15,16,19]. Mesh-based motion modeling is an alternative to block-based motion modeling, which has been adopted in international video coding standards such as MPEG-1 and MPEG-2 [13]. A 2-D mesh is a tessellation (or partition) of a 2-D planar region into polygonal patches. The vertices of the polygonal patches are referred to as the node points of the mesh. Usually, the polygonal patches are triangles or quadrangles, leading to triangular or quadrilateral meshes, respectively. The patches in the previous frame are deformed by the movements of the node points into polygonal patches in the current frame, and the texture inside each patch in the previous frame can be warped onto the current frame as a function of the node point motion vectors. In the case of triangular patches, the warping is performed according to a six-parameter affine transform. Note that the patches overlap neither in the previous frame nor in the current frame. As such, the original 2-D motion field can be compactly represented by the motion of the mesh node points, from which a continuous, piecewise smooth motion field can be reconstructed.
An advantage of the mesh-motion model over a (translational) block-motion model is its ability to represent more general types of motions. At the same time, mesh models constrain the movements of adjacent image patches. Therefore, they are well-suited to represent mildly deformable but spatially continuous motion fields. An advantage of the block-based model is its ability to handle discontinuities in the motion field; however, such discontinuities may not always coincide with block borders. Note that a mesh-based motion field can be described by approximately the same number of parameters as a translational block-based motion field in case of an equal number of patches.
This disclosure combines recent mesh-based motion tracking and compensation methods [1,2,3,8,9,12,14] with a layered (object-based) video representation to address object-based functionalities for video processing systems in the case that Alpha Planes are available. Most prior techniques in the literature address frame-based mesh modeling only. In frame-based modeling, a mesh covers the entire video frame, both in the previous and current frame. As described in this disclosure, in object-based modeling, a mesh covers only that part of a video frame that corresponds to a semantically meaningful object, captured by a Video Object Plane and delineated by an Alpha Plane. As such, each Video Object is to be processed independently. Both frame-based and object-based modeling are illustrated in FIG. 2. Methods for mesh tracking in the case that no Alpha Planes are available have been addressed in [15,16].
In brief summary, FIG. 2 will be understood to illustrate object-based forward motion modeling (c) versus frame-based forward motion modeling (b) and frame-based backward motion modeling (a) using 2-D meshes. Meshes can be quadrilateral, as in (a) and (b), or triangular, as in (c). Triangular meshes are more convenient in representing arbitrarily shaped objects, as in (c). In object-based modeling, objects can have arbitrary shape; these shapes are represented by the polygonal mesh boundary as in (c). In backward motion modeling, motion vectors for the current frame are searched in the previous frame. In forward motion modeling, motion vectors for the previous frame are searched in the current frame. In the latter case, the search procedure in the next frame can be based on the motion vectors obtained in the current frame, thus tracking points of interest through the sequence.
Motion estimation methods can be classified as backward or forward estimation, see FIG. 2. The former, in the case of mesh modeling, refers to searching in a previous reference frame for the best locations of the node points that match those in the current frame. In backward mesh motion estimation, one usually sets up a new regular mesh in every frame. In forward mesh motion estimation, one sets up a mesh in a previous reference frame, and searches for the best matching locations of the node points in the current frame. This enables the system to continue to search for node motion vectors in successive frames using the most recently updated mesh, thus tracking features of interest through the entire sequence. The initial mesh may be regular, or may be adapted to the image contents, in which case it is called a content-based mesh.
In this work, forward motion estimation using content-based triangular meshes is used, because it allows for better modeling and it allows for tracking of object features through the image sequence. Mesh tracking, in turn, enables manipulation and animation of graphics and video content using texture mapping, which is a common technique in 3-D graphics systems [6]. Furthermore, the mesh-based tracking algorithm described here can be applied in object-based video compression systems [4], achieving a common framework for object-based video compression and manipulation. We describe how a new content-based triangular mesh is designed independently for each I-VOP to be represented. We then describe how each mesh layer is tracked independently over the subsequent P-VOPs. In particular, we describe how the Alpha Planes, which are given for each VOP, are utilized in the initial mesh design as well as in the mesh tracking in a novel manner. An outline of the mesh design and tracking algorithm is depicted in FIG. 3.
In brief summary, FIG. 3 will be understood to illustrate an overview of the object-based mesh tracking algorithm. An initial mesh is designed on the first video object plane, at t=0. For the following video object planes, at t=1, 2, 3, etc., motion vectors of the mesh nodes are estimated, which point from the previous video object plane to the current video object plane; then, the motion vectors are applied to the nodes to motion compensate the mesh. The mesh design results in a number of node point locations p_n and triangular elements e_k; the mesh motion estimation results in a number of node motion vectors v_n.
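The tracking loop summarized in FIG. 3 can be expressed as the following sketch. The function names design_mesh and estimate_node_motion are placeholders for the mesh design and node motion estimation procedures described in the sections that follow; this is an illustrative outline, not the patent's implementation.

```python
import numpy as np

def track_mesh(vops, design_mesh, estimate_node_motion):
    """Forward mesh tracking as in FIG. 3: design a mesh on the first VOP,
    then propagate it through the following VOPs by node motion compensation.

    design_mesh(vop) -> (node locations p_n as an (N, 2) array,
                         triangular elements e_k as index triples)
    estimate_node_motion(prev_vop, cur_vop, nodes) -> node vectors v_n, (N, 2)
    """
    nodes, triangles = design_mesh(vops[0])           # mesh design at t = 0
    meshes = [(nodes.copy(), triangles)]
    for prev, cur in zip(vops[:-1], vops[1:]):        # t = 1, 2, 3, ...
        v = estimate_node_motion(prev, cur, nodes)    # forward motion vectors
        nodes = nodes + v                             # motion compensate the mesh
        meshes.append((nodes.copy(), triangles))      # topology stays fixed
    return meshes
```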
Object-based Mesh Design Using Alpha Plane Information
This section describes the design of a content-based mesh in the case that the Alpha Plane of the initial VOP is available [3]. It differs from the prior known frame-based mesh design algorithm [1] in the sense that the arbitrary shape of the VOP has to be represented. Firstly, node points on the boundary of the VOP are selected and secondly, interior nodes are selected, as illustrated in FIG. 4. Finally, Delaunay triangulation is applied to define the mesh triangular topology.
VOP Boundary Polygonization and Selection of Boundary Nodes
The Alpha Plane is first binarized by setting every nonzero pixel to the maximum pixel value (255) and all other pixels to the minimum pixel value (0). The boundary of the VOP is then obtained by extracting the largest connected component in the binarized Alpha Plane and tracing the pixels on its contour. Then, the boundary of the VOP is approximated by straight-line segments, together forming a polygon. The resulting polygon becomes the boundary of the object mesh layer. The vertices of the boundary polygon will serve as node points of the 2-D object mesh layer. We have used a fast sequential polygonal approximation algorithm [17] to compute the boundary polygon.
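As an illustration of this boundary polygonization step, the following sketch uses OpenCV. Two hedges apply: cv2.approxPolyDP implements the Douglas-Peucker method and is used here merely as a stand-in for the sequential polygonal approximation algorithm of [17], and the two-value contour-return signature assumed below is that of OpenCV 4.x.

```python
import numpy as np
import cv2  # OpenCV, used here as a convenient stand-in

def boundary_polygon(alpha: np.ndarray, tol: float = 2.0) -> np.ndarray:
    """Binarize the Alpha Plane, trace the contour of its largest connected
    component, and approximate it by a polygon. Returns a (K, 2) vertex array
    whose vertices serve as the mesh boundary node points."""
    binary = np.where(alpha > 0, 255, 0).astype(np.uint8)    # nonzero -> 255, else 0
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)    # trace contour pixels
    largest = max(contours, key=cv2.contourArea)             # largest component
    poly = cv2.approxPolyDP(largest, tol, True)              # straight-line segments
    return poly.reshape(-1, 2)                               # polygon vertices
```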
Selection of Nodes in the Interior of a VOP
Additional nodes, besides the vertices of the VOP boundary polygon, are selected within the VOP using the node selection algorithm proposed in [1]. The basic principle of this method is to place node points in such a way that triangle edges align with intensity edges and the density of node points is proportional to the local motion activity. The former is attained by placing node points on pixels with high spatial gradient. The latter is achieved by allocating node points in such a way that a predefined function of the displaced frame difference (DFD) within each triangular patch attains approximately the same value. The displaced frame difference can be computed using motion vectors estimated by conventional displacement estimation techniques.
An outline of the content-based node-point selection algorithm is as follows. An illustration is given in FIG. 4.
1. Compute an image containing the displaced frame difference inside the VOP, named DFD(x,y). For instance, this can be computed using a forward dense motion field from the previous VOP at time t to the current VOP at time t+1. In the case of video compression, this image can contain past quantized prediction error. In any case, areas in this image with high pixel values signal that either the motion cannot be estimated in that area, or that the motion is complex in that area. More nodes will be placed in these areas than in areas with low displaced frame difference values, thus creating a finer motion representation in the former areas.
2. Compute a "cost function" image C(x,y)=|Ix (x,y)|2 +|Iy (x,y)|2, where Ix (x,y) and Iy (x,y) stand for the partial derivatives of the intensity with respect to x and y coordinates evaluated at the pixel (x,y). The cost function is related to the spatial intensity gradient so that selected node points tend to coincide with spatial edges.
3. Initialize a label image to keep track of node positions and pixel labels. Label all pixels as unmarked. Denote the number of available nodes by N.
4. (Re-)compute the average displaced frame difference value, given by DFD_avg = (1/N) Σ [DFD(x,y)]^p, where DFD(x,y) stands for the displaced frame difference or prediction error image computed in step 1, the summation is over all unmarked pixels in the VOP, N is the number of currently available nodes, and p=2.
5. Find the unmarked pixel with the highest C(x,y) and label this point as a node point. Note that marked pixels cannot be labeled as nodes. Decrement N by 1.
6. Grow a square or circular region about this node point until the sum Σ [DFD(x,y)]^p over the unmarked pixels in this region is greater than DFD_avg. Continue growing until the radius of this region is greater than or equal to some prespecified value. Label all pixels within the region as marked.
7. If N>0, go to 4; otherwise, the desired number of node points, N, is selected and the algorithm stops.
In brief summary, FIG. 4 will be understood to illustrate the node point selection procedure in object-based mesh design. The boundary of the video object plane (VOP) is approximated by a polygon, consisting of straight-line segments. The vertices of this polygon are selected as mesh boundary node points. Then, further node points are selected inside the VOP polygonal boundary. For each node point that is selected, a region is grown around the node location and pixels inside this region are marked, so that another node point cannot be placed within a marked region. Each region grows until the integral over the region of a predefined function attains a certain value. The predefined function can, for example, represent a local measure of temporal activity. Then, circular regions with small radius correspond to regions with high temporal activity, while regions with large radius correspond to regions with low temporal activity. After node point selection, triangulation of the point set is applied to obtain a mesh. The straight-line segments on the polygonal mesh boundary are used as constraints in the triangulation, which guarantees that these segments become edges in the mesh and that no triangle falls outside the polygonal boundary.
The growing of marked pixels in step 6 ensures that each selected node is not closer to any other previously selected nodes than a prespecified minimum distance. At the same time, it controls the node point density in proportion to the local motion activity. In reference to FIG. 4, a small circle indicates a high temporal activity, while a large circle indicates low temporal activity.
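A simplified reading of steps 3 through 7 above can be sketched as follows, using circular regions only. The function signature and the max_radius cap are assumptions made for the example; the patent's region growing and marking logic is summarized here, not reproduced exactly.

```python
import numpy as np

def select_interior_nodes(dfd, cost, vop_mask, n_nodes, p=2, max_radius=20):
    """Greedy content-based node selection: repeatedly pick the unmarked pixel
    with the highest gradient cost, then mark a circular region around it whose
    size is inversely related to the local DFD (motion activity)."""
    h, w = dfd.shape
    yy, xx = np.mgrid[0:h, 0:w]
    unmarked = vop_mask.copy()                        # boolean VOP support mask
    dfd_p = np.abs(dfd) ** p
    nodes = []
    for n_left in range(n_nodes, 0, -1):
        avg = dfd_p[unmarked].sum() / n_left          # step 4: average [DFD]^p
        c = np.where(unmarked, cost, -np.inf)
        iy, ix = np.unravel_index(np.argmax(c), c.shape)  # step 5: best pixel
        if not np.isfinite(c[iy, ix]):
            break                                     # no unmarked pixels left
        nodes.append((ix, iy))
        r2 = (xx - ix) ** 2 + (yy - iy) ** 2
        for radius in range(1, max_radius + 1):       # step 6: grow region
            if dfd_p[unmarked & (r2 <= radius ** 2)].sum() > avg:
                break
        unmarked &= ~(r2 <= radius ** 2)              # mark: no new nodes here
    return np.array(nodes)
```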
Constrained Delaunay Triangulation
After all node points are selected, constrained Delaunay triangulation [10] is employed to construct a content-based triangular mesh within each VOP. Delaunay triangulation is a well-known technique in the computational geometry field to construct triangulations of point sets. The edges of the VOP boundary polygon are used as constraints in the triangulation, to make sure that polygon edges become triangle edges and that all triangles are inside the polygon.
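For illustration, the following sketch performs the constrained triangulation with the Python bindings to Shewchuk's Triangle, i.e., the tool described in [10]. The triangle module's API as used here (a vertices/segments dictionary and the 'p' switch for a planar straight-line graph) is an assumption to verify against the package documentation.

```python
import numpy as np
import triangle  # Python bindings to Shewchuk's Triangle [10]

def constrained_mesh(boundary_nodes, interior_nodes):
    """Constrained Delaunay triangulation: the boundary polygon edges are
    forced to appear as mesh edges, and no triangle falls outside the polygon."""
    k = len(boundary_nodes)
    vertices = np.vstack([boundary_nodes, interior_nodes]).astype(float)
    # Consecutive boundary vertices form the constraining segments (closed polygon).
    segments = np.array([(i, (i + 1) % k) for i in range(k)])
    # 'p' triangulates the planar straight-line graph; triangles outside the
    # closed boundary are discarded.
    mesh = triangle.triangulate({'vertices': vertices, 'segments': segments}, 'p')
    return mesh['vertices'], mesh['triangles']
```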
Object-based Mesh Tracking Using Alpha Plane Information
This section describes a method for 2-D mesh tracking when Alpha Plane information is given and no occlusions are present in the video data. Video object tracking is a very challenging problem in general, since one needs to take into account the mutual occlusion of scene objects, which leads to covering and uncovering of object surfaces projecting into the image. However, the complexity of the object-based tracking problem depends on the type of video source at hand and the problem is simplified if the Alpha Plane information is available. We consider two different types of video sources. Type-1 sources are such that the intensities at all pixels within each Video Object Plane are available for all time instances. An example of a type-1 sequence is one where VOPs are shot by chroma-keying (blue-screening) techniques. In type-2 sources, pixel intensities in the covered parts of each VOP are not available. This case arises, for example, if the VOPs are extracted from a single camera shot (usually by user interaction). In order to track multiple triangular meshes over a sequence of VOPs, in general one needs to take covering and uncovering of objects into account. In the following, we discuss tracking of the VO mesh node points only for sequences without any occlusion, where all VOP intensities, their Alpha Planes, and composition orders are known. Here, each VO sequence is processed and compressed independently. Given the assumption that there is no occlusion in a VO, the Alpha Planes can be effectively used to constrain the motion of mesh node points, simplifying the mesh tracking problem significantly.
An overview of the mesh tracking procedure can be given by the block diagram shown in FIG. 3. The tracking algorithm implements the following steps: given the mesh in the previous VOP, a forward motion vector (between the previous and current VOPs) is estimated for each node point. These motion vectors are applied to the mesh nodes to obtain a mesh at the current VOP. The meshes at the previous and current VOPs can be used to warp the pixel-texture of the mesh elements (patches) from the previous VOP to the current VOP.
Node Motion Vector Estimation
Motion estimation is done in all P-VOPs (not in an I-VOP) in order to propagate a mesh from the previous VOP to the current VOP. For all the mesh node points, a motion vector has to be computed using forward estimation. Motion vectors of node points inside a VOP can be estimated in several ways [13], such as block matching, generalized block matching, gradient-based methods, and hexagonal matching. We have used either full-search block-matching or a hierarchical version of the gradient-based method of Lucas and Kanade [7] to estimate the motion at locations of node points, and hexagonal matching [8] for motion vector refinement. In the case of block-matching, a square block of pixels is centered on the node point location in the previous VOP and the best matching block of pixels in the current VOP is found by searching candidate locations inside a search window. The best match is defined by the use of an error criterion, such as the Sum of Absolute Differences (SAD) between pixels of the reference block and pixels of the candidate block. In the case of gradient-based motion estimation, a dense motion field is first computed in the entire VOP, which is then sampled at the locations of the mesh nodes. Note that prior to motion estimation, the previous and current VOPs are padded beyond their boundaries. For nodes which are close to the VOP boundary, only YUV data of that VOP is taken into account.
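A minimal full-search block-matching sketch for a single node point, using the SAD criterion, is given below. The block and search-window sizes are illustrative assumptions, and the images are assumed to be padded as noted above so that all windows stay in bounds.

```python
import numpy as np

def block_match_node(prev_y, cur_y, node, block=8, search=7):
    """Forward full-search block matching for one node: center a block on the
    node in the previous VOP and find the best SAD match in the current VOP."""
    x, y = node
    b = block // 2
    ref = prev_y[y - b:y + b + 1, x - b:x + b + 1].astype(np.int32)
    best, best_v = None, (0, 0)
    for dy in range(-search, search + 1):             # candidate locations
        for dx in range(-search, search + 1):         # inside the search window
            cand = cur_y[y + dy - b:y + dy + b + 1,
                         x + dx - b:x + dx + b + 1].astype(np.int32)
            sad = np.abs(ref - cand).sum()            # Sum of Absolute Differences
            if best is None or sad < best:
                best, best_v = sad, (dx, dy)
    return best_v                                     # forward node motion vector
```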
The motion estimation of node points on mesh boundaries is constrained such that these nodes always fall on the actual VOP boundary. For example, in reference to FIG. 5, the motion vectors of nodes at the boundary of the shown VOP at time t must point to a point on the boundary of this VOP at time t'. This can be achieved by restricting the search space during motion estimation or by projecting the boundary nodes onto the actual boundary after motion estimation. Since both block-based node motion estimation and hexagonal matching are search-based, the constraint provided by the VOP boundary can be enforced by restricting the search to candidate node locations on the VOP boundary. Gradient-based motion estimation is not search-based, so the new node location, obtained by applying the computed motion vector to the old node location, must be projected back onto the VOP boundary. This is done by projecting the node onto the VOP boundary point that has the minimum distance to the initially computed node location. Further constraining is necessary in both the search-based and gradient-based techniques to ensure that the polygonal boundary will not self-intersect after the motion vectors are applied to the nodes. This means that the ordering of consecutive boundary node points may not change from one time instant to the next. The motion estimation of nodes interior to the mesh is not constrained in the above manner.
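The back-projection step for gradient-based estimation can be sketched as follows; the boundary is assumed to be given as an array of contour pixel coordinates. This is an illustrative helper, not the patent's implementation, and the separate check that the ordering of consecutive boundary nodes is preserved is omitted.

```python
import numpy as np

def project_to_boundary(node_new, boundary_pixels):
    """Back-project a boundary node onto the actual VOP boundary: snap the
    estimated location to the boundary pixel at minimum distance from it.
    `boundary_pixels` is an (M, 2) array of contour pixel coordinates."""
    d2 = ((boundary_pixels - np.asarray(node_new, float)) ** 2).sum(axis=1)
    return boundary_pixels[np.argmin(d2)]
```

For the search-based estimators, the same constraint is enforced up front by drawing the candidate node locations only from boundary_pixels inside the search window.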
In brief summary, FIG. 5 will be understood to illustrate an object-based motion estimation and motion compensation with a forward tracking mesh layer. A small part of the mesh near the video object plane boundary is depicted with solid lines; the actual video object plane boundary is depicted with dashed lines. The nodes on the polygonal mesh boundary must always fall exactly on the actual video object plane boundary; they are allowed to move along this boundary. The interior nodes of the mesh are allowed to move to locations inside the mesh boundary. Motion compensation of pixels inside each triangular patch of the mesh is performed according to an affine transform, defined by the three point correspondences of its nodes. Pixels in areas inside the mesh boundary but not inside the VOP boundary or vice versa need additional processing; either by padding or by separate mapping.
After motion estimation, each node p_n has a motion vector v_n. We employ a post-processing algorithm to preserve the connectivity of the patches. This post-processing enforces the general constraint on mesh topologies that edges between node points are not allowed to cross each other and triangles may not be flipped.
Mesh-based VOP Warping
In applications of the mesh-based motion tracking algorithm, such as video manipulation and video compression, a warping step is applied to map pixels from one VOP to another. To this effect, each triangular patch is warped from the previous VOP to the current VOP using the estimated node motion vectors. Note that prior to the warping, the image containing the previous VOP is padded, in case some area of triangles in the previous VOP falls outside the actual VOP region, see FIG. 5. For each triangular patch, the three forward node point motion vectors determine uniquely a backward affine transform from the current to the previous frame. Then, all pixels (x',y') within the patch of the current VOP are motion compensated from the previous VOP by using the affine transform to compute the corresponding location in the previous VOP (x,y). Bilinear interpolation is used when the corresponding location (x,y) in the previous VOP is not a pixel location.
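A sketch of this backward affine mapping and bilinear interpolation follows. The least-squares solve reduces to an exact solve for three non-degenerate point correspondences, and the previous VOP image is assumed to be padded as described above so that the four neighboring samples always exist.

```python
import numpy as np

def backward_affine(tri_cur, tri_prev):
    """Six-parameter affine transform mapping current-VOP coordinates to the
    previous VOP, determined by the triangle's three node correspondences
    (the forward node motion vectors, reversed)."""
    src = np.hstack([np.asarray(tri_cur, float), np.ones((3, 1))])  # [x' y' 1]
    M, *_ = np.linalg.lstsq(src, np.asarray(tri_prev, float), rcond=None)
    return M  # 3x2 matrix: [x y] = [x' y' 1] @ M

def compensate_pixel(prev_y, M, x_cur, y_cur):
    """Motion compensate one current-VOP pixel from the (padded) previous VOP,
    using bilinear interpolation at the non-integer source location (x, y)."""
    x, y = np.array([x_cur, y_cur, 1.0]) @ M
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * prev_y[y0, x0] +
            fx * (1 - fy) * prev_y[y0, x0 + 1] +
            (1 - fx) * fy * prev_y[y0 + 1, x0] +
            fx * fy * prev_y[y0 + 1, x0 + 1])
```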
Note that some pixels in the current VOP may fall outside the mesh that models the current VOP, because the boundary of the mesh is only a polygonal approximation to the true boundary, see FIG. 5. These pixels exterior to the mesh but inside the VOP need to be motion compensated as well. Each of these pixels is motion compensated by computing a motion vector derived from the mesh boundary node motion vectors. This motion vector is estimated by interpolating the motion vectors of the two nearest nodes on the polygonal mesh boundary. This is done by inverse-distance-weighted interpolation of these two motion vectors.
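The inverse-distance-weighted interpolation for such exterior pixels can be sketched as follows; the function signature is an assumption made for the example.

```python
import numpy as np

def exterior_pixel_motion(pixel, boundary_nodes, boundary_vectors):
    """Motion vector for a pixel inside the VOP but outside the mesh:
    inverse-distance-weighted interpolation of the motion vectors of the
    two nearest nodes on the polygonal mesh boundary."""
    d = np.linalg.norm(boundary_nodes - np.asarray(pixel, float), axis=1)
    i, j = np.argsort(d)[:2]                 # two nearest boundary nodes
    if d[i] < 1e-9:                          # pixel coincides with a node
        return boundary_vectors[i]
    w_i, w_j = 1.0 / d[i], 1.0 / d[j]        # inverse-distance weights
    return (w_i * boundary_vectors[i] + w_j * boundary_vectors[j]) / (w_i + w_j)
```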
Accordingly, while the present invention has been shown and described with reference to the foregoing preferred methods, it will be apparent to those skilled in the art that other changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

We claim:
1. A method for mesh-based representation of the motion of different arbitrarily shaped Video Objects, the method comprising the steps of:
representing and processing each Video Object independently of other Video Objects, with a given Alpha Plane of a Video Object Plane utilized in the design of a mesh.
2. A method for tracking the node points of an object-based mesh, the method comprising the steps of:
defining Video Object Plane boundaries by Alpha planes which are known at every frame; and
constraining motion vectors of node points on a mesh boundary of the object-based mesh to lie along the Video Object Plane boundary at every frame.
3. An improved method for constraining mesh boundary nodes along a Video Object Plane boundary, the improvement comprising restricting search space for new node locations during motion vector estimation such that an order of the nodes along a polygonal mesh boundary is not allowed to change.
4. An improvement to video data compression methods for processing successive video frames to code video object plane shape, motion and texture, where the processing includes block-based motion data processing, the improvement comprising: replacing the existing block-based motion data processing with a layer of mesh-based motion data processing, wherein the mesh-based motion data processing is performed in accordance with mesh-based motion data processing criteria.
5. An improvement to video data compression methods for processing successive video frames to code video object plane shape, motion and texture, where the processing includes block-based motion data processing, the improvement comprising: adding a layer of mesh-based motion data processing to the existing block-based motion data processing, wherein the mesh-based motion data processing is performed in accordance with mesh-based motion data processing criteria; and the block-based motion data processing is performed in accordance with block-based motion data processing criteria.
6. A method for tracking the node points along Video Object Plane boundaries for a case I, the method comprising the steps of:
assuming the Video Object Plane boundaries to be known at every frame; and
constraining motion vectors of nodes along a boundary to lie along a same Video Object Plane boundary at a next frame by restricting search space.
7. In connection with a video coding method for tracking node points of a video object plane along the video object plane boundaries known at every frame, the improvement comprising:
constraining the motion vectors of nodes along the boundary to lie along the same video object plane boundary at the next frame, thereby restricting the search space in a predefined way.
8. The method of claim 7, wherein said search space is restricted for those nodes that define the mesh boundary at the next frame that lie at the Alpha plane boundary matching those at the present frame.
9. An improvement to data compression methods for processing successive video frames to code video object plane motion and texture where the processing includes block-based motion data processing, the improvement comprising:
adding a layer of mesh-based motion data processing to the existing block-based motion data processing, wherein the mesh-based motion data processing distinguishes between a first case I whereby at least one video object plane is defined by chroma-key sequences;
processing mesh-based motion data in accordance with mesh-based motion data processing criteria; and
processing block-based motion data in accordance with predefined block-based motion data processing criteria.
US08/886,871 1996-07-02 1997-07-02 Object-based video processing using forward-tracking 2-D mesh layers Expired - Lifetime US5936671A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US08/886,871 US5936671A (en) 1996-07-02 1997-07-02 Object-based video processing using forward-tracking 2-D mesh layers
PCT/JP1998/002957 WO1999001986A1 (en) 1997-07-02 1998-07-01 Object-based video processing using forward-tracking 2-d mesh layers

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US2109396P 1996-07-02 1996-07-02
US08/886,871 US5936671A (en) 1996-07-02 1997-07-02 Object-based video processing using forward-tracking 2-D mesh layers

Publications (1)

Publication Number Publication Date
US5936671A (en) 1999-08-10

Family

ID=25389966

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/886,871 Expired - Lifetime US5936671A (en) 1996-07-02 1997-07-02 Object-based video processing using forward-tracking 2-D mesh layers

Country Status (2)

Country Link
US (1) US5936671A (en)
WO (1) WO1999001986A1 (en)

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6026195A (en) * 1997-03-07 2000-02-15 General Instrument Corporation Motion estimation and compensation of video object planes for interlaced digital video
US6038258A (en) * 1996-05-29 2000-03-14 Samsung Electronics Co., Ltd. Encoding and decoding system of motion image containing arbitrary object
US6148026A (en) * 1997-01-08 2000-11-14 At&T Corp. Mesh node coding to enable object based functionalities within a motion compensated transform video coder
US6192156B1 (en) * 1998-04-03 2001-02-20 Synapix, Inc. Feature tracking using a dense feature array
WO2001041451A1 (en) * 1999-11-29 2001-06-07 Sony Corporation Video/audio signal processing method and video/audio signal processing apparatus
US6271861B1 (en) * 1998-04-07 2001-08-07 Adobe Systems Incorporated Smooth shading of an object
US6330281B1 (en) * 1999-08-06 2001-12-11 Richfx Ltd. Model-based view extrapolation for interactive virtual reality systems
FR2811791A1 (en) * 2000-07-13 2002-01-18 France Telecom MOTION ESTIMATOR FOR ENCODING AND DECODING IMAGE SEQUENCES
US20020009140A1 (en) * 2000-03-07 2002-01-24 Yves Ramanzin Method of encoding video signals
US6370196B1 (en) * 1997-07-28 2002-04-09 Idt International Digital Technologies Deutschland Gmbh Method and apparatus for multiresolution object-oriented motion estimation
US6389072B1 (en) * 1998-12-23 2002-05-14 U.S. Philips Corp. Motion analysis based buffer regulation scheme
US6456731B1 (en) * 1998-05-21 2002-09-24 Sanyo Electric Co., Ltd. Optical flow estimation method and image synthesis method
US6496601B1 (en) * 1997-06-23 2002-12-17 Viewpoint Corp. System and method for asynchronous, adaptive moving picture compression, and decompression
US20040133647A1 (en) * 1998-12-23 2004-07-08 Canon Kabushiki Kaisha Method and system for conveying video messages
USRE38564E1 (en) 1997-03-07 2004-08-10 General Instrument Corporation Motion estimation and compensation of video object planes for interlaced digital video
US20040167554A1 (en) * 2000-12-20 2004-08-26 Fox Hollow Technologies, Inc. Methods and devices for reentering a true lumen from a subintimal space
US6785329B1 (en) * 1999-12-21 2004-08-31 Microsoft Corporation Automatic video object extraction
US20040223547A1 (en) * 2003-05-07 2004-11-11 Sharp Laboratories Of America, Inc. System and method for MPEG-4 random access broadcast capability
US20040240746A1 (en) * 2003-05-30 2004-12-02 Aliaga Daniel G. Method and apparatus for compressing and decompressing images captured from viewpoints throughout N-dimensional space
US20050183303A1 (en) * 2003-08-15 2005-08-25 Simonsen Peter A. Method and an arrangement for advertising or promoting
US20050249426A1 (en) * 2004-05-07 2005-11-10 University Technologies International Inc. Mesh based frame processing and applications
US20050259881A1 (en) * 2004-05-20 2005-11-24 Goss Michael E Geometry and view assisted transmission of graphics image streams
US7095786B1 (en) 2003-01-11 2006-08-22 Neo Magic Corp. Object tracking using adaptive block-size matching along object boundary and frame-skipping when object motion is low
US20070053431A1 (en) * 2003-03-20 2007-03-08 France Telecom Methods and devices for encoding and decoding a sequence of images by means of motion/texture decomposition and wavelet encoding
CN100336390C (en) * 1999-11-29 2007-09-05 索尼公司 Step decomposition method and apparatus for extracting synthetic video selection for browsing
US20080031325A1 (en) * 2006-08-03 2008-02-07 Yingyong Qi Mesh-based video compression with domain transformation
EP1925159A2 (en) * 2005-09-16 2008-05-28 Sony Electronics, Inc. Adaptive area of influence filter for moving object boundaries
US20080122926A1 (en) * 2006-08-14 2008-05-29 Fuji Xerox Co., Ltd. System and method for process segmentation using motion detection
US20100189172A1 (en) * 2007-06-25 2010-07-29 France Telecom Methods and devices for coding and decoding an image sequence represented with the aid of motion tubes, corresponding computer program products and signal
US20100284627A1 (en) * 2009-05-08 2010-11-11 Mediatek Inc. Apparatus and methods for motion vector correction
US20110054503A1 (en) * 2009-09-02 2011-03-03 Isa Rizk Systems, methods and devices for ablation, crossing, and cutting of occlusions
US20120014606A1 (en) * 2010-07-16 2012-01-19 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer-readable recording medium
US20150088475A1 (en) * 2013-09-26 2015-03-26 The Aerospace Corporation Space debris visualization, characterization and volume modeling
US9371099B2 (en) 2004-11-03 2016-06-21 The Wilfred J. and Louisette G. Lagassey Irrevocable Trust Modular intelligent transportation system
US9478033B1 (en) 2010-08-02 2016-10-25 Red Giant Software Particle-based tracking of objects within images
US20210099706A1 (en) * 2012-05-14 2021-04-01 V-Nova International Limited Processing of motion information in multidimensional signals through motion zones and auxiliary information through auxiliary zones
US11321904B2 (en) 2019-08-30 2022-05-03 Maxon Computer Gmbh Methods and systems for context passing between nodes in three-dimensional modeling
US11373369B2 (en) 2020-09-02 2022-06-28 Maxon Computer Gmbh Systems and methods for extraction of mesh geometry from straight skeleton for beveled shapes
US11714928B2 (en) 2020-02-27 2023-08-01 Maxon Computer Gmbh Systems and methods for a self-adjusting node workspace

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7125540B1 (en) * 2000-06-06 2006-10-24 Battelle Memorial Institute Microsystem process networks

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5719629A (en) * 1995-12-27 1998-02-17 Samsung Electronics Co., Ltd. Motion picture encoding method and apparatus thereof

Non-Patent Citations (19)

* Cited by examiner, † Cited by third party
Title
A. M. Tekalp, "Digital Video Processing," Prentice Hall, 1995. *
B. Lucas and T. Kanade, "An iterative registration technique with an application to stereo vision," Proc. DARPA Image Understanding Workshop, pp. 121-130, 1981. *
C. Toklu, A. M. Tekalp, and A. T. Erdem, "2-D Triangular mesh-based mosaicking for object tracking in the presence of occlusion," Visual Communication and Image Processing '97, San Jose, CA, Feb. 1997. *
C. Toklu, A. T. Erdem, and A. M. Tekalp, "2-D Mesh-based synthetic transfiguration of an object with occlusion," Proc. ICASSP '97, Munich, Germany, Apr. 1997. *
C. Toklu, A. T. Erdem, M. I. Sezan and A. M. Tekalp, "Tracking motion and intensity variations using hierarchical 2-D mesh modeling," Graphical Models and Image Processing, vol. 58, No. 6, pp. 553-573, Nov. 1996. *
D. Hearn and M. P. Baker, "Computer Graphics," second edition, Prentice Hall, 1997. *
G. J. Sullivan and R. L. Baker, "Motion compensation for video compression using control grid interpolation," Proc. ICASSP '91, vol. 4, pp. 2713-2716, May 1991. *
J. Nieweglowski, T. G. Campbell and P. Haavisto, "A novel video coding scheme based on temporal prediction using digital image warping," IEEE Transactions on Consumer Electronics, vol. 39, No. 3, pp. 141-150, Aug. 1993. *
J. R. Shewchuk, "Triangle: Engineering a 2D quality mesh generator and Delaunay triangulator," First Workshop on Applied Computational Geometry, pp. 124-133, ACM, Philadelphia, May 1996. *
J. Y. A. Wang and E. H. Adelson, "Representing moving images with layers," IEEE Transactions on Image Processing, vol. 3, No. 5, pp. 625-638, Sep. 1994. *
K. Wall and P. E. Danielsson, "A fast sequential method for polygonal approximation of digitized curves," Comp. Graphics, Vision and Im. Processing, vol. 28, pp. 220-227, 1984. *
L. Chiariglione, "MPEG and multimedia communications," IEEE Trans. on Circ. and Syst. for Video Technology, vol. 7, No. 1, pp. 5-18, Feb. 1997. *
P. J. L. van Beek and A. M. Tekalp, "Object-based video coding using forward tracking 2-D mesh layers," Visual Communications and Image Processing '97, San Jose, CA, Feb. 1997. *
T. Sikora, "The MPEG-4 Video Standard Verification Model," IEEE Trans. on Circ. and Syst. for Video Technology, vol. 7, No. 1, pp. 19-31, Feb. 1997. *
Y. Altunbasak and A. M. Tekalp, "Occlusion-adaptive 2-D mesh tracking," Proc. ICASSP '96, Atlanta, GA, May 1996. *
Y. Altunbasak and A. M. Tekalp, "Very-low bitrate video coding using object-based mesh design and tracking," Proc. SPIE/IS&T Electronic Imaging, Science and Technology, San Jose, CA, Feb. 1996. *
Y. Altunbasak, A. M. Tekalp and G. Bozdagi, "Two-dimensional object based coding using a content-based mesh and affine motion parameterization," IEEE Int. Conference on Image Processing, Washington DC, Oct. 1995. *
Y. Nakaya and H. Harashima, "Motion compensation based on spatial transformations," IEEE Trans. on Circuits and Systems for Video Technology, vol. 4, No. 3, pp. 339-356, Jun. 1994. *
Y. Wang and O. Lee, "Active mesh--A feature seeking and tracking image sequence representation scheme," IEEE Transactions on Image Processing, vol. 3, No. 5, pp. 610-624, Sep. 1994. *

Cited By (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6038258A (en) * 1996-05-29 2000-03-14 Samsung Electronics Co., Ltd. Encoding and decoding system of motion image containing arbitrary object
US6236680B1 (en) 1996-05-29 2001-05-22 Samsung Electronics Co., Ltd. Encoding and decoding system of motion image containing arbitrary object
US6744817B2 (en) * 1996-05-29 2004-06-01 Samsung Electronics Co., Ltd. Motion predictive arbitrary visual object encoding and decoding system
US6148026A (en) * 1997-01-08 2000-11-14 At&T Corp. Mesh node coding to enable object based functionalities within a motion compensated transform video coder
US6339618B1 (en) * 1997-01-08 2002-01-15 At&T Corp. Mesh node motion coding to enable object based functionalities within a motion compensated transform video coder
USRE38564E1 (en) 1997-03-07 2004-08-10 General Instrument Corporation Motion estimation and compensation of video object planes for interlaced digital video
US6026195A (en) * 1997-03-07 2000-02-15 General Instrument Corporation Motion estimation and compensation of video object planes for interlaced digital video
US6496601B1 (en) * 1997-06-23 2002-12-17 Viewpoint Corp. System and method for asynchronous, adaptive moving picture compression, and decompression
US6370196B1 (en) * 1997-07-28 2002-04-09 Idt International Digital Technologies Deutschland Gmbh Method and apparatus for multiresolution object-oriented motion estimation
US6192156B1 (en) * 1998-04-03 2001-02-20 Synapix, Inc. Feature tracking using a dense feature array
US6271861B1 (en) * 1998-04-07 2001-08-07 Adobe Systems Incorporated Smooth shading of an object
US6788802B2 (en) * 1998-05-21 2004-09-07 Sanyo Electric Co., Ltd. Optical flow estimation method and image synthesis method
US6456731B1 (en) * 1998-05-21 2002-09-24 Sanyo Electric Co., Ltd. Optical flow estimation method and image synthesis method
US20040133647A1 (en) * 1998-12-23 2004-07-08 Canon Kabushiki Kaisha Method and system for conveying video messages
US6389072B1 (en) * 1998-12-23 2002-05-14 U.S. Philips Corp. Motion analysis based buffer regulation scheme
US6330281B1 (en) * 1999-08-06 2001-12-11 Richfx Ltd. Model-based view extrapolation for interactive virtual reality systems
WO2001041451A1 (en) * 1999-11-29 2001-06-07 Sony Corporation Video/audio signal processing method and video/audio signal processing apparatus
CN100336390C (en) * 1999-11-29 2007-09-05 索尼公司 Step decomposition method and apparatus for extracting synthetic video selection for browsing
US20080043848A1 (en) * 1999-11-29 2008-02-21 Kuhn Peter M Video/audio signal processing method and video/audio signal processing apparatus
US7356082B1 (en) 1999-11-29 2008-04-08 Sony Corporation Video/audio signal processing method and video-audio signal processing apparatus
CN100387061C (en) * 1999-11-29 2008-05-07 索尼公司 Video/audio signal processing method and video/audio signal processing apparatus
US20040252886A1 (en) * 1999-12-21 2004-12-16 Microsoft Corporation Automatic video object extraction
US6785329B1 (en) * 1999-12-21 2004-08-31 Microsoft Corporation Automatic video object extraction
US7453939B2 (en) * 1999-12-21 2008-11-18 Microsoft Corporation Automatic video object extraction
US6917649B2 (en) * 2000-03-07 2005-07-12 Koninklijke Philips Electronics N.V. Method of encoding video signals
US20020009140A1 (en) * 2000-03-07 2002-01-24 Yves Ramanzin Method of encoding video signals
US7502413B2 (en) * 2000-07-13 2009-03-10 France Telecom Motion estimator for coding and decoding image sequences
US20040047415A1 (en) * 2000-07-13 2004-03-11 Guillaume Robert Motion estimator for coding and decoding image sequences
FR2811791A1 (en) * 2000-07-13 2002-01-18 France Telecom MOTION ESTIMATOR FOR ENCODING AND DECODING IMAGE SEQUENCES
WO2002007099A1 (en) * 2000-07-13 2002-01-24 France Telecom Motion estimator for coding and decoding image sequences
US20040167554A1 (en) * 2000-12-20 2004-08-26 Fox Hollow Technologies, Inc. Methods and devices for reentering a true lumen from a subintimal space
US7142600B1 (en) 2003-01-11 2006-11-28 Neomagic Corp. Occlusion/disocclusion detection using K-means clustering near object boundary with comparison of average motion of clusters to object and background motions
US7095786B1 (en) 2003-01-11 2006-08-22 Neo Magic Corp. Object tracking using adaptive block-size matching along object boundary and frame-skipping when object motion is low
USRE42790E1 (en) 2003-01-11 2011-10-04 Neomagic Corporation Occlusion/disocclusion detection using K-means clustering near object boundary with comparison of average motion of clusters to object and background motions
US20070053431A1 (en) * 2003-03-20 2007-03-08 France Telecom Methods and devices for encoding and decoding a sequence of images by means of motion/texture decomposition and wavelet encoding
US20040223547A1 (en) * 2003-05-07 2004-11-11 Sharp Laboratories Of America, Inc. System and method for MPEG-4 random access broadcast capability
US20040240746A1 (en) * 2003-05-30 2004-12-02 Aliaga Daniel G. Method and apparatus for compressing and decompressing images captured from viewpoints throughout N-dimensioanl space
US7313285B2 (en) * 2003-05-30 2007-12-25 Lucent Technologies Inc. Method and apparatus for compressing and decompressing images captured from viewpoints throughout N-dimensional space
US20050183303A1 (en) * 2003-08-15 2005-08-25 Simonsen Peter A. Method and an arrangement for advertising or promoting
US20050249426A1 (en) * 2004-05-07 2005-11-10 University Technologies International Inc. Mesh based frame processing and applications
US7616782B2 (en) 2004-05-07 2009-11-10 Intelliview Technologies Inc. Mesh based frame processing and applications
US20050259881A1 (en) * 2004-05-20 2005-11-24 Goss Michael E Geometry and view assisted transmission of graphics image streams
US7529418B2 (en) * 2004-05-20 2009-05-05 Hewlett-Packard Development Company, L.P. Geometry and view assisted transmission of graphics image streams
US9371099B2 (en) 2004-11-03 2016-06-21 The Wilfred J. and Louisette G. Lagassey Irrevocable Trust Modular intelligent transportation system
US10979959B2 (en) 2004-11-03 2021-04-13 The Wilfred J. and Louisette G. Lagassey Irrevocable Trust Modular intelligent transportation system
EP1925159A2 (en) * 2005-09-16 2008-05-28 Sony Electronics, Inc. Adaptive area of influence filter for moving object boundaries
EP1925159A4 (en) * 2005-09-16 2009-09-23 Sony Electronics Inc Adaptive area of influence filter for moving object boundaries
US20080031325A1 (en) * 2006-08-03 2008-02-07 Yingyong Qi Mesh-based video compression with domain transformation
US20080122926A1 (en) * 2006-08-14 2008-05-29 Fuji Xerox Co., Ltd. System and method for process segmentation using motion detection
US20100189172A1 (en) * 2007-06-25 2010-07-29 France Telecom Methods and devices for coding and decoding an image sequence represented with the aid of motion tubes, corresponding computer program products and signal
US8588292B2 (en) * 2007-06-25 2013-11-19 France Telecom Methods and devices for coding and decoding an image sequence represented with the aid of motion tubes, corresponding computer program products and signal
US20100284627A1 (en) * 2009-05-08 2010-11-11 Mediatek Inc. Apparatus and methods for motion vector correction
US8254439B2 (en) * 2009-05-08 2012-08-28 Mediatek Inc. Apparatus and methods for motion vector correction
TWI410895B (en) * 2009-05-08 2013-10-01 Mediatek Inc Apparatus and methods for motion vector correction
US20110054503A1 (en) * 2009-09-02 2011-03-03 Isa Rizk Systems, methods and devices for ablation, crossing, and cutting of occlusions
US8842918B2 (en) 2010-07-16 2014-09-23 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer-readable recording medium
US8594433B2 (en) * 2010-07-16 2013-11-26 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer-readable recording medium
US20120014606A1 (en) * 2010-07-16 2012-01-19 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer-readable recording medium
US9478033B1 (en) 2010-08-02 2016-10-25 Red Giant Software Particle-based tracking of objects within images
US20210099706A1 (en) * 2012-05-14 2021-04-01 V-Nova International Limited Processing of motion information in multidimensional signals through motion zones and auxiliary information through auxiliary zones
US11595653B2 (en) * 2012-05-14 2023-02-28 V-Nova International Limited Processing of motion information in multidimensional signals through motion zones and auxiliary information through auxiliary zones
US20150088475A1 (en) * 2013-09-26 2015-03-26 The Aerospace Corporation Space debris visualization, characterization and volume modeling
US11321904B2 (en) 2019-08-30 2022-05-03 Maxon Computer Gmbh Methods and systems for context passing between nodes in three-dimensional modeling
US11714928B2 (en) 2020-02-27 2023-08-01 Maxon Computer Gmbh Systems and methods for a self-adjusting node workspace
US11373369B2 (en) 2020-09-02 2022-06-28 Maxon Computer Gmbh Systems and methods for extraction of mesh geometry from straight skeleton for beveled shapes

Also Published As

Publication number Publication date
WO1999001986A1 (en) 1999-01-14

Similar Documents

Publication Publication Date Title
US5936671A (en) Object-based video processing using forward-tracking 2-D mesh layers
Tekalp et al. Two-dimensional mesh-based visual-object representation for interactive synthetic/natural digital video
Szeliski et al. Spline-based image registration
Agarwala et al. Panoramic video textures
US6047088A (en) 2D mesh geometry and motion vector compression
Chang et al. Simultaneous motion estimation and segmentation
Stiller Object-based estimation of dense motion fields
CA2205177C (en) Mosaic based image processing system and method for processing images
US7822231B2 (en) Optical flow estimation method
EP0849950A2 (en) Dynamic sprites for encoding video data
US20100086050A1 (en) Mesh based frame processing and applications
EP1042736A1 (en) Sprite-based video coding system
Malassiotis et al. Model-based joint motion and structure estimation from stereo images
Tzovaras et al. 3D object articulation and motion estimation in model-based stereoscopic videoconference image sequence analysis and coding
Malassiotis et al. Object-based coding of stereo image sequences using three-dimensional models
van Beek et al. Object-based video coding using forward-tracking 2D mesh layers
Giaccone et al. Segmentation of Global Motion using Temporal Probabilistic Classification.
Al-Regib et al. Hierarchical motion estimation with content-based meshes
Malassiotis et al. Coding of video-conference stereo image sequences using 3D models
Benois-Pineau et al. A new method for region-based depth ordering in a video sequence: application to frame interpolation
Morikawa et al. 3D structure extraction coding of image sequences
Steinbach et al. Motion-based analysis and segmentation of image sequences using 3-D scene models
Toklu et al. 2-D mesh-based tracking of deformable objects with occlusion
EP1042918A1 (en) Static image generation method and device
Habuka et al. Image interpolation using enhanced multiresolution critical-point filters

Legal Events

Date Code Title Description
AS Assignment

Owner name: SHARP LABORATORIES OF AMERICA, INC., WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VAN BEEK, PETRUS J.L.;TEKALP, AHMET M.;REEL/FRAME:008986/0603

Effective date: 19980127

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: SHARP KABUSHIKI KAISHA, INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SHARP LABORATORIES OF AMERICA, INCORPORATED;REEL/FRAME:010719/0789

Effective date: 20000327

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: RAKUTEN, INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SHARP KABUSHIKI KAISHA;REEL/FRAME:031179/0760

Effective date: 20130823