WO2013029232A1 - Multi-resolution 3D textured mesh coding - Google Patents


Info

Publication number
WO2013029232A1
Authority
WO
WIPO (PCT)
Prior art keywords
mesh
texture
simplified
texture image
encoded
Prior art date
Application number
PCT/CN2011/079095
Other languages
French (fr)
Inventor
Jiang Tian
Tao Luo
Kangying Cai
Wenfei JIANG
Original Assignee
Technicolor (China) Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Technicolor (China) Technology Co., Ltd. filed Critical Technicolor (China) Technology Co., Ltd.
Priority to PCT/CN2011/079095 priority Critical patent/WO2013029232A1/en
Publication of WO2013029232A1 publication Critical patent/WO2013029232A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/001Model-based coding, e.g. wire frame
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/33Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/36Level of detail

Definitions

Regarding step 774 of FIG. 7, the input mesh is simplified to define a progressive mesh. One objective during the simplification is to minimize texture deviation, which is measured as the geometric error according to parametric correspondence. Specifically, the texture deviation between a simplified mesh M^i and the original mesh M^n at a point p^i ∈ M^i is defined as ||p^i − p^n||, where p^n is the point on M^n with the same parametric location (as p^i) in the texture domain.
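As an illustration, the following minimal Python sketch (not the patent's implementation) estimates this deviation by sampling shared parametric locations; the callables surface_point_i and surface_point_n, which map a texture-domain location to a 3D point on M^i and M^n respectively, are hypothetical helpers assumed to exist.

    import numpy as np

    def texture_deviation(surface_point_i, surface_point_n, uv_samples):
        # Approximate max ||p^i - p^n|| over sampled parametric locations (u, v),
        # where both points share the same location in the texture domain.
        worst = 0.0
        for uv in uv_samples:
            p_i = np.asarray(surface_point_i(uv))  # point on simplified mesh M^i
            p_n = np.asarray(surface_point_n(uv))  # corresponding point on M^n
            worst = max(worst, float(np.linalg.norm(p_i - p_n)))
        return worst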
Regarding step 775, we optimize the parametrization over the entire progressive mesh, minimizing stretch and deviation at all levels of detail (after the progressive mesh simplification sequence has been determined as per step 774). The objective function is a weighted sum of the texture stretch and the texture deviation over all meshes M^0, ..., M^n, for example of the form
E = Σ_{i=0..n} weight(i) · ( λ · stretch(M^i) + μ · deviation(M^i) )
where λ, μ are relative weights between stretch and deviation, and weight(i) is the relative weight assigned to each LOD mesh (i.e., mesh M^i) in the progressive mesh sequence.
FIG. 8 shows an exemplary texture atlas (image) 800 to which the present principles may be applied, in accordance with an embodiment of the present principles. The texture atlas 800 includes various texture charts. As shown in FIG. 8, significant similarity exists among the charts in the texture atlas 800, where chart instances sharing the same pattern tend to have similar textures. Regarding step 776, the aim of the packing process includes minimizing the atlas resolution and putting charts of the same category together as much as possible; a sketch of such a packer is given below. Some optimization operations may be necessary in order to make the texture atlas more "compact". After execution of step 776, we have optimized texture coordinates for each vertex.
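A minimal sketch of such a packing step, assuming each chart has already been reduced to an axis-aligned bounding box with a category label; the simple shelf strategy below keeps same-category charts adjacent, whereas a production packer would optimize the atlas resolution much more aggressively.

    def pack_charts(charts):
        # charts: list of (width, height, category) tuples for the 2D polygons.
        # Sorting by (category, -height) groups charts of the same category and
        # keeps shelves tight; returns chart positions plus the atlas size.
        order = sorted(range(len(charts)), key=lambda i: (charts[i][2], -charts[i][1]))
        atlas_w = 4 * max(w for w, h, c in charts)   # crude fixed atlas width
        x = y = shelf_h = 0
        placements = {}
        for i in order:
            w, h, _ = charts[i]
            if x + w > atlas_w:                      # shelf full: open a new one
                x, y, shelf_h = 0, y + shelf_h, 0
            placements[i] = (x, y)
            x += w
            shelf_h = max(shelf_h, h)
        return placements, atlas_w, y + shelf_h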
In step 777, we perform texture image compression. For example, a wavelet-based image encoder, a JPEG2000 encoder, or an H.264/AVC encoder can be used to implement the present principles.
FIG. 9 shows an exemplary bitstream format 900, in accordance with an embodiment of the present principles. The format 900 is for illustrative purposes and, thus, other bitstream formats may also be used, while maintaining the spirit of the present principles.
For the first level (level 1), the encoded data corresponding to the vertices' positions, texture coordinates, and texture image are represented as "position level 1" in block 910, "texture coordinates level 1" in block 920, and "image level 1" in block 930, respectively. For the next level (level 2), only the differences between level 2 and level 1 are encoded for the vertices' positions and texture coordinates, in "position level 2" in block 940 and in "texture coordinates level 2" in block 950, respectively. The texture image corresponding to level 2 is encoded into "image level 2" in block 960. Note that the texture image is progressively encoded: to obtain the texture image at level 2, the encoded data for both image levels 1 and 2 are needed. Similarly, to obtain the vertices' positions and texture coordinates at level 2, those at level 1 are also required. For each subsequent level, additional data for the vertices' positions, texture coordinates, and texture image are used to represent a refinement over the previous level. With all the levels received, the decoder can render the 3D object with a fine mesh and fine texture; that is, the highest quality can be rendered at the decoder. In such a format, the vertices' positions, texture coordinates, and texture images all allow for progressive transmission. This flexibility in progressive transmission is important to certain interactive applications.
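To make the layering concrete, here is a hedged sketch of how such a bitstream could be framed; the length-prefixed layout and the field order (positions, texture coordinates, image) mirror blocks 910-960 but are illustrative only, not the patent's actual syntax.

    import struct

    def write_level(out, position_bytes, texcoord_bytes, image_bytes):
        # Append one quality level: for level 1 the payloads are the base data;
        # for level k > 1, positions/texture coordinates carry only differences
        # from level k-1 and the image payload carries an enhancement layer.
        for payload in (position_bytes, texcoord_bytes, image_bytes):
            out.write(struct.pack("<I", len(payload)))  # 4-byte size prefix
            out.write(payload)

    def read_level(inp):
        # Return the (positions, texcoords, image) payloads of one level.
        parts = []
        for _ in range(3):
            (size,) = struct.unpack("<I", inp.read(4))
            parts.append(inp.read(size))
        return tuple(parts)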
FIG. 10 shows an exemplary method 1000 for providing multi-resolution 3D textured mesh coding to one or more users, in accordance with an embodiment of the present principles. The method 1000 pertains to the receiving/decoding side, as a complement to method 700 described above. First, a bitstream is received. At step 1020, the corresponding vertices' positions, texture coordinates, and texture image conveyed within the bitstream are decoded. For the lowest quality level, the decoding is based on the bitstream itself; when the bitstream includes data for a higher quality level, the vertices' positions, texture coordinates, and texture image are decoded based on the bitstream for the current quality level and the previously decoded data for the lower levels. At step 1030, a 3D object is rendered using the vertices' positions, texture coordinates, and the texture image. The user is afforded the opportunity to request more data to refine the 3D object; if the user so requests, the decoder receives additional data corresponding to a higher level of quality and the method returns to step 1020. Accordingly, the additionally received data will be decoded at step 1020 and used for refining the 3D object at step 1030.
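The following Python sketch captures the shape of this loop; decode_level and render are hypothetical callables standing in for the actual decoder and rendering device.

    def progressive_render(level_chunks, decode_level, render):
        # Each received chunk refines the previous state: the base level is
        # decoded on its own, higher levels are decoded against the previously
        # decoded lower-level data, and the 3D object is re-rendered each time.
        state = None
        for chunk in level_chunks:            # more chunks arrive on user request
            state = decode_level(chunk, state)
            render(state)
        return state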
FIG. 11 shows an exemplary environment 1100 to which the present principles may be applied, in accordance with an embodiment of the present principles. The environment involves a server 1110, one or more networks (e.g., the Internet 1120), and one or more user devices 1131 and 1132. The server 1110 receives a representation 1105 of a 3D object in the form of a multi-resolution mesh with texture, and implements the present principles as represented, for example, in FIG. 7, to perform geometry streaming and texture streaming over the Internet 1120. The user devices 1131 and 1132 receive the geometry streams and texture streams and decode them, for example as described with respect to FIG. 10, to provide 3D representations 1185 (such as 670, 680, and 690 of FIG. 6) having various associated qualities. To that end, each of the user devices 1131 and 1132 includes a respective decoder 1150. Such a decoder may be implemented in hardware, software, or a combination thereof. The decoder can include a processor and associated memory, or may use the processor and associated memory of the device within which it is found. Moreover, each of the user devices 1131 and 1132 includes a rendering device (e.g., a display) 1160.
The following provides an alternative approach to partitioning to that described above with respect to step 773 of FIG. 7; it provides an effective way to reduce artifacts for texture mapping of multi-resolution meshes. It is to be appreciated that, given the teachings of the present principles provided herein, the following texture image pattern aware partitioning approach can be applied to various scenarios regarding texture mapping, unfolding of meshes, and so forth, as readily appreciated by one of ordinary skill in the art, while maintaining the spirit of the present principles.
FIG. 12 shows an example of a boundary problem 1200 to which the present principles may be applied, in accordance with an embodiment of the present principles. Texture maps for the two charts 1221 and 1222 may lead to artifacts because the texture image is mapped onto different geometries. Using texture images for the different geometries poses a difficulty with respect to mapping: if two neighboring points are mapped onto distant boundaries of a texture image, the texture mapping function for these two points may result in a blurred effect.
FIG. 13 shows a segmentation 1310 of an image 1300, in accordance with an embodiment of the present principles. The image 1300 is partitioned into different sub-regions of homogeneity. The boundary of a "meaningful texture image patch" usually corresponds to a high image gradient zone. Other and/or more elaborate criteria capable of being used for segmentation include, but are not limited to, a model-based approach (i.e., considering one or more model parameters), a histogram-based approach, and so forth.
For texture mapping, the model to be textured is decomposed into charts homeomorphic to discs, each chart is parametrized, and the unfolded charts are packed in a texture space. Texture artifacts may occur on the chart boundaries. To mitigate this, the present principles take into account the texture image pattern when partitioning the mesh into charts. The segmentation algorithm is designed in such a way as to avoid chart boundaries in low image gradient zones; in other words, it is suitable to generate large charts with most of their boundaries in "color-sharp" zones. The partition operation decomposes the model into a set of charts and is intended to meet the following requirements as much as possible and/or practical, given the intended application and available resources: (1) chart boundaries should be positioned in such a way that most of the discontinuities between the charts will be located in zones where they will not cause texture artifacts; and (2) charts must be homeomorphic to discs, and it must be possible to parametrize them without introducing too much deformation.
The texture image pattern aware partitioning approach can be considered to involve two stages. The first stage pertains to feature detection, which finds boundaries corresponding to high image gradient zones of the model. The second stage pertains to chart growing, which makes the charts meet at these feature curves.
The feature detection phase can be outlined as follows: compute a color-sharpness criterion on the edges, choose a threshold so that a certain proportion of the edges is filtered out, and grow feature curves from the remaining edges. The color-sharpness criterion is based on the Sobel operator, which measures the change of color of the texture image. We would like to generate charts with most of their boundaries in "color-sharp" zones, which means that the larger the sharpness value of a given candidate boundary, the more suitable that boundary is for separating charts.
Algorithm 1, directed to feature detection, is implemented by a procedure obtain_feature_curve(edge start). A depth-first search is utilized to find strings of edges starting with a current edge h'; there may be several such strings, and the string S for which sharpness(S) = Σ sharpness(e) over the edges e of S is maximum is selected. The procedure is detailed with respect to FIG. 15 below.
FIG. 14 shows a flowchart of a method 1400 implementing the feature detection step for texture image pattern aware partitioning, in accordance with an embodiment of the present principles. First, a color-sharpness criterion is computed on the edges; in an embodiment, we use the image gradient for the color-sharpness criterion. Next, a threshold is chosen such that a certain proportion of the edges is filtered out. At step 1430, feature curves are grown from the remaining edges. Step 1430 involves applying Algorithm 1 to each of the remaining edges to grow the feature curves therefrom. The output of Algorithm 1 is to serve as the "meeting boundary" of different charts. These charts are obtained through growing from a set of seeds, as described in further detail herein below.
FIG. 15 further describes step 1430 of FIG. 14, in accordance with an embodiment of the present principles. First, edge h' is set equal to start (i.e., a starting edge for detecting features, from which the feature curves are generated). At step 1510, a depth-first search is used to find the string S of edges starting with edge h' which satisfies each of the following conditions: two consecutive edges of S share a vertex; the length of S is not larger than threshold1; sharpness(S) = Σ sharpness(e) over the edges e of S is maximum; no edge of S goes backward (relative to edge h'); and no edge of S is tagged as a feature neighbor. Then, edge h' is set equal to the 2nd item of the string S, and edge h' is appended to detected_feature. At step 1525, it is determined whether or not sharpness(S) is greater than the threshold. If so, the method returns to step 1510. Otherwise, the method proceeds to step 1530. At step 1530, it is determined whether or not the length of detected_feature is greater than min_feature_length. If so, then the method proceeds to step 1535. Otherwise, the method 1500 is terminated. At step 1535, the element detected_feature is tagged as a (detected) feature, and the edges in the neighborhood of the (detected) feature are tagged as feature neighbors. A sketch of this procedure follows.
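A simplified, runnable Python sketch of this growth procedure, for illustration only. It assumes hypothetical accessors: neighbors(e) yields the edges sharing a vertex with edge e, and sharpness maps each edge to its color-sharpness value; for brevity it omits the "no backward edge" test and the neighborhood tagging of step 1535.

    def grow_feature_curve(start, neighbors, sharpness,
                           max_string_len=5, sharpness_threshold=1.0):
        # Grow a feature curve from a starting edge, mirroring FIG. 15.
        detected = []

        def best_string(h):
            # Depth-first search for the string S of edges starting with h that
            # maximizes sharpness(S) = sum of sharpness(e), within the length bound.
            best, stack = [h], [[h]]
            while stack:
                s = stack.pop()
                if sum(sharpness[e] for e in s) > sum(sharpness[e] for e in best):
                    best = s
                if len(s) < max_string_len:
                    for e in neighbors(s[-1]):
                        if e not in s and e not in detected:
                            stack.append(s + [e])
            return best

        h = start
        while True:
            s = best_string(h)
            if len(s) < 2 or sum(sharpness[e] for e in s) <= sharpness_threshold:
                break                  # sharpness(S) fell below the threshold
            h = s[1]                   # advance to the 2nd edge of the string
            detected.append(h)         # and append it to the feature curve
        return detected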
After the features are detected, the charts can be created. Our approach is a greedy algorithm, expanding all the charts simultaneously from a set of seeds. A front is propagated from the borders and the feature curves detected by the previous algorithm (Algorithm 1) to compute a distance_to_features function at each facet. The seeds are then found to be the local maxima of this distance_to_features function. Charts are merged if they meet at a small distance from their seed. Algorithm 2, directed to chart growing, uses a priority_queue<edge> Heap sorted by dist(facet(edge)), where the distance_to_features value is stored in each facet F and denoted by dist(F), and where max_dist(C) denotes the maximum distance to features over all the facets of chart C.
FIG. 16 shows a flowchart of a method 1600 implementing the chart growing step (Algorithm 2) for texture image pattern aware partitioning, in accordance with an embodiment of the present principles. First, a front is propagated from the borders and the feature curves (detected by method 1500, pertaining to Algorithm 1) to compute a distance_to_features function at each facet. Next, the seeds are found as the local maxima of the distance_to_features function. The charts are then grown from the seeds, and two charts are merged if they meet at a small distance (e.g., below a given threshold distance) from their seeds. A sketch of this greedy procedure is given below.
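A hedged Python sketch of this greedy conquest follows; adjacent(f) (the facets sharing an edge with facet f) and the precomputed dist mapping are assumed inputs, and the merge rule below is one plausible reading of "a small distance from their seed".

    import heapq

    def grow_charts(facets, adjacent, dist, eps):
        chart, max_dist, heap = {}, {}, []
        # Seeds: local maxima of the distance_to_features function.
        for f in facets:
            if all(dist[f] >= dist[g] for g in adjacent(f)):
                chart[f] = f                    # new chart, identified by its seed
                max_dist[f] = dist[f]
                for g in adjacent(f):
                    heapq.heappush(heap, (-dist[g], g, f))
        while heap:
            _, f, c = heapq.heappop(heap)       # facet with the largest distance
            if f not in chart:
                chart[f] = c                    # conquer the facet for chart c
                for g in adjacent(f):
                    heapq.heappush(heap, (-dist[g], g, c))
            elif chart[f] != c and max_dist[c] - dist[f] < eps \
                    and max_dist[chart[f]] - dist[f] < eps:
                old = chart[f]                  # both charts meet close to their
                max_dist[c] = max(max_dist[c], max_dist[old])  # seeds: merge them
                for k in chart:
                    if chart[k] == old:
                        chart[k] = c
        return chart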
The teachings of the present invention may be implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units ("CPU"), a random access memory ("RAM"), and input/output ("I/O") interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform, such as an additional data storage unit and a printing unit.

Abstract

There is provided a method for encoding a three-dimensional model represented by a mesh of two-dimensional polygons. The method includes simplifying the mesh to obtain a simplified mesh. The method further includes generating a texture image. The texture image represents textures for the mesh and the simplified mesh. The method also includes encoding the texture image to form a base layer and an enhancement layer. The base layer corresponds to a coarse representation of the texture image and the enhancement layer provides a refinement of the base layer.

Description

MULTI-RESOLUTION 3D TEXTURED MESH CODING
TECHNICAL FIELD
Implementations are described that relate to 3D models. Various particular implementations relate to transmitting 3D models with texture in multi-resolution.
BACKGROUND
3D models are used to represent 3D objects. A mesh is a collection of vertices, edges and faces that defines the shape of an object in 3D computer graphics. The faces usually include triangles, quadrilaterals or other simple convex polygons. Typically, connectivity, geometry, and property data are used to represent a 3D mesh. Connectivity data describes the adjacency relationship between vertices. Geometry data specifies vertex locations. Property data specifies several attributes such as the normal vector, material reflectance, and texture coordinates. A 3D model requires a significant amount of data to represent its shape information and texture images. Such models place large strains on computation, storage,
transmission, and display resources.
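Purely as an illustration of this data layout (field names are illustrative, not from the patent), a minimal mesh container in Python might look as follows:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class TexturedMesh:
        # Geometry: vertex locations.
        positions: List[Tuple[float, float, float]] = field(default_factory=list)
        # Connectivity: faces as triples of vertex indices.
        faces: List[Tuple[int, int, int]] = field(default_factory=list)
        # Properties: per-vertex normal vectors and texture coordinates.
        normals: List[Tuple[float, float, float]] = field(default_factory=list)
        texcoords: List[Tuple[float, float]] = field(default_factory=list)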
An important problem in 3D modeling is to allow a multi-resolution
transmission of the object. That is, the 3D object can be received at different levels of qualities given its network and computational constraints. For example, a user with a low bandwidth may choose to only receive a coarse representation of the object while a user with a high bandwidth may choose to receive a fine
representation of the object. Both multi-resolution and progressive coding share a common goal of further improving rendering performance. The difference is that multi-resolution defines several versions of a model at different levels of detail, whereas progressive coding transmits the approximation of a model in a more "continuous" way.
Most existing multi-resolution compression and transmission techniques aim to encode the geometry of a 3D model efficiently. Consequently, these approaches study the progressive coding of the geometry only, without considering the
progressive coding of texture images or color materials.
A first prior art approach provides a multi-resolution encoding method for the texture associated with a multi-resolution mesh. Different texture images are generated for different levels of detail to obtain successive refinements. FIGs. 1A-D respectively show texture images 110, 120, 130, 140 having successive refinements of texture, in accordance with the first prior art approach. The texture image in FIG. 1A has a refinement associated with the highest level of detail, and the texture image in FIG. 1D has a refinement associated with the lowest level of detail. The successive refinements do not correspond to finer or more detailed versions of the same element, but to different texture images associated with different levels of detail.
The first prior art approach uses a common mesh unwrapping system agreed upon by both the encoder and the decoder. A new texture atlas is generated from the mesh unwrapping, following a set of fixed and predefined rules known to both the encoder and the decoder. The shape of the new texture atlas determines its "useful versus useless" parts. Due to the common unwrapping system, the shape of the new texture atlas and the references to the texture coordinates need not be transmitted to the decoder since they can be automatically and losslessly
reconstructed in the decoder. Information saving is therefore obtained without the need to transmit the texture coordinates.
FIG. 2 shows a texture atlas 200 generated from a mesh unwrapping, in accordance with the first prior art approach. This texture atlas 200 is composed of several images carefully laid out inside a rectangle.
A second prior art approach provides a method to construct a progressive mesh such that all meshes therein share a common texture parametrization. To create a single texture image that can be used to texture all meshes in a progressive mesh sequence, this method considers two quality metrics simultaneously. The second prior art approach minimizes texture stretch (small texture distances mapped onto large surface distances) to balance sampling rates over all locations and directions on the surface. By minimizing the largest texture stretch across all domain points, the second prior art approach creates a balanced parametrization where no domain direction is too stretched and thus undersamples its corresponding mapped 3D direction. The second prior art approach also minimizes texture deviation
("slippage" error based on parametric correspondence) to obtain accurate textured mesh approximations. To measure texture deviation, the second prior art approach uses a heuristic of measuring the incremental texture deviation between two consecutive meshes. FIG.3 shows an existing surface signal 310 sampled to obtain a texture image 320, in accordance with the second prior art approach. As seen, the intent thereof is to create a parametrization for representing an existing signal already associated with the mesh surface.
FIG. 4 shows an exemplary mapping 450 of a 2D texture domain 410 to a 3D surface 420, in accordance with the second prior art approach. The singular values of the mapping are denoted γ and Γ. To optimize a parametrization's ability to balance frequency content everywhere over the surface in every direction, the second prior art approach defines a "texture stretch" metric on triangle meshes, as described below.
Given a triangle T with 2D texture coordinates p1, p2, p3, where pi = (si, ti), and corresponding 3D coordinates q1, q2, q3, the unique affine mapping S(p) = S(s, t) = q is as follows:
S(p) = ( (p, p2, p3) q1 + (p, p3, p1) q2 + (p, p1, p2) q3 ) / (p1, p2, p3)
where (a, b, c) denotes the area of triangle abc. Since the mapping is affine, its partial derivatives are constant over (s, t) and given by the following:
Ss = ∂S/∂s = ( q1 (t2 − t3) + q2 (t3 − t1) + q3 (t1 − t2) ) / (2A)
St = ∂S/∂t = ( q1 (s3 − s2) + q2 (s1 − s3) + q3 (s2 − s1) ) / (2A)
A = (p1, p2, p3) = ( (s2 − s1)(t3 − t1) − (s3 − s1)(t2 − t1) ) / 2
The larger and smaller singular values of the Jacobian [Ss, St] are given respectively by the following:
Γ = sqrt( ( (a + c) + sqrt( (a − c)² + 4b² ) ) / 2 )    (max singular value)
γ = sqrt( ( (a + c) − sqrt( (a − c)² + 4b² ) ) / 2 )    (min singular value)
where a = Ss · Ss, b = Ss · St, and c = St · St. The singular values Γ and γ represent the largest and smallest lengths obtained when mapping unit-length vectors from the texture domain to the surface, i.e., the largest and smallest local "stretch". Two stretch norms are defined over triangle T as follows:
L2(T) = sqrt( (Γ² + γ²) / 2 )
L∞(T) = Γ
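The metric translates directly into code. The following sketch (an illustration, using numpy) computes the singular values and the L2 stretch norm for a single triangle from the formulas above.

    import numpy as np

    def triangle_stretch(p, q):
        # p: 3x2 array of texture coordinates (s, t); q: 3x3 array of the
        # corresponding 3D positions.  Returns (Gamma, gamma, L2 norm).
        (s1, t1), (s2, t2), (s3, t3) = p
        A = ((s2 - s1) * (t3 - t1) - (s3 - s1) * (t2 - t1)) / 2.0
        Ss = (q[0] * (t2 - t3) + q[1] * (t3 - t1) + q[2] * (t1 - t2)) / (2 * A)
        St = (q[0] * (s3 - s2) + q[1] * (s1 - s3) + q[2] * (s2 - s1)) / (2 * A)
        a, b, c = Ss @ Ss, Ss @ St, St @ St      # entries of J^T J
        root = np.sqrt((a - c) ** 2 + 4 * b ** 2)
        Gamma = np.sqrt((a + c + root) / 2)      # largest local stretch
        gamma = np.sqrt((a + c - root) / 2)      # smallest local stretch
        return Gamma, gamma, np.sqrt((Gamma ** 2 + gamma ** 2) / 2)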
Hence, the method of the second prior art begins by partitioning the mesh into charts using planarity and compactness heuristics. It creates a stretch-minimizing parametrization within each chart, and resizes the charts based on the resulting stretch. Next, the second prior art approach simplifies the mesh while respecting the chart boundaries. The parametrization is re-optimized to reduce both stretch and deviation over the whole progressive mesh sequence. Finally, the charts are packed into a texture atlas.
FIG. 5 shows a partition result 500 for texture mapping progressive meshes obtained by the second prior art approach.
SUMMARY
These and other drawbacks and disadvantages of the prior art are addressed by the present invention, which is directed to multi-resolution 3D textured mesh coding.
According to an aspect of the present principles, there is provided a method for encoding a three-dimensional model represented by a mesh of two-dimensional polygons. The method includes simplifying the mesh to obtain a simplified mesh. The method further includes generating a texture image. The texture image represents textures for the mesh and the simplified mesh. The method also includes encoding the texture image to form a base layer and an enhancement layer. The base layer corresponds to a coarse representation of the texture image and the enhancement layer provides a refinement of the base layer.
According to another aspect of the present principles, there is provided an apparatus for encoding a three-dimensional model represented by a mesh of two-dimensional polygons. The apparatus includes a mesh simplifier for simplifying the mesh to obtain a simplified mesh. The apparatus further includes a texture image generator for generating a texture image. The texture image represents textures for the mesh and the simplified mesh. The apparatus also includes an encoder for encoding the texture image to form a base layer and an enhancement layer. The base layer corresponds to a coarse representation of the texture image and the enhancement layer provides a refinement of the base layer.
According to still another aspect of the present principles, there is provided a method. The method includes decoding a simplified mesh, a mesh, and a coarse representation of a texture map from a bitstream. The coarse representation of the texture map corresponds to both the simplified mesh and the mesh. The method further includes forming a three-dimensional model for the mesh, using the coarse representation of the texture map. The method also includes decoding a refinement of the texture map to form a refined representation of the texture map. The method additionally includes enhancing the three-dimensional model for the mesh, using the refined representation of the texture map.
According to yet another aspect of the present principles, there is provided an apparatus. The apparatus includes a decoder for decoding a simplified mesh, a mesh, and a coarse representation of a texture map from a bitstream. The coarse representation of the texture map corresponds to both the simplified mesh and the mesh. The apparatus further includes a rendering device for forming a three-dimensional model for the mesh using the coarse representation of the texture map. The decoder decodes a refinement of the texture map to form a refined representation of the texture map, and the rendering device enhances the three-dimensional model for the mesh using the refined representation of the texture map.
These and other aspects, features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments, which is to be read in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGs. 1A-D respectively show texture images 110, 120, 130, 140 having successive refinements of texture, in accordance with the first prior art approach;
FIG. 2 shows a texture atlas 200 generated from a mesh unwrapping, in accordance with the first prior art approach;
FIG. 3 shows an existing surface signal 310 sampled to obtain a texture image 320, in accordance with the second prior art approach;
FIG. 4 shows an exemplary mapping of a 2D texture domain 410 to a 3D surface 420, in accordance with the second prior art approach;
FIG. 5 shows a partition result 500 for texture mapping progressive meshes, in accordance with the second prior art approach;
FIG. 6 shows various exemplary 3D representations with different levels of qualities obtained using progressive meshes and progressive coding of the texture image, in accordance with an embodiment of the present principles;
FIG. 7 is a high level block diagram showing both a system and method for multi-resolution 3D textured mesh coding, in accordance with an embodiment of the present principles;
FIG. 8 shows an exemplary texture atlas 800 to which the present principles may be applied, in accordance with an embodiment of the present principles;
FIG. 9 shows an exemplary bitstream format 900, in accordance with an embodiment of the present principles;
FIG. 10 shows an exemplary method 1000 for providing a multi-resolution 3D textured mesh coding to one or more users, in accordance with an embodiment of the present principles;
FIG. 11 shows an exemplary environment 1100 to which the present principles may be applied, in accordance with an embodiment of the present principles;
FIG. 12 shows an example of a boundary problem 1200 to which the present principles may be applied, in accordance with an embodiment of the present principles;
FIG. 13 shows a segmentation 1310 of an image 1300, in accordance with an embodiment of the present principles;
FIG. 14 shows a flowchart of a method 1400 implementing a feature detection step for texture image pattern aware partitioning, in accordance with an embodiment of the present principles;
FIG. 15 further describes step 1430 of FIG. 14, in accordance with an embodiment of the present principles; and
FIG. 16 shows a flowchart of a method 1600 implementing a chart growing step for texture image pattern aware partitioning, in accordance with an embodiment of the present principles.
DETAILED DESCRIPTION
The present principles are directed to multi-resolution 3D textured mesh coding. Advantageously to that end, the present principles provide a method and apparatus for transmitting both shape and texture progressively. According to one or more embodiments of the present principles, a progressive mesh is constructed such that all meshes in the progressive mesh sequence share a common texture parametrization (i.e., a map having common texture coordinates for each of the corresponding progressive levels). Consequently, the corresponding texture coordinates are ready for progressive transmission. For each level of detail (LOD), we just need to transmit the texture coordinates for the related refined vertices, while for unchanged vertices, their texture coordinates remain constant during the whole procedure.
The common texture image is progressively encoded to provide different levels of qualities. After the texture image is encoded, a bitstream corresponding to a low level of quality is transmitted first to provide a coarse representation of details. As more data corresponding to texture refinements are transmitted, a finer
representation of the texture image can be reconstructed at the receiving side.
In one embodiment, an encoder based on wavelet transformation can be used. In other embodiments, other scalable encoders, such as a JPEG2000 encoder and an H.264 SVC encoder, can also be used to provide different levels of resolutions and fidelities. Of course, the present principles are not limited solely to the preceding encoders and, thus, other encoders of different types, different generations, and so forth, may also be used in accordance with the present principles, while maintaining the spirit of the present principles.
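As a toy illustration of the base-plus-enhancement idea (far simpler than an actual wavelet, JPEG2000, or SVC codec), a texture image can be split into a downsampled base layer and a residual enhancement layer:

    import numpy as np

    def split_layers(image):
        # image: 2D array with even dimensions.  The base layer is a 2x2 block
        # average (coarse texture); the enhancement layer is the residual needed
        # to reconstruct the full-resolution texture.
        h, w = image.shape
        base = image.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
        return base, image - upsampled

    def reconstruct(base, enhancement=None):
        # With only the base layer, a coarse texture is shown; adding the
        # enhancement layer restores the fine texture.
        up = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
        return up if enhancement is None else up + enhancement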
FIG. 6 shows various exemplary 3D representations with different levels of qualities obtained using progressive meshes and progressive coding of the texture image, in accordance with an embodiment of the present principles. To that end, 3 progressive meshes are provided, namely mesh 610, mesh 620, and mesh 630.
Mesh 610 has the fewest triangles in representing the object's shape. As more triangles are progressively used in meshes 620 and 630, the shape of the object is improved (e.g., unnecessary corners are "softened" and more details are added, and so forth). All three meshes 610, 620, and 630 share an underlying common texture image. The common texture is encoded using different resolutions. In the example of FIG. 6, the common texture is encoded at a lowest resolution (640), a medium resolution (650), and a highest resolution (660). Meshes 670, 680, and 690 are also encoded. Mesh 670 has the lowest resolution, while mesh 690 has the highest resolution.
It is to be appreciated that mesh 620 includes the information from mesh 610, and medium resolution texture 650 includes the content from lowest resolution texture 640. Mesh 610 can be combined with either lowest resolution texture 640 or medium resolution texture 650, and lowest resolution texture 640 can be combined with either 620 or 610. If medium resolution texture 650 is used, lowest resolution texture 640 is not needed to obtain mesh 680. Similarly, if mesh 620 is used, mesh 610 is not needed to obtain mesh 680.
The present description illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor ("DSP") hardware, read-only memory ("ROM") for storing software, random access memory ("RAM"), and non-volatile storage.
Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
FIG. 7 is a high level block diagram showing both a system and method (collectively represented by the reference numeral 700) for multi-resolution 3D textured mesh coding, in accordance with an embodiment of the present principles. The system 700 generates bitstreams providing different levels of quality for various representations of a 3D object. For example, a bitstream corresponding to a simplified mesh (e.g., such as mesh 610) and a low-resolution texture image (e.g., such as texture 640) provides a low quality and, therefore, a coarse representation (e.g., such as multi-resolution mesh 670) of the 3D object. When more information, for example, additional information about the texture image, is received in the bitstream, a texture image with a higher resolution can be formed. Similarly, when more details on the mesh are received, a finer mesh can be generated. With a texture image at a higher resolution and a finer mesh, the bitstream corresponds to a finer mesh with fine texture. Of course, as is readily appreciated by one of ordinary skill in the art, more levels are possible than those shown in FIG. 7.
When a coarse representation is rendered at a decoder and quality improvement is desired, only additional information corresponding to the quality refinement needs to be transmitted to the decoder. This advantageously leads to information savings.
The system 700 includes a mesh partition and parametrization device 710, a mesh simplifier 715, a parametrization optimizer 720, a new texture atlas generator 730, and a texture image compressor 740. Each of these elements performs various steps, and hence, these elements and their associated functions will be described in further detail hereinafter with respect to various embodiments of the present principles. It is to be appreciated that the preceding elements can be implemented in hardware, software, or a combination thereof. In an embodiment, each element can include a processor and associated memory, or two or more of these elements may share a processor and memory. These and other configurations for these elements are readily contemplated by one of ordinary skill in the art.
The system takes as inputs, for example, a fine mesh 701 and a fine texture 702 representative of a 3D object. In particular, at step 771, mesh partition and parametrization device 710 receives the fine mesh 701 as an input thereof, and at step 772, the new atlas generator 730 receives the fine texture 702 as an input thereof. At step 773, the mesh 701 is partitioned and parametrized by mesh partition and parametrization device 710 to obtain a set of texture charts. At step 774, the mesh 701 is simplified by mesh simplifier 715 to define a progressive mesh. At step 775, parametrization optimization is applied to the progressive mesh by parametrization optimizer 720. At step 776, the set of texture charts is packed into a square texture image to form a new texture atlas by new atlas generator 730. At step 777, the square texture image (i.e., the texture atlas) is compressed by texture image compressor 740. Steps 773 through 777 are described in further detail herein below.
Mesh Partition and Parametrization
A single unfolding of an arbitrary mesh onto a texture image may create regions of high distortion, so generally a mesh must be partitioned into a set of charts, i.e., regions with disk-like topology. Regarding step 773, we partition (segment) the mesh 701 into a set of charts, parametrize each chart by creating a one-to-one mapping from the surface region onto a 2D polygon (the texture charts are these parametrized 2D polygons), and pack the 2D polygons into a single texture image. The single texture image can be compressed and transmitted in a progressive way, meaning that the single texture image can have different resolutions. In an embodiment, step 773 can also include resizing the texture charts (2D polygons) according to their stretch, so that charts that have more stretch get more texture area. Existing segmentation techniques do not consider the texture image pattern factor, which leads to artifacts. In order to minimize artifacts, the segmentation algorithm is designed such that a "meaningful texture image patch" is grouped into one chart while meeting some geometric constraints.
In our case, we need to incorporate the influence of the pattern of the initial texture image 702 into the segmentation operation. Let G be the gradient magnitude from the Sobel operator at a point on the mesh 701. The objective function in our case is a weighted sum of the geometric cost and ∫ G ds, the latter being the integral of G over a triangle, as follows:

    Cost = α × geometricCost + β × Σ_{i=1,...,n} ∫_{Ti} G ds ,
where α, β are relative weights, n is the number of triangles inside a chart, and geometricCost is the sum of: (1) the mean-squared distance of the chart to the best-fitting plane through the chart; and (2) the perimeter length of the chart. It is to be appreciated that the partitioning (per step 773) is an iterative process: we keep a list of this objective function for each possible merge operation and, after each operation, we update the list.
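Purely as an illustration of how such a per-chart cost could be evaluated, the following Python sketch computes the two geometric terms and the weighted sum; chart.vertices, chart.perimeter_length, chart.triangles, and integrate_gradient are hypothetical helpers standing in for the mesh data structure, not part of the described system:

    import numpy as np

    def geometric_cost(chart):
        # Sum of (1) the mean-squared distance of the chart's vertices to the
        # best-fitting plane through the chart and (2) its perimeter length.
        pts = np.asarray(chart.vertices)            # hypothetical (m, 3) array
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)    # plane normal = right
        normal = vt[-1]                             # singular vector of the
        msd = np.mean(((pts - centroid) @ normal) ** 2)  # smallest sing. value
        return msd + chart.perimeter_length()       # hypothetical helper

    def merge_cost(chart, alpha, beta):
        # Weighted sum of the geometric cost and the integral of the Sobel
        # gradient magnitude G over the n triangles inside the chart.
        gradient_term = sum(integrate_gradient(tri)     # hypothetical integral
                            for tri in chart.triangles) # of G over one triangle
        return alpha * geometric_cost(chart) + beta * gradient_term

In the iterative partitioning, this cost would be evaluated for every candidate merge and the list of costs updated after each merge operation, as described above.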
To parametrize the charts, we adapt the minimization of a geometric stretch metric of the aforementioned second prior art approach shown and described with respect to FIG. 4. This metric penalizes under-sampling, and results in samples that are uniformly distributed over the surface.
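For reference, a per-triangle stretch value can be computed from the singular values of the Jacobian of the texture-to-surface mapping. The following sketch assumes the commonly used L2 formulation of such a stretch metric; it is illustrative only and not necessarily the exact metric adapted here:

    import numpy as np

    def l2_stretch(q1, q2, q3, p1, p2, p3):
        # q1..q3: 3D corners of a triangle; p1..p3: their 2D texture
        # coordinates (s, t). The affine map from texture space to the
        # surface has partials Ss, St whose singular values measure stretch.
        (s1, t1), (s2, t2), (s3, t3) = p1, p2, p3
        q1, q2, q3 = (np.asarray(q) for q in (q1, q2, q3))
        area2 = (s2 - s1) * (t3 - t1) - (s3 - s1) * (t2 - t1)  # 2x uv area
        Ss = (q1 * (t2 - t3) + q2 * (t3 - t1) + q3 * (t1 - t2)) / area2
        St = (q1 * (s3 - s2) + q2 * (s1 - s3) + q3 * (s2 - s1)) / area2
        a, c = Ss @ Ss, St @ St
        # L2 stretch = root-mean-square of the two singular values; it grows
        # when the surface is under-sampled in texture space.
        return np.sqrt((a + c) / 2.0)

Minimizing such a value over all triangles penalizes under-sampling and tends to distribute samples uniformly over the surface, as noted above.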
Moreover, we provide another approach to partitioning, namely a texture image pattern aware mesh partitioning approach, described in further detail herein below with respect to FIGs. 14-16.

Mesh Simplification
Regarding step 774, the input mesh is simplified to define a progressive mesh. One objective during the simplification is to minimize texture deviation. The texture deviation is measured as the geometric error according to parametric correspondence. In particular, the texture deviation between a simplified mesh M^i and the original mesh M^n at a point p^i ∈ M^i is defined as ||p^i − p^n||, where p^n is the point on M^n with the same parametric location (as p^i) in the texture domain.
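A minimal sketch of this measurement follows, assuming hypothetical accessors texture_coords_at and surface_point_at that realize the parametric correspondence between the two meshes:

    import numpy as np

    def texture_deviation(p_i, simplified_mesh, original_mesh):
        # Map the point on M^i to its parametric (texture-domain) location,
        # find the point on M^n at the same parametric location, and return
        # the geometric distance between the two surface points.
        uv = simplified_mesh.texture_coords_at(p_i)   # hypothetical accessor
        p_n = original_mesh.surface_point_at(uv)      # hypothetical inverse map
        return np.linalg.norm(np.asarray(p_i) - np.asarray(p_n))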
Parametrization Optimization

Regarding step 775, we optimize the parametrization over the entire progressive mesh, minimizing stretch and deviation at all levels of detail (after we have determined the progressive mesh simplification sequence per step 774). The objective function is a weighted sum of the texture stretch and deviation over all meshes M^0, ..., M^n, as follows:

    Energy = Σ_{i=0,...,n} weight(i) × [α × Stretch(i) + β × Deviation(i)] ,

where α, β are relative weights between stretch and deviation, and weight(i) is the relative weight assigned to each LOD mesh (i.e., mesh M^i) in the progressive mesh sequence.
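For illustration, this energy could be evaluated as follows, where the stretch and deviation arguments are assumed per-mesh evaluation routines (hypothetical names, not part of the described system):

    def parametrization_energy(meshes, weight, alpha, beta, stretch, deviation):
        # meshes: the progressive sequence M^0 ... M^n; weight(i) is the
        # relative weight of LOD i; stretch/deviation evaluate the two terms.
        return sum(weight(i) * (alpha * stretch(m) + beta * deviation(m))
                   for i, m in enumerate(meshes))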
New Texture Atlas Generation
Regarding step 776, we then pack the texture charts (obtained in step 773) into a square texture image to form a new texture atlas. While we mention a square texture image here for illustrative purposes, it is readily appreciated that other geometric shapes can also be used, including, but not limited to, rectangles. Most existing techniques aim at minimizing the resolution of the atlas; however, atlas resolution is not equal to compressed size. FIG. 8 shows an exemplary texture atlas (image) 800 to which the present principles may be applied, in accordance with an embodiment of the present principles. The texture atlas 800 includes various texture charts. As shown in FIG. 8, significant similarity exists among the charts in the texture atlas 800, where chart instances sharing the same pattern tend to have similar textures.
We categorize the charts based on the color histogram and texture similarity.
The aim of the packing process includes minimizing the atlas resolution, and putting charts of the same category together as much as possible. Some optimization operations may be necessary in order to make the texture atlas more "compact". After execution of step 776, we have optimized texture coordinates for each vertex.
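As an illustration only, charts could be categorized by color histogram along the following lines; chart.pixels is a hypothetical per-chart texel array, and the L1 distance threshold is an arbitrary choice for this sketch:

    import numpy as np

    def chart_histogram(pixels, bins=8):
        # Coarse, normalized RGB histogram of a chart's texels, so charts of
        # different sizes remain comparable.
        hist, _ = np.histogramdd(pixels.reshape(-1, 3).astype(float),
                                 bins=(bins, bins, bins),
                                 range=((0, 256), (0, 256), (0, 256)))
        return hist.ravel() / hist.sum()

    def categorize_charts(charts, threshold=0.5):
        # Greedily group charts whose histograms are close in L1 distance, so
        # that charts of the same category can be packed near each other.
        categories = []
        for chart in charts:
            h = chart_histogram(chart.pixels)       # hypothetical texel array
            for cat in categories:
                if np.abs(h - cat["hist"]).sum() < threshold:
                    cat["members"].append(chart)
                    break
            else:
                categories.append({"hist": h, "members": [chart]})
        return categories

The packing step would then place members of the same category adjacently in the atlas, which tends to improve the compressed size even when the atlas resolution is unchanged.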
Texture Image Compression
Regarding step 777, we perform texture image compression. To that end, in an embodiment, we apply predictive compression to the texture image and make the texture image ready for progressive transmission. Any image/video encoder that can perform encoding progressively can be used to compress the texture image. For example, but clearly not meant to represent an exhaustive list, a wavelet-based image encoder, a JPEG2000 encoder, or an H.264/AVC encoder can be used to implement the present principles.
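In practice one of the above progressive encoders would be used; purely to illustrate the base-plus-enhancement layering, the following sketch builds layers from downsampled images and residuals (a toy scheme, not the actual codec):

    import numpy as np

    def encode_progressive(image, levels=3):
        # Base layer: a heavily downsampled image; each enhancement layer
        # stores the residual between the next resolution and the upsampled
        # (nearest-neighbor) reconstruction of the layer below.
        layers, recon = [], None
        for lvl in range(levels):
            step = 2 ** (levels - 1 - lvl)
            target = image[::step, ::step].astype(np.int16)
            if recon is None:
                layers.append(target)                    # base layer
            else:
                up = np.repeat(np.repeat(recon, 2, axis=0), 2, axis=1)
                up = up[:target.shape[0], :target.shape[1]]
                layers.append(target - up)               # enhancement residual
            recon = target
        return layers

    def decode_progressive(layers):
        # Reconstruction stops after however many layers have been received.
        recon = layers[0]
        for residual in layers[1:]:
            up = np.repeat(np.repeat(recon, 2, axis=0), 2, axis=1)
            recon = up[:residual.shape[0], :residual.shape[1]] + residual
        return recon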
FIG. 9 shows an exemplary bitstream format 900, in accordance with an embodiment of the present principles. Of course, the format 900 is for illustrative purposes and, thus, other bitstream formats may also be used, while maintaining the spirit of the present principles. For the first level (level 1), which corresponds to the lowest quality, the encoded data corresponding to the vertices' positions, texture coordinates, and texture image are represented as "position level 1" in block 910, "texture coordinates level 1" in block 920, and "image level 1" in block 930, respectively. For the next level (level 2), only the differences between level 2 and level 1 are encoded for the vertices' positions and texture coordinates, in "position level 2" in block 940 and in "texture coordinates level 2" in block 950, respectively. The texture image corresponding to level 2 is encoded into "image level 2" in block 960. Note that the texture image is progressively encoded. Thus, to reconstruct the texture image at level 2, encoded data for both image levels 1 and 2 are needed. Similarly, to reconstruct the vertices' positions and texture coordinates at level 2, the vertices' positions and texture coordinates at level 1 are also required.
For each additional quality level, additional data for the vertices' positions, texture coordinates, and texture image are used to represent a refinement over the previous level. When data for all levels are received, the decoder can render the 3D object with a fine mesh and fine texture. In the example shown in FIG. 9, when the data corresponding to the highest level n, namely "position level n" in block 970, "texture coordinates level n", and "image level n", are received, as well as the data for all previous levels, the highest quality can be rendered at the decoder.
Thus, according to the present principles, vertices' positions, texture coordinates, and texture images all allow for progressive transmission. This flexibility in progressive transmission is important to certain interactive applications.
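The byte-level syntax of FIG. 9 is not specified herein; a hypothetical length-prefixed serialization of the per-level blocks, for illustration only, could look as follows:

    import struct

    def write_level(stream, level, position, tex_coords, image):
        # One quality level contributes three length-prefixed blocks,
        # mirroring FIG. 9 ("position level k", "texture coordinates level k",
        # "image level k"). The field layout here is illustrative only.
        for tag, payload in ((b"POS", position), (b"TEX", tex_coords),
                             (b"IMG", image)):
            stream.write(struct.pack("<3sBI", tag, level, len(payload)))
            stream.write(payload)

    def read_levels(stream):
        # Collect whatever levels have arrived; higher levels refine lower
        # ones, so partial delivery still yields a renderable object.
        levels = {}
        while header := stream.read(8):
            tag, level, size = struct.unpack("<3sBI", header)
            levels.setdefault(level, {})[tag.decode()] = stream.read(size)
        return levels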
FIG. 10 shows an exemplary method 1000 for providing multi-resolution 3D textured mesh coding to one or more users, in accordance with an embodiment of the present principles. The method 1000 pertains to the receiving/decoding side, as a complement to the method 700 described above. At step 1010, a bitstream is received. At step 1020, the corresponding vertices' positions, texture coordinates, and texture image conveyed within the bitstream are decoded. When the bitstream only includes data for the lowest quality level, the decoding is based on the bitstream itself. When the bitstream includes data for a higher quality level, the vertices' positions, texture coordinates, and texture image are decoded based on the bitstream for the current quality level and previously decoded data for the lower levels. At step 1030, a 3D object is rendered using the vertices' positions, texture coordinates, and the texture image. At step 1040, it is determined whether or not the user is satisfied with the quality of the 3D object (rendered per step 1030). If so, then the method is terminated. Otherwise, the method proceeds to step 1050. At step 1050, the user is afforded the opportunity to request more data to refine the 3D object; if the user so requests, then the decoder receives additional data corresponding to a higher level of quality and the method returns to step 1020. Accordingly, the additional received data will be decoded at step 1020 and used for refining the 3D object at step 1030.
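A minimal sketch of this client loop follows, with connection, renderer, and decode_level standing in as hypothetical interfaces (not part of the described system):

    def refine_on_demand(connection, renderer, decode_level, max_level):
        # Steps 1010-1050 of method 1000: decode level 1, render, then fetch
        # and decode enhancement data only while the user asks for more.
        state = decode_level(connection.receive(), previous=None)
        renderer.render(state)                # coarse 3D object (step 1030)
        level = 1
        while level < max_level and renderer.user_wants_refinement():  # 1040
            level += 1
            connection.request(level)         # step 1050: request more data
            state = decode_level(connection.receive(), previous=state) # 1020
            renderer.render(state)            # refined 3D object
        return state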
FIG. 11 shows an exemplary environment 1100 to which the present principles may be applied, in accordance with an embodiment of the present principles. The environment involves a server 1110, one or more networks (hereinafter simply represented by the Internet) 1120, and various user devices 1131 and 1132. The server receives a representation 1105 of a 3D object in the form of a multi-resolution mesh with texture, and implements the present principles as represented, for example, in FIG. 7, to perform geometry streaming and texture streaming over the Internet 1120. The various user devices 1131 and 1132 receive the geometry streams and texture streams, and decode them, for example as described with respect to FIG. 10, to provide 3D representations 1185 (such as 670, 680, and 690 of FIG. 6) having various qualities associated therewith. This may involve the user 1199 requesting a higher quality (per step 1050 of method 1000) via some feedback mechanism, which is then provided to the server. Accordingly, each of the user devices 1131 and 1132 includes a respective decoder 1150. Such a decoder may be implemented in hardware, software, or a combination thereof. In an embodiment, the decoder can include a processor and associated memory, or may use the processor and associated memory of the device within which it is found. Moreover, each of the user devices 1131 and 1132 includes a rendering device (e.g., a display) 1160. These and other configurations are readily contemplated by one of ordinary skill in the art.

It is to be appreciated that the environment 1100 is provided for exemplary purposes and, thus, the present principles may be applied to many other environments, as readily determined by one of ordinary skill in the art, while maintaining the spirit of the present principles.
Texture Image Pattern Aware Mesh Partitioning
The following provides an alternative to the partitioning approach described above with respect to step 773 of FIG. 7. That is, the following approach provides an effective way to reduce artifacts for texture mapping of a multi-resolution mesh. It is to be appreciated that, given the teachings of the present principles provided herein, the following texture image pattern aware partitioning approach can be applied to various scenarios regarding texture mapping, unfolding of meshes, and so forth, as readily appreciated by one of ordinary skill in the art, while maintaining the spirit of the present principles.
In multi-resolution coding of geometry and texture, the geometry is modified through the simplification operator. In the case of texture mapping, a modified geometry imposes a different texture mapping function, which may have a boundary problem. FIG. 12 shows an example of a boundary problem 1200 to which the present principles may be applied, in accordance with an embodiment of the present principles. When an edge of a vertex on a boundary 1205 in a chart is simplified, the texture maps for the two charts 1221 and 1222 may lead to artifacts. The reason is that the texture image is mapped onto different geometries. Using texture images for the different geometries poses a difficulty with respect to mapping. That is, if two neighboring points are mapped onto distant boundaries of a texture image, the texture mapping function for these two points may result in a blurred effect.
Thus, to minimize artifacts, we design the segmentation algorithm in such a way as to group a "meaningful texture image patch" into one chart. In other words, it is suitable to generate large charts with most of their boundaries in "color-sharpness" zones. FIG. 13 shows a segmentation 1310 of an image 1300, in accordance with an embodiment of the present principles. The image 1300 is partitioned into different sub-regions of homogeneity. The boundary of a "meaningful texture image patch" usually corresponds to a high image gradient zone. Thus, we take into account the image gradient as a factor when we partition the input model. It is also possible to use other and/or more elaborate criteria, as readily determined by one of ordinary skill in the art, given the teachings of the present principles provided herein. Nonetheless, for illustrative purposes, we note that such other and/or more elaborate criteria capable of being used for segmentation may include, but are not limited to, a model-based approach (i.e., considering one or more model parameters), a histogram-based approach, and so forth.
Usually in texture mapping, the model to be textured is decomposed into charts homeomorphic to discs, where each chart is parametrized, and the unfolded charts are packed in a texture space. In the decomposition step, if the texture image pattern has not been considered, texture artifacts may occur on the boundaries.
The present principles take into account the texture image pattern when partitioning the mesh into charts. To minimize artifacts, the segmentation algorithm is designed in such a way as to avoid chart boundaries in low image gradient zones. In other words, it is suitable to generate large charts with most of their boundaries in "color-sharp" zones.
The partition operation decomposes the model into a set of charts, and is intended to meet the following requirements as much as possible and/or practical given the intended application and available resources:
1. Chart boundaries should be positioned in such a way that most of the discontinuities between the charts will be located in zones where they will not cause texture artifacts;
2. Charts must be homeomorphic to discs, and it must be possible to parametrize them without introducing too much deformation;
3. Reduce color discontinuity; and
4. Minimize texture distortion.
In an embodiment, the texture image pattern aware partitioning approach can be considered to involve two stages. The first stage pertains to feature detection, which finds boundaries corresponding to high image gradient zones of the model. The second stage pertains to chart growing, which makes the charts meet at these feature curves. We note that the terms "borders", "boundaries", and "feature curves" are used interchangeably herein with respect to the pattern aware mesh partition approach.
Detect features
The feature detection phase can be outlined as follows:
• Compute a color-sharpness criterion on the edges (where "boundaries" and "edges" are used interchangeably herein). For illustrative purposes, we use image gradient as the color-sharpness criterion. It is also possible to use other and/or more elaborate criteria.
• Choose a threshold so that a certain proportion of the edges is filtered out.
• For each of the remaining edges, grow a feature curve by applying Algorithm 1.

Let us define sharpness = G, where G is the gradient magnitude from the Sobel operator; it measures the change of color of the texture image. We would like to generate charts with most of their boundaries in "color-sharp" zones, which means that the larger the sharpness of a given candidate boundary under consideration, the greater the chance for that candidate boundary to become an actual chart boundary. The following algorithm (hereinafter Algorithm 1) is directed to feature detection:

obtain_feature_curve(edge start)
    vector<edge> detected_feature
    edge h' = start
    do
        use depth-first search to find the string S of edges starting with h' and such that:
            • two consecutive edges of S share a vertex
            • the length of S is not larger than threshold1
            • sharpness(S), the sum of sharpness(e) over the edges e of S, is maximum
            • no edge of S goes backward (relative to h')
            • no edge of S is tagged as a feature neighbor
        h' ← second item of S
        append h' to detected_feature
    while (sharpness(S) > threshold2)
    if (length(detected_feature) > min_feature_length) then
        tag the elements of detected_feature as features
        tag the edges in the neighborhood of detected_feature as feature neighbors
    end

Note that the depth-first search may find several strings of edges starting with h'; the string having the maximum sharpness is selected.
Algorithm 1 attempts to predict the best paths, and filters out the small features caused by noise. FIG. 14 shows a flowchart of a method 1400
implementing a feature detection step (Algorithm 1 ) for texture image pattern aware partitioning, in accordance with an embodiment of the present principles. At step 1410, a color-sharpness criterion is computed on the edges. For illustrative purposes, we use image gradient for the color-sharpness criterion. However, it is to be appreciated that other and/or more elaborate criteria can be used, while
maintaining the spirit of the present principles. At step 1420, a threshold is chosen such that a certain proportion of the edges is filtered out. At step 1430, a feature curve is grown from each of the remaining edges. Step 1430 involves applying Algorithm 1 to each of the remaining edges to grow the feature curves therefrom. The purpose of these feature curves is to serve as the "meeting boundary" of different charts. These charts are obtained through growing from a set of seeds, as described in further detail herein below. FIG. 15 further describes step 1430 of FIG. 14, in accordance with an embodiment of the present principles. At step 1505, edge h' is set equal to start (i.e., a starting point for detecting features from which the feature curves are generated). At step 1510, a depth-first search is used to find the string S of edges starting with edge h' which satisfies each of the following conditions: two consecutive edges of S share a vertex; the length of S is not larger than threshold1; sharpness(S), the sum of sharpness(e) over the edges e of S, is maximum; no edge of S goes backward (relative to edge h'); and no edge of S is tagged as a feature neighbor. Using all five of the preceding conditions provides the greatest advantage in predicting the best paths and filtering out the small features caused by noise. At step 1515, edge h' is set equal to the second item of the string S of edges. At step 1520, edge h' is appended to detected_feature. At step 1525, it is determined whether or not sharpness(S) is greater than threshold2. If so, the method returns to step 1510. Otherwise, the method proceeds to step 1530. At step 1530, it is determined whether or not the length of detected_feature is greater than min_feature_length. If so, then the method proceeds to step 1535. Otherwise, the method 1500 is terminated. At step 1535, the elements of detected_feature are tagged as (detected) features, and the edges in the neighborhood of the (detected) features are tagged as feature neighbors.
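For illustration, the sharpness computation and the curve-growing loop of Algorithm 1 could be sketched in Python as follows; edge.uv0/uv1, candidate_strings, and tag_as_feature are hypothetical stand-ins for the mesh data structure, and the depth-first search that enforces the five conditions is assumed to be supplied by candidate_strings:

    import numpy as np
    from scipy import ndimage

    def sharpness_map(texture):
        # Gradient magnitude G of the texture image via the Sobel operator.
        gray = texture.mean(axis=2) if texture.ndim == 3 else texture
        return np.hypot(ndimage.sobel(gray, axis=1),
                        ndimage.sobel(gray, axis=0))

    def edge_sharpness(edge, G, samples=16):
        # Approximate the sharpness of a mesh edge by averaging G along the
        # edge's segment in texture space (uv0/uv1: hypothetical attributes).
        (u0, v0), (u1, v1) = edge.uv0, edge.uv1
        t = np.linspace(0.0, 1.0, samples)
        rows = np.clip((v0 + t * (v1 - v0)).astype(int), 0, G.shape[0] - 1)
        cols = np.clip((u0 + t * (u1 - u0)).astype(int), 0, G.shape[1] - 1)
        return G[rows, cols].mean()

    def grow_feature_curve(start, candidate_strings, sharpness,
                           threshold2, min_len):
        # Skeleton of Algorithm 1: repeatedly pick the admissible string S
        # with maximum sharpness(S) and advance along its second edge.
        detected, h = [], start
        while True:
            strings = candidate_strings(h)   # DFS strings obeying the 5 rules
            if not strings:
                break
            S = max(strings, key=sharpness)
            if sharpness(S) <= threshold2:
                break
            h = S[1]                         # second item of S
            detected.append(h)
        if len(detected) > min_len:
            for e in detected:
                e.tag_as_feature()           # hypothetical tagging helper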
Expand charts
Once the color-sharpness features have been detected, the charts can be created. In an embodiment, our approach is a greedy algorithm, expanding all the charts simultaneously from a set of seeds as follows:
• A front is propagated from the borders and the feature curves detected by the previous algorithm (Algorithm 1), to compute a distance_to_features function at each facet of a geometric shape in the charts (to be merged). The seeds are then found as the local maxima of this distance_to_features function.
• Charts are merged if they meet at a small distance from their seed.
The following algorithm (hereinafter Algorithm 2) is directed to chart growing:

priority_queue<edge> Heap sorted by dist(facet(edge))
set<edge> chart_boundaries initialized with all the edges of the surface

for each facet F where dist(F) is a local maximum
    create a new chart with seed F
    add the edges of F to Heap
end

while (Heap is not empty)
    edge h ← e ∈ Heap such that dist(e) is maximum
    remove h from Heap
    facet F ← facet(h)
    facet Fopp ← the opposite facet of F relative to h
    if (chart(Fopp) is undefined) then
        if (geometry_constraint_satisfied) then
            add Fopp to chart(F)
            remove h from chart_boundaries
            remove edges which do not link two other chart boundary edges from chart_boundaries
            add the edges of Fopp belonging to chart_boundaries to Heap
        else
            create a new chart with Fopp
            add the edges of Fopp to Heap
    else if (chart(Fopp) ≠ chart(F) and
             geometry_constraint_satisfied and
             max_dist(chart(F)) − dist(F) < ε and
             max_dist(chart(Fopp)) − dist(F) < ε) then
        merge chart(F) and chart(Fopp)
    end
end
This region growing uses the following:
• distance_to_features is stored in each facet F, and denoted by dist(F);
• for each chart C, max_dist(C) denotes the maximum distance to features over all the facets of C;
• the set of edges chart_boundaries represents the borders of all charts; and
• geometry_constraint_satisfied means that the cost (the sum of a planarity measure and a compactness measure) of the merging operation does not exceed a user-specified threshold.
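A compact Python sketch of Algorithm 2 follows; the mesh accessors (h.facet, h.opposite_facet, F.edges), is_local_maximum, and merge_charts are hypothetical, and the chart_boundaries bookkeeping is omitted for brevity:

    import heapq
    import itertools

    def grow_charts(facets, dist, geometry_ok, eps):
        # Seed charts at local maxima of distance_to_features, then expand
        # them facet by facet with a max-heap keyed on dist; merge charts
        # that meet at a small distance from their seeds.
        chart = {}                         # facet -> chart id
        max_dist = {}                      # chart id -> dist of its seed
        heap, tie = [], itertools.count()  # counter breaks heap ties
        next_id = 0
        for F in facets:
            if is_local_maximum(F, dist):  # hypothetical predicate
                chart[F] = next_id
                max_dist[next_id] = dist(F)
                next_id += 1
                for h in F.edges:
                    heapq.heappush(heap, (-dist(F), next(tie), h))
        while heap:
            _, _, h = heapq.heappop(heap)
            F, F_opp = h.facet, h.opposite_facet   # hypothetical attributes
            if F_opp not in chart:
                if geometry_ok(chart[F], F_opp):
                    chart[F_opp] = chart[F]        # grow chart(F)
                else:
                    chart[F_opp] = next_id         # start a new chart
                    max_dist[next_id] = dist(F_opp)
                    next_id += 1
                for e in F_opp.edges:
                    heapq.heappush(heap, (-dist(F_opp), next(tie), e))
            elif (chart[F_opp] != chart[F] and geometry_ok(chart[F], F_opp)
                  and max_dist[chart[F]] - dist(F) < eps
                  and max_dist[chart[F_opp]] - dist(F) < eps):
                merge_charts(chart, max_dist,      # hypothetical relabeling
                             chart[F], chart[F_opp])
        return chart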
FIG. 16 shows a flowchart of a method 1600 implementing a chart growing step (Algorithm 2) for texture image pattern aware partitioning, in accordance with an embodiment of the present principles. At step 1610, a front is propagated from the borders and feature curves (detected by method 1500, pertaining to Algorithm 1), to compute a distance_to_features function at each facet. At step 1620, the seeds are found as the local maxima of the distance_to_features function. At step 1630, charts are merged if they meet at a small distance (e.g., below a given threshold distance) from their seed.
These and other features and advantages of the present invention may be readily ascertained by one of ordinary skill in the pertinent art based on the teachings herein. It is to be understood that the teachings of the present invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.
Most preferably, the teachings of the present invention are implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units ("CPU"), a random access memory ("RAM"), and input/output ("I/O") interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
It is to be further understood that, because some of the constituent system components and methods depicted in the accompanying drawings are preferably implemented in software, the actual connections between the system components or the process function blocks may differ depending upon the manner in which the present invention is programmed. Given the teachings herein, one of ordinary skill in the pertinent art will be able to contemplate these and similar implementations or configurations of the present invention.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present invention is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.

Claims

1. A method for encoding a three-dimensional model represented by a mesh of two-dimensional polygons, comprising:
simplifying (774) the mesh to obtain a simplified mesh;
generating (776) a texture image, the texture image representing textures for the mesh and the simplified mesh; and
encoding (777) the texture image to form a base layer and an enhancement layer, the base layer corresponding to a coarse representation of the texture image and the enhancement layer providing a refinement of the base layer.
2. The method of claim 1, further comprising:
partitioning (773) the mesh and the simplified mesh into a plurality of charts; and
parametrizing (773) the plurality of charts by minimizing an energy function which depends on gradients of the texture image.
3. The method of claim 1, wherein encoding (777) the texture image is performed by a wavelet-based encoder.
4. The method of claim 1, further comprising:
encoding (740) the mesh and the simplified mesh to an encoded mesh and an encoded simplified mesh respectively; and
outputting the encoded simplified mesh, the encoded mesh, the base layer of the texture image, and the enhancement layer of the texture image.
5. The method of claim 4, wherein the encoded mesh is encoded in response to a difference between the mesh and the simplified mesh.
6. The method of claim 4, wherein the outputting of at least one of the encoded mesh and the enhancement layer of the texture image is performed in response to a user request.
7. The method of claim 1, further comprising:
partitioning the mesh into a plurality of charts;
wherein partitioning the mesh into a plurality of charts comprises:
computing (1410) a color-sharpness criterion on edges in the texture image, wherein the color-sharpness criterion relates to a gradient of the texture image;
filtering (1420) out a certain proportion of the edges using a threshold related to the color-sharpness criterion to obtain remaining edges; and
growing (1430) a respective feature curve from each of the remaining edges.
8. An apparatus for encoding a three-dimensional model represented by a mesh of two-dimensional polygons, comprising:
a mesh simplifier (715) for simplifying the mesh to obtain a simplified mesh; a texture image generator (730) for generating a texture image, the texture image representing textures for the mesh and the simplified mesh;
an encoder (740) for encoding the texture image to form a base layer and an enhancement layer, the base layer corresponding to a coarse representation of the texture image and the enhancement layer providing a refinement of the base layer.
9. The apparatus of claim 8, further comprising a mesh partition and parametrization device for partitioning the mesh and the simplified mesh into a plurality of charts, and parametrizing the plurality of charts by minimizing an energy function which depends on gradients of the texture image.
10. The apparatus of claim 8, wherein said encoder (740) is a wavelet-based encoder.
11. The apparatus of claim 8, wherein said encoder (740) encodes the mesh and the simplified mesh to an encoded mesh and an encoded simplified mesh respectively, and outputs the encoded simplified mesh, the encoded mesh, the base layer of the texture image, and the enhancement layer of the texture image.
12. The apparatus of claim 11, wherein the encoded mesh is encoded in response to a difference between the mesh and the simplified mesh.
13. The apparatus of claim 8, further comprising a mesh partitioning device (710) for partitioning the mesh into a plurality of charts by computing a color-sharpness criterion on edges in the texture image, filtering out a certain proportion of the edges using a threshold related to the color-sharpness criterion to obtain remaining edges, and growing a respective feature curve from each of the remaining edges, wherein the color-sharpness criterion relates to a gradient of the texture image.
14. A method, comprising:
decoding (1020) a simplified mesh, a mesh, and a coarse representation of a texture map from a bitstream, the coarse representation of the texture map corresponding to both the simplified mesh and the mesh;
forming (1030) a three-dimensional model for the mesh, using the coarse representation of the texture map;
decoding (1050) a refinement of the texture map to form a refined
representation of the texture map; and
enhancing (1030) the three-dimensional model for the mesh, using the refined representation of the texture map.
15. The method of claim 14, wherein the coarse representation and the refinement of the texture map are decoded using a wavelet-based decoder.
16. The method of claim 14, wherein the step of decoding the mesh comprises:
decoding a difference between the simplified mesh and the mesh; and combining the difference and the simplified mesh to form the mesh.
17. The method of claim 14, further comprising receiving (1050) a user request to enhance said three-dimensional model.
18. An apparatus, comprising: a decoder (1150) for decoding a simplified mesh, a mesh, and a coarse representation of a texture map from a bitstream, the coarse representation of the texture map corresponding to both the simplified mesh and the mesh; and
a rendering device (1160) for forming a three-dimensional model for the mesh using the coarse representation of the texture map,
wherein the decoder (1150) decodes a refinement of the texture map to form a refined representation of the texture map, and the rendering device (1160) enhances the three-dimensional model for the mesh using the refined representation of the texture map.
19. The apparatus of claim 18, wherein at least a portion of the decoder (1150) that decodes the coarse representation and the refinement of the texture map comprises a wavelet-based decoder.
20. The apparatus of claim 18, wherein said decoder (1150) decodes the mesh by decoding a difference between the simplified mesh and the mesh, and combining the difference and the simplified mesh to form the mesh.
21. The apparatus of claim 18, further comprising a feedback mechanism for receiving a user request to enhance said three-dimensional model.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/079095 WO2013029232A1 (en) 2011-08-30 2011-08-30 Multi-resolution 3d textured mesh coding


Publications (1)

Publication Number Publication Date
WO2013029232A1 true WO2013029232A1 (en) 2013-03-07

Family

ID=47755191


Country Status (1)

Country Link
WO (1) WO2013029232A1 (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1349716A (en) * 1999-12-28 2002-05-15 皇家菲利浦电子有限公司 SNR scalable video encoding method and corresponding decoding method
US6678419B1 (en) * 1999-03-26 2004-01-13 Microsoft Corporation Reordering wavelet coefficients for improved encoding
US7274372B1 (en) * 2000-11-06 2007-09-25 Intel Corporation Real-time digital three dimensional engraving
CN101119485A (en) * 2007-08-06 2008-02-06 北京航空航天大学 Characteristic reservation based three-dimensional model progressive transmission method
CN101364310A (en) * 2007-08-07 2009-02-11 北京灵图软件技术有限公司 Three-dimensional model image generating method and apparatus
US20100277571A1 (en) * 2009-04-30 2010-11-04 Bugao Xu Body Surface Imaging


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2679990C2 (en) * 2014-03-24 2019-02-14 Сони Корпорейшн Image coding apparatus and method and image decoding apparatus and method
EP2996086A1 (en) * 2014-09-12 2016-03-16 Kubity System, method and computer program product for automatic optimization of 3d textured models for network transfer and real-time rendering
WO2016038091A1 (en) * 2014-09-12 2016-03-17 Kubity System, method and computer program product for automatic optimization of 3d textured models for network transfer and real-time rendering
CN107077746A (en) * 2014-09-12 2017-08-18 酷比特公司 System, method and computer program product for network transmission and the Automatic Optimal of the 3D texture models of real-time rendering
US10482629B2 (en) 2014-09-12 2019-11-19 Kubity System, method and computer program product for automatic optimization of 3D textured models for network transfer and real-time rendering
US11205299B2 (en) 2017-03-08 2021-12-21 Ebay Inc. Integration of 3D models
US11727627B2 (en) 2017-03-08 2023-08-15 Ebay Inc. Integration of 3D models
WO2019241228A1 (en) * 2018-06-12 2019-12-19 Ebay Inc. Reconstruction of 3d model with immersive experience
US11727656B2 (en) 2018-06-12 2023-08-15 Ebay Inc. Reconstruction of 3D model with immersive experience
US11568575B2 (en) * 2019-02-19 2023-01-31 Google Llc Cost-driven framework for progressive compression of textured meshes
WO2022133569A1 (en) * 2020-12-22 2022-06-30 Prevu3D Inc. Methods and system for reconstructing textured meshes from point cloud data
WO2022258879A3 (en) * 2021-06-09 2023-01-26 Nokia Technologies Oy A method, an apparatus and a computer program product for video encoding and video decoding


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11871647

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11871647

Country of ref document: EP

Kind code of ref document: A1