CN104978764A - Three-dimensional face mesh model processing method and three-dimensional face mesh model processing equipment - Google Patents

Three-dimensional face mesh model processing method and three-dimensional face mesh model processing equipment

Info

Publication number
CN104978764A
CN104978764A (application CN201410141093.XA; granted as CN104978764B)
Authority
CN
China
Prior art keywords
human face
face image
expression features
feature point
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410141093.XA
Other languages
Chinese (zh)
Other versions
CN104978764B (en)
Inventor
吕培
周炯
赵寅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong smart Polytron Technologies Inc
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201410141093.XA
Publication of CN104978764A
Application granted
Publication of CN104978764B
Legal status: Active


Abstract

The invention provides a three-dimensional face mesh model processing method and a three-dimensional face mesh model processing device. The method comprises the following steps: obtaining an initial three-dimensional face mesh model corresponding to an original two-dimensional face image, the model containing second expression feature points corresponding to first expression feature points of the original two-dimensional face image; calculating a camera parameter matrix of the initial three-dimensional face mesh model according to formula (1); and mapping the second expression feature points onto the original two-dimensional face image according to the camera parameter matrix in order to judge the matching degree between the second and first expression feature points, and adjusting the initial three-dimensional face mesh model according to the judgment result. Because the matching degree between the initial three-dimensional face mesh model and the original two-dimensional face image is judged using the camera parameters, and the model is adjusted when the matching degree is low, the adjusted three-dimensional face mesh model achieves a better matching degree with the original two-dimensional face image.

Description

Three-dimensional face mesh model processing method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to a three-dimensional face mesh model processing method and device.
Background technology
Facial expression is a subtle form of body language and an important means of conveying emotional information. Applications such as facial-expression animation and photo editing often need to transfer the facial expression in one picture onto a face picture with a different expression. For example, a person may be unsatisfied with the expression in a newly taken photograph; with suitable image-processing techniques, a satisfactory expression from an earlier photograph can be transferred onto the current one.
With the development of three-dimensional modeling and acquisition technology, three-dimensional models can provide more detailed information than two-dimensional images. Therefore, expression transfer generally requires building a three-dimensional face mesh model for the target two-dimensional face image and another for the reference two-dimensional face image, and then performing image warping, fusion, and similar operations based on these models to achieve the transfer. Consequently, how well the constructed three-dimensional face mesh model matches its corresponding two-dimensional face image has a significant impact on the quality of the expression processing.
Existing three-dimensional face mesh models are mostly built from a facial expression database: a stored three-dimensional expression model is deformed as a whole so as to match the expression feature points marked on the initial two-dimensional face image. However, models built by such global deformation often match the corresponding two-dimensional face image poorly.
Summary of the invention
To address the problems in the prior art, the present invention provides a three-dimensional face mesh model processing method and device, in order to overcome the low matching degree between a model built by globally deforming entries of a facial expression database and the corresponding two-dimensional face image.
A first aspect of the present invention provides a three-dimensional face mesh model processing method, comprising:
obtaining an initial three-dimensional face mesh model corresponding to an original two-dimensional face image, the initial three-dimensional face mesh model comprising second expression feature points corresponding to first expression feature points of the original two-dimensional face image;
calculating a camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):
min_P Σ_{i=1}^{N} ||P·X_i - x_i||^2        (1)
where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional face mesh model, x_i is the i-th first expression feature point on the original two-dimensional face image corresponding to X_i, and N is the number of first (and second) expression feature points;
and mapping the second expression feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, so as to judge the matching degree between the second expression feature points and the first expression feature points, and adjusting the initial three-dimensional face mesh model according to the judgment result.
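Formula (1) becomes an ordinary linear least-squares problem once a camera model is fixed. The sketch below assumes an affine camera (a 2x4 matrix P applied to homogeneous 3-D points) and NumPy's solver; the patent specifies neither the camera model nor the minimisation method, so both are assumptions:

```python
import numpy as np

def estimate_camera_matrix(X3d, x2d):
    """Least-squares solution of formula (1): find P minimising
    sum_i ||P @ X_i - x_i||^2 over the N feature-point pairs.
    X3d: (N, 3) second expression feature points on the mesh model.
    x2d: (N, 2) first expression feature points on the 2-D image."""
    X3d = np.asarray(X3d, dtype=float)
    Xh = np.hstack([X3d, np.ones((X3d.shape[0], 1))])  # homogeneous (N, 4)
    # Solve Xh @ P.T ~= x2d in the least-squares sense.
    P_T, *_ = np.linalg.lstsq(Xh, np.asarray(x2d, dtype=float), rcond=None)
    return P_T.T                                        # (2, 4) camera matrix

# Sanity check: points projected by a known P should recover that P exactly.
rng = np.random.default_rng(0)
P_true = rng.standard_normal((2, 4))
X3d = rng.standard_normal((10, 3))
x2d = np.hstack([X3d, np.ones((10, 1))]) @ P_true.T
P_est = estimate_camera_matrix(X3d, x2d)
```

A full perspective camera (a 3x4 matrix with projective division) would instead call for the Direct Linear Transform followed by nonlinear refinement.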
In a first possible implementation of the first aspect, mapping the second expression feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, judging the matching degree between the second and first expression feature points, and adjusting the model according to the judgment result comprises:
calculating the matching error between the second expression feature points and the first expression feature points according to formula (2):
Err = Σ_{i=1}^{N} w_i ||P·X_i - x_i||^2        (2)
where Err is the matching error and w_i is the weight coefficient of the i-th feature-point pair (X_i, x_i);
judging whether the matching error is greater than or equal to a preset threshold;
and, if so, adjusting the initial three-dimensional face mesh model so that the matching error between the second expression feature points on the adjusted model and the first expression feature points is smaller than the preset threshold.
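The threshold test of this implementation can be written directly from formula (2). A minimal sketch (NumPy, with the same assumed affine 2x4 camera matrix; the weights w_i and the threshold value are inputs the patent leaves to the implementer):

```python
import numpy as np

def matching_error(P, X3d, x2d, weights):
    """Weighted matching error of formula (2):
    Err = sum_i w_i * ||P @ X_i - x_i||^2, with X_i homogeneous."""
    X3d = np.asarray(X3d, dtype=float)
    Xh = np.hstack([X3d, np.ones((X3d.shape[0], 1))])
    residuals = Xh @ np.asarray(P, dtype=float).T - np.asarray(x2d, dtype=float)
    return float(np.sum(np.asarray(weights, dtype=float)
                        * np.sum(residuals ** 2, axis=1)))

def needs_adjustment(err, threshold):
    # The model is adjusted only when Err reaches the preset threshold.
    return err >= threshold

# Second point is off by 1 pixel in y; with weight 2.0 it contributes 2.0.
P = [[1, 0, 0, 0], [0, 1, 0, 0]]
err = matching_error(P, [[1, 2, 3], [4, 5, 6]], [[1, 2], [4, 6]], [0.5, 2.0])
```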
According to the first possible implementation of the first aspect, in a second possible implementation, adjusting the initial three-dimensional face mesh model comprises:
calculating the geodesic distance from each second expression feature point X_i to every mesh vertex X_j of the initial three-dimensional face mesh model, where i is not equal to j;
fixing the z coordinate of each second expression feature point X_i on the initial three-dimensional face mesh model and changing its x and y coordinates with a first preset algorithm, yielding a third expression feature point X_i' corresponding to X_i;
determining, with the geodesic distances as constraints, the mesh vertices X_j' corresponding to each third expression feature point X_i' by a second preset algorithm;
and adjusting the initial three-dimensional face mesh model according to the third expression feature points X_i' and their corresponding mesh vertices X_j'.
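The patent does not say how the geodesic distances in this implementation are computed. One common surrogate, assumed here, is shortest-path distance along the mesh edge graph (Dijkstra's algorithm); exact surface geodesics would need a dedicated method such as MMP or the heat method.

```python
import heapq
import math
from collections import defaultdict

def geodesic_distances(vertices, edges, source):
    """Approximate geodesic distance from vertex `source` to every mesh
    vertex by Dijkstra over the edge graph, with Euclidean edge lengths.
    vertices: list of (x, y, z); edges: list of (i, j) index pairs."""
    graph = defaultdict(list)
    for a, b in edges:
        length = math.dist(vertices[a], vertices[b])
        graph[a].append((b, length))
        graph[b].append((a, length))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue                      # stale queue entry
        for v, length in graph[u]:
            nd = d + length
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# A 3-vertex chain: vertex 2 is two unit edges away from vertex 0.
d = geodesic_distances([(0, 0, 0), (1, 0, 0), (1, 1, 0)], [(0, 1), (1, 2)], 0)
```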
According to the first aspect, or the first or second possible implementation of the first aspect, in a third possible implementation the original two-dimensional face image comprises a target two-dimensional face image and a reference two-dimensional face image;
and obtaining the initial three-dimensional face mesh model corresponding to the original two-dimensional face image comprises:
extracting the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image, the facial expression feature points comprising face contour feature points and first expression feature points;
determining a near-frontal face image according to the face contour feature points of the target and reference two-dimensional face images, the near-frontal face image being either the target or the reference two-dimensional face image;
deforming a target neutral face model selected from a neutral face database according to the face contour feature points and first expression feature points of the near-frontal face image, obtaining a neutral face model corresponding to the near-frontal face image;
deforming, according to the neutral face model of the near-frontal face image, each preset expression model in a preset expression library, obtaining the expression models corresponding to the near-frontal face image;
determining first weight coefficients of the expression models according to the first expression feature points of the target two-dimensional face image, and second weight coefficients of the expression models according to the first expression feature points of the reference two-dimensional face image;
and fusing the expression models according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and fusing them according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image.
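Fusion "according to weight coefficients" is consistent with the standard delta-blendshape formula V = B_0 + Σ_k w_k (B_k - B_0); since the patent only states that the preset library comprises a generic blendshape model, the exact form below is an assumption:

```python
import numpy as np

def blend_expressions(neutral, expression_models, weights):
    """Fuse preset expression models into one mesh by weighted blending:
    result = neutral + sum_k w_k * (model_k - neutral).
    All inputs are (V, 3) vertex arrays sharing one mesh topology."""
    neutral = np.asarray(neutral, dtype=float)
    result = neutral.copy()
    for w, model in zip(weights, expression_models):
        result += w * (np.asarray(model, dtype=float) - neutral)
    return result

# Two single-vertex 'expressions' blended half-and-half.
fused = blend_expressions([[0.0, 0.0, 0.0]],
                          [[[1.0, 0.0, 0.0]], [[0.0, 2.0, 0.0]]],
                          [0.5, 0.5])
```

The same function serves both fusions of this implementation: called once with the first weight coefficients for the target image, and once with the second weight coefficients for the reference image.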
According to the third possible implementation of the first aspect, in a fourth possible implementation the preset expression library comprises a generic blendshape model.
According to the third or fourth possible implementation of the first aspect, in a fifth possible implementation, determining the near-frontal face image according to the face contour feature points of the target and reference two-dimensional face images comprises:
calculating the face contour curvature of the target two-dimensional face image from its face contour feature points, and the face contour curvature of the reference two-dimensional face image from its face contour feature points;
and determining the image with the smaller face contour curvature to be the near-frontal face image.
According to the third, fourth, or fifth possible implementation of the first aspect, in a sixth possible implementation, after adjusting the initial three-dimensional face mesh model according to the judgment result, the method further comprises:
warping the target two-dimensional face image according to its corresponding three-dimensional face mesh model, and warping the reference two-dimensional face image according to its corresponding three-dimensional face mesh model;
and fusing the warped target and reference two-dimensional face images so as to transfer the expression on the reference two-dimensional face image onto the target two-dimensional face image.
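The patent does not define "face contour curvature" precisely. One plausible realisation, assumed here, measures the mean turning angle along the chain of contour feature points and picks the image with the smaller value:

```python
import math

def contour_curvature(points):
    """Mean absolute turning angle along a chain of 2-D face-contour
    feature points: a simple discrete stand-in for 'contour curvature'."""
    total = 0.0
    for p0, p1, p2 in zip(points, points[1:], points[2:]):
        a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
        a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
        turn = abs(a2 - a1)
        total += min(turn, 2 * math.pi - turn)   # wrap to [0, pi]
    return total / (len(points) - 2)

def pick_near_frontal(contour_target, contour_reference):
    """Return 'target' or 'reference': the image whose contour curvature
    is smaller is taken as the near-frontal face image."""
    if contour_curvature(contour_target) < contour_curvature(contour_reference):
        return "target"
    return "reference"
```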
A second aspect of the present invention provides a three-dimensional face mesh model processing device, comprising:
an acquisition module, configured to obtain an initial three-dimensional face mesh model corresponding to an original two-dimensional face image, the initial three-dimensional face mesh model comprising second expression feature points corresponding to first expression feature points of the original two-dimensional face image;
a computing module, configured to calculate the camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):
min_P Σ_{i=1}^{N} ||P·X_i - x_i||^2        (1)
where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional face mesh model, x_i is the i-th first expression feature point on the original two-dimensional face image corresponding to X_i, and N is the number of first (and second) expression feature points;
and a judging module, configured to map the second expression feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, so as to judge the matching degree between the second and first expression feature points, and to adjust the initial three-dimensional face mesh model according to the judgment result.
In a first possible implementation of the second aspect, the judging module comprises:
a computing unit, configured to calculate the matching error between the second and first expression feature points according to formula (2):
Err = Σ_{i=1}^{N} w_i ||P·X_i - x_i||^2        (2)
where Err is the matching error and w_i is the weight coefficient of the i-th feature-point pair (X_i, x_i);
a judging unit, configured to judge whether the matching error is greater than or equal to a preset threshold;
and an adjustment unit, configured to adjust the initial three-dimensional face mesh model if it is, so that the matching error between the second expression feature points on the adjusted model and the first expression feature points is smaller than the preset threshold.
According to the first possible implementation of the second aspect, in a second possible implementation the adjustment unit comprises:
a computing subunit, configured to calculate the geodesic distance from each second expression feature point X_i to every mesh vertex X_j of the initial three-dimensional face mesh model, where i is not equal to j;
a first adjustment subunit, configured to fix the z coordinate of each second expression feature point X_i on the initial three-dimensional face mesh model and change its x and y coordinates with a first preset algorithm, yielding a third expression feature point X_i' corresponding to X_i;
a determining subunit, configured to determine, with the geodesic distances as constraints, the mesh vertices X_j' corresponding to each third expression feature point X_i' by a second preset algorithm;
and a second adjustment subunit, configured to adjust the initial three-dimensional face mesh model according to the third expression feature points X_i' and their corresponding mesh vertices X_j'.
According to the second aspect, or the first or second possible implementation of the second aspect, in a third possible implementation the original two-dimensional face image comprises a target two-dimensional face image and a reference two-dimensional face image;
and the acquisition module comprises:
an extraction unit, configured to extract the facial expression feature points of the target and reference two-dimensional face images, the facial expression feature points comprising face contour feature points and first expression feature points;
a first determining unit, configured to determine a near-frontal face image according to the face contour feature points of the target and reference two-dimensional face images, the near-frontal face image being either the target or the reference two-dimensional face image;
a first deformation unit, configured to deform a target neutral face model selected from a neutral face database according to the face contour feature points and first expression feature points of the near-frontal face image, obtaining a neutral face model corresponding to the near-frontal face image;
a second deformation unit, configured to deform each preset expression model in a preset expression library according to the neutral face model of the near-frontal face image, obtaining the expression models corresponding to the near-frontal face image;
a second determining unit, configured to determine first weight coefficients of the expression models according to the first expression feature points of the target two-dimensional face image, and second weight coefficients of the expression models according to the first expression feature points of the reference two-dimensional face image;
and a fusion unit, configured to fuse the expression models according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image.
According to the third possible implementation of the second aspect, in a fourth possible implementation the preset expression library comprises a generic blendshape model.
According to the third or fourth possible implementation of the second aspect, in a fifth possible implementation the first determining unit is specifically configured to:
calculate the face contour curvature of the target two-dimensional face image from its face contour feature points, and the face contour curvature of the reference two-dimensional face image from its face contour feature points;
and determine the image with the smaller face contour curvature to be the near-frontal face image.
According to the third, fourth, or fifth possible implementation of the second aspect, in a sixth possible implementation the device further comprises:
a deformation module, configured to warp the target two-dimensional face image according to its corresponding three-dimensional face mesh model, and to warp the reference two-dimensional face image according to its corresponding three-dimensional face mesh model;
and a fusion module, configured to fuse the warped target and reference two-dimensional face images so as to transfer the expression on the reference two-dimensional face image onto the target two-dimensional face image.
With the three-dimensional face mesh model processing method and device provided by the invention, after the initial three-dimensional face mesh model corresponding to the original two-dimensional face image is obtained, the second expression feature points on the model are mapped onto the original two-dimensional face image according to the model's camera parameters, the matching degree between the second and first expression feature points is judged, and the initial three-dimensional face mesh model is adjusted according to the judgment result. Because the matching degree between the initial model and the original image is judged using the camera parameters, the model is adjusted whenever the matching degree is low, ensuring that the adjusted three-dimensional face mesh model matches the original two-dimensional face image better.
Brief description of the drawings
Fig. 1 is a flowchart of the three-dimensional face mesh model processing method provided by Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the processing procedure of step 103 in the embodiment shown in Fig. 1;
Fig. 3 is a flowchart of the three-dimensional face mesh model processing method provided by Embodiment 2 of the present invention;
Fig. 4 is a schematic structural diagram of the three-dimensional face mesh model processing device provided by Embodiment 3 of the present invention;
Fig. 5 is a schematic structural diagram of the three-dimensional face mesh model processing device provided by Embodiment 4 of the present invention;
Fig. 6 is a schematic structural diagram of the processing device provided by Embodiment 5 of the present invention.
Detailed description of embodiments
Fig. 1 is a flowchart of the three-dimensional face mesh model processing method provided by Embodiment 1 of the present invention. As shown in Fig. 1, the method comprises:
Step 101: obtain an initial three-dimensional face mesh model corresponding to an original two-dimensional face image, the initial three-dimensional face mesh model comprising second expression feature points corresponding to first expression feature points of the original two-dimensional face image.
In this embodiment, the above processing method is performed by a processing apparatus, which is preferably integrated into a terminal device such as a PC or notebook computer and can be used to perform expression transfer between two input images. The method provided by this embodiment is applicable to adjusting a three-dimensional face mesh model obtained by prior-art means, and equally to adjusting one obtained by the method provided by the embodiment shown in Fig. 3; this embodiment is not limiting in that respect.
For simplicity of description, whichever of the above methods produced it, the three-dimensional face mesh model is referred to in this embodiment as the initial three-dimensional face mesh model. This initial model corresponds to one original two-dimensional face image. The method of this embodiment is preferably applied in expression-transfer scenarios, in which the facial expression on a reference two-dimensional face image needs to be transferred onto a target two-dimensional face image. To perform the transfer, three-dimensional face mesh models must first be reconstructed for the target and reference two-dimensional face images separately. Hence the original two-dimensional face image in this embodiment may be either the reference or the target two-dimensional face image, and accordingly the initial three-dimensional face mesh model may be the model corresponding to either; since the method applies equally to both, they are not distinguished below.
The processing apparatus first obtains the initial three-dimensional face mesh model corresponding to the original two-dimensional face image, the model comprising second expression feature points corresponding to the first expression feature points of the image. For example, a three-dimensional face mesh model obtained by prior-art means is input into the apparatus as the initial model, so that the apparatus performs subsequent adjustment according to the second expression feature points the model contains.
The first expression feature points of the original two-dimensional face image mainly capture the differing configurations of the facial features when different expressions are shown; that is, the shapes of features such as the nose, mouth, eyebrows, and eyes form the first expression feature points of the expression. The corresponding second expression feature points may be marked, manually or automatically, on the initial three-dimensional face mesh model.
Step 102: calculate the camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):
min_P Σ_{i=1}^{N} ||P·X_i - x_i||^2        (1)
where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional face mesh model, x_i is the i-th first expression feature point on the original two-dimensional face image corresponding to X_i, and N is the number of first (and second) expression feature points.
Step 103: map the second expression feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, so as to judge the matching degree between the second and first expression feature points, and adjust the initial three-dimensional face mesh model according to the judgment result.
In this embodiment, to judge whether the initial three-dimensional face mesh model matches its corresponding original two-dimensional face image, the second expression feature points on the model must first be mapped onto the image; the matching error between these second expression feature points and the first expression feature points on the image is then judged, after which the initial model is adjusted according to the judgment result.
Mapping the second expression feature points onto the corresponding original two-dimensional face image requires a parameter, namely the camera parameter, generally represented as a parameter matrix. Specifically, the camera parameter matrix can be obtained by solving formula (1), which requires the matrix to make the distance between the second and first expression feature points as small as possible. Once obtained, the matrix is used to map the second expression feature points onto the corresponding two-dimensional face image so that the matching degree can be judged and the initial model adjusted accordingly.
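Once the camera parameter matrix is known, the mapping itself is a single matrix product. A sketch assuming an affine 2x4 form of P applied to homogeneous points (an assumption; the patent fixes neither the camera model nor the homogeneous convention):

```python
import numpy as np

def project_feature_points(P, X3d):
    """Map 3-D second expression feature points onto the 2-D image plane
    with the camera parameter matrix P, as used in step 103.
    P: (2, 4) affine camera matrix; X3d: (N, 3) points; returns (N, 2)."""
    X3d = np.asarray(X3d, dtype=float)
    Xh = np.hstack([X3d, np.ones((X3d.shape[0], 1))])
    return Xh @ np.asarray(P, dtype=float).T

# A pure-translation camera shifts each point's (x, y) by (5, 7).
uv = project_feature_points([[1, 0, 0, 5], [0, 1, 0, 7]], [[2, 3, 4]])
```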
In this embodiment, the matching degree between the initial three-dimensional face mesh model and the original two-dimensional face image is judged according to the camera parameters, so the initial model is adjusted when the matching degree is low, ensuring that the adjusted model matches the original two-dimensional face image better.
Further, Fig. 2 is a flowchart of the processing procedure of step 103 in the embodiment shown in Fig. 1. As shown in Fig. 2, step 103 of Fig. 1 (mapping the second expression feature points onto the original two-dimensional face image according to the calculated camera parameter matrix to judge the matching degree between the second and first expression feature points, and adjusting the initial three-dimensional face mesh model according to the judgment result) comprises:
Step 201: calculate the matching error between the second and first expression feature points according to formula (2):
Err = Σ_{i=1}^{N} w_i ||P·X_i - x_i||^2        (2)
where Err is the matching error, and w_i is the weight coefficient of the i-th pair of feature points X_i and x_i;
Step 202: judge whether the matching error is greater than or equal to a preset threshold; if so, perform step 203, otherwise end;
After the camera parameters of the initial three-dimensional face mesh model are obtained, the second expression feature points of the model are mapped onto the original two-dimensional face image according to the calculated camera parameter matrix, so that the matching error between the second expression feature points on the initial three-dimensional face mesh model and the first expression feature points on the original two-dimensional face image can be judged according to formula (2). In formula (2), because the expression depth and pixel grayscale of each pair of feature points differ, each pair of expression feature points has its own weight coefficient.
Then, judge whether the matching error is greater than or equal to the preset threshold; if it is, the initial three-dimensional face mesh model needs to be adjusted according to steps 203 to 206, otherwise no adjustment is needed.
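The weighted error of formula (2) and the threshold test of step 202 can be sketched as follows; the particular weights and threshold value used in the test are assumptions of this sketch.

```python
import numpy as np

def matching_error(P, X3d, x2d, w):
    """Formula (2): Err = sum_i w_i * ||P @ [X_i; 1] - x_i||^2.

    P: (2, 4) camera parameter matrix; X3d: (N, 3); x2d: (N, 2); w: (N,) weights.
    """
    Xh = np.hstack([X3d, np.ones((X3d.shape[0], 1))])
    residuals = Xh @ P.T - x2d                    # (N, 2) projection residuals
    return float(np.sum(w * np.sum(residuals ** 2, axis=1)))

def needs_adjustment(err, threshold):
    # Step 202: adjust the model iff Err >= preset threshold
    return err >= threshold
```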
Step 203: calculate the geodesic distance from each second expression feature point X_i to each mesh vertex X_j of the initial three-dimensional face mesh model, where i is not equal to j;
Step 204: fix the z coordinate of the second expression feature point X_i on the initial three-dimensional face mesh model, and change the x and y coordinates of X_i with a first preset algorithm, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i;
Step 205: with the geodesic distances as a constraint, determine each mesh vertex X_j' corresponding to the third expression feature point X_i' with a second preset algorithm;
Step 206: adjust the initial three-dimensional face mesh model according to the third expression feature point X_i' and the corresponding mesh vertices X_j'.
When the matching error is judged to be greater than or equal to the preset threshold, the initial three-dimensional face mesh model needs to be adjusted. Specifically, the geodesic distance from each second expression feature point to every other mesh vertex of the initial three-dimensional face mesh model is calculated first. Since the initial three-dimensional face mesh model is a three-dimensional mesh built from individual cells, this geodesic distance can be understood as the length of the shortest sequence of mesh edges from the current second expression feature point to a given mesh vertex, i.e., a path distance.
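A common approximation of the geodesic distance in step 203 is the shortest path along mesh edges, computed with Dijkstra's algorithm using Euclidean edge lengths. The mesh representation and function names below are illustrative.

```python
import heapq

def edge_graph(vertices, faces):
    """Build an adjacency list {v: [(u, edge_length), ...]} from triangle faces."""
    adj = {i: [] for i in range(len(vertices))}
    def dist(a, b):
        return sum((vertices[a][k] - vertices[b][k]) ** 2 for k in range(3)) ** 0.5
    for f in faces:
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            d = dist(a, b)
            adj[a].append((b, d))
            adj[b].append((a, d))
    return adj

def geodesic_from(adj, src):
    """Dijkstra: shortest edge-path distance from src to every mesh vertex."""
    dist = {v: float('inf') for v in adj}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue
        for u, w in adj[v]:
            nd = d + w
            if nd < dist[u]:
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return dist
```

Edge-path distance slightly overestimates the true surface geodesic, but is a standard, simple stand-in on triangle meshes.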
Afterwards, the z coordinate of the second expression feature point X_i on the initial three-dimensional face mesh model is fixed, and the x and y coordinates of X_i are changed with a first preset algorithm to obtain the corresponding third expression feature point X_i'; the first preset algorithm is, for example, the Nelder-Mead simplex algorithm.
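Step 204 can be sketched as follows: keep the z coordinate of a second expression feature point fixed and search its (x, y) with the Nelder-Mead simplex method, so that its projection under P moves onto the corresponding first feature point. The objective function and names are choices of this sketch; SciPy's Nelder-Mead implementation stands in for the "first preset algorithm".

```python
import numpy as np
from scipy.optimize import minimize

def refine_feature_point(P, X_i, x_i):
    """Step 204 sketch: fix z, optimize (x, y) with Nelder-Mead.

    P: (2, 4) camera parameter matrix; X_i: (3,) second expression feature point;
    x_i: (2,) corresponding first expression feature point on the image.
    Returns the third expression feature point X_i' with the same z.
    """
    z = X_i[2]
    def objective(xy):
        Xh = np.array([xy[0], xy[1], z, 1.0])
        return float(np.sum((P @ Xh - x_i) ** 2))   # reprojection error of this pair
    res = minimize(objective, X_i[:2], method='Nelder-Mead',
                   options={'xatol': 1e-10, 'fatol': 1e-12})
    return np.array([res.x[0], res.x[1], z])
```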
Then, with the geodesic distances as a constraint, each mesh vertex X_j' corresponding to the third expression feature point X_i' is determined with a second preset algorithm, and the initial three-dimensional face mesh model is adjusted according to the third expression feature point X_i' and the corresponding mesh vertices X_j'. The second preset algorithm is, for example, radial basis functions or the Laplacian mesh deformation algorithm.
It should be understood that the geodesic distances serve as the constraint because the mesh vertices of non-feature points around a second expression feature point should, as far as possible, keep their relative positions to the feature point: after the second expression feature point is changed into the third expression feature point, these vertices should still follow the third expression feature point according to the geodesic distances.
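One simplified way to realize step 205 under the geodesic-distance constraint is to propagate each feature-point displacement to the surrounding non-feature vertices with a kernel of the precomputed geodesic distance, so that nearby vertices follow the moved feature point and distant vertices stay put. This Gaussian-kernel propagation is a stand-in for the radial-basis-function or Laplacian deformation named in the text; the kernel and its width are assumptions of this sketch.

```python
import numpy as np

def propagate_displacements(vertices, feat_idx, displacements, geo_dist, sigma=1.0):
    """Move non-feature vertices by geodesic-weighted feature displacements.

    vertices: (V, 3) mesh vertices; feat_idx: list of feature vertex indices;
    displacements: (F, 3) displacement of each feature point (X_i' - X_i);
    geo_dist[i][j]: geodesic distance from feature point i to vertex j.
    """
    new_vertices = np.asarray(vertices, dtype=float).copy()
    disp = np.asarray(displacements, dtype=float)
    for j in range(len(vertices)):
        if j in feat_idx:
            continue
        # Gaussian falloff of influence with geodesic distance
        w = np.exp(-np.array([geo_dist[i][j] for i in range(len(feat_idx))]) ** 2
                   / (2.0 * sigma ** 2))
        new_vertices[j] += (w[:, None] * disp).sum(axis=0)
    for k, i in enumerate(feat_idx):           # feature points move exactly
        new_vertices[i] = np.asarray(vertices[i], dtype=float) + disp[k]
    return new_vertices
```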
In this embodiment, when the matching degree between the initial three-dimensional face mesh model and the corresponding two-dimensional face image is low, the model is adjusted with the above geodesic distances as a constraint. While the expression feature points are adjusted, this helps ensure that the other, non-expression feature points keep their pre-adjustment positions relative to the corresponding expression feature points, so that the adjusted three-dimensional face mesh model matches the corresponding two-dimensional face image better.
Fig. 3 is a flowchart of the three-dimensional face mesh model processing method provided by embodiment two of the present invention. As shown in Fig. 3, this method improves on the prior-art process for obtaining the initial three-dimensional face mesh model. In the prior-art scheme that builds, from a facial expression database, a three-dimensional face mesh model matching the original two-dimensional face image, each facial expression model in the database is built from statistics over individuals of different ages, genders, face shapes, moods, expressions and so on, so there are obvious individual differences; if the expression in the original two-dimensional face image is beyond the scope of the database, a matching three-dimensional face mesh model cannot be obtained from it. For this reason, the method provided by this embodiment is used to build the three-dimensional face mesh model corresponding to the original two-dimensional face image. The original two-dimensional face image described in the embodiments of Fig. 1 or Fig. 2 specifically comprises, in this embodiment, a target two-dimensional face image and a reference two-dimensional face image; in the application scenario of facial expression transfer, the facial expression in the reference two-dimensional face image needs to be transferred to the target two-dimensional face image.
The method provided by this embodiment comprises:
Step 301: extract the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image, the facial expression feature points comprising face contour feature points and the first expression feature points;
Step 302: determine a near-frontal face image according to the face contour feature points of the target two-dimensional face image and of the reference two-dimensional face image, the near-frontal face image being either the target two-dimensional face image or the reference two-dimensional face image;
Step 303: deform a target neutral face model determined from a neutral face database according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain a neutral face model corresponding to the near-frontal face image;
Step 304: deform each preset expression model in a preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;
Step 305: determine a first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional face image, and a second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional face image;
Step 306: merge the expression models according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and merge the expression models according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image.
This method may still be performed by the above processing device; the two input images are now called the target two-dimensional face image and the reference two-dimensional face image, and in the processing of facial expression transfer the facial expression in the reference two-dimensional face image needs to be transferred onto the target two-dimensional face image.
First, the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image are extracted respectively. A mature algorithm such as the Active Shape Model (hereinafter ASM) can be used to detect the facial expression feature points accurately. The facial expression feature points comprise face contour feature points and first expression feature points: the face contour feature points are points from which the facial contour can be clearly identified, while the first expression feature points mainly capture how the facial organs change when different expressions are shown, i.e., the motion and shape of the nose, mouth, eyebrows, eyes and so on form the first expression feature points of the facial expression.
In this embodiment, because the face contour feature points reflect the orientation of the face in the corresponding image, a near-frontal face image can be selected from the two images according to the face contour feature points in the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image respectively. Specifically, in this embodiment the face contour curvature of the target two-dimensional face image is calculated from its face contour feature points, the face contour curvature of the reference two-dimensional face image is calculated from its face contour feature points, and the image with the smaller face contour curvature is then selected as the near-frontal face image; a smaller face contour curvature means the face is oriented more toward the front.
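The curvature comparison above can be sketched with a standard discrete estimate: the curvature at each contour point is that of the circle through three consecutive points, and the image with the smaller mean curvature is kept. The three-point formula is a common approximation; the selection rule follows the text, but the function names are illustrative.

```python
import numpy as np

def mean_contour_curvature(points):
    """points: (N, 2) ordered face contour feature points; mean discrete curvature."""
    curvatures = []
    for i in range(1, len(points) - 1):
        a, b, c = points[i - 1], points[i], points[i + 1]
        # curvature = 4 * triangle area / product of the three side lengths
        area2 = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        la, lb, lc = np.linalg.norm(c - b), np.linalg.norm(c - a), np.linalg.norm(b - a)
        if la * lb * lc > 1e-12:
            curvatures.append(2.0 * area2 / (la * lb * lc))
    return float(np.mean(curvatures))

def near_frontal(target_contour, reference_contour):
    """Return which image is near-frontal: the one with smaller contour curvature."""
    kt = mean_contour_curvature(target_contour)
    kr = mean_contour_curvature(reference_contour)
    return 'target' if kt <= kr else 'reference'
```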
Then, the target neutral face model determined from the neutral face database is deformed according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain the neutral face model of the near-frontal face image. For example, if the reference two-dimensional face image is determined to be the near-frontal face image, the target neutral face model determined from the neutral face database is deformed, e.g. by scaling and rotation, according to the face contour feature points and the first expression feature points of the reference two-dimensional face image, to obtain the neutral face model corresponding to it. The neutral face database contains multiple three-dimensional neutral face models covering individual differences such as gender, age and race; the target neutral face model determined from the database may be a randomly selected three-dimensional neutral face model, or a weighted fusion of all or some of the three-dimensional neutral face models in the database.
In this embodiment, the neutral face model obtained for the near-frontal face image has a face contour basically consistent with that image, but does not yet carry detailed expression features. This embodiment therefore uses the neutral face model as an intermediary for the subsequent expression-model processing.
Next, each preset expression model in the preset expression library is deformed according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image. Specifically, the preset expression library is preferably a generic blendshape model containing multiple different expression models; in this embodiment, the generic blendshape model is used to add expression features to the above neutral face model. The multiple expression models are deformed according to the neutral face model of the near-frontal face image, to obtain the blendshape expression models corresponding to it; each blendshape expression model contains both its own expression features and the face contour features of the near-frontal face image.
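One common way to realize the deformation of step 304 is delta transfer: each generic blendshape expression is carried over to the person-specific neutral model as B'_k = N' + (B_k − N), where N is the generic neutral shape and N' the neutral face model fitted to the near-frontal image. The delta-transfer rule is an assumption of this sketch, not quoted from the patent.

```python
import numpy as np

def transfer_blendshapes(generic_neutral, generic_blendshapes, fitted_neutral):
    """Carry generic expression offsets onto the fitted neutral face model.

    generic_neutral, fitted_neutral: (V, 3) vertex arrays; generic_blendshapes:
    list of (V, 3) arrays. Returns the person-specific expression models.
    """
    return [fitted_neutral + (B - generic_neutral) for B in generic_blendshapes]
```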
Then, the first weight coefficient of each blendshape expression model corresponding to the target two-dimensional face image is determined according to the first expression feature points of the target two-dimensional face image, and the second weight coefficient of each blendshape expression model corresponding to the reference two-dimensional face image is determined according to the first expression feature points of the reference two-dimensional face image. That is, because the expression features on each blendshape expression model differ, the proportion of each blendshape expression model must be determined separately for the target two-dimensional face image and for the reference two-dimensional face image. The blendshape expression models are then merged according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image. Merging means superimposing the blendshape expression models according to their respective weight coefficients, i.e., the organ in each blendshape expression model corresponding to each first expression feature point is superimposed according to the weight coefficient of that blendshape expression model.
As an example of how the weight coefficients are determined, take the first weight coefficient: for a given first expression feature point in the target two-dimensional face image, traverse in turn the feature points of the organ corresponding to that feature point in each blendshape expression model (these feature points may be manually labeled or initially delimited, e.g. the eyebrows), and then determine the weight coefficient of each blendshape expression model according to how close the organ's feature points are to the first expression feature point.
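A hedged sketch of steps 305 and 306: solve the blendshape weights from the first expression feature points by least squares over the feature vertices, then fuse the models as neutral + Σ_k w_k (B_k − neutral). The least-squares formulation is one standard choice; the patent only requires that each weight reflect how close a model's features are to the image features.

```python
import numpy as np

def solve_weights(neutral_feats, blendshape_feats, image_feats):
    """Least-squares blendshape weights from feature points.

    neutral_feats: (M, 2) feature points of the neutral model (projected);
    blendshape_feats: list of (M, 2), one per blendshape expression model;
    image_feats: (M, 2) first expression feature points of the image.
    """
    A = np.stack([(B - neutral_feats).ravel() for B in blendshape_feats], axis=1)
    b = (image_feats - neutral_feats).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def fuse(neutral, blendshapes, w):
    """Step 306: superimpose the expression models with their weight coefficients."""
    return neutral + sum(wk * (B - neutral) for wk, B in zip(w, blendshapes))
```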
After step 306 is performed and the three-dimensional face mesh models corresponding to the target two-dimensional face image and to the reference two-dimensional face image are obtained, optionally, the method shown in Fig. 1 or Fig. 2 may also be performed to adjust the obtained three-dimensional face mesh models.
Optionally, after step 306 is performed, or after the obtained three-dimensional face mesh models are adjusted according to the method shown in Fig. 1 or Fig. 2, the following steps may also be performed to achieve facial expression transfer.
Step 307: deform the target two-dimensional face image according to the three-dimensional face mesh model corresponding to it, and deform the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to it;
Step 308: merge the deformed target and reference two-dimensional face images, so as to transfer the expression in the reference two-dimensional face image onto the target two-dimensional face image.
In this embodiment, the facial expression feature points of the target and reference two-dimensional face images are extracted respectively, and a near-frontal face image is selected from the two images according to the face contour feature points among them; then, according to the near-frontal face image, the target neutral face model determined from the neutral face database is deformed to obtain a neutral face model, the neutral face database not depending on any specific personal features. Afterwards, each blendshape expression model in the generic blendshape model is deformed according to this neutral face model; the first and second weight coefficients of each blendshape expression model are then determined from the first expression feature points of the target and reference two-dimensional face images respectively, and the blendshape expression models are merged according to these different weight coefficients, finally yielding the three-dimensional face mesh models corresponding to the target and reference two-dimensional face images respectively. Because the neutral face database and the generic blendshape model both avoid individual differences between people, the defect in the prior art that building a three-dimensional face mesh model from a facial expression database easily fails is overcome.
Fig. 4 is a schematic structural diagram of the three-dimensional face mesh model processing device provided by embodiment three of the present invention. As shown in Fig. 4, the processing device comprises:
an acquisition module 11, configured to obtain an initial three-dimensional face mesh model corresponding to an original two-dimensional face image, the initial three-dimensional face mesh model comprising second expression feature points corresponding to the first expression feature points of the original two-dimensional face image;
a calculation module 12, configured to calculate the camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):
\min_P \sum_{i=1}^{N} \| P \cdot X_i - x_i \|^2 \qquad (1)
where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional face mesh model, x_i is the i-th first expression feature point on the original two-dimensional face image corresponding to the second expression feature point X_i, and N is the number of first and second expression feature points;
a judgment module 13, configured to map the second expression feature points of the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, so as to judge the matching degree between the second expression feature points and the first expression feature points, and to adjust the initial three-dimensional face mesh model according to the judgment result.
The processing device of this embodiment may be used to perform the technical scheme of the method embodiment shown in Fig. 1; its implementation principle and technical effect are similar and are not repeated here.
Fig. 5 is a schematic structural diagram of the three-dimensional face mesh model processing device provided by embodiment four of the present invention. As shown in Fig. 5, on the basis of the embodiment shown in Fig. 4, the judgment module 13 comprises:
a calculation unit 131, configured to calculate the matching error between the second expression feature points and the first expression feature points according to formula (2):
Err = \sum_{i=1}^{N} w_i \| P \cdot X_i - x_i \|^2 \qquad (2)
where Err is the matching error, and w_i is the weight coefficient of the i-th pair of feature points X_i and x_i;
a judgment unit 132, configured to judge whether the matching error is greater than or equal to a preset threshold;
an adjustment unit 133, configured to, if it is, adjust the initial three-dimensional face mesh model, so that the matching error between the second expression feature points on the adjusted three-dimensional face mesh model and the first expression feature points is less than the preset threshold.
Further, the adjustment unit 133 comprises:
a calculation subunit 1331, configured to calculate the geodesic distance from the second expression feature point X_i to each mesh vertex X_j of the initial three-dimensional face mesh model, where i is not equal to j;
a first adjustment subunit 1332, configured to fix the z coordinate of the second expression feature point X_i on the initial three-dimensional face mesh model, and change the x and y coordinates of X_i with a first preset algorithm, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i;
a determination subunit 1333, configured to determine, with the geodesic distances as a constraint, each mesh vertex X_j' corresponding to the third expression feature point X_i' with a second preset algorithm;
a second adjustment subunit 1334, configured to adjust the initial three-dimensional face mesh model according to the third expression feature point X_i' and the corresponding mesh vertices X_j'.
Further, the original two-dimensional face image comprises a target two-dimensional face image and a reference two-dimensional face image;
the acquisition module 11 comprises:
an extraction unit 111, configured to extract the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image, the facial expression feature points comprising face contour feature points and the first expression feature points;
a first determination unit 112, configured to determine a near-frontal face image according to the face contour feature points of the target two-dimensional face image and of the reference two-dimensional face image, the near-frontal face image being either the target two-dimensional face image or the reference two-dimensional face image;
a first deformation unit 113, configured to deform the target neutral face model determined from the neutral face database according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain the neutral face model corresponding to the near-frontal face image;
a second deformation unit 114, configured to deform each preset expression model in the preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;
a second determination unit 115, configured to determine the first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional face image, and the second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional face image;
a merging unit 116, configured to merge the expression models according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and to merge the expression models according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image.
Specifically, the preset expression library comprises a generic blendshape model.
Further, the first determination unit 112 is specifically configured to:
calculate the face contour curvature of the target two-dimensional face image according to its face contour feature points, and calculate the face contour curvature of the reference two-dimensional face image according to its face contour feature points; and
determine the image with the smaller face contour curvature as the near-frontal face image.
Further, the processing device also comprises:
a deformation module 21, configured to deform the target two-dimensional face image according to the three-dimensional face mesh model corresponding to it, and to deform the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to it; and
a merging module 22, configured to merge the deformed target and reference two-dimensional face images, so as to transfer the expression in the reference two-dimensional face image onto the target two-dimensional face image.
The processing device of this embodiment may be used to perform the technical scheme of the method embodiment shown in Fig. 2 or Fig. 3; its implementation principle and technical effect are similar and are not repeated here.
Fig. 6 is a schematic structural diagram of the processing device provided by embodiment five of the present invention. As shown in Fig. 6, the processing device comprises:
a memory 31 and a processor 32 connected to the memory 31, where the memory 31 is configured to store a set of program codes, and the processor 32 is configured to call the program codes stored in the memory 31 to perform, as in the three-dimensional face mesh model processing method shown in Fig. 1: obtaining an initial three-dimensional face mesh model corresponding to an original two-dimensional face image, the initial three-dimensional face mesh model comprising second expression feature points corresponding to the first expression feature points of the original two-dimensional face image; and calculating the camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):
\min_P \sum_{i=1}^{N} \| P \cdot X_i - x_i \|^2 \qquad (1)
where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional face mesh model, x_i is the i-th first expression feature point on the original two-dimensional face image corresponding to the second expression feature point X_i, and N is the number of first and second expression feature points; and mapping the second expression feature points of the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, so as to judge the matching degree between the second expression feature points and the first expression feature points, and adjusting the initial three-dimensional face mesh model according to the judgment result.
Further, the processor 32 is also configured to calculate the matching error between the second expression feature points and the first expression feature points according to formula (2):
Err = \sum_{i=1}^{N} w_i \| P \cdot X_i - x_i \|^2 \qquad (2)
where Err is the matching error, and w_i is the weight coefficient of the i-th pair of feature points X_i and x_i;
judge whether the matching error is greater than or equal to a preset threshold; and, if it is, adjust the initial three-dimensional face mesh model, so that the matching error between the second expression feature points on the adjusted three-dimensional face mesh model and the first expression feature points is less than the preset threshold.
Further, the processor 32 is also configured to calculate the geodesic distance from the second expression feature point X_i to each mesh vertex X_j of the initial three-dimensional face mesh model, where i is not equal to j; fix the z coordinate of the second expression feature point X_i on the initial three-dimensional face mesh model, and change the x and y coordinates of X_i with a first preset algorithm, to obtain a third expression feature point X_i' corresponding to the second expression feature point X_i; determine, with the geodesic distances as a constraint, each mesh vertex X_j' corresponding to the third expression feature point X_i' with a second preset algorithm; and adjust the initial three-dimensional face mesh model according to the third expression feature point X_i' and the corresponding mesh vertices X_j'.
Further, the original two-dimensional face image comprises a target two-dimensional face image and a reference two-dimensional face image, and the processor 32 is also configured to: extract the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image, the facial expression feature points comprising face contour feature points and the first expression feature points; determine a near-frontal face image according to the face contour feature points of the target two-dimensional face image and of the reference two-dimensional face image, the near-frontal face image being either the target two-dimensional face image or the reference two-dimensional face image; deform the target neutral face model determined from the neutral face database according to the face contour feature points and the first expression feature points of the near-frontal face image, to obtain the neutral face model corresponding to the near-frontal face image; deform each preset expression model in the preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image; determine the first weight coefficient of each expression model according to the first expression feature points of the target two-dimensional face image, and the second weight coefficient of each expression model according to the first expression feature points of the reference two-dimensional face image; and merge the expression models according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and merge the expression models according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image.
Further, the processor 32 is also configured to calculate the face contour curvature of the target two-dimensional face image according to its face contour feature points, calculate the face contour curvature of the reference two-dimensional face image according to its face contour feature points, and determine the image with the smaller face contour curvature as the near-frontal face image.
Further, the processor 32 is also configured to deform the target two-dimensional face image according to the three-dimensional face mesh model corresponding to the target two-dimensional face image, deform the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to the reference two-dimensional face image, and merge the deformed target and reference two-dimensional face images, so as to transfer the expression of the reference two-dimensional face image onto the target two-dimensional face image.
One of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by program instructions executed on the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments merely illustrate, and do not limit, the technical solutions of the present invention. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (14)

1. A three-dimensional face mesh model processing method, characterized in that it comprises:
obtaining an initial three-dimensional face mesh model corresponding to an original two-dimensional face image, the initial three-dimensional face mesh model comprising second expression feature points corresponding to the first expression feature points of the original two-dimensional face image;
calculating a camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):
$\min \sum_{i=1}^{N} \left\| P \cdot X_i - x_i \right\|^2 \qquad (1)$
where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional face mesh model, x_i is the i-th first expression feature point on the original two-dimensional face image corresponding to the second expression feature point X_i, and N is the number of first (and second) expression feature points; and
mapping the second expression feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, so as to judge the degree of matching between the second expression feature points and the first expression feature points, and adjusting the initial three-dimensional face mesh model according to the judgment result.
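The minimization in formula (1) becomes an ordinary linear least-squares problem if an affine camera is assumed (the patent does not fix the camera model, so this is an assumption); the function name and array shapes below are illustrative, not from the patent:

```python
import numpy as np

def fit_camera_matrix(X, x):
    """Fit a 2x4 affine camera matrix P minimizing sum_i ||P @ Xh_i - x_i||^2,
    the objective of formula (1) under an affine-camera assumption.

    X: (N, 3) second expression feature points on the 3D mesh model.
    x: (N, 2) corresponding first expression feature points on the 2D image.
    Returns P of shape (2, 4).
    """
    N = X.shape[0]
    Xh = np.hstack([X, np.ones((N, 1))])        # homogeneous 3D points, (N, 4)
    # Least squares: Xh @ P.T ~= x, solved column-wise for P.T
    Pt, *_ = np.linalg.lstsq(Xh, x, rcond=None)
    return Pt.T
```

With at least four non-degenerate point pairs, `np.linalg.lstsq` recovers P exactly when the projections are noise-free, and gives the least-squares fit otherwise.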
2. The method according to claim 1, characterized in that mapping the second expression feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, so as to judge the degree of matching between the second expression feature points and the first expression feature points, and adjusting the initial three-dimensional face mesh model according to the judgment result, comprises:
calculating the matching error between the second expression feature points and the first expression feature points according to formula (2):
$\mathrm{Err} = \sum_{i=1}^{N} w_i \left\| P \cdot X_i - x_i \right\|^2 \qquad (2)$
where Err is the matching error and w_i is the weight coefficient of the i-th feature point pair X_i and x_i;
judging whether the matching error is greater than or equal to a preset threshold; and
if it is greater than or equal to the preset threshold, adjusting the initial three-dimensional face mesh model so that the matching error between the second expression feature points on the adjusted three-dimensional face mesh model and the first expression feature points is less than the preset threshold.
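The weighted matching error of formula (2) and the threshold test can be sketched directly (again under an affine-camera assumption for P; function names and the threshold value are illustrative):

```python
import numpy as np

def matching_error(P, X, x, w):
    """Weighted reprojection error of formula (2):
    Err = sum_i w_i * ||P @ Xh_i - x_i||^2.

    P: (2, 4) camera parameter matrix; X: (N, 3) 3D feature points;
    x: (N, 2) 2D feature points; w: (N,) per-pair weight coefficients.
    """
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    proj = Xh @ P.T                              # mapped feature points, (N, 2)
    return float(np.sum(w * np.sum((proj - x) ** 2, axis=1)))

def needs_adjustment(P, X, x, w, threshold):
    """True if the mesh model must be adjusted (Err >= preset threshold)."""
    return matching_error(P, X, x, w) >= threshold
```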
3. The method according to claim 2, characterized in that adjusting the initial three-dimensional face mesh model comprises:
calculating the geodesic distance from the second expression feature point X_i to each grid vertex X_j of the initial three-dimensional face mesh model, where i is not equal to j;
fixing the z coordinate of the second expression feature point X_i on the initial three-dimensional face mesh model, and changing the x and y coordinates of X_i by a first preset algorithm to obtain a third expression feature point X_i' corresponding to X_i;
with the geodesic distance as a constraint, determining, by a second preset algorithm, each grid vertex X_j' corresponding to the third expression feature point X_i'; and
adjusting the initial three-dimensional face mesh model according to the third expression feature point X_i' and its corresponding grid vertices X_j'.
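The claim leaves both "preset algorithms" unspecified. One plausible reading — move the feature vertex in x and y with z fixed, and propagate the displacement to the other vertices with a weight that decays with geodesic distance — can be sketched as follows. The Gaussian falloff, `sigma`, and the assumption that geodesic distances are precomputed are all choices of this sketch, not of the patent:

```python
import numpy as np

def adjust_vertices(V, feat_idx, new_xy, geo_dist, sigma=1.0):
    """Move the feature vertex X_i to a new (x, y) position keeping its z
    coordinate fixed, and spread the displacement over the remaining grid
    vertices X_j with a Gaussian falloff in precomputed geodesic distance
    (a stand-in for the patent's unspecified 'second preset algorithm').

    V: (M, 3) mesh vertices; feat_idx: index of the feature vertex X_i;
    new_xy: target (x, y) of X_i'; geo_dist: (M,) geodesic distances from X_i.
    """
    V = V.copy()
    delta = np.zeros(3)
    delta[:2] = new_xy - V[feat_idx, :2]        # z component stays zero: z fixed
    weights = np.exp(-(geo_dist ** 2) / (2 * sigma ** 2))   # weight 1 at X_i
    V += weights[:, None] * delta               # distance-weighted displacement
    return V
```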
4. The method according to any one of claims 1 to 3, characterized in that the original two-dimensional face image comprises a target two-dimensional face image and a reference two-dimensional face image, and
obtaining the initial three-dimensional face mesh model corresponding to the original two-dimensional face image comprises:
extracting the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image, the facial expression feature points comprising face contour feature points and the first expression feature points;
determining a near-frontal face image according to the face contour feature points of the target two-dimensional face image and the face contour feature points of the reference two-dimensional face image, the near-frontal face image being either the target two-dimensional face image or the reference two-dimensional face image;
deforming a target neutral face model selected from a neutral face database according to the face contour feature points and first expression feature points of the near-frontal face image, to obtain a neutral face model corresponding to the near-frontal face image;
deforming each preset expression model in a preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;
determining a first weight coefficient for each expression model according to the first expression feature points of the target two-dimensional face image, and a second weight coefficient for each expression model according to the first expression feature points of the reference two-dimensional face image; and
merging the expression models according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and merging the expression models according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image.
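The patent does not state the merging formula; the standard delta-blendshape form (result = neutral + sum of weighted offsets from neutral) is a natural fit for a preset expression library and is assumed in this sketch:

```python
import numpy as np

def blend_expressions(neutral, expressions, weights):
    """Merge expression models into one mesh using weight coefficients, in the
    usual delta-blendshape form: result = neutral + sum_k w_k * (B_k - neutral).

    neutral: (M, 3) neutral face model vertices.
    expressions: (K, M, 3) expression models from the preset expression library.
    weights: (K,) weight coefficients for the target (or reference) image.
    """
    deltas = expressions - neutral[None, :, :]   # per-expression vertex offsets
    return neutral + np.tensordot(weights, deltas, axes=1)
```

Calling it once with the first weight coefficients and once with the second yields the two three-dimensional face mesh models of the claim.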
5. The method according to claim 4, characterized in that the preset expression library comprises a generic blendshape model.
6. The method according to claim 4 or 5, characterized in that determining the near-frontal face image according to the face contour feature points of the target two-dimensional face image and the face contour feature points of the reference two-dimensional face image comprises:
calculating the face contour curvature of the target two-dimensional face image according to its face contour feature points, and calculating the face contour curvature of the reference two-dimensional face image according to its face contour feature points; and
determining the image with the smaller face contour curvature as the near-frontal face image.
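The patent does not say how the face contour curvature is computed from the feature points; a discrete turning-angle estimate over the ordered contour polyline is one plausible choice, used in this sketch (function names and the estimator itself are assumptions):

```python
import numpy as np

def contour_curvature(pts):
    """Mean discrete curvature of an ordered 2D face-contour polyline,
    estimated from the turning angle at each interior feature point divided
    by the local segment length.  pts: (K, 2) ordered contour feature points."""
    a, b, c = pts[:-2], pts[1:-1], pts[2:]
    u, v = b - a, c - b
    cross = u[:, 0] * v[:, 1] - u[:, 1] * v[:, 0]     # z of 2D cross product
    dot = np.einsum('ij,ij->i', u, v)
    ang = np.arctan2(cross, dot)                      # signed turning angle
    seg = 0.5 * (np.linalg.norm(u, axis=1) + np.linalg.norm(v, axis=1))
    return float(np.mean(np.abs(ang) / np.maximum(seg, 1e-12)))

def pick_near_frontal(target_pts, ref_pts):
    """Return 'target' or 'reference': the image whose contour curvature is
    smaller is taken as the near-frontal face image, as in the claim."""
    return ('target'
            if contour_curvature(target_pts) <= contour_curvature(ref_pts)
            else 'reference')
```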
7. The method according to any one of claims 4 to 6, characterized in that, after adjusting the initial three-dimensional face mesh model according to the judgment result, the method further comprises:
deforming the target two-dimensional face image according to the three-dimensional face mesh model corresponding to the target two-dimensional face image, and deforming the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to the reference two-dimensional face image; and
merging the deformed target and reference two-dimensional face images, so as to transfer the expression of the reference two-dimensional face image onto the target two-dimensional face image.
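The claim does not specify how the two deformed images are merged. Assuming the mesh-driven warping has already been done upstream, one plausible reading of "merge" is an alpha blend restricted to the face region, sketched below (the mask-based blend and the `alpha` parameter are assumptions of this sketch):

```python
import numpy as np

def transfer_expression(target_img, ref_img_warped, face_mask, alpha=1.0):
    """Merge the deformed reference image into the deformed target image so
    that the reference expression appears on the target face.

    target_img, ref_img_warped: (H, W, 3) float image arrays, already deformed
    according to their respective 3D face mesh models.
    face_mask: (H, W) values in [0, 1] marking the face region to blend.
    """
    m = (alpha * face_mask)[:, :, None]          # broadcast mask over channels
    return (1.0 - m) * target_img + m * ref_img_warped
```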
8. A three-dimensional face mesh model processing device, characterized in that it comprises:
an acquisition module, configured to obtain an initial three-dimensional face mesh model corresponding to an original two-dimensional face image, the initial three-dimensional face mesh model comprising second expression feature points corresponding to the first expression feature points of the original two-dimensional face image;
a computing module, configured to calculate a camera parameter matrix of the initial three-dimensional face mesh model according to formula (1):
$\min \sum_{i=1}^{N} \left\| P \cdot X_i - x_i \right\|^2 \qquad (1)$
where P is the camera parameter matrix, X_i is the i-th second expression feature point on the initial three-dimensional face mesh model, x_i is the i-th first expression feature point on the original two-dimensional face image corresponding to the second expression feature point X_i, and N is the number of first (and second) expression feature points; and
a judging module, configured to map the second expression feature points on the initial three-dimensional face mesh model onto the original two-dimensional face image according to the calculated camera parameter matrix, so as to judge the degree of matching between the second expression feature points and the first expression feature points, and to adjust the initial three-dimensional face mesh model according to the judgment result.
9. The device according to claim 8, characterized in that the judging module comprises:
a computing unit, configured to calculate the matching error between the second expression feature points and the first expression feature points according to formula (2):
$\mathrm{Err} = \sum_{i=1}^{N} w_i \left\| P \cdot X_i - x_i \right\|^2 \qquad (2)$
where Err is the matching error and w_i is the weight coefficient of the i-th feature point pair X_i and x_i;
a judging unit, configured to judge whether the matching error is greater than or equal to a preset threshold; and
an adjustment unit, configured to, if the matching error is greater than or equal to the preset threshold, adjust the initial three-dimensional face mesh model so that the matching error between the second expression feature points on the adjusted three-dimensional face mesh model and the first expression feature points is less than the preset threshold.
10. The device according to claim 9, characterized in that the adjustment unit comprises:
a computation subunit, configured to calculate the geodesic distance from the second expression feature point X_i to each grid vertex X_j of the initial three-dimensional face mesh model, where i is not equal to j;
a first adjustment subunit, configured to fix the z coordinate of the second expression feature point X_i on the initial three-dimensional face mesh model, and to change the x and y coordinates of X_i by a first preset algorithm to obtain a third expression feature point X_i' corresponding to X_i;
a determining subunit, configured to determine, by a second preset algorithm and with the geodesic distance as a constraint, each grid vertex X_j' corresponding to the third expression feature point X_i'; and
a second adjustment subunit, configured to adjust the initial three-dimensional face mesh model according to the third expression feature point X_i' and its corresponding grid vertices X_j'.
11. The device according to any one of claims 8 to 10, characterized in that the original two-dimensional face image comprises a target two-dimensional face image and a reference two-dimensional face image, and
the acquisition module comprises:
an extraction unit, configured to extract the facial expression feature points of the target two-dimensional face image and of the reference two-dimensional face image, the facial expression feature points comprising face contour feature points and the first expression feature points;
a first determining unit, configured to determine a near-frontal face image according to the face contour feature points of the target two-dimensional face image and the face contour feature points of the reference two-dimensional face image, the near-frontal face image being either the target two-dimensional face image or the reference two-dimensional face image;
a first deformation unit, configured to deform a target neutral face model selected from a neutral face database according to the face contour feature points and first expression feature points of the near-frontal face image, to obtain a neutral face model corresponding to the near-frontal face image;
a second deformation unit, configured to deform each preset expression model in a preset expression library according to the neutral face model of the near-frontal face image, to obtain each expression model corresponding to the near-frontal face image;
a second determining unit, configured to determine a first weight coefficient for each expression model according to the first expression feature points of the target two-dimensional face image, and a second weight coefficient for each expression model according to the first expression feature points of the reference two-dimensional face image; and
a merging unit, configured to merge the expression models according to the first weight coefficients to obtain the three-dimensional face mesh model corresponding to the target two-dimensional face image, and to merge the expression models according to the second weight coefficients to obtain the three-dimensional face mesh model corresponding to the reference two-dimensional face image.
12. The device according to claim 11, characterized in that the preset expression library comprises a generic blendshape model.
13. The device according to claim 11 or 12, characterized in that the first determining unit is specifically configured to:
calculate the face contour curvature of the target two-dimensional face image according to its face contour feature points, and calculate the face contour curvature of the reference two-dimensional face image according to its face contour feature points; and
determine the image with the smaller face contour curvature as the near-frontal face image.
14. The device according to any one of claims 11 to 13, characterized in that it further comprises:
a deformation module, configured to deform the target two-dimensional face image according to the three-dimensional face mesh model corresponding to the target two-dimensional face image, and to deform the reference two-dimensional face image according to the three-dimensional face mesh model corresponding to the reference two-dimensional face image; and
a merging module, configured to merge the deformed target and reference two-dimensional face images, so as to transfer the expression of the reference two-dimensional face image onto the target two-dimensional face image.
CN201410141093.XA 2014-04-10 2014-04-10 Three-dimensional face mesh model processing method and device Active CN104978764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410141093.XA CN104978764B (en) 2014-04-10 2014-04-10 3 d human face mesh model processing method and equipment


Publications (2)

Publication Number Publication Date
CN104978764A true CN104978764A (en) 2015-10-14
CN104978764B CN104978764B (en) 2017-11-17

Family

ID=54275238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410141093.XA Active CN104978764B (en) 2014-04-10 2014-04-10 3 d human face mesh model processing method and equipment

Country Status (1)

Country Link
CN (1) CN104978764B (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106327482A (en) * 2016-08-10 2017-01-11 东方网力科技股份有限公司 Facial expression reconstruction method and device based on big data
CN106530376A (en) * 2016-10-10 2017-03-22 福建网龙计算机网络信息技术有限公司 Three-dimensional role building method and system
CN106570931A (en) * 2016-10-10 2017-04-19 福建网龙计算机网络信息技术有限公司 Virtual reality resource manufacturing method and system
CN106934759A (en) * 2015-12-30 2017-07-07 掌赢信息科技(上海)有限公司 The front method and electronic equipment of a kind of human face characteristic point
CN107203962A (en) * 2016-03-17 2017-09-26 掌赢信息科技(上海)有限公司 The method and electronic equipment of a kind of pseudo- 3D rendering of utilization 2D picture makings
CN107292812A (en) * 2016-04-01 2017-10-24 掌赢信息科技(上海)有限公司 A kind of method and electronic equipment of migration of expressing one's feelings
CN107592449A (en) * 2017-08-09 2018-01-16 广东欧珀移动通信有限公司 Three-dimension modeling method, apparatus and mobile terminal
CN107993216A (en) * 2017-11-22 2018-05-04 腾讯科技(深圳)有限公司 A kind of image interfusion method and its equipment, storage medium, terminal
CN108305312A (en) * 2017-01-23 2018-07-20 腾讯科技(深圳)有限公司 The generation method and device of 3D virtual images
CN108446595A (en) * 2018-02-12 2018-08-24 深圳超多维科技有限公司 A kind of space-location method, device, system and storage medium
CN108491850A (en) * 2018-03-27 2018-09-04 北京正齐口腔医疗技术有限公司 The characteristic points automatic extraction method and device of three dimensional tooth mesh model
CN108564642A (en) * 2018-03-16 2018-09-21 中国科学院自动化研究所 Unmarked performance based on UE engines captures system
CN108564659A (en) * 2018-02-12 2018-09-21 北京奇虎科技有限公司 The expression control method and device of face-image, computing device
CN108776983A (en) * 2018-05-31 2018-11-09 北京市商汤科技开发有限公司 Based on the facial reconstruction method and device, equipment, medium, product for rebuilding network
CN108875335A (en) * 2017-10-23 2018-11-23 北京旷视科技有限公司 The method and authenticating device and non-volatile memory medium of face unlock and typing expression and facial expressions and acts
CN109147037A (en) * 2018-08-16 2019-01-04 Oppo广东移动通信有限公司 Effect processing method, device and electronic equipment based on threedimensional model
CN109377445A (en) * 2018-10-12 2019-02-22 北京旷视科技有限公司 Model training method, the method, apparatus and electronic system for replacing image background
CN109754467A (en) * 2018-12-18 2019-05-14 广州市百果园网络科技有限公司 Three-dimensional face construction method, computer storage medium and computer equipment
CN110111247A (en) * 2019-05-15 2019-08-09 浙江商汤科技开发有限公司 Facial metamorphosis processing method, device and equipment
CN110135376A (en) * 2019-05-21 2019-08-16 北京百度网讯科技有限公司 Determine method, equipment and the medium of the coordinate system conversion parameter of imaging sensor
CN110263617A (en) * 2019-04-30 2019-09-20 北京永航科技有限公司 Three-dimensional face model acquisition methods and device
CN111259829A (en) * 2020-01-19 2020-06-09 北京小马慧行科技有限公司 Point cloud data processing method and device, storage medium and processor
CN111383308A (en) * 2018-12-29 2020-07-07 华为技术有限公司 Method and electronic equipment for generating animation expression
WO2021027585A1 (en) * 2019-08-09 2021-02-18 华为技术有限公司 Human face image processing method and electronic device
CN112884881A (en) * 2021-01-21 2021-06-01 魔珐(上海)信息科技有限公司 Three-dimensional face model reconstruction method and device, electronic equipment and storage medium
CN112989541A (en) * 2021-05-07 2021-06-18 国网浙江省电力有限公司金华供电公司 Three-dimensional grid model generation method and device, electronic equipment and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593365A (en) * 2009-06-19 2009-12-02 电子科技大学 A kind of method of adjustment of universal three-dimensional human face model
CN101916454A (en) * 2010-04-08 2010-12-15 董洪伟 Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
CN103093498A (en) * 2013-01-25 2013-05-08 西南交通大学 Three-dimensional human face automatic standardization method
US20130287294A1 (en) * 2012-04-30 2013-10-31 Cywee Group Limited Methods for Generating Personalized 3D Models Using 2D Images and Generic 3D Models, and Related Personalized 3D Model Generating System
US20140016823A1 (en) * 2012-07-12 2014-01-16 Cywee Group Limited Method of virtual makeup achieved by facial tracking


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Yi et al., "Research on Three-Dimensional Modeling Based on Two-Dimensional Face Images", Journal of Lanzhou Institute of Technology (《兰州工业学院学报》) *

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106934759A (en) * 2015-12-30 2017-07-07 掌赢信息科技(上海)有限公司 The front method and electronic equipment of a kind of human face characteristic point
CN107203962A (en) * 2016-03-17 2017-09-26 掌赢信息科技(上海)有限公司 The method and electronic equipment of a kind of pseudo- 3D rendering of utilization 2D picture makings
CN107203962B (en) * 2016-03-17 2021-02-19 掌赢信息科技(上海)有限公司 Method for making pseudo-3D image by using 2D picture and electronic equipment
CN107292812A (en) * 2016-04-01 2017-10-24 掌赢信息科技(上海)有限公司 A kind of method and electronic equipment of migration of expressing one's feelings
CN106327482A (en) * 2016-08-10 2017-01-11 东方网力科技股份有限公司 Facial expression reconstruction method and device based on big data
CN106327482B (en) * 2016-08-10 2019-01-22 东方网力科技股份有限公司 A kind of method for reconstructing and device of the facial expression based on big data
CN106530376A (en) * 2016-10-10 2017-03-22 福建网龙计算机网络信息技术有限公司 Three-dimensional role building method and system
CN106570931A (en) * 2016-10-10 2017-04-19 福建网龙计算机网络信息技术有限公司 Virtual reality resource manufacturing method and system
CN106530376B (en) * 2016-10-10 2021-01-26 福建网龙计算机网络信息技术有限公司 Three-dimensional role creating method and system
CN108305312A (en) * 2017-01-23 2018-07-20 腾讯科技(深圳)有限公司 The generation method and device of 3D virtual images
CN107592449A (en) * 2017-08-09 2018-01-16 广东欧珀移动通信有限公司 Three-dimension modeling method, apparatus and mobile terminal
CN108875335A (en) * 2017-10-23 2018-11-23 北京旷视科技有限公司 The method and authenticating device and non-volatile memory medium of face unlock and typing expression and facial expressions and acts
CN108875335B (en) * 2017-10-23 2020-10-09 北京旷视科技有限公司 Method for unlocking human face and inputting expression and expression action, authentication equipment and nonvolatile storage medium
US10922533B2 (en) 2017-10-23 2021-02-16 Beijing Kuangshi Technology Co., Ltd. Method for face-to-unlock, authentication device, and non-volatile storage medium
CN107993216B (en) * 2017-11-22 2022-12-20 腾讯科技(深圳)有限公司 Image fusion method and equipment, storage medium and terminal thereof
CN107993216A (en) * 2017-11-22 2018-05-04 腾讯科技(深圳)有限公司 A kind of image interfusion method and its equipment, storage medium, terminal
CN108446595A (en) * 2018-02-12 2018-08-24 深圳超多维科技有限公司 A kind of space-location method, device, system and storage medium
CN108564659A (en) * 2018-02-12 2018-09-21 北京奇虎科技有限公司 The expression control method and device of face-image, computing device
CN108564642A (en) * 2018-03-16 2018-09-21 中国科学院自动化研究所 Unmarked performance based on UE engines captures system
CN108491850A (en) * 2018-03-27 2018-09-04 北京正齐口腔医疗技术有限公司 The characteristic points automatic extraction method and device of three dimensional tooth mesh model
CN108491850B (en) * 2018-03-27 2020-04-10 北京正齐口腔医疗技术有限公司 Automatic feature point extraction method and device of three-dimensional tooth mesh model
CN108776983A (en) * 2018-05-31 2018-11-09 北京市商汤科技开发有限公司 Based on the facial reconstruction method and device, equipment, medium, product for rebuilding network
WO2020034698A1 (en) * 2018-08-16 2020-02-20 Oppo广东移动通信有限公司 Three-dimensional model-based special effect processing method and device, and electronic apparatus
CN109147037A (en) * 2018-08-16 2019-01-04 Oppo广东移动通信有限公司 Effect processing method, device and electronic equipment based on threedimensional model
CN109377445A (en) * 2018-10-12 2019-02-22 北京旷视科技有限公司 Model training method, the method, apparatus and electronic system for replacing image background
CN109754467A (en) * 2018-12-18 2019-05-14 广州市百果园网络科技有限公司 Three-dimensional face construction method, computer storage medium and computer equipment
CN109754467B (en) * 2018-12-18 2023-09-22 广州市百果园网络科技有限公司 Three-dimensional face construction method, computer storage medium and computer equipment
CN111383308A (en) * 2018-12-29 2020-07-07 华为技术有限公司 Method and electronic equipment for generating animation expression
CN111383308B (en) * 2018-12-29 2023-06-23 华为技术有限公司 Method for generating animation expression and electronic equipment
CN110263617B (en) * 2019-04-30 2021-10-22 北京永航科技有限公司 Three-dimensional face model obtaining method and device
CN110263617A (en) * 2019-04-30 2019-09-20 北京永航科技有限公司 Three-dimensional face model acquisition methods and device
CN110111247A (en) * 2019-05-15 2019-08-09 浙江商汤科技开发有限公司 Facial metamorphosis processing method, device and equipment
US11100709B2 (en) 2019-05-15 2021-08-24 Zhejiang Sensetime Technology Development Co., Ltd Method, apparatus and device for processing deformation of virtual object, and storage medium
CN110135376A (en) * 2019-05-21 2019-08-16 北京百度网讯科技有限公司 Determine method, equipment and the medium of the coordinate system conversion parameter of imaging sensor
WO2021027585A1 (en) * 2019-08-09 2021-02-18 华为技术有限公司 Human face image processing method and electronic device
CN111259829A (en) * 2020-01-19 2020-06-09 北京小马慧行科技有限公司 Point cloud data processing method and device, storage medium and processor
CN111259829B (en) * 2020-01-19 2023-10-20 北京小马慧行科技有限公司 Processing method and device of point cloud data, storage medium and processor
CN112884881A (en) * 2021-01-21 2021-06-01 魔珐(上海)信息科技有限公司 Three-dimensional face model reconstruction method and device, electronic equipment and storage medium
CN112884881B (en) * 2021-01-21 2022-09-27 魔珐(上海)信息科技有限公司 Three-dimensional face model reconstruction method and device, electronic equipment and storage medium
CN112989541A (en) * 2021-05-07 2021-06-18 国网浙江省电力有限公司金华供电公司 Three-dimensional grid model generation method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN104978764B (en) 2017-11-17

Similar Documents

Publication Publication Date Title
CN104978764A (en) Three-dimensional face mesh model processing method and three-dimensional face mesh model processing equipment
US11455495B2 (en) System and method for visual recognition using synthetic training data
CN108369643B (en) Method and system for 3D hand skeleton tracking
US20210012558A1 (en) Method and apparatus for reconstructing three-dimensional model of human body, and storage medium
CN108182384B (en) Face feature point positioning method and device
TWI742690B (en) Method and apparatus for detecting a human body, computer device, and storage medium
CN109063584B (en) Facial feature point positioning method, device, equipment and medium based on cascade regression
KR102442486B1 (en) 3D model creation method, apparatus, computer device and storage medium
CN111161349B (en) Object posture estimation method, device and equipment
CN109711283A (en) A kind of joint doubledictionary and error matrix block Expression Recognition algorithm
CN108615256B (en) Human face three-dimensional reconstruction method and device
US20230141392A1 (en) Systems and methods for human pose and shape recovery
CN112767534A (en) Video image processing method and device, electronic equipment and storage medium
CN108109212A (en) A kind of historical relic restorative procedure, apparatus and system
JP2023524252A (en) Generative nonlinear human shape model
CN115345938B (en) Global-to-local-based head shadow mark point positioning method, equipment and medium
CN114173704A (en) Method for generating dental arch model
CN114120432A (en) Online learning attention tracking method based on sight estimation and application thereof
WO2020252969A1 (en) Eye key point labeling method and apparatus, and training method and apparatus for eye key point detection model
CN114187624A (en) Image generation method, image generation device, electronic equipment and storage medium
CN112488067A (en) Face pose estimation method and device, electronic equipment and storage medium
CN108573192B (en) Glasses try-on method and device matched with human face
Le et al. Marker optimization for facial motion acquisition and deformation
CN110363170B (en) Video face changing method and device
US20230281981A1 (en) Methods, devices, and computer readable media for training a keypoint estimation network using cgan-based data augmentation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20171208

Address after: 510640 Guangdong City, Tianhe District Province, No. five, road, public education building, unit 371-1, unit 2401

Patentee after: Guangdong Gaohang Intellectual Property Operation Co., Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: Huawei Technologies Co., Ltd.

TR01 Transfer of patent right

Effective date of registration: 20180209

Address after: 528400 B37 No. N6 B37 of the three phase of the city of Ya Ju music in Zhongshan, Guangdong

Patentee after: Zhongshan micro network technology Co., Ltd.

Address before: 510640 Guangdong City, Tianhe District Province, No. five, road, public education building, unit 371-1, unit 2401

Patentee before: Guangdong Gaohang Intellectual Property Operation Co., Ltd.

TR01 Transfer of patent right

Effective date of registration: 20180927

Address after: 528463 Guangdong Zhongshan three township Zhenhua Road 3, three rural financial business center 705 cards, 706 cards

Patentee after: Guangdong smart Polytron Technologies Inc

Address before: 528400 B37 three, phase three, N6, three Town, Zhongshan, Guangdong.

Patentee before: Zhongshan micro network technology Co., Ltd.