CN105069746A - Video real-time human face substitution method and system based on partial affine and color transfer technology - Google Patents

Video real-time human face substitution method and system based on partial affine and color transfer technology

Info

Publication number
CN105069746A
CN105069746A (Application CN201510520746.XA)
Authority
CN
China
Prior art keywords
face
image
target
video
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510520746.XA
Other languages
Chinese (zh)
Other versions
CN105069746B (en)
Inventor
孙国辉 (Sun Guohui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Xinhe Shengshi Technology Co Ltd
Original Assignee
Hangzhou Xinhe Shengshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Xinhe Shengshi Technology Co Ltd filed Critical Hangzhou Xinhe Shengshi Technology Co Ltd
Priority to CN201510520746.XA priority Critical patent/CN105069746B/en
Publication of CN105069746A publication Critical patent/CN105069746A/en
Application granted granted Critical
Publication of CN105069746B publication Critical patent/CN105069746B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T3/02
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The invention relates to a real-time video face replacement method and system based on local affine transformation and color transfer technology, which overcome the prior art's inability to perform face replacement in real time. The method comprises the steps of: acquiring the target face video through a camera and grabbing the current frame image; detecting the face and its feature points; performing the face replacement; performing the face fusion processing; checking whether the camera has been closed, and ending the face replacement if it has; otherwise continuing the video acquisition, grabbing the current frame image, and repeating the face detection, face replacement and face fusion steps. The method and system improve the quality, speed and efficiency of face replacement and can be applied to real-time replacement in video.

Description

Video real-time face replacement method and system based on local affine transformation and color transfer technology
Technical field
The present invention relates to the technical field of image processing, and specifically to a video real-time face replacement method and system based on local affine transformation and color transfer technology.
Background technology
Face replacement is an important research direction in the field of computer vision. Because automatic face replacement avoids the many drawbacks of editing and blending pictures by hand in software such as Photoshop, it has great influence in business, entertainment and several specialized industries. For example, dangerous scenes in a film are often shot first with a stunt double, and in post-production an automatic face replacement technique swaps the double's face for the required actor's face to obtain the final footage. At present many researchers have proposed different replacement strategies and achieved some success, but these strategies cannot be realized when applied to video. The main reason is the particularity of video face replacement: it must be real-time, fast and efficient, whereas traditional face replacement techniques suffer from heavy computation, color mismatch, high time cost and poor quality after replacement, and therefore cannot be used in real-life video applications.
None of the existing face replacement methods can be applied to real-time video processing. Take the face replacement method based on Poisson editing: according to the semantic information of the original face, it automatically selects a semantically similar target face from a face image database and builds its three-dimensional model; it then adjusts the target face according to the estimated pose and illumination of the original face, and finally blends the target face seamlessly into the original image/video with Poisson image fusion, so that the replacement looks natural and realistic and the face is replaced automatically. Its advantage is that the seam between faces is not obvious after fusion; its disadvantages are that the fused image shows color differences and the computation is heavy, so real-time performance is poor and it cannot be used for video.
Another example is the face replacement method based on a 3D face database. The method first performs face registration and off-line learning on a downloaded face database to obtain parameters such as pose, illumination and expression of the face in each image. When an image is input, the system performs face detection and retrieves from the image database faces whose pose, resolution and illumination are similar to those of the input face as replacement candidates; it then applies color adjustment and illumination correction to the candidates to match the input face and completes the image fusion; finally a combined edge evaluation function ranks the fusion results, and the best fusion is taken as the final replacement. Its advantage is better fusion quality; its disadvantages are that a good result cannot be guaranteed when the target face must be replaced with a single specified face, and that real-time performance is poor.
A further example is the face replacement method based on real-time video. That method uses the AAM method to detect the face and its feature points, then extracts the target face and the source-image face template according to the feature points; following the triangle affine principle it warps the face shape of the source image into the shape of the target face, and then adjusts color weights so that the color of the source face matches the color of the target face as closely as possible. Its advantage is good real-time performance without delay; its disadvantages are a low face detection rate, a gap between the warped face shape and the target face shape, and a color adjustment that is not automatic but merely an empirical value.
The three methods above are the most popular recent methods, and other methods are improvements built upon them. Although the Poisson editing algorithm fuses better than the real-time video algorithm, its time efficiency cannot meet the real-time requirement of video; the method based on the 3D face database has the best fusion quality but also the highest time cost, taking up to 9 seconds to process one frame.
In view of the limitations of the existing face replacement techniques, how to design, under existing hardware conditions, an effective and fast real-time face replacement method usable in video applications has become a technical problem urgently requiring a solution.
Summary of the invention
The object of the present invention is to overcome the inability of the prior art to perform real-time face replacement by providing a video real-time face replacement method and system based on local affine transformation and color transfer technology.
To achieve this object, the technical scheme of the present invention is as follows:
A video real-time face replacement method based on local affine transformation and color transfer technology comprises the following steps:
Video acquisition: obtain the target face video through a camera and grab the current frame image;
Face and feature point detection: on the captured target face image, detect the face with a strong classifier; on the basis of a shape model built by training on the samples in the face database, match the feature points of the target face against the sample feature points in the face database, apply the corresponding transformation, and search for and mark the positions of the feature points of the detected target face;
Face transformation: using the affine transformation parameters, warp the sample face from the face database onto the target face in the video by affine transformation, thereby replacing the face;
Face fusion: transfer the color of the target face before transformation onto the target face after transformation, perform the fusion with a Laplacian pyramid built from a Gaussian pyramid, smooth the edges of the fused target face image with a Gaussian filter, and display the result in the image captured by the camera;
Check whether the camera is closed; if it is, end the face replacement; if not, continue the video acquisition to obtain the current frame image and repeat the face detection, face transformation and face fusion steps.
The face and feature point detection comprises the following steps:
Read the sample data in the face database, construct several classifiers, and cascade them into one strong classifier;
Read the current video frame and perform face detection on it with the strong classifier;
Build the shape model and train it;
Construct the face feature points and obtain the affine transformation parameters.
The face transformation comprises the following steps:
Compute the coordinates of every feature point of the target face and of the sample in the face database, and triangulate the feature points by grouping every three adjacent points;
Using the affine transformation parameters, map the triangulated feature points of the face database onto the target face one triangle at a time, deforming the sample into the shape of the target face. The affine transformation is constructed as follows:
For the triangulated target face and the triangulated sample in the face database, find the position of each triangle in the face database and of the corresponding triangle on the target face;
Keeping the relative positions of the feature points in the face database, map each feature point triangle of the face database onto the position of the corresponding feature point triangle of the target face.
The face fusion processing comprises the following steps:
In lαβ space, transfer the color of the target face before transformation onto the target face after transformation, where l is the achromatic luminance channel, α is the yellow-blue chromatic channel and β is the red-green chromatic channel;
Fuse the images of the transferred target face at different scales and different decomposition levels using the Laplacian pyramid built from a Gaussian pyramid;
Locate the edge region of the fused target face image and smooth it with a Gaussian filter.
The construction of the several classifiers comprises the following steps:
Initialize a uniform distribution over the samples in the face database and obtain an initial classifier H_0 by training;
Judge the classification results: for correctly classified samples, reduce their distribution probability; for misclassified samples, increase their distribution probability; this yields a new training set S_1;
Train on the training set S_1 to obtain a classifier H_1;
Iterate the classification-judging and classifier-training steps T times in total to obtain T classifiers {H_1, H_2, ..., H_T}.
The building and training of the shape model comprises the following steps:
Collect 400 face training samples and mark the facial feature points in each sample;
Concatenate the coordinates of the feature points in the training set into feature vectors;
Normalize and align the shape features;
Apply principal component analysis (PCA) to the aligned shape features. The PCA is constructed as follows:
Input x, where $x = [x_1\ x_2\ \cdots\ x_m]^T$ is an m-dimensional vector variable, and compute the sample matrix of x:
$X = \begin{bmatrix} x_1^1 & x_1^2 & \cdots & x_1^n \\ x_2^1 & x_2^2 & \cdots & x_2^n \\ \vdots & \vdots & & \vdots \\ x_m^1 & x_m^2 & \cdots & x_m^n \end{bmatrix}$, where $x_i^j$, $j = 1, 2, \ldots, n$, is the j-th discrete sample of the variable $x_i$, $i = 1, 2, \ldots, m$;
Compute the mean $\mu_i$ of the i-th row of the sample matrix X:
$\mu_i = \frac{1}{n}\sum_{j=1}^{n} X_i(j)$;
Compute the centered i-th row of the sample matrix X:
$\bar{X}_i = X_i - \mu_i = [\bar{x}_i^1\ \bar{x}_i^2\ \cdots\ \bar{x}_i^n]$, where $\bar{x}_i^j = x_i^j - \mu_i$;
Compute the centered sample matrix:
$\bar{X} = [\bar{X}_1^T\ \bar{X}_2^T\ \cdots\ \bar{X}_m^T]^T$;
Compute the covariance matrix Ω of the centered data:
$\Omega = \frac{1}{n}\bar{X}\bar{X}^T = \phi\Lambda\phi^T$,
where $\phi = [\phi_1\ \phi_2\ \cdots\ \phi_m]$ is an m × m orthogonal eigenvector matrix and $\Lambda = \mathrm{diag}\{\lambda_1, \lambda_2, \ldots, \lambda_m\}$, $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m$, is the diagonal eigenvalue matrix;
Compute the orthogonal transformation matrix P:
$P = \phi^T$;
Apply the orthogonal transformation matrix to the centered sample matrix $\bar{X}$ to obtain the principal component representation $Y = P\bar{X}$;
Build a local feature for the facial feature points of each sample.
The construction of the face feature points comprises the following steps:
Feature point position computation: compute the positions of the feature points of the target face and apply scale and rotation changes;
Match each local feature point of the face-database sample against the feature points of the target face and compute the new position of each local feature point on the target face;
Iterate the above steps to obtain the affine transformation parameters.
The fusion of the images of the transferred target face at different scales and different decomposition levels using the Laplacian pyramid built from a Gaussian pyramid comprises the following steps:
Build a Gaussian pyramid of the color-transferred target face to obtain down-sampled images on several spatial levels and at multiple scales, constructing an image pyramid; the Gaussian pyramid is constructed as follows:
Input the color-transferred target face image $G_0$ and take $G_0$ as level 0 of the Gaussian pyramid;
Apply Gaussian low-pass filtering to the original input image $G_0$ and down-sample it by discarding every other row and column to obtain the level-1 image $G_1$ of the Gaussian pyramid;
Apply Gaussian low-pass filtering and the same 2:1 down-sampling to the level-1 image $G_1$ to obtain the level-2 image $G_2$ of the Gaussian pyramid;
Apply Gaussian low-pass filtering and 2:1 down-sampling to the level-(l-1) image $G_{l-1}$ to obtain the level-l image $G_l$ ($1 \le l \le N$) of the Gaussian pyramid:
$G_l(i,j) = \sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, G_{l-1}(2i+m,\ 2j+n)$, $0 \le i < R_l$, $0 \le j < C_l$,
where N is the maximum level of the Gaussian pyramid, $R_l$ and $C_l$ are respectively the number of rows and columns of level l of the Gaussian pyramid, and $w(m,n)$ is a separable two-dimensional 5 × 5 window function;
Repeat the above process to form the complete Gaussian pyramid;
Build a Laplacian pyramid for the band-pass images on the different decomposition levels by up-sampling from the top image of the pyramid and reconstructing the level above; the Laplacian pyramid is constructed as follows:
Enlarge the top image $G_N$ of the obtained Gaussian pyramid by interpolation to obtain $G_N^*$, where N is the maximum level of the Gaussian pyramid;
Enlarge the level-(N-1) image $G_{N-1}$ by interpolation to obtain $G_{N-1}^*$;
In general, enlarge the level-l image $G_l$ of the Gaussian pyramid by interpolation to obtain $G_l^*$:
$G_l^*(i,j) = 4\sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, G_l\!\left(\frac{i+m}{2},\ \frac{j+n}{2}\right)$, $0 \le i < R_{l-1}$, $0 \le j < C_{l-1}$ (only terms with integer coordinates are summed),
where N is the maximum level of the Gaussian pyramid, $R_l$ and $C_l$ are respectively the number of rows and columns of level l of the Gaussian pyramid, and $w(m,n)$ is a separable two-dimensional 5 × 5 window function;
Compute $LP_l = G_l - G_{l+1}^*$ for $0 \le l < N$;
Compute $LP_N = G_N$, where N is the maximum level of the Laplacian pyramid and $LP_l$ is the level-l image of the Laplacian pyramid;
Repeat the computation for every level of the Gaussian pyramid to obtain the Laplacian pyramid $LP_0, LP_1, \ldots, LP_l, \ldots, LP_N$;
Merge and fuse the reconstructed images.
A video real-time face replacement system based on local affine transformation and color transfer technology comprises:
a video acquisition module, for collecting the face image of every frame of the video captured by the camera;
a classifier construction module, for performing face detection on the images of the captured video;
a shape model training module, for building a local feature for every face feature point and establishing the position constraints between the feature points;
a principal component analysis (PCA) module, for performing feature extraction on the shape features produced by the shape model construction module;
a face feature point search module, for searching the face feature points and computing the positions where the feature points lie;
a face affine transformation module, for mapping the sample from the face database onto the corresponding positions of the target face;
a Laplacian-Gaussian pyramid image fusion module, for performing the corresponding fusion of the color-transferred target face.
The video acquisition module is connected to the classifier construction module; the output of the classifier construction module is connected to the shape model training module and to the PCA module; the shape model training module and the PCA module are connected to the input of the face feature point search module; the output of the face feature point search module is connected to the face affine transformation module; and the output of the face affine transformation module is connected to the Laplacian-Gaussian pyramid image fusion module.
Beneficial effect
Compared with the prior art, the video real-time face replacement method and system based on local affine transformation and color transfer technology of the present invention improve the quality, speed and efficiency of face replacement and can be used for real-time replacement in video. The strong classifier detects faces accurately; the constructed shape model, the application of principal component analysis and the face feature point search detect the feature points of the face quickly; the local affine transformation maps the sample in the face database onto the target face accurately; and the Laplacian-Gaussian pyramid image fusion technique decomposes, reconstructs and then accurately fuses the color-transferred target face. The whole face replacement process runs accurately and quickly in real time under the camera, overcoming the prior-art defects of high time cost, inability to meet the real-time requirement of video, color differences in the replaced image and heavy computation.
Accompanying drawing explanation
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is a connection diagram of the system of the present invention.
Embodiment
To give a better understanding and appreciation of the structural features of the present invention and the effects it achieves, preferred embodiments are described below in conjunction with the accompanying drawings, as follows:
As shown in Fig. 1, the video real-time face replacement method based on local affine transformation and color transfer technology of the present invention comprises the following steps:
First step, video acquisition: read the video from the current camera to obtain the target face video, and grab the current frame image in preparation for the subsequent processing.
Second step, face and feature point detection. On the captured target face image, detect the face with a strong classifier; on the basis of the shape model built by training on the samples in the face database, match the feature points of the target face against the sample feature points in the face database, apply the corresponding transformation, and search for and mark the positions of the feature points of the detected target face. The feature point detection here prepares for the later face replacement: if the face feature points can be detected accurately, the subsequent real-time processing is more efficient; otherwise the face and its feature points must be detected again and again until the feature points are found. Feature point detection is therefore crucial and determines whether real-time face replacement can be carried out. The concrete steps are as follows:
(1) Read the sample data in the face database, construct several classifiers, and cascade them into one strong classifier. Constructing the classifiers comprises the following steps:
A. Initialize a uniform distribution over the samples in the face database (the face database is a database storing a large amount of face information) and obtain an initial classifier H_0 by training.
B. Judge the classification results: for correctly classified samples, reduce their distribution probability; for misclassified samples, increase their distribution probability; this yields a new training set S_1.
C. Train on the training set S_1 to obtain a new classifier H_1.
D. Iterate the classification-judging and classifier-training steps T times in total to obtain T classifiers {H_1, H_2, ..., H_T}. The T classifiers are cascaded into one strong classifier, and this strong classifier is used to perform face detection on the captured current frame image.
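Steps A-D describe an AdaBoost-style boosting loop. The following is a minimal NumPy sketch of that loop, assuming decision stumps as the weak classifiers; the stump learner, the feature matrix X, the labels y in {-1, +1} and the round count T are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def train_stump(X, y, w):
    """Pick the single-feature threshold with the smallest weighted error."""
    n, d = X.shape
    best = (None, None, 1, np.inf)                 # (feature, threshold, polarity, error)
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost(X, y, T=20):
    """y in {-1, +1}; returns a list of (stump, alpha) forming the strong classifier."""
    n = len(y)
    w = np.full(n, 1.0 / n)                        # step A: uniform sample distribution
    ensemble = []
    for _ in range(T):                             # step D: T rounds -> H_1 .. H_T
        j, thr, pol, err = train_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        # step B: down-weight correct samples, up-weight mistakes, then renormalize
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append(((j, thr, pol), alpha))    # step C: keep the new weak classifier
    return ensemble

def strong_classify(ensemble, x):
    """Combine the T weak classifiers into one strong decision."""
    s = sum(alpha * (1 if pol * (x[j] - thr) >= 0 else -1)
            for (j, thr, pol), alpha in ensemble)
    return 1 if s >= 0 else -1
```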
(2) Read the current video frame and perform face detection on it with the strong classifier.
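For the per-frame detection itself, a boosted cascade can be applied much as in the sketch below, which uses OpenCV's pretrained Haar cascade as a stand-in for the strong classifier trained above; the cascade file is OpenCV's stock model, not part of the patent.

```python
import cv2

# Stand-in strong classifier: OpenCV's pretrained boosted Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    """Return (x, y, w, h) boxes for the faces found in the current video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                  # normalize illumination before detection
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                    minSize=(60, 60))
```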
(3) Build the shape model and train it, which comprises the following steps:
A. Collect 400 face training samples (the number of training samples can be adjusted; a larger sample library gives better results but takes longer to train) and manually mark the facial feature points in each sample. If the feature points of the celebrity faces in the library were all marked automatically by software, mislabeling could occur and the later detection and replacement would be erroneous; the sample library is therefore collected and labeled manually, whereas at recognition time the software marks the target face automatically. The sample library here is a celebrity face library; in practical applications the library is built first and the templates are built afterwards.
B. Concatenate the coordinates of the feature points in the training set into feature vectors.
C. Normalize and align the shape features; a shape feature is the set of manually marked facial feature points in a sample, such as the eyes and cheekbones.
D. Apply principal component analysis (PCA) to the aligned shape features. The PCA is constructed as follows:
a. Input x, where $x = [x_1\ x_2\ \cdots\ x_m]^T$ is an m-dimensional vector variable, and compute the sample matrix of x:
$X = \begin{bmatrix} x_1^1 & x_1^2 & \cdots & x_1^n \\ x_2^1 & x_2^2 & \cdots & x_2^n \\ \vdots & \vdots & & \vdots \\ x_m^1 & x_m^2 & \cdots & x_m^n \end{bmatrix}$, where $x_i^j$, $j = 1, 2, \ldots, n$, is the j-th discrete sample of the variable $x_i$, $i = 1, 2, \ldots, m$;
b. Compute the mean $\mu_i$ of the i-th row of the sample matrix X:
$\mu_i = \frac{1}{n}\sum_{j=1}^{n} X_i(j)$;
c. Compute the centered i-th row of the sample matrix X:
$\bar{X}_i = X_i - \mu_i = [\bar{x}_i^1\ \bar{x}_i^2\ \cdots\ \bar{x}_i^n]$, where $\bar{x}_i^j = x_i^j - \mu_i$;
d. Compute the centered sample matrix:
$\bar{X} = [\bar{X}_1^T\ \bar{X}_2^T\ \cdots\ \bar{X}_m^T]^T$;
e. Compute the covariance matrix Ω of the centered data:
$\Omega = \frac{1}{n}\bar{X}\bar{X}^T = \phi\Lambda\phi^T$,
where $\phi = [\phi_1\ \phi_2\ \cdots\ \phi_m]$ is an m × m orthogonal eigenvector matrix and $\Lambda = \mathrm{diag}\{\lambda_1, \lambda_2, \ldots, \lambda_m\}$, $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m$, is the diagonal eigenvalue matrix;
f. Compute the orthogonal transformation matrix P:
$P = \phi^T$;
g. Apply the orthogonal transformation matrix to the centered sample matrix $\bar{X}$ to obtain the principal component representation $Y = P\bar{X}$.
E. Build a local feature for the facial feature points of each sample.
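A compact NumPy version of the PCA in sub-steps a-g, applied to the matrix of aligned shape vectors, might look as follows; the function name, the layout of X (one coordinate variable per row, one training shape per column) and the optional truncation to k components are assumptions for illustration.

```python
import numpy as np

def shape_pca(X, k=None):
    """X is m x n: each row is one coordinate variable, each column one aligned training shape."""
    mu = X.mean(axis=1, keepdims=True)         # row means mu_i                  (step b)
    Xc = X - mu                                # centered sample matrix X_bar    (steps c, d)
    Omega = (Xc @ Xc.T) / X.shape[1]           # Omega = (1/n) X_bar X_bar^T     (step e)
    eigvals, phi = np.linalg.eigh(Omega)       # Omega = phi Lambda phi^T
    order = np.argsort(eigvals)[::-1]          # sort so lambda_1 >= lambda_2 >= ...
    eigvals, phi = eigvals[order], phi[:, order]
    P = phi.T                                  # orthogonal transformation P = phi^T (step f)
    Y = P @ Xc                                 # principal components Y = P X_bar    (step g)
    if k is not None:                          # optionally keep the k dominant shape modes
        P, Y, eigvals = P[:k], Y[:k], eigvals[:k]
    return mu, P, eigvals, Y
```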
(4) Construct the face feature points and obtain the affine transformation parameters. The concrete steps are as follows:
A. Feature point position computation: compute the positions of the feature points of the target face and apply a simple scale and rotation change. Because the face sizes in the celebrity face library differ from the size of the target face, the feature point positions necessarily differ; for example, if the celebrity face is large and the target face is small, a scale and rotation change is needed.
B. Match each local feature point of the face-database sample against the feature points of the target face and compute the new position of each local feature point on the target face.
C. Iterate the above steps to obtain the affine transformation parameters. The affine transformation parameters are the transformation matrix produced while matching the feature points of the target face with the local feature points of the database sample: to go from a point on the sample to the corresponding point on the target face, a transformation (addition, subtraction, division and so on) is needed; this is a point-to-point transformation, and when every feature point of the target face is paired with the required sample feature point, the result is a matrix, called the transformation matrix.
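For any triangle of matched feature points, the point-to-point transformation matrix mentioned in step C can be obtained directly; the sketch below uses OpenCV's getAffineTransform and purely hypothetical landmark coordinates.

```python
import numpy as np
import cv2

def affine_params(src_tri, dst_tri):
    """src_tri, dst_tri: three (x, y) feature points on the sample face and the target face."""
    src = np.float32(src_tri)
    dst = np.float32(dst_tri)
    return cv2.getAffineTransform(src, dst)    # 2x3 affine transformation matrix

# Example with hypothetical landmark coordinates:
M = affine_params([(30, 40), (80, 42), (55, 90)],
                  [(33, 45), (85, 44), (58, 96)])
```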
Third step, face transformation. Using the affine transformation parameters, warp the sample face from the face database onto the target face in the video by affine transformation, thereby replacing the face. The purpose of the local affine transformation is to make the shape of the celebrity face consistent with the shape of the target face so that the replacement looks more realistic. Put simply, an affine mapping carries a graphic from one position to another; the size and shape of the graphic may change during the mapping, but the internal positional relationships between its points are preserved. The basis of the affine transformation is triangulation, so the coordinates of three points, i.e. the feature point triangulation, must be computed before the transformation. The image to be transformed is divided into sub-regions and each sub-region is transformed by its own affine map; an image produced by local affine transformations is closer to the target image than one produced by a single global transformation. The concrete steps are as follows:
(1) Compute the coordinates of every feature point of the target face and of the sample in the face database, and triangulate the feature points by grouping every three adjacent points.
(2) Using the affine transformation parameters, map the triangulated feature points of the face database onto the target face one triangle at a time, deforming the sample into the shape of the target face. The affine transformation is constructed as follows:
A. For the triangulated target face and the triangulated sample in the face database, find the position of each triangle in the face database and of the corresponding triangle on the target face.
B. Keeping the relative positions of the feature points in the face database, map each feature point triangle of the face database onto the position of the corresponding feature point triangle of the target face.
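A minimal sketch of the triangulate-and-map procedure in steps (1)-(2) follows: the target landmarks are Delaunay-triangulated and each sample-face triangle is warped onto the corresponding target triangle. The use of SciPy's Delaunay and the per-triangle masking are implementation choices assumed here, not prescribed by the patent.

```python
import numpy as np
import cv2
from scipy.spatial import Delaunay

def warp_face(src_img, src_pts, dst_pts, dst_shape):
    """Map the sample face onto the target face shape, one landmark triangle at a time."""
    out = np.zeros(dst_shape, dtype=src_img.dtype)
    tris = Delaunay(dst_pts).simplices                 # triangulate adjacent feature points
    for tri in tris:
        s = np.float32([src_pts[i] for i in tri])
        d = np.float32([dst_pts[i] for i in tri])
        M = cv2.getAffineTransform(s, d)               # per-triangle affine parameters
        warped = cv2.warpAffine(src_img, M, (dst_shape[1], dst_shape[0]))
        mask = np.zeros(dst_shape[:2], dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.int32(d), 1)       # keep only this triangle's pixels
        out[mask == 1] = warped[mask == 1]
    return out
```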
Fourth step, face fusion. Transfer the color of the target face before transformation onto the target face after transformation, perform the fusion with a Laplacian pyramid built from a Gaussian pyramid, smooth the edges of the fused target face image with a Gaussian filter, and display the result in the image captured by the camera. Because the skin color of the celebrity face differs from the skin color of the real target face, the color transfer method is introduced to avoid a visible gap after fusion. Both the local affine transformation and the color transfer are inexpensive in time, so combining the two methods yields a video method more efficient than the prior art. To further improve the result after fusion, Laplacian image fusion is used here to blend the color-transferred image into the target face, which gives a much better result than the prior-art methods. The concrete steps are as follows:
(1) In lαβ space, transfer the color of the target face before transformation onto the target face after transformation, where l is the achromatic luminance channel, α is the yellow-blue chromatic channel and β is the red-green chromatic channel. The lαβ space is built on the LMS color space; because the three LMS channels are strongly correlated, which complicates image processing, the lαβ color space was proposed. Unlike other color systems, lαβ better matches the human visual perception system: for natural scenes the l, α and β channels are nearly orthogonal, so the correlation between channels drops to a minimum.
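The patent transfers color statistics in lαβ space; the sketch below applies the same per-channel mean and standard-deviation matching, but in OpenCV's CIELAB space as a convenient stand-in, since cv2 ships a LAB conversion while an lαβ conversion would have to be hand-coded.

```python
import cv2
import numpy as np

def transfer_color(source_bgr, target_bgr):
    """Make target_bgr (the warped face) take on the color statistics of source_bgr
    (the original target face), channel by channel in a decorrelated color space."""
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        s_mu, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        t_mu, t_std = tgt[..., c].mean(), tgt[..., c].std() + 1e-6
        # shift/scale so the warped face matches the skin tone of the original face
        tgt[..., c] = (tgt[..., c] - t_mu) * (s_std / t_std) + s_mu
    tgt = np.clip(tgt, 0, 255).astype(np.uint8)
    return cv2.cvtColor(tgt, cv2.COLOR_LAB2BGR)
```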
(2) Fuse the images of the transferred target face at different scales and different decomposition levels using the Laplacian pyramid built from a Gaussian pyramid. The Laplacian pyramid is a multi-scale, multi-resolution method. Because the fusion in pyramid-based image fusion is carried out separately at different scales, different spatial resolutions and different decomposition levels, it achieves better fusion quality than simple image blending algorithms and can be used in a much wider range of situations. It comprises the following steps:
A. Build a Gaussian pyramid of the color-transferred target face to obtain down-sampled images on several spatial levels and at multiple scales, constructing an image pyramid. The Gaussian pyramid is constructed as follows:
a. Input the color-transferred target face image $G_0$ and take $G_0$ as level 0 of the Gaussian pyramid;
b. Apply Gaussian low-pass filtering to the original input image $G_0$ and down-sample it by discarding every other row and column to obtain the level-1 image $G_1$ of the Gaussian pyramid;
c. Apply Gaussian low-pass filtering and the same 2:1 down-sampling to the level-1 image $G_1$ to obtain the level-2 image $G_2$ of the Gaussian pyramid;
d. Apply Gaussian low-pass filtering and 2:1 down-sampling to the level-(l-1) image $G_{l-1}$ to obtain the level-l image $G_l$ ($1 \le l \le N$) of the Gaussian pyramid:
$G_l(i,j) = \sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, G_{l-1}(2i+m,\ 2j+n)$, $0 \le i < R_l$, $0 \le j < C_l$,
where N is the maximum level of the Gaussian pyramid, $R_l$ and $C_l$ are respectively the number of rows and columns of level l of the Gaussian pyramid, and $w(m,n)$ is a separable two-dimensional 5 × 5 window function;
e. Repeat the above process to form the complete Gaussian pyramid.
B. Build a Laplacian pyramid for the band-pass images on the different decomposition levels by up-sampling from the top image of the pyramid and reconstructing the level above. The Laplacian pyramid is constructed as follows:
a. Enlarge the top image $G_N$ of the obtained Gaussian pyramid by interpolation to obtain $G_N^*$, where N is the maximum level of the Gaussian pyramid;
b. Enlarge the level-(N-1) image $G_{N-1}$ by interpolation to obtain $G_{N-1}^*$;
c. In general, enlarge the level-l image $G_l$ of the Gaussian pyramid by interpolation to obtain $G_l^*$:
$G_l^*(i,j) = 4\sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, G_l\!\left(\frac{i+m}{2},\ \frac{j+n}{2}\right)$, $0 \le i < R_{l-1}$, $0 \le j < C_{l-1}$ (only terms with integer coordinates are summed),
where N is the maximum level of the Gaussian pyramid, $R_l$ and $C_l$ are respectively the number of rows and columns of level l of the Gaussian pyramid, and $w(m,n)$ is a separable two-dimensional 5 × 5 window function;
d. Compute $LP_l = G_l - G_{l+1}^*$ for $0 \le l < N$;
e. Compute $LP_N = G_N$, where N is the maximum level of the Laplacian pyramid and $LP_l$ is the level-l image of the Laplacian pyramid;
f. Repeat the computation for every level of the Gaussian pyramid to obtain the Laplacian pyramid $LP_0, LP_1, \ldots, LP_l, \ldots, LP_N$.
C. Merge and fuse the reconstructed images.
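The decomposition and reconstruction above correspond to what cv2.pyrDown and cv2.pyrUp implement; a compact sketch of blending the color-transferred face into the camera frame with a Laplacian pyramid follows. Driving the per-level blend with a Gaussian pyramid of a face mask is an assumed, common way to realize the "merge then reconstruct" step.

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img.astype(np.float32)]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))           # Gaussian low-pass + 2:1 down-sampling
    return pyr

def laplacian_pyramid(gp):
    lp = []
    for l in range(len(gp) - 1):
        up = cv2.pyrUp(gp[l + 1], dstsize=(gp[l].shape[1], gp[l].shape[0]))
        lp.append(gp[l] - up)                      # LP_l = G_l - expand(G_{l+1})
    lp.append(gp[-1])                              # LP_N = G_N
    return lp

def pyramid_blend(face_img, frame_img, mask, levels=4):
    """Blend the color-transferred face into the frame.
    mask: float32 in [0, 1], three channels, same shape as the images."""
    lp_face = laplacian_pyramid(gaussian_pyramid(face_img, levels))
    lp_frame = laplacian_pyramid(gaussian_pyramid(frame_img, levels))
    gp_mask = gaussian_pyramid(mask, levels)
    blended = [m * f + (1 - m) * b
               for f, b, m in zip(lp_face, lp_frame, gp_mask)]
    out = blended[-1]
    for l in range(levels - 1, -1, -1):            # collapse: expand and add each level
        out = cv2.pyrUp(out, dstsize=(blended[l].shape[1], blended[l].shape[0])) + blended[l]
    return np.clip(out, 0, 255).astype(np.uint8)
```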
(3) Locate the edge region of the fused target face image and smooth it with a Gaussian filter. Gaussian edge filtering makes the edges transition smoothly when the fused face is placed back into the camera image frame.
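One way to realize this smoothing is to feather the face mask with a Gaussian filter before compositing; a sketch follows, with the kernel size as an assumed tuning value.

```python
import cv2
import numpy as np

def feather_composite(fused_face, frame, face_mask, ksize=21):
    """Soften the face boundary so the replaced face blends into the camera frame."""
    mask = face_mask.astype(np.float32)
    mask = cv2.GaussianBlur(mask, (ksize, ksize), 0)   # smooth the edge region
    if mask.ndim == 2:
        mask = mask[..., None]
    out = mask * fused_face.astype(np.float32) + (1 - mask) * frame.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```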
Fifth step, check whether the camera is closed. If it is, end the face replacement; if not, continue the video acquisition to obtain the current frame image and repeat the face detection, face transformation and face fusion steps.
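The five steps map naturally onto a capture-process-display loop; a minimal sketch is given below, in which detect_landmarks and replace_face are hypothetical hooks standing in for the patent's detection, transformation and fusion modules.

```python
import cv2

def run_realtime_replacement(sample_face, sample_pts,
                             detect_landmarks, replace_face):
    """detect_landmarks(frame) -> landmark array or None; replace_face(...) -> fused frame.
    Both are hypothetical hooks, not functions defined by the patent."""
    cap = cv2.VideoCapture(0)                       # video acquisition from the camera
    while cap.isOpened():                           # step 5: stop when the camera is closed
        ok, frame = cap.read()                      # grab the current frame image
        if not ok:
            break
        pts = detect_landmarks(frame)               # step 2: face and feature point detection
        if pts is not None:
            frame = replace_face(frame, pts, sample_face, sample_pts)  # steps 3-4
        cv2.imshow("face replacement", frame)
        if cv2.waitKey(1) & 0xFF == 27:             # Esc also ends the replacement
            break
    cap.release()
    cv2.destroyAllWindows()
```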
As shown in Fig. 2, the video real-time face replacement system based on local affine transformation and color transfer technology comprises:
a video acquisition module, for collecting the face image of every frame of the video captured by the camera;
a classifier construction module, for performing face detection on the images of the captured video;
a shape model training module, for building a local feature for every face feature point and establishing the position constraints between the feature points;
a principal component analysis (PCA) module, for performing feature extraction on the shape features produced by the shape model construction module;
a face feature point search module, for searching the face feature points and computing the positions where the feature points lie;
a face affine transformation module, for mapping the sample from the face database onto the corresponding positions of the target face;
a Laplacian-Gaussian pyramid image fusion module, for performing the corresponding fusion of the color-transferred target face.
The video acquisition module is connected to the classifier construction module; the output of the classifier construction module is connected to the shape model training module and to the PCA module; the shape model training module and the PCA module are connected to the input of the face feature point search module; the output of the face feature point search module is connected to the face affine transformation module; and the output of the face affine transformation module is connected to the Laplacian-Gaussian pyramid image fusion module.
The foregoing shows and describes the basic principle, main features and advantages of the present invention. Those skilled in the art should understand that the present invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principle of the present invention, and various changes and improvements can be made without departing from the spirit and scope of the invention, all of which fall within the claimed scope. The scope of protection claimed by the present application is defined by the appended claims and their equivalents.

Claims (9)

1. A video real-time face replacement method based on local affine transformation and color transfer technology, characterized in that it comprises the following steps:
11) video acquisition: obtain the target face video through a camera and grab the current frame image;
12) face and feature point detection: on the captured target face image, detect the face with a strong classifier; on the basis of a shape model built by training on the samples in the face database, match the feature points of the target face against the sample feature points in the face database, apply the corresponding transformation, and search for and mark the positions of the feature points of the detected target face;
13) face transformation: using the affine transformation parameters, warp the sample face from the face database onto the target face in the video by affine transformation, thereby replacing the face;
14) face fusion: transfer the color of the target face before transformation onto the target face after transformation, perform the fusion with a Laplacian pyramid built from a Gaussian pyramid, smooth the edges of the fused target face image with a Gaussian filter, and display the result in the image captured by the camera;
15) check whether the camera is closed; if it is, end the face replacement; if not, continue the video acquisition to obtain the current frame image and repeat the face detection, face transformation and face fusion steps.
2. The video real-time face replacement method based on local affine transformation and color transfer technology according to claim 1, characterized in that the face and feature point detection comprises the following steps:
21) read the sample data in the face database, construct several classifiers, and cascade them into one strong classifier;
22) read the current video frame and perform face detection on it with the strong classifier;
23) build the shape model and train it;
24) construct the face feature points and obtain the affine transformation parameters.
3. The video real-time face replacement method based on local affine transformation and color transfer technology according to claim 1, characterized in that the face transformation comprises the following steps:
31) compute the coordinates of every feature point of the target face and of the sample in the face database, and triangulate the feature points by grouping every three adjacent points;
32) using the affine transformation parameters, map the triangulated feature points of the face database onto the target face one triangle at a time, deforming the sample into the shape of the target face; the affine transformation is constructed as follows:
321) for the triangulated target face and the triangulated sample in the face database, find the position of each triangle in the face database and of the corresponding triangle on the target face;
322) keeping the relative positions of the feature points in the face database, map each feature point triangle of the face database onto the position of the corresponding feature point triangle of the target face.
4. The video real-time face replacement method based on local affine transformation and color transfer technology according to claim 1, characterized in that the face fusion processing comprises the following steps:
41) in lαβ space, transfer the color of the target face before transformation onto the target face after transformation, where l is the achromatic luminance channel, α is the yellow-blue chromatic channel and β is the red-green chromatic channel;
42) fuse the images of the transferred target face at different scales and different decomposition levels using the Laplacian pyramid built from a Gaussian pyramid;
43) locate the edge region of the fused target face image and smooth it with a Gaussian filter.
5. The video real-time face replacement method based on local affine transformation and color transfer technology according to claim 2, characterized in that the construction of the several classifiers comprises the following steps:
51) initialize a uniform distribution over the samples in the face database and obtain an initial classifier H_0 by training;
52) judge the classification results: for correctly classified samples, reduce their distribution probability; for misclassified samples, increase their distribution probability; this yields a new training set S_1;
53) train on the training set S_1 to obtain a classifier H_1;
54) iterate the classification-judging and classifier-training steps T times in total to obtain T classifiers {H_1, H_2, ..., H_T}.
6. The video real-time face replacement method based on local affine transformation and color transfer technology according to claim 2, characterized in that the building and training of the shape model comprises the following steps:
61) collect 400 face training samples and mark the facial feature points in each sample;
62) concatenate the coordinates of the feature points in the training set into feature vectors;
63) normalize and align the shape features;
64) apply principal component analysis (PCA) to the aligned shape features; the PCA is constructed as follows:
641) input x, where $x = [x_1\ x_2\ \cdots\ x_m]^T$ is an m-dimensional vector variable, and compute the sample matrix of x:
$X = \begin{bmatrix} x_1^1 & x_1^2 & \cdots & x_1^n \\ x_2^1 & x_2^2 & \cdots & x_2^n \\ \vdots & \vdots & & \vdots \\ x_m^1 & x_m^2 & \cdots & x_m^n \end{bmatrix}$, where $x_i^j$, $j = 1, 2, \ldots, n$, is the j-th discrete sample of the variable $x_i$, $i = 1, 2, \ldots, m$;
642) compute the mean $\mu_i$ of the i-th row of the sample matrix X:
$\mu_i = \frac{1}{n}\sum_{j=1}^{n} X_i(j)$;
643) compute the centered i-th row of the sample matrix X:
$\bar{X}_i = X_i - \mu_i = [\bar{x}_i^1\ \bar{x}_i^2\ \cdots\ \bar{x}_i^n]$, where $\bar{x}_i^j = x_i^j - \mu_i$;
644) compute the centered sample matrix:
$\bar{X} = [\bar{X}_1^T\ \bar{X}_2^T\ \cdots\ \bar{X}_m^T]^T$;
645) compute the covariance matrix Ω of the centered data:
$\Omega = \frac{1}{n}\bar{X}\bar{X}^T = \phi\Lambda\phi^T$,
where $\phi = [\phi_1\ \phi_2\ \cdots\ \phi_m]$ is an m × m orthogonal eigenvector matrix and $\Lambda = \mathrm{diag}\{\lambda_1, \lambda_2, \ldots, \lambda_m\}$, $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m$, is the diagonal eigenvalue matrix;
646) compute the orthogonal transformation matrix P:
$P = \phi^T$;
647) apply the orthogonal transformation matrix to the centered sample matrix $\bar{X}$ to obtain the principal component representation $Y = P\bar{X}$;
65) build a local feature for the facial feature points of each sample.
7. The video real-time face replacement method based on local affine transformation and color transfer technology according to claim 2, characterized in that the construction of the face feature points comprises the following steps:
71) feature point position computation: compute the positions of the feature points of the target face and apply scale and rotation changes;
72) match each local feature point of the face-database sample against the feature points of the target face and compute the new position of each local feature point on the target face;
73) iterate the above steps to obtain the affine transformation parameters.
8. The video real-time face replacement method based on local affine transformation and color transfer technology according to claim 4, characterized in that the fusion of the images of the transferred target face at different scales and different decomposition levels using the Laplacian pyramid built from a Gaussian pyramid comprises the following steps:
81) build a Gaussian pyramid of the color-transferred target face to obtain down-sampled images on several spatial levels and at multiple scales, constructing an image pyramid; the Gaussian pyramid is constructed as follows:
811) input the color-transferred target face image $G_0$ and take $G_0$ as level 0 of the Gaussian pyramid;
812) apply Gaussian low-pass filtering to the original input image $G_0$ and down-sample it by discarding every other row and column to obtain the level-1 image $G_1$ of the Gaussian pyramid;
813) apply Gaussian low-pass filtering and the same 2:1 down-sampling to the level-1 image $G_1$ to obtain the level-2 image $G_2$ of the Gaussian pyramid;
814) apply Gaussian low-pass filtering and 2:1 down-sampling to the level-(l-1) image $G_{l-1}$ to obtain the level-l image $G_l$ ($1 \le l \le N$) of the Gaussian pyramid:
$G_l(i,j) = \sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, G_{l-1}(2i+m,\ 2j+n)$, $0 \le i < R_l$, $0 \le j < C_l$,
where N is the maximum level of the Gaussian pyramid, $R_l$ and $C_l$ are respectively the number of rows and columns of level l of the Gaussian pyramid, and $w(m,n)$ is a separable two-dimensional 5 × 5 window function;
815) repeat the above process to form the complete Gaussian pyramid;
82) build a Laplacian pyramid for the band-pass images on the different decomposition levels by up-sampling from the top image of the pyramid and reconstructing the level above; the Laplacian pyramid is constructed as follows:
821) enlarge the top image $G_N$ of the obtained Gaussian pyramid by interpolation to obtain $G_N^*$, where N is the maximum level of the Gaussian pyramid;
822) enlarge the level-(N-1) image $G_{N-1}$ by interpolation to obtain $G_{N-1}^*$;
823) in general, enlarge the level-l image $G_l$ of the Gaussian pyramid by interpolation to obtain $G_l^*$:
$G_l^*(i,j) = 4\sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, G_l\!\left(\frac{i+m}{2},\ \frac{j+n}{2}\right)$, $0 \le i < R_{l-1}$, $0 \le j < C_{l-1}$ (only terms with integer coordinates are summed),
where N is the maximum level of the Gaussian pyramid, $R_l$ and $C_l$ are respectively the number of rows and columns of level l of the Gaussian pyramid, and $w(m,n)$ is a separable two-dimensional 5 × 5 window function;
824) compute $LP_l = G_l - G_{l+1}^*$ for $0 \le l < N$;
825) compute $LP_N = G_N$, where N is the maximum level of the Laplacian pyramid and $LP_l$ is the level-l image of the Laplacian pyramid;
826) repeat the computation for every level of the Gaussian pyramid to obtain the Laplacian pyramid $LP_0, LP_1, \ldots, LP_l, \ldots, LP_N$;
83) merge and fuse the reconstructed images.
9. A video real-time face replacement system based on local affine transformation and color transfer technology, characterized in that it comprises:
a video acquisition module, for collecting the face image of every frame of the video captured by the camera;
a classifier construction module, for performing face detection on the images of the captured video;
a shape model training module, for building a local feature for every face feature point and establishing the position constraints between the feature points;
a principal component analysis (PCA) module, for performing feature extraction on the shape features produced by the shape model construction module;
a face feature point search module, for searching the face feature points and computing the positions where the feature points lie;
a face affine transformation module, for mapping the sample from the face database onto the corresponding positions of the target face;
a Laplacian-Gaussian pyramid image fusion module, for performing the corresponding fusion of the color-transferred target face;
wherein the video acquisition module is connected to the classifier construction module; the output of the classifier construction module is connected to the shape model training module and to the PCA module; the shape model training module and the PCA module are connected to the input of the face feature point search module; the output of the face feature point search module is connected to the face affine transformation module; and the output of the face affine transformation module is connected to the Laplacian-Gaussian pyramid image fusion module.
CN201510520746.XA 2015-08-23 2015-08-23 Video real-time face replacement method and its system based on local affine invariant and color transfer technology Active CN105069746B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510520746.XA CN105069746B (en) 2015-08-23 2015-08-23 Video real-time face replacement method and its system based on local affine invariant and color transfer technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510520746.XA CN105069746B (en) 2015-08-23 2015-08-23 Video real-time face replacement method and its system based on local affine invariant and color transfer technology

Publications (2)

Publication Number Publication Date
CN105069746A true CN105069746A (en) 2015-11-18
CN105069746B CN105069746B (en) 2018-02-16

Family

ID=54499104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510520746.XA Active CN105069746B (en) 2015-08-23 2015-08-23 Video real-time face replacement method and its system based on local affine invariant and color transfer technology

Country Status (1)

Country Link
CN (1) CN105069746B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1687957A (en) * 2005-06-02 2005-10-26 上海交通大学 Man face characteristic point positioning method of combining local searching and movable appearance model
US20130129141A1 (en) * 2010-08-20 2013-05-23 Jue Wang Methods and Apparatus for Facial Feature Replacement
CN101944238A (en) * 2010-09-27 2011-01-12 浙江大学 Data driving face expression synthesis method based on Laplace transformation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KANG Jie et al., "Face recognition algorithm based on Laplacian pyramid dimension reduction" (亢洁等: "基于拉普拉斯金字塔降维的人脸识别算法"), Journal of Shaanxi University of Science & Technology (《陕西科技大学学报》) *
SONG Mingli et al., "Facial detail transfer based on the Laplacian differential" (宋明黎等: "基于拉普拉斯微分的脸部细节迁移"), Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》) *
LI Chuanxue, "Research on automatic face replacement in images/videos" (李传学: "图像/视频中自动人脸替换研究"), China Master's Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库信息科技辑》) *
LIN Yuan et al., "Face replacement based on realistic 3D head reconstruction" (林源等: "基于真实感三维头重建的人脸替换"), Journal of Tsinghua University (Science and Technology) (《清华大学学报(自然科学版)》) *

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023063A (en) * 2016-05-09 2016-10-12 西安北升信息科技有限公司 Video transplantation face changing method
CN106331569A (en) * 2016-08-23 2017-01-11 广州华多网络科技有限公司 Method and system for transforming figure face in instant video picture
CN106792147A (en) * 2016-12-08 2017-05-31 天脉聚源(北京)传媒科技有限公司 A kind of image replacement method and device
CN107067429A (en) * 2017-03-17 2017-08-18 徐迪 Video editing system and method that face three-dimensional reconstruction and face based on deep learning are replaced
CN107146199A (en) * 2017-05-02 2017-09-08 厦门美图之家科技有限公司 A kind of fusion method of facial image, device and computing device
CN107146199B (en) * 2017-05-02 2020-01-17 厦门美图之家科技有限公司 Fusion method and device of face images and computing equipment
WO2018201551A1 (en) * 2017-05-02 2018-11-08 厦门美图之家科技有限公司 Facial image fusion method and apparatus and computing device
CN107230181B (en) * 2017-06-05 2018-06-29 厦门美柚信息科技有限公司 Realize the method and device of facial image fusion
CN107230181A (en) * 2017-06-05 2017-10-03 厦门美柚信息科技有限公司 Realize the method and device of facial image fusion
CN107316020A (en) * 2017-06-26 2017-11-03 司马大大(北京)智能系统有限公司 Face replacement method, device and electronic equipment
CN107316020B (en) * 2017-06-26 2020-05-08 司马大大(北京)智能系统有限公司 Face replacement method and device and electronic equipment
CN107481318A (en) * 2017-08-09 2017-12-15 广东欧珀移动通信有限公司 Replacement method, device and the terminal device of user's head portrait
CN107507216B (en) * 2017-08-17 2020-06-09 北京觅己科技有限公司 Method and device for replacing local area in image and storage medium
CN107507217A (en) * 2017-08-17 2017-12-22 北京觅己科技有限公司 Preparation method, device and the storage medium of certificate photo
CN107507217B (en) * 2017-08-17 2020-10-16 北京觅己科技有限公司 Method and device for making certificate photo and storage medium
CN107507216A (en) * 2017-08-17 2017-12-22 北京觅己科技有限公司 The replacement method of regional area, device and storage medium in image
CN107622495A (en) * 2017-09-11 2018-01-23 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium
CN107680071A (en) * 2017-10-23 2018-02-09 深圳市云之梦科技有限公司 A kind of face and the method and system of body fusion treatment
CN107680071B (en) * 2017-10-23 2020-08-07 深圳市云之梦科技有限公司 Method and system for fusion processing of human face and human body
CN108259788A (en) * 2018-01-29 2018-07-06 努比亚技术有限公司 Video editing method, terminal and computer readable storage medium
CN108921795A (en) * 2018-06-04 2018-11-30 腾讯科技(深圳)有限公司 A kind of image interfusion method, device and storage medium
CN110189248A (en) * 2019-05-16 2019-08-30 腾讯科技(深圳)有限公司 Image interfusion method and device, storage medium, electronic equipment
CN110136229A (en) * 2019-05-27 2019-08-16 广州亮风台信息科技有限公司 A kind of method and apparatus changed face for real-time virtual
CN110298826A (en) * 2019-06-18 2019-10-01 合肥联宝信息技术有限公司 A kind of image processing method and device
CN112101072A (en) * 2019-06-18 2020-12-18 北京陌陌信息技术有限公司 Face matching method, device, equipment and medium
CN110490897A (en) * 2019-07-30 2019-11-22 维沃移动通信有限公司 Imitate the method and electronic equipment that video generates
CN110543826A (en) * 2019-08-06 2019-12-06 尚尚珍宝(北京)网络科技有限公司 Image processing method and device for virtual wearing of wearable product
CN110738161A (en) * 2019-10-12 2020-01-31 电子科技大学 face image correction method based on improved generation type confrontation network
CN111027465A (en) * 2019-12-09 2020-04-17 韶鼎人工智能科技有限公司 Video face replacement method based on illumination migration
CN111445564B (en) * 2020-03-26 2023-10-27 腾讯科技(深圳)有限公司 Face texture image generation method, device, computer equipment and storage medium
CN111445564A (en) * 2020-03-26 2020-07-24 腾讯科技(深圳)有限公司 Face texture image generation method and device, computer equipment and storage medium
CN111553254A (en) * 2020-04-26 2020-08-18 上海天诚比集科技有限公司 Face comparison preprocessing method
CN111553253A (en) * 2020-04-26 2020-08-18 上海天诚比集科技有限公司 Standard face image selection method based on Euclidean distance variance algorithm
CN111612897A (en) * 2020-06-05 2020-09-01 腾讯科技(深圳)有限公司 Three-dimensional model fusion method, device and equipment and readable storage medium
CN111612897B (en) * 2020-06-05 2023-11-10 腾讯科技(深圳)有限公司 Fusion method, device and equipment of three-dimensional model and readable storage medium
CN112241744A (en) * 2020-10-20 2021-01-19 北京字跳网络技术有限公司 Image color migration method, device, equipment and computer readable medium
CN112734890B (en) * 2020-12-22 2023-11-10 上海影谱科技有限公司 Face replacement method and device based on three-dimensional reconstruction
CN112734890A (en) * 2020-12-22 2021-04-30 上海影谱科技有限公司 Human face replacement method and device based on three-dimensional reconstruction
CN113160034A (en) * 2021-04-13 2021-07-23 南京理工大学 Method for realizing complex motion migration based on multiple affine transformation representations
CN113160034B (en) * 2021-04-13 2022-09-20 南京理工大学 Method for realizing complex motion migration based on multiple affine transformation representations
CN113128433A (en) * 2021-04-26 2021-07-16 刘秀萍 Video monitoring image enhancement method of color migration matching characteristics

Also Published As

Publication number Publication date
CN105069746B (en) 2018-02-16

Similar Documents

Publication Publication Date Title
CN105069746A (en) Video real-time human face substitution method and system based on partial affine and color transfer technology
CN108428229B (en) Lung texture recognition method based on appearance and geometric features extracted by deep neural network
CN104978580B (en) A kind of insulator recognition methods for unmanned plane inspection transmission line of electricity
Livny et al. Automatic reconstruction of tree skeletal structures from point clouds
CN106951840A (en) A kind of facial feature points detection method
CN104063702B (en) Three-dimensional gait recognition based on shielding recovery and partial similarity matching
CN107358576A (en) Depth map super resolution ratio reconstruction method based on convolutional neural networks
CN107240129A (en) Object and indoor small scene based on RGB D camera datas recover and modeling method
CN107392130A (en) Classification of Multispectral Images method based on threshold adaptive and convolutional neural networks
CN107833183A (en) A kind of satellite image based on multitask deep neural network while super-resolution and the method for coloring
CN103279936B (en) Human face fake photo based on portrait is synthesized and modification method automatically
CN109146899A (en) CT image jeopardizes organ segmentation method and device
CN105303615A (en) Combination method of two-dimensional stitching and three-dimensional surface reconstruction of image
CN109978871B (en) Fiber bundle screening method integrating probability type and determination type fiber bundle tracking
CN107679537A (en) A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matchings
CN106067161A (en) A kind of method that image is carried out super-resolution
CN106919944A (en) A kind of wide-angle image method for quickly identifying based on ORB algorithms
CN104240256A (en) Image salient detecting method based on layering sparse modeling
CN107967474A (en) A kind of sea-surface target conspicuousness detection method based on convolutional neural networks
CN109409327A (en) RRU module object position and posture detection method based on end-to-end deep neural network
CN108648194A (en) Based on the segmentation of CAD model Three-dimensional target recognition and pose measuring method and device
CN108346162A (en) Remote sensing image registration method based on structural information and space constraint
CN107862707A (en) A kind of method for registering images based on Lucas card Nader's image alignment
CN108460833A (en) A kind of information platform building traditional architecture digital protection and reparation based on BIM
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Video real-time human face substitution method and system based on partial affine and color transfer technology
Effective date of registration: 20190408
Granted publication date: 20180216
Pledgee: Bank of Jiangsu Co., Ltd., Hangzhou Branch
Pledgor: HANGZHOU XINHE SHENGSHI TECHNOLOGY CO., LTD.
Registration number: 2019330000095

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20200608
Granted publication date: 20180216
Pledgee: Bank of Jiangsu Co., Ltd., Hangzhou Branch
Pledgor: HANGZHOU XINHE SHENGSHI TECHNOLOGY CO., LTD.
Registration number: 2019330000095