CN102682445A - Coordinate extraction algorithm imitating the biological vision of chameleons (suborder Lacertilia, family Chamaeleonidae)


Info

Publication number: CN102682445A (application CN201110460701.XA); granted as CN102682445B
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 于乃功, 许锋, 阮晓钢, 李均, 王彬
Applicant and assignee: Beijing University of Technology
Application filed by Beijing University of Technology
Legal status: Granted; Active

Abstract

The invention belongs to the field of computer vision and in particular relates to a computer vision algorithm that performs target tracking, localization, and camera calibration at the same time. The invention provides a camera calibration method and a target tracking and localization method for an active stereo vision system whose structure imitates the biological vision system of chameleons (suborder Lacertilia, family Chamaeleonidae). The main characteristic of the algorithm is that camera calibration proceeds synchronously with the tracking and localization of the target of interest: the calibration process neither requires the target to remain continuously stationary nor needs to be completed before tracking and localization begin. The calibration method turns the conventionally continuous calibration process into a discrete one; by calibrating the camera during the moments when the target is stationary while the vision algorithm is running, it lowers the requirements of camera calibration and lets the vision algorithm and the calibration process run synchronously. Because the method is self-calibrating, it reduces manual calibration work and greatly simplifies the preparation needed before the algorithm runs.

Description

A chameleon-imitating (Lacertilia, Chamaeleonidae) biological vision coordinate extraction algorithm
Technical field
The invention belongs to the field of computer vision, and in particular is a computer vision algorithm that can perform target tracking, localization, and camera calibration at the same time. The biological system it imitates is the visual system of chameleons (suborder Lacertilia, family Chamaeleonidae).
Background technology
Computer vision is an important research direction in the field of artificial intelligence. Compared with other information-detection means, detection devices built on computer vision theory are information-rich, low-cost, non-intrusive to the environment, and easy to control. Vision systems can be built in many ways and classified along several axes: by the number of cameras (monocular vision, binocular vision, multi-camera vision); by mechanical constraints (active vision, passive vision); and by the degree of correlation between the visual information (ordinary vision, stereo vision, and so on).
Consider the simplest case, monocular vision: its structure is simple, its vision and control algorithms are simple, and its cost is low. However, because the vision process of ordinary monocular vision maps the high-dimensional real world onto a low-dimensional image space, much information is inevitably lost in the process; most importantly, conventional monocular vision cannot obtain the depth of a target.
Stereo vision based on binocular vision can remedy this inability of ordinary monocular vision to obtain depth information. By ensuring a suitable common field of view between the two fixed cameras in the vision system, aided by accurate camera calibration and a stereo matching algorithm, a binocular stereo vision system can obtain the depth of a target. In essence, however, this kind of vision system remains open-loop: it cannot adapt to environmental changes by changing its own parameters while the vision algorithm runs.
Solve this defective; The method that makes vision system become a kind of stable closed-loop system is to come the framework vision system with the framework mode of active vision; Promptly increase certain locomitivity to camera; Making vision system can be feedback quantity with the image information of its acquisition, drives camera and rotates the whole vision system of the control of closed loop.
Because the vision system based on active vision theory framework is different with traditional vision system existing institute on physical construction based on passive vision theory framework; So its Depth Information Acquistion mode that is suitable for; Targets of interest locator meams, camera calibration mode are also different with the method that passive vision is taked.For vision system with active vision theory framework, carry out follow-up vision algorithm if desired and handle, it at first need be demarcated employed camera.And conventional demarcation mode requires before the vision algorithm operation, to carry out, and requires targets of interest transfixion in the calibration process.These two requirements make the use of active vision system become loaded down with trivial details, and particularly the camera in the vision system is often changed, or system can make the visual processes process exception loaded down with trivial details when using zoom camera.
Summary of the invention
The present invention proposes a camera calibration method and a target tracking and localization method for an active stereo vision system whose architecture imitates the chameleon biological vision system. Its chief characteristic is that camera calibration is carried out synchronously with the tracking and localization of the target of interest: the calibration process neither requires the target to remain continuously stationary nor needs to be completed before tracking and localization begin. The vision system is built with fixed-focus wide-angle cameras, and the two cameras can move independently of each other.
The overall structure of the chameleon-imitating vision system used by the present invention can be summarized as follows:
The whole vision system imitates the chameleon biological vision system. Its main components are two fixed-focus wide-angle cameras; each camera is equipped with two stepper motors that give it two independent degrees of freedom of motion, one in the horizontal direction and one in the vertical direction. Compared with a conventional vision system, the horizontal motions of the two cameras are independent of each other, and so are their pitching motions. The biological system being imitated is the visual system of the chameleon, whose two eyeballs move independently of each other rather than in a coupled fashion as in primates. Note in particular that the geometric center points of each camera and of its two supporting stepper motors lie on the same straight line perpendicular to the horizontal plane; this property guarantees that the depth-information and world-coordinate extraction algorithm introduced in detail below works correctly.
The main content of the present invention is: by controlling the two fixed-focus wide-angle cameras to work together, calling the angle-information calibration learning algorithm and the depth-information and world-coordinate extraction algorithm detailed below, and using the CamShift tracking algorithm as an auxiliary algorithm, the chameleon-imitating vision system performs motion control and parameter computation, finally obtaining in real time the world coordinates of the tracked target relative to the midpoint of the line connecting the geometric center points of the two cameras.
Technical scheme of the present invention is:
1. Make the two fixed-focus wide-angle cameras search for the target separately. After camera A finds the target, launch the monocular tracking algorithm based on CamShift to keep tracking it, and return in real time the horizontal angle and vertical angle of camera A's current image-plane normal vector with respect to its image-plane normal vector at the initial position. Camera A is whichever camera finds the target first; camera B is the other camera. The initial position is the pose in which each camera's image-plane normal vector is parallel to the horizontal plane and the image-plane normal vectors of the two cameras are perpendicular to the line connecting the geometric center points of the two cameras;
2. Camera B follows camera A in searching for the target; after camera B finds the target, it also launches the CamShift-based monocular tracking algorithm to keep tracking it;
3. Call the angle-information calibration learning algorithm to compute in real time, for each of cameras A and B after both have locked onto the target, the horizontal angle and vertical angle of its image-plane normal vector with respect to its image-plane normal vector at the initial position;
4. Using the result of step 3, apply the depth-information and world-coordinate extraction algorithm to compute and output in real time the depth of the target and its coordinates in the world coordinate system. The world coordinate system is a right-handed system whose origin is the midpoint of the line connecting the geometric center points of the two cameras, whose positive x axis is parallel to the horizontal plane, perpendicular to that line, and along the viewing direction, and whose positive z axis points upward, perpendicular to the horizontal plane;
5. If the target is not lost, return to step 3; if tracking is lost, return to step 1.
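The five steps above can be sketched as a small control loop. This is a toy simulation under stated assumptions: PanTiltCamera, run_vision_loop, and the angle conventions are hypothetical stand-ins, not the patent's implementation.

```python
import math

# A minimal simulation sketch of steps 1-5. The camera class simulates a
# pan-tilt camera whose initial image-plane normal points along +x; in the
# real system these angles would come from the stepper motors.
class PanTiltCamera:
    def __init__(self, y_offset, target):
        self.y = y_offset        # camera center's position along the baseline
        self.target = target     # (x, y, z) of the simulated target
        self.found = False

    def search_target(self):     # step 1: simulated search always succeeds
        self.found = True
        return True

    def follow_search(self, other):  # step 2: B follows A's search result
        self.found = other.found

    def normal_angles(self):
        # Step 3: horizontal/vertical angles of the image-plane normal with
        # respect to the initial pose, whose normal points along +x.
        tx, ty, tz = self.target
        theta = math.atan2(ty - self.y, tx)
        eta = math.atan2(tz, math.hypot(tx, ty - self.y))
        return theta, eta

def run_vision_loop(cam_a, cam_b, triangulate, iters=3):
    """Steps 1-5: search, dual-track, and extract coordinates each cycle."""
    coords = []
    for _ in range(iters):
        if not (cam_a.found and cam_b.found):
            cam_a.search_target()
            cam_b.follow_search(cam_a)
        a_angles = cam_a.normal_angles()
        b_angles = cam_b.normal_angles()
        coords.append(triangulate(a_angles, b_angles))  # step 4
    return coords  # step 5's track-loss handling is omitted in this sketch
```

In a real system the triangulate callback would be the depth-information and world-coordinate extraction algorithm described below.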
The angle-information calibration learning algorithm comprises the following steps:
2.1) Initialize the learning entry angle (Δθ₁, Δη₁).
The learning entry angle is the horizontal angle and vertical angle through which the camera rotates when, during the calibration process, the target moves in the camera image from the image center point to its current position;
2.2) In the camera image, compute the horizontal distance P and the vertical distance Q between the image center point and the target, each as a percentage, f(P) and g(Q), of the length of the image's main diagonal;
2.3) Judge whether the target is at the image center point:
If (|f(P)| < ε) ∧ (|g(Q)| < ε), for a threshold ε > 0, the target is at the center point; jump to step 2.7);
If (|f(P)| > ε) ∨ (|g(Q)| > ε), the target is not at the center point; jump to step 2.4);
2.4) Look up the calibration information table:
If the table contains angle information corresponding to |f(P)| and |g(Q)|, jump to step 2.8);
If it does not, jump to step 2.5);
2.5) Rotate the camera step by step to bring the target into coincidence with the image center point in the camera image;
2.6) When the target coincides with the image center point, jump to step 2.7); otherwise return to step 2.5);
2.7) When the target coincides with the image center point, read and output the horizontal angle θ and vertical angle η of the current image-plane normal vector with respect to the camera's image-plane normal vector at the initial position, then go to step 2.10);
2.8) If the table contains angle information corresponding to |f(P)| and |g(Q)|, rotate the camera through that angle so that the target's geometric center point coincides with the image center point in the camera image; the rotation direction is determined according to the calibration information conversion table;
2.9) Output the horizontal angle and vertical angle of the camera normal vector with respect to its initial position in the world coordinate system; delay for a time T, where T is a preset delay; then return to step 2.2);
2.10) Judge, via the preset learning flag bit, whether learning is finished:
If learning is finished, go to step 2.2);
If learning is not finished, the algorithm continues and calibration begins;
2.11) Judge whether this is the first calibration:
If the learning entry angle equals its initial value, this is the first calibration and the algorithm continues;
If the learning entry angle does not equal its initial value, this is not the first calibration; jump to step 2.13);
2.12) Rotate the camera by (θ_ε, η_ε) in the horizontal and vertical directions respectively, where θ_ε and η_ε are threshold angles; jump to step 2.14);
2.13) Rotate the camera through the learning entry angle recorded when the previous calibration was interrupted; jump to step 2.14);
2.14) If this is the initial calibration, write into the calibration information table the threshold angles (θ_ε, η_ε) through which the camera rotated in the horizontal and vertical directions, together with the corresponding |f(P)| and |g(Q)| of the target's geometric center point in the camera image. If this is not the initial calibration, first judge whether the target moved before entering this step: if the target moved, update the learning entry angle and jump to step 2.2); if the target did not move, update the calibration information table;
2.15) Keep the vertical angle of the camera constant and calibrate one row: each time the camera turns through one unit horizontal angle, judge whether the target's geometric center point has left the camera image. If it has left, go to step 2.17); if not, continue;
2.16) After the camera has turned through one unit horizontal angle, judge whether the target has moved. If it has not moved, update the calibration information table and jump to step 2.15); if it has moved, update the learning entry angle and jump to step 2.2);
2.17) Change the vertical angle of the camera and carry out the horizontal calibration of another row. If the target's geometric center point has left the camera image, the calibration information table is complete; go to step 2.18). If not, go to step 2.14);
2.18) After the calibration information table is complete, clear the learning flag bit and go to step 2.2).
The calibration information table is a two-dimensional table recording the camera imaging calibration information. Each entry records a target position together with the horizontal angle and vertical angle through which the camera's image-plane normal vector turns as the target moves in the camera image from the image center point to that position. The target position is expressed by the horizontal distance P and vertical distance Q between the image center point and the target, as percentages f(P) and g(Q) of the length of the image's main diagonal.
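A minimal sketch of such a table, assuming a dictionary keyed by the magnitudes |f(P)| and |g(Q)|; the binning step and method names are assumptions, not taken from the patent.

```python
# A sketch of the calibration information table: a lookup from the target's
# center-offset percentages to the rotation-angle magnitudes learned for
# quadrant I. Offsets are quantized so that nearby positions share an entry.
class CalibrationTable:
    def __init__(self, step=0.01):
        self.step = step   # quantization used to match nearby offsets
        self.table = {}    # (|f| bin, |g| bin) -> (|dtheta|, |deta|)

    def _key(self, f, g):
        return round(abs(f) / self.step), round(abs(g) / self.step)

    def store(self, f, g, dtheta, deta):
        """Record an entry during learning (steps 2.14 and 2.16)."""
        self.table[self._key(f, g)] = (abs(dtheta), abs(deta))

    def lookup(self, f, g):
        """Step 2.4's table lookup; None means 'not calibrated yet'."""
        return self.table.get(self._key(f, g))
```

Storing only magnitudes matches the patent's scheme of recording quadrant I and recovering signs via the conversion table.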
The calibration information conversion table can be described as follows:
First the camera image is divided into 4 regions according to the quadrant in which the target lies. (The partition formula shown in the original figure divides the pixels of the camera image into four quadrants about the image center point.)
Here imax is the number of rows of the camera image, jmax the number of columns, i the row index, j the column index, and I_ij the pixel in row i, column j of the camera image.
The calibration information table records only the calibration information for quadrant I of the camera image; the calibration information for the other quadrants is computed through the rules of the following calibration information conversion table:
Quadrant I:   IF (f(P) = |f(P)|) ∧ (g(Q) = |g(Q)|)    then Δθ = |Δθ|,  Δη = |Δη|
Quadrant II:  IF (-f(P) = |f(P)|) ∧ (g(Q) = |g(Q)|)   then Δθ = -|Δθ|, Δη = |Δη|
Quadrant III: IF (-f(P) = |f(P)|) ∧ (-g(Q) = |g(Q)|)  then Δθ = -|Δθ|, Δη = -|Δη|
Quadrant IV:  IF (f(P) = |f(P)|) ∧ (-g(Q) = |g(Q)|)   then Δθ = |Δθ|,  Δη = -|Δη|
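The conversion rule amounts to attaching the signs of f(P) and g(Q) to the stored magnitudes; a minimal sketch:

```python
# A sketch of the quadrant conversion rule: the table stores only the
# quadrant-I magnitudes (|dtheta|, |deta|); the signs of f(P) and g(Q)
# determine the rotation direction in the other quadrants.
def signed_rotation(f, g, dtheta_mag, deta_mag):
    """Return (dtheta, deta) with signs chosen by the target's quadrant."""
    dtheta = dtheta_mag if f >= 0 else -dtheta_mag
    deta = deta_mag if g >= 0 else -deta_mag
    return dtheta, deta
```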
The depth-information and world-coordinate extraction algorithm comprises the following step:
Given the distance between the geometric center points of the two fixed-focus wide-angle cameras, and the horizontal angle and vertical angle of each camera's current image-plane normal vector with respect to its initial-position normal vector as obtained by the angle-information calibration learning algorithm, compute the world coordinates of the target using trigonometric functions.
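A minimal sketch of the trigonometric computation, under assumed sign conventions the patent does not spell out: cameras A and B sit at (0, +d/2, 0) and (0, -d/2, 0) in the world frame defined above (origin at the baseline midpoint, x along the viewing direction, y along the baseline, z up); each horizontal angle theta is measured from the initial +x normal toward +y, and each vertical angle eta upward from the horizontal plane.

```python
import math

# Minimal triangulation sketch for the two-camera geometry.
def target_world_coords(d, theta_a, theta_b, eta_a):
    """World (x, y, z) of the target from baseline d and camera angles."""
    # tan(theta_a) = (y - d/2)/x and tan(theta_b) = (y + d/2)/x; subtracting
    # the two relations eliminates y and yields the depth x directly.
    x = d / (math.tan(theta_b) - math.tan(theta_a))
    y = x * math.tan(theta_a) + d / 2.0
    # Camera A's vertical angle relates z to its horizontal range to the target.
    z = math.tan(eta_a) * math.hypot(x, y - d / 2.0)
    return x, y, z
```

With different sign conventions the subtractions change sign, but the structure of the computation is the same.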
Scope of application of the present invention:
The camera calibration part of this technical scheme can be used as a calibration algorithm for monocular vision; it can also decompose multi-camera vision into separate, independent monocular calibrations, lowering the calibration requirements on the cameras. The accompanying target tracking and localization method can be applied in active vision systems.
The present invention has the following advantages:
1) Conventional active vision camera calibration requires the target to remain motionless during the calibration process and requires the calibration to precede image processing. The calibration method adopted by the present invention turns the continuous calibration process into a discrete one: it calibrates the camera during the discrete stationary states of the target while the vision algorithm is running, lowering the requirements of camera calibration and allowing the vision algorithm and the calibration process to run synchronously. The self-calibrating nature reduces manual calibration work and greatly simplifies the preparation of the algorithm. The self-calibration also has an automatic interruption-and-recovery capability: even if it is disturbed during calibration, it saves a calibration breakpoint and completes the self-calibration later, once the calibration conditions are met again.
2) Because the vision system adopted by the present invention is a stereo vision system built on active vision, it can accurately obtain the depth of the target and compute the corresponding world coordinates. Compared with a conventional stereo vision system, its depth-information and world-coordinate extraction algorithm is highly tolerant of lens distortion, so the system can use fixed-focus wide-angle cameras to enlarge its visual range without worrying that the distortion of a wide-angle lens will affect the computation of depth and world coordinates.
3) The vision system used by the present invention has at least 4 degrees of freedom, more than a conventional vision system. Its active vision characteristic makes the system a closed-loop control system based on image feedback, giving the accompanying vision algorithms stronger adaptability to the environment.
Description of drawings
Fig. 1 Abstract schematic of the chameleon-imitating biological vision system infrastructure
Fig. 2 Overall structural drawing of the extended chameleon-imitating biological vision system
Fig. 3 Image-plane schematic of the angle-information calibration learning algorithm in operation
Fig. 4 Schematic of X and Y coordinate extraction
Fig. 5 Schematic of Z coordinate extraction
Fig. 6 Overall flow chart of the steps
Fig. 7 Flow chart of the angle-information calibration learning algorithm
Fig. 8 Schematic of the learning process of the angle-information calibration learning algorithm
Fig. 9 Flow chart of the learning process of the angle-information calibration learning algorithm
Fig. 10 Diagram of the calibration information table
Fig. 11 Flow chart of the depth-information and world-coordinate extraction algorithm
In the figures: 1 - fixed-focus wide-angle camera; 2 - vertical-direction stepper motor; 3 - horizontal-direction stepper motor; 4 - image acquisition and processing device; 5 - motor controller.
Embodiment
The present example is described in detail below with reference to Fig. 1 through Fig. 11.
1. Hardware platform and basic working principle for realizing the chameleon-imitating biological vision coordinate extraction algorithm.
In the present embodiment, the hardware platform that realizes the chameleon-imitating biological vision coordinate extraction algorithm is described as follows:
As shown in Fig. 1, the hardware platform comprises two fixed-focus wide-angle cameras (1), stepper motors (2) for vertical motion, stepper motors (3) for horizontal motion, an image acquisition and processing device (4), and a motor controller (5). Each fixed-focus wide-angle camera (1) is equipped with one vertical-motion stepper motor (2) and one horizontal-motion stepper motor (3); the purpose is to imitate the chameleon's biological vision system, and in particular to reproduce the biological eye-movement mechanism's degrees of freedom in the vertical and horizontal directions. Compared with a conventional vision system, this structure has distinct active-vision characteristics and helps reduce the complexity of the algorithms. Stepper motors were chosen because, compared with servos, they return more accurate angle information.
The fixed-focus wide-angle camera (1) acquires external image information and sends it to the image acquisition and processing device (4) connected to it; after the device (4) processes it with the corresponding preset algorithm, the control quantity is sent to the motor controller (5), which drives the vertical-motion stepper motor (2) and the horizontal-motion stepper motor (3) to rotate, which in turn rotate the camera (1).
Note in particular that the geometric center point of each camera (1) and those of its vertical-motion stepper motor (2) and horizontal-motion stepper motor (3) lie on the same straight line perpendicular to the horizontal plane; this property guarantees that the depth-information and world-coordinate extraction algorithm introduced in detail below works correctly.
As shown in Fig. 2, in this basic ocular system a long-focus zoom camera with 2 degrees of freedom of motion in the horizontal and vertical directions can be added at the midpoint of the line connecting the two wide-angle cameras; mounted higher than the wide-angle cameras, it improves the recognition capability of the extended vision system. The added long-focus zoom camera needs a range of motion of 180 degrees in each of the horizontal and vertical directions to cooperate with the later recognition step. Its long focal length and zoom capability let it guarantee both the sharpness of the image of the target to be recognized and the percentage of the long-focus camera image that the target occupies, making full use of the camera's image-acquisition capability. It chiefly imitates the fovea, the physiological structure at the center of the retina in the human visual system.
To effectively solve the problem that the target easily leaves the common field of view of the vision system, one vertical-motion stepper motor and one horizontal-motion stepper motor can be added at the bottom of the aforementioned hardware platform; the power of the new motors is greater than that of the aforementioned stepper motors. When the target is about to leave the image border of the vision system, adjusting the rotation angles of this added structure keeps the target from leaving the common field of view. The structure chiefly imitates the chameleon's neck and assists the work of the whole vision system. The added long-focus zoom camera and the two added motors give the whole system more pronounced active-vision characteristics, which helps prevent loss of the target while the algorithm runs, and provide clearer target image detail, which helps realize recognition and navigation algorithms.
2. Realization of the chameleon-imitating biological vision coordinate extraction algorithm, as shown in Fig. 6:
1) Wide-angle camera position initialization
First make the two fixed-focus wide-angle cameras turn to the predefined initial position: each camera's image-plane normal vector is parallel to the horizontal plane, and the image-plane normal vectors of the two cameras are perpendicular to the line connecting their geometric center points;
2) Imitating the chameleon's visual system, make the two wide-angle cameras search for the target in different predefined directions. After one of them finds the target, it launches the monocular tracking algorithm based on CamShift to keep tracking the target, and returns in real time the horizontal angle and vertical angle of its current image-plane normal vector with respect to its image-plane normal vector at the initial position;
The CamShift algorithm provides the position of the target's center point in the current image and the size the target occupies in the current image.
This bionic search method, combined with the fixed-focus wide-angle cameras, yields a larger visual search range. The images collected by the two cameras at this time may have no common field of view, and can be treated as two groups of independent monocular video sequences.
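As a rough illustration (not the patent's implementation; in practice a library implementation such as OpenCV's CamShift would be used), the core of CamShift is mean shift: the search window repeatedly moves to the centroid of the target-probability mass inside it.

```python
# Toy mean-shift step on a 2D back-projection map `prob` (list of lists):
# move the search window, centered at (cx, cy) with half-extents
# (half_w, half_h), to the centroid of the probability mass inside it,
# repeating until the window stops moving.
def mean_shift(prob, cx, cy, half_w, half_h, iters=20):
    rows, cols = len(prob), len(prob[0])
    for _ in range(iters):
        m00 = m10 = m01 = 0.0
        for y in range(max(0, cy - half_h), min(rows, cy + half_h + 1)):
            for x in range(max(0, cx - half_w), min(cols, cx + half_w + 1)):
                w = prob[y][x]
                m00 += w
                m10 += w * x
                m01 += w * y
        if m00 == 0:
            break                      # no target mass inside the window
        nx, ny = round(m10 / m00), round(m01 / m00)
        if (nx, ny) == (cx, cy):
            break                      # converged
        cx, cy = nx, ny
    return cx, cy
```

CamShift additionally adapts the window size each frame from the zeroth moment m00, which is how it reports the size the target occupies in the image.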
For convenience of the algorithm description, suppose the camera that first finds the target is wide-angle camera A.
Make wide-angle camera B follow camera A in searching for the target; after camera B finds the target, it also launches the CamShift-based monocular tracking algorithm to keep tracking it.
In the present embodiment, the process in which camera B follows camera A in searching for the target is decomposed into a vertical-direction search and a horizontal-direction search. First, camera B's vertical angle is made identical to camera A's; then camera B begins the horizontal search. The horizontal search procedure is as follows:
In the present embodiment, the horizontal-motion stepper motor (3) gives the camera a rotation range of 180 degrees in the horizontal direction. In the world coordinate system, let α be the angle between camera A's image-plane normal vector and the positive Y axis, and β the angle between camera B's image-plane normal vector and the positive Y axis. When 0° < α ≤ 90°, camera B scans from β = 0° in the direction of increasing angle; when 90° < α < 180°, camera B scans from β = 180° in the direction of decreasing angle.
The world coordinate system is: a right-handed system whose origin is the midpoint of the line connecting the geometric center points of the two wide-angle cameras, whose positive X axis is parallel to the horizontal plane, perpendicular to that line, and along the viewing direction, and whose positive Z axis points upward, perpendicular to the horizontal plane.
After wide-angle camera B finds the target, it also launches the CamShift-based monocular tracking algorithm to keep tracking it.
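Camera B's scan rule above can be sketched as a small helper (the patent states the rule only in prose; the function name and return convention are assumptions):

```python
# Camera B's horizontal search: start near camera A's side of the
# 180-degree range and sweep toward the other side.
def b_scan_plan(alpha_deg):
    """Return (starting angle beta in degrees, scan direction sign)."""
    if 0 < alpha_deg <= 90:
        return 0.0, 1      # scan from beta = 0 in the increasing direction
    if 90 < alpha_deg < 180:
        return 180.0, -1   # scan from beta = 180 in the decreasing direction
    raise ValueError("alpha must lie in (0, 180) degrees")
```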
After both wide-angle cameras have launched the tracking algorithm, if the target moves continuously, the tracking may exhibit a certain steady-state tracking error and tracking lag. To eliminate them, the algorithm can be extended with a PID controller. The specific practice is: in the current camera image, use the horizontal distance and vertical distance between the target and the image center point as the feedback quantities, and apply the PID control algorithm to make the wide-angle cameras track the target more accurately.
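A minimal sketch of the suggested PID extension, with the target's pixel offset from the image center as the feedback error and the controller output as a rotation command for the corresponding stepper motor; the gains here are illustrative, not from the patent.

```python
# One controller per axis (pan and tilt) per camera would drive the
# center offsets toward zero.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err, dt):
        """One control step: return the rotation command for this axis."""
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```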
3) call angle information and demarcate learning algorithm and calculate each camera A, B respectively in real time and all trace into target after, picture planar process vector separately with respect to initial position camera separately as the horizontal sextant angle and vertical angle of planar process vector;
4) use angle information is demarcated the result that learning algorithm obtains, and calls depth information and world coordinates extraction algorithm, calculates target depth information and the target world coordinates in world coordinate system in real time; Described world coordinate system is: the mid point with two wide-angle cameras with fixed focus geometric center point lines is an initial point; With parallel with surface level, be X axle positive dirction perpendicular to two wide-angle cameras with fixed focus geometric center point lines and along the direction of visual pursuit, be the right-handed coordinate system of Z axle positive dirction with the direction that makes progress perpendicular to surface level;
5) Output the target's world coordinates. If the target is not lost, return to step 3); if tracking is lost, return to step 2).
If the vision system applying the chameleon-like biological-vision coordinate extraction algorithm includes the stepper motor at the base described above, then after both fixed-focus wide-angle cameras have started the CamShift-based monocular visual tracking algorithm, whenever the target is about to move outside the cameras' common field of view, the rotation angles of the added motors in the horizontal and vertical directions can be adjusted to keep the target within the common field of view of the two cameras.
If the vision system has been extended with the telephoto zoom camera mentioned above, then on the basis of the target world coordinates obtained in step 4), a coordinate transformation is performed: the origin of the world coordinate system is moved to the geometric center point of the telephoto zoom camera, and the target's coordinates in the new coordinate system are computed. Rotating the telephoto camera according to these coordinates makes the target center point approximately coincide with the telephoto camera's image center; the CamShift-based monocular visual tracking algorithm then keeps the telephoto camera tracking the target; the focal length of the telephoto zoom camera is changed until the target occupies a preset percentage of the whole telephoto image, after which recognition can begin.
2. Angle-information calibration learning algorithm
The angle-information calibration learning algorithm of the chameleon-like biological-vision coordinate extraction algorithm can be described as follows:
Figure 3 shows the initial image collected by either fixed-focus wide-angle camera when the angle-information calibration learning algorithm begins. The intersection of the two dotted lines is the camera imaging center point; taking this center as the origin, the horizontal dotted line as the LX axis, and the vertical dotted line as the LY axis establishes the camera image coordinate system. The sphere at the top is the target of interest, with coordinates (P, Q); the horizontal length of the entire image is denoted X and the vertical length Y.
Flag definition: let Marker_learning be the learning-mode flag. When Marker_learning is nonzero, the algorithm runs in learning mode; when it is 0, the algorithm does not run in learning mode.
Algorithm purpose: during target tracking, output in real time the horizontal angle and vertical angle of the camera's image-plane normal vector relative to the normal vector of the camera at its initial position.
The flowchart of the algorithm is shown in Figure 7.
The angle-information calibration learning algorithm is a self-calibration algorithm for a single camera. The steps below are explained assuming fixed-focus wide-angle camera A and its two corresponding stepper motors, but the algorithm applies equally to camera B and its two stepper motors.
The algorithm steps are as follows:
1) Initialize the learning entry angles (Δθ1, Δη1) by setting Δθ1 = θε and Δη1 = ηε, where (θε, ηε) are threshold angles whose actual values must be obtained by debugging according to the camera's actual degree of distortion, e.g. (3°, 3°). The learning entry angles are the horizontal and vertical angles through which the camera has rotated when, during calibration, the target has moved from the origin of the camera image to its current position. The learning entry angles record the progress of the self-calibration within the angle-information calibration learning algorithm. When (Δθ1 ≠ θε) ∨ (Δη1 ≠ ηε), the algorithm has stored a self-calibration breakpoint; when Δθ1 = θε and Δη1 = ηε, no self-calibration breakpoint is stored.
2) Compute the horizontal distance P and vertical distance Q between the image center point and the target in the wide-angle camera image (Figure 3), expressed as fractions of the image principal diagonal length:

f(P) = P / √(X² + Y²),  g(Q) = Q / √(X² + Y²),

where X and Y are the length and width of the camera image, respectively. P is positive when the target lies to the right of the image center point and negative when it lies to the left; Q is positive when the target lies above the image center point and negative when it lies below.
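A minimal sketch of the offset computation in step 2), using the diagonal normalization given above (the function name is illustrative):

```python
import math

def normalized_offsets(P, Q, X, Y):
    """f(P) and g(Q): target offsets from the image center point as
    signed fractions of the image principal diagonal length.
    P > 0 / Q > 0 mean the target is right of / above the center."""
    diag = math.hypot(X, Y)  # sqrt(X^2 + Y^2)
    return P / diag, Q / diag
```

For a 640×480 image the diagonal is 800 pixels, so a target 80 pixels right of center gives f(P) = 0.1.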
3) Determine whether the target is at the camera image center point, i.e. whether |f(P)| and |g(Q)| are both less than a preset threshold ε > 0.
If (|f(P)| < ε) ∧ (|g(Q)| < ε), the target approximately coincides with the image center point; go to step 7).
If (|f(P)| > ε) ∨ (|g(Q)| > ε), the target does not coincide with the image center point and the algorithm continues.
The threshold ε is used to judge whether the target approximately coincides with the image center point. A larger ε makes the algorithm run faster but reduces its precision; a smaller ε gives higher precision but reduces the running speed. Its value must be tuned to the required angular precision and the real-time requirements of the vision system; a reference value is 1/50.
4) If the target does not coincide with the image center point, look up the calibration information table and determine whether it already records the |Δθ| and |Δη| values corresponding to the current |f(P)| and |g(Q)|. If the corresponding |Δθ| and |Δη| values exist, the algorithm jumps to step 8); if they do not exist, the algorithm continues.
Here Δθ denotes, in the world coordinate system, the horizontal angle between the projection onto the horizontal plane of the vector from the camera geometric center point to the target geometric center point and the projection onto the horizontal plane of the camera's image-plane normal vector at that moment. Δη denotes, in the world coordinate system, the vertical angle between the projection of the camera-to-target vector onto the plane perpendicular to both the horizontal plane and the line connecting the two camera center points and the projection of the camera normal vector onto that same plane.
Note that the judgment in this step tests for the existence of the corresponding |Δθ| and |Δη| values rather than for the existence of the calibration information table itself, because the self-calibration process of the angle-information calibration learning algorithm can be interrupted, much like interrupt handling in a computer. If the target has moved, the learning entry angles of the calibration breakpoint are saved and the calibration learning process is exited, returning to the main algorithm; when the target center point coincides with the image center point again and the target is stationary, the learning entry angles are used to resume the learning process. This also means that the calibration information in the table may be incomplete at the moment of this judgment, rather than existing completely or not at all. Therefore, when checking the calibration information, the existence of the entry corresponding to each |f(P)| and |g(Q)| must be judged individually.
The benefits of adding a breakpoint-resume mechanism to the self-calibration process are clear: 1. It strengthens the adaptability of the algorithm, turning calibration from a continuous process into a discrete one: the target is no longer required to remain stationary for calibration immediately after the algorithm starts; instead, the discrete stationary states of the target during algorithm operation are used for calibration. 2. It allows the algorithm to run normally without waiting for self-calibration to complete; during operation, the angle-information output becomes increasingly real-time as self-calibration proceeds. 3. It makes the algorithm more intelligent.
5) Gradually rotate the camera so that the target coincides with the image center point in the camera image.
Implementation: gradually change the PWM duty cycle sent to stepper motors (2) and (3) so that fixed-focus wide-angle camera (1) rotates in the direction that reduces the distances |P| and |Q| between the image center point and the target center point. The change step of the PWM duty cycle controlling camera rotation is divided into a horizontal change step μ1 and a vertical change step μ2. The step is not a fixed value but is computed from f(P) or g(Q): μ1 = k·μ0·f(P) and μ2 = k·μ0·g(Q), where k is a preset multiplier constant and μ0 is the unit change step. Since f(P) and g(Q) can be positive or negative, the change steps μ1 and μ2 can also be positive or negative.
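The variable step size of step 5) can be sketched as follows (the values of k and μ0 are placeholders to be tuned on the real hardware):

```python
def duty_cycle_steps(fP, gQ, k=2.0, mu0=0.01):
    """mu1 = k*mu0*f(P), mu2 = k*mu0*g(Q): signed PWM duty-cycle
    increments that shrink as the target approaches the image center,
    and whose sign selects the rotation direction."""
    return k * mu0 * fP, k * mu0 * gQ
```

Because the step is proportional to the normalized offset, the camera turns quickly when the target is far from center and slows as it converges.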
When increasing or decreasing the PWM duty cycle, if the hardware system includes the stepper motors described above that move the whole assembly horizontally and vertically, then compute the horizontal angle θ and vertical angle η of the camera's image-plane normal vector at this moment relative to the initial-position camera normal vector, and check whether |θ| is near the critical range of motion of stepper motor (3) and whether |η| is near the critical range of motion of stepper motor (2). If |θ| > |θmax| − |θε2|, where |θmax| is the maximum motion angle of stepper motor (3) and θε2 is a threshold angle, i.e. a small motion angle of stepper motor (3) such as the motor angle corresponding to 3 to 5 times the PWM unit change step μ0, then before the algorithm continues, first adjust the output angle of the added horizontal-motion stepper motor so that the whole vision system moves horizontally to reduce |θ|, thereby enlarging the horizontal motion range of the vision system applying the chameleon-like biological-vision coordinate extraction algorithm. If |η| > |ηmax| − |ηε2|, where |ηmax| is the maximum motion angle of stepper motor (2) and ηε2 is a threshold angle, i.e. a small motion angle of stepper motor (2) such as the motor angle corresponding to 3 to 5 times the PWM unit change step μ0, then before the algorithm continues, first adjust the output angle of the added vertical-motion stepper motor so that the whole vision system moves vertically to reduce |η|, thereby enlarging the vertical motion range of the vision system. When |θ| < |θmax| − |θε2| and |η| < |ηmax| − |ηε2|, the algorithm continues.
6) When the absolute values of f(P) and g(Q) are both less than the preset threshold ε, i.e. (|f(P)| < ε) ∧ (|g(Q)| < ε), the target is judged to approximately coincide with the camera image center point; jump to step 7). Otherwise return to step 5).
7) When the target approximately coincides with the image center point, read and output the horizontal angle θ and vertical angle η of the camera's image-plane normal vector at this moment relative to the camera normal vector at its initial position. Go to step 10).
8) If corresponding calibration information exists in the calibration information table, look up the table by the |f(P)| and |g(Q)| values to obtain the corresponding |Δθ| and |Δη| values. Then, using the calibration information conversion table, determine the signs to attach to |Δθ| and |Δη| according to the image quadrant in which the target lies, thereby determining the horizontal angle Δθ and vertical angle Δη through which the camera should rotate.
The calibration information table mentioned here is described in detail in the Calibration Principle section below; the calibration information conversion table is described after the calibration information table section.
If the hardware system includes the stepper motors described above that move the entire system horizontally and vertically, then compute the horizontal angle θ and vertical angle η of the camera's image-plane normal vector at this moment relative to the initial-position camera normal vector. According to the quadrant of the wide-angle camera image in which the target currently lies, determine the signs of Δθ and Δη from the calibration information conversion table, and check whether |θ ± Δθ| is near the critical range of motion of stepper motor (3) and whether |η ± Δη| is near the critical range of motion of stepper motor (2).
If |θ ± Δθ| > |θmax| − |θε2|, where |θmax| is the maximum motion angle of stepper motor (3) and θε2 is a threshold angle (a small motion angle of stepper motor (3), such as the motor angle corresponding to 3 to 5 times the PWM unit change step μ0), then before the algorithm continues, first adjust the output angle of the added horizontal-motion stepper motor so that the whole vision system moves horizontally to reduce |θ ± Δθ|, thereby enlarging the horizontal motion range of the vision system. Because this way of enlarging the motion range changes the target's position in the image, the calibration information in the table cannot be used directly; the algorithm must return to step 4) and look up the calibration information again.
If |η ± Δη| > |ηmax| − |ηε2|, where |ηmax| is the maximum motion angle of stepper motor (2) and ηε2 is a threshold angle (a small motion angle of stepper motor (2), such as the motor angle corresponding to 3 to 5 times the PWM unit change step μ0), then before the algorithm continues, first adjust the output angle of the added vertical-motion stepper motor so that the whole vision system moves vertically to reduce |η ± Δη|, thereby enlarging the vertical motion range of the vision system. Because this way of enlarging the motion range changes the target's position in the image, the calibration information in the table cannot be used directly; the algorithm must return to step 4) and look up the calibration information again.
If |θ ± Δθ| < |θmax| − |θε2| and |η ± Δη| < |ηmax| − |ηε2|, the algorithm continues.
After the calibration information table and the calibration information conversion table have been consulted, send to the stepper motors the PWM duty cycles corresponding to rotating the camera through the horizontal angle Δθ and vertical angle Δη, so as to control the fixed-focus wide-angle camera to reduce |f(P)| and |g(Q)|.
9) Output, in the world coordinate system, the target's low-precision actual horizontal angle θ + Δθ and low-precision actual vertical angle η + Δη at this moment, where θ and η are the horizontal and vertical angles of the camera's image-plane normal vector at this moment relative to the camera normal vector at its initial position. Delay for time T, where T is a preset delay, then return to step 2).
Because the target may be moving in real time while the algorithm runs, the motor control quantities of the system also need real-time updates. Therefore, after the computed PWM wave is sent, the algorithm pauses for a fixed time T before proceeding to update the motor control quantities. The delay T must be set by debugging according to actual needs: a long delay T reduces the computational load but makes the system lag; a short delay T makes the system more responsive but increases the computational load.
10) Check whether the Marker_learning flag is 0; the value of this flag is preset. If Marker_learning is 0, the calibration information table has been completely established; go to step 2). If Marker_learning is nonzero, continue the algorithm and begin or continue establishing the calibration information table.
Calibration principle
Figure 8 is a schematic diagram of the self-calibration process of the angle-information calibration learning algorithm. The calibration process is: calibrate within the first quadrant of the camera image coordinate system; fix the camera's vertical angle and calibrate in the horizontal direction; then change the camera's vertical angle and calibrate the horizontal direction for the new vertical angle; loop continuously until the target leaves the camera image in the vertical direction. While calibrating one row of pixels of the first quadrant, the camera moves horizontally, which means that the fluctuation of the target's g(Q) value should stay within a threshold range; this threshold must be determined during actual debugging.
Calibration must satisfy two conditions: first, the target is located at the center point of the camera image; second, the target is stationary. The first condition is guaranteed by the preceding steps, so the following steps only need to judge whether the target is stationary. The judgment is made in two steps:
In the first step, while the camera performs horizontal calibration at a fixed vertical angle, judge whether the target is stationary. The method is: before horizontal calibration begins, first compute g(Q) and assign it to g(Q0), i.e. g(Q0) = g(Q); then begin horizontal calibration, compute g(Q) in real time, and test whether |g(Q) − g(Q0)| < ε2 holds. If it holds, the target has not moved; if it does not hold, the target has moved. ε2 is a reference threshold, a small positive value such as 1/50 of the camera image principal diagonal length. The setting of this threshold depends on the camera performance and the experimental requirements.
In the second step, after the camera finishes calibrating the previous row in the horizontal direction and rotates continuously to the initial horizontal calibration position of the next row, judge whether the target has moved. The method is: assign the g(Q0) corresponding to the previous row to g(Q1); then, before horizontal calibration of the next row begins, compute g(Q) and assign it to g(Q0), i.e. g(Q0) = g(Q), and test |g(Q1) − g(Q0)| < ε3. If the inequality holds, the target has not moved; if it does not hold, the target has moved. ε3 is a reference threshold, a positive value larger than ε2, such as 1/30 of the principal diagonal length, depending on the actual precision requirements.
Once the target moves during the calibration process, calibration stops immediately and the learning entry angles corresponding to the moment the target moved are recorded. When the conditions for triggering calibration hold again, the unfinished calibration work is resumed from the learning entry angles.
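The two-step stationarity test above can be sketched as follows; the default thresholds follow the reference values in the text (1/50 and 1/30 of the diagonal, with g(Q) already diagonal-normalized), and all names are illustrative:

```python
def static_within_row(gQ, gQ0, eps2=1/50):
    """First step: during one row of horizontal calibration, the target
    is treated as stationary while |g(Q) - g(Q0)| < eps2."""
    return abs(gQ - gQ0) < eps2

def static_between_rows(gQ1, gQ0, eps3=1/30):
    """Second step: between rows, compare the previous row's reference
    g(Q1) with the new reference g(Q0); eps3 is larger than eps2."""
    return abs(gQ1 - gQ0) < eps3
```

If either test fails, the target has moved: the breakpoint (learning entry angles) is saved and calibration suspends until the target is stationary at the image center again.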
The detailed program flowchart of steps 11) through 18) is shown in Figure 9.
11) Judge whether this is the first calibration.
Variable definitions and assignments:
Standard horizontal angle θ0: the horizontal angle of the camera's image-plane normal vector relative to the camera normal vector at its initial position when calibration begins;
Standard vertical angle η0: the vertical angle of the camera's image-plane normal vector relative to the initial-position camera normal vector when calibration begins;
Δθ, Δη, θ, η are defined as above;
The relation between θ, Δθ, and θ0 can be expressed as θ = θ0 + Δθ;
The relation between η, Δη, and η0 can be expressed as η = η0 + Δη.
Judgment method:
Judge whether the current learning entry angles (Δθ1, Δη1) equal (θε, ηε), i.e. whether (Δθ1 = θε) ∧ (Δη1 = ηε) holds.
Judgment result:
If it holds, the algorithm continues to the next step. The equality indicates that the learning entry angles still hold their initialization values, i.e. the self-calibration process has not previously been interrupted and the algorithm has stored no learning entry angles from a calibration interruption; the next task is to begin establishing the calibration information table.
If it does not hold, jump to step 13). The inequality indicates that the calibration process was interrupted and that the learning entry angles corresponding to the last calibration interruption have been stored.
12) In the camera coordinate system, rotate the camera through (θε, ηε) in the horizontal and vertical directions respectively, where θε and ηε are the threshold angles, and compute g(Q) after the rotation; define the target static reference point g(Q0) and the alternate target static reference point g(Q1), assigning g(Q0) = g(Q) and g(Q1) = 0; then jump to step 14). g(Q0) is used to judge whether the target is stationary while any one row is being calibrated in the horizontal direction; g(Q1) is used to judge whether the target is stationary while the camera, having finished calibrating one row, rotates to prepare for calibrating the next row.
Concrete method: gradually change the PWM duty cycle sent to the vertical-motion stepper motor so that the camera rotates continuously. When the difference between the vertical angle η of the camera's image-plane normal vector relative to its initial-position normal vector and the standard vertical angle η0 equals the preset vertical learning threshold angle |ηε|, the camera stops; compute g(Q) at this moment, and set the target static reference point g(Q0) = g(Q) and the alternate target static reference point g(Q1) = 0. Then gradually change the PWM duty cycle sent to the horizontal-motion stepper motor so that the camera keeps turning left. When the difference between the horizontal angle θ of the camera's image-plane normal vector relative to its initial-position normal vector and the standard horizontal angle θ0 equals the preset horizontal learning threshold angle |θε|, begin establishing the calibration information table corresponding to the current g(Q) value, and jump to step 14).
The reason for first rotating through the threshold angles before establishing the calibration information table is that the angle information output via the calibration information table is coarse-grained, whereas the angles the algorithm must output when the target approximately coincides with the image center point are high-precision angle information. Therefore, when the target is about to coincide with the image center point, the calibration information table is no longer used to guide the motor rotation toward high-precision angle information. Consequently the algorithm does not need camera calibration information within the threshold-angle range, i.e. the shaded region in the calibration diagram of Figure 8.
13) Skip the portion of the self-calibration process before the stored learning entry angles, and directly restore the breakpoint progress of the previous self-calibration:
In the camera coordinate system, first gradually change the PWM duty cycle sent to the vertical-motion stepper motor so that the camera rotates continuously; when the angle turned downward by the camera equals the vertical learning entry angle Δη1, the camera stops. Compute g(Q) at this moment, set the alternate target static reference point g(Q1) = g(Q0), and set the target static reference point g(Q0) = g(Q).
Then, in the camera coordinate system, gradually change the PWM duty cycle sent to the horizontal-motion stepper motor so that the camera keeps turning left, until the angle turned left by the camera equals the horizontal learning entry angle Δθ1. Continue to the next step.
14) If g(Q1) equals zero, compute the f(P) value at this moment, record the current Δθ = θε and Δη = ηε into the calibration information table at the position corresponding to the current f(P) and g(Q) values, and continue to the next step.
If g(Q1) is not equal to zero, judge whether the absolute difference between the alternate target static reference point g(Q1) and the target static reference point g(Q0) is less than the threshold, i.e. whether |g(Q1) − g(Q0)| < ε3 holds. If the inequality holds, compute the f(P) value at this moment and record the current Δθ and Δη into the calibration information table at the position corresponding to the current f(P) and g(Q) values; the algorithm continues to the next step. If the inequality does not hold, record the current Δθ and Δη, set the learning entry angles Δθ1 = Δθ and Δη1 = Δη, and jump to step 2).
15) In the camera coordinate system, turn the camera left through the horizontal angle corresponding to one PWM duty-cycle unit change step μ0, and judge whether the target geometric center point has left the right side of the camera image. If it has left, go to step 17); if not, continue the algorithm.
16) Compute the g(Q) value at this moment and judge whether |g(Q) − g(Q0)| < ε2 holds. If it holds, compute the f(P) value at this moment, record this f(P) and the Δθ and Δη values corresponding to g(Q) into the calibration information table, and return to step 15). Otherwise, record the current Δθ and Δη, set the learning entry angles Δθ1 = Δθ and Δη1 = Δη, and jump to step 2).
In step 16), only when it is confirmed that the newly obtained calibration information produces no excessive jump in the vertical direction, i.e. |g(Q) − g(Q0)| < ε2, can it be recorded into the calibration information table before returning to obtain further calibration information. Otherwise the target is judged to have moved, and the self-calibration process must stop after the breakpoint is saved.
17) In the camera coordinate system, turn the camera right until Δθ = |θε|, then rotate the camera through one more vertical angle corresponding to one PWM duty-cycle unit change step μ0. Judge whether the target geometric center point has left the top of the camera image. If it has left, the establishment of the calibration information table is finished; go to step 18). If not, compute the g(Q) at this moment, update the alternate target static reference point g(Q1) = g(Q0), then update the target static reference point g(Q0) = g(Q), and go to step 14).
The CamShift-based tracking algorithm used to assist the angle-information calibration learning algorithm can provide the position of the target's center point in the camera image and the size the target occupies in the camera image. With these two pieces of information, it can be determined whether the geometric center point of the target has left the camera image. Concretely: when f(P) > ε4 or g(Q) > ε5, the target is judged to have left the camera image; in reality, the target center point is at this moment close to the image edge. ε4 and ε5 are thresholds; in the present embodiment, ε4 is taken slightly smaller than the value of |f(P)| at the horizontal edge of the image, and ε5 slightly smaller than the value of |g(Q)| at the vertical edge of the image.
18) After the calibration information table has been completely established, clear the Marker_learning flag to 0 and go to step 2).
Only when it is confirmed that the self-calibration process of the angle-information calibration learning algorithm has been completed does the algorithm go to step 18) and clear the learning flag Marker_learning to 0. After Marker_learning has been cleared, the algorithm no longer enters the self-calibration state, regardless of whether it has stored a self-calibration breakpoint.
The calibration information table
As shown in Figure 10, the calibration information table mentioned in the angle-information calibration learning algorithm is a two-dimensional data table recording camera imaging calibration information. The table is indexed by the |f(P)| and |g(Q)| of an arbitrary point in the camera image coordinate system, and stores the horizontal and vertical angles through which the camera must turn, in the camera coordinate system, to move the target from the origin to that point. That is, each position in the table records one pair of |Δθ| and |Δη| values corresponding to one pair of |f(P)| and |g(Q)| values. The data in each row correspond to the same |f(P)| value, with |g(Q)| increasing gradually from the leftmost to the rightmost column; the data in each column correspond to the same |g(Q)| value, with |f(P)| increasing gradually from the top to the bottom of the column. For example, in Figure 10, corresponding to |f(P)| = 0.25 and |g(Q)| = 0.33, the table records one pair of Δθ, Δη data: Δθ = 15°, Δη = 24.3°.
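A minimal data-structure sketch of such a table, assuming the (|f(P)|, |g(Q)|) pairs are quantized to a fixed grid step (the step value and all names are assumptions for illustration, not the patent's):

```python
# Quantization step for the (|f(P)|, |g(Q)|) grid -- an assumed value.
STEP = 0.01

def key(fP_abs, gQ_abs):
    return round(fP_abs / STEP), round(gQ_abs / STEP)

table = {}  # maps quantized (|f(P)|, |g(Q)|) -> (|dtheta|, |deta|)

def record(fP_abs, gQ_abs, dtheta_abs, deta_abs):
    table[key(fP_abs, gQ_abs)] = (dtheta_abs, deta_abs)

def lookup(fP_abs, gQ_abs):
    """Return (|dtheta|, |deta|) if calibrated, else None: an entry may be
    missing because self-calibration can be interrupted at a breakpoint."""
    return table.get(key(fP_abs, gQ_abs))

record(0.25, 0.33, 15.0, 24.3)   # the example cell from Figure 10
```

Returning None for a missing cell mirrors the per-entry existence check in step 4), which is needed precisely because the table can be partially filled.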
Because the distortion of the camera lens used is approximately symmetric, the calibration information table records only the calibration information of the quadrant-I image of the camera image. The calibration information for images in the other quadrants can be obtained through the conversion rules illustrated in the calibration information conversion table.
The calibration information conversion table
The calibration information conversion table assists the calibration information table in its work.
First the camera image is divided into 4 regions according to the quadrant in which each pixel lies, where imax is the number of rows of the camera image, jmax is the number of columns, i denotes row i of the camera image, j denotes column j, and I_ij denotes the pixel at row i, column j of the camera image.
The calibration information table records only the calibration information of the quadrant-I camera image; the calibration information of the other quadrants can be computed through the rules of the following calibration information conversion table. In the table, the second column gives the test that identifies the quadrant, and the third and fourth columns give the angle and direction through which the camera should rotate in the camera coordinate system when the target lies in that quadrant.
Quadrant I: IF (f(P) = |f(P)|) ∧ (g(Q) = |g(Q)|), then Δθ = |Δθ|, Δη = |Δη|
Quadrant II: IF (−f(P) = |f(P)|) ∧ (g(Q) = |g(Q)|), then Δθ = −|Δθ|, Δη = |Δη|
Quadrant III: IF (−f(P) = |f(P)|) ∧ (−g(Q) = |g(Q)|), then Δθ = −|Δθ|, Δη = −|Δη|
Quadrant IV: IF (f(P) = |f(P)|) ∧ (−g(Q) = |g(Q)|), then Δθ = |Δθ|, Δη = −|Δη|
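The table lookup and quadrant sign conversion described above can be sketched in Python as follows. This is an illustrative reconstruction rather than code from the patent: the dictionary representation of the calibration information table, the key quantization, and the function name are assumptions; the signs follow the rule that Δθ takes the sign of f(P) and Δη takes the sign of g(Q).

```python
def lookup_rotation(f_p, g_q, table):
    """Return the signed (Δθ, Δη) that re-centres the target.

    f_p, g_q -- signed horizontal/vertical offsets of the target from the
                image centre, as fractions of the main diagonal length.
    table    -- dict mapping (|f(P)|, |g(Q)|) keys to (|Δθ|, |Δη|) values,
                i.e. the quadrant-I calibration information table.
    """
    key = (round(abs(f_p), 2), round(abs(g_q), 2))  # quantised quadrant-I key
    if key not in table:
        return None              # no entry: fall back to incremental rotation
    d_theta, d_eta = table[key]
    if f_p < 0:                  # quadrants II and III: negate Δθ
        d_theta = -d_theta
    if g_q < 0:                  # quadrants III and IV: negate Δη
        d_eta = -d_eta
    return d_theta, d_eta

# Worked example from Figure 10: |f(P)| = 0.25, |g(Q)| = 0.33 → (15°, 24.3°)
calib = {(0.25, 0.33): (15.0, 24.3)}
print(lookup_rotation(0.25, 0.33, calib))    # quadrant I
print(lookup_rotation(-0.25, -0.33, calib))  # quadrant III
```

With only the quadrant-I entry from the Figure 10 example stored, the same entry serves all four quadrants through the sign rules.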
The necessity of self-calibration during execution of the angle-information calibration learning algorithm is as follows: 1. If the algorithm does not self-calibrate during operation, it can output angle information only when the target approximately coincides with the image center point; the angle output would then not be real-time, which hinders the use of subsequent algorithms. 2. If the algorithm does not self-calibrate during operation, its ability to track the target declines and the target is lost more easily.
3. Depth information and world coordinate extraction algorithm
The steps of the depth information and world coordinate extraction algorithm are as follows:
1) Calibrate the vision system of the chameleon-imitating biological vision coordinate extraction algorithm manually. After manual calibration, the distance d between the geometric center points of the two fixed-focus wide-angle cameras is a known quantity. As shown in Figure 4, to specify the computation, assume the angle between the ray along each camera's image-plane normal vector and the line connecting the geometric center points of the two cameras is acute. Let d1 be the distance between the geometric center point of the left camera and the projection of the target onto the line connecting the two cameras' geometric center points, and let d2 be the corresponding distance for the right camera; then d1 + d2 = d.
2) Receive the horizontal angles θ1, θ2 and vertical angles η1, η2 of the two cameras' current image-plane normal vectors, relative to their respective initial-position camera normal vectors, from the angle-information calibration learning algorithm. Among the angle information output by that algorithm, some is obtained by querying the calibration information table, and some is obtained by back-calculation from the control quantities that were issued. The former has lower precision but higher real-time performance; the latter has higher precision but lower real-time performance. According to the specific situation, the depth information and world coordinate extraction algorithm may mask one of the two types of angle information or accept both.
3) From the known scalar d and the angle information θ1, θ2, compute the depth of the target, i.e., the X coordinate of the target relative to the midpoint of the line between the two cameras' geometric center points.
For Figure 4, the depth of the target can be expressed as:

X = d1/tan θ1 = d2/tan θ2

Deriving: d1 = (tan θ1/tan θ2) · d2

Substituting into d1 + d2 = d gives: (tan θ1/tan θ2 + 1) · d2 = d

Rearranging: d2 = d/(tan θ1/tan θ2 + 1), d1 = d − d2 = d − d/(tan θ1/tan θ2 + 1)

Accordingly: X = d2/tan θ2 = d1/tan θ1 = d/((tan θ1/tan θ2 + 1) · tan θ2) = d/(tan θ1 + tan θ2)

That is, the depth information, the X coordinate, is: X = d/(tan θ1 + tan θ2).
4) Further compute Y = d/2 − d1 = (d2 − d1)/2, the Y coordinate in the system's world coordinate system. In step 1), only the case shown in Figure 4 was considered, in which the angle between the ray along each camera's image-plane normal vector and the line between the two cameras' geometric center points is acute; if the angle is obtuse, the computation is similar to the above.
The Z coordinate of the world coordinate system is obtained by a method similar to that for the Y coordinate. As shown in Figure 5, after the depth X has been obtained, it is combined with the vertical angle η1 of the camera's image-plane normal vector relative to the initial-position camera normal vector, obtained from the angle-information calibration learning algorithm, and the formula Z = tan(η1) · X yields the Z coordinate. At this point, the establishment of the target's world coordinates relative to the vision system coordinate system is complete.
5) Output the world coordinates of the target relative to the midpoint of the line between the two cameras' geometric center points, and exit the algorithm.
The flowchart of the algorithm is shown in Figure 11.
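The computation in steps 2) to 4) can be sketched numerically as follows, assuming angles are measured from each camera's initial (baseline-perpendicular) orientation; the function name and the sign convention chosen for Y are illustrative assumptions, not part of the patent text.

```python
import math

def world_coordinates(d, theta1, theta2, eta1):
    """d: baseline between the two camera centre points;
    theta1, theta2: horizontal angles of the left/right image-plane normals
    from their initial (baseline-perpendicular) positions, in radians;
    eta1: vertical angle of the left camera, in radians."""
    X = d / (math.tan(theta1) + math.tan(theta2))  # depth: X = d/(tan θ1 + tan θ2)
    d1 = X * math.tan(theta1)   # projection offset measured from the left camera
    Y = d / 2.0 - d1            # signed offset from the baseline midpoint
    Z = math.tan(eta1) * X      # height: Z = tan(η1) · X
    return X, Y, Z

# Symmetric check: d = 2, both cameras panned 45 degrees inward, no tilt.
# tan 45° = 1, so X = 2/(1+1) = 1 and the target lies on the midline (Y ≈ 0).
print(world_coordinates(2.0, math.pi / 4, math.pi / 4, 0.0))
```

The symmetric case gives a quick sanity check of the closed-form depth formula before it is fed real pan/tilt angles.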
The reasons for separating the depth information and world coordinate extraction algorithm from the angle-information calibration learning algorithm are: 1. The angle-information calibration learning algorithm must update the target's angle information without pause, so its real-time requirements are high, and keeping the algorithm concise is therefore very important. 2. The angle information output by the angle-information calibration learning algorithm is provided not only to the depth information and world coordinate extraction algorithm; it can also be provided to other algorithms.
4. Monocular tracking algorithm based on the CamShift algorithm
The angle-information calibration learning algorithm uses a monocular tracking algorithm based on the CamShift algorithm. Its main purpose is to provide the angle-information calibration learning algorithm, in real time, with the size of the target in the camera image and the position of the target's geometric center point in the camera image.
A conventional multi-camera vision system tracking algorithm must take into account the matching problem among multiple cameras during tracking; its computational load is large and the algorithm is complex, and since it lacks the capability of active vision, its tracking range is narrow and target loss occurs easily. The chameleon-imitating biological vision coordinate extraction algorithm, being based on active vision, keeps the tracking algorithm of each camera independent, effectively reducing the complexity and the computational load of the algorithm.
The monocular vision tracking algorithm based on the CamShift algorithm belongs to the prior art and may be summarized as follows:
Convert the image returned by the camera to the HSV color space, extract the hue component, and build a color histogram and a color probability distribution map. Using the color probability distribution map, apply the CamShift algorithm to track the target. Establish an appropriate image search window and extract the coordinates of the search window center point. Compute the control quantity and send it to the stepper motors, so that the camera keeps the tracked target at the center point of the image at all times; the specific control method is the one described in the angle-information calibration learning algorithm.

Claims (4)

1. A chameleon-imitating (Lacertilia, Chamaeleonidae) biological vision coordinate extraction algorithm, based on a physical platform composed of two fixed-focus wide-angle cameras and stepper motors, characterized by comprising the following steps:
1) Make the two fixed-focus wide-angle cameras search for the target separately; after camera A finds the target, launch the monocular vision tracking algorithm based on the CamShift algorithm and keep tracking the target, while returning in real time the horizontal angle and vertical angle of camera A's current image-plane normal vector relative to camera A's initial-position image-plane normal vector; camera A is the camera that first finds the target, and camera B is the other camera; the initial position is: the position in which the image-plane normal vector of each fixed-focus wide-angle camera is parallel to the horizontal plane and perpendicular to the line connecting the two cameras' geometric center points;
2) Camera B follows camera A in searching for the target; after camera B finds the target, it also launches the CamShift-based monocular vision tracking algorithm and keeps tracking the target;
3) After both cameras A and B have locked onto the target, call the angle-information calibration learning algorithm to compute in real time, for each camera, the horizontal angle and vertical angle of its image-plane normal vector relative to its initial-position image-plane normal vector;
4) From the results obtained in step 3), use the depth information and world coordinate extraction algorithm to compute and output in real time the depth information of the target and the world coordinates of the target in the world coordinate system; the world coordinate system is: a right-handed coordinate system whose origin is the midpoint of the line connecting the two cameras' geometric center points, whose positive x axis is parallel to the horizontal plane, perpendicular to that line, and directed along the direction of visual tracking, and whose positive z axis is directed upward, perpendicular to the horizontal plane;
5) If the target is not lost, return to step 3); if the target is lost, return to step 1).
2. The chameleon-imitating biological vision coordinate extraction algorithm according to claim 1, characterized in that the angle-information calibration learning algorithm comprises the following steps:
2.1) Initialize the learning entry angle (Δθ1, Δη1);
the learning entry angle refers to the horizontal angle and vertical angle through which the camera rotates during the calibration process as the target moves in the camera image from the origin to its current position;
2.2) Compute the horizontal distance P and vertical distance Q between the camera image center point and the target in the camera image, expressed respectively as the percentages f(P) and g(Q) of the camera image main diagonal length;
2.3) Judge whether the target is at the camera image center point:
if (|f(P)| < ε) ∧ (|g(Q)| < ε), with threshold ε > 0, the target is at the center point; jump to step 2.7);
if (|f(P)| > ε) ∨ (|g(Q)| > ε), the target is not at the center point; jump to step 2.4);
2.4) Look up the calibration information table:
if angle information corresponding to |f(P)| and |g(Q)| exists in the calibration information table, the algorithm jumps to step 2.8);
if angle information corresponding to |f(P)| and |g(Q)| does not exist in the calibration information table, jump to step 2.5);
2.5) Rotate the camera step by step so that the target coincides with the image center point in the camera image;
2.6) When the target coincides with the camera image center point, jump to step 2.7); otherwise return to step 2.5);
2.7) When the target coincides with the image center point, read the current horizontal angle θ and vertical angle η of the camera's image-plane normal vector relative to this camera's initial-position image-plane normal vector, output them, and go to step 2.10);
2.8) If angle information corresponding to |f(P)| and |g(Q)| exists in the calibration information table, the camera rotates through this angle so that the target's geometric center point coincides with the image center point in the camera image; the direction of rotation is determined according to the calibration information conversion table;
2.9) Output, in the world coordinate system, the horizontal angle and vertical angle of the target relative to the initial-position camera normal vector; delay for a time T, where T is a preset delay amount, then return to step 2.2);
2.10) Judge, through a preset learning flag bit, whether learning is finished:
if learning is finished, go to step 2.2);
if learning is not finished, the algorithm continues and calibration begins;
2.11) Judge whether this is the first calibration:
if the learning entry angle equals its initial value, this is the first calibration, and the algorithm continues;
if the learning entry angle does not equal its initial value, this is not the first calibration; jump to step 2.13);
2.12) Rotate the camera through (θε, ηε) in the horizontal and vertical directions respectively, where θε, ηε are the threshold angles; jump to step 2.14);
2.13) Rotate the camera through the learning entry angle recorded when the last calibration was interrupted; jump to step 2.14);
2.14) If this is the initial calibration, write into the calibration information table the threshold angles (θε, ηε) through which the camera rotated in the horizontal and vertical directions, together with the corresponding values |f(P)| and |g(Q)| of the target geometric center point in the camera image; if this is not the initial calibration, first judge whether the target has moved before entering this step: if the target has moved, update the learning entry angle and jump to step 2.2); if the target has not moved, update the calibration information table;
2.15) Keep the vertical angle of the camera constant and perform row calibration: each time the camera turns through one unit horizontal angle in the horizontal direction, judge whether the target geometric center point has left the camera image; if it has left, go to step 2.17); if it has not left, continue the algorithm;
2.16) After the camera has turned through one unit horizontal angle in the horizontal direction, judge whether the target has moved: if it has not moved, update the calibration information table and jump to step 2.15); if it has moved, update the learning entry angle and jump to step 2.2);
2.17) Change the vertical angle of the camera and carry out the horizontal calibration of another row; if the target geometric center point leaves the camera image, the building of the calibration information table is finished; go to step 2.18); if it does not leave, go to step 2.14);
2.18) After the calibration information table has been built, clear the learning flag bit and go to step 2.2);
The calibration information table is a two-dimensional table that records camera imaging calibration information; the table records target position information, together with the horizontal angle and vertical angle through which the camera's image-plane normal vector turns as the target moves in the camera image from the origin to the target position; the target position information is expressed by the horizontal distance P and vertical distance Q between the camera image center point and the target, as percentages f(P) and g(Q) of the camera image main diagonal length.
3. The angle-information calibration learning algorithm according to claim 2, characterized in that the calibration information conversion table can be described as follows:
First the camera image is divided into four regions according to the quadrant in which each pixel lies:
Figure FDA0000128347630000031
where imax is the number of rows of the camera image; jmax is the number of columns of the camera image; i denotes row i of the camera image; j denotes column j of the camera image; I_ij denotes the pixel in row i, column j of the camera image;
The calibration information table records calibration information only for quadrant I of the camera image; calibration information for the other quadrants can be computed through the rules of the following calibration information conversion table:
Quadrant I: IF (f(P) = |f(P)|) ∧ (g(Q) = |g(Q)|), then Δθ = |Δθ|, Δη = |Δη|
Quadrant II: IF (−f(P) = |f(P)|) ∧ (g(Q) = |g(Q)|), then Δθ = −|Δθ|, Δη = |Δη|
Quadrant III: IF (−f(P) = |f(P)|) ∧ (−g(Q) = |g(Q)|), then Δθ = −|Δθ|, Δη = −|Δη|
Quadrant IV: IF (f(P) = |f(P)|) ∧ (−g(Q) = |g(Q)|), then Δθ = |Δθ|, Δη = −|Δη|
4. The chameleon-imitating biological vision coordinate extraction algorithm according to claim 1, characterized in that the depth information and world coordinate extraction algorithm comprises the following steps:
according to the distance between the geometric center points of the two fixed-focus wide-angle cameras, and according to the horizontal angles and vertical angles, obtained from the angle-information calibration learning algorithm, of the two cameras' current image-plane normal vectors relative to their respective initial-position camera normal vectors, use trigonometric functions to compute the world coordinates of the target.
CN201110460701.XA 2011-12-31 2011-12-31 Coordinate extracting algorithm of lacertilian-imitating suborder chamaeleonidae biological vision Active CN102682445B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110460701.XA CN102682445B (en) 2011-12-31 2011-12-31 Coordinate extracting algorithm of lacertilian-imitating suborder chamaeleonidae biological vision


Publications (2)

Publication Number Publication Date
CN102682445A true CN102682445A (en) 2012-09-19
CN102682445B CN102682445B (en) 2014-12-03

Family

ID=46814312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110460701.XA Active CN102682445B (en) 2011-12-31 2011-12-31 Coordinate extracting algorithm of lacertilian-imitating suborder chamaeleonidae biological vision

Country Status (1)

Country Link
CN (1) CN102682445B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334276A (en) * 2007-06-27 2008-12-31 中国科学院自动化研究所 Visual sense measurement method and device
US20090174701A1 (en) * 2006-07-31 2009-07-09 Cotter Tim S System and method for performing motion capture and image reconstruction
CN102034092A (en) * 2010-12-03 2011-04-27 北京航空航天大学 Active compound binocular rapid target searching and capturing system based on independent multiple-degree-of-freedom vision modules


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NAIGONG YU et al.: "Study on mobile robot mapping based on binocular vision and Voronoi diagram", Electrical and Control Engineering (ICECE), 2011 International Conference on *
YU Naigong et al.: "High-precision real-time feature extraction for a manipulator visual servoing system", Control and Decision *
YU Hongshan: "Research on key technologies of active stereo vision and their applications", Wanfang Dissertation Database *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104007761A (en) * 2014-04-30 2014-08-27 宁波韦尔德斯凯勒智能科技有限公司 Visual servo robot tracking control method and device based on pose errors
CN104007761B (en) * 2014-04-30 2016-05-11 宁波韦尔德斯凯勒智能科技有限公司 Tracking control method and the device of the Visual Servo Robot based on position and attitude error
CN108377342A (en) * 2018-05-22 2018-08-07 Oppo广东移动通信有限公司 double-camera photographing method, device, storage medium and terminal
CN109976335A (en) * 2019-02-27 2019-07-05 武汉大学 A kind of traceable Portable stereoscopic live streaming intelligent robot and its control method

Also Published As

Publication number Publication date
CN102682445B (en) 2014-12-03


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant