CN101458083B - Structure light vision navigation system and method - Google Patents


Info

Publication number
CN101458083B
CN101458083B (application CN2007101943385A)
Authority
CN
China
Prior art keywords
robot
particular path
path pattern
navigation
image
Prior art date
Legal status
Active
Application number
CN2007101943385A
Other languages
Chinese (zh)
Other versions
CN101458083A (en)
Inventor
林彦君
陈耀俊
吴兆棋
Current Assignee
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date
Filing date
Publication date
Application filed by Industrial Technology Research Institute ITRI
Priority to CN2007101943385A
Publication of CN101458083A
Application granted
Publication of CN101458083B

Abstract

The invention provides a structured light vision navigation system and method. The vision navigation system comprises at least one projector, which generates a specific path pattern formed of structured light, and a visual server. The pattern formed by the structured light assists the system in detecting obstacles and, at the same time, provides the specific path pattern that a robot must follow during navigation. In the structured light vision navigation method, when an obstacle is detected, the visual server plans a virtual route and sends motion control commands to the robot so that the robot follows the virtual route. By navigating the robot with structured light, the invention increases the navigation precision of the robot and reduces the heavy computational load on the visual server.

Description

Structure light vision navigation system and method
Technical field
The present invention relates to a vision navigation system and method, and more particularly to a structured light vision navigation system and method that uses structured light for robot navigation.
Background technology
With the progress of detectors and control theory, intelligent robot systems have in recent years gradually moved from factory automation into various service applications, opening up the new field of autonomous robot services. Research and development of service robots was formerly carried out mainly in academic institutions, but it now receives increasing attention from industry. The main reasons for the emergence of indoor service robots are: 1. people want to be freed from troublesome and tedious repetitive work, such as housework or caring for the sick; 2. the cost of electromechanical components continues to decline. Industrial analysts generally agree that having service robots take over household chores is a promising technical field that will also bring enormous business opportunities.
In a robot system, autonomous navigation is a core technology, comprising wide-area localization, obstacle detection, path planning, and path tracking. The biggest bottleneck in current navigation technology is that the robot often faces an unpredictable or dynamically changing environment during navigation, while the data obtained from its detectors are frequently incomplete, discontinuous, and unreliable, so the mobile robot's ability to perceive its environment remains weak. Autonomous robot navigation is mainly divided into two classes: vision navigation and non-vision navigation. The former extracts data in units of an entire "surface" through a visual detector; its sensing range is wide and it can obtain most of the environmental information. The latter obtains "point" or "line" detection signals, and the characteristics of each detector affect the navigation result. For example, an ultrasonic detector has a wide detection angle but low resolution and cannot obtain the concrete shape of an obstacle; an infrared detector is easily disturbed by the surface properties of an obstacle. Fusing multiple detectors can remedy the shortcomings of a single detector, but it also increases system complexity and power consumption.
Among existing vision navigation approaches, the robot visual guidance method disclosed in Chinese patent CN1569558A navigates by image features. It exploits the advantage of image sensing, which can directly obtain most of the environmental information; for navigation, compared with other detectors such as laser, infrared, or ultrasonic sensors, it achieves better results in unstructured spaces. However, that method consumes considerable computing resources to extract characteristic objects in the environment, such as doors, pillars, corners, or other artificial landmarks, and then compares the scene image near each recognized landmark with the original map image to confirm the robot's position, before finally determining the direction and speed at which the robot should move. For many indoor applications, such as cleaning robots, this kind of navigation actually reduces overall navigation efficiency because of the image processing involved, and it offers no concrete benefit for obstacle avoidance.
Among existing non-vision navigation approaches, Chinese patent No. 1925988 discloses a navigation system for a robot and a flooring material that provides absolute coordinates for the system. The navigation system comprises two-dimensional barcodes, a barcode reader, and a control module. The two-dimensional barcodes are formed on the floor with a predetermined size and at predetermined intervals, each carrying a unique coordinate value. The barcode reader is installed on the robot to read the coordinate values represented by the barcodes on the floor, and the control module performs subsequent motion control according to those values. However, the flooring material required by that method is special: it is costly, navigation easily fails when the floor is damaged, and severe damage may even strand the robot and cause a fault.
Although the navigation methods of the above two patents solve part of the navigation problem, the following issues remain to be solved before they can be applied to indoor service robots:
1. They cannot be used in the dark; the tendency of ordinary visual servoing to fail in low-illumination environments must be overcome.
2. Determining obstacle positions and planning paths is computationally expensive, and when visual servoing takes too large a share of the computation, it seriously degrades the real-time performance of servo control.
3. When visual servoing is used over the entire path, the tracking accuracy for a robot far away in the image is too low.
4. An autonomous mobile robot roaming on its own repeats path segments, wasting time and power.
5. A visual servo system often has to be built into the environment, which may affect the original layout, and the added system cost must also be taken into account.
Therefore, there is a need in the art for a vision navigation system and method that solve the above five problems, so that indoor service robots can reach maturity more quickly and provide people with more reliable service.
Summary of the invention
An object of the present invention is to provide a structured light vision navigation system and method, wherein the vision navigation system comprises at least one projector that generates a specific path pattern formed of structured light, and a visual server. Besides assisting the system in obstacle detection, the pattern formed by the structured light also provides the specific path pattern that the robot must follow during navigation. In the structured light vision navigation method, when an obstacle is detected, the visual server plans a virtual route and sends motion control commands to the robot so that it follows the virtual route. The present invention navigates the robot by structured light, thereby improving the robot's navigation precision and reducing the heavy computational load on the visual server.
A further object of the present invention is to provide a structured light vision navigation system comprising an image extractor, a projector, a visual server, and a robot. The projector is installed in the environment and projects the specific path pattern into the space where the robot needs a navigation path. The image extractor is installed in the environment; it captures images of this space and transmits them to the visual server for processing. The visual server recognizes the robot's position from the image, confirms the obstacle's position from the deformation of the specific path pattern projected into the space, and re-plans the path near the obstacle as a virtual route. After receiving a signal from the visual server, the robot begins to follow the specific path pattern while performing its task, and the image extractor continues to capture images. When the robot follows the path to the vicinity of the obstacle, the visual server transmits a navigation instruction to the robot, causing it to adjust its direction and speed and switch to the previously computed virtual route; after the virtual route ends, the robot resumes following the specific path pattern actually projected into the space.
A further object of the present invention is to provide a structured light vision navigation method, comprising the steps of: projecting, via a projector, a specific path pattern formed of structured light into a navigation space; capturing an image of the navigation space via an image extractor; detecting obstacles, wherein if there is no obstacle, the robot follows the specific path pattern, and if there is an obstacle, the part of the specific path pattern deformed by the obstacle is erased, the boundary of a virtual obstacle is set in the image, and a virtual route is separately planned; judging whether the target point has been reached, wherein if so, navigation is finished, and if not, it is then judged whether the previously planned virtual route has been encountered; if the virtual route is encountered, following the virtual route, and if not, the robot continues to follow the specific path pattern, and the judging steps of "whether the target point is reached" and "whether the previously planned virtual route is encountered" are repeated until the robot reaches the target point.
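The decision flow of the method just summarized can be sketched as a small decision function. This is an illustrative sketch only, not code from the patent; the flag and action names are hypothetical.

```python
def navigation_step(at_goal, obstacle_detected, on_virtual_route):
    """One decision step of the method above; returns the next action.

    All names are illustrative -- the patent describes the flow, not code.
    """
    if at_goal:
        return "finished"                  # target point reached, navigation done
    if obstacle_detected:
        return "plan_virtual_route"        # erase deformed lines, set boundary, plan detour
    if on_virtual_route:
        return "follow_virtual_route"      # advance along the previously planned detour
    return "follow_projected_pattern"      # follow the structured-light path pattern
```

Calling the function once per camera frame would reproduce the loop: the robot follows the projected pattern until an obstacle deforms it, detours along the virtual route, then returns to the pattern.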
In order to make the above and other objects, features, and advantages of the present invention more apparent, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Description of drawings
Fig. 1 shows a structured light vision navigation system according to the first embodiment of the invention.
Fig. 2 shows images captured by the image extractor before and after the visual server processes an obstacle.
Fig. 3 is a flowchart of the structured light vision navigation method according to a preferred embodiment of the invention.
Fig. 4 shows a structured light vision navigation system according to the second embodiment of the invention.
Fig. 5 shows a structured light vision navigation system according to the third embodiment of the invention.
Fig. 6 shows a structured light vision navigation system according to the fourth embodiment of the invention.
The main element symbol description
1: image extractor
2,2a, 2b, 2c, 2d, 2e, 2f: projector
3: visual server
4: robot
5: path pattern
5z: virtual route
6: navigation space
7: obstacle
8: reflector
Embodiment
" robot " at first, herein is defined as to carry out a certain specific any mobile platform.Therefore, no matter this platform is with wheel type of drive (or claiming from mule carriage) or step move mode, then all claim robot.Then, please refer to Fig. 1, first preferred embodiment of the present invention as shown in Figure 1, this structure light vision navigation system comprises an image extractor 1, for example digital camera etc., a projector 2, a visual server 3 and robot 4.The structured light projection that this projector 2 will form specific path pattern 5 needs in the navigation space 6 of guidance path and this structured light can be visible light or invisible light in robot 4; Simultaneously, the generation of this structured light can be and continues to produce, and intermittence produces or only produces once.Robot 4 is by fluorescence detector or visual detector is to detect this particular path pattern, this robot is had follow the ability of these particular path pattern 5 walkings.Transferring to visual server 3 after this image extractor 1 is extracted this navigation space image handles.This visual server can be computer or other embedded arithmetic units.Fig. 
2 is for the image extractor picked-up cyclogram (as scheming shown in the left side) before handling barrier in the visual server and handle the preceding image extractor picked-up cyclogram (as scheming shown in the right side) of barrier.This visual server 3 can pick out the position of robot 4 from image, confirm the position of barrier 7 simultaneously by the distortion of this particular path pattern 5 of projection in navigation space 6, again near the path the barrier 7 is planned to virtual route 5z (as shown in Figure 2) again.Robot 4 receives and begins to follow this particular path pattern 5 to advance behind the signal that this visual server 3 sends and execute the task simultaneously, as sweeps work etc., and this moment, image extractor 1 still continued to extract image.As shown in Figure 2, when robot 4 follows this particular path pattern 5 to run near the barrier 7, image servomechanism 3 will transmit navigation instruction and give robot 4, make it adjust direction and speed according to instruction so that advance according to the virtual route 5z of previous computing gained, after treating that virtual route 5z finishes, robot 4 continues to follow this particular path pattern 5 to advance.
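The deformation test in this embodiment — comparing the captured pattern against the undistorted reference projection — can be illustrated with a toy sketch. The pixel-set representation, the threshold value, and the function name are assumptions for illustration, not details from the patent.

```python
def pattern_deformed(reference, captured, threshold=0.05):
    """Report an obstacle when too many structured-light pixels have moved
    or disappeared relative to the undistorted reference projection.

    Patterns are sets of (row, col) pixel coordinates of the path lines;
    the threshold is an invented tolerance, not a value from the patent.
    """
    changed = len(reference ^ captured)           # symmetric difference: pixels that differ
    return changed / max(len(reference), 1) > threshold

# A straight projected line, and the same line with a segment displaced
# upward where an obstacle intercepts the projection.
ref = {(2, c) for c in range(10)}
cap = {(2, c) for c in range(10) if c not in (4, 5, 6)} | {(1, 4), (1, 5), (1, 6)}
```

Here `pattern_deformed(ref, cap)` is true while `pattern_deformed(ref, ref)` is false; a real system would of course operate on camera images rather than ideal pixel sets.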
The flowchart of the vision navigation method performed by the above structured light vision navigation system of the present invention is shown in Fig. 3. First, in step S301, a specific path pattern 5 formed of structured light is projected into the navigation space 6 via the projector 2. Then, in step S302, the image extractor 1 captures an image of the navigation space and transmits it to the visual server 3, so that the captured pattern image can be compared with the pattern stored in advance in the visual server 3 for obstacle detection. If the specific path pattern in the captured image is deformed, the existence of an obstacle is detected, as shown in decision step S303; conversely, if the specific path pattern in the captured image is not deformed, no obstacle is detected (step S304), and the robot 4 follows the specific path pattern 5. After the visual server 3 has detected the existence of an obstacle, the part of the path pattern 5 deformed by the obstacle is erased, as shown in step S305, and the periphery of the erased lines forms the boundary of a virtual obstacle, as shown in step S306. The visual server 3 then separately plans a virtual route 5z for the robot 4 to follow, as shown in step S307. The distance between the virtual route 5z and the boundary is at least the distance from the robot's centroid to its shell, or is determined from several specific markers on the robot (which may be stickers or light-emitting diodes of different colors or shapes), or is obtained by another path planning algorithm; this prevents the robot 4 from colliding with the obstacle 7. Then, as shown in step S308, the robot 4 follows the specific path pattern 5. Afterwards, in step S309, it is judged whether the target point has been reached; if so, navigation is finished (step S310); if not, it is then judged whether the previously planned virtual route has been encountered (step S311). If the virtual route is encountered, the robot follows the virtual route (step S312); if not, it continues to follow the specific path pattern 5 actually projected (step S308), and steps S309 and S311 are repeated until the robot 4 reaches the target point.
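The clearance rule of step S307 — the virtual route must keep at least the robot's centroid-to-shell distance from the virtual-obstacle boundary — can be checked as follows. The point-list representation and the sample coordinates are illustrative assumptions, not data from the patent.

```python
import math

def route_clears_boundary(route, boundary, centroid_to_shell):
    """True if every virtual-route point keeps at least the robot's
    centroid-to-shell distance from every virtual-obstacle boundary point,
    so the robot body cannot touch the obstacle while on the detour."""
    return all(
        math.hypot(px - bx, py - by) >= centroid_to_shell
        for (px, py) in route
        for (bx, by) in boundary
    )

boundary = [(5.0, float(y)) for y in range(3)]     # periphery of the erased lines
detour = [(3.0, -1.0), (3.0, 1.0), (3.0, 3.0)]     # candidate virtual route points
```

With these sample points, a robot whose centroid-to-shell distance is 1.0 clears the boundary, while one of radius 2.5 would not, and the planner would have to push the detour further out.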
The second preferred embodiment of the present invention is shown in Fig. 4. The structured light vision navigation system comprises an image extractor 1, a projector 2a and another projector 2b, a visual server 3, and a robot 4, wherein the projector 2a is installed opposite the projector 2b. The projectors 2a and 2b respectively project mutually overlapping specific path patterns 5a and 5b into the navigation space 6 where the robot 4 needs a navigation path. Together, the patterns 5a and 5b form a combined path pattern that covers the navigation space around the obstacle 7, so that the obstacle, by blocking the projection of the structured light with its own volume, cannot create a region around itself without the combined path pattern. To allow the robot 4 to distinguish whether the path being followed is pattern 5a or pattern 5b, the patterns 5a and 5b are formed of structured light of different colors, for example red and blue, so that the robot 4 can follow the path pattern of a particular color with the aid of its light detector.
The image extractor 1 captures the image of the space and transmits it to the visual server 3 for processing. The visual server 3 recognizes the position of the robot 4 from the image, confirms the position of the obstacle 7 from the deformation of the projected path pattern, and re-plans the path near the obstacle 7 as a virtual route 5z. After receiving the signal sent by the visual server 3, the robot 4 begins to follow the path pattern while performing its task, such as sweeping, and the image extractor 1 continues to capture images. As shown in Fig. 2, when the robot 4 follows the path pattern to the vicinity of the obstacle 7, the visual server 3 transmits a navigation instruction to the robot 4, causing it to adjust its direction and speed and advance along the previously computed virtual route 5z; after the virtual route 5z ends, the robot 4 continues to follow the path pattern actually projected into the space.
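When the two patterns overlap as in this embodiment, the robot's light detector must separate them by color. A minimal sketch of such a color gate, assuming red and blue patterns and RGB pixel values — the margin value is an invented illustration, not a threshold from the patent:

```python
def which_pattern(rgb, margin=60):
    """Assign a detected structured-light pixel to the red pattern 5a or
    the blue pattern 5b by comparing its red and blue channel values.

    The margin is a hypothetical tolerance against ambient light."""
    r, _, b = rgb
    if r > b + margin:
        return "5a"        # red projector 2a
    if b > r + margin:
        return "5b"        # blue projector 2b
    return "unknown"       # ambiguous pixel; ignore it
```

A robot told to follow pattern 5a would then keep only the pixels classified as "5a" when tracking the line beneath it.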
The third preferred embodiment of the present invention is shown in Fig. 5. The structured light vision navigation system comprises an image extractor 1, a projector 2c and another projector 2d, a visual server 3, and a robot 4, wherein the projectors 2c and 2d may be independent devices or may be mechanically connected. The two projectors 2c and 2d respectively project specific path patterns 5c and 5d into the navigation space 6 where the robot 4 needs a navigation path, and the image extractor 1 captures the image of the space and transmits it to the visual server 3 for processing. As is evident from Fig. 5, because the regions of the navigation space 6 onto which the projectors 2c and 2d project do not overlap, the patterns 5c and 5d do not overlap either. The robot 4 can therefore distinguish whether the path being followed is pattern 5c or pattern 5d; in other words, the patterns 5c and 5d do not need structured light of different colors. The visual server 3 recognizes the position of the robot 4 from the image, confirms the position of the obstacle 7 from the deformation of the projected patterns 5c and 5d, and re-plans the path near the obstacle 7 as a virtual route 5z (as shown in Fig. 2). After receiving the signal sent by the visual server 3, the robot 4 begins to follow the path pattern while performing its task, such as sweeping, and the image extractor 1 continues to capture images. As shown in Fig. 2, when the robot 4 follows the path to the vicinity of the obstacle 7, the visual server 3 transmits a navigation instruction to the robot 4, causing it to adjust its direction and speed and advance along the previously computed virtual route 5z; after the virtual route 5z ends, the robot 4 continues to follow the path pattern actually projected into the space.
The fourth preferred embodiment of the present invention is shown in Fig. 6. The structured light vision navigation system comprises an image extractor 1, projectors 2e and 2f, a visual server 3, and a robot 4, wherein the projectors 2e and 2f may be independent devices or may be mechanically connected. In addition, a reflector 8 is provided: the specific path pattern 5e projected by the projector 2e is reflected by the reflector 8 into the navigation space 6 where the robot 4 needs a navigation path, while the projector 2f projects its specific path pattern 5f directly into the navigation space 6. As in the third preferred embodiment, the patterns 5e and 5f do not overlap, so the robot 4 can distinguish whether the path being followed is pattern 5e or pattern 5f, and the patterns 5e and 5f therefore do not need structured light of different colors.
The image extractor 1 then captures the image of the space and transmits it to the visual server 3 for processing. The visual server 3 recognizes the position of the robot 4 from the image, confirms the position of the obstacle 7 from the deformation of the projected patterns 5e and 5f, and re-plans the path near the obstacle 7 as a virtual route 5z. After receiving the signal sent by the visual server 3, the robot 4 begins to follow the path pattern while performing its task, such as sweeping, and the image extractor 1 continues to capture images. As shown in Fig. 2, when the robot 4 follows the path pattern to the vicinity of the obstacle 7, the visual server 3 transmits a navigation instruction to the robot 4, causing it to adjust its direction and speed and advance along the previously computed virtual route 5z; after the virtual route 5z ends, the robot 4 continues to follow the path pattern actually projected into the space.
In summary, the present invention has the following advantages over the prior art:
1. It can also be used in the dark, overcoming the tendency of ordinary visual servoing to fail in low-illumination environments.
2. The structured light provides the obstacle position and a ready-made path at the same time, and intelligent planning is needed only for the sections joining the ready-made path. This reduces the share of computation required of the visual server and improves real-time performance, the most serious weakness of visual servoing.
3. Compared with applying visual servoing over the entire path, the tracking accuracy of the robot is improved, especially at positions far away in the image.
4. It remedies the repeated paths, wasted time, and power consumption of conventional autonomous robot roaming.
5. It can be combined with an existing computer or video surveillance system, so no additional system needs to be built.
Although the present invention has been disclosed above by way of preferred embodiments, these are not intended to limit the invention. Those of ordinary skill in the art may make various changes and modifications without departing from the spirit and scope of the present invention. The protection scope of the invention is therefore defined by the appended claims.

Claims (16)

1. A structured light vision navigation system, comprising:
at least one projector, installed in an environment, which projects structured light into a navigation space to form a specific path pattern;
an image extractor, installed in the environment, which captures an image of the navigation space;
a robot, located in the navigation space, having the ability to follow the specific path pattern; and
a visual server, which receives the image of the navigation space transmitted by the image extractor, recognizes the position of the robot from the image, confirms the position of an obstacle from the deformation of the specific path pattern in the image, and plans a virtual route near the obstacle;
wherein, when the robot follows the specific path pattern to the vicinity of the obstacle, the visual server transmits a navigation instruction to the robot so that the robot advances along the virtual route, and after the virtual route ends, the robot continues to advance along the specific path pattern.
2. The structured light vision navigation system as claimed in claim 1, wherein at least one reflector is further installed in the environment, which reflects the structured light projected by at least one projector into the navigation space so that, together with the specific path patterns projected by the other projectors, it forms a complete path pattern covering the navigation space.
3. The structured light vision navigation system as claimed in claim 2, wherein the specific path patterns projected by the projectors together form a combined path pattern covering the navigation space around the obstacle, so that the obstacle, by blocking the projection of the structured light with its own volume, cannot create a region around itself without the combined path pattern.
4. The structured light vision navigation system as claimed in claim 1, wherein the robot detects the specific path pattern by a light detector or a visual detector, which gives the robot the ability to follow the specific path pattern.
5. the method for a structure light vision navigation comprises following steps:
The particular path pattern that forms in the navigation space by structured light constituted that is projected in via projector;
Extract the image in this navigation space;
The distortion of this particular path pattern from this image is to carry out detection of obstacles, if clear, then robot follows this particular path pattern to advance in this navigation space; If barrier is arranged, then, plan virtual route more separately with eliminate and set the border of virtual obstacles in the image because of this particular path pattern of part of barrier distortion;
Judge whether to arrive impact point, if then navigation is finished; If not, then judge whether to run into the virtual route of first preplanning again;
If run into virtual route, then follow this virtual route to advance, if not, then this robot continues to follow this particular path pattern to advance, and repeat the determining step of described " whether arriving impact point " and " whether running into the virtual route of first preplanning ", till this robot arrives impact point.
6. The method as claimed in claim 5, wherein the obstacle detection and path planning are performed by a computer or an embedded computing device.
7. The method as claimed in claim 5, wherein the robot detects the specific path pattern by a light detector or a visual detector, which gives the robot the ability to follow the specific path pattern.
8. The method as claimed in claim 5, wherein the distance between the virtual route and the boundary of the virtual obstacle is at least equal to the distance from the robot's centroid to its shell, or is determined from the distances between several features on the robot, or is obtained by another path planning algorithm.
9. the method for a structure light vision navigation comprises following steps:
The a plurality of particular path patterns that form in the navigation space by structured light constituted that are projected in via a plurality of projectors;
Extract the pattern image in this navigation space;
The distortion of this particular path pattern from this image is to carry out detection of obstacles, if clear, then robot follows these a plurality of particular path patterns to advance; If barrier is arranged, then, plan virtual route more separately with eliminate and set the border of virtual obstacles in the image because of these a plurality of particular path patterns of part of barrier distortion;
Judge whether to arrive impact point, if then navigation is finished; If not, then judge whether to run into the virtual route of first preplanning again;
If run into virtual route, then follow this virtual route to advance, if not, then this robot continues to follow these a plurality of particular path patterns to advance, and repeat the determining step of described " whether arriving impact point " and " whether running into the virtual route of first preplanning ", till this robot arrives this impact point; Wherein these a plurality of particular path patterns are to overlap mutually or do not overlap mutually, if these a plurality of particular path patterns are mutual overlappings, then each particular path pattern has different colors.
10. method as claimed in claim 9, wherein if this a plurality of particular path patterns are overlapping mutually, then its embodiment is that these a plurality of projectors are projected in same district in this navigation space or this a plurality of projectors individually with different time and are projected in mutual nonoverlapping not same district in this navigation space at one time individually.
11. method as claimed in claim 10, wherein should the common formation one in mutual nonoverlapping not same district in this navigation space be enough to contain the barrier combinatorial path pattern of navigation space on every side, and make barrier not have the zone of not having this combinatorial path pattern around making it because of own vol stops the projection of this structured light.
12. The method as claimed in claim 10, wherein at least one reflector is further installed in the environment for reflecting the structured light projected by at least one of the plurality of projectors into the navigation space.
13. The method as claimed in claim 10, wherein the obstacle detection and path planning are carried out in a computer or an embedded computing device.
14. The method as claimed in claim 10, wherein the robot detects the plurality of particular path patterns by a photodetector or a visual detector, so that the robot is capable of following the plurality of particular path patterns.
15. The method as claimed in claim 10, wherein the structured light is generated continuously, generated intermittently, or generated only once.
16. The method as claimed in claim 10, wherein the distance between the virtual route and the boundary of the virtual obstacle is at least equal to the distance from the centroid of the robot to the edge of its shell, or is determined by the distance between several features of the robot, or is obtained by another path planning algorithm.
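The margin rule of claim 16 — the virtual route keeps at least the centroid-to-shell distance from the virtual obstacle boundary — can be computed directly from the robot's outline. This is a minimal illustrative sketch, not the patent's implementation; the helper name and the example dimensions are hypothetical.

```python
import math

def safety_margin(shell_vertices, centroid):
    """Return the clearance the virtual route should keep from the virtual
    obstacle boundary: the largest centroid-to-shell distance, so that no
    part of the robot's shell can reach the obstacle."""
    return max(math.dist(centroid, v) for v in shell_vertices)

# Example: a 0.6 m x 0.4 m rectangular robot with its centroid at the origin.
margin = safety_margin([(0.3, 0.2), (-0.3, 0.2), (-0.3, -0.2), (0.3, -0.2)],
                       (0.0, 0.0))
# margin is the half-diagonal, sqrt(0.3**2 + 0.2**2) ~= 0.36 m
```

Taking the maximum over the shell vertices is the conservative reading; the claim's alternative (distance between several features of the robot, or another path planning algorithm) would replace this one-line rule.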

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2007101943385A CN101458083B (en) 2007-12-14 2007-12-14 Structure light vision navigation system and method

Publications (2)

Publication Number Publication Date
CN101458083A CN101458083A (en) 2009-06-17
CN101458083B true CN101458083B (en) 2011-06-29

Family

ID=40769084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2007101943385A Active CN101458083B (en) 2007-12-14 2007-12-14 Structure light vision navigation system and method

Country Status (1)

Country Link
CN (1) CN101458083B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102095426A (en) * 2010-11-23 2011-06-15 深圳市凯立德科技股份有限公司 Location-based service terminal and planning path display method and navigation method thereof
TWI459170B (en) * 2012-10-04 2014-11-01 Ind Tech Res Inst A moving control device and an automatic guided vehicle with the same
EP3095430B1 (en) 2012-11-09 2020-07-15 Hocoma AG Gait training apparatus
CN104574365B (en) * 2014-12-18 2018-09-07 中国科学院计算技术研究所 Obstacle detector and method
CN104898677B (en) * 2015-06-29 2017-08-29 厦门狄耐克物联智慧科技有限公司 The navigation system and its method of a kind of robot
CN106695779B (en) * 2015-07-30 2019-04-12 广明光电股份有限公司 Robotic arm movement routine edit methods
JP6705636B2 (en) * 2015-10-14 2020-06-03 東芝ライフスタイル株式会社 Vacuum cleaner
WO2017158973A1 (en) * 2016-03-17 2017-09-21 本田技研工業株式会社 Automatic guided vehicle
CN105856227A (en) * 2016-04-18 2016-08-17 呼洪强 Robot vision navigation technology based on feature recognition
US11009882B2 (en) * 2018-01-12 2021-05-18 Pixart Imaging Inc. Method, system for obstacle detection and a sensor subsystem
US10816994B2 (en) 2018-10-10 2020-10-27 Midea Group Co., Ltd. Method and system for providing remote robotic control
US10803314B2 (en) 2018-10-10 2020-10-13 Midea Group Co., Ltd. Method and system for providing remote robotic control
US10678264B2 (en) * 2018-10-10 2020-06-09 Midea Group Co., Ltd. Method and system for providing remote robotic control
CN113155117A (en) * 2020-01-23 2021-07-23 阿里巴巴集团控股有限公司 Navigation system, method and device
CN112822468B (en) * 2020-12-31 2023-02-17 成都极米科技股份有限公司 Projection control method and device, projection equipment and laser controller

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4907169A (en) * 1987-09-30 1990-03-06 International Technical Associates Adaptive tracking vision and guidance system
GB2259823A (en) * 1991-09-17 1993-03-24 Radamec Epo Limited Navigation system
US6101431A (en) * 1997-08-28 2000-08-08 Kawasaki Jukogyo Kabushiki Kaisha Flight system and system for forming virtual images for aircraft
CN1782668A (en) * 2004-12-03 2006-06-07 曾俊元 Method and device for preventing collison by video obstacle sensing
CN101033971A (en) * 2007-02-09 2007-09-12 中国科学院合肥物质科学研究院 Mobile robot map building system and map building method thereof

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhang Hongtao. "Research on a Structured-Light Road Recognition Method for Ground Robots". Microcomputer Information. 2005, Vol. 21, No. 4, pp. 15-17.
Tu Zhiguo et al. "Vision Measurement and Control System for an Arc Welding Robot". Computer Measurement & Control. 2004, Vol. 12, No. 3, pp. 201-204, 222. *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102929280A (en) * 2012-11-13 2013-02-13 朱绍明 Mobile robot separating visual positioning and navigation method and positioning and navigation system thereof
CN102929280B (en) * 2012-11-13 2015-07-01 朱绍明 Mobile robot separating visual positioning and navigation method and positioning and navigation system thereof

Also Published As

Publication number Publication date
CN101458083A (en) 2009-06-17

Similar Documents

Publication Publication Date Title
CN101458083B (en) Structure light vision navigation system and method
Rouček et al. Darpa subterranean challenge: Multi-robotic exploration of underground environments
Topp et al. Tracking for following and passing persons
Lima et al. Omni-directional catadioptric vision for soccer robots
Trulls et al. Autonomous navigation for mobile service robots in urban pedestrian environments
Sales et al. Adaptive finite state machine based visual autonomous navigation system
CN106227212A (en) The controlled indoor navigation system of precision based on grating map and dynamic calibration and method
Matsushita et al. On-line road boundary modeling with multiple sensory features, flexible road model, and particle filter
CN105554472B (en) The method of the video monitoring system and its positioning robot of overlay environment
CN103970134A (en) Multi-mobile-robot system collaborative experimental platform and visual segmentation and positioning method thereof
US20210325889A1 (en) Method of redefining position of robot using artificial intelligence and robot of implementing thereof
JP2006252346A (en) Mobile robot
KR102031348B1 (en) Autonomous Working System, Method and Computer Readable Recording Medium
Leonard et al. A perception-driven autonomous urban vehicle
KR102023699B1 (en) Method for recognition of location and setting route by cord recognition of unmanned movility, and operation system
Hager et al. Toward domain-independent navigation: Dynamic vision and control
CN112454348A (en) Intelligent robot
Childers et al. US army research laboratory (ARL) robotics collaborative technology alliance 2014 capstone experiment
Berlin Spirit of berlin: An autonomous car for the DARPA urban challenge hardware and software architecture
JP7460328B2 (en) Mobile robot, mobile robot control system, and mobile robot control method
CN113064425A (en) AGV equipment and navigation control method thereof
JPH0820253B2 (en) Position detection method in mobile robot
EP3964913B1 (en) Model parameter learning method and movement mode parameter determination method
Shen et al. Navigation and Task Planning of a Mobile Robot under ROS Environment: A Case Study Using AutoRace Challenge
WO2023010870A1 (en) Robot navigation method, robot, robot system, apparatus and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant