US5093869A - Pattern recognition apparatus utilizing area linking and region growth techniques - Google Patents

Info

Publication number
US5093869A
Authority
US
United States
Prior art keywords
processing means
graph
objects
region
ribbon
Prior art date
Legal status
Expired - Lifetime
Application number
US07/633,833
Inventor
James F. Alves
Jerry A. Burman
Victoria Gor
Michele K. Daniels
Walter W. Tackett
Craig C. Reinhart
Bruce A. Berger
Brian J. Birdsall
Current Assignee
Raytheon Co
Original Assignee
Hughes Aircraft Co
Priority date
Filing date
Publication date
Application filed by Hughes Aircraft Co filed Critical Hughes Aircraft Co
Assigned to HUGHES AIRCRAFT COMPANY, A CORP. OF DE. Assignment of assignors interest. Assignors: ALVES, JAMES F., BERGER, BRUCE A., BIRDSALL, BRIAN J., BURMAN, JERRY A., DANIELS, MICHELE K., GOR, VICTORIA, REINHART, CRAIG C., TACKETT, WALTER W.
Priority to US07/633,833 (US5093869A)
Priority to CA002055714A (CA2055714C)
Priority to IL10010491A (IL100104A)
Priority to NO91914813A (NO914813L)
Priority to EP19910121973 (EP0492512A3)
Priority to AU90016/91A (AU644923B2)
Priority to KR1019910024287A (KR940006841B1)
Priority to JP3345159A (JP2518578B2)
Publication of US5093869A
Application granted
Assigned to HE HOLDINGS, INC., A DELAWARE CORP. Change of name. Assignors: HUGHES AIRCRAFT COMPANY, A CORPORATION OF THE STATE OF DELAWARE
Assigned to HE HOLDINGS, INC., A DELAWARE CORP. Change of name. Assignors: HUGHES AIRCRAFT COMPANY, A CORPORATION OF DELAWARE
Assigned to RAYTHEON COMPANY. Assignment of assignors interest. Assignors: HE HOLDINGS, INC.
Assigned to RAYTHEON COMPANY. Merger. Assignors: HE HOLDINGS, INC.
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/42: Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/422: Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation, for representing the structure of the pattern or shape of an object therefor
    • G06V 10/426: Graphical representations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition

Abstract

Image data is processed by a low level feature detection processor that extracts low level features from an image. This is accomplished by converting a matrix of image data into a matrix of orthogonal icons that symbolically represent the image scene using a predetermined set of attributes. The orthogonal icons serve as the basis of processing by means of a high level graph matching processor which employs symbolic scene segmentation, description, and recognition processing that is performed subsequent to the low level feature detection. This processing generates attributed graphs representative of target objects present in the image scene. High level graph matching compares predetermined attributed reference graphs to the sensed graphs to produce a best common subgraph between the two based on the degree of similarity between the two graphs. The high level graph matching generates a recognition decision based on the value of the degree of similarity and a predetermined threshold. The output of the high level graph matching provides data from which a target aimpoint is determined, and this aimpoint is coupled as an input to a missile guidance system that tracks identified targets.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
The present application is related to patent applications Ser. No. 514,778 filed on Apr. 25, 1990, entitled "Improved Data Compression System and Method," and Ser. No. 514,779 filed on Apr. 25, 1990, entitled "Improved Data Decompression System and Method," whose teachings are incorporated herein by reference.
BACKGROUND
The present invention relates generally to scene recognition systems and methods, and more particularly, to a scene recognition system and method that employs low and high level feature detection to identify and track targets.
Modern missile scene recognition systems employ specialized signal processing architectures and algorithms that are designed to quickly and efficiently detect the presence of particular target objects such as buildings, trucks, tanks, and ships, and the like that are located in the field of view of the missile. Consequently, more sophisticated designs are always in demand that can accurately identify or classify targets in a very short period of time.
SUMMARY OF THE INVENTION
The present invention comprises a scene recognition system and method for use with a missile guidance and tracking system, that employs low and high level feature detection to identify and track targets. The system employs any conventional imaging sensor, such as an infrared sensor, a millimeter wave or synthetic aperture radar, or sonar, for example, to image the scene. The output of the sensor (the image) is processed by a low level feature detection processor that extracts low level features from the image. This is accomplished by converting a matrix of sensor data (image data) into a matrix of orthogonal icons that symbolically represent the image using a predetermined set of attributes. The orthogonal icons serve as the basis for processing by means of a high level graph matching processor which employs symbolic scene segmentation, description, and recognition processing that is performed subsequent to the low level feature detection. The process generates attributed graphs representative of target objects present in the image. The high level graph matching processor compares predetermined attributed reference graphs to the sensed graphs to produce a best common subgraph between the two, based on the degree of similarity between the two graphs. The high level graph matching processor generates a recognition decision based on the value of the degree of similarity and a predetermined threshold. The output of the high level graph matching processor provides data from which an aimpoint is determined. The aimpoint is coupled as an input to the missile guidance system that tracks an identified target.
BRIEF DESCRIPTION OF THE DRAWINGS
The various features and advantages of the present invention may be more readily understood with reference to the following detailed description taken in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:
FIG. 1 is a block diagram of a scene recognition system employing low and high level feature detection to identify and track targets in accordance with the principles of the present invention;
FIG. 2 is a detailed functional block diagram of the system of FIG. 1;
FIGS. 3a-3f show the processing performed to achieve low level feature detection;
FIGS. 4a and 4b show the results of flat linking and region growth processing performed in the scene segmentation section of FIG. 2;
FIGS. 5a-5c show the results of boundary formation and linear feature extraction processing performed in the scene segmentation section of FIG. 2;
FIGS. 6 and 7 show attributes computed for region and ribbon objects, respectively, in object formation processing performed in the scene segmentation section of FIG. 2;
FIGS. 8a-8c show the transitional steps in generating an attributed sensed graph; and
FIGS. 9a and 9b show a reference scene and an attributed reference graph, respectively;
FIG. 10 shows a best common subgraph determined by the graph matching section of the scene recognition portion of FIG. 2; and
FIGS. 11a and 11b show a reference scene and a sensed scene having the respective aimpoints designated therein.
DETAILED DESCRIPTION
Referring to the drawings, FIG. 1 is a block diagram of a scene recognition system employing low and high level feature detection to identify and track targets in accordance with the principles of the present invention. The system comprises a low level feature detector 11 that is adapted to receive image data derived from an imaging sensor 9, such as an infrared sensor, a television sensor, or a radar, for example. The low level feature detector 11 is coupled to a high level graph matching processor 12 that is adapted to process icons representative of features contained in the image scene that are generated by the low level feature detector 11. Aimpoint information is generated and used by a missile navigation system 24 to steer a missile toward a desired target.
The high level graph matching processor 12 includes a serially coupled graph synthesizer 13, graph matcher 14, and aimpoint estimator 15. A reference graph storage memory 16 is coupled to the graph matching processor 12 and is adapted to store predefined graphs representing expected target objects that are present in the image scene. The reference graphs include graphs representing tanks, buildings, landmarks, and water bodies, for example.
The low level feature detector 11 is described in detail in U.S. patent application Ser. No. 514,779 filed on Apr. 25, 1990, entitled "Improved Data Decompression System and Method," whose teachings are incorporated herein by reference. In summary, however, the low level feature detector 11 converts a matrix of sensor data (image data) into a matrix of orthogonal icons that symbolically represent the imaged scene through the use of a set of attributes. These are discussed in more detail below with reference to FIGS. 3-11.
The processing performed by the scene recognition system shown in FIG. 1 employs a series of transformations that converts image information into progressively more compressed and abstract forms. The first transformation performed by the low level feature detector 11 converts the sensor image, which is an array of numbers describing the intensity at each picture position, into a more compact array of icons with attributes that describe the essential intensity distributions of 10×10 pixel blocks of image data.
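By way of illustration only, this block-to-icon reduction might be sketched as follows. The block size matches the 10×10 blocks named above, but the attribute names and the simple spread-based FLAT/GRADIENT test are assumptions of this sketch, not the patent's actual low level feature detector.

```python
import numpy as np

def image_to_icons(image, block=10, flat_thresh=8.0):
    """Reduce a 2-D intensity array to a coarse grid of attributed icons.

    Each block is summarized by its mean intensity and intensity spread;
    blocks with little spread are labeled FLAT, the rest GRADIENT.
    (Illustrative only -- the patent's detector distinguishes several
    gradient types such as EDGE, CORNER, RIBBON, and SPOT.)
    """
    rows, cols = image.shape[0] // block, image.shape[1] // block
    icons = []
    for r in range(rows):
        row_icons = []
        for c in range(cols):
            tile = image[r * block:(r + 1) * block, c * block:(c + 1) * block]
            spread = float(tile.max() - tile.min())
            row_icons.append({
                "type": "FLAT" if spread < flat_thresh else "GRADIENT",
                "mean": float(tile.mean()),
                "spread": spread,
            })
        icons.append(row_icons)
    return icons
```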
The second transformation implemented in the high level graph matching processor 12 links the icon array into separate regions of nearly uniform intensity and identifies linear boundaries between regions. A special case of a region that is long and narrow with roughly parallel sides is identified as a ribbon. This transformation results in a list of the objects formed, various attributes of the objects themselves, and relationships between the objects. This information is encoded in a structure identified as an attributed graph.
The graph matcher 14 compares the graph structure derived from the sensor image with a previously stored graph derived earlier or from reconnaissance information, for example. If a high degree of correlation is found, the scene described by the reference graph is declared to be the same as the scene imaged by the sensor. Given a match, the aimpoint estimator 15 associates the aimpoint given a priori in the reference graph to the sensed graph, and this information is provided as an input to the missile navigation and guidance system.
In a classical graphing application, the general problem of correlating two graphs is NP-complete, in that a linear increase in the number of nodes requires an exponential increase in the search, or number of comparisons, required to match the graphs. In the present invention, this problem is solved by overloading the sensed and reference graphs with unique attributes that significantly reduce the search space and permit rapid, real time searching to be achieved.
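As an illustrative sketch of how attribute overloading can shrink the search, candidate node pairings can be screened for attribute compatibility before any subgraph search begins. The 'kind' and 'area' fields and the tolerance below are assumptions of this sketch, not values taken from the patent.

```python
def candidate_pairs(sensed_nodes, reference_nodes, tol=0.2):
    """Keep only node pairings whose attributes are compatible.

    Filtering by node attributes (object type, area) before searching for
    a common subgraph is one way to prune the otherwise exponential search
    space; the tolerance here is an arbitrary example value.
    """
    pairs = []
    for s_id, s in sensed_nodes.items():
        for r_id, r in reference_nodes.items():
            if s["kind"] != r["kind"]:
                continue
            if abs(s["area"] - r["area"]) > tol * max(r["area"], 1):
                continue
            pairs.append((s_id, r_id))
    return pairs
```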
FIG. 2 is a detailed functional block diagram of the system of FIG. 1. Scene segmentation processing 21 comprises low level feature description processing, intensity and gradient based segmentation processing 32, 33 and gradient/intensity object merging 34, all performed by the low level feature detector 11 (FIG. 1). Output signals from the scene segmentation processing 21 are coupled to scene description processing 22 that comprises graph synthesis processing 35 that produces a sensed graph of objects present in the image. Reference graphs 36 are stored that comprise graphs representing vehicles, buildings, landmarks, and water bodies, for example. These graphs are prepared from video data gathered during reconnaissance flights, for example, which provide mission planning information. Target objects that are present in the video data are processed by the present invention to generate the reference graphs which are ultimately used as comparative data from which target objects are identified and selected during operational use of the invention.
Scene recognition processing 23 employs the graphs generated by the graph synthesis processing 35 and the reference graphs 36 by means of graph matching processing 37 and scene transformation processing 38 which generate the aimpoint estimate that is fed to the navigation system 24.
FIGS. 3a-3f show the processing performed to achieve low level feature discrimination. This process converts a matrix of sensor data (image data) into a matrix of orthogonal icons that represent the imaged scene symbolically via a set of attributes. These orthogonal icons serve as the basis for performing the symbolic scene segmentation, description, and recognition processing shown in FIG. 2. FIG. 3a represents an image of a house 41 whose sides 42, 43 and roof sections 44, 45 have different shading due to the orientation of the sun relative thereto. Each face of the house 41 is identified by a different texture (shading) in FIG. 3a. The image of the house 41 is an array of numbers representative of different intensities associated with each pixel of data.
The image data representing the house 41 is processed by a 10×10 block window 46, as roughly illustrated in FIG. 3b, which generates the block data illustrated in FIG. 3c. These blocks, or cells, of data are generated in the low level feature detector 11 and contain the orthogonal icons. Each cell in FIG. 3c is shaded in accordance with the shading of the corresponding block in FIG. 3b.
In FIG. 3d lines are formed, and in FIG. 3e regions are formed from the shaded cells in FIG. 3c. The lines are generated by gradient based segmentation processing 33 in the scene segmentation processor 21 shown in FIG. 2, while the regions are generated by the intensity based segmentation processing 32 in the scene segmentation processor 21 shown in FIG. 2. The regions are formed using flat linking and gradient linking, and the region shapes are determined. The line and region information is then processed by gradient/intensity object merging 34 and the graph synthesis processing 35 shown in FIG. 2 in order to form regions and ribbons, and a graph is synthesized, as illustrated in FIG. 3f, which is a graph representation of the house 41 of FIG. 3a. An aimpoint is shown in FIG. 3f that is derived from the information comprising the reference graph 36 shown in FIG. 2.
The flat linking and region growth processing is part of the scene segmentation processing 21 of FIG. 2. This flat linking and region growth processing is comprised of two subprocesses. The flat linking process groups low level feature discrimination orthogonal icons of type FLAT into homogeneous intensity flat regions using a relaxation-based algorithm. The result is a set of regions comprised of FLAT icons and described by their area (number of constituent FLAT icons), intensity (average intensity of the constituent FLAT icons), and a list of the constituent FLAT icons. More particularly, groups of orthogonal icons having homogeneous intensity are formed to generate a set of regions having a block resolution boundary. The groups of orthogonal icons are comprised of homogeneous intensity icons described by their area (the number of constituent icons having homogeneous intensity), their intensity (the average intensity of the constituent homogeneous intensity icons), and a list of the constituent homogeneous intensity icons. Region growth processing then appends adjacent orthogonal icons having an intensity gradient thereacross to provide a feature-resolution boundary.
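Purely as an illustration of the flat linking idea, grouping adjacent FLAT icons of similar intensity might look like the sketch below. A plain flood fill with a fixed intensity tolerance stands in for the relaxation-based algorithm named above, and the dictionary field names are assumptions of this sketch.

```python
from collections import deque

def flat_link(icons, intensity_tol=10.0):
    """Group 4-connected FLAT icons of similar mean intensity into regions.

    Returns regions carrying the attributes named in the text: area (icon
    count), average intensity, and the list of member icons.  A flood fill
    stands in for the patent's relaxation-based linking.
    """
    rows, cols = len(icons), len(icons[0])
    region_of = [[None] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if icons[r][c]["type"] != "FLAT" or region_of[r][c] is not None:
                continue
            members, queue = [], deque([(r, c)])
            region_of[r][c] = len(regions)
            while queue:
                y, x = queue.popleft()
                members.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and region_of[ny][nx] is None
                            and icons[ny][nx]["type"] == "FLAT"
                            and abs(icons[ny][nx]["mean"] - icons[r][c]["mean"]) <= intensity_tol):
                        region_of[ny][nx] = len(regions)
                        queue.append((ny, nx))
            regions.append({
                "area": len(members),
                "intensity": sum(icons[y][x]["mean"] for y, x in members) / len(members),
                "icons": members,
            })
    return regions
```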
The region growth process appends adjacent orthogonal icons of type gradient (non FLAT) to the grown flat regions, thus providing a feature-resolution boundary as opposed to the 10×10 block resolution boundary provided by the flat linking process. The method employed is as follows. (1) Begin with a flat region provided by the flat linking process. The region is comprised solely of FLAT icons and grows up to the flanking gradient icons, which stop the growth. (2) Consider the gradient icons flanking the region. Each gradient icon is described by a bi-intensity model, one of whose intensities is adjacent to the flat region. If this intensity is similar to the intensity of the flat region, extend the region boundary to include the portion of the gradient icon covered by that intensity. (3) Repeat for all flat regions. Flat linking assigns a single region number to every FLAT icon. Region growth assigns multiple region numbers to gradient icons. Gradient icons of type EDGE and CORNER are assigned two region numbers, those of type RIBBON are assigned three region numbers. Gradient icons of type SPOT are not considered. The result of this process is a set of homogeneous intensity regions consisting of FLAT icons and partial gradient icons. This is described in detail in the "Improved Data Decompression System and Method" patent application cited above.
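The extension step (2) above can be pictured with the following sketch; the 'facing' intensity field and the fixed similarity tolerance are assumptions standing in for the patent's bi-intensity model.

```python
def grow_region(region, flanking_gradient_icons, similarity_tol=10.0):
    """Extend a flat region into flanking gradient icons (step 2 above).

    Each gradient icon is assumed to carry a two-intensity model; if the
    intensity facing the region is close to the region's intensity, the
    matching portion of the icon is appended to the region.
    """
    for icon in flanking_gradient_icons:
        # 'facing' names the model intensity adjacent to the region; this
        # field layout is a stand-in for the patent's bi-intensity model.
        if abs(icon["facing"] - region["intensity"]) <= similarity_tol:
            region.setdefault("partial_icons", []).append(icon)
            # EDGE/CORNER icons join two regions and RIBBON icons three,
            # so the same icon may be appended to several regions.
            icon.setdefault("region_numbers", []).append(region.get("id"))
    return region
```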
FIGS. 4a and 4b show the results of flat linking and region growth processing performed by the scene segmentation processing 21 of FIG. 2. In particular, this processing is performed by the intensity based segmentation processing 32, gradient based segmentation processing 33, and gradient/intensity object merging 34 processes shown in FIG. 2. In FIGS. 4a and 4b, the orthogonal icons containing a single shade of gray represent FLAT icons 50, 51. Those that contain two shades of gray (along the diagonal) represent EDGE icons 52. Regions are comprised of both FLAT and gradient (non FLAT) icons. EDGE icons 52, as well as all other gradient icons, are associated with multiple regions, that is, they are assigned multiple region numbers. In FIGS. 4a and 4b, each of the EDGE icons 52 is assigned two region numbers, 1 and 2. This provides boundaries that are accurate to the gradient features detected by the low level feature discrimination process.
Boundary formation and linear feature extraction processing is performed by the scene segmentation processing 21 of FIG. 2. This processing is comprised of three subprocesses. First, the gradient boundaries of each region 60-63 (FIG. 5a) are traversed, forming a gradient chain around each region. Second, the gradient chains for each region are analyzed for linear segments 64-68 (FIG. 5b), and pointers are inserted into the chains at the beginning and end of each linear segment 64-68. Third, the linear segments 64-68 are analyzed for the presence of adjacent segments related to two or more regions that form line segments. The results of these processes include boundary descriptions for each of the regions 60-63 formulated by the flat linking and region growth processes, and linear segments 70-73 (FIG. 5c) represented by length, orientation, and end point coordinates.
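One way to picture the second subprocess, marking where linear segments begin and end along a boundary chain, is the sketch below; the turn-angle test and its tolerance are illustrative assumptions rather than the patent's method.

```python
import math

def split_into_linear_segments(boundary_chain, angle_tol_deg=15.0):
    """Mark where a boundary chain bends enough to start a new linear segment.

    boundary_chain is an ordered list of (x, y) boundary points; the result
    is a list of (start_index, end_index) pairs, mirroring the pointers the
    text says are inserted at the beginning and end of each segment.
    """
    def heading(p, q):
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

    segments, start = [], 0
    for i in range(1, len(boundary_chain) - 1):
        turn = abs(heading(boundary_chain[i - 1], boundary_chain[i]) -
                   heading(boundary_chain[i], boundary_chain[i + 1]))
        turn = min(turn, 360.0 - turn)       # normalize wrap-around
        if turn > angle_tol_deg:             # sharp bend: close the segment
            segments.append((start, i))
            start = i
    segments.append((start, len(boundary_chain) - 1))
    return segments
```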
FIGS. 5a-5c show the results of boundary formation and linear feature extraction processing performed in the scene segmentation processing 21 of FIG. 2. FIG. 5a shows the regions 60-63 derived by the flat linking and region growth process. FIG. 5b shows the boundary descriptions of each of the regions 60-63 as derived by the boundary formation process. FIG. 5c shows the line segments 70-73 derived by the linear feature extraction process.
The object formation process is comprised of two subprocesses. The first forms ribbon objects 75 from the linear features provided by the gradient boundary and linear feature extraction processes and creates symbolic descriptions of these objects. The second creates symbolic descriptions of the regions provided by the flat linking and region growing processes thus producing region objects 76-78 (FIGS. 8a, 8b). These ribbon and region objects become the nodes of the attributed sensed graph. The symbolic descriptions serve as the attributes. FIGS. 6 and 7 show attributes computed for region and ribbon objects 75-78, shown in FIGS. 8a and 8b, respectively, in object formation processing performed by the scene segmentation process 21 of FIG. 2, and more particularly by the graph synthesis processing 35 in FIG. 2.
FIG. 6 shows region object attributes and computations for a region boundary and a convex hull using the equations shown in the drawing. FIG. 7 shows the ribbon object attributes, including ribbon length in FIG. 7a, ribbon intensity in FIG. 7b, polarity in FIG. 7c, and ribbon orientation in FIG. 7d. The arrows shown in FIGS. 7b and 7c are indicative of the orientation of the line segments that constitute the ribbon.
For region objects, the process computes the area, perimeter, and convex hull attributes. For ribbon objects, the process searches through a line table looking for pairs of lines that: (1) differ in orientation by 180 degrees, (2) are in close proximity to each other, (3) are flanked by similar intensities, and (4) do not enclose another line segment that is parallel to either line. When a pair of lines fitting these constraints is found, the ribbon attributes for them are computed. The attributes for ribbon objects include: (1) the intensity of the ribbon, (2) the polarity of the ribbon (light on dark or dark on light), (3) the width of the ribbon (the distance between the two lines), and (4) the orientation of the ribbon.
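The four constraints above can be pictured with the following sketch, which pairs roughly anti-parallel, nearby lines with similar flanking intensities. The field names, the fixed tolerances, and the omission of constraint (4) are simplifications of this sketch, not features of the patent.

```python
def find_ribbons(lines, max_width=12.0, intensity_tol=10.0):
    """Pair anti-parallel, nearby lines into ribbon candidates.

    Each line is assumed to be a dict with 'orientation' (degrees),
    'midpoint' (x, y), and 'flank_intensity'.  Constraint (4) from the
    text, no enclosed parallel line, is omitted here for brevity.
    """
    ribbons = []
    for i, a in enumerate(lines):
        for b in lines[i + 1:]:
            opposite = abs(abs(a["orientation"] - b["orientation"]) - 180.0) < 5.0
            dx = a["midpoint"][0] - b["midpoint"][0]
            dy = a["midpoint"][1] - b["midpoint"][1]
            width = (dx * dx + dy * dy) ** 0.5
            similar = abs(a["flank_intensity"] - b["flank_intensity"]) <= intensity_tol
            if opposite and width <= max_width and similar:
                ribbons.append({
                    "width": width,
                    "intensity": (a["flank_intensity"] + b["flank_intensity"]) / 2.0,
                    "orientation": a["orientation"] % 180.0,
                    "lines": (a, b),
                })
    return ribbons
```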
The graph synthesis process and its associated graph language compute relational descriptions between the region and ribbon objects 75-78 (FIGS. 8a, 8b) provided by the object formation process and formulate the attributed sensed graph structure from the region and ribbon objects and their relationships. The relationships between every region/region, region/ribbon, and ribbon/ribbon pair are computed. The relationships are of type spatial (proximity to one another) or comparative (attribute comparisons). Graphs, nodes, links between nodes, and graph formation and processing are well known in the art and will not be described in detail herein. One skilled in the art of graph processing should be able to derive the graphs described herein given a knowledge of the attributes that are to be referenced and the specification of the graphs described herein.
In formulating the attributed sensed graph, the region and ribbon objects are placed at graph nodes, one object per node, along with their descriptive attributes. The relationships between each pair of objects are placed at the graph arcs (links) along with their attributes. With this scheme, a fully connected attributed graph is formulated which symbolically represents the original imaged scene.
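As an illustration of the fully connected attributed graph described above, the sketch below places one object per node and one relationship record per object pair. The 'centroid' and 'intensity' fields and the particular relation names are assumptions of this sketch, not the patent's graph language.

```python
from itertools import combinations

def build_attributed_graph(objects):
    """Build a fully connected attributed graph from region/ribbon objects.

    Each object becomes one node holding its descriptive attributes; every
    pair of objects gets a link holding spatial and comparative relations.
    """
    nodes = {i: obj for i, obj in enumerate(objects)}
    links = {}
    for i, j in combinations(nodes, 2):
        a, b = nodes[i], nodes[j]
        links[(i, j)] = {
            "above": a["centroid"][1] < b["centroid"][1],       # spatial relation
            "left_of": a["centroid"][0] < b["centroid"][0],     # spatial relation
            "intensity_ratio": a["intensity"] / max(b["intensity"], 1e-6),  # comparative
        }
    return {"nodes": nodes, "links": links}
```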
FIGS. 8a-8c show the transitional steps in generating an attributed sensed graph 80 in the graph synthesis process 35 (FIG. 2). The graph 80 is comprised of nodes 81-84, which symbolically represent objects within the scene; node attributes, which describe the objects; arcs (or links), which symbolically represent the relationships between the objects within the scene; and link attributes, which describe those relationships. More particularly, node 1 is representative of the region 1 object 76, node 2 is representative of the region 2 object 77, node 3 is representative of the region 3 object 78, and node 4 is representative of the ribbon object 75. The various links between the ribbon and region objects 75-78 are shown as 1 link 2, 2 link 4, 1 link 4, 1 link 3, 2 link 3, and 3 link 4. The links include such parameters as the fact that the region 1 object 76 is above and adjacent to the region 2 object 77, that the region 1 object 76 is left of the region 3 object 78, that the region 2 object 77 is left of and adjacent to the ribbon object 75, and so forth.
Reference graph processing entails the same functions as the object formation and graph synthesis processes with two differences. First, it is performed prior to the capture of the sensed image and subsequent scene segmentation and description. Second, it receives its inputs from a human operator by way of graphical input devices rather than from autonomous segmentation processes.
FIGS. 9a and 9b show a reference scene 90 and an attributed reference graph 91, respectively. FIG. 9a shows the reference scene 90 comprised of region 1, region 2, the ribbon, and a desired aimpoint 94. The attributed reference graph 91 shown in FIG. 9b is comprised of nodes and links substantially the same as those shown in FIG. 8c. The nodes and links comprise the same types of relationships described with reference to FIG. 8c.
The graph matching process 37 (FIG. 2) compares the attributed reference and attributed sensed graphs, generates a best common subgraph between the two based on the degree of similarity between the two graphs (the confidence number), and generates a recognition decision based on the value of the confidence number and a predetermined threshold. Feasibility is determined by the degree of match between the node and arc attributes of each of the graphs. A heuristic procedure is included to ignore paths that cannot produce a confidence number larger than the predetermined threshold. Also, paths which lead to ambiguous solutions, that is, solutions that match a single object in one graph to multiple objects in the other, are ignored. Finally, paths that do not preserve the spatial relationships between the objects as they appeared in the original scene are ignored.
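A hedged sketch of this kind of thresholded common-subgraph search appears below. It performs a depth-first assignment of sensed nodes to reference nodes, forbids one-to-many matches, and prunes branches that cannot exceed the threshold; the scoring function and the simple upper bound are assumptions of the sketch, not the patent's heuristic.

```python
def match_graphs(sensed, reference, node_score, threshold):
    """Depth-first search for a high-scoring common subgraph.

    node_score(s_node, r_node) is assumed to return a similarity in [0, 1];
    branches whose best possible total cannot exceed the current best (which
    starts at the threshold) are pruned.
    """
    ref_ids = list(reference["nodes"])
    best = {"score": threshold, "mapping": None}

    def search(remaining_sensed, used_ref, mapping, score):
        upper_bound = score + len(remaining_sensed)   # each pair scores at most 1
        if upper_bound <= best["score"]:
            return                                    # cannot beat threshold/best
        if not remaining_sensed:
            if score > best["score"]:
                best.update(score=score, mapping=dict(mapping))
            return
        s_id, rest = remaining_sensed[0], remaining_sensed[1:]
        for r_id in ref_ids:
            if r_id in used_ref:                      # forbid one-to-many matches
                continue
            pair = node_score(sensed["nodes"][s_id], reference["nodes"][r_id])
            if pair > 0.0:
                mapping[s_id] = r_id
                search(rest, used_ref | {r_id}, mapping, score + pair)
                del mapping[s_id]
        search(rest, used_ref, mapping, score)        # allow leaving s_id unmatched

    search(list(sensed["nodes"]), frozenset(), {}, 0.0)
    return best
```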
FIG. 10 shows a best common subgraph 92 determined by the graph matching section of the scene recognition processing 23 of FIG. 2. The graph shown in FIG. 8c is matched against the graph shown in FIG. 9b which generates the common subgraph 92 of FIG. 10. This is performed in a conventional manner well-known in the graph processing art and will not be described in detail herein.
FIGS. 11a and 11b show a reference scene 90 and a sensed scene 93 having their respective aimpoints 94, 95 designated therein. More particularly, FIGS. 11a and 11b depict one possible scenario in which an object is detected in the sensed scene (region 3) but is not designated in the reference graph. An appropriate aimpoint is designated in spite of this difference as well as others including object additions or omissions, object size and shape differences, and changes due to imaging conditions. The aimpoint 94 in the reference scene 90 is included in the attributed reference graph and is indicative of a desired target aiming location. This aimpoint is "transferred" to the sensed scene as the aimpoint that the missile should be directed at. This information is transferred to the missile navigation system for its use.
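By way of illustration, transferring the reference aimpoint to the sensed scene could be sketched as below, using the mean translation between matched object centroids. The patent does not specify this particular model, so the pure-translation assumption and the 'centroid' field are illustrative only.

```python
def transfer_aimpoint(ref_aimpoint, mapping, sensed, reference):
    """Carry the reference aimpoint over to the sensed scene.

    Uses the mean translation between matched object centroids; a pure
    translation stands in for the scene transformation processing 38,
    whose exact form is not given in the text.
    """
    if not mapping:
        return None
    dx = dy = 0.0
    for s_id, r_id in mapping.items():
        sx, sy = sensed["nodes"][s_id]["centroid"]
        rx, ry = reference["nodes"][r_id]["centroid"]
        dx += sx - rx
        dy += sy - ry
    n = len(mapping)
    return (ref_aimpoint[0] + dx / n, ref_aimpoint[1] + dy / n)
```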
Thus there has been described a new and improved scene recognition system and method that employs low and high level feature detection to identify and track targets. It is to be understood that the above-described embodiment is merely illustrative of some of the many specific embodiments which represent applications of the principles of the present invention. Clearly, numerous other arrangements can be readily devised by those skilled in the art without departing from the scope of the invention.

Claims (16)

What is claimed is:
1. A scene recognition system employing low and high level detection to identify and track targets located in an image scene and a missile guidance system adapted to steer a missile toward a desired target, said system comprising:
low level feature detection processor means for processing image data derived from and representative of an imaged scene, and for extracting features from the imaged scene by converting the image data into a matrix of orthogonal icons that symbolically represent the image using a predetermined set of attributes, said low level feature detection processor means comprising:
a) flat linking processing means for forming groups of orthogonal icons having homogeneous intensity regions to generate a set of regions having a block resolution boundary and that are comprised of homogeneous intensity icons described by their area, comprising the number of constituent icons having homogeneous intensity, the intensity, comprising the average intensity of the constituent homogeneous intensity icons, and a list of the constituent homogeneous intensity icons; and
b) region growth processing means coupled to the flat linking processing means for appending adjacent orthogonal icons having an intensity gradient thereacross to provide a feature-resolution boundary;
graph synthesis processor means coupled to the low level feature detection processing means for processing the orthogonal icons to generate predetermined objects representative of objects that are in the scene, and for computing relational descriptions between the objects to form an attributed sensed graph from the objects and their relationships as described by their attributes, and whereupon the objects are placed at graph nodes, one object per node, along with their descriptive attributes, and wherein the relationships between object pairs are placed at graph links along with their attributes, and whereupon a fully connected attributed graph is formulated which symbolically represents the image scene;
reference graph storage means coupled to the graph synthesis processing means for storing predetermined reference graphs representative of identifiable targets of interest that are expected to be present in the data comprising the image; and
graph matching processing means coupled to the graph synthesis processing means for comparing predetermined attributed reference graphs to the sensed graphs to produce an object recognition decision based on the value of the degree of similarity between the attributed reference graphs to the sensed graphs and a predetermined threshold, and for providing an output signal that is determinative of a target aimpoint, which output signal is coupled as an input to the missile guidance system to provide a guidance signal that is adapted to steer the missile toward the identified target.
2. The system of claim 1 wherein the low level feature detection processing means further comprises:
boundary formation and linear feature extraction processing means coupled to the region growth processing means for (1) traversing the gradient boundaries of each region to form a gradient chain around each region, (2) analyzing the gradient chains for each region for linear segments by means of pointers inserted into the chains at the beginning and end of each linear segment, and (3) analyzing the linear segments for the joining of segments related to two or more regions to form linear segments, which generates boundary descriptions for each of the regions formulated by the flat linking and region growth processing means and linear segments represented by length, orientation, and end point coordinates.
3. The system of claim 2 wherein the low level feature detection processing means further comprises:
object formation processing means coupled to the boundary formation and linear feature extraction processing means for forming ribbon objects from the linear features provided by the gradient boundary and linear feature extraction processing means and for creating symbolic descriptions of these objects, by creating symbolic descriptions of the regions provided by the flat linking and region growing processing means to produce region objects that define nodes of an attributed sensed graph, and whose symbolic descriptions comprise the attributes;
whereby for region objects, the object formation processing means computes the area, perimeter, and convex hull attributes, and for the ribbon objects, the object formation processing means searches through a line table looking for pairs of lines that: (1) differ in orientation by 180 degrees, (2) are in close proximity to each other, (3) are flanked by similar intensities, (4) do not enclose another line segment that is parallel to either line, and when a pair of lines fitting these constraints is found the ribbon attributes for them are computed, and wherein the attributes for ribbon objects include: (1) intensity of the ribbon, (2) polarity of the ribbon, meaning light on dark or dark on light, (3) the width of the ribbon, meaning the distance between the two lines, and (4) the orientation of the ribbon.
4. The system of claim 3 wherein the graph matching processing means compares predetermined attributed reference graphs to the sensed graphs to produce a best common subgraph between the two based on the degree of similarity between the two graphs, and generates a recognition decision based on the value of the degree of similarity and a predetermined threshold.
5. The system of claim 4 wherein the graph matching processing means utilizes a heuristically directed depth-first search technique to evaluate feasible matches between nodes and arc attributes of the attributed reference and sensed graphs.
6. The system of claim 5 wherein the graph matching processing means determines feasibility by the degree of match between the node and arc attributes of each of the graphs, and a heuristic procedure is included to ignore paths that cannot produce a degree of similarity larger than the predetermined threshold.
7. The system of claim 6 wherein the paths which lead to ambiguous solutions, comprising solutions that match a single object in one graph to multiple objects in the other, are ignored, and wherein paths that do not preserve the spatial relationships between the objects as they appeared in the image are ignored.
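Claims 4 through 7 describe what is, in effect, a branch-and-bound search: a depth-first assignment of reference nodes to sensed nodes that (a) prunes any partial path whose best achievable score cannot exceed the threshold or the best score found so far, (b) rejects ambiguous one-to-many assignments, and (c) rejects assignments that distort the pairwise spatial layout. The sketch below follows that outline with invented node-similarity and distance-consistency measures; it is not the patented matcher.

```python
# Branch-and-bound sketch in the spirit of claims 4-7; the graph format,
# similarity measure, and tolerances are assumptions, not the patent's definitions.

def node_similarity(ref_attrs, sen_attrs):
    """Crude attribute similarity in [0, 1]; real systems weight each attribute."""
    keys = set(ref_attrs) & set(sen_attrs)
    if not keys:
        return 0.0
    scores = [1.0 - min(abs(ref_attrs[k] - sen_attrs[k]) / (abs(ref_attrs[k]) + 1e-6), 1.0)
              for k in keys]
    return sum(scores) / len(scores)

def match_graphs(ref, sen, threshold, dist_tol=0.25):
    """ref/sen: {'nodes': {name: attrs}, 'dist': {(a, b): float}} with both (a, b) and (b, a) stored."""
    ref_nodes, sen_nodes = list(ref["nodes"]), list(sen["nodes"])
    best = {"score": 0.0, "assignment": {}}

    def spatially_consistent(assignment, r, s):
        for r2, s2 in assignment.items():
            d_ref, d_sen = ref["dist"][(r, r2)], sen["dist"][(s, s2)]
            if abs(d_ref - d_sen) > dist_tol * max(d_ref, d_sen):
                return False
        return True

    def dfs(i, assignment, score):
        upper = score + (len(ref_nodes) - i)        # each remaining node adds at most 1
        if upper < threshold or upper <= best["score"]:
            return                                  # heuristic pruning (claim 6)
        if i == len(ref_nodes):
            best.update(score=score, assignment=dict(assignment))
            return
        r = ref_nodes[i]
        for s in sen_nodes:
            if s in assignment.values():
                continue                            # reject ambiguous one-to-many matches (claim 7)
            if not spatially_consistent(assignment, r, s):
                continue                            # reject spatially inconsistent matches (claim 7)
            assignment[r] = s
            dfs(i + 1, assignment, score + node_similarity(ref["nodes"][r], sen["nodes"][s]))
            del assignment[r]
        dfs(i + 1, assignment, score)               # allow leaving this reference node unmatched
    dfs(0, {}, 0.0)
    best["recognized"] = best["score"] >= threshold
    return best

if __name__ == "__main__":
    ref = {"nodes": {"bldg": {"area": 900.0}, "road": {"width": 6.0}},
           "dist": {("bldg", "road"): 40.0, ("road", "bldg"): 40.0}}
    sen = {"nodes": {"r1": {"area": 870.0}, "r2": {"width": 5.5}},
           "dist": {("r1", "r2"): 43.0, ("r2", "r1"): 43.0}}
    print(match_graphs(ref, sen, threshold=1.5))
```

The pruning bound simply assumes each still-unmatched reference node could contribute a perfect score of 1, which keeps the bound admissible while remaining cheap to evaluate.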
8. A scene recognition system employing low and high level feature detection to identify and track targets located in an imaged scene and a missile guidance system adapted to steer the missile toward a desired target, said system comprising:
low level feature detection processing means adapted to process image data derived from and representative of an image scene, for extracting low level features from the image by converting the image data into a matrix of orthogonal icons that symbolically represent the image scene using a predetermined set of attributes;
graph synthesis processing means coupled to the low level feature detection processing means for processing the orthogonal icons and for computing relational descriptions between the region and ribbon objects provided by the object formation processing means to form the attributed sensed graph from the region and ribbon objects and their relationships, and whereupon the region and ribbon objects are placed at graph nodes, one object per node, along with their descriptive attributes, the relationships between each pair of objects are placed at the graph links along with their attributes, whereupon a fully connected attributed graph is formulated which symbolically represents the image;
reference graph storage means coupled to the graph synthesis processing means for storing predetermined reference graphs representative of identifiable targets of interest that are expected to be present in the data comprising the image; and
graph matching processing means coupled to the graph synthesis processing means for comparing predetermined attributed reference graphs to the sensed graphs to produce an object recognition decision, which produces a best common subgraph between the two based on the degree of similarity between the two graphs, for generating a recognition decision based on the value of the degree of similarity and a predetermined threshold, and for providing an output signal that is determinative of a target aimpoint, which output signal is coupled as an input to the missile guidance system to provide a guidance signal that is adapted to steer the missile toward the identified target.
9. A scene recognition system employing low and high level feature detection to identify and track targets located in an image scene, said system comprising:
low level feature detection processing means adapted to process image data derived from and representative of an image scene, for extracting low level features from the image scene by converting the image data into a matrix of orthogonal icons that symbolically represent the image using a predetermined set of attributes;
flat linking processing means coupled to the low level feature detection processing means for forming groups of orthogonal icons having homogeneous intensity regions by means of a relaxation-based algorithm to generate a set of regions having a block resolution boundary and that are comprised of homogeneous intensity icons described by their area, comprising the number of constituent icons having homogeneous intensity, the intensity, comprising the average intensity of the constituent homogeneous intensity icons, and a list of the constituent homogeneous intensity icons;
region growth processing means coupled to the flat linking processing means for appending adjacent orthogonal icons having an intensity gradient thereacross to provide a feature-resolution boundary;
boundary formation and linear feature extraction processing means coupled to the region growth processing means for (1) traversing the gradient boundaries of each region to form a gradient chain around each region, (2) analyzing the gradient chains for each region for linear segments by means of pointers inserted into the chains at the beginning and end of each linear segment, and (3) analyzing the linear segments for the joining of segments related to two or more regions to form linear segments, which generates boundary descriptions for each of the regions formulated by the flat linking and region growth processing means and linear segments represented by length, orientation, and end point coordinates;
object formation processing means coupled to the boundary formation and linear feature extraction processing means for forming ribbon objects from the linear features provided by the gradient boundary and linear feature extraction processing means and for creating symbolic descriptions of these objects, by creating symbolic descriptions of the regions provided by the flat linking and region growing processing means to produce region objects that define nodes of an attributed sensed graph, and whose symbolic descriptions comprise the attributes;
whereby for region objects, the object formation processing means computes the area, perimeter, and convex hull attributes, and for the ribbon objects, the object formation processing means searches through a line table looking for pairs of lines that: (1) differ in orientation by 180 degrees, (2) are in close proximity to each other, (3) are flanked by similar intensities, (4) do not enclose another line segment that is parallel to either line, and when a pair of lines fitting these constraints is found the ribbon attributes for them are computed, and wherein the attributes for ribbon objects include: (1) intensity of the ribbon, (2) polarity of the ribbon, meaning light on dark or dark on light, (3) the width of the ribbon, meaning the distance between the two lines, and (4) the orientation of the ribbon;
graph synthesis processing means coupled to the low level feature detection processing means for processing the orthogonal icons and for computing relational descriptions between the region and ribbon objects provided by the object formation processing means to form the attributed sensed graph from the region and ribbon objects and their relationships, and whereupon the region and ribbon objects are placed at graph nodes, one object per node, along with their descriptive attributes, the relationships between each pair of objects are placed at the graph links along with their attributes, whereupon a fully connected attributed graph is formulated which symbolically represents the imaged scene;
reference graph storage means coupled to the graph synthesis processing means for storing predetermined reference graphs representative of identifiable targets of interest that are expected to be present in the data comprising the image scene; and
graph matching processing means coupled to the graph synthesis processing means for comparing predetermined attributed reference graphs to the sensed graphs to produce a best common subgraph between the two based on the degree of similarity between the two graphs, and for generating a recognition decision based on the value of the degree of similarity and a predetermined threshold, and wherein the graph matching processing means utilizes a heuristically directed depth-first search technique to evaluate feasible matches between the nodes and arcs of the attributed reference and sensed graphs, wherein feasibility is determined by the degree of match between the node and arc attributes of each of the graphs, and a heuristic procedure is included to ignore paths of the tree that cannot possibly produce a degree of similarity larger than the predetermined threshold, and wherein the paths of the tree which lead to ambiguous solutions, comprising solutions that match a single object in one graph to multiple objects in the other, are ignored, and wherein paths of the tree that do not preserve the spatial relationships between the objects as they appeared in the original scene are ignored.
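Claim 9 splits segmentation into two passes: flat linking, which groups homogeneous-intensity icons into regions by repeated (relaxation-style) local merging, and region growth, which then appends the remaining gradient icons to whichever adjacent region they resemble most, refining the block-resolution boundary toward feature resolution. The grid-based sketch below is one interpretation of those two passes, with assumed thresholds and array layout.

```python
# Illustrative sketch of the flat-linking and region-growth stages; icons are
# cells of a coarse grid, and all names and thresholds are assumptions.
import numpy as np

def flat_link(mean, homogeneous, tol=8.0, max_iters=10):
    """Relaxation-style grouping: repeatedly merge adjacent homogeneous icons
    whose mean intensities differ by less than tol. Returns a label map."""
    rows, cols = mean.shape
    labels = np.arange(rows * cols).reshape(rows, cols)
    for _ in range(max_iters):
        changed = False
        for r in range(rows):
            for c in range(cols):
                if not homogeneous[r, c]:
                    continue
                for dr, dc in ((0, 1), (1, 0)):
                    r2, c2 = r + dr, c + dc
                    if (r2 < rows and c2 < cols and homogeneous[r2, c2]
                            and abs(mean[r, c] - mean[r2, c2]) < tol
                            and labels[r, c] != labels[r2, c2]):
                        lo, hi = sorted((labels[r, c], labels[r2, c2]))
                        labels[labels == hi] = lo
                        changed = True
        if not changed:
            break
    return labels

def grow_regions(mean, homogeneous, labels, tol=20.0):
    """Append adjacent gradient (non-homogeneous) icons to the neighboring
    region whose intensity they match best, refining the region boundary."""
    rows, cols = mean.shape
    for r in range(rows):
        for c in range(cols):
            if homogeneous[r, c]:
                continue
            candidates = []
            for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                r2, c2 = r + dr, c + dc
                if 0 <= r2 < rows and 0 <= c2 < cols and homogeneous[r2, c2]:
                    candidates.append((abs(mean[r, c] - mean[r2, c2]), labels[r2, c2]))
            if candidates and min(candidates)[0] < tol:
                labels[r, c] = min(candidates)[1]
    return labels

if __name__ == "__main__":
    mean = np.array([[10., 10., 10., 80.],
                     [10., 10., 25., 80.],
                     [10., 10., 25., 80.],
                     [10., 10., 10., 80.]])
    homogeneous = mean != 25.          # the 25-valued icons carry an intensity gradient
    print(grow_regions(mean, homogeneous, flat_link(mean, homogeneous)))
```

After growth, each region's area, average intensity, and constituent icon list can be read directly off the label map, which matches the region description given in the claim.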
10. A method for use with a missile guidance system to track targets located in an imaged scene, said method comprising the steps of:
processing image data derived from and representative of an imaged scene to form groups of orthogonal icons, having homogeneous intensity regions, by means of a relaxation-based algorithm to generate a set of regions having a block resolution boundary and that are comprised of homogeneous intensity icons described by their area, comprising the number of constituent icons having homogeneous intensity, the intensity comprising the average intensity of the constituent homogeneous intensity icons, and a list of the constituent homogeneous intensity icons;
appending adjacent orthogonal icons having an intensity gradient thereacross to provide a feature-resolution boundary;
processing the orthogonal icons to form an attributed sensed graph from region and ribbon objects comprising the orthogonal icons and their relationships, and whereupon the region and ribbon objects are placed at graph nodes, one object per node, along with their descriptive attributes, the relationships between each pair of objects are placed at the graph links along with their attributes, whereupon, a fully connected attributed graph is formulated which symbolically represents the image scene;
storing predetermined reference graphs representative of identifiable targets of interest that are expected to be present in the data comprising the image scene; and
comparing predetermined reference graphs to the attributed sensed graphs to produce an object recognition decision, and providing an output signal that is determinative of a target aimpoint, which output signal is coupled as an input to the missile guidance system to provide a guidance signal that is adapted to steer a missile toward the identified target.
11. The method of claim 10 wherein the step of processing image data comprises:
forming regions by (1) traversing the gradient boundaries of each region to form a gradient chain around each region, (2) analyzing the gradient chains for each region for linear segments by means of pointers inserted into the chains at the beginning and end of each linear segment, and (3) analyzing the linear segments for the joining of segments related to two or more regions to form linear segments, which generates boundary descriptions for each of the regions formulated by the flat linking and region growth processing means and linear segments represented by length, orientation, and end point coordinates.
12. The method of claim 11 wherein the step of processing image data comprises:
forming ribbon objects from the linear features provided by the gradient boundary and linear feature extraction processing step and creating symbolic descriptions of these objects, by creating symbolic descriptions of the regions provided by the flat linking and region growing processing means to produce region objects that define nodes of an attributed sensed graph, and whose symbolic descriptions comprise the attributes;
whereby for region objects, the area, perimeter, and convex hull attributes are determined, and for the ribbon objects, a line table is searched looking for pairs of lines that: (1) differ in orientation by 180 degrees, (2) are in close proximity to each other, (3) are flanked by similar intensities, (4) do not enclose another line segment that is parallel to either line, and when a pair of lines fitting these constraints is found the ribbon attributes for them are computed, and wherein the attributes for ribbon objects include: (1) intensity of the ribbon, (2) polarity of the ribbon, meaning light on dark or dark on light, (3) the width of the ribbon, meaning the distance between the two lines, and (4) the orientation of the ribbon.
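For the region attributes named in claim 12, the area, perimeter, and convex hull can all be derived from a region's constituent pixel (icon) list. The sketch below counts pixels for the area, counts exposed 4-neighbor edges for the perimeter, and builds the convex hull with the standard monotone-chain construction; the pixel-list input format is an assumption.

```python
# Sketch of the region attributes in claim 12, computed from an assumed list of
# (x, y) pixel coordinates belonging to the region.
def region_attributes(pixels):
    pixel_set = set(pixels)
    area = len(pixel_set)                      # area as pixel count
    perimeter = sum(1 for (x, y) in pixel_set  # exposed 4-neighbor edges
                    for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                    if nb not in pixel_set)

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    pts = sorted(pixel_set)
    hull = []
    for seq in (pts, list(reversed(pts))):     # lower hull, then upper hull
        start = len(hull)
        for p in seq:
            while len(hull) - start >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        hull.pop()                             # drop the duplicated endpoint
    return {"area": area, "perimeter": perimeter, "convex_hull": hull}

if __name__ == "__main__":
    square = [(x, y) for x in range(4) for y in range(4)]
    print(region_attributes(square))           # area 16, perimeter 16, 4 hull corners
```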
13. The method of claim 12 wherein the comparing step comprises comparing predetermined reference graphs to the attributed sensed graphs based on a value that is a function of the difference between the degree of similarity between the reference and sensed graphs and a predetermined threshold.
14. The method of claim 13 wherein the comparing step utilizes a heuristically directed depth-first search technique to evaluate feasible matches between nodes and arcs of the attributed reference and sensed graphs.
15. The method of claim 14 wherein the comparing step determines feasibility by the degree of match between the node and arc attributes of each of the graphs, and a heuristic procedure is included to ignore paths that cannot produce a degree of similarity larger than the predetermined threshold.
16. The method of claim 15 wherein the paths which lead to ambiguous solutions, comprising solutions that match a single object in one graph to multiple objects in the other, are ignored, and wherein paths that do not preserve the spatial relationships between the objects as they appeared in the image scene are ignored.
US07/633,833 1990-12-26 1990-12-26 Pattern recognition apparatus utilizing area linking and region growth techniques Expired - Lifetime US5093869A (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US07/633,833 US5093869A (en) 1990-12-26 1990-12-26 Pattern recognition apparatus utilizing area linking and region growth techniques
CA002055714A CA2055714C (en) 1990-12-26 1991-11-15 Scene recognition system and method employing low and high level feature processing
IL10010491A IL100104A (en) 1990-12-26 1991-11-20 Scene recognition system and method employing low and high level feature processing
NO91914813A NO914813L (en) 1990-12-26 1991-12-06 SCENE RECOGNITION SYSTEM AND METHOD USING LOW-LEVEL AND HIGH-LEVEL FEATURE PROCESSING
EP19910121973 EP0492512A3 (en) 1990-12-26 1991-12-20 Scene recognition system and method employing low and high level feature processing
AU90016/91A AU644923B2 (en) 1990-12-26 1991-12-23 Scene recognition system and method employing low and high level feature processing
KR1019910024287A KR940006841B1 (en) 1990-12-26 1991-12-24 Scene recognition system and method employing low and high level feature processing
JP3345159A JP2518578B2 (en) 1990-12-26 1991-12-26 Scene recognition device and scene recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US07/633,833 US5093869A (en) 1990-12-26 1990-12-26 Pattern recognition apparatus utilizing area linking and region growth techniques

Publications (1)

Publication Number Publication Date
US5093869A true US5093869A (en) 1992-03-03

Family

ID=24541303

Family Applications (1)

Application Number Title Priority Date Filing Date
US07/633,833 Expired - Lifetime US5093869A (en) 1990-12-26 1990-12-26 Pattern recognition apparatus utilizing area linking and region growth techniques

Country Status (8)

Country Link
US (1) US5093869A (en)
EP (1) EP0492512A3 (en)
JP (1) JP2518578B2 (en)
KR (1) KR940006841B1 (en)
AU (1) AU644923B2 (en)
CA (1) CA2055714C (en)
IL (1) IL100104A (en)
NO (1) NO914813L (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2384095B (en) * 2001-12-10 2004-04-28 Cybula Ltd Image recognition
RU2507538C2 (en) * 2009-10-19 2014-02-20 Алексей Александрович Галицын Method for group identification of objects ("friendly-foreign") and target designation based on real-time wireless positioning and intelligent radar
CN111522967B (en) * 2020-04-27 2023-09-15 北京百度网讯科技有限公司 Knowledge graph construction method, device, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2210487B (en) * 1987-09-11 1991-07-10 Gen Electric Co Plc Object recognition
GB8905926D0 (en) * 1989-03-15 1990-04-25 British Aerospace Target aim point location

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3725576A (en) * 1962-09-12 1973-04-03 Us Navy Television tracking system
US3955046A (en) * 1966-04-27 1976-05-04 E M I Limited Improvements relating to automatic target following apparatus
US3794272A (en) * 1967-02-13 1974-02-26 Us Navy Electro-optical guidance system
US4115803A (en) * 1975-05-23 1978-09-19 Bausch & Lomb Incorporated Image analysis measurement apparatus and methods
US4047154A (en) * 1976-09-10 1977-09-06 Rockwell International Corporation Operator interactive pattern processing system
US4183013A (en) * 1976-11-29 1980-01-08 Coulter Electronics, Inc. System for extracting shape features from an image
US4267562A (en) * 1977-10-18 1981-05-12 The United States Of America As Represented By The Secretary Of The Army Method of autonomous target acquisition
US4783829A (en) * 1983-02-23 1988-11-08 Hitachi, Ltd. Pattern recognition apparatus
US4876729A (en) * 1984-02-21 1989-10-24 Kabushiki Kaisha Komatsu Seisakusho Method of identifying objects
US4971266A (en) * 1988-07-14 1990-11-20 Messerschmitt-Boelkow-Blohm Gmbh Guiding method and on-board guidance system for a flying body

Cited By (87)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5237624A (en) * 1990-08-18 1993-08-17 Fujitsu Limited Reproduction of image pattern data
US5172423A (en) * 1990-09-14 1992-12-15 Crosfield Electronics Ltd. Methods and apparatus for defining contours in colored images
US5265173A (en) * 1991-03-20 1993-11-23 Hughes Aircraft Company Rectilinear object image matcher
US5268967A (en) * 1992-06-29 1993-12-07 Eastman Kodak Company Method for automatic foreground and background detection in digital radiographic images
EP0578508A2 (en) * 1992-07-10 1994-01-12 Sony Corporation Video camera with colour-based target tracking system
EP0578508A3 (en) * 1992-07-10 1995-01-04 Sony Corp Video camera with colour-based target tracking system.
US5430809A (en) * 1992-07-10 1995-07-04 Sony Corporation Human face tracking system
US5666441A (en) * 1994-03-17 1997-09-09 Texas Instruments Incorporated Computer vision system to detect 3-D rectangular objects
US5671294A (en) * 1994-09-15 1997-09-23 The United States Of America As Represented By The Secretary Of The Navy System and method for incorporating segmentation boundaries into the calculation of fractal dimension features for texture discrimination
US5646691A (en) * 1995-01-24 1997-07-08 Nec Corporation System and method for inter-frame prediction of picture by vector-interpolatory motion-compensation based on motion vectors determined at representative points correctable in position for adaptation to image contours
US5706361A (en) * 1995-01-26 1998-01-06 Autodesk, Inc. Video seed fill over time
WO1997006631A3 (en) * 1995-08-04 1997-07-24 Ehud Spiegel Apparatus and method for object tracking
WO1997006631A2 (en) * 1995-08-04 1997-02-20 Ehud Spiegel Apparatus and method for object tracking
CN1080021C (en) * 1995-08-16 2002-02-27 裵莲秀 Magnetic circuits in the rotating system for generation both the mechanical power and the electric power
US5894525A (en) * 1995-12-06 1999-04-13 Ncr Corporation Method and system for simultaneously recognizing contextually related input fields for a mutually consistent interpretation
US6084989A (en) * 1996-11-15 2000-07-04 Lockheed Martin Corporation System and method for automatically determining the position of landmarks in digitized images derived from a satellite-based imaging system
US5978504A (en) * 1997-02-19 1999-11-02 Carnegie Mellon University Fast planar segmentation of range data for mobile robots
US6714676B2 (en) * 1997-09-04 2004-03-30 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US6144379A (en) * 1997-11-20 2000-11-07 International Business Machines Corporation Computer controlled user interactive display system for presenting graphs with interactive icons for accessing related graphs
US7239740B1 (en) * 1998-04-07 2007-07-03 Omron Corporation Image processing apparatus and method, medium storing program for image processing, and inspection apparatus
US6868181B1 (en) * 1998-07-08 2005-03-15 Siemens Aktiengesellschaft Method and device for determining a similarity of measure between a first structure and at least one predetermined second structure
US6985260B1 (en) * 1999-09-27 2006-01-10 Canon Kabushiki Kaisha Apparatus and method for drawing a gradient fill object
US20050237574A1 (en) * 1999-09-27 2005-10-27 Canon Kabushiki Kaisha Image processing apparatus and method
US20040025025A1 (en) * 1999-10-19 2004-02-05 Ramarathnam Venkatesan System and method for hashing digital images
US7421128B2 (en) 1999-10-19 2008-09-02 Microsoft Corporation System and method for hashing digital images
WO2001063382A2 (en) * 2000-02-25 2001-08-30 Synquiry Technologies, Ltd. Conceptual factoring and unification of graphs representing semantic models
WO2001063382A3 (en) * 2000-02-25 2002-01-10 Synquiry Technologies Ltd Conceptual factoring and unification of graphs representing semantic models
US7617398B2 (en) 2001-04-24 2009-11-10 Microsoft Corporation Derivation and quantization of robust non-local characteristics for blind watermarking
US20050022004A1 (en) * 2001-04-24 2005-01-27 Microsoft Corporation Robust recognizer of perceptually similar content
US20050071377A1 (en) * 2001-04-24 2005-03-31 Microsoft Corporation Digital signal watermarker
US20050084103A1 (en) * 2001-04-24 2005-04-21 Microsoft Corporation Recognizer of content of digital signals
US20050108543A1 (en) * 2001-04-24 2005-05-19 Microsoft Corporation Derivation and quantization of robust non-local characteristics for blind watermarking
US20050105733A1 (en) * 2001-04-24 2005-05-19 Microsoft Corporation Derivation and quantization of robust non-local characteristics for blind watermarking
US7634660B2 (en) 2001-04-24 2009-12-15 Microsoft Corporation Derivation and quantization of robust non-local characteristics for blind watermarking
US7657752B2 (en) 2001-04-24 2010-02-02 Microsoft Corporation Digital signal watermaker
US7568103B2 (en) 2001-04-24 2009-07-28 Microsoft Corporation Derivation and quantization of robust non-local characteristics for blind watermarking
US7318158B2 (en) 2001-04-24 2008-01-08 Microsoft Corporation Derivation and quantization of robust non-local characteristics for blind watermarking
US7707425B2 (en) 2001-04-24 2010-04-27 Microsoft Corporation Recognizer of content of digital signals
US7406195B2 (en) 2001-04-24 2008-07-29 Microsoft Corporation Robust recognizer of perceptually similar content
US20050273617A1 (en) * 2001-04-24 2005-12-08 Microsoft Corporation Robust recognizer of perceptually similar content
US20040268220A1 (en) * 2001-04-24 2004-12-30 Microsoft Corporation Recognizer of text-based work
US20060069919A1 (en) * 2001-04-24 2006-03-30 Microsoft Corporation Derivation and quantization of robust non-local characteristics for blind watermarking
US7356188B2 (en) 2001-04-24 2008-04-08 Microsoft Corporation Recognizer of text-based work
US7636849B2 (en) 2001-04-24 2009-12-22 Microsoft Corporation Derivation and quantization of robust non-local characteristics for blind watermarking
US7266244B2 (en) * 2001-04-24 2007-09-04 Microsoft Corporation Robust recognizer of perceptually similar content
US7318157B2 (en) 2001-04-24 2008-01-08 Microsoft Corporation Derivation and quantization of robust non-local characteristics for blind watermarking
WO2002099684A1 (en) * 2001-06-04 2002-12-12 Barra, Inc. Method and apparatus for creating consistent risk forecasts and for aggregating factor models
US20040236546A1 (en) * 2001-06-04 2004-11-25 Goldberg Lisa Robin Method and apparatus for creating consistent risk forecasts and for aggregating factor models
US7324978B2 (en) 2001-06-04 2008-01-29 Barra, Inc. Method and apparatus for creating consistent risk forecasts and for aggregating factor models
US7113637B2 (en) * 2001-08-24 2006-09-26 Industrial Technology Research Institute Apparatus and methods for pattern recognition based on transform aggregation
US20030044067A1 (en) * 2001-08-24 2003-03-06 Yea-Shuan Huang Apparatus and methods for pattern recognition based on transform aggregation
US7746379B2 (en) 2002-12-31 2010-06-29 Asset Intelligence, Llc Sensing cargo using an imaging device
US20040125217A1 (en) * 2002-12-31 2004-07-01 Jesson Joseph E. Sensing cargo using an imaging device
US20050147298A1 (en) * 2003-12-29 2005-07-07 Eastman Kodak Company Detection of sky in digital color images
US7336819B2 (en) * 2003-12-29 2008-02-26 Eastman Kodak Company Detection of sky in digital color images
US7831832B2 (en) 2004-01-06 2010-11-09 Microsoft Corporation Digital goods representation based upon matrix invariances
US20050149727A1 (en) * 2004-01-06 2005-07-07 Kozat S. S. Digital goods representation based upon matrix invariances
US7587064B2 (en) * 2004-02-03 2009-09-08 Hrl Laboratories, Llc Active learning system for object fingerprinting
US20050169529A1 (en) * 2004-02-03 2005-08-04 Yuri Owechko Active learning system for object fingerprinting
US20050199782A1 (en) * 2004-03-12 2005-09-15 Calver Andrew J. Cargo sensing system
US7421112B2 (en) * 2004-03-12 2008-09-02 General Electric Company Cargo sensing system
US8595276B2 (en) 2004-04-30 2013-11-26 Microsoft Corporation Randomized signal transforms and their applications
US7770014B2 (en) 2004-04-30 2010-08-03 Microsoft Corporation Randomized signal transforms and their applications
US20100228809A1 (en) * 2004-04-30 2010-09-09 Microsoft Corporation Randomized Signal Transforms and Their Applications
US20050257060A1 (en) * 2004-04-30 2005-11-17 Microsoft Corporation Randomized signal transforms and their applications
US20120321137A1 (en) * 2007-12-14 2012-12-20 Sri International Method for building and extracting entity networks from video
US8995717B2 (en) * 2007-12-14 2015-03-31 Sri International Method for building and extracting entity networks from video
US20090161968A1 (en) * 2007-12-24 2009-06-25 Microsoft Corporation Invariant visual scene and object recognition
US8406535B2 (en) 2007-12-24 2013-03-26 Microsoft Corporation Invariant visual scene and object recognition
US8036468B2 (en) 2007-12-24 2011-10-11 Microsoft Corporation Invariant visual scene and object recognition
US20110232719A1 (en) * 2010-02-17 2011-09-29 Freda Robert M Solar power system
US11651285B1 (en) 2010-04-18 2023-05-16 Aptima, Inc. Systems and methods to infer user behavior
US8942917B2 (en) 2011-02-14 2015-01-27 Microsoft Corporation Change invariant scene recognition by an agent
US9619561B2 (en) 2011-02-14 2017-04-11 Microsoft Technology Licensing, Llc Change invariant scene recognition by an agent
US11627048B2 (en) * 2012-02-20 2023-04-11 Aptima, Inc. Systems and methods for network pattern matching
US20180212834A1 (en) * 2012-02-20 2018-07-26 Aptima, Inc. Systems and methods for network pattern matching
US20140079320A1 (en) * 2012-09-17 2014-03-20 Gravity Jack, Inc. Feature Searching Along a Path of Increasing Similarity
US9076062B2 (en) * 2012-09-17 2015-07-07 Gravity Jack, Inc. Feature searching along a path of increasing similarity
US9406138B1 (en) 2013-09-17 2016-08-02 Bentley Systems, Incorporated Semi-automatic polyline extraction from point cloud
US20180204473A1 (en) * 2017-01-18 2018-07-19 Microsoft Technology Licensing, Llc Sharing signal segments of physical graph
US10606814B2 (en) 2017-01-18 2020-03-31 Microsoft Technology Licensing, Llc Computer-aided tracking of physical entities
US10635981B2 (en) 2017-01-18 2020-04-28 Microsoft Technology Licensing, Llc Automated movement orchestration
US10637814B2 (en) 2017-01-18 2020-04-28 Microsoft Technology Licensing, Llc Communication routing based on physical status
US10679669B2 (en) 2017-01-18 2020-06-09 Microsoft Technology Licensing, Llc Automatic narration of signal segment
US11094212B2 (en) * 2017-01-18 2021-08-17 Microsoft Technology Licensing, Llc Sharing signal segments of physical graph
US10482900B2 (en) 2017-01-18 2019-11-19 Microsoft Technology Licensing, Llc Organization of signal segments supporting sensed features
US10437884B2 (en) 2017-01-18 2019-10-08 Microsoft Technology Licensing, Llc Navigation of computer-navigable physical feature graph

Also Published As

Publication number Publication date
JP2518578B2 (en) 1996-07-24
CA2055714C (en) 1997-01-07
IL100104A (en) 1996-06-18
AU9001691A (en) 1992-07-09
IL100104A0 (en) 1992-08-18
CA2055714A1 (en) 1992-06-27
AU644923B2 (en) 1993-12-23
EP0492512A2 (en) 1992-07-01
JPH04306499A (en) 1992-10-29
KR940006841B1 (en) 1994-07-28
NO914813L (en) 1992-06-29
EP0492512A3 (en) 1993-12-08
KR920013189A (en) 1992-07-28
NO914813D0 (en) 1991-12-06

Similar Documents

Publication Publication Date Title
US5093869A (en) Pattern recognition apparatus utilizing area linking and region growth techniques
Jimenez et al. Classification of hyperdimensional data based on feature and decision fusion approaches using projection pursuit, majority voting, and neural networks
US4685143A (en) Method and apparatus for detecting edge spectral features
US11556745B2 (en) System and method for ordered representation and feature extraction for point clouds obtained by detection and ranging sensor
EP0503179B1 (en) Confirmed boundary pattern matching
Verly et al. Model-based automatic target recognition (ATR) system for forwardlooking groundbased and airborne imaging laser radars (LADAR)
CN110097498B (en) Multi-flight-zone image splicing and positioning method based on unmanned aerial vehicle flight path constraint
Paglieroni et al. The position-orientation masking approach to parametric search for template matching
Kim et al. Building detection in high resolution remotely sensed images based on automatic histogram-based fuzzy c-means algorithm
Murrieta-Cid et al. Landmark identification and tracking in natural environment
Zhang et al. Change detection between digital surface models from airborne laser scanning and dense image matching using convolutional neural networks
US7263208B1 (en) Automated threshold selection for a tractable alarm rate
CN116912763A (en) Multi-pedestrian re-recognition method integrating gait face modes
Verly et al. Model-based system for automatic target recognition from forward-looking laser-radar imagery
McWilliams et al. Performance analysis of a target detection system using infrared imagery
Lohmann An evidential reasoning approach to the classification of satellite images
Bhanu et al. Analysis of terrain using multispectral images
FUJIMURA et al. Hierarchical Algorithms for the Classification of Remotely Sensed Multi-Spectral Images
Qin et al. A Novel Approach to Object Detection in Remote-Sensing Images Based on YOLOv3
Xu Integrating Segmentation and association Relationship for image Recognition
Nichol et al. Image segmentation and matching using the binary object forest
Nordin et al. The Potential of Building Detection from SAR and LIDAR Using Deep Learning
Usilin et al. Using of Viola and Jones Method to Localize Objects in Multispectral Aerospace Images based on Multichannel Features
Churi et al. Methods for Predictive Performance Improvement of Deep Learning Systems for Aerial Building Roof Detection with Multispectral Images
JPH08235363A (en) Visual process for computer

Legal Events

Date Code Title Description
AS Assignment

Owner name: HUGHES AIRCRAFT COMPANY, LOS ANGELES, CA A CORP. O

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNORS:ALVES, JAMES F.;BURMAN, JERRY A.;GOR, VICTORIA;AND OTHERS;REEL/FRAME:005564/0971

Effective date: 19901212

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 12

REMI Maintenance fee reminder mailed
AS Assignment

Owner name: RAYTHEON COMPANY, MASSACHUSETTS

Free format text: MERGER;ASSIGNOR:HE HOLDINGS, INC.;REEL/FRAME:015596/0626

Effective date: 19971217

Owner name: RAYTHEON COMPANY, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HE HOLDINGS, INC.;REEL/FRAME:015596/0647

Effective date: 19971217

Owner name: HE HOLDINGS, INC., A DELAWARE CORP., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:HUGHES AIRCRAFT COMPANY A CORPORATION OF THE STATE OF DELAWARE;REEL/FRAME:015596/0658

Effective date: 19951208

Owner name: HE HOLDINGS, INC., A DELAWARE CORP., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:HUGHES AIRCRAFT COMPANY, A CORPORATION OF DELAWARE;REEL/FRAME:015596/0755

Effective date: 19951208