CA2414849A1 - Image segmentation system and method - Google Patents


Info

Publication number
CA2414849A1
CA2414849A1
Authority
CA
Canada
Prior art keywords
pixel
image
pixels
occupant
image segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002414849A
Other languages
French (fr)
Inventor
Michael Edward Farmer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eaton Corp
Original Assignee
Eaton Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eaton Corp filed Critical Eaton Corp
Publication of CA2414849A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/01Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R21/015Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting the presence or position of passengers, passenger seats or child seats, and the related safety parameters therefor, e.g. speed or timing of airbag inflation in relation to occupant position or seat belt use
    • B60R21/01512Passenger detection systems
    • B60R21/01542Passenger detection systems detecting passenger motion
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/01Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R21/015Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting the presence or position of passengers, passenger seats or child seats, and the related safety parameters therefor, e.g. speed or timing of airbag inflation in relation to occupant position or seat belt use
    • B60R21/01512Passenger detection systems
    • B60R21/0153Passenger detection systems using field detection presence sensors
    • B60R21/01538Passenger detection systems using field detection presence sensors for image processing, e.g. cameras or sensor arrays
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/01Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R21/015Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting the presence or position of passengers, passenger seats or child seats, and the related safety parameters therefor, e.g. speed or timing of airbag inflation in relation to occupant position or seat belt use
    • B60R21/01512Passenger detection systems
    • B60R21/01552Passenger detection systems detecting position of specific human body parts, e.g. face, eyes or hands
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/01Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R21/015Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting the presence or position of passengers, passenger seats or child seats, and the related safety parameters therefor, e.g. speed or timing of airbag inflation in relation to occupant position or seat belt use
    • B60R21/01556Child-seat detection systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/24765Rule-based classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R2021/003Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks characterised by occupant or pedestian
    • B60R2021/0039Body parts of the occupant or pedestrian affected by the accident
    • B60R2021/0044Chest
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30268Vehicle interior

Abstract

The present invention relates in general to systems used to process images. In particular, the present invention is an image segmentation system and method used to isolate the segmented image of a target person, animal, or object from an ambient image which includes the target person, animal, or object, in addition to the area surrounding the target. The invention supports the ability of an airbag deployment system to distinguish between different types of occupants by providing such deployment systems with a segmented image of the occupant. The invention is particularly useful at night or in other environments involving inadequate light or undesirable shadows. The invention can use histograms and cumulative distribution functions to perform image thresholding. Morphological erosion and dilation can be used to eliminate optical "noise" from the image. Gap filling is performed on the basis of the "momentum" and "gravity" of regions of similar pixel values. An upper ellipse can be generated to represent the upper torso of the occupant. The invention is highly flexible, and can be modified in accordance with the desired use.

Description

IMAGE SEGMENTATION SYSTEM AND METHOD
RELATED APPLICATIONS
[0001] This Continuation-In-Part application claims the benefit of the following U.S.
utility applications: "A RULES-BASED OCCUPANT CLASSIFICATION SYSTEM
FOR AIRBAG DEPLOYMENT," Serial Number 09/870,151, filed on May 30, 2001;
"IMAGE PROCESSING SYSTEM FOR DYNAMIC SUPPRESSION OF AIRBAGS
USING MULTIPLE MODEL LIKELIHOODS TO INFER THREE DIMENSIONAL
INFORMATION," Serial Number 09/901,805, filed on July 10, 2001; and "IMAGE
PROCESSING SYSTEM FOR ESTIMATING THE ENERGY TRANSFER OF AN
OCCUPANT INTO AN AIRBAG," Serial Number / , filed on November 5, 2001, the contents of which are hereby incorporated by reference in their entirety.
BACKGROUND OF THE INVENTION
[0002] The present invention relates in general to systems and techniques used to isolate a segmented image of a person or object from an ambient image of the area surrounding and including the person or object. In particular, the present invention relates to isolating a segmented image of an occupant from the ambient image of an occupant, where the ambient image of the occupant includes both the occupant and the area surrounding the occupant, so that the deployment of an airbag can be prevented or modified due to the characteristics of the segmented image of the occupant.
[0003] There are many situations in which it is desirable to isolate the segmented image of a "targeted" person or object from an ambient image which includes images ancillary to the "targeted" person or object. Airbag deployment systems can be one application of such a technology. Airbag deployment systems may need to make various deployment decisions that relate in one way or another to the image of an occupant. Airbag deployment systems can alter their behavior based on a wide variety of factors that ultimately relate in one way or another to the occupant. The type of occupant, the proximity of an occupant to the airbag, the velocity and acceleration of an occupant, the mass of the occupant, the amount of energy an airbag needs to absorb as a result of an impact between the airbag and the occupant, and other airbag deployment considerations may utilize characteristics of the occupant that can be obtained from a segmented image of the occupant.
[0004] There are significant obstacles in the existing art with regards to image segmentation techniques. First, prior art techniques do not function well in darker environments, such as when a car is being driven at night. Certain parts of a human occupant image can be rendered virtually invisible by darkness, depriving an airbag deployment system of accurate information. Second, such techniques do not compensate for varying degrees of darkness resulting from the fact that different parts of the occupant will be at different angles and distances from a sensor or light source.
It is beneficial for different regions in the occupant image to be treated differently.
Third, existing techniques do not maximize the intelligence that can be incorporated into the segmentation process when there is a specific type of segmented image.
There are anatomical aspects of a human occupant that can be used to refine the raw segmented image of an occupant that is captured by a camera or other sensor.
For example, the fact that all parts of a human being are connected, and the inherent nature of those connections, can enhance the ability of a segmentation process to determine which image pixels relate to the occupant, and which image pixels relate to the area surrounding the occupant. The knowledge that the occupant is likely in some sort of seated position can also be used to apply more intelligence to a raw ambient image.
SUMMARY OF THE INVENTION
[0005] This invention relates to an image segmentation system or method that can be used to generate a segmented image of an occupant from an ambient image which includes the occupant, as well as the environment surrounding the occupant. An airbag deployment system can then utilize the segmented image of the occupant to determine whether or not an airbag should be deployed, and any parameters surrounding that deployment.
[0006] The invention consists of two primary components, an image thresholding subsystem and a gap filling subsystem. The image thresholding subsystem can transform an ambient image into a binary image. Each pixel relating to the occupant can be set at one binary value and every pixel not clearly relating to the occupant can be set at a different binary value. An image threshold using a characteristic such as luminosity can be used to determine the binary value at which a particular pixel should be set.
Different regions of the ambient image can utilize different image thresholds to account for differences in lighting caused by shadows, differing distances to light sources, or other causes. The image threshold can incorporate probability analysis into the segmentation process using a cumulative distribution function. As a general matter, the image thresholding subsystem can take a "cynical" view with respect to whether a particular pixel represents the image of the occupant. The image thresholding subsystem can attempt to determine which pixels clearly represent the image of the occupant, and can treat "gray area" pixels as representing aspects of the ambient image that are not the occupant. It is not a hindrance to the invention for the image threshold to classify the majority of the ambient image as not relating to the occupant image.
[0007] The gap filling subsystem can incorporate into the characteristic of a particular pixel the characteristics of pixels in the vicinity of that particular pixel being set by the subsystem. The gap filling subsystem can fill in gaps left by the overly conservative approach of the image thresholding subsystem. The gap filling subsystem can apply morphological erosion and dilation techniques to remove spurious regions of pixels, e.g. pixel values that are out of place in the context of neighboring pixels.
The subsystem can apply a momentum-based heuristic, which incorporates into the value of a particular pixel the "momentum" of nearby pixel value determinations.
The subsystem can also apply a gravity-based heuristic, which incorporates gravitational concepts of mass and distance into a particular pixel value.
[0008] The system can output the occupant image in various different forms, including generating an upper ellipse to represent the upper torso of the occupant.
[0009] Various aspects of this invention will become apparent to those skilled in the art from the following detailed description of the preferred embodiment, when read in light of the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] Fig. 1 shows a partial view of the surrounding environment for several potential embodiments of the invention.
[0011] Fig. 2 shows a high-level process flow of the image processing system for several potential embodiments of the invention.
[0012] Fig. 3 is a histogram tabulating the number of pixels that possess a particular luminosity value.
[0013] Fig. 4 is a diagram showing how multiple different image thresholds can be applied to a single ambient image.
[0014] Fig. 5 is a graph of a cumulative distribution function that can be utilized by the image thresholding subsystem.
[0015] Fig. 6a shows a vertical grouping of pixels subject to morphological erosion and dilation.
[0016] Fig. 6b shows a horizontal grouping of pixels subject to morphological erosion and dilation.
[0017] Fig. 7 is a probability graph incorporating the concept of a momentum-based gap filling approach.
[0018] Fig. 8 is a diagram of pixel regions that can be processed with a gravity-based heuristic.
[0019] Fig. 9 is a diagram illustrating how an upper ellipse, a lower ellipse, and a centroid can be used to represent a segmented image of an occupant.
[0020] Fig. 10 is a diagram of an upper ellipse representing an occupant, and some of the important characteristics of the upper ellipse.
[0021] Fig. 11 is a flowchart of the image thresholding subsystem and the gap filling subsystem, including some of the various modules and processes that can be incorporated into those subsystems.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
A. PARTIAL VIEW OF SURROUNDING ENVIRONMENT
[0022] Referring now to the drawings, illustrated in Fig. 1 is a partial view of the surrounding environment for potentially several different embodiments of an image segmentation system 16. If an occupant 18 is present, the occupant 18 can sit on a seat 20. In some embodiments, a camera or any other sensor capable of rapidly capturing images (collectively "camera" 22) can be attached in a roof liner 24, above the occupant 18 and closer to a front windshield 26 than the occupant 18. The camera 22 can be placed at a slightly downward angle towards the occupant 18 in order to capture changes in the angle of the occupant's 18 upper torso resulting from forward or backward movement in the seat 20. There are many potential locations for a camera 22 that are well known in the art.
[0023] In some embodiments, the camera 22 can incorporate or include an infrared light source operating on direct current to provide constant illumination in dark settings. The system 16 is designed for use in dark conditions such as night time, fog, heavy rain, significant clouds, solar eclipses, and any other environment darker than typical daylight conditions. The system 16 can be used in brighter light conditions as well. Use of infrared lighting can hide the use of the light source from the occupant 18. Alternative embodiments may utilize one or more of the following: light sources separate from the camera; light sources emitting light other than infrared light; and light emitted only in a periodic manner utilizing alternating current. The system 16 can incorporate a wide range of other lighting and camera 22 configurations.
[0024] A computer system 30 for applying the image segmentation technique may be located virtually anywhere in or on a vehicle. Preferably, the computer system 30 is located near the camera 22 to avoid sending camera images through long wires.
The computer system 30 can be any type of computer or device capable of incorporating the processing performed by an image thresholding subsystem and a gap filling subsystem. The processing that can be performed by such subsystems is disclosed in Fig. 11, and is described in greater detail below. An airbag controller 32 is shown in an instrument panel 34, although the present invention could still function even if the airbag controller 32 were located in a different environment. Similarly, an airbag deployment system 36 is preferably located in the instrument panel 34 in front of the occupant 18 and the seat 20, although alternative locations can be used by the system 16.
B. HIGH LEVEL PROCESS FLOW
[0025] Fig. 2 discloses a high-level flowchart illustrating the use of the image segmentation system 16. An ambient image 38 of a seat area 21 including both the occupant 18 and the seat area 21 can be captured by the camera 22. In the figure, the seat area 21 includes the entire occupant 18, although under many different circumstances and embodiments, only a portion of the occupant's 18 image will be captured, particularly if the camera 22 is positioned in a location where the lower extremities may not be viewable. The ambient image 38 can be sent to the computer 30. The computer 30 can isolate a segmented image 31 of the occupant 18 from the ambient image 38. The process by which the computer 30 performs image segmentation is described in greater detail below, and is disclosed in Fig. 11.
The segmented image 31 can then be analyzed to determine the appropriate airbag deployment decision. For example, the segmented image 31 can be used to determine if the occupant 18 will be too close to the deploying airbag 36 at the time of deployment. The analysis and characteristics of the segmented image 31 can be sent to the airbag controller 32, allowing the airbag deployment system 36 to make the appropriate deployment decision with the information obtained relating to the occupant 18.
C. HISTOGRAM OF PIXEL CHARACTERISTICS
[0026] Fig. 3 discloses an example of a type of histogram 39 that can be used by the system 16. Any image captured by the camera 22, including the segmented image and the ambient image 38, can be divided into one or more pixels 40. As a general matter, the greater the number of pixels 40, the better the resolution of the image 38.
In a preferred embodiment, the width of the ambient image 38 should be at least approximately 400 pixels across and the ambient image 38 should be at least approximately 300 pixels in height. If there are too few pixels, it can be difficult to isolate the segmented image 31 from the ambient image 38. However, the number of pixels is dependent upon the type and model of camera 22, and cameras 22 generally become more expensive as the number of pixels increases. A standard video camera can capture an image roughly 400 pixels across and 300 pixels in height. Such an embodiment captures a sufficiently detailed ambient image 38 while remaining relatively inexpensive because a standard non-customized camera 22 can be used.
Thus, a preferred embodiment will use approximately 120,000 (400 X 300) total pixels 40. The number of pixels 40 can depend on the camera 22 used by the system 16. Each pixel 40 can possess one or more different pixel characteristics or attributes (collectively "characteristics") 42 used by the system 16 to isolate the segmented image 31 from the ambient image 38. Pixels 40 can have one or more pixel characteristics 42, with each characteristic represented by one or more pixel values.

One example of a pixel characteristic 42 is a luminosity measurement ("luminosity").
The pixel characteristics 42 of luminosity can be measured, stored, and manipulated as a pixel value relating to the particular pixel. In a preferred embodiment, the luminosity value 42 is the initial pixel value for the pixels 40 in the ambient image 38, and luminosity can be represented in a numerical pixel value between 0 (darkest possible luminosity) and 255 (brightest possible luminosity). Alternative pixel characteristics can include color, heat, a weighted combination of two or more characteristics, or any other characteristic that could potentially be used to distinguish the segmented image 31 from the ambient image 38.
[0027] In some embodiments of the system 16, luminosity 42 can be represented by a numerical pixel value between 0 and 255, as disclosed in the figure. On such a scale, a luminosity of 0 can indicate the darkest possible luminosity (black), and a luminosity of 255 can indicate the brightest possible luminosity (white).
Alternative embodiments can use histograms 39 of differing numerical scales, and even characteristics 42 other than luminosity.
[0028] The histogram 39 in the figure records the number of pixels 40 with a particular individual or combination of pixel characteristics 42 (collectively "characteristic"). The histogram 39 records the aggregate number of pixels 40 that possess a particular pixel value. The data in the histogram 39 can be used by the system 16 to divide some or preferably all the pixels 40 in the ambient image 38 into two or more pixel categories. A first category of pixels 40 can include those pixels that clearly represent the segmented image 31 of the occupant 18. Pixels 40 in that first category can be referred to as occupant pixels 40. A second category of pixels 40 can include those pixels 40 which either clearly represent images not relating to the occupant 18, or images that may or may not represent the occupant ("gray area images" or "ambient pixels"). In a preferred embodiment, all pixels 40 are categorized into one of two categories, and all occupant pixels 40 are set to a binary value of "1" while all ambient pixels 40 are set to a binary value of "0". The system 16 can use the histogram 39 and an image threshold to categorize the pixels 40 in the ambient image 38. The process of using the image threshold to set pixel values to a binary value is described in greater detail below.

D. IMAGE THRESHOLD
[0029] Fig. 4 is a diagram illustrating a preferred embodiment of a system 16 utilizing three image thresholds 44. Fig. 4 is a diagram of the ambient image 38 including the segmented image 31 of the occupant 18, at the time in which image thresholding is to be performed by the system 16. As described above, the ambient image 38 is composed of pixels 40. Each pixel 40 has a pixel characteristic 42 such as luminosity.
Pixels 40 can have different values of a particular characteristic 42. For example, some pixels 40 may have very high luminosity values such as 255 while other pixels 40 may have very low luminosity values such as 0. In terms of the characteristic of luminosity, higher pixel values tend to represent the occupant 18 and lower pixel values tend to represent images of the surrounding area. The image thresholding process seeks out the top N% of pixels 40 on the basis of pixel characteristics such as luminosity and identifies those pixels as representing the segmented image 31 of the occupant 18. The remaining pixels 40 cannot confidently be said to represent the segmented image 31 of the occupant 18, and thus can be categorized by the system 16 as pixels not relating to the occupant 18. The histogram 39 discussed above facilitates the ability of the system 16 to calculate the image threshold (with respect to a particular pixel characteristic) required to "pick off" the top N% of the pixels 40.
[0030] Alternative embodiments can use as few as one image threshold 44 or as many image thresholds 44 as are desired. As discussed above, the system 16 can use an image threshold 44 in conjunction with the histogram 39 to divide pixels 40 into two or more various pixel categories.
[0031] Only pixels 40 in the top N% should be initially identified as representing the occupant 18. The histogram 39 facilitates the ability of the system 16 to rank pixel characteristics 42 in order to identify those pixels 40 most likely to represent the segmented image 31 of the occupant 18. For example, pixels 40 with a luminosity (pixel value) of Y or greater can be categorized as representing the segmented image 31. All other pixels 40 can be categorized as representing ancillary images (e.g.
"ambient pixels") in the ambient image 38. In a preferred embodiment, "gray area"
pixels should initially be placed in the ambient pixels 40 category.
[0032] N should generally be set no higher than 50, and no lower than 10. The system 16 can be designed to take an initially conservative approach with respect to which pixels 40 represent the occupant's 18 segmented image 31 because subsequent processing by the system 16 is designed to fill in the gaps left by the conservative approach. In a preferred embodiment, different areas in the ambient image 38 have one of three different image thresholds 44 applied by the system 16 depending on the location of the particular pixel 40 with respect to the ambient image 38. The uppermost locations in the ambient image 38 are subject to the best lighting, and thus the system 16 can incorporate a top-tier image threshold 46 designed to select only those pixels 40 in the top 10% (N=10). Middle locations in the ambient image 38 are generally subject to less illuminating lighting, more significant shadows, and poorer resolution than those pixels 40 in the upper part of the ambient image 38.
Thus, the system 16 can incorporate a middle-tier image threshold 48 designed to select only those pixels 40 in the top 20% (N=20). Pixels 40 closest to the bottom of the ambient image 38 can incorporate a bottom-tier image threshold 50 using the top 30% (N=30) of the pixels 40 as pixels 40 representing the segmented image 31 of the occupant 18. The high percentage is desirable because the lighting at the lower part of the ambient image 38 represents pixel 40 locations in the seat area 21 that are poorly lit with respect to other pixel 40 locations. Alternative embodiments can utilize a different number of image thresholds 44, and a wide variety of different N
values can be used by those tiers.
[0033] The image threshold(s) 44 used by the system 16 should incorporate the concept of selecting a top percentile of pixels 40 as representing the segmented image 31 on the basis of pixel characteristics 42, such as luminosity, and the corresponding pixel values relating to that characteristic. A pixel characteristic 42 such as luminosity can be represented by a value Y, such as 255, and should not be a percentile or probability such as 10% or N%. A cumulative distribution function can be used to convert a pixel value or characteristic such as Y into a probability or percentile such as N, or a probability or percentile such as N into a measured characteristic 42 value such as Y.
[0034] In a preferred embodiment, the pixel values for the top N% of pixels (those pixels with characteristics greater than or equal to Y) can be set to a binary value such as "1." In a preferred embodiment, pixels 40 representing the segmented image 31 of the occupant 18 can be referred to as the binary image of the occupant 18, or as occupant pixels 40. Other pixel values can be set to a different binary value, such as "0" and can be classified as ambient pixels 40. Alternative embodiments can use a greater number of categories with different types and varieties of pixel values.
E. CUMULATIVE DISTRIBUTION FUNCTION
[0035] The ability to select a top N% of pixels 40 and select those pixels 40 as representing the occupant 18 (e.g. occupant pixels 40) can be done using a cumulative distribution function 52 based on the numerical measured value (Y) associated with a pixel characteristic 42. Fig. 5 discloses a cumulative distribution curve 52 that can be derived from the histogram disclosed in Fig. 3. The vertical axis can represent a cumulative probability 54 that the system 16 has not mistakenly classified any pixels 40 as representing the occupant 18. The cumulative probability 54 can be the value of 1 - N, with N expressed as a fraction. For example, selecting the top 10% of pixels will result in a probability of 0.9, with 0.9 representing the probability that an ambient pixel has not been mistakenly identified as a segmented pixel. Absolute certainty (a probability of 1.0) can only be achieved by assuming all 120,000 pixels are ambient pixels 40, e.g.
that no pixel 40 represents the segmented image 31 of the occupant 18. Such certainty is not helpful to the system 16, because it does not provide a starting point at which to build out the shape of the occupant 18. Conversely, a low standard of accuracy, such as a value of 0 or a value close to 0, does not exclude enough pixels 40 from the category of pixels 40 representing the segmented image 31.
[0036] Probabilities such as 0.90, 0.80, or 0.70 are preferable because they generally indicate a high probability of accuracy while at the same time providing a substantial base of occupant 18 pixels 40 upon which to expand using the gap processing subsystem, described in greater detail below. In a preferred embodiment, multi-image threshold 44 systems 16 will have as many cumulative distribution functions 52 as there are image thresholds 44.
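By way of illustration only, the following Python sketch shows one way the histogram, cumulative distribution function, and tiered image thresholds described above could be combined. It assumes an 8-bit grayscale ambient image stored as a NumPy array, splits the image into three equal horizontal bands, and uses the 10%, 20%, and 30% figures from the preferred embodiment; the function names and the equal band split are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def threshold_top_fraction(region, top_fraction):
    """Return a binary mask marking the brightest `top_fraction` of pixels.

    Builds a 256-bin luminosity histogram, converts it to a cumulative
    distribution, and finds the smallest luminosity Y whose cumulative
    probability reaches (1 - top_fraction).  Pixels with luminosity >= Y
    become occupant pixels (1); all other pixels become ambient pixels (0).
    """
    hist, _ = np.histogram(region, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / region.size            # cumulative probability per luminosity value
    y = int(np.searchsorted(cdf, 1.0 - top_fraction))
    return (region >= y).astype(np.uint8)

def threshold_ambient_image(image):
    """Apply three-tier thresholding: the top 10% of pixels in the upper band,
    the top 20% in the middle band, and the top 30% in the bottom band."""
    bands = np.array_split(np.arange(image.shape[0]), 3)
    binary = np.zeros_like(image, dtype=np.uint8)
    for rows, fraction in zip(bands, (0.10, 0.20, 0.30)):
        binary[rows] = threshold_top_fraction(image[rows], fraction)
    return binary
```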
F. MORPHOLOGICAL PROCESSING
[0037] The system 16 can incorporate a morphological heuristic into the gap processing subsystem, described in greater detail below. The gap processing heuristic applied by that subsystem can include the performing of a morphological heuristic. As described both above and below, the system 16 uses pixel values relating to a pixel to perform analysis on a pixel 40 as to whether the pixel is an occupant pixel 40, an ambient pixel 40, or some other category of pixel 40 (some embodiments may have three or more different pixel categories).
[0038] Morphological processing can incorporate into a particular pixel value the pixel values of other pixels 40 in the vicinity ("vicinity pixels" or "vicinity pixel values") of the particular pixel being analyzed or set. Morphological erosion can remove spurious objects, e.g., untrue indications that a pixel 40 represents the segmented image 31. Morphological erosion can be performed in more than one direction. For example, erosion can be performed in a vertical direction, in a horizontal direction, or even in a diagonal direction. Morphological dilation can be performed to "grow out" the segmented image 31 reduced by the "conservative"
image thresholding process. Morphological dilation can also be performed in many different directions. Illustrated in Figures 6a and 6b are examples of morphological processing performed in the vertical and horizontal directions.
[0039] In a preferred embodiment, the order of morphological processing is as follows: vertical morphological erosion; vertical morphological dilation;
horizontal morphological erosion; and horizontal morphological dilation. Alternative embodiments may incorporate only some of these processes, and such systems 16 may perform the processing in a different order.
[0040] Fig. 6a is a diagram illustrating a vertical group 58 of pixels 40 used to perform vertical morphological processing. Vertical morphological processing can group pixels 40 together in the vertical group 58. In a preferred embodiment, the vertical group 58 includes 12 vertical rows of pixels 40, with each row including 2 columns of pixels. Alternative embodiments may use vertical groups 58 with different numbers of rows and columns. Each pixel 40 in the group 58 has at least one pixel value 56, such as a binary number in a preferred embodiment.
[0041] Vertical morphological erosion takes a "pessimistic" or "conservative"
view of whether or not a particular pixel 40 represents the segmented image 31, e.g.
whether a pixel 40 is an occupant pixel 40. If even one pixel 40 in the vertical group 58 has a pixel value 56 indicating that the pixel does not represent the segmented image 31 of the occupant (a pixel value 56 of "0" in a preferred embodiment), then all of the pixel values 56 in the vertical group 58 can be set to that same value.
[0042] Vertical morphological dilation can take an "optimistic" view of whether or not a particular pixel 40 represents the segmented image 31. If even one pixel value 56 in the vertical group 58 indicates that the pixel 40 represents the segmented image 31 (a value 56 of "1" in a preferred embodiment), then each of the pixel values 56 in the vertical group 58 can be set to that same pixel value 56.
[0043] Fig. 6b is a diagram illustrating horizontal morphological processing.
Horizontal morphological processing can group pixels 40 together in a horizontal group 60 of sequential pixels. In a preferred embodiment, a horizontal group 60 consists of 2 rows of sequential pixels 40, with each row including 12 columns of sequential pixels 40. Alternative embodiments may use horizontal groups 60 with different numbers of rows and columns. Each pixel 40 in the group 60 has at least one pixel value 56, such as a binary number in a preferred embodiment.
[0044] Horizontal morphological erosion takes a "pessimistic" or "conservative" view of whether or not a particular pixel 40 represents the segmented image 31, e.g.
whether a pixel 40 is an occupant pixel 40. If even one pixel 40 in the horizontal group 60 has a pixel value 56 indicating that the pixel does not represent the segmented image 31 of the occupant (a pixel value 56 of "0" in a preferred embodiment), then all of the pixel values 56 in the horizontal group 60 can be set to that same value, classifying each pixel 40 in the horizontal group 60 as an ambient pixel 40.
[0045] Horizontal morphological dilation takes an "optimistic" view of whether or not a particular pixel 40 represents the segmented image 31. If even one pixel value 56 in the horizontal group 60 indicates that the pixel represents the segmented image 31 (a value of "1" in a preferred embodiment), then each of the pixel values 56 in the horizontal group 60 can be set to that same pixel value 56.
[0046] Some embodiments may set pixel values 56 to non-binary numbers, or even to non-numerical characteristics such as letters, etc. Some embodiments may allow a horizontal group 60 to overlap with another horizontal group 60 and a vertical group 58 to overlap with another vertical group 58. Embodiments using overlapping horizontal groups 60 or overlapping vertical groups 58 can incorporate a one-to-one relationship between the particular pixel value 56 being set, and the particular horizontal group 60 or vertical group 58 being used to set that particular pixel 40. In other embodiments, a pixel 40 can only belong to one horizontal group 60 and one vertical group 58.
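A minimal sketch of the erosion and dilation passes is given below, using standard sliding-window morphology from scipy.ndimage with 12-by-2 and 2-by-12 structuring elements sized after the vertical and horizontal pixel groups described above. Sliding-window morphology approximates, rather than reproduces, the group-based setting of pixel values in the text, and the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

# Structuring elements sized after the pixel groups described above:
# 12 rows by 2 columns for vertical processing, 2 rows by 12 columns
# for horizontal processing.
VERTICAL_WINDOW = np.ones((12, 2), dtype=bool)
HORIZONTAL_WINDOW = np.ones((2, 12), dtype=bool)

def morphological_cleanup(binary):
    """Erode then dilate vertically, then erode and dilate horizontally,
    following the preferred ordering described above."""
    cleaned = binary.astype(bool)
    cleaned = binary_erosion(cleaned, structure=VERTICAL_WINDOW)    # pessimistic pass
    cleaned = binary_dilation(cleaned, structure=VERTICAL_WINDOW)   # optimistic pass
    cleaned = binary_erosion(cleaned, structure=HORIZONTAL_WINDOW)
    cleaned = binary_dilation(cleaned, structure=HORIZONTAL_WINDOW)
    return cleaned.astype(np.uint8)
```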
G. MOMENTUM-BASED PROCESSING
[0047] The system 16 can incorporate momentum-based processing to set pixel values 56. The underlying logic to such an approach is that if the last X consecutive pixels 40 represent the segmented image 31 of the occupant 18, then it is more likely than not that the next sequential pixel 40 (pixel X+1) in the location-based sequence of pixels will also represent the occupant 18. The momentum-based heuristic is one means by which the system 30 can determine the appropriate value of a particular pixel by looking to and incorporating the pixel values of pixels in the vicinity ("vicinity pixels" or "vicinity pixel values") of the particular pixel that is subject to analysis by the system 30. Momentum-based processing can be incorporated into the gap processing subsystem, which is described in greater detail below.
[0048] Fig. 7 is one illustration of a graph useful for momentum-based analysis by the momentum-based heuristic. A probability 62 of a particular pixel 40 having a pixel value 56 representing the segmented image 31 decreases with each consecutive pixel value 56 representing an ambient image 38. Momentum-based processing can be done in two dimensions, i.e. in both the vertical and horizontal directions, although vertical momentum processing is generally more useful in embodiments relating to occupants 18 in a seated position. Momentum-based processing can be particularly effective in trying to fill gaps left by a seatbelt or hat, or a location of the occupant 18 often blocked by shadow such as the neck of the occupant 18.
[0049] When a particular pixel 40 has a pixel value 56 indicating that the particular pixel 40 represents the segmented image 31 of the occupant 18, that particular pixel 40 can be referred to as an occupant pixel 40. When a particular pixel 40 has a pixel value 56 indicating that the particular pixel 40 does not or may not represent the segmented image 31 of the occupant 18, that particular pixel 40 can be referred to as an ambient pixel 40.

[0050] Fig. 8 is a diagram illustrating one example of one or more pixel regions 66 made up of occupant pixels 40. Each pixel region 66 includes at least one occupant pixel. For two occupant pixels 40 to be in the same pixel region 66, those two occupant pixels 40 must be connected by a "chain" of occupant pixels 40, with each occupant pixel 40 in the chain adjacent to at least one other occupant pixel in the chain. A typical ambient image 38 may include between 3 and 5 pixel regions 66, although ambient images 38 may have as few as 1 pixel region 66 or potentially as many pixel regions 66 as there are occupant pixels 40.
[0051] Momentum-based processing can compute the sequential number of occupant pixels 40 in either the horizontal or vertical direction (only the vertical direction in a preferred embodiment) in a particular row or column (only vertical columns in a preferred embodiment) in the ambient image 38. The underlying logic of gap-based processing is that the pixel values (relating to pixels in the vicinity of the pixel being set) can be incorporated into the pixel being set. Momentum-based processing incorporates the concept that a series of sequential occupant pixels makes it more likely that an intervening ambient pixel or two is the result of misidentification by the system 16. For example, if two pixel regions 66 are separated by a gap of 4 sequential ambient pixels 40, the "momentum" associated with the occupant pixels 40 in the two pixel regions 66 may be sufficient to fill the gap and join the two regions 66. The greater the number of sequential or consecutive occupant pixels 40, the greater the "momentum" to allow the filling of a gap of ambient pixels 40.
[0052] A momentum counter ("counter") can be used to determine the amount of momentum between pixel regions 66. The counter can be incremented or added to each time the next pixel 40 in a sequence of pixels ("sequential pixels") is an occupant pixel 40. Conversely, a value can be subtracted from the counter each time the next sequential pixel 40 is an ambient pixel 40. Momentum-based processing can attribute different weights to an occupant pixel 40 and an ambient pixel 40. For example, an occupant pixel 40 could result in a value of 4 being added to the counter, while an ambient pixel 40 could result in a value of 1 being subtracted from the counter.
Different embodiments can incorporate a wide variety of different weights to the momentum counter. In a preferred embodiment, an occupant pixel increments the counter by 1 and an ambient pixel results in a value of 1 being subtracted from the counter. When the counter reaches a number less than or equal to zero, the system 16 can cease trying to bridge the gap of ambient pixels 40 between the pixel regions 66 of occupant pixels 40.
[0053] An example of the momentum-based processing and the counter can be seen with respect to the pixel values in Table 1 below. For the purpose of distinguishing pixel positions from pixel values, letters are used in the example to represent the relative horizontal positions of the pixels. In many embodiments, a numerical coordinate position is used to identify the location of a pixel, and thus the position of a pixel would be identified by a numerical value. In the example, each occupant pixel 40 increments the counter by 1 and each ambient pixel 40 decrements the counter by 1.
Table 1:
Position: A B C D E F G H I J K L M N
Value:    0 0 0 1 0 0 1 1 1 0 0 0 0 0
The three 1's at positions G, H, and I represent the largest region of 1's in the row, so momentum-based processing in a leftward direction will begin at position I. I, H, and G each have pixel values of "1" so each of those pixels increments the counter by 1.
Thus, the counter value at position G is 3. F has a pixel value of 0, but the counter has a value greater than 0, so the pixel value at F is changed to a 1 and the counter is decreased by 1 to a value of 2. E has a pixel value of 0, but the counter has a value of 2 which is greater than 0, so the counter goes down to a value of 1 and the pixel value at E is changed to 1. D has a pixel value of 1, so the counter is incremented to a value of 2. C has a pixel value of 0, but the counter value of 2 is greater than 0, so the pixel value at C is changed to a 1 and the counter has a value of 1. B also has a pixel value of 0, but the counter value of 1 is greater than 0, so B is changed to a value of 1 and the counter is changed to a value of 0. A has a pixel value of 0, and the counter value is 0, so there is no momentum to traverse the 0 value of A. Thus the right-to-left horizontal process stops.
[0054] Moving in a left-to-right direction, the process begins with G, the left-most 1 in the pixel region of G, H, and I. The counter is incremented to a value of 1 at G, a value of 2 at H, and a value of 3 at I. J has a pixel value of 0 which is then changed to a value of 1, and 1 is subtracted from the counter value which then becomes 2.
K has a pixel value of 0 which is changed to 1, but the counter value is then decreased to 1.
The pixel value at L is similarly changed from 0 to 1, but the counter value is then decreased to 0. M has a pixel value of 0, and the counter value at M is 0, so the process stops. The process never even reaches pixel position N. Table 2 discloses the resulting pixel values after the horizontal momentum-based processing of the row.
Table 2:
Position: A B C D E F G H I J K L M N
Value:    0 1 1 1 1 1 1 1 1 1 1 1 0 0

[0055] There are many different modifications and variations one can make to the example above. The values added to or subtracted from the counter can be modified.
The starting and end points can similarly be varied from embodiment to embodiment.
There are many different heuristics and methodologies in which the momentum of occupant pixels can traverse one or more ambient pixels.
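The counter-based traversal in the example above can be expressed in a few lines of Python. The sketch below is illustrative (the function name and the choice to start from the largest run of occupant pixels are assumptions consistent with the worked example); calling momentum_fill_row on the row of Table 1 reproduces the row of Table 2.

```python
def momentum_fill_row(row, gain=1, cost=1):
    """Fill gaps of ambient pixels (0) in one row or column of binary values
    using the momentum counter described above.  Starting from the largest
    run of occupant pixels (1), the counter is incremented by `gain` for each
    occupant pixel and decremented by `cost` for each ambient pixel, and
    ambient pixels are flipped to 1 as long as the counter stays positive.
    """
    row = list(row)
    runs, start = [], None
    for i, value in enumerate(row + [0]):          # sentinel closes a trailing run
        if value == 1 and start is None:
            start = i
        elif value != 1 and start is not None:
            runs.append((start, i - 1))
            start = None
    if not runs:
        return row
    begin, end = max(runs, key=lambda r: r[1] - r[0])

    def sweep(indices):
        counter = 0
        for i in indices:
            if row[i] == 1:
                counter += gain
            elif counter > 0:
                row[i] = 1                         # bridge the gap
                counter -= cost
            else:
                break                              # momentum exhausted

    sweep(range(end, -1, -1))                      # right-to-left from the run's right edge
    sweep(range(begin, len(row)))                  # left-to-right from the run's left edge
    return row

# momentum_fill_row([0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0])
# -> [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]   (Table 1 -> Table 2)
```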
H. GRAVITY-BASED PROCESSING
[0056] The pixel regions 66 of occupant pixels 40 illustrated by the example in Fig. 8 can also be used by the system 16 to perform gravity-based processing using a gravity-based heuristic. Gravity-based processing is one means by which the system 30 can determine the appropriate pixel value 56 relating to a particular pixel 40 by looking to and incorporating the pixel values 56 of other pixels 40 in the vicinity of the particular pixel that is subject to analysis by the system 30. Gravity-based processing can include assigning a unique identifier to each pixel region 66, as illustrated in Fig. 8.
The system 16 can track region characteristics such as region location and region size.
The largest region 66 is typically the upper torso of the occupant 18.
[0057] Gravity-based processing and the gravity-based heuristic incorporate the assumption that ambient pixels 40 in the general vicinity of a pixel region 66 of occupant pixels 40 (a group of occupant pixels where each occupant pixel is adjacent to at least one other occupant pixel) may actually be misidentified occupant pixels 40. The system 16 can incorporate a heuristic resembling the manner in which physics measures the impact of gravity on an object. The size of the pixel region 66 and the distance between the pixel region 66 and the potentially misidentified ambient pixel 40 are two potentially important variables for gravity-based processing.
The "gravity" of a particular pixel region 66 can be compared to a predetermined threshold to determine whether or not an occupant pixel 40 has been misidentified as an ambient pixel 40.
Gravity = (G x M x m) / r > pre-computed threshold
In the equation above, "G" is the "gravitational constant." It determines the strength with which pixels are attracted together. This parameter or characteristic is determined by the imagery and target types being analyzed to determine the amount of attraction needed to accomplish a complete segmentation. Different embodiments will have different "G" values. "M" can represent the aggregate total of the initial pixel value 56 for a characteristic 42, such as a luminosity value between 0 and 255, for each occupant pixel 40 in the pixel region 66. The lower-case "m"
represents the initial pixel characteristic 42 of the potentially misidentified ambient pixel 40 being considered for re-characterization by the system's 16 gravity-based processing.
The variable "r" represents the "radius," or the number of pixels between the pixel region 66 and the ambient pixel 40 being considered for re-classification as an occupant pixel 40.
[0058] The gravity-based effects of each pixel region 66 can be checked for each pixel classified as an ambient pixel 40. In such an embodiment, the "gravitational"
effects of multiple pixel regions 66 on a particular ambient pixel 40 can be considered.
Alternatively, the "gravitational" effect for a particular pixel region 66 is calculated outward until the point at which ambient pixels 40 are no longer re-classified as occupant pixels 40, eliminating the need to perform calculations for each pixel 40 or even any additional ambient pixels 40.
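The gravity test can be sketched as follows. The definition of "r" as the distance to the nearest pixel of the region and the brute-force scan over every ambient pixel are simplifying assumptions for illustration, as are the function and parameter names; the alternative outward-growing calculation described above would avoid visiting every ambient pixel.

```python
import numpy as np

def gravity_reclassify(binary, luminosity, labels, region_id, G, threshold):
    """Reclassify ambient pixels (0) near one labelled pixel region using the
    test Gravity = G * M * m / r > pre-computed threshold.

    `binary` is the current 0/1 segmentation, `luminosity` holds the original
    pixel values (0-255), and `labels` assigns a region identifier to each
    occupant pixel.  M is the summed luminosity of the region, m is the
    candidate ambient pixel's luminosity, and r is taken here as the distance
    to the nearest pixel of the region.
    """
    region_points = np.argwhere(labels == region_id)
    M = luminosity[labels == region_id].sum()              # "mass" of the region
    updated = binary.copy()
    for row, col in np.argwhere(binary == 0):
        m = luminosity[row, col]
        r = np.min(np.hypot(region_points[:, 0] - row,
                            region_points[:, 1] - col))
        if G * M * m / max(r, 1.0) > threshold:
            updated[row, col] = 1                          # pulled into the region
    return updated
```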
I. ELLIPSE FITTING SUBSYSTEM
[0059] The system 16 can use an ellipse fitting subsystem to create an ellipse to represent the segmented image 31 of the occupant 18. In alternative embodiments, alternative shapes can be used to represent the segmented image 31 of the occupant 18. Fig. 9 illustrates one example of the type of results that can be generated by the ellipse fitting process. In a preferred embodiment, the ellipse fitting subsystem is software in the computer 30, but in alternative embodiments, the ellipse fitting subsystem can be housed in a different computer or device.
[0060] The upper ellipse 80 can extend from the hips up to the head of the occupant 18. The lower ellipse 84 can extend down from the hips to include the feet of the occupant 18. If the entire area from an occupant's 18 hips down to the occupant's 18 feet is not visible, a lower ellipse can be generated to represent what is visible. An ellipse 80 can be tracked by the system 16 using a single point on the ellipse 80, such as the centroid of the ellipse 80, described in greater detail below. Many ellipse fitting routines are known in the art. A preferred embodiment does not utilize the lower ellipse 84.
[0061] Fig. 10 illustrates many of the variables that can be derived from the upper ellipse 80 to represent some characteristics of the segmented image 31 of the occupant 18 with respect to an airbag deployment system 36. A centroid 82 of the upper ellipse 80 can be identified by the system 16 for tracking characteristics of the occupant 18.
It is known in the art how to identify the centroid 82 of an ellipse.
Alternative embodiments could use other points on the upper ellipse 80 to track the characteristics of the occupant 18 that are relevant to airbag deployment 36 or other processing. A
wide variety of occupant 18 characteristics can be derived from the upper ellipse 80.
[0062] Motion characteristics include the x-coordinate ("distance") 98 of the centroid 82 and a forward tilt angle ("θ") 96. Shape measurements include the y-coordinate ("height") 94 of the centroid 82, the length of the major axis of the ellipse ("major") 88 and the length of the minor axis of the ellipse ("minor") 86.
Rate of change information, such as velocity and acceleration, can also be captured for all shape and motion characteristics.
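Since the text leaves the choice of ellipse fitting routine open, the sketch below uses image moments, one common approach, to recover the centroid, major and minor axis lengths, and tilt angle described above from the occupant pixels of a binary segmentation; the scaling of the axes to roughly two standard deviations is an illustrative assumption.

```python
import numpy as np

def upper_ellipse_parameters(binary):
    """Derive centroid, axis lengths, and tilt angle for an ellipse
    representing the occupant pixels (1s) in a binary segmentation."""
    ys, xs = np.nonzero(binary)
    centroid_x, centroid_y = xs.mean(), ys.mean()          # "distance" and "height"
    # The covariance of the pixel coordinates gives the orientation and spread.
    covariance = np.cov(np.vstack((xs, ys)))
    eigenvalues, eigenvectors = np.linalg.eigh(covariance) # ascending eigenvalues
    minor, major = 4.0 * np.sqrt(eigenvalues)              # ~2 standard deviations per side
    vx, vy = eigenvectors[:, 1]                            # direction of the major axis
    tilt_degrees = np.degrees(np.arctan2(vy, vx))          # forward tilt angle (theta)
    return {"centroid": (centroid_x, centroid_y),
            "major": major, "minor": minor, "tilt_degrees": tilt_degrees}
```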
J. OVERALL SEGMENTATION PROCESS
[0063] The various processes that can be performed by the system 16 can be implemented in many different ways, as discussed in greater detail above. The system 16 has a large number of different potential embodiments. Some embodiments will not need to use all of the potential processes. Different embodiments can also use different parameters for carrying out a particular process or series of processes.
[0064] Fig. 11 discloses one example of a process flow for the system 16 to isolate the segmented image 31 of the occupant 18 from the ambient image 38 that includes both the occupant 18 and the surrounding seat area 21. The ambient image 38 of the occupant 18 and surrounding seat area 21 is an input of the computer system 30. As described above, the ambient image 38 includes a number of pixels 40. The system 16 receives the ambient image 38 in the form of pixels 40 that represent the ambient image 38. Each pixel 40 has one or more pixel characteristics 42, such as a luminosity, and each pixel characteristic can be associated with one or more pixel values for a particular pixel (for example, a particular pixel can have a luminosity value between 0 and 255). A segmentation process can be performed using a computer system 30 which houses an image thresholding subsystem 100 and a gap processing subsystem 102. An ellipse fitting subsystem 116 can be used to generate an upper ellipse 80 representing the segmented image 31 of the occupant 18, but the ellipse fitting process at 116 is not required for use of the system 16.
1. Image Thresholding Subsystem
[0065] The image thresholding subsystem 100 can include two different modules, a set image threshold module 104 and a perform image thresholding module 106. The system 16 can generate an image threshold 44 to identify a first subset of pixels 40 as occupant pixels 40 and a second subset of pixels 40 as ambient pixels 40. The image threshold(s) 44 can be set by incorporating pixel characteristic 42 information as disclosed in Figs. 3, 4, and 5 and discussed in greater detail above. The application of the image threshold 44 involves comparing the pixel value 45 of a pixel 40 to the image threshold 44 in order to categorize or identify pixels 40 as belonging to one of two or more pixel categories, and then setting the revised pixel value 56 for the particular pixel 40 in accordance with the revised pixel value 56 associated with the pixel category. In a preferred embodiment, pixels 40 categorized at the time as representing the segmented image 31 of the occupant are set to a value of "1"
and can be referred to as occupant pixels. In a preferred embodiment, pixels 40 categorized at the time as representing aspects of the ambient image 38 not relating to the occupant 18 are set to a pixel value 56 of "0" and can be referred to as ambient pixels. In a preferred embodiment, each pixel 40 has only one pixel value 56, and that value is based on luminosity as the pixel characteristic 42.
a. Set Image Threshold Module [0066] Generating an image threshold 44 can include the process of analyzing the distribution of pixel values 56 for one or more pixel characteristics 42 for some or all pixels 40. The set image threshold module 104 can use the histogram 39 in Fig. 3 to record the number of pixels 40 with a particular pixel value 56 for a pixel characteristic 42, such as for example a pixel 40 with a luminosity value of 255. The histogram 39 of Fig. 3 can be translated into the cumulative distribution curve 52 as disclosed in Fig. 5. The cumulative distribution curve 52 of Fig. 5 can be used to convert the desired percentage N into an image threshold 44 based on a numerical value Y relating to the pixel characteristic(s) and pixel value(s) captured in the ambient image 38. Thus, the histogram 39 and cumulative distribution curve 52 can be used to calculate the appropriate image threshold(s) 44 as disclosed in Fig. 4. The system 16 preferably uses more than one image threshold 44 to incorporate differences in lighting. Each image threshold 44 can be created to have a predetermined percentage of pixels 40 fall below the image threshold 44. The predetermined percentage for a particular image threshold 44 can take into account the relative locations of the pixels 40 with respect to the entire ambient image 38.
[0067] The histogram 39, cumulative distribution curve 52, and image threshold are discussed in greater detail above.
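As one illustration of the histogram-to-threshold flow just described, the following sketch builds the luminosity histogram, converts it to a cumulative distribution, and reads off the threshold at which the desired percentage of pixels falls below. The helper name and the percentage value are illustrative assumptions, not values from the patent.

```python
import numpy as np

def set_image_threshold(luminosities, percentage):
    """Derive an image threshold so that roughly `percentage` percent of pixels fall below it."""
    histogram = np.bincount(luminosities.ravel().astype(np.int64), minlength=256)  # histogram of luminosity values
    cumulative = np.cumsum(histogram) / histogram.sum()                            # cumulative distribution curve
    # Smallest luminosity value Y at which the cumulative fraction reaches the desired percentage N.
    return int(np.searchsorted(cumulative, percentage / 100.0))

# Example with a synthetic 8-bit image; a brighter zone of the image could use a
# different percentage, yielding a different threshold for that zone.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(120, 160))
print(set_image_threshold(image, 85))
```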
b. Perform Image Thresholding Module [0068] The perform image thresholding module sets individual pixel values 56 according to the relationship between the pixel characteristic 42 for a particular pixel 40, and the particular image threshold 44 applicable to that particular pixel 40 being set. The system 16 can modify pixel values 56 so that each pixel 40 in a particular pixel category shares the same pixel value 56.
[0069] In a preferred embodiment, pixels 40 categorized at the time as representing the segmented image 31 of the occupant are set to a first binary value of "1"
and can be referred to as occupant pixels. In a preferred embodiment, all other pixels 40 are categorized as ambient pixels with a second binary value of "0."
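The thresholding pass itself can then be a single comparison over the image. The sketch below assumes, for illustration, that pixels at or above their applicable threshold are the occupant pixels (set to 1) and all others are ambient pixels (set to 0); passing a per-pixel threshold map is one way to apply different thresholds in differently lit areas of the image. The function and variable names are hypothetical.

```python
import numpy as np

def perform_image_thresholding(luminosities, thresholds):
    """Categorize pixels by comparing each to its applicable threshold.

    `thresholds` may be a single value or an array the same shape as the image
    (a per-location threshold map). Occupant pixels become 1, ambient pixels 0.
    """
    return (luminosities >= thresholds).astype(np.uint8)

# Usage: a coarse two-zone threshold map for an unevenly lit image.
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(120, 160))
threshold_map = np.full(image.shape, 120)
threshold_map[:, 80:] = 160          # brighter right half gets a higher threshold
binary = perform_image_thresholding(image, threshold_map)
```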

2. Gap Processing Subsystem [0070] After some or all of the pixel values 56 have been set in accordance with one or more image thresholds 44, a gap processing subsystem 102 can be used by the system 16 to modify pixel values 56 in a manner that incorporates intelligence in the form of other pixel values 56 in the vicinity of the particular pixel 40 having its pixel value 56 set by the system 16. The gap processing subsystem 102 applies a gap filling or gap processing heuristic to modify pixel values 56. The system 16 can change pixel values 56 on the basis of other pixel values 56 relating to the pixels 40 in the vicinity of the pixel value 56 being changed. A segmented image 31 of the occupant 18 can then be derived from the modified pixel values 56.
[0071] The gap processing subsystem 102 can incorporate a morphological processing module 108, a momentum-based processing module 110, an identify and label pixel regions module 112, and a gravity-based processing module 114.
a. Morphological processing module [0072] A morphological processing module 108 can be used to both remove outlier occupant pixels 40 and to connect regions 66 of occupant pixels separated by outlier ambient pixel values 56 by changing certain occupant pixels 40 to ambient pixels 40 and by changing certain ambient pixels 40 into occupant pixels 40. The morphological processing module 108 is described in greater detail above, and the various morphological heuristics can be performed in any order.
[0073] In a preferred embodiment, vertical morphological erosion is performed first to remove spurious pixels 40 that are oriented horizontally, including such parts of the occupant 18 as the arms stretched out in a forward direction. A vertical morphological dilation can then be performed to "grow out" the occupant pixels 40 back to their appropriate "size" by transforming certain ambient pixels 40 into occupant pixels 40.
[0074] Horizontal morphological erosion can then be performed in a similar manner, followed by horizontal morphological dilation to once again "grow out" the occupant pixels 40.
[0075] Morphological processing is based on the useful assumption that the pixel value 56 of a particular pixel 40 is often related to the pixel values 56 for pixels 40 in the same vicinity as the particular pixel 40 being set. Morphological processing is one mechanism that can allow the system 16 to incorporate the knowledge of vicinity pixel values 56 to offset flaws in the camera 22 or flaws in the underlying intelligence applied by the system 16 in such mechanisms as the image threshold 44 and other processes or tools.
[0076] The vertical groups 58 and horizontal groups 60 used to perform morphological processing are illustrated in Figs. 6a and 6b, and are described in greater detail above.
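One conventional realization of the erosion/dilation sequence described above uses one-dimensional structuring elements, vertical first and then horizontal. The sketch below relies on scipy's binary morphology routines; the structuring-element lengths are placeholders, since the excerpt does not specify them.

```python
import numpy as np
from scipy import ndimage

def morphological_processing(binary, vertical_length=5, horizontal_length=5):
    """Erode then dilate with a vertical group of pixels, then repeat horizontally."""
    vertical = np.ones((vertical_length, 1), dtype=bool)      # vertical group of pixels
    horizontal = np.ones((1, horizontal_length), dtype=bool)  # horizontal group of pixels

    out = ndimage.binary_erosion(binary, structure=vertical)   # removes thin, horizontally oriented runs
    out = ndimage.binary_dilation(out, structure=vertical)     # "grows out" the surviving occupant pixels
    out = ndimage.binary_erosion(out, structure=horizontal)
    out = ndimage.binary_dilation(out, structure=horizontal)
    return out.astype(np.uint8)
```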
b. Momentum-based processing module [0077] The momentum-based processing module 110 incorporates a momentum-based heuristic to set pixel values 56. This process is described in greater detail above. The momentum-based heuristic incorporates the intelligence that the existence of a string of sequential occupant pixels increases the likelihood that the next pixel in the sequence is also an occupant pixel 40, and should have the appropriate pixel value assigned to that pixel 40. A sufficient number of occupant pixels 40 can have the "momentum" to overcome a series of ambient pixels, resulting in those ambient pixels 40 being re-classified as occupant pixels 40 by the momentum-based heuristic.
[0078] As discussed in greater detail above, the momentum-based heuristic uses a counter to calculate the momentum effects of a series of pixel values 56. The counter determines whether or not sufficient "momentum" exists to re-classify a sequence of ambient pixels 40 as occupant pixels 40. The system 16 can be flexible in the different positive and negative weights associated with a particular pixel value 56. A first predetermined number can be added to the counter value each time the next adjacent pixel 40 belongs to a first pixel category, such as an occupant pixel 40, and a second predetermined number can be subtracted from the counter value each time the next adjacent pixel 40 belongs to a second pixel category, such as an ambient pixel 40. If the counter reaches a value less than or equal to zero before a sequence of ambient pixels 40 is traversed by occupant pixel 40 "momentum," then the remaining sequence of ambient pixels 40 remain as ambient pixels 40. Some embodiments can implement a linear approach to momentum-based processing, applying momentum on a pixel-by-pixel basis. Other embodiments may implement a threshold approach, where if the entire sequence of ambient pixels 40 is not overcome by occupant pixel 40 momentum, then none of the ambient pixels 40 in the sequence are re-classified as occupant pixels 40.
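The counter-based bookkeeping can be sketched for a single row or column of pixels as follows. This is a minimal illustration of the threshold-style variant (a gap of ambient pixels is filled only if the counter is still positive when the next occupant pixel is reached); the gain and penalty weights and the function name are placeholders, not values from the patent.

```python
import numpy as np

def momentum_fill(sequence, gain=1, penalty=1):
    """Bridge short runs of ambient pixels (0) that sit between occupant pixels (1)."""
    out = np.asarray(sequence, dtype=np.uint8).copy()
    counter = 0
    gap_start = None
    for i, value in enumerate(out):
        if value == 1:
            if gap_start is not None and counter > 0:
                out[gap_start:i] = 1          # momentum carried across the whole gap
            gap_start = None
            counter += gain                   # first predetermined number
        else:
            if gap_start is None:
                gap_start = i
            counter -= penalty                # second predetermined number
            if counter <= 0:
                counter = 0                   # momentum exhausted; this gap stays ambient
                gap_start = None
    return out

# Example: the gap of two ambient pixels is bridged; the gap of six is not.
print(momentum_fill([1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1]))
```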
c. Identify and label pixel regions module [0079] The system 16 can identify pixel regions 66 (groups of occupant pixels where each occupant pixel 40 is adjacent to at least one other occupant pixel 40).
Such processing can be performed by an identify and label pixel regions module 112.
Pixel regions 66 can be associated with a unique label, and can possess certain region attributes or characteristics such as size, location, and distance to other regions 66 and pixels 40. Fig. 8 is an example of an ambient image 38 containing four pixel regions 66. Pixel regions 66 can be used by the gravity-based processing module 114, and can, in alternative embodiments, also be used by the momentum-based processing module 110.
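Grouping adjacent occupant pixels into labeled regions is a standard connected-component step. A minimal sketch, using scipy's labeling routine and recording the size and centroid location of each region, might look like the following; the attribute names are illustrative, not the patent's.

```python
import numpy as np
from scipy import ndimage

def label_pixel_regions(binary):
    """Assign a unique label to each region of adjacent occupant pixels and record attributes."""
    labels, count = ndimage.label(binary)           # 4-connected regions by default
    attributes = []
    for region_id in range(1, count + 1):
        rows, cols = np.nonzero(labels == region_id)
        attributes.append({
            "label": region_id,
            "size": rows.size,                       # number of occupant pixels in the region
            "centroid": (rows.mean(), cols.mean()),  # location of the region in the image
        })
    return labels, attributes
```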
d. Gravity-based processing module [0080] A gravity-based processing module 114 allows the system 16 to incorporate the logical and useful assumption that ambient pixels 40 adjacent or close to a region 66 of occupant pixels 40 may simply be mis-categorized occupant pixels 40.
The gravity-based heuristic is described in greater detail above. The heuristic can incorporate the "mass" of the initial luminosity values 42 associated with occupant pixels 40 as well as ambient pixels 40 being considered for re-classification as occupant pixels 40 due to the "gravity" and relatively close distance between the pixel region 66 of occupant pixels 40 and the ambient pixel 40 considered for re-classification.
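The gravity idea can be sketched as follows, under assumptions the excerpt leaves open: a region's "mass" is taken as the sum of the initial luminosity values of its occupant pixels, the "pull" on a candidate ambient pixel is weighted by that pixel's own luminosity and falls off with the squared distance to the region centroid, and a fixed threshold decides re-classification. The exact weighting, the threshold, and the function name are placeholders, not the patent's formula.

```python
import numpy as np
from scipy import ndimage

def gravity_processing(binary, luminosities, pull_threshold=5000.0):
    """Re-classify ambient pixels that feel enough gravity-like pull from a nearby occupant region."""
    labels, count = ndimage.label(binary)
    out = binary.copy()
    ambient_rows, ambient_cols = np.nonzero(binary == 0)
    ambient_mass = luminosities[ambient_rows, ambient_cols].astype(float)

    for region_id in range(1, count + 1):
        rows, cols = np.nonzero(labels == region_id)
        region_mass = float(luminosities[rows, cols].sum())      # "mass" of the occupant pixel region
        cy, cx = rows.mean(), cols.mean()
        distance_sq = (ambient_rows - cy) ** 2 + (ambient_cols - cx) ** 2
        pull = region_mass * ambient_mass / np.maximum(distance_sq, 1.0)
        pulled = pull > pull_threshold
        out[ambient_rows[pulled], ambient_cols[pulled]] = 1       # re-classified as occupant pixels
    return out
```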
3. Segmented Image [0081] The segmented image 31 of the occupant 18, isolated from the non-occupant images surrounding the occupant 18 in the ambient image 38, can be outputted from the gap processing subsystem 102. For the purposes of airbag deployment, the ellipse fitting subsystem 116 can generate an upper ellipse 80 from the occupant pixels and ambient pixels 40 outputted from the gap processing subsystem 102. For other embodiments, if other visual characteristics of the occupant are desirable, the initial pixel characteristics 42, such as luminosity in the preferred embodiment, can replace the pixel values 56 for all occupant pixels 40, resulting in a segmented image 31 of the occupant 18. In some embodiments, the binary outline of the segmented image 31 of the occupant 18 is all that is required for the ellipse fitting subsystem.
In other embodiments, a more visually detailed segmented image 31 can be generated by simply plugging back the initial luminosity values for each of the segmented pixels 40.
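Plugging the initial luminosity values back into the occupant pixels is a one-line masking step; the sketch below assumes the binary segmentation and the original luminosity image share the same shape, and the function name is hypothetical.

```python
import numpy as np

def detailed_segmented_image(binary, luminosities):
    """Keep the original luminosity for occupant pixels and zero out everything else."""
    return np.where(binary == 1, luminosities, 0).astype(luminosities.dtype)
```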
4. Ellipse Fitting Subsystem [0082] As discussed in greater detail above, an ellipse fitting subsystem 116 can be used to generate an ellipse 80 representing the segmented image 31 of the occupant 18. The upper ellipse 80 and characteristics relating to the upper ellipse 80 can provide the airbag controller 32 with useful information relating to the occupant 18 in order to provide the appropriate deployment instructions to the airbag deployment system 36.
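One common way to obtain such an ellipse is a moment-based fit over the occupant pixels: the centroid comes from first-order moments, and the axis lengths and tilt come from the second-order (covariance) moments. The excerpt does not spell out the patent's fitting method or how the upper portion of the occupant is isolated, so the following is a generic stand-in; the scale factor on the axis lengths is arbitrary.

```python
import numpy as np

def fit_ellipse_by_moments(binary):
    """Fit an ellipse to the occupant pixels from first- and second-order image moments.

    Returns the centroid (x, y), major and minor axis lengths, and tilt angle in degrees.
    Assumes the binary image contains at least a few occupant pixels.
    """
    rows, cols = np.nonzero(binary == 1)
    x, y = cols.astype(float), rows.astype(float)
    cx, cy = x.mean(), y.mean()

    covariance = np.cov(np.vstack([x - cx, y - cy]))                  # second-order central moments
    eigenvalues, eigenvectors = np.linalg.eigh(covariance)
    minor, major = 2.0 * np.sqrt(np.maximum(eigenvalues, 0.0))        # eigh sorts eigenvalues ascending
    tilt = np.degrees(np.arctan2(eigenvectors[1, 1], eigenvectors[0, 1]))  # orientation of the major axis
    return (cx, cy), major, minor, tilt
```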
[0083] The system 16 can also be incorporated into any other system utilizing an image of an occupant 18, including but not limited to trains, planes, motorcycles, or any other type of vehicle or structure in which an occupant 18 sits in a seat 20.
The system 16 also has utility not limited to the field of occupant 18 images.
[0084] In accordance with the provisions of the patent statutes, the principles and modes of operation of this invention have been explained and illustrated in preferred embodiments. However, it must be understood that this invention may be practiced otherwise than as specifically explained and illustrated without departing from its spirit or scope.

Claims (38)

1. An image segmentation method for use with an occupant, a sensor for generating sensor measurements, and an ambient image including an occupant and the area surrounding the occupant, said image segmentation method comprising the following steps:
receiving an ambient image represented by a plurality of pixels and a plurality of initial pixel values, wherein each said pixel has at least one said initial pixel value;
identifying one or more pixels as belonging to one of a plurality of pixel categories on the basis of one or more initial pixel values associated with the pixels being identified;
establishing a first revised pixel value for one or more pixels, wherein each pixel in the same pixel category has the same first revised pixel value;
setting a second revised pixel value for one or more of said pixels on the basis of one or more first revised pixel values that are associated with one or more pixels in the vicinity of the pixel being set; and deriving a segmented image of the occupant from said first revised pixel value and said second revised pixel value.
2. An image segmentation method as in claim 1, wherein identifying one or more said pixels further includes generating an image threshold.
3. An image segmentation method as in claim 2, wherein identifying one or more said pixels further includes comparing said plurality of initial pixel values to said image threshold.
4. An image segmentation method as in claim 2, wherein generating an image threshold further comprises analyzing the distribution of initial pixel values relating to a pixel characteristic.
5. An image segmentation method as in claim 4, wherein analyzing the distribution of initial pixel values further includes:

recording aggregate initial pixel values into a histogram;
translating the histogram into a cumulative distribution function; and calculating an image threshold based on a predetermined percentage of initial pixel values falling below the image threshold.
6. An image segmentation method as in claim 4, wherein each pixel has only one initial pixel value and only one pixel characteristic.
7. An image segmentation method as in claim 6, wherein luminosity is said pixel characteristic.
8. An image segmentation method as in claim 3, wherein each said pixel has a pixel location with respect to the ambient image, and said pixel location determines which of a plurality of image thresholds are compared to said initial pixel value for said pixel in said pixel location.
9. An image segmentation method as in claim 8, wherein a higher image threshold is applied in pixel locations where there is brighter lighting.
10. An image segmentation method as in claim 1, wherein there are only two pixel categories.
11. An image segmentation method as in claim 1, wherein setting the second revised pixel value includes a morphological heuristic.
12. An image segmentation method as in claim 11, wherein the morphological heuristic is a morphological erosion.
13. An image segmentation method as in claim 12, wherein the morphological heuristic is a morphological dilation.
14. An image segmentation method as in claim 1, wherein setting the second revised pixel value includes a momentum-based heuristic.
15. An image segmentation method as in claim 14, wherein a subset of said plurality of pixels are a group of adjacent pixels, wherein said plurality of pixel categories includes a first pixel category and a second pixel category, and wherein setting the second pixel value further comprises:
analyzing in a sequential manner the subset of pixels in the group of adjacent pixels;
adding to a counter value each time the next pixel in the sequence belongs to said first pixel category; and subtracting from said counter value each time the next pixel in the sequence belongs to said second pixel category.
16. An image segmentation method as in claim 15, wherein the momentum-based heuristic stops when the counter value is less than or equal to zero.
17. An image segmentation method as in claim 1, wherein setting the second revised pixel value includes a gravity-based heuristic.
18. An image segmentation method for use with an occupant, a sensor for generating sensor measurements, and an ambient image of an occupant and the area surrounding the occupant, said image segmentation method comprising the following steps:
receiving an ambient image represented by a plurality of pixels and a plurality of pixel values, wherein each of said pixels has at least one said initial pixel value;
recording aggregate initial pixel values in a histogram;
translating the histogram into a cumulative distribution function;
calculating an image threshold using the cumulative distribution function with a predetermined percentage of initial pixel values falling below the image threshold;

categorizing each pixel in the plurality of pixels into one of a plurality of pixel categories by comparing the image threshold to the initial pixel value for the pixel being categorized;
establishing a first revised pixel value so that each pixel in the same pixel category shares the same first revised pixel value;
modifying said first revised pixel value into a second revised pixel value in accordance with a morphological heuristic;
determining a third revised pixel value from said second revised pixel value or said first revised pixel value with a momentum-based heuristic;
identifying regions of pixels based on the first revised pixel value, the second revised pixel value, and the third revised pixel value;
generating a fourth revised pixel value from the regions of pixels in accordance with a gravity-based heuristic; and deriving a segmented image of the occupant with the fourth revised pixel value.
19. An image segmentation method for use with an occupant as in claim 18, wherein deriving a segmented image of the occupant includes substituting the initial pixel value for said fourth revised pixel value representing the occupant image.
20. An image segmentation system for use with an occupant, a sensor for generating sensor measurements, and an ambient image of an occupant and the area around the occupant, said image segmentation system comprising:
an image thresholding subsystem, including a plurality of pixels representing the ambient image, and an image thresholding heuristic, said image thresholding subsystem categorizing said plurality of pixels in accordance with said image thresholding heuristic; and a gap processing subsystem, including a gap processing heuristic, a subset of vicinity pixels in said plurality of pixels, and a plurality of pixel values, said gap processing subsystem selectively setting said pixel values in accordance with said gap processing heuristic and said pixel values belonging to said pixels in said subset of vicinity pixels; and wherein a segmented image is generated from said plurality of pixels.
21. An image segmentation system as in claim 20, said image thresholding subsystem comprising a plurality of luminosity values, wherein each said pixel has at least one said luminosity value.
22. An image segmentation system as in claim 21, said image thresholding subsystem further comprising a histogram, wherein said histogram tabulates the number of said pixels having said luminosity value.
23. An image segmentation system as in claim 22, said image thresholding subsystem further comprising a cumulative distribution curve, wherein said histogram is converted into said cumulative distribution curve by said image thresholding subsystem.
24. An image segmentation system as in claim 23, said image thresholding subsystem further comprising a predetermined percentage and an image threshold, wherein said image threshold is calculated from said predetermined percentage and said cumulative distribution curve.
25. An image segmentation system as in claim 24, said image thresholding subsystem comprising a plurality of image thresholds and said pixel includes a pixel location, wherein said pixel location for said pixel determines which said image threshold is used for said pixel by said image thresholding subsystem.
26. An image segmentation system as in claim 24, said image thresholding subsystem further including a first subset of said plurality of pixels and a second subset of said plurality of pixels, said image thresholding subsystem dividing said plurality of pixels into said first subset of pixels and said second subset of pixels with said image threshold.
27. An image segmentation system as in claim 26, said image thresholding subsystem further including a first binary value and a second binary value, wherein said first subset of said plurality of pixels is set to said first binary value and said second subset of said plurality of pixels is set to said second binary value.
28. An image segmentation system as in claim 27 wherein at least approximately half of said plurality of said pixels are set to said first binary value.
29. An image segmentation system as in claim 20, said gap processing heuristic including a morphological heuristic.
30. An image segmentation system as in claim 29, said morphological heuristic comprising a morphological erosion.
31. An image segmentation system as in claim 29, said morphological heuristic comprising a morphological dilation.
32. An image segmentation system as in claim 29, said morphological heuristic comprising a vertically-based morphological heuristic.
33. An image segmentation system as in claim 29, said morphological heuristic comprising a horizontally-based morphological heuristic.
34. An image segmentation system as in claim 20, said gap processing heuristic comprising a momentum-based heuristic and said subset of vicinity pixels comprising a subset of sequential pixels, and said gap processing subsystem selectively setting said pixel value using said momentum-based heuristic and said pixel values relating to said subset of sequential pixels.
35. An image segmentation system as in claim 34, said sequential subset of pixels including a sequential vertical subset of pixels and a sequential horizontal subset of pixels, wherein said momentum-based heuristic analyzes said pixel values in said sequential vertical subset of pixels and said sequential horizontal subset of pixels, to determine said plurality of pixel values.
36. An image segmentation system as in claim 35, said momentum-based heuristic further comprising a momentum counter, said momentum counter determining when said gap processing subsystem terminates said momentum-based heuristic for said pixel value.
37. An image segmentation system as in claim 27:
said gap filling heuristic comprising a gravity-based heuristic, a pixel region, and a region characteristic;
said plurality of pixels including a target pixel;
wherein said pixel region includes one or more said pixels;
wherein said pixel region does not include said target pixel; and wherein said pixel region is a subset of said vicinity pixels.
38. An image segmentation system as in claim 37:
said gravity-based heuristic further including a region size and a region distance;
wherein said region size is the number of said pixels in said pixel region;
wherein said region distance is the distance between said target pixel and a center point in said pixel region; and wherein said gap processing subsystem sets said target pixel in accordance with said region size, said region distance, and said gravity-based heuristic.