WO2006022630A1 - Panoramic vision system and method - Google Patents

Panoramic vision system and method

Info

Publication number: WO2006022630A1
Authority: WO (WIPO PCT)
Prior art keywords: image, vision system, vehicle, display, image data
Application number: PCT/US2004/023849
Other languages: French (fr)
Inventors: Louie Lee, Masoud Vakili
Original assignee: Silicon Optix, Inc.
Application filed by Silicon Optix, Inc.
Priority and family publications: EP1771811A4 (EP04779083A), JP4543147B2 (JP2007523514A), CN1985266B (CN2004800431488A), WO2006022630A1 (PCT/US2004/023849)

Classifications

    • B60R 1/00: Optical viewing arrangements; real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • G06T 3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T 5/40: Image enhancement or restoration by the use of histogram techniques
    • G06T 5/80
    • G06T 5/92
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • H04N 23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N 23/951: Computational photography systems, e.g. light-field imaging systems, using two or more images to influence resolution, frame rate or aspect ratio
    • H04N 25/61: Noise processing where the noise originates only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • B60R 2300/102: Viewing arrangement using a 360 degree surveillance camera system
    • B60R 2300/105: Viewing arrangement using multiple cameras
    • B60R 2300/302: Image processing combining image information with GPS information or vehicle data, e.g. vehicle speed, gyro, steering angle data
    • B60R 2300/303: Image processing using joined images, e.g. multiple camera images
    • B60R 2300/306: Image processing using a re-scaling of images
    • B60R 2300/402: Image calibration
    • B60R 2300/607: Monitoring and displaying vehicle exterior scenes from a transformed perspective, from a bird's eye viewpoint
    • B60R 2300/70: Event-triggered choice to display a specific image among a selection of captured images
    • B60R 2300/802: Viewing arrangement for monitoring and displaying vehicle exterior blind spot views
    • B60R 2300/8053: Viewing arrangement for bad weather conditions or night vision
    • B60R 2300/8066: Viewing arrangement for monitoring rearward traffic

Definitions

  • The display reconfigures to show a wide view of the rear bottom of the vehicle when the vehicle is backing up, the right side view when the right turning signal is on and the vehicle runs at higher speed, or the right corner view when the right turning signal is active and the vehicle is stopped or runs at low speed.
  • The system provides such facilities as parking assistance or lane crossing warning via pattern recognition of curbs, lanes, and objects, as well as distance determination.
  • In another aspect, the present invention provides a surveillance system covering up to a 360° horizontal or 4π steradian field of view of the exterior and/or interior of a structure such as a building, a bridge, etc.
  • The system can provide multiple distortion-corrected views to produce a continuous strip or a scanning panoramic view.
  • The system can perform motion detection and object tracking over the surveillance area, and object recognition to decipher people, vehicles, etc.
  • In yet another aspect, the invention provides a videoconference vision system producing a wide view of up to 180° or a circular view of up to 360°.
  • The invention facilitates viewing the participants of a conference or a gathering and provides the ability to zoom in on speakers with a single camera. Further details of different aspects and advantages of the embodiments of the invention will be revealed in the following description along with the accompanying drawings.
  • FIG. 1 represents an overall view of a vision system and its components built in accordance with the present invention
  • FIG. 2A represents the structure of the image processor of FIG. 1 as part of the vision system
  • FIG. 2B represents the flow logic of the image processor of FIG. 1;
  • FIG. 3A represents a conventional vehicle with mirrors used for side and rear vision
  • FIG. 3B represents an example setting for the cameras and their views in a vehicle example of the vision system
  • FIG. 3C represents selected views for the cameras in a vehicle example of the vision system
  • FIG. 4A represents the viewing surface of the vision system display in a vehicle vision system example
  • FIG. 4B represents the same viewing surface as FIG. 4A when the right turning signal is engaged in a vehicle vision system example;
  • FIG. 5A represents a wide alternative of the viewing surface of the vision system display in a vehicle vision system example
  • FIG. 5B represents the reconfigured viewing surface of FIG. 5A when the vehicle is running in reverse;
  • FIG. 6A represents reconfigured display when the vehicle is changing lanes
  • FIG. 6B represents reconfigured display when the vehicle is turning right and driver's view is obscured
  • FIG. 7 represents a top view display showing the vehicle and adjacent objects
  • FIG. 8 represents an example of control inputs in a vehicle vision system
  • FIG. 9A represents the view of a hemispheric, 180° (2π steradians) view camera, set in the center of a conference table in a videoconference vision system example;
  • FIG. 9B represents the display of a videoconference vision system example
  • FIG. 10 represents a hallway or a building wall with two cameras installed on the walls as part of a surveillance system example
  • FIG. 11A represents the display of a surveillance system example of the invention.
  • FIG. 11B represents the same display as FIG. 11A at a later time.
  • FIG. 1 shows the overall structure of vision system 100. It comprises a plurality of image acquisition devices like camera 110 to capture image frames, digitizer 118 to convert image frame data to digital image data, image processor 200 to adjust luminance and correct image distortions and form a composite image from digital image data and control parameters, controller 101 to relay user and sensor parameters to image processor 200, and display device 120 to display the composed image for viewing on the viewing surface 160.
  • The final composed image in the present invention covers up to 360°, includes several specific regions of interest, and is significantly distortion free. It greatly enhances situational awareness.
  • Camera 110 in FIG. 1 comprises image optics 112, sensor array 114, and image capture control 116.
  • Image optics 112 may use a wide angle lens to minimize the number of cameras required.
  • Wide angle lenses also have wider depth of focus, reducing the need for focus adjustment and its cost implementation. Such lenses have a field of view typically in excess of 100° and a depth of focus from a couple of feet to infinity. Both these properties make such lenses desirable for a panoramic vision system.
  • Wide angle lenses have greater optical distortions that are difficult and costly to correct optically.
  • Electronic correction is used to compensate for any optical and geometric distortion as well as any other non-linearity and imperfection in the optics or projection path.
  • Camera lenses therefore need not be perfect or yield perfect images. All image distortions, including those caused by the lens geometry or imperfection, are mathematically modeled and electronically corrected. The importance of this feature lies in the fact that it enables one to use inexpensive wide-angle lenses with an extended depth of focus.
  • The drawback of such lenses is the image distortion and luminance non-uniformity they cause; they tend to stretch objects, especially near the edges of a scene. This effect is well known, for example, for fish-eye and annular lenses, which make objects look disproportionate.
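  • As a concrete illustration of such electronic correction (a minimal sketch, not the patent's own algorithm), the following Python/NumPy code inverts a simple polynomial radial distortion model of the kind commonly fitted to wide-angle lenses; the frame size and the coefficients k1 and k2 are hypothetical calibration values.

        import numpy as np

        def output_to_input(xo, yo, k1=0.28, k2=0.06, cx=0.5, cy=0.5):
            """For each corrected output coordinate, return the distorted
            input coordinate, using r_in = r_out * (1 + k1*r_out^2 + k2*r_out^4).
            k1 and k2 are hypothetical lens calibration coefficients."""
            x, y = xo - cx, yo - cy
            r2 = x * x + y * y
            s = 1.0 + k1 * r2 + k2 * r2 * r2
            return cx + x * s, cy + y * s

        h, w = 480, 640                            # hypothetical sensor size
        ys, xs = np.mgrid[0:h, 0:w]
        u, v = output_to_input(xs / w, ys / h)     # normalized input coords
        ix = np.clip((u * w).astype(int), 0, w - 1)
        iy = np.clip((v * h).astype(int), 0, h - 1)
        frame = np.random.rand(h, w, 3)            # stand-in for one captured frame
        corrected = frame[iy, ix]                  # nearest-neighbour resampling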
  • Image sensor array 114 is depicted in camera 110.
  • The sensor array is adapted to moderate extreme variations of light intensity of the scene through the camera system. For a vision system to operate efficiently in different lighting conditions, a great dynamic range is needed. This requirement stems from the day and night variations in ambient light intensity as well as a wide variation from incidental light sources at night.
  • A camera sensor is basically a mosaic of segments, each exposed to the light from a portion of the scene, registering the intensity of the light as output voltages.
  • Image capture control 116 sends image frame intensity information, and based on these data it receives commands for integration optimization, lens iris control, and white balance control from image processor 200.
  • Digitizer 118 receives image data from image capture control 116 and converts these data into digital image data.
  • Digitizer 118 may be integrated in the image sensor 114 or in image processor 200.
  • Where audio is captured, digitizer 118 receives audio signals from audio device array 117. It then produces digital audio data and combines these with the digital image data. The digital image data are then sent to image processor 200.
  • FIG. 2A shows image processor 200 in detail.
  • The function of the image processor is to receive digital image data, measure image statistics, enhance image quality, compensate for luminance non-uniformity, and correct various distortions in these data to finally generate one or multiple composite images that are significantly distortion free.
  • It comprises optics and geometry data interface 236, which comprises camera optics data, projection optics data, and projection geometry data, as well as control data interface 202, image measurement module 210, luminance correction module 220, distortion convolution stage 250, distortion correction module 260, and display controller 280.
  • Image processor 200 measures full and selective image areas and analyzes them to control exposure for best image quality in the area of interest.
  • High light level sources such as headlights and spotlights are substantially reduced in intensity, and low light level objects are enhanced in detail to aid element recognition.
  • Luminance correction module 220 performs histogram analysis of the areas of interest, expands contrast in low light areas, and reduces contrast in high light areas to provide enhanced image element recognition.
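  • A minimal sketch of such histogram-driven adjustment, assuming a simple percentile-based contrast stretch (the patent does not specify the exact analysis); the percentile choices and region size are hypothetical.

        import numpy as np

        def stretch_region(luma, lo_pct=2.0, hi_pct=98.0):
            """Measure the luminance histogram of a region of interest and
            remap it so the chosen percentiles span the full range: this
            expands contrast in dark regions and compresses highlights."""
            lo, hi = np.percentile(luma, [lo_pct, hi_pct])
            out = (luma - lo) / max(hi - lo, 1e-6)
            return np.clip(out, 0.0, 1.0)

        roi = np.random.rand(120, 160) * 0.3   # stand-in for a dark area of interest
        enhanced = stretch_region(roi)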
  • Luminance non-uniformities are undesired brightness variations across an image.
  • The main physical reason for luminance non-uniformity is that light rays going through different portions of an optical system travel different distances and area densities; the intensity of a light ray falls off with the square of the distance it travels. This phenomenon happens in the camera optics as well as the display optics.
  • Imperfections in an optical system also cause luminance non-uniformity. Examples of such imperfections are projection lens vignette and lateral fluctuations and non-uniformity in the generated projection light. If the brightness variations are different for the three color components, they are referred to as chrominance non-uniformity.
  • Another type of optical distortion is color aberration.
  • An optical component like a lens has different indices of refraction for different wavelengths.
  • Light rays propagating through optical components therefore refract at different angles for different wavelengths. This results in lateral shifts of colors in images.
  • Lateral aberrations cause the different color components of a point object to separate and diverge. On a viewing surface a point would look fringed.
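  • As an illustration of electronic correction of this lateral effect (a sketch, not the patent's method), the code below radially rescales the red and blue planes relative to the green reference plane to undo the fringing; the per-channel scale factors are hypothetical and the resampling is nearest-neighbour.

        import numpy as np

        def rescale_channel(chan, scale, cx, cy):
            """Radially rescale one color plane about the optical center;
            a scale slightly different from 1 models the lateral shift of
            that color relative to the green reference plane."""
            h, w = chan.shape
            ys, xs = np.mgrid[0:h, 0:w].astype(float)
            sx = cx + (xs - cx) / scale        # output-to-input mapping
            sy = cy + (ys - cy) / scale
            ix = np.clip(sx.round().astype(int), 0, w - 1)
            iy = np.clip(sy.round().astype(int), 0, h - 1)
            return chan[iy, ix]

        img = np.random.rand(480, 640, 3)
        img[..., 0] = rescale_channel(img[..., 0], 1.002, 320, 240)  # red
        img[..., 2] = rescale_channel(img[..., 2], 0.998, 320, 240)  # blue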
  • Another type of color aberration is axial in nature and is caused by the fact that a lens has different focal points for different light wavelengths. This type of color aberration cannot be corrected electronically.
  • Image processor 200 maps the captured scene onto a viewing surface with certain characteristics such as shape, size, and aspect ratio. For example, an image can be formed from different sections of a captured scene and projected onto a portion of the windshield of a car, in which case it would suffer distortions because of the non-flat shape as well as the particular size of the viewing surface.
  • The image processor in the present invention corrects for all these distortions as explained below.
  • Luminance correction module 220 receives digital image data from image measurement module 210, where image contrast and brightness histograms within the region of interest are measured. These histograms are analyzed by luminance correction module 220 to control sensor exposure and to adjust the digital image data to improve image content for visualization and detection. Such adjustments include highlight compression, contrast expansion, detail enhancement, and noise reduction.
  • Luminance correction module 220 also receives camera and projection optics data from optics and geometry data interface 236. These data are determined from accurate light propagation calculations and calibrations of the optical components. They are crucial for luminance corrections since they provide the optical path length of different light rays through the camera as well as the projection system. Separate or combined camera and projection correction maps are generated to compute the correction for each pixel. The correction map can be obtained off-line or computed dynamically according to circumstances. The function of luminance correction module 220 is therefore to receive digital image data and produce luminance-adjusted digital image data. In case of chrominance non-uniformity, luminance correction module 220 preferably applies luminance corrections separately to the three color components.
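  • A common instance of such an off-line correction map is compensation for the cos^4 light falloff of a lens; the sketch below precomputes the reciprocal gain from a hypothetical focal length (in pixels) and applies it to each color channel.

        import numpy as np

        def cos4_gain_map(h, w, focal_px):
            """Illumination falls off roughly as cos^4 of the field angle,
            so the correction map stores the reciprocal gain per pixel."""
            ys, xs = np.mgrid[0:h, 0:w].astype(float)
            r2 = (xs - w / 2) ** 2 + (ys - h / 2) ** 2
            cos_t = focal_px / np.sqrt(focal_px ** 2 + r2)
            return 1.0 / cos_t ** 4

        gain = cos4_gain_map(480, 640, focal_px=500.0)   # hypothetical focal length
        frame = np.random.rand(480, 640, 3)
        uniform = np.clip(frame * gain[..., None], 0.0, 1.0)  # applied per channel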
  • The physical implementation of luminance correction module 220 could be a software program or dedicated processing circuitry such as a digital signal processor or computational logic within an integrated circuit.
  • Image warping is basically an efficient parameterization of coordinate transformations, mapping output pixels to input pixels. Ideally, a grid data set represents a mapping of every output pixel to an input pixel. However, grid data representation is quite unforgiving in terms of hardware implementation because of the sheer size of the look-up tables.
  • Image warping in this invention provides an efficient way to represent a pixel grid data set via a few parameters. In one example, this parameterization is done by polynomials of degree n, with n determined by the complexity of the combined distortion.
  • Different areas of the output space are divided into patches with inherent geometrical properties to reduce the degree of polynomials.
  • The higher the number of patches and the degree of the fitting polynomial per patch, the more accurate the parameterization of the grid data set.
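  • As an illustrative sketch of such a parameterization (the patent leaves the patch layout and polynomial degree open), the code below evaluates one patch's bivariate cubic warp, mapping output coordinates to input coordinates; the coefficient values are hypothetical.

        import numpy as np

        def eval_patch_warp(coeffs, xo, yo):
            """Evaluate u(x,y) = sum_ij c[i,j] x^i y^j for one patch.
            coeffs has shape (2, 4, 4): one 4x4 coefficient grid for each
            of the input coordinates (u, v); in practice the grids are
            fitted offline to the dense grid data set."""
            px = np.stack([xo ** i for i in range(4)])   # 1, x, x^2, x^3
            py = np.stack([yo ** j for j in range(4)])
            u = np.einsum('ij,i...,j...->...', coeffs[0], px, py)
            v = np.einsum('ij,i...,j...->...', coeffs[1], px, py)
            return u, v

        coeffs = np.zeros((2, 4, 4))
        coeffs[0, 1, 0] = coeffs[1, 0, 1] = 1.0          # identity warp terms
        coeffs[0, 3, 0] = 0.05                           # mild cubic distortion
        ys, xs = np.mgrid[0:8, 0:8] / 7.0                # one 8x8 output patch
        u, v = eval_patch_warp(coeffs, xs, ys)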
  • Such warp maps therefore represent a mapping of output pixels to input pixels, representing the camera optics, display optics, and display geometry including the nature of the final composite image specification and the shape of viewing surface.
  • Any control parameters, including user input parameters, are also combined with the above parameters and represented in a single transformation. In addition to coordinate transformation, a sampling or filtering function is often needed.
  • Filtering is basically a weighted averaging function, computing the intensities of the constituent colors of an output pixel based on all pixels inside the footprint.
  • An anisotropic elliptical footprint is used for optimal image quality; the larger the size of the footprint, the higher the quality of the output image.
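  • A minimal sketch of such footprint filtering, assuming a Gaussian-tapered elliptical weight (the text does not fix the exact weighting function); the footprint axes and orientation below are hypothetical.

        import numpy as np

        def elliptical_filter(img, cx, cy, rx, ry, angle):
            """Weighted average of all input pixels inside an elliptical
            footprint centered at (cx, cy); rx, ry and angle describe the
            anisotropic footprint an output pixel projects onto the input."""
            h, w = img.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w].astype(float)
            c, s = np.cos(angle), np.sin(angle)
            xr = ((xs - cx) * c + (ys - cy) * s) / rx    # ellipse frame
            yr = (-(xs - cx) * s + (ys - cy) * c) / ry
            d2 = xr ** 2 + yr ** 2
            wgt = np.where(d2 <= 1.0, np.exp(-2.0 * d2), 0.0)
            return (img * wgt[..., None]).sum(axis=(0, 1)) / wgt.sum()

        img = np.random.rand(64, 64, 3)
        pixel = elliptical_filter(img, 31.5, 31.5, rx=3.0, ry=1.5, angle=0.6)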
  • Image processor 200 in the present invention performs the image filtering with simultaneous coordinate transformation. To correct for image distortions, all geometric and optical distortion parameters explained above are concatenated in distortion convolution stage 250 for the different color components.
  • The concatenated optical and geometric distortion parameters are then obtained by distortion correction module 260.
  • The function of this module is to transform the position, shape, and color intensities of each element of a scene onto a display pixel.
  • The shape of viewing surface 160 is taken into account in the projection geometry data 234. This surface is not necessarily flat and could be any general shape so long as a surface map is obtained and concatenated with the other distortion parameters.
  • The display surface map is convoluted with the rest of the distortion data.
  • Distortion correction module 260 obtains a warp map covering the entire space of distortion parameters.
  • The process is explained in detail in co-pending United States Patent Application Nos. 2003/0020732-A1 and 2003/0043303-A1, hereby incorporated by reference.
  • A transformation is computed to compensate for the distortions an image suffers when it propagates through the camera optics, through the display optics, and onto the specific shape of the viewing surface 160.
  • The formation of the distortion parameter set and the transformation computation could be done offline and stored in a memory to be accessed by image processor 200 via an interface. They could also be done at least partly dynamically in case of varying parameters.
  • A display image can be composed of view windows that can be independent views or concatenated views.
  • Distortion correction module 260 interpolates and calculates, from the warp surface equations, the spatial transform and filtering parameters, and performs the image transformation for the display image.
  • Distortion correction module 260 first finds the nearest grid data point in the distortion parameter space. It then interpolates the existing transformation corresponding to that set of parameters to fit the actual distortion parameters.
  • Correction module 260 then applies the transformation to the digital image data to compensate for all distortions.
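  • As a simplified illustration of this nearest-grid-then-interpolate scheme, reduced to a single distortion parameter; the sampled parameter values and the stored warp grids are hypothetical stand-ins.

        import numpy as np

        def interpolate_warp(param, grid_params, grid_warps):
            """Given warp maps precomputed at sampled distortion parameter
            values, find the bracketing grid points and linearly blend
            their maps to fit the actual parameter value."""
            i = int(np.clip(np.searchsorted(grid_params, param) - 1,
                            0, len(grid_params) - 2))
            t = (param - grid_params[i]) / (grid_params[i + 1] - grid_params[i])
            return (1 - t) * grid_warps[i] + t * grid_warps[i + 1]

        grid_params = np.array([0.0, 0.5, 1.0])       # e.g. sampled settings
        grid_warps = np.random.rand(3, 480, 640, 2)   # stand-in (u, v) maps
        warp = interpolate_warp(0.7, grid_params, grid_warps)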
  • The digital image data from each frame of each camera are combined to form a composite image fitting the viewing surface and its substructure.
  • Distortion correction module 260 could be implemented by a software program on a general purpose digital signal processor or by dedicated processing circuitry such as an application specific integrated circuit.
  • A physical example of image processor 200 is incorporated in the Silicon Optix, Inc. sxW1 and REON chips.
  • FIG. 2B shows the flow logic of image processor 200 in one example of the present invention. Digital data flow in this chart is indicated via bold lines whereas calculated data flow is depicted via thin lines. As seen in this figure, brightness and contrast histograms are measured from the digital data at step (10).
  • Camera optics parameters, along with display optics and display geometry parameters, are obtained in steps (14) and (16). These data are then combined with the brightness and contrast histograms in step (20), where the image luminance non-uniformity is adjusted. Optics and geometry data from steps (14) and (16), as well as control parameters obtained in step (26), are then gathered at step (30), where all distortion parameters are concatenated. A transformation inverting the effect of the distortions is then computed at step (40). This compensating transformation is then applied to the luminance-adjusted digital image data obtained from step (20).
  • Step (50) constitutes simultaneous coordinate transformation and image processing.
  • In step (50), the distortion-compensated digital image data are used to generate a composite image for display.
  • Display controller 280 generates an image from these data.
  • The image is formed on display device 120, which could be a direct view display or a projection display device.
  • The image from a projection display device 120 is projected through display optics 122.
  • Display optics 122 are bulk optics components that direct the projected light from display device 120 onto viewing surface 160. Any particular display system has additional optical and geometric distortions that need to be corrected. In this example, image processor 200 concatenates these distortions and corrects for the combined distortion.
  • The present invention is not limited to any particular choice of the display system; the display device could be provided as liquid crystal, light emitting diode, cathode ray tube, electroluminescent, plasma, or any other viable display choice.
  • The display device could be viewable directly, or it could be projected through the display optics onto an integrated compartment serving as a viewing screen.
  • The brightness of the display system is adjusted via ambient light sensors, an independent signal, and a user dimmer switch.
  • The size of the viewing surface is also preferably adjustable by the user, for instance according to the distance from the operator's eyes.
  • Controller 101 interfaces with image processor 200. It acquires user parameters from user interface 150 along with inputs from the sensors 194 and sends them to the image processor 200 for display control.
  • The user parameters are specific to the application corresponding to different embodiments of the invention.
  • Sensors 194 preferably include ambient light sensors and direct glare sensors. The data from these sensors are used for display adjustment. Control parameters are convoluted with other parameters to provide a desired image.
  • Compression stage 132 receives the video stream from image processor 200 and compresses the digital video data. Compressed data are stored in record device 142 for future use. In another embodiment, the data are encrypted in encryption stage 134 and sent over the network via network interface 144.
  • FIG. 3A shows a conventional automotive vehicle 900 with two side mirrors and an in-cabin mirror covering right view 902, left view 904, and rear view 906. It is well known that traditional mirror positions and fields of view cause blind spots and may have distortions due to wide viewing angle. It is hard to get complete situational awareness from the images provided by the three mirrors.
  • In FIG. 3B, vehicle 900' is equipped with camera 110' and camera 111' forward of the driver seat. These positions increase coverage by overlapping with the driver's direct forward field of view.
  • Camera 113' in this example is situated at the rear of the vehicle or in the middle rear of its roof. Specific areas of different views are selected to provide a continuous image without overlap. This prevents driver confusion.
  • In FIG. 3C, vehicle 900" has camera 110" and camera 111" at the front corners of the vehicle and camera 113" at the center rear.
  • The emphasis here is on views adjustable by user input or by control parameters.
  • Turning side view 903" is made available when the turning signal is engaged. It should be noted that the coverage of this view is a function of user inputs and, in principle, covers the whole area between the dashed lines.
  • Drive side view 905" is used in regular driving mode, and it is also expandable to cover the whole area between the dashed lines.
  • Rear view in this example has two modes depending on control parameters. When the vehicle is in reverse, reverse rear view 907" is used for display.
  • This view yields a complete coverage of the rear of the vehicle, including objects on the pavement. This assures safer backing up and greatly facilitates parallel parking.
  • Otherwise, a narrower drive rear view 906" is used. These positions and angles ensure a convenient view of the exterior of the vehicle by the driver. This example significantly facilitates functions like parallel parking and lane changing.
  • FIG. 4A shows an example of the viewing surface 160 as seen by the driver.
  • Rear view 166 is at the bottom of the viewing surface while the right view 162 and left view 164 are at the top right and the top left of the viewing surface 160 respectively.
  • FIG. 5A shows an example of viewing surface 160' where a wider display is used and the side displays are at the two sides of the rear view 166'.
  • Image processor 200 has the panoramic conversion and image stitching capability to compose this particular display.
  • FIG. 4B and FIG. 5B present examples of reconfigured displays of FIG. 4A and FIG. 5A, when the vehicle is turning right or in reverse respectively.
  • FIG. 6A is an example illustration of the reconfigured display of viewing surface 160 when the vehicle is changing lanes, moving into the right lane.
  • The right front and rear of the vehicle are completely viewable, resulting in situational awareness with respect to everything on the right side of the vehicle.
  • FIG. 6B is an example illustration of the display configured for turning right, when the right turning signal is engaged.
  • Turning side view 903" of FIG. 3C is now displayed at the top middle portion of the display as turning side view 163".
  • Rear view 166" and right view 162" are at the bottom of the display.
  • FIG. 7 shows another example of the viewing surface 160.
  • This particular configuration of the display is achievable via gathering visual information from image acquisition devices, as well as distance determination from ultrasound and radar sensors integrated with vision system 100.
  • The vehicle drawn with bold lines and grey shading contains vision system 100 and is shown with respect to the surrounding setting, namely the other vehicles.
  • This example implementation of the present invention greatly increases situational awareness and facilitates driving the vehicle. Two of the dimensions of each object are captured directly by the cameras. The shapes and sizes of these vehicles and objects are reconstructed by extrapolating the views of the cameras according to the size and shape of the lanes and curbs. A more accurate visual account of the driving scene could be reconstructed by pattern recognition and look-up tables for specific objects in a database.
  • The display brightness is adjustable via ambient light sensors, via a signal from the vehicle headlights, or via a manual dimmer switch.
  • The size of the display surface is also adjustable by the driver.
  • The focal length of the displayed images lies considerably within the driver's field of focus, as explained in United States Patent No. 5,949,331, which is hereby incorporated by reference.
  • The display system focal length is preferably adjusted according to the speed of the vehicle to make sure the images always form within the depth of focus of the driver; at higher speed, the driver naturally focuses on a longer distance. Such adjustments are achieved via speedometer signal 156 or transmission signal 154.
  • FIG. 8 shows an example of user inputs 150 in a vehicle vision system.
  • Signal interface 151 receives signals from different components and interfaces with controller 101.
  • Turning signal 152, for instance, when engaged while turning right, relays a signal to signal interface 151, on to controller 101, and eventually to image processor 200.
  • When image processor 200 receives this signal, it configures the display so as to put emphasis on right display 162 on viewing surface 160.
  • FIG. 4B shows the viewing screen 160 under such conditions.
  • Right view display now occupies half of viewing surface 160 with the other half dedicated to rear view display 166.
  • FIG. 5B shows the same situation with a wide viewing surface embodiment. Other signals evoke other functions.
  • When the vehicle is shifted into reverse, the transmission preferably generates a signal so as to emphasize the rear view.
  • Other signals include a steering signal, which is engaged when the steering wheel is turned to one side by more than a preset limit; a brake signal, which is engaged when the brake pedal is depressed; a transmission signal, which conveys information about the traveling speed and direction; and a speedometer signal, which is gauged at various set velocities to input different signals.
  • The display system in the present invention adjusts to different situations depending on any of these control parameters and reconfigures automatically to make crucial images available, as sketched below.
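  • A toy sketch of such event-triggered reconfiguration, condensing the rules described above into a single selection function; the 30 km/h threshold separating the corner view from the side view is hypothetical.

        def select_view(reverse, turn_right, turn_left, speed_kmh):
            """Pick a display layout from vehicle control inputs."""
            if reverse:
                return "wide_rear_bottom"              # backing up
            if turn_right:
                return "right_side" if speed_kmh > 30 else "right_corner"
            if turn_left:
                return "left_side" if speed_kmh > 30 else "left_corner"
            return "rear_plus_sides"                   # regular driving layout

        layout = select_view(reverse=False, turn_right=True,
                             turn_left=False, speed_kmh=12)  # "right_corner"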
  • A variety of control signals could be incorporated with image processor 200; here we have mentioned only a few examples.
  • A variety of useful information could be displayed on the viewing surface 160, including road maps, roadside information, GPS information, and local weather forecasts.
  • Vision system 100 accesses these data either through downloaded files or through wireless communications at the driver's request. It then superimposes these data on the image frame data to compose the image for the display system. Important warnings are also preferably received by the vision system and are displayed and broadcast via the display system and audio signal interfaces. Vision system 100 could also be integrated with intelligent highway driving systems via exchanging image data information. It is also integrated with distance determiners and object identifiers as per United States Patent No. 6,498,620 B2, hereby incorporated by reference.
  • Vehicle vision system 100 comprises compression stage 132 and record device 142 to make a continuous record of events as detected by the cameras and audio devices. These features incorporate the function to save specific segments prior to and past an impact, in addition to driver-intended events, to be reviewed after mishaps like accidents, carjacking, and traffic violations. Such information could be used by law enforcement agents, judicial authorities, and insurance companies.
  • In another embodiment, the present invention provides a videoconference vision system covering up to 180° or 360° depending on a given setting.
  • In this application, audio signals are also needed along with the video signals.
  • Audio device array 117 provides audio inputs in addition to the video inputs from the cameras.
  • The audio signal data are converted to digital data via digitizer 118 and superposed on the digital image frame data.
  • A pan function is provided based on triangulation of the audio signal to pan to individual speakers or questioners.
  • The pan and zoom functions are provided digitally by image processor 200. In the absence of optical zoom, a digital zoom is provided, as sketched below.
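  • A minimal sketch of a digital pan and zoom of this kind, implemented as a crop around the speaker followed by nearest-neighbour resampling; the window and output sizes are hypothetical.

        import numpy as np

        def digital_pan_zoom(frame, cx, cy, zoom, out_h=240, out_w=320):
            """Crop a window centered on (cx, cy), sized by 1/zoom, and
            resample it to the output size: a digital substitute for
            optical pan and zoom."""
            h, w = frame.shape[:2]
            win_h, win_w = int(h / zoom), int(w / zoom)
            y0 = int(np.clip(cy - win_h / 2, 0, h - win_h))
            x0 = int(np.clip(cx - win_w / 2, 0, w - win_w))
            ys = y0 + (np.arange(out_h) * win_h // out_h)
            xs = x0 + (np.arange(out_w) * win_w // out_w)
            return frame[np.ix_(ys, xs)]

        frame = np.random.rand(480, 640, 3)
        view = digital_pan_zoom(frame, cx=400, cy=200, zoom=2.0)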
  • FIG. 9A relates to an example of a videoconference system. It shows the view of camera 110 at the center of a conference table.
  • Camera 110 has a hemispheric lens system and captures everything above, and including, the table surface. Raw images are upside down, stretched, and disproportionate. The displayed images on viewing surface 160 are, however, distortion free and panoramically converted.
  • FIG. 9B shows viewing surface 160, where the speaker's image is blown up and the rest of the conference is visible in a straight form on the same surface.
  • In yet another embodiment, the present invention provides a surveillance system with motion detection and object recognition.
  • FIG. 2A shows an example of image processor 200, used in the surveillance system.
  • Motion detector 270 evaluates successive input image frames and, based on a preset level of detected motion, signals an alarm and sends the motion area co-ordinates to distortion convolution stage 250.
  • The input image frames are used to monitor the area outside the current view window.
  • The tracking co-ordinates are used by distortion convolution stage 250 to calculate the view window for corrected display of the detected object.
  • The distortion corrected object image can be resolution enhanced by motion compensated temporal interpolation of the object.
  • Object recognition and classification can also be performed.
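  • A minimal frame-differencing sketch of such motion detection, returning the motion-area co-ordinates from which a view window could be computed; the change threshold and minimum pixel count are hypothetical presets.

        import numpy as np

        def detect_motion(prev, curr, thresh=0.08, min_pixels=50):
            """Compare successive frames; if enough pixels changed, return
            the bounding box (x0, y0, x1, y1) of the motion area."""
            diff = np.abs(curr.astype(float) - prev.astype(float)).mean(axis=2)
            mask = diff > thresh
            if mask.sum() < min_pixels:
                return None                 # below the preset motion level
            ys, xs = np.nonzero(mask)
            return xs.min(), ys.min(), xs.max(), ys.max()

        prev = np.random.rand(120, 160, 3)
        curr = prev.copy()
        curr[40:60, 70:90] += 0.5           # synthetic moving object
        box = detect_motion(prev, curr)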
  • FIG. 10 shows an illustrated example of a surveillance system.
  • Cameras 110 and 111 are mounted on the walls of a hallway, where they monitor traffic through the hallway.
  • Image processor 200 in this embodiment uses motion detector and tracker 270 to track the motion of an object as it passes through the hallway.
  • The viewing surface 160 in this example is shown in FIG. 11A at a certain time and in FIG. 11B at a later time.
  • The passage of the object is thoroughly captured as it moves from the field of view of one camera to the other.
  • The vision system can also perform resolution enhancement by temporal extraction to improve object detail and recognition, and displays the result at the top of the image in both FIG. 11A and FIG. 11B.
  • The top portion in these figures is provided in full resolution or by digitally zooming on the moving object.
  • The recognition and tracking of the object are achieved by comparing the detected object in motion from one frame with the next frames. An outline highlights the tracked object for ease of recognition.

Abstract

A distortion corrected panoramic vision system and method provides a visually correct composite image acquired through wide angle optics and projected onto a viewing surface. The system uses image acquisition devices to capture a scene up to 360° or 4π steradians broad. An image processor corrects for luminance or chrominance non-uniformity and applies a spatial transform to each image frame. The spatial transform is convolved by concatenating the viewing transform, acquisition geometry and optical distortion transform, and display geometry and optical transform. The distortion corrections are applied separately for red, green, and blue components to eliminate lateral color aberrations of the optics. A display system is then used to display the resulting composite image on a display device which is then projected through the projection optics and onto a viewing surface. The resulting image is visibly distortion free and matches the characteristics of the viewing surface.

Description

Title: PANORAMIC VISION SYSTEM AND METHOD
FIELD OF THE INVENTION
This invention relates to vision systems, particularly to panoramic vision systems with image distortion correction.
BACKGROUND OF THE INVENTION
Improved situational awareness is increasingly required for many applications including surveillance systems, videoconference vision systems, and vehicle vision systems. Essential to such applications is the need to monitor a wide operating area and to form a composite image for easy comprehension in different user modes. To minimize the number of cameras and cost, cameras with panoramic lenses enable a wide field of view, but they have distortions due to their inherent geometric shape and optical non-linearity. In a surveillance application, multiple panoramic cameras may cover the entire exterior and interior area of a building, and the system can provide a continuous view of the area and, manually or automatically, track objects through the area.
In a vehicle vision system, multiple panoramic cameras can provide a full 360° view of the area and around obstructive objects. Such systems can adapt display views to specific operator modes such as turning, reversing, and lane changing to improve situational awareness. Additional advantages of a vehicle vision system are reducing the wind drag and noise caused by side mirrors, and reducing the width span of the vehicle by eliminating such protruding mirrors. These systems can also have the capability to detect objects in motion, provide warning of close objects, and track such objects through multiple viewing regions. Vision systems could also greatly enhance night vision through various technologies such as infrared, radar, and light sensitive devices.
Vision systems consist of one or more image acquisition devices coupled to one or more viewable display units. An image acquisition device can incorporate different lenses with different focal lengths and depths of focus, such as planar, panoramic, fish-eye, and annular. Lenses like fish-eye and annular lenses have a wide field of view and a large depth of focus. They can capture a wide and deep field of view. They tend, however, to distort images, especially at the edges, and the resulting images look disproportionate. In any type of lens, there are also optical distortions caused by tangential and radial lens imperfections, lens offset, focal length of the lens, and light falloff near the outer portions of the lens. In a vision system, there are yet other types of image distortions caused by luminance variations and color aberrations. These distortions affect the quality and sharpness of the image. Prior art panoramic vision systems do not remove image distortions while accurately blending multiple images. One such panoramic system is disclosed in United States Patent No. 6,498,620 B2, namely a rearview vision system for a vehicle. This system consists of image capture devices, an image synthesizer, and a display system. Neither in this document, nor in the ones referenced therein, is there an electronic image processing system correcting for geometric, optical, color aberration, and luminance image distortions. In United States Patent Application Publication No. 2003/0103141A1, a vehicle vision system includes pre-calibration based luminance correction only, but not other optical and geometric corrections. A thorough luminance and chrominance correction should be based on input and output optical and geometric parameters and be adaptive to changing ambient environments.
Distorted images and discontinuity between multiple images slow down the operator's visual cognizance, and as such, her/his situational awareness, resulting in potential errors. This is one of the most important impediments in launching an efficient panoramic vision system. It is therefore desirable to provide a panoramic vision system with accurate representation of the situational view via removing geometric, optical, color aberration, luminance, and other image distortions and providing a composite image of multiple views. Such corrections will aid visualization and recognition and will improve visual image quality.
SUMMARY OF THE INVENTION
The present invention in one aspect provides a panoramic vision system having associated camera, display optics and geometric characteristics, said system comprising: (a) a plurality of image acquisition devices to capture image frame data from a scene and to generate image sensor inputs, said image frame data collectively covering up to a 360° field of view, said image acquisition devices having geometric and optical distortion parameters; (b) a digitizer coupled to the plurality of image acquisition devices to sample and convert the image frame data and the image sensor inputs into digital image data; (c) an image processor coupled to the digitizer comprising:
(i) an image measurement device to receive the digital image data and to measure the image luminance histogram and the ambient light level associated with the digital image data; (ii) a luminance correction module coupled to the image measurement device to receive the digital image data along with the camera, display optics and geometric characteristics, the image luminance histogram and the ambient light level, and to correct for luminance non-uniformities and to optimize the luminance range of selected regions within the digital image data;
(iii) a convolution stage coupled to the luminance correction module to combine the geometric and optical distortion parameters including the image sensor inputs, the camera, display optics and geometric characteristics and imperfections associated therein to form convoluted distortion parameters; (iv) a distortion correction module coupled to the convolution stage to generate and apply a distortion correction transformation based on the convoluted distortion parameters to the digital image data to generate corrected digital image data; (v) a display controller coupled to the distortion correction module to synthesize a composite image from the corrected digital image data; and
(d) a display system coupled to said image processor to display the composite image on a viewing surface for viewing, said composite image being visually distortion free.
In another aspect, the present invention provides a method for providing panoramic vision using a panoramic vision system having camera, display optics and geometric characteristics as well as geometric and optical distortion parameters, to generate a composite image that covers up to 360° or 4π steradians, said method comprising: (a) acquiring image frame data from a scene, said image frame data collectively covering up to a 360° or 4π steradian field of view, and generating a set of image sensor inputs;
(b) converting the image frame data and the image sensor inputs into digital image data, said digital image data being associated with an image luminance histogram and an ambient light level;
(c) obtaining the camera, display optics and geometric characteristics, the image luminance histogram and the ambient light level and correcting for luminance non-uniformities to optimize the luminance range of selected regions within the digital image data;
(d) convoluting the geometric and optical distortion parameters including the image sensor inputs, the camera, display optics and geometric characteristics and imperfections associated therein to form convoluted distortion parameters; (e) generating and applying the distortion correction transformations to the digital image data, said distortion correction transformations being based on the convoluted geometric and optical distortion parameters to generate corrected digital image data;
(f) synthesizing a composite image from the corrected digital image data; and
(g) displaying the composite image on a viewing surface for viewing, said composite image being visually distortion free.
In another aspect, the present invention provides an image processor, for use in a panoramic vision system having associated camera, display optics and geometric characteristics as well as geometric and optical distortion parameters, said panoramic vision system using a plurality of image acquisition devices to capture image frames from a scene and to generate image frame data and image sensor inputs, and a digitizer to convert the image frame data and the image sensor inputs into digital image data, said image processor comprising:
(a) an image measurement device to receive the digital image data and to measure the image luminance histogram and the ambient light level associated with the digital image data;
(b) a luminance correction module coupled to the image measurement device to receive the digital image data along with the camera, display optics and geometric characteristics, the image luminance histogram, and the ambient light level and to correct for luminance non-uniformities and to optimize the luminance range of selected regions within the digital image data;
(c) a convolution stage coupled to the luminance correction module to combine the geometric and optical distortion parameters including the image sensor inputs, the camera, display optics and geometric characteristics and imperfections associated therein to form convoluted distortion parameters; (d) a distortion correction module coupled to the convolution stage to generate and apply a distortion correction transformation, based on the convoluted distortion parameters, to the digital image data to generate corrected digital image data; and
(e) a display controller coupled to the distortion correction module to synthesize a composite image from the corrected digital image data.
The present invention in its first embodiment provides a vehicle vision system covering up to a 360° horizontal field of view and up to a 180° vertical field of view, with emphasis on the rear, side, and corner views. The situational display image can be integrated with other vehicle information such as vehicle status or navigation data. The image is preferably displayed on the front panel, contiguous with the driver's field of view. Additional views can be displayed to improve coverage and flexibility, such as interior dashboard corner displays that substitute for side mirrors, eliminating the mirrors' exposure to the external elements and their wind drag. The system is adapted to reconfigure the display and toggle between views based on control inputs, when one view becomes more critical than others.
For instance, the display reconfigures to show a wide view of the rear bottom of the vehicle when the vehicle is backing up, the right side view when the right turning signal is on and the vehicle is moving at higher speed, or the right corner view when the right turning signal is active and the vehicle is stopped or moving at low speed, as sketched below. In a preferred embodiment, the system provides such facilities as parking assistance or lane crossing warning via pattern recognition of curbs, lanes, and objects, as well as distance determination.
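The following minimal sketch illustrates this kind of view-toggling logic; the signal names, the speed threshold, and the view labels are illustrative assumptions, not part of the disclosed system.

```python
# Hypothetical sketch of display view selection from vehicle control inputs.
LOW_SPEED_KMH = 15  # assumed threshold between "stopped/slow" and "higher speed"

def select_view(gear: str, right_signal: bool, speed_kmh: float) -> str:
    """Choose which camera view the display should emphasize."""
    if gear == "reverse":
        return "rear_wide"        # wide view of the rear bottom of the vehicle
    if right_signal and speed_kmh > LOW_SPEED_KMH:
        return "right_side"       # lane change or turn at higher speed
    if right_signal:
        return "right_corner"     # stopped or creeping: corner view
    return "default_panorama"

# Backing up takes precedence over an active turn signal.
assert select_view("reverse", True, 5.0) == "rear_wide"
```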
In another embodiment, the present invention provides a surveillance system covering up to a 360° horizontal or 4π steradian field of view of the exterior and/or interior of a structure such as a building, a bridge, etc. The system can provide multiple distortion-corrected views to produce a continuous strip or a scanning panoramic view. The system can perform motion detection and object tracking over the surveillance area, and object recognition to identify people, vehicles, etc. In yet another embodiment, the invention provides a videoconference vision system producing a wide view of up to 180° or a circular view of up to 360°. In this embodiment, the invention facilitates viewing the participants of a conference or a gathering and provides the ability to zoom in on speakers with a single camera. Further details of different aspects and advantages of the embodiments of the invention will be revealed in the following description along with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
In the accompanying drawings:
FIG. 1 represents an overall view of a vision system and its components built in accordance with the present invention;
FIG. 2A represents the structure of the image processor of FIG. 1 as part of the vision system;
FIG. 2B represents the flow logic of the image processor of FIG. 1;
FIG. 3A represents a conventional vehicle with mirrors used for side and rear vision;
FIG. 3B represents an example setting for the cameras and their views in a vehicle example of the vision system;
FIG. 3C represents selected views for the cameras in a vehicle example of the vision system;
FIG. 4A represents the viewing surface of the vision system display in a vehicle vision system example;
FIG. 4B represents the same viewing surface as FIG. 4A when the right turning signal is engaged in a vehicle vision system example;
FIG. 5A represents a wide alternative of the viewing surface of the vision system display in a vehicle vision system example;
FIG. 5B represents the reconfigured viewing surface of FIG. 5A when the vehicle is running in reverse;
FIG. 6A represents the reconfigured display when the vehicle is changing lanes;
FIG. 6B represents the reconfigured display when the vehicle is turning right and the driver's view is obscured;
FIG. 7 represents a top view display showing the vehicle and adjacent objects;
FIG. 8 represents an example of control inputs in a vehicle vision system;
FIG. 9A represents the view of a hemispheric, 180° (2π steradians) view camera, set in the center of a conference table in a videoconference vision system example;
FIG. 9B represents the display of a videoconference vision system example;
FIG. 10 represents a hallway or a building wall with two cameras installed on the walls as part of a surveillance system example;
FIG. 11A represents the display of a surveillance system example of the invention; and
FIG. 11B represents the same display as FIG. 11A at a later time.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 shows the overall structure of vision system 100, built in accordance with the present invention. It comprises a plurality of image acquisition devices like camera 110 to capture image frames, digitizer 118 to convert image frame data to digital image data, image processor 200 to adjust luminance, correct image distortions, and form a composite image from digital image data and control parameters, controller 101 to relay user and sensor parameters to image processor 200, and display device 120 to display the composed image for viewing on viewing surface 160. The final composed image in the present invention covers up to 360°, including several specific regions of interest, and is significantly distortion free. It greatly enhances situational awareness. Camera 110 in FIG. 1 comprises image optics 112, sensor array
114, and image capture control 116. For applications requiring wide area coverage, image optics 112 may use a wide angle lens to minimize the number of cameras required. Wide angle lenses also have a wider depth of focus, reducing the need for focus adjustment and its implementation cost. Such lenses have a field of view typically in excess of 100° and a depth of focus from a couple of feet to infinity. Both these properties make such lenses desirable for a panoramic vision system. Wide angle lenses, however, have greater optical distortions that are difficult and costly to correct optically. In the present invention, electronic correction is used to compensate for any optical and geometric distortion as well as any other non-linearity and imperfection in the optics or projection path.
In the present invention, camera lenses need not be perfect or yield perfect images. All image distortions, including those caused by the lens geometry or imperfections, are mathematically modeled and electronically corrected. The importance of this feature lies in the fact that it enables one to use inexpensive wide-angle lenses with an extended depth of focus. The drawback of such lenses is the image distortion and luminance non-uniformity they cause; they tend to stretch objects, especially near the edges of a scene. This effect is well known, for example, for fish-eye or annular lenses, which make objects look disproportionate. There are bulk optic solutions available to compensate for these distortions. The addition of bulk optics, however, would make such devices larger, more massive, and more expensive. The cost of bulk optics components is dominated by the labor cost of operations such as polishing and alignment, whereas electronic components constantly become cheaper and more efficient. It is also well known that bulk optics can never remove certain distortions completely.
Another drawback of bulk optics solutions is their sensitivity to alignment; they can be rendered dysfunctional upon impact. There is therefore a crucial need for electronic image corrections to make a vision system represent the true setting while keeping it robust and relatively economical. This is substantiated by the fact that in a vision system, electronic components are most likely needed anyway to accomplish other tasks, such as panoramic conversion and display control. Image processor 200, described later in this section, provides these essential functions.
Image sensor array 114 is depicted in camera 110. The sensor array is adapted to moderate extreme variations in the light intensity of the scene through the camera system. For a vision system to operate efficiently in different lighting conditions, a great dynamic range is needed. This dynamic range requirement stems from the day and night variations in ambient light intensity as well as wide variations from incidental light sources at night. A camera sensor is basically a mosaic of segments, each exposed to the light from a portion of the scene, registering the intensity of the light as output voltages. Image capture control 116 sends image frame intensity information to image processor 200 and, based on these data, receives commands for integration optimization, lens iris control, and white balance control. Digitizer 118 receives image data from image capture control 116 and converts these data into digital image data. In an alternative example, the function of digitizer 118 may be integrated in image sensor 114 or image processor 200. When audio data are acquired, digitizer 118 receives these data from audio device array 117, produces digital audio data, and combines these data with the digital image data. The digital image data are then sent to image processor 200.
FIG. 2A shows image processor 200 in detail. The function of the image processor is to receive digital image data, measure image statistics, enhance image quality, compensate for luminance non-uniformity, and correct various distortions in these data to finally generate one or multiple composite images that are significantly distortion free. It comprises optics and geometry data interface 236, which comprises camera optics data, projection optics data, and projection geometry data, as well as control data interface 202, image measurement module 210, luminance correction module 220, distortion convolution stage 250, distortion correction module 260, and display controller 280. To understand the function of the image processor 200 it is important to briefly discuss the nature and causes of these distortions.
Ambient light and spot lighting can result in extreme variations in illumination levels, resulting in poor image quality and difficulty in image element recognition. In this invention, image processor 200 measures full and selective image areas and analyzes them to control exposure for best image quality in the area of interest. High light level sources such as headlights and spotlights are substantially reduced in intensity and low level objects are enhanced in detail to aid element recognition.
While the image sensor may be of any type and resolution, for the applications identified in this invention, a high resolution solid state CCD or CMOS image sensor would be appropriate in terms of size, cost, integration, and flexibility. Typical applications include adjusting to widely varying ambient light levels by iris control and integration time. In addition to iris control and integration time control, luminance correction module 220 performs histogram analysis of the areas of interest and expands contrast in low light areas and reduces contrast in high light areas to provide enhanced image element recognition.
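As a rough illustration of such histogram-based contrast expansion, the sketch below stretches an 8-bit region of interest between two percentile points; the percentile method itself is an assumption, since the text specifies histogram analysis but no particular algorithm.

```python
import numpy as np

def stretch_region(roi: np.ndarray, lo_pct: float = 1.0, hi_pct: float = 99.0) -> np.ndarray:
    """Expand contrast in an 8-bit grayscale region of interest by
    remapping the [lo_pct, hi_pct] percentile range onto [0, 255]."""
    lo, hi = np.percentile(roi, [lo_pct, hi_pct])
    if hi <= lo:  # flat region: nothing to expand
        return roi.copy()
    out = (roi.astype(np.float32) - lo) * (255.0 / (hi - lo))
    return np.clip(out, 0, 255).astype(np.uint8)
```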
Luminance non-uniformities are undesired brightness variations across an image. The main physical reason for luminance non-uniformity is the fact that light rays going through different portions of an optical system travel different distances and area densities. The intensity of a light ray falls off with the square of the distance it travels. This phenomenon happens in the camera optics as well as the display optics. In addition to this purely physical cause, imperfections in an optical system also cause luminance non-uniformity. Examples of such imperfections are projection lens vignette and lateral fluctuations and non-uniformity in the generated projection light. If the brightness variations are different for the three color components, they are referred to as chrominance non-uniformity. Another type of optical distortion is color aberration. It stems from the fact that an optical component like a lens has different indices of refraction for different wavelengths. Light rays propagating through optical components refract at different angles for different wavelengths. This results in lateral shifts of colors in images. Lateral aberrations cause the different color components of a point object to separate and diverge; on a viewing surface, a point would look fringed. Another type of color aberration is axial in nature and is caused by the fact that a lens has different focal points for different light wavelengths. This type of color aberration cannot be corrected electronically.
A variety of other distortions might be present in a vision system. Tangential or radial lens imperfections, lens offset, projector imperfections, and keystone distortions from off-axis projection are some of the common distortions.
In addition to the distortions mentioned above, image processor 200 maps the captured scene onto a viewing surface with certain characteristics such as shape, size, and aspect ratio. For example, an image can be formed from different sections of a captured scene and projected onto a portion of the windshield of a car, in which case it would suffer distortions because of the non-flat shape as well as the particular size of the viewing surface. The image processor in the present invention corrects for all these distortions as explained below.
Referring now to the details of image processor 200 in FIG. 2A, digital image data are received by image measurement module 210, where image contrast and brightness histograms within the region of interest are measured. These histograms are analyzed by the luminance correction module 220 to control sensor exposure and to adjust the digital image data to improve image content for visualization and detection. Some adjustments include highlight compression, contrast expansion, detail enhancement, and noise reduction.
Along with this measurement information, luminance correction module 220 receives camera and projection optics data from optics and geometry data interface 236. These data are determined from accurate light propagation calculations and calibrations of the optical components. They are crucial for luminance corrections since they provide the optical path length of different light rays through the camera as well as the projection system. Separate or combined camera and projection correction maps are generated to compute the correction for each pixel. The correction map can be obtained off-line, or it can be computed dynamically according to circumstances. The function of luminance correction module 220 is therefore to receive digital image data and produce luminance-adjusted digital image data. In case of chrominance non-uniformity, luminance correction module 220 preferably applies luminance corrections separately to the three color components. The physical implementation of luminance correction module 220 could be a software program or dedicated processing circuitry such as a digital signal processor or computational logic within an integrated circuit.
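For illustration only, a per-pixel gain map derived from a simple radial falloff model might be applied as below; the falloff model and all names are assumptions, since the actual maps are derived from light propagation calculations and calibration of the optical components.

```python
import numpy as np

def radial_gain_map(h: int, w: int, strength: float = 0.5) -> np.ndarray:
    """Illustrative inverse of a radially symmetric falloff (vignette) model."""
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    r2 = ((x - w / 2) / (w / 2)) ** 2 + ((y - h / 2) / (h / 2)) ** 2
    falloff = 1.0 / (1.0 + strength * r2)   # brightness drop toward the corners
    return (1.0 / falloff).astype(np.float32)

def apply_gain(img: np.ndarray, gain: np.ndarray) -> np.ndarray:
    """Apply a per-pixel luminance correction map to an 8-bit image; for
    chrominance non-uniformity, a separate map per color channel would apply."""
    out = img.astype(np.float32) * (gain[..., None] if img.ndim == 3 else gain)
    return np.clip(out, 0, 255).astype(np.uint8)
```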
Luminance-adjusted image data should be corrected for geometric, optical, and other spatial distortions. These corrections are referred to as "warp" corrections, and such correction techniques are called "image warping" in the literature. A discussion of image warping can be found in George Wolberg's "Digital Image Warping", IEEE Computer Society Press, 1988, hereby incorporated by reference. Image warping is basically an efficient parameterization of coordinate transformations, mapping output pixels to input pixels. Ideally, a grid data set represents a mapping of every output pixel to an input pixel. However, grid data representation is quite unforgiving in terms of hardware implementation because of the sheer size of the look-up tables. Image warping, in this invention, provides an efficient way to represent a pixel grid data set via a few parameters. In one example, this parameterization is done by polynomials of degree n, with n determined by the complexity of the combined distortion.
Different areas of the output space, in another example of this invention, are divided into patches with inherent geometrical properties to reduce the degree of the polynomials. In principle, the higher the number of patches and the degree of the fitting polynomial per patch, the more accurate the parameterization of the grid data set. However, this has to be balanced against processing power for real time applications. Such warp maps therefore represent a mapping of output pixels to input pixels, representing the camera optics, display optics, and display geometry, including the nature of the final composite image specification and the shape of the viewing surface. In addition, any control parameter, including user input parameters, is combined with the above parameters and represented in a single transformation.

In addition to coordinate transformation, a sampling or filtering function is often needed. Once the output image pixel is mapped onto an input pixel, an area around this input pixel is designated for filtering. This area is referred to as the filter footprint. Filtering is basically a weighted averaging function, computing the intensities of the constituent colors of an output pixel based on all the pixels inside the footprint. In a particular example, an anisotropic elliptical footprint is used for optimal image quality. It is known that the larger the size of the footprint, the higher the quality of the output image. Image processor 200, in the present invention, performs the image filtering with simultaneous coordinate transformation.

To correct for image distortions, all the geometric and optical distortion parameters explained above are concatenated in the distortion convolution stage 250 for the different color components. These parameters include camera optics data, projection optics data, and projection geometry data, via optics and geometry data interface 236, and control inputs via control interface 202. The concatenated optical and geometric distortion parameters are then obtained by distortion correction module 260. The function of this module is to transform the position, shape, and color intensities of each element of a scene onto a display pixel. The shape of viewing surface 160 is taken into account in the projection geometry data 234. This surface is not necessarily flat and could be any general shape, so long as a surface map is obtained and concatenated with the other distortion parameters. The display surface map is convoluted with the rest of the distortion data.
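A minimal sketch of this inverse mapping follows, with a single degree-2 polynomial per coordinate and plain bilinear filtering standing in for the per-patch polynomials and anisotropic elliptical footprint described above; the single-patch and grayscale simplifications, and all names, are assumptions.

```python
import numpy as np

def warp_inverse(img: np.ndarray, coeffs_x: np.ndarray, coeffs_y: np.ndarray) -> np.ndarray:
    """Inverse-mapping warp for a 2-D (grayscale) image: each OUTPUT pixel
    (u, v) is mapped to an INPUT position by two bivariate polynomials,
    then sampled with a bilinear filter."""
    h, w = img.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float32)
    # Degree-2 basis [1, u, v, u^2, uv, v^2]; coeffs_x and coeffs_y have 6 entries.
    basis = np.stack([np.ones_like(u), u, v, u * u, u * v, v * v])
    xs = np.tensordot(coeffs_x, basis, axes=1)  # input x for every output pixel
    ys = np.tensordot(coeffs_y, basis, axes=1)  # input y for every output pixel
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    fx, fy = np.clip(xs - x0, 0, 1), np.clip(ys - y0, 0, 1)
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bot = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return (top * (1 - fy) + bot * fy).astype(img.dtype)

# Identity mapping for reference: x = u, y = v.
# coeffs_x = np.array([0, 1, 0, 0, 0, 0], np.float32)
# coeffs_y = np.array([0, 0, 1, 0, 0, 0], np.float32)
```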
Distortion correction module 260, in one example of the present invention, obtains a warp map covering the entire space of distortion parameters. The process is explained in detail in co-pending United States Patent Application Publication Nos. 2003/0020732-A1 and 2003/0043303-A1, hereby incorporated by reference. For each set of distortion parameters, a transformation is computed to compensate for the distortions an image suffers when it propagates through the camera optics, through the display optics, and onto the specific shape of viewing surface 160. The formation of the distortion parameter set and the transformation computation could be done offline and stored in a memory to be accessed by image processor 200 via an interface. They could as well be done at least partly dynamically in case of varying parameters. A display image can be composed of view windows that can be independent views or concatenated views. For each view window, distortion correction module 260 interpolates and calculates, from the warp surface equations, the spatial transform and filtering parameters and performs the image transformation for the display image. For every image frame, distortion correction module 260 first finds the nearest grid data point in the distortion parameter space. It then interpolates the existing transformation corresponding to that set of parameters to fit the actual distortion parameters. Correction module 260 then applies the transformation to the digital image data to compensate for all distortions. The digital image data from each frame of each camera are combined to form a composite image fitting the viewing surface and its substructure. The corresponding digital data are corrected in such a way that when an image is formed on the viewing surface, it is visibly distortion free and fits an optimized viewing region on the viewing surface. The physical implementation of distortion correction module 260 could be a software program on a general purpose digital signal processor or dedicated processing circuitry such as an application specific integrated circuit. A physical example of image processor 200 is incorporated in the Silicon Optix Inc. sxW1 and REON chips.

FIG. 2B shows the flow logic of image processor 200 in one example of the present invention. Digital data flow in this chart is indicated via bold lines, whereas calculated data flow is depicted via thin lines. As seen in this figure, brightness and contrast histograms are measured from the digital data at step (10). Camera optics parameters, along with display optics and display geometry parameters, are obtained in steps (14) and (16). These data are then combined with the brightness and contrast histograms in step (20), where the image luminance non-uniformity is adjusted. Optics and geometry data from steps (14) and (16), as well as control parameters obtained in step (26), are then gathered at step (30). At this step, all distortion parameters are concatenated. A transformation inverting the effect of the distortions is then computed at step (40). This compensating transformation is then applied to the luminance-adjusted digital image data obtained from step (20).
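The nearest-grid-point lookup and interpolation performed by distortion correction module 260 can be sketched as follows; a one-dimensional parameter grid and linear interpolation between stored transformation coefficients are simplifying assumptions, since the disclosed system spans a multi-dimensional parameter space.

```python
import numpy as np

def interp_transform(param: float, grid: np.ndarray, transforms: np.ndarray) -> np.ndarray:
    """Find the stored correction transformations bracketing `param` on a
    sorted 1-D parameter grid and blend their coefficient vectors linearly."""
    i = int(np.clip(np.searchsorted(grid, param), 1, len(grid) - 1))
    t = float(np.clip((param - grid[i - 1]) / (grid[i] - grid[i - 1]), 0.0, 1.0))
    return (1 - t) * transforms[i - 1] + t * transforms[i]
```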
Returning to the flow logic of FIG. 2B: as formerly explained, pixel data are also filtered when the transformation is applied, for higher image quality. Accordingly, step (50) constitutes simultaneous coordinate transformation and image filtering. At step (50), the distortion-compensated digital image data are used to generate a composite image for display. Once the digital image data for a frame are processed and composed, display controller 280 generates an image from these data. In an example of this invention, the image is formed on display device 120, which could be a direct view display or a projection display device. In one example, the image from a projection display device 120 is projected through display optics
122, onto viewing surface 160. Display optics 122 are bulk optics components that direct the projected light from display device 120 onto viewing surface 160. Any particular display system has additional optical and geometric distortions that need to be corrected. In this example, image processor 200 concatenates these distortions and corrects for the combined distortion.
It should be noted that the present invention is not limited to any particular choice of display system; the display device could be provided as liquid crystal, light emitting diode, cathode ray tube, electroluminescent, plasma, or any other viable display choice. The display device could be viewable directly, or it could be projected through the display optics onto an integrated compartment serving as a viewing screen. Preferably, the brightness of the display system is adjusted via ambient light sensors, an independent signal, and a user dimmer switch. The size of the viewing surface is also preferably adjustable by the user, for instance according to the distance from the operator's eyes.
Controller 101 interfaces with image processor 200. It acquires user parameters from user interface 150 along with inputs from the sensors 194 and sends them to the image processor 200 for display control. The user parameters are specific to the application corresponding to different embodiments of the invention. Sensors 194 preferably include ambient light sensors and direct glare sensors. The data from these sensors are used for display adjustment. Control parameters are convoluted with other parameters to provide a desired image.
For some applications of this invention, it could be desirable to record the data stream or to send it over a network to different clients. In both cases it is necessary to compress the video data stream first to meet storage and bandwidth limits. Compression stage 132, in one example, receives the video stream from image processor 200 and compresses the digital video data. Compressed data are stored in record device 142 for future use. In another embodiment, the data are encrypted in encryption stage 134 and sent over the network via network interface 144.
Having explained general features of the present invention in detail, a number of example implementations of vision system 100 will now be discussed. In one embodiment, the present invention provides a vehicle vision system. FIG. 3A shows a conventional automotive vehicle 900 with two side mirrors and an in-cabin mirror covering right view 902, left view 904, and rear view 906. It is well known that traditional mirror positions and fields of view cause blind spots and may have distortions due to wide viewing angle. It is hard to get complete situational awareness from the images provided by the three mirrors.
In one example of the present invention illustrated in FIG. 3B, vehicle 900' is equipped with camera 110' and camera 111' forward of the driver seat. These positions increase coverage by overlapping with the driver's direct forward field of view. Camera 113' in this example is situated at the rear of the vehicle or in the middle rear of its roof. Specific areas of different views are selected to provide a continuous image without overlap. This prevents driver confusion.
In another example illustrated in FIG. 3C, vehicle 900" has camera 110" and camera 111" at the front corners of the vehicle and camera 113" at the center rear. In this example the emphasis is on views adjustable by user input or by control parameters. For example, turning side view 903" is made available when the turning signal is engaged. It should be noted that the coverage of this view is a function of user inputs and, in principle, covers the whole area between the dashed lines. Similarly, drive side view 905" is used in regular driving mode and is also expandable to cover the whole area between the dashed lines. The rear view in this example has two modes depending on control parameters. When the vehicle is in reverse, reverse rear view 907" is used for display. This view yields complete coverage of the rear of the vehicle, including objects on the pavement. This assures safer backing up and greatly facilitates parallel parking. When the vehicle is in drive mode, however, a narrower drive rear view 906" is used. These positions and angles ensure a convenient view of the exterior of the vehicle by the driver. This example significantly facilitates functions like parallel parking and lane changes.
FIG. 4A shows an example of the viewing surface 160 as seen by the driver. Rear view 166 is at the bottom of the viewing surface while the right view 162 and left view 164 are at the top right and the top left of the viewing surface 160 respectively. FIG. 5A shows an example of viewing surface 160' where a wider display is used and the side displays are at the two sides of the rear view 166'. Image processor 200 has the panoramic conversion and image stitching capability to compose this particular display. FIG. 4B and FIG. 5B present examples of reconfigured displays of FIG. 4A and FIG. 5A, when the vehicle is turning right or in reverse respectively.
FIG. 6A is an example illustration of the reconfigured display of viewing surface 160 when the vehicle is changing lanes, moving into the right lane. In this example, the right front and rear of the vehicle are completely viewable, resulting in situational awareness with respect to everything on the right side of the vehicle. FIG. 6B is an example illustration of the display configured for turning right, when the right turning signal is engaged. Turning side view 903" of FIG. 3C is now displayed at the top middle portion of the display as turning side view 163". Rear view 166" and right view 162" are at the bottom of the display.
FIG. 7 shows another example of viewing surface 160. This particular configuration of the display is achievable by gathering visual information from the image acquisition devices, as well as distance determinations from ultrasound and radar sensors integrated with vision system 100. In this illustrated example, the vehicle drawn with bold lines and grey shading contains vision system 100 and is shown with respect to the surrounding setting, namely, other vehicles. This example implementation of the present invention greatly increases situational awareness and facilitates driving the vehicle. Two of the dimensions of each object are captured directly by the cameras. The shapes and sizes of these vehicles and objects are reconstructed by extrapolating the views of the cameras according to the size and shape of the lanes and curbs. A more accurate visual account of the driving scene could be reconstructed by pattern recognition and look-up tables for specific objects in a database.
It should be noted that the present invention is not limited to these illustrated examples; variations in the number and position of cameras as well as reconfigurations of the viewing surface are within the scope of this invention. Projection geometry data, projection optics data, and camera optics data cover all these alternative implementations in image processor 200.
Preferably, the display brightness is adjustable via ambient light sensors, via a signal from the vehicle headlights, or via a manual dimmer switch. In addition, the size of the display surface is preferably adjustable by the driver. It is also preferred that the focal length of the displayed images lie well within the driver's field of focus, as explained in United States Patent No. 5,949,331, which is hereby incorporated by reference. However, it is also preferred that the display system focal length be adjusted according to the speed of the vehicle to make sure the images always form within the depth of focus of the driver; at higher speed, the driver naturally focuses at a longer distance. Such adjustments are achieved via speedometer signal 156 or transmission signal 154.
FIG. 8 shows an example of control inputs in a vehicle vision system. Signal interface 151 receives signals from different components and interfaces with controller 101. Turning signal 152, for instance, when engaged while turning right, relays a signal to signal interface 151, on to controller 101, and eventually to image processor 200. Once image processor 200 receives this signal, it configures the display so as to put emphasis on right display 162 on viewing surface 160. FIG. 4B shows viewing surface 160 under such conditions. The right view display now occupies half of viewing surface 160, with the other half dedicated to rear view display 166. FIG. 5B shows the same situation with a wide viewing surface embodiment. Other signals evoke other functions.
For instance, while parking the vehicle, the transmission preferably generates a signal to emphasize the rear view. Other signals include the steering signal, which is engaged when the steering wheel is turned to one side by more than a preset limit; the brake signal, which is engaged when the brake pedal is depressed; the transmission signal, which conveys information about the traveling speed and direction; and the speedometer signal, which is gauged at various set velocities to input different signals. The display system, in the present invention, adjusts to different situations depending on any of these control parameters and reconfigures automatically to make crucial images available. A variety of control signals could be incorporated with image processor 200; here we have mentioned only a few examples.

A variety of useful information could be displayed on viewing surface 160, including road maps, roadside information, GPS information, and local weather forecasts. Vision system 100 accesses these data either through downloaded files or through wireless communications at the driver's request. It then superimposes these data on the image frame data to compose the image for the display system. Important warnings are also preferably received by the vision system and are displayed and broadcast via the display system and audio signal interfaces. Vision system 100 could also be integrated with intelligent highway driving systems via exchanging image data information. It is also integrated with distance determiners and object identifiers as per United States Patent No. 6,498,620 B2, hereby incorporated by reference.
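The superimposition of such informational layers onto the composed frame could, for instance, be a per-pixel alpha blend, as in the hypothetical sketch below; the straight alpha-blend formula and all names are assumptions, not the disclosed implementation.

```python
import numpy as np

def superimpose(frame: np.ndarray, overlay: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Composite an information layer (map tile, warning icon) over an
    HxWx3 frame; `alpha` is an HxW per-pixel opacity mask in [0, 1]."""
    a = alpha[..., None]  # broadcast the mask over the color channels
    out = frame.astype(np.float32) * (1 - a) + overlay.astype(np.float32) * a
    return np.clip(out, 0, 255).astype(np.uint8)
```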
In one example implementation, vehicle vision system 100 comprises compression stage 132 and record device 142 to make a continuous record of events as detected by the cameras and audio devices. These features incorporate the function of saving specific segments prior to and after an impact, in addition to driver-intended events, to be reviewed after mishaps like accidents, carjackings, and traffic violations. Such information could be used by law enforcement agents, judicial authorities, and insurance companies.
In a different embodiment, the present invention provides a videoconference vision system covering up to 180° or 360° depending on the given setting. In this embodiment, audio signals are also needed along with the video signals. In the illustrated example of vision system 100 in FIG. 1, audio device array 117 provides audio inputs in addition to the video inputs from the cameras. The audio signal data are converted to digital data via digitizer 118 and superposed on the digital image frame data. A pan function is provided based on triangulation of the audio signal to pan to individual speakers or questioners. The pan and zoom functions are provided digitally by image processor 200. In the absence of optical zoom, a digital zoom is provided.
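A digital zoom of this kind amounts to cropping a window around the target and resampling it back to full frame size; the sketch below uses nearest-neighbor indexing as a deliberate simplification of the filtering described earlier, and the function and parameter names are assumptions.

```python
import numpy as np

def digital_zoom(img: np.ndarray, cx: int, cy: int, factor: float) -> np.ndarray:
    """Crop a window centered near (cx, cy) and scale it to the full frame."""
    h, w = img.shape[:2]
    half_h, half_w = int(h / (2 * factor)), int(w / (2 * factor))
    y0 = int(np.clip(cy - half_h, 0, h - 2 * half_h))
    x0 = int(np.clip(cx - half_w, 0, w - 2 * half_w))
    crop = img[y0:y0 + 2 * half_h, x0:x0 + 2 * half_w]
    yi = (np.arange(h) * crop.shape[0] / h).astype(int)   # nearest-neighbor rows
    xi = (np.arange(w) * crop.shape[1] / w).astype(int)   # nearest-neighbor cols
    return crop[yi][:, xi]
```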
FIG. 9A relates to an example of a videoconference system. It shows the view of camera 110 at the center of a conference table. In this particular example, camera 110 has a hemispheric lens system and captures everything above, and including, the table surface. Raw images are upside down, stretched, and disproportionate. The displayed images on viewing surface 160 are, however, distortion free and panoramically converted.
FIG. 9B shows viewing surface 160, where the speaker's image is enlarged and the rest of the conference is visible in a straightened form on the same surface.
In yet a different embodiment, the present invention provides a surveillance system with motion detection and object recognition. FIG. 2A shows an example of image processor 200 used in the surveillance system. Motion detector 270 evaluates successive input image frames and, based on a preset level of detected motion, signals an alarm and sends the motion area co-ordinates to the distortion convolution stage 250. The input image frames are used to monitor the area outside the current view window. The tracking co-ordinates are used by the distortion convolution stage 250 to calculate the view window for corrected display of the detected object. The distortion-corrected object image can be resolution enhanced by motion-compensated temporal interpolation of the object. Object recognition and classification can also be performed.
FIG. 10 shows an illustrated example of a surveillance system. Cameras 110 and 111 are mounted on the walls of a hallway, where they monitor traffic through the hallway. Image processor 200 in this embodiment uses motion detector and tracker 270 to track the motion of an object as it passes through the hallway. Viewing surface 160 in this example is shown in FIG. 11A at a certain time and in FIG. 11B at a later time. The passage of the object is thoroughly captured as it moves from the field of view of one camera to the other. The vision system can also perform resolution enhancement by temporal extraction to improve object detail and recognition, and displays the result at the top of the image in both FIG. 11A and FIG. 11B. The top portion in these figures is provided in full resolution or by digitally zooming on the moving object. The recognition and tracking of the object are achieved by comparing the detected object in motion from one frame to the next. An outline highlights the tracked object for ease of recognition.
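Frame-differencing motion detection of the kind performed by motion detector 270 can be sketched as below; the thresholds, the bounding-box output, and the grayscale-frame simplification are illustrative assumptions.

```python
import numpy as np

def detect_motion(prev: np.ndarray, curr: np.ndarray,
                  pix_thresh: int = 25, area_thresh: int = 50):
    """Return the bounding box (x0, y0, x1, y1) of pixels that changed between
    two grayscale frames, or None if motion stays below a preset level."""
    changed = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > pix_thresh
    if changed.sum() < area_thresh:  # preset motion level (assumed)
        return None
    ys, xs = np.nonzero(changed)
    return xs.min(), ys.min(), xs.max(), ys.max()  # outline to highlight object
```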

Claims
1. A panoramic vision system having associated camera, display optics and geometric characteristics, said system comprising:
(a) a plurality of image acquisition devices to capture image frame data from a scene and to generate image sensor inputs, said image frame data collectively covering up to a 360° field of view, said image acquisition devices having geometric and optical distortion parameters;
(b) a digitizer coupled to the plurality of image acquisition devices to sample and convert the image frame data and the image sensor inputs into digital image data;
(c) an image processor coupled to the digitizer comprising:
(i) an image measurement device to receive the digital image data and to measure the image luminance histogram and the ambient light level associated with the digital image data; (ii) a luminance correction module coupled to the image measurement device to receive the digital image data along with the camera, display optics and geometric characteristics, the image luminance histogram and the ambient light level, and to correct for luminance non-uniformities and to optimize the luminance range of selected regions within the digital image data;
(iii) a convolution stage coupled to the luminance correction module to combine the geometric and optical distortion parameters including the image sensor inputs, the camera, display optics and geometric characteristics and imperfections associated therein to form convoluted distortion parameters; (iv) a distortion correction module coupled to the convolution stage to generate and apply a distortion correction transformation based on the convoluted distortion parameters to the digital image data to generate corrected digital image data; (v) a display controller coupled to the distortion correction module to synthesize a composite image from the corrected digital image data; and
(d) a display system coupled to said image processor to display the composite image on a viewing surface for viewing, said composite image being visually distortion free.
2. The panoramic vision system of claim 1, wherein said image processor is adapted to separately process red, green, and blue components associated with the digital image data to correct for lateral color aberration distortions.
3. The panoramic vision system of claim 2, wherein said luminance correction module separately processes the red, green, and blue components to correct for chrominance non-uniformity.
4. The panoramic vision system of claim 1, wherein said plurality of image acquisition devices and the digitizer are implemented by a plurality of digital cameras.
5. The panoramic vision system of claim 3, wherein the display system is a light projection system, and wherein, the geometric and optical distortion parameters are concatenated in order to pre-correct for the associated geometric and optical distortions.
6. The panoramic vision system of claim 3, further comprising at least one ambient light sensor, at least one lighting signal, and at least one user dimmer switch, wherein the brightness of the display system is adjusted according to at least one of said ambient light sensor, separate lighting signal, and user dimmer switch.
7. The panoramic vision system of claim 3, further comprising a controller wherein said controller is adapted to relay a set of user parameters to the panoramic vision system to concatenate said user parameters with distortion parameters based on a set of selected viewing functions and preferences.
8. The panoramic vision system of claim 3, further comprising a control interface coupled to the image processor which is adapted to receive and relay user inputs and control parameters.
9. The panoramic vision system of claim 3, wherein said image processor is further adapted to send exposure control commands to the image acquisition devices for iris and exposure time control.
10. The panoramic vision system of claim 3, further comprising a viewing surface, wherein the position and size of the composite image on the viewing surface is adjustable according to a set of user inputs.
11. The panoramic vision system of claim 3, provided as a vehicle vision system.
12. The vehicle vision system of claim 11, wherein the display system is adapted to form a composite image in the forward field of vision of a driver positioned within the vehicle, namely on at least one of the group consisting of: dashboard, windshield, roof, and fascia.
13. The vehicle vision system of claim 12, wherein the display system is adapted to form a composite image on two viewing surfaces on the interior corners of the windshield and fascia in order to mimic vehicle side mirrors.
14. The vehicle vision system of claim 11, wherein the focal point of the composite image falls in the forward field of focus associated with a driver positioned within the vehicle.
15. The vehicle vision system of claim 11, wherein the focal point of the display system is practically infinite.
16. The vehicle vision system of claim 11, wherein the vehicle vision system reconfigures amongst different configurations according to input from the group consisting of: a turning signal, a transmission system, a brake system, a steering system, and a speedometer.
17. The vehicle vision system of claim 16, wherein the display system is adapted to enhance the right view from the vehicle when the vehicle turning signal is set to turn right and to enhance the left view from the vehicle when the vehicle turning signal is set to turn left.
18. The vehicle vision system of claim 16, wherein the display system is adapted to enhance the right view from the vehicle when the vehicle steering wheel is turned toward the right by a preset angle and to enhance the left view from the vehicle when the vehicle steering wheel is turned toward the left by a preset angle.
19. The vehicle vision system of claim 16, wherein the display system is adapted to enhance the rear view from the vehicle when the vehicle is operating in reverse.
20. The vehicle vision system of claim 16, wherein the vision system is adapted to enhance at least one of the front and rear corner side view from the vehicle if the direct view is obscured by adjacent objects.
21. The vehicle vision system of claim 20, further comprising at least one of a laser and a radar sensor and wherein the image processor is further adapted to use the sensors to detect distances between the exterior of the vehicle and an object, and to provide audio and visual warnings when the distance between the exterior of the vehicle and the object falls within limits defined by the speed of the vehicle.
22. The vehicle vision system of claim 21, wherein the display system is configured to show the vehicle and its surrounding setting as viewed from the top by capturing two dimensions via cameras and extrapolating in the third dimension according to lane and curb patterns and object look-up databases.
23. The vehicle vision system of claim 20, wherein the image processor is further adapted to detect at least one of road, lane, and curb edge markings and to provide audio and visual warnings when the exterior of the vehicle approaches said edge markings.
24. The vehicle vision system of claim 23, wherein, the image processor is further adapted to detect the position of a designated area and to provide directional indicators on the display system to guide vehicle movement to the designated area.
25. The vehicle vision system of claim 11, further comprising at least one of the group consisting of: ambient light sensors, oncoming headlight sensors, and a driver dimmer switch, and wherein the brightness and contrast associated with the display system are adjusted according to the readings of at least one of the ambient light sensors, oncoming headlight sensors, and driver dimmer switch.
26. The vehicle vision system of claim 11, further comprising a secure black box device and a trigger device for recording all the digital image data associated with a period of time defined around activation of the trigger, said trigger being adapted to be activated by at least one of the group consisting of: a user, a collision, a hijacking, and a theft.
27. The vehicle vision system of claim 11, further comprising an integrated GPS navigation unit which allows for road maps and roadside information to be displayed on the display system.
28. The panoramic vision system of claim 3, further comprising a plurality of surveillance sensors, said system being provided as a surveillance vision system.
29. The surveillance vision system of claim 28, wherein the image processor is further adapted to perform motion detection and object tracking using the surveillance sensors.
30. The surveillance vision system of claim 28, wherein the image processor is adapted to provide a set of visual and audio signals when motion is detected by surveillance sensors.
31. The surveillance vision system of claim 28, wherein the image processor is further adapted to define a set of designated protected areas, to track a plurality of objects, and to generate audio and visual warnings when such objects enter into designated protected areas, and wherein the display system is reconfigured to highlight said objects when such objects enter into the designated protected areas.
32. The panoramic vision system of claim 3, further including a set of audio sensors and wherein the digitizer is further adapted to convert the audio sensor inputs into digital audio data, said system being provided as a videoconferencing vision system.
33. The videoconference vision system of claim 32, wherein the image processor and the display system are further adapted to generate and display a zoomed image of a videoconferencing user according to the strength and direction of the audio signal generated by the videoconferencing user.
34. A method for providing panoramic vision using a panoramic vision system having camera, display optics and geometric characteristics as well as geometric and optical distortion parameters, to generate a composite image that covers up to 360° or 4π steradians, said method comprising:
(a) acquiring image frame data from a scene, said image frame data collectively covering up to a 360° or 4π steradian field of view, and generating a set of image sensor inputs;
(b) converting the image frame data and the image sensor inputs into digital image data, said digital image data being associated with an image luminance histogram and an ambient light level;
(c) obtaining the camera, display optics and geometric characteristics, the image luminance histogram and the ambient light level and correcting for luminance non-uniformities to optimize the luminance range of selected regions within the digital image data;
(d) convoluting the geometric and optical distortion parameters including the image sensor inputs, the camera, display optics and geometric characteristics and imperfections associated therein to form convoluted distortion parameters;
(e) generating and applying the distortion correction transformations to the digital image data, said distortion correction transformations being based on the convoluted geometric and optical distortion parameters to generate corrected digital image data;
(f) synthesizing a composite image from the corrected digital image data; and (g) displaying the composite image on a viewing surface for viewing, said composite image being visually distortion free.
35. The method of claim 34, further processing the red, green, and blue components of the digital image data separately to correct for lateral color aberration distortions.
36. The method of claim 35, wherein luminance adjustment is conducted separately for each of said red, green and blue components to correct for chrominance non-uniformity.
37. The method of claim 36, wherein the correction transformation is achieved by:
(i) obtaining a grid data set covering the entire space of the geometric and optical distortion parameters;
(ii) computing a correction transformation corresponding to each data set on the grid of (i), and
(iii) interpolating adjacent grid transformations to fit the actual image frame transformation parameters.
38. The method of claim 37, wherein (i) and (ii) are executed once off-line and the transformation grid is accessed for each image frame correction.
39. The method of claim 37, wherein (i) and (ii) are done dynamically under varying conditions.
40. The method of claim 37, wherein the pixel map associated with the transformation is represented via warp maps.
41. The method of claim 40, wherein the warp maps are parameterized in terms of one of incremental polynomials and positional polynomials.
42. The method of claim 36, further generating and relaying exposure control commands to a plurality of image acquisition devices.
43. The method of claim 36, further including receiving control and user parameters to convolute with distortion parameters.
44. The method of claim 36, further including motion detection and tracking.
45. The method of claim 43, further including adjusting brightness, contrast, and size of the composite image according to sensors and user inputs.
46. The method of claim 43, further reconfiguring the display system according to a set of control inputs.
47. The method of claim 43, further adjusting the focal length of the composite image according to a set of control parameters.
48. An image processor, for use in a panoramic vision system having associated camera, display optics and geometric characteristics as well as geometric and optical distortion parameters, said panoramic vision system using a plurality of image acquisition devices to capture image frames from a scene and to generate image frame data and image sensor inputs and a digitizer to convert the image frame data and the image sensor inputs into digital image data, said image processor comprising:
(a) an image measurement device to receive the digital image data and to measure the image luminance histogram and the ambient light level associated with the digital image data;
(b) a luminance correction module coupled to the image measurement device to receive the digital image data along with the camera, display optics and geometric characteristics, the image luminance histogram, and the ambient light level and to correct for luminance non-uniformities and to optimize the luminance range of selected regions within the digital image data;
(c) a convolution stage coupled to the luminance correction module to combine the geometric and optical distortion parameters including the image sensor inputs, the camera, display optics and geometric characteristics and imperfections associated therein to form convoluted distortion parameters;
(d) a distortion correction module coupled to the convolution stage to generate and apply a distortion correction transformation, based on the convoluted distortion parameters, to the digital image data to generate corrected digital image data; and
(e) a display controller coupled to the distortion correction module to synthesize a composite image from the corrected digital image data.
49. The image processor of claim 48, further adapted to separately process red, green, and blue components of the digital image data to correct for lateral color aberration distortions.
50. The image processor of claim 49, wherein said luminance correction module separately processes red, green, and blue components of the digital image data to correct for chrominance non-uniformity.
51. The image processor of claim 50, further adapted to generate and relay exposure control commands to image acquisition devices.
52. The image processor of claim 50, further adapted to receive control and user parameters to convolute with distortion parameters.
53. The image processor of claim 50, further adapted for motion detection and tracking.
54. The image processor of claim 50, further adapted to adjust the brightness, contrast, and size of the synthesized image according to a plurality of sensors and a set of user inputs.
55. The image processor of claim 52, further adapted to reconfigure the display system according to a set of control inputs.
56. The image processor of claim 52, further adapted to adjust the focal length of the composite image according to a set of control parameters.
PCT/US2004/023849 2004-07-26 2004-07-26 Panoramic vision system and method WO2006022630A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP04779083A EP1771811A4 (en) 2004-07-26 2004-07-26 Panoramic vision system and method
JP2007523514A JP4543147B2 (en) 2004-07-26 2004-07-26 Panorama vision system and method
CN2004800431488A CN1985266B (en) 2004-07-26 2004-07-26 Panoramic vision system and method
PCT/US2004/023849 WO2006022630A1 (en) 2004-07-26 2004-07-26 Panoramic vision system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2004/023849 WO2006022630A1 (en) 2004-07-26 2004-07-26 Panoramic vision system and method

Publications (1)

Publication Number Publication Date
WO2006022630A1 true WO2006022630A1 (en) 2006-03-02

Family

ID=35967743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/023849 WO2006022630A1 (en) 2004-07-26 2004-07-26 Panoramic vision system and method

Country Status (4)

Country Link
EP (1) EP1771811A4 (en)
JP (1) JP4543147B2 (en)
CN (1) CN1985266B (en)
WO (1) WO2006022630A1 (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102052916B (en) * 2009-11-04 2013-05-01 沈阳迅景科技有限公司 Method for three-dimensional measurement of panoramic real scenes
CN102918573B (en) * 2010-02-08 2016-03-16 “建筑投资项目M公司”有限责任公司 Method for determining vehicle speed and coordinates, subsequently identifying vehicles, and automatically recording traffic violations, and equipment implementing the method
CN102233850A (en) * 2010-05-04 2011-11-09 全福精密科技有限公司 Driving assistance device with panoramic view and driving recording functions
JP2012144162A (en) * 2011-01-12 2012-08-02 Toyota Motor Corp Travel support apparatus
CN104303017B (en) * 2011-03-28 2017-05-17 Avl测试系统公司 Deconvolution method for emissions measurement
JP2013214947A (en) * 2012-03-09 2013-10-17 Ricoh Co Ltd Image capturing apparatus, image capturing system, image processing method, information processing apparatus, and program
US9387813B1 (en) * 2012-03-21 2016-07-12 Road-Iq, Llc Device, system and method for aggregating networks and serving data from those networks to computers
KR101847825B1 (en) * 2012-05-24 2018-04-12 현대모비스 주식회사 Around View System and Method for Camera Brightness Correction thereof
KR101937272B1 (en) * 2012-09-25 2019-04-09 에스케이 텔레콤주식회사 Method and Apparatus for Detecting Events from Multiple Images
JP2016500169A (en) * 2012-10-05 2016-01-07 Vidinoti SA Annotation method and apparatus
JP2014150476A (en) * 2013-02-04 2014-08-21 Olympus Imaging Corp Photographing apparatus, image processing method, and image processing program
CN103568955A (en) * 2013-09-30 2014-02-12 深圳市领华数据信息有限公司 Car interior glass projection method and system
DE102014205511A1 (en) * 2014-03-25 2015-10-01 Conti Temic Microelectronic Gmbh Method and device for displaying objects on a vehicle display
KR101629577B1 (en) * 2014-12-10 2016-06-13 현대오트론 주식회사 Monitoring method and apparatus using a camera
TWI536313B (en) 2015-06-30 2016-06-01 財團法人工業技術研究院 Method for adjusting vehicle panorama system
CN107534757B (en) * 2015-08-10 2020-04-10 JVC Kenwood Corporation Vehicle display device and vehicle display method
JP6582874B2 (en) * 2015-10-28 2019-10-02 株式会社リコー Communication system, communication device, communication method, and program
CN105898338A (en) * 2015-12-18 2016-08-24 乐视致新电子科技(天津)有限公司 Panorama video play method and device
EP3435652A1 (en) * 2016-03-22 2019-01-30 Ricoh Company, Ltd. Image processing system, image processing method, and program
KR101765556B1 (en) * 2016-05-11 2017-08-23 (주)캠시스 Apparatus and method for processing the image according to the velocity of automobile
CN108001357A (en) * 2016-10-31 2018-05-08 中交北斗技术有限责任公司 High-accuracy BeiDou-positioning intelligent rearview mirror
WO2018094697A1 (en) * 2016-11-25 2018-05-31 深圳市窝窝头科技有限公司 Fast three-dimensional space projection and photographing visual identification system
CN106775309A (en) * 2016-12-06 2017-05-31 北京尊豪网络科技有限公司 Method and device for displaying real estate information
ES2695250A1 (en) * 2017-06-27 2019-01-02 Broomx Tech S L Method for projecting immersive audiovisual content (machine translation by Google Translate, not legally binding)
CN109204326B (en) * 2017-06-29 2020-06-12 深圳市掌网科技股份有限公司 Driving reminder method and system based on augmented reality
JP6988206B2 (en) * 2017-07-07 2022-01-05 株式会社タダノ Crane car
JP6989212B2 (en) * 2017-11-27 2022-01-05 株式会社東海理化電機製作所 Vehicle visibility device
JP7004410B2 (en) * 2017-11-27 2022-01-21 株式会社東海理化電機製作所 Vehicle visibility device and display control method
JP6857695B2 (en) * 2018-09-14 2021-04-14 シャープ株式会社 Rear display device, rear display method, and program
CN109263652B (en) * 2018-11-14 2020-06-09 江铃汽车股份有限公司 Method for measuring and checking front visual field of driver
CN111917985B (en) * 2020-08-14 2021-11-16 广东申义实业投资有限公司 Vehicle, method and device for three-dimensional panoramic visual display and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5289321A (en) * 1993-02-12 1994-02-22 Secor James O Consolidated rear view camera and display system for motor vehicle
JP3232408B2 (en) * 1997-12-01 2001-11-26 日本エルエスアイカード株式会社 Image generation device, image presentation device, and image generation method
JP3881439B2 (en) * 1998-01-23 2007-02-14 シャープ株式会社 Image processing device
JP3298851B2 (en) * 1999-08-18 2002-07-08 松下電器産業株式会社 Multi-function vehicle camera system and image display method of multi-function vehicle camera
US7474799B2 (en) * 2002-06-12 2009-01-06 Silicon Optix Inc. System and method for electronic correction of optical anomalies
US20040100565A1 (en) * 2002-11-22 2004-05-27 Eastman Kodak Company Method and system for generating images used in extended range panorama composition

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5465163A (en) * 1991-03-18 1995-11-07 Canon Kabushiki Kaisha Image processing method and apparatus for processing oversized original images and for synthesizing multiple images
US6498620B2 (en) 1993-02-26 2002-12-24 Donnelly Corporation Vision system for a vehicle including an image capture device and a display system having a long focal length
US5699444A (en) * 1995-03-31 1997-12-16 Synthonics Incorporated Methods and apparatus for using image data to determine camera location and orientation
US20030133019A1 (en) 1996-11-08 2003-07-17 Olympus Optical Co., Ltd., Image processing apparatus for joining a plurality of images
US5978521A (en) * 1997-09-25 1999-11-02 Cognex Corporation Machine vision methods using feedback to determine calibration locations of multiple cameras that image a common object
US20030103141A1 (en) 1997-12-31 2003-06-05 Bechtel Jon H. Vehicle vision system
US6163361A (en) * 1999-04-23 2000-12-19 Eastman Kodak Company Digital camera including a printer for receiving a cartridge having security control circuitry

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP1771811A4
SHUM, SZELISKI: "Systems and Experiment Paper: Constructions of Panoramic Image Mosaics with Global and Local Alignment", INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 36, no. 2, 2000, pages 101 - 130

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2926908A1 (en) * 2008-01-30 2009-07-31 Renault Sas Driving assistance method for a motor vehicle, in which a quality score is allocated to each pixel of each corrected image and the highest-quality pixels are selected to build a top view of the vehicle's surroundings
CN101534384B (en) * 2008-03-10 2014-10-08 株式会社理光 Image processing method, image processing device, and image capturing device
CN101252687B (en) * 2008-03-20 2010-06-02 上海交通大学 Method for implementing multichannel combined interested area video coding and transmission
EP2163429A3 (en) * 2008-08-29 2011-03-02 MAN Nutzfahrzeuge AG Method for image-supported monitoring of a vehicle environment, especially the environment of a goods vehicle
GB2471708A (en) * 2009-07-09 2011-01-12 Thales Holdings Uk Plc Image combining with light point enhancements and geometric transforms
EP2312497A1 (en) * 2009-09-30 2011-04-20 Hitachi Ltd. Apparatus for vehicle surroundings monitoring
US8289391B2 (en) 2009-09-30 2012-10-16 Hitachi, Ltd. Apparatus for vehicle surroundings monitoring
CN101945287A (en) * 2010-10-14 2011-01-12 杭州华三通信技术有限公司 ROI encoding method and system thereof
WO2012128532A3 (en) * 2011-03-20 2013-01-03 Hwang Seung-Bal High-efficiency lighting system for a closed circuit television camera
WO2012128532A2 (en) * 2011-03-20 2012-09-27 Hwang Seung-Bal High-efficiency lighting system for a closed circuit television camera
EP2562047A1 (en) * 2011-08-23 2013-02-27 Fujitsu General Limited Drive assisting apparatus
US9729788B2 (en) 2011-11-07 2017-08-08 Sony Corporation Image generation apparatus and image generation method
US10284776B2 (en) 2011-11-07 2019-05-07 Sony Interactive Entertainment Inc. Image generation apparatus and image generation method
US9894272B2 (en) 2011-11-07 2018-02-13 Sony Interactive Entertainment Inc. Image generation apparatus and image generation method
US9560274B2 (en) 2011-11-07 2017-01-31 Sony Corporation Image generation apparatus and image generation method
US20130293683A1 (en) * 2012-05-03 2013-11-07 Harman International (Shanghai) Management Co., Ltd. System and method of interactively controlling a virtual camera
EP2661073A3 (en) * 2012-05-03 2013-12-25 Harman International Industries, Incorporated System and method of interactively controlling a virtual camera
WO2014095782A1 (en) * 2012-12-17 2014-06-26 Connaught Electronics Ltd. Method for white balance of an image presentation considering color values exclusively of a subset of pixels, camera system and motor vehicle with a camera system
US20160269597A1 (en) * 2013-10-29 2016-09-15 Kyocera Corporation Image correction parameter output apparatus, camera system and correction parameter output method
US10097733B2 (en) 2013-10-29 2018-10-09 Kyocera Corporation Image correction parameter output apparatus, camera system and correction parameter output method
EP3065390A4 (en) * 2013-10-29 2017-08-02 Kyocera Corporation Image correction parameter output device, camera system, and correction parameter output method
US9836668B2 (en) 2014-03-26 2017-12-05 Sony Corporation Image processing device, image processing method, and storage medium
US10291864B2 (en) 2014-04-17 2019-05-14 Sony Corporation Image processing device and image processing method
WO2015193851A1 (en) * 2014-06-19 2015-12-23 Iveco S.P.A. Back vision system for assisting vehicle driving
AU2015275735B2 (en) * 2014-06-19 2019-05-09 Iveco S.P.A. Back vision system for assisting vehicle driving
WO2016072927A1 (en) * 2014-11-07 2016-05-12 BAE Systems Hägglunds Aktiebolag Situation awareness system and method for situation awareness in a combat vehicle
US10609306B2 (en) 2016-05-30 2020-03-31 Casio Computer Co., Ltd. Image processing apparatus, image processing method and storage medium
US10867575B2 (en) 2016-08-31 2020-12-15 Samsung Electronics Co., Ltd. Image display apparatus and operating method thereof
US11295696B2 (en) 2016-08-31 2022-04-05 Samsung Electronics Co., Ltd. Image display apparatus and operating method thereof
WO2018225392A1 (en) * 2017-06-09 2018-12-13 Sony Corporation Control apparatus, image pickup apparatus, control method, program, and image pickup system
US11272115B2 (en) 2017-06-09 2022-03-08 Sony Corporation Control apparatus for controlling multiple camera, and associated control method
CN108973860A (en) * 2018-06-27 2018-12-11 北斗星通(重庆)汽车电子有限公司 Vehicle-mounted blind area detection system
CN110796597A (en) * 2019-10-10 2020-02-14 武汉理工大学 Vehicle-mounted surround-view image stitching device based on space-time compensation
CN110796597B (en) * 2019-10-10 2024-02-02 武汉理工大学 Vehicle-mounted surround-view image stitching device based on space-time compensation
CN111968184A (en) * 2020-08-24 2020-11-20 北京茵沃汽车科技有限公司 Method, device and medium for implementing view follow-up in a panoramic surround-view system

Also Published As

Publication number Publication date
JP2008507449A (en) 2008-03-13
CN1985266A (en) 2007-06-20
CN1985266B (en) 2010-05-05
EP1771811A4 (en) 2010-06-09
EP1771811A1 (en) 2007-04-11
JP4543147B2 (en) 2010-09-15

Similar Documents

Publication Publication Date Title
US7576767B2 (en) Panoramic vision system and method
JP4543147B2 (en) Panorama vision system and method
JP4491453B2 (en) Method and apparatus for visualizing the periphery of a vehicle by fusing infrared and visual images depending on the periphery
US9445011B2 (en) Dynamic rearview mirror adaptive dimming overlay through scene brightness estimation
US9858639B2 (en) Imaging surface modeling for camera modeling and virtual view synthesis
US20150042799A1 (en) Object highlighting and sensing in vehicle image display systems
US20150109444A1 (en) Vision-based object sensing and highlighting in vehicle image display systems
US20140114534A1 (en) Dynamic rearview mirror display features
US7557691B2 (en) Obstacle detector for vehicle
US8743202B2 (en) Stereo camera for a motor vehicle
US10506178B2 (en) Image synthesis device for electronic mirror and method thereof
US20130128049A1 (en) Driver assistance system for a vehicle
JP4975592B2 (en) Imaging device
US20020075387A1 (en) Arrangement and process for monitoring the surrounding area of an automobile
JP2008530667A (en) Method and apparatus for visualizing the periphery of a vehicle by fusing infrared and visible images
US20170171444A1 (en) Imaging setting changing apparatus, imaging system, and imaging setting changing method
WO2013157184A1 (en) Rearward visibility assistance device for vehicle, and rear visibility assistance method for vehicle
WO2020195851A1 (en) Vehicle-mounted camera device, and image distortion correction method for same
KR102235951B1 (en) Imaging Apparatus and method for Automobile
JP2019001325A (en) On-vehicle imaging device
WO2017158829A1 (en) Display control device and display control method
DE102013220839B4 (en) A method of dynamically adjusting a brightness of an image of a rear view display device and a corresponding vehicle imaging system
KR20070049109A (en) Panoramic vision system and method
JP7384343B2 (en) Image processing device, image processing program
JP4795813B2 (en) Vehicle perimeter monitoring device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2004779083

Country of ref document: EP

Ref document number: 6957/DELNP/2006

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 200480043148.8

Country of ref document: CN

ENP Entry into the national phase

Ref document number: 2007523514

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 1020067025482

Country of ref document: KR

NENP Non-entry into the national phase

Ref country code: DE

WWP Wipo information: published in national office

Ref document number: 2004779083

Country of ref document: EP