US9270872B2 - Apparatus, systems, and methods for removing shading effect from image - Google Patents

Apparatus, systems, and methods for removing shading effect from image

Info

Publication number
US9270872B2
US9270872B2 US14/089,908 US201314089908A
Authority
US
United States
Prior art keywords
image
lighting
correction
correction mesh
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/089,908
Other versions
US20150146038A1 (en)
Inventor
David Donohoe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MOVIDIUS LIMITED
Original Assignee
Linear Algebra Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Linear Algebra Technologies Ltd filed Critical Linear Algebra Technologies Ltd
Priority to US14/089,908
Assigned to LINEAR ALGEBRA TECHNOLOGIES LIMITED. Assignment of assignors interest (see document for details). Assignors: DONOHOE, DAVID
Priority to PCT/IB2014/003059 (published as WO2015079320A1)
Publication of US20150146038A1
Application granted
Publication of US9270872B2
Assigned to MOVIDIUS LIMITED by merger (see document for details). Assignor: LINEAR ALGEBRA TECHNOLOGIES LIMITED
Legal status: Active
Adjusted expiration

Classifications

    • H04N5/2173
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88 Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61 Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H04N5/2351
    • H04N5/3572
    • H04N9/045
    • H04N9/735

Definitions

  • the present application relates generally to image processing.
  • the present application relates to removing shading effects in images.
  • An image sensor can be used to capture color information about a scene.
  • the image sensor can include pixel elements that are configured to respond differently to different wavelengths of light, much like a human visual system.
  • a pixel element of an image sensor can achieve such color selectivity using a color filter, which filters the incoming light reaching the pixel element based on the light's wavelength.
  • the color filters for the plurality of pixel elements can be arranged in an array as well. Such color filters are often referred to as a color filter array (CFA).
  • FIG. 1A illustrates a Bayer CFA.
  • the Bayer CFA 102 can include a plurality of color filters (e.g., Gr 104 , R 106 , B 108 , and Gb 110 ), each of which filters the incoming light reaching a pixel element based on the light's wavelength.
  • a pixel underlying a green filter 104 can capture light with a wavelength in the range of the color “green”; a pixel underlying a red filter 106 can capture light with a wavelength in the range of the color “red”; and a pixel underlying a blue filter 108 can capture light with a wavelength in the range of the color “blue.”
  • the Bayer CFA 102 can be overlaid on the pixel elements so that the underlying pixel elements only observe the light that passes through the overlaid filter.
  • the Bayer CFA 102 can arrange the color filters in a checkerboard pattern. In the Bayer CFA 102 , there are twice as many green filters 104 , 110 as there are red filters 106 or blue filters 108 . There may be other types of CFAs. Different CFAs differ in (1) the filters used to pass selected wavelengths of light and/or (2) the arrangement of filters in the array.
  • each color channel (e.g., Red, Green, Blue) can be separated into separate “channels.”
  • FIG. 1B illustrates the Green channel 120 of the captured image 102 .
  • the Green channel 120 includes pixels with missing values 118 because those pixels were used to capture other colors (e.g., Red or Blue).
  • These missing values 118 can be interpolated from neighboring pixels 110 , 112 , 114 , 116 to fill in the missing value. This process can be repeated for other color channels. By stacking the interpolated color channels, a color image can be generated.
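As a concrete illustration of the interpolation just described, the sketch below (not part of the patent; it assumes the green plane is held in a NumPy array with the non-green positions marked as NaN) fills each missing green sample with the average of its valid four-neighbors:

```python
import numpy as np

def fill_green_channel(green):
    """Fill missing green samples (marked NaN) with the mean of their valid 4-neighbors.

    `green` is a 2-D float array holding the green plane of a Bayer mosaic,
    with NaN at positions covered by red or blue filters.
    """
    filled = green.copy()
    rows, cols = green.shape
    for y in range(rows):
        for x in range(cols):
            if np.isnan(green[y, x]):
                neighbors = [green[y + dy, x + dx]
                             for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                             if 0 <= y + dy < rows and 0 <= x + dx < cols
                             and not np.isnan(green[y + dy, x + dx])]
                if neighbors:
                    filled[y, x] = np.mean(neighbors)
    return filled
```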
  • the shading effects refer to a phenomenon in which the brightness of an image is reduced. In some cases, the shading effects can vary as a function of spatial location in an image.
  • One of the prominent spatially-varying shading effects is referred to as the color non-uniformity effect.
  • the color non-uniformity effect refers to a phenomenon in which a color of a captured image varies spatially, even when the physical properties of the light (e.g., the amount of light and/or the wavelength of the captured light) captured by the image sensor are uniform across spatial locations in the image sensor.
  • a typical symptom of a color non-uniformity effect can include a green tint at the center of an image, which fades into a magenta tint towards the edges of an image. This particular symptom has been referred to as the “green spot” issue.
  • the color non-uniformity effect can be prominent when a camera captures an image of white or gray surfaces, such as a wall or a piece of paper.
  • the vignetting effect refers to a phenomenon in which less light reaches the corners of an image sensor compared to the center of an image sensor. This results in decreasing brightness as one moves away from the center of an image and towards the edges of the image.
  • FIG. 2 illustrates a typical vignetting effect. When a camera is used to capture an image 200 of a uniform white surface, the vignetting effect can render the corners of the image 202 darker than the center of the image 204 .
  • the disclosed embodiments include an apparatus.
  • the apparatus can be configured to remove a shading effect from an image.
  • the apparatus can include one or more interfaces configured to provide communication with an imaging module that is configured to capture the image; and a processor, in communication with the one or more interfaces, configured to run a module stored in memory.
  • the module can be configured to receive the image captured by the imaging module under a first lighting spectrum; receive a per-unit correction mesh for adjusting images captured by the imaging module under a second lighting spectrum; determine a correction mesh for the image captured under the first lighting spectrum based on the per-unit correction mesh for the second lighting spectrum; and operate the correction mesh on the image to remove the shading effect from the image.
  • the module is further configured to determine that the image was captured under the first lighting spectrum using an automated white balance technique.
  • the module is further configured to determine the correction mesh for the image based on the first lighting spectrum of the image.
  • the module is further configured to determine, using the automated white balance technique, that the first lighting spectrum of the image is substantially similar to one of a predetermined set of lighting spectra, receive a prediction function associated with the one of the predetermined set of lighting spectra, and apply the prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
  • the prediction function comprises a linear function.
  • the prediction function is based on characteristics of image sensors having an identical image sensor type as an image sensor in the imaging module.
  • the prediction function is associated only with the portion of the per-unit correction mesh.
  • the module is configured to determine, using the automated white balance technique, that the first lighting spectrum of the image is substantially similar to a linear combination of two or more lighting spectra, receive prediction functions associated with the two or more lighting spectra, combine the prediction functions to generate a final prediction function, and apply the final prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
  • the apparatus is a part of a camera module in a mobile device.
  • the disclosed embodiments include a method for removing a shading effect on an image.
  • the method can include receiving, at a correction module of a computing system, the image captured under a first lighting spectrum from an imaging module over an interface of the computing system; receiving, at the correction module, a per-unit correction mesh for adjusting images captured by the imaging module under a second lighting spectrum; determining, at the correction module, a correction mesh for the image captured under the first lighting spectrum based on the per-unit correction mesh for the second lighting spectrum; and operating, at the correction module, the correction mesh on the image to remove the shading effect from the image.
  • the method further includes determining that the image was captured under the first lighting spectrum using an automated white balance technique.
  • the method further includes determining the correction mesh for the image based on the first lighting spectrum of the image.
  • the method further includes determining, using the automated white balance technique, that the first lighting spectrum of the image is substantially similar to one of a predetermined set of lighting spectra, receiving a prediction function associated with the one of the predetermined set of lighting spectra, and applying the prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
  • the prediction function is based on characteristics of image sensors having an identical image sensor type as an image sensor in the imaging module.
  • the method further includes determining, using the automated white balance technique, that the first lighting spectrum of the image is substantially similar to a linear combination of two or more lighting spectra, receiving prediction functions associated with the two or more lighting spectra, combining the prediction functions to generate a final prediction function, and applying the final prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
  • the disclosed embodiments include a non-transitory computer readable medium having executable instructions associated with a correction module.
  • the executable instructions are operable to cause a data processing apparatus to receive an image captured under a first lighting spectrum from an imaging module in communication with the data processing apparatus; retrieve, from a memory device, a per-unit correction mesh for adjusting images captured by the imaging module under a second lighting spectrum; determine a correction mesh for the image captured under the first lighting spectrum based on the per-unit correction mesh for the second lighting spectrum; and operate the correction mesh on the image to remove a shading effect from the image.
  • the computer readable medium further includes executable instructions operable to cause the data processing apparatus to determine that the image was captured under the first lighting spectrum using an automated white balance technique.
  • the computer readable medium further includes executable instructions operable to cause the data processing apparatus to determine the correction mesh for the image based on the first lighting spectrum of the image.
  • the computer readable medium further includes executable instructions operable to cause the data processing apparatus to receive a prediction function associated with one of a predetermined set of lighting spectra, and apply the prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
  • the prediction function is based on characteristics of image sensors having an identical image sensor type as an image sensor in the imaging module.
  • the disclosed embodiments include an apparatus.
  • the apparatus is configured to determine a prediction function associated with a first lighting spectrum for a portion of a particular image sensor based on image sensors having an identical image sensor type as the particular image sensor.
  • the apparatus includes a processor configured to run one or more modules stored in memory that is configured to receive portions of a plurality of images associated with the first lighting spectrum taken by a first plurality of image sensors having the identical image sensor type as the particular image sensor; combine the portions of the plurality of images to generate a combined image portion for the first lighting spectrum; determine a first correction mesh for the first lighting spectrum based on the combined image portion for the first lighting spectrum; receive a plurality of correction meshes associated with a second lighting spectrum for a second plurality of image sensors; and determine the prediction function, for the portion of the particular image sensor, that models a relationship between the first correction mesh associated with the first lighting spectrum and the plurality of correction meshes associated with the second lighting spectrum, thereby providing the prediction function for the particular image sensor without relying on any images taken by the particular image sensor.
  • the one or more modules is configured to minimize, in part, a sum of squared differences between values of the first correction mesh associated with the first lighting spectrum and values of the plurality of correction meshes associated with the second lighting spectrum.
  • the prediction function comprises a linear function.
  • the one or more modules is configured to provide the prediction function to an imaging module that embodies the particular image sensor.
  • the first plurality of image sensors and the second plurality of image sensors comprise an identical set of image sensors.
  • the portions of the plurality of images comprises a single pixel of the plurality of images at an identical location in the plurality of images.
  • the one or more modules is configured to communicate with a correction module, which is configured to: receive an image from the particular image sensor; retrieve a per-unit correction mesh associated with the particular image sensor, wherein the per-unit correction mesh is associated with the second lighting spectrum; determine a correction mesh for a portion of the image by operating the prediction function on a portion of the per-unit correction mesh; and operate the correction mesh on a portion of the image to remove a shading effect from the portion of the image.
  • the apparatus is a part of a mobile device.
  • the disclosed embodiments include a method for determining a prediction function associated with a first lighting spectrum for a portion of a particular image sensor based on image sensors having an identical image sensor type as the particular image sensor.
  • the method can include receiving, at a sensor type calibration module of an apparatus, portions of a plurality of images associated with the first lighting spectrum taken by a first plurality of image sensors having the identical image sensor type as the particular image sensor; combining, by the sensor type calibration module, the portions of the plurality of images to generate a combined image portion for the first lighting spectrum; determining, by the sensor type calibration module, a first correction mesh for the first lighting spectrum based on the combined image portion for the first lighting spectrum; receiving, at a prediction function estimation module in the apparatus, a plurality of correction meshes associated with a second lighting spectrum for a second plurality of image sensors; and determining, by the prediction function estimation module, the prediction function, for the portion of the particular image sensor, that models a relationship between the first correction mesh associated with the first lighting spectrum and the plurality of correction meshes associated with the second lighting spectrum, thereby providing the prediction function for the particular image sensor without relying on any images taken by the particular image sensor.
  • determining the prediction function comprises minimizing, in part, a sum of squared differences between the first correction mesh associated with the first lighting spectrum and the plurality of correction meshes associated with the second lighting spectrum.
  • the prediction function comprises a linear function.
  • the method includes providing, by the prediction function estimation module, the prediction function to an imaging module that embodies the particular image sensor.
  • the first plurality of image sensors and the second plurality of image sensors comprise an identical set of image sensors.
  • the portions of the plurality of images comprises a single pixel of the plurality of images at an identical grid location in the plurality of images.
  • the method includes receiving, at a correction module in communication with the particular image sensor, an image from the particular image sensor; retrieving, by the correction module, a per-unit correction mesh associated with the particular image sensor, wherein the per-unit correction mesh is associated with the second lighting spectrum; determining, by the correction module, a correction mesh for a portion of the image by operating the prediction function on a portion of the per-unit correction mesh; and operating, by the correction module, the correction mesh on a portion of the image to remove a shading effect from the portion of the image.
  • the disclosed embodiments include a non-transitory computer readable medium having executable instructions operable to cause a data processing apparatus to determine a prediction function associated with a first lighting spectrum for a portion of a particular image sensor based on image sensors having an identical image sensor type as the particular image sensor.
  • the executable instructions can be operable to cause the data processing apparatus to receive portions of a plurality of images associated with the first lighting spectrum taken by a first plurality of image sensors having the identical image sensor type as the particular image sensor; combine the portions of the plurality of images to generate a combined image portion for the first lighting spectrum; determine a first correction mesh for the first lighting spectrum based on the combined image portion for the first lighting spectrum; receive a plurality of correction meshes associated with a second lighting spectrum for a second plurality of image sensors; and determine the prediction function, for the portion of the particular image sensor, that models a relationship between the first correction mesh associated with the first lighting spectrum and the plurality of correction meshes associated with the second lighting spectrum, thereby providing the prediction function for the particular image sensor without relying on any images taken by the particular image sensor.
  • the computer readable medium can also include executable instructions operable to cause the data processing apparatus to minimize, in part, a sum of squared differences between the first correction mesh associated with the first lighting spectrum and the plurality of correction meshes associated with the second lighting spectrum.
  • the computer readable medium can also include executable instructions operable to cause the data processing apparatus to provide the prediction function to an imaging module that embodies the particular image sensor.
  • the portions of the plurality of images comprises a single pixel of the plurality of images at an identical grid location in the plurality of images.
  • the computer readable medium can also include executable instructions operable to cause the data processing apparatus to communicate with a correction module, which is configured to: receive an image from the particular image sensor; retrieve a per-unit correction mesh associated with the particular image sensor, wherein the per-unit correction mesh is associated with the second lighting spectrum; determine a correction mesh for a portion of the image by operating the prediction function on a portion of the per-unit correction mesh; and operate the correction mesh on a portion of the image to remove a shading effect from the portion of the image.
  • FIGS. 1A-1B illustrate a Bayer color filter array and an interpolation technique for generating a color image.
  • FIG. 2 illustrates a typical vignetting effect.
  • FIG. 3 illustrates an imaging system that corrects a shading effect in an image in accordance with some embodiments.
  • FIG. 4 illustrates a correction mesh estimation process during a training stage in accordance with some embodiments.
  • FIG. 5 illustrates a shading correction process during a run-time stage in accordance with some embodiments.
  • the strength of the spatially-variant shading effects can depend on an individual camera's characteristics.
  • the strength of a vignetting effect can depend on the mechanical and optical design of a camera.
  • the strength of a color non-uniformity effect can depend on an individual image sensor's characteristics such as a geometry of pixels in an image sensor.
  • the color non-uniformity effect can be particularly pronounced in image sensors for mobile devices, such as a cellular phone.
  • in mobile devices, there is a need to keep the image sensor in a small form factor while retaining a high pixel resolution. This results in very small pixel geometries (on the order of 1.7 μm), which can aggravate the color non-uniformity effect.
  • Crosstalk refers to a phenomenon in which the light passing through a given “tile” (e.g., pixel) of the CFA is not registered (or accumulated) solely by the pixel element underneath it, but also contributes to the surrounding sensor elements, thereby increasing the value of the neighboring pixels for different colors.
  • crosstalk was not a big problem because it could be corrected using a global color correction matrix, which removes the crosstalk effect globally (e.g., uniformly) across the image.
  • spatially-varying crosstalk has become more prominent. With smaller pixels, the crosstalk effect is more prominent at the edge of an image sensor because more light reaches the edge of an image sensor at an oblique angle. This results in a strong, spatially-varying color shift.
  • Such spatially-varying crosstalk effects are hard to remove because the spatially varying crosstalk effect is not always aligned with the center of the image, nor is it perfectly radial. Therefore, it is hard to precisely model the shape in which the spatially-varying crosstalk effect is manifested.
  • the crosstalk pattern can vary significantly from one image sensor to another due to manufacturing process variations. Oftentimes, the crosstalk pattern can depend heavily on the optical filters placed in front of the image sensor, such as an infrared (IR) cutoff filter. Moreover, the crosstalk effect can depend on the spectral power distribution (SPD) of the light reaching the image sensor. For example, an image of a white paper captured under sunlight provides a completely different color shift compared to an image of the same white paper captured under fluorescent light. Therefore, removing spatially-varying, sensor-dependent shading, resulting from crosstalk effects or vignetting effects, is a challenging task.
  • One approach to removing the spatially-varying, sensor-dependent shading from an image is (1) determining a gain factor that should be applied to each pixel in the image to “un-do” (or compensate for) the shading effect of the image sensor and (2) multiplying each pixel of the image with the corresponding gain factor.
  • because the gain factor for each pixel would depend on (1) an individual sensor's characteristics and (2) the lighting profile under which the image was taken, the gain factor should be determined for every pixel in the image, for all sensors of interest, and for all lighting conditions of interest. Therefore, this process can be time consuming and inefficient.
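In code, this brute-force approach is just an element-wise multiplication; the difficulty noted above is that a separate gain map would have to be measured for every sensor and every lighting condition. A minimal sketch with illustrative names, not the patent's implementation:

```python
import numpy as np

def apply_gain_map(image, gain_map):
    """Compensate shading by multiplying each pixel with its per-pixel gain factor.

    `image` and `gain_map` are 2-D arrays of the same shape; a different
    gain_map would be needed per sensor and per lighting condition.
    """
    return image.astype(np.float64) * gain_map
```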
  • the disclosed apparatus, systems, and methods relate to effectively removing sensor-dependent, lighting-dependent shading effects from images.
  • the disclosed shading removal mechanism does not make any assumptions about the spatial pattern in which shading effects are manifested in images. For example, the disclosed shading removal mechanism does not assume that the shading effects follow a radial pattern or a polynomial pattern.
  • the disclosed shading removal mechanism avoids predetermining the spatial pattern of the shading effects to retain high flexibility and versatility.
  • the disclosed shading removal mechanism is configured to model shading characteristics of an image sensor so that shading effects from images, captured by the image sensor, can be removed using the modeled shading characteristics.
  • the disclosed shading removal mechanism can model the shading characteristics of an image sensor using a correction mesh.
  • the correction mesh can include one or more parameters with which an image can be processed to remove the shading effects.
  • the correction mesh can include one or more gain factors to be multiplied to one or more pixel values in an image in order to compensate for the shading effect.
  • the correction mesh can have unit value (e.g., a value of “1”) near the center of the correction mesh since there is not any shading effect to remove at the center.
  • the correction mesh can have a larger value (e.g., a value of “5”) near the corners because the shading effect is more prominent in the corners and the pixel values at the corner need to be amplified to counteract the shading effect.
  • the correction mesh can have the same spatial dimensionality as the image sensor.
  • for example, when the image sensor has N×M pixels, the correction mesh can also have N×M pixels.
  • the correction mesh can have a lower spatial dimensionality compared to the image sensor. Because the shading can vary slowly across the image, the correction mesh does not need to have the same resolution as the image sensor to fully compensate for the shading effect. Instead, the correction mesh can have a lower spatial dimensionality compared to the image sensor so that the correction mesh can be stored in a storage medium with a limited capacity, and can be up-sampled prior to being applied to correct the shading effect.
  • the correction mesh can be determined based on an image of a highly uniform scene captured using an image sensor of interest.
  • the captured image of a highly uniform scene should be a uniform image, having the same pixel value everywhere across the image.
  • in practice, however, the captured image of a highly uniform scene is not uniform because of the shading effects. Therefore, the captured image of a highly uniform scene can be used to determine the gain factors needed to remove the shading effect.
  • the highly uniform scene can be a smooth white surface; in other cases, the highly uniform scene can be a light field output by an integrating sphere that can provide uniform light rays.
  • the correction mesh can be generated by inverting the value of each pixel of the captured highly uniform scene image:
  • C(x,y) = 1 / I(x,y), where (x,y) represents a coordinate of the pixel; I(x,y) represents a pixel value of the white surface image captured by the image sensor at position (x,y); and C(x,y) represents a value of the correction mesh at position (x,y).
  • the captured image can be filtered using a low-pass filter, such as a Gaussian filter, before being inverted:
  • the correction filter C(x,y) = 1 / (G(x,y) ⊗ I(x,y))
  • where G(x,y) is a low-pass filter and ⊗ is a convolution operator.
  • the Gaussian filter can be 7×7 pixels. This filtering step can be beneficial when the image sensor is noisy.
  • when the correction filter C(x,y) is designed to have a lower resolution compared to the image sensor, the correction filter C(x,y) can be computed by inverting a sub-sampled version of the low-pass filtered image:
  • C(w,z) = 1 / ↓(G(x,y) ⊗ I(x,y))
  • where ↓(•) indicates a down-sampling operator and (w,z) refers to the down-sampled coordinate system.
  • the subsampling operation can save memory and bandwidth at runtime. In some embodiments, it is desirable to reduce the size of the correction mesh for memory and bandwidth benefits, but it should be large enough to avoid artifacts when it is up-sampled to the image resolution at run-time.
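The mesh-generation steps above (Gaussian low-pass, optional down-sampling, pixel-wise inversion) can be sketched as follows; the function name, the SciPy Gaussian filter, the down-sampling by striding, and the centre normalisation to unit gain are illustrative choices, not prescribed by the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_correction_mesh(flat_field, sigma=2.0, factor=8):
    """Derive a low-resolution correction mesh C(w,z) from a flat-field image I(x,y).

    flat_field : 2-D array captured from a highly uniform scene.
    sigma      : width of the Gaussian low-pass filter G (illustrative value).
    factor     : down-sampling factor; the mesh is up-sampled again at run-time.
    """
    smoothed = gaussian_filter(flat_field.astype(np.float64), sigma)  # G(x,y) convolved with I(x,y)
    downsampled = smoothed[::factor, ::factor]                        # down-sampling operator
    mesh = 1.0 / downsampled                                          # pixel-wise inversion
    # Normalise so the centre of the mesh has unit gain, as the text describes.
    return mesh / mesh[mesh.shape[0] // 2, mesh.shape[1] // 2]
```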
  • each color channel of an image sensor can have its own separate correction mesh. This allows the disclosed shading correction mechanism to address not only intensity shading effects, but also color shading effects, such as the color non-uniformity.
  • when the CFA of the image sensor includes red 106 , green 104 , and blue 108 pixels, as illustrated in FIG. 1A ,
  • the shading correction mechanism can use four correction meshes, one correction mesh for the red color channel, referred to as C R , one correction mesh for the blue color channel, referred to as C B , one correction mesh for the green color pixels (Gr) 104 that are laterally adjacent to red pixels, referred to as C Gr , and one correction mesh for the green color pixels (Gb) 110 that are laterally adjacent to blue pixels, referred to as C Gb .
  • the level of crosstalk between Gr 104 and red 106 or blue 108 pixels is not the same as the level of crosstalk between Gb 110 and red 106 or blue 108 pixels, at a given local area of the sensor. Therefore, it can be beneficial to use four correction meshes for an image sensor with a three-color (RGB) CFA.
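To make the four-mesh idea concrete, a raw frame can be separated into its Gr, R, B and Gb sites so that each plane gets its own mesh. The sketch below assumes a GRBG-ordered mosaic (Gr at the top-left of each 2×2 quad); the actual layout of a given sensor may differ:

```python
import numpy as np

def split_bayer_grbg(raw):
    """Split a GRBG Bayer mosaic into its four quarter-resolution colour planes.

    The keys match the four correction meshes C_Gr, C_R, C_B and C_Gb.
    """
    return {
        "Gr": raw[0::2, 0::2],
        "R":  raw[0::2, 1::2],
        "B":  raw[1::2, 0::2],
        "Gb": raw[1::2, 1::2],
    }
```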
  • the amount of shading effects can be dependent on the light spectrum under which an image was taken. Therefore, in some embodiments, the correction meshes can be dependent on an input light spectrum, also referred to as an input light profile.
  • Such spectrum-dependent correction meshes can be referred to as C R,λ (x,y), C B,λ (x,y), C Gr,λ (x,y), and C Gb,λ (x,y) to denote the dependence on the spectrum λ.
  • this approach would involve determining the correction mesh for each image sensor for all input light spectra, independently. This process can quickly become unwieldy when there are many image sensors of interest and when the image sensors are expected to operate in a wide range of input light spectra. For example, when an image sensor manufacturer or an electronic system manufacturer sells a large number of image sensors, it would be hard to determine a correction mesh for each of the image sensors across all light spectra of interest. Even if the manufacturers can determine correction meshes for all light spectra of interest, the correction meshes should be stored on the device that would perform the actual shading correction. If the actual shading correction is performed on computing devices with limited memory resources, such as mobile devices or cellular phones, storing such a large number of correction meshes can be, by itself, a challenging and expensive task.
  • the disclosed shading correction mechanism avoids computing and storing such a large number of correction meshes. Instead, the disclosed shading correction mechanism uses a computational technique to predict an appropriate correction mesh for an image captured by a particular image sensor. For example, the disclosed shading correction mechanism can analyze the captured image to determine an input lighting spectrum associated with the captured image. The input lighting spectrum can refer to a lighting profile of the light source illuminating the scene captured by the image. Subsequently, the disclosed shading correction mechanism can estimate an appropriate correction mesh for the determined input lighting spectrum based on (1) known characteristics about the particular image sensor with which the image was captured and (2) typical characteristics of image sensors having the same image sensor type as the particular image sensor.
  • the known characteristics about the particular image sensor can include a correction mesh of the particular image sensor for a predetermined input light spectrum, which may be different from the determined input lighting spectrum for the captured image.
  • the typical characteristics of image sensors having the same image sensor type can include one or more correction meshes of typical image sensors of the image sensor type with which the particular image sensor is also associated.
  • the typical characteristics of image sensors having the same image sensor type can include one or more correction meshes associated with an “average” image sensor of the image sensor type for a predetermined set of input light spectra, which may or may not include the determined input lighting spectrum for the captured image.
  • the disclosed shading correction mechanism can be configured to predict the correction mesh of the particular image sensor for the determined input lighting spectrum by converting the correction mesh of the particular image sensor for a predetermined input light spectrum (which may be distinct from the determined input light spectrum of the captured image) into a correction mesh for the determined input light spectrum by taking into account the correction meshes associated with an “average” image sensor of the image sensor type.
  • this function ƒ λ D , which converts the per-unit correction mesh C i,λ p into the correction mesh C i,λ D , can depend on the determined light spectrum λ D of the input image (hence the subscript λ D ) and the image sensor type associated with the sensor i.
  • This shading correction scheme is useful and efficient because the shading correction scheme can determine the correction mesh C i, ⁇ p of the particular sensor i only once for a predetermined spectrum ⁇ p , and adapt it to be applicable to a wide range of lighting profiles ⁇ D using the prediction function ⁇ ⁇ D .
  • This scheme can reduce the number of correction meshes to be determined for a particular imaging module 302 , thereby improving the efficiency of the shading correction.
  • the disclosed shading correction scheme may not require storing all correction meshes for all lighting spectra of interest, which can be expensive.
  • FIG. 3 illustrates an imaging system 300 that carries out the disclosed shading correction scheme in accordance with some embodiments.
  • the imaging system 300 can include an imaging module 302 , which can include one or more of a lens 304 , an image sensor 306 , and/or an internal imaging module memory device 322 .
  • the imaging system 300 can also include a computing system 308 , which can include one or more of a processor 310 , a memory device 312 , a per-unit calibration module 314 , a sensor type calibration module 316 , a correction module 320 , an internal interface 324 and/or one or more interfaces 326 .
  • the lens 304 can include an optical device that is configured to collect light rays from an imaging scene entering the imaging module 302 and form an image of the imaging scene on the image sensor 306 .
  • the image sensor 306 can include an electronic device that is configured to convert light rays into electronic signals.
  • the image sensor 306 can include one or more of a digital charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) pixel elements, also referred to as pixel sensors.
  • the internal image module memory device 322 can include a computer readable medium, flash memory, a magnetic disk drive, an optical drive, a programmable read-only memory (PROM), a read-only memory (ROM), or any other suitable memory or combination of memories.
  • the internal image module memory device 322 can be configured to maintain or store a per-unit correction mesh for the imaging module 302 , as described further below.
  • the imaging module 302 can be coupled to a computing device 308 over an interface 326 .
  • the memory device 312 of the computing device 308 can include a computer readable medium, flash memory, a magnetic disk drive, an optical drive, a programmable read-only memory (PROM), a read-only memory (ROM), or any other suitable memory or combination of memories.
  • the memory 312 can maintain or store software and/or instructions that can be processed by the processor 310 . In some embodiments, the memory 312 can also maintain correction meshes and/or parameters for the prediction function.
  • the processor 310 can communicate with the memory 312 and interface 326 to communicate with other devices, such as an imaging module 302 or any other computing devices, such as a desktop computer or a server in a data center.
  • the processor 310 can include any applicable processor such as a system-on-a-chip that combines one or more of a central processing unit (CPU), an application processor, and flash memory, or a reduced instruction set computing (RISC) processor.
  • the interface 326 can provide an input and/or output mechanism to communicate with other network devices.
  • the interface can be implemented in hardware to send and receive signals in a variety of mediums, such as optical, copper, and wireless, and in a number of different protocols, some of which may be non-transient.
  • the processor 310 can be configured to run one or more modules.
  • the one or more modules can include the per-unit calibration module 314 configured to determine a correction mesh for the image sensor 306 for a specific lighting profile.
  • the one or more modules can also include a sensor type calibration module 316 configured to determine one or more correction meshes for typical image sensors of the same type as the particular image sensor 306 for a predetermined set of lighting spectra.
  • the one or more modules can also include a prediction function estimation module 318 configured to estimate the prediction function for the image sensor 306 .
  • the one or more modules can also include the correction module 320 that is configured to apply the predicted correction mesh to remove the shading effect in images captured by the image sensor 306 .
  • the one or more modules can include any other suitable module or combination of modules.
  • although modules 314 , 316 , 318 , and 320 are described as separate modules, they may be combined in any suitable combination of modules.
  • the processor 310 , the memory device 312 , and the modules 314 , 316 , 318 , 320 can communicate via an internal interface 324 .
  • the disclosed shading correction mechanism can operate in two stages: a training stage and a run-time stage.
  • during the training stage, which may be performed during device production and/or at a laboratory, the disclosed shading correction mechanism can determine a computational function that is capable of generating a correction mesh for an image sensor of interest.
  • the computational function can be determined based on characteristics of image sensors that are similar to the image sensor of interest.
  • during the run-time stage, the disclosed shading correction mechanism can estimate the lighting condition under which the image was taken, use the computational function corresponding to the estimated lighting condition to estimate a correction mesh for the image, and apply the estimated correction mesh to remove shading effects from the image.
  • FIG. 4 illustrates a correction mesh estimation process during a training stage in accordance with some embodiments.
  • the per-unit (PU) calibration module 314 can be configured to determine a correction mesh for the imaging module 302 for a specific input light spectrum ⁇ p .
  • the specific light spectrum ⁇ p can be predetermined.
  • the specific light spectrum λ p can be a continuous light spectrum, for instance, a lighting spectrum corresponding to an outdoor scene in the shade under daylight, an outdoor scene under cloudy daylight, an outdoor scene under direct sunlight, or an incandescent light.
  • the specific light spectrum ⁇ p can have a spiky profile, for instance, fluorescent lighting spectrum.
  • the specific light spectrum ⁇ p can be one of the examples provided above. In other embodiments, the specific light spectrum ⁇ p can include two or more of the examples provided above.
  • the correction mesh for the specific light spectrum ⁇ p can be referred to as a per-unit correction mesh, identified as C i, ⁇ p .
  • the PU calibration module 314 can determine the per-unit correction mesh C i, ⁇ p based on a general correction mesh generation procedure described above. For example, the PU calibration module 314 can receive an image I i, ⁇ p (x,y) of a uniform, monochromatic surface from the image sensor 306 , where the image I i, ⁇ p (x,y) is captured under the predetermined specific light profile ⁇ p . Then the PU calibration module 314 can optionally filter the image I i, ⁇ p (x,y) using a low-pass filter, such as a Gaussian filter G(x,y), to generate a filtered image G(x,y) I i, ⁇ p (x,y).
  • the calibration module 314 can optionally down-sample the filtered image G(x,y) ⊗ I i,λ p (x,y) and compute a pixel-wise inverse of the down-sampled, filtered image to generate a per-unit correction mesh C i,λ p (w,z) = 1 / ↓(G(x,y) ⊗ I i,λ p (x,y)).
  • the PU calibration module 314 can stack four adjacent pixels in a 2×2 grid, e.g., Gr 104 , R 106 , B 108 , Gb 110 , of the image I i,λ p (x,y) to form a single pixel, thereby reducing the size of the image to a quarter.
  • the stacked, four-dimensional image is referred to as a stacked input image I i, ⁇ p (x,y), where each pixel has four values.
  • the PU calibration module 314 can then perform the above process to compute a four-dimensional correction mesh C i, ⁇ p (w,z), where each dimension is associated with one of the Gr channel, R channel, B channel, and Gb channel:
  • the correction mesh C i,λ p (w,z) refers to [C Gr,λ p (w,z), C R,λ p (w,z), C B,λ p (w,z), C Gb,λ p (w,z)].
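The 2×2 stacking described above can be sketched as follows, again assuming a GRBG layout; the result has one four-valued pixel per quad, matching the four-dimensional per-unit correction mesh:

```python
import numpy as np

def stack_bayer_2x2(raw):
    """Stack each 2x2 GRBG quad into one pixel with four values [Gr, R, B, Gb].

    Input of shape (H, W) becomes an array of shape (H // 2, W // 2, 4).
    """
    return np.stack(
        (raw[0::2, 0::2], raw[0::2, 1::2], raw[1::2, 0::2], raw[1::2, 1::2]),
        axis=-1,
    )
```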
  • the correction mesh C i, ⁇ p (w,z) can be stored in the internal image module memory device 322 and/or a memory device 312 .
  • the per-unit correction mesh C i, ⁇ p (w,z) can be computed at production time.
  • the imaging module 302 includes a memory device 322
  • the imaging module manufacturer can use the PU calibration module 314 to compute the per-unit correction mesh C i, ⁇ p (w,z) and store the per-unit correction mesh C i, ⁇ p (w,z) in the memory device 322 .
  • the PU calibration module 314 can compute the per-unit correction mesh C i, ⁇ p (w,z) when the manufacturer of an electronic system that embodies the imaging module 302 builds the electronic system. For example, when the imaging module 302 is embodied in a cell phone, the manufacturer of the cell phone can use the PU calibration module 314 to compute the per-unit correction mesh C i, ⁇ p (w,z) and store the per-unit correction mesh C i, ⁇ p (w,z) in the cell phone's memory.
  • the PU calibration module 314 can compute the per-unit correction mesh C i,λ p (w,z) with the help of a user of the imaging module 302 .
  • the user of the imaging module 302 or an electronic device can be requested to take an image of a uniform surface before the user actually starts to use the imaging module 302 .
  • the PU calibration module 314 can use the image of the uniform surface to compute the per-unit correction mesh C i,λ p (w,z) before the user actually starts using the imaging module 302 .
  • because the per-unit correction mesh C i,λ p (w,z) is tailored to the particular image sensor 306 for the specific lighting profile λ p , the per-unit correction mesh C i,λ p (w,z) can remove the shading effects only of the particular image sensor 306 and only for the particular lighting profile λ p . Therefore, the direct use of the per-unit correction mesh C i,λ p (w,z) is quite limiting.
  • an imaging module manufacturer or an electronic device manufacturer can gather many per-unit correction meshes C i, ⁇ p (w,z) for typical image sensors of an image sensor type.
  • the gathered per-unit correction meshes can be used by the prediction function (PF) estimation module 318 to generate a prediction function, as described with respect to step 404 .
  • the sensor type (ST) calibration module 316 can generate one or more correction meshes for the image sensor type to which the image sensor 306 belongs.
  • the ST calibration module 316 can characterize the shading characteristics of an image sensor that is typical of the image sensor type to which the image sensor 306 belongs.
  • the image sensor type can be defined as a particular product number assigned to the image sensor.
  • the image sensor type can be defined as a manufacturer of the image sensor. For example, all image sensors manufactured by the same sensor manufacturer can belong to the same image sensor type.
  • the image sensor type can be defined as a particular fabrication facility from which an image sensor is fabricated. For example, all image sensors manufactured from the same fabrication facility can belong to the same image sensor type.
  • the image sensor type can be defined as a particular technology used in the image sensor.
  • the image sensor can be a charge-coupled-device (CCD) type or a complementary metal-oxide-semiconductor (CMOS) type depending on the technology used by a pixel element of an image sensor.
  • the ST calibration module 316 is configured to generate one or more correction meshes for the image sensor type by averaging characteristics of representative image sensors associated with the image sensor type.
  • the ST calibration module 316 is configured to receive images I λ c (x,y) of a uniform, monochromatic surface taken by a set of image sensors that are representative of an image sensor associated with an image sensor type. These images I λ c (x,y) are taken under one λ c of a predetermined set of lighting profiles Λ.
  • the predetermined set of lighting profiles Λ can include one or more lighting profiles that often occur in real-world settings.
  • the predetermined set of lighting profiles Λ can include "Incandescent 2700K," "Fluorescent 2700K," "Fluorescent 4000K," "Fluorescent 6500K," "Outdoor midday sun (6500K)," or any other profiles of interest.
  • the ST calibration module 316 is configured to combine the images taken by these sensors under the same lighting profile, to generate a combined image Ī λ c (x,y), one for each of the predetermined set of lighting profiles Λ.
  • the combination operation can include computing an average of pixels at the same location (x,y) across the images taken by these sensors under the same lighting profile.
  • the ST calibration module 316 is configured to process the average image Ī λ c (x,y) for each lighting profile to generate a reference correction mesh C r,λ c (w,z) for each one λ c of a predetermined set of lighting profiles Λ.
  • the ST calibration module 316 is configured to generate a reference correction mesh C r,λ c (w,z) for each one λ c of a predetermined set of lighting profiles Λ by down-sampling the average image Ī λ c (x,y) and computing an inverse of each of the values in the down-sampled average image Ī λ c (w,z).
  • the reference correction mesh C r,λ c (w,z) could be used to remove the shading effects of an “average” sensor of the image sensor type for the associated one of the predetermined set of lighting profiles Λ. Because the ST calibration module 316 does not need to use an image captured by the image sensor 306 , the ST calibration module 316 can perform the above steps in a laboratory setting, independently of the imaging module 302 .
  • the computing system 308 has access to two sets of correction meshes: the per-unit correction mesh C i,λ p (w,z) for a specific lighting profile λ p , and the one or more reference correction meshes C r,λ c (w,z).
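A sketch of the sensor-type calibration step, under the assumption that the flat-field images have already been channel-stacked to shape (H, W, 4); averaging across sensors, down-sampling and inverting follow the description above, with illustrative names and factors:

```python
import numpy as np

def reference_mesh(flat_fields_for_profile, factor=8):
    """Build the reference correction mesh C_r for one lighting profile.

    flat_fields_for_profile : list of (H, W, 4) arrays from representative
                              sensors of the type, all taken under the same
                              lighting profile.
    """
    mean_image = np.mean(np.stack(flat_fields_for_profile, axis=0), axis=0)
    downsampled = mean_image[::factor, ::factor, :]
    return 1.0 / downsampled
```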
  • the PF estimation module 318 can use these sets of correction meshes to generate a prediction function ⁇ for the image sensor 306 , which is configured to transform the per-unit correction mesh C i, ⁇ p (w,z) for a specific lighting profile ⁇ p to a lighting-adapted correction mesh C i, ⁇ D (w,z) for the lighting spectrum ⁇ D under which an image was taken.
  • the prediction function ⁇ for the image sensor 306 can depend on the lighting spectrum under which an image was taken. Also, the prediction function ⁇ for the image sensor 306 can also depend on the location of the pixel (w,z). Such a light spectrum dependence and a spatial dependence is represented by subscripts ⁇ , (w,z): ⁇ ⁇ ,(w,z) .
  • the prediction function ⁇ for the image sensor 306 can be a linear function, which may be represented as a matrix.
  • the matrix can be a 4×4 matrix since each pixel (w,z) of a correction mesh C can include four gain factors: one for each color channel ([Gr, R, B, Gb]).
  • a set of 4×4 transform matrices M λ,(w,z) , one for each grid location (w,z), can represent the prediction function ƒ λ,(w,z) for a particular light spectrum.
  • the computing system 308 can apply the transform matrices M ⁇ D ,(w,z) , associated with the lighting spectrum ⁇ D under which an input image was taken, to a corresponding set of gain factors in the per-unit correction mesh C i, ⁇ p (w,z) to generate a lighting-adapted correction mesh C i, ⁇ D (w,z) for the image sensor 306 :
  • C i,λ D (w,z) = M λ D ,(w,z) · C i,λ p (w,z)
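With the per-unit mesh stored as an (Hm, Wm, 4) array of [Gr, R, B, Gb] gains and the prediction function as an (Hm, Wm, 4, 4) stack of per-location matrices, the adaptation step above reduces to one einsum; the shapes and names here are assumptions for the sketch, not the patent's data layout:

```python
import numpy as np

def adapt_mesh(per_unit_mesh, transforms):
    """Apply a 4x4 transform matrix at every grid location of the per-unit mesh.

    per_unit_mesh : (Hm, Wm, 4) gains for the reference spectrum lambda_p.
    transforms    : (Hm, Wm, 4, 4) matrices M for the detected spectrum lambda_D.
    Returns the lighting-adapted mesh, shape (Hm, Wm, 4).
    """
    return np.einsum("hwij,hwj->hwi", transforms, per_unit_mesh)
```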
  • the PF estimation module 318 can generate a transform matrix M ⁇ c ,(w,z) for ⁇ c by finding a matrix M ⁇ c ,(w,z) that maps the per-unit correction mesh C i, ⁇ p (w,z) to a reference correction mesh C r, ⁇ c (w,z) for ⁇ c .
  • the transform matrix M ⁇ c ,(w,z) maps a correction mesh for a specific light profile ⁇ p to a correction mesh for one of the predetermined set of lighting profiles ⁇ c used to characterize a typical image sensor of an image sensor type.
  • the PF estimation module 318 can generate a prediction function by modeling a relationship between the reference correction mesh C r,λ c (w,z) for the lighting profile λ c and the per-unit correction meshes of the sample sensors for the specific lighting profile λ p .
  • the PF estimation module 318 can generate the transform matrix M ⁇ c ,(w,z) using a least-squares technique:
  • M λ c ,(w,z) = argmin M { Σ j∈J ‖ C r,λ c (w,z) − M C j,λ p (w,z) ‖² }
  • C j ⁇ J, ⁇ p (w,z) represents correction meshes for the specific lighting profile ⁇ p for all sensors j in the sample set J at that grid location (w,z).
  • These per-unit correction meshes C j ⁇ J, ⁇ p (w,z) can be generated by an imaging module manufacturer or a manufacturer of an electronic device that embodies the imaging module 302 as a part of step 402 .
  • the resulting matrix M ⁇ c ,(w,z) is a matrix that can adapt the per-unit correction mesh C i, ⁇ p (w,z) to a different lighting spectrum ⁇ c .
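The per-location least-squares fit can be sketched with NumPy's `lstsq`. The data layout is assumed for illustration: rows of `per_unit_gains` are the per-unit gain vectors of the sample sensors at grid location (w, z), and `reference_gain` is the reference gain vector at the same location; the fitted 4×4 matrix maps the former onto the latter:

```python
import numpy as np

def fit_transform(per_unit_gains, reference_gain):
    """Least-squares 4x4 matrix M such that M @ c_j approximates c_r for every sample j.

    per_unit_gains : (J, 4) per-unit gains C_{j,lambda_p}(w,z) of the J sample sensors.
    reference_gain : (4,) reference gains C_{r,lambda_c}(w,z) for the target spectrum.
    """
    X = np.asarray(per_unit_gains, dtype=np.float64)           # (J, 4)
    Y = np.tile(np.asarray(reference_gain, dtype=np.float64),  # (J, 4): same target row
                (X.shape[0], 1))
    # Solve X @ M.T ~= Y in the least-squares sense, then transpose to get M.
    M_T, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return M_T.T
```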
  • the PF estimation module 318 can augment the least-squares technique to take into account characteristics of the matrix M:
  • M λ c ,(w,z) = argmin M { Σ j∈J ‖ C r,λ c (w,z) − M C j,λ p (w,z) ‖² + ‖M‖ β }
  • ⁇ M ⁇ ⁇ is a ⁇ -norm of the matrix M, which can prefer a sparse matrix M compared to a non-sparse matrix.
  • the PF estimation module 318 can estimate a non-linear regression function that maps the correction mesh C j ⁇ J, ⁇ p (w,z) for the specific lighting profile ⁇ p to a reference correction mesh C r, ⁇ c (w,z):
  • ƒ λ c ,(w,z) = argmin ƒ { Σ j∈J ‖ C r,λ c (w,z) − ƒ(C j,λ p (w,z)) ‖² }
  • the function ƒ can be a parametric function or a non-parametric function, such as a kernel function.
  • the non-linear function ⁇ ⁇ c ,(w,z) can be estimated using support vector machine techniques, and/or any other supervised learning techniques for regression.
  • because the PF estimation module 318 can generate a transform matrix M λ c ,(w,z) or a non-linear function ƒ λ c ,(w,z) independently for each grid location (w,z), the PF estimation module 318 can generate a transform matrix M λ c ,(w,z) or a non-linear function ƒ λ c ,(w,z) that does not depend on any assumption about the pattern of the shading non-uniformity.
  • the disclosed technique is highly adaptable to different causes of non-uniformity, and therefore to different types of sensors.
  • because the disclosed scheme uses a 4×4 transform matrix, arbitrary crosstalk among color channels at a given sensor location can be corrected.
  • the PF estimation module 318 can provide the prediction function to the memory 312 and/or the internal image module memory 322 via the interface 326 .
  • the prediction function (e.g., the transform matrix M_{π_c,(w,z)} or the non-linear function ƒ_{π_c,(w,z)}) can be generated independently of the particular sensor of interest.
  • the per-unit correction meshes C_{j∈J,π_p}(w,z) do not need to include the per-unit mesh of the particular sensor i of interest. Therefore, the prediction function can be generated once for all image sensors of the image sensor type. As long as an image sensor is typical of an image sensor type, the shading effects in the image sensor can be corrected using the estimated prediction function.
  • FIG. 5 illustrates a shading correction process during a run-time stage in accordance with some embodiments.
  • the image sensor i is configured to capture an image I(x,y) of a scene and provide the captured image I(x,y) to the correction module 320 .
  • the correction module 320 is configured to estimate a lighting condition (e.g., a lighting profile π_D) under which the image I(x,y) was taken.
  • the correction module 320 is configured to use an auto white balance (AWB) technique to determine the lighting profile π_D.
  • the correction module 320 is configured to receive the results of an AWB technique performed at a separate computing device.
  • the AWB technique can select one π_c of the predetermined set of lighting profiles Π as the lighting profile π_D for the captured image.
  • the AWB technique can also detect mixed lighting conditions, in which the lighting profile π_D can be represented as a linear combination of two or more of the predetermined set of lighting profiles Π.
  • the correction module 320 is configured to generate a lighting-adapted correction mesh for the captured image. To this end, the correction module 320 is configured to retrieve the per-unit correction mesh C_{i,π_p}(w,z) for the particular image sensor i and the prediction function corresponding to the determined lighting profile π_D.
  • the correction module 320 can retrieve the prediction function corresponding to the determined lighting profile π_D by retrieving the prediction function (e.g., the transform matrix M_{π_c,(w,z)} or the non-linear function ƒ_{π_c,(w,z)}) for the associated one π_c of the predetermined set of lighting profiles Π.
  • the correction module 320 can retrieve the prediction function corresponding to the determined lighting profile π_D by combining the prediction functions corresponding to the predetermined set of lighting profiles Π.
  • the correction module 320 can linearly combine the transform matrices M_{π_c,(w,z)} to generate the transform matrix M_{π_D,(w,z)} for the determined lighting profile π_D.
  • the correction module 320 can apply the prediction function to the per-unit correction mesh C_{i,π_p}(w,z) of the image sensor i to determine the lighting-adapted correction mesh C_{i,π_D}(w,z).
  • the correction module 320 can subsequently use the lighting-adapted correction mesh C_{i,π_D}(w,z) to remove the shading effect in the image.
  • the correction module 320 can up-sample the lighting-adapted correction mesh C_{i,π_D}(w,z) to C_{i,π_D}(x,y) so that the lighting-adapted correction mesh C_{i,π_D}(x,y) has the same dimensions as an input image I(x,y).
  • the correction module 320 can be configured to organize the gain factors for the color channels [Gr, R, B, Gb] in accordance with the Bayer CFA pattern of the input image I(x,y).
  • the correction module 320 can be configured to multiply, in a pixel-by-pixel manner, the lighting-adapted correction mesh C_{i,π_D}(x,y) and the input image I(x,y) to remove the shading effect on the input image I(x,y). (A run-time sketch of this flow is given after this list.)
  • the disclosed shading correction scheme is effective because it takes into account both the sensor-specific characteristics, such as the per-unit correction mesh C_{i,π_p}(w,z), and the typical characteristics of sensors having the same image sensor type, such as the reference correction meshes C_{r,π_c}(w,z).
  • one or more of the modules 314 , 316 , 318 , and 320 can be implemented in software using the memory 312 .
  • the memory 312 can be a non-transitory computer readable medium, flash memory, a magnetic disk drive, an optical drive, a programmable read-only memory (PROM), a read-only memory (ROM), or any other memory or combination of memories.
  • the software can run on a processor 310 capable of executing computer instructions or computer code.
  • the processor 310 might also be implemented in hardware using an application specific integrated circuit (ASIC), programmable logic array (PLA), digital signal processor (DSP), field programmable gate array (FPGA), or any other integrated circuit.
  • one or more of the modules 314 , 316 , 318 , and 320 can be implemented in hardware using an ASIC, PLA, DSP, FPGA, or any other integrated circuit. In some embodiments, two or more of the modules 314 , 316 , 318 , and 320 can be implemented on the same integrated circuit, such as ASIC, PLA, DSP, or FPGA, thereby forming a system on chip.
  • the imaging module 302 and the computing system 308 can reside in a single electronic device.
  • the imaging module 302 and the computing system 308 can reside in a cell phone or a camera device.
  • the electronic device can include user equipment.
  • the user equipment can communicate with one or more radio access networks and with wired communication networks.
  • the user equipment can be a cellular phone having phonetic communication capabilities.
  • the user equipment can also be a smart phone providing services such as word processing, web browsing, gaming, e-book capabilities, an operating system, and a full keyboard.
  • the user equipment can also be a tablet computer providing network access and most of the services provided by a smart phone.
  • the user equipment operates using an operating system such as Symbian OS, iPhone OS, RIM's Blackberry, Windows Mobile, Linux, HP WebOS, and Android.
  • the screen might be a touch screen that is used to input data to the mobile device, in which case the screen can be used instead of the full keyboard.
  • the user equipment can also keep global positioning coordinates, profile information, or other location information.
  • the electronic device can also include any platforms capable of computations and communication.
  • Non-limiting examples can include televisions (TVs), video projectors, set-top boxes or set-top units, digital video recorders (DVR), computers, netbooks, laptops, and any other audiovisual equipment with computation capabilities.
  • the electronic device can be configured with one or more processors that process instructions and run software that may be stored in memory.
  • the processor also communicates with the memory and interfaces to communicate with other devices.
  • the processor can be any applicable processor such as a system-on-a-chip that combines a CPU, an application processor, and flash memory.
  • the electronic device may also include speakers and a display device in some embodiments.
  • the imaging module 302 and the computing system 308 can reside in different electronic devices.
  • the imaging module 302 can be a part of a camera or a cell phone.
  • the computing system 308 can be a part of a desktop computer or a server.
  • the imaging module 302 and the computing system 308 can reside in a single electronic device, but the PU calibration module 314 , the ST calibration module 316 , and/or the PF estimation module 318 can reside in a separate computing device in communication with the computing system 308 , instead of the computing system 308 itself.
  • the PU calibration module 314 , the ST calibration module 316 , and/or the PF estimation module 318 can reside in a server in a data center.

Abstract

The disclosed subject matter includes an apparatus configured to remove a shading effect from an image. The apparatus can include one or more interfaces configured to provide communication with an imaging module that is configured to capture the image, and a processor, in communication with the one or more interfaces, configured to run a module stored in memory. The module is configured to receive the image captured by the imaging module under a first lighting spectrum, receive a per-unit correction mesh for adjusting images captured by the imaging module under a second lighting spectrum, determine a correction mesh for the image captured under the first lighting spectrum based on the per-unit correction mesh for the second lighting spectrum, and operate the correction mesh on the image to remove the shading effect from the image.

Description

FIELD OF THE APPLICATION
The present application relates generally to image processing. In particular, the present application relates to removing shading effects in images.
BACKGROUND
An image sensor can be used to capture color information about a scene. The image sensor can include pixel elements that are configured to respond differently to different wavelengths of light, much like a human visual system. In many cases, a pixel element of an image sensor can achieve such color selectivity using a color filter, which filters the incoming light reaching the pixel element based on the light's wavelength. For an image sensor with a plurality of pixel elements arranged in an array, the color filters for the plurality of pixel elements can be arranged in an array as well. Such color filters are often referred to as a color filter array (CFA).
There are many types of CFAs. One of the widely used CFAs is a Bayer CFA, which arranges the color filters in an alternating, checkerboard pattern. FIG. 1A illustrates a Bayer CFA. The Bayer CFA 102 can include a plurality of color filters (e.g., Gr 104, R 106, B 108, and Gb 110), each of which filters the incoming light reaching a pixel element based on the light's wavelength. For example, a pixel underlying a green filter 104 can capture light with a wavelength in the range of the color “green”; a pixel underlying a red filter 106 can capture light with a wavelength in the range of the color “red”; and a pixel underlying a blue filter 108 can capture light with a wavelength in the range of the color “blue.” The Bayer CFA 102 can be overlaid on the pixel elements so that the underlying pixel elements only observe the light that passes through the overlaid filter. The Bayer CFA 102 can arrange the color filters in a checkerboard pattern. In the Bayer CFA 102, there are twice as many green filters 104, 110 as there are red filters 106 or blue filters 108. There may be other types of CFAs. Different CFAs differ in (1) the filters used to pass selected wavelengths of light and/or (2) the arrangement of filters in the array.
An image captured by an image sensor with a CFA can be processed to generate a color image. In particular, each color channel (e.g., Red, Green, Blue) can be separated into separate “channels.” As an example, FIG. 1B illustrates the Green channel 120 of the captured image 102. The Green channel 120 includes pixels with missing values 118 because those pixels were used to capture other colors (e.g., Red or Blue). These missing values 118 can be interpolated from neighboring pixels 110, 112, 114, 116 to fill-in the missing value. This process can be repeated for other color channels. By stacking the interpolated color channels, a color image can be generated.
An image captured by an image sensor can be subject to undesired shading effects. The shading effects refer to a phenomenon in which a brightness of an image is reduced. In some cases, the shading effects can vary as a function of a spatial location in an image. One of the prominent spatially-varying shading effects is referred to as the color non-uniformity effect. The color non-uniformity effect refers to a phenomenon in which a color of a captured image varies spatially, even when the physical properties of the light (e.g., the amount of light and/or the wavelength of the captured light) captured by the image sensor is uniform across spatial locations in the image sensor. A typical symptom of a color non-uniformity effect can include a green tint at the center of an image, which fades into a magenta tint towards the edges of an image. This particular symptom has been referred to as the “green spot” issue. The color non-uniformity effect can be prominent when a camera captures an image of white or gray surfaces, such as a wall or a piece of paper.
Another one of the prominent spatially-varying shading effects is referred to as a vignetting effect. The vignetting effect refers to a phenomenon in which less light reaches the corners of an image sensor compared to the center of an image sensor. This results in decreasing brightness as one moves away from the center of an image and towards the edges of the image. FIG. 2 illustrates a typical vignetting effect. When a camera is used to capture an image 200 of a uniform white surface, the vignetting effect can render the corners of the image 202 darker than the center of the image 204.
Because the spatially-varying shading effects can be undesirable, there is a need for an effective, efficient mechanism for removing the spatially-varying shading effects from an image.
SUMMARY
The disclosed embodiments include an apparatus. The apparatus can be configured to remove a shading effect from an image. The apparatus can include one or more interfaces configured to provide communication with an imaging module that is configured to capture the image; and a processor, in communication with the one or more interfaces, configured to run a module stored in memory. The module can be configured to receive the image captured by the imaging module under a first lighting spectrum; receive a per-unit correction mesh for adjusting images captured by the imaging module under a second lighting spectrum; determine a correction mesh for the image captured under the first lighting spectrum based on the per-unit correction mesh for the second lighting spectrum; and operate the correction mesh on the image to remove the shading effect from the image.
In some embodiments, the module is further configured to determine that the image was captured under the first lighting spectrum using an automated white balance technique.
In some embodiments, the module is further configured to determine the correction mesh for the image based on the first lighting spectrum of the image.
In some embodiments, the module is further configured to determine, using the automated white balance technique, that the first lighting spectrum of the image is substantially similar to one of a predetermined set of lighting spectra, receive a prediction function associated with the one of the predetermined set of lighting spectra, and apply the prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
In some embodiments, the prediction function comprises a linear function.
In some embodiments, the prediction function is based on characteristics of image sensors having an identical image sensor type as an image sensor in the imaging module.
In some embodiments, the prediction function is associated only with the portion of the per-unit correction mesh.
In some embodiments, the module is configured to determine, using the automated white balance technique, that the first lighting spectrum of the image is substantially similar to a linear combination of two or more lighting spectra, receive prediction functions associated with the two or more lighting spectra, combine the prediction functions to generate a final prediction function, and apply the final prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
In some embodiments, the apparatus is a part of a camera module in a mobile device.
The disclosed embodiments include a method for removing a shading effect on an image. The method can include receiving, at a correction module of a computing system, the image captured under a first lighting spectrum from an imaging module over an interface of the computing system; receiving, at the correction module, a per-unit correction mesh for adjusting images captured by the imaging module under a second lighting spectrum; determining, at the correction module, a correction mesh for the image captured under the first lighting spectrum based on the per-unit correction mesh for the second lighting spectrum; and operating, at the correction module, the correction mesh on the image to remove the shading effect from the image.
In some embodiments, the method further includes determining that the image was captured under the first lighting spectrum using an automated white balance technique.
In some embodiments, the method further includes determining the correction mesh for the image based on the first lighting spectrum of the image.
In some embodiments, the method further includes determining, using the automated white balance technique, that the first lighting spectrum of the image is substantially similar to one of a predetermined set of lighting spectra, receiving a prediction function associated with the one of the predetermined set of lighting spectra, and applying the prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
In some embodiments, the prediction function is based on characteristics of image sensors having an identical image sensor type as an image sensor in the imaging module.
In some embodiments, the method further includes determining, using the automated white balance technique, that the first lighting spectrum of the image is substantially similar to a linear combination of two or more lighting spectra, receiving prediction functions associated with the two or more lighting spectra, combining the prediction functions to generate a final prediction function, and applying the final prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
The disclosed embodiments include a non-transitory computer readable medium having executable instructions associated with a correction module. The executable instructions are operable to cause a data processing apparatus to receive an image captured under a first lighting spectrum from an imaging module in communication with the data processing apparatus; retrieve, from a memory device, a per-unit correction mesh for adjusting images captured by the imaging module under a second lighting spectrum; determine a correction mesh for the image captured under the first lighting spectrum based on the per-unit correction mesh for the second lighting spectrum; and operate the correction mesh on the image to remove a shading effect from the image.
In some embodiments, the computer readable medium further includes executable instructions operable to cause the data processing apparatus to determine that the image was captured under the first lighting spectrum using an automated white balance technique.
In some embodiments, the computer readable medium further includes executable instructions operable to cause the data processing apparatus to determine the correction mesh for the image based on the first lighting spectrum of the image.
In some embodiments, the computer readable medium further includes executable instructions operable to cause the data processing apparatus to receive a prediction function associated with one of a predetermined set of lighting spectra, and apply the prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
In some embodiments, the prediction function is based on characteristics of image sensors having an identical image sensor type as an image sensor in the imaging module.
The disclosed embodiments include an apparatus. The apparatus is configured to determine a prediction function associated with a first lighting spectrum for a portion of a particular image sensor based on image sensors having an identical image sensor type as the particular image sensor. The apparatus includes a processor configured to run one or more modules stored in memory that is configured to receive portions of a plurality of images associated with the first lighting spectrum taken by a first plurality of image sensors having the identical image sensor type as the particular image sensor; combine the portions of the plurality of images to generate a combined image portion for the first lighting spectrum; determine a first correction mesh for the first lighting spectrum based on the combined image portion for the first lighting spectrum; receive a plurality of correction meshes associated with a second lighting spectrum for a second plurality of image sensors; and determine the prediction function, for the portion of the particular image sensor, that models a relationship between the first correction mesh associated with the first lighting spectrum and the plurality of correction meshes associated with the second lighting spectrum, thereby providing the prediction function for the particular image sensor without relying on any images taken by the particular image sensor.
In some embodiments, the one or more modules is configured to minimize, in part, a sum of squared differences between values of the first correction mesh associated with the first lighting spectrum and values of the plurality of correction meshes associated with the second lighting spectrum.
In some embodiments, the prediction function comprises a linear function.
In some embodiments, the one or more modules is configured to provide the prediction function to an imaging module that embodies the particular image sensor.
In some embodiments, the first plurality of image sensors and the second plurality of image sensors comprise an identical set of image sensors.
In some embodiments, the portions of the plurality of images comprises a single pixel of the plurality of images at an identical location in the plurality of images.
In some embodiments, the one or more modules is configured to communicate with a correction module, which is configured to: receive an image from the particular image sensor; retrieve a per-unit correction mesh associated with the particular image sensor, wherein the per-unit correction mesh is associated with the second lighting spectrum; determine a correction mesh for a portion of the image by operating the prediction function on a portion of the per-unit correction mesh; and operate the correction mesh on a portion of the image to remove a shading effect from the portion of the image.
In some embodiments, the apparatus is a part of a mobile device.
The disclosed embodiments include a method for determining a prediction function associated with a first lighting spectrum for a portion of a particular image sensor based on image sensors having an identical image sensor type as the particular image sensor. The method can include receiving, at a sensor type calibration module of an apparatus, portions of a plurality of images associated with the first lighting spectrum taken by a first plurality of image sensors having the identical image sensor type as the particular image sensor; combining, by the sensor type calibration module, the portions of the plurality of images to generate a combined image portion for the first lighting spectrum; determining, by the sensor type calibration module, a first correction mesh for the first lighting spectrum based on the combined image portion for the first lighting spectrum; receiving, at a prediction function estimation module in the apparatus, a plurality of correction meshes associated with a second lighting spectrum for a second plurality of image sensors; and determining, by the prediction function estimation module, the prediction function, for the portion of the particular image sensor, that models a relationship between the first correction mesh associated with the first lighting spectrum and the plurality of correction meshes associated with the second lighting spectrum, thereby providing the prediction function for the particular image sensor without relying on any images taken by the particular image sensor.
In some embodiments, determining the prediction function comprises minimizing, in part, a sum of squared differences between the first correction mesh associated with the first lighting spectrum and the plurality of correction meshes associated with the second lighting spectrum.
In some embodiments, the prediction function comprises a linear function.
In some embodiments, the method includes providing, by the prediction function estimation module, the prediction function to an imaging module that embodies the particular image sensor.
In some embodiments, the first plurality of image sensors and the second plurality of image sensors comprise an identical set of image sensors.
In some embodiments, the portions of the plurality of images comprises a single pixel of the plurality of images at an identical grid location in the plurality of images.
In some embodiments, the method includes receiving, at a correction module in communication with the particular image sensor, an image from the particular image sensor; retrieving, by the correction module, a per-unit correction mesh associated with the particular image sensor, wherein the per-unit correction mesh is associated with the second lighting spectrum; determining, by the correction module, a correction mesh for a portion of the image by operating the prediction function on a portion of the per-unit correction mesh; and operating, by the correction module, the correction mesh on a portion of the image to remove a shading effect from the portion of the image.
The disclosed embodiments include a non-transitory computer readable medium having executable instructions operable to cause a data processing apparatus to determine a prediction function associated with a first lighting spectrum for a portion of a particular image sensor based on image sensors having an identical image sensor type as the particular image sensor. The executable instructions can be operable to cause the data processing apparatus to receive portions of a plurality of images associated with the first lighting spectrum taken by a first plurality of image sensors having the identical image sensor type as the particular image sensor; combine the portions of the plurality of images to generate a combined image portion for the first lighting spectrum; determine a first correction mesh for the first lighting spectrum based on the combined image portion for the first lighting spectrum; receive a plurality of correction meshes associated with a second lighting spectrum for a second plurality of image sensors; and determine the prediction function, for the portion of the particular image sensor, that models a relationship between the first correction mesh associated with the first lighting spectrum and the plurality of correction meshes associated with the second lighting spectrum, thereby providing the prediction function for the particular image sensor without relying on any images taken by the particular image sensor.
In some embodiments, the computer readable medium can also include executable instructions operable to cause the data processing apparatus to minimize, in part, a sum of squared differences between the first correction mesh associated with the first lighting spectrum and the plurality of correction meshes associated with the second lighting spectrum.
In some embodiments, the computer readable medium can also include executable instructions operable to cause the data processing apparatus to provide the prediction function to an imaging module that embodies the particular image sensor.
In some embodiments, the portions of the plurality of images comprises a single pixel of the plurality of images at an identical grid location in the plurality of images.
In some embodiments, the computer readable medium can also include executable instructions operable to cause the data processing apparatus to communicate with a correction module, which is configured to: receive an image from the particular image sensor; retrieve a per-unit correction mesh associated with the particular image sensor, wherein the per-unit correction mesh is associated with the second lighting spectrum; determine a correction mesh for a portion of the image by operating the prediction function on a portion of the per-unit correction mesh; and operate the correction mesh on a portion of the image to remove a shading effect from the portion of the image.
DESCRIPTION OF DRAWINGS
Various objects, features, and advantages of the disclosed subject matter can be more fully appreciated with reference to the following detailed description of the disclosed subject matter when considered in connection with the following drawings, in which like reference numerals identify like elements.
FIGS. 1A-1B illustrate a Bayer color filter array and an interpolation technique for generating a color image.
FIG. 2 illustrates a typical vignetting effect.
FIG. 3 illustrates an imaging system that corrects a shading effect in an image in accordance with some embodiments.
FIG. 4 illustrates a correction mesh estimation process during a training stage in accordance with some embodiments.
FIG. 5 illustrates a shading correction process during a run-time stage in accordance with some embodiments.
DETAILED DESCRIPTION
In the following description, numerous specific details are set forth regarding the systems and methods of the disclosed subject matter and the environment in which such systems and methods may operate, etc., in order to provide a thorough understanding of the disclosed subject matter. It will be apparent to one skilled in the art, however, that the disclosed subject matter may be practiced without such specific details, and that certain features, which are well known in the art, are not described in detail in order to avoid complication of the disclosed subject matter. In addition, it will be understood that the examples provided below are exemplary, and that it is contemplated that there are other systems and methods that are within the scope of the disclosed subject matter.
Removing spatially-variant shading effects from an image is a challenging task because the strength of the spatially-variant shading effects can depend on an individual camera's characteristics. For example, the strength of a vignetting effect can depend on the mechanical and optical design of a camera. As another example, the strength of a color non-uniformity effect can depend on an individual image sensor's characteristics such as a geometry of pixels in an image sensor.
The color non-uniformity effect can be particularly pronounced in image sensors for mobile devices, such as a cellular phone. In mobile devices, there is a need to keep the size of the image sensor to a small form factor while retaining a high pixel resolution. This results in very small pixel geometries (on the order of 1.7 μm), which can exacerbate the color non-uniformity effect.
One of the reasons that small pixels increase the color non-uniformity effect is that a small pixel geometry can increase the crosstalk of color channels in an image sensor. Crosstalk refers to a phenomenon in which the light passing through a given “tile” (e.g., pixel) of the CFA is not registered (or accumulated) solely by the pixel element underneath it, but also contributes to the surrounding sensor elements, thereby increasing the value of the neighboring pixels for different colors.
Traditionally, crosstalk was not a big problem because it could be corrected using a global color correction matrix, which removes the crosstalk effect globally (e.g., uniformly) across the image. However, as the pixel geometry gets smaller, spatially-varying crosstalk has become more prominent. With smaller pixels, the crosstalk effect is more prominent at the edge of an image sensor because more light reaches the edge of an image sensor at an oblique angle. This results in a strong, spatially-varying color shift. Such spatially-varying crosstalk effects are hard to remove because the spatially varying crosstalk effect is not always aligned with the center of the image, nor is it perfectly radial. Therefore, it is hard to precisely model the shape in which the spatially-varying crosstalk effect is manifested.
In addition, the crosstalk pattern can vary significantly from one image sensor to another due to manufacturing process variations. Oftentimes, the crosstalk pattern can depend heavily on the optical filters placed in front of the image sensor, such as an infrared (IR) cutoff filter. Moreover, the crosstalk effect can depend on the spectral power distribution (SPD) of the light reaching the image sensor. For example, an image of a white paper captured under sunlight provides a completely different color shift compared to an image of the same white paper captured under fluorescent light. Therefore, removing spatially-varying, sensor-dependent shading, resulting from crosstalk effects or vignetting effects, is a challenging task.
One approach to removing the spatially-varying, sensor-dependent shading from an image is (1) determining a gain factor that should be applied to each pixel in the image to “un-do” (or compensate for) the shading effect of the image sensor and (2) multiplying each pixel of the image with the corresponding gain factor. However, because the gain factor for each pixel would depend on (1) an individual sensor's characteristics and (2) the lighting profile under which the image was taken, the gain factor should be determined for every pixel in the image, for all sensors of interest, and for all lighting conditions of interest. Therefore, this process can be time consuming and inefficient.
The disclosed apparatus, systems, and methods relate to effectively removing sensor-dependent, lighting-dependent shading effects from images. The disclosed shading removal mechanism does not make any assumptions about the spatial pattern in which shading effects are manifested in images. For example, the disclosed shading removal mechanism does not assume that the shading effects follow a radial pattern or a polynomial pattern. The disclosed shading removal mechanism avoids predetermining the spatial pattern of the shading effects to retain high flexibility and versatility.
The disclosed shading removal mechanism is configured to model shading characteristics of an image sensor so that shading effects from images, captured by the image sensor, can be removed using the modeled shading characteristics. The disclosed shading removal mechanism can model the shading characteristics of an image sensor using a correction mesh. The correction mesh can include one or more parameters with which an image can be processed to remove the shading effects.
In some embodiments, the correction mesh can include one or more gain factors to be multiplied to one or more pixel values in an image in order to compensate for the shading effect. For example, when the correction mesh models a vignetting effect, as illustrated in FIG. 2, the correction mesh can have unit value (e.g., a value of “1”) near the center of the correction mesh since there is not any shading effect to remove at the center. However, the correction mesh can have a larger value (e.g., a value of “5”) near the corners because the shading effect is more prominent in the corners and the pixel values at the corner need to be amplified to counteract the shading effect. In some cases, the correction mesh can have the same spatial dimensionality as the image sensor. For example, when the image sensor has N×M pixels, the correction mesh can also have N×M pixels. In other cases, the correction mesh can have a lower spatial dimensionality compared to the image sensor. Because the shading can vary slowly across the image, the correction mesh does not need to have the same resolution as the image sensor to fully compensate for the shading effect. Instead, the correction mesh can have a lower spatial dimensionality compared to the image sensor so that the correction mesh can be stored in a storage medium with a limited capacity, and can be up-sampled prior to being applied to correct the shading effect.
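As a small illustration of how such a low-resolution correction mesh could be applied (a minimal sketch, assuming NumPy and SciPy are available; the function and array names are not from the patent), the mesh is up-sampled to the image resolution and multiplied in as per-pixel gains:

    import numpy as np
    from scipy.ndimage import zoom

    def apply_correction_mesh(image, mesh):
        # image: (H, W) single-channel image; mesh: (h, w) low-resolution gain mesh
        H, W = image.shape
        h, w = mesh.shape
        gains = zoom(mesh, (H / h, W / w), order=1)   # bilinear up-sampling
        return image * gains

For a vignetting-style mesh of this kind, the gains would be close to 1.0 at the center and larger toward the corners.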
In some embodiments, the correction mesh can be determined based on an image of a highly uniform scene captured using an image sensor of interest. When the image sensor does not suffer from any shading effects, the captured image of a highly uniform scene should be a uniform image, having the same pixel value everywhere across the image. However, when the image sensor suffers from shading effects, the captured image of a highly uniform scene is not uniform. Therefore, the captured image of a highly uniform scene can be used to determine signal gain factors to remove the shading effect. In some cases, the highly uniform scene can be a smooth white surface; in other cases, the highly uniform scene can be a light field output by an integrating sphere that can provide uniform light rays.
In some embodiments, the correction mesh can be generated by inverting the value of each pixel of the captured highly uniform scene image:
C(x,y) = 1 / I(x,y)
where (x,y) represents a coordinate of the pixel; I(x,y) represents a pixel value of the white surface image captured by the image sensor at position (x,y); and C(x,y) represents a value of the correction mesh at position (x,y). In some cases, the captured image can be filtered using a low-pass filter, such as a Gaussian filter, before being inverted:
C(x,y) = 1 / (G(x,y) * I(x,y))
where G(x,y) is a low-pass filter and * denotes a convolution operator. In some embodiments, the Gaussian filter can be 7×7 pixels. This filtering step can be beneficial when the image sensor is noisy. When the correction filter C(x,y) is designed to have a lower resolution compared to the image sensor, the correction filter C(x,y) can be computed by inverting a sub-sampled version of the low-pass filtered image:
C(w,z) = 1 / ↓(G(x,y) * I(x,y))
where ↓(•) indicates a down-sampling operator, and (w,z) refers to the down-sampled coordinate system. The subsampling operation can save memory and bandwidth at runtime. In some embodiments, it is desirable to reduce the size of the correction mesh for memory and bandwidth benefits, but it should be large enough to avoid artifacts when it is up-sampled to the image resolution at run-time.
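A corresponding sketch of the mesh computation (Python with NumPy and SciPy assumed; the names and the block-averaging down-sampler are illustrative choices, not prescribed by the patent) low-pass filters the captured uniform-surface image, down-samples it to the mesh resolution, and inverts it:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def correction_mesh(flat_image, mesh_shape, sigma=2.0):
        smoothed = gaussian_filter(flat_image.astype(np.float64), sigma)
        H, W = smoothed.shape
        h, w = mesh_shape
        # Down-sample by averaging one block of pixels per mesh cell.
        blocks = smoothed[: (H // h) * h, : (W // w) * w]
        blocks = blocks.reshape(h, H // h, w, W // w).mean(axis=(1, 3))
        return 1.0 / blocks   # C(w, z) = 1 / downsample(G * I)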
In some embodiments, each color channel of an image sensor can have its own separate correction mesh. This allows the disclosed shading correction mechanism to address not only intensity shading effects, but also color shading effects, such as the color non-uniformity. In some cases, when the CFA of the image sensor includes red 106, green 104, and blue 108 pixels, as illustrated in FIG. 1, the shading correction mechanism can use four correction meshes, one correction mesh for the red color channel, referred to as CR, one correction mesh for the blue color channel, referred to as CB, one correction mesh for the green color pixels (Gr) 104 that are laterally adjacent to red pixels, referred to as CGr, and one correction mesh for the green color pixels (Gb) 110 that are laterally adjacent to blue pixels, referred to as CGb. Since the amount of crosstalk can be dependent on the angle at which the light arrives at the image sensor, the level of crosstalk between Gr 104 and red 106 or blue 108 pixels is not the same as the level of crosstalk between Gb 110 and red 106 or blue 108 pixels, at a given local area of the sensor. Therefore, it can be beneficial to use four correction meshes for an image sensor with a three-color (RGB) CFA.
In some cases, the amount of shading effects can be dependent on the light spectrum under which an image was taken. Therefore, in some embodiments, the correction meshes can be dependent on an input light spectrum π, also referred to as an input light profile. Such spectrum-dependent correction meshes can be referred to as CR,π(x,y), CB,π(x,y), CGr,π(x,y), and CGb,π(x,y) to denote the dependence on the spectrum π.
Because the correction mesh is dependent on both (1) the image sensor and (2) the input light spectrum, this approach would involve determining the correction mesh for each image sensor for all input light spectra, independently. This process can quickly become unwieldy when there are many image sensors of interest and when the image sensors are expected to operate in a wide range of input light spectra. For example, when an image sensor manufacturer or an electronic system manufacturer sells a large number of image sensors, it would be hard to determine a correction mesh for each of the image sensors across all light spectra of interest. Even if the manufacturers can determine correction meshes for all light spectra of interest, the correction meshes should be stored on the device that would perform the actual shading correction. If the actual shading correction is performed on computing devices with limited memory resources, such as mobile devices or cellular phones, storing such a large number of correction meshes can be, by itself, a challenging and expensive task.
To address these issues, the disclosed shading correction mechanism avoids computing and storing such a large number of correction meshes. Instead, the disclosed shading correction mechanism uses a computational technique to predict an appropriate correction mesh for an image captured by a particular image sensor. For example, the disclosed shading correction mechanism can analyze the captured image to determine an input lighting spectrum associated with the captured image. The input lighting spectrum can refer to a lighting profile of a light source used to illuminate the scene captured in the image. Subsequently, the disclosed shading correction mechanism can estimate an appropriate correction mesh for the determined input lighting spectrum based on (1) known characteristics about the particular image sensor with which the image was captured and (2) typical characteristics of image sensors having the same image sensor type as the particular image sensor.
The known characteristics about the particular image sensor can include a correction mesh of the particular image sensor for a predetermined input light spectrum, which may be different from the determined input lighting spectrum for the captured image. The typical characteristics of image sensors having the same image sensor type can include one or more correction meshes of typical image sensors of the image sensor type with which the particular image sensor is also associated. For example, the typical characteristics of image sensors having the same image sensor type can include one or more correction meshes associated with an “average” image sensor of the image sensor type for a predetermined set of input light spectra, which may or may not include the determined input lighting spectrum for the captured image.
More particularly, the disclosed shading correction mechanism can be configured to predict the correction mesh of the particular image sensor for the determined input lighting spectrum by converting the correction mesh of the particular image sensor for a predetermined input light spectrum (which may be distinct from the determined input light spectrum of the captured image) into a correction mesh for the determined input light spectrum by taking into account the correction meshes associated with an “average” image sensor of the image sensor type.
For example, the disclosed shading correction mechanism is configured to compute the following:
C_{i,π_D} = ƒ_{π_D}(C_{i,π_p})
where C_{i,π_p} refers to a correction mesh of a sensor i for the predetermined spectrum π_p, C_{i,π_D} refers to a predicted correction mesh of the sensor i for the light spectrum π_D associated with the input image, and ƒ_{π_D} is a prediction function to convert C_{i,π_p} to C_{i,π_D}. This function ƒ_{π_D} can depend on the determined light spectrum π_D of the input image (hence the subscript π_D) and on the image sensor type associated with the sensor i.
This shading correction scheme is useful and efficient because the shading correction scheme can determine the correction mesh Ci,π p of the particular sensor i only once for a predetermined spectrum πp, and adapt it to be applicable to a wide range of lighting profiles πD using the prediction function ƒπ D . This scheme can reduce the number of correction meshes to be determined for a particular imaging module 302, thereby improving the efficiency of the shading correction. Furthermore, the disclosed shading correction scheme may not require storing all correction meshes for all lighting spectra of interest, which can be expensive.
FIG. 3 illustrates an imaging system 300 that carries out the disclosed shading correction scheme in accordance with some embodiments. The imaging system 300 can include an imaging module 302, which can include one or more of a lens 304, an image sensor 306, and/or an internal imaging module memory device 322. The imaging system 300 can also include a computing system 308, which can include one or more of a processor 310, a memory device 312, a per-unit calibration module 314, a sensor type calibration module 316, a prediction function estimation module 318, a correction module 320, an internal interface 324, and/or one or more interfaces 326.
The lens 304 can include an optical device that is configured to collect light rays from an imaging scene entering the imaging module 302 and form an image of the imaging scene on an image sensor 306. The image sensor 306 can include an electronic device that is configured to convert light rays into electronic signals. The image sensor 306 can include one or more of a digital charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) pixel elements, also referred to as pixel sensors.
In some embodiments, the internal image module memory device 322 can include a computer readable medium, flash memory, a magnetic disk drive, an optical drive, a programmable read-only memory (PROM), a read-only memory (ROM), or any other suitable memory or combination of memories. The internal image module memory device 322 can be configured to maintain or store a per-unit correction mesh for the imaging module 302, as described further below.
The imaging module 302 can be coupled to a computing device 308 over an interface 326. The memory device 312 of the computing device 308 can include a computer readable medium, flash memory, a magnetic disk drive, an optical drive, a programmable read-only memory (PROM), a read-only memory (ROM), or any other suitable memory or combination of memories. The memory 312 can maintain or store software and/or instructions that can be processed by the processor 310. In some embodiments, the memory 312 can also maintain correction meshes and/or parameters for the prediction function.
The processor 310 can communicate with the memory 312 and interface 326 to communicate with other devices, such as an imaging module 302 or any other computing devices, such as a desktop computer or a server in a data center. The processor 310 can include any applicable processor such as a system-on-a-chip that combines one or more of a central processing unit (CPU), an application processor, and flash memory, or a reduced instruction set computing (RISC) processor.
The interface 326 can provide an input and/or output mechanism to communicate with other network devices. The interface can be implemented in hardware to send and receive signals in a variety of mediums, such as optical, copper, and wireless, and in a number of different protocols, some of which may be non-transient.
The processor 310 can be configured to run one or more modules. The one or more modules can include the per-unit calibration module 314 configured to determine a correction mesh for the image sensor 306 for a specific lighting profile. The one or more modules can also include a sensor type calibration module 316 configured to determine one or more correction meshes for typical image sensors of the same type as the particular image sensor 306 for a predetermined set of lighting spectra. The one or more modules can also include a prediction function estimation module 318 configured to estimate the prediction function for the image sensor 306. The one or more modules can also include the correction module 320 that is configured to apply the predicted correction mesh to remove the shading effect in images captured by the image sensor 306. The one or more modules can include any other suitable module or combination of modules. Although modules 314, 316, 318, and 320 are described as separate modules, they may be combined in any suitable combination of modules. In some embodiments, the processor 310, the memory device 312, and the module 314, 316, 318, 320 can communicate via an internal interface 324.
The disclosed shading correction mechanism can operate in two stages: a training stage and a run-time stage. In the training stage, which may be performed during the device production and/or at a laboratory, the disclosed shading correction mechanism can determine a computational function that is capable of generating a correction mesh for an image sensor of interest. In some cases, the computational function can be determined based on characteristics of image sensors that are similar to the image sensor of interest. Then, in the run-time stage, during which the image sensor of interest takes an image, the disclosed shading correction mechanism can estimate the lighting condition under which the image was taken, use the computational function corresponding to the estimated lighting condition to estimate a correction mesh for the image, and apply the estimated correction mesh to remove shading effects from the image.
FIG. 4 illustrates a correction mesh estimation process during a training stage in accordance with some embodiments. In step 402, the per-unit (PU) calibration module 314 can be configured to determine a correction mesh for the imaging module 302 for a specific input light spectrum π_p. In some embodiments, the specific light spectrum π_p can be predetermined. For example, the specific light spectrum π_p can be a continuous light spectrum, for instance, a lighting spectrum corresponding to an outdoor scene in shadow under daylight, an outdoor scene under a cloudy daylight, an outdoor scene under direct sunlight, or an incandescent light. As another example, the specific light spectrum π_p can have a spiky profile, for instance, a fluorescent lighting spectrum. In some embodiments, the specific light spectrum π_p can be one of the examples provided above. In other embodiments, the specific light spectrum π_p can include two or more of the examples provided above. The correction mesh for the specific light spectrum π_p can be referred to as a per-unit correction mesh, identified as C_{i,π_p}.
The PU calibration module 314 can determine the per-unit correction mesh C_{i,π_p} based on the general correction mesh generation procedure described above. For example, the PU calibration module 314 can receive an image I_{i,π_p}(x,y) of a uniform, monochromatic surface from the image sensor 306, where the image I_{i,π_p}(x,y) is captured under the predetermined specific light profile π_p. Then the PU calibration module 314 can optionally filter the image I_{i,π_p}(x,y) using a low-pass filter, such as a Gaussian filter G(x,y), to generate a filtered image G(x,y) * I_{i,π_p}(x,y). Subsequently, the calibration module 314 can optionally down-sample the filtered image G(x,y) * I_{i,π_p}(x,y) and compute a pixel-wise inverse of the down-sampled, filtered image to generate a per-unit correction mesh C_{i,π_p}:
C_{i,π_p}(w,z) = 1 / ↓(G(x,y) * I_{i,π_p}(x,y))
In some embodiments, when the image of the uniform surface is a color image, the PU calibration module 314 can stack four adjacent pixels in a 2×2 grid, e.g., Gr 104, R 106, B 108, Gb 110, of the image I_{i,π_p}(x,y) to form a single pixel, thereby reducing the size of the image to a quarter of its original size. The stacked, four-dimensional image is referred to as a stacked input image I_{i,π_p}(x,y), where each pixel has four values. The PU calibration module 314 can then perform the above process to compute a four-dimensional correction mesh C_{i,π_p}(w,z), where each dimension is associated with one of the Gr channel, R channel, B channel, and Gb channel:
C_{i,π_p}(w,z) = 1 / ↓(G(x,y) * I_{i,π_p}(x,y))
In some embodiments, the correction mesh Ci,π p (w,z), which refers to [CGr,π p (w,z), CR,π p (w,z), CB,π p (w,z), and CGb,π p (w,z)], can be stored in the internal image module memory device 322 and/or a memory device 312.
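A sketch of this stacking step (Python with NumPy and SciPy assumed; the CFA phase [Gr, R, B, Gb] and the block-averaging down-sampler are assumptions made only for illustration) groups each 2×2 Bayer quad into a four-valued pixel and computes one gain per color channel at every mesh location:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def per_unit_mesh(raw, mesh_shape, sigma=2.0):
        # Assumed CFA phase: Gr at (0,0), R at (0,1), B at (1,0), Gb at (1,1).
        channels = [raw[0::2, 0::2], raw[0::2, 1::2],
                    raw[1::2, 0::2], raw[1::2, 1::2]]   # [Gr, R, B, Gb]
        h, w = mesh_shape
        mesh = np.empty((h, w, 4))
        for k, ch in enumerate(channels):
            smoothed = gaussian_filter(ch.astype(np.float64), sigma)
            H, W = smoothed.shape
            blocks = smoothed[: (H // h) * h, : (W // w) * w]
            blocks = blocks.reshape(h, H // h, w, W // w).mean(axis=(1, 3))
            mesh[:, :, k] = 1.0 / blocks
        return mesh   # mesh[w, z] holds the four gains [Gr, R, B, Gb]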
In some cases, the per-unit correction mesh Ci,π p (w,z) can be computed at production time. For example, if the imaging module 302 includes a memory device 322, the imaging module manufacturer can use the PU calibration module 314 to compute the per-unit correction mesh Ci,π p (w,z) and store the per-unit correction mesh Ci,π p (w,z) in the memory device 322.
In other cases, if the imaging module 302 does not include a memory device 322, the PU calibration module 314 can compute the per-unit correction mesh Ci,π p (w,z) when the manufacturer of an electronic system that embodies the imaging module 302 builds the electronic system. For example, when the imaging module 302 is embodied in a cell phone, the manufacturer of the cell phone can use the PU calibration module 314 to compute the per-unit correction mesh Ci,π p (w,z) and store the per-unit correction mesh Ci,π p (w,z) in the cell phone's memory.
In other cases, the PU calibration module 314 can compute the per-unit correction mesh C_{i,π_p}(w,z) with the help of a user of the imaging module 302. For example, the user of the imaging module 302 or an electronic device can be requested to take an image of a uniform surface before the user actually starts to use the imaging module 302. Then the PU calibration module 314 can use the image of the uniform surface to compute the per-unit correction mesh C_{i,π_p}(w,z) before the user actually starts to use the imaging module 302.
Because the per-unit correction mesh Ci,π p (w,z) is tailored to the particular image sensor 306 for the specific lighting profile πp, the per-unit correction mesh Ci,π p (w,z) can remove the shading effects only of the particular image sensor 306 and only for the particular lighting profile πp. Therefore, the direct use of the per-unit correction mesh Ci,πp(w,z) is quite limiting.
In some embodiments, an imaging module manufacturer or an electronic device manufacturer can gather many per-unit correction meshes Ci,π p (w,z) for typical image sensors of an image sensor type. The gathered per-unit correction meshes can be used by the prediction function (PF) estimation module 318 to generate a prediction function, as described with respect to step 404.
In step 404, the sensor type (ST) calibration module 316 can generate one or more correction meshes for the image sensor type to which the image sensor 306 belongs. In particular, the ST calibration module 316 can characterize the shading characteristics of an image sensor that is typical of the image sensor type to which the image sensor 306 belongs. In some embodiments, the image sensor type can be defined as a particular product number assigned to the image sensor. In other embodiments, the image sensor type can be defined as a manufacturer of the image sensor. For example, all image sensors manufactured by the same sensor manufacturer can belong to the same image sensor type. In other embodiments, the image sensor type can be defined as a particular fabrication facility from which an image sensor is fabricated. For example, all image sensors manufactured from the same fabrication facility can belong to the same image sensor type. In other embodiments, the image sensor type can be defined as a particular technology used in the image sensor. For example, the image sensor can be a charge-coupled-device (CCD) type or a complementary metal-oxide-semiconductor (CMOS) type depending on the technology used by a pixel element of an image sensor.
The ST calibration module 316 is configured to generate one or more correction meshes for the image sensor type by averaging characteristics of representative image sensors associated with the image sensor type. For example, the ST calibration module 316 is configured to receive images Iπ c εΠ(x,y) of a uniform, monochromatic surface taken by a set of image sensors that are representative of an image sensor associated with an image sensor type. These images Iπ c εΠ(x,y) are taken under one πc of a predetermined set of lighting profiles Π. The predetermined set of lighting profiles Π can include one or more lighting profiles that often occur in real-world settings. For example, the predetermined set of lighting profiles Π can include “Incandescent 2700K,” “Fluorescent 2700K,” “Fluorescent 4000K,” “Fluorescent 6500K,” “Outdoor midday sun (6500K),” or any other profiles of interest.
Subsequently, the ST calibration module 316 is configured to combine the images taken by these sensors under the same lighting profile, to generate a combined image Īπ c εΠ(x,y), one for each of the predetermined set of lighting profiles Π. The combination operation can include computing an average of pixels at the same location (x,y) across the images taken by these sensors under the same lighting profile.
Then, the ST calibration module 316 is configured to process the average image Īπ c εΠ(x,y) for each lighting profile to generate a reference correction mesh Cr,π c εΠ(w,z) for each one πc of a predetermined set of lighting profiles Π. For example, the ST calibration module 316 is configured to generate a reference correction mesh Cr,π c εΠ(w,z) for each one πc of a predetermined set of lighting profiles Π by down-sampling the average image Īπ c εΠ(x,y) and computing an inverse of each of the values in the down-sampled average image Īπ c εΠ(w,z). Since a reference correction mesh Cr,π c εΠ(w,z) is generated based on an average image Īπ c εΠ(x,y), the reference correction mesh Cr,π c εΠ(w,z) could be used to remove the shading effects of an “average” sensor of the image sensor type for the associated one of the predetermined set of lighting profiles Π. Because the ST calibration module 316 does not need to use an image captured by the image sensor 306, the ST calibration module 316 can perform the above steps in a laboratory setting, independently of the imaging module 302.
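A minimal sketch of the averaging half of this sensor-type calibration follows; the dictionary-based organization of captures per lighting profile and the (4, H, W) plane layout are illustrative assumptions. The averaged image for each profile would then be down-sampled and inverted, as in the per-unit sketch above, to yield the reference correction mesh for that profile:

import numpy as np

def average_flat_fields(captures):
    """Illustrative first half of sensor-type calibration: for each lighting
    profile, average the flat-field captures of the representative sensors
    pixel-by-pixel to obtain the combined image for that profile.

    captures: dict mapping a profile name (e.g. "Fluorescent 4000K") to a
              list of flat-field images of identical shape (4, H, W).
    returns:  dict mapping the profile name to the averaged image (4, H, W).
    """
    return {profile: np.mean(np.stack(images, axis=0), axis=0)
            for profile, images in captures.items()}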
Once the ST calibration module 316 generates the one or more reference correction meshes Cr,π c εΠ(w,z) for each one πc of a predetermined set of lighting profiles Π, in step 406, the computing system 308 has access to two sets of correction meshes: the per-unit correction mesh Ci,π p (w,z) for a specific lighting profile πp, and the one or more reference correction meshes Cr,π c εΠ(w,z). Subsequently, the PF estimation module 318 can use these sets of correction meshes to generate a prediction function ƒ for the image sensor 306, which is configured to transform the per-unit correction mesh Ci,π p (w,z) for a specific lighting profile πp to a lighting-adapted correction mesh C i,π D (w,z) for the lighting spectrum πD under which an image was taken.
In some embodiments, the prediction function ƒ for the image sensor 306 can depend on the lighting spectrum under which an image was taken. The prediction function ƒ for the image sensor 306 can also depend on the location of the pixel (w,z). Such a light-spectrum dependence and spatial dependence are represented by the subscripts π, (w,z): ƒπ,(w,z).
In some embodiments, the prediction function ƒ for the image sensor 306 can be a linear function, which may be represented as a matrix. In some cases, the matrix can be a 4×4 matrix since each pixel (w,z) of a correction mesh C can include four gain factors: one for each color channel ([Gr, R, B, Gb]). For example, if the correction mesh has a spatial dimension of 9×7, then the prediction function ƒπ,(w,z) for a particular light spectrum can be represented by 63 transform matrices Mπ,(w,z), each of size 4×4. As described further below, during run-time, the computing system 308 can apply the transform matrices Mπ D ,(w,z), associated with the lighting spectrum πD under which an input image was taken, to a corresponding set of gain factors in the per-unit correction mesh Ci,π p (w,z) to generate a lighting-adapted correction mesh C i,π D (w,z) for the image sensor 306:
C_{i,\pi_D}(w,z) = M_{\pi_D,(w,z)}\, C_{i,\pi_p}(w,z)
In some embodiments, the PF estimation module 318 can generate a transform matrix Mπ c ,(w,z) for πc by finding a matrix Mπ c ,(w,z) that maps the per-unit correction mesh Ci,π p (w,z) to a reference correction mesh Cr,π c (w,z) for πc. In essence, the transform matrix Mπ c ,(w,z) maps a correction mesh for a specific light profile πp to a correction mesh for one of the predetermined set of lighting profiles πc used to characterize a typical image sensor of an image sensor type.
In some embodiments, the PF estimation module 318 can generate a prediction function by modeling a relationship between the per-unit correction meshes for the specific lighting profile πp and the reference correction mesh Cr,π c (w,z). For example, the PF estimation module 318 can generate the transform matrix Mπ c ,(w,z) using a least-squares technique:
M_{\pi_c,(w,z)} = \arg\min_{M} \sum_{j \in J} \left\| C_{r,\pi_c}(w,z) - M\, C_{j,\pi_p}(w,z) \right\|^2
where CjεJ,π p (w,z) represents correction meshes for the specific lighting profile πp for all sensors j in the sample set J at that grid location (w,z). These per-unit correction meshes CjεJ,π p (w,z) can be generated by an imaging module manufacturer or a manufacturer of an electronic device that embodies the imaging module 302 as a part of step 402. The resulting matrix Mπ c ,(w,z) is a matrix that can adapt the per-unit correction mesh Ci,π p (w,z) to a different lighting spectrum πc.
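The least-squares fit above can be solved independently at each grid location. The sketch below does this with NumPy's lstsq; the array shapes and the (grid_h, grid_w, 4, 4) layout for the resulting matrices are assumptions of this sketch rather than a prescribed data layout:

import numpy as np

def fit_transform_matrices(per_unit_meshes, reference_mesh):
    """Illustrative least-squares fit of a 4x4 transform matrix per grid cell.

    per_unit_meshes: array of shape (J, 4, grid_h, grid_w), the per-unit
                     correction meshes of J representative sensors, all
                     measured under the same production lighting profile.
    reference_mesh:  array of shape (4, grid_h, grid_w), the reference mesh
                     for one of the predetermined lighting profiles.
    returns:         array of shape (grid_h, grid_w, 4, 4) of matrices M
                     such that M @ C_j(w, z) approximates C_r(w, z).
    """
    J, channels, grid_h, grid_w = per_unit_meshes.shape
    matrices = np.empty((grid_h, grid_w, channels, channels))
    for gy in range(grid_h):
        for gx in range(grid_w):
            X = per_unit_meshes[:, :, gy, gx]                # (J, 4) inputs
            Y = np.tile(reference_mesh[:, gy, gx], (J, 1))   # (J, 4) targets
            # Solve min_M sum_j ||Y_j - M X_j||^2, i.e. X @ M.T ~= Y.
            M_T, *_ = np.linalg.lstsq(X, Y, rcond=None)
            matrices[gy, gx] = M_T.T
    return matrices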
In other embodiments, the PF estimation module 318 can augment the least-squares technique to take into account characteristics of the matrix M:
M_{\pi_c,(w,z)} = \arg\min_{M} \left\{ \sum_{j \in J} \left\| C_{r,\pi_c}(w,z) - M\, C_{j,\pi_p}(w,z) \right\|^2 + \left\| M \right\|_\gamma \right\}
where ∥M∥γ is a γ-norm of the matrix M, which can favor a sparse matrix M over a non-sparse one.
In other embodiments, the PF estimation module 318 can estimate a non-linear regression function that maps the correction mesh CjεJ,π p (w,z) for the specific lighting profile πp to a reference correction mesh Cr,π c (w,z):
f_{\pi_c,(w,z)} = \arg\min_{f} \sum_{j \in J} \left\| C_{r,\pi_c}(w,z) - f\left( C_{j,\pi_p}(w,z) \right) \right\|^2
where ƒ can be a parametric function or a non-parametric function, such as a kernel function. In some embodiments, the non-linear function ƒπ c ,(w,z) can be estimated using support vector machine techniques, and/or any other supervised learning techniques for regression.
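As one concrete stand-in for such a supervised regression, the sketch below fits an RBF kernel ridge regressor at a single grid location, mapping the 4-vector of per-unit gains to the 4-vector of reference gains. Kernel ridge regression is merely one possible choice and is not the specific technique prescribed here; the hyperparameters gamma and lam are illustrative:

import numpy as np

def fit_kernel_regressor(X, Y, gamma=1.0, lam=1e-3):
    """Illustrative non-linear (RBF kernel ridge) regressor for one grid cell.

    X: (J, 4) per-unit mesh entries for J sensors under the production profile
    Y: (J, 4) corresponding reference mesh entries for a target profile
    Returns a predict(x) function mapping a 4-vector of gains to a 4-vector.
    """
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq_dists)                          # (J, J) RBF kernel
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), Y)   # (J, 4) dual weights

    def predict(x):
        k = np.exp(-gamma * np.sum((X - x) ** 2, axis=-1))  # (J,)
        return k @ alpha                                     # (4,)

    return predict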
Since the PF estimation module 318 generates a transform matrix Mπ c ,(w,z) or a non-linear function ƒπ c ,(w,z) independently for each grid location (w,z), the resulting prediction function does not rely on any assumption about the spatial pattern of the shading non-uniformity. The disclosed technique is therefore highly adaptable to different causes of non-uniformity, and hence to different types of sensors. Also, because the disclosed scheme uses a 4×4 transform matrix per grid location, arbitrary crosstalk among the color channels at a given sensor location can be corrected.
Once the PF estimation module 318 computes the prediction function (e.g., the transform matrix Mπ c ,(w,z) or the non-linear function ƒπ c ,(w,z)) for each one πc of a predetermined set of lighting profiles Π, the PF estimation module 318 can provide the prediction function to the memory 312 and/or the internal image module memory 322 via the interface 326.
In some embodiments, the prediction function (e.g., the transform matrix Mπ c ,(w,z) or the non-linear function ƒπ c ,(w,z)) can be generated independently of the particular sensor of interest. For example, the per-unit correction meshes CjεJ,π p (w,z) do not need to include the per-unit mesh of the particular sensor i of interest. Therefore, the prediction function can be generated once for all image sensors of the image sensor type. As long as an image sensor is typical of an image sensor type, the shading effects in the image sensor can be corrected using the estimated prediction function.
FIG. 5 illustrates a shading correction process during a run-time stage in accordance with some embodiments. In step 502, the image sensor i is configured to capture an image I(x,y) of a scene and provide the captured image I(x,y) to the correction module 320. In step 504, the correction module 320 is configured to estimate a lighting condition (e.g., lighting profile πD) under which the image I(x,y) was taken. In some embodiments, the correction module 320 is configured to use an auto white balance (AWB) technique to determine the lighting profile πD. In other embodiments, the correction module 320 is configured to receive results of an AWB technique that is performed at a separate computing device. In some cases, the AWB technique can select one πc of the predetermined set of lighting profiles Π as the lighting profile πD for the captured image. In other cases, the AWB technique can detect mixed lighting conditions in which the lighting profile πD can be represented as a linear combination of two or more of the predetermined set of lighting profiles Π:
\pi_D = \sum_{c=1}^{C} \alpha_c \pi_c
In step 506, the correction module 320 is configured to generate a lighting-adapted correction mesh for the captured image. To this end, the correction module 320 is configured to retrieve the per-unit correction mesh Ci,π p (w,z) for the particular image sensor i and the prediction function corresponding to the determined lighting profile πD. When the determined lighting profile πD is one πc of the predetermined set of lighting profiles Π, then the correction module 320 can retrieve the prediction function corresponding to the determined lighting profile πD by retrieving the prediction function (e.g., the transform matrix Mπ c ,(w,z) or the non-linear function ƒπ c ,(w,z)) for the associated one πc of the predetermined set of lighting profiles Π. When the determined lighting profile πD is a linear combination of the predetermined set of lighting profiles Π, then the correction module 320 can retrieve the prediction function corresponding to the determined lighting profile πD by combining the prediction functions corresponding to the predetermined set of lighting profiles Π. For example, when the prediction functions are transform matrices Mπ c ,(w,z) and the AWB technique provided a set of πc for πD, then the correction module 320 can linearly combine the transform matrices Mπ c ,(w,z) to generate the transform matrix Mπ D ,(w,z) for the determined lighting profile πD:
M_{\pi_D,(w,z)} = \sum_{c=1}^{C} \alpha_c M_{\pi_c,(w,z)}
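A short sketch of this linear combination follows, assuming the transform matrices are stored per lighting profile in a dictionary keyed by profile name and that the AWB weights αc are supplied the same way; these data structures are assumptions of the sketch, not part of the disclosure:

import numpy as np

def combine_transform_matrices(matrices_by_profile, awb_weights):
    """Illustrative mixed-lighting case: linearly combine the per-profile
    transform matrices using the weights alpha_c reported by AWB.

    matrices_by_profile: dict of profile name -> (grid_h, grid_w, 4, 4) array
    awb_weights:         dict of profile name -> weight alpha_c
    """
    return sum(alpha * matrices_by_profile[profile]
               for profile, alpha in awb_weights.items())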
Once the correction module 320 determines the prediction function for the determined lighting profile πD, the correction module 320 can apply the prediction function to the per-unit correction mesh Ci,π p (w,z) of the image sensor i to determine the lighting-adapted correction mesh C i,π D (w,z). For example, when the prediction function is a transform matrix Mπ D ,(w,z), then the correction module 320 can multiply the per-unit correction mesh Ci,π p (w,z) by the transform matrix Mπ D ,(w,z) to determine the lighting-adapted correction mesh C i,π D (w,z):
C_{i,\pi_D}(w,z) = M_{\pi_D,(w,z)}\, C_{i,\pi_p}(w,z).
As another example, when the prediction function is a non-linear function ƒπ D ,(w,z), then the correction module 320 can apply the non-linear function ƒπ D ,(w,z) to the per-unit correction mesh Ci,π p (w,z) to determine the lighting-adapted correction mesh C i,π D (w,z):
C_{i,\pi_D}(w,z) = f_{\pi_D,(w,z)}\left( C_{i,\pi_p}(w,z) \right).
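For the matrix form of the prediction function, the per-location products above can be expressed as a single einsum over the mesh grid. The array layouts below are the same assumptions used in the earlier sketches:

import numpy as np

def adapt_mesh(per_unit_mesh, transform_matrices):
    """Illustrative application of the prediction function (matrix form):
    multiply each 4-vector of gains in the per-unit mesh by the 4x4 matrix
    for its grid cell to obtain the lighting-adapted mesh.

    per_unit_mesh:      (4, grid_h, grid_w)
    transform_matrices: (grid_h, grid_w, 4, 4)
    returns:            (4, grid_h, grid_w) lighting-adapted mesh
    """
    # For each (gy, gx): adapted[:, gy, gx] = M[gy, gx] @ per_unit_mesh[:, gy, gx]
    return np.einsum('yxij,jyx->iyx', transform_matrices, per_unit_mesh)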
In step 508, the correction module 320 can subsequently use the lighting-adapted correction mesh C i,π D (w,z) to remove the shading effect from the image. For example, the correction module 320 can up-sample the lighting-adapted correction mesh C i,π D (w,z) to C i,π D (x,y) so that the lighting-adapted correction mesh C i,π D (x,y) has the same dimensions as the input image I(x,y). During this up-sampling process, the correction module 320 can be configured to organize the gain factors for the color channels [Gr, R, B, Gb] in accordance with the Bayer CFA pattern of the input image I(x,y). Then, the correction module 320 can be configured to multiply, in a pixel-by-pixel manner, the lighting-adapted correction mesh C i,π D (x,y) and the input image I(x,y) to remove the shading effect on the input image I(x,y).
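A sketch of this run-time step follows, using bilinear up-sampling (scipy.ndimage.zoom) and a pixel-by-pixel multiplication. The specific Bayer layout assumed here (Gr at even row/even column, R at even/odd, B at odd/even, Gb at odd/odd) and the choice of up-sampling filter are assumptions of the sketch, not requirements of the disclosure:

import numpy as np
from scipy.ndimage import zoom  # bilinear up-sampling of the coarse mesh

# Assumed Bayer layout and channel order [Gr, R, B, Gb].
BAYER_OFFSETS = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}

def apply_adapted_mesh(raw_image, adapted_mesh):
    """Illustrative run-time correction: up-sample the lighting-adapted mesh
    to the raw image size and multiply the image by it pixel-by-pixel,
    picking the gain channel that matches each pixel's CFA position.

    raw_image:    (H, W) Bayer raw image
    adapted_mesh: (4, grid_h, grid_w) lighting-adapted correction mesh
    """
    H, W = raw_image.shape
    _, grid_h, grid_w = adapted_mesh.shape
    corrected = raw_image.astype(np.float64).copy()
    for c, (row_off, col_off) in BAYER_OFFSETS.items():
        # Up-sample this channel's gains to the full image resolution.
        gains = zoom(adapted_mesh[c], (H / grid_h, W / grid_w), order=1)
        # Apply only at the pixel positions belonging to this CFA channel.
        corrected[row_off::2, col_off::2] *= gains[row_off::2, col_off::2]
    return corrected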
The disclosed shading correction scheme is effective because it is able to take into account both the sensor-specific characteristics, such as the per-unit correction mesh Ci,π p (w,z), and the typical characteristics of sensors having the same image sensor type, such as the reference correction meshes Cr,π c (w,z).
In some embodiments, one or more of the modules 314, 316, 318, and 320 can be implemented in software using the memory 312. The memory 312 can be a non-transitory computer readable medium, flash memory, a magnetic disk drive, an optical drive, a programmable read-only memory (PROM), a read-only memory (ROM), or any other memory or combination of memories. The software can run on a processor 310 capable of executing computer instructions or computer code. The processor 310 might also be implemented in hardware using an application specific integrated circuit (ASIC), programmable logic array (PLA), digital signal processor (DSP), field programmable gate array (FPGA), or any other integrated circuit.
In some embodiments, one or more of the modules 314, 316, 318, and 320 can be implemented in hardware using an ASIC, PLA, DSP, FPGA, or any other integrated circuit. In some embodiments, two or more of the modules 314, 316, 318, and 320 can be implemented on the same integrated circuit, such as ASIC, PLA, DSP, or FPGA, thereby forming a system on chip.
In some embodiments, the imaging module 302 and the computing system 308 can reside in a single electronic device. For example, the imaging module 302 and the computing system 308 can reside in a cell phone or a camera device.
In some embodiments, the electronic device can include user equipment. The user equipment can communicate with one or more radio access networks and with wired communication networks. The user equipment can be a cellular phone having phonetic communication capabilities. The user equipment can also be a smart phone providing services such as word processing, web browsing, gaming, e-book capabilities, an operating system, and a full keyboard. The user equipment can also be a tablet computer providing network access and most of the services provided by a smart phone. The user equipment operates using an operating system such as Symbian OS, iPhone OS, RIM's Blackberry, Windows Mobile, Linux, HP WebOS, and Android. The screen might be a touch screen that is used to input data to the mobile device, in which case the screen can be used instead of the full keyboard. The user equipment can also keep global positioning coordinates, profile information, or other location information.
The electronic device can also include any platforms capable of computations and communication. Non-limiting examples can include televisions (TVs), video projectors, set-top boxes or set-top units, digital video recorders (DVR), computers, netbooks, laptops, and any other audiovisual equipment with computation capabilities. The electronic device can be configured with one or more processors that process instructions and run software that may be stored in memory. The processor also communicates with the memory and interfaces to communicate with other devices. The processor can be any applicable processor such as a system-on-a-chip that combines a CPU, an application processor, and flash memory. The electronic device may also include speakers and a display device in some embodiments.
In other embodiments, the imaging module 302 and the computing system 308 can reside in different electronic devices. For example, the imaging module 302 can be a part of a camera or a cell phone, and the computing system 308 can be a part of a desktop computer or a server. In some embodiments, the imaging module 302 and the computing system 308 can reside in a single electronic device, but the PU calibration module 314, the ST calibration module 316, and/or the PF estimation module 318 can reside in a separate computing device in communication with the computing system 308, instead of the computing system 308 itself. For example, the PU calibration module 314, the ST calibration module 316, and/or the PF estimation module 318 can reside in a server in a data center.
It is to be understood that the disclosed subject matter is not limited in its application to the details of construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. The disclosed subject matter is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
As such, those skilled in the art will appreciate that the conception, upon which this disclosure is based, may readily be utilized as a basis for the designing of other structures, methods, and systems for carrying out the several purposes of the disclosed subject matter. It is important, therefore, that the claims be regarded as including such equivalent constructions insofar as they do not depart from the spirit and scope of the disclosed subject matter.
Although the disclosed subject matter has been described and illustrated in the foregoing exemplary embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the disclosed subject matter may be made without departing from the spirit and scope of the disclosed subject matter.

Claims (20)

I claim:
1. An apparatus configured to remove a shading effect from an image, the apparatus comprising:
one or more interfaces configured to provide communication with an imaging module that is configured to capture the image; and
a processor, in communication with the one or more interfaces, configured to run a module stored in memory that is configured to:
receive the image captured by the imaging module under a first lighting spectrum;
receive a per-unit correction mesh for adjusting images captured by the imaging module under a second lighting spectrum;
determine a correction mesh for the image captured under the first lighting spectrum based on the per-unit correction mesh for the second lighting spectrum; and
operate the correction mesh on the image to remove the shading effect from the image.
2. The apparatus of claim 1, wherein the module is further configured to determine that the image was captured under the first lighting spectrum using an automated white balance technique.
3. The apparatus of claim 2, wherein the module is configured to:
determine, using the automated white balance technique, that the first lighting spectrum of the image is substantially similar to a linear combination of two or more lighting spectra,
receive prediction functions associated with the two or more lighting spectra,
combine the prediction functions to generate a final prediction function, and
apply the final prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
4. The apparatus of claim 2, wherein the module is further configured to determine the correction mesh for the image based on the first lighting spectrum of the image.
5. The apparatus of claim 2, wherein the module is configured to:
determine, using the automated white balance technique, that the first lighting spectrum of the image is substantially similar to one of a predetermined set of lighting spectra,
receive a prediction function associated with the one of the predetermined set of lighting spectra, and
apply the prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
6. The apparatus of claim 5, wherein the prediction function comprises a linear function.
7. The apparatus of claim 5, wherein the prediction function is based on characteristics of image sensors having an identical image sensor type as an image sensor in the imaging module.
8. The apparatus of claim 5, wherein the prediction function is associated only with the portion of the per-unit correction mesh.
9. The apparatus of claim 1, wherein the apparatus is a part of a camera module in a mobile device.
10. A method for removing a shading effect on an image, the method comprising:
receiving, at a correction module of a computing system, the image captured under a first lighting spectrum from an imaging module over an interface of the computing system;
receiving, at the correction module, a per-unit correction mesh for adjusting images captured by the imaging module under a second lighting spectrum;
determining, at the correction module, a correction mesh for the image captured under the first lighting spectrum based on the per-unit correction mesh for the second lighting spectrum; and
operating, at the correction module, the correction mesh on the image to remove the shading effect from the image.
11. The method of claim 10, further comprising determining that the image was captured under the first lighting spectrum using an automated white balance technique.
12. The method of claim 11, further comprising determining the correction mesh for the image based on the first lighting spectrum of the image.
13. The method of claim 11, further comprising:
determining, using the automated white balance technique, that the first lighting spectrum of the image is substantially similar to one of a predetermined set of lighting spectra,
receiving a prediction function associated with the one of the predetermined set of lighting spectra, and
applying the prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
14. The method of claim 13, wherein the prediction function is based on characteristics of image sensors having an identical image sensor type as an image sensor in the imaging module.
15. The method of claim 11, further comprising:
determining, using the automated white balance technique, that the first lighting spectrum of the image is substantially similar to a linear combination of two or more lighting spectra,
receiving prediction functions associated with the two or more lighting spectra,
combining the prediction functions to generate a final prediction function, and
applying the final prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
16. A non-transitory computer readable medium having executable instructions associated with a correction module, operable to cause a data processing apparatus to:
receive an image captured under a first lighting spectrum from an imaging module in communication with the data processing apparatus;
retrieve, from a memory device, a per-unit correction mesh for adjusting images captured by the imaging module under a second lighting spectrum;
determine a correction mesh for the image captured under the first lighting spectrum based on the per-unit correction mesh for the second lighting spectrum; and
operate the correction mesh on the image to remove a shading effect from the image.
17. The non-transitory computer readable medium of claim 16, further comprising executable instructions operable to cause the data processing apparatus to determine that the image was captured under the first lighting spectrum using an automated white balance technique.
18. The non-transitory computer readable medium of claim 17, further comprising executable instructions operable to cause the data processing apparatus to determine the correction mesh for the image based on the first lighting spectrum of the image.
19. The non-transitory computer readable medium of claim 17, further comprising executable instructions operable to cause the data processing apparatus to:
receive a prediction function associated with one of a predetermined set of lighting spectra, and
apply the prediction function to at least a portion of the per-unit correction mesh to determine the correction mesh for the image.
20. The non-transitory computer readable medium of claim 17, wherein the prediction function is based on characteristics of image sensors having an identical image sensor type as an image sensor in the imaging module.
US14/089,908 2013-11-26 2013-11-26 Apparatus, systems, and methods for removing shading effect from image Active 2034-01-14 US9270872B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/089,908 US9270872B2 (en) 2013-11-26 2013-11-26 Apparatus, systems, and methods for removing shading effect from image
PCT/IB2014/003059 WO2015079320A1 (en) 2013-11-26 2014-11-26 Apparatus, systems, and methods for adaptive image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/089,908 US9270872B2 (en) 2013-11-26 2013-11-26 Apparatus, systems, and methods for removing shading effect from image

Publications (2)

Publication Number Publication Date
US20150146038A1 US20150146038A1 (en) 2015-05-28
US9270872B2 true US9270872B2 (en) 2016-02-23

Family

ID=52682763

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/089,908 Active 2034-01-14 US9270872B2 (en) 2013-11-26 2013-11-26 Apparatus, systems, and methods for removing shading effect from image

Country Status (2)

Country Link
US (1) US9270872B2 (en)
WO (1) WO2015079320A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170078596A1 (en) * 2015-09-14 2017-03-16 Apical Ltd. Adaptive shading correction
US10460704B2 (en) 2016-04-01 2019-10-29 Movidius Limited Systems and methods for head-mounted display adapted to human visual mechanism
US10949947B2 (en) 2017-12-29 2021-03-16 Intel Corporation Foveated image rendering for head-mounted display devices
US20210185285A1 (en) * 2018-09-18 2021-06-17 Zhejiang Uniview Technologies Co., Ltd. Image processing method and apparatus, electronic device, and readable storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2578329B (en) * 2018-10-24 2022-11-09 Advanced Risc Mach Ltd Retaining dynamic range using vignetting correction and intensity compression curves
TWI736112B (en) * 2020-01-20 2021-08-11 瑞昱半導體股份有限公司 Pixel value calibrationmethod and pixel value calibration device

Citations (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB710876A (en) 1951-04-27 1954-06-23 Chamberlain & Hookham Ltd Protective apparatus for electricity distributing systems
GB1488538A (en) 1975-11-28 1977-10-12 Ibm Compressed refresh buffer
US4281312A (en) 1975-11-04 1981-07-28 Massachusetts Institute Of Technology System to effect digital encoding of an image
US4680730A (en) 1983-07-08 1987-07-14 Hitachi, Ltd. Storage control apparatus
EP0240032A2 (en) 1986-04-04 1987-10-07 Hitachi, Ltd. Vector processor with vector data compression/expansion capability
EP0245027A2 (en) 1986-05-08 1987-11-11 THE GENERAL ELECTRIC COMPANY, p.l.c. Data compression
CA1236584A (en) 1984-12-03 1988-05-10 William E. Hall Parallel processing system
JPH0342969B2 (en) 1989-04-27 1991-06-28
US5081573A (en) 1984-12-03 1992-01-14 Floating Point Systems, Inc. Parallel processing system
US5226171A (en) 1984-12-03 1993-07-06 Cray Research, Inc. Parallel vector processing system for individual and broadcast distribution of operands and control information
WO1993013628A1 (en) 1991-12-20 1993-07-08 Ampex Systems Corporation Method and apparatus for image data compression using combined luminance/chrominance coding
US5262973A (en) 1992-03-13 1993-11-16 Sun Microsystems, Inc. Method and apparatus for optimizing complex arithmetic units for trivial operands
WO1996008928A1 (en) 1994-09-13 1996-03-21 Nokia Mobile Phones Ltd. Video compression method
GB2311882A (en) 1996-04-04 1997-10-08 Videologic Ltd Data processing management system with programmable routing operations
US5861873A (en) 1992-06-29 1999-01-19 Elonex I.P. Holdings, Ltd. Modular portable computer with removable pointer device
WO2000022503A1 (en) 1998-10-09 2000-04-20 Bops Incorporated Efficient complex multiplication and fast fourier transform (fft) implementation on the manarray architecture
WO2000034887A1 (en) 1998-12-04 2000-06-15 Bops Incorporated System for dynamic vliw sub-instruction selection for execution time parallelism in an indirect vliw processor
WO2000045282A1 (en) 1999-01-28 2000-08-03 Bops Incorporated Methods and apparatus to support conditional execution in a vliw-based array processor with subword execution
WO2001043074A1 (en) 1999-12-07 2001-06-14 Nintendo Co., Ltd. 3d transformation matrix compression and decompression
US6275921B1 (en) 1997-09-03 2001-08-14 Fujitsu Limited Data processing device to compress and decompress VLIW instructions by selectively storing non-branch NOP instructions
GB2362055A (en) 2000-05-03 2001-11-07 Clearstream Tech Ltd Image compression using a codebook
WO2001084849A1 (en) 2000-05-03 2001-11-08 Clearstream Technologies Limited Video data transmission
GB2362733A (en) 2000-05-25 2001-11-28 Siroyan Ltd A processor for executing compressed instructions and method of compressing instructions
WO2002051099A2 (en) 2000-12-19 2002-06-27 Qualcomm Incorporated Method and system to accelerate cryptographic functions for secure e-commerce applications using cpu and dsp to calculate the cryptographic functions
EP1241892A1 (en) 2001-03-06 2002-09-18 Siemens Aktiengesellschaft Hardware accelerator for video signal processing system
US20030005261A1 (en) 2001-06-29 2003-01-02 Gad Sheaffer Method and apparatus for attaching accelerator hardware containing internal state to a processing core
US6577316B2 (en) 1998-07-17 2003-06-10 3Dlabs, Inc., Ltd Wide instruction word graphics processor
US20030149822A1 (en) 2002-02-01 2003-08-07 Bryan Scott Method for integrating an intelligent docking station with a handheld personal computer
US20030154358A1 (en) 2002-02-08 2003-08-14 Samsung Electronics Co., Ltd. Apparatus and method for dispatching very long instruction word having variable length
US20040101045A1 (en) 2002-11-22 2004-05-27 Keman Yu System and method for low bit rate watercolor video
US20040260410A1 (en) 2000-05-01 2004-12-23 Kiyomi Sakamoto Asic expansion module for PDA
US6859870B1 (en) 2000-03-07 2005-02-22 University Of Washington Method and apparatus for compressing VLIW instruction and sharing subinstructions
WO2005091109A1 (en) 2004-03-19 2005-09-29 Nokia Corporation Device with a cryptographic coprocessor
US20060023429A1 (en) 2000-10-17 2006-02-02 Spx Corporation Plug-in module for portable computing device
US7038687B2 (en) 2003-06-30 2006-05-02 Intel Corporation System and method for high-speed communications between an application processor and coprocessor
GB0710876D0 (en) 2006-06-08 2007-07-18 Intel Corp Increasing the battery life of a mobile computing system in a reduced power state through memory compression
WO2008010634A1 (en) 2006-07-20 2008-01-24 Ad Semiconductor Co., Ltd. Method and apparatus for detecting capacitance using a plurality of time division frequencies
US20080068389A1 (en) 2003-11-19 2008-03-20 Reuven Bakalash Multi-mode parallel graphics rendering system (MMPGRS) embodied within a host computing system and employing the profiling of scenes in graphics-based applications
US20080074515A1 (en) * 2006-09-25 2008-03-27 Fujifilm Corporation Image taking apparatus
WO2008087195A1 (en) 2007-01-17 2008-07-24 Linear Algebra Technologies An accelerator device for attaching to a portable electronic device
US20080259186A1 (en) * 2007-04-23 2008-10-23 Yu-Wei Wang Correcting a captured image in digital imaging devices
JP2008277926A (en) 2007-04-25 2008-11-13 Kyocera Corp Image data processing method and imaging device using same
US20100165144A1 (en) 2008-12-31 2010-07-01 Ji Soo Lee Scene illumination adaptive lens shading correction for imaging devices
US20110141326A1 (en) * 2009-12-14 2011-06-16 Canon Kabushiki Kaisha Image processing method and image processing apparatus
US20120293677A1 (en) 2011-05-17 2012-11-22 Samsung Electronics Co., Ltd. Methods and apparatuses for anti-shading correction with extended color correlated temperature dependency
US20140063283A1 (en) * 2012-08-30 2014-03-06 Apple Inc. Correction factor for color response calibration
US20140071309A1 (en) * 2012-09-10 2014-03-13 Apple Inc. Signal shaping for improved mobile video communication
US8713080B2 (en) 2007-03-15 2014-04-29 Linear Algebra Technologies Limited Circuit for compressing data and a processor employing same

Patent Citations (83)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB710876A (en) 1951-04-27 1954-06-23 Chamberlain & Hookham Ltd Protective apparatus for electricity distributing systems
US4281312A (en) 1975-11-04 1981-07-28 Massachusetts Institute Of Technology System to effect digital encoding of an image
GB1488538A (en) 1975-11-28 1977-10-12 Ibm Compressed refresh buffer
US4680730A (en) 1983-07-08 1987-07-14 Hitachi, Ltd. Storage control apparatus
CA1236584A (en) 1984-12-03 1988-05-10 William E. Hall Parallel processing system
US5081573A (en) 1984-12-03 1992-01-14 Floating Point Systems, Inc. Parallel processing system
US5226171A (en) 1984-12-03 1993-07-06 Cray Research, Inc. Parallel vector processing system for individual and broadcast distribution of operands and control information
EP0240032A2 (en) 1986-04-04 1987-10-07 Hitachi, Ltd. Vector processor with vector data compression/expansion capability
EP0245027A2 (en) 1986-05-08 1987-11-11 THE GENERAL ELECTRIC COMPANY, p.l.c. Data compression
US4783841A (en) 1986-05-08 1988-11-08 The General Electric Company P.L.C. Data compression
JPH0342969B2 (en) 1989-04-27 1991-06-28
DE69228442T2 (en) 1991-12-20 1999-09-30 Ampex METHOD AND DEVICE FOR IMAGE DATA COMPRESSION BY LUMINANCE / CHROMINANCE CODING
WO1993013628A1 (en) 1991-12-20 1993-07-08 Ampex Systems Corporation Method and apparatus for image data compression using combined luminance/chrominance coding
CN1078841A (en) 1991-12-20 1993-11-24 安佩克斯系统公司 The method and apparatus that the luminance/chrominance code that use mixes is compressed pictorial data
US5434623A (en) 1991-12-20 1995-07-18 Ampex Corporation Method and apparatus for image data compression using combined luminance/chrominance coding
US5262973A (en) 1992-03-13 1993-11-16 Sun Microsystems, Inc. Method and apparatus for optimizing complex arithmetic units for trivial operands
US5861873A (en) 1992-06-29 1999-01-19 Elonex I.P. Holdings, Ltd. Modular portable computer with removable pointer device
FI97096B (en) 1994-09-13 1996-06-28 Nokia Mobile Phones Ltd Video compression procedure
WO1996008928A1 (en) 1994-09-13 1996-03-21 Nokia Mobile Phones Ltd. Video compression method
US6304605B1 (en) 1994-09-13 2001-10-16 Nokia Mobile Phones Ltd. Video compressing method wherein the direction and location of contours within image blocks are defined using a binary picture of the block
DE69519801T2 (en) 1994-09-13 2001-06-13 Nokia Mobile Phones Ltd VIDEO COMPRESSION PROCEDURE
GB2311882A (en) 1996-04-04 1997-10-08 Videologic Ltd Data processing management system with programmable routing operations
WO1997038372A1 (en) 1996-04-04 1997-10-16 Videologic Limited A data processing management system
US5968167A (en) 1996-04-04 1999-10-19 Videologic Limited Multi-threaded data processing management system
DE69709078T2 (en) 1996-04-04 2002-10-31 Imagination Tech Ltd ADMINISTRATIVE SYSTEM FOR DATA PROCESSING
ES2171919T3 (en) 1996-04-04 2002-09-16 Imagination Tech Ltd INFORMATIC MANAGEMENT SYSTEM.
US6275921B1 (en) 1997-09-03 2001-08-14 Fujitsu Limited Data processing device to compress and decompress VLIW instructions by selectively storing non-branch NOP instructions
US6467036B1 (en) 1997-12-04 2002-10-15 Bops, Inc. Methods and apparatus for dynamic very long instruction word sub-instruction selection for execution time parallelism in an indirect very long instruction word processor
US6173389B1 (en) 1997-12-04 2001-01-09 Billions Of Operations Per Second, Inc. Methods and apparatus for dynamic very long instruction word sub-instruction selection for execution time parallelism in an indirect very long instruction word processor
US6851041B2 (en) 1997-12-04 2005-02-01 Pts Corporation Methods and apparatus for dynamic very long instruction word sub-instruction selection for execution time parallelism in an indirect very long instruction word processor
US7146487B2 (en) 1998-01-28 2006-12-05 Altera Corporation Methods and apparatus to support conditional execution in a VLIW-based array processor with subword execution
US7010668B2 (en) 1998-01-28 2006-03-07 Pts Corporation Methods and apparatus to support conditional execution in a VLIW-based array processor with subword execution
US6954842B2 (en) 1998-01-28 2005-10-11 Pts Corporation Methods and apparatus to support conditional execution in a VLIW-based array processor with subword execution
US6366999B1 (en) 1998-01-28 2002-04-02 Bops, Inc. Methods and apparatus to support conditional execution in a VLIW-based array processor with subword execution
US6760831B2 (en) 1998-01-28 2004-07-06 Pts Corporation Methods and apparatus to support conditional execution in a VLIW-based array processor with subword execution
US6948087B2 (en) 1998-07-17 2005-09-20 3Dlabs, Inc., Ltd. Wide instruction word graphics processor
US6577316B2 (en) 1998-07-17 2003-06-10 3Dlabs, Inc., Ltd Wide instruction word graphics processor
WO2000022503A1 (en) 1998-10-09 2000-04-20 Bops Incorporated Efficient complex multiplication and fast fourier transform (fft) implementation on the manarray architecture
US7424594B2 (en) 1998-10-09 2008-09-09 Altera Corporation Efficient complex multiplication and fast fourier transform (FFT) implementation on the ManArray architecture
US6839728B2 (en) 1998-10-09 2005-01-04 Pts Corporation Efficient complex multiplication and fast fourier transform (FFT) implementation on the manarray architecture
WO2000034887A1 (en) 1998-12-04 2000-06-15 Bops Incorporated System for dynamic vliw sub-instruction selection for execution time parallelism in an indirect vliw processor
WO2000045282A1 (en) 1999-01-28 2000-08-03 Bops Incorporated Methods and apparatus to support conditional execution in a vliw-based array processor with subword execution
US6591019B1 (en) 1999-12-07 2003-07-08 Nintendo Co., Ltd. 3D transformation matrix compression and decompression
WO2001043074A1 (en) 1999-12-07 2001-06-14 Nintendo Co., Ltd. 3d transformation matrix compression and decompression
US7409530B2 (en) 2000-03-07 2008-08-05 University Of Washington Method and apparatus for compressing VLIW instruction and sharing subinstructions
US6859870B1 (en) 2000-03-07 2005-02-22 University Of Washington Method and apparatus for compressing VLIW instruction and sharing subinstructions
US20040260410A1 (en) 2000-05-01 2004-12-23 Kiyomi Sakamoto Asic expansion module for PDA
GB2362055A (en) 2000-05-03 2001-11-07 Clearstream Tech Ltd Image compression using a codebook
WO2001084849A1 (en) 2000-05-03 2001-11-08 Clearstream Technologies Limited Video data transmission
US7124279B2 (en) 2000-05-25 2006-10-17 Pts Corporation Processor and method for generating and storing compressed instructions in a program memory and decompressed instructions in an instruction cache wherein the decompressed instructions are assigned imaginary addresses derived from information stored in the program memory with the compressed instructions
GB2366643A (en) 2000-05-25 2002-03-13 Siroyan Ltd Compressing instructions for processors
JP2002007211A (en) 2000-05-25 2002-01-11 Siroyan Ltd Processor having compressed instructions and method for compressing instruction for the processor
US7343471B2 (en) 2000-05-25 2008-03-11 Pts Corporation Processor and method for generating and storing compressed instructions in a program memory and decompressed instructions in an instruction cache wherein the decompressed instructions are assigned imaginary addresses derived from information stored in the program memory with the compressed instructions
CN1326132A (en) 2000-05-25 2001-12-12 斯罗扬有限公司 Processor with compressed instructions and compress method thereof
GB2362733A (en) 2000-05-25 2001-11-28 Siroyan Ltd A processor for executing compressed instructions and method of compressing instructions
EP1158401A2 (en) 2000-05-25 2001-11-28 Siroyan Limited Processor having compressed instructions and method of compressing instructions
US20060023429A1 (en) 2000-10-17 2006-02-02 Spx Corporation Plug-in module for portable computing device
WO2002051099A2 (en) 2000-12-19 2002-06-27 Qualcomm Incorporated Method and system to accelerate cryptographic functions for secure e-commerce applications using cpu and dsp to calculate the cryptographic functions
EP1241892A1 (en) 2001-03-06 2002-09-18 Siemens Aktiengesellschaft Hardware accelerator for video signal processing system
US20030005261A1 (en) 2001-06-29 2003-01-02 Gad Sheaffer Method and apparatus for attaching accelerator hardware containing internal state to a processing core
US20030149822A1 (en) 2002-02-01 2003-08-07 Bryan Scott Method for integrating an intelligent docking station with a handheld personal computer
FR2835934A1 (en) 2002-02-08 2003-08-15 Samsung Electronics Co Ltd Very long instruction word processor, has decoding unit in dispatch unit to decode information of sub-instructions and operating engines, each having set of functional units based on instruction that is dispatched
US20030154358A1 (en) 2002-02-08 2003-08-14 Samsung Electronics Co., Ltd. Apparatus and method for dispatching very long instruction word having variable length
US7366874B2 (en) 2002-02-08 2008-04-29 Samsung Electronics Co., Ltd. Apparatus and method for dispatching very long instruction word having variable length
US20040101045A1 (en) 2002-11-22 2004-05-27 Keman Yu System and method for low bit rate watercolor video
US7038687B2 (en) 2003-06-30 2006-05-02 Intel Corporation System and method for high-speed communications between an application processor and coprocessor
US20080068389A1 (en) 2003-11-19 2008-03-20 Reuven Bakalash Multi-mode parallel graphics rendering system (MMPGRS) embodied within a host computing system and employing the profiling of scenes in graphics-based applications
WO2005091109A1 (en) 2004-03-19 2005-09-29 Nokia Corporation Device with a cryptographic coprocessor
US20070291571A1 (en) 2006-06-08 2007-12-20 Intel Corporation Increasing the battery life of a mobile computing system in a reduced power state through memory compression
DE102007025948A1 (en) 2006-06-08 2008-01-03 Intel Corporation, Santa Clara Extend the life of a battery of a mobile computer system in a reduced power state by means of memory compression
CN101086680A (en) 2006-06-08 2007-12-12 英特尔公司 Increasing the battery life of a mobile computing system in reduced power state through memory compression
GB0710876D0 (en) 2006-06-08 2007-07-18 Intel Corp Increasing the battery life of a mobile computing system in a reduced power state through memory compression
WO2008010634A1 (en) 2006-07-20 2008-01-24 Ad Semiconductor Co., Ltd. Method and apparatus for detecting capacitance using a plurality of time division frequencies
US20080074515A1 (en) * 2006-09-25 2008-03-27 Fujifilm Corporation Image taking apparatus
WO2008087195A1 (en) 2007-01-17 2008-07-24 Linear Algebra Technologies An accelerator device for attaching to a portable electronic device
US8713080B2 (en) 2007-03-15 2014-04-29 Linear Algebra Technologies Limited Circuit for compressing data and a processor employing same
US20080259186A1 (en) * 2007-04-23 2008-10-23 Yu-Wei Wang Correcting a captured image in digital imaging devices
JP2008277926A (en) 2007-04-25 2008-11-13 Kyocera Corp Image data processing method and imaging device using same
US20100165144A1 (en) 2008-12-31 2010-07-01 Ji Soo Lee Scene illumination adaptive lens shading correction for imaging devices
US20110141326A1 (en) * 2009-12-14 2011-06-16 Canon Kabushiki Kaisha Image processing method and image processing apparatus
US20120293677A1 (en) 2011-05-17 2012-11-22 Samsung Electronics Co., Ltd. Methods and apparatuses for anti-shading correction with extended color correlated temperature dependency
US20140063283A1 (en) * 2012-08-30 2014-03-06 Apple Inc. Correction factor for color response calibration
US20140071309A1 (en) * 2012-09-10 2014-03-13 Apple Inc. Signal shaping for improved mobile video communication

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
International Search Report and Written Opinion issued by the European Patent Office as International Searching Authority for International Application No. PCT/IB2014/003059 dated May 11, 2015 (10 pages).
No Author Listed, "ARM Architecture Reference Manual," ARMv7-A and ARMv7-R edition, 2734 pages (1996-1998, 2000, 2004-2012).
No Author Listed, "Cortex-A8," Revision r3p2, Technical Reference Manual, 580 page (2006-2010).
No Author Listed, "Cortex-A9 NEON Media Processing Engine," Revision r3p0, Technical Reference Manual, 49 pages (2008-2011).
No Author Listed, "i.MX 6Dual/6Quad Applications Processor Reference Manual," Rev. 2, 5817 pages (Jun. 2014).
No Author Listed, "MSC8256 Reference Manual," Six Core Digital Signal Processor, Rev. 0, 1272 pages (Jul. 2011).
No Author Listed, "SC140 DSP Core Reference Manual," Rev. 3, 712 pages (Nov. 2001).
Rosten, E. et al., "Machine learning for high-speed corner detection," Department of Engineering, Cambridge University, UK, 14 pages (2006).
Williamson, David, "ARM Cortex A8: A High Performance Processor for Low Power Applications," 23 pages, In Unique chips and systems (Eugene John, Juan Rubio, eds) Boca Raton: CRC Press (2008).

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170078596A1 (en) * 2015-09-14 2017-03-16 Apical Ltd. Adaptive shading correction
US10044952B2 (en) * 2015-09-14 2018-08-07 Apical Ltd. Adaptive shading correction
US10460704B2 (en) 2016-04-01 2019-10-29 Movidius Limited Systems and methods for head-mounted display adapted to human visual mechanism
US10949947B2 (en) 2017-12-29 2021-03-16 Intel Corporation Foveated image rendering for head-mounted display devices
US11682106B2 (en) 2017-12-29 2023-06-20 Intel Corporation Foveated image rendering for head-mounted display devices
US20210185285A1 (en) * 2018-09-18 2021-06-17 Zhejiang Uniview Technologies Co., Ltd. Image processing method and apparatus, electronic device, and readable storage medium

Also Published As

Publication number Publication date
US20150146038A1 (en) 2015-05-28
WO2015079320A1 (en) 2015-06-04

Similar Documents

Publication Publication Date Title
US11849224B2 (en) Global tone mapping
US9270872B2 (en) Apparatus, systems, and methods for removing shading effect from image
US9754349B2 (en) Prevention of highlight clipping
CN113228628B (en) System and method for converting non-bayer pattern color filter array image data
US9635332B2 (en) Saturated pixel recovery in light-field images
US11263782B2 (en) Image signal processor for processing images
RU2543974C2 (en) Auto-focus control using image statistics data based on coarse and fine auto-focus scores
RU2537038C2 (en) Automatic white balance processing with flexible colour space selection
CA3075544A1 (en) Image signal processor for processing images
US20150363912A1 (en) Rgbw demosaic method by combining rgb chrominance with w luminance
JP5859080B2 (en) Method and related apparatus for correcting color artifacts in images
US20120081553A1 (en) Spatial filtering for image signal processing
US9007488B2 (en) Systems and methods for generating interpolated high-dynamic-range images
US8675102B2 (en) Real time denoising of video
US20210185285A1 (en) Image processing method and apparatus, electronic device, and readable storage medium
EP3275169B1 (en) Downscaling a digital raw image frame
US9860456B1 (en) Bayer-clear image fusion for dual camera
US20150055861A1 (en) Methods and Systems for Image Demosaicing
US20160277721A1 (en) Color filtered area processing method for improving image processing
US20180260929A1 (en) Digital camera methods and devices optimized for computer vision applications
EP3688977B1 (en) Generating a monochrome image
US10863148B2 (en) Tile-selection based deep demosaicing acceleration
US9401012B2 (en) Method for correcting purple distortion in digital images and a computing device employing same
WO2016200480A1 (en) Color filter array scaler

Legal Events

Date Code Title Description
AS Assignment

Owner name: LINEAR ALGEBRA TECHNOLOGIES LIMITED, IRELAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DONOHOE, DAVID;REEL/FRAME:032624/0263

Effective date: 20131217

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: MOVIDIUS LIMITED, NETHERLANDS

Free format text: MERGER;ASSIGNOR:LINEAR ALGEBRA TECHNOLOGIES LIMITED;REEL/FRAME:061546/0001

Effective date: 20181207

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8