US9111365B2 - Edge-adaptive interpolation and noise filtering method, computer-readable recording medium, and portable terminal - Google Patents

Info

Publication number: US9111365B2
Application number: US12/853,853 (other versions: US20110032396A1)
Authority: US (United States)
Prior art keywords: interpolation, color, window, edge, primary
Inventors: Hee-Chan Park, Min-Kyu Park, Han-Sae Song, Young-Kwon Yoon, Yong-Gu Lee
Original Assignee: Samsung Electronics Co., Ltd.
Current Assignee: Samsung Electronics Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion)
Events: application filed by Samsung Electronics Co., Ltd.; assigned to Samsung Electronics Co., Ltd. (assignors: Lee, Yong-Gu; Park, Hee-Chan; Park, Min-Kyu; Song, Han-Sae; Yoon, Young-Kwon); publication of US20110032396A1; application granted; publication of US9111365B2; adjusted expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/20: Circuitry for controlling amplitude response
    • H04N5/205: Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic
    • H04N5/208: Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic for compensating for attenuation of high frequency components, e.g. crispening, aperture distortion correction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4015: Demosaicing, e.g. colour filter array [CFA], Bayer pattern
    • G06T3/403: Edge-driven scaling
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10: Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/12: Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only


Abstract

An edge-adaptive interpolation and noise filtering method is provided including performing primary interpolation on a first color, based on a first preset number of directions, with respect to a first color window obtained from an input image window, estimating an edge direction within a primary interpolated first color window obtained by the primary interpolation, based on a second preset number of directions, the second preset number of directions being larger than the first preset number, with respect to the primary interpolated first color window, performing secondary interpolation on the first color, based on the estimated edge direction, with respect to the primary interpolated first color window, and performing interpolation on a second color, based on the estimated edge direction, with respect to a second color window obtained from the input image window.

Description

PRIORITY
This application claims priority under 35 U.S.C. §119(a) to a patent application filed in the Korean Intellectual Property Office on Aug. 10, 2009 and assigned Serial No. 10-2009-0073442, the entire disclosure of which is hereby incorporated by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to a method for acquiring multicolor images by forming a Red, Green, and Blue (RGB) channel, or RGB color, on each pixel, based on image data input through a Bayer color filter array, and more particularly, to an edge-adaptive interpolation and noise filtering method performed on the image data.
2. Description of the Related Art
Image sensors used in portable terminals with camera modules include Complementary Metal-Oxide Semiconductor (CMOS) image sensors and Charge-Coupled Device (CCD) image sensors. A CMOS image sensor is used in most cellular phone cameras, and also in low-cost digital cameras, because it has a high degree of integration and is easily mass-produced. Although the CMOS image sensor has the advantages of low cost and low power consumption, it is susceptible to noise. Most of this noise is handled by software that corrects pixel values of images output from the image sensor.
A fundamental technique of a picture quality improvement algorithm is to detect an edge within an image and to form an RGB value for each pixel from neighborhood pixel values, without introducing artifacts, in consideration of the characteristics of the edge.
Some of the available interpolation and noise filtering methods include interpolation methods considering edge direction and interpolation methods using a noise elimination structure.
For interpolation methods considering edge direction, an edge direction is estimated and interpolation is performed along the estimated edge direction. Such methods differ, however, in how they analyze the edge and in how they interpolate from neighborhood pixel values.
For example, edge-adaptive interpolation methods supporting two directions, such as U.S. Pat. No. 5,373,322, which issued to Laroche et al. and is entitled “Apparatus and Method for Adaptively Interpolating a Full Color Image Utilizing Chrominance Gradients”, and U.S. Pat. No. 6,507,364, which issued to Bishay et al. and is entitled “Edge-dependent Interpolation Method for Color Reconstruction in Image Processing Devices”, obtain a gradient by dividing an edge direction for a 3×3 window into horizontal and vertical directions and perform interpolation using the gradient. Additionally, U.S. Pat. No. 6,882,563, which issued to Asao and is entitled “Magnetic Memory Device and Method for Manufacturing the Same”, discloses a method for eliminating a color fringe by adding a correction step after performing interpolation similarly to the above-disclosed interpolation methods.
Among edge-adaptive interpolation methods supporting four directions, U.S. Pat. No. 6,404,918, which issued to Hel-or et al. and is entitled “Image Demosaicing Method Utilizing Directional Smoothing”, discloses a method for extracting directions using a steerable filter and iterating interpolation in the extracted direction. Additionally, U.S. Pat. No. 6,832,009, which issued to Shezaf et al. and is entitled “Method and Apparatus for Improved Image Interpolation”, discloses an interpolation method for detecting a four-direction edge using a 3×3 Sobel operation.
Further, U.S. Pat. No. 6,707,937, which issued to Sobel et al., and is entitled “Interpolation of Edge Portions of a Digital Image”, discloses an interpolation method using 3×3 edge detection of short scale and long scale filtering. U.S. Pat. No. 7,133,553, which issued to Embler, and is entitled “Correlation-based Color Mosaic Interpolation Adjustment Using Luminance Gradients”, discloses an interpolation method performed by measuring an edge direction using a 3×3 preset directional filter. U.S. Pat. No. 7,376,288, which issued to Huang et al., and is entitled “Edge Adaptive Demosaic System and Method”, discloses a method for performing interpolation in a gradient direction within a threshold after obtaining a gradient value of four edge directions within a 5×5 window.
For interpolation methods using a noise elimination structure, noise may occur due to a Gr/Gb difference or an interpolation direction measurement error. Because it is unreliable to measure an edge direction or intensity accurately at the Bayer level, noise is generally eliminated after interpolation has been performed and the edge direction and intensity can be measured accurately.
For example, U.S. Pat. No. 6,795,586, which issued to Gindele et al., and is entitled “Noise Cleaning and Interpolating Sparsely Populated Color Digital Image”, discloses a structure in which color interpolation and noise elimination are processed in one block. U.S. Pat. No. 6,816,194, which issued to Zhang et al., and is entitled “Systems and Methods with Error Resilience in Enhancement Layer Bitstream of Scalable Video Coding”, discloses an interpolation method using a boundary of an edge by application of a bilateral filter during a demosaicing process.
Similarly, U.S. Pat. No. 7,256,828, which issued to Nilsson et al., and is entitled “Weighted Gradient Based and Color Corrected Interpolation”, discloses a structure using the intensity of an edge as an interpolation weight. U.S. Pat. No. 6,970,597, which issued to Olding et al., and is entitled “Method of Defining Coefficients for Use in Interpolating Pixel Values”, discloses a structure for determining a coefficient of a kernel by raising window support during interpolation.
Other interpolation methods include, for example, U.S. Pat. No. 7,292,725, which issued to Chen et al., and is entitled “Demosaicking Method and Apparatus for Color Filter Array Interpolation in Digital Image Acquisition Systems”. U.S. Pat. No. 7,292,725 discloses a structure in which a neighborhood pixel value and an interpolation value are tabulated and the interpolation value is retrieved when a specific template is entered.
Additionally, U.S. Pat. No. 6,130,960, which issued to Acharya and is entitled “Block-matching Algorithm for Color Interpolation”, and U.S. Pat. No. 6,933,971, which issued to Bezryadin, disclose interpolation methods using a weighted average, after analyzing the intensity of a neighborhood pixel.
U.S. Pat. No. 7,053,944, which issued to Acharya, and is entitled, “Method of using hue to interpolate color pixel signals”, discloses an interpolation method using a characteristic showing uniform hue value in the same color region.
However, the conventional interpolation and filtering methods above have a number of problems.
First, an edge direction is not accurately detected during edge detection in a Bayer pattern. Because it is important to naturally restore images while preserving a boundary of an edge, it is also important to accurately analyze an edge direction within a given window or block.
Second, inaccurate edge detection may generate noise during interpolation and may damage a detail/texture part during noise elimination. The inaccurate edge detection also affects detail extraction for edge enhancement.
SUMMARY OF THE INVENTION
The present invention is designed to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present invention provides an interpolation and noise filtering method that accurately analyzes an edge direction within a given window or block and eliminates noise while preserving detail.
In accordance with an aspect of the present invention, an edge-adaptive interpolation and noise filtering method is provided. The method includes performing primary interpolation on a first color, based on a first preset number of directions, with respect to a first color window obtained from an input image window, estimating an edge direction within a primary interpolated first color window obtained by the primary interpolation, based on a second preset number of directions, the second preset number of directions being larger than the first preset number, with respect to the primary interpolated first color window, performing secondary interpolation on the first color, based on the estimated edge direction, with respect to the primary interpolated first color window, and performing interpolation on a second color, based on the estimated edge direction, with respect to a second color window obtained from the input image window.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other aspects, features, and advantages of certain embodiments of the present invention will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a block diagram schematically illustrating a portable terminal for outputting RGB color images based on a Bayer image according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a partial configuration of a camera as illustrated in FIG. 1, according to an embodiment of the present invention;
FIGS. 3A and 3B are diagrams illustrating a Bayer image window, and R, G, and B channel image windows interpolated based on the Bayer image window, respectively, according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating functional modules of an image signal processor as illustrated in FIG. 1, according to an embodiment of the present invention;
FIG. 5 is a flow chart illustrating a primary interpolation method performed by a G interpolation module as illustrated in FIG. 4, according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a two-direction projection operation according to an embodiment of the present invention;
FIG. 7 is a flow chart illustrating an edge direction estimation method performed by an edge direction estimation module as illustrated in FIG. 4, according to an embodiment of the present invention;
FIGS. 8A and 8B are diagrams illustrating an 8-direction projection operation, according to an embodiment of the present invention;
FIG. 9 is a flow chart illustrating a secondary interpolation method performed by a G interpolation correction module as illustrated in FIG. 4, according to an embodiment of the present invention;
FIGS. 10A and 10B are diagrams illustrating a projection frequency analysis process, according to an embodiment of the present invention;
FIGS. 11A and 11B are diagrams illustrating an adaptive Gaussian filter generation process, according to an embodiment of the present invention;
FIG. 12 is a flow chart illustrating an interpolation method for R and B channel images performed by an R/B interpolation module as illustrated in FIG. 4, according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating interpolation of a B channel image using a G channel image according to an embodiment of the present invention;
FIG. 14 is a diagram schematically summarizing an edge-adaptive interpolation and noise filtering method according to an embodiment of the present invention; and
FIGS. 15A-15C, 16A-16C, and 17A-17C are diagrams illustrating images obtained by an edge-adaptive interpolation and noise filtering method according to an embodiment of the present invention.
Throughout the drawings, the same drawing reference numerals will be understood to refer to the same elements, features and structures.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
Various embodiments of the present invention will now be described with reference to the accompanying drawings. The following description includes specific details in order to provide a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without such specific details.
FIG. 1 is a block diagram schematically illustrating a portable terminal for outputting RGB color images based on a Bayer image according to an embodiment of the present invention.
Referring to FIG. 1, a portable terminal 100 includes a camera 110, an Image Signal Processor (ISP) 200, a display 120, a wireless communication unit 130, a controller 140, and a memory 150. Although not illustrated in FIG. 1, the portable terminal 100 may also include a speaker, a microphone, a user input device such as a keypad, etc.
The camera 110 captures an image of a subject and converts the formed image into electric signals. Although not shown in FIG. 1, the camera 110 may include a lens system including at least one lens, and an image sensor, such as a CCD or CMOS image sensor.
The display 120 displays an image frame generated by the ISP 200 on a screen. The display 120 may use a Liquid Crystal Display (LCD) or a touch screen. The touch screen displays an image according to the control of the controller 140, generates a key contact interrupt when a user input means, such as a finger or a stylus pen, touches it, and provides the controller 140 with user input information including input coordinates and input states.
The wireless communication unit 130 receives a wireless downlink signal through an antenna and transmits downlink data obtained by demodulating the wireless downlink signal to the controller 140. The wireless communication unit 130 generates a wireless uplink signal by modulating uplink data received from the controller 140 and wirelessly transmits the generated wireless uplink signal through the antenna. For example, the modulation and demodulation processes may be performed according to a Code Division Multiple Access (CDMA) scheme, a Frequency Division Multiplexing (FDM) scheme, or a Time Division Multiplexing (TDM) scheme.
The memory 150 stores images. The memory may also store databases associated with user information and documents, background images (menu screen, standby screen, etc.) necessary to drive the portable terminal 100, and operating programs.
The controller 140 executes an application according to user input information.
The ISP 200 processes images received from the camera 110 or images stored in the memory 150 in units of frames, according to the control by the controller 140, and generates an image frame that is converted to be suitable (e.g., have a proper size, picture quality, resolution, etc.) for the display 120. The ISP 200 sets a part of an input image to a window and performs interpolation and noise filtering in units of windows. The window may include a central pixel located at the center of the window and neighborhood pixels adjacent to the central pixel within the window. The window refers to a pixel matrix of a predetermined size, for example, a window of 3×3 pixels to 12×12 pixels.
Commonly, an image signal input to the ISP 200 from the camera 110 is Bayer data. Bayer data refers to RGB data filtered by a Bayer color filter.
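To make this concrete, the following sketch (not part of the patent) simulates Bayer data by keeping one color sample per pixel of an RGB frame. The RGGB phase is an assumption chosen for illustration; it matches the window layout used in the equations below, which place an R pixel at the window center.

    import numpy as np

    def to_bayer(rgb):
        # rgb: (H, W, 3) array; returns a one-value-per-pixel Bayer mosaic.
        h, w, _ = rgb.shape
        bayer = np.empty((h, w), rgb.dtype)
        bayer[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even columns
        bayer[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd columns
        bayer[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even columns
        bayer[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd columns
        return bayer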
FIG. 2 is a diagram illustrating a partial configuration of the camera 110 as illustrated in FIG. 1.
Referring to FIG. 2, the camera 110 includes an image sensor including a pixel array 112 and a Bayer Color Filter Array (CFA) 114 arranged on the pixel array 112. Each pixel generates a value corresponding to the brightness of incident light. The image sensor could be configured so that each pixel detects all of the Red (R), Green (G), and Blue (B) colors; however, due to cost constraints, each pixel typically detects only one color using a color filter. Accordingly, the image sensor further includes the Bayer CFA 114 arranged on the pixel array 112.
The Bayer CFA 114 includes three-color (R, G, and B) filter units, which alternate with each other in row and column directions. Each filter unit corresponds to one pixel. For example, light passing through an R filter unit yields the R channel, and the pixel arranged to correspond to the R filter unit detects the R channel.
FIGS. 3A and 3B are diagrams illustrating a Bayer image window 310, and R, G, and B channel image windows 320, 330, and 340, which are interpolated based on the Bayer image window 310, respectively. In FIG. 3A, an R channel is denoted by dots, a G channel is denoted by oblique lines, and a B channel is denoted by checked lines. In FIG. 3B, an original pixel, which is not interpolated, is indicated by a high density of dots or lines and an interpolated pixel is indicated by a low density of dots or lines.
The ISP 200 forms R, G, and B channels on respective pixels based on image data (i.e., a Bayer image) passing through the Bayer CFA 114 and obtains multicolor images (i.e., images obtained by overlapping interpolated R, G and B channel images).
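Before interpolation, each color plane of the Bayer image is sparse: a pixel carries a value only where its filter unit matches that color, as in FIG. 3A. A minimal sketch of this separation, assuming the same RGGB phase as above (unfilled positions stay zero until the interpolation stages fill them):

    import numpy as np

    def split_channels(bayer):
        # Returns sparse R, G, and B planes of a Bayer mosaic.
        rows, cols = np.indices(bayer.shape)
        r_mask = (rows % 2 == 0) & (cols % 2 == 0)
        b_mask = (rows % 2 == 1) & (cols % 2 == 1)
        g_mask = ~(r_mask | b_mask)              # G occupies the remaining half
        return bayer * r_mask, bayer * g_mask, bayer * b_mask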
FIG. 4 is a diagram illustrating functional modules of the ISP 200 as illustrated in FIG. 1. The functional modules may correspond to a series of program steps performed by, or functional elements within, the ISP 200. The ISP 200 includes a G interpolation module 210, an edge direction estimation module 220, a G interpolation correction module 230, and an R/B interpolation module 240.
The G interpolation module 210 performs primary interpolation on a first primary color (G), based on a first preset number of directions (i.e., the horizontal and vertical directions), with respect to an input image window. The G interpolation module 210 performs the following operations (i.e., primary interpolation) on a 5×5 Bayer raw data patch indicating pixel values for a 5×5 Bayer window.
FIG. 5 is a flow chart illustrating a primary interpolation method performed for a G channel image (or window) by the G interpolation module 210.
FIG. 6 is a diagram illustrating a two-direction projection operation.
Referring to FIG. 5, the primary interpolation method interpolates an empty pixel value of a G channel window by analyzing an edge pattern of a Bayer window. The primary interpolation method includes calculating three interpolation values in step S110, calculating 5×5 horizontal/vertical projection values in step S120, determining whether an edge exists in step S130, determining a vertical or horizontal direction in step S140, vertical interpolation in step S150, horizontal interpolation in step S160, and 4-neighborhood weighted interpolation in step S170.
More specifically, in step S110, 3×3 2-direction (horizontal/vertical) projection values are calculated. A horizontal/vertical projection is an operation that combines pixel values along a horizontal or vertical line of the Bayer window.
FIG. 6 illustrates the first to fifth vertical projections V1 to V5 and the first to fifth horizontal projections H1 to H5.
The G interpolation module 210 calculates 3×3 directional projection values with respect to horizontal and vertical directions.
Vertical Direction
VV1=(G2+G12)/2+G12+(G12+G22)/2
VV2=G8+(G8+G18)/2+G18
VV3=(G4+G14)/2+G14+(G14+G24)/2  (1)
Horizontal Direction
HH1=(G6+G8)/2+G8+(G8+G10)/2
HH2=G12+(G12+G14)/2+G14
HH3=(G16+G18)/2+G18+(G18+G20)/2  (2)
In Equations (1) and (2), VV1, VV2, and VV3 denote values for the second to fourth vertical projections V2 to V4, and HH1, HH2, and HH3 denote values for the second to fourth horizontal projections H2 to H4.
The G interpolation module 210 calculates horizontal and vertical direction weights weight_h and weight_v using the horizontal and vertical projection values.
weight_h=|HH1−HH2|+|HH3−HH2|
weight_v=|VV1−VV2|+|VV3−VV2|  (3)
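As a sketch of Equations (1) to (3), assume a 5×5 NumPy window w indexed row-major, so that pixel number n in the text maps to w[(n-1)//5, (n-1)%5] and the center R13 is w[2, 2]:

    def directional_weights(w):
        # Vertical 3x3 projections VV1..VV3 of Equation (1), on columns 2-4.
        vv1 = (w[0,1] + w[2,1])/2 + w[2,1] + (w[2,1] + w[4,1])/2
        vv2 = w[1,2] + (w[1,2] + w[3,2])/2 + w[3,2]
        vv3 = (w[0,3] + w[2,3])/2 + w[2,3] + (w[2,3] + w[4,3])/2
        # Horizontal 3x3 projections HH1..HH3 of Equation (2), on rows 2-4.
        hh1 = (w[1,0] + w[1,2])/2 + w[1,2] + (w[1,2] + w[1,4])/2
        hh2 = w[2,1] + (w[2,1] + w[2,3])/2 + w[2,3]
        hh3 = (w[3,0] + w[3,2])/2 + w[3,2] + (w[3,2] + w[3,4])/2
        # Direction weights of Equation (3).
        weight_h = abs(hh1 - hh2) + abs(hh3 - hh2)
        weight_v = abs(vv1 - vv2) + abs(vv3 - vv2)
        return weight_h, weight_v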
The G interpolation module 210 previously calculates temporary (or candidate) interpolation values for three cases using the vertical and horizontal projection values and the vertical and horizontal direction weights. The three cases are when an edge is in a vertical direction, when an edge is in a horizontal direction, and when there is no edge.
In Equation (4), Green_H denotes a temporary interpolation value for a horizontal direction, Green_V denotes a temporary interpolation value for a vertical direction, and Green_0 denotes a temporary interpolation value for no direction or an arbitrary direction (i.e., in case of no edge or in case of uniform (monochrome) color).
Green_H=(G12+G14)/2+R13−((R11+R13)/2+(R13+R15)/2)/2
Green_V=(G8+G18)/2+R13−((R3+R13)/2+(R23+R13)/2)/2
Green_0=(weight_h*(G12+G14)/2+weight_v*(G8+G18)/2)/(weight_h+weight_v)  (4)
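Continuing the sketch, the three candidates of Equation (4); the small epsilon guarding the division in flat regions is an implementation detail assumed here, not stated in the patent text:

    def candidate_greens(w, weight_h, weight_v, eps=1e-6):
        # Candidate G values at the center for a horizontal edge, a vertical
        # edge, and the no-edge case (Equation (4)).
        green_h = (w[2,1] + w[2,3])/2 + w[2,2] \
                  - ((w[2,0] + w[2,2])/2 + (w[2,2] + w[2,4])/2)/2
        green_v = (w[1,2] + w[3,2])/2 + w[2,2] \
                  - ((w[0,2] + w[2,2])/2 + (w[4,2] + w[2,2])/2)/2
        green_0 = (weight_h*(w[2,1] + w[2,3])/2
                   + weight_v*(w[1,2] + w[3,2])/2) / (weight_h + weight_v + eps)
        return green_h, green_v, green_0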
In step S120, the G interpolation module 210 calculates 5×5 directional projection values for a 5×5 Bayer window using the temporary interpolation values.
In Equation (5), {Green_H, Green_V, and Green_0} are substituted for X.
V1=R1+G6+R11+G16+R21
V2=G2+B7+G12+B17+G22
V3=R3+G8+X+G18+R23
V4=G4+B9+G14+B19+G24
V5=R5+G10+R15+G20+R25
H1=R1+G2+R3+G4+R5
H2=G6+B7+G8+B9+G10
H3=R11+G12+X+G14+R15
H4=G16+B17+G18+B19+G20
H5=R21+G22+R23+G24+R25  (5)
The G interpolation module 210 calculates a maximum difference value for each direction. The difference value indicates a spread between pixel values and aids in determining which direction has the more marked contrast.
Horizontal_Max_Diff(X)=max(H1,H2,H3,H4,H5)−min(H1,H2,H3,H4,H5)
Vertical_Max_Diff(X)=max(V1,V2,V3,V4,V5)−min(V1,V2,V3,V4,V5)  (6)
In step S130, the G interpolation module 210 determines whether the window includes an edge using Equation (7).
|Horizontal_Max_Diff(Green0)−Vertical_Max_Diff(Green0)|<th  (7)
If Equation (7) has a true value, that is, if a difference between a vertical direction contrast and a horizontal direction contrast is insignificant (i.e., if the difference is less than a threshold value th), the G interpolation module 210 determines that an edge is not included in the Bayer window (i.e., no direction or uniform/monochrome color).
In step S140, the G interpolation module 210 determines whether an edge included in the Bayer window is in a vertical direction or in a horizontal direction using Equations (8) and (9).
Horizontal_Max_Diff(Green H)>Vertical_Max_Diff(Green V)  (8)
If Equation (8) has a true value, that is, if the vertical direction contrast (i.e., Horizontal_Max_Diff(Green_H)) is greater than the horizontal direction contrast (i.e., Vertical_Max_Diff(Green_V)), the G interpolation module 210 determines that the edge included in the Bayer window is in a horizontal direction.
Horizontal_Max_Diff(Green H)<Vertical_Max_Diff(Green V)  (9)
If Equation (9) has a true value, that is, if the horizontal direction contrast is greater than the vertical direction contrast, the G interpolation module 210 determines that the edge included in the Bayer window is in a vertical direction.
For example, if an edge pattern, an upper half of which is dark and a lower half of which is bright, is present in the Bayer window, the horizontal direction contrast, i.e., a difference between left and right pixel values, will be insignificant and the vertical direction contrast, i.e., a difference between upper and lower pixel values, will be significant.
In steps S150, S160, and S170, the G interpolation module 210 allocates the temporary interpolation value of an estimated direction to a central pixel of the G channel window according to the edge direction (horizontal or vertical direction, or no direction) estimated through the above-described processes.
G13={Green_H|Green_V|Green0}  (10)
The G interpolation module 210 allocates the vertical direction interpolation value Green_V to the central pixel in step S150, allocates the horizontal direction interpolation value Green_H to the central pixel in step S160, and allocates the no-direction interpolation value Green0 to the central pixel in step S170.
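Steps S120 through S170 admit a similarly compact sketch. In the following C code, the helper names, the threshold parameter th, and the substitution of the candidate value X at the window center are our assumptions; the decision logic follows Equations (5) through (9) above.

#include <math.h>

/* Spread (max - min) of five projection values, Equation (6). */
static double span(const double v[5])
{
    double mx = v[0], mn = v[0];
    for (int i = 1; i < 5; ++i) {
        if (v[i] > mx) mx = v[i];
        if (v[i] < mn) mn = v[i];
    }
    return mx - mn;
}

/* Row/column sums of the 5x5 Bayer patch with candidate X at the center,
 * Equation (5); outputs Horizontal_Max_Diff(X) and Vertical_Max_Diff(X). */
static void max_diffs(const double p[5][5], double x,
                      double *hdiff, double *vdiff)
{
    double h[5] = {0}, v[5] = {0};
    for (int r = 0; r < 5; ++r)
        for (int c = 0; c < 5; ++c) {
            double val = (r == 2 && c == 2) ? x : p[r][c];
            h[r] += val;      /* H1..H5: row sums    */
            v[c] += val;      /* V1..V5: column sums */
        }
    *hdiff = span(h);
    *vdiff = span(v);
}

/* Steps S130-S170: choose the value allocated to the central pixel G13. */
double select_center_g(const double p[5][5], double green_h,
                       double green_v, double green0, double th)
{
    double hd0, vd0, hdh, vdh, hdv, vdv;
    max_diffs(p, green0, &hd0, &vd0);
    if (fabs(hd0 - vd0) < th)       /* Equation (7): no edge, step S170    */
        return green0;
    max_diffs(p, green_h, &hdh, &vdh);
    max_diffs(p, green_v, &hdv, &vdv);
    if (hdh > vdv)                  /* Equation (8): horizontal edge, S160 */
        return green_h;
    return green_v;                 /* Equation (9): vertical edge, S150   */
}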
Thereafter, the edge direction estimation module 220 estimates an edge direction within the primary interpolated G channel window based on a second preset number (i.e., 8) of directions, larger than the first preset number (i.e., 2).
FIG. 7 is a flow chart illustrating an edge direction estimation method performed by the edge direction estimation module 220.
FIGS. 8A and 8B are diagrams illustrating an 8-direction projection operation.
Referring to FIG. 7, the edge direction estimation method is for estimating an edge pattern within the primary interpolated G channel window with respect to multiple (i.e., 8) directions.
The edge direction estimation module 220 performs the following operations (i.e., edge direction estimation) with pixel values for the primary interpolated G channel window.
In step S210, the edge direction estimation module 220 calculates 8 directional projection values for the primary interpolated 5×5 G channel window. Two directions adjacent to each other among the 8 directions are separated by an angle of 90/4 (=22.5) degrees. That is, if the 5×5 G channel window is expressed by g(x, y), the edge direction estimation module 220 divides an angle of 180 degrees into (2n−2) directions at equal angular intervals. In the present embodiment, because n=5, (2n−2) becomes 8.
FIG. 8A illustrates a primary interpolated G channel window and FIG. 8B illustrates one of the 8 directional projections.
The edge direction estimation module 220 calculates 8 directional projection values using Equation (11).
dir(θ, z) = Σ_x Σ_y g(x, y)·δ(x·cos θ + y·sin θ − z)  (11)
In Equation (11), x and y denote pixel coordinates in a 5×5 G channel window and z denotes a distance from a central point (3, 3) of a 5×5 G channel image to each pixel coordinate. Projection values of respective directions obtained using Equation (11) are shown below in Equation (12).
dir[0][0]=(g[0][0]+g[0][1]+g[0][2]+g[0][3]+g[0][4])*12;
dir[0][1]=(g[1][0]+g[1][1]+g[1][2]+g[1][3]+g[1][4])*12;
dir[0][2]=(g[2][0]+g[2][1]+g[2][2]+g[2][3]+g[2][4])*12;
dir[0][3]=(g[3][0]+g[3][1]+g[3][2]+g[3][3]+g[3][4])*12;
dir[0][4]=(g[4][0]+g[4][1]+g[4][2]+g[4][3]+g[4][4])*12;
dir[1][0]=(g[0][0]+g[1][2]+g[2][4])*20;
dir[1][1]=(g[1][1]+g[2][3])*30;
dir[1][2]=(g[1][0]+g[2][2]+g[3][4])*20;
dir[1][3]=(g[2][1]+g[3][3])*30;
dir[1][4]=(g[2][0]+g[3][2]+g[4][4])*20;
dir[2][0]=(g[0][2]+g[1][3]+g[2][4])*20;
dir[2][1]=(g[0][1]+g[1][2]+g[2][3]+g[3][4])*15;
dir[2][2]=(g[0][0]+g[1][1]+g[2][2]+g[3][3]+g[4][4])*12;
dir[2][3]=(g[1][0]+g[2][1]+g[3][2]+g[4][3])*15;
dir[2][4]=(g[2][0]+g[3][1]+g[4][2])*20;
dir[3][0]=(g[0][2]+g[2][3]+g[4][4])*20;
dir[3][1]=(g[1][2]+g[3][3])*30;
dir[3][2]=(g[0][1]+g[2][2]+g[4][3])*20;
dir[3][3]=(g[1][1]+g[3][2])*30;
dir[3][4]=(g[0][0]+g[2][1]+g[4][2])*20;
dir[4][0]=(g[0][0]+g[1][0]+g[2][0]+g[3][0]+g[4][0])*12;
dir[4][1]=(g[0][1]+g[1][1]+g[2][1]+g[3][1]+g[4][1])*12;
dir[4][2]=(g[0][2]+g[1][2]+g[2][2]+g[3][2]+g[4][2])*12;
dir[4][3]=(g[0][3]+g[1][3]+g[2][3]+g[3][3]+g[4][3])*12;
dir[4][4]=(g[0][4]+g[1][4]+g[2][4]+g[3][4]+g[4][4])*12;
dir[5][0]=(g[0][4]+g[2][3]+g[4][2])*20;
dir[5][1]=(g[1][3]+g[3][2])*30;
dir[5][2]=(g[0][3]+g[2][2]+g[4][1])*20;
dir[5][3]=(g[1][2]+g[3][1])*30;
dir[5][4]=(g[0][2]+g[2][1]+g[4][0])*20;
dir[6][0]=(g[2][4]+g[3][3]+g[4][2])*20;
dir[6][1]=(g[1][4]+g[2][3]+g[3][2]+g[4][1])*15;
dir[6][2]=(g[0][4]+g[1][3]+g[2][2]+g[3][1]+g[4][0])*12;
dir[6][3]=(g[0][3]+g[1][2]+g[2][1]+g[3][0])*15;
dir[6][4]=(g[0][2]+g[1][1]+g[2][0])*20;
dir[7][0]=(g[2][4]+g[3][2]+g[4][0])*20;
dir[7][1]=(g[2][3]+g[3][1])*30;
dir[7][2]=(g[1][4]+g[2][2]+g[3][0])*20;
dir[7][3]=(g[1][3]+g[2][1])*30;
dir[7][4]=(g[0][4]+g[1][2]+g[2][0])*20  (12)
In Equation (12), the constants '12', '15', '20' and '30' are normalization weights: each projection sum is scaled to the common multiple 60 (5×12 = 4×15 = 3×20 = 2×30 = 60), which compensates for the differing number of pixels crossed by each projection line.
In step S220, the edge direction estimation module 220 calculates a maximum difference value for each direction. The maximum difference value denotes a difference between pixel values and aids in determining which direction has the more marked contrast. Namely, for each direction, the edge direction estimation module 220 calculates the greatest difference, i.e., a maximum contrast difference, among the projection values obtained for that direction. This is given by Equation (13).
diff_ij = max(dir_i,j,k − dir_i,j,l)  (13)
In Equation (13), i and j denote pixel coordinates, and k and l denote z values.
In step S230, the edge direction estimation module 220 detects an edge direction using the maximum contrast difference. Such an edge direction is calculated using Equation (14).
dir = arg max_p |(diff_(p−1)/2 + diff_p + diff_(p+1)/2) − (diff_(p+3)/2 + diff_(p+4) + diff_(p+5)/2)|  (14)
In Equation (14), p denotes an angle value of a projection direction.
The edge direction estimation module 220 calculates the angle value p satisfying Equation (14) and determines this angle value p as the estimated edge direction.
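Under the reading of Equations (13) and (14) given above (a per-direction contrast diff, then a comparison of the smoothed contrast around p against that around the orthogonal direction p+4), steps S220 and S230 can be sketched in C as follows; the function names are our own.

#include <math.h>

/* Equation (13): maximum contrast among the 5 projection values of one
 * direction. */
static double max_contrast(const double d[5])
{
    double mx = d[0], mn = d[0];
    for (int k = 1; k < 5; ++k) {
        if (d[k] > mx) mx = d[k];
        if (d[k] < mn) mn = d[k];
    }
    return mx - mn;
}

/* Equation (14), read as: pick the direction p maximizing the smoothed
 * contrast around p minus that around the orthogonal direction p+4.
 * Indices wrap modulo 8 because the directions repeat every 180 degrees. */
int estimate_edge_direction(const double dir[8][5])
{
    double diff[8];
    double best = -1.0;
    int best_p = 0;
    for (int i = 0; i < 8; ++i)
        diff[i] = max_contrast(dir[i]);
    for (int p = 0; p < 8; ++p) {
        double a = diff[(p + 7) % 8] / 2 + diff[p] + diff[(p + 1) % 8] / 2;
        double b = diff[(p + 3) % 8] / 2 + diff[(p + 4) % 8] + diff[(p + 5) % 8] / 2;
        double score = fabs(a - b);
        if (score > best) { best = score; best_p = p; }
    }
    return best_p;   /* direction index; angle = best_p * 22.5 degrees */
}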
The G interpolation correction module 230 performs a secondary interpolation on the primary interpolated G channel window based on the estimated edge direction with respect to the primary interpolated G channel window. That is, the G interpolation correction module 230 analyzes the estimated edge direction and a projection frequency of the edge, intensifies interpolation in the edge direction by applying a 5×5 adaptive Gaussian filter, and eliminates noise. The application of the adaptive Gaussian filter refers to adjusting a Gaussian smoothing filter by analyzing a two-Dimensional (2D) frequency and direction of a signal.
FIG. 9 is a flow chart illustrating a secondary interpolation method performed by the G interpolation correction module 230 as illustrated in FIG. 4.
FIGS. 10A and 10B are diagrams illustrating a projection frequency analysis process.
FIGS. 11A and 11B are diagrams illustrating an adaptive Gaussian filter generation process.
In step S310, the G interpolation correction module 230 analyzes the frequency of the projection values in the estimated edge direction and in the direction perpendicular to it using Equation (15). That is, the G interpolation correction module 230 approximates the Fourier magnitude of the edge direction as cosine terms in the spatial domain, based on the Fourier-slice theorem. This process corresponds to a Discrete Fourier Transform (DFT).
More specifically, the G interpolation correction module 230 measures the strength of an edge direction p using a filter. To measure the strength of the edge direction p, weighted average values of Equation (15) are used. Constants indicated in Equation (15) are filter values for measuring the strength of the edge direction. It will be apparent to those skilled in the art that the filter values may be changed according to design.
dir_strength1=|p·{+0.6152,+0.3485,+0.0000,−0.3485,−0.6152}|
dir_strength2=|p·{+0.2265,+0.0000,−0.4531,+0.0000,+0.2265}|
dir_strength3=|p·{+0.4570,−0.3485,+0.0000,+0.3485,−0.4570}|
dir_strength4=|p·{+0.2265,−0.3398,+0.2265,−0.3398,−0.2265}|  (15)
In Equation (15), '·' denotes the inner product of the five projection values of the direction p with the listed filter coefficients.
The G interpolation correction module 230 defines a 2D Gaussian smoothing filter using Equation (16) and determines a direction and shape of a covariance matrix Σ.
G(x) = (1/(2π))·exp(−(1/2)·x·Σ^(−1)·x^t)  (16)
In Equation (16), x denotes a 2D coordinate vector and Σ denotes the covariance matrix.
FIGS. 10A and 10B graphically illustrate DFT in the projection frequency analysis process. More specifically, FIG. 10A is a diagram before DFT and FIG. 10B is a diagram after DFT.
In step S320, the G interpolation correction module 230 converts a Gaussian filter of a frequency domain, as illustrated in FIG. 11A, into an adaptive Gaussian filter of a spatial domain, as illustrated in FIG. 11B, using the above-described projection frequency analysis, intensifies interpolation in an edge direction, and eliminates noise.
The G interpolation correction module 230 determines an eigenvector of a covariance matrix using Equation (17) and defines a principal component of the covariance matrix Σ of the 2D Gaussian smoothing filter identically to a predefined projection direction p.
x cos(θ)+y sin(θ)=0  (17)
In Equation (17), θ denotes a preset angle p (i.e., one of 8 directions).
The G interpolation correction module 230 determines the eigenvalues of the θ direction and of the direction perpendicular to the θ direction (the principal components) using Equation (18) (as a deviation of the projection frequency cosine coefficients).
σ_freq = Σ_i (i·freq(i))² / Σ_i freq(i)  (18)
In Equation (18), i denotes a frequency index, where −4 ≤ i ≤ 4 and i ≠ 0.
The G interpolation correction module 230 converts σ_freq of the frequency domain into σ_spatial of the spatial domain using Equation (19) to calculate a normal distribution.
σ_spatial = (4·ln 2)/(2·ln 2·σ_freq)  (19)
The spread varies according to the directions of the two principal vectors, and the Gaussian distribution therefore takes a circular or elliptical form.
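Equations (16) through (19) shape a 2D Gaussian whose principal axis follows the estimated edge direction. The C sketch below builds such an oriented 5×5 kernel; taking the two standard deviations as inputs (e.g., values derived via Equation (19)) and realizing the covariance by a coordinate rotation are our assumptions rather than the patent's exact procedure.

#include <math.h>

/* Build a 5x5 Gaussian kernel whose principal axis follows the edge angle
 * theta (radians; e.g., p * 22.5 degrees converted to radians), with
 * standard deviations s_along (along the edge) and s_across (across it). */
void adaptive_gaussian(double k[5][5], double theta,
                       double s_along, double s_across)
{
    double ct = cos(theta), st = sin(theta), sum = 0.0;
    for (int r = 0; r < 5; ++r)
        for (int c = 0; c < 5; ++c) {
            double x = c - 2.0, y = r - 2.0;
            double u =  x * ct + y * st;   /* coordinate along the edge  */
            double v = -x * st + y * ct;   /* coordinate across the edge */
            k[r][c] = exp(-0.5 * (u * u / (s_along * s_along)
                                + v * v / (s_across * s_across)));
            sum += k[r][c];
        }
    for (int r = 0; r < 5; ++r)            /* normalize to unit gain */
        for (int c = 0; c < 5; ++c)
            k[r][c] /= sum;
}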
In step S330, the G interpolation correction module 230 combines the adaptive Gaussian filter with a bilateral filter using Equation (20) and applies the combined filter to the primary interpolated G channel image.
f′(x) = ( Σ G(x)·g(c−x)·f(x) ) / ( Σ G(x)·g(c−x) )  (20)
In Equation (20), G(x) denotes a 2D adaptive Gaussian, c denotes a central pixel value, and g(x) denotes a one-Dimensional (1D) normal distribution.
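Applied at the central pixel of a 5×5 patch, Equation (20) may be sketched in C as below; the range parameter sigma_r and the explicit summation over the window are illustrative assumptions, since the patent specifies only the functional form.

#include <math.h>

/* Equation (20) at the center of a 5x5 G patch: the adaptive Gaussian
 * kernel k supplies the spatial (and direction/frequency) weight, and a
 * 1D normal on the intensity difference to the central value c supplies
 * the range weight. sigma_r is an illustrative parameter. */
double filter_center(const double g5[5][5], const double k[5][5],
                     double sigma_r)
{
    double c = g5[2][2], num = 0.0, den = 0.0;
    for (int r = 0; r < 5; ++r)
        for (int col = 0; col < 5; ++col) {
            double d = c - g5[r][col];
            double w = k[r][col] * exp(-0.5 * d * d / (sigma_r * sigma_r));
            num += w * g5[r][col];
            den += w;
        }
    return num / den;
}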
The R/B interpolation module 240 performs interpolation on second and third primary colors (R and B) based on the secondary interpolated image window and the estimated edge direction.
FIG. 12 is a flow chart illustrating an interpolation method for R and B channel windows performed by the R/B interpolation module 240 as illustrated in FIG. 4.
FIG. 13 is a diagram illustrating interpolation of a B channel window using a G channel window.
For example, in step S410, the R/B interpolation module 240 interpolates 6 neighborhood pixels 'A', 'C', 'D', 'E', 'F', and 'H', which are adjacent to a central pixel 'X', using the secondary interpolated G channel window 420 (pixel values 'c', 'd', 'e', and 'f') with respect to a B channel window 410, and in step S420, interpolates the central pixel 'X' according to the estimated edge direction using the neighborhood interpolation values, as shown in an interpolated B channel window 430.
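As a hedged illustration of the idea behind steps S410 and S420, the following C sketch estimates the missing B value at an R-centered window by color-difference constancy over the four diagonal B sites; it deliberately omits the 6-neighbor interpolation and the edge-directional weighting of FIG. 13, and the function name and site choice are our simplifications.

/* Estimate the missing B at the center of an R-centered 5x5 window using
 * the color-difference constancy assumption: B ~ G + mean(B - G) at the
 * four diagonal B sites (1,1), (1,3), (3,1), (3,3). b holds the sparse B
 * samples; g is the fully interpolated G window. Illustrative only. */
double center_b_color_difference(const double b[5][5], const double g[5][5])
{
    double diff = ((b[1][1] - g[1][1]) + (b[1][3] - g[1][3]) +
                   (b[3][1] - g[3][1]) + (b[3][3] - g[3][3])) / 4.0;
    return g[2][2] + diff;
}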
FIG. 14 is a diagram schematically summarizing an edge-adaptive interpolation and noise filtering method according to an embodiment of the present invention.
Referring to FIG. 14, in step S510 primary interpolation is performed on a G channel window 510 based on two directions (horizontal and vertical directions) to obtain a primary interpolated G channel window 515. In step S520 an edge direction 525 is estimated within the primary interpolated G channel window 515, based on 8 directions 520 with respect to the primary interpolated G channel window. In step S530 secondary interpolation is performed on the primary interpolated G channel window 530, based on the estimated edge direction to obtain a secondary interpolated G channel window 535. In this case, a trilateral filter considering an edge direction, a frequency, and a distance from a central pixel is applied as a noise filter.
In step S540, interpolation is performed on a B or R channel window 542, based on the secondary interpolated G channel window 540 and the estimated edge direction 525, to obtain an interpolated B or R channel window 545.
FIGS. 15A-15C, 16A-16C, and 17A-17C are diagrams illustrating images obtained by an edge-adaptive interpolation and noise filtering method according to the present invention. Here, color interpolation with different numbers of edge-estimation directions was performed on a Bayer image photographed by a Sony 8-Megapixel image sensor, while Automatic White Balance (AWB), gamma, and edge enhancement were maintained at the same values of the same algorithm.
More specifically, FIG. 15A shows an image according to two-direction edge estimation, FIG. 15B shows an image according to 4-direction edge estimation, and FIG. 15C shows an image according to 8-direction edge estimation. As shown in FIG. 15C, the image according to the 8-direction edge estimation smoothly expresses a high frequency edge of an angle slanted in multiple (8) directions.
FIG. 16A shows an image according to two-direction edge estimation, FIG. 16B shows an image according to 4-direction edge estimation, and FIG. 16C shows an image according to 8-direction edge estimation. As shown in FIG. 16C, the image according to the 8-direction edge estimation smoothes an edge by interpolation refinement within the same processing block and simultaneously eliminates noise, thereby obtaining a clear image.
FIG. 17A shows an image according to two-direction edge estimation, FIG. 17B shows an image according to 4-direction edge estimation, and FIG. 17C shows an image according to 8-direction edge estimation. Although R/B interpolation in a wrong direction may aggravate color noise, the image according to the 8-direction edge estimation can reduce color noise (i.e., fringe), as shown in FIG. 17C.
According to the above-described embodiments of the present invention, adaptive interpolation and noise filtering can accurately analyze an edge direction within a given window so that an image can be naturally restored while preserving an edge boundary, and can eliminate noise while preserving details.
It is apparent that the edge-adaptive interpolation and noise filtering method according to the present invention can be implemented in the form of hardware, software (i.e., a program), or a combination thereof. Such a program may be stored in a volatile or non-volatile recording medium readable by a machine such as a computer. The recording medium may be a storage device such as a Read-Only Memory (ROM); a memory such as a Random Access Memory (RAM), a memory chip, or an integrated circuit; or an optical or magnetic recording medium such as a Compact Disk (CD), a Digital Versatile Disk (DVD), a magnetic disk, or a magnetic tape. Namely, the edge-adaptive interpolation and noise filtering method of the present invention may be implemented in the form of a program including codes for achieving the method. Furthermore, the program may be electrically transmitted through an arbitrary medium, such as communication signals propagated by wire or wirelessly.
Although certain embodiments of the present invention have been disclosed for illustrative purposes, various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims. Accordingly, the scope of the present invention should not be limited to the description of the embodiment, but defined by the accompanying claims and equivalents thereof.

Claims (6)

What is claimed is:
1. A method of edge-adaptive interpolation and noise filtering an image, the method comprising:
performing primary interpolation on a first color, based on a first preset number of directions, with respect to a first color window obtained from an input image window;
estimating an edge direction within a primary interpolated first color window obtained by the primary interpolation, based on a second preset number of directions, the second preset number of directions being larger than the first preset number, with respect to the primary interpolated first color window;
performing secondary interpolation on the first color, based on the estimated edge direction, with respect to the primary interpolated first color window; and
performing interpolation on a second color, based on the estimated edge direction, with respect to a second color window obtained from the input image window,
wherein performing the secondary interpolation comprises:
setting a filter based on the estimated edge direction and an edge frequency; and
performing the secondary interpolation on the first color, based on the estimated edge direction, by applying the filter to the primary interpolated first color window.
2. The method of claim 1, wherein the first preset number is 2 and the second preset number is 8.
3. The method of claim 1, wherein performing primary interpolation comprises:
calculating projection values for the first color window to obtain temporary interpolation values;
determining the edge direction using the temporary interpolation values, based on the first preset number of directions; and
performing primary interpolation on the first color, with respect to the first color window, based on the edge direction.
4. The method of claim 1, wherein estimating the edge direction comprises:
calculating projection values for the second preset number of directions, with respect to the primary interpolated first color window; and
estimating the edge direction using the projection values, based on the second preset number of directions.
5. A non-transitory computer-readable recording medium storing a program causing a processor to execute a process for edge-adaptive interpolation and noise filtering of an image, the process comprising:
performing primary interpolation on a first color, based on a first preset number of directions, with respect to a first color window obtained from an input image window;
estimating an edge direction within a primary interpolated first color window obtained by the primary interpolation, based on a second preset number of directions, the second preset number of directions being larger than the first preset number, with respect to the primary interpolated first color window;
performing secondary interpolation on the first color, based on the estimated edge direction, with respect to the primary interpolated first color window; and
performing interpolation on a second color, based on the estimated edge direction, with respect to a second color window obtained from the input image window,
wherein performing the secondary interpolation comprises:
setting a filter based on the estimated edge direction and an edge frequency; and
performing the secondary interpolation on the first color, based on the estimated edge direction, by applying the filter to the primary interpolated first color window.
6. A portable terminal comprising:
a non-transitory computer-readable recording medium storing a program causing a processor to execute a process for edge-adaptive interpolation and noise filtering of an image,
wherein the process includes performing primary interpolation on a first color, based on a first preset number of directions, with respect to a first color window obtained from an input image window; estimating an edge direction within a primary interpolated first color window obtained by the primary interpolation, based on a second preset number of directions, the second preset number of directions being larger than the first preset number, with respect to the primary interpolated first color window; performing secondary interpolation on the first color, based on the estimated edge direction, with respect to the primary interpolated first color window; and performing interpolation on a second color, based on the estimated edge direction, with respect to a second color window obtained from the input image window,
wherein performing the secondary interpolation comprises:
setting a filter based on the estimated edge direction and an edge frequency; and
performing the secondary interpolation on the first color, based on the estimated edge direction, by applying the filter to the primary interpolated first color window.
US12/853,853 2009-08-10 2010-08-10 Edge-adaptive interpolation and noise filtering method, computer-readable recording medium, and portable terminal Expired - Fee Related US9111365B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2009-0073442 2009-08-10
KR1020090073442A KR101335127B1 (en) 2009-08-10 2009-08-10 Edge adaptive interpolation and noise filtering method, computer-readable recording medium and portable terminal

Publications (2)

Publication Number Publication Date
US20110032396A1 US20110032396A1 (en) 2011-02-10
US9111365B2 true US9111365B2 (en) 2015-08-18

Family

ID=42668210

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/853,853 Expired - Fee Related US9111365B2 (en) 2009-08-10 2010-08-10 Edge-adaptive interpolation and noise filtering method, computer-readable recording medium, and portable terminal

Country Status (3)

Country Link
US (1) US9111365B2 (en)
EP (1) EP2293238A3 (en)
KR (1) KR101335127B1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101678690B1 (en) * 2010-04-15 2016-11-24 삼성전자주식회사 Method for image processing and apparatus for the same
US8737769B2 (en) * 2010-11-26 2014-05-27 Microsoft Corporation Reconstruction of sparse data
JP5631769B2 (en) * 2011-02-17 2014-11-26 株式会社東芝 Image processing device
TWI464704B (en) * 2011-08-22 2014-12-11 Novatek Microelectronics Corp Color information interpolation method
WO2013075728A1 (en) * 2011-11-22 2013-05-30 Cern - European Organization For Nuclear Research Method and system for compressing a data array with projections
CN103049878A (en) * 2012-12-10 2013-04-17 天津天地伟业数码科技有限公司 Color interpolation method on basis of FPGA (Field Programmable Gate Array) and edge prediction algorithm
US9582853B1 (en) * 2015-08-03 2017-02-28 Intel Corporation Method and system of demosaicing bayer-type image data for image processing
EP3452981A1 (en) * 2016-05-03 2019-03-13 Koninklijke Philips N.V. Device and method for denoising a vector-valued image
CN106162133B (en) * 2016-06-30 2018-11-09 北京大学 Color interpolation method based on adaptive directed filtering
GB2559776B (en) 2017-02-17 2022-04-06 Grass Valley Ltd Decoding a bayer-mask or like coded image
CN113678437A (en) 2019-03-28 2021-11-19 华为技术有限公司 Method and apparatus for intra smoothing

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5373322A (en) 1993-06-30 1994-12-13 Eastman Kodak Company Apparatus and method for adaptively interpolating a full color image utilizing chrominance gradients
US6130960A (en) 1997-11-03 2000-10-10 Intel Corporation Block-matching algorithm for color interpolation
US6507364B1 (en) 1998-03-13 2003-01-14 Pictos Technologies, Inc. Edge-dependent interpolation method for color reconstruction in image processing devices
US6795586B1 (en) 1998-12-16 2004-09-21 Eastman Kodak Company Noise cleaning and interpolating sparsely populated color digital image
US6404918B1 (en) 1999-04-30 2002-06-11 Hewlett-Packard Company Image demosaicing method utilizing directional smoothing
US6832009B1 (en) 1999-09-24 2004-12-14 Zoran Corporation Method and apparatus for improved image interpolation
US7053944B1 (en) 1999-10-01 2006-05-30 Intel Corporation Method of using hue to interpolate color pixel signals
US6816194B2 (en) 2000-07-11 2004-11-09 Microsoft Corporation Systems and methods with error resilience in enhancement layer bitstream of scalable video coding
US6707937B1 (en) 2000-07-14 2004-03-16 Agilent Technologies, Inc. Interpolation of edge portions of a digital image
US7088392B2 (en) * 2001-08-27 2006-08-08 Ramakrishna Kakarala Digital image system and method for implementing an adaptive demosaicing method
US6882563B2 (en) 2001-11-30 2005-04-19 Kabushiki Kaisha Toshiba Magnetic memory device and method for manufacturing the same
US6970597B1 (en) 2001-12-05 2005-11-29 Pixim, Inc. Method of defining coefficients for use in interpolating pixel values
US6933971B2 (en) 2002-05-14 2005-08-23 Kwe International, Inc. Reconstruction of color components in digital image processing
US7256828B2 (en) 2003-01-16 2007-08-14 Dialog Imaging Systems Gmbh Weighted gradient based and color corrected interpolation
US7133553B2 (en) 2003-02-18 2006-11-07 Avago Technologies Sensor Ip Pte. Ltd. Correlation-based color mosaic interpolation adjustment using luminance gradients
US7333678B1 (en) 2003-05-20 2008-02-19 Micronas Usa, Inc. Edge adaptive demosaic system and method
US6875040B1 (en) 2004-01-09 2005-04-05 Pirana Plugs Lockable electric power cord adapter
US7376288B2 (en) 2004-05-20 2008-05-20 Micronas Usa, Inc. Edge adaptive demosaic system and method
US7292725B2 (en) 2004-11-15 2007-11-06 Industrial Technology Research Institute Demosaicking method and apparatus for color filter array interpolation in digital image acquisition systems
US20070002154A1 (en) 2005-06-15 2007-01-04 Samsung Electronics Co., Ltd. Method and apparatus for edge adaptive color interpolation
KR20060131083A (en) 2005-06-15 2006-12-20 삼성전자주식회사 Method and apparatus for edge adaptive color interpolation
JP2008015946A (en) 2006-07-07 2008-01-24 Canon Inc Apparatus and method for image processing
US20080007630A1 (en) 2006-07-07 2008-01-10 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20090135278A1 (en) 2007-11-22 2009-05-28 Olympus Corporation Image processing device and computer-readable storage medium
JP2009130632A (en) 2007-11-22 2009-06-11 Olympus Corp Image processor, and image processing program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jonghwa Lee et al., "Edge-Adaptive Demosaicking for Artifact Suppression Along Line Edges", IEEE Transactions on Consumer Electronics, vol. 53, No. 3, Aug. 1, 2007.

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150163465A1 (en) * 2013-12-09 2015-06-11 Marvell World Trade Ltd. Method and apparatus for demosaicing of color filter array image
US9613396B2 (en) * 2013-12-09 2017-04-04 Marvell World Trade Ltd. Method and apparatus for demosaicing of color filter array image
US10249021B2 (en) * 2016-11-29 2019-04-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, and electronic device
US10440265B2 (en) 2016-11-29 2019-10-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, electronic device and control method
US10438320B2 (en) 2016-11-29 2019-10-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, and electronic device

Also Published As

Publication number Publication date
KR20110015969A (en) 2011-02-17
KR101335127B1 (en) 2013-12-03
EP2293238A3 (en) 2013-02-27
EP2293238A2 (en) 2011-03-09
US20110032396A1 (en) 2011-02-10

Similar Documents

Publication Publication Date Title
US9111365B2 (en) Edge-adaptive interpolation and noise filtering method, computer-readable recording medium, and portable terminal
US7825965B2 (en) Method and apparatus for interpolating missing colors in a color filter array
US9582863B2 (en) Image processing apparatus, image processing method, and program
US9179113B2 (en) Image processing device, and image processing method, and program
JP5672776B2 (en) Image processing apparatus, image processing method, and program
EP2278788B1 (en) Method and apparatus for correcting lens shading
US7256828B2 (en) Weighted gradient based and color corrected interpolation
WO2011048870A1 (en) Image processing device, image processing method, and program
US20090252411A1 (en) Interpolation system and method
US7813583B2 (en) Apparatus and method for reducing noise of image sensor
US9030579B2 (en) Image processing apparatus and control method that corrects a signal level of a defective pixel
US8982248B2 (en) Image processing apparatus, imaging apparatus, image processing method, and program
US20140184853A1 (en) Image processing apparatus, image processing method, and image processing program
US11202045B2 (en) Image processing apparatus, imaging apparatus, image processing method, and program
US20100134661A1 (en) Image processing apparatus, image processing method and program
US9401006B2 (en) Image processing apparatus, image processing method, and storage medium
US20110032269A1 (en) Automatically Resizing Demosaicked Full-Color Images Using Edge-Orientation Maps Formed In The Demosaicking Process
US7440016B2 (en) Method of processing a digital image
US20140355872A1 (en) Method for determining interpolating direction for color demosaicking
US20190387205A1 (en) Tile-selection based deep demosaicing acceleration
KR101327790B1 (en) Image interpolation method and apparatus
US9451222B2 (en) Color processing of digital images
US9258461B2 (en) Image processing device and method, and image processing program
US10878533B2 (en) Method and device for demosaicing of color images
Jeong et al. Edge-Adaptive Demosaicking for Reducing Artifact along Line Edge

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PARK, HEE-CHAN;PARK, MIN-KYU;SONG, HAN-SAE;AND OTHERS;REEL/FRAME:024828/0100

Effective date: 20100726

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

ZAAA Notice of allowance and fees due

Free format text: ORIGINAL CODE: NOA

ZAAB Notice of allowance mailed

Free format text: ORIGINAL CODE: MN/=.

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20230818