WO1999013429A1 - A method for directly compressing a color image and tailoring the compression based on the color filter array, optics, and sensor characteristics - Google Patents


Info

Publication number
WO1999013429A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
color
image processing
processing
array
Prior art date
Application number
PCT/US1998/012525
Other languages
French (fr)
Inventor
Antony Scott Bruner
Original Assignee
Intel Corporation
Priority date
Filing date
Publication date
Application filed by Intel Corporation filed Critical Intel Corporation
Priority to AU80732/98A priority Critical patent/AU8073298A/en
Publication of WO1999013429A1 publication Critical patent/WO1999013429A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding

Definitions

  • the present invention relates generally to data compression, and more specifically relates to a method for directly compressing a color image and tailoring the compression based on characteristics (e.g., color filter array, optics, and sensor characteristics) of an image capture device.
  • An image capture device, having a plurality of photo-sites, captures color images and generates a digital representation of the color image.
  • Conventional color image processing systems perform extensive color processing on a digital color image before compressing the image.
  • a typical color image processing system processes the following color spaces before compressing:
  • the sensor color space is imposed by the scene, optics, CFA and sensor system and is unique for each implementation.
  • the working color space is a color space (e.g., RGB (CIE)) to which corrections such as white balance and gamma models can be applied.
  • the display color space (e.g., RGB (NTSC)) is imposed by the intended display, such as a monitor with known phosphor types.
  • the transmission color space is imposed by the compression/transmission mechanism being implemented, such as YCrCb (ITU-R BT.601).
  • An example of conventional, computer-intensive color processing is as follows. First, an interpolation step predicts missing color sample components, since each photo-site typically employs a single primary color filter (e.g., a Red, Green or Blue filter) when, in fact, the scene contains many colors.
  • the sensor color space is a function of the image capture device (e.g., optics, color filters, etc.).
  • the <W1W2W3> working color space is typically a standard color space, such as YCrCb.
  • Color filter arrays detect color and typically employ three or more filter passbands. Those color filter arrays that employ a primary set of colors are "weighted" around red, green and blue (RGB). Those color filter arrays employing a complementary color set are "weighted" around cyan, magenta and yellow (CMY).
  • the step of processing the sample in the <W1W2W3> working color space to account for perceived color and the temperature of the lighting conditions is needed. Furthermore, processing to correct for anticipated non-ideal reproduction and/or display device characteristics introduced by a device (such as the monitor or screen) is needed. Processing is also needed to transform from the <W1W2W3> working color space to a preferred color space, denoted <D1D2D3>.
  • <D1D2D3> refers to the display's color space (e.g., an RGB color space). This display color space conforms to the NTSC receiver phosphor standard and is employed for displaying color on monitors.
  • sample decimations (i.e., eliminations) of selected preferred color space components are performed on the image as a precursor to later stages of a transform based compression/decompression scheme.
  • This intensive processing of the color image prior to compression expends system resources and reduces the performance of the system.
  • the time for an image capture device to present a compressed image for storage or transmission is significantly increased by the time needed for the color processing steps outlined above.
  • An improved method of image processing employs an optical low-pass filter and a color filter with a pattern.
  • the optical low-pass filter and color filter array generate separable aperture functions that are amenable to transform based compression schemes.
  • the method of image processing includes the step of capturing the image and directly compressing the image without color processing.
  • Figure 1 illustrates an image capture device in which the direct compression system of the present invention can be implemented.
  • Figure 2 illustrates in greater detail a photo-site shown in Figure
  • Figure 3A illustrates a Bayer CFA pattern.
  • Figure 3B illustrates an RGB stripe CFA pattern.
  • Figure 3C illustrates an interline RGB CFA pattern.
  • Figure 3D illustrates a complementary Bayer CFA pattern.
  • Figure 4 illustrates a block diagram of one embodiment of the direct compression system of the present invention.
  • Figure 5 illustrates how the present invention groups elements of a two-dimensional sample array in accordance with a primary red, green, blue (RGB) color filter array having a Bayer pattern.
  • Figure 6 illustrates how the present invention recovers the initial two-dimensional sample array from the four grouped arrays.
  • Figure 7 illustrates in greater detail the image processing unit in which the direct compression system of the present invention can be implemented.
  • Figure 8 illustrates a computer system in which the image recovery software of the present invention, described in Figure 6, can be implemented.
  • each block within the flowcharts represents both a method step and an apparatus element for performing the method step.
  • the corresponding apparatus element may be configured in hardware, software, firmware or combinations thereof.
  • FIG. 1 illustrates an image capture device 10 in which the color image processing system 20 of the present invention can be implemented.
  • the image capture device 10 includes an optical system 12 that employs an optical low-pass filter for sampling incident light 11 from the scene being captured.
  • the optical low-pass filter is employed to prevent aliasing (i.e., to ensure the scene's spectral content does not exceed the Nyquist limit of the sampling array). Adhering to the Nyquist criterion avoids the introduction of sampling artifacts such as Moire patterns.
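As a toy 1-D illustration (ours, not part of the patent disclosure), the Nyquist point above can be demonstrated numerically: a sinusoid sampled at less than twice its frequency shows up at a false, lower frequency, the one-dimensional analogue of a Moire pattern. The sample rate and frequencies below are arbitrary illustrative values.

```python
import numpy as np

# Sampling a sinusoid above the Nyquist rate aliases it to a lower
# apparent frequency. Here fs = 8 samples per unit, so the Nyquist
# limit is fs/2 = 4; a 7-cycle signal masquerades as a 1-cycle signal.
fs = 8.0                      # samples per unit (sensor pitch analogue)
t = np.arange(0, 4, 1 / fs)   # sample positions
f_true = 7.0                  # signal frequency above Nyquist
aliased = np.cos(2 * np.pi * f_true * t)
f_alias = abs(f_true - fs)    # predicted alias frequency: |7 - 8| = 1
expected = np.cos(2 * np.pi * f_alias * t)
assert np.allclose(aliased, expected)  # samples are indistinguishable
```

This is exactly the ambiguity the optical blur filter removes in advance: content above the sensor's Nyquist limit is attenuated optically so it cannot fold back into the image.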
  • the optical low-pass filter employed by the optical system 12 eliminates luma spectral content above NM/2 and chroma spectral content above NM/4.
  • luma or luminance can be thought of in terms of a human's visual perception of "brightness" of a scene.
  • Chroma or chrominance can be thought of in terms of a human's visual perception of "colors" in a scene.
  • Spectral content is defined to be the finite interval expressed as a number of cycles per degree for each of the characteristic wavelengths comprising the brightness and color content of a scene as captured by the optical system of an image capturing device.
  • Spectral content can be expressed as line pairs per unit distance on the focal plane.
  • the image capture device 10 also includes a color filter array 14 that receives the sampled light from the optical system 12.
  • the color filter array 14 includes a plurality of color filter elements that are arranged in a particular pattern.
  • the pattern that defines the mosaic of color filter elements is referred to herein as a color filter array (CFA) pattern.
  • Each filter element has a spectral band-pass characteristic and, accordingly, only allows light of certain wavelengths, specified by the band, to pass.
  • a blue filter element allows light wavelengths corresponding to a predetermined pass band (e.g., 400-500 nm) representing the color blue to pass.
  • the image capture device 10 also includes a photo-site array 16.
  • the photo-site array 16 includes a plurality of photo-sites 18 arranged in rows and columns.
  • the photo-site array 16 receives light photons and responsive thereto generates a digital representation 17 of the image (hereinafter referred to as an image array 17) and provides this digital representation 17 to the color image processing system 20 of the present invention.
  • the color image processing system 20 of the present invention will be described in greater detail hereinafter with reference to Figure 4.
  • FIG 2 illustrates in greater detail the photo-site 18 that includes a transducer element 19 (e.g., a photodetector, or photodiode) that receives the filtered light from the color filter array 14.
  • the transducer element 19 converts the light photons into an electrical signal representing the intensity of the light.
  • the photo-site 18 includes an amplifier 21 for amplifying the electrical signal that represents the intensity of photons at that photo-site.
  • the photo-site 18 also includes an analog- to-digital converter (A/D) 23 for converting the amplified electrical signal into a digital representation of the image at that photo-site. Accordingly, each photo site 18 generates a digital value corresponding to the intensity of the image at that particular photo-site 18.
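The transducer-amplifier-A/D chain described above can be sketched as follows. This is a hypothetical model, not the patent's circuitry; the gain, full-scale voltage, and bit depth are illustrative assumptions.

```python
# Hypothetical sketch of a photo-site: a transducer signal is amplified
# and then quantized by an A/D converter into a digital code.
def photo_site(intensity, gain=4.0, v_full_scale=1.0, bits=8):
    """Map a light intensity to a digital value (assumed parameters)."""
    v = min(intensity * gain, v_full_scale)              # amplifier with clipping
    code = int(round(v / v_full_scale * (2**bits - 1)))  # A/D quantization
    return code

print(photo_site(0.125))  # amplifies to half of full scale -> 128
print(photo_site(1.0))    # saturates at full scale -> 255
```

The image array 17 is then simply the collection of such codes, one per photo-site, arranged in rows and columns.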
  • the digital representation of each photo-site together comprise the digital representation 17 of the image (i.e., the image array).
  • Figure 3A illustrates a Bayer CFA pattern.
  • Figure 3B illustrates an RGB stripe CFA pattern.
  • Figure 3C illustrates an interline RGB CFA pattern.
  • Figure 3D illustrates a complementary Bayer CFA pattern. These patterns are shown for a four by four color filter array.
  • the RGB stripe pattern employs green elements, red elements and blue elements.
  • the interline RGB pattern employs green even elements, green odd elements, red even-odd elements and blue odd-even elements.
  • the even and odd designations are employed to reflect spatial locations as indicated by the row and column indices. For example, the red even-odd elements occur at even numbered rows and odd numbered columns. It will be noted by those skilled in the art that one can tailor the CFA patterns to suit a particular compression algorithm.
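The even/odd placement rules above can be made concrete with a small sketch (our own illustration of one common Bayer convention; the patent's exact assignment may differ): green at matching row/column parities, red at even rows and odd columns, blue at odd rows and even columns.

```python
import numpy as np

# Build a 4x4 Bayer-style CFA pattern from the even/odd row-column rules.
def bayer_pattern(rows=4, cols=4):
    pattern = np.empty((rows, cols), dtype='<U2')
    for i in range(rows):
        for j in range(cols):
            if i % 2 == j % 2:
                # green sites, split by spatial phase (even vs. odd rows)
                pattern[i, j] = 'GE' if i % 2 == 0 else 'GO'
            elif i % 2 == 0:
                pattern[i, j] = 'R'   # red: even row, odd column
            else:
                pattern[i, j] = 'B'   # blue: odd row, even column
    return pattern

print(bayer_pattern())  # prints the 4x4 mosaic of filter labels
```

Other CFA patterns (stripe, interline, complementary) are just different parity rules over the same row/column indices, which is why the pattern can be tailored to a particular compression algorithm.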
  • FIG 4 illustrates a block diagram of one embodiment of the color image processing system 20 of the present invention.
  • the color image processing system 20 includes a group unit 22 that is coupled to receive the electrical signals, representing the image from the light sensor 18.
  • the group unit 22 groups the electrical signals received from the photo-sites 18 into different groups based on the amplitude and spatial phase of the electrical signal. For example, when the color filter array 16 is arranged in a Bayer pattern, as illustrated in Figure 3A, the group unit 22 groups the red components (R) into a red group, and groups the blue components (B) into a blue group.
  • the green-even components (GE) are grouped into a green-even group
  • the green odd (Go) are grouped into a green-odd group.
  • the green-even (GE) and the green-odd (Go) components relate respectively to the even and odd spatial phase (i.e., the spatial location of the components relative to the Bayer pattern) of the green components.
  • a compression unit 24 is coupled to the group unit 22.
  • the compression unit 24 receives the different groups from the group unit 22 and directly compresses the image information into a compressed format.
  • the compression unit 24 in this embodiment utilizes a transform based entropy coder similar in principle to Joint Photographic Experts Group (JPEG) compression, which is a well known compression technique.
  • the compression unit 24 performs well known compression steps such as transformation, coefficient quantization, coefficient ordering, and entropy coding.
  • Quantization tables and selective statistical models are provided, respectively, during coefficient quantization and entropy coding.
  • the quantization tables and statistical models are based upon the CFA pattern, the optics, the sensor characteristics, and other image capturing device characteristics.
  • the transform step can be a block DCT or a hierarchical wavelet technique, both of which are well known in the art.
  • the entropy coding can be run-length, symbol generation, Huffman, or arithmetic encoding, all of which are well known in the art.
  • the output of the compression unit 24 is a compressed image bitstream.
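The transform and quantization steps named above can be sketched minimally as follows. This is our own illustration of a generic block DCT with a flat quantization step standing in for the CFA/optics-tailored quantization tables the patent describes; the coefficient-ordering and entropy-coding stages are omitted.

```python
import numpy as np

# Orthonormal DCT-II basis matrix for an n-point block transform.
def dct_matrix(n=8):
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)   # DC row normalization
    return c

def compress_block(block, q_step=16):
    c = dct_matrix(block.shape[0])
    coeffs = c @ block @ c.T               # 2-D block DCT (transformation step)
    return np.round(coeffs / q_step)       # coefficient quantization

def decompress_block(qcoeffs, q_step=16):
    c = dct_matrix(qcoeffs.shape[0])
    return c.T @ (qcoeffs * q_step) @ c    # dequantize + inverse DCT

block = np.full((8, 8), 100.0)             # flat block: energy in DC only
q = compress_block(block)
assert np.count_nonzero(q) == 1            # only the DC coefficient survives
assert np.allclose(decompress_block(q), block)
```

Tailoring, in the patent's sense, would replace the flat `q_step` with per-coefficient tables derived from the CFA pattern, optics, and sensor characteristics.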
  • the decompression unit 26 receives the compressed image bitstream and responsive thereto recovers the image that is grouped in accordance to spatial phase and filter passband.
  • the image recovery unit 28 receives the elements grouped according to spatial phase and filter pass-band and responsive thereto recovers the original image.
  • the color processing unit 30 receives the image and responsive thereto performs conventional color processing to generate an image array suitable for a particular application.
  • the present invention allows an image capture device (e.g., digital camera or video camera) to reduce and simplify the processing power necessary in the device.
  • the present invention recognizes that a properly designed combination of optical components and image sensor arrays employing a mosaic of color filters creates separable aperture functions amenable to transform based compression/decompression schemes. Moreover, the present invention recognizes that the spectral band-pass characteristics of certain dyes and pigments, used to manufacture the color filters, produce sample components and color spaces that are amenable to transform based compression/decompression schemes. Consequently, the present invention exploits these conditions to effect the direct and immediate application of transform based compression/decompression schemes on the digital representation of analog data samples received from image sensor arrays that employ a mosaic of color filters. By so doing, the present invention defers otherwise computer-intensive color processing steps until after decompression for reproduction and/or other display purposes.
  • the prior art approaches did not realize or recognize the above issues. Consequently, the prior art teaches compression and decompression only after intensive color processing, as is noted in the Background section.
  • the color image processing system 20 of the present invention also includes a decompression unit 26.
  • the decompression unit 26 decompresses the compressed image and provides the decompressed image to an image recovery unit 28.
  • the image recovery unit restores the original focal plane array 10.
  • the image recovery unit 28 performs the reverse operation of group unit 22.
  • the image recovery unit 28 receives the decompressed image information that is grouped according to amplitude and spatial phase and rebuilds or reconstructs the original focal plane array 10 with pixel values arranged in accordance with a predetermined pattern.
  • This predetermined pattern is determined by the mosaic of color filters arranged in accordance with the color filter array 16.
  • a color processing unit 30 is coupled to the image recovery unit to receive the digital representation of the image and perform conventional color processing (interpolation, noise reduction, color spectral conversion, decimation, etc.).
  • This image may be a data object or structure that is a digital representation of the image or scene.
  • the "color processing unit” can perform, but is not limited to, the following: arithmetic /logical operations, filtering operations, transform (linear) operations, color space conversions, morphological operations, point operations, and geometric operations.
  • color image processing system 20 of the present invention performs image processing on a color image array 17 as follows.
  • the image array is first sorted by the amplitude/phase group unit 22 into a GE array, an R array, a B array and a Go array. Each of these groups is then separately submitted to the compression unit 24.
  • When the GE array, the R array, the B array, and the Go array are submitted to the compression unit 24, the compression unit 24 generates a compressed GE array, a compressed R array, a compressed B array and a compressed Go array, respectively.
  • the decompression unit 26 decompresses each of the compressed arrays (i.e., the compressed GE array, the compressed R array, the compressed B array and the compressed Go array) and generates the GE array, the R array, the B array and the Go array, respectively.
  • the image recovery unit 28 restores the image array 17 from the GE array, R array, B array and Go array.
  • Figure 5 illustrates how the present invention groups elements of a two-dimensional sample array in accordance with a primary red, green, blue (RGB) color filter array having a Bayer pattern.
  • the two-dimensional sample array is received from the optical system.
  • a scene is acquired or captured through an optical system.
  • This optical system may include a blur filter that acts as a low-pass filter for filtering spectral content of the scene of a particular frequency.
  • the system can also include a color filter array (CFA) that receives the incident light and correspondingly passes the light to a sensor array.
  • the sensor array includes a plurality of sites, arranged in M rows and N columns that each sample incident light.
  • the maximum number of line pairs resolvable over the passband corresponding to the green filter is less than (M/2, N/2). Also, the maximum number of line pairs resolvable for the passband for the red filter and the blue filter is less than (M/4, N/4).
  • the optical system can be modeled as a function of the following factors: 1) illumination source, 2) scene content, 3) optics (lens, blur filter, IR cut filter), 4) color filter array transmissivity, and 5) sensor array responsivity.
  • In processing step 52, the row indices (i and ii) and the column indices (j and jj) are initialized.
  • In processing step 54, the elements of the two-dimensional sample array are grouped according to spatial phase, which is dependent on the color filter array pattern.
  • In processing step 56, the column indices (j and jj) are incremented.
  • In determination block 58, a determination is made whether or not the last column has been reached. If yes, the processing continues to processing block 60, which increments the row indices (i and ii). If no, the processing proceeds to processing step 54.
  • In determination block 62, a determination is made whether or not the last row has been reached. If yes, the grouped arrays (namely the GE array, the Go array, the R array and the B array) are passed to the system. Otherwise, the processing goes to processing step 52.
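The grouping loop of Figure 5 can be sketched compactly for a Bayer layout (our own illustration; array slicing stands in for the (i, ii)/(j, jj) index loops, and the parity convention is an assumption):

```python
import numpy as np

# Each 2x2 tile of the sample array S[M, N] contributes one element to
# each of the GE, R, B, and Go arrays, selected by row/column parity.
def group_bayer(s):
    ge = s[0::2, 0::2]   # green, even spatial phase
    r  = s[0::2, 1::2]   # red: even rows, odd columns
    b  = s[1::2, 0::2]   # blue: odd rows, even columns
    go = s[1::2, 1::2]   # green, odd spatial phase
    return ge, r, b, go

s = np.arange(16).reshape(4, 4)      # a 4x4 sample array
ge, r, b, go = group_bayer(s)
assert ge.tolist() == [[0, 2], [8, 10]]
assert r.tolist() == [[1, 3], [9, 11]]
```

Each of the four half-resolution arrays can then be handed to the compression unit separately, which is what makes the aperture functions separable.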
  • Figure 6 illustrates how the present invention recovers the initial two-dimensional sample array from the four grouped arrays.
  • the grouped arrays (GE array, Go array, R array and B array) are received.
  • the row indices (i and ii) are initialized.
  • the column indices (j and jj) are initialized.
  • the two-dimensional sample array is filled with elements of the grouped arrays.
  • the column indices (j and jj) are incremented.
  • In decision block 78, a determination is made whether or not the last column has been reached. If yes, processing goes to processing step 80, which increments the row indices (i and ii).
  • If no, processing continues to processing step 74.
  • In determination block 82, a determination is made whether or not the last row has been reached. If yes, the recovered two-dimensional sample array S[M,N] is passed to the computer system in processing step 84. If no, the processing goes to step 72.
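The recovery step of Figure 6 is the inverse interleaving operation. A self-contained sketch (ours; the Bayer parity convention is an assumption) shows the roundtrip is lossless:

```python
import numpy as np

# Re-interleave the four grouped arrays back into the original S[M, N].
def recover_bayer(ge, r, b, go):
    m, n = ge.shape
    s = np.empty((2 * m, 2 * n), dtype=ge.dtype)
    s[0::2, 0::2] = ge   # green, even spatial phase
    s[0::2, 1::2] = r    # red: even rows, odd columns
    s[1::2, 0::2] = b    # blue: odd rows, even columns
    s[1::2, 1::2] = go   # green, odd spatial phase
    return s

s = np.arange(16).reshape(4, 4)
ge, r, b, go = s[0::2, 0::2], s[0::2, 1::2], s[1::2, 0::2], s[1::2, 1::2]
assert np.array_equal(recover_bayer(ge, r, b, go), s)  # lossless roundtrip
```

Note that grouping and recovery are pure permutations of the samples: no color values are interpolated or converted, which is precisely why color processing can be deferred.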
  • the present invention generates base statistical class quantization models for the following: 1) system information, 2) scene information, and 3) user information (user selections).
  • System information refers to the optics, the color filter array (CFA) employed, and the sensor array.
  • the characteristics of the optics, color filter array, and sensors are collectively termed "convolved responsivities".
  • the optics can include a blurring filter that acts as a low-pass filter.
  • System information concerning a blurring filter can include specific characteristics of how the optics transform light that passes through the lens.
  • a sensor can be a transducer such as a photodiode.
  • System information concerning a sensor can include the transducer characteristics (e.g., quantum efficiency of the transducer).
  • a color filter array has a specific wavelength spectral transmissivity (i.e., for a unit illumination, how much electromagnetic energy at a given frequency/polarization can pass through a specific site in the color array).
  • Scene information includes 1) light level (e.g., lighting conditions and brightness of the scene) and 2) color temperature (e.g., a commonly known indicator for color balance).
  • User information can include factors such as 1) specified quality (high or low) and 2) specified capacity (a specified maximum storage capacity).
  • the two-dimensional sample array S[M,N] is equal to an illumination source convolved with the scene content, convolved with the optic characteristics (lens characteristics, blur characteristics and IR cut filter characteristics), convolved with the CFA transmissivity, convolved with the sensor array responsivity, convolved with the quantization model.
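A toy 1-D sketch of that convolved-responsivities chain follows (our own illustration; every kernel value is made up, and real CFA/sensor responses are wavelength-dependent rather than scalar):

```python
import numpy as np

# Recorded samples = scene radiance passed through a chain of system
# responses, then quantized.
illumination = np.array([1.0])               # flat illuminant
scene        = np.array([0, 0, 8.0, 0, 0])   # a single bright point
blur         = np.array([0.25, 0.5, 0.25])   # optical low-pass (blur filter)
cfa          = np.array([0.9])               # filter transmissivity at this site
sensor       = np.array([0.5])               # transducer responsivity

response = scene.copy()
for h in (illumination, blur, cfa, sensor):
    response = np.convolve(response, h, mode='same')

samples = np.round(response).astype(int)     # quantization model
print(samples)  # blurred, attenuated point: [0 1 2 1 0]
```

Because every stage is known at design time, the quantization tables and statistical models of the compressor can be matched to this combined response rather than to a generic image model.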
  • FIG. 7 illustrates in greater detail the image processing unit 100 in which the direct compression system of the present invention can be implemented.
  • the image processing unit 100 is coupled to the photo detector array 102.
  • the photo detector array 102 receives light photons and converts these photons into electrical signals representative of the light intensity and colors at each photo detector site.
  • the image processing unit 100 receives the focal plane array from the optical system 102.
  • the optical system includes the optical components (e.g., lenses), the color filter array pattern, and also the transducer array (the actual photodiodes).
  • the focal plane array can be stored in a frame buffer 103 that can be implemented as part of a memory 106.
  • the processing unit 100 includes a processor 104 that is coupled to the photodetector array 102 via a bus 105.
  • the memory 106 includes the direct compression system 107 of the present invention, as implemented in the software, and the compression algorithm 108, also implemented in software.
  • the processor 104 when executing the direct compression software 107 of the present invention, described in Figure 5, groups the focal plane array, received from photodetector array 102 into groups according to spatial phase and amplitude. Furthermore, the processor 104, when executing the compression software 108, compresses each group separately.
  • each of the different amplitude and/or phase groups is stored in a separate array, and each array is compressed separately.
  • the image information may be stored in one array, and the different amplitude groups or phase groups are compressed separately by indexing the elements in the array so that the compression is performed on each group separately (i.e., elements of a particular group are provided to the compression algorithm 108).
  • the compressed results can be stored in a storage device 116.
  • This storage device 116 may be a FLASH device, a hard disk drive, or a magnetic disk drive.
  • the compressed image can be passed to an interface unit 118 that interfaces between the image processing unit 100 and another system 120 (e.g., a network, a computer system, a bus, such as a universal serial bus (USB), or another device).
  • a bus controller 114 can be employed to control the transfer of data from the different components of the image processing unit 100.
  • the image processing unit 100 can include a dedicated digital signal processing unit 112 that is specially designed to perform the mathematical operations (e.g., Discrete Cosine Transform) employed by the compression algorithm 108.
  • the processor 104 and the digital signal processing unit 112 are integrated into a specialized multimedia processor having digital signal processing capabilities (e.g., the MMX microprocessor chip available from the assignee of the present invention).
  • Figure 8 illustrates a computer system 130 in which the image recovery software 142 of the present invention, described in Figure 6, can be implemented.
  • the computer system 130 includes a network interface 132 for receiving the compressed image information.
  • the computer system 130 includes a memory 134 (e.g., Random Access Memory (RAM)) that can include a frame buffer 135 for storing the image information.
  • the memory 134 also includes decompression software 140, image recovery software 142, and color processing software 144.
  • Memory 134 can also include software that edits and manipulates digital image information (not shown).
  • the computer system 130 also includes a processor 136.
  • the processor 136 executes the decompression software 140 and the image recovery software 142 of the present invention.
  • the processor executing the decompression software 140, decompresses the compressed image and provides uncompressed image information arranged in groups according to amplitude and spatial phase.
  • processor 136 executes the image recovery software 142 of the present invention, the original focal plane array is restored.
  • the memory 134 also includes color processing software 144 that directs the processor 136 to perform color processing on the uncompressed focal plane array to generate a particular color format. These color formats can include Red-Green-Blue (RGB) or YCrCb (a chrominance /luminance standard).
  • An RGB formatted image can be utilized by an image processing software package (e.g., Adobe Photo Shop) or can be rasterized to a screen (e.g., computer monitor).
  • uncompressed image information in a YCrCb format can be translated into a digital video stream and sent directly to a playback machine, such as a VCR.
  • the uncompressed image in YCrCb format can be recompressed for transmission and/or storage in some other format.
  • the processor 136 is a multimedia processor (e.g., the MMX microprocessor chip, available from Intel Corporation), that includes specialized digital signal processing hardware to perform the mathematical operations employed by the decompression software 140.
  • the data structure or object of the compressed image includes a header having the following information:
  • the compression and decompression process of the present invention is tailored to adapt and respond to a specific color mosaic pattern.
  • the group and ungroup steps are dependent on 1) the number of different CFA filter passbands and 2) the number of elements in an indivisible pattern of the CFA pattern.
  • the GE, R, B, and Go elements form the indivisible CFA pattern (i.e., the Bayer pattern).
  • the sensor responsivity, CFA spectral transmissivity and IR blocking filter characteristics determine an optimal coding range and statistical distribution for the entropy coding step of the compression. Accordingly, the group and ungroup steps of the compression can also be affected by these factors, as well.
  • color processing is performed by a microcontroller in the image capture device (e.g., a digital camera).
  • Prior art approaches have implemented this processing in a number of different ways, such as dedicated analog processing, DSP-based processing and others.
  • a direct compression method of the present invention allows the color processing steps to be deferred until the image is decompressed. Decompression of the image typically occurs in a computer system (e.g., a PC) that has a processor with greater processing and computing ability than the microcontroller in a digital camera or a digital video camera.
  • the computer system can perform more thorough and accurate color processing of the color-filtered image.
  • the present invention performs compression more efficiently than the prior art in at least two aspects.
  • the present invention decreases the compute power/processing required in the image capture device by deferring color processing operations until after decompression.
  • the present invention increases achievable resolutions (in terms of number of pixels) and/or speeds (in terms of less time and increased frame rates) by employing the compute power/processing that was freed up through deferring color processing until after decompression.

Abstract

An improved method of image processing is disclosed. This method employs an optical low-pass filter (12) and a color filter (14) with a pattern. The optical low-pass filter and color filter array generate separable aperture functions that are amenable to transform based compression schemes (24). The method of image processing includes the step of capturing the image and directly compressing the image without color processing.

Description

A METHOD FOR DIRECTLY COMPRESSING
A COLOR IMAGE AND TAILORING THE COMPRESSION
BASED ON THE COLOR FILTER ARRAY, OPTICS, AND
SENSOR CHARACTERISTICS
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to data compression, and more specifically relates to a method for directly compressing a color image and tailoring the compression based on characteristics (e.g., color filter array, optics, and sensor characteristics) of an image capture device.
2. Description of the Related Art
An image capture device, having a plurality of photo-sites, captures color images and generates a digital representation of the color image. Conventional color image processing systems perform extensive color processing on a digital color image before compressing the image.
A typical color image processing system processes the following color spaces before compressing:
Sensor color space (<S1S2S3...>)
Working color space (<W1W2W3>)
Display color space (<D1D2D3>)
Transmission color space (<T1T2T3>)
The sensor color space is imposed by the scene, optics, CFA and sensor system and is unique for each implementation. The working color space is a color space (e.g., RGB (CIE)) to which corrections such as white balance and gamma models can be applied. The display color space (e.g., RGB (NTSC)) is imposed by the intended display, such as a monitor with known phosphor types. The transmission color space is imposed by the compression/transmission mechanism being implemented, such as YCrCb (ITU-R BT.601).
An example of conventional, computer-intensive color processing is as follows. First, an interpolation step predicts missing color sample components, since each photo-site typically employs a single primary color filter (e.g., a Red, Green or Blue filter) when, in fact, the scene contains many colors. Next, a transformation step transforms the color sample components from the sensor color space (denoted S = <S1S2S3>) to a working color space (denoted W = <W1W2W3>). The sensor color space is a function of the image capture device (e.g., optics, color filters, etc.). The <W1W2W3> working color space is typically a standard color space, such as YCrCb. Color filter arrays detect color and typically employ three or more filter passbands. Those color filter arrays that employ a primary set of colors are "weighted" around red, green and blue (RGB). Those color filter arrays employing a complementary color set are "weighted" around cyan, magenta and yellow (CMY).
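The interpolation step can be sketched with bilinear demosaicing, one common way to predict the missing color samples (our own illustration of a generic technique, not necessarily the method any particular system uses; the Bayer parity layout below is an assumption):

```python
import numpy as np

# Estimate the missing green value at a red photo-site by averaging the
# four green neighbors in the mosaic (bilinear demosaicing, simplified).
def green_at_red(cfa_samples, i, j):
    """cfa_samples: Bayer mosaic; (i, j): a red site with 4 in-bounds neighbors."""
    return (cfa_samples[i - 1, j] + cfa_samples[i + 1, j] +
            cfa_samples[i, j - 1] + cfa_samples[i, j + 1]) / 4.0

# Mosaic with green samples only (green at matching row/column parity:
# 10 on even-even sites, 20 on odd-odd sites; red/blue sites hold 0 here).
mosaic = np.array([[10,  0, 10,  0],
                   [ 0, 20,  0, 20],
                   [10,  0, 10,  0],
                   [ 0, 20,  0, 20]], dtype=float)
print(green_at_red(mosaic, 2, 1))  # (20 + 20 + 10 + 10) / 4 -> 15.0
```

Repeating this for every site and every missing color channel is what makes the interpolation step expensive, which motivates deferring it until after decompression.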
Moreover, a step of processing the sample in the <W1W2W3> working color space to account for perceived color and the temperature of the lighting conditions is needed. Furthermore, processing to correct for anticipated non-ideal reproduction and/or display device characteristics introduced by a device (such as a monitor or screen) is needed. Processing is also needed to transform from the <W1W2W3> working color space to a preferred color space, denoted <D1D2D3>. D = <D1D2D3> refers to the display's color space (e.g., an RGB color space). This display color space conforms to the NTSC receiver phosphor standard and is employed for displaying color on monitors. Finally, sample decimations (i.e., eliminations) of selected preferred color space components are performed on the image as a precursor to later stages of a transform based compression/decompression scheme.
This intensive processing of the color image, prior to compression, expends system resources and reduces the performance of a system. For example, the time for an image capture device to present a compressed image for storage or transmission is significantly increased by the time needed for the color processing steps outlined above.
Accordingly, there remains an unmet need in the industry for a method that directly compresses a color image and defers color processing to after decompression of the image.
SUMMARY OF THE INVENTION
An improved method of image processing is disclosed. This method employs an optical low-pass filter and a color filter array with a pattern. The optical low-pass filter and color filter array generate separable aperture functions that are amenable to transform based compression schemes. The method of image processing includes the steps of capturing the image and directly compressing the image without color processing.
BRIEF DESCRIPTION OF THE DRAWINGS
The objects, features and advantages of the method for the present invention will be apparent from the following description in which:
Figure 1 illustrates an image capture device in which the direct compression system of the present invention can be implemented.
Figure 2 illustrates in greater detail a photo-site shown in Figure 1.
Figure 3A illustrates a Bayer CFA pattern.
Figure 3B illustrates an RGB stripe CFA pattern.
Figure 3C illustrates an interline RGB CFA pattern.
Figure 3D illustrates a complementary Bayer CFA pattern.
Figure 4 illustrates a block diagram of one embodiment of the direct compression system of the present invention.
Figure 5 illustrates how the present invention groups elements of a two-dimensional sample array in accordance with a primary red, green, blue (RGB) color filter array having a Bayer pattern.
Figure 6 illustrates how the present invention recovers the initial two-dimensional sample array from the four grouped arrays.
Figure 7 illustrates in greater detail the image processing unit in which the direct compression system of the present invention can be implemented.
Figure 8 illustrates a computer system in which the image recovery software of the present invention, described in Figure 6, can be implemented.
DETAILED DESCRIPTION OF THE INVENTION
Referring to the figures, exemplary embodiments of the invention will now be described. The exemplary embodiments are provided to illustrate aspects of the invention and should not be construed as limiting the scope of the invention. The exemplary embodiments are primarily described with reference to block diagrams or flowcharts. As to the flowcharts, each block within the flowcharts represents both a method step and an apparatus element for performing the method step. Depending upon the implementation, the corresponding apparatus element may be configured in hardware, software, firmware or combinations thereof.
Figure 1 illustrates an image capture device 10 in which the color image processing system 20 of the present invention can be implemented. The image capture device 10 includes an optical system 12 that employs an optical low-pass filter for sampling incident light 11 comprising the scene being captured. The optical low-pass filter is employed to prevent aliasing (i.e., to prevent spectral content above the Nyquist limit from reaching the sensor). Adhering to the Nyquist criteria avoids the introduction of sampling artifacts such as Moire patterns. For example, in this embodiment, the optical low-pass filter employed by the optical system 12 eliminates luma spectral content above NM/2 and chroma spectral content above NM/4.
As is known in the art, luma or luminance can be thought of in terms of a human's visual perception of "brightness" of a scene. Chroma or chrominance can be thought of in terms of a human's visual perception of "colors" in a scene. Spectral content is defined to be the finite interval expressed as a number of cycles per degree for each of the characteristic wavelengths comprising the brightness and color content of a scene as captured by the optical system of an image capturing device. Spectral content can be expressed as line pairs per unit distance on the focal plane.
The image capture device 10 also includes a color filter array 14 that receives the sampled light from the optical system 12. The color filter array 14 includes a plurality of color filter elements that are arranged in a particular pattern. The pattern that defines the mosaic of color filter elements is referred to herein as a color filter array (CFA) pattern. Each filter element has a spectral band-pass characteristic and, accordingly, only allows light of certain wavelengths, specified by the band, to pass. For example, a blue filter element allows light wavelengths corresponding to a predetermined pass band (e.g., 400-500 nm) representing the color blue to pass.
The image capture device 10 also includes a photo-site array 16. The photo-site array 16 includes a plurality of photo-sites 18 arranged in rows and columns. The photo-site array 16 receives light photons and responsive thereto generates a digital representation 17 of the image (hereinafter referred to as an image array 17) and provides this digital representation 17 to the color image processing system 20 of the present invention. The color image processing system 20 of the present invention will be described in greater detail hereinafter with reference to Figure 4.
Figure 2 illustrates in greater detail the photo-site 18 that includes a transducer element 19 (e.g., a photodetector, or photodiode) that receives the filtered light from the color filter array 14. The transducer element 19 converts the light photons into an electrical signal representing the intensity of the light. The photo-site 18 includes an amplifier 21 for amplifying the electrical signal that represents the intensity of photons at that photo-site. The photo-site 18 also includes an analog-to-digital converter (A/D) 23 for converting the amplified electrical signal into a digital representation of the image at that photo-site. Accordingly, each photo-site 18 generates a digital value corresponding to the intensity of the image at that particular photo-site 18. The digital representations of each photo-site together comprise the digital representation 17 of the image (i.e., the image array).
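The amplify-and-digitize path through a photo-site can be sketched as follows. The gain, bit depth, and clipping behavior here are illustrative assumptions, not values taken from the patent:

```python
def photosite_sample(intensity, gain=2.0, bits=8):
    """Model one photo-site: amplify the transducer signal, clip it,
    and quantize it with an n-bit A/D converter.  `intensity` is the
    normalized light intensity in [0.0, 1.0]."""
    amplified = min(intensity * gain, 1.0)   # amplifier with clipping
    levels = (1 << bits) - 1                 # e.g. 255 for 8 bits
    return round(amplified * levels)         # digital code value

# A half-intensity sample at gain 2.0 saturates an 8-bit converter.
code = photosite_sample(0.5)   # -> 255
```

The dynamic range mentioned later in the text (the number of bits required per digitized sample) corresponds to the `bits` parameter of this sketch.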
Figure 3A illustrates a Bayer CFA pattern. Figure 3B illustrates an RGB stripe CFA pattern. Figure 3C illustrates an interline RGB CFA pattern. Figure 3D illustrates a complementary Bayer CFA pattern. These patterns are shown for a four by four color filter array. The RGB stripe pattern employs green elements, red elements and blue elements. The interline RGB pattern employs green even elements, green odd elements, red even-odd elements and blue odd-even elements. The even and odd designations are employed to reflect spatial locations as indicated by the row and column indices. For example, the red even-odd elements occur at even numbered rows and odd numbered columns. It will be noted by those skilled in the art that one can tailor the CFA patterns to suit a particular compression algorithm.
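The even/odd spatial designations above can be expressed as a function of row and column parity. A minimal sketch for the Bayer pattern, assuming one common layout convention (green-even at even rows and even columns; the patent's exact phase convention may differ):

```python
def bayer_filter(row, col):
    """Return the filter element at a photo-site of a Bayer CFA,
    assuming green on (even, even) and (odd, odd) sites, red on
    even-row/odd-column sites, and blue on odd-row/even-column sites."""
    if row % 2 == 0:
        return "GE" if col % 2 == 0 else "R"   # green-even / red
    else:
        return "B" if col % 2 == 0 else "GO"   # blue / green-odd

# Top-left 2x2 tile of the mosaic:
tile = [[bayer_filter(r, c) for c in range(2)] for r in range(2)]
# tile == [["GE", "R"], ["B", "GO"]]
```

Tailoring the CFA to a compression algorithm, as the text suggests, amounts to choosing a different parity function here (e.g., the stripe or interline layouts of Figures 3B and 3C).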
Figure 4 illustrates a block diagram of one embodiment of the color image processing system 20 of the present invention. The color image processing system 20 includes a group unit 22 that is coupled to receive the electrical signals, representing the image, from the photo-sites 18. The group unit 22 groups the electrical signals received from the photo-sites 18 into different groups based on amplitude and spatial phase of the electrical signal. For example, when the color filter array 16 is arranged in a Bayer pattern, as illustrated in Figure 3A, the group unit 22 groups the red components (R) into a red group, and groups the blue components (B) into a blue group. Similarly, the green-even components (GE) are grouped into a green-even group, and the green-odd components (GO) are grouped into a green-odd group. The green-even (GE) and the green-odd (GO) components relate respectively to the even and odd spatial phase (i.e., the spatial location of the components relative to the Bayer pattern) of the green components.
A compression unit 24 is coupled to the group unit 22. The compression unit 24 receives the different groups from the group unit 22 and directly compresses the image information into a compressed format. The compression unit 24 in this embodiment utilizes a transform based entropy coder similar in principle to Joint Photographic Experts Group (JPEG) compression, which is a well known compression technique. For further information regarding the JPEG compression standard, see "The JPEG Still Picture Compression Standard", Gregory K. Wallace, Communications of the ACM, April 1991 (vol. 34, no. 4), pp. 30-44, and JPEG Still Image Data Compression Standard, William B. Pennebaker et al., Van Nostrand Reinhold, 1993, ISBN 0-442-01272-1.
It will be understood by those skilled in the art that other transform based compression algorithms such as progressive DCT (discrete cosine transform) and hierarchical wavelet can be implemented by the compression unit 24. The selection of a particular compression algorithm depends on the particular mosaic pattern of the color filter array 16 and other additional factors (e.g., transmissivity, sensor wavelength (photonic) response, frame integration time, and other signal processing parameters). All of these factors can affect the amplitude of the signal at each sample site, and the present invention takes advantage of both the spatial phase of the CFA pattern, as well as the dynamic range (expressed in number of bits required for a digitized sample), in the compression operation.
The compression unit 24 performs well known compression steps such as transformation, coefficient quantization, coefficient ordering, and entropy coding. Quantization tables and selective statistical models are provided during coefficient quantization and entropy coding, respectively. The quantization tables and statistical models are based upon the CFA pattern, the optics, the sensor characteristics, and other image capturing device characteristics. As noted previously, the transform step can be a block DCT or a hierarchical wavelet technique, both of which are well known in the art. The entropy coding can be run-length, symbol generation, Huffman, or arithmetic encoding, all of which are also well known in the art. The output of the compression unit 24 is a compressed image bitstream.
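Of the steps named above, coefficient quantization is the lossy one: each transform coefficient is divided by its table entry and rounded. A minimal sketch with a hypothetical table (the text describes the real tables as being derived from the CFA pattern, optics, and sensor characteristics):

```python
def quantize(coeffs, table):
    """Quantize transform coefficients: divide each by its table
    entry and round to the nearest integer (the lossy step)."""
    return [round(c / q) for c, q in zip(coeffs, table)]

def dequantize(codes, table):
    """Approximate inverse: multiply the codes back by the table."""
    return [c * q for c, q in zip(codes, table)]

# Hypothetical 4-entry table and coefficient run for illustration.
table = [16, 11, 10, 16]
coeffs = [520.0, -33.0, 8.0, 3.0]
codes = quantize(coeffs, table)        # [32, -3, 1, 0]
approx = dequantize(codes, table)      # [512, -33, 10, 0]
```

Larger table entries discard more precision, which is how a table tuned to a particular CFA passband can spend bits where that passband carries the most information.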
The decompression unit 26, as noted previously, receives the compressed image bitstream and responsive thereto recovers the image that is grouped in accordance to spatial phase and filter passband. The image recovery unit 28 receives the elements grouped according to spatial phase and filter pass-band and responsive thereto recovers the original image.
The color processing unit 30 receives the image and responsive thereto performs conventional color processing to generate an image array suitable for a particular application. By deferring intensive color processing until after decompression, the present invention allows an image capture device (e.g., digital camera or video camera) to reduce and simplify the processing power necessary in the device.
The present invention recognizes that a properly designed combination of optical components and image sensor arrays employing a mosaic of color filters creates separable aperture functions amenable to transform based compression/decompression schemes. Moreover, the present invention recognizes that the spectral band-pass characteristics of certain dyes and pigments, used to manufacture the color filters, produce sample components and color spaces that are amenable to transform based compression/decompression schemes. Consequently, the present invention exploits these conditions to effect the direct and immediate application of transform based compression/decompression schemes on the digital representation of analog data samples received from image sensor arrays that employ a mosaic of color filters. By so doing, the present invention defers otherwise computer intensive color processing steps until after decompression for reproduction and/or other display purposes. The prior art approaches did not realize or recognize the above issues. Consequently, the prior art teaches compression and decompression only after intensive color processing, as is noted in the Background section.
The color image processing system 20 of the present invention also includes a decompression unit 26. The decompression unit 26 decompresses the compressed image and provides the decompressed image to an image recovery unit 28. The image recovery unit restores the original focal plane array 10. In other words, the image recovery unit 28 performs the reverse operation of the group unit 22. The image recovery unit 28 receives the decompressed image information that is grouped according to amplitude and spatial phase and rebuilds or reconstructs the original focal plane array 10 with pixel values arranged in accordance with a predetermined pattern. This predetermined pattern (e.g., Bayer) is determined by the mosaic of color filters arranged in accordance with the color filter array 16.
A color processing unit 30 is coupled to the image recovery unit to receive the digital representation of the image and perform conventional color processing (interpolation, noise reduction, color spectral conversion, decimation, etc.). This image may be a data object or structure that is a digital representation of the image or scene. The color processing unit 30 can perform, but is not limited to, the following: arithmetic/logical operations, filtering operations, transform (linear) operations, color space conversions, morphological operations, point operations, and geometric operations.
In summary, the color image processing system 20 of the present invention performs image processing on a color image array 17 as follows. The image array is first sorted by the amplitude/phase group unit 22 into a GE array, an R array, a B array and a GO array. Each of these groups is then separately submitted to the compression unit 24. When the GE array, the R array, the B array, and the GO array are submitted to the compression unit 24, the compression unit 24 generates a compressed GE array, a compressed R array, a compressed B array and a compressed GO array, respectively. The decompression unit 26 decompresses each of the compressed arrays (i.e., the compressed GE array, the compressed R array, the compressed B array and the compressed GO array) and generates the GE array, the R array, the B array and the GO array, respectively. The image recovery unit 28 restores the image array 17 from the GE array, R array, B array and GO array.
Figure 5 illustrates how the present invention groups elements of a two-dimensional sample array in accordance with a primary red, green, blue (RGB) color filter array having a Bayer pattern. In processing step 50, the two-dimensional sample array is received from the optical system. For example, a scene is acquired or captured through an optical system. This optical system may include a blur filter that acts as a low-pass filter for filtering spectral content of the scene above a particular frequency. The system can also include a color filter array (CFA) that receives the incident light and correspondingly passes the light to a sensor array. The sensor array, as mentioned previously, includes a plurality of sites, arranged in M rows and N columns, that each sample incident light.
According to the Nyquist theorem, the maximum number of line pairs resolvable over the passband corresponding to the green filter is less than (M/2, N/2). Also, the maximum number of line pairs resolvable for the passband for the red filter and the blue filter is less than (M/4, N/4).
The optical system can be modeled as a function of the following factors: 1) illumination source, 2) scene content, 3) optics (lens, blur filter, IR cut filter), 4) color filter array transmissivity, and 5) sensor array responsivity.
In processing step 52, the row indices (i and ii), and the column indices (j and jj) are initialized. In processing step 54, the elements of the two-dimensional sample array are grouped according to spatial phase that is dependent on the color filter array pattern. In processing step 56, the column indices (j and jj) are incremented. In determination block 58, a determination is made whether or not the last column has been reached. If yes, the processing continues to processing block 60 that increments the row indices (i and ii). If no, the processing proceeds to processing step 54. In determination block 62, a determination is made whether or not the last row has been reached. If yes, the grouped arrays (namely the GE array, the GO array, the R array and the B array) are passed to the system. Otherwise, the processing goes to processing step 52.
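The row/column walk of Figure 5 can be sketched as follows. A Bayer layout with green-even at even rows and even columns is assumed; the patent's exact phase convention may differ:

```python
def group_bayer(sample, M, N):
    """Group a Bayer-patterned M x N sample array into four planes
    (GE, R, B, GO) by the spatial phase of each element, mirroring
    the row/column walk of Figure 5.  M and N must be even."""
    GE, R, B, GO = [], [], [], []
    for i in range(0, M, 2):            # step over 2x2 tiles
        for j in range(0, N, 2):
            GE.append(sample[i][j])          # even row, even column
            R.append(sample[i][j + 1])       # even row, odd column
            B.append(sample[i + 1][j])       # odd row, even column
            GO.append(sample[i + 1][j + 1])  # odd row, odd column
    return GE, R, B, GO

sample = [[1, 2],
          [3, 4]]
GE, R, B, GO = group_bayer(sample, 2, 2)   # ([1], [2], [3], [4])
```

Each returned plane is spatially uniform (one filter passband, one phase), which is what makes it a reasonable unit to hand to the compression unit on its own.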
Figure 6 illustrates how the present invention recovers the initial two-dimensional sample array from the four grouped arrays. In step 70, the grouped arrays (GE array, GO array, R array and B array) are received. In step 71, the row indices (i and ii) are initialized. In processing step 72, the column indices (j and jj) are initialized. In processing step 74, the two-dimensional sample array is filled with elements of the grouped arrays. In processing step 76, the column indices (j and jj) are incremented. In decision block 78, a determination is made whether or not the last column has been reached. If yes, processing goes to processing step 80 that increments the row indices (i and ii). If no, processing continues to processing step 74. In determination block 82, a determination is made whether or not the last row has been reached. If yes, the recovered two-dimensional sample array S [M,N] is passed to the computer system in processing step 84. If no, the processing goes to step 72.
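The recovery walk of Figure 6 is the exact inverse of the grouping: the four planes are interleaved back into an M x N array. A sketch, again assuming green-even at even rows and even columns:

```python
def recover_bayer(GE, R, B, GO, M, N):
    """Rebuild the original M x N Bayer sample array from the four
    grouped planes, reversing the grouping walk of Figure 5."""
    sample = [[0] * N for _ in range(M)]
    k = 0                                # index into each plane
    for i in range(0, M, 2):
        for j in range(0, N, 2):
            sample[i][j] = GE[k]              # even row, even column
            sample[i][j + 1] = R[k]           # even row, odd column
            sample[i + 1][j] = B[k]           # odd row, even column
            sample[i + 1][j + 1] = GO[k]      # odd row, odd column
            k += 1
    return sample

restored = recover_bayer([1], [2], [3], [4], 2, 2)
# restored == [[1, 2], [3, 4]]
```

Since grouping and recovery are exact inverses, the round trip is lossless; any loss in the overall system comes only from the compression stage between them.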
Initially, the present invention generates base statistical class quantization models for the following: 1) system information, 2) scene information, and 3) user information (user selections).
System information refers to the optics, the color filter array (CFA) employed, and the sensor array. The characteristics of the optics, color filter array, and sensors are collectively termed "convolved responsivities". As mentioned previously, the optics can include a blurring filter that acts as a low-pass filter. System information concerning a blurring filter can include specific characteristics of how the optics transform light that passes through the lens. A sensor, as mentioned previously, can be a transducer such as a photodiode. System information concerning a sensor can include the transducer characteristics (e.g., quantum efficiency of the transducer). A color filter array has a specific wavelength spectral transmissivity (i.e., for a unit illumination, how much electromagnetic energy at a given frequency/polarization can pass through a specific site in the color array).
Scene information includes 1) light level (e.g., lighting conditions and brightness of the scene) and 2) color temperature (e.g., a commonly known indicator for color balance).
User information can include factors such as 1) specified quality (high or low) and 2) specified capacity (a specified maximum storage capacity).
In one embodiment, the two-dimensional sample array S [M,N] is equal to an illumination source convolved with the scene content convolved with the optic characteristics (lens characteristics, blur characteristics and IR cut filter characteristics) convolved with the CFA transmissivity convolved with the sensor array responsivity convolved with the quantization model.
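The chained-convolution model above can be sketched in one dimension as follows. The kernels are placeholders, not measured responses, and a real implementation would convolve two-dimensional arrays:

```python
def convolve(signal, kernel):
    """Full 1-D discrete convolution of two sequences."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

def model_sample(scene, stages):
    """Convolve the scene through each system stage in turn
    (optics, CFA transmissivity, sensor responsivity, ...)."""
    signal = scene
    for kernel in stages:
        signal = convolve(signal, kernel)
    return signal

# Placeholder kernels standing in for the system stages.
stages = [[0.5, 0.5],   # blur (low-pass) filter
          [1.0],        # CFA transmissivity at this site
          [0.9]]        # sensor responsivity
s = model_sample([1.0, 0.0, 0.0, 1.0], stages)
```

Because convolution is associative, the stages can equivalently be pre-convolved into a single kernel, which is one way to read the "convolved responsivities" term used earlier.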
Figure 7 illustrates in greater detail the image processing unit 100 in which the direct compression system of the present invention can be implemented. The image processing unit 100 is coupled to the photo detector array 102. As noted previously, the photo detector array 102 receives light photons and converts these photons into electrical signals representative of the light intensity and colors at each photo detector site.
The image processing unit 100 receives the focal plane array from the optical system 102. As noted previously, the optical system includes the optical components (e.g., lenses), the color filter array pattern, and also the transducer array (the actual photodiodes). The focal plane array can be stored in a frame buffer 103 that can be implemented as part of a memory 106.
The processing unit 100 includes a processor 104 that is coupled to the photodetector array 102 via a bus 105. The memory 106 includes the direct compression system 107 of the present invention, as implemented in the software, and the compression algorithm 108, also implemented in software. The processor 104, when executing the direct compression software 107 of the present invention, described in Figure 5, groups the focal plane array, received from photodetector array 102 into groups according to spatial phase and amplitude. Furthermore, the processor 104, when executing the compression software 108, compresses each group separately.
For example, each of the different amplitude and/or phase groups is stored in a separate array, and each array is compressed separately. In the alternative, the image information may be stored in one array, and the different amplitude groups or phase groups are compressed separately by indexing the elements in the array so that the compression is performed on each group separately (i.e., elements of a particular group are provided to the compression algorithm 108).
The compressed results can be stored in a storage device 116. This storage device 116 may be a FLASH device, a hard disk drive, or a magnetic disk drive. Moreover, the compressed image can be passed to an interface unit 118 that interfaces between the image processing unit 100 and another system 120 (e.g., a network, a computer system, a bus, such as a universal serial bus (USB), or another device).
A bus controller 114 can be employed to control the transfer of data from the different components of the image processing unit 100. The image processing unit 100 can include a dedicated digital signal processing unit 112 that is specially designed to perform the mathematical operations (e.g., Discrete Cosine Transform) employed by the compression algorithm 108.
In the preferred embodiment, the processor 104 and the digital signal processing unit 112 are integrated into a specialized multimedia processor having digital signal processing capabilities (e.g., the MMX microprocessor chip available from the assignee of the present invention).
Figure 8 illustrates a computer system 130 in which the image recovery software 142 of the present invention, described in Figure 6, can be implemented. The computer system 130 includes a network interface 132 for receiving the compressed image information. The computer system 130 includes a memory 134 (e.g., Random Access Memory (RAM)) that can include a frame buffer 135 for storing the image information. The memory 134 also includes decompression software 140, image recovery software 142, and color processing software 144. Memory 134 can also include software that edits and manipulates digital image information (not shown).
The computer system 130 also includes a processor 136. The processor 136 executes the decompression software 140 and the image recovery software 142 of the present invention. The processor, executing the decompression software 140, decompresses the compressed image and provides uncompressed image information arranged in groups according to amplitude and spatial phase. When the processor 136 executes the image recovery software 142 of the present invention, the original focal plane array is restored. The memory 134 also includes color processing software 144 that directs the processor 136 to perform color processing on the uncompressed focal plane array to generate a particular color format. These color formats can include Red-Green-Blue (RGB) or YCrCb (a chrominance/luminance standard). An RGB formatted image can be utilized by an image processing software package (e.g., Adobe Photoshop) or can be rasterized to a screen (e.g., a computer monitor). Moreover, uncompressed image information in a YCrCb format can be translated into a digital video stream and sent directly to a playback machine, such as a VCR. Alternatively, the uncompressed image in YCrCb format can be recompressed for transmission and/or storage in some other format.
As noted previously, in a preferred embodiment, the processor 136 is a multimedia processor (e.g., the MMX microprocessor chip, available from Intel Corporation), that includes specialized digital signal processing hardware to perform the mathematical operations employed by the decompression software 140.
The data structure or object of the compressed image includes a header having the following information:
1) Optical characteristics of the image capture device;
2) The color filter mosaic imposed by the color filter array; and
3) Characteristics related to the sensor/transducer (e.g., the photodiode).
The compression and decompression process of the present invention is tailored to adapt and respond to a specific color mosaic pattern. For example, the group and ungroup steps are dependent on 1) the number of different CFA filter passbands and 2) the number of elements in an indivisible pattern of the CFA pattern. In the example given, the GE, R, B, and GO elements form the indivisible CFA pattern (i.e., the Bayer pattern). The sensor responsivity, CFA spectral transmissivity and IR blocking filter characteristics determine an optimal coding range and statistical distribution for the entropy coding step of the compression. Accordingly, the group and ungroup steps of the compression can also be affected by these factors as well.
In the prior art, color processing is performed by a microcontroller in the image capture device (e.g., a digital camera). The prior art has used a number of different approaches, including dedicated analog processing, DSP based processing and others. When the color processing is performed in the image capture device, the quality of the processing is limited by time, processing resources, and other factors. The direct compression method of the present invention allows the color processing steps to be deferred until the image is decompressed. Decompression of the image typically occurs in a computer system (e.g., a PC) that has a processor with greater processing and computing ability than the microcontroller in a digital camera or a digital video camera. Moreover, because of this increased processing power (e.g., a floating point unit), and because one is not limited by time constraints, the computer system can perform more thorough and accurate color processing of the captured image.
The present invention performs compression more efficiently than the prior art in at least two aspects. First, the present invention decreases the compute power/processing required in the image capture device by deferring color processing operations until after decompression. Second, the present invention increases achievable resolutions (in terms of number of pixels) and/or speeds (in terms of less time and increased frame rates) by employing the compute power/processing that was freed up through deferring color processing until after decompression.
The exemplary embodiments described herein are provided merely to illustrate the principles of the invention and should not be construed as limiting the scope of the invention. Rather, the principles of the invention may be applied to a wide range of systems to achieve the advantages described herein and to achieve other advantages or to satisfy other objectives as well.

Claims

1. A method of image processing comprising the steps of:
a) employing an optical low-pass filter and color filter array having a pattern, said optical low-pass filter and color filter array generating separable aperture functions amenable to transform based compression schemes;
subsequent to step (a), performing the following steps:
b) capturing the image; and
c) directly compressing the image without color processing.
2. The method of image processing of claim 1 further comprising the step of decompressing the image.
3. The method of image processing of claim 1 further comprising color processing of the image.
4. The method of image processing of claim 3 wherein the color processing includes color interpolation.
5. The method of image processing of claim 3 wherein said color processing includes color space conversion.
6. The method of image processing of claim 3 wherein said color processing changes the image into a YCrCb format according to ITU-R BT.601.
7. The method of image processing of claim 3 wherein the color processing changes the image to an RGB format according to NTSC phosphor characteristics.
8. The method of image processing of claim 1 wherein the step of compressing the image includes the step of:
grouping elements of an image according to spatial phase/amplitude specified by the optical low-pass filter and the color filter array.
9. The method of image processing of claim 2 wherein decompressing the image includes the step of: ungrouping elements according to spatial phase/amplitude.
10. The method of image processing of claim 8 wherein the spatial phase/amplitude includes red elements, blue elements, green even elements and green odd elements.
11. The method of image processing of claim 1 wherein capturing the image includes the step of employing a color filter array with a Bayer pattern.
12. The method of image processing of claim 1 wherein step (a) includes the step of employing an interline RGB pattern.
13. The method of image processing of claim 1 wherein step (a) includes the step of employing a complementary Bayer pattern.
14. The method of image processing of claim 1 wherein step (a) includes the step of employing a RGB stripe pattern.
15. The method of image processing of claim 1 wherein capturing the image includes the step of employing a lens conforming to the Nyquist criteria.
16. The method of image processing of claim 1 wherein capturing the image includes the step of tailoring the CFA pattern to fit a particular application.
17. The method of image processing of claim 1 wherein capturing the image includes the step of tailoring the optics to suit a particular application.
18. The method of image processing of claim 1 wherein the step of compressing the image employs a wavelet based compression.
19. The method of image processing of claim 1 wherein the step of compressing the image employs a Discrete Cosine Transform (DCT) based compression.
PCT/US1998/012525 1997-09-11 1998-06-16 A method for directly compressing a color image and tailoring the compression based on the color filter array, optics, and sensor characteristics WO1999013429A1 (en)

Priority Applications (1)
- AU80732/98A: AU 8073298 A, priority date 1997-09-11, filing date 1998-06-16

Applications Claiming Priority (2)
- US92777197A, priority date 1997-09-11
- US 08/927,771, priority date 1997-09-11

Publications (1)
- WO1999013429A1, publication date 1999-03-18

Family

ID=25455233

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/012525 WO1999013429A1 (en) 1997-09-11 1998-06-16 A method for directly compressing a color image and tailoring the compression based on the color filter array, optics, and sensor characteristics

Country Status (2)

Country Link
AU (1) AU8073298A (en)
WO (1) WO1999013429A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002023889A2 (en) * 2000-09-06 2002-03-21 Koninklijke Philips Electronics N.V. Data transfer between rgb and ycrcb color spaces for dct interface
EP1200930A1 (en) * 1999-06-08 2002-05-02 Lightsurf Improved digital camera device and methodology for distributed processing and wireless transmission of digital images
US7724281B2 (en) 2002-02-04 2010-05-25 Syniverse Icx Corporation Device facilitating efficient transfer of digital content from media capture device
US7792876B2 (en) 2002-07-23 2010-09-07 Syniverse Icx Corporation Imaging system providing dynamic viewport layering
US7881715B2 (en) 1999-11-05 2011-02-01 Syniverse Icx Corporation Media spooler system and methodology providing efficient transmission of media content from wireless devices
US8212893B2 (en) 1999-06-08 2012-07-03 Verisign, Inc. Digital camera device and methodology for distributed processing and wireless transmission of digital images
US8321288B1 (en) 2001-03-20 2012-11-27 Syniverse Icx Corporation Media asset management system
US9596385B2 (en) 2007-04-11 2017-03-14 Red.Com, Inc. Electronic apparatus
US9716866B2 (en) 2013-02-14 2017-07-25 Red.Com, Inc. Green image data processing
US9792672B2 (en) 2007-04-11 2017-10-17 Red.Com, Llc Video capture devices and methods
US11503294B2 (en) 2017-07-05 2022-11-15 Red.Com, Llc Video image data processing in electronic devices

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4734761A (en) * 1983-06-02 1988-03-29 Konishiroku Photo Industry Co., Ltd. Color image recording apparatus using a color recording cathode-ray tube with a blue-green phosphor, a red phosphor, and blue, green, and red stripe filters
US4740833A (en) * 1985-07-16 1988-04-26 Fuji Photo Film Co., Ltd. Apparatus for producing a hard copy of a color picture from a color video signal processed in accordance with a selected one of a plurality of groups of color conversion coefficients associated with different kinds of color separating filters
US5317428A (en) * 1989-04-26 1994-05-31 Canon Kabushiki Kaisha Image encoding method and apparatus providing variable length bit stream signals
US5479524A (en) * 1993-08-06 1995-12-26 Farrell; Joyce E. Method and apparatus for identifying the color of an image
US5602589A (en) * 1994-08-19 1997-02-11 Xerox Corporation Video image compression using weighted wavelet hierarchical vector quantization
US5627916A (en) * 1992-01-06 1997-05-06 Canon Kabushiki Kaisha Image processing method and apparatus

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8212893B2 (en) 1999-06-08 2012-07-03 Verisign, Inc. Digital camera device and methodology for distributed processing and wireless transmission of digital images
EP1200930A1 (en) * 1999-06-08 2002-05-02 Lightsurf Improved digital camera device and methodology for distributed processing and wireless transmission of digital images
EP1200930A4 (en) * 1999-06-08 2005-06-29 Lightsurf Improved digital camera device and methodology for distributed processing and wireless transmission of digital images
US7881715B2 (en) 1999-11-05 2011-02-01 Syniverse Icx Corporation Media spooler system and methodology providing efficient transmission of media content from wireless devices
WO2002023889A3 (en) * 2000-09-06 2002-06-06 Koninkl Philips Electronics Nv Data transfer between rgb and ycrcb color spaces for dct interface
WO2002023889A2 (en) * 2000-09-06 2002-03-21 Koninklijke Philips Electronics N.V. Data transfer between rgb and ycrcb color spaces for dct interface
US6670960B1 (en) 2000-09-06 2003-12-30 Koninklijke Philips Electronics N.V. Data transfer between RGB and YCRCB color spaces for DCT interface
US8321288B1 (en) 2001-03-20 2012-11-27 Syniverse Icx Corporation Media asset management system
US7724281B2 (en) 2002-02-04 2010-05-25 Syniverse Icx Corporation Device facilitating efficient transfer of digital content from media capture device
US7792876B2 (en) 2002-07-23 2010-09-07 Syniverse Icx Corporation Imaging system providing dynamic viewport layering
US9596385B2 (en) 2007-04-11 2017-03-14 Red.Com, Inc. Electronic apparatus
US9787878B2 (en) 2007-04-11 2017-10-10 Red.Com, Llc Video camera
US9792672B2 (en) 2007-04-11 2017-10-17 Red.Com, Llc Video capture devices and methods
US9716866B2 (en) 2013-02-14 2017-07-25 Red.Com, Inc. Green image data processing
US10582168B2 (en) 2013-02-14 2020-03-03 Red.Com, Llc Green image data processing
US11503294B2 (en) 2017-07-05 2022-11-15 Red.Com, Llc Video image data processing in electronic devices
US11818351B2 (en) 2017-07-05 2023-11-14 Red.Com, Llc Video image data processing in electronic devices

Also Published As

Publication number Publication date
AU8073298A (en) 1999-03-29

Similar Documents

Publication Publication Date Title
US5412427A (en) Electronic camera utilizing image compression feedback for improved color processing
US5053861A (en) Compression method and apparatus for single-sensor color imaging systems
US5065229A (en) Compression method and apparatus for single-sensor color imaging systems
US9787878B2 (en) Video camera
US5541653A (en) Method and apparatus for increasing resolution of digital color images using correlated decoding
JP5695080B2 (en) Resolution-based format for compressed image data
JP3864748B2 (en) Image processing apparatus, electronic camera, and image processing program
WO1999013429A1 (en) A method for directly compressing a color image and tailoring the compression based on the color filter array, optics, and sensor characteristics
EP0731616B1 (en) Image pickup device
KR100689639B1 (en) Image processing system and camera system
US7194129B1 (en) Method and system for color space conversion of patterned color images
US6404927B1 (en) Control point generation and data packing for variable length image compression
US20080007637A1 (en) Image sensor that provides compressed data based on junction area
JP2008124530A (en) Raw data compressing method
JP2891246B2 (en) Digital camera

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AT AU AZ BA BB BG BR BY CA CH CN CU CZ CZ DE DE DK DK EE EE ES FI FI GB GE GH GM GW HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: KR

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA