US20140028870A1 - Concurrent image processing for generating an output image - Google Patents

Concurrent image processing for generating an output image

Info

Publication number
US20140028870A1
Authority
US
United States
Prior art keywords
image data
image
processing
pipeline
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/039,922
Inventor
David Plowman
Naushir Patuck
Benjamin Sewell
Graham Veitch
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
Broadcom Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/235,975 external-priority patent/US20130021504A1/en
Application filed by Broadcom Corp filed Critical Broadcom Corp
Priority to US14/039,922 priority Critical patent/US20140028870A1/en
Assigned to BROADCOM CORPORATION reassignment BROADCOM CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VEITCH, GRAHAM, Patuck, Naushir, PLOWMAN, DAVID, SEWELL, BENJAMIN
Publication of US20140028870A1 publication Critical patent/US20140028870A1/en
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT reassignment BANK OF AMERICA, N.A., AS COLLATERAL AGENT PATENT SECURITY AGREEMENT Assignors: BROADCOM CORPORATION
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. reassignment AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BROADCOM CORPORATION
Assigned to BROADCOM CORPORATION reassignment BROADCOM CORPORATION TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS Assignors: BANK OF AMERICA, N.A., AS COLLATERAL AGENT
Abandoned legal-status Critical Current

Classifications

    • H04N 5/23232
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 1/0007: Image acquisition
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/161: Encoding, multiplexing or demultiplexing different image signal components
    • H04N 25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60: Noise processing, e.g. detecting, correcting, reducing or removing noise

Definitions

  • One embodiment utilizes multiple image sensors 101 that produce multiple inputs for the pipeline processing logic 104.
  • For example, one of the image sensors 101b may capture a low resolution image that is fed as a preview of an image recently captured, while the other image sensor 101a captures a high resolution image of the subject of the picture that is processed in parallel. Alternatively, the low resolution image may be used for framing a shot to be captured, where the subsequently captured shot or image is at a higher resolution and may undergo additional processing. This embodiment therefore features an imaging device with two fully parallel image capture and processing pipeline paths.
  • In some embodiments, a single image sensor 101 is utilized to capture image information and provide the information to the front-end processing logic 103, whereby the front-end processing logic 103 may generate two input images for parallel paths in the pipeline processing logic 104 (as represented in FIG. 7), or two input images for multiplexed input into a single path of the pipeline processing logic 104 (as represented in FIG. 6).
  • Embodiments may transmit higher and lower resolution (or temporal, or quality) counterparts to expedite frame processing.
  • Prediction between frames may be done at the macroblock level or at the pixel level, where a smaller resolution frame may have macroblocks that correspond to larger macroblocks in the higher resolution images.
  • Likewise, individual pixels of the low resolution image may correspond to macroblocks of a higher resolution image.
  • The low resolution images may thus be used to predict the changes in macroblocks or averaged groups of pixels.
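As an illustrative sketch only (the function names and the 2x/16-pixel geometry are assumptions, not taken from the disclosure), the correspondence between low resolution macroblocks and larger high resolution regions can be expressed as a simple coordinate scaling, which also lets a motion vector found at low resolution seed the high resolution prediction:

```python
# Sketch: reusing low-resolution motion estimates at high resolution.
# Assumes a 2x downscale and 16x16 macroblocks.

def upscale_motion_vector(mv_low, scale=2):
    """A motion vector found in the low-res frame, scaled to the
    high-res frame. A 16x16 low-res macroblock covers a 32x32
    high-res region at scale 2."""
    return (mv_low[0] * scale, mv_low[1] * scale)

def seed_highres_search(mb_x_low, mb_y_low, mv_low, scale=2, mb_size=16):
    """Map a low-res macroblock position and its motion vector to the
    corresponding high-res region and predicted vector."""
    region = (mb_x_low * mb_size * scale,   # top-left x in high-res frame
              mb_y_low * mb_size * scale,   # top-left y in high-res frame
              mb_size * scale,              # region width
              mb_size * scale)              # region height
    return region, upscale_motion_vector(mv_low, scale)

# Example: low-res macroblock (3, 5) moved by (-2, 1); the high-res
# search for the covered 32x32 region starts from (-4, 2).
region, mv_seed = seed_highres_search(3, 5, (-2, 1))
print(region, mv_seed)   # (96, 160, 32, 32) (-4, 2)
```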
  • An adaptable video architecture may provide for a scalable video pipeline.
  • Video processing predicts the current frame content utilizing content from previous video frames. For example, H.264 uses this temporal coding for video processing. Other spatial and quality coding may also be used for video processing.
  • Scalable video coding (SVC) is an extension of H.264 that uses video information at different resolutions to predict current frame content.
  • SVC defines a plurality of subset bitstreams 802a, 802b, with each subset being independently decodable in a similar fashion to a single H.264 bitstream. Merely by dropping packets from the larger overall bitstream, a subset bitstream can be exposed.
  • Each subset bitstream 802 can represent one or more of a scalable resolution, frame rate, and quality video signal. More particularly, the subset bitstreams 802 represent video layers within SVC, with the base layer 802a being fully compatible with H.264 (a single layer standard), in one embodiment.
  • When the overall bitstream 806 is transmitted (e.g., by over-the-air broadcast), a receiving device can use the appropriate subset bitstream to perform the video processing.
  • The additional subset bitstream layers can be discarded or used for temporal, spatial, and/or signal quality improvements.
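A minimal sketch of the packet-dropping idea follows. Packets are modeled as (layer_id, payload) tuples rather than real NAL units, and extract_subset is an illustrative name, not an API from the disclosure:

```python
# Sketch: exposing an SVC subset bitstream by dropping packets.
# A real implementation would parse NAL unit headers for layer ids.

def extract_subset(packets, max_layer):
    """Keep only packets at or below max_layer; the result is the
    independently decodable subset bitstream for that layer."""
    return [p for p in packets if p[0] <= max_layer]

bitstream = [
    (0, b"base-layer IDR"),       # 802a: H.264-compatible base layer
    (1, b"spatial enhancement"),  # 802b: higher resolution layer
    (0, b"base-layer P"),
    (1, b"spatial enhancement"),
]

base_only = extract_subset(bitstream, max_layer=0)
print([p[1] for p in base_only])  # only the H.264 base layer remains
```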
  • A lower resolution image (e.g., 802a) may be generated to assist higher resolution encoding even though the resulting bitstream merely comprises the higher resolution version (e.g., 802b), the lower resolution version (e.g., 802a) being purged or deleted, in one embodiment.
  • The lower resolution version (e.g., 802a) may be generated on the fly by downscaling the higher resolution image (e.g., 802b), in one embodiment.
  • Alternatively, some embodiments may concurrently capture a lower resolution image (e.g., 802a) and a higher resolution image (e.g., 802b) using multiple image sensors 101.
  • Further, an encoded bitstream may be decoded and processed to create a lower resolution counterpart.
  • Each of the resolution (or temporal, or quality) counterparts may be encoded (by one or more encoder portions 804a, 804b) for bitstream delivery or transmission to an end-point user device.
  • The encoded output 806 may comprise layers of the lower resolution/temporal/quality image sequences 802a and higher resolution/temporal/quality image sequences 802b of the same underlying media content.
  • Alternatively, one embodiment generates an encoded output 806 that comprises layers of lower resolution/temporal/quality image sequences 802a and higher resolution/temporal/quality image sequences 802b that are not derived from the same original source, i.e., not the same underlying media content.
  • For example, two different image sequences may be captured concurrently from dual or multiple image sensors 101 and used as source material for the different layers 802.
  • An embodiment of the adaptable video (transcode-encode-decode) architecture has at least two modes.
  • In one mode, the adaptable architecture 804a is instantiated once, for H.264 decode or another single layer standard.
  • In the other mode, the adaptable architecture 804b is instantiated multiple times, each instance designed to accelerate the decoding of one SVC layer to improve the generated video image.
  • Information of value 803, such as motion vectors, transform coefficients, and/or image data, may be tapped out of a lower resolution H.264 decode pipeline (M), prior to the application of a deblocking filter, for use in the next higher resolution layer (M+1) pipeline. A lower quality layer 804a (e.g., in signal-to-noise ratio or fidelity) may similarly serve as the source of the tapped information.
  • The interlayer interpolations 805 may be performed externally by software modules executed by shared general-purpose processing resources of the video device, or by dedicated hardware.
  • Decoder architecture 900 may include a plurality of decode pipelines 904a, 904b, with each decode pipeline being associated with a different resolution.
  • The decode pipelines 904 may be implemented in hardware and/or software modules executed by general-purpose processing resources.
  • Information 903 may be tapped out of a lower resolution decode pipeline (M) 904a, processed using an interlayer interpolation 905, and supplied to the next higher resolution decode pipeline (M+1) 904b for use.
  • Alternatively, a single decode pipeline 904 may be used to perform the video processing at multiple resolutions.
  • In that case, the decode pipeline performs the video processing at a first resolution (M), with information being extracted as appropriate.
  • The decode pipeline may then perform the video processing at the next resolution (M+1) or at another higher resolution (e.g., M+2). Processing flow may be adjusted by sequencing the flow through the different decoding pipelines as appropriate.
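The following sketch shows one way such sequencing could look, assuming a single reusable decode stage. decode_layer and interlayer_interpolate are illustrative stubs standing in for pipeline 904 and interpolation 905, not the actual hardware interfaces:

```python
# Sketch: one decode pipeline sequenced across layers M, M+1, ...

def decode_layer(layer_bits, side_info=None):
    """Decode one layer, optionally seeded with interpolated info
    tapped from the layer below; returns (frame, tapped_info)."""
    frame = ("decoded", layer_bits, side_info)
    tapped = {"motion_vectors": [(1, 0)], "coefficients": [0.5]}
    return frame, tapped

def interlayer_interpolate(tapped, scale=2):
    """Upscale tapped info to the next layer's resolution; here only
    the motion vectors are scaled, as a minimal example."""
    out = dict(tapped)
    out["motion_vectors"] = [(x * scale, y * scale)
                             for (x, y) in tapped["motion_vectors"]]
    return out

def decode_scalable(layers):
    """Run the single pipeline once per layer, feeding each layer the
    interpolated information extracted from the layer below."""
    frames, side_info = [], None
    for bits in layers:                      # M, then M+1, then M+2...
        frame, tapped = decode_layer(bits, side_info)
        side_info = interlayer_interpolate(tapped)
        frames.append(frame)
    return frames

frames = decode_scalable(["layer M bits", "layer M+1 bits"])
```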
  • Each of the components 904a, 904b of the decoder architecture 900 includes the ability to insert or extract cross layer information supporting various layers of encoding.
  • The structure of the decoder 904a, 904b is instantiated based upon the particular layers 902a, 902b being decoded.
  • Each portion of the decoder architecture 900 may tap out data 903 that is used for decoding of differing layers of the multiple layer streams of images 902a, 902b.
  • Prediction vectors or components from the lower layer decoding function 904a may be fed to the higher layer decoding functions 904b.
  • Interpolation in software 905 can be used to aid in the interpolation from particular components of one resolution or quality level to the next.
  • Interlayer prediction vectors or components are not necessarily stored in memory 108, because these components may be passed between layers in hardware of the decoder architecture 900 (e.g., field programmable gate arrays, static random access memory (SRAM)-based programmable devices, etc.). Because the lower layers can work faster in the decoding process, the prediction coefficients can be obtained from a lower layer and passed to a higher layer for processing after the lower layer decoding is shut down, saving processing resources in the lower layer. Accordingly, in some embodiments, inter-layer processing 905 is handled purely in hardware, without the memory bandwidth overhead of passing prediction information to synchronous dynamic random access memory (SDRAM) for software processing.
  • The multiple decoded streams 906a, 906b may be used to separately feed different devices, or one stream may be selected and the others purged. The various layers may also be transcoded, in some embodiments, after they have been successfully decoded.
  • FIG. 10 is a block diagram illustrating an example of an electronic device 1005 that may provide for the processing of image data using one or more of the image processing techniques discussed above.
  • The electronic device 1005 may be any type of electronic device, such as a laptop or desktop computer, a mobile phone, a tablet, a digital media player, or the like, that is configured to receive and process image data, such as data acquired using one or more image sensing components.
  • The electronic device 1005 may apply such image processing techniques to image data stored in a memory 1030 of the electronic device 1005.
  • The electronic device 1005 may include one or more imaging devices 1080, such as an integrated or external digital camera, configured to acquire image data, which may then be processed by the electronic device 1005 using one or more of the above-mentioned image processing techniques.
  • The electronic device 1005 may include various internal and/or external components which contribute to the function of the device 1005.
  • The various functional blocks shown in FIG. 10 may comprise hardware elements (including circuitry), software elements (including computer code stored on a computer readable medium), or a combination of both hardware and software elements.
  • The electronic device 1005 may include input/output (I/O) ports 1010, one or more processors 1020, a memory device 1030, non-volatile storage 1040, a networking device 1050, a power source 1060, and a display 1070.
  • In addition, the electronic device 1005 may include one or more imaging devices 1080, such as a digital camera, and image processing circuitry 1090.
  • The image processing circuitry 1090 may be configured to implement one or more of the above-discussed image processing techniques when processing image data.
  • Image data processed by the image processing circuitry 1090 may be retrieved from the memory 1030 and/or the non-volatile storage device(s) 1040, or may be acquired using the imaging device 1080.
  • The system block diagram of the device 1005 shown in FIG. 10 is intended to be a high-level control diagram depicting various components that may be included in such a device 1005. That is, the connection lines between each individual component shown in FIG. 10 may not necessarily represent paths or directions through which data flows or is transmitted between various components of the device 1005.
  • The depicted processor(s) 1020 may, in some embodiments, include multiple processors, such as a main processor (e.g., a CPU) and dedicated image and/or video processors. In such embodiments, the processing of image data may be primarily handled by these dedicated processors, thus effectively offloading such tasks from the main processor (CPU).
  • Referring to FIG. 11, shown is a flowchart that provides one example of the operation of a portion of the image processing circuitry 100 according to various embodiments. It is understood that the flowchart of FIG. 11 provides merely an example of the many different types of functional arrangements that may be employed to implement the operation of the portion of the image processing circuitry 100 as described herein. As an alternative, the flowchart of FIG. 11 may be viewed as depicting an example of steps of a method implemented in the electronic device 1005 (FIG. 10) according to one or more embodiments.
  • Imaging processing circuitry 100 provides an imaging pipeline for processing images captured from one or more image sensors 101, where the image signal processing pipeline features two parallel paths for processing the images.
  • In the first parallel path, an input image obtained from the image sensor(s) 101 is processed at full-resolution.
  • In the second parallel path, an input image obtained from the image sensor(s) 101 is processed at a down-scaled resolution, as depicted in step 1106.
  • The down-scaled resolution version of the input image is output from the second parallel path of the pipeline before completion of processing of the input image at full-resolution, and is provided for display, in step 1108.
  • Similarly, imaging processing circuitry 100 provides an imaging pipeline for processing images captured from one or more image sensors 101, where the image signal processing pipeline features two parallel paths for processing the images.
  • In the first parallel path, an input image obtained from the image sensor(s) 101 is processed at full-resolution.
  • In the second parallel path, an input image obtained from the image sensor(s) 101 is processed at a down-scaled resolution, as depicted in step 1206.
  • In step 1208, the down-scaled resolution version of the input image undergoes image enhancement analysis in the second parallel path, the results of which are applied to the full-resolution version of the image in the first parallel path.
  • Image enhancement analysis may include noise filtering, dynamic range optimization, high dynamic range imaging, and facial or object recognition, among others.
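As a hedged example of such analysis, the sketch below runs a hypothetical face detector on the down-scaled image and maps the resulting boxes back to full-resolution coordinates. detect_faces_lowres is a stand-in, not an algorithm named by the disclosure:

```python
# Sketch: analysis on the down-scaled path, results applied to the
# full-resolution image in the main path.

def detect_faces_lowres(small_image):
    """Return face bounding boxes (x, y, w, h) in small-image
    coordinates. Hypothetical detector, for illustration only."""
    return [(40, 30, 16, 16)]

def analyze_then_apply(full_image, small_image, scale):
    """Detect on the cheap small image, then scale each box up so a
    downstream stage can operate on the full-resolution image.
    full_image is unused by this stub detector."""
    boxes_small = detect_faces_lowres(small_image)
    return [(x * scale, y * scale, w * scale, h * scale)
            for (x, y, w, h) in boxes_small]

# With a 4x down-scaled second path, a 16x16 detection at (40, 30)
# identifies a 64x64 facial region at (160, 120) in the main image.
boxes_full = analyze_then_apply(None, None, scale=4)
print(boxes_full)  # [(160, 120, 64, 64)]
```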
  • Imaging processing circuitry 100 may also provide an image signal processing pipeline for processing images captured from one or more image sensors 101, where the pipeline features a single pipeline path for processing the images.
  • Multiple input images may be fed into the single pipeline path by multiplexing the different images in front-end circuitry (e.g., front-end processing logic 103).
  • For example, consider a stereoscopic image device that delivers a left image and a right image of an object to a single image pipeline, as represented in FIG. 5.
  • The single image pipeline in pipeline processing logic 104 can therefore be multiplexed between the left and right images that are being input in parallel to the pipeline via the front-end circuitry. Instead of processing one of the images in its entirety after the other has been processed in its entirety, the images can be processed concurrently, with the front-end processing circuitry switching processing between them as processing time allows.
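A minimal sketch of this interleaving follows, assuming the pipeline is shared at the granularity of image slices (a simplification of "as processing time allows"):

```python
# Sketch: time-multiplexing one shared pipeline between the left and
# right images, switching per slice of rows rather than per image.

def process_slice(name, index):
    return f"{name}[{index}] processed"   # one shared-pipeline pass

def multiplex(left_slices, right_slices):
    """Interleave left/right work so neither image waits for the
    other to finish entirely."""
    out = []
    for i in range(max(len(left_slices), len(right_slices))):
        if i < len(left_slices):
            out.append(process_slice("L", left_slices[i]))
        if i < len(right_slices):
            out.append(process_slice("R", right_slices[i]))
    return out

print(multiplex([0, 1, 2], [0, 1, 2]))
# ['L[0] processed', 'R[0] processed', 'L[1] processed', ...]
```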
  • Alternatively, front-end processing circuitry can receive a single input image from an image sensor 101.
  • The front-end processing circuitry may then generate two or more input images for multiplexed input into a single path of an image signal processing pipeline of pipeline processing logic 104 (as represented in FIG. 6).
  • The single pipeline in pipeline processing logic 104 can therefore be multiplexed between the multiple images that have been generated by the front-end circuitry, in step 1406.
  • Again, the images can be processed concurrently, with the front-end processing circuitry switching processing between them as processing time allows.
  • One embodiment of the present disclosure captures raw image data with the sensor 101 at a high resolution suitable for still image photography, in step 1502. Then, the front-end pipeline processing logic 103 scales down the captured images to a resolution suitable for video processing, in step 1504, before feeding the image data to the appropriate pipeline processing logic 104, in step 1506. When the user or the imaging device 150 decides to capture a still image, for this one frame, the front-end pipeline processing logic 103 receives instructions from the control logic 105 and stores the desired frame in memory 108 at the higher resolution, in step 1508. Further, in one embodiment, although a main imaging path of the pipeline is handling the video processing, as processing time allows, the main imaging path can be provided the still image from memory 108, in step 1510.
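A sketch of this front-end behavior under simple assumptions (decimation stands in for the real scaler, and a dict stands in for memory 108):

```python
# Sketch of the FIG. 15 flow: every frame is captured at still-photo
# resolution, down-converted for the video path, and a requested
# frame is kept in memory at full resolution.

import numpy as np

memory = {}                              # stands in for memory 108

def downscale(frame):
    return frame[::2, ::2]               # step 1504: video-rate size

def front_end(frame, frame_id, still_requested):
    if still_requested:                  # step 1508: keep this frame
        memory[frame_id] = frame         # stored at full resolution
    return downscale(frame)              # step 1506: feed the pipeline

full = np.arange(16.0).reshape(4, 4)     # toy "high resolution" frame
video_frame = front_end(full, frame_id=0, still_requested=True)
print(video_frame.shape, 0 in memory)    # (2, 2) True
```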
  • A first subset bitstream having a first resolution is obtained and processed in a video pipeline 804 of the video or imaging device 150, in step 1604.
  • Video information associated with the first subset bitstream is extracted (or tapped) from the video pipeline 804 during processing of the first subset bitstream.
  • Interlayer interpolation is performed on at least a portion of the extracted video information.
  • In step 1608, at least a portion of the extracted video data is provided to a video pipeline 804 of the video device 150 for processing (1610) of a second subset bitstream having a second resolution higher than the first resolution.
  • In step 1612, if another higher resolution subset bitstream is to be processed, the flow returns to step 1606, where interlayer interpolation is performed on at least a portion of the video information extracted during processing of the second subset bitstream. The flow continues until no higher subset bitstream remains to be processed and ends at step 1612.
  • Referring to FIG. 17, a flow chart is shown that provides an additional example of the operation of a portion of the image processing circuitry 100 according to various embodiments. Accordingly, one embodiment of the present disclosure captures raw image data (which may be a sequence of images) with an image sensor 101 at full resolution, in step 1702. Then, the front-end pipeline processing logic 103 scales down the captured images to a lower resolution suitable for video processing by a downstream end-point device, in step 1704.
  • Each layer of the input bitstream is encoded and combined to generate a mixed layer output bitstream (e.g., an SVC bitstream) that can be delivered for an SVC or SVC-like decoding process to a downstream end-point device, in step 1706.
  • Referring to FIG. 18, a flow chart is shown that provides an additional example of the operation of a portion of the image processing circuitry 100 according to various embodiments.
  • One embodiment of the present disclosure captures raw image data (which may be a sequence of images) with an image sensor 101 at full resolution, in step 1802.
  • The front-end pipeline processing logic 103 obtains lower resolution image data that is captured concurrently with the full-resolution image data, in step 1804.
  • Each layer of the input bitstream is encoded and combined to generate a mixed layer output bitstream (e.g., an SVC bitstream) that can be delivered for an SVC or SVC-like decoding process to a downstream end-point device.
  • A “computer readable medium” can be any means that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device.
  • More specific examples of the computer readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical).
  • The scope of certain embodiments includes embodying the functionality of the embodiments in logic embodied in hardware or software-configured mediums.

Abstract

Embodiments of the present application automatically utilize parallel image captures in an image processing pipeline. In one embodiment, image processing circuitry concurrently receives first image data to be processed and second image data to be processed, wherein the second image data is processed to aid in enhancement of the first image data.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation of copending U.S. utility application entitled, “Concurrent Image Processing for Generating an Output Image,” having Ser. No. 13/431,064, filed Mar. 27, 2012, which is a continuation-in-part of copending U.S. utility application entitled, “Multiple Image Processing,” having Ser. No. 13/235,975, filed Sep. 19, 2011, which claims priority to copending U.S. provisional application entitled, “Image Capture Device Systems and Methods,” having Ser. No. 61/509,747, filed Jul. 20, 2011, and copending U.S. provisional application entitled “Multimedia Processing” having Ser. No. 61/509,797, filed Jul. 20, 2011, all of which are entirely incorporated herein by reference in their entireties.
  • BACKGROUND
  • With current cameras, there is a significant delay between the capture of an image and the subsequent display of a framed image to the user via a viewfinder. Accordingly, advances in image processing may allow for improvements, such as shorter latency.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
  • FIG. 1 is a block diagram of one embodiment of an image processing circuitry according to the present disclosure.
  • FIGS. 2-7 are block diagrams of embodiments of an image signal processing pipeline implemented by the pipeline processing logic from the image processing circuitry of FIG. 1.
  • FIGS. 8-9 are block diagrams of embodiments of encoding and decoding architectures implemented by the pipeline processing logic from the image processing circuitry of FIG. 1.
  • FIG. 10 is a block diagram illustrating an embodiment of an electronic device employing the image processing circuitry of FIG. 1.
  • FIGS. 11-18 are flow chart diagrams depicting various functionalities of embodiments of image processing circuitry of FIG. 1.
  • DETAILED DESCRIPTION
  • This disclosure pertains to a device, method, computer useable medium, and processor programmed to automatically utilize parallel image captures in an image processing pipeline in a digital camera, digital video camera, or other imaging device. One of ordinary skill in the art would recognize that the techniques disclosed may also be applied to other contexts and applications as well.
  • For cameras in embedded devices, e.g., digital cameras, digital video cameras, mobile phones, personal digital assistants (PDAs), tablets, portable music players, and desktop or laptop computers, techniques such as those disclosed herein can produce more visually pleasing images and improve image quality without incurring significant computational overhead or power costs.
  • To acquire image data, a digital imaging device may include an image sensor that provides a number of light-detecting elements (e.g., photodetectors) configured to convert light detected by the image sensor into an electrical signal. An image sensor may also include a color filter array that filters light captured by the image sensor to capture color information. The image data captured by the image sensor may then be processed by image processing pipeline circuitry, which may apply a number of image processing operations to the image data to generate a full color image that may be displayed for viewing on a display device, such as a monitor.
  • Referring to FIG. 1, a block diagram of one embodiment of an image processing circuitry 100 is shown for an imaging device 150. The illustrated imaging device 150 may be provided as a digital camera configured to acquire both still images and moving images (e.g., video). The device 150 may include lens(es) 110 and one or more image sensors 101 configured to capture and convert light into electrical signals. By way of example only, the image sensor may include a CMOS (complementary metal-oxide-semiconductor) image sensor (e.g., a CMOS active-pixel sensor (APS)) or a CCD (charge-coupled device) sensor.
  • In some embodiments, the image processing circuitry 100 may include various subcomponents and/or discrete units of logic that collectively form an image processing “pipeline” for performing each of the various image processing steps. These subcomponents may be implemented using hardware (e.g., digital signal processors or ASICs (application-specific integrated circuits)) or software, or via a combination of hardware and software components. The various image processing operations may be provided by the image processing circuitry 100.
  • The image processing circuitry 100 may include front-end processing logic 103, pipeline processing logic 104, and control logic 105, among others. The image sensor(s) 101 may include a color filter array (e.g., a Bayer filter) and may thus provide both light intensity and wavelength information captured by each imaging pixel of the image sensors 101 to provide for a set of raw image data that may be processed by the front-end processing logic 103.
  • In some embodiments, a single lens 110 and a single image sensor 101 may be employed in the image processing circuitry, while in other embodiments, multiple lenses 110 and multiple image sensors 101 may be employed, such as for stereoscopic uses, among others.
  • The front-end processing logic 103 may also receive pixel data from memory 108. For instance, the raw pixel data may be sent to memory 108 from the image sensor 101. The raw pixel data residing in the memory 108 may then be provided to the front-end processing logic 103 for processing.
  • Upon receiving the raw image data (from image sensor 101 or from memory 108), the front-end processing logic 103 may perform one or more image processing operations. The processed image data may then be provided to the pipeline processing logic 104 for additional processing prior to being displayed (e.g., on display device 106), or may be sent to the memory 108. The pipeline processing logic 104 receives the “front-end” processed data, either directly from the front-end processing logic 103 or from memory 108, and may provide for additional processing of the image data in the raw domain, as well as in the RGB and YCbCr color spaces, as the case may be. Image data processed by the pipeline processing logic 104 may then be output to the display 106 (or viewfinder) for viewing by a user and/or may be further processed by a graphics engine. Additionally, output from the pipeline processing logic 104 may be sent to memory 108 and the display 106 may read the image data from memory 108. Further, in some implementations, the pipeline processing logic 104 may also include encoder(s) 107, such as a compression engine, for encoding the image data prior to being read by the display 106. The pipeline processing logic 104 may also include decoder(s) for decoding bitstreams or other multimedia data that are received by the imaging device 150.
  • The encoder 107 may be a JPEG (Joint Photographic Experts Group) compression engine for encoding still images, or an H.264 compression engine for encoding video images, or some combination thereof. Also, it should be noted that the pipeline processing logic 104 may also receive raw image data from the memory 108.
  • The control logic 105 may include a processor 820 (FIG. 8) and/or microcontroller configured to execute one or more routines (e.g., firmware) that may be configured to determine control parameters for the imaging device 150, as well as control parameters for the pipeline processing logic 104. By way of example only, the control parameters may include sensor control parameters, camera flash control parameters, lens control parameters (e.g., focal length for focusing or zoom), or a combination of such parameters for the image sensor(s) 101. The control parameters may also include image processing commands, such as auto-white balance, autofocus, autoexposure, and color adjustments, as well as lens shading correction parameters for the pipeline processing logic 104. The control parameters may further comprise multiplexing signals or commands for the pipeline processing logic 104.
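As one concrete, hedged illustration of an auto-white balance statistic such control logic might compute, the sketch below derives gray-world gains from RGGB Bayer raw data. The gray-world method and the function name are assumptions for illustration, not the patent's algorithm:

```python
# Sketch: gray-world auto-white-balance gains from an RGGB mosaic.
# Simplification: one frame, no black-level or shading handling.

import numpy as np

def awb_gains_rggb(raw):
    r = raw[0::2, 0::2].mean()                              # R sites
    g = (raw[0::2, 1::2].mean() + raw[1::2, 0::2].mean()) / 2.0  # G sites
    b = raw[1::2, 1::2].mean()                              # B sites
    return g / r, 1.0, g / b      # gains that equalize channel means

raw = np.random.rand(8, 8)        # toy Bayer mosaic, RGGB tiling
r_gain, g_gain, b_gain = awb_gains_rggb(raw)
```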
  • Referring now to FIG. 2, one embodiment of the pipeline processing logic 104 may perform processes of an image signal processing pipeline by first sending image information to a first process element 201, which may take the raw data produced by the image sensor 101 (FIG. 1) and generate a digital image that will be viewed by a user or undergo further processing by a downstream process element. Accordingly, the image signal processing pipeline may be considered a series of specialized algorithms that adjust image data in real-time, often implemented as an integrated component of a system-on-chip (SoC) image processor. With an image signal processing pipeline implemented in hardware, front-end image processing can be completed without placing any processing burden on the main application processor 820 (FIG. 8).
  • In one embodiment, the first process element 201 of an image signal processing pipeline could perform a particular image process such as noise reduction, defective pixel detection/correction, lens shading correction, lens distortion correction, demosaicing, image sharpening, color uniformity, RGB (red, green, blue) contrast, saturation boost process, etc. As discussed above, the pipeline may include a second process element 202. In one embodiment, the second process element 202 could perform a particular and different image process, such as noise reduction, defective pixel detection/correction, lens shading correction, demosaicing, image sharpening, color uniformity, RGB contrast, saturation boost process, etc. The image data may then be sent to additional element(s) of the pipeline as the case may be, saved to memory 108 (FIG. 1), and/or input for display 106 (FIG. 1).
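A minimal sketch of such a chain of process elements follows; the two stages are trivial stand-ins for elements 201 and 202, chosen for illustration only:

```python
# Sketch: the image signal processing pipeline as an ordered chain of
# process elements, each a function from image to image.

import numpy as np

def defective_pixel_fix(img):
    # Clamp implausible outliers to the global median (toy version).
    med = np.median(img)
    return np.where(np.abs(img - med) > 3 * img.std(), med, img)

def saturation_boost(img, gain=1.1):
    return np.clip(img * gain, 0.0, 1.0)

def run_pipeline(img, elements):
    for element in elements:     # data flows element to element
        img = element(img)
    return img

raw = np.random.rand(8, 8)
out = run_pipeline(raw, [defective_pixel_fix, saturation_boost])
```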
  • Referring next to FIG. 3, in one embodiment, the image signal processing pipeline performed by pipeline processing logic 104 contains parallel paths instead of a single linear path. For example, the parallel paths may provide a first path and a second path. Further, in one embodiment, the first path comprises a main processing path and the second path comprises a supplemental processing path. Therefore, while raw image data is being processed in the first path to generate a high-resolution image output suitable for storage, the raw image data is processed in the second and parallel path to generate a lower resolution image that can be generated more quickly (as compared to the first path) and be displayed in the camera viewfinder or display 106. It may be that the second path contains fewer stages or elements 321, 322 than the first path. Alternatively, the first path may contain the same number of stages or elements 311, 312 as the second path, or fewer. Further, the second path may involve resolution down-conversion of the image to reduce the number of pixels that need to be processed during image processing, such as for image analysis, in the pipeline.
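A sketch of the two parallel paths under illustrative assumptions: Python threads and a 4x decimation stand in for the hardware paths and the real down-converter, which the disclosure does not specify at this level:

```python
# Sketch: supplemental path finishes quickly for the viewfinder while
# the main path keeps working at full resolution.

import threading
import time
import numpy as np

def preview_path(raw, results):
    h, w = raw.shape
    small = raw[:h//4*4:4, :w//4*4:4]   # cheap down-conversion
    results["preview"] = small           # ready for display 106

def main_path(raw, results):
    time.sleep(0.01)                     # stands in for careful, slower stages
    results["full"] = raw.copy()         # high-resolution output for storage

raw = np.random.rand(64, 64)
results = {}
threads = [threading.Thread(target=p, args=(raw, results))
           for p in (preview_path, main_path)]
for t in threads: t.start()
for t in threads: t.join()
print(results["preview"].shape, results["full"].shape)  # (16, 16) (64, 64)
```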
  • The benefits of the parallel paths may apply to still images as well as video images captured by the image sensor(s) 101 (FIG. 1). It is noted that some embodiments of the pipeline processing logic 104 utilize a single image sensor 101 that provides raw data to the first and second paths, where the first path may process the raw data relatively carefully, and more slowly, than the second path, which can generate an image available to be previewed more quickly.
  • Use of parallel paths in the image signal processing pipeline may enable processing of multiple image data simultaneously while maximizing final image quality. Additionally, each stage in the pipeline may begin processing as soon as image data is available so the entire image does not have to be received from the previous sensor or stage before processing is started.
  • In an alternative embodiment, multiple imagers or image sensors 101 may be utilized, as shown in FIG. 4. For example, one imager or sensor 101a may provide raw data at a lower resolution than a second image sensor 101b, where the lower resolution raw data feeds a pipeline path to the display 106 and the higher resolution data feeds a path used for encoding and/or for storage in memory 108.
  • Further, in some embodiments, a secondary or supplemental image may be used in image analysis that can help subsequent image analysis operations for the main image. As an example, a secondary image at a smaller size or resolution than the main image might undergo facial recognition algorithms (or other object recognition algorithms), and positive results may be used to identify facial structures (or other objects) in the main image. Therefore, the secondary image may be produced in a format that is better suited for some of the applicable stages or processing elements in its path. Accordingly, processing elements 411, 412 may be divided between elements that are suited for the main image and processing elements 421, 422 that are suited for the secondary image. A secondary image may thus be initially processed, such as being made smaller or scaled, for the benefit of downstream elements. As an example, the path of the secondary image may contain a noise filtering element because a downstream element needs the secondary image to have undergone noise reduction. The different paths, or elements in the different paths, may also use different imaging formats. For example, one of the paths may use an integral image format whereas a standard image format is used in the other path. Accordingly, downstream elements in the integral image path may need an integral image format as opposed to a standard image format, and vice versa.
  • In some embodiments, the images generated by the first and second paths may be stored in memory 108 and made available for subsequent use by other procedures and elements that follow. Accordingly, in one embodiment, while a main image is being processed in a main path of the pipeline, another image which might be downsized or scaled of that image or a previous image may be read by the main path. This may enable more powerful processing in the pipeline, such as during noise filtering.
  • For example, during noise filtering, for any given pixel being processed, neighboring pixels are analyzed. Denoising a pixel may have a stronger effect when more pixels, further away from the pixel being processed, can be analyzed. Due to hardware constraints, such as the size of the memory buffers used by the processing logic, there is a limit on how far away from the current pixel the process can analyze neighboring pixels. Accordingly, in one embodiment, a downscaled version of the main image is generated in a second path, and the noise filter in the main path reads the downscaled version of the image and stores those pixels for noise analysis. Since there are the same number of line buffers but the image is downscaled, this effectively allows the noise filter to see further away in the original image, because the second image is at a reduced scale.
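The sketch below illustrates the buffer-size argument: the same fixed window, applied to a 4x down-scaled copy, spans four times the distance in original-image coordinates. The averaging filter is a toy stand-in for the real noise analysis:

```python
# Sketch: a denoiser whose line buffers hold a radius-2 window. On the
# full-resolution image that window spans 5 pixels; on a 4x down-scaled
# copy the identical window spans the equivalent of 20 original pixels.

import numpy as np

def window_mean(img, y, x, radius=2):
    # Same buffer size either way: a (2*radius+1)^2 neighborhood.
    y0, y1 = max(0, y - radius), y + radius + 1
    x0, x1 = max(0, x - radius), x + radius + 1
    return img[y0:y1, x0:x1].mean()

full = np.random.rand(64, 64)
small = full[::4, ::4]                      # down-scaled copy from the second path

near = window_mean(full, 32, 32)            # sees +/- 2 full-res pixels
far = window_mean(small, 32 // 4, 32 // 4)  # sees +/- 8 full-res pixels
```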
  • Accordingly, another embodiment utilizes a downscaled version of an image to assist in dynamic range optimization processing. By having a downscaled version of an image available alongside the full resolution image in memory 108, a dynamic range optimization process is given a way to see farther from the current pixel than would be possible by considering the full resolution image alone. In a similar manner, a high dynamic range imaging process or element may also read a downscaled version of a main image to see farther from the current pixel.
  • Referring back to FIG. 1, in one embodiment, raw image data (from an image sensor 101) may be provided to the front-end processing logic 103 and processed on a pixel-by-pixel basis in a number of formats. For example, in one embodiment, raw pixel data received by the front-end processing logic 103 may be up-sampled for image processing purposes. In another embodiment, raw image or pixel data may be down-sampled or scaled. As will be appreciated, down-sampling of image data may reduce hardware size (e.g., area) and also reduce processing/computational complexity.
  • In some embodiments, the front-end processing logic 103 generates two distinct kinds of images for the pipeline processing logic 104. As an example, the imaging device 150 may be capturing video images and the user or the device itself determines to also capture a still image in addition to the video or moving images. A problem with this task in conventional cameras is that the video images are generated at a resolution lower than desired for still images. A potential solution would be to record the video images at the higher resolution desired for the still image, but this would require the pipeline processing logic 104 to process the higher resolution video images. However, encoding video at a high resolution (e.g., 8 megapixels) is difficult, and it is also impractical, since video images do not necessarily require a very high resolution.
  • Accordingly, one embodiment of the present disclosure captures the raw image data by the sensor 101 at the higher resolution suitable for still image photography. Then, the front-end pipeline processing logic 103 scales down the captured images to a resolution suitable for video processing before feeding the image data to the appropriate pipeline processing logic 104. When the user or the imaging device 150 decides to capture a still image, for this one frame, the front-end pipeline processing logic 103 will receive instructions from the control logic 105 and store the desired frame in memory 108 at the higher resolution. Further, in one embodiment, although a main imaging path of the pipeline is handling the video processing, the main imaging path can be provided the still image from memory 108 as processing time allows.
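  • A hedged sketch of this front-end behavior is shown below; `stored_stills` stands in for memory 108, simple decimation stands in for the downscaler, and all names are illustrative rather than part of this disclosure.

```python
import numpy as np

stored_stills = []  # stands in for memory 108

def front_end(frame, still_requested, factor=2):
    """Per-frame front-end step: always emit a video-resolution frame;
    keep the full-resolution frame only when a still is requested."""
    if still_requested:
        stored_stills.append(frame.copy())   # full resolution retained
    return frame[::factor, ::factor]         # video-resolution output

# An 8-megapixel-class sensor frame feeding a video-sized pipeline path.
sensor_frame = np.random.randint(0, 1024, (2448, 3264), dtype=np.uint16)
video_frame = front_end(sensor_frame, still_requested=True)
assert video_frame.shape == (1224, 1632)
```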
  • Accordingly, in one embodiment, the video processing is assigned a higher priority than the still image processing by the pipeline processing logic 104. In such an embodiment, the pipeline processing logic 104 features a single pipeline for processing captured images but has the capability to multiplex the single pipeline between different input images. Therefore, the single pipeline may switch from processing an image or series of images having a high priority to an image or series of images having a lower priority as processing time allows.
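  • One plausible form of such multiplexing is sketched below using a priority queue, so that higher-priority video work always runs ahead of still-image work; the class and method names are illustrative and not part of this disclosure.

```python
import heapq

class MultiplexedPipeline:
    """A single processing path shared between inputs of different
    priority; a lower number means higher priority."""
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves FIFO order per priority

    def submit(self, priority, image):
        heapq.heappush(self._queue, (priority, self._seq, image))
        self._seq += 1

    def step(self):
        """Process one unit of work: always the highest-priority input,
        so stills advance only as video processing time allows."""
        if not self._queue:
            return "idle"
        priority, _, image = heapq.heappop(self._queue)
        return f"processed {image} (priority {priority})"

pipe = MultiplexedPipeline()
pipe.submit(0, "video frame 1")
pipe.submit(1, "still frame")
pipe.submit(0, "video frame 2")
print(pipe.step(), pipe.step(), pipe.step())  # video, video, then still
```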
  • Multiplexing of the imaging pipeline is also implemented in an embodiment utilizing multiple image sensors 101. For example, consider a stereoscopic image device that delivers a left image and a right image of an object to a single image pipeline, as represented in FIG. 5. The single image pipeline in pipeline processing logic 104 can therefore be multiplexed between the left and right images that are being input in parallel to the image signal processing pipeline so that the pipeline is shared. Instead of processing one of the images in its entirety after the other has been processed in its entirety, the images can be processed concurrently, with the front-end processing logic 103 switching processing between them as processing time allows. This reduces latency, since processing of one image is not delayed until completion of the other, and processing of the two images finishes more quickly.
  • Alternatively, one embodiment utilizes multiple image sensors 101 that produce multiple inputs for the pipeline processing logic 104. Referring now to FIG. 6, in one scenario, one of the image sensors 101 b may capture a low resolution image that is fed out as a preview of a recently captured image, while the other image sensor 101 a captures a high resolution image of the subject that is processed in parallel. Alternatively, the low resolution image may be used for framing a shot to be captured, where the subsequently captured shot or image is at a higher resolution and may undergo additional processing. Therefore, this embodiment features an imaging device with two fully parallel image capture and processing pipeline paths.
  • Further, in some embodiments, a single image sensor 101 is utilized to capture image information and provide the information to the front-end processing logic 103, whereby the front-end processing logic 103 may generate two input images for parallel paths in the pipeline processing logic 104 (as represented in FIG. 7). Also, in some embodiments, a single image sensor 101 is utilized to capture image information and provide the information to the front-end processing logic 103, whereby the front-end processing logic 103 may generate two input images for multiplexed input into a single path of the pipeline processing logic 104 (as represented in FIG. 6).
  • As referenced previously, for a given image, embodiments may transmit higher and lower resolution (or temporal, or quality) counterparts to expedite frame processing. In various stages of encoding and decoding processes, prediction between frames may be done on a macro block or on a pixel level, where a smaller resolution frame may have macro blocks that correspond to larger macro blocks in the higher resolution images. Further, individual pixels of the low resolution image may correspond to macro blocks of a higher resolution image. By passing along low resolution and high resolution images in parallel, the low resolution images may be used to predict changes in macro blocks, or in averaged groups of pixels, of the higher resolution images.
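  • As a rough illustration of this correspondence, the sketch below predicts per-macro-block averages of a high resolution frame from the pixels of its low resolution counterpart, assuming a 4x downscale and 16x16 macro blocks; the names and parameters are illustrative only.

```python
import numpy as np

def predict_macroblocks(low_res, scale=4, mb=16):
    """Each (mb/scale)-sized patch of the low resolution frame stands in
    for one mb x mb macro block of the high resolution frame."""
    step = mb // scale                 # low-res pixels per macro block
    h, w = low_res.shape[0] // step, low_res.shape[1] // step
    pred = np.empty((h, w))
    for by in range(h):
        for bx in range(w):
            patch = low_res[by*step:(by+1)*step, bx*step:(bx+1)*step]
            pred[by, bx] = patch.mean()  # predicted block average
    return pred

low = np.random.rand(120, 160)         # 4x-downscaled frame
mb_pred = predict_macroblocks(low)     # one prediction per 16x16 block
```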
  • Referring to FIGS. 8 and 9, representations of embodiments of an encoder architecture 800 and decoder architecture 900, implemented by the pipeline processing logic of the image processing circuitry of FIG. 1, are presented. Referring to FIG. 8 and the represented encoder architecture 800, the adaptable video architecture may provide for a scalable video pipeline. Video processing predicts the current frame content utilizing content from previous video frames. For example, H.264 uses this temporal coding for video processing. Other spatial and quality coding may also be used. Scalable video coding (SVC) is an extension of H.264 that uses video information at different resolutions to predict current frame content. SVC defines a plurality of subset bitstreams 802 a, 802 b, with each subset being independently decodable in a similar fashion to a single H.264 bitstream; merely by dropping packets from the larger overall bitstream, a subset bitstream can be exposed. Each subset bitstream 802 can represent one or more of a scalable resolution, frame rate, and quality video signal. More particularly, the subset bitstreams 802 represent video layers within SVC, with the base layer 802 a being fully compatible with H.264 (a single layer standard), in one embodiment. When the overall bitstream 806 is transmitted (e.g., by over-air broadcast), a receiving device can use the appropriate subset bitstream to perform the video processing. The additional subset bitstream layers can be discarded or used for temporal, spatial, and/or signal quality improvements.
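  • A simplified sketch of subset bitstream extraction by packet dropping follows; the dictionary-based packets and the `layer` tag are stand-ins for SVC's actual NAL-unit syntax, not a real parser.

```python
def extract_subset(packets, max_layer):
    """Drop packets above max_layer; what remains is itself decodable."""
    return [p for p in packets if p["layer"] <= max_layer]

stream = [
    {"layer": 0, "data": b"base layer (H.264-compatible)"},
    {"layer": 1, "data": b"spatial enhancement"},
    {"layer": 2, "data": b"quality enhancement"},
]
base_only = extract_subset(stream, max_layer=0)  # plays on an H.264 decoder
enhanced = extract_subset(stream, max_layer=1)   # base plus spatial layer
```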
  • Accordingly, during encoding, a lower resolution image (e.g., 802 a) may be generated to assist higher resolution encoding even though the resulting bitstream merely comprises the higher resolution version (e.g., 802 b), while the lower resolution version (e.g., 802 a) is purged or deleted, in one embodiment. The lower resolution version (e.g., 802 a) may be generated on the fly by downscaling the higher resolution image (e.g., 802 b), in one embodiment. Also, some embodiments may concurrently capture a lower resolution image (e.g., 802 a) and a higher resolution image (e.g., 802 b) using multiple image sensors 101.
  • Also, for transcoding, an encoded bitstream may be decoded and processed to create a lower resolution counterpart. Further, each of the resolution (or temporal, or quality) counterparts may be encoded (by one or more encoder portions 804 a, 804 b) for bitstream delivery or transmission to an end-point user device. In one embodiment, the encoded output 806 may comprise layers of the lower resolution/temporal/quality image sequences 802 a and higher resolution/temporal/quality image sequences 802 b of the same underlying media content.
  • Alternatively or in conjunction, one embodiment generates an encoded output 806 that comprises layers of lower resolution/temporal/quality image sequences 802 a and higher resolution/temporal/quality image sequences 802 b that are not derived from the same original source or not the same underlying media content. For example, two different image sequences may be captured concurrently from dual or multiple image sensors 101 and used as source material for the different layers 802.
  • Therefore, an embodiment of the adaptable video (transcode-encode-decode) architecture has at least two modes. First, the adaptable architecture 804 a is instantiated once for H.264 decode or another single layer standard. Second, the adaptable architecture 804 b is instantiated multiple times, each instance designed to accelerate the decoding of one SVC layer to improve the generated video image. For example, a lower resolution H.264 decode pipeline (M) may dump out internal aspects 803, which may then be read into the next higher resolution layer (M+1). Information 803 such as, e.g., motion vectors, transform coefficients, and/or image data may be tapped out prior to the application of a deblocking filter for use in the higher resolution pipeline. This may also be applied to multiple layers of progressively higher quality (and/or bitrate) at the same resolution, or combined with different resolution layers. For example, a lower quality layer 804 a (e.g., in signal-to-noise ratio or fidelity) may dump out internal aspects 803, which may then be read into the next higher quality layer 804 b. The interlayer interpolations 805 (e.g., up-sampling and/or filtering) may be performed externally by software modules executed by shared general-purpose processing resources of the video device, or by dedicated hardware.
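  • As one hedged example of an interlayer interpolation 805, the sketch below upsamples a per-block motion vector field from layer M to layer M+1, scaling the displacements with the resolution; nearest-neighbor upsampling is assumed for simplicity, whereas real SVC prediction is more elaborate.

```python
import numpy as np

def upsample_motion_vectors(mv, factor=2):
    """Nearest-neighbor upsampling of a (rows, cols, 2) motion vector
    field; displacement magnitudes scale with the larger frame."""
    up = mv.repeat(factor, axis=0).repeat(factor, axis=1)
    return up * factor

mv_low = np.array([[[1, 0], [2, -1]],
                   [[0, 3], [1, 1]]], dtype=np.int32)  # 2x2 blocks of (dy, dx)
mv_high = upsample_motion_vectors(mv_low)              # 4x4 blocks for layer M+1
```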
  • Correspondingly, in some implementations, the decoder architecture 900 (FIG. 9) may include a plurality of decode pipelines 904 a, 904 b, with each decode pipeline being associated with a different resolution. The decode pipelines 904 may be implemented in hardware and/or in software modules executed by general-purpose processing resources. Information 903 may be tapped out of a lower resolution decode pipeline (M) 904 a, processed using an interlayer interpolation 905, and supplied to the next higher resolution decode pipeline (M+1) 904 b for use. In other implementations, a single decode pipeline 904 may be used to perform the video processing at multiple resolutions. In this case, the decode pipeline performs the video processing at a first resolution (M), with information being extracted as appropriate, and may then perform the video processing at the next resolution (M+1) or at another higher resolution (e.g., M+2). Processing flow may be adjusted by sequencing the flow through the different decoding pipelines as appropriate.
  • Further, for a single bitstream 902 b comprising a single layer, an embodiment of the decoder architecture 900 generates a lower resolution/temporal/quality counterpart 902 a on the fly, and thereafter uses the original and lower resolution/temporal/quality counterparts 902 a, 902 b in an SVC or SVC-like decoding process. Accordingly, in one embodiment, each of the components 904 a, 904 b of the decoder architecture 900 includes the ability to insert or extract cross-layer information supporting various layers of encoding.
  • For one embodiment, the structure of the decoders 904 a, 904 b is instantiated based upon the particular layers 902 a, 902 b being decoded. Each portion of the decoder architecture 900 may tap out data 903 that is used for decoding of differing layers of the multiple layer streams of images 902 a, 902 b. Prediction vectors or components from the lower layer decoding function 904 a may be fed to the higher layer decoding function 904 b. Further, in one embodiment, interpolation in software 905 can be used to aid in carrying particular components from one resolution or quality level to the next.
  • In some implementations, interlayer prediction vectors or components are not necessarily stored in memory 108, because these components may be passed between layers in hardware of the decoder architecture 900 (e.g., field programmable gate arrays, static random access memory (SRAM)-based programmable devices, etc.). Because the lower layers can work faster in the decoding process, the prediction coefficients can be obtained from a lower layer and passed to a higher layer for processing, after which the lower layer decoding can be shut down to save processing resources. Accordingly, in some embodiments, inter-layer processing 905 is handled purely in hardware, without the memory bandwidth overhead of passing prediction information to synchronous dynamic random access memory (SDRAM) for software processing.
  • While the multiple decoded streams 906 a, 906 b may be used to separately feed different devices or one may be selected and the others purged, the various layers may also be transcoded, in some embodiments, after they have been successfully decoded.
  • Keeping the above points in mind, FIG. 10 is a block diagram illustrating an example of an electronic device 1005 that may provide for the processing of image data using one or more of the image processing techniques briefly mentioned above. The electronic device 1005 may be any type of electronic device, such as a laptop or desktop computer, a mobile phone, a tablet, a digital media player, or the like, that is configured to receive and process image data, such as data acquired using one or more image sensing components.
  • Regardless of its form (e.g., portable or non-portable), it should be understood that the electronic device 1005 may provide for the processing of image data using one or more of the image processing techniques briefly discussed above, among others. In some embodiments, the electronic device 1005 may apply such image processing techniques to image data stored in a memory 1030 of the electronic device 1005. In further embodiments, the electronic device 1005 may include one or more imaging devices 1080, such as an integrated or external digital camera, configured to acquire image data, which may then be processed by the electronic device 1005 using one or more of the above-mentioned image processing techniques.
  • As shown in FIG. 10, the electronic device 1005 may include various internal and/or external components which contribute to the function of the device 1005. Those of ordinary skill in the art will appreciate that the various functional blocks shown in FIG. 10 may comprise hardware elements (including circuitry), software elements (including computer code stored on a computer readable medium), or a combination of both hardware and software elements. For example, in the presently illustrated embodiment, the electronic device 1005 may include input/output (I/O) ports 1010, one or more processors 1020, memory device 1030, non-volatile storage 1040, networking device 1050, power source 1060, and display 1070. Additionally, the electronic device 1005 may include one or more imaging devices 1080, such as a digital camera, and image processing circuitry 1090. As will be discussed further below, the image processing circuitry 1090 may be configured to implement one or more of the above-discussed image processing techniques when processing image data. As can be appreciated, image data processed by the image processing circuitry 1090 may be retrieved from the memory 1030 and/or the non-volatile storage device(s) 1040, or may be acquired using the imaging device 1080.
  • Before continuing, it should be understood that the system block diagram of the device 1005 shown in FIG. 10 is intended to be a high-level control diagram depicting various components that may be included in such a device 1005. That is, the connection lines between the individual components shown in FIG. 10 may not necessarily represent paths or directions through which data flows or is transmitted between various components of the device 1005. Indeed, as discussed below, the depicted processor(s) 1020 may, in some embodiments, include multiple processors, such as a main processor (e.g., CPU) and dedicated image and/or video processors. In such embodiments, the processing of image data may be primarily handled by these dedicated processors, thus effectively offloading such tasks from the main processor (CPU).
  • Referring next to FIG. 11, shown is a flowchart that provides one example of the operation of a portion of the image processing circuitry 100 according to various embodiments. It is understood that the flowchart of FIG. 11 provides merely an example of the many different types of functional arrangements that may be employed to implement the operation of the portion of the image processing circuitry 100 as described herein. As an alternative, the flowchart of FIG. 11 may be viewed as depicting an example of steps of a method implemented in the electronic device 1005 (FIG. 10) according to one or more embodiments.
  • Beginning in step 1102, the image processing circuitry 100 provides an imaging pipeline for processing images captured from one or more image sensors 101, where the image signal processing pipeline features two parallel paths for processing the images. As described in step 1104, in a first parallel path of the pipeline, an input image obtained from the image sensor(s) 101 is processed at full-resolution. Additionally, in a second parallel path of the pipeline, an input image obtained from the image sensor(s) 101 is processed at a down-scaled resolution, as depicted in step 1106. The down-scaled resolution version of the input image is output from the second parallel path of the pipeline before completion of processing of the input image at full-resolution and is provided for display, in step 1108.
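  • A minimal sketch of this two-path timing follows, using threads to stand in for the parallel hardware paths and sleeps to stand in for processing time; all names and durations are illustrative.

```python
import threading
import time

def full_res_path(frame, results):
    time.sleep(0.05)                 # stands in for heavy full-res stages
    results["full"] = f"full-res result of {frame}"

def preview_path(frame, results):
    time.sleep(0.005)                # the down-scaled path finishes sooner
    results["preview"] = f"preview of {frame}"

results = {}
t_full = threading.Thread(target=full_res_path, args=("frame 0", results))
t_prev = threading.Thread(target=preview_path, args=("frame 0", results))
t_full.start()
t_prev.start()
t_prev.join()
print("display:", results["preview"])  # available before full-res completes
t_full.join()
print("encode/store:", results["full"])
```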
  • Next, referring to FIG. 12, shown is a flowchart that provides an additional example of the operation of a portion of the image processing circuitry 100 according to various embodiments. Beginning in step 1202, the image processing circuitry 100 provides an imaging pipeline for processing images captured from one or more image sensors 101, where the image signal processing pipeline features two parallel paths for processing the images. As described in step 1204, in a first parallel path of the pipeline, an input image obtained from the image sensor(s) 101 is processed at full-resolution. Additionally, in a second parallel path of the pipeline, an input image obtained from the image sensor(s) 101 is processed at a down-scaled resolution, as depicted in step 1206. The down-scaled resolution version of the input image undergoes image enhancement analysis in the second parallel path, the results of which are applied to the full-resolution version of the image in the first parallel path, in step 1208. In particular, pixels can be analyzed in the down-scaled resolution version of the input image that may not be analyzable as efficiently in the full-resolution version due to buffer limitations or other hardware constraints. In various embodiments, the types of image enhancement analysis may include noise filtering, dynamic range optimization, high dynamic range imaging, and facial or object recognition, among others.
  • In FIG. 13, a flow chart is shown that provides an additional example of the operation of a portion of the image processing circuitry 100 according to various embodiments. Beginning in step 1302, the image processing circuitry 100 provides an image signal processing pipeline for processing images captured from one or more image sensors 101, where the pipeline features a single pipeline path for processing the images. As described in step 1304, multiple input images may be fed into the single pipeline path by multiplexing the different images in front-end circuitry (e.g., front-end processing logic 103). For example, consider a stereoscopic image device that delivers a left image and a right image of an object to a single image pipeline, as represented in FIG. 5. The single image pipeline in pipeline processing logic 104 can therefore be multiplexed between the left and right images that are being input in parallel to the pipeline via the front-end circuitry. Instead of processing one of the images in its entirety after the other has been processed in its entirety, the images can be processed concurrently, with the front-end processing circuitry switching processing between them as processing time allows.
  • Further, in FIG. 14, a flow chart is shown that provides an additional example of the operation of a portion of the image processing circuitry 100 according to various embodiments. Beginning in step 1402, front-end processing circuitry can receive a single input image from an image sensor 101. In step 1404, the front-end processing circuitry may then generate two or more input images for multiplexed input into a single path of an image signal processing pipeline of pipeline processing logic 104 (as represented in FIG. 6). The single pipeline in pipeline processing logic 104 can therefore be multiplexed between the multiple images that have been generated by the front-end circuitry, in step 1406. Instead of processing one of the images in its entirety after the other has been processed in its entirety, the images can be processed concurrently, with the front-end processing circuitry switching processing between them as processing time allows.
  • Next, in FIG. 15, a flow chart is shown that provides an additional example of the operation of a portion of the image processing circuitry 100 according to various embodiments. Accordingly, one embodiment of the present disclosure captures the raw image data by the sensor 101 at a high resolution suitable for still image photography, in step 1502. Then, the front-end pipeline processing logic 103 scales down the captured images to a resolution suitable for video processing, in step 1504, before feeding the image data to the appropriate pipeline processing logic 104, in step 1506. When the user or the imaging device 150 decides to capture a still image, for this one frame, the front-end pipeline processing logic 103 will receive instructions from the control logic 105 and store the desired frame in memory 108 at the higher resolution, in step 1508. Further, in one embodiment, although a main imaging path of the pipeline is handling the video processing, the main imaging path can be provided the still image from memory 108 as processing time allows, in step 1510.
  • Referring now to FIG. 16, shown is a flow chart illustrating an example of scalable video pipeline processing. Beginning with step 1602, a first subset bitstream having a first resolution is obtained and processed in a video pipeline 804 of the video or imaging device 150, in step 1604. As discussed above, video information associated with the first subset bitstream is extracted (or tapped) from the video pipeline 804 during processing of the first subset bitstream. In step 1606, interlayer interpolation is performed on at least a portion of the extracted video information.
  • In step 1608, at least a portion of the extracted video data is provided to a video pipeline 804 of the video device 150 for processing (1610) of a second subset bitstream having a second resolution higher than the first resolution. In step 1612, if another higher resolution subset bitstream is to be processed, then the flow returns to step 1606, where interlayer interpolation is performed on at least a portion of the video information extracted during processing of the second subset bitstream. The flow continues until no higher resolution subset bitstream remains to be processed, ending at step 1612.
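  • The loop of FIG. 16 can be sketched as follows, with `decode_layer` and `interlayer_interpolate` as illustrative placeholders for pipeline 804 and interpolation 905; no real bitstream parsing is attempted.

```python
def decode_scalable(layers):
    """Process subset bitstreams lowest resolution first, feeding tapped
    information through an interlayer step into the next pipeline."""
    carried = None
    outputs = []
    for layer in layers:                          # steps 1602-1612
        decoded, tapped = decode_layer(layer, hint=carried)
        outputs.append(decoded)
        carried = interlayer_interpolate(tapped)  # step 1606
    return outputs

def decode_layer(layer, hint):
    # Placeholder: a real pipeline 804 would use `hint` for prediction.
    return f"{layer} decoded (hint={hint})", f"{layer} side info"

def interlayer_interpolate(info):
    return f"upsampled {info}"

print(decode_scalable(["480p", "720p", "1080p"]))
```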
  • Next, in FIG. 17, a flow chart is shown that provides an additional example of the operation of a portion of the image processing circuitry 100 according to various embodiments. Accordingly, one embodiment of the present disclosure captures the raw image data (which may be a sequence of images) by an image sensor 101 at full resolution, in step 1702. Then, the front-end pipeline processing logic 103 scales down the captured images to a lower resolution suitable for video processing by a downstream end-point device, in step 1704. Then, each layer of the input bitstream is encoded and combined to generate a mixed layer output bitstream (e.g., an SVC bitstream) that can be delivered for an SVC or SVC-like decoding process to a downstream end-point device, in step 1706.
  • In FIG. 18, a flow chart is shown that provides an additional example of the operation of a portion of the image processing circuitry 100 according to various embodiments. Accordingly, one embodiment of the present disclosure captures the raw image data (which may be a sequence of images) by an image sensor 101 at full resolution, in step 1802. Then, the front-end pipeline processing logic 103 obtains lower resolution image data that is captured concurrently with the full-resolution image data, in step 1804. Accordingly, in step 1806, each layer of the input bitstream is encoded and combined to generate a mixed layer output bitstream (e.g., an SVC bitstream) that can be delivered for an SVC or SVC-like decoding process to a downstream end-point device.
  • Any process descriptions or blocks in flow charts should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of embodiments of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
  • In the context of this document, a “computer readable medium” can be any means that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a nonexhaustive list) of the computer readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). In addition, the scope of certain embodiments includes embodying the functionality of the embodiments in logic embodied in hardware or software-configured mediums.
  • It should be emphasized that the above-described embodiments are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.

Claims (20)

Therefore, having thus described various embodiments, at least the following is claimed:
1. A system comprising:
a hardware processor; and
an image processing circuitry configured to receive first image data to be processed and second image data to be processed to aid in enhancement of the first image data,
the image processing circuitry comprising an imaging pipeline for processing a plurality of image data, wherein the image processing circuitry is configured to process the first image data at full-resolution and is configured to process the second image data at a down-scaled resolution; wherein a portion of the image processing circuitry is configured to shut down after passing extracted data for processing of the first image data without storing the extracted data in memory.
2. The system of claim 1, further comprising a first image sensor configured to capture the first image data and a second image sensor configured to capture the second image data.
3. The system of claim 1, wherein the enhancement of the first image data comprises at least one of noise filtering, dynamic range optimization, or high dynamic range imaging.
4. The system of claim 1, wherein the image processing circuitry features multiple parallel pipeline paths comprising a first pipeline configured to process the first image data at full-resolution and a second pipeline configured to process the second image data at the down-scaled resolution.
5. The system of claim 1, wherein the image processing circuitry features a single pipeline path configured to process a multiplexed stream comprising the first image data at full-resolution and the second image data at the down-scaled resolution.
6. The system of claim 1, further comprising an image sensor configured to capture the first image data, wherein the first image data is downsampled to generate the second image data.
7. The system of claim 1, wherein a first portion of the image pipeline that processes the second image data is instantiated to decode the second image data and to tap extracted enhancement data to a second portion of the image pipeline that processes the first image data and is instantiated for decoding the first image data.
8. An image processing method, comprising:
obtaining first image data at full resolution to be processed and second image data at a down-scaled resolution to be processed to aid in processing of the first image data;
processing, by image processing circuitry, the second image data and extracting enhancement image data to aid in processing of the first image data; and
processing, by the image processing circuitry, the first image data using the enhancement image data.
9. The image processing method of claim 8, further comprising:
capturing the first image data with a first image sensor; and
concurrently capturing the second image data with a second image sensor.
10. The image processing method of claim 8, further comprising generating an output bitstream comprising encoded layers of the first image data and the second image data.
11. The image processing method of claim 8, wherein the processing of the first image data and the second image data utilizes multiple parallel pipeline paths comprising a first pipeline configured to process the first image data at full-resolution and a second pipeline configured to process the second image data at the down-scaled resolution.
12. The image processing method of claim 8, wherein the processing of the first image data and the second image data utilizes a single pipeline path configured to process a multiplexed stream comprising the first image data at full-resolution and the second image data at the down-scaled resolution.
13. The image processing method of claim 8, further comprising:
capturing the first image data with a first image sensor; and
downsampling the first image data to generate the second image data.
14. The image processing method of claim 8, wherein the processing of the first image data comprises object or facial recognition.
15. The image processing method of claim 8, wherein the processing comprises decoding of the first image data.
16. The image processing method of claim 15, wherein a portion of an image pipeline that processes the second image data is instantiated for decoding the second image data and tapping the enhancement data to a portion of the image pipeline that processes the first image data and is instantiated for decoding the first image data.
17. A non-transitory computer readable medium having an image processing program that, when executed by a hardware processor, causes the hardware processor to:
obtain first image data at full resolution to be processed and second image data at a down-scaled resolution to be processed to aid in processing of the first image data;
extract enhancement image data from the second image data to aid in processing of the first image data; and
process the first image data using the enhancement image data.
18. The computer readable medium of claim 17, the program further causing the hardware processor to:
capture the first image data with a first image sensor; and
concurrently capture the second image data with a second image sensor.
19. The computer readable medium of claim 17, the program further causing the hardware processor to:
capture the first image data with a first image sensor; and
downsample the first image data to generate the second image data.
20. The computer readable medium of claim 17, wherein the processing comprises decoding of the first image data.
US14/039,922 2011-07-20 2013-09-27 Concurrent image processing for generating an output image Abandoned US20140028870A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/039,922 US20140028870A1 (en) 2011-07-20 2013-09-27 Concurrent image processing for generating an output image

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201161509797P 2011-07-20 2011-07-20
US201161509747P 2011-07-20 2011-07-20
US13/235,975 US20130021504A1 (en) 2011-07-20 2011-09-19 Multiple image processing
US13/431,064 US8553109B2 (en) 2011-07-20 2012-03-27 Concurrent image processing for generating an output image
US14/039,922 US20140028870A1 (en) 2011-07-20 2013-09-27 Concurrent image processing for generating an output image

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/431,064 Continuation US8553109B2 (en) 2011-07-20 2012-03-27 Concurrent image processing for generating an output image

Publications (1)

Publication Number Publication Date
US20140028870A1 true US20140028870A1 (en) 2014-01-30

Family

ID=47555528

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/431,064 Active US8553109B2 (en) 2011-07-20 2012-03-27 Concurrent image processing for generating an output image
US14/039,922 Abandoned US20140028870A1 (en) 2011-07-20 2013-09-27 Concurrent image processing for generating an output image

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/431,064 Active US8553109B2 (en) 2011-07-20 2012-03-27 Concurrent image processing for generating an output image

Country Status (1)

Country Link
US (2) US8553109B2 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101316231B1 (en) * 2011-07-26 2013-10-08 엘지이노텍 주식회사 Multi-image processing apparatus
EP2974285B1 (en) * 2013-03-15 2019-06-12 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US9369662B2 (en) * 2013-04-25 2016-06-14 Microsoft Technology Licensing, Llc Smart gallery and automatic music video creation from a set of photos
KR102085270B1 (en) * 2013-08-12 2020-03-05 삼성전자 주식회사 Method for selecting resolution with minimum distortion value and devices performing the method
US9251431B2 (en) * 2014-05-30 2016-02-02 Apple Inc. Object-of-interest detection and recognition with split, full-resolution image processing pipeline
US9449239B2 (en) 2014-05-30 2016-09-20 Apple Inc. Credit card auto-fill
US9565370B2 (en) 2014-05-30 2017-02-07 Apple Inc. System and method for assisting in computer interpretation of surfaces carrying symbols or characters
US20160227228A1 (en) * 2015-01-29 2016-08-04 Vixs Systems, Inc. Video camera with layered encoding, video system and methods for use therewith
US10257394B2 (en) * 2016-02-12 2019-04-09 Contrast, Inc. Combined HDR/LDR video streaming
US10264196B2 (en) 2016-02-12 2019-04-16 Contrast, Inc. Systems and methods for HDR video capture with a mobile device
US10003758B2 (en) * 2016-05-02 2018-06-19 Microsoft Technology Licensing, Llc Defective pixel value correction for digital raw image frames
WO2018031441A1 (en) 2016-08-09 2018-02-15 Contrast, Inc. Real-time hdr video for vehicle control
US20180300515A1 (en) * 2017-04-17 2018-10-18 Symbol Technologies, Llc Method and apparatus for accelerated data decoding
US11265530B2 (en) 2017-07-10 2022-03-01 Contrast, Inc. Stereoscopic camera
US10742834B2 (en) * 2017-07-28 2020-08-11 Advanced Micro Devices, Inc. Buffer management for plug-in architectures in computation graph structures
JP7005284B2 (en) * 2017-11-01 2022-01-21 キヤノン株式会社 Image processing device, control method of image processing device, and program
US10951888B2 (en) 2018-06-04 2021-03-16 Contrast, Inc. Compressed high dynamic range video
US11223762B2 (en) * 2019-12-06 2022-01-11 Samsung Electronics Co., Ltd. Device and method for processing high-resolution image

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6111596A (en) * 1995-12-29 2000-08-29 Lucent Technologies Inc. Gain and offset correction for efficient stereoscopic coding and improved display
US8456515B2 (en) * 2006-07-25 2013-06-04 Qualcomm Incorporated Stereo image and video directional mapping of offset
US20080030592A1 (en) * 2006-08-01 2008-02-07 Eastman Kodak Company Producing digital image with different resolution portions
KR100817055B1 (en) * 2006-08-23 2008-03-26 삼성전자주식회사 Method and apparatus of Image Processing using feedback route
US7859588B2 (en) * 2007-03-09 2010-12-28 Eastman Kodak Company Method and apparatus for operating a dual lens camera to augment an image
KR101547828B1 (en) * 2009-03-16 2015-08-28 삼성전자주식회사 Apparatus and method for image processing
US8111300B2 (en) * 2009-04-22 2012-02-07 Qualcomm Incorporated System and method to selectively combine video frame image data
US8305485B2 (en) * 2010-04-30 2012-11-06 Eastman Kodak Company Digital camera with coded aperture rangefinder
JP5158138B2 (en) * 2010-06-22 2013-03-06 株式会社ニコン Imaging device, playback device, and playback program

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020118750A1 (en) * 1997-04-01 2002-08-29 Sony Corporation Image encoder, image encoding method, image decoder, image decoding method, and distribution media
US6810094B1 (en) * 1998-03-12 2004-10-26 Hitachi, Ltd. Viterbi decoder with pipelined parallel architecture
US6510177B1 (en) * 2000-03-24 2003-01-21 Microsoft Corporation System and method for layered video coding enhancement
US20020087900A1 (en) * 2000-12-29 2002-07-04 Homewood Mark Owen System and method for reducing power consumption in a data processor having a clustered architecture
US20060050785A1 (en) * 2004-09-09 2006-03-09 Nucore Technology Inc. Inserting a high resolution still image into a lower resolution video stream
US20090003451A1 (en) * 2004-12-10 2009-01-01 Micronas Usa, Inc. Shared pipeline architecture for motion vector prediction and residual decoding
US20080037838A1 (en) * 2006-08-11 2008-02-14 Fotonation Vision Limited Real-Time Face Tracking in a Digital Image Acquisition Device
US20100033602A1 (en) * 2008-08-08 2010-02-11 Sanyo Electric Co., Ltd. Image-Shooting Apparatus

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016105777A1 (en) * 2014-12-22 2016-06-30 Google Inc. Image sensor having multiple output ports
US9615013B2 (en) 2014-12-22 2017-04-04 Google Inc. Image sensor having multiple output ports
US9866740B2 (en) 2014-12-22 2018-01-09 Google Llc Image sensor having multiple output ports
US9918073B2 (en) 2014-12-22 2018-03-13 Google Llc Integrated camera system having two dimensional image capture and three dimensional time-of-flight capture with movable illuminated region of interest
US10182182B2 (en) 2014-12-22 2019-01-15 Google Llc Image sensor having multiple output ports

Also Published As

Publication number Publication date
US8553109B2 (en) 2013-10-08
US20130021505A1 (en) 2013-01-24

Similar Documents

Publication Publication Date Title
US8553109B2 (en) Concurrent image processing for generating an output image
US20130021504A1 (en) Multiple image processing
EP3429189B1 (en) Dual image capture processing
US8199222B2 (en) Low-light video frame enhancement
US9325905B2 (en) Generating a zoomed image
JP5845464B2 (en) Image processing apparatus, image processing method, and digital camera
US20110176014A1 (en) Video Stabilization and Reduction of Rolling Shutter Distortion
US20120224766A1 (en) Image processing apparatus, image processing method, and program
US9979887B1 (en) Architecture for video, fast still and high quality still picture processing
US20130188045A1 (en) High Resolution Surveillance Camera
US9628719B2 (en) Read-out mode changeable digital photographing apparatus and method of controlling the same
WO2019045872A1 (en) Dual phase detection auto focus camera sensor data processing
US20170078351A1 (en) Capture and sharing of video
US8854503B2 (en) Image enhancements through multi-image processing
KR20080078700A (en) Scaler architecture for image and video processing
US20110157426A1 (en) Video processing apparatus and video processing method thereof
WO2016171006A1 (en) Encoding device and encoding method, and decoding device and decoding method
JP5407651B2 (en) Image processing apparatus and image processing program
US11843871B1 (en) Smart high dynamic range image clamping
US20240078635A1 (en) Compression of images for generating combined images
US20230281848A1 (en) Bandwidth efficient image processing
US20240095962A1 (en) Image data re-arrangement for improving data compression effectiveness
TWI424371B (en) Video processing device and processing method thereof
JP2017126889A (en) Image processing apparatus, imaging device, image processing method and program
KR20140108034A (en) Photographing apparatus, method for controlling the same, and computer-readable recording medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PLOWMAN, DAVID;PATUCK, NAUSHIR;SEWELL, BENJAMIN;AND OTHERS;SIGNING DATES FROM 20120322 TO 20120326;REEL/FRAME:031509/0567

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001

Effective date: 20170119