US20130022116A1 - Camera tap transcoder architecture with feed forward encode data - Google Patents


Info

Publication number
US20130022116A1
US20130022116A1
Authority
US
United States
Prior art keywords
encoder
feed forward
data
encode data
media
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/313,345
Inventor
James D. Bennett
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
Broadcom Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Broadcom Corp
Priority to US13/313,345
Assigned to BROADCOM CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BENNETT, JAMES D.
Publication of US20130022116A1
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT. PATENT SECURITY AGREEMENT. Assignors: BROADCOM CORPORATION
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BROADCOM CORPORATION
Assigned to BROADCOM CORPORATION. TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS. Assignors: BANK OF AMERICA, N.A., AS COLLATERAL AGENT

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 5/00 Image enhancement or restoration
                    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
                    • G06T 5/73
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 Image acquisition modality
                        • G06T 2207/10016 Video; Image sequence
                    • G06T 2207/20 Special algorithmic details
                        • G06T 2207/20172 Image enhancement details
                            • G06T 2207/20201 Motion blur correction
    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
                    • H04N 19/10 ... using adaptive coding
                        • H04N 19/134 ... characterised by the element, parameter or criterion affecting or controlling the adaptive coding
                            • H04N 19/136 Incoming video signal characteristics or properties
                                • H04N 19/137 Motion inside a coding unit, e.g. average field, frame or block difference
                                    • H04N 19/139 Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
                        • H04N 19/189 ... characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
                            • H04N 19/192 ... the adaptation method, adaptation tool or adaptation type being iterative or recursive
                    • H04N 19/42 ... characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
                        • H04N 19/436 ... using parallelised computational arrangements
                    • H04N 19/50 ... using predictive coding
                        • H04N 19/503 ... involving temporal prediction
                            • H04N 19/51 Motion estimation or motion compensation
                                • H04N 19/537 Motion estimation other than block-based
                                    • H04N 19/54 Motion estimation other than block-based using feature points or meshes
                                • H04N 19/56 Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
                • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N 23/60 Control of cameras or camera modules
                        • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
                            • H04N 23/681 Motion detection
                                • H04N 23/6811 Motion detection based on the image signal
                            • H04N 23/682 Vibration or motion blur correction
                                • H04N 23/683 Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory

Definitions

  • Video and other media is often streamed in compressed form over a communication network to a destination and rendered in real time by a media player. Instead of downloading the media as a file in its entirety and then playing the file, encoded media is sent in a continuous stream of data, decoded at a destination decoder, and played as the data arrives at a media player. As a result, the streaming of media places a great deal of stress on destination decoders and media players, especially when the encoded media data may need to be adjusted to accommodate constraints at the destination.
  • FIG. 1 is a block diagram of one embodiment of a media processing environment according to the present disclosure.
  • FIGS. 2-4 are block diagrams of one embodiment of a transcoder from the media processing environment of FIG. 1 .
  • FIGS. 5-6 are flow chart diagrams depicting various functionalities of embodiments of the transcoder of FIG. 1 .
  • FIG. 7 is a block diagram of an electronic device featuring the transcoder of FIG. 1 .
  • Embodiments of the present disclosure utilize supplemental encoding information (“feed forward encode data”) provided from an upstream encoder to assist in the encoding of media; the upstream encoding of the raw media data generates the contents of the supplemental feed forward encode data as a by-product.
  • Embodiments include transcoder architecture that can decode an input encoded media data as raw media data and then utilize feed forward encode data (provided with the encoded media data) to encode the raw media data. Further, embodiments of the transcoder architecture include a camera tap in the transcoder architecture.
  • FIG. 1 illustrates a system 100 for a media processing environment according to an embodiment.
  • a media source encoder 110 may transmit a first encoded media stream over a communication pathway 115 to a transcoder 120 , where the communication pathway is a network connection path (e.g., a cable, connector, wireless network, cable network, satellite network, wired network, etc.).
  • the media source encoder 110 may encode a raw media input file and output a multipass first encoded media stream.
  • the first encoded media stream is received and decoded by a transcoder 120 (via decoder 122 ) and then encoded by the transcoder 120 (via encoder 124 ) as a second encoded media stream.
  • the encoder 124 in encoding the media, may also scale the raw media data before generating the second encoded media stream for a destination decoder 130 .
  • the second encoded media stream is transmitted and received by the destination decoder 130 .
  • a video-image camera 140 is also provided and is shown to contain taps or inputs into a decoder 122 and/or encoder 124 of the transcoder 120 over communication pathways.
  • Various elements of the system components (e.g., encoders 110 , 124 , decoders 122 , 130 , camera 140 , etc.) may be implemented in hardware and/or software.
  • Feed forward encode data 105 is shown to be supplied from and to encoders 110 , 124 in the environment.
  • feed forward encode data 105 comprises supplemental encoding information that is provided from an encoder and sent downstream along with the encoded media being supplied from an encoder, such as the media source encoder 110 or an encoder 124 downstream from the media source encoder 110 .
  • video encoding standards such as MPEG-2 and ITU-T H.264 (also known as MPEG-4 Part 10, Advanced Video Coding) use motion compensation for compressing video data comprising a series of pictures.
  • intermediate results from motion compensation processes may be provided as feed forward encode data 105 from media source encoder 110 in generating the first encoded media stream, where a downstream encoder 124 utilizes the feed forward encode data to supplement its motion compensation processes used to generate the second encoded media stream.
  • the downstream encoder 124 can rely on computations and configurations of an upstream or previous encoder to assist in encoding of the raw media data.
  • encoder data from the upstream encoder is not discarded and is rather output from the upstream encoder 110 and received by the downstream or secondary encoder 124 to accelerate the secondary encoder's task of encoding the raw media data.
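  • As an informal sketch of this hand-off (names such as FeedForwardData are illustrative, not from the patent), the upstream encoder can retain its per-block search results and ship them alongside the bit stream, and the downstream encoder can look them up instead of repeating the search:

```python
from dataclasses import dataclass, field

@dataclass
class FeedForwardData:
    """Supplemental by-products of an upstream encode (feed forward encode data 105)."""
    motion_vectors: dict = field(default_factory=dict)  # block index -> (dx, dy)
    quant_step: int = 8                                 # quantizer setting used upstream

def upstream_encode(blocks):
    """Encode blocks, recording search results instead of discarding them."""
    ff = FeedForwardData()
    for i, _ in enumerate(blocks):
        ff.motion_vectors[i] = (i % 3 - 1, 0)  # stand-in for a real motion search
    return ff

def downstream_encode(blocks, ff):
    """Reuse the upstream results rather than repeating the exhaustive search."""
    return [ff.motion_vectors.get(i, (0, 0)) for i, _ in enumerate(blocks)]

blocks = [b"blk"] * 4
ff = upstream_encode(blocks)
mvs = downstream_encode(blocks, ff)
print(mvs)   # the downstream encoder inherits the upstream search results
```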
  • Embodiments of the transcoder 120 may also serve up the media data after scaling or converting the media data into a format suitable for and supported by the destination decoder 130 and/or display device. In general, scaling may involve temporal, spatial, and quality modifications, and various factors may govern the applicability of scaling, such as with scalable video coding (SVC).
  • the video-image camera may be equipped with its own encoder and may perform its own scaling adjustments before outputting a bit stream to the transcoder 120 and its decoder 122 .
  • the video-image camera 140 may not be equipped with its own encoder and may feed raw media data (video, image, audio, etc.) to the encoder 124 of the transcoder 120 .
  • the video-image camera 140 can still do temporal, spatial, and quality modifications before sending the raw data to the transcoder 120 .
  • the media source encoder 110 may also implement SVC adjustments before sending an output downstream. Accordingly, a bit stream may be scaled to remove parts of the bit stream in order to adapt the output to the various needs or preferences of downstream devices or users as well as varying terminal capabilities or network conditions.
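  • One simple, hypothetical form of such scaling is temporal: parts of the stream (here, whole frames) are removed to adapt the output. A minimal sketch, assuming a layer structure in which the dropped frames can be removed independently:

```python
def temporal_scale(frames, keep_every=2):
    """Crude temporal scaling: keep every Nth frame to cut the frame rate."""
    return frames[::keep_every]

frames = list(range(8))               # stand-in for 8 video frames
half_rate = temporal_scale(frames, 2)
print(half_rate)                      # [0, 2, 4, 6] -- half the original frame rate
```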
  • the transcoder 120 in the media processing environment of FIG. 1 is shown as being positioned in an intermediate node between the media source encoder 110 and destination decoder 130 .
  • the transcoder 120 may be collocated with a display device and may therefore act as a destination decoder.
  • the decoder 122 and encoder 124 components of the transcoder 120 may be in separate units and may span two separate nodes, in some embodiments.
  • with the transcoder 120 in an intermediate node of a communication network, such as a set top box, media streams from media sources can be adapted for terminal devices, especially in streaming environments. Also, with supplemental feed forward encode data 105 being provided from upstream encoders, the transcoder 120 may efficiently and quickly output encoded media streams for downstream displays and players.
  • front-end interface circuitry 202 of the transcoder 120 provides multiple pipes or channels for possible input streams.
  • the input streams 204 generally arrive in encoded form, except in an environment that provides raw data, such as an external video-image camera 140 e , in some embodiments.
  • media sources for the input streams 204 include an “On Demand” media server 206 and a broadcast media server 208 that deliver content over the Internet and/or an Intranet 210 .
  • media streams may be provided from satellite and cable infrastructures 212 , 214 , a local media storage 216 , and the video-image camera 140 e on its own independent path or pipe.
  • if the video-image camera 140 e is integrated with its own encoder, the camera 140 e may be tapped into a decoder 122 of the transcoder 120 . If the video-image camera is not integrated with its own encoder, then the camera 140 i may be tapped into an encoder 124 of the transcoder 120 (via interface circuitry 202 ). For example, in some embodiments, the video-image camera 140 i may be internal to the transcoder 120 and may pass raw media data to the encoder 124 of the transcoder 120 for encoding.
  • an internal video-image camera may be integrated as part of a set top box (having transcoder 120 ) that can capture viewer(s) in front of the set top box and tailor displayed content (e.g., parental filtering) based on saved preference information of identified viewer(s) using facial recognition processing on the captured images.
  • the transcoder 120 may provide multiple encoders and decoders to handle the multiple possible standards and formats that are received and required by upstream and downstream nodes in a streaming environment.
  • encoders and decoders may be hardware accelerated and/or comprised of a general purpose processor and applicable software.
  • a media source encoder 110 in addition to providing a media stream may also provide feed forward encode data 105 .
  • the “On Demand” media server 206 , broadcast media server 208 , satellite and cable infrastructures 212 , 214 , local media storage 216 , and the video-image camera 140 may therefore provide feed forward encode data 105 on their respective pipes or communication pathways to the transcoder 120 .
  • the “On Demand” media server 206 , broadcast media server 208 , local media storage 216 , and the video-image camera 140 are shown to contain feed forward processing logic that assists in compiling and sending the feed forward encode data 105 .
  • the “On Demand” media server 206 , broadcast media server 208 , local media storage 216 , and the video-image camera 140 are shown to also contain SVC processing logic that assists in scaling bit stream outputs.
  • in the transcoder 120 , encode stream(s) 230 are received from an input pipe by a multiple input, multiple output (MIMO) decode architecture 122 , which may therefore include multiple decoders. It is noted that the transcoder 120 is not limited to only receiving and processing media streams. In addition to supporting streaming, embodiments of the transcoder 120 may also support store and forward transmissions and other broadcast transmissions.
  • feed forward encode data 105 is supplied to decode architecture of decoder 122 .
  • the decode architecture 122 is shown to output raw stream(s) and/or groups of raw stream(s) 232 , where a grouping of raw streams may all be sent to a particular destination device 250 .
  • the decode architecture 122 passes the raw stream(s) and the feed forward encode data 105 to the Multiple Input, Multiple Output encode architecture 124 .
  • the encoder 124 is configured to provide overlay support, whereby multiple input streams may be combined so that content of one stream is overlaid over content of another upon being displayed.
  • the encode architecture 124 may receive input streams (via the interface circuitry 202 ) from local memory storage 240 (that can be removable) or from internal or external video-image cameras 140 i , 140 e . These streams may be encoded or raw streams 230 , 232 , as the case may be.
  • the encode architecture 124 may scale a bit stream during SVC coding and therefore SVC feedback data 242 is passed to the decode architecture 122 and interface circuitry 202 so that SVC feedback data 242 may be provided to upstream nodes.
  • the encoded output bit stream is provided to destination devices, such as screen assemblies 250 (e.g., a device having display hardware and a display driver).
  • two screen assemblies may actually be located in the same device or serviced by the same device.
  • In FIG. 3 , one embodiment of the transcoder architecture 120 is depicted. It is understood that FIG. 3 shows one particular approach from a multitude of encode standards, where additional blocks and steps may be represented. In FIG. 3 , sources of possible SVC adjustments are indicated by the dashed lines. Accordingly, the figure shows that scaling of bit streams can be effected by many nodes in a streaming network and by many encoder components. For example, a transcoder 120 may adjust a media stream to generate a media signal based on communication channel or pathway characteristics as well as other factors, such as destination device feedback indicating a current state (e.g., its current power state). For instance, when the channel characteristics are unfavorable, one or more video parameters such as the bandwidth, frame rate, color depth, or resolution can be reduced by the transcoder 120 to facilitate accurate decoding of the media signal by the destination device.
  • SVC operations may adjust the resolution of a raw image 301 or raw video 302 received as input based on received SVC input 303 .
  • the size of sample blocks may be adjusted in response to an encoder being under stress due to a current workload (e.g., streaming may place a lot of stress on an encoder), as indicated by lines 304 .
  • a different number of patterns used in a transform (e.g., a Discrete Cosine Transform (DCT), a Discrete Fourier Transform (DFT), etc.) may be selected.
  • the aggressiveness of the quantizer can be adjusted, as indicated by line 306 .
  • the same adjustments for the DCT and quantizer may be made for the inverse DCT and inverse quantizer components.
  • the searches associated with the motion prediction block 350 are generally intense since many different directions in many different neighboring frames are analyzed.
  • a particular encode standard may define a size of a search area (e.g., how many frames backwards and forwards) to be searched for possible matches with a current block.
  • the motion prediction block 350 may initiate SVC adjustments and adapt on the directions that are searched (e.g., only search backwards, do not look back more than 3 frames, etc.) in response to a buffer constraint, a power constraint, limited processing capabilities, etc., as indicated by line 307 .
  • other blocks or stages may be adjusted, including motion compensation 352 , frame buffer 354 , etc.
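  • A hypothetical policy for such adjustments might map the current constraints onto motion-search limits; the thresholds and field names below are illustrative only, not from the patent:

```python
def search_policy(buffer_ok=True, on_battery=False):
    """Choose motion-search limits from current constraints (illustrative values)."""
    if not buffer_ok:
        # Under a buffer constraint, search backwards only and keep one reference frame.
        return {"directions": "backward-only", "max_ref_frames": 1}
    if on_battery:
        # Under a power constraint, search both directions but fewer frames.
        return {"directions": "both", "max_ref_frames": 3}
    return {"directions": "both", "max_ref_frames": 16}

print(search_policy(on_battery=True))
```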
  • the encoding operation consists of the forward encoding path 310 and an inverse decoding path 320 .
  • input media data, such as a video frame, is processed in units of a macroblock (MB) corresponding to a 16×16 block of displayed pixels.
  • the forward encoding path 310 predicts each macroblock using Intra or Inter-prediction.
  • in intra-prediction mode, spatial correlation is used within each macroblock to reduce the amount of transmission data necessary to represent an image.
  • redundancies in a frame are removed without comparing with other media frames.
  • in inter-prediction mode, redundancies are removed by comparing with other media frames.
  • the encoder searches for a block of pixels similar to the current macroblock, known as a reference block. The reference block is identified and subtracted from the current macroblock to form a residual macroblock, or prediction error. Identification of the similar block is known as motion estimation.
  • a memory (frame buffer 354 ) stores the reference block and other reference blocks. The motion prediction block or stage 350 searches the memory for a reference block that is similar to the current macroblock.
  • the reference block is identified by a motion vector MV and the prediction error during motion compensation 352 .
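  • A toy example of the residual formed by this prediction (illustrative numbers; real encoders operate on 16×16 macroblocks):

```python
cur = [12, 15, 14, 13]               # current block's pixel values
ref = [12, 14, 14, 12]               # reference block found by motion estimation
residual = [c - r for c, r in zip(cur, ref)]
print(residual)   # [0, 1, 0, 1] -- small residuals compress far better than raw pixels
```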
  • the residual macroblock and motion vectors are transformed (in DCT stage 356 ), quantized (in quantizer stage 358 ), and encoded (in entropy encoder stage 360 ) before being output.
  • the transformation is used to compress the image in Inter-frames or Intra-frames.
  • the quantization stage 358 reduces the amount of information by dividing each coefficient by a particular number, reducing the quantity of possible values each coefficient can take. Because the values then fall into a narrower range, entropy coding 360 can express them more compactly.
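  • The divide-and-round behavior of the quantizer, and the inverse re-scaling that cannot recover the rounding loss, can be sketched as follows (the step size of 10 is arbitrary):

```python
def quantize(coeffs, step):
    # Divide each coefficient by the step and round: fewer distinct values survive.
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    # Re-scale in the inverse quantizer; the rounding loss is not recovered.
    return [l * step for l in levels]

coeffs = [100, 53, -27, 4, 1, 0]
levels = quantize(coeffs, 10)
print(levels)                  # [10, 5, -3, 0, 0, 0] -- a narrower range of values
print(dequantize(levels, 10))  # [100, 50, -30, 0, 0, 0] -- close to, not equal to, the input
```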
  • the entropy encoder 360 removes the redundancies in the final bit-stream, such as recurring patterns in the bit-stream.
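  • As a stand-in for true entropy coding (which would use variable-length or arithmetic codes), a toy run-length pass shows how recurring patterns in the bit-stream collapse:

```python
def run_length(levels):
    """Toy entropy-style pass: collapse runs of repeated values into [value, count] pairs."""
    out = []
    for v in levels:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

print(run_length([5, 0, 0, 0, -2, -2, 0]))   # [[5, 1], [0, 3], [-2, 2], [0, 1]]
```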
  • the quantized data are re-scaled (in inverse quantizer stage 359 ) and inverse transformed (in inverse DCT stage 357 ) and added to the prediction macroblock to reconstruct a coded version of the media frame which is stored for later predictions in the frame buffer 354 .
  • Motion estimation can potentially use a very large number of memory accesses for determining a reference block.
  • the frame is segmented into multiple macroblocks which are reduced to sets of motion vectors. Accordingly, one whole frame is reduced into many sets of motion vectors.
  • a high definition television (HDTV) video comprises a sequence of 1920×1080 pixel pictures, for example.
  • a common block size can be, for example, a 16×16 block of pixels. Therefore, an exhaustive search may not be practical, especially for encoding in real time.
  • the encoder 300 may limit the search for samples of the current macroblock by reducing a search area. Although the foregoing may be faster than an exhaustive search, this can also be time-consuming and computationally intense.
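  • A back-of-envelope count (full macroblocks only; real encoders also handle the partial bottom row) shows why limiting the search area matters:

```python
mb = 16
mbs = (1920 // mb) * (1080 // mb)    # 120 * 67 = 8040 full 16x16 macroblocks
full = (2 * 16 + 1) ** 2             # +/-16 pixel search window: 33 * 33 = 1089 candidates
small = (2 * 4 + 1) ** 2             # +/-4 pixel search window: 9 * 9 = 81 candidates
print(mbs * full)                    # 8755560 block comparisons per frame, per reference frame
print(mbs * small)                   # 651240 with the restricted search area
```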
  • an embodiment of the transcoder 120 is shown with possible feed forward encode data sources, indicated by dashed-lines, that can address the foregoing issues.
  • the dashed lines shown in the figure lead to possible streams or sources of feed forward encode data that can be sent with an encoded bit stream output 365 to downstream nodes and devices as a feed forward encode data 105 .
  • the searching operations performed for motion estimation in finding reference blocks, motion vectors, and residuals can be exhaustive and burdensome for a transcoder 120 during encode operations.
  • For each input block of a video frame, the upstream encoder will search neighboring frames in the inter-prediction stage (or the same frame in an intra-prediction stage) for a reference block. In an exhaustive search, the upstream encoder will not know which motion vector to send until all possible frames and blocks have been checked in all possible directions. Once the best matches have been determined and the residuals computed, a motion vector output can be generated and sent downstream to a downstream transcoder. Accordingly, at the receiving decoder, the output stream from the upstream encoder is decoded into raw data once again and supplied to the downstream encoder of the transcoder 120 .
  • the downstream encoder may not currently be capable of performing an exhaustive search as carried out by the upstream encoder, and therefore may not be capable of producing a high-quality compressed stream, but for the existence of the feed forward encode data 105 provided from the upstream encoder.
  • an embodiment of the upstream encoder 110 extracts results of its search operations and provides them to the downstream encoder 124 as one possible form of feed forward encode data 105 .
  • the encoder 124 may be able to identify the best match for a current pixel block, since the search operation was previously performed by the upstream encoder 110 and the results of the search are now provided to the downstream encoder 124 as part of feed forward encode data 105 .
  • the encoder may only be able to search for neighboring blocks within a set distance or search area from the current block. Therefore, the best match, as indicated in the feed forward encode data 105 , may not be within the search area; instead, a lower-ranked result (e.g., the fourth best match in the exhaustive search area) may be the best available within the search area being utilized by the current encoder.
  • the feed forward encode data 105 may allow the downstream encoder 124 to limit its motion estimation searching while still generating high quality output with fast processing, because a full search is avoided.
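  • A sketch of this seeded search (helper names such as ref_at are hypothetical): the feed-forward motion vector initializes the search, and only a small neighborhood around it is examined instead of the full area:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length pixel blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def refine_search(cur, ref_at, seed, radius=1):
    """Search only a small neighborhood around the feed-forward seed vector."""
    best_mv, best_cost = seed, sad(cur, ref_at(seed))
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            mv = (seed[0] + dx, seed[1] + dy)
            cost = sad(cur, ref_at(mv))
            if cost < best_cost:
                best_mv, best_cost = mv, cost
    return best_mv, best_cost

cur = [10, 20, 30, 40]                                   # current block
ref_blocks = {(2, 0): [10, 20, 30, 40], (1, 0): [11, 22, 30, 41]}
ref_at = lambda mv: ref_blocks.get(mv, [0, 0, 0, 0])     # fetch candidate block at mv
mv, cost = refine_search(cur, ref_at, seed=(1, 0))
print(mv, cost)   # (2, 0) 0 -- the seed steered a tiny search straight to the exact match
```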
  • the transcoder 120 may be integrated as part of a personal device, such as a tablet, that does not have processing power or battery power comparable to the upstream encoder.
  • the tablet device may provide a compressed video stream that is comparable with that provided by the more powerful upstream encoder.
  • use of the feed forward encode data 105 can appear to increase the processing speed of the encoder 124 acting on the data.
  • feed forward encode data 105 may include quantized weight(s) employed by the quantizer 358 in an encoding process, pertinent settings of intermediate stages in the encoding process, quality settings of intermediate stages, residual information not provided in the main output, etc.
  • information used by an encoder to make a decision determining or shaping an output may be useful to a subsequent encoder and therefore may be provided as a supplemental output in the form of feed forward encode data 105 .
  • a subsequent downstream encoder may recheck this information or simply use the provided information to make its own decision as part of a rule set.
  • the transcoder 120 may itself use feed forward encode data 105 to assist in encoding a bit stream and then pass on the feed forward encode data, without modification, to allow for a downstream encoder to also use the feed forward encode data, in some embodiments.
  • the transcoder 120 may modify or add information to the feed forward encode data or generate new feed forward encode data that can be provided to downstream components, in some embodiments.
  • the feed forward encode data may be output concurrently or simultaneously with an encoded bit stream or media data.
  • FIG. 5 is a flowchart representation of a method in accordance with one embodiment of the present disclosure. In particular, a method is presented for use in conjunction with one or more of the functions and features described in conjunction with FIGS. 1-4 .
  • raw media data is received by an encoder 110 .
  • the encoder 110 initiates execution of an encoding process on the raw media data, where the encoding process contains multiple stages in a pipeline arrangement that are to be completed.
  • supplemental information is extracted from individual stages in the pipeline and output (e.g., concurrently with encoded media data) as feed forward encode data, where the information is used by the individual stage to complete its respective task.
  • coefficient values or weights used in computing an output transform of an input signal may be extracted and included as feed forward encode data and be used by a downstream DCT stage in a downstream encoding process, in step 504 .
  • in a motion prediction stage, stored blocks are compared with an input block, and the results of these comparisons and associated searches may also be extracted and included as feed forward encode data 105 .
  • the primary encoded media stream is output from the encoder 110 along with the supplemental feed forward encode data 105 associated with the primary encoded media stream.
  • the supplemental feed forward encode data 105 is also provided in a compressed form.
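  • The per-stage extraction of FIG. 5 can be sketched as a pipeline in which every stage returns both its output and its working data (the stage functions here are trivial stand-ins for real transform and quantizer stages):

```python
def transform_stage(data):
    out = [v * 2 for v in data]                 # stand-in for a real DCT
    return out, {"stage": "transform", "weights": [2]}

def quant_stage(data):
    out = [v // 4 for v in data]                # stand-in for a real quantizer
    return out, {"stage": "quant", "step": 4}

def encode(data, stages):
    """Run the pipeline; collect each stage's working data as feed forward encode data."""
    feed_forward = []
    for stage in stages:
        data, byproduct = stage(data)
        feed_forward.append(byproduct)
    return data, feed_forward

out, ff = encode([8, 6], [transform_stage, quant_stage])
print(out)   # the encoded output
print(ff)    # the supplemental by-products, output alongside the encoded stream
```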
  • FIG. 6 is a flowchart representation of a method in accordance with one embodiment of the present disclosure.
  • a method is presented for use in conjunction with one or more of the functions and features described in conjunction with FIGS. 1-4 .
  • a primary encoded media stream is received along with the supplemental feed forward encode data 105 associated with the primary encoded media stream by a decoder 122 of a transcoder 120 .
  • the decoder 122 proceeds to decode the primary encoded media stream to generate raw media data that is supplied to an encoder 124 of the transcoder, in step 604 . Further, the decoder 122 passes to the encoder 124 the feed forward encode data 105 , in step 606 .
  • the encoder 124 initiates execution of an encoding process on the raw media data, where the encoding process contains multiple stages in a pipeline arrangement that are to be completed.
  • information is extracted from the feed forward encode data and used to assist in completion of a respective task by a particular stage, in step 610 .
  • coefficient values or weights previously used in computing an output transform of an input signal by an upstream encoder are reused in completing a DCT transform stage in the current encoding process.
  • the results of comparisons completed in a motion prediction stage by an upstream encoder may also be extracted and used by a motion prediction stage in the current encoding process.
  • a second primary encoded media stream is output from the encoder 124 . Further, in some embodiments, the encoder continues to pass or output feed forward encode data downstream that has been used in the encoding process, in step 614 .
  • FIG. 7 shows a block diagram of an example electronic device featuring the transcoder 120 , according to an embodiment.
  • electronic device 700 may include one or more of the elements shown in FIG. 7 .
  • electronic device 700 may include one or more processors (also called central processing units, or CPUs), such as a processor 704 .
  • processors also called central processing units, or CPUs
  • Processor 704 is connected to a communication infrastructure 702 , such as a communication bus.
  • processor 704 can simultaneously operate multiple computing threads.
  • Electronic device 700 also includes a primary or main memory 706 , such as random access memory (RAM).
  • Main memory 706 has stored therein control logic 728 A (computer software), and data.
  • Electronic device 700 also includes one or more secondary storage devices 710 .
  • Secondary storage devices 710 include, for example, a hard disk drive 712 and/or a removable storage device or drive 714 , as well as other types of storage devices, such as memory cards and memory sticks.
  • electronic device 700 may include an industry standard interface, such a universal serial bus (USB) interface for interfacing with devices such as a memory stick.
  • Removable storage drive 714 represents a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, tape backup, etc.
  • secondary storage devices 710 may include an operating system 732 and transcoder 120 .
  • Removable storage drive 714 interacts with a removable storage unit 716 .
  • Removable storage unit 716 includes a computer useable or readable storage medium 724 having stored therein computer software 728 B (control logic) and/or data.
  • Removable storage unit 716 represents a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, or any other computer data storage device.
  • Removable storage drive 714 reads from and/or writes to removable storage unit 716 in a well known manner.
  • Electronic device 700 further includes a communication or network interface 718 .
  • Communication interface 718 enables the electronic device 700 to communicate with remote devices.
  • communication interface 718 allows electronic device 700 to communicate over communication networks or mediums 742 (representing a form of a computer useable or readable medium), such as LANs, WANs, the Internet, etc.
  • Network interface 718 may interface with remote sites or networks via wired or wireless connections.
  • Control logic 728 C may be transmitted to and from electronic device 700 via the communication medium 742 .
  • Any apparatus or manufacture comprising a computer useable or readable medium having control logic (software) stored therein is referred to herein as a computer program product or program storage device. This includes, but is not limited to, electronic device 700 , main memory 706 , secondary storage devices 710 , and removable storage unit 716 .
  • Such computer program products having control logic stored therein that, when executed by one or more data processing devices, cause such data processing devices to operate as described herein, represent embodiments of the present disclosure.
  • Electronic device 700 may be implemented in association with a variety of types of display devices.
  • electronic device 700 may be one of a variety of types of media devices, such as a stand-alone display (e.g., a television display such as flat panel display, etc.), a computer, a tablet, a smart phone, a game console, a set top box, a digital video recorder (DVR), a networking device (e.g., a router, a switch, etc.), a server, or other electronic device mentioned elsewhere herein, etc.
  • Media content that is delivered in two-dimensional or three-dimensional form according to embodiments described herein may be stored locally or received from remote locations.
  • such media content may be locally stored for playback (replay TV, DVR), may be stored in removable memory (e.g. DVDs, memory sticks, etc.), may be received on wireless and/or wired pathways through a network such as a home network, through Internet download streaming, through a cable network, a satellite network, and/or a fiber network, etc.
  • FIG. 7 shows a first media content 730 A that is stored in hard disk drive 712 , a second media content 730 B that is stored in storage medium 724 of removable storage unit 716 , and a third media content 730 C that may be remotely stored and received over communication medium 742 by communication interface 718 .
  • Media content 730 may be stored and/or received in these manners and/or in other ways.
  • Video-image camera 140 may include an image sensor device and image processor and/or additional/alternative elements.
  • the video-image camera 140 captures video images, and generates corresponding video data that is output on a video data signal.
  • the video data signal contains the video data that is output on an image processor output signal, including processed pixel data values that correspond to images captured by the image sensor device.
  • the video data signal may include video data captured on a frame-by-frame basis or other basis.
  • the video data signal may include video data formatted as Bayer pattern data or in another image pattern data type known in the art.

Abstract

Embodiments of the present disclosure include a transcoder architecture that can decode input encoded media data into raw media data and then utilize feed forward encode data (provided with the encoded media data) to encode the raw media data. Further, embodiments of the transcoder architecture include a camera tap in the transcoder architecture.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application claims priority to copending U.S. provisional application entitled, “Image Capture Device Systems and Methods,” having Ser. No. 61/509,747, filed Jul. 20, 2011, which is entirely incorporated herein by reference.
  • BACKGROUND
  • Video and other media are often streamed in compressed form over a communication network to a destination and rendered in real time by a media player. Instead of downloading the media as a file in its entirety and then playing the file, encoded media is sent in a continuous stream of data, decoded at a destination decoder, and played as the data arrives at a media player. As a result, the streaming of media places a great deal of stress on destination decoders and media players, especially when the encoded media data may need to be adjusted to accommodate constraints at the destination.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
  • FIG. 1 is a block diagram of one embodiment of a media processing environment according to the present disclosure.
  • FIGS. 2-4 are block diagrams of one embodiment of a transcoder from the media processing environment of FIG. 1.
  • FIGS. 5-6 are flow chart diagrams depicting various functionalities of embodiments of the transcoder of FIG. 1.
  • FIG. 7 is a block diagram of an electronic device featuring the transcoder of FIG. 1.
  • DETAILED DESCRIPTION
  • Embodiments of the present disclosure utilize supplemental encoding information (“feed forward encode data”) provided from an upstream encoder to assist in the encoding of media, where the contents of the supplemental feed forward encode data were generated as a by-product of an upstream encoding of the raw media data. Embodiments include a transcoder architecture that can decode input encoded media data into raw media data and then utilize the feed forward encode data (provided with the encoded media data) to encode the raw media data. Further, embodiments of the transcoder architecture include a camera tap in the transcoder architecture.
  • FIG. 1 illustrates a system 100 for a media processing environment according to an embodiment. In this environment, a media source encoder 110 may transmit a first encoded media stream over a communication pathway 115 to a transcoder 120, where the communication pathway is a network connection path (e.g., a cable, connector, wireless network, cable network, satellite network, wired network, etc.). For example, the media source encoder 110 may encode a raw media input file and output a multipass first encoded media stream.
  • The first encoded media stream is received and decoded by a transcoder 120 (via decoder 122) and then encoded by the transcoder 120 (via encoder 124) as a second encoded media stream. The encoder 124, in encoding the media, may also scale the raw media data before generating the second encoded media stream for a destination decoder 130.
  • Over a communication pathway 125, the second encoded media stream is transmitted and received by the destination decoder 130. A video-image camera 140 is also provided and is shown to contain taps or inputs into a decoder 122 and/or encoder 124 of the transcoder 120 over communication pathways. Various elements of the system components (e.g., encoder 110, 124, decoders 122, 130, camera 140, etc.) may be implemented in hardware and/or software.
  • Feed forward encode data 105 is shown to be supplied from and to encoders 110, 124 in the environment. In one embodiment, feed forward encode data 105 comprises supplemental encoding information that is sent downstream along with the encoded media being supplied from an encoder, such as the media source encoder 110 or an encoder 124 downstream from the media source encoder 110. As an example, video encoding standards such as MPEG-2 and ITU-T H.264 (also known as MPEG-4 Part 10, Advanced Video Coding) use motion compensation for compressing video data comprising a series of pictures. Therefore, intermediate results from motion compensation processes may be provided as feed forward encode data 105 from the media source encoder 110 in generating the first encoded media stream, where a downstream encoder 124 utilizes the feed forward encode data to supplement its own motion compensation processes used to generate the second encoded media stream. As a result, the downstream encoder 124 can rely on computations and configurations of an upstream or previous encoder to assist in encoding of the raw media data.
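The pairing of a primary encoded stream with its supplemental data can be sketched roughly as follows. This is a hypothetical Python sketch; the container and field names (`FeedForwardEncodeData`, `motion_vectors`, etc.) are illustrative assumptions, not the format disclosed here.

```python
from dataclasses import dataclass, field

@dataclass
class FeedForwardEncodeData:
    # Intermediate results an upstream encoder produced as by-products:
    motion_vectors: dict = field(default_factory=dict)      # block index -> candidate (dx, dy) list
    quantizer_weights: list = field(default_factory=list)   # weights used by the upstream quantizer
    transform_settings: dict = field(default_factory=dict)  # e.g. DCT pattern selections

@dataclass
class EncoderOutput:
    bitstream: bytes                      # primary encoded media stream
    feed_forward: FeedForwardEncodeData   # supplemental data sent downstream with it

# A downstream encoder 124 can consult the carried motion-vector
# candidates instead of repeating the upstream search from scratch.
out = EncoderOutput(b"\x00\x01", FeedForwardEncodeData(motion_vectors={0: [(1, -2)]}))
```

The point of the structure is only that the supplemental data rides alongside, not inside, the standard-compliant bitstream, so a decoder that ignores it still works.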
  • In embodiments of the present disclosure, however, encoder data from the upstream encoder is not discarded and is rather output from the upstream encoder 110 and received by the downstream or secondary encoder 124 to accelerate the secondary encoder's task of encoding the raw media data. Embodiments of the transcoder 120 may also serve up the media data after scaling or converting the media data into a format suitable for and supported by the destination decoder 130 and/or display device. In general, scaling may involve temporal, spatial, and quality modifications, and various factors may govern the applicability of scaling, such as with scalable video coding (SVC). One factor is the screen size and screen processing capabilities of a display device, including how many frames per second the device can handle, a capability of the device to process 3D images, current power constraints (e.g., a limited battery), etc. These are types of possible constraints that may cause the transcoder 120 (or another network encoder) to implement SVC adjustments. For example, in one embodiment, the video-image camera may be equipped with its own encoder and may perform its own scaling adjustments before outputting a bit stream to the transcoder 120 and its decoder 122. In an alternative embodiment, the video-image camera 140 may not be equipped with its own encoder and may feed raw media data (video, image, audio, etc.) to the encoder 124 of the transcoder 120. In this case, the video-image camera 140 can still perform temporal, spatial, and quality modifications before sending the raw data to the transcoder 120. The media source encoder 110 may also implement SVC adjustments before sending an output downstream. Accordingly, a bit stream may be scaled to remove parts of the bit stream in order to adapt the output to the various needs or preferences of downstream devices or users as well as varying terminal capabilities or network conditions.
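The temporal and spatial adjustments described above can be sketched as a small decision function. This is a minimal illustration under assumed constraint names (`battery_low`, `max_fps`); a real SVC layer-selection policy is far richer.

```python
def svc_adjust(width, height, fps, *, battery_low=False, max_fps=None):
    """Hypothetical sketch: scale a stream's parameters to destination
    constraints, as an SVC-style adjustment might."""
    if max_fps is not None and fps > max_fps:
        fps = max_fps                              # temporal scaling
    if battery_low:
        width, height = width // 2, height // 2    # spatial scaling to cut decode work
    return width, height, fps

# A destination that can only handle 30 frames/s and is low on battery:
adjusted = svc_adjust(1920, 1080, 60, battery_low=True, max_fps=30)
# adjusted == (960, 540, 30)
```

An unconstrained destination passes through unchanged, which mirrors the idea that scaling only removes parts of the bit stream when a downstream limitation calls for it.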
  • The transcoder 120 in the media processing environment of FIG. 1 is shown as being positioned in an intermediate node between the media source encoder 110 and destination decoder 130. In other embodiments or implementations, the transcoder 120 may be collocated with a display device and may therefore act as a destination decoder. Further, the decoder 122 and encoder 124 components of the transcoder 120 may be in separate units and may span two separate nodes, in some embodiments.
  • In FIG. 1, with the transcoder 120 in an intermediate node of a communication network, such as a set top box, the transcoder 120 will allow media streams from media sources to be adapted for terminal devices, especially in streaming environments. Also, with a supplemental feed forward encode data 105 being provided from upstream encoders, the transcoder 120 may efficiently and quickly output encoded media streams for downstream displays and players.
  • Referring now to FIG. 2, one embodiment of a transcoder 120 is depicted. A front-end interface circuitry 202 to the transcoder 120 provides multiple pipes or channels to possible input streams. The input streams 204 generally arrive in encoded form, except in an environment that provides raw data, such as possibly an external video-image camera 140 e, in some embodiments. In this example, media sources for the input streams 204 include an “On Demand” media server 206 and broadcast media server 208 that deliver content over the Internet and/or Intranet 210. Also, media streams may be provided from satellite and cable infrastructures 212, 214, a local media storage 216, and the video-image camera 140 e on its own independent path or pipe. In particular, if the video-image camera 140 e is integrated with its own encoder, the camera may be tapped into a decoder 122 of the transcoder 120. If the video-image camera 140 is not integrated with its own encoder, then the camera 140 i may be tapped into an encoder 124 of the transcoder 120 (via interface circuitry 202). For example, in some embodiments, the video-image camera 140 i may be internal to the transcoder 120 and may pass raw media data to the encoder 124 of the transcoder 120 for encoding. In one embodiment, an internal video-image camera may be integrated as part of a set top box (having transcoder 120) that can capture viewer(s) in front of the set top box and tailor displayed content (e.g., parental filtering) based on saved preference information of identified viewer(s) using facial recognition processing on the captured images.
  • Accordingly, the transcoder 120 may provide multiple encoders and decoders to handle the multiple possible standards and formats that are received and required by upstream and downstream nodes in a streaming environment. In various embodiments, encoders and decoders may be hardware accelerated and/or comprised of a general purpose processor and applicable software.
  • In the present disclosure, a media source encoder 110, in addition to providing a media stream, may also provide feed forward encode data 105. In FIG. 2, the “On Demand” media server 206, broadcast media server 208, satellite and cable infrastructures 212, 214, local media storage 216, and the video-image camera 140 may therefore provide feed forward encode data 105 on their respective pipes or communication pathways to the transcoder 120. Accordingly, the “On Demand” media server 206, broadcast media server 208, local media storage 216, and the video-image camera 140 are shown to contain feed forward processing logic that assists in compiling and sending the feed forward encode data 105. Correspondingly, the “On Demand” media server 206, broadcast media server 208, local media storage 216, and the video-image camera 140 are shown to also contain SVC processing logic that assists in scaling bit stream outputs.
  • In FIG. 2, the transcoder 120 shows encode stream(s) 230 received from an input pipe to multiple input, multiple output (MIMO) decode architecture 122, where the architecture may therefore include multiple decoders. It is noted that the transcoder 120 is not limited to only receiving and processing media streams. In addition to supporting streaming, embodiments of the transcoder 120 may also support store and forward transmissions and other broadcast transmissions.
  • In addition to the encode streams 230, feed forward encode data 105 is supplied to the decode architecture of decoder 122. The decode architecture 122 is shown to output raw stream(s) and/or groups of raw stream(s) 232, where a grouping of raw streams may all be sent to a particular destination device 250. The decode architecture 122 passes the raw stream(s) and the feed forward encode data 105 to the Multiple Input, Multiple Output encode architecture 124. In addition, in one embodiment, the encoder 124 is configured to provide overlay support, in which multiple input streams may be combined such that content of one stream is overlaid over content of another upon being displayed.
  • Further, the encode architecture 124 may receive input streams (via the interface circuitry 202) from local memory storage 240 (that can be removable) or from internal or external video- image cameras 140 i, 140 e. These streams may be encoded or raw streams 230, 232, as the case may be.
  • During encoding operations, the encode architecture 124 may scale a bit stream during SVC coding and therefore SVC feedback data 242 is passed to the decode architecture 122 and interface circuitry 202 so that SVC feedback data 242 may be provided to upstream nodes. On the downstream side, the encoded output bit stream is provided to destination devices. In the figure, screen assemblies 250 (e.g., a device having display hardware and a display driver) are depicted for the destination devices. It is submitted that two screen assemblies may actually be located in the same device or serviced by the same device.
  • Referring now to FIG. 3, one embodiment of transcoder architecture 120 is depicted. It is understood that FIG. 3 shows one particular approach from a multitude of encode standards, where additional blocks and steps may be represented. In FIG. 3, sources of possible SVC adjustments are indicated by the dashed lines. Accordingly, the figure shows that scaling of bit streams can be effected by many nodes in a streaming network and by many encoder components. For example, a transcoder 120 may adjust a media stream to generate a media signal based on communication channel or pathway characteristics as well as other factors such as a destination device feedback indicating a current state, such as its current power state. For instance, when the channel characteristics are unfavorable, one or more video parameters such as the bandwidth, frame rate, color depth or resolution can be reduced by transcoder 120 to facilitate accurate decoding of the media signal by the destination device.
  • For example, in FIG. 3, SVC operations may adjust the resolution of a raw image 301 or raw video 302 received as input based on received SVC input 303. The size of sample blocks may be adjusted in response to an encoder being under stress due to a current workload (e.g., streaming may place a lot of stress on an encoder), as indicated by lines 304. Also, different numbers of patterns used in a transform (e.g., Discrete Cosine Transform (DCT), Discrete Fourier Transform (DFT), etc.) may be selected to provide improved frequency performance, as indicated by line 305. Additionally, the aggressiveness of the quantizer can be adjusted, as indicated by line 306. Correspondingly, the same adjustments for the DCT and quantizer may be made for the inverse DCT and inverse quantizer components.
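The idea of selecting different numbers of transform patterns can be made concrete with a small 1-D DCT-II sketch. The `n_patterns` argument is an illustrative assumption mimicking the adjustment indicated by line 305; it is not an interface disclosed here.

```python
import math

def dct_1d(signal, n_patterns=None):
    """Minimal 1-D DCT-II sketch; keeping fewer basis patterns trades
    frequency resolution for less computation."""
    n = len(signal)
    k_max = n_patterns if n_patterns is not None else n
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
            for k in range(k_max)]

# A flat block of samples has all of its energy in the DC coefficient,
# so keeping only 4 of the 8 patterns loses nothing for this input.
coeffs = dct_1d([5.0] * 8, n_patterns=4)   # ~[40.0, 0.0, 0.0, 0.0]
```

For busier blocks, dropping patterns would discard high-frequency detail, which is exactly the quality-versus-work trade-off an SVC adjustment makes.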
  • The searches associated with the motion prediction block 350 (as discussed below) are generally intense since many different directions in many different neighboring frames are analyzed. A particular encode standard may define a size of a search area (e.g., how many frames backwards and forwards) to be searched for possible matches with a current block. However, the motion prediction block 350 may initiate SVC adjustments and adapt on the directions that are searched (e.g., only search backwards, do not look back more than 3 frames, etc.) in response to a buffer constraint, a power constraint, limited processing capabilities, etc., as indicated by line 307. Also, other blocks or stages may be adjusted, including motion compensation 352, frame buffer 354, etc.
  • In focusing on operations of the encoder, the encoding operation consists of the forward encoding path 310 and an inverse decoding path 320. Following a typical H.264 encoding operation, input media data, such as a video frame, is divided into smaller blocks of pixels or samples. In one embodiment, input media data is processed in units of a macroblock (MB) corresponding to a 16×16 block of displayed pixels.
  • In the encoder, the forward encoding path 310 predicts each macroblock using Intra or Inter-prediction. In intra-prediction mode, spatial correlation is used in each macroblock to reduce the amount of transmission data necessary to represent an image. In turn, redundancies in a frame are removed without comparing with other media frames. Conversely, in inter-prediction mode, redundancies are removed by comparing with other media frames.
  • The encoder then searches for a block similar to the pixels of the current macroblock, known as a reference block. The reference block is identified and subtracted from the current macroblock to form a residual macroblock, or prediction error. Identification of the similar block is known as motion estimation. A memory (frame buffer 354) stores the reference block and other reference blocks. The motion prediction block or stage 350 searches the memory for a reference block that is similar to the current macroblock.
  • Once a reference block is selected, the reference block is identified by a motion vector MV and the prediction error during motion compensation 352. The residual macroblock and motion vectors are transformed (in DCT stage 356), quantized (in quantizer stage 358), and encoded (in entropy encoder stage 360) before being output.
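The motion-estimation step described above, choosing the best reference block and forming the residual, can be sketched with a sum-of-absolute-differences cost. This is a generic illustration (blocks flattened to 1-D lists, names invented for the sketch), not the encoder's actual search.

```python
def sad(block_a, block_b):
    # Sum of absolute differences: a common block-matching cost metric.
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def motion_search(current, candidates):
    # candidates maps a motion vector to the reference block it points at.
    mv, ref = min(candidates.items(), key=lambda kv: sad(current, kv[1]))
    residual = [c - r for c, r in zip(current, ref)]   # prediction error
    return mv, residual

cur = [10, 12, 11, 9]
refs = {(0, 0): [0, 0, 0, 0], (1, -1): [10, 12, 10, 9]}
mv, res = motion_search(cur, refs)
# mv == (1, -1); res == [0, 0, 1, 0] -- only the vector and residual are coded
```

The near-zero residual is what makes inter-prediction pay off: it costs far fewer bits after transform and quantization than the raw block would.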
  • The transformation is used to compress the image in Inter-frames or Intra-frames. The quantization stage 358 reduces the amount of information by dividing each coefficient by a particular number, reducing the quantity of possible values each coefficient could have. Because this makes the values fall into a narrower range, entropy coding 360 can express the values more compactly. The entropy encoder 360 removes redundancies in the final bit-stream, such as recurring patterns in the bit-stream.
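The divide-and-round behavior of the quantization stage can be shown in a few lines. A minimal sketch with a single uniform step size; real codecs use per-coefficient quantization matrices.

```python
def quantize(coeffs, step):
    # Dividing each coefficient by the quantizer step collapses a wide
    # range of values into a narrow one (most small terms become 0),
    # which entropy coding can then express compactly.
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    # The inverse quantizer re-scales the levels; the precision lost to
    # rounding is what makes the compression lossy.
    return [lv * step for lv in levels]

levels = quantize([312, -47, 15, 4, -3, 1, 0, 0], 16)
# levels == [20, -3, 1, 0, 0, 0, 0, 0]
```

Note the trailing run of zeros: a more "aggressive" quantizer (larger step, per line 306 in FIG. 3) produces longer zero runs and thus a smaller entropy-coded output, at the cost of reconstruction error.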
  • In parallel, the quantized data are re-scaled (in inverse quantizer stage 359) and inverse transformed (in inverse DCT stage 357) and added to the prediction macroblock to reconstruct a coded version of the media frame which is stored for later predictions in the frame buffer 354.
  • Motion estimation can potentially use a very large number of memory accesses for determining a reference block. For an input frame, the frame is segmented into multiple macroblocks which are reduced to sets of motion vectors. Accordingly, one whole frame is reduced into many sets of motion vectors.
  • To illustrate, a high definition television (HDTV) video comprises pictures of 1920×1080 pixels, for example. A common block size can be, for example, a 16×16 block of pixels. Therefore, an exhaustive search may not be practical, especially for encoding in real time. In one approach, the encoder 300 may limit the search for matches to the current macroblock by reducing the search area. Although the foregoing may be faster than an exhaustive search, it can still be time-consuming and computationally intense.
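The scale of the workload follows from simple arithmetic; 30 frames per second is assumed here purely for illustration.

```python
import math

def macroblocks_per_frame(width, height, mb_size=16):
    # Dimensions that are not multiples of the block size are padded up to
    # the next whole block (e.g. 1080 lines are coded as 68 rows of 16).
    return math.ceil(width / mb_size) * math.ceil(height / mb_size)

per_frame = macroblocks_per_frame(1920, 1080)   # 120 * 68 = 8160
per_second = per_frame * 30                     # 244800 blocks/s at 30 frames/s
```

Each of those blocks may require a search over many candidate positions in several reference frames, which is why reusing upstream search results is attractive.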
  • Referring now to FIG. 4, an embodiment of the transcoder 120 is shown with possible feed forward encode data sources, indicated by dashed-lines, that can address the foregoing issues. In particular, the dashed lines shown in the figure lead to possible streams or sources of feed forward encode data that can be sent with an encoded bit stream output 365 to downstream nodes and devices as a feed forward encode data 105. As stated above, the searching operations performed for motion estimation in finding reference blocks, motion vectors, and residuals can be exhaustive and burdensome for a transcoder 120 during encode operations.
  • As an illustration, consider an upstream encoder that encodes raw video input. For each input block of a video frame, the upstream encoder will search neighboring frames in the inter-prediction stage (or the same frame in an intra-prediction stage) for a reference block. In an exhaustive search, the upstream encoder is not going to know which motion vector to send until all possible frames and blocks have been checked in all possible directions. Once the best matches have been determined and the residuals computed, then a motion vector output can be generated and sent downstream to a downstream transcoder. Accordingly, at the receiving decoder, the output stream from the upstream encoder is decoded into raw data once again and supplied to the downstream encoder of the transcoder 120. The downstream encoder, however, may not currently be capable of performing an exhaustive search, as carried out by the upstream encoder, and therefore may not be capable of producing a high-quality compressed stream, but for the existence of the feed forward encode data 105 provided from the upstream encoder.
  • In particular, an embodiment of the upstream encoder 110 extracts the results of its search operations and provides them to the downstream encoder 124 as one possible form of feed forward encode data 105. Based on the feed forward encode data 105, the encoder 124 may be able to identify the best match for a current block, since the search operation had been previously performed by the upstream encoder 110 and the results of the search are now provided to the downstream encoder 124 as part of the feed forward encode data 105. Further, due to constraints on the downstream encoder 124, the encoder may only be able to search for neighboring blocks within a set distance or search area from the current block. Therefore, the best match, as indicated in the feed forward encode data 105, may not be within the search area. However, the fourth best match (from the exhaustive search) may be within the search area being utilized by the current encoder and may be selected as the best match for the current encode operation. Basically, the feed forward encode data 105 may allow the downstream encoder 124 to limit its motion estimation searching while still generating high quality output quickly, because a full search is avoided.
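The "fourth best match" idea above can be sketched directly. This is a hypothetical illustration: the ranked `(motion_vector, cost)` list stands in for search results carried in the feed forward encode data, and the names are invented for the sketch.

```python
def best_reachable_match(ranked_candidates, search_radius):
    """From an upstream encoder's ranked (motion_vector, cost) results,
    keep only vectors inside the constrained downstream encoder's own
    smaller search window and take the cheapest of those."""
    reachable = [(mv, cost) for mv, cost in ranked_candidates
                 if abs(mv[0]) <= search_radius and abs(mv[1]) <= search_radius]
    return min(reachable, key=lambda item: item[1]) if reachable else None

# The upstream exhaustive search found these matches, best first; the
# downstream encoder can only reach +/-2 pixels, so it settles for the
# (2, 1) match without running any search of its own.
ranked = [((7, -5), 10), ((6, 0), 12), ((-4, 3), 15), ((2, 1), 18)]
match = best_reachable_match(ranked, 2)   # ((2, 1), 18)
```

Filtering a short candidate list is trivially cheap compared to evaluating thousands of block positions, which is the acceleration the feed forward data buys.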
  • Consider, in one embodiment, that the transcoder 120 may be integrated as part of a personal device, such as a tablet, that does not have processing power or battery capacity comparable to the upstream encoder. Using the feed forward encode data, however, the tablet device may provide a compressed video stream comparable with that provided by the more powerful upstream encoder. In a manner of speaking, use of the feed forward encode data 105 can appear to increase the processing speed of the encoder 124 acting on the data.
  • Referring back to FIG. 4, the dashed lines coming out of the select encoder components indicate possible sources of feed forward encode data 105 that can provide useful information to a downstream component so that it may be reused for similar purposes. Accordingly, possible forms of feed forward encode data 105 may include quantizer weight(s) employed by the quantizer 358 in an encoding process, pertinent settings of intermediate stages in the encoding process, quality settings of intermediate stages, residual information not provided in the main output, etc. In general, information used by an encoder to make a decision determining or shaping an output may be useful to a subsequent encoder and therefore may be provided as a supplemental output in the form of feed forward encode data 105. A subsequent downstream encoder may recheck this information or simply use the provided information to make its own decision as part of a rule set. Correspondingly, the transcoder 120 may itself use feed forward encode data 105 to assist in encoding a bit stream and then pass on the feed forward encode data, without modification, to allow a downstream encoder to also use the feed forward encode data, in some embodiments. Alternatively, the transcoder 120 may modify or add information to the feed forward encode data or generate new feed forward encode data that can be provided to downstream components, in some embodiments. Accordingly, the feed forward encode data may be output concurrently or simultaneously with an encoded bit stream or media data.
  • FIG. 5 is a flowchart representation of a method in accordance with one embodiment of the present disclosure. In particular, a method is presented for use in conjunction with one or more of the functions and features described in conjunction with FIGS. 1-4. In step 502, raw media data is received by an encoder 110. The encoder 110 initiates execution of an encoding process on the raw media data, where the encoding process contains multiple stages in a pipeline arrangement that are to be completed. During the encoding process, supplemental information is extracted from individual stages in the pipeline and output (e.g., concurrently with encoded media data) as feed forward encode data, where the information is used by the individual stage to complete its respective task. For example, during a DCT transform stage, coefficient values or weights are used in computing an output transform of an input signal. This type of information may be extracted and included as feed forward encode data and be used by a downstream DCT stage in a downstream encoding process, in step 504. Also, during a motion prediction stage, blocks are compared with an input block and the results of these comparisons and associated searches may also be extracted and included as feed forward encode data 105. In step 506, the primary encoded media stream is output from the encoder 110 along with the supplemental feed forward encode data 105 associated with the primary encoded media stream. In one embodiment, the supplemental feed forward encode data 105 is also provided in a compressed form.
  • Next, FIG. 6 is a flowchart representation of a method in accordance with one embodiment of the present disclosure. In particular, a method is presented for use in conjunction with one or more of the functions and features described in conjunction with FIGS. 1-4. In step 602, a primary encoded media stream is received along with the supplemental feed forward encode data 105 associated with the primary encoded media stream by a decoder 122 of a transcoder 120. The decoder 122 proceeds to decode the primary encoded media stream to generate raw media data that is supplied to an encoder 124 of the transcoder, in step 604. Further, the decoder 122 passes to the encoder 124 the feed forward encode data 105, in step 606.
  • In step 608, the encoder 124 initiates execution of an encoding process on the raw media data, where the encoding process contains multiple stages in a pipeline arrangement that are to be completed. During the encoding process, information is extracted from the feed forward encode data and used to assist in completion of a respective task by a particular stage, in step 610. For example, during a DCT transform stage, coefficient values or weights previously used in computing an output transform of an input signal by an upstream encoder are reused in completing a DCT transform stage in the current encoding process. Also, during a motion prediction stage, the results of comparisons completed in a motion prediction stage by an upstream encoder may also be extracted and used by a motion prediction stage in the current encoding process. In step 612, a second primary encoded media stream is output from the encoder 124. Further, in some embodiments, the encoder continues to pass or output feed forward encode data downstream that has been used in the encoding process, in step 614.
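The motion-prediction reuse described above (and claimed in claims 3, 8, and 19) can be illustrated with a toy one-dimensional block search. This is a sketch under simplifying assumptions (1-D signals, sum-of-absolute-differences matching; the function names are hypothetical): the upstream encoder performs a wide search, and the downstream encoder, given the fed-forward vector, re-searches only a small window around it.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def motion_search(ref, cur_block, center, radius):
    """Search offsets within +/- radius of `center` for the position in
    `ref` whose block best matches `cur_block` (lowest SAD)."""
    best_offset, best_cost = None, float("inf")
    n = len(cur_block)
    for offset in range(center - radius, center + radius + 1):
        if offset < 0 or offset + n > len(ref):
            continue  # candidate block falls outside the reference
        cost = sad(ref[offset:offset + n], cur_block)
        if cost < best_cost:
            best_offset, best_cost = offset, cost
    return best_offset, best_cost

ref = [0, 0, 5, 9, 5, 0, 0, 0]
cur = [5, 9, 5]
# Upstream encoder: wide search over the whole reference line.
up_mv, _ = motion_search(ref, cur, center=0, radius=len(ref))
# Downstream encoder: narrow re-search around the fed-forward vector.
down_mv, _ = motion_search(ref, cur, center=up_mv, radius=1)
```

The narrow search evaluates only three candidate offsets instead of the full line yet lands on the same match, which is the efficiency gain the feed forward encode data is meant to provide.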
  • FIG. 7 shows a block diagram of an example electronic device featuring the transcoder 120, according to an embodiment. In embodiments, electronic device 700 may include one or more of the elements shown in FIG. 7. As shown in the example of FIG. 7, electronic device 700 may include one or more processors (also called central processing units, or CPUs), such as a processor 704. Processor 704 is connected to a communication infrastructure 702, such as a communication bus. In some embodiments, processor 704 can simultaneously operate multiple computing threads.
  • Electronic device 700 also includes a primary or main memory 706, such as random access memory (RAM). Main memory 706 has stored therein control logic 728A (computer software), and data.
  • Electronic device 700 also includes one or more secondary storage devices 710. Secondary storage devices 710 include, for example, a hard disk drive 712 and/or a removable storage device or drive 714, as well as other types of storage devices, such as memory cards and memory sticks. For instance, electronic device 700 may include an industry standard interface, such as a universal serial bus (USB) interface, for interfacing with devices such as a memory stick. Removable storage drive 714 represents a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, tape backup, etc. As shown in FIG. 7, secondary storage devices 710 may include an operating system 732 and transcoder 120.
  • Removable storage drive 714 interacts with a removable storage unit 716. Removable storage unit 716 includes a computer useable or readable storage medium 724 having stored therein computer software 728B (control logic) and/or data. Removable storage unit 716 represents a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, or any other computer data storage device. Removable storage drive 714 reads from and/or writes to removable storage unit 716 in a well-known manner.
  • Electronic device 700 further includes a communication or network interface 718. Communication interface 718 enables the electronic device 700 to communicate with remote devices. For example, communication interface 718 allows electronic device 700 to communicate over communication networks or mediums 742 (representing a form of a computer useable or readable medium), such as LANs, WANs, the Internet, etc. Network interface 718 may interface with remote sites or networks via wired or wireless connections.
  • Control logic 728C may be transmitted to and from electronic device 700 via the communication medium 742. Any apparatus or manufacture comprising a computer useable or readable medium having control logic (software) stored therein is referred to herein as a computer program product or program storage device. This includes, but is not limited to, electronic device 700, main memory 706, secondary storage devices 710, and removable storage unit 716. Such computer program products, having control logic stored therein that, when executed by one or more data processing devices, cause such data processing devices to operate as described herein, represent embodiments of the present disclosure.
  • Electronic device 700 may be implemented in association with a variety of types of display devices. For instance, electronic device 700 may be one of a variety of types of media devices, such as a stand-alone display (e.g., a television display such as a flat panel display, etc.), a computer, a tablet, a smart phone, a game console, a set top box, a digital video recorder (DVR), a networking device (e.g., a router, a switch, etc.), a server, or other electronic device mentioned elsewhere herein, etc. Media content that is delivered in two-dimensional or three-dimensional form according to embodiments described herein may be stored locally or received from remote locations. For instance, such media content may be locally stored for playback (replay TV, DVR), may be stored in removable memory (e.g., DVDs, memory sticks, etc.), may be received on wireless and/or wired pathways through a network such as a home network, through Internet download streaming, through a cable network, a satellite network, and/or a fiber network, etc. For instance, FIG. 7 shows a first media content 730A that is stored in hard disk drive 712, a second media content 730B that is stored in storage medium 724 of removable storage unit 716, and a third media content 730C that may be remotely stored and received over communication medium 742 by communication interface 718. Media content 730 may be stored and/or received in these manners and/or in other ways.
  • Video-image camera 140 may include an image sensor device and an image processor and/or additional/alternative elements. The video-image camera 140 captures video images and generates corresponding video data that is output on a video data signal. In an embodiment, the video data signal contains the video data that is output on an image processor output signal, including processed pixel data values that correspond to images captured by the image sensor device. The video data signal may include video data captured on a frame-by-frame basis or another basis. In an embodiment, the video data signal may include video data formatted as Bayer pattern data or in another image pattern data type known in the art.
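To make the Bayer pattern concrete, the sketch below collapses an RGGB mosaic (one color sample per pixel, in the repeating 2x2 pattern R-G / G-B) into one RGB triple per 2x2 cell. This is the simplest possible reading of Bayer data, assumed here for illustration only; a real image processor would instead interpolate (demosaic) a full-resolution RGB image.

```python
def bayer_rggb_to_rgb(mosaic):
    """Collapse an RGGB Bayer mosaic (2-D list with even dimensions)
    into one RGB triple per 2x2 cell: R from the top-left sample, G as
    the mean of the two green samples, B from the bottom-right sample."""
    rows, cols = len(mosaic), len(mosaic[0])
    out = []
    for r in range(0, rows, 2):
        row_out = []
        for c in range(0, cols, 2):
            red = mosaic[r][c]
            green = (mosaic[r][c + 1] + mosaic[r + 1][c]) / 2
            blue = mosaic[r + 1][c + 1]
            row_out.append((red, green, blue))
        out.append(row_out)
    return out
```

For a single 2x2 cell [[10, 20], [30, 40]], this yields the triple (10, 25.0, 40): red 10, the two green samples averaged, blue 40.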
  • Any process descriptions or blocks in flow charts should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of an embodiment of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present disclosure.
  • It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations, set forth for a clear understanding of the principles of the present disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.

Claims (20)

1. A video transcoding system that processes encoded media data generated by a primary encoder, the video transcoding system comprising:
at least one decoder that produces decoded media data by decoding the encoded media data generated by the primary encoder;
at least one secondary encoder that receives the decoded media data from the at least one decoder;
the at least one secondary encoder also receiving feed forward encode data generated by the primary encoder; and
the at least one secondary encoder that uses the feed forward encode data to assist in encoding the decoded media data.
2. The video transcoding system of claim 1, further comprising:
a camera that produces an imaging output; and
the at least one secondary encoder producing an encoded output that is related to the imaging output.
3. The video transcoding system of claim 1, wherein the at least one secondary encoder limits a search area size performed in a motion prediction stage of an encoding process, wherein the at least one secondary encoder utilizes the feed forward encode data to assist in the motion prediction stage, wherein the primary encoder that generated the feed forward encode data performed the motion prediction stage for a greater search area size.
4. The video transcoding system of claim 1, wherein the feed forward encode data comprises motion vectors computed by the primary encoder.
5. The video transcoding system of claim 1, wherein the feed forward encode data comprises configuration settings used in completing at least one encoding process stage.
6. The video transcoding system of claim 1, wherein the at least one secondary encoder passes the feed forward encode data along with second encoded media data as outputs.
7. A method used by an encoder for encoding media data, the method comprising:
receiving the media data;
receiving feed forward encode data; and
using the feed forward encode data to assist in the encoding of the media data.
8. The method of claim 7, wherein the feed forward encode data is utilized to limit a search area size performed in a motion prediction stage of the encoding of the media data, wherein an upstream encoder that generated the feed forward encode data performed the motion prediction stage for a greater search area size.
9. The method of claim 7, further comprising:
passing the feed forward encode data along with encoded media data as outputs.
10. The method of claim 7, wherein the feed forward encode data comprises motion vectors from an upstream encoder.
11. The method of claim 7, wherein the feed forward encode data comprises configuration settings used in completing at least one upstream encoding process stage.
12. A video processing system that operates on source encoded media generated by a source encoder, the video processing system comprising:
a transcoding system having at least one decoder and at least one secondary encoder;
the at least one decoder of the transcoding system receives the source encoded media generated by the source encoder;
the at least one decoder of the transcoding system processes the source encoded media to generate decoded media;
the at least one secondary encoder of the transcoding system processes the decoded media to generate secondary encoded media;
a camera, coupled to the transcoding system, that produces an imaging output; and
at least a portion of the transcoding system processing the imaging output of the camera to generate encoded imaging output.
13. The video processing system of claim 12, wherein the at least one secondary encoder of the transcoding system uses feed forward encode data produced by the source encoder to generate the secondary encoded media.
14. The video processing system of claim 12, wherein the imaging output of the camera is encoded and delivered to the at least one decoder.
15. The video processing system of claim 12, wherein the imaging output of the camera is raw and delivered to the at least one decoder.
16. A method used by an encoder that operates on media data, the method comprising:
receiving the media data;
generating an encoded media data output to be consumed by a downstream decoder; and
generating a feed forward encode data output to be consumed by a downstream encoder.
17. The method of claim 16, wherein the feed forward encode data comprises motion vectors from an upstream encoder.
18. The method of claim 16, wherein the feed forward encode data comprises configuration settings used in completing at least one upstream encoding process stage.
19. The method of claim 16, wherein the feed forward encode data is utilized to limit a search area size performed in a motion prediction stage of encoding of the media data at the downstream encoder, wherein the encoder that generated the feed forward encode data performed the motion prediction stage for a greater search area size.
20. The method of claim 16, wherein the encoder concurrently generates the feed forward encode data and the encoded media data output.
US13/313,345 2011-07-20 2011-12-07 Camera tap transcoder architecture with feed forward encode data Abandoned US20130022116A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/313,345 US20130022116A1 (en) 2011-07-20 2011-12-07 Camera tap transcoder architecture with feed forward encode data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161509747P 2011-07-20 2011-07-20
US13/313,345 US20130022116A1 (en) 2011-07-20 2011-12-07 Camera tap transcoder architecture with feed forward encode data

Publications (1)

Publication Number Publication Date
US20130022116A1 true US20130022116A1 (en) 2013-01-24

Family

ID=47555520

Family Applications (9)

Application Number Title Priority Date Filing Date
US13/232,052 Abandoned US20130021512A1 (en) 2011-07-20 2011-09-14 Framing of Images in an Image Capture Device
US13/232,045 Abandoned US20130021488A1 (en) 2011-07-20 2011-09-14 Adjusting Image Capture Device Settings
US13/235,975 Abandoned US20130021504A1 (en) 2011-07-20 2011-09-19 Multiple image processing
US13/245,941 Abandoned US20130021489A1 (en) 2011-07-20 2011-09-27 Regional Image Processing in an Image Capture Device
US13/281,521 Abandoned US20130021490A1 (en) 2011-07-20 2011-10-26 Facial Image Processing in an Image Capture Device
US13/313,345 Abandoned US20130022116A1 (en) 2011-07-20 2011-12-07 Camera tap transcoder architecture with feed forward encode data
US13/313,352 Active 2032-01-11 US9092861B2 (en) 2011-07-20 2011-12-07 Using motion information to assist in image processing
US13/330,047 Abandoned US20130021484A1 (en) 2011-07-20 2011-12-19 Dynamic computation of lens shading
US13/413,863 Abandoned US20130021491A1 (en) 2011-07-20 2012-03-07 Camera Device Systems and Methods


Country Status (1)

Country Link
US (9) US20130021512A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140344256A1 (en) * 2013-05-03 2014-11-20 Splunk Inc. Processing a system search request including external data sources
US9916367B2 (en) 2013-05-03 2018-03-13 Splunk Inc. Processing system search requests from multiple data stores with overlapping data
US11410413B2 (en) 2018-09-10 2022-08-09 Samsung Electronics Co., Ltd. Electronic device for recognizing object and method for controlling electronic device

Families Citing this family (82)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10116839B2 (en) 2014-08-14 2018-10-30 Atheer Labs, Inc. Methods for camera movement compensation for gesture detection and object recognition
KR100495338B1 (en) 1997-01-27 2005-06-14 피터 디. 하랜드 Coatings, methods and apparatus for reducing reflection from optical substrates
JP5781351B2 (en) * 2011-03-30 2015-09-24 日本アビオニクス株式会社 Imaging apparatus, pixel output level correction method thereof, infrared camera system, and interchangeable lens system
JP5778469B2 (en) 2011-04-28 2015-09-16 日本アビオニクス株式会社 Imaging apparatus, image generation method, infrared camera system, and interchangeable lens system
KR101796481B1 (en) * 2011-11-28 2017-12-04 삼성전자주식회사 Method of eliminating shutter-lags with low power consumption, camera module, and mobile device having the same
US9118876B2 (en) * 2012-03-30 2015-08-25 Verizon Patent And Licensing Inc. Automatic skin tone calibration for camera images
US9462255B1 (en) 2012-04-18 2016-10-04 Amazon Technologies, Inc. Projection and camera system for augmented reality environment
US9619036B2 (en) * 2012-05-11 2017-04-11 Comcast Cable Communications, Llc System and methods for controlling a user experience
US9438805B2 (en) * 2012-06-08 2016-09-06 Sony Corporation Terminal device and image capturing method
US8957973B2 (en) * 2012-06-11 2015-02-17 Omnivision Technologies, Inc. Shutter release using secondary camera
US20130335587A1 (en) * 2012-06-14 2013-12-19 Sony Mobile Communications, Inc. Terminal device and image capturing method
TWI498771B (en) * 2012-07-06 2015-09-01 Pixart Imaging Inc Gesture recognition system and glasses with gesture recognition function
KR101917650B1 (en) * 2012-08-03 2019-01-29 삼성전자 주식회사 Method and apparatus for processing a image in camera device
US9554042B2 (en) * 2012-09-24 2017-01-24 Google Technology Holdings LLC Preventing motion artifacts by intelligently disabling video stabilization
US9286509B1 (en) * 2012-10-19 2016-03-15 Google Inc. Image optimization during facial recognition
JP2014086849A (en) * 2012-10-23 2014-05-12 Sony Corp Content acquisition device and program
US9060127B2 (en) * 2013-01-23 2015-06-16 Orcam Technologies Ltd. Apparatus for adjusting image capture settings
JP2014176034A (en) * 2013-03-12 2014-09-22 Ricoh Co Ltd Video transmission device
US9552630B2 (en) * 2013-04-09 2017-01-24 Honeywell International Inc. Motion deblurring
US9595083B1 (en) * 2013-04-16 2017-03-14 Lockheed Martin Corporation Method and apparatus for image producing with predictions of future positions
WO2014190468A1 (en) 2013-05-27 2014-12-04 Microsoft Corporation Video encoder for images
US10796617B2 (en) * 2013-06-12 2020-10-06 Infineon Technologies Ag Device, method and system for processing an image data stream
US9529513B2 (en) * 2013-08-05 2016-12-27 Microsoft Technology Licensing, Llc Two-hand interaction with natural user interface
US9270959B2 (en) 2013-08-07 2016-02-23 Qualcomm Incorporated Dynamic color shading correction
DE112014004664T5 (en) * 2013-10-09 2016-08-18 Magna Closures Inc. DISPLAY CONTROL FOR VEHICLE WINDOW
CN105339841B (en) 2013-12-06 2018-12-14 华为终端(东莞)有限公司 The photographic method and bimirror head apparatus of bimirror head apparatus
US10931866B2 (en) 2014-01-05 2021-02-23 Light Labs Inc. Methods and apparatus for receiving and storing in a camera a user controllable setting that is used to control composite image generation performed after image capture
US9251594B2 (en) 2014-01-30 2016-02-02 Adobe Systems Incorporated Cropping boundary simplicity
US9245347B2 (en) * 2014-01-30 2016-01-26 Adobe Systems Incorporated Image Cropping suggestion
US10121060B2 (en) * 2014-02-13 2018-11-06 Oath Inc. Automatic group formation and group detection through media recognition
KR102128468B1 (en) * 2014-02-19 2020-06-30 삼성전자주식회사 Image Processing Device and Method including a plurality of image signal processors
CN103841328B (en) * 2014-02-27 2015-03-11 深圳市中兴移动通信有限公司 Low-speed shutter shooting method and device
EP3120556B1 (en) 2014-03-17 2021-01-13 Microsoft Technology Licensing, LLC Encoder-side decisions for screen content encoding
US20150297986A1 (en) * 2014-04-18 2015-10-22 Aquifi, Inc. Systems and methods for interactive video games with motion dependent gesture inputs
WO2015170503A1 (en) * 2014-05-08 2015-11-12 ソニー株式会社 Information processing apparatus and information processing method
US10051196B2 (en) * 2014-05-20 2018-08-14 Lenovo (Singapore) Pte. Ltd. Projecting light at angle corresponding to the field of view of a camera
US10460544B2 (en) * 2014-07-03 2019-10-29 Brady Worldwide, Inc. Lockout/tagout device with non-volatile memory and related system
WO2016019450A1 (en) * 2014-08-06 2016-02-11 Warrian Kevin J Orientation system for image recording devices
KR102225947B1 (en) * 2014-10-24 2021-03-10 엘지전자 주식회사 Mobile terminal and method for controlling the same
CN105549302B (en) 2014-10-31 2018-05-08 国际商业机器公司 The coverage suggestion device of photography and vedio recording equipment
US10334158B2 (en) * 2014-11-03 2019-06-25 Robert John Gove Autonomous media capturing
US20160148648A1 (en) * 2014-11-20 2016-05-26 Facebook, Inc. Systems and methods for improving stabilization in time-lapse media content
CN106416254B (en) 2015-02-06 2019-08-02 微软技术许可有限责任公司 Evaluation stage is skipped during media coding
US11721414B2 (en) 2015-03-12 2023-08-08 Walmart Apollo, Llc Importing structured prescription records from a prescription label on a medication package
WO2016183380A1 (en) * 2015-05-12 2016-11-17 Mine One Gmbh Facial signature methods, systems and software
US10853625B2 (en) 2015-03-21 2020-12-01 Mine One Gmbh Facial signature methods, systems and software
EP3274986A4 (en) 2015-03-21 2019-04-17 Mine One GmbH Virtual 3d methods, systems and software
US20160316220A1 (en) * 2015-04-21 2016-10-27 Microsoft Technology Licensing, Llc Video encoder management strategies
US10165186B1 (en) * 2015-06-19 2018-12-25 Amazon Technologies, Inc. Motion estimation based video stabilization for panoramic video from multi-camera capture device
US10447926B1 (en) 2015-06-19 2019-10-15 Amazon Technologies, Inc. Motion estimation based video compression and encoding
US10136132B2 (en) 2015-07-21 2018-11-20 Microsoft Technology Licensing, Llc Adaptive skip or zero block detection combined with transform size decision
EP3136726B1 (en) * 2015-08-27 2018-03-07 Axis AB Pre-processing of digital images
US9648223B2 (en) * 2015-09-04 2017-05-09 Microvision, Inc. Laser beam scanning assisted autofocus
US9456195B1 (en) * 2015-10-08 2016-09-27 Dual Aperture International Co. Ltd. Application programming interface for multi-aperture imaging systems
US9578221B1 (en) * 2016-01-05 2017-02-21 International Business Machines Corporation Camera field of view visualizer
JP6514140B2 (en) * 2016-03-17 2019-05-15 株式会社東芝 Imaging support apparatus, method and program
US9639935B1 (en) 2016-05-25 2017-05-02 Gopro, Inc. Apparatus and methods for camera alignment model calibration
EP3466051A1 (en) 2016-05-25 2019-04-10 GoPro, Inc. Three-dimensional noise reduction
WO2017205597A1 (en) * 2016-05-25 2017-11-30 Gopro, Inc. Image signal processing-based encoding hints for motion estimation
US10140776B2 (en) * 2016-06-13 2018-11-27 Microsoft Technology Licensing, Llc Altering properties of rendered objects via control points
US9851842B1 (en) * 2016-08-10 2017-12-26 Rovi Guides, Inc. Systems and methods for adjusting display characteristics
US10366122B2 (en) * 2016-09-14 2019-07-30 Ants Technology (Hk) Limited. Methods circuits devices systems and functionally associated machine executable code for generating a searchable real-scene database
CN110084089A (en) * 2016-10-26 2019-08-02 奥康科技有限公司 For analyzing image and providing the wearable device and method of feedback
CN106550227B (en) * 2016-10-27 2019-02-22 成都西纬科技有限公司 A kind of image saturation method of adjustment and device
US10477064B2 (en) 2017-08-21 2019-11-12 Gopro, Inc. Image stitching with electronic rolling shutter correction
US10791265B1 (en) 2017-10-13 2020-09-29 State Farm Mutual Automobile Insurance Company Systems and methods for model-based analysis of damage to a vehicle
US11587046B1 (en) 2017-10-25 2023-02-21 State Farm Mutual Automobile Insurance Company Systems and methods for performing repairs to a vehicle
CN111345036A (en) * 2017-10-26 2020-06-26 京瓷株式会社 Image processing apparatus, imaging apparatus, driving assistance apparatus, moving object, and image processing method
KR20190087977A (en) * 2017-12-25 2019-07-25 저텍 테크놀로지 컴퍼니 리미티드 Laser beam scanning display and augmented reality glasses
JP7456385B2 (en) * 2018-10-25 2024-03-27 ソニーグループ株式会社 Image processing device, image processing method, and program
US10771696B2 (en) * 2018-11-26 2020-09-08 Sony Corporation Physically based camera motion compensation
WO2020142471A1 (en) * 2018-12-30 2020-07-09 Sang Chul Kwon Foldable mobile phone
US11289078B2 (en) * 2019-06-28 2022-03-29 Intel Corporation Voice controlled camera with AI scene detection for precise focusing
US10861127B1 (en) * 2019-09-17 2020-12-08 Gopro, Inc. Image and video processing using multiple pipelines
US11064118B1 (en) 2019-12-18 2021-07-13 Gopro, Inc. Systems and methods for dynamic stabilization adjustment
US11006044B1 (en) * 2020-03-03 2021-05-11 Qualcomm Incorporated Power-efficient dynamic electronic image stabilization
US11284157B2 (en) * 2020-06-11 2022-03-22 Rovi Guides, Inc. Methods and systems facilitating adjustment of multiple variables via a content guidance application
TWI774039B (en) * 2020-08-12 2022-08-11 瑞昱半導體股份有限公司 System for compensating image with fixed pattern noise
US11563899B2 (en) * 2020-08-14 2023-01-24 Raytheon Company Parallelization technique for gain map generation using overlapping sub-images
CN114079735B (en) * 2020-08-19 2024-02-23 瑞昱半导体股份有限公司 Image compensation system for fixed image noise
US11902671B2 (en) * 2021-12-09 2024-02-13 Fotonation Limited Vehicle occupant monitoring system including an image acquisition device with a rolling shutter image sensor
WO2023150800A1 (en) * 2022-02-07 2023-08-10 Gopro, Inc. Methods and apparatus for real-time guided encoding

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010047517A1 (en) * 2000-02-10 2001-11-29 Charilaos Christopoulos Method and apparatus for intelligent transcoding of multimedia data
US20020190876A1 (en) * 2000-12-22 2002-12-19 Lai Angela C. W. Distributed on-demand media transcoding system and method
US20030227974A1 (en) * 2002-06-11 2003-12-11 Hitachi, Ltd. Bitstream transcoder
US20050249285A1 (en) * 2004-04-07 2005-11-10 Qualcomm Incorporated Method and apparatus for frame prediction in hybrid video compression to enable temporal scalability
US20060109900A1 (en) * 2004-11-23 2006-05-25 Bo Shen Image data transcoding
US20060165180A1 (en) * 2005-01-21 2006-07-27 Nec Corporation Transcoder device for transcoding compressed and encoded bitstream of motion picture in syntax level and motion picture communication system
US20070013801A1 (en) * 2004-03-24 2007-01-18 Sezan Muhammed I Methods and Systems for A/V Input Device to Display Networking
US20080165803A1 (en) * 2007-01-08 2008-07-10 General Instrument Corporation Method and Apparatus for Statistically Multiplexing Services
US20090097560A1 (en) * 2007-10-10 2009-04-16 Sony Corporation And Sony Electronics Inc. System for and method of transcoding video sequences from a first format to a second format
US20090217338A1 (en) * 2008-02-25 2009-08-27 Broadcom Corporation Reception verification/non-reception verification of base/enhancement video layers
US20100191832A1 (en) * 2007-07-30 2010-07-29 Kazunori Ozawa Communication terminal, distribution system, method for conversion and program
US20100228876A1 (en) * 2009-03-03 2010-09-09 Viasat, Inc. Space shifting over return satellite communication channels
US20100239001A1 (en) * 2007-05-23 2010-09-23 Kazuteru Watanabe Video streaming system, transcoding device, and video streaming method
US20100309987A1 (en) * 2009-06-05 2010-12-09 Apple Inc. Image acquisition and encoding system
US20110170608A1 (en) * 2010-01-08 2011-07-14 Xun Shi Method and device for video transcoding using quad-tree based mode selection

Family Cites Families (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100325253B1 (en) * 1998-05-19 2002-03-04 미야즈 준이치롯 Motion vector search method and apparatus
US6486908B1 (en) * 1998-05-27 2002-11-26 Industrial Technology Research Institute Image-based method and system for building spherical panoramas
JP2001245303A (en) * 2000-02-29 2001-09-07 Toshiba Corp Moving picture coder and moving picture coding method
US7034848B2 (en) * 2001-01-05 2006-04-25 Hewlett-Packard Development Company, L.P. System and method for automatically cropping graphical images
KR100582628B1 (en) * 2001-05-31 2006-05-23 캐논 가부시끼가이샤 Information storing apparatus and method therefor
US7801215B2 (en) * 2001-07-24 2010-09-21 Sasken Communication Technologies Limited Motion estimation technique for digital video encoding applications
US20030126622A1 (en) * 2001-12-27 2003-07-03 Koninklijke Philips Electronics N.V. Method for efficiently storing the trajectory of tracked objects in video
KR100850705B1 (en) * 2002-03-09 2008-08-06 삼성전자주식회사 Method for adaptive encoding motion image based on the temperal and spatial complexity and apparatus thereof
US7259784B2 (en) * 2002-06-21 2007-08-21 Microsoft Corporation System and method for camera color calibration and image stitching
US20040131276A1 (en) * 2002-12-23 2004-07-08 John Hudson Region-based image processor
EP3404479A1 (en) * 2002-12-25 2018-11-21 Nikon Corporation Blur correction camera system
KR100566290B1 (en) * 2003-09-18 2006-03-30 삼성전자주식회사 Image Scanning Method By Using Scan Table and Discrete Cosine Transform Apparatus adapted it
JP4123171B2 (en) * 2004-03-08 2008-07-23 ソニー株式会社 Method for manufacturing vibration type gyro sensor element, vibration type gyro sensor element, and method for adjusting vibration direction
WO2007044556A2 (en) * 2005-10-07 2007-04-19 Innovation Management Sciences, L.L.C. Method and apparatus for scalable video decoder using an enhancement stream
TW200816798A (en) * 2006-09-22 2008-04-01 Altek Corp Method of automatic shooting by using an image recognition technology
US7924316B2 (en) * 2007-03-14 2011-04-12 Aptina Imaging Corporation Image feature identification and motion compensation apparatus, systems, and methods
US20090060039A1 (en) * 2007-09-05 2009-03-05 Yasuharu Tanaka Method and apparatus for compression-encoding moving image
US8063942B2 (en) * 2007-10-19 2011-11-22 Qualcomm Incorporated Motion assisted image sensor configuration
US8170342B2 (en) * 2007-11-07 2012-05-01 Microsoft Corporation Image recognition of content
JP2009152672A (en) * 2007-12-18 2009-07-09 Samsung Techwin Co Ltd Recording apparatus, reproducing apparatus, recording method, reproducing method, and program
JP5242151B2 (en) * 2007-12-21 2013-07-24 セミコンダクター・コンポーネンツ・インダストリーズ・リミテッド・ライアビリティ・カンパニー Vibration correction control circuit and imaging apparatus including the same
JP2009159359A (en) * 2007-12-27 2009-07-16 Samsung Techwin Co Ltd Moving image data encoding apparatus, moving image data decoding apparatus, moving image data encoding method, moving image data decoding method and program
US20090323810A1 (en) * 2008-06-26 2009-12-31 Mediatek Inc. Video encoding apparatuses and methods with decoupled data dependency
US7990421B2 (en) * 2008-07-18 2011-08-02 Sony Ericsson Mobile Communications Ab Arrangement and method relating to an image recording device
JP2010039788A (en) * 2008-08-05 2010-02-18 Toshiba Corp Image processing apparatus and method thereof, and image processing program
JP2010147808A (en) * 2008-12-18 2010-07-01 Olympus Imaging Corp Imaging apparatus and image processing method in same
US8311115B2 (en) * 2009-01-29 2012-11-13 Microsoft Corporation Video encoding using previously calculated motion information
US20100194851A1 (en) * 2009-02-03 2010-08-05 Aricent Inc. Panorama image stitching
US8520083B2 (en) * 2009-03-27 2013-08-27 Canon Kabushiki Kaisha Method of removing an artefact from an image
JP5473536B2 (en) * 2009-10-28 2014-04-16 Kyocera Corporation Portable imaging device with projector function
US8681255B2 (en) * 2010-09-28 2014-03-25 Microsoft Corporation Integrated low power depth camera and projection device
US9007428B2 (en) * 2011-06-01 2015-04-14 Apple Inc. Motion-based image stitching
US8554011B2 (en) * 2011-06-07 2013-10-08 Microsoft Corporation Automatic exposure correction of images

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010047517A1 (en) * 2000-02-10 2001-11-29 Charilaos Christopoulos Method and apparatus for intelligent transcoding of multimedia data
US20020190876A1 (en) * 2000-12-22 2002-12-19 Lai Angela C. W. Distributed on-demand media transcoding system and method
US20030227974A1 (en) * 2002-06-11 2003-12-11 Hitachi, Ltd. Bitstream transcoder
US20070013801A1 (en) * 2004-03-24 2007-01-18 Sezan Muhammed I Methods and Systems for A/V Input Device to Display Networking
US20050249285A1 (en) * 2004-04-07 2005-11-10 Qualcomm Incorporated Method and apparatus for frame prediction in hybrid video compression to enable temporal scalability
US20060109900A1 (en) * 2004-11-23 2006-05-25 Bo Shen Image data transcoding
US20060165180A1 (en) * 2005-01-21 2006-07-27 Nec Corporation Transcoder device for transcoding compressed and encoded bitstream of motion picture in syntax level and motion picture communication system
US20080165803A1 (en) * 2007-01-08 2008-07-10 General Instrument Corporation Method and Apparatus for Statistically Multiplexing Services
US20100239001A1 (en) * 2007-05-23 2010-09-23 Kazuteru Watanabe Video streaming system, transcoding device, and video streaming method
US20100191832A1 (en) * 2007-07-30 2010-07-29 Kazunori Ozawa Communication terminal, distribution system, method for conversion and program
US20090097560A1 (en) * 2007-10-10 2009-04-16 Sony Corporation And Sony Electronics Inc. System for and method of transcoding video sequences from a first format to a second format
US20090217338A1 (en) * 2008-02-25 2009-08-27 Broadcom Corporation Reception verification/non-reception verification of base/enhancement video layers
US20100228876A1 (en) * 2009-03-03 2010-09-09 Viasat, Inc. Space shifting over return satellite communication channels
US20100309987A1 (en) * 2009-06-05 2010-12-09 Apple Inc. Image acquisition and encoding system
US20110170608A1 (en) * 2010-01-08 2011-07-14 Xun Shi Method and device for video transcoding using quad-tree based mode selection

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140344256A1 (en) * 2013-05-03 2014-11-20 Splunk Inc. Processing a system search request including external data sources
US9514189B2 (en) * 2013-05-03 2016-12-06 Splunk Inc. Processing a system search request including external data sources
US9916385B2 (en) 2013-05-03 2018-03-13 Splunk Inc. Searching raw data from an external data system using a dual mode search system
US9916367B2 (en) 2013-05-03 2018-03-13 Splunk Inc. Processing system search requests from multiple data stores with overlapping data
US10049160B2 (en) 2013-05-03 2018-08-14 Splunk Inc. Processing a system search request across disparate data collection systems
US10726080B2 (en) 2013-05-03 2020-07-28 Splunk Inc. Utilizing a dual mode search
US10860665B2 (en) 2013-05-03 2020-12-08 Splunk Inc. Generating search queries based on query formats for disparate data collection systems
US10860596B2 (en) 2013-05-03 2020-12-08 Splunk Inc. Employing external data stores to service data requests
US11392655B2 (en) 2013-05-03 2022-07-19 Splunk Inc. Determining and spawning a number and type of ERP processes
US11403350B2 (en) 2013-05-03 2022-08-02 Splunk Inc. Mixed mode ERP process executing a mapreduce task
US11416505B2 (en) 2013-05-03 2022-08-16 Splunk Inc. Querying an archive for a data store
US11410413B2 (en) 2018-09-10 2022-08-09 Samsung Electronics Co., Ltd. Electronic device for recognizing object and method for controlling electronic device

Also Published As

Publication number Publication date
US20130021489A1 (en) 2013-01-24
US20130021504A1 (en) 2013-01-24
US20130021490A1 (en) 2013-01-24
US9092861B2 (en) 2015-07-28
US20130021512A1 (en) 2013-01-24
US20130021483A1 (en) 2013-01-24
US20130021491A1 (en) 2013-01-24
US20130021488A1 (en) 2013-01-24
US20130021484A1 (en) 2013-01-24

Similar Documents

Publication Publication Date Title
US20130022116A1 (en) Camera tap transcoder architecture with feed forward encode data
US9998750B2 (en) Systems and methods for guided conversion of video from a first to a second compression format
US11711511B2 (en) Picture prediction method and apparatus
US20150312575A1 (en) Advanced video coding method, system, apparatus, and storage medium
JP2013521717A (en) Enabling delta compression and motion prediction and metadata modification to render images on a remote display
KR102549670B1 (en) Chroma block prediction method and device
CN110546960A (en) multi-layer video streaming system and method
WO2020048502A1 (en) Method and device for bidirectional inter frame prediction
CN113259671B (en) Loop filtering method, device, equipment and storage medium in video coding and decoding
US20130251033A1 (en) Method of compressing video frame using dual object extraction and object trajectory information in video encoding and decoding process
US20190268619A1 (en) Motion vector selection and prediction in video coding systems and methods
US10313669B2 (en) Video data encoding and video encoder configured to perform the same
US20230300346A1 (en) Supporting view direction based random access of bitstream
KR20060043050A (en) Method for encoding and decoding video signal
CN114930856A (en) Image/video coding method and device
JP2009081622A (en) Moving image compression encoder
US20230300426A1 (en) Dual stream dynamic gop access based on viewport change
JP7463614B2 (en) Dual-stream dynamic GOP access based on viewport change
US20230396801A1 (en) Learned video compression framework for multiple machine tasks
US20240087170A1 (en) Method for multiview picture data encoding, method for multiview picture data decoding, and multiview picture data decoding device
JP6649212B2 (en) Encoding device, decoding device, and image processing system
Pang et al. A Pilot Exploration of Industrial Video Scene Data Embedding using Real-Time MV-HEVC
US20130215965A1 (en) Video encoding and decoding using an epitome
CN114930855A (en) Slice and tile configuration for image/video coding
CN114902681A (en) Method and apparatus for signaling information related to slice in image/video encoding/decoding system

Legal Events

Date Code Title Description
AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BENNETT, JAMES D.;REEL/FRAME:027342/0435

Effective date: 20111202

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001

Effective date: 20170119