US20030103568A1 - Pixel data selection device for motion compensated interpolation and method thereof - Google Patents

Pixel data selection device for motion compensated interpolation and method thereof Download PDF

Info

Publication number
US20030103568A1
US20030103568A1 (also published as US 2003/0103568 A1); application US10/219,295 (US21929502A)
Authority
US
United States
Prior art keywords
pixel data
field
frame
motion
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US10/219,295
Other versions
US7720150B2 (en)
Inventor
Sung-hee Lee
Heon-hee Moon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Assigned to SAMSUNG ELECTRONICS CO., LTD. reassignment SAMSUNG ELECTRONICS CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LEE, SUNG-HEE, MOON, HEON-HEE
Publication of US20030103568A1 publication Critical patent/US20030103568A1/en
Application granted granted Critical
Publication of US7720150B2 publication Critical patent/US7720150B2/en
Expired - Fee Related
Adjusted expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/587Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N7/014Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes involving the use of motion vectors

Definitions

  • the present invention relates to a pixel data selection device for motion compensated interpolation and a method thereof, and more particularly, to a pixel data selection device and method in a video signal conversion.
  • a frame rate conversion relates to a conversion of the number of frame outputs per second.
  • a frame rate is generally expressed as a unit of ‘Hz’.
  • the frame rate conversion is required when video signals have different frame rates. For example, when one wants to watch a film with a 24 Hz frame rate on a TV screen with a 30 Hz frame rate, the 24 Hz frame rate of the film has to be converted to the 30 Hz frame rate of the TV screen by frame repetition and 3-2 pull-down methods.
  • the frame rate conversion includes an up-conversion and a down-conversion to increase and decrease the frame rate, respectively.
  • aliasing generated during down-sampling of the video signals can be easily prevented by using a low-frequency filtering and sampling.
  • in the up-conversion, however, a nonexistent frame has to be constructed on the temporal axis, and the aliasing generated during compensation of the nonexistent frame is hardly controllable since there is no limit in determining a bandwidth of an input video signal.
  • excessive high frequency components on a spatial axis, or excessive motions on the temporal axis further deteriorate an interpolation efficiency of the non-existent frame.
  • a conventional FRC algorithm is divided into a motion compensation type and a non-motion compensation type according to whether the algorithm uses motion information among frames during interpolation.
  • the non-motion compensation type of the FRC algorithm includes the frame repetition method, a linear or non-linear time/space filter method and a simple motion interpolation method using a motion detection method in a local motion area.
  • the frame repetition method simply repeats a preceding frame. In other words, the frame repetition method does not use the motion information.
  • the frame repetition method is one of the conventional methods implemented at a hardware level. However, when one frame rate is converted into a different frame rate, annoying visual artifacts such as motion jitter or blurring occur.
  • in order to overcome the above problems, a time/space filter interpolation method has been suggested.
  • a time/space filter is used to filter neighboring frames of an interpolated frame. This method, however, cannot prevent the aliasing during the interpolation. If an input image contains excessive high frequency components on a spatial axis, the blurring occurs in the interpolated frame.
  • the motion interpolation method is based on both a motion-compensated interpolation (MCI) and a linear interpolation. This method detects a motion in a local area and simplifies computational complexities for motion compensation by applying the detected motion to the linear interpolation of adjacent frames. More specifically, the motion interpolation method performs the interpolation by using a median filter eliminating irregular motion vectors detected from a motion detecting process, and averaging the motion vectors along motion trajectories.
  • FIG. 1 is a block diagram showing a general structure of a conventional frame rate converting apparatus
  • FIG. 2 is a view showing an image segmentation used in the conventional frame rate converting apparatus of FIG. 1.
  • the conventional frame rate converting apparatus 100 includes an image segmentation portion 110 , a motion estimation portion 120 , a motion vector refinement portion 130 and a motion compensated-interpolation (MCI) portion 140 .
  • the image segmentation portion 110 divides an image into a changed region and an unchanged region.
  • the changed region is again divided into a covered region and an uncovered region, a stationary background region and a moving object region for more efficient motion estimation.
  • the motion estimation portion 120 estimates the motion in a unit of pixel or block.
  • a motion vector of a block is generated by a block matching algorithm that is generally used in video coding.
  • the block matching algorithm obtains the motion vector for each block based on an assumption that pixels within a certain sized block are moved at a constant rate, i.e., the pixels are moved without enlargement or reduction.
  • the motion estimation in the pixel unit is performed in order to obtain the motion vector closer to a true image. For the motion estimation in a pixel unit, first, based on the assumption that the pixels within the block have a uniform motion, the motion vector of the block unit is obtained. Then, based on the motion vector of the block unit, a motion vector of each pixel of the block is obtained.
  • the motion vector refinement portion 130 refines the improper motion vector that is obtained from the motion estimation portion 120 .
  • the MCI portion 140 obtains the motion vector in a forward direction of the preceding and following frames of an image for interpolation. Then, by using the estimated motion vector, the image for interpolation is recognized according to the regions divided by the image segmentation portion 110 . In other words, the MCI portion 140 compensates motions by using motion information between adjacent frames like the preceding and following frames of a certain image of a current frame to be interpolated.
  • the present invention has been made to overcome the above-and other problems of the related art, and accordingly, it is an object of the present invention to provide an apparatus and a method for motion compensated interpolation, which selects pixel data of a block for interpolation according to a plurality of motion trajectories.
  • the above and other objects may be achieved by providing a pixel data selection device for motion compensated interpolation according to an embodiment of the present invention.
  • the device includes a first storage portion and a second storage portion respectively storing first pixel data and second pixel data, the first pixel data corresponding to a motion vector of a first frame/field that is obtained by delaying an input frame/field, the second pixel data corresponding to the motion vector of a second frame/field that is obtained by delaying the first frame/field one or more times, a first pixel data extract portion and a second pixel data extract portion respectively extracting individual pixel data from the first and the second pixel data corresponding to each candidate motion vector, and a first optimum pixel output portion and a second optimum pixel output portion outputting optimum pixel data for motion compensated interpolation from the extracted individual pixel data.
  • the candidate motion vector is a motion vector of a current block and an adjacent block or a motion vector extracted during motion estimation.
  • the candidate motion vector re-uses a global motion vector detected during motion analysis and a motion vector detected from a block of a preceding frame/field at the same location as a block of a following frame/field.
  • the first and the second pixel data extract portions extract the individual pixel data on the basis of a second candidate motion vector that is obtained by median filtering and/or average filtering of the candidate motion vector.
  • the device includes a first delay portion outputting the first frame/field and a second delay portion outputting a second frame/field that is obtained by first-delaying the first frame/field.
  • the first and the second pixel data extract portions selectively output the individual pixel data with respect to a block of a zero (0) candidate motion vector.
  • the first and second pixel data extract portions extract the individual pixel data by estimating a motion trajectory with the candidate motion vector.
  • the first and the second optimum pixel output portions are a median filter that outputs a median value of the individual pixel data as the optimum pixel data.
  • the first and the second optimum pixel data output portions determine the individual pixel data closest to an average of the individual pixel data as the optimum pixel data for motion compensated interpolation and outputs the determined pixel data for motion compensated interpolation.
  • the second storage portion stores the second pixel data corresponding to the motion vector of the second field, the second field being obtained by second-delaying the first field and having the same characteristic as the first field corresponding to the first pixel data stored in the first storage portion.
  • the device includes a first delay portion outputting the first field, which is obtained by first-delaying the input field and a second delay portion outputting the second field, which is obtained by second-delaying the first field.
  • the above and other objects may be achieved by providing a method of selecting pixel data for motion compensated interpolation according to another embodiment of the present invention.
  • the method includes extracting individual pixel data corresponding to one or more candidate motion vectors from first pixel data of an input first frame/field and second pixel data of an input second frame/field and outputting optimum pixel data for motion compensated interpolation from the extracted individual pixel data.
  • Each candidate motion vector is a motion vector of a current block and an adjacent block disposed adjacent to the current block of an interpolated frame/field, or a motion vector extracted during motion estimation.
  • the candidate motion vector is a global motion vector detected during motion analysis and a motion vector detected from a block of a preceding frame/field at the same location as a block of a following frame/field.
  • the extracting of the pixel data includes extracting the individual pixel data on the basis of a second candidate motion vector that is obtained by median-filtering and/or average-filtering the candidate motion vectors.
  • the extracting of the individual pixel data includes outputting the first frame/field, which is obtained by delaying an inputted frame once and outputting the second frame/field, which is obtained by delaying the first frame/field once.
  • the extracting of the pixel data includes selectively outputting the pixel data for motion compensated interpolation with respect to a block of zero (0) candidate motion and extracting the individual pixel data by estimating a motion trajectory corresponding to the candidate motion vector.
  • the outputting of the optimum pixel data includes extracting a median value of the individual pixel data and determining the individual pixel data closest to an average of the individual pixel data as the optimum pixel data for motion compensated interpolation, and outputting the determined pixel data for motion compensated interpolation.
  • the extracting of the pixel data includes extracting the pixel data for the candidate motion vector from the first pixel data and the second pixel data, the first pixel data being of the first field, and the second pixel data being of the second field that is adjacent to the first field and has the same characteristic as the first field.
  • FIG. 1 is a block diagram of a conventional frame rate converting apparatus
  • FIG. 2 is a view showing an image segmentation used in the conventional frame rate converting apparatus of FIG. 1;
  • FIG. 3 is a block diagram showing a pixel data selection device for motion compensated interpolation according to an embodiment of the present invention
  • FIG. 4 is a view showing motion vectors of a current block and candidate blocks of an interpolated frame/field in the motion compensated interpolation of FIG. 3;
  • FIG. 5 is a view showing each motion trajectory of the current block and the candidate blocks to be interpolated in the motion compensated interpolation of FIG. 3;
  • FIG. 6 is a block diagram showing a motion compensated-interpolation portion of the pixel data selection device of FIG. 3;
  • FIG. 7 is a block diagram showing a pixel data selection device for motion compensated interpolation to obtain an adjacent field that has the same characteristic as a current field according to another embodiment of the present invention.
  • FIG. 8 is a flowchart explaining a method of selecting pixel data for motion compensated interpolation of FIGS. 3 through 7.
  • FIG. 3 is a block diagram of a pixel data selection device for motion compensated interpolation according to the embodiment of the present invention.
  • the pixel data selection device includes a first delayer 300 , a second delayer 310 , a first storage portion 320 , a first pixel data extract portion 330 , a first optimum pixel data output portion 340 , a second storage portion 350 , a second pixel data extract portion 360 , a second optimum pixel data output portion 370 and a motion compensated-interpolation (MCI) portion 380 .
  • the first delayer 300 first delays the input frame/field once and outputs a resultant first frame/field.
  • the second delayer 310 delays the first frame/field outputted from the first delayer 300 and outputs a resultant second frame/field. In an interlaced scanning method, if the first field represents an odd-numbered field, the second field represents an even-numbered field, and vice versa.
  • the first delayer 300 outputs the first field that is delayed from the input field. For example, if the input field is an odd field, the first delayer 300 outputs an even first field. Likewise, if the input field is an even field, the first delayer 300 outputs an odd first field. The second delayer 310 outputs the second field that is further delayed from the first field. As a result, the second field becomes an odd second field or an even second field that corresponds to the even first field or odd first field output from the first delayer 300.
  • the first storage portion 320 stores the first pixel data corresponding to a motion vector of the first frame/field that is delayed from the input frame/field.
  • the first pixel data extract portion 330 extracts individual pixel data corresponding to candidate motion vectors from the stored first pixel data of the first frame/field.
  • the first pixel data extract portion 330 can also extract the individual pixel data corresponding to second candidate motion vectors which are obtained by median-filtering or average-filtering certain candidate motion vectors.
  • the certain candidate motion vector can be a motion vector corresponding to a current block and adjacent blocks disposed adjacent to the current block of a current frame/field to be interpolated, or a plurality of motion vectors extracted during the motion estimation.
  • a global motion vector detected during motion analysis estimation and a local motion vector detected from a block of a preceding frame/field at the same location as a block of a following frame/field can also be re-used as the candidate motion vector.
  • FIG. 4 is a view showing the motion vectors of the current block and candidate blocks. The motion vectors are used to construct the current block to be interpolated in the motion compensated interpolation.
  • the current block of the frame/field to be interpolated is denoted as V0, and the candidate blocks are denoted as Vi (i = 1, 2, 3, 4, 5, 6, 7, 8)
  • the pixel data corresponding to the motion vectors are extracted by using the motion vectors of the current block and the candidate blocks (FIG. 4).
  • the pixel data corresponding to the motion vectors can also be extracted by using the second candidate motion vectors.
  • FIG. 5 is a view showing the motion trajectory corresponding to the current block and the candidate blocks of the interpolated frame/field that is to be interpolated by using the estimated motion vectors of the current block and the candidate blocks.
  • among the adjacent frames, Fn−1 is the (n−1)th frame, Fn is the nth frame, and Fn+1 is the (n+1)th frame. Fn−1 and Fn+1 are a preceding frame and a following frame, respectively, while Fn corresponds to the frame to be interpolated.
  • the first optimum pixel data output portion 340 outputs the optimum pixel data from the individual pixel data extracted from the first pixel data extract portion 330 .
  • the first optimum pixel data output portion 340 can use a median filter that selects a median value among a plurality of individual pixel data. Accordingly, the first optimum pixel data output portion 340 uses the median filter to select the median value, to thereby output the optimum pixel data for motion compensated interpolation in consideration of a smooth motion trajectory among the blocks.
  • the first optimum pixel data output portion 340 determines the individual pixel data, which is the closest to the average value of the plurality of individual pixel data, as the optimum pixel data, and outputs the determined pixel data. If a complicated motion rather than a translation (linear) motion exists in a motion compensation process, instead of using a complicated motion vector detected by the block matching algorithm, the first optimum pixel data output portion 340 determines the pixel data, which does not use the motion information, as the optimum pixel data and outputs the determined pixel data. Accordingly, the pixel data having a ‘zero (0)’ motion vector can be selected as the optimum pixel data.
  • the optimum pixel data of the second frame/field is also selected.
  • the second delayer 310 delays the first frame/field outputted from the first delayer 300 and outputs the resultant second frame/field.
  • the second storage portion 350 stores the second pixel data corresponding to the motion vectors of the second frame/field that is delayed from the first frame/field.
  • the second pixel data extract portion 360 extracts individual pixel data corresponding to respective candidate motion vectors from the stored second pixel data of the second frame/field.
  • the second pixel data extract portion 360 outputs the extracted pixel data for the block having the zero (0) candidate motion vector.
  • the second pixel data extract portion 360 can also extract the individual pixel data through the estimation of the motion trajectory with the candidate motion vectors.
  • the second optimum pixel data output portion 370 outputs the optimum pixel data from the individual pixel data extracted from the second pixel data extract portion 360 .
  • the second optimum pixel data output portion 370 can use the median filter that outputs the median value among the plurality of the individual pixel data. By selecting the median value by using the median filter, the second optimum pixel data output portion 370 outputs the compensation pixel data in consideration of the smooth motion trajectory among the blocks. Further, the second optimum pixel data output portion 370 can determine the individual pixel data, which is the closest to the average of the plurality of the individual pixel data, as the compensation pixel data and output the determined compensation pixel data.
  • FIG. 6 is a block diagram showing the motion compensated-interpolation portion according to another embodiment of the present invention in detail.
  • the MCI portion 380 includes a motion compensation pixel data generating portion 381 , a temporal average pixel data generating portion 382 , a global motion compensation portion 383 , a zero motion compensation portion 385 , a local motion compensation portion 384 , and a motion vector type selector 386 .
  • the MCI portion 380 receives outputs from the first storage portion 320 and the first optimum pixel data output portion 340 corresponding to the first frame/field, and from the second storage portion 350 and the second optimum pixel data output portion 370 corresponding to the second frame/field, and then obtains interpolated fields or frames through different compensation processes according to the type of the motion blocks.
  • the MCI portion 380 performs the motion compensation by adaptively applying the motion compensation methods according to information about the motion vectors outputted from the pixel selection device.
  • the motion compensation pixel data generating portion 381 generates motion compensation pixel data with an average value as well as a median-value obtained by median-filtering the pixel data of the preceding and following frames.
  • the motion compensation pixel data generating portion 381 receives the outputs from the first and second storage portions 320 , 350 and the first and second optimum pixel data outputting portions 340 , 370 and generates the motion compensation pixel data.
  • the motion compensation pixel data generating portion 381 generates the motion compensation pixel data by:
  • where P_mc(x) is the motion compensation pixel data, l_med(x) is a median of the pixel data of the candidate blocks of the preceding frame/field, r_med(x) is a median of the pixel data of the candidate blocks of the following frame/field, and x and n are the spatial and time domain indices, respectively.
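
Formula 1 is reproduced only as an image in the source. A plausible reconstruction from the variable definitions above, assuming the generating portion simply averages the two median-filtered values (an assumption, not the verified original), is:

```latex
% assumed reconstruction of Formula 1
P_{mc}(x) = \tfrac{1}{2}\bigl(l_{med}(x) + r_{med}(x)\bigr)
```
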
  • when there is no motion, the temporal average pixel data generating portion 382 generates temporal average pixel data based on a time average instead of the motion vectors.
  • the temporal average pixel data generating portion 382 receives the outputs from the first and the second storage portions 320 , 350 and the first and second optimum pixel data outputting portions 340 , 370 and determines the temporal average pixel data by:
  • where P_avg(x) is the temporal average pixel data.
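
Formula 2 also appears only as an image in the source. Assuming the time average is taken over the co-located pixels of the preceding and following frames/fields, written f_(n-1) and f_n as in the Formula 3 notation below, a plausible form (an assumption) is:

```latex
% assumed reconstruction of Formula 2 (temporal average, no motion information)
P_{avg}(x) = \tfrac{1}{2}\bigl(f_{n-1}(x) + f_{n}(x)\bigr)
```
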
  • when motion exists, the MCI portion 380 performs the motion compensation using the motion vectors. In this case, the motion compensation pixel data is needed.
  • when there is no motion, the MCI portion 380 performs the motion compensation simply using the averages along the temporal axis, instead of using the motion vectors. In this case, the temporal average pixel data is needed.
  • the global motion compensation portion 383 performs the motion compensation by using the motion vectors.
  • the global motion compensation portion 383 uses the optimum pixel data that is selected by the pixel data selection device for motion compensation.
  • the ‘global motion’ means an entire scene (image) moving in a direction at a constant speed. In a case of the global motion, substantially the same compensation effect can be obtained by applying the general MCI method.
  • in Formula 3, V is the motion vector of the current block, and f_i, f_(n-1), and f_n are, respectively, the interpolated frame and the preceding and following frames, which are consecutively input.
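
Formula 3 is likewise an image in the source. Assuming the interpolated frame lies at the temporal midpoint between f_(n-1) and f_n and V is the block motion vector, a standard bidirectional motion compensated interpolation consistent with the surrounding text (a sketch, not the verified original) would be:

```latex
% assumed form of Formula 3 (bidirectional MCI at the temporal midpoint)
f_{i}(x) = \tfrac{1}{2}\Bigl(f_{n-1}\bigl(x - \tfrac{V}{2}\bigr) + f_{n}\bigl(x + \tfrac{V}{2}\bigr)\Bigr)
```
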
  • the right side of the above equation (Formula 3) can be replaced by the output P_mc(x) of the motion compensation pixel data generating portion 381.
  • the output of the global motion compensation portion 383 is the left side f_i(x) of the above equation (Formula 3).
  • the zero motion compensation portion 385 performs the compensation not by using the motion vector, but by simply obtaining an average on the temporal axis.
  • the zero motion compensation portion 385 receives the pixel data selected by the pixel data selection device, and performs the compensation by using the time average of the pixel data.
  • this compensation is applied when the current block is the zero motion type block.
  • the local motion compensation portion 384 performs the compensation with respect to local motions.
  • the local motions are motions, such as those of a moving object, that are not reliably estimated through block matching.
  • the local motion compensation portion 384 soft-switches to the median value between the motion compensation pixel data and the temporal average pixel data. Since the motion vectors detected from the local motion are sometimes inaccurate, blocking artifacts can occur when the general MCI method is applied.
  • An adaptive MCI method is used to reduce the block artifacts.
  • the pixel data for interpolation is determined by:
  • the motion compensation pixel data is determined from the preceding frame and the current frame, and the median value of the respective pixel data is obtained.
  • the median values l_med and r_med are obtained by:
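
The formulas for these medians are images in the source. Assuming they take the median over the pixel values reached through each candidate vector V_j (j = 0, ..., 8) of the current and candidate blocks, plausible forms (assumptions, not the verified originals) are:

```latex
% assumed forms of the median values over the candidate trajectories
l_{med}(x) = \operatorname*{median}_{j}\, f_{n-1}\!\left(x - \tfrac{V_j}{2}\right),
\qquad
r_{med}(x) = \operatorname*{median}_{j}\, f_{n}\!\left(x + \tfrac{V_j}{2}\right)
```
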
  • the motion analysis portion calculates the correlation of the motion vectors of the candidate blocks, thereby determining a soft switching value M.
  • the determined soft switching value M is used between the motion compensation pixel data and the non-motion compensation pixel data.
  • the local motion compensation portion 384 determines the pixel data, which is to be interpolated, between P_mc and P_avg according to the soft switching value M.
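
The soft-switching rule itself is not written out in the text. One plausible reading, assuming the motion-analysis value M is normalized to [0, 1] (an assumption on my part), is a weighted blend of the two candidates:

```latex
% assumed soft-switching between motion-compensated and temporal-average data
f_{i}(x) = (1 - M)\,P_{mc}(x) + M\,P_{avg}(x), \qquad 0 \le M \le 1
```
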
  • the motion vector type selector 386 selects one of the pixel data of the global motion compensation portion 383 , local motion compensation portion 384 and zero motion compensation portion 385 each corresponding to an inputted motion vector type. Accordingly, the MCI portion 380 adaptively performs the compensation by selectively applying the compensation method according to the selected motion vector type.
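
A minimal sketch of this adaptive selection among the three compensation paths; the function name and the soft-switching blend used for local motion are assumptions, not the patent's exact logic:

```python
def interpolate_block(motion_type, p_mc, p_avg, m):
    """Choose the interpolated pixel data for one block.

    motion_type: 'global', 'local' or 'zero', from the motion vector type selector 386
    p_mc:  motion compensation pixel data from the generating portion 381
    p_avg: temporal average pixel data from the generating portion 382
    m:     soft switching value from the motion analysis (assumed to lie in [0, 1])
    """
    if motion_type == 'global':
        return p_mc                       # compensate along the motion vector
    if motion_type == 'zero':
        return p_avg                      # simple average along the temporal axis
    return (1.0 - m) * p_mc + m * p_avg   # local motion: soft switch between the two
```
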
  • FIG. 7 is a block diagram of the pixel data selection device for motion compensated interpolation between neighboring fields having the same characteristic.
  • the pixel data selection device includes the first delayer 300 , the second delayer 310 , the first storage portion 320 , the first pixel data extract portion 330 , the first optimum pixel data output portion 340 , the second storage portion 350 , the second pixel data extract portion 360 , the second optimum pixel data output portion 370 and motion compensated-interpolation portion 380 .
  • description of the elements like those of FIG. 3 will be omitted.
  • the difference between FIGS. 3 and 7 is that while FIG. 3 shows the pixel selection device that selects pixel data for motion compensation between any two adjacent frames/fields, FIG. 7 shows the pixel data selection device that selects pixel data for motion compensation between the two neighboring fields having the same characteristic.
  • the ‘two neighboring fields having the same characteristic’ means that if the first field, which is once delayed from the input field, is an odd field, the second field is an odd field that is delayed a second time from the first field. Likewise, if the first field is an even field, the second field becomes an even field.
  • the first delayer 300 first delays the input field and outputs the first field as an odd first field. Then, the second delayer 310 delays the odd first field a second time and outputs the second field as an odd second field.
  • the second delayer 310 has a delayer-a 312 and a delayer-b 314 .
  • the delayer-a 312 delays the odd first field and outputs an even field, while the delayer-b 314 delays the even field from the delayer-a 312 one more time and outputs the odd second field.
  • the first and the second pixel data extract portions 330 , 360 extract the pixel data for motion compensation from the adjacent fields having the same characteristic, i.e., from the odd first field and odd second field, or from the even first field and even second field.
  • FIG. 8 is a flowchart showing a method of selecting the pixel data for motion compensated interpolation according to another embodiment of the present invention.
  • the input frame/field is delayed a first time and a second time to be output as the first and the second frame/field, respectively, in operation S800.
  • the first delayer 300 delays the input frame/fields and outputs the resultant field, i.e., the first frame/field.
  • the second delayer 310 secondly delays the first frame/field output from the first delayer 300 and outputs the second frame/field.
  • if the first field is an odd field, the second field will be the even field corresponding to the odd first field; if the first field is an even field, the second field will be the odd field corresponding to the even first field.
  • the first and second storage portions 320 , 350 store the first pixel data and the second pixel data in operation S810.
  • the first pixel data corresponds to the motion vector of the first frame/field, which is firstly delayed from the input frame/field by the first delayer 300
  • the second pixel data corresponds to the motion vector of the second frame/field, which is secondly delayed from the first frame/field by the second delayer 310 .
  • the pixel data is determined by taking the motion trajectory of the current block and the candidate blocks into account in motion compensation.
  • the individual pixel data corresponding to the candidate motion vector is extracted from the first and the second pixel data in operation S820.
  • the candidate motion vector can be the motion vector of the current block and the candidate blocks, or it can be a plurality of candidate motion vectors extracted during the motion estimation.
  • the global motion vector detected from the motion analysis and the local motion vector detected from the preceding frame/field at the same location as a block of the following (current) frame/field can also be re-used as the candidate motion vector.
  • the optimum pixel data for motion compensation are output from the extracted individual pixel data in operation S830.
  • the optimum pixel data for motion compensation can be output by using a median filter which outputs the median pixel data of the plurality of individual pixel data. By selecting the median pixel data through the median filter, the optimum pixel data for motion compensation can be output in consideration of the smooth motion trajectory. Also, the individual pixel data that is the closest to the average of the plurality of individual pixel data can be determined and output as the optimum pixel data for motion compensation.
  • the pixel data for motion compensation can be selected from the fields of identical characteristic through operations S810 through S830.
  • according to the pixel data selection device for motion compensated interpolation and the method thereof, by selecting pixel data from the adjacent frames/fields to interpolate the current block in consideration of the plurality of motion trajectories, the occurrence of blocking artifacts due to inaccurate estimation of motion vectors can be reduced. Also, since all motion trajectories about the current block and the candidate blocks are taken into account without requiring additional processes, the hardware for the motion compensated interpolation is simplified.

Abstract

A pixel data selection device for motion compensated interpolation and a method thereof select pixel data in adjacent frames/fields. First and second storage portions respectively store first pixel data corresponding to a motion vector of a first frame/field that is obtained by delaying an input frame/field and second pixel data corresponding to the motion vector of a second frame/field that is obtained by delaying the first frame/field one or more times. First and second pixel data extracting portions extract individual pixel data from the first and the second pixel data corresponding to a candidate motion vector. First and second optimum pixel outputting portions output optimum pixel data for motion compensation from the extracted individual pixel data. By selecting the optimum pixel data from the adjacent frame/field of an interpolated frame/field having a current block and candidate blocks corresponding to a plurality of motion trajectories, occurrence of blocking artifacts due to inaccurate estimation of motion vectors can be reduced.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of Korean Application No. 2001-75535, filed Nov. 30, 2001, in the Korean Industrial Property Office, the disclosure of which is incorporated herein by reference. [0001]
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0002]
  • The present invention relates to a pixel data selection device for motion compensated interpolation and a method thereof, and more particularly, to a pixel data selection device and method in a video signal conversion. [0003]
  • 2. Description of the Related Art [0004]
  • A frame rate conversion (FRC) relates to a conversion of the number of frame outputs per second. A frame rate is generally expressed as a unit of ‘Hz’. The frame rate conversion is required when video signals have different frame rates. For example, when one wants to watch a film with a 24 Hz frame rate on a TV screen with a 30 Hz frame rate, the 24 Hz frame rate of the film has to be converted to the 30 Hz frame rate of the TV screen by frame repetition and 3-2 pull-down methods. [0005]
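
As an illustration of the 3-2 pull-down mentioned above, here is a minimal sketch (not part of the patent; names are illustrative): each 24 Hz film frame is repeated for three or two interlaced fields in turn, so four film frames yield ten fields, i.e. five frames at the 30 Hz TV rate.

```python
def pulldown_3_2(film_frames):
    """Map 24 frame/s film to 60 field/s video by alternating 3- and 2-field repeats."""
    fields = []
    for i, frame in enumerate(film_frames):
        repeats = 3 if i % 2 == 0 else 2  # A:3 fields, B:2, C:3, D:2, ...
        fields.extend([frame] * repeats)
    return fields

print(pulldown_3_2(["A", "B", "C", "D"]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']  -> 10 fields = 5 TV frames
```
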
  • The frame rate conversion includes an up-conversion and a down-conversion to increase and decrease the frame rate, respectively. In the down-conversion, aliasing generated during down-sampling of the video signals can be easily prevented by using low-frequency filtering and sampling. In the up-conversion, however, a nonexistent frame has to be constructed on the temporal axis, and the aliasing generated during compensation of the nonexistent frame is hardly controllable since there is no limit in determining a bandwidth of an input video signal. In addition, excessive high frequency components on a spatial axis, or excessive motions on the temporal axis, further deteriorate the interpolation efficiency of the nonexistent frame. [0006]
  • A conventional FRC algorithm is divided into a motion compensation type and a non-motion compensation type according to whether the algorithm uses motion information among frames during interpolation. The non-motion compensation type of the FRC algorithm includes the frame repetition method, a linear or non-linear time/space filter method and a simple motion interpolation method using a motion detection method in a local motion area. [0007]
  • The frame repetition method simply repeats a preceding frame. In other words, the frame repetition method does not use the motion information. The frame repetition method is one of the conventional methods implemented at a hardware level. However, when one frame rate is converted into a different frame rate, annoying visual artifacts such as motion jitter or blurring occur. [0008]
  • In order to overcome the above problems, a time/space filter interpolation method has been suggested. In the time/space filter interpolation method, a time/space filter is used to filter neighboring frames of an interpolated frame. This method, however, cannot prevent the aliasing during the interpolation. If an input image contains excessive high frequency components on a spatial axis, the blurring occurs in the interpolated frame. [0009]
  • The motion interpolation method is based on both a motion-compensated interpolation (MCI) and a linear interpolation. This method detects a motion in a local area and simplifies computational complexities for motion compensation by applying the detected motion to the linear interpolation of adjacent frames. More specifically, the motion interpolation method performs the interpolation by using a median filter eliminating irregular motion vectors detected from a motion detecting process, and averaging the motion vectors along motion trajectories. [0010]
  • FIG. 1 is a block diagram showing a general structure of a conventional frame rate converting apparatus, and FIG. 2 is a view showing an image segmentation used in the conventional frame rate converting apparatus of FIG. 1. [0011]
  • Referring to FIG. 1, the conventional frame rate converting apparatus 100 includes an image segmentation portion 110, a motion estimation portion 120, a motion vector refinement portion 130 and a motion compensated-interpolation (MCI) portion 140. As shown in FIG. 2, the image segmentation portion 110 divides an image into a changed region and an unchanged region. The changed region is again divided into a covered region and an uncovered region, a stationary background region and a moving object region for more efficient motion estimation. [0012]
  • The motion estimation portion 120 estimates the motion in a unit of pixel or block. A motion vector of a block is generated by a block matching algorithm that is generally used in video coding. The block matching algorithm obtains the motion vector for each block based on an assumption that pixels within a certain sized block are moved at a constant rate, i.e., the pixels are moved without enlargement or reduction. The motion estimation in the pixel unit is performed in order to obtain the motion vector closer to a true image. For the motion estimation in a pixel unit, first, based on the assumption that the pixels within the block have a uniform motion, the motion vector of the block unit is obtained. Then, based on the motion vector of the block unit, a motion vector of each pixel of the block is obtained. [0013]
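
The block matching step can be sketched as follows. This is an illustrative full-search implementation using the sum of absolute differences (SAD); the function and parameter names are assumptions, not the patent's specific algorithm.

```python
import numpy as np

def block_motion_vector(prev_frame, curr_frame, top, left, block=8, search=7):
    """Return the (dy, dx) displacement into the preceding frame that minimizes
    the SAD for the block of curr_frame whose top-left corner is (top, left)."""
    ref = curr_frame[top:top + block, left:left + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > prev_frame.shape[0] or x + block > prev_frame.shape[1]:
                continue  # candidate block falls outside the frame
            cand = prev_frame[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(ref - cand).sum())  # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```
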
  • When a proper motion vector is not obtained, however, the visual quality becomes worse than when the motion information is not used at all. Accordingly, the motion vector refinement portion 130 refines the improper motion vector that is obtained from the motion estimation portion 120. The MCI portion 140 obtains the motion vector in a forward direction of the preceding and following frames of an image for interpolation. Then, by using the estimated motion vector, the image for interpolation is recognized according to the regions divided by the image segmentation portion 110. In other words, the MCI portion 140 compensates motions by using motion information between adjacent frames, such as the preceding and following frames of a certain image of a current frame to be interpolated. [0014]
  • Sometimes, an inaccurate result is obtained from the motion estimation because the estimated motion vector is different from a true motion vector. When the motion estimation used by the MCI portion 140 is inaccurate, blocking artifacts occur in interpolated images. Accordingly, after-process methods like an overlapped block motion compensation (OBMC) method are used to remove the blocking artifacts. The OBMC method, however, is only effective when the artifacts of pixel data change edges of the block irregularly. In a format conversion like deinterlacing and the frame/field rate conversion (FRC), the after-process methods such as the OBMC method are not effective against blocking artifacts where the pixel data within the interpolated block is quite different from that of the adjacent blocks. [0015]
  • SUMMARY OF THE INVENTION
  • The present invention has been made to overcome the above-and other problems of the related art, and accordingly, it is an object of the present invention to provide an apparatus and a method for motion compensated interpolation, which selects pixel data of a block for interpolation according to a plurality of motion trajectories. [0016]
  • Additional objects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention. [0017]
  • The above and other objects may be achieved by providing a pixel data selection device for motion compensated interpolation according to an embodiment of the present invention. The device includes a first storage portion and a second storage portion respectively storing first pixel data and second pixel data, the first pixel data corresponding to a motion vector of a first frame/field that is obtained by delaying an input frame/field, the second pixel data corresponding to the motion vector of a second frame/field that is obtained by delaying the first frame/field one or more times, a first pixel data extract portion and a second pixel data extract portion respectively extracting individual pixel data from the first and the second pixel data corresponding to each candidate motion vector, and a first optimum pixel output portion and a second optimum pixel output portion outputting optimum pixel data for motion compensated interpolation from the extracted individual pixel data. [0018]
  • The candidate motion vector is a motion vector of a current block and an adjacent block or a motion vector extracted during motion estimation. The candidate motion vector re-uses a global motion vector detected during motion analysis and a motion vector detected from a block of a preceding frame/field at the same location as a block of a following frame/field. [0019]
  • The first and the second pixel data extract portions extract the individual pixel data on the basis of a second candidate motion vector that is obtained by median filtering and/or average filtering of the candidate motion vector. The device includes a first delay portion outputting the first frame/field and a second delay portion outputting a second frame/field that is obtained by first-delaying the first frame/field. [0020]
  • The first and the second pixel data extract portions selectively output the individual pixel data with respect to a block of a zero (0) candidate motion vector. The first and second pixel data extract portions extract the individual pixel data by estimating a motion trajectory with the candidate motion vector. [0021]
  • The first and the second optimum pixel output portions are a median filter that outputs a median value of the individual pixel data as the optimum pixel data. The first and the second optimum pixel data output portions determine the individual pixel data closest to an average of the individual pixel data as the optimum pixel data for motion compensated interpolation and outputs the determined pixel data for motion compensated interpolation. [0022]
  • The second storage portion stores the second pixel data corresponding to the motion vector of the second field, the second field being obtained by second-delaying the first field and having the same characteristic as the first field corresponding to the first pixel data stored in the first storage portion. The device includes a first delay portion outputting the first field, which is obtained by first-delaying the input field and a second delay portion outputting the second field, which is obtained by second-delaying the first field. [0023]
  • The above and other objects may be achieved by providing a method of selecting pixel data for motion compensated interpolation according to another embodiment of the present invention. The method includes extracting individual pixel data corresponding to one or more candidate motion vectors from first pixel data of an input first frame/field and second pixel data of an input second frame/field and outputting optimum pixel data for motion compensated interpolation from the extracted individual pixel data. [0024]
  • Each candidate motion vector is a motion vector of a current block and an adjacent block disposed adjacent to the current block of an interpolated frame/field, or a motion vector extracted during motion estimation. The candidate motion vector is a global motion vector detected during motion analysis and a motion vector detected from a block of a preceding frame/field at the same location as a block of a following frame/field. [0025]
  • The extracting of the pixel data includes extracting the individual pixel data on the basis of a second candidate motion vector that is obtained by median-filtering and/or average-filtering the candidate motion vectors. The extracting of the individual pixel data includes outputting the first frame/field, which is obtained by delaying an inputted frame once and outputting the second frame/field, which is obtained by delaying the first frame/field once. [0026]
  • The extracting of the pixel data includes selectively outputting the pixel data for motion compensated interpolation with respect to a block of zero (0) candidate motion and extracting the individual pixel data by estimating a motion trajectory corresponding to the candidate motion vector. [0027]
  • The outputting of the optimum pixel data includes extracting a median value of the individual pixel data and determining the individual pixel data closest to an average of the individual pixel data as the optimum pixel data for motion compensated interpolation, and outputting the determined pixel data for motion compensated interpolation. [0028]
  • The extracting of the pixel data includes extracting the pixel data for the candidate motion vector from the first pixel data and the second pixel data, the first pixel data being of the first field, and the second pixel data being of the second field that is adjacent to the first field and has the same characteristic as the first field. [0029]
  • By selecting the pixel data from the adjacent frame/field to interpolate the current block of an interpolated frame/field in accordance with a plurality of motion trajectories corresponding to the current block and the candidate blocks, the occurrence of blocking artifacts, which are caused by inaccurate estimation of a motion vector, can be prevented. Also, since all available motion trajectories are obtained in the candidate motion vectors without requiring additional processes, the hardware for the motion compensated interpolation is simplified. [0030]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and other objects and advantages of the invention will become apparent and more readily appreciated from the following description of the preferred embodiments, taken in conjunction with the accompanying drawings of which: [0031]
  • FIG. 1 is a block diagram of a conventional frame rate converting apparatus; [0032]
  • FIG. 2 is a view showing an image segmentation used in the conventional frame rate converting apparatus of FIG. 1; [0033]
  • FIG. 3 is a block diagram showing a pixel data selection device for motion compensated interpolation according to an embodiment of the present invention; [0034]
  • FIG. 4 is a view showing motion vectors of a current block and candidate blocks of an interpolated frame/field in the motion compensated interpolation of FIG. 3; [0035]
  • FIG. 5 is a view showing each motion trajectory of the current block and the candidate blocks to be interpolated in the motion compensated interpolation of FIG. 3; [0036]
  • FIG. 6 is a block diagram showing a motion compensated-interpolation portion of the pixel data selection device of FIG. 3; [0037]
  • FIG. 7 is a block diagram showing a pixel data selection device for motion compensated interpolation to obtain an adjacent field that has the same characteristic as a current field according to another embodiment of the present invention; and [0038]
  • FIG. 8 is a flowchart explaining a method of selecting pixel data for motion compensated interpolation of FIGS. 3 through 7.[0039]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Reference will now be made in detail to the present preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures. [0040]
  • An embodiment of the present invention will now be described with reference to the drawings. FIG. 3 is a block diagram of a pixel data selection device for motion compensated interpolation according to the embodiment of the present invention. [0041]
  • Referring to FIG. 3, the pixel data selection device includes a first delayer 300, a second delayer 310, a first storage portion 320, a first pixel data extract portion 330, a first optimum pixel data output portion 340, a second storage portion 350, a second pixel data extract portion 360, a second optimum pixel data output portion 370 and a motion compensated-interpolation (MCI) portion 380. [0042]
  • The first delayer 300 first delays the input frame/field once and outputs a resultant first frame/field. The second delayer 310 delays the first frame/field outputted from the first delayer 300 and outputs a resultant second frame/field. In an interlaced scanning method, if the first field represents an odd-numbered field, the second field represents an even-numbered field, and vice versa. [0043]
  • As the odd and even fields are inputted alternately according to the interlaced scanning method, the first delayer 300 outputs the first field that is delayed from the input field. For example, if the input field is an odd field, the first delayer 300 outputs an even first field. Likewise, if the input field is an even field, the first delayer 300 outputs an odd first field. The second delayer 310 outputs the second field that is further delayed from the first field. As a result, the second field becomes an odd second field or an even second field that corresponds to the even first field or odd first field output from the first delayer 300. [0044]
  • The first storage portion 320 stores the first pixel data corresponding to a motion vector of the first frame/field that is delayed from the input frame/field. The first pixel data extract portion 330 extracts individual pixel data corresponding to candidate motion vectors from the stored first pixel data of the first frame/field. The first pixel data extract portion 330 can also extract the individual pixel data corresponding to second candidate motion vectors which are obtained by median-filtering or average-filtering certain candidate motion vectors. [0045]
  • Here, the certain candidate motion vector can be a motion vector corresponding to a current block and adjacent blocks disposed adjacent to the current block of a current frame/field to be interpolated, or a plurality of motion vectors extracted during the motion estimation. Alternatively, a global motion vector detected during motion analysis estimation and a local motion vector detected from a block of a preceding frame/field at the same location as a block of a following frame/field can also be re-used as the candidate motion vector. [0046]
  • FIG. 4 is a view showing the motion vectors of the current block and candidate blocks. The motion vectors are used to construct the current block to be interpolated in the motion compensated interpolation. [0047]
  • When the motion estimation is inaccurate, blocking artifacts occur in an interpolated image. In order to eliminate the blocking artifacts, motion trajectories of the current block and the candidate block have to be taken into account in the motion compensated interpolation. [0048]
  • Referring to FIG. 4, the current block of the frame/field to be interpolated is denoted as V0, and the candidate blocks are denoted as Vi (i=1, 2, 3, 4, 5, 6, 7, 8). Provided that a movement within blocks is smooth, if the estimation of the motion vector of the current block is inaccurate, the inaccurate motion vector is replaced by an accurately estimated motion vector of the candidate block. That is, the inaccurate motion vector of the current block is replaced by the motion vector of the candidate block that has the least sum of absolute differences (SAD) among the motion vectors obtained from the candidate blocks, as sketched below. [0049]
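
A minimal sketch of this candidate-vector selection, with assumed details (boundary handling omitted, names illustrative): the candidate vectors V0..V8 are scored by the SAD along their trajectories between the preceding and following frames, and the vector with the least SAD is kept.

```python
import numpy as np

def select_candidate_vector(f_prev, f_next, top, left, candidate_mvs, block=8):
    """Return the candidate motion vector with the least SAD between the
    preceding and following frames along its trajectory through the block."""
    def sad(mv):
        dy, dx = mv[0] // 2, mv[1] // 2  # half-vector offsets around the middle frame
        a = f_prev[top - dy:top - dy + block, left - dx:left - dx + block]
        b = f_next[top + dy:top + dy + block, left + dx:left + dx + block]
        return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())
    return min(candidate_mvs, key=sad)
```
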
  • In this embodiment, the pixel data corresponding to the motion vectors are extracted by using the motion vectors of the current block and the candidate blocks (FIG. 4). However, the pixel data corresponding to the motion vectors can also be extracted by using the second candidate motion vectors. [0050]
  • FIG. 5 is a view showing the motion trajectory corresponding to the current block and the candidate blocks of the interpolated frame/field that is to be interpolated by using the estimated motion vectors of the current block and the candidate blocks. [0051]
  • As shown in FIG. 5, among the adjacent frames, Fn−1 is the (n−1)th frame, Fn is the nth frame, and Fn+1 is the (n+1)th frame. Fn−1 and Fn+1 are a preceding frame and a following frame, respectively, while Fn corresponds to the frame that is to be interpolated. [0052]
  • Conventionally, motion compensation was made with one motion trajectory for each block. If one of the estimated motion vectors is inaccurate, the block to be interpolated generates the blocking artifacts. Accordingly, in order to reduce an estimation error, the motion vectors have to be obtained through the estimation of the motion trajectories based on motion vector fields that are distributed in a denser pattern. [0053]
  • However, obtaining the densely distributed motion vector fields requires complex computation and additional hardware. Accordingly, instead of employing additional processes to add a plurality of motion trajectories applied to the current block to be interpolated, this embodiment uses a simple construction that re-uses motion vectors which are already estimated and used in the input frame/field. [0054]
  • As shown in FIG. 3, the first optimum pixel data output portion 340 outputs the optimum pixel data from the individual pixel data extracted by the first pixel data extract portion 330. The first optimum pixel data output portion 340 can use a median filter that selects a median value among a plurality of individual pixel data. Accordingly, the first optimum pixel data output portion 340 uses the median filter to select the median value, thereby outputting the optimum pixel data for motion compensated interpolation in consideration of a smooth motion trajectory among the blocks. [0055]
  • Alternatively, the first optimum pixel data output portion 340 can determine the individual pixel data that is the closest to the average value of the plurality of individual pixel data as the optimum pixel data, and output the determined pixel data. If a complicated motion rather than a translational (linear) motion exists in the motion compensation process, instead of using a complicated motion vector detected by the block matching algorithm, the first optimum pixel data output portion 340 determines the pixel data that does not use the motion information as the optimum pixel data and outputs the determined pixel data. Accordingly, the pixel data having a ‘zero (0)’ motion vector can be selected as the optimum pixel data. [0056]
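  • As a short, non-authoritative sketch of these two selection rules, the Python functions below pick the median of the individual pixel values extracted for the candidate vectors, or the value closest to their average; the plain-list representation and the sample numbers are assumptions of the example.

    import statistics

    def optimum_pixel_median(candidate_pixels):
        # Median of the individual pixel data extracted for the candidate motion vectors.
        return statistics.median(candidate_pixels)

    def optimum_pixel_closest_to_average(candidate_pixels):
        # Alternative rule: the individual pixel value closest to the average of all candidates.
        avg = sum(candidate_pixels) / len(candidate_pixels)
        return min(candidate_pixels, key=lambda p: abs(p - avg))

    # Hypothetical pixel values fetched along nine candidate trajectories;
    # the outlier 240 comes from an inaccurately estimated vector.
    pixels = [112, 118, 115, 240, 117, 114, 116, 113, 119]
    print(optimum_pixel_median(pixels))              # 116 - the outlier is suppressed
    print(optimum_pixel_closest_to_average(pixels))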
  • In the same way as described above, the optimum pixel data of the second frame/field is also selected. The second delayer 310 delays the first frame/field outputted from the first delayer 300 and outputs the resultant second frame/field. [0057]
  • The second storage portion 350 stores the second pixel data corresponding to the motion vectors of the second frame/field that is delayed from the first frame/field. The second pixel data extract portion 360 extracts individual pixel data corresponding to respective candidate motion vectors from the stored second pixel data of the second frame/field. The second pixel data extract portion 360 outputs the extracted pixel data for the block having the zero (0) candidate motion vector. The second pixel data extract portion 360 can also extract the individual pixel data through the estimation of the motion trajectory with the candidate motion vectors. [0058]
  • The second optimum pixel data output portion 370 outputs the optimum pixel data from the individual pixel data extracted by the second pixel data extract portion 360. The second optimum pixel data output portion 370 can use the median filter that outputs the median value among the plurality of the individual pixel data. By selecting the median value by using the median filter, the second optimum pixel data output portion 370 outputs the compensation pixel data in consideration of the smooth motion trajectory among the blocks. Further, the second optimum pixel data output portion 370 can determine the individual pixel data that is the closest to the average of the plurality of the individual pixel data as the compensation pixel data and output the determined compensation pixel data. [0059]
  • FIG. 6 is a block diagram showing the motion compensated interpolation (MCI) portion according to another embodiment of the present invention in detail. Referring to FIG. 6, the MCI portion 380 includes a motion compensation pixel data generating portion 381, a temporal average pixel data generating portion 382, a global motion compensation portion 383, a zero motion compensation portion 385, a local motion compensation portion 384, and a motion vector type selector 386. [0060]
  • The MCI portion 380 receives outputs from the first storage portion 320 and the first optimum pixel data output portion 340 corresponding to the first frame/field, and from the second storage portion 350 and the second optimum pixel data output portion 370 corresponding to the second frame/field, and then obtains interpolated fields or frames through different compensation processes according to the motion type of the blocks. [0061]
  • The MCI portion 380 performs the motion compensation by adaptively applying the motion compensation methods according to information about the motion vectors outputted from the pixel data selection device. The motion compensation pixel data generating portion 381 generates motion compensation pixel data with an average value as well as a median value obtained by median-filtering the pixel data of the preceding and following frames. The motion compensation pixel data generating portion 381 receives the outputs from the first and second storage portions 320, 350 and the first and second optimum pixel data output portions 340, 370, and generates the motion compensation pixel data by: [0062]
  • Pmc(x) = MEDIAN{lmed(x), rmed(x), (fn−1(x)+fn(x))/2},   Formula 1
  • wherein Pmc(x) is the motion compensation pixel data, lmed(x) is a median of the pixel data of the candidate blocks of the preceding frame/field, rmed(x) is a median of the pixel data of the candidate blocks of the following frame/field, and x and n are the spatial and time domain indices, respectively. [0063]
  • When there is no motion, the temporal average pixel data generating portion 382 generates temporal average pixel data based on a time average instead of the motion vectors. The temporal average pixel data generating portion 382 receives the outputs from the first and the second storage portions 320, 350 and the first and second optimum pixel data output portions 340, 370 and determines the temporal average pixel data by: [0064]
  • Pavg(x) = MEDIAN{fn−1(x), fn(x), (lmed(x)+rmed(x))/2},   Formula 2
  • wherein Pavg(x) is the temporal average pixel data. [0065]
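  • A minimal Python sketch of Formula 1 and Formula 2 follows, assuming scalar pixel values at one position x and taking lmed and rmed as the medians already produced for the candidate blocks of the preceding and following frame/field; the function names are assumptions of the example.

    def median3(a, b, c):
        # Three-tap median used by both generating portions.
        return sorted((a, b, c))[1]

    def motion_compensation_pixel(l_med, r_med, f_prev, f_next):
        # Formula 1: Pmc(x) = MEDIAN{ lmed(x), rmed(x), (fn-1(x) + fn(x)) / 2 }
        return median3(l_med, r_med, (f_prev + f_next) / 2.0)

    def temporal_average_pixel(l_med, r_med, f_prev, f_next):
        # Formula 2: Pavg(x) = MEDIAN{ fn-1(x), fn(x), (lmed(x) + rmed(x)) / 2 }
        return median3(f_prev, f_next, (l_med + r_med) / 2.0)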
  • For example, when the motion vectors are accurate, the MCI portion 380 performs the motion compensation using the motion vectors. In this case, the motion compensation pixel data is needed. When there is no motion, the MCI portion 380 performs the motion compensation simply using the averages along the temporal axis instead of using the motion vectors. In this case, the temporal average pixel data is needed. [0066]
  • Also, there is a situation where the estimated motion vectors are inaccurate although a motion occurs. In this case, the MCI portion 380 soft-switches to a median value between the motion compensation pixel data and the temporal average pixel data according to the accuracy of the estimated motion vectors. [0067]
  • The global motion compensation portion 383 performs the motion compensation by using the motion vectors. The global motion compensation portion 383 uses the optimum pixel data that is selected by the pixel data selection device for motion compensation. The ‘global motion’ means an entire scene (image) moving in a direction at a constant speed. In a case of the global motion, substantially the same compensation effect can be obtained by applying the general MCI method. [0068]
  • If the current block is the global motion type, the interpolated frame fi(x) between the two adjacent frames is determined by: [0069]
  • fi(x) = (1/2)[fn−1(x + V) + fn(x − V)],   Formula 3
  • wherein V is the motion vector of the current block, and fi, fn−1, and fn are, respectively, the interpolated frame and the consecutively input preceding and following frames. [0070]
  • Meanwhile, the right side of the above equation (Formula 3) can be replaced by the output Pmc(x) of the motion compensation pixel data generating portion 381. In this case, the output of the global motion compensation portion 383 is the left side fi(x) of the above equation (Formula 3). [0071]
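  • For the global motion case, the averaging of Formula 3 can be sketched as below for an integer motion vector applied to whole frames; the array layout, the use of np.roll for shifting (which wraps pixels around at the borders instead of clipping them), and the function name are assumptions of this example.

    import numpy as np

    def interpolate_global(f_prev, f_next, mv):
        # Formula 3: fi(x) = 0.5 * [ fn-1(x + V) + fn(x - V) ] for an integer vector V = (dy, dx).
        dy, dx = mv
        shifted_prev = np.roll(f_prev, shift=(-dy, -dx), axis=(0, 1))  # samples fn-1(x + V)
        shifted_next = np.roll(f_next, shift=(dy, dx), axis=(0, 1))    # samples fn(x - V)
        return 0.5 * (shifted_prev.astype(np.float64) + shifted_next.astype(np.float64))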
  • For the block that has no motion, the zero motion compensation portion 385 performs the compensation not by using the motion vector, but by simply obtaining an average on the temporal axis. The zero motion compensation portion 385 receives the pixel data selected by the pixel data selection device, and performs the compensation by using the time average of the pixel data. [0072]
  • If the current block is the zero motion type block, the current block of the interpolated frame fi(x) between the two adjacent frames is determined by: [0073]
  • fi(x) = (1/2)[fn−1(x) + fn(x)].   Formula 4
  • The right side of the above equation (Formula 4) can be replaced by the output Pavg(x) of the temporal average pixel data generating portion 382, and in this case, the output of the zero motion compensation portion 385 is fi(x). [0074]
  • The local motion compensation portion 384 performs the compensation with respect to local motions, that is, motions, such as those of a moving object, that are not reliably estimated through block matching. In this case, according to the accuracy of the estimated motion vectors, the local motion compensation portion 384 soft-switches to the median value between the motion compensation pixel data and the temporal average pixel data. Since the motion vectors detected for the local motion are sometimes inaccurate, blocking artifacts can occur when the general MCI method is applied; an adaptive MCI method is therefore used to reduce the blocking artifacts. [0075]
  • If the current block is a local motion type block, the pixel data for interpolation is determined by: [0076]
  • fi(x) = M·Pavg + (1−M)·Pmc, 0 ≦ M ≦ 1.   Formula 5
  • Then, by using the motion vectors of the adjacent blocks, the motion compensation pixel data is determined from the preceding frame and the current frame, and the median value of the respective pixel data is obtained. The median values lmed and rmed are obtained by: [0077]
  • lmed(x) = MEDIAN{li},
  • rmed(x) = MEDIAN{ri}, i = 0, 1, . . . , N.   Formula 6
  • By the above equations (Formula 6), a smoothing effect on the motion vectors is obtained when the motion vector of the current block is inaccurate. [0078]
  • Further, li and ri for lmed and rmed are obtained by: [0079]
  • li(x) = fn−1(x + Vi),
  • ri(x) = fn(x + Vi), i = 0, 1, . . . , N   Formula 7
  • wherein Vi denotes the motion vectors of the current block and the candidate blocks. [0080]
  • Also, the motion analysis portion (not shown) calculates the correlation of the motion vectors of the candidate blocks, thereby determining a soft switching value M. The determined soft switching value M is used to switch between the motion compensation pixel data and the non-motion compensation pixel data. The soft switching value M is determined by: [0081]
  • M = Σ (i=0 to 8) |Vi − V0|.   Formula 8
  • If the current block is a local motion type block, the local motion compensation portion 384 determines the pixel data to be interpolated between Pmc and Pavg according to the soft switching value M. [0082]
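  • The local-motion path (Formulas 5 through 8) can be pictured with the sketch below: the medians lmed and rmed are formed from the candidate-trajectory pixels, the soft switching value M grows with the spread of the candidate vectors around V0, and the interpolated pixel blends Pavg and Pmc accordingly. The clamping of M into [0, 1], the scale constant, and the use of an L1 vector difference are assumptions added so the blend stays well defined.

    import statistics

    def soft_switch_value(v0, candidate_vectors, scale=64.0):
        # Formula 8: spread of the candidate vectors around V0, here clamped to [0, 1];
        # the 'scale' normalisation is an assumption of this sketch.
        spread = sum(abs(vy - v0[0]) + abs(vx - v0[1]) for vy, vx in candidate_vectors)
        return min(1.0, spread / scale)

    def local_motion_pixel(l_pixels, r_pixels, f_prev, f_next, m):
        # Formula 6: lmed and rmed are medians of the candidate-trajectory pixels (Formula 7).
        l_med = statistics.median(l_pixels)
        r_med = statistics.median(r_pixels)
        # Pmc (Formula 1) and Pavg (Formula 2), written out as three-tap medians.
        p_mc = sorted((l_med, r_med, (f_prev + f_next) / 2.0))[1]
        p_avg = sorted((f_prev, f_next, (l_med + r_med) / 2.0))[1]
        # Formula 5: soft switch between the temporal average and the motion-compensated value.
        return m * p_avg + (1.0 - m) * p_mc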
  • When the motion vector type is determined by the motion analysis portion (not shown), the motion vector type selector 386 selects the pixel data of one of the global motion compensation portion 383, the local motion compensation portion 384, and the zero motion compensation portion 385, each corresponding to an inputted motion vector type. Accordingly, the MCI portion 380 adaptively performs the compensation by selectively applying the compensation method according to the selected motion vector type. [0083]
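  • This adaptive selection amounts to a simple dispatch on the block's motion type, sketched below with the three compensation outputs represented by values computed as in the earlier sketches; the string labels and the function signature are assumptions for illustration.

    def interpolate_block(motion_type, p_mc, p_avg, m):
        # 'global' -> motion compensated value, 'zero' -> temporal average,
        # 'local'  -> soft switch between the two (Formula 5).
        if motion_type == 'global':
            return p_mc
        if motion_type == 'zero':
            return p_avg
        if motion_type == 'local':
            return m * p_avg + (1.0 - m) * p_mc
        raise ValueError('unknown motion vector type: %r' % motion_type)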
  • FIG. 7 is a block diagram of the pixel data selection device for motion compensation in the neighboring fields having the same characteristic. [0084]
  • Referring to FIG. 7, the pixel data selection device includes the first delayer 300, the second delayer 310, the first storage portion 320, the first pixel data extract portion 330, the first optimum pixel data output portion 340, the second storage portion 350, the second pixel data extract portion 360, the second optimum pixel data output portion 370 and the motion compensated interpolation portion 380. [0085]
  • Description of the elements that are the same as those of FIG. 3 will be omitted. The difference between FIGS. 3 and 7 is that while FIG. 3 shows the pixel data selection device that selects pixel data for motion compensation between any two adjacent frames/fields, FIG. 7 shows the pixel data selection device that selects pixel data for motion compensation between two neighboring fields having the same characteristic. [0086]
  • Here, the ‘two neighboring fields having the same characteristic’ means that if the first field, which is delayed once from the input field, is an odd field, the second field is an odd field that is obtained by further delaying the first field. Likewise, if the first field is an even field, the second field is also an even field. [0087]
  • The first delayer 300 first delays the input field and outputs the first field as an odd first field. Then, the second delayer 310 further delays the odd first field and outputs the second field as the odd second field. [0088]
  • The second delayer 310 has a delayer-a 312 and a delayer-b 314. The delayer-a 312 delays the odd first field and outputs an even field, while the delayer-b 314 delays the even field from the delayer-a 312 one more time and outputs the odd second field. In this way, the first and the second pixel data extract portions 330, 360 extract the pixel data for motion compensation from the adjacent fields having the same characteristic, i.e., from the odd first field and odd second field, or from the even first field and even second field. [0089]
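  • As a small, self-contained sketch of this delay chain (assuming fields simply arrive one by one and are buffered), the Python class below mirrors the first delayer plus delayer-a and delayer-b: the first field is the input delayed once, and the second field is the first field delayed twice more, so the two always share the same parity. The deque-based buffer and the string field labels are assumptions of the example.

    from collections import deque

    class FieldDelayChain:
        def __init__(self):
            # buffer[0] = current input field, buffer[1] = first field (1 delay),
            # buffer[2] = delayer-a output (2 delays), buffer[3] = second field (3 delays).
            self.buffer = deque(maxlen=4)

        def push(self, field):
            self.buffer.appendleft(field)

        def same_parity_pair(self):
            # The first field and the second field are two field periods apart,
            # hence of the same parity (odd/odd or even/even).
            if len(self.buffer) < 4:
                return None
            return self.buffer[1], self.buffer[3]

    # Usage: fields alternate odd/even; the returned pairs are (odd, odd) or (even, even).
    chain = FieldDelayChain()
    for field in ['odd0', 'even0', 'odd1', 'even1', 'odd2']:
        chain.push(field)
        pair = chain.same_parity_pair()
        if pair:
            print(pair)   # ('odd1', 'odd0') then ('even1', 'even0')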
  • FIG. 8 is a flowchart showing a method of selecting the pixel data for motion compensated interpolation according to another embodiment of the present invention. [0090]
  • Referring to FIG. 8, the input frame/field is delayed a first time and a second time to be output as the first and second frame/fields, respectively, in operation S800. The first delayer 300 delays the input frame/field and outputs the resultant frame/field, i.e., the first frame/field. The second delayer 310 further delays the first frame/field output from the first delayer 300 and outputs the second frame/field. In the interlaced scanning method, if the first field is an odd field, the second field will be the even field corresponding to the odd first field, and if the first field is an even field, the second field will be the odd field corresponding to the even first field. [0091]
  • The first and second storage portions 320, 350 store the first pixel data and the second pixel data in operation S810. The first pixel data corresponds to the motion vector of the first frame/field, which is firstly delayed from the input frame/field by the first delayer 300, while the second pixel data corresponds to the motion vector of the second frame/field, which is secondly delayed from the first frame/field by the second delayer 310. When the motion estimation is inaccurate, the blocking artifacts occur in the interpolated image. In order to reduce the occurrence of the blocking artifacts, the pixel data is determined by taking the motion trajectory of the current block and the candidate blocks into account in motion compensation. [0092]
  • The individual pixel data corresponding to the candidate motion vector is extracted from the first and the second pixel data in operation S820. Here, the candidate motion vector can be the motion vector of the current block and the candidate blocks, or it can be a plurality of candidate motion vectors extracted during the motion estimation. The global motion vector detected from the motion analysis and the local motion vector detected from the preceding frame/field at the same location as a block of the following (current) frame/field can also be re-used as the candidate motion vector. [0093]
  • When the individual pixel data are extracted in operation S820, the optimum pixel data for motion compensation are output from the extracted individual pixel data in operation S830. Here, the optimum pixel data for motion compensation can be output by using a median filter which outputs the median pixel data of the plurality of individual pixel data. By selecting the median pixel data through the median filter, the optimum pixel data for motion compensation can be output in consideration of the smooth motion trajectory. Also, the individual pixel data that is the closest to the average of the plurality of individual pixel data can be determined and output as the optimum pixel data for motion compensation. [0094]
  • Although one method selects the pixel data for motion compensation from the adjacent fields having the same characteristic, and the other method selects the pixel data for motion compensation from the two adjacent frames/fields, these two methods are based on the same principle. Accordingly, the pixel data for motion compensation can be selected from the fields of the same characteristic through operations S810 through S830. [0095]
  • According to the pixel data selection device for motion compensated interpolation and a method thereof, by selecting pixel data from the adjacent frame/fields to interpolate the current block in consideration of the plurality of motion trajectories, the occurrence of blocking artifacts due to inaccurate estimation of motion vectors can be reduced. Also, since all motion trajectories about the current block and the candidate blocks are taken into account without requiring additional processes, the hardware for the motion compensated interpolation is simplified. Also, by analyzing the motion vector fields and categorizing the motions of the respective blocks into ‘global’, ‘local’ and ‘zero’ blocks, processes can be performed adaptively according to the categorized blocks, and therefore, the occurrence of the blocking artifacts due to inaccurate estimation of the motions can be reduced. [0096]
  • Although a few preferred embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in this embodiment without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents. [0097]

Claims (45)

What is claimed is:
1. A pixel data selection device for motion compensated interpolation, comprising:
a first storage portion and a second storage portion respectively storing first pixel data and second pixel data, the first pixel data corresponding to a first motion vector of a first frame/field that is obtained by delaying an input frame/field, the second pixel data corresponding to a second motion vector of a second frame/field that is obtained by delaying the first frame/field one or more times;
a first pixel data extract portion and a second pixel data extracting portion respectively extracting individual pixel data from the first and the second pixel data corresponding to a plurality of candidate motion vectors; and
a first optimum pixel output portion and a second optimum pixel outputting portion outputting optimum pixel data for the motion compensated interpolation from the extracted individual pixel data.
2. The pixel data selection device of claim 1, further comprising a motion compensated interpolation portion forming interpolated pixel data from the optimum pixel data, wherein the candidate motion vectors are motion vectors corresponding to a current block formed with the interpolated pixel data and an adjacent block disposed adjacent to the current block.
3. The pixel data selection device of claim 1, wherein the candidate motion vectors are a motion vector extracted during motion estimation of the input frame/field and used for forming the input frame/field.
4. The pixel data selection device of claim 1, wherein the candidate motion vectors are a global motion vector detected during motion estimation of the input frame/field and a local motion vector detected from a block of the first frame/field at the same location as a corresponding block of the second frame/field.
5. The pixel data selection device of claim 1, wherein the first and the second pixel data extract portions extract the individual pixel data on the basis of a second candidate motion vector that is obtained by median-filtering and/or average-filtering of the candidate motion vectors.
6. The pixel data selection device of claim 1, further comprising:
a first delay portion outputting the first frame/field by delaying the input frame/field; and
a second delay portion outputting a second frame/field by delaying the first frame/field.
7. The pixel data selection device of claim 1, wherein the first and the second pixel data extract portions selectively output pixel data of a block having a zero candidate motion vector as the individual pixel data.
8. The pixel data selection device of claim 1, wherein the first and second pixel data extract portions extract the individual pixel data by estimating a motion trajectory corresponding to respective ones of the candidate motion vectors.
9. The pixel data selection device of claim 1, wherein the first and the second optimum pixel output portions each comprises a median filter that outputs a median value of the individual pixel data.
10. The pixel data selection device of claim 1, wherein the first and the second optimum pixel data output portions determine the individual pixel data closest to an average of the individual pixel data as the optimum pixel data and output the determined pixel data for motion compensation.
11. The pixel data selection device of claim 1, further comprising a delayer delaying the first frame/field to generate the second field having the same characteristic as the first field.
12. The pixel data selection device of claim 11, wherein the delayer comprises:
a first delay portion outputting the first field, which is obtained by first-delaying the input field; and
a second delay portion outputting the second field, which is obtained by second-delaying the first field.
13. A method of selecting pixel data for motion compensated interpolation, comprising:
extracting individual pixel data corresponding to one or more candidate motion vectors from first pixel data of an input first frame/field and second pixel data of an input second frame/field; and
outputting optimum pixel data for the motion compensated interpolation from the extracted individual pixel data to form interpolated pixel data of an interpolated current block.
14. The pixel data selection method of claim 13, further comprising:
selecting the candidate motion vectors from motion vectors corresponding to the current block and an adjacent block disposed adjacent the current block.
15. The pixel data selection method of claim 13, further comprising:
selecting the candidate motion vector from motion vectors extracted during motion estimation of the input first frame/field and the input second frame/field.
16. The pixel data selection method of claim 13, further comprising:
selecting the candidate motion vector from a global motion vector detected during motion analysis of the input first frame/field and the input second frame/field and a motion vector detected from a block of the input first frame/field at the same location as a block of the input second frame/field.
17. The pixel data selection method of claim 13, wherein the extracting of the pixel data comprises:
extracting the individual pixel data on the basis of a second candidate motion vector that is obtained by median filtering and/or average filtering the candidate motion vectors.
18. The pixel data selection method of claim 13, further comprising:
outputting the first frame/field, which is obtained by delaying an inputted frame once; and
outputting the second frame/field, which is obtained by delaying the first frame/field at least once.
19. The pixel data selection method of claim 13, wherein the extracting of the pixel data comprises:
selectively outputting pixel data of a block having a zero candidate motion vector as the optimum pixel data.
20. The pixel data selection method of claim 13, wherein the extracting of the pixel data comprises:
generating the individual pixel data by estimating a motion trajectory corresponding to the candidate motion vectors.
21. The pixel data selection method of claim 13, wherein the outputting of the optimum pixel data comprises:
generating a median value of the individual pixel data.
22. The pixel data selection method of claim 13, wherein the outputting of the optimum pixel data comprises:
determining the individual pixel data closest to an average of the individual pixel data as the optimum pixel data for motion compensation; and
outputting the determined pixel data to form the current block.
23. The pixel data selection method of claim 13, wherein the extracting of the individual pixel data comprises:
extracting the individual pixel data corresponding to respective candidate motion vectors from the first pixel data and the second pixel data, the input second frame/field having the same characteristic as that of the input first field.
24. The pixel data selection method of claim 23, wherein the extracting of the individual pixel data comprises:
outputting the input first field, which is delayed from an input frame/field once; and
outputting the input second field, which is delayed from the input first frame/field at least once.
25. The pixel data selection method of claim 13, wherein the extracting of the individual pixel data comprises:
generating first individual pixel data extracted from the first pixel data corresponding to respective candidate motion vectors and second individual pixel data extracted from the second pixel data corresponding to respective candidate motion vectors.
26. The pixel data selection method of claim 25, wherein the outputting of the optimum pixel data comprises:
generating a first median value of the first individual pixel data and a second median value of the second individual pixel data; and
generating motion compensation pixel data having a third median value of a combination of the first median value, the second median value, and an averaging value of the first and second pixel data to form the interpolated pixel data.
27. The pixel data selection method of claim 26, wherein the outputting of the motion compensated pixel data comprises:
generating temporal average pixel data having an average value of the first individual pixel data and the second individual pixel data to form the interpolated pixel data.
28. The pixel data selection method of claim 27, wherein the outputting of the motion compensated pixel data comprises:
forming the interpolated pixel data by a combination of the motion compensation pixel data and the temporal average pixel data in a predetermined ratio.
29. The pixel data selection method of claim 25, wherein the outputting of the optimum pixel data comprises:
generating an average of the first pixel data and the second pixel data to form the interpolated block when the candidate motion vector represents no motion in the input first and second frames/fields corresponding to the current block.
30. A pixel data selection device for motion compensated interpolation to form an interpolated block, comprising:
a pixel data selector receiving a first frame/field and a second frame/field having candidate blocks corresponding to a plurality of candidate motion vectors of the interpolated block and selecting pixel data corresponding to the candidate blocks of the first frame/field and the second frame/field; and
a motion compensated interpolation portion forming the interpolated block in response to the selected pixel data.
31. The device of claim 30, wherein the pixel data selector generates the candidate motion vectors corresponding to the interpolated block and adjacent blocks disposed adjacent to the interpolated block.
32. The device of claim 31, wherein the number of the adjacent blocks is at least 4.
33. The device of claim 30, wherein the pixel data selector generates the pixel data corresponding to a predetermined block having a least sum of absolute difference (SAD) among candidate motion vectors as the selected pixel data.
34. The device of claim 30, wherein the pixel data selector determines the candidate blocks disposed in both the first frame/field and the second frame/field corresponding to the candidate motion vectors.
35. The device of claim 30, wherein one of the candidate motion vectors passes through the interpolated block.
36. The device of claim 30, wherein the pixel data selector determines a plurality of motion trajectories relating to corresponding blocks of the first frame/field and the second frame/field to select the pixel data.
37. The device of claim 30, wherein the first frame/field and the second frame/field are an even numbered frame/field and an odd numbered frame/field, respectively.
38. The device of claim 30, wherein the candidate motion vectors are motion vectors used to construct the first frame/field and the second frame/field.
39. The device of claim 30, wherein the interpolated block is disposed between the first field and the second field in a temporal axis.
40. The device of claim 30, wherein the pixel data selector comprises a median filter generating a median value of the candidate blocks as the selected pixel data.
41. The device of claim 30, wherein the pixel data comprises first pixel data selected from first blocks of the first frame/field and second pixel data selected from second blocks of the second frame/field, and the motion compensated interpolation portion generates pixel data to form the interpolated block in response to the first pixel data and the second pixel data.
42. The device of claim 41, wherein the pixel data selector generates a first median value of the first pixel data and a second median value of the second pixel data, and the motion compensated interpolation portion generates motion compensation pixel data having a median value of a combination of the first median value, the second median value, and an averaging value of the first and second pixel data to form the interpolated block.
43. The device of claim 42, wherein the motion compensated interpolation portion generates temporal average pixel data having the average value of the first pixel data and the second pixel data to form the interpolated block.
44. The device of claim 43, wherein the motion compensated interpolation portion generates motion compensation pixel data and temporal average pixel data and forms the interpolated block having a first portion of the motion compensation pixel data and a second portion of temporal average pixel data.
45. The device of claim 44, wherein the candidate motion vectors comprise one of a global motion portion, a local motion portion, and a zero motion portion, and the motion compensated interpolation portion forms the interpolated block having a combination of the motion compensation pixel data and the temporal average pixel data in a predetermined ratio in response to the global motion portion, the local motion portion, and the zero motion portion of the candidate motion vectors.
US10/219,295 2001-11-30 2002-08-16 Pixel data selection device for motion compensated interpolation and method thereof Expired - Fee Related US7720150B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2001-0075535A KR100412501B1 (en) 2001-11-30 2001-11-30 Pixel-data selection device for motion compensation and method of the same
KR2001-75535 2001-11-30

Publications (2)

Publication Number Publication Date
US20030103568A1 true US20030103568A1 (en) 2003-06-05
US7720150B2 US7720150B2 (en) 2010-05-18

Family

ID=19716515

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/219,295 Expired - Fee Related US7720150B2 (en) 2001-11-30 2002-08-16 Pixel data selection device for motion compensated interpolation and method thereof

Country Status (5)

Country Link
US (1) US7720150B2 (en)
JP (1) JP2003174628A (en)
KR (1) KR100412501B1 (en)
CN (1) CN1237796C (en)
NL (1) NL1021982C2 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100703788B1 (en) * 2005-06-10 2007-04-06 삼성전자주식회사 Video encoding method, video decoding method, video encoder, and video decoder, which use smoothing prediction
WO2007007225A2 (en) * 2005-07-12 2007-01-18 Nxp B.V. Method and device for removing motion blur effects
GB2432068B (en) * 2005-11-02 2010-10-06 Imagination Tech Ltd Motion estimation
KR20070069615A (en) 2005-12-28 2007-07-03 삼성전자주식회사 Motion estimator and motion estimating method
US20080018788A1 (en) * 2006-07-20 2008-01-24 Samsung Electronics Co., Ltd. Methods and systems of deinterlacing using super resolution technology
JP4181598B2 (en) * 2006-12-22 2008-11-19 シャープ株式会社 Image display apparatus and method, image processing apparatus and method
JP5023780B2 (en) * 2007-04-13 2012-09-12 ソニー株式会社 Image processing apparatus, image processing method, and program
US20080304568A1 (en) * 2007-06-11 2008-12-11 Himax Technologies Limited Method for motion-compensated frame rate up-conversion
US20090207314A1 (en) * 2008-02-14 2009-08-20 Brian Heng Method and system for motion vector estimation using a pivotal pixel search
KR101498124B1 (en) 2008-10-23 2015-03-05 삼성전자주식회사 Apparatus and method for improving frame rate using motion trajectory
US8471959B1 (en) * 2009-09-17 2013-06-25 Pixelworks, Inc. Multi-channel video frame interpolation
JP5782787B2 (en) * 2011-04-01 2015-09-24 ソニー株式会社 Display device and display method
GB201113527D0 (en) 2011-08-04 2011-09-21 Imagination Tech Ltd External vectors in a motion estimation system
CN104202555B (en) * 2014-09-29 2017-10-20 建荣集成电路科技(珠海)有限公司 Interlace-removing method and device
KR102089433B1 (en) * 2017-09-14 2020-03-16 국방과학연구소 Multidirectional hierarchical motion estimation method for video coding tool
US10817996B2 (en) * 2018-07-16 2020-10-27 Samsung Electronics Co., Ltd. Devices for and methods of combining content from multiple frames
CN109151538B (en) 2018-09-17 2021-02-05 深圳Tcl新技术有限公司 Image display method and device, smart television and readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS61159880A (en) 1985-01-07 1986-07-19 Nippon Hoso Kyokai <Nhk> System converting device
GB2277002B (en) 1993-04-08 1997-04-09 Sony Uk Ltd Motion compensated video signal processing
DE69727911D1 (en) 1997-04-24 2004-04-08 St Microelectronics Srl Method for increasing the motion-estimated and motion-compensated frame rate for video applications, and device for using such a method
US6442203B1 (en) 1999-11-05 2002-08-27 Demografx System and method for motion compensation and frame rate conversion

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5317397A (en) * 1991-05-31 1994-05-31 Kabushiki Kaisha Toshiba Predictive coding using spatial-temporal filtering and plural motion vectors
US5198901A (en) * 1991-09-23 1993-03-30 Matsushita Electric Corporation Of America Derivation and use of motion vectors in a differential pulse code modulation system
US5534946A (en) * 1992-05-15 1996-07-09 U.S. Philips Corporation Apparatus for performing motion-compensated picture signal interpolation
US5495300A (en) * 1992-06-11 1996-02-27 U.S. Philips Corporation Motion-compensated picture signal interpolation
US5400087A (en) * 1992-07-06 1995-03-21 Mitsubishi Denki Kabushiki Kaisha Motion vector detecting device for compensating for movements in a motion picture
US5825418A (en) * 1993-09-14 1998-10-20 Goldstar, Co., Ltd. B-frame processing apparatus including a motion compensation apparatus in the unit of a half pixel for an image decoder
US5550591A (en) * 1993-10-28 1996-08-27 Goldstar Co., Ltd. Motion compensator for digital image restoration
US5565922A (en) * 1994-04-11 1996-10-15 General Instrument Corporation Of Delaware Motion compensation for interlaced digital video signals
US5539469A (en) * 1994-12-30 1996-07-23 Daewoo Electronics Co., Ltd. Apparatus for determining motion vectors through the use of an adaptive median filtering technique
US5909511A (en) * 1995-03-20 1999-06-01 Sony Corporation High-efficiency coding method, high-efficiency coding apparatus, recording and reproducing apparatus, and information transmission system
US5754240A (en) * 1995-10-04 1998-05-19 Matsushita Electric Industrial Co., Ltd. Method and apparatus for calculating the pixel values of a block from one or two prediction blocks
US5719631A (en) * 1995-11-07 1998-02-17 Siemens Aktiengesellschaft Method and apparatus for coding a video data stream of a video sequence formed of image blocks
US5995154A (en) * 1995-12-22 1999-11-30 Thomson Multimedia S.A. Process for interpolating progressive frames
US6058212A (en) * 1996-01-17 2000-05-02 Nec Corporation Motion compensated interframe prediction method based on adaptive motion vector interpolation
US6188727B1 (en) * 1996-09-13 2001-02-13 Lg Electronics Inc. Simplicity HDTV video decoder and its decoding method
US6658059B1 (en) * 1999-01-15 2003-12-02 Digital Video Express, L.P. Motion field modeling and estimation using motion transform
US7333545B2 (en) * 1999-03-30 2008-02-19 Sony Corporation Digital video decoding, buffering and frame-rate converting method and apparatus
US20020036705A1 (en) * 2000-06-13 2002-03-28 Samsung Electronics Co., Ltd. Format converter using bi-directional motion vector and method thereof
US6504872B1 (en) * 2000-07-28 2003-01-07 Zenith Electronics Corporation Down-conversion decoder for interlaced video
US7006157B2 (en) * 2002-02-19 2006-02-28 Samsung Electronics Co., Ltd. Apparatus and method for converting frame rate
US7336838B2 (en) * 2003-06-16 2008-02-26 Samsung Electronics Co., Ltd. Pixel-data selection device to provide motion compensation, and a method thereof

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040213470A1 (en) * 2002-04-25 2004-10-28 Sony Corporation Image processing apparatus and method
US7555043B2 (en) * 2002-04-25 2009-06-30 Sony Corporation Image processing apparatus and method
US8290040B2 (en) * 2002-11-26 2012-10-16 British Telecommunications Public Limited Company Method and system for estimating global motion in video sequences
US20060023786A1 (en) * 2002-11-26 2006-02-02 Yongmin Li Method and system for estimating global motion in video sequences
US20060072663A1 (en) * 2002-11-26 2006-04-06 British Telecommunications Public Limited Company Method and system for estimating global motion in video sequences
US8494051B2 (en) * 2002-11-26 2013-07-23 British Telecommunications Plc Method and system for estimating global motion in video sequences
US7145950B2 (en) * 2003-07-14 2006-12-05 Primax Electronics Ltd. Method of motion vector determination in digital video compression
US20050013364A1 (en) * 2003-07-14 2005-01-20 Chun-Ming Hsu Method of motion vector determination in digital video compression
US8503531B2 (en) * 2004-03-29 2013-08-06 Sony Corporation Image processing apparatus and method, recording medium, and program
US20050213663A1 (en) * 2004-03-29 2005-09-29 Koji Aoyama Image processing apparatus and method, recording medium, and program
US20090073311A1 (en) * 2005-04-28 2009-03-19 Koichi Hamada Frame rate conversion device and video display device
US20060262853A1 (en) * 2005-05-20 2006-11-23 Microsoft Corporation Low complexity motion compensated frame interpolation method
US8018998B2 (en) * 2005-05-20 2011-09-13 Microsoft Corporation Low complexity motion compensated frame interpolation method
US9860554B2 (en) * 2007-01-26 2018-01-02 Telefonaktiebolaget Lm Ericsson (Publ) Motion estimation for uncovered frame regions
US20100027667A1 (en) * 2007-01-26 2010-02-04 Jonatan Samuelsson Motion estimation for uncovered frame regions
TWI418212B (en) * 2007-06-12 2013-12-01 Himax Tech Ltd Method of frame interpolation for frame rate up-conversion
US8150196B2 (en) * 2007-12-14 2012-04-03 Intel Corporation Reduction filter based on smart neighbor selection and weighting (NRF-SNSW)
US20090154825A1 (en) * 2007-12-14 2009-06-18 Intel Corporation Reduction filter based on smart neighbor selection and weighting (nrf-snsw)
US20090161978A1 (en) * 2007-12-20 2009-06-25 Marinko Karanovic Halo Artifact Removal Method
US8798130B2 (en) * 2008-05-29 2014-08-05 Olympus Corporation Image processing apparatus, electronic device, image processing method, and storage medium storing image processing program
US20110069762A1 (en) * 2008-05-29 2011-03-24 Olympus Corporation Image processing apparatus, electronic device, image processing method, and storage medium storing image processing program
US10863194B2 (en) 2009-07-03 2020-12-08 Intel Corporation Methods and systems for motion vector derivation at a video decoder
US9955179B2 (en) 2009-07-03 2018-04-24 Intel Corporation Methods and systems for motion vector derivation at a video decoder
US11765380B2 (en) 2009-07-03 2023-09-19 Tahoe Research, Ltd. Methods and systems for motion vector derivation at a video decoder
US10404994B2 (en) 2009-07-03 2019-09-03 Intel Corporation Methods and systems for motion vector derivation at a video decoder
US20110206127A1 (en) * 2010-02-05 2011-08-25 Sensio Technologies Inc. Method and Apparatus of Frame Interpolation
US20120294370A1 (en) * 2010-10-06 2012-11-22 Yi-Jen Chiu System and method for low complexity motion vector derivation
US9509995B2 (en) 2010-12-21 2016-11-29 Intel Corporation System and method for enhanced DMVD processing
EA016695B1 (en) * 2011-09-01 2012-06-29 Закрытое Акционерное Общество "Импульс" Method of reducing noise in image
EP2565842A1 (en) * 2011-09-01 2013-03-06 ZAO Impuls Method of image noise reduction
US8797426B2 (en) 2011-09-01 2014-08-05 Zakrytoe akcionernoe obshchestvo “Impul's” Method of image noise reduction
US20140063031A1 (en) * 2012-09-05 2014-03-06 Imagination Technologies Limited Pixel buffering
US11587199B2 (en) 2012-09-05 2023-02-21 Imagination Technologies Limited Upscaling lower resolution image data for processing
US10109032B2 (en) * 2012-09-05 2018-10-23 Imagination Technologies Limted Pixel buffering
CN111345041A (en) * 2017-09-28 2020-06-26 Vid拓展公司 Complexity reduction for overlapped block motion compensation

Also Published As

Publication number Publication date
US7720150B2 (en) 2010-05-18
CN1422074A (en) 2003-06-04
KR100412501B1 (en) 2003-12-31
NL1021982C2 (en) 2005-12-15
KR20030044691A (en) 2003-06-09
CN1237796C (en) 2006-01-18
JP2003174628A (en) 2003-06-20
NL1021982A1 (en) 2003-06-03

Similar Documents

Publication Publication Date Title
US7720150B2 (en) Pixel data selection device for motion compensated interpolation and method thereof
US7057665B2 (en) Deinterlacing apparatus and method
US6900846B2 (en) Format converter using bi-directional motion vector and method thereof
US6985187B2 (en) Motion-adaptive interpolation apparatus and method thereof
US7042512B2 (en) Apparatus and method for adaptive motion compensated de-interlacing of video data
US6940557B2 (en) Adaptive interlace-to-progressive scan conversion algorithm
US7075988B2 (en) Apparatus and method of converting frame and/or field rate using adaptive motion compensation
KR100611517B1 (en) Method and system for interpolation of digital signals
EP1223748B1 (en) Motion detection in an interlaced video signal
US20030161403A1 (en) Apparatus for and method of transforming scanning format
JP2011508516A (en) Image interpolation to reduce halo
NL1027270C2 (en) The interlining device with a noise reduction / removal device.
KR100484182B1 (en) Apparatus and method for deinterlacing
KR100967521B1 (en) Equipment and method for de-interlacing using directional correlations
JP2007520966A (en) Motion compensated deinterlacing with film mode adaptation
KR100968642B1 (en) Method and interpolation device for calculating a motion vector, display device comprising the interpolation device, and computer program
US6760376B1 (en) Motion compensated upconversion for video scan rate conversion
JP4179089B2 (en) Motion estimation method for motion image interpolation and motion estimation device for motion image interpolation
KR100644601B1 (en) Apparatus for de-interlacing video data using motion-compensated interpolation and method thereof
KR100382650B1 (en) Method and apparatus for detecting motion using scaled motion information in video signal processing system and data interpolating method and apparatus therefor
KR20050015189A (en) Apparatus for de-interlacing based on phase corrected field and method therefor, and recording medium for recording programs for realizing the same
KR100382651B1 (en) Method and apparatus for detecting motion using region-wise motion decision information in video signal processing system and data interpolating method and apparatus therefor

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, SUNG-HEE;MOON, HEON-HEE;REEL/FRAME:013204/0992

Effective date: 20020807

Owner name: SAMSUNG ELECTRONICS CO., LTD.,KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, SUNG-HEE;MOON, HEON-HEE;REEL/FRAME:013204/0992

Effective date: 20020807

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.)

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.)

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20180518