USRE42148E1 - Method and apparatus for visual lossless image syntactic encoding - Google Patents


Info

Publication number
USRE42148E1
Authority
US
United States
Prior art keywords
frame
video
pixel
details
threshold
Prior art date
Legal status: Expired - Lifetime
Application number
US12/196,180
Inventor
Semion Sheraizin
Vitaly Sheraizin
Current Assignee
Digimedia Tech LLC
Original Assignee
Individual
Priority date
Filing date
Publication date
Family has litigation: first worldwide family litigation filed.
Application filed by Individual
Priority to US12/196,180
Application granted
Publication of USRE42148E1
Assigned to RATEZE REMOTE MGMT. L.L.C., by merger from SOMLE DEVELOPMENT, L.L.C.
Assigned to SOMLE DEVELOPMENT, L.L.C., by assignment from VLS COM LTD.
Assigned to INTELLECTUAL VENTURES ASSETS 145 LLC, by assignment from RATEZE REMOTE MGMT. L.L.C.
Assigned to DIGIMEDIA TECH, LLC, by assignment from INTELLECTUAL VENTURES ASSETS 145 LLC
Anticipated expiration
Status: Expired - Lifetime


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements using adaptive coding
    • H04N19/134: Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/154: Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N19/42: Methods or arrangements characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/60: Methods or arrangements using transform coding
    • H04N19/61: Transform coding in combination with predictive coding
    • H04N19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/85: Methods or arrangements using pre-processing or post-processing specially adapted for video compression

Definitions

  • the present invention relates generally to processing of video images and, in particular, to syntactic encoding of images for later compression by standard compression techniques.
  • examples of such video signals include digital broadcast television (TV), video conferencing and interactive TV.
  • All of these signals, in their digital form, are divided into frames, each of which consists of many pixels (image elements), each of which requires 8-24 bits to describe.
  • the result is megabits of data per frame.
  • An object of the present invention is to provide a method and apparatus for video compression which is generally lossless vis-à-vis what the human eye perceives.
  • a visual perception threshold unit for image processing.
  • the threshold unit identifies a plurality of visual perception threshold levels to be associated with the pixels of a video frame, wherein the threshold levels define contrast levels above which a human eye can distinguish a pixel from among its neighboring pixels of the video frame.
  • the visual perception threshold unit which includes a parameter generator and a threshold generator.
  • the parameter generator generates a multiplicity of parameters that describe at least some of the information content of the processed frame.
  • the threshold generator generates a plurality of visual perception threshold levels to be associated with the pixels of the video frame.
  • the threshold levels define contrast levels above which a human eye can distinguish a pixel from among its neighboring pixels of the frame.
  • the parameter generator includes a volume unit, a color unit, an intensity unit or some combination of the three.
  • the volume unit determines the volume of information in the frame, the color unit determines the per pixel color and the intensity unit determines a cross-frame change of intensity.
  • a method for generating visual perception thresholds includes analysis of the details of the frames of a video signal, estimating the parameters of the details, and defining a visual perception threshold for each detail in accordance with the estimated detail parameters.
  • the method includes determining which details in the image can be distinguished by the human eye and which ones can only be detected by it.
  • the method also includes providing one bit to describe a pixel which can only be detected by the human eye, and providing three bits to describe a pixel which can be distinguished by the human eye.
  • the method also includes smoothing the data of less-distinguished details.
  • the step of determining details also includes identifying areas of high contrast and areas whose details have small dimensions.
  • FIG. 1 is an example of a video frame;
  • FIG. 2 is a block diagram illustration of a video compression system having a visual lossless syntactic encoder, constructed and operative in accordance with a preferred embodiment of the present invention;
  • FIG. 3 is a block diagram illustration of the details of the visual lossless syntactic encoder of FIG. 2;
  • FIG. 4 is a graphical illustration of the transfer functions for a number of high pass filters useful in the syntactic encoder of FIG. 3;
  • FIGS. 5A and 5B are block diagram illustrations of alternative embodiments of a controllable filter bank forming part of the syntactic encoder of FIG. 3;
  • FIG. 6 is a graphical illustration of the transfer functions for a number of low pass filters useful in the controllable filter bank of FIGS. 5A and 5B;
  • FIG. 7 is a graphical illustration of the transfer function for a non-linear filter useful in the controllable filter bank of FIGS. 5A and 5B;
  • FIGS. 8A, 8B and 8C are block diagram illustrations of alternative embodiments of an inter-frame processor forming a controlled filter portion of the syntactic encoder of FIG. 3;
  • FIG. 9 is a block diagram illustration of a spatial-temporal analyzer forming part of the syntactic encoder of FIG. 3;
  • FIGS. 10A and 10B are detail illustrations of the analyzer of FIG. 9; and
  • FIG. 11 is a detail illustration of a frame analyzer forming part of the syntactic encoder of FIG. 3.
  • the present invention is a method for describing, and then encoding, images based on which details in the image can be distinguished by the human eye and which ones can only be detected by it.
  • FIG. 1 is a grey-scale image of a plurality of shapes of a bird in flight, ranging from a photograph of one (labeled 10) to a very stylized version of one (labeled 12).
  • the background of the image is very dark at the top of the image and very light at the bottom of the image.
  • the human eye can distinguish most of the birds of the image. However, there is at least one bird, labeled 14, which the eye can detect but for which it cannot determine all of the relative contrast details. Furthermore, there are large swaths of the image (in the background) which have no details in them.
  • the present invention is a method and system for syntactic encoding of video frames before they are sent to a standard video compression unit.
  • the present invention separates the details of a frame into two different types, those that can only be detected (for which only one bit will suffice to describe each of their pixels) and those which can be distinguished (for which at least three bits are needed to describe the intensity of each of their pixels).
  • FIG. 2 shows a visual lossless syntactic (VLS) encoder 20 connected to a standard video transmitter 22 which includes a video compression encoder 24 , such as a standard MPEG encoder, and a modulator 26 .
  • VLS encoder 20 transforms an incoming video signal such that video compression encoder 24 can compress the video signal two to five times more than it could on its own, resulting in a significantly reduced volume bit stream to be transmitted.
  • Modulator 26 modulates the reduced volume bit stream and transmits it to a receiver 30 , which, as in the prior art, includes a demodulator 32 and a decoder 34 .
  • Demodulator 32 demodulates the transmitted signal and decoder 34 decodes and decompresses the demodulated signal. The result is provided to a monitor 36 for display.
  • encoder 20 attempts to quantify each frame of the video signal according to which sections of the frame are more or less distinguished by the human eye. For the less-distinguished sections, encoder 20 either provides pixels of a minimum bit volume, thus reducing the overall bit volume of the frame, or smooths the data of the sections such that video compression encoder 24 will later significantly compress these sections, thus resulting in a smaller bit volume in the compressed frame. Since the human eye does not distinguish these sections, the reproduced frame is not perceived significantly differently from the original frame, despite its smaller bit volume.
  • Encoder 20 comprises an input frame memory 40 , a frame analyzer 42 , an intra-frame processor 44 , an output frame memory 46 and an inter-frame processor 48 .
  • Analyzer 42 analyzes each frame to separate it into subclasses, where subclasses define areas whose pixels cannot be distinguished from each other.
  • Intra-frame processor 44 spatially filters each pixel of the frame according to its subclass and, optionally, also provides each pixel of the frame with the appropriate number of bits.
  • Inter-frame processor 48 provides temporal filtering (i.e. inter-frame filtering) and updates output frame memory 46 with the elements of the current frame which are different than those of the previous frame.
  • frames are composed of pixels, each having a luminance component Y and two chrominance components Cr and Cb, each of which is typically defined by eight bits.
  • VLS encoder 20 generally separately processes the three components.
  • the bandwidth of the chrominance signals is half as wide as that of the luminance signal.
  • the filters (in the x direction of the frame) for chrominance have a narrower bandwidth.
  • the following discussion shows the filters for the luminance signal Y.
  • Frame analyzer 42 comprises a spatial-temporal analyzer 50, a parameter estimator 52, a visual perception threshold determiner 54 and a subclass determiner 56. Details of these elements are provided in FIGS. 9-11, discussed hereinbelow.
  • spatial-temporal analyzer 50 generates a plurality of filtered frames from the current frame, each filtered through a different high pass filter (HPF), where each high pass filter retains a different range of frequencies therein.
  • FIG. 4 is an amplitude vs. frequency graph illustrating the transfer functions of an exemplary set of high pass filters for frames in a non-interlacing scan format.
  • Four graphs are shown. It can be seen that the curve labeled HPF-R3 has a cutoff frequency of 1 MHz and thus retains portions of the frame with information above 1 MHz.
  • curve HPF-R2 has a cutoff frequency of 2 MHz, curve HPF-C2 has a cutoff frequency of 3 MHz, and curves HPF-R1 and HPF-C1 have a cutoff frequency of 4 MHz.
  • the terminology "Rx" refers to operations on a row of pixels while the terminology "Cx" refers to operations on a column of pixels.
  • the filters of FIG. 4 implement finite impulse response (FIR) filters on either a row of pixels (the x direction of the frame) or a column of pixels (the y direction of the frame), where the number of pixels used in the filter defines the power of the cosine.
  • for example, a filter implementing cos^10(x) takes 10 pixels around the pixel of interest, five to one side and five to the other side of the pixel of interest.
  • the high pass filters can also be considered as digital equivalents of optical apertures.
  • filters HPF-R1 and HPF-C1 retain only very small details in the frame (of 1-4 pixels in size) while filter HPF-R3 retains much larger details (of up to 11 pixels).
  • the filtered frames will be labeled by the type of filter (HPF-X) used to create them.
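The cos-power FIR filters described above can be sketched numerically. This is an illustration only: the patent does not give its filter coefficients, so the kernel shape (a normalized cos^n window) and building the high pass filter as the complement of the low pass are assumptions.

```python
import numpy as np

def cos_power_lowpass(n_taps: int, power: int) -> np.ndarray:
    """Low pass FIR kernel shaped like cos^power over n_taps + 1 points
    centered on the pixel of interest (assumed form, for illustration)."""
    x = np.linspace(-np.pi / 2, np.pi / 2, n_taps + 1)
    h = np.cos(x) ** power
    return h / h.sum()          # normalize to unity DC gain

def highpass_complement(lp: np.ndarray) -> np.ndarray:
    """High pass kernel built as delta minus the low pass kernel,
    so it retains exactly what the low pass filter ignores."""
    hp = -lp.copy()
    hp[len(lp) // 2] += 1.0     # unit impulse at the center tap
    return hp

# e.g. a filter using 10 pixels around the pixel of interest
lp = cos_power_lowpass(10, 10)
hp = highpass_complement(lp)
row = np.ones(64)                            # flat row: no detail
detail = np.convolve(row, hp, mode="same")   # ~zero away from the edges
```

Applied to a row of pixels, the high pass output is near zero wherever the row is featureless, which is exactly the "no detail" case discussed in the text.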
  • analyzer 50 also generates difference frames between the current frame and another, earlier frame.
  • the previous frame is typically at most 15 frames earlier.
  • a “group” of pictures or frames (GOP) is a series of frames for which difference frames are generated.
  • Parameter estimator 52 takes the current frame and the filtered and difference frames and generates a set of parameters that describe the information content of the current frame.
  • the parameters are determined on a pixel-by-pixel basis or on a per frame basis, as relevant. It is noted that the parameters do not have to be calculated to great accuracy as they are used in combination to determine a per pixel visual perception threshold THD_i.
  • Signal to noise ratio (SNR).
  • Normalized change N∆_i: this measures the change ∆_i, per pixel i, from the current frame to its previous frame. This value is then normalized by the maximum intensity I_MAX possible for the pixel.
  • Normalized volume of intraframe change NI_XY: this measures the volume of change in a frame I_XY (or how much detail there is in a frame), normalized by the maximum possible amount of information MAX_INFO within a frame (i.e. 8 bits per pixel × N pixels per frame). Since the highest frequency range indicates the amount of change in a frame, the volume of change I_XY is a sum of the intensities in the filtered frame having the highest frequency range, such as filtered frame HPF-R1.
  • Normalized volume of interframe changes NI_F: this measures the volume of changes I_F between the current frame and its previous frame, normalized by the maximum possible amount of information MAX_INFO within a frame.
  • the volume of interframe changes I_F is the sum of the intensities in the difference frame.
  • Normalized volume of change within a group of frames NI_GOP: this measures the volume of changes I_GOP over a group of frames, where the group is from 2 to 15 frames, as selected by the user. It is normalized by the maximum possible amount of information MAX_INFO within a frame and by the number of frames in the group.
  • Normalized luminance level NY_i: the luminance level of a pixel in the current frame, normalized by the maximum intensity I_MAX possible for the pixel.
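A minimal sketch of how the volume parameters above might be computed for 8-bit frames. The array-based formulation and the use of absolute intensity sums are assumptions, since the text names the quantities without giving computational detail.

```python
import numpy as np

I_MAX = 255.0  # maximum intensity of an 8-bit pixel

def normalized_pixel_change(cur, prev):
    """N-delta_i: per-pixel change from the previous frame, normalized by I_MAX."""
    return np.abs(cur.astype(float) - prev.astype(float)) / I_MAX

def normalized_intraframe_volume(hpf_frame):
    """NI_XY: volume of detail, summed from the highest-frequency filtered
    frame (e.g. HPF-R1), normalized by MAX_INFO = 8 bits x N pixels."""
    return float(np.abs(hpf_frame).sum()) / (8 * hpf_frame.size)

def normalized_interframe_volume(cur, prev):
    """NI_F: volume of change between consecutive frames, normalized by MAX_INFO."""
    diff = np.abs(cur.astype(float) - prev.astype(float))
    return float(diff.sum()) / (8 * diff.size)
```

The GOP variant NI_GOP would apply the same sum over 2-15 difference frames and divide additionally by the number of frames in the group.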
  • Color saturation p_i: the color saturation level of the ith pixel, determined by p_i = [0.78((C_r,i - 128)/160)^2 + 0.24((C_b,i - 128)/126)^2]^(1/2), where C_r,i and C_b,i are the chrominance levels of the ith pixel.
  • Hue h_i: the general hue of the ith pixel, determined by h_i = arctan(1.4(C_r,i - 128)/(C_b,i - 128)).
  • alternatively, hue h_i can be determined by interpolating Table 1, below.
  • Hue response R_i(h_i): the human vision response to a given hue, given by Table 1, below. Interpolation is typically used to produce a specific value of the response R(h) for a specific value of hue h.
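The saturation and hue formulas above can be transcribed directly. The only liberty taken here is using atan2 so that the neutral case C_b,i = 128 is well defined; the patent's arctan form leaves that case to the implementer.

```python
import math

def color_saturation(cr: int, cb: int) -> float:
    """p_i = [0.78*((Cr-128)/160)^2 + 0.24*((Cb-128)/126)^2]^(1/2)."""
    return math.sqrt(0.78 * ((cr - 128) / 160) ** 2 +
                     0.24 * ((cb - 128) / 126) ** 2)

def hue(cr: int, cb: int) -> float:
    """h_i = arctan(1.4*(Cr-128)/(Cb-128)), computed via atan2."""
    return math.atan2(1.4 * (cr - 128), cb - 128)
```

A fully neutral pixel (Cr = Cb = 128) has zero saturation, as expected.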
  • Subclass determiner 56 compares each pixel i of each high pass filtered frame HPF-X to its associated threshold THD_i to determine whether or not that pixel is significantly present in each filtered frame, where "significantly present" is defined by the threshold level and by the "detail dimension" (i.e. the size of the object or detail in the image of which the pixel forms a part). Subclass determiner 56 then defines the subclass to which the pixel belongs.
  • for example, if the pixel is not present in any of the filtered frames, the pixel must belong to an object of large size or the detail is only detected but not distinguished. If the pixel is only found in the filtered frame of HPF-C2 or in both frames HPF-C1 and HPF-C2, it must be a horizontal edge (an edge in the Y direction of the frame). If it is found in filtered frames HPF-R3 and HPF-C2, it is a single small detail. If the pixel is found only in filtered frames HPF-R1, HPF-R2 and HPF-R3, it is a very small vertical edge. If, in addition, it is also found in filtered frame HPF-C2, then the pixel is a very small, single detail.
  • the output of subclass determiner 56 is an indication of the subclass to which each pixel of the current frame belongs.
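The presence tests in that example can be collected into a small decision function. The subclass labels and the precedence of the tests are taken from the worked example in the text; combinations not covered there fall into a generic bucket, which is an assumption.

```python
def classify_pixel(present: dict) -> str:
    """Map per-filter significance flags (True if the pixel exceeds its
    threshold THD_i in that filtered frame) to a subclass name.
    `present` maps filter names such as 'HPF-R1' to booleans."""
    r1 = present.get("HPF-R1", False)
    r2 = present.get("HPF-R2", False)
    r3 = present.get("HPF-R3", False)
    c1 = present.get("HPF-C1", False)
    c2 = present.get("HPF-C2", False)

    if not any((r1, r2, r3, c1, c2)):
        return "large object or detected-only detail"
    if r1 and r2 and r3 and c2:
        return "very small single detail"
    if r1 and r2 and r3:
        return "very small vertical edge"
    if c2 and not (r1 or r2 or r3):
        return "horizontal edge"       # HPF-C2 alone, or HPF-C1 and HPF-C2
    if r3 and c2:
        return "single small detail"
    return "other subclass"
```

The returned label is the per-pixel subclass indication that intra-frame processor 44 uses to select its filters.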
  • Intra-frame processor 44 performs spatial filtering of the frame, where the type of filter utilized varies in accordance with the subclass to which the pixel belongs.
  • intra-frame processor 44 filters each subclass of the frame differently and according to the information content of the subclass.
  • the filtering limits the bandwidth of each subclass which is equivalent to sampling the data at different frequencies. Subclasses with a lot of content are sampled at a high frequency while subclasses with little content, such as a plain background area, are sampled at a low frequency.
  • intra-frame processor 44 changes the intensity of the pixel by an amount less than the visual distinguishing threshold for that pixel. Pixels whose contrast is lower than the threshold (i.e. details which are detected only) are transformed with non-linear filters. If desired, the data size of the detected-only pixels can be reduced from 8 bits to 1 or 2 bits, depending on the visual threshold level and the detail dimension for the pixel. For the other pixels (i.e. the distinguished ones), 3 or 4 bits are sufficient.
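The bit reduction can be sketched as keeping only the top bits of each 8-bit intensity. Uniform truncation is an assumption; the text says only that the depth is chosen from the threshold level and detail dimension.

```python
def requantize(value: int, distinguished: bool,
               bits_detected: int = 1, bits_distinguished: int = 3) -> int:
    """Reduce an 8-bit intensity to 1-2 bits (detected-only pixels) or
    3-4 bits (distinguished pixels) by keeping the top bits."""
    bits = bits_distinguished if distinguished else bits_detected
    shift = 8 - bits
    return (value >> shift) << shift   # truncate to the top `bits` bits

coarse = requantize(200, distinguished=False)  # 1 bit: 128
fine = requantize(200, distinguished=True)     # 3 bits: 192
```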
  • Intra-frame processor 44 comprises a controllable filter bank 60 and a filter selector 62 .
  • Controllable filter bank 60 comprises a set of low pass and non-linear filters, shown in FIGS. 5A and 5B to which reference is now made, which filter selector 62 activates, based on the subclass to which the pixel belongs. Selector 62 can activate more than one filter, as necessary.
  • FIGS. 5A and 5B are two alternative embodiments of controllable filter bank 60. Both comprise two sections 64 and 66 which operate on columns (i.e. line to line) and on rows (i.e. within a line), respectively. In each section 64 and 66, there is a choice of filters, each controlled by an appropriate switch, labeled SW-X, where X is one of C1, C2, R1, R2, R3 (selecting one of the low pass filters (LPF)), or D-C, D-R (selecting to pass the relevant pixel directly).
  • Filter selector 62 switches the relevant switch, thereby activating the relevant filter.
  • non-linear filters NLF-R and NLF-C are activated by switches R3 and C2, respectively.
  • the outputs of non-linear filters NLF-R and NLF-C are added to the outputs of low pass filters LPF-R3 and LPF-C2, respectively.
  • Controllable filter bank 60 also includes time aligners (TA) which add any necessary delays to ensure that the pixel currently being processed remains at its appropriate location within the frame.
  • the low pass filters are associated with the high pass filters used in analyzer 50 .
  • the cutoff frequencies of the low pass filters are close to those of the high pass filters.
  • the low pass filters thus pass that which their associated high pass filters ignore.
  • FIG. 6 illustrates exemplary low pass filters for the example provided hereinabove.
  • Low pass filter LPF-R3 has a cutoff frequency of 0.5 MHz and thus generally does not retain anything which its associated high pass filter HPF-R3 (with a cutoff frequency of 1 MHz) retains.
  • Filter LPF-R2 has a cutoff frequency of 1 MHz,
  • filter LPF-C2 has a cutoff frequency of 1.25 MHz, and
  • filters LPF-C1 and LPF-R1 have a cutoff frequency of about 2 MHz.
  • filters LPF-Cx operate on the columns of the frame and filters LPF-Rx operate on the rows of the frame.
  • FIG. 7 illustrates an exemplary transfer function for the non-linear filters (NLF) which models the response of the eye when detecting a detail.
  • the transfer function defines an output value Vout, normalized by the threshold level THD_i, as a function of an input value Vin, also normalized by the threshold level THD_i.
  • the input-output relationship is described by a polynomial of high order. A typical order might be six, though lower orders, of power two or three, are also feasible.
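One plausible realization of that characteristic is an odd-symmetric power law on the normalized input: sub-threshold (detected-only) values are strongly compressed, while supra-threshold values pass unchanged. The exact polynomial is not given in the text, so the form below is an assumption.

```python
def nlf(v_in: float, thd: float, order: int = 6) -> float:
    """Non-linear filter transfer function: Vout/THD = (Vin/THD)^order
    for sub-threshold inputs, identity above the threshold."""
    x = v_in / thd                      # normalize input by THD_i
    if abs(x) >= 1.0:
        return v_in                     # distinguished detail: pass through
    y = abs(x) ** order                 # high-order compression
    return (y if x >= 0 else -y) * thd  # restore sign and scale
```

With order 6 and THD_i = 10, an input of 5 (half the threshold) is attenuated to about 0.16, while an input of 10 passes unchanged.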
  • Table 3 lists the type of filters activated per subclass, where the header for the column indicates both the type of filter and the label of the switch SW-X of FIGS. 5A and 5B .
  • FIG. 5B includes rounding elements RND which reduce the number of bits of a pixel from eight to three or four bits, depending on the subclass to which the pixel belongs.
  • Table 4 illustrates the logic for the example presented hereinabove, where the items which are not active for the subclass are indicated by “N/A”.
  • the output of intra-frame processor 44 is a processed version of the current frame which uses fewer bits to describe the frame than the original version.
  • inter-frame processor 48 provides temporal filtering (i.e. inter-frame filtering) to further process the current frame. Since the present invention provides a full frame as output, inter-frame processor 48 determines which pixels have changed significantly from the previous frame and amends only those, storing the new version in the appropriate location in output frame memory 46.
  • FIGS. 8A and 8B are open loop versions (i.e. the previous frame is the frame previously input into inter-frame processor 48 ) while the embodiment of FIG. 8C is a closed loop version (i.e. the previous frame is the frame previously produced by inter-frame processor 48 ). All of the embodiments comprise a summer 68 , a low pass filter (LPF) 70 , a high pass filter (HPF) 72 , two comparators 74 and 76 , two switches 78 and 80 , controlled by the results of comparators 74 and 76 , respectively, and a summer 82 .
  • FIGS. 8A and 8B additionally include an intermediate memory 84 for storing the output of intra-frame processor 44 .
  • Summer 68 takes the difference of the processed current frame, produced by processor 44, and the previous frame, stored in either intermediate memory 84 (FIGS. 8A and 8B) or in frame memory 46 (FIG. 8C). The difference frame is then processed in two parallel tracks.
  • in one track, the difference frame is low pass filtered, retaining the large details. Each pixel of the filtered frame is compared to a general, large-detail threshold THD-LF, which is typically set to 5% of the maximum expected intensity for the frame. If the difference pixel has an intensity above THD-LF, it is allowed through (i.e. switch 78 is set to pass the pixel).
  • in the other track, the difference frame is high pass filtered. Since high pass filtering retains the small details, each pixel of the high pass filtered frame is compared to the particular threshold THD_i for that pixel, as produced by threshold determiner 54. If the difference pixel has an intensity above the threshold THD_i (i.e. the change in the pixel is significant for detailed visual perception), it is allowed through (i.e. switch 80 is set to pass the pixel).
  • Summer 82 adds the filtered difference pixels passed by switches 78 and/or 80 to the pixel of the previous frame to produce the new pixel. If switches 78 and 80 did not pass anything, the new pixel is the same as the previous pixel. Otherwise, the new pixel is the sum of the previous pixel and the low and high frequency components of the difference pixel.
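The two-track gating and summation can be sketched in an open-loop form. The separable 1-D smoothing kernel and building the high-frequency track as the complement of the low-frequency one are assumptions; the per-pixel gates correspond to switches 78 and 80.

```python
import numpy as np

def interframe_update(cur, prev, thd_lf, thd_i, lp_kernel):
    """Temporal filtering sketch: split the difference frame into low and
    high frequency parts, gate each per pixel by its threshold, and add
    whatever passes to the previous frame (summer 82)."""
    diff = cur.astype(float) - prev.astype(float)          # summer 68
    # low-frequency track: smooth each row, gate by THD-LF (switch 78)
    lf = np.apply_along_axis(
        lambda r: np.convolve(r, lp_kernel, mode="same"), 1, diff)
    lf_pass = np.where(np.abs(lf) > thd_lf, lf, 0.0)
    # high-frequency track: complement, gated per pixel by THD_i (switch 80)
    hf = diff - lf
    hf_pass = np.where(np.abs(hf) > thd_i, hf, 0.0)
    return prev + lf_pass + hf_pass

prev = np.full((8, 8), 100.0)
out = interframe_update(prev, prev, thd_lf=0.05 * 255, thd_i=4.0,
                        lp_kernel=np.ones(3) / 3)   # identical frames: no update
```

If nothing changes between frames, both gates stay closed and the output equals the previous frame, matching the "new pixel is the same as the previous pixel" case.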
  • FIGS. 9, 10A, 10B and 11 detail elements of frame analyzer 42.
  • in these figures, "ML" indicates a memory line of the current frame, "MP" indicates a memory pixel of the current frame, "MF" indicates a memory frame, "VD" indicates the vertical drive signal, "TA" indicates a time alignment (e.g. a delay) and "CNT" indicates a counter.
  • FIG. 9 generally illustrates the operation of spatial-temporal analyzer 50 and FIGS. 10A and 10B provide one detailed embodiment for the spatial analysis and temporal analysis portions 51 and 53, respectively.
  • FIG. 11 details parameter estimator 52 , threshold determiner 54 and subclass determiner 56 . As these figures are deemed to be self-explanatory, no further explanation will be included here.
  • the present invention can be implemented with a field programmable gate array (FPGA) and the frame memory can be implemented with SRAM or SDRAM.

Abstract

A visual perception threshold unit for image processing identifies a plurality of visual perception threshold levels to be associated with the pixels of a video frame, wherein the threshold levels define contrast levels above which a human eye can distinguish a pixel from among its neighboring pixels of the video frame. The present invention also includes a method of generating visual perception thresholds by analysis of the details of the video frames, estimating the parameters of the details, and defining a visual perception threshold for each detail in accordance with the estimated detail parameters. The present invention further includes a method of describing images by determining which details in the image can be distinguished by the human eye and which ones can only be detected by it.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation application of U.S. Ser. No. 10/121,685, filed Apr. 15, 2002, now U.S. Pat. No. 6,952,500, which is a continuation application of U.S. Ser. No. 09/524,618, filed Mar. 14, 2000, issued as U.S. Pat. No. 6,473,532, which patents are incorporated herein by reference.
FIELD OF THE INVENTION
The present invention relates generally to processing of video images and, in particular, to syntactic encoding of images for later compression by standard compression techniques.
BACKGROUND OF THE INVENTION
There are many types of video signals, such as digital broadcast television (TV), video conferencing, interactive TV, etc. All of these signals, in their digital form, are divided into frames, each of which consists of many pixels (image elements), each of which requires 8-24 bits to describe. The result is megabits of data per frame.
Before storing and/or transmitting these signals, they typically are compressed, using one of many standard video compression techniques, such as JPEG, MPEG, H-compression, etc. These compression standards use video signal transforms and intra- and inter-frame coding which exploit spatial and temporal correlations among pixels of a frame and across frames.
However, these compression techniques create a number of well-known, undesirable and unacceptable artifacts, such as blockiness, low resolution and wiggles, among others. These are particularly problematic for broadcast TV (satellite TV, cable TV, etc.) or for systems with very low bit rates (video conferencing, videophone).
Much research has been performed to try and improve the standard compression techniques. The following patents and articles discuss various prior art methods to do so:
U.S. Pat. Nos. 5,870,501, 5,847,766, 5,845,012, 5,796,864, 5,774,593, 5,586,200, 5,491,519, 5,341,442;
Raj Talluri et al., "A Robust, Scalable, Object-Based Video Compression Technique for Very Low Bit-Rate Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, No. 1, February 1997;
Awad Kh. Al-Asmari, "An Adaptive Hybrid Coding Scheme for HDTV and Digital Sequences," IEEE Transactions on Consumer Electronics, vol. 42, No. 3, pp. 926-936, August 1995;
Kwok-tung Lo and Jian Feng, "Predictive Mean Search Algorithms for Fast VQ Encoding of Images," IEEE Transactions on Consumer Electronics, vol. 41, No. 2, pp. 327-331, May 1995;
James Goel et al., "Pre-processing for MPEG Compression Using Adaptive Spatial Filtering," IEEE Transactions on Consumer Electronics, vol. 41, No. 3, pp. 687-698, August 1995;
Jian Feng et al., "Motion Adaptive Classified Vector Quantization for ATM Video Coding," IEEE Transactions on Consumer Electronics, vol. 41, No. 2, pp. 322-326, May 1995;
Austin Y. Lan et al., "Scene-Context Dependent Reference-Frame Placement for MPEG Video Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, No. 3, pp. 478-489, April 1999;
Kuo-Chin Fan and Kou-Sou Kan, "An Active Scene Analysis-Based Approach for Pseudoconstant Bit-Rate Video Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, No. 2, pp. 159-170, April 1998;
Takashi Ida and Yoko Sambansugi, "Image Segmentation and Contour Detection Using Fractal Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, No. 8, pp. 968-975, December 1998;
Liang Shen and Rangaraj M. Rangayyan, "A Segmentation-Based Lossless Image Coding Method for High-Resolution Medical Image Compression," IEEE Transactions on Medical Imaging, vol. 16, No. 3, pp. 301-316, June 1997;
Adrian Munteanu et al., "Wavelet-Based Lossless Compression of Coronary Angiographic Images," IEEE Transactions on Medical Imaging, vol. 18, No. 3, pp. 272-281, March 1999; and
Akira Okumura et al., "Signal Analysis and Compression Performance Evaluation of Pathological Microscopic Images," IEEE Transactions on Medical Imaging, vol. 16, No. 6, pp. 701-710, December 1997.
SUMMARY OF THE INVENTION
An object of the present invention is to provide a method and apparatus for video compression which is generally lossless vis-à-vis what the human eye perceives.
There is therefore provided, in accordance with a preferred embodiment of the present invention, a visual perception threshold unit for image processing. The threshold unit identifies a plurality of visual perception threshold levels to be associated with the pixels of a video frame, wherein the threshold levels define contrast levels above which a human eye can distinguish a pixel from among its neighboring pixels of the video frame.
There is also provided, in accordance with a preferred embodiment of the present invention, the visual perception threshold unit which includes a parameter generator and a threshold generator. The parameter generator generates a multiplicity of parameters that describe at least some of the information content of the processed frame. From the parameters, the threshold generator generates a plurality of visual perception threshold levels to be associated with the pixels of the video frame. The threshold levels define contrast levels above which a human eye can distinguish a pixel from among its neighboring pixels of the frame.
Moreover, in accordance with a preferred embodiment of the present invention, the parameter generator includes a volume unit, a color unit, an intensity unit or some combination of the three. The volume unit determines the volume of information in the frame, the color unit determines the per pixel color and the intensity unit determines a cross-frame change of intensity.
There is also provided, in accordance with a preferred embodiment of the present invention, a method for generating visual perception thresholds. The method includes analysis of the details of the frames of a video signal, estimating the parameters of the details, and defining a visual perception threshold for each detail in accordance with the estimated detail parameters.
There is also provided, in accordance with a preferred embodiment of the present invention, a method for describing images. The method includes determining which details in the image can be distinguished by the human eye and which ones can only be detected by it.
Moreover, in accordance with a preferred embodiment of the present invention, the method also includes providing one bit to describe a pixel which can only be detected by the human eye, and providing three bits to describe a pixel which can be distinguished by the human eye.
Further, in accordance with a preferred embodiment of the present invention, the method also includes smoothing the data of less-distinguished details.
Finally, in accordance with a preferred embodiment of the present invention, the step of determining details also includes identifying areas of high contrast and areas whose details have small dimensions.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the appended drawings in which:
FIG. 1 is an example of a video frame;
FIG. 2 is a block diagram illustration of a video compression system having a visual lossless syntactic encoder, constructed and operative in accordance with a preferred embodiment of the present invention;
FIG. 3 is a block diagram illustration of the details of the visual lossless syntactic encoder of FIG. 2;
FIG. 4 is a graphical illustration of the transfer functions for a number of high pass filters useful in the syntactic encoder of FIG. 3;
FIGS. 5A and 5B are block diagram illustrations of alternative embodiments of a controllable filter bank forming part of the syntactic encoder of FIG. 3;
FIG. 6 is a graphical illustration of the transfer functions for a number of low pass filters useful in the controllable filter bank of FIGS. 5A and 5B;
FIG. 7 is a graphical illustration of the transfer function for a non-linear filter useful in the controllable filter bank of FIGS. 5A and 5B;
FIGS. 8A, 8B and 8C are block diagram illustrations of alternative embodiments of an inter-frame processor forming a controlled filter portion of the syntactic encoder of FIG. 3;
FIG. 9 is a block diagram illustration of a spatial-temporal analyzer forming part of the syntactic encoder of FIG. 3;
FIGS. 10A and 10B are detail illustrations of the analyzer of FIG. 9; and
FIG. 11 is a detail illustration of a frame analyzer forming part of the syntactic encoder of FIG. 3.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
Applicants have realized that there are different levels of image detail in an image and that the human eye perceives these details in different ways. In particular, Applicants have realized the following:
    • 1. Picture details whose detection mainly depends on the level of noise in the image occupy approximately 50-80% of an image.
    • 2. A visual perception detection threshold for image details does not depend on the shape of the details in the image.
    • 3. A visual perception threshold THD depends on a number of picture parameters, including the general brightness of the image. It does not depend on the noise spectrum.
The present invention is a method for describing, and then encoding, images based on which details in the image can be distinguished by the human eye and which ones can only be detected by it.
Reference is now made to FIG. 1, which is a grey-scale image of a plurality of shapes of a bird in flight, ranging from a photograph of one (labeled 10) to a very stylized version of one (labeled 12). The background of the image is very dark at the top of the image and very light at the bottom of the image.
The human eye can distinguish most of the birds of the image. However, there is at least one bird, labeled 14, which the eye can detect but whose relative contrast details it cannot fully determine. Furthermore, there are large swaths of the image (in the background) which have no details in them.
The present invention is a method and system for syntactic encoding of video frames before they are sent to a standard video compression unit. The present invention separates the details of a frame into two different types, those that can only be detected (for which only one bit will suffice to describe each of their pixels) and those which can be distinguished (for which at least three bits are needed to describe the intensity of each of their pixels).
Reference is now made to FIG. 2, which illustrates the present invention within an image transmission system. Thus, FIG. 2 shows a visual lossless syntactic (VLS) encoder 20 connected to a standard video transmitter 22 which includes a video compression encoder 24, such as a standard MPEG encoder, and a modulator 26. VLS encoder 20 transforms an incoming video signal such that video compression encoder 24 can compress the video signal two to five times more than video compression encoder 24 can do on its own, resulting in a significantly reduced volume bit stream to be transmitted.
Modulator 26 modulates the reduced volume bit stream and transmits it to a receiver 30, which, as in the prior art, includes a demodulator 32 and a decoder 34. Demodulator 32 demodulates the transmitted signal and decoder 34 decodes and decompresses the demodulated signal. The result is provided to a monitor 36 for display.
It will be appreciated that, although the compression ratios are high in the present invention, the resultant video displayed on monitor 36 is not visually degraded. This is because encoder 20 attempts to quantify each frame of the video signal according to which sections of the frame are more or less distinguished by the human eye. For the less-distinguished sections, encoder 20 either provides pixels of a minimum bit volume, thus reducing the overall bit volume of the frame, or smoothes the data of the sections such that video compression encoder 24 will later significantly compress these sections, thus resulting in a smaller bit volume in the compressed frame. Since the human eye does not distinguish these sections, the reproduced frame is not perceived significantly differently from the original frame, despite its smaller bit volume.
Reference is now made to FIG. 3, which details the elements of VLS encoder 20. Encoder 20 comprises an input frame memory 40, a frame analyzer 42, an intra-frame processor 44, an output frame memory 46 and an inter-frame processor 48. Analyzer 42 analyzes each frame to separate it into subclasses, where subclasses define areas whose pixels cannot be distinguished from each other. Intra-frame processor 44 spatially filters each pixel of the frame according to its subclass and, optionally, also provides each pixel of the frame with the appropriate number of bits. Inter-frame processor 48 provides temporal filtering (i.e. inter-frame filtering) and updates output frame memory 46 with the elements of the current frame which are different than those of the previous frame.
It is noted that frames are composed of pixels, each having a luminance component Y and two chrominance components Cr and Cb, each of which is typically defined by eight bits. VLS encoder 20 generally processes the three components separately. However, the bandwidth of the chrominance signals is half as wide as that of the luminance signal. Thus, the filters (in the x direction of the frame) for chrominance have a narrower bandwidth. The following discussion shows the filters for the luminance signal Y.
Frame analyzer 42 comprises a spatial-temporal analyzer 50, a parameter estimator 52, a visual perception threshold determiner 54 and a subclass determiner 56. Details of these elements are provided in FIGS. 9-11, discussed hereinbelow.
As discussed hereinabove, details which the human eye distinguishes are ones of high contrast and ones whose details have small dimensions. Areas of high contrast are areas with a lot of high frequency content. Thus, spatial-temporal analyzer 50 generates a plurality of filtered frames from the current frame, each filtered through a different high pass filter (HPF), where each high pass filter retains a different range of frequencies therein.
FIG. 4, to which reference is now briefly made, is an amplitude vs. frequency graph illustrating the transfer functions of an exemplary set of high pass filters for frames in a non-interlaced scan format. Four graphs are shown. It can be seen that the curve labeled HPF-R3 has a cutoff frequency of 1 MHz and thus, retains portions of the frame with information above 1 MHz. Similarly, curve HPF-R2 has a cutoff frequency of 2 MHz, HPF-C2 has a cutoff frequency of 3 MHz and HPF-R1 and HPF-C1 have a cutoff frequency of 4 MHz. As will be discussed hereinbelow, the terminology “Rx” refers to operations on a row of pixels while the terminology “Cx” refers to operations on a column of pixels.
In particular, the filters of FIG. 4 implement the following finite impulse response (FIR) filters on either a row of pixels (the x direction of the frame) or a column of pixels (the y direction of the frame), where the number of pixels used in the filter defines the power of the cosine. For example, a filter implementing cos^10 x takes 10 pixels around the pixel of interest, five to one side and five to the other side of the pixel of interest.
    • HPF-R3: 1 − cos^10 x
    • HPF-R2: 1 − cos^6 x
    • HPF-R1: 1 − cos^2 x
    • HPF-C2: 1 − cos^4 y
    • HPF-C1: 1 − cos^2 y
The high pass filters can also be considered as digital equivalents of optical apertures. The higher the cut-off frequency, the smaller the aperture. Thus, filters HPF-R1 and HPF-C1 retain only very small details in the frame (of 1-4 pixels in size) while filter HPF-R3 retains much larger details (of up to 11 pixels).
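By way of a non-limiting illustration, the 1 − cos^n transfer functions above can be sketched in software. In the sketch below, the function name and the raised-cosine kernel construction are assumptions (the patent gives only the transfer functions): a row is low-pass filtered with a normalized cos^n kernel spanning n neighboring pixels, and that result is subtracted from the input to obtain the 1 − cos^n high-pass behavior.

```python
import numpy as np

def cos_power_highpass(row, n):
    """High-pass a row of pixels as 1 - cos^n: subtract a raised-cosine
    low-pass over n neighboring pixels (n/2 on each side) from the input."""
    half = n // 2
    x = np.linspace(-np.pi / 2, np.pi / 2, 2 * half + 1)
    kernel = np.cos(x) ** n
    kernel = kernel / kernel.sum()          # unity DC gain for the low-pass
    lowpass = np.convolve(row, kernel, mode="same")
    return row - lowpass
```

Consistent with the aperture analogy above, a constant row yields (away from the edges) a near-zero output, while isolated small details are retained.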
In the following, the filtered frames will be labeled by the type of filter (HPF-X) used to create them.
Returning to FIG. 3, analyzer 50 also generates difference frames between the current frame and an earlier frame, which is typically at most 15 frames earlier. A “group” of pictures or frames (GOP) is a series of frames for which difference frames are generated.
Parameter estimator 52 takes the current frame and the filtered and difference frames and generates a set of parameters that describe the information content of the current frame. The parameters are determined on a pixel-by-pixel basis or on a per frame basis, as relevant. It is noted that the parameters do not have to be calculated to great accuracy as they are used in combination to determine a per pixel, visual perception threshold THDi.
At least some of the following parameters are determined:
Signal to noise ratio (SNR): this parameter can be determined by generating a difference frame between the current frame and the frame before it, high pass filtering the difference frame, and summing the intensities of the pixels in the filtered frame, normalized by both the number of pixels N in a frame and the maximum intensity IMAX possible for a pixel. If the frame is a television frame, the maximum intensity is 255 quanta (8 bits). The high pass filter selects only those intensities lower than 3σ, where σ indicates the level below which the human eye cannot perceive noise. For example, σ can correspond to 46 dB, equivalent to a reduction in signal strength by a factor of 200.
Normalized NΔi: this measures the change Δi, per pixel i, from the current frame to its previous frame. This value is then normalized by the maximum intensity IMAX possible for the pixel.
Normalized volume of intraframe change NIXY: this measures the volume of change in a frame IXY (or how much detail there is in a frame), normalized by the maximum possible amount of information MAXINFO within a frame (i.e. 8 bits per pixel × N pixels per frame). Since the highest frequency range indicates the amount of change in a frame, the volume of change IXY is the sum of the intensities in the filtered frame having the highest frequency range, such as filtered frame HPF-R1.
Normalized volume of interframe changes NIF: this measures the volume of changes IF between the current frame and its previous frame, normalized by the maximum possible amount of information MAXINFO within a frame. The volume of interframe changes IF is the sum of the intensities in the difference frame.
Normalized volume of change within a group of frames NIGOP: this measures the volume of changes IGOP over a group of frames, where the group is from 2 to 15 frames, as selected by the user. It is normalized by the maximum possible amount of information MAXINFO within a frame and by the number of frames in the group.
Normalized luminance level NYi: Yi is the luminance level of a pixel in the current frame. It is normalized by the maximum intensity IMAX possible for the pixel.
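The normalized measures defined above can be sketched as follows. This is a hedged illustration only; the function name, array conventions and the use of the HPF-R1 frame for the intraframe volume are assumptions drawn from the descriptions above, not code from the patent.

```python
import numpy as np

def frame_parameters(cur, prev, hpf_r1, i_max=255.0):
    """Sketch of the normalized parameters: N-delta-i, NIxy, NIf and NYi.
    cur, prev: 2-D luminance frames; hpf_r1: highest-frequency filtered frame."""
    n = cur.size
    max_info = 8 * n                                  # MAXINFO: 8 bits x N pixels
    diff = np.abs(cur.astype(float) - prev.astype(float))
    nd_i = diff / i_max                               # per-pixel cross-frame change
    ni_xy = hpf_r1.sum() / max_info                   # intraframe detail volume
    ni_f = diff.sum() / max_info                      # interframe change volume
    ny_i = cur / i_max                                # per-pixel normalized luminance
    return nd_i, ni_xy, ni_f, ny_i
```

As noted above, these parameters need not be computed to great accuracy, since they enter the threshold only in combination.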
Color saturation pi: this is the color saturation level of the ith pixel and it is determined by: pi = [0.78((Cr,i − 128)/160)^2 + 0.24((Cb,i − 128)/126)^2]^1/2,
where Cr,i and Cb,i are the chrominance levels of the ith pixel.
Hue hi: this is the general hue of the ith pixel and is determined by: hi = arctan(1.4(Cr,i − 128)/(Cb,i − 128)).
Alternatively, hue hi can be determined by interpolating Table 1, below.
Response to hue Ri(hi): this is the human vision response to a given hue and is given by Table 1, below. Interpolation is typically used to produce a specific value of the response R(h) for a specific value of hue h.
TABLE 1
Color Y Cr Cb h (nm) R(h)
White 235 128 128
Yellow 210 16 146 575 0.92
Cyan 170 166 16 490 0.21
Green 145 54 34 510 0.59
Magenta 106 202 222 0.2
Red 81 90 240 630 0.3
Blue 41 240 110 475 0.11
Black 16 128 128
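The saturation and hue expressions above may be sketched as follows. The function names are illustrative, and the use of atan2 (to keep the quadrant when Cb,i − 128 is negative or zero) is an assumption beyond the printed arctangent formula.

```python
import math

def saturation(cr, cb):
    # p_i = [0.78*((Cr-128)/160)^2 + 0.24*((Cb-128)/126)^2]^(1/2)
    return math.sqrt(0.78 * ((cr - 128) / 160) ** 2
                     + 0.24 * ((cb - 128) / 126) ** 2)

def hue_angle(cr, cb):
    # h_i = arctan(1.4*(Cr-128)/(Cb-128)); atan2 avoids division by zero
    return math.atan2(1.4 * (cr - 128), cb - 128)
```

Mapping this angle to the wavelengths h (nm) of Table 1, and thence to the response R(h), would be done by interpolation as the text describes.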
Visual perception threshold determiner 54 determines the visual perception threshold THDi per pixel as follows: THDi = THDmin(1 + NΔi + NIXY + NIF + NIGOP + NYi + pi + (1 − Ri(hi)) + 200/SNR)
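One plausible reading of this threshold (an assumption, since the printed formula is partly garbled: THDi as THDmin times one plus the sum of the normalized parameters, the hue-response complement, and a 200/SNR term consistent with the 46 dB example above) can be sketched as:

```python
def visual_threshold(thd_min, nd_i, ni_xy, ni_f, ni_gop, ny_i, p_i, r_h, snr):
    """Per-pixel visual perception threshold THDi (assumed reading).
    All arguments except thd_min and snr are the normalized parameters."""
    return thd_min * (1 + nd_i + ni_xy + ni_f + ni_gop
                      + ny_i + p_i + (1 - r_h) + 200.0 / snr)
```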
Subclass determiner 56 compares each pixel i of each high pass filtered frame HPF-X to its associated threshold THDi to determine whether or not that pixel is significantly present in each filtered frame, where “significantly present” is defined by the threshold level and by the “detail dimension” (i.e. the size of the object or detail in the image of which the pixel forms a part). Subclass determiner 56 then defines the subclass to which the pixel belongs.
For the example provided above, if the pixel is not present in any of the filtered frames, the pixel must belong to an object of large size or the detail is only detected but not distinguished. If the pixel is only found in the filtered frame of HPF-C2 or in both frames HPF-C1 and HPF-C2, it must be a horizontal edge (an edge in the Y direction of the frame). If it is found in filtered frames HPF-R3 and HPF-C2, it is a single small detail. If the pixel is found only in filtered frames HPF-R1, HPF-R2 and HPF-R3, it is a very small vertical edge. If, in addition, it is also found in filtered frame HPF-C2, then the pixel is a very small, single detail.
The above logic is summarized and expanded in Table 2.
TABLE 2
High Pass Filters
Subclass R1 R2 R3 C1 C2 Remarks
1 0 0 0 0 0 Large detail or detected detail only
2 0 0 0 0 1 Horizontal edge
3 0 0 0 1 1 Horizontal edge
4 0 0 1 0 0 Vertical edge
5 0 0 1 0 1 Single small detail
6 0 0 1 1 1 Single small detail
7 0 1 1 0 0 Vertical edge
8 0 1 1 0 1 Single small detail
9 0 1 1 1 1 Single small detail
10 1 1 1 0 0 Very small vertical edge
11 1 1 1 0 1 Very small single detail
12 1 1 1 1 1 Very small single detail
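Table 2 can be implemented as a direct lookup from the five filter-presence bits. The sketch below is illustrative (names are assumptions); a presence bit is set when the pixel's magnitude in that filtered frame exceeds its threshold THDi.

```python
# Table 2 as a lookup: (R1, R2, R3, C1, C2) presence bits -> subclass
SUBCLASS = {
    (0, 0, 0, 0, 0): 1,  (0, 0, 0, 0, 1): 2,  (0, 0, 0, 1, 1): 3,
    (0, 0, 1, 0, 0): 4,  (0, 0, 1, 0, 1): 5,  (0, 0, 1, 1, 1): 6,
    (0, 1, 1, 0, 0): 7,  (0, 1, 1, 0, 1): 8,  (0, 1, 1, 1, 1): 9,
    (1, 1, 1, 0, 0): 10, (1, 1, 1, 0, 1): 11, (1, 1, 1, 1, 1): 12,
}

def classify(filter_values, thd):
    """filter_values: the pixel's values in the HPF-R1, R2, R3, C1, C2 frames."""
    bits = tuple(int(abs(v) > thd) for v in filter_values)
    return SUBCLASS.get(bits)   # None for combinations outside Table 2
```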
The output of subclass determiner 56 is an indication of the subclass to which each pixel of the current frame belongs. Intra-frame processor 44 performs spatial filtering of the frame, where the type of filter utilized varies in accordance with the subclass to which the pixel belongs.
In accordance with a preferred embodiment of the present invention, intra-frame processor 44 filters each subclass of the frame differently and according to the information content of the subclass. The filtering limits the bandwidth of each subclass which is equivalent to sampling the data at different frequencies. Subclasses with a lot of content are sampled at a high frequency while subclasses with little content, such as a plain background area, are sampled at a low frequency.
Another way to consider the operation of the filters is that they smooth the data of the subclass, removing “noisiness” in the picture that the human eye does not perceive. Thus, intra-frame processor 44 changes the intensity of the pixel by an amount less than the visual distinguishing threshold for that pixel. Pixels whose contrast is lower than the threshold (i.e. details which were detected only) are transformed with non-linear filters. If desired, the data size of the detected-only pixels can be reduced from 8 bits to 1 or 2 bits, depending on the visual threshold level and the detail dimension for the pixel. For the other pixels (i.e. the distinguished ones), 3 or 4 bits are sufficient.
Intra-frame processor 44 comprises a controllable filter bank 60 and a filter selector 62. Controllable filter bank 60 comprises a set of low pass and non-linear filters, shown in FIGS. 5A and 5B to which reference is now made, which filter selector 62 activates, based on the subclass to which the pixel belongs. Selector 62 can activate more than one filter, as necessary.
FIGS. 5A and 5B are two, alternative embodiments of controllable filter bank 60. Both comprise two sections 64 and 66 which operate on columns (i.e. line to line) and on rows (i.e. within a line), respectively. In each section 64 and 66, there is a choice of filters, each controlled by an appropriate switch, labeled SW-X, where X is one of C1, C2, R1, R2, R3 (selecting one of the low pass filters (LPF)), D-C, D-R (selecting to pass the relevant pixel directly). Filter selector 62 switches the relevant switch, thereby activating the relevant filter. It is noted that the non-linear filters NLF-R and NLF-C are activated by switches R3 and C2, respectively. Thus, the outputs of non-linear filters NLF-R and NLF-C are added to the outputs of low pass filters LPF-R3 and LPF-C2, respectively.
Controllable filter bank 60 also includes time aligners (TA) which add any necessary delays to ensure that the pixel currently being processed remains at its appropriate location within the frame.
The low pass filters (LPF) are associated with the high pass filters used in analyzer 50. Thus, the cutoff frequencies of the low pass filters are close to those of the high pass filters. The low pass filters thus pass that which their associated high pass filters ignore.
FIG. 6, to which reference is now briefly made, illustrates exemplary low pass filters for the example provided hereinabove. Low pass filter LPF-R3 has a cutoff frequency of 0.5 MHz and thus, generally does not retain anything which its associated high pass filter HPF-R3 (with a cutoff frequency of 1 MHz) retains. Filter LPF-R2 has a cutoff frequency of 1 MHz, filter LPF-C2 has a cutoff frequency of 1.25 MHz and filters LPF-C1 and LPF-R1 have a cutoff frequency of about 2 MHz. As with the high pass filters, filters LPF-Cx operate on the columns of the frame and filters LPF-Rx operate on the rows of the frame.
FIG. 7, to which reference is now briefly made, illustrates an exemplary transfer function for the non-linear filters (NLF) which models the response of the eye when detecting a detail. The transfer function defines an output value Vout normalized by the threshold level THDi as a function of an input value Vin also normalized by the threshold level THDi. As can be seen in the figure, the input-output relationship is described by a polynomial of high order. A typical order might be six, though lower orders, of power two or three, are also feasible.
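A minimal sketch of such a polynomial transfer function follows. The simple form Vout/THDi = (Vin/THDi)^order is an assumption, since the patent does not give the polynomial's coefficients; it captures the stated behavior, with sub-threshold inputs strongly compressed and the response approaching unity at the threshold.

```python
def nonlinear_filter(v_in, thd, order=6):
    """Polynomial NLF sketch: Vout/THD = (Vin/THD)^order.
    Assumes non-negative contrast magnitudes as input."""
    x = v_in / thd
    return thd * x ** order
```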
Table 3 lists the type of filters activated per subclass, where the header for the column indicates both the type of filter and the label of the switch SW-X of FIGS. 5A and 5B.
TABLE 3
Low Pass Filters
Subclass R1 R2 R3 C1 C2 D-R D-C
1 0 0 1 0 1 0 0
2 0 0 1 1 0 0 0
3 0 0 1 0 0 0 1
4 0 1 0 0 1 0 0
5 0 1 0 1 0 0 0
6 0 1 0 0 0 0 1
7 1 0 0 0 1 0 0
8 1 0 0 1 0 0 0
9 1 0 0 0 0 0 1
10 0 0 0 0 1 1 0
11 0 0 0 1 0 1 0
12 0 0 0 0 0 1 1
FIG. 5B includes rounding elements RND which reduce the number of bits of a pixel from eight to three or four bits, depending on the subclass to which the pixel belongs. Table 4 illustrates the logic for the example presented hereinabove, where the items which are not active for the subclass are indicated by “N/A”.
TABLE 4
RND-R0 RND-R1 RND-R2 RND-C0 RND-C1
subclass (Z1) (Z2) (Z3) (Z4) (Z5)
1 N/A N/A N/A N/A N/A
2 N/A N/A N/A N/A 4 bit
3 N/A N/A N/A 4 bit N/A
4 N/A N/A 4 bit N/A N/A
5 N/A N/A 4 bit N/A 4 bit
6 N/A N/A 4 bit 4 bit N/A
7 N/A 4 bit N/A N/A N/A
8 N/A 3 bit N/A N/A 3 bit
9 N/A 3 bit N/A 3 bit N/A
10 4 bit N/A N/A N/A N/A
11 3 bit N/A N/A N/A 3 bit
12 3 bit N/A N/A 3 bit N/A
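The rounding elements RND of FIG. 5B can be sketched as a simple requantization of an 8-bit intensity to the bit depth given in Table 4. The exact rounding behavior is an assumption; truncation to the quantization step is shown here.

```python
def rnd(value, bits):
    """Requantize an 8-bit intensity (0-255) to `bits` significant bits,
    keeping the result on the 0-255 scale."""
    step = 256 >> bits          # quantization step for the target bit depth
    return (value // step) * step
```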
The output of intra-frame processor 44 is a processed version of the current frame which uses fewer bits to describe the frame than the original version.
Reference is now made to FIGS. 8A, 8B and 8C, which illustrate three alternative embodiments for inter-frame processor 48 which provides temporal filtering (i.e. inter-frame filtering) to further process the current frame. Since the present invention provides a full frame as output, inter-frame processor 48 determines which pixels have changed significantly from the previous frame and amends those only, storing the new version in the appropriate location in output frame memory 46.
The embodiments of FIGS. 8A and 8B are open loop versions (i.e. the previous frame is the frame previously input into inter-frame processor 48) while the embodiment of FIG. 8C is a closed loop version (i.e. the previous frame is the frame previously produced by inter-frame processor 48). All of the embodiments comprise a summer 68, a low pass filter (LPF) 70, a high pass filter (HPF) 72, two comparators 74 and 76, two switches 78 and 80, controlled by the results of comparators 74 and 76, respectively, and a summer 82. FIGS. 8A and 8B additionally include an intermediate memory 84 for storing the output of intra-frame processor 44.
Summer 68 takes the difference of the processed current frame, produced by processor 44, and the previous frame, stored in either intermediate memory 84 (FIGS. 8A and 8B) or in frame memory 46 (FIG. 8C). The difference frame is then processed in two parallel tracks.
In the first track, the low pass filter is used. Each pixel of the filtered frame is compared to a general, large detail, threshold THD-LF which is typically set to 5% of the maximum expected intensity for the frame. Thus, the pixels which are kept are only those which changed by more than 5% (i.e. those whose changes can be “seen” by the human eye).
In the second track, the difference frame is high pass filtered. Since high pass filtering retains the small details, each pixel of the high pass filtered frame is compared to the particular threshold THDi for that pixel, as produced by threshold determiner 54. If the difference pixel has an intensity above the threshold THDi (i.e. the change in the pixel is significant for detailed visual perception), it is allowed through (i.e. switch 80 is set to pass the pixel).
Summer 82 adds the filtered difference pixels passed by switches 78 and/or 80 to the pixel of the previous frame to produce the new pixel. If switches 78 and 80 did not pass anything, the new pixel is the same as the previous pixel. Otherwise, the new pixel is the sum of the previous pixel and the low and high frequency components of the difference pixel.
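The two-track update performed by comparators 74 and 76, switches 78 and 80, and summer 82 can be sketched per frame as follows. The array names are illustrative, and the low- and high-pass filtered difference frames are taken as inputs rather than computed here.

```python
import numpy as np

def temporal_update(prev, low_diff, high_diff, thd_map, thd_lf):
    """Pass each difference component only where it exceeds its threshold
    (THD-LF for the low-pass track, per-pixel THDi for the high-pass track),
    then add the passed parts to the previous frame."""
    pass_low = np.where(np.abs(low_diff) > thd_lf, low_diff, 0.0)
    pass_high = np.where(np.abs(high_diff) > thd_map, high_diff, 0.0)
    return prev + pass_low + pass_high
```

Where neither track passes anything, the output pixel equals the previous pixel, so only significant changes are written into output frame memory 46.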
Reference is now briefly made to FIGS. 9, 10A, 10B and 11 which detail elements of frame analyzer 42. In these figures, the term “ML” indicates a memory line of the current frame, “MP” indicates a memory pixel of the current frame, “MF” indicates a memory frame, “VD” indicates the vertical drive signal, “TA” indicates a time alignment, e.g. a delay, and CNT indicates a counter.
FIG. 9 generally illustrates the operation of spatial-temporal analyzer 50 and FIGS. 10A and 10B provide one detailed embodiment for the spatial analysis and temporal analysis portions 51 and 53, respectively. FIG. 11 details parameter estimator 52, threshold determiner 54 and subclass determiner 56. As these figures are deemed to be self-explanatory, no further explanation will be included here.
It is noted that the present invention can be implemented with a field programmable gate array (FPGA) and the frame memory can be implemented with SRAM or SDRAM.
The methods and apparatus disclosed herein have been described without reference to specific hardware or software. Rather, the methods and apparatus have been described in a manner sufficient to enable persons of ordinary skill in the art to readily adapt commercially available hardware and software as may be needed to reduce any of the embodiments of the present invention to practice without undue experimentation and using conventional techniques.
It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described herein above. Rather the scope of the invention is defined by the claims that follow:

Claims (31)

1. A visual perception threshold unit for image processing, the threshold unit comprising:
a parameter generator to generate a multiplicity of parameters that describe at least some of the information content of at least one video frame to be processed; and
a threshold generator to generate from said parameters, a plurality of visual perception threshold levels to be associated with the pixels of the at least one video frame,
wherein said threshold levels define contrast levels above which a human eye can distinguish a pixel from among its neighboring pixels of said at least one video frame.
2. A unit according to claim 1, wherein said parameter generator comprises at least one of the following units:
a volume unit which determines a volume of information in said at least one video frame;
a color unit which determines a per pixel color; and
an intensity unit which determines a cross-frame change of intensity.
3. A method of generating visual perception thresholds for image processing implemented by one or more elements of a video encoding device, the method comprising:
analyzing details of frames of a video signal;
estimating parameters of said details; and
defining a visual perception threshold for each of said details in accordance with said estimated detail parameters,
wherein said estimating comprises at least one of the following:
determining a per-pixel signal intensity change between a current frame and a previous frame, normalized by a maximum intensity;
determining a normalized volume of intraframe change by high frequency filtering of said current frame, summing the intensities of said filtered frame and normalizing the resultant sum by a maximum possible amount of information within a frame;
generating a volume of inter-frame changes between said current frame and said previous frame normalized by said maximum possible amount of information within a frame;
generating a normalized volume of inter-frame changes for a group of frames from the output of said previous step of generating;
evaluating a signal-to-noise ratio by high pass filtering a difference frame between said current frame and said previous frame by selecting those intensities of said difference frame lower than a threshold defined as three times a noise level under which noise intensities are not perceptible to the human eye, summing the intensities of the pixels in the filtered difference frame and normalizing said sum by said maximum intensity and by a total number of pixels in a frame;
generating a normalized intensity value per-pixel;
generating a per-pixel color saturation level;
generating a per-pixel hue value; and
determining a per-pixel response to said hue value.
4. A method for describing an image implemented by one or more elements of a video encoding device, the method comprising:
determining which details in said image can be distinguished by the human eye and which ones can only be detected by it;
providing one bit to describe a pixel which can only be detected by the human eye; and
providing three bits to describe a pixel which can be distinguished by the human eye.
5. A video compression system comprising:
a parameter generator to generate one or more parameters that describe information content of a video frame; and
a threshold generator to generate, from at least one of the parameters, a plurality of visual perception threshold levels to be associated with pixels of the video frame, wherein said threshold levels define contrast levels above which a pixel of the video frame can be visually distinguished from its neighboring pixels of the video frame.
6. A video compression system according to claim 5, wherein the parameter generator comprises a volume unit configured to determine a volume of information in the video frame.
7. A video compression system according to claim 5, wherein the parameter generator comprises a color unit configured to determine a per pixel color.
8. A video compression system according to claim 5, wherein the parameter generator comprises an intensity unit configured to determine a cross-frame change of intensity.
9. A video compression system according to claim 5, wherein the parameter generator and the threshold generator are implemented in a field programmable gate array (FPGA).
10. A video encoder comprising:
a parameter generator to generate multiple parameters that describe information content of a video frame; and
a threshold generator to generate, from at least one of the multiple parameters, a plurality of visual perception threshold levels to be associated with pixels of the video frame, wherein said threshold levels define contrast levels above which a pixel of the video frame can be visually distinguished from its neighboring pixels of the video frame.
11. A video encoder according to claim 10, wherein the parameter generator comprises a volume unit configured to determine a volume of information in the video frame.
12. A video encoder according to claim 10, wherein the parameter generator comprises a color unit configured to determine a per pixel color.
13. A video encoder according to claim 10, wherein the parameter generator comprises an intensity unit configured to determine a cross-frame change of intensity.
14. A video encoder according to claim 10, wherein the video encoder is embodied in a field programmable gate array (FPGA).
15. A video encoder according to claim 10, wherein the video encoder comprises a visual lossless syntactic encoder.
16. A system comprising:
means for generating one or more parameters that describe information content of a video frame; and
means for generating, from at least one of the parameters, a plurality of visual perception threshold levels to be associated with pixels of the video frame, wherein said threshold levels define contrast levels above which a pixel of the video frame can be visually distinguished from its neighboring pixels of the video frame.
17. A system according to claim 16, wherein the one or more parameters are associated with at least one of:
a volume of information in the video frame;
a cross-frame change of intensity; or
a per pixel color.
18. A system according to claim 16, wherein the system is embodied in a field programmable gate array (FPGA).
19. A video compression system comprising:
means for analyzing one or more details associated with one or more frames of a video signal;
means for estimating parameters of individual analyzed details; and
means for defining a visual perception threshold for individual analyzed details in accordance with at least one of the estimated parameters,
wherein said means for estimating comprises at least one of:
means for determining a per-pixel signal intensity change between a current frame and a previous frame, normalized by a maximum intensity;
means for determining a normalized volume of intraframe change by high frequency filtering of said current frame, summing the intensities of said filtered frame and normalizing the resultant sum by a maximum possible amount of information within a frame;
means for generating a volume of inter-frame changes between said current frame and said previous frame normalized by said maximum possible amount of information within a frame;
means for generating a normalized volume of inter-frame changes within a group of frames normalized by said maximum possible amount of information within a frame and by a number of frames comprising said group of frames;
means for evaluating a signal-to-noise ratio by high pass filtering a difference frame between said current frame and said previous frame by selecting intensities of said difference frame lower than a threshold defined as three times a noise level under which noise intensities are not visually perceptible, summing the intensities of pixels in the filtered difference frame and normalizing said sum by said maximum intensity and by the total number of pixels in a frame;
means for generating a normalized intensity value per-pixel;
means for generating a per-pixel color saturation level;
means for generating a per-pixel hue value; or
means for determining a per-pixel response to said hue value.
20. A video compression system according to claim 19, embodied in a field programmable gate array (FPGA).
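Two of the estimators enumerated in claim 19 can be sketched directly from their stated normalizations: the per-pixel inter-frame intensity change (normalized by maximum intensity) and the noise evaluation over the difference frame (keeping only differences below three times a noise level, then normalizing by maximum intensity and pixel count). The noise level and the omission of the claim's high-pass filtering step are simplifying assumptions.

```python
import numpy as np

def frame_parameters(curr, prev, max_intensity=255.0):
    """Sketch of two claim-19 estimators under assumed normalizations."""
    curr = curr.astype(float)
    prev = prev.astype(float)
    n_pixels = curr.size
    # Per-pixel intensity change, normalized by the maximum intensity.
    delta = np.abs(curr - prev) / max_intensity
    # Volume of inter-frame change, normalized by the maximum possible
    # amount of information in a frame (all pixels at max intensity).
    inter_volume = np.abs(curr - prev).sum() / (max_intensity * n_pixels)
    return delta, inter_volume

def noise_estimate(curr, prev, noise_level=2.0, max_intensity=255.0):
    """Noise term of the claim-19 SNR evaluation (high-pass filtering
    of the difference frame omitted for brevity): sum the difference
    intensities below three times an assumed noise level, normalized
    by maximum intensity and total pixel count."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    noise_only = diff[diff < 3.0 * noise_level]
    return noise_only.sum() / (max_intensity * curr.size)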
21. A method implemented by one or more elements of a video encoding device comprising:
identifying one or more distinguishable details in an image, individual distinguishable details being associated with a contrast level at which a pixel can be visually distinguished from among its neighboring pixels;
using a plurality of bits to describe individual identified distinguishable details; and
using less than said plurality of bits to describe one or more individual details in the image not identified as distinguishable.
22. A method according to claim 21, further comprising identifying the one or more of the individual details in the image not identified as distinguishable as being visually detectable.
23. A method according to claim 21, wherein using the plurality of bits comprises using three bits.
24. A method according to claim 21, further comprising performing the identifying, the using the plurality of bits, and the using less than said plurality of bits by the one or more elements embodied in a field programmable gate array (FPGA).
25. A system comprising:
means for identifying one or more distinguishable details in an image, individual distinguishable details being associated with a contrast level at which a pixel can be visually distinguished from among its neighboring pixels;
means for using a plurality of bits to describe individual identified distinguishable details; and
means for using less than said plurality of bits to describe one or more individual details in the image not identified as distinguishable.
26. A system according to claim 25, wherein one or more of the individual details in the image not identified as distinguishable are identified as being visually detectable.
27. A system according to claim 25, wherein the plurality of bits comprises three bits.
28. A system according to claim 25 comprising part of a field programmable gate array (FPGA).
29. A visual perception threshold unit according to claim 1, wherein one or both of the parameter generator or the threshold generator are implemented in a video encoder.
30. A visual perception threshold unit according to claim 29, wherein the video encoder comprises a visual lossless syntactic encoder.
31. A visual perception threshold unit according to claim 1 comprising part of a field programmable gate array (FPGA).
US12/196,180 2000-01-23 2008-08-21 Method and apparatus for visual lossless image syntactic encoding Expired - Lifetime USRE42148E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/196,180 USRE42148E1 (en) 2000-01-23 2008-08-21 Method and apparatus for visual lossless image syntactic encoding

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
IL134182A IL134182A (en) 2000-01-23 2000-01-23 Method and apparatus for visual lossless pre-processing
IL134182 2000-01-23
US09/524,618 US6473532B1 (en) 2000-01-23 2000-03-14 Method and apparatus for visual lossless image syntactic encoding
US10/121,685 US6952500B2 (en) 2000-01-23 2002-04-15 Method and apparatus for visual lossless image syntactic encoding
US11/036,062 US7095903B2 (en) 2000-01-23 2005-01-18 Method and apparatus for visual lossless image syntactic encoding
US12/196,180 USRE42148E1 (en) 2000-01-23 2008-08-21 Method and apparatus for visual lossless image syntactic encoding

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/036,062 Reissue US7095903B2 (en) 2000-01-23 2005-01-18 Method and apparatus for visual lossless image syntactic encoding

Publications (1)

Publication Number Publication Date
USRE42148E1 true USRE42148E1 (en) 2011-02-15

Family

ID=11073740

Family Applications (4)

Application Number Title Priority Date Filing Date
US09/524,618 Expired - Lifetime US6473532B1 (en) 2000-01-23 2000-03-14 Method and apparatus for visual lossless image syntactic encoding
US10/121,685 Expired - Lifetime US6952500B2 (en) 2000-01-23 2002-04-15 Method and apparatus for visual lossless image syntactic encoding
US11/036,062 Ceased US7095903B2 (en) 2000-01-23 2005-01-18 Method and apparatus for visual lossless image syntactic encoding
US12/196,180 Expired - Lifetime USRE42148E1 (en) 2000-01-23 2008-08-21 Method and apparatus for visual lossless image syntactic encoding

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US09/524,618 Expired - Lifetime US6473532B1 (en) 2000-01-23 2000-03-14 Method and apparatus for visual lossless image syntactic encoding
US10/121,685 Expired - Lifetime US6952500B2 (en) 2000-01-23 2002-04-15 Method and apparatus for visual lossless image syntactic encoding
US11/036,062 Ceased US7095903B2 (en) 2000-01-23 2005-01-18 Method and apparatus for visual lossless image syntactic encoding

Country Status (5)

Country Link
US (4) US6473532B1 (en)
EP (1) EP1260094A4 (en)
AU (1) AU2001228771A1 (en)
IL (1) IL134182A (en)
WO (1) WO2001054392A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090316019A1 (en) * 2008-06-20 2009-12-24 Altek Corporation Tone adjustment method for digital image and electronic apparatus using the same
US20100225817A1 (en) * 2000-06-28 2010-09-09 Sheraizin Semion M Real Time Motion Picture Segmentation and Superposition

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL134182A (en) 2000-01-23 2006-08-01 Vls Com Ltd Method and apparatus for visual lossless pre-processing
US6735339B1 (en) * 2000-10-27 2004-05-11 Dolby Laboratories Licensing Corporation Multi-stage encoding of signal components that are classified according to component value
US7453468B2 (en) * 2000-11-29 2008-11-18 Xerox Corporation Intelligent color to texture converter
US6744818B2 (en) * 2000-12-27 2004-06-01 Vls Com Ltd. Method and apparatus for visual perception encoding
US7277587B2 (en) * 2002-04-26 2007-10-02 Sharp Laboratories Of America, Inc. System and method for lossless video coding
WO2004008773A1 (en) * 2002-07-11 2004-01-22 Matsushita Electric Industrial Co., Ltd. Filtering intensity decision method, moving picture encoding method, and moving picture decoding method
US20040131117A1 (en) * 2003-01-07 2004-07-08 Sheraizin Vitaly S. Method and apparatus for improving MPEG picture compression
US7786988B2 (en) 2003-07-16 2010-08-31 Honeywood Technologies, Llc Window information preservation for spatially varying power conservation
US7580033B2 (en) 2003-07-16 2009-08-25 Honeywood Technologies, Llc Spatial-based power savings
US7663597B2 (en) * 2003-07-16 2010-02-16 Honeywood Technologies, Llc LCD plateau power conservation
US7583260B2 (en) 2003-07-16 2009-09-01 Honeywood Technologies, Llc Color preservation for spatially varying power conservation
US20060020906A1 (en) * 2003-07-16 2006-01-26 Plut William J Graphics preservation for spatially varying display device power conversation
US7602388B2 (en) * 2003-07-16 2009-10-13 Honeywood Technologies, Llc Edge preservation for spatially varying power conservation
US7714831B2 (en) 2003-07-16 2010-05-11 Honeywood Technologies, Llc Background plateau manipulation for display device power conservation
US7636488B2 (en) * 2003-12-18 2009-12-22 Itt Manufacturing Enterprises, Inc. User adjustable image enhancement filtering
US7903902B2 (en) 2004-07-26 2011-03-08 Sheraizin Semion M Adaptive image improvement
US7639892B2 (en) * 2004-07-26 2009-12-29 Sheraizin Semion M Adaptive image improvement
KR100697516B1 (en) * 2004-10-27 2007-03-20 엘지전자 주식회사 Moving picture coding method based on 3D wavelet transformation
US8780957B2 (en) 2005-01-14 2014-07-15 Qualcomm Incorporated Optimal weights for MMSE space-time equalizer of multicode CDMA system
US7526142B2 (en) * 2005-02-22 2009-04-28 Sheraizin Vitaly S Enhancement of decompressed video
CN101171843B (en) * 2005-03-10 2010-10-13 高通股份有限公司 Content classification for multimedia processing
US7169920B2 (en) * 2005-04-22 2007-01-30 Xerox Corporation Photoreceptors
US7760210B2 (en) * 2005-05-04 2010-07-20 Honeywood Technologies, Llc White-based power savings
US8755446B2 (en) * 2005-05-04 2014-06-17 Intel Corporation Varying sharpness based on motion in video sequences
US7602408B2 (en) 2005-05-04 2009-10-13 Honeywood Technologies, Llc Luminance suppression power conservation
US9113147B2 (en) 2005-09-27 2015-08-18 Qualcomm Incorporated Scalability techniques based on content information
US8948260B2 (en) 2005-10-17 2015-02-03 Qualcomm Incorporated Adaptive GOP structure in video streaming
US9131164B2 (en) 2006-04-04 2015-09-08 Qualcomm Incorporated Preprocessor method and apparatus
EP1924097A1 (en) * 2006-11-14 2008-05-21 Sony Deutschland Gmbh Motion and scene change detection using color components
US8818744B2 (en) 2008-10-16 2014-08-26 Tektronix, Inc. Test and measurement instrument and method of switching waveform display styles
US9842410B2 (en) 2015-06-18 2017-12-12 Samsung Electronics Co., Ltd. Image compression and decompression with noise separation

Citations (128)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2697758A (en) 1950-08-01 1954-12-21 Rca Corp Gamma correcting circuit
US3961133A (en) 1974-05-24 1976-06-01 The Singer Company Method and apparatus for combining video images with proper occlusion
GB1503612A (en) 1974-02-19 1978-03-15 Sonex Int Corp Arrangements for keying video signals
JPS5571363A (en) 1978-11-24 1980-05-29 Sony Corp Video synthesizing unit
US4855825A (en) 1984-06-08 1989-08-08 Valtion Teknillinen Tutkimuskeskus Method and apparatus for detecting the most powerfully changed picture areas in a live video signal
JPH01206775A (en) 1988-02-13 1989-08-18 Sony Corp Gamma correcting circuit for luminance signal
US4947255A (en) 1988-09-19 1990-08-07 The Grass Valley Group, Inc. Video luminance self keyer
US5012333A (en) 1989-01-05 1991-04-30 Eastman Kodak Company Interactive dynamic range adjustment system for printing digital images
JPH0483480A (en) 1990-07-26 1992-03-17 Nippon Hoso Kyokai <Nhk> Polarizing key type image synthesizer
US5126847A (en) 1989-09-28 1992-06-30 Sony Corporation Apparatus for producing a composite signal from real moving picture and still picture video signals
US5194943A (en) 1990-11-06 1993-03-16 Hitachi, Ltd. Video camera having a γ-correction circuit for correcting level characteristics of a luminance signal included in a video signal
US5245445A (en) 1991-03-22 1993-09-14 Ricoh Company, Ltd. Image processing apparatus
US5301016A (en) 1991-12-21 1994-04-05 U.S. Philips Corporation Method of and arrangement for deriving a control signal for inserting a background signal into parts of a foreground signal
JPH06133221A (en) 1992-10-14 1994-05-13 Sony Corp Image pickup device
US5339171A (en) 1990-04-24 1994-08-16 Ricoh Company, Ltd. Image processing apparatus especially suitable for producing smooth-edged output multi-level tone data having fewer levels than input multi-level tone data
US5341442A (en) 1992-01-21 1994-08-23 Supermac Technology, Inc. Method and apparatus for compression data by generating base image data from luminance and chrominance components and detail image data from luminance component
US5384601A (en) 1992-08-25 1995-01-24 Matsushita Electric Industrial Co., Ltd. Color adjustment apparatus for automatically changing colors
US5404174A (en) 1992-06-29 1995-04-04 Victor Company Of Japan, Ltd. Scene change detector for detecting a scene change of a moving picture
US5428398A (en) 1992-04-10 1995-06-27 Faroudja; Yves C. Method and apparatus for producing from a standard-bandwidth television signal a signal which when reproduced provides a high-definition-like video image relatively free of artifacts
US5467404A (en) 1991-08-14 1995-11-14 Agfa-Gevaert Method and apparatus for contrast enhancement
US5488675A (en) 1994-03-31 1996-01-30 David Sarnoff Research Center, Inc. Stabilizing estimate of location of target region inferred from tracked multiple landmark regions of a video image
US5491514A (en) 1993-01-28 1996-02-13 Matsushita Electric Industrial Co., Ltd. Coding apparatus, decoding apparatus, coding-decoding apparatus for video signals, and optical disks conforming thereto
US5491517A (en) 1994-03-14 1996-02-13 Scitex America Corporation System for implanting an image into a video stream
US5491519A (en) 1993-12-16 1996-02-13 Daewoo Electronics Co., Ltd. Pre-processing filter apparatus for use in an image encoding system
US5510824A (en) 1993-07-26 1996-04-23 Texas Instruments, Inc. Spatial light modulator array
US5537510A (en) 1994-12-30 1996-07-16 Daewoo Electronics Co., Ltd. Adaptive digital audio encoding apparatus and a bit allocation method thereof
JPH08191440A (en) 1995-01-10 1996-07-23 Fukuda Denshi Co Ltd Method and device for correcting endoscope image
US5539475A (en) 1993-09-10 1996-07-23 Sony Corporation Method of and apparatus for deriving a key signal from a digital video signal
US5542008A (en) 1990-02-28 1996-07-30 Victor Company Of Japan, Ltd. Method of and apparatus for compressing image representing signals
US5555557A (en) 1990-04-23 1996-09-10 Xerox Corporation Bit-map image resolution converter with controlled compensation for write-white xerographic laser printing
US5557340A (en) 1990-12-13 1996-09-17 Rank Cintel Limited Noise reduction in video signals
US5566251A (en) 1991-09-18 1996-10-15 David Sarnoff Research Center, Inc Video merging employing pattern-key insertion
US5565921A (en) 1993-03-16 1996-10-15 Olympus Optical Co., Ltd. Motion-adaptive image signal processing system
US5586200A (en) 1994-01-07 1996-12-17 Panasonic Technologies, Inc. Segmentation based image compression system
US5592226A (en) 1994-01-26 1997-01-07 Btg Usa Inc. Method and apparatus for video data compression using temporally adaptive motion interpolation
US5613035A (en) 1994-01-18 1997-03-18 Daewoo Electronics Co., Ltd. Apparatus for adaptively encoding input digital audio signals from a plurality of channels
US5627937A (en) 1995-01-09 1997-05-06 Daewoo Electronics Co. Ltd. Apparatus for adaptively encoding input digital audio signals from a plurality of channels
US5648801A (en) 1994-12-16 1997-07-15 International Business Machines Corporation Grayscale printing system
US5653234A (en) 1995-09-29 1997-08-05 Siemens Medical Systems, Inc. Method and apparatus for adaptive spatial image filtering
US5694492A (en) 1994-04-30 1997-12-02 Daewoo Electronics Co., Ltd Post-processing method and apparatus for removing a blocking effect in a decoded image signal
US5717463A (en) 1995-07-24 1998-02-10 Motorola, Inc. Method and system for estimating motion within a video sequence
EP0502615B1 (en) 1991-03-07 1998-06-03 Matsushita Electric Industrial Co., Ltd. Video signal motion detection method and noise reducer using said method
US5774593A (en) 1995-07-24 1998-06-30 University Of Washington Automatic scene decomposition and optimization of MPEG compressed video
US5787203A (en) 1996-01-19 1998-07-28 Microsoft Corporation Method and system for filtering compressed video images
US5790195A (en) 1993-12-28 1998-08-04 Canon Kabushiki Kaisha Image processing apparatus
US5796864A (en) 1992-05-12 1998-08-18 Apple Computer, Inc. Method and apparatus for real-time lossless compression and decompression of image data
US5799111A (en) 1991-06-14 1998-08-25 D.V.P. Technologies, Ltd. Apparatus and methods for smoothing images
US5828776A (en) 1994-09-20 1998-10-27 Neopath, Inc. Apparatus for identification and integration of multiple cell patterns
US5838835A (en) 1994-05-03 1998-11-17 U.S. Philips Corporation Better contrast noise by residue image
US5844607A (en) 1996-04-03 1998-12-01 International Business Machines Corporation Method and apparatus for scene change detection in digital video compression
US5844614A (en) 1995-01-09 1998-12-01 Matsushita Electric Industrial Co., Ltd. Video signal decoding apparatus
US5845012A (en) 1995-03-20 1998-12-01 Daewoo Electronics Co., Ltd. Apparatus for encoding an image signal having a still object
US5847772A (en) 1996-09-11 1998-12-08 Wells; Aaron Adaptive filter for video processing applications
US5847766A (en) 1994-05-31 1998-12-08 Samsung Electronics Co, Ltd. Video encoding method and apparatus based on human visual sensitivity
US5850294A (en) 1995-12-18 1998-12-15 Lucent Technologies Inc. Method and apparatus for post-processing images
US5852475A (en) 1995-06-06 1998-12-22 Compression Labs, Inc. Transform artifact reduction process
US5870501A (en) 1996-07-11 1999-02-09 Daewoo Electronics, Co., Ltd. Method and apparatus for encoding a contour image in a video signal
US5881174A (en) 1997-02-18 1999-03-09 Daewoo Electronics Co., Ltd. Method and apparatus for adaptively coding a contour of an object
US5883983A (en) 1996-03-23 1999-03-16 Samsung Electronics Co., Ltd. Adaptive postprocessing system for reducing blocking effects and ringing noise in decompressed image signals
US5901178A (en) 1996-02-26 1999-05-04 Solana Technology Development Corporation Post-compression hidden data transport for video
US5914748A (en) 1996-08-30 1999-06-22 Eastman Kodak Company Method and apparatus for generating a composite image using the difference of two images
US5974159A (en) 1996-03-29 1999-10-26 Sarnoff Corporation Method and apparatus for assessing the visibility of differences between two image sequences
US5982926A (en) 1995-01-17 1999-11-09 At & T Ipm Corp. Real-time image enhancement techniques
US5991464A (en) 1998-04-03 1999-11-23 Odyssey Technologies Method and system for adaptive video image resolution enhancement
US5995656A (en) 1996-05-21 1999-11-30 Samsung Electronics Co., Ltd. Image enhancing method using lowpass filtering and histogram equalization and a device therefor
US6005626A (en) 1997-01-09 1999-12-21 Sun Microsystems, Inc. Digital video signal encoder and encoding method
US6014172A (en) 1997-03-21 2000-01-11 Trw Inc. Optimized video compression from a single process step
US6037986A (en) 1996-07-16 2000-03-14 Divicom Inc. Video preprocessing method and apparatus with selective filtering based on motion detection
WO2000019726A1 (en) 1998-09-29 2000-04-06 General Instrument Corporation Method and apparatus for detecting scene changes and adjusting picture coding type in a high definition television encoder
US6055340A (en) 1997-02-28 2000-04-25 Fuji Photo Film Co., Ltd. Method and apparatus for processing digital images to suppress their noise and enhancing their sharpness
US6094511A (en) 1996-07-31 2000-07-25 Canon Kabushiki Kaisha Image filtering method and apparatus with interpolation according to mapping function to produce final image
US6097848A (en) 1997-11-03 2000-08-01 Welch Allyn, Inc. Noise reduction apparatus for electronic edge enhancement
US6100625A (en) 1997-11-10 2000-08-08 Nec Corporation Piezoelectric ceramic transducer and method of forming the same
US6130723A (en) 1998-01-15 2000-10-10 Innovision Corporation Method and system for improving image quality on an interlaced video display
US6191772B1 (en) 1992-11-02 2001-02-20 Cagent Technologies, Inc. Resolution enhancement for video display using multi-line interpolation
US6229925B1 (en) 1997-05-27 2001-05-08 Thomas Broadcast Systems Pre-processing device for MPEG 2 coding
US6236751B1 (en) 1998-09-23 2001-05-22 Xerox Corporation Automatic method for determining piecewise linear transformation from an image histogram
US20010003545A1 (en) 1999-12-08 2001-06-14 Lg Electronics Inc. Method for eliminating blocking effect in compressed video signal
US6259489B1 (en) 1996-04-12 2001-07-10 Snell & Wilcox Limited Video noise reducer
US6282299B1 (en) 1996-08-30 2001-08-28 Regents Of The University Of Minnesota Method and apparatus for video watermarking using perceptual masks
US6320676B1 (en) 1997-02-04 2001-11-20 Fuji Photo Film Co., Ltd. Method of predicting and processing image fine structures
EP0729117B1 (en) 1995-02-21 2001-11-21 Hitachi, Ltd. Method and apparatus for detecting a point of change in moving images
US20020015508A1 (en) 2000-06-19 2002-02-07 Digimarc Corporation Perceptual modeling of media signals based on local contrast and directional edges
US6366705B1 (en) 1999-01-28 2002-04-02 Lucent Technologies Inc. Perceptual preprocessing techniques to reduce complexity of video coders
US6385647B1 (en) 1997-08-18 2002-05-07 Mci Communications Corporations System for selectively routing data via either a network that supports Internet protocol or via satellite transmission network based on size of the data
US6404460B1 (en) 1999-02-19 2002-06-11 Omnivision Technologies, Inc. Edge enhancement with background noise reduction in video image processing
US20020104854A1 (en) 2001-01-24 2002-08-08 Jensen Bjorn Slot Dosing spout for mounting on a container
US20020122494A1 (en) 2000-12-27 2002-09-05 Sheraizin Vitaly S. Method and apparatus for visual perception encoding
US6463173B1 (en) 1995-10-30 2002-10-08 Hewlett-Packard Company System and method for histogram-based image contrast enhancement
US6466912B1 (en) 1997-09-25 2002-10-15 At&T Corp. Perceptual coding of audio signals employing envelope uncertainty
US6473532B1 (en) 2000-01-23 2002-10-29 Vls Com Ltd. Method and apparatus for visual lossless image syntactic encoding
US20020181598A1 (en) 2001-04-16 2002-12-05 Mitsubishi Electric Research Laboratories, Inc. Estimating total average distortion in a video with variable frameskip
US6509158B1 (en) 1988-09-15 2003-01-21 Wisconsin Alumni Research Foundation Image processing and analysis of individual nucleic acid molecules
US6554181B1 (en) 1998-02-09 2003-04-29 Sig Combibloc Gmbh & Co. Kg Reclosable pouring element and a flat gable composite packaging provided therewith
US6559826B1 (en) 1998-11-06 2003-05-06 Silicon Graphics, Inc. Method for modeling and updating a colorimetric reference profile for a flat panel display
US6567116B1 (en) 1998-11-20 2003-05-20 James A. Aman Multiple object tracking system
US20030107681A1 (en) 2001-12-12 2003-06-12 Masayuki Otawara Contrast correcting circuit
US6580825B2 (en) 1999-05-13 2003-06-17 Hewlett-Packard Company Contrast enhancement of an image using luminance and RGB statistical metrics
US20030122969A1 (en) 2001-11-08 2003-07-03 Olympus Optical Co., Ltd. Noise reduction system, noise reduction method, recording medium, and electronic camera
US20030152283A1 (en) 1998-08-05 2003-08-14 Kagumi Moriwaki Image correction device, image correction method and computer program product in memory for image correction
US6610256B2 (en) 1989-04-05 2003-08-26 Wisconsin Alumni Research Foundation Image processing and analysis of individual nucleic acid molecules
US6628327B1 (en) 1997-01-08 2003-09-30 Ricoh Co., Ltd Method and a system for improving resolution in color image data generated by a color image sensor
US6707487B1 (en) 1998-11-20 2004-03-16 In The Play, Inc. Method for representing real-time motion
US6728317B1 (en) 1996-01-30 2004-04-27 Dolby Laboratories Licensing Corporation Moving image compression quality enhancement using displacement filters with negative lobes
US20040091145A1 (en) 2002-04-11 2004-05-13 Atsushi Kohashi Image signal processing system and camera having the image signal processing system
US6753929B1 (en) 2000-06-28 2004-06-22 Vls Com Ltd. Method and system for real time motion picture segmentation and superposition
US6757449B1 (en) 1999-11-17 2004-06-29 Xerox Corporation Methods and systems for processing anti-aliased images
US6782287B2 (en) 2000-06-27 2004-08-24 The Board Of Trustees Of The Leland Stanford Junior University Method and apparatus for tracking a medical instrument based on image registration
US20040184673A1 (en) 2003-03-17 2004-09-23 Oki Data Corporation Image processing method and image processing apparatus
US20040190789A1 (en) 2003-03-26 2004-09-30 Microsoft Corporation Automatic analysis and adjustment of digital images with exposure problems
US6835693B2 (en) 2002-11-12 2004-12-28 Eastman Kodak Company Composite positioning imaging element
US6845181B2 (en) 2001-07-12 2005-01-18 Eastman Kodak Company Method for processing a digital image to adjust brightness
US20050013485A1 (en) 1999-06-25 2005-01-20 Minolta Co., Ltd. Image processor
US6847391B1 (en) 1988-10-17 2005-01-25 Lord Samuel Anthony Kassatly Multi-point video conference system
US6873442B1 (en) 2000-11-07 2005-03-29 Eastman Kodak Company Method and system for generating a low resolution image from a sparsely sampled extended dynamic range image sensing device
US6940903B2 (en) 2001-03-05 2005-09-06 Intervideo, Inc. Systems and methods for performing bit rate allocation for a video data stream
US6940545B1 (en) 2000-02-28 2005-09-06 Eastman Kodak Company Face detecting camera and method
US20050259185A1 (en) 2004-05-21 2005-11-24 Moon-Cheol Kim Gamma correction apparatus and method capable of preventing noise boost-up
US20060013503A1 (en) 2004-07-16 2006-01-19 Samsung Electronics Co., Ltd. Methods of preventing noise boost in image contrast enhancement
US7003174B2 (en) 2001-07-02 2006-02-21 Corel Corporation Removal of block encoding artifacts
US7075993B2 (en) 2001-06-12 2006-07-11 Digital Interactive Streams, Inc. Correction system and method for enhancing digital video
US7087021B2 (en) 2001-02-20 2006-08-08 Giovanni Paternostro Methods of screening for genes and agents affecting cardiac function
US7110601B2 (en) 2001-10-25 2006-09-19 Japan Aerospace Exploration Agency Method for detecting linear image in planar picture
US7139425B2 (en) 2000-08-28 2006-11-21 Fuji Photo Film Co., Ltd. Method and apparatus for correcting white balance, method for correcting density and a recording medium on which a program for carrying out the methods is recorded
US7184071B2 (en) 2002-08-23 2007-02-27 University Of Maryland Method of three-dimensional object reconstruction from a video sequence using a generic model
US7221805B1 (en) 2001-12-21 2007-05-22 Cognex Technology And Investment Corporation Method for generating a focused image of an object
US7526142B2 (en) 2005-02-22 2009-04-28 Sheraizin Vitaly S Enhancement of decompressed video
US7639892B2 (en) 2004-07-26 2009-12-29 Sheraizin Semion M Adaptive image improvement

Patent Citations (139)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2697758A (en) 1950-08-01 1954-12-21 Rca Corp Gamma correcting circuit
GB1503612A (en) 1974-02-19 1978-03-15 Sonex Int Corp Arrangements for keying video signals
US3961133A (en) 1974-05-24 1976-06-01 The Singer Company Method and apparatus for combining video images with proper occlusion
JPS5571363A (en) 1978-11-24 1980-05-29 Sony Corp Video synthesizing unit
US4855825A (en) 1984-06-08 1989-08-08 Valtion Teknillinen Tutkimuskeskus Method and apparatus for detecting the most powerfully changed picture areas in a live video signal
JPH01206775A (en) 1988-02-13 1989-08-18 Sony Corp Gamma correcting circuit for luminance signal
US7049074B2 (en) 1988-09-15 2006-05-23 Wisconsin Alumni Research Foundation Methods and compositions for the manipulation and characterization of individual nucleic acid molecules
US6509158B1 (en) 1988-09-15 2003-01-21 Wisconsin Alumni Research Foundation Image processing and analysis of individual nucleic acid molecules
US4947255A (en) 1988-09-19 1990-08-07 The Grass Valley Group, Inc. Video luminance self keyer
US6847391B1 (en) 1988-10-17 2005-01-25 Lord Samuel Anthony Kassatly Multi-point video conference system
US5012333A (en) 1989-01-05 1991-04-30 Eastman Kodak Company Interactive dynamic range adjustment system for printing digital images
US6610256B2 (en) 1989-04-05 2003-08-26 Wisconsin Alumni Research Foundation Image processing and analysis of individual nucleic acid molecules
US5126847A (en) 1989-09-28 1992-06-30 Sony Corporation Apparatus for producing a composite signal from real moving picture and still picture video signals
US5542008A (en) 1990-02-28 1996-07-30 Victor Company Of Japan, Ltd. Method of and apparatus for compressing image representing signals
US5555557A (en) 1990-04-23 1996-09-10 Xerox Corporation Bit-map image resolution converter with controlled compensation for write-white xerographic laser printing
US5339171A (en) 1990-04-24 1994-08-16 Ricoh Company, Ltd. Image processing apparatus especially suitable for producing smooth-edged output multi-level tone data having fewer levels than input multi-level tone data
JPH0483480A (en) 1990-07-26 1992-03-17 Nippon Hoso Kyokai <Nhk> Polarizing key type image synthesizer
US5194943A (en) 1990-11-06 1993-03-16 Hitachi, Ltd. Video camera having a γ-correction circuit for correcting level characteristics of a luminance signal included in a video signal
US5557340A (en) 1990-12-13 1996-09-17 Rank Cintel Limited Noise reduction in video signals
EP0502615B1 (en) 1991-03-07 1998-06-03 Matsushita Electric Industrial Co., Ltd. Video signal motion detection method and noise reducer using said method
US5245445A (en) 1991-03-22 1993-09-14 Ricoh Company, Ltd. Image processing apparatus
US5799111A (en) 1991-06-14 1998-08-25 D.V.P. Technologies, Ltd. Apparatus and methods for smoothing images
US5467404A (en) 1991-08-14 1995-11-14 Agfa-Gevaert Method and apparatus for contrast enhancement
US5566251A (en) 1991-09-18 1996-10-15 David Sarnoff Research Center, Inc Video merging employing pattern-key insertion
US5301016A (en) 1991-12-21 1994-04-05 U.S. Philips Corporation Method of and arrangement for deriving a control signal for inserting a background signal into parts of a foreground signal
US5341442A (en) 1992-01-21 1994-08-23 Supermac Technology, Inc. Method and apparatus for compression data by generating base image data from luminance and chrominance components and detail image data from luminance component
US5428398A (en) 1992-04-10 1995-06-27 Faroudja; Yves C. Method and apparatus for producing from a standard-bandwidth television signal a signal which when reproduced provides a high-definition-like video image relatively free of artifacts
US5796864A (en) 1992-05-12 1998-08-18 Apple Computer, Inc. Method and apparatus for real-time lossless compression and decompression of image data
US5404174A (en) 1992-06-29 1995-04-04 Victor Company Of Japan, Ltd. Scene change detector for detecting a scene change of a moving picture
US5384601A (en) 1992-08-25 1995-01-24 Matsushita Electric Industrial Co., Ltd. Color adjustment apparatus for automatically changing colors
JPH06133221A (en) 1992-10-14 1994-05-13 Sony Corp Image pickup device
US6191772B1 (en) 1992-11-02 2001-02-20 Cagent Technologies, Inc. Resolution enhancement for video display using multi-line interpolation
US5491514A (en) 1993-01-28 1996-02-13 Matsushita Electric Industrial Co., Ltd. Coding apparatus, decoding apparatus, coding-decoding apparatus for video signals, and optical disks conforming thereto
US5565921A (en) 1993-03-16 1996-10-15 Olympus Optical Co., Ltd. Motion-adaptive image signal processing system
US5510824A (en) 1993-07-26 1996-04-23 Texas Instruments, Inc. Spatial light modulator array
US5614937A (en) 1993-07-26 1997-03-25 Texas Instruments Incorporated Method for high resolution printing
US5627580A (en) 1993-07-26 1997-05-06 Texas Instruments Incorporated System and method for enhanced printing
US5539475A (en) 1993-09-10 1996-07-23 Sony Corporation Method of and apparatus for deriving a key signal from a digital video signal
US5491519A (en) 1993-12-16 1996-02-13 Daewoo Electronics Co., Ltd. Pre-processing filter apparatus for use in an image encoding system
US5790195A (en) 1993-12-28 1998-08-04 Canon Kabushiki Kaisha Image processing apparatus
US5586200A (en) 1994-01-07 1996-12-17 Panasonic Technologies, Inc. Segmentation based image compression system
US5613035A (en) 1994-01-18 1997-03-18 Daewoo Electronics Co., Ltd. Apparatus for adaptively encoding input digital audio signals from a plurality of channels
US5592226A (en) 1994-01-26 1997-01-07 Btg Usa Inc. Method and apparatus for video data compression using temporally adaptive motion interpolation
US5491517A (en) 1994-03-14 1996-02-13 Scitex America Corporation System for implanting an image into a video stream
US5488675A (en) 1994-03-31 1996-01-30 David Sarnoff Research Center, Inc. Stabilizing estimate of location of target region inferred from tracked multiple landmark regions of a video image
US5694492A (en) 1994-04-30 1997-12-02 Daewoo Electronics Co., Ltd Post-processing method and apparatus for removing a blocking effect in a decoded image signal
US5838835A (en) 1994-05-03 1998-11-17 U.S. Philips Corporation Better contrast noise by residue image
US5847766A (en) 1994-05-31 1998-12-08 Samsung Electronics Co, Ltd. Video encoding method and apparatus based on human visual sensitivity
US5828776A (en) 1994-09-20 1998-10-27 Neopath, Inc. Apparatus for identification and integration of multiple cell patterns
US5648801A (en) 1994-12-16 1997-07-15 International Business Machines Corporation Grayscale printing system
US5537510A (en) 1994-12-30 1996-07-16 Daewoo Electronics Co., Ltd. Adaptive digital audio encoding apparatus and a bit allocation method thereof
US5627937A (en) 1995-01-09 1997-05-06 Daewoo Electronics Co. Ltd. Apparatus for adaptively encoding input digital audio signals from a plurality of channels
US5844614A (en) 1995-01-09 1998-12-01 Matsushita Electric Industrial Co., Ltd. Video signal decoding apparatus
JPH08191440A (en) 1995-01-10 1996-07-23 Fukuda Denshi Co Ltd Method and device for correcting endoscope image
US5982926A (en) 1995-01-17 1999-11-09 At & T Ipm Corp. Real-time image enhancement techniques
EP0729117B1 (en) 1995-02-21 2001-11-21 Hitachi, Ltd. Method and apparatus for detecting a point of change in moving images
US5845012A (en) 1995-03-20 1998-12-01 Daewoo Electronics Co., Ltd. Apparatus for encoding an image signal having a still object
US5852475A (en) 1995-06-06 1998-12-22 Compression Labs, Inc. Transform artifact reduction process
US5717463A (en) 1995-07-24 1998-02-10 Motorola, Inc. Method and system for estimating motion within a video sequence
US5774593A (en) 1995-07-24 1998-06-30 University Of Washington Automatic scene decomposition and optimization of MPEG compressed video
US5653234A (en) 1995-09-29 1997-08-05 Siemens Medical Systems, Inc. Method and apparatus for adaptive spatial image filtering
US6463173B1 (en) 1995-10-30 2002-10-08 Hewlett-Packard Company System and method for histogram-based image contrast enhancement
US5850294A (en) 1995-12-18 1998-12-15 Lucent Technologies Inc. Method and apparatus for post-processing images
US5787203A (en) 1996-01-19 1998-07-28 Microsoft Corporation Method and system for filtering compressed video images
US6728317B1 (en) 1996-01-30 2004-04-27 Dolby Laboratories Licensing Corporation Moving image compression quality enhancement using displacement filters with negative lobes
US5901178A (en) 1996-02-26 1999-05-04 Solana Technology Development Corporation Post-compression hidden data transport for video
US5883983A (en) 1996-03-23 1999-03-16 Samsung Electronics Co., Ltd. Adaptive postprocessing system for reducing blocking effects and ringing noise in decompressed image signals
US5974159A (en) 1996-03-29 1999-10-26 Sarnoff Corporation Method and apparatus for assessing the visibility of differences between two image sequences
US5844607A (en) 1996-04-03 1998-12-01 International Business Machines Corporation Method and apparatus for scene change detection in digital video compression
US6259489B1 (en) 1996-04-12 2001-07-10 Snell & Wilcox Limited Video noise reducer
US5995656A (en) 1996-05-21 1999-11-30 Samsung Electronics Co., Ltd. Image enhancing method using lowpass filtering and histogram equalization and a device therefor
US5870501A (en) 1996-07-11 1999-02-09 Daewoo Electronics, Co., Ltd. Method and apparatus for encoding a contour image in a video signal
US6037986A (en) 1996-07-16 2000-03-14 Divicom Inc. Video preprocessing method and apparatus with selective filtering based on motion detection
US6094511A (en) 1996-07-31 2000-07-25 Canon Kabushiki Kaisha Image filtering method and apparatus with interpolation according to mapping function to produce final image
US5914748A (en) 1996-08-30 1999-06-22 Eastman Kodak Company Method and apparatus for generating a composite image using the difference of two images
US6282299B1 (en) 1996-08-30 2001-08-28 Regents Of The University Of Minnesota Method and apparatus for video watermarking using perceptual masks
US5847772A (en) 1996-09-11 1998-12-08 Wells; Aaron Adaptive filter for video processing applications
US6628327B1 (en) 1997-01-08 2003-09-30 Ricoh Co., Ltd Method and a system for improving resolution in color image data generated by a color image sensor
US6005626A (en) 1997-01-09 1999-12-21 Sun Microsystems, Inc. Digital video signal encoder and encoding method
US6320676B1 (en) 1997-02-04 2001-11-20 Fuji Photo Film Co., Ltd. Method of predicting and processing image fine structures
US6522425B2 (en) 1997-02-04 2003-02-18 Fuji Photo Film Co., Ltd. Method of predicting and processing image fine structures
US5881174A (en) 1997-02-18 1999-03-09 Daewoo Electronics Co., Ltd. Method and apparatus for adaptively coding a contour of an object
US6055340A (en) 1997-02-28 2000-04-25 Fuji Photo Film Co., Ltd. Method and apparatus for processing digital images to suppress their noise and enhancing their sharpness
US6014172A (en) 1997-03-21 2000-01-11 Trw Inc. Optimized video compression from a single process step
US6229925B1 (en) 1997-05-27 2001-05-08 Thomas Broadcast Systems Pre-processing device for MPEG 2 coding
US6385647B1 (en) 1997-08-18 2002-05-07 Mci Communications Corporations System for selectively routing data via either a network that supports Internet protocol or via satellite transmission network based on size of the data
US6466912B1 (en) 1997-09-25 2002-10-15 At&T Corp. Perceptual coding of audio signals employing envelope uncertainty
US6097848A (en) 1997-11-03 2000-08-01 Welch Allyn, Inc. Noise reduction apparatus for electronic edge enhancement
US6100625A (en) 1997-11-10 2000-08-08 Nec Corporation Piezoelectric ceramic transducer and method of forming the same
US6130723A (en) 1998-01-15 2000-10-10 Innovision Corporation Method and system for improving image quality on an interlaced video display
US6554181B1 (en) 1998-02-09 2003-04-29 Sig Combibloc Gmbh & Co. Kg Reclosable pouring element and a flat gable composite packaging provided therewith
US5991464A (en) 1998-04-03 1999-11-23 Odyssey Technologies Method and system for adaptive video image resolution enhancement
US20030152283A1 (en) 1998-08-05 2003-08-14 Kagumi Moriwaki Image correction device, image correction method and computer program product in memory for image correction
US6643398B2 (en) 1998-08-05 2003-11-04 Minolta Co., Ltd. Image correction device, image correction method and computer program product in memory for image correction
US6236751B1 (en) 1998-09-23 2001-05-22 Xerox Corporation Automatic method for determining piecewise linear transformation from an image histogram
WO2000019726A1 (en) 1998-09-29 2000-04-06 General Instrument Corporation Method and apparatus for detecting scene changes and adjusting picture coding type in a high definition television encoder
US6559826B1 (en) 1998-11-06 2003-05-06 Silicon Graphics, Inc. Method for modeling and updating a colorimetric reference profile for a flat panel display
US6707487B1 (en) 1998-11-20 2004-03-16 In The Play, Inc. Method for representing real-time motion
US6567116B1 (en) 1998-11-20 2003-05-20 James A. Aman Multiple object tracking system
US6366705B1 (en) 1999-01-28 2002-04-02 Lucent Technologies Inc. Perceptual preprocessing techniques to reduce complexity of video coders
US6404460B1 (en) 1999-02-19 2002-06-11 Omnivision Technologies, Inc. Edge enhancement with background noise reduction in video image processing
US6580825B2 (en) 1999-05-13 2003-06-17 Hewlett-Packard Company Contrast enhancement of an image using luminance and RGB statistical metrics
US20050013485A1 (en) 1999-06-25 2005-01-20 Minolta Co., Ltd. Image processor
US6757449B1 (en) 1999-11-17 2004-06-29 Xerox Corporation Methods and systems for processing anti-aliased images
US20010003545A1 (en) 1999-12-08 2001-06-14 Lg Electronics Inc. Method for eliminating blocking effect in compressed video signal
US6473532B1 (en) 2000-01-23 2002-10-29 Vls Com Ltd. Method and apparatus for visual lossless image syntactic encoding
US6940545B1 (en) 2000-02-28 2005-09-06 Eastman Kodak Company Face detecting camera and method
US20020015508A1 (en) 2000-06-19 2002-02-07 Digimarc Corporation Perceptual modeling of media signals based on local contrast and directional edges
US6782287B2 (en) 2000-06-27 2004-08-24 The Board Of Trustees Of The Leland Stanford Junior University Method and apparatus for tracking a medical instrument based on image registration
US6753929B1 (en) 2000-06-28 2004-06-22 Vls Com Ltd. Method and system for real time motion picture segmentation and superposition
US7742108B2 (en) 2000-06-28 2010-06-22 Sheraizin Semion M Method and system for real time motion picture segmentation and superposition
US7139425B2 (en) 2000-08-28 2006-11-21 Fuji Photo Film Co., Ltd. Method and apparatus for correcting white balance, method for correcting density and a recording medium on which a program for carrying out the methods is recorded
US6873442B1 (en) 2000-11-07 2005-03-29 Eastman Kodak Company Method and system for generating a low resolution image from a sparsely sampled extended dynamic range image sensing device
US20020122494A1 (en) 2000-12-27 2002-09-05 Sheraizin Vitaly S. Method and apparatus for visual perception encoding
US20020104854A1 (en) 2001-01-24 2002-08-08 Jensen Bjorn Slot Dosing spout for mounting on a container
US7087021B2 (en) 2001-02-20 2006-08-08 Giovanni Paternostro Methods of screening for genes and agents affecting cardiac function
US6970506B2 (en) 2001-03-05 2005-11-29 Intervideo, Inc. Systems and methods for reducing frame rates in a video data stream
US7221706B2 (en) 2001-03-05 2007-05-22 Intervideo, Inc. Systems and methods for performing bit rate allocation for a video data stream
US7164717B2 (en) 2001-03-05 2007-01-16 Intervideo, Inc. Systems and methods for detecting scene changes in a video data stream
US7133451B2 (en) 2001-03-05 2006-11-07 Intervideo, Inc. Systems and methods for refreshing macroblocks
US6940903B2 (en) 2001-03-05 2005-09-06 Intervideo, Inc. Systems and methods for performing bit rate allocation for a video data stream
US20020181598A1 (en) 2001-04-16 2002-12-05 Mitsubishi Electric Research Laboratories, Inc. Estimating total average distortion in a video with variable frameskip
US7075993B2 (en) 2001-06-12 2006-07-11 Digital Interactive Streams, Inc. Correction system and method for enhancing digital video
US7003174B2 (en) 2001-07-02 2006-02-21 Corel Corporation Removal of block encoding artifacts
US6845181B2 (en) 2001-07-12 2005-01-18 Eastman Kodak Company Method for processing a digital image to adjust brightness
US7110601B2 (en) 2001-10-25 2006-09-19 Japan Aerospace Exploration Agency Method for detecting linear image in planar picture
US20030122969A1 (en) 2001-11-08 2003-07-03 Olympus Optical Co., Ltd. Noise reduction system, noise reduction method, recording medium, and electronic camera
US20030107681A1 (en) 2001-12-12 2003-06-12 Masayuki Otawara Contrast correcting circuit
US7221805B1 (en) 2001-12-21 2007-05-22 Cognex Technology And Investment Corporation Method for generating a focused image of an object
US20040091145A1 (en) 2002-04-11 2004-05-13 Atsushi Kohashi Image signal processing system and camera having the image signal processing system
US7184071B2 (en) 2002-08-23 2007-02-27 University Of Maryland Method of three-dimensional object reconstruction from a video sequence using a generic model
US6835693B2 (en) 2002-11-12 2004-12-28 Eastman Kodak Company Composite positioning imaging element
US20040184673A1 (en) 2003-03-17 2004-09-23 Oki Data Corporation Image processing method and image processing apparatus
US20040190789A1 (en) 2003-03-26 2004-09-30 Microsoft Corporation Automatic analysis and adjustment of digital images with exposure problems
US20050259185A1 (en) 2004-05-21 2005-11-24 Moon-Cheol Kim Gamma correction apparatus and method capable of preventing noise boost-up
US20060013503A1 (en) 2004-07-16 2006-01-19 Samsung Electronics Co., Ltd. Methods of preventing noise boost in image contrast enhancement
US7639892B2 (en) 2004-07-26 2009-12-29 Sheraizin Semion M Adaptive image improvement
US7526142B2 (en) 2005-02-22 2009-04-28 Sheraizin Vitaly S Enhancement of decompressed video
US20090161754A1 (en) 2005-02-22 2009-06-25 Somle Development, L.L.C. Enhancement of decompressed video

Non-Patent Citations (64)

* Cited by examiner, † Cited by third party
Title
"Non Final Office Action", U.S. Appl. No. 10/851,190, (Sep. 1, 2009), 8 pages.
"Non Final Office Action", U.S. Appl. No. 10/898,557, (Jul. 8, 2009), 6 pages.
"Non Final Office Action", U.S. Appl. No. 10/898,557, (Jun. 8, 2010), 5 pages.
"Non Final Office Action", U.S. Appl. No. 12/316,168, (Jun. 24, 2009), 11 pages.
"Notice of Allowability", U.S. Appl. No. 12/316,168, (Jun. 29, 2010), 7 pages.
"Notice of Allowance", U.S. Appl. No. 10/851,190, (Feb. 8, 2010), 4 pages.
"Notice of Allowance", U.S. Appl. No. 10/898,557, (Jan. 27, 2010), 6 pages.
"Notice of Allowance", U.S. Appl. No. 11/027,674, (Feb. 24, 2009), 16 pages.
"Notice of Allowance", U.S. Appl. No. 12/316,168, (Jan. 29, 2010), 8 pages.
"Notice of Allowance", U.S. Appl. No. 12/316,168, (Jun. 1, 2010), 9 pages.
"Notice of Allowance/Base Issue Fee", U.S. Appl. No. 11/027,674, (Jul. 23, 2009), 6 pages.
"Restriction Requirement", U.S. Appl. No. 10/851,190, (May 19, 2009),8 pages.
Al-Asmari, Awadh K., "An Adaptive Hybrid Coding Scheme for HDTV and Digital Video Sequences", IEEE Transactions on Consumer Electronics, vol. 42, No. 3, (Aug. 1995), 926-936.
Banham, et al., "Digital Image Restoration", IEEE Signal Processing Magazine, (Mar. 1997), 24-41.
Belkacem-Boussaid, "A New Image Smoothing Method Based on a Simple Model of Spatial Processing in the Early Stages of Human Vision", IEEE Trans. on Image Proc., vol. 9, No. 2, (Feb. 2000), 220-226.
Brice, Richard "Multimedia and Virtual Reality Engineering", (1997),1-8, 174-175, 280-283.
Chan, "A practical postprocessing technique for real-time block-based coding sytem", IEEE trans on circuits and systems for video technology, vol. 8, No. 1(Feb. 1998),4-8.
Chan, et al., "The Digital TV Filter and Nonlinear Denoising", IEEE Trans on Image Proc., vol. 10, No. 2,(Feb. 2001),231-241.
Chang, Dah-Chung "Image contrast enhancement based on local standard deviation", IEEE trans on medical imaging, vol. 17, No. 4,(Aug. 1998), 518-531.
Cheol, Hong M., et al., "A new adaptive quantization method to reduce blocking effect", IEEE transaction on consumer electronics, vol. 44, No. 3,(Aug. 1998),768-772.
Choung, et al., "A fast adaptive image restoration filter for reducing block artifact in compressed images", IEEE trans on consumer electronics, vol. 44, No. 1,(Nov. 1997),1340-1346.
Conway, Lynn et al., "Video mirroring and Iconic Gestures: Enhancing Basic Videophones to Provide Visual Coaching and Visual Control", IEEE Transactions on Consumer Electronics, vol. 44, No. 2,(May 1998),388-397.
Fan, Kuo-Chin et al., "An Active Scene Analysis-Based approach for Pseudo constant Bit-Rate Video Coding", IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, No. 2,(Apr. 1998),159-170.
Feng, Jian et al., "Motion Adaptive Classified Vector Quantization for ATM Video Coding", IEEE Transactions on Consumer Electronics,, vol. 41, No. 2,(May 1995),322-326.
Goel, James et al., "Pre-processing for MPEG Compression Using Adaptive Spatial Filtering", IEEE Transactions On Consumer electronics, vol. 41, No. 3,(Aug. 1995),687-698.
Hentschel, et al., "Effective Peaking Filter and its Implementation on a Programmable Architecture", IEEE Trans. on Consumer Electronics, vol. 47, No. 1, (Feb. 2001), 33-39.
Hier, et al., "Real time locally adaptive contrast enhancement; A practical key to overcoming display and human visual system limitation", SID digest, (1993),491-493.
Immerkaer, "Use of Blur-Space of Deblurring and Edge- Preserving Noise Smoothing", IEEE Trans On Image Proc., vol. 10, No. 6,(Jun. 2001),837-840.
Jeon, B et al., "Blocking artifacts reduction in image compression with block boundary discontinuity criterion", IEEE trans on circuits and systems for video technology, vol. 8, No. 3,(Jun. 1998),34557.
Jostschulte, et al., "Perception Adaptive Temporal TV-noise Reduction Using Contour Preserving Prefilter Techniques", IEEE Trans. on Consumer Electronics, vol. 44, No. 3, (Aug. 1998), 1091-1096.
Kim, et al., "An advanced contrast enhancement using partially overlapped sub-block histogram equalization", IEEE Trans on circuits and systems for video technology, vol. 11, No. 4,(Apr. 2001),475-484.
Kim, et al., "Impact of HVS Models on Model-based Halftoning", IEEE Trans. on Image Proc., vol. 11, No. 3,(Mar. 2002),258-269.
Kim, Tae K., et al., "Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering", IEEE trans on consumer electronics, vol. 44, No. 1, (Feb. 1998), 82-87.
Kim, Yeong T., "Contrast enhancement using brightness preserving bi-histogram equalization", IEEE trans on consumer electronics, vol. 43, No. 1,(Feb. 1997),1-8.
Kuo, et al., "Adaptive postprocessor for block endoded images", IEEE trans on circuits and systems for video technology, vol. 5, No. 4,(Aug. 1995),298-304.
Kwok, Tung Lo "Predictive Mean Search Algorithms for Fast VQ Encoding of Images", IEEE Transactions On Consumer Electronics, vol. 41, No. 2,(May 1995),327-331.
Lan, Austin Y., et al., "Scene-Context Dependent Reference-Frame Placement for MPEG Video Coding", IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, No. 3,(Apr. 1999),478-489.
Lee, et al., "Efficient algorithm and architecture for post processor in HDTV", IEEE trans on consumer electronics, vol. 44, No. 1,(Feb. 1998), 16-26.
Leonard, Eugene "Considerations regarding the use of digital data to generate video backgrounds", SMPTE journal, vol. 87,,(Aug. 1987),499-504.
Liang, Shen "A Segmentation-Based Lossless Image Coding Method for High-Resolution Medical Image Compression", IEEE Transactions on Medical Imaging, vol. 16, No. 3,(Jun. 1997),301-316.
Lim, Jae "Two dimensional signal and image processing", USA Simon & Schuster, (1990),430.
Liu, et al., "A new postprocessing technique for the block based DCT coding based on the convex- projection theory", IEEE trans on consumer electronics, vol. 4, No. 3,(Aug. 1998), 1054-1061.
Liu, et al., "Complexity- Regularized Image Denoising", IEEE Trans on Image Processing, vol. 10, No. 6,(Jun. 2001),341-351.
Mancuso, et al., "A new post-processing algorithm to reduce artifacts in block coded images", IEEE trans on consumer electronics, vol. 43, No. 3,(Aug. 1997),303-307.
Massimo, Mancuso et al., "Advanced pre/ post processing for DCT coded images", IEEE transactions on consumer electronics, vol. 44, No. 3,(Aug. 1998),1039-1041.
Meier, et al., "Reduction of blocking artifacts in image and video coding", IEEE trans on cicuits and systems for video technology, (Apr. 1999),490-500.
Min, et al., "A new adaptive quantization method to reduce blocking effect", IEEE Trans on consumer electronics, vol. 44, No. 3,(Aug. 1998),768-772.
Munteanu, Adrian et al., "Wavelet-Based Lossless Compression of Coronary Angiographic Images", IEEE Transactions on Medical Imaging, vol. 18, No. 3,(Mar. 1999),272-281.
Okumura, Akira et al., "Signal Analysis and Compression performance Evaluation of Pathological Microscopic Images", IEEE Transactions on Medical Imaging, vol. 16, No. 6,(Dec. 1997),701-710.
Olukayode, A et al., "An algorithm for integrated noise reduction and sharpness enhancement", IEEE Transactions on Consumer Electronics, vol. 46, No. 3,(Aug. 2000),474-480.
Pappas, et al., "Digital Color Restoration of Old Paintings", IEEEE Trans. on Image Proc., vol. 9, No. 2,(Feb. 2000),291-294.
Polesel, Andrea et al., "Image Enhancement Via Adaptive Unsharp Masking", IEEE transactions on image processing, vol. 9, No. 3,(Mar. 2000),505-510.
Russ, John C., "The image processing handbook", CRS press Inc., (1995),674.
Sakaue, Shigeo et al., "Adaptive gamma processing of the video camera for expansion of the dynamic range", IEEE trans on consumer electronics, vol. 41, No. 3,(Aug. 1995),555-582.
Sherazain, et al., "U.S. Appl. No. 09/524,618", filed Mar. 14, 2000.
Stark, Alex J., "Adaptive image contrast enhancement Enhancement using generalizations of histogram equalization", IEEE trans on image processing, vol. 9, No. 5,(May 2000),889-896.
Sung-Hoon, Hong, "Joint video coding of MPEG-2 video programs for digital broadcasting services", IEEE transactions on broadcasting, vol. 44, No. 2,(Jun. 1998),153-164.
Takashi, Ida et al., "Image Segmentation and Contour Detection Using Fractal Coding", IEEE Transitions on Circuits and Systems for Video Technology, vol. 8, No. 8,(Dec. 1998),968-975.
Talluri, Raj et al., "A Robust, Scalable, Object-Based Video Compression Technique for Very Low Bit-Rate Coding", IEEE Transaction of Circuit and Systems for Video Technology, (Feb. 1997),vol. 7, No. 1.
Tao, Chen "Adaptive postfiltering of transform coefficients for the reduction of blocking artifacts", IEEE transactions on circuits and systems for video technology, vol. 11, No. 5,(May 2001),594-602.
Tescher, Andrew "Multimedia is the message", IEEE signal processing magazine, vol. 16, No. 1,(Jan. 1999),44-54.
Yang, et al., "Maximum-Likelihood Parameter Estimation for Image Ringing-Artifact Removal", IEEE Trans. on Cicuits and Systems for Video Technology, vol. 11, No. 8,(Aug. 2001),963-973.
Yang, J et al., "Noise estimation for blocking artifacts reduction in DCT coded images", IEEE trans on circuits and systems for video tech nology, vol. 10, No. 7,(Oct. 2000),1116-1120.
Zhong, et al., "Derivation of prediction equation for blocking effect reduction", IEEE trans on circuits and systems for video technology, vol. 9, No. 3,(Apr. 1999),415-418.

Cited By (4)

Publication number Priority date Publication date Assignee Title
US20100225817A1 (en) * 2000-06-28 2010-09-09 Sheraizin Semion M Real Time Motion Picture Segmentation and Superposition
US8098332B2 (en) 2000-06-28 2012-01-17 Somle Development, L.L.C. Real time motion picture segmentation and superposition
US20090316019A1 (en) * 2008-06-20 2009-12-24 Altek Corporation Tone adjustment method for digital image and electronic apparatus using the same
US8189073B2 (en) * 2008-06-20 2012-05-29 Altek Corporation Tone adjustment method for digital image and electronic apparatus using the same

Also Published As

Publication number Publication date
US6952500B2 (en) 2005-10-04
WO2001054392A3 (en) 2002-01-24
WO2001054392A2 (en) 2001-07-26
US7095903B2 (en) 2006-08-22
US20030067982A1 (en) 2003-04-10
EP1260094A4 (en) 2004-04-07
US20050123208A1 (en) 2005-06-09
AU2001228771A1 (en) 2001-07-31
EP1260094A2 (en) 2002-11-27
IL134182A0 (en) 2001-04-30
US6473532B1 (en) 2002-10-29
IL134182A (en) 2006-08-01

Similar Documents

Publication Publication Date Title
USRE42148E1 (en) Method and apparatus for visual lossless image syntactic encoding
US7203234B1 (en) Method of directional filtering for post-processing compressed video
US8139883B2 (en) System and method for image and video encoding artifacts reduction and quality improvement
US6807317B2 (en) Method and decoder system for reducing quantization effects of a decoded image
US7957467B2 (en) Content-adaptive block artifact removal in spatial domain
Dubois et al. Noise reduction in image sequences using motion-compensated temporal filtering
US7366242B2 (en) Median filter combinations for video noise reduction
US5150432A (en) Apparatus for encoding/decoding video signals to improve quality of a specific region
US6983078B2 (en) System and method for improving image quality in processed images
US20070025447A1 (en) Noise filter for video compression
US20030123747A1 (en) System for and method of sharpness enhancement using coding information and local spatial features
KR19980086812A (en) Method and apparatus for filtering video image data
EP1506525B1 (en) System for and method of sharpness enhancement for coded digital video
US7574060B2 (en) Deblocker for postprocess deblocking
CN107707915B (en) 2017-08-15 Control method for sample adaptive offset filtering and image processing system thereof
KR101098300B1 (en) Spatial signal conversion
CN111770334A (en) Data encoding method and device, and data decoding method and device
Yeh et al. Deblocking filter by color psychology analysis for H. 264/AVC video coders
JPH08331587A (en) Luminance and chrominance signal separator circuit for television signal
JPH0775109A (en) Image information compressing method

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: RATEZE REMOTE MGMT. L.L.C., DELAWARE

Free format text: MERGER;ASSIGNOR:SOMLE DEVELOPMENT, L.L.C.;REEL/FRAME:037181/0942

Effective date: 20150826

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553)

Year of fee payment: 12

AS Assignment

Owner name: SOMLE DEVELOPMENT, L.L.C., DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VLS COM LTD.;REEL/FRAME:050858/0237

Effective date: 20080514

AS Assignment

Owner name: INTELLECTUAL VENTURES ASSETS 145 LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RATEZE REMOTE MGMT. L.L.C.;REEL/FRAME:050963/0865

Effective date: 20191031

AS Assignment

Owner name: DIGIMEDIA TECH, LLC, GEORGIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTELLECTUAL VENTURES ASSETS 145 LLC;REEL/FRAME:051408/0628

Effective date: 20191115