US20050069221A1 - Method and system for noise reduction in an image
- Publication number: US20050069221A1 (application US 10/673,612)
- Authority: US (United States)
- Prior art keywords: layer, edge, image, blending, video
- Legal status: Granted
Classifications
- H04N5/21: Circuitry for suppressing or minimising disturbance, e.g. moiré or halo (H04N5/14, picture signal circuitry for the video frequency region)
- G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction (G06T5/00, image enhancement or restoration)
- G06T7/12: Edge-based segmentation (G06T7/10, segmentation; edge detection)
- G06T2207/10016: Video; image sequence (G06T2207/10, image acquisition modality)
- G06T2207/20221: Image fusion; image merging (G06T2207/20, special algorithmic details)
- H04N5/142: Edging; contouring (H04N5/14, picture signal circuitry for the video frequency region)
Description
- Video images, and especially analog video signals, can be corrupted by various types of temporal and spatial noise during acquisition, transmission, and recording of the image. Typical types of noise include thermal noise, single-frequency modulation distortion noise, and impulse noise. Noise reduction techniques that apply linear or non-linear filters to video signals can reduce the amount of noise. One such technique is to apply a low-pass filter to the video signal. However, simple low-pass filtering tends to produce over-smoothed video that appears blurry. Other filters, such as Wiener and Kalman filters, are better at removing one or more of spatial noise or temporal noise, but can be expensive in terms of implementation and device costs.
- Therefore, a method of noise reduction that overcomes these problems would be useful.
- The present disclosure relates to data processing, and more specifically to image and video processing.
- The present disclosure may be better understood, and its features and advantages made apparent to those skilled in the art by referencing the accompanying drawings.
- FIG. 1 represents, in flow diagram form, a method in accordance with the present disclosure;
- FIG. 2 represents, in block diagram form, a system in accordance with a specific embodiment of the present disclosure;
- FIG. 3 represents Boolean edge data in accordance with a specific embodiment of the present disclosure;
- FIG. 4 represents weighted edge data in accordance with a specific embodiment of the present disclosure;
- FIG. 5 represents, in flow diagram form, a method of blending in accordance with a specific embodiment of the present disclosure;
- FIGS. 6 and 7 represent, in flow diagram form, methods in accordance with specific embodiments of the present disclosure;
- FIG. 8 represents, in block diagram form, a system in accordance with the present disclosure; and
- FIG. 9 represents intermediate data in accordance with a specific embodiment of the present disclosure.
- The use of the same reference symbols in different drawings indicates similar or identical items.
- In a specific embodiment of the present disclosure, a source image is smoothed to create a smoothed image, and an edge detector is used to create an edge layer. A blending controller is used to control a blending between the source image and the smoothed image. The blended destination image maintains detail while eliminating unwanted noise. Specific implementations of the present disclosure are better understood with reference to FIGS. 1-8.
- FIG. 1 illustrates, in flow diagram form, a method in accordance with the present disclosure. The method of FIG. 1 is discussed with reference to the system of FIG. 2, which illustrates in block diagram form data flow through a system 100 in accordance with the present disclosure.
- System 100 comprises noise filter 140, edge detector 150, and blending controller 160. In addition, system 100 includes memory (not specifically shown) for storing image information including source image 112, smoothed layer 120, edge layer 130, and destination layer 135. The layers are accessible by the noise filter 140, edge detector 150, and blending controller 160.
- A first image layer is received at step 11. Referring to FIG. 2, source layer 117 is an example of such a first image layer. The source layer 117 is one of three component layers, along with layers 115 and 116, which make up the source image 112. Examples of types of component layers 115-117 include RGB component layers, YUV component layers, and component layers of any other color spaces. The system 100 receives source image 112 by receipt of a video stream or by accessing a memory location. The source providing the source image can be a head-end device, a TV receiver, a Video Cassette Recorder, a DVD (Digital Versatile Disk) player, or other video sources.
- Upon receipt of the source image 112, each image layer 115-117 is processed independently. For purposes of discussion, data flow with respect to FIGS. 1 and 2 will be discussed with respect to one of the source layers, source layer 117. In addition, it will be appreciated that the layer information may be stored and processed as frames or partial frames, such as line buffers, depending upon a system's specific implementation. For ease of discussion, it will be assumed the image information is stored as entire frames.
- At step 12, a first edge layer is determined based on the first image layer. With reference to FIG. 2, source layer 117 is processed by the edge detector 150 to determine the edge layer 130.
- The edge layer 130 comprises a plurality of memory locations corresponding to the plurality of pixels of an image. The edge layer 130 contains a pixel edge indicator for each pixel that indicates whether a pixel is associated with an edge of an image. In one embodiment, a pixel edge indicator is a Boolean representation indicating the presence or absence of an edge. For example, a positive Boolean pixel edge indicator would indicate that a specific pixel is part of an image edge.
- The edge detector 150 can detect edges by determining a gradient for each pixel location. For example, the horizontal and vertical gradients of a pixel can be calculated using the equations

    Grad_x = p(i+1, j) - p(i-1, j); and
    Grad_y = p(i, j+1) - p(i, j-1).

- A rectangular-to-polar conversion, such as M(i,j) = sqrt(Grad_x^2 + Grad_y^2), can then be performed to get a magnitude of the edge. Alternatively, the square root operation can be removed and the squared magnitude compared to a predefined edge-level that controls what is considered an edge, and ultimately controls how much detail will be preserved in a final destination layer. For example, if the magnitude is larger than the predefined value of the edge-level, then the corresponding pixel is said to be an edge pixel. In other embodiments, the edge detector 150 can determine the presence of an edge solely based on a pixel's horizontal or vertical edge component.
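The gradient-based detector described above can be illustrated with a short Python sketch (the function name and border handling are our assumptions, not the patent's implementation). It computes Boolean pixel edge indicators by comparing the squared gradient magnitude against the squared edge-level, avoiding the square root as the text suggests:

```python
def boolean_edge_layer(img, edge_level):
    """Hypothetical sketch of the gradient-based edge detector.

    img is a 2-D list of pixel intensities indexed as img[j][i]
    (row j, column i); edge_level is the predefined threshold.
    Comparing the squared magnitude against edge_level squared
    avoids computing a square root per pixel.
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for j in range(1, h - 1):
        for i in range(1, w - 1):
            grad_x = img[j][i + 1] - img[j][i - 1]   # horizontal gradient
            grad_y = img[j + 1][i] - img[j - 1][i]   # vertical gradient
            if grad_x * grad_x + grad_y * grad_y > edge_level * edge_level:
                edges[j][i] = 1                      # Boolean pixel edge indicator
    return edges
```

Border pixels are left unmarked here for simplicity; the patent does not specify its border handling.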
- FIG. 3 represents a portion 212 of a Boolean edge layer that contains Boolean pixel edge indicators. The 10 by 10 matrix of FIG. 3 represents the top left-most pixels of an image layer. For purposes of illustration, a "1" value indicates that the pixel at that location has been identified as an edge pixel, while a "0" value indicates that the pixel at that location has been identified as not being an edge pixel. In an alternate embodiment, the edge layer 130 can contain weighted edge indicators, as illustrated in FIG. 4.
- FIG. 4 illustrates a portion of a weighted edge layer 214 that contains weighted edge indicators. The specific implementation of FIG. 4 assigns a weighted value to a pixel by determining the number of Boolean pixel edge indicators within +/-1 pixel of the pixel and storing this value in the lower four bits of a data byte, and storing in the upper four bits of the data byte the number of Boolean pixel edge indicators that are at +/-2 pixels from the pixel. The number of Boolean pixel edge indicators in layer 212 that are at +/-2 pixels from the pixel can be calculated by determining the number of Boolean pixel edge indicators that are within +/-2 pixels of the pixel and subtracting the number of Boolean pixel edge indicators that are within +/-1 pixel of the pixel.
- For example, the number of Boolean edge pixels in layer 212 within +/-1 pixel of pixel P(4,4) is indicated by the value 5. This value is stored in the lower four bits of the pixel P(4,4) in the weighted edge layer 214. The number of edge pixels within the box 203 of Boolean edge layer 212 is 12. The number of these pixels that are two pixels away from the pixel location P(4,4) is determined by subtracting the number of edge pixels within the box 201 from the number of edge pixels within the box 203, which results in a value of 7. This value is stored in the upper four bits of pixel P(4,4) in layer 214. Therefore the weighted pixel value of pixel P(4,4) is "75". It will be appreciated that many other schemes for determining and/or storing weighted edge values are possible.
- Returning to FIG. 1, at step 13 the first image layer is blended with a first other layer based upon the first edge layer. In one embodiment, the first other layer is the smoothed layer 120.
- The blending controller 160 of FIG. 2 is used to implement the blending, and uses information of edge layer 130 to blend the source layer 117 with the smoothed layer 120 to preserve edges and fine structures in the source image layer 117. When the edge layer contains only Boolean edge information, one of two blending ratios can be implemented by the blending controller 160 at each pixel location. Typically, however, the ability to blend a pixel based on one of only two blending ratios will not provide enough blending options to produce an enhanced destination image.
- To provide additional levels of blending, a weighted edge layer such as is illustrated in FIG. 4 can be used by the blending controller 160 to select one of more than two blending ratios with respect to the blending of a specific pixel. FIG. 5 discloses a specific blending method for use with weighted edge values.
- At step 302, a determination is made whether the pixel edge value labeled WEIGHT1 is greater than a variable T1. WEIGHT1 represents the number of edge pixels within +/-1 pixel of the pixel being scaled. With respect to FIG. 4, this would be the value stored in the lower four bits of a pixel location. If WEIGHT1 is greater than threshold T1, flow proceeds to step 322, where the source image pixel is copied directly to the destination layer, such as destination layer 135. Otherwise, flow proceeds to step 303.
- At step 303, a determination is made whether the pixel edge value WEIGHT1 is greater than T2. If so, flow proceeds to step 323, where the source image pixel is blended with the smoothed image at a ratio of 3:1. Otherwise, flow proceeds to step 304.
- At step 304, a determination is made whether the pixel edge value WEIGHT1 is greater than T3. If so, flow proceeds to step 324, where the source image pixel is blended with the smoothed image at a ratio of 1:1. Otherwise, flow proceeds to step 305.
- At step 305, a determination is made whether the pixel edge value labeled WEIGHT2 is greater than a variable T4. WEIGHT2 represents the number of edge pixels at +/-2 pixels of the pixel being scaled. With respect to FIG. 4, this would be the value stored in the upper four bits of a pixel location. If WEIGHT2 is greater than the value of T4, flow proceeds to step 325, where the source image pixel is blended with the smoothed image at a ratio of 1:3. Otherwise, flow proceeds to step 326.
- At step 326, the destination pixel is set equal to the smoothed pixel.
- At step 306, a determination is made whether there is another pixel. If so, flow proceeds to step 302 to process the next pixel; otherwise the flow ends.
- The variables T1, T2, T3, and T4 are predetermined, and as such can be preset or user-defined variables. In another embodiment, the variables T1 through T4 can be statistically determined based upon the source image. In a specific embodiment, the variables T1 to T4 are set to 7, 3, 1, and 3 respectively.
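The FIG. 4 weighting scheme and the FIG. 5 blending decisions can be sketched together in Python. The function names, the nibble-packing helper, and the use of the text's default thresholds T1..T4 = 7, 3, 1, 3 are our assumptions; this is an illustration, not the patented implementation:

```python
def weighted_edge_value(edges, x, y):
    """Pack one weighted edge byte for pixel (x, y) of a Boolean edge layer:
    lower four bits: count of edge indicators within +/-1 pixel (3x3 box);
    upper four bits: count at exactly +/-2 pixels (5x5 box minus 3x3 box)."""
    def count(radius):
        total = 0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy < len(edges) and 0 <= xx < len(edges[0]):
                    total += edges[yy][xx]
        return total
    inner = count(1)            # edge indicators within +/-1 pixel
    ring = count(2) - inner     # edge indicators at +/-2 pixels
    return (ring << 4) | inner

def blend_pixel(src, smooth, weighted, t1=7, t2=3, t3=1, t4=3):
    """FIG. 5 decision chain: pick a blending ratio from the weighted byte."""
    weight1 = weighted & 0x0F   # edge pixels within +/-1 pixel
    weight2 = weighted >> 4     # edge pixels at +/-2 pixels
    if weight1 > t1:
        return src                      # step 322: copy the source pixel
    if weight1 > t2:
        return (3 * src + smooth) / 4   # step 323: 3:1 source:smoothed
    if weight1 > t3:
        return (src + smooth) / 2       # step 324: 1:1
    if weight2 > t4:
        return (src + 3 * smooth) / 4   # step 325: 1:3
    return smooth                       # step 326: smoothed pixel only
```

The stronger the local edge evidence, the more heavily the source pixel is weighted, which is what preserves edges and fine structures in the destination layer.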
- FIG. 6 illustrates another method in accordance with an embodiment of the present disclosure.
- At step 21, a first image layer of an image is received, in a manner similar to step 11 of FIG. 1.
- At step 22, the first other layer is determined. Typically, the first other layer is determined by noise filter 140, which filters the source layer to provide a smoothed image. Noise filter 140 can be any type of noise filter, but will typically be either a low-pass filter or a median filter, depending upon the cost-performance considerations of system 100. In one embodiment, a low-pass filter consisting of a five-tap horizontal filter and a five-tap vertical filter is used. Different coefficients can be used depending upon a desired noise level. In one embodiment, three noise levels implemented by the noise filters have cut-off frequencies of 0.7 fs, 0.5 fs, and 0.3 fs. An intermediate smoothing layer can be formed by applying the low-pass filter in the horizontal direction and storing the results in memory, with a final smoothed layer, including filtering in the vertical direction, being formed prior to blending. Alternatively, a two-dimensional median filter can be used, supporting three sizes: 1×1, 3×3, and 5×5.
- At step 23, a first edge layer is determined in a manner similar to that discussed with respect to step 12 of FIG. 1.
- At step 24, the first image layer and the first other layer are blended in a manner similar to that discussed with respect to step 13 of FIG. 1.
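The separable five-tap smoothing of step 22 can be sketched as follows. The clamped-border policy and the tap values in the usage note are our assumptions; the patent does not give its coefficients. A horizontal pass produces the intermediate smoothing layer, which a vertical pass then filters into the final smoothed layer:

```python
def separable_lowpass(img, taps):
    """Sketch of the separable smoothing filter: a horizontal pass produces
    an intermediate layer that is then filtered vertically. Borders are
    handled by clamping (edge replication), an assumption on our part."""
    h, w = len(img), len(img[0])
    r = len(taps) // 2

    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    # Horizontal pass: store the intermediate smoothing layer in memory.
    inter = [[sum(taps[k + r] * img[y][clamp(x + k, 0, w - 1)]
                  for k in range(-r, r + 1))
              for x in range(w)]
             for y in range(h)]
    # Vertical pass: produce the final smoothed layer prior to blending.
    return [[sum(taps[k + r] * inter[clamp(y + k, 0, h - 1)][x]
                 for k in range(-r, r + 1))
             for x in range(w)]
            for y in range(h)]
```

With normalized taps such as [0.1, 0.2, 0.4, 0.2, 0.1], a flat region passes through unchanged while high-frequency noise is attenuated; choosing different coefficients shifts the cut-off frequency, as the text describes.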
- FIG. 7 illustrates another method in accordance with an embodiment of the present disclosure. Steps 41, 42, and 43 are similar in function to steps 21, 22, and 23 previously discussed.
- Step 44 is similar to step 41, but receives a second source image layer instead of the first source image layer.
- Step 45 is similar to step 42, but a second edge layer based on the second source image layer is determined.
- Step 46 is similar to step 43, but the second source image layer is blended with the second other layer instead of the first source image layer being blended with the first other layer. The result of step 46 is a second blended video layer.
- At step 47, a composite image combining the first and second blended video layers is provided. It will be appreciated that, typically, additional steps analogous to steps 41-43 will be performed to generate a third blended layer, from which a composite image is formed.
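The FIG. 7 flow reduces to processing each component layer independently and recombining the results. A minimal Python sketch, under our own naming, where `denoise_layer` stands in for the per-layer smooth/edge-detect/blend chain described earlier:

```python
def denoise_image(layers, denoise_layer):
    """Apply the per-layer noise-reduction chain to each component layer
    (e.g. Y, U, V) independently, then return the blended layers as the
    composite image."""
    return [denoise_layer(layer) for layer in layers]
```

For example, `denoise_image([y, u, v], chain)` would process the three YUV component layers independently, as the text describes for layers 115-117.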
FIG. 8 illustrates, in block diagram form, a data processing system that may represent a general purpose processing system, such as a personal computer or a personal digital assistant, or an application specific system such as a media server, internet appliance, home networking hubs, and the like. Thesystem 500 is illustrated to include acentral processing unit 510, which may be a conventional or proprietary data processor, memory includingrandom access memory 512, read only memory 514,input output adapter 522, auser interface adapter 520, a communications interface adapter 524, and amultimedia controller 526. - The input output (I/O)
adapter 526 can be further connected to various peripherals such asdisk drives 547,printer 545,removable storage devices 546, as well as other standard and proprietary I/O devices. - The
user interface adapter 520 can be considered to be a specialized I/O adapter. Theadapter 520 is illustrated to be connected to a mouse 540, and akeyboard 541. In addition, theuser interface adapter 520 may be connected to other devices capable of providing various types of user control, such as touch screen devices. - The communications interface adapter 524 is connected to a
bridge 550 such as is associated with a local or a wide area network, and amodem 551. By connecting thesystem bus 502 to various communication devices, external access to information can be obtained. - The
multimedia controller 526 will generally include a video graphics controller capable of generating smoothed images, in the manner discussed herein, that can be displayed, saved, or transmitted. In the specific embodiment illustrated, the multimedia controller 526 can include the system of FIG. 2, which can be implemented in hardware or software. Software implementations can be stored in any one of various memory locations, including RAM 512 and ROM 514; in addition, software implementations can be stored in the multimedia controller 526 itself. When implemented in software, the system of FIG. 2 may be executed by a data processor within the controller 526, or by a shared processor such as the CPU 510. - In the preceding detailed description of the figures, reference has been made to the accompanying drawings, which form a part hereof and show, by way of illustration, specific embodiments in which the invention may be practiced. It will be appreciated that many other varied embodiments incorporating the teachings herein may easily be constructed by those skilled in the art. For example, intermediate edge layers can be used to derive the final edge layer used by the blending
controller 160. One such intermediate layer would contain pixel information indicating horizontally adjacent edge-pixel information. For example, FIG. 9 illustrates an intermediate table in which each pixel stores the number of edge pixels within +/−1 horizontal pixel in the lower four bits of a byte, and the number of edge pixels at +/−2 horizontal pixels in the upper four bits of the byte. Such an intermediate layer allows for efficient calculation of the final edge layer. For example, the number of edge pixels within +/−1 pixel of a pixel P(x, y) can be determined by adding the lower four bits of pixels P(x, y−1), P(x, y), and P(x, y+1). In a similar manner, the number of edge pixels at +/−2 pixels from a pixel P(x, y) is determined by adding the upper four bits of P(x, y−2), P(x, y−1), P(x, y), P(x, y+1), and P(x, y+2) to the lower four bits of P(x, y−2) and P(x, y+2). Utilizing an intermediate layer in this fashion reduces the computation needed to calculate the weighted edge values by reusing the horizontal edge data. Accordingly, the present disclosure is not intended to be limited to the specific form set forth herein; on the contrary, it is intended to cover such alternatives, modifications, and equivalents as can reasonably be included within the spirit and scope of the invention. The preceding detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present disclosure is defined only by the appended claims.
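The nibble-packed intermediate table of FIG. 9 can be sketched as below. This is a hedged illustration: the function names, the row-major layout, and the treatment of out-of-bounds pixels as non-edge are my assumptions, not details given in the patent.

```python
# Sketch of the FIG. 9 intermediate edge table: for each pixel, two
# horizontal edge counts are packed into one byte:
#   lower 4 bits: edge pixels within +/-1 horizontal pixel (3 samples)
#   upper 4 bits: edge pixels at exactly +/-2 horizontal pixels (2 samples)
# Out-of-bounds samples are assumed to count as non-edge.

def build_intermediate(edge_flags, width, height):
    """edge_flags: row-major list of 0/1 edge indicators."""
    def edge(x, y):
        if 0 <= x < width and 0 <= y < height:
            return edge_flags[y * width + x]
        return 0  # assumption: outside the image is non-edge

    table = [0] * (width * height)
    for y in range(height):
        for x in range(width):
            near = edge(x - 1, y) + edge(x, y) + edge(x + 1, y)  # +/-1
            far = edge(x - 2, y) + edge(x + 2, y)                # +/-2
            table[y * width + x] = (far << 4) | near
    return table

def edges_within_1(table, width, height, x, y):
    """Edge pixels within +/-1 of P(x, y): sum the lower nibbles of the
    packed values at rows y-1, y, y+1, as described in the text."""
    total = 0
    for dy in (-1, 0, 1):
        yy = y + dy
        if 0 <= yy < height:
            total += table[yy * width + x] & 0x0F
    return total
```

Because each row's horizontal counts are computed once and reused for three (or five) vertical lookups, the per-pixel work for the final weighted edge values drops from a full 2-D window scan to a few nibble additions.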
Claims (20)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/673,612 US7668396B2 (en) | 2003-09-29 | 2003-09-29 | Method and system for noise reduction in an image |
CNA200480028340XA CN1860777A (en) | 2003-09-29 | 2004-09-29 | Method and system for noise reduction in an image |
JP2006527254A JP2007507134A (en) | 2003-09-29 | 2004-09-29 | Noise reduction method and system for images |
EP04786682A EP1668888A4 (en) | 2003-09-29 | 2004-09-29 | Method and system for noise reduction in an image |
PCT/CA2004/001764 WO2005036872A1 (en) | 2003-09-29 | 2004-09-29 | Method and system for noise reduction in an image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/673,612 US7668396B2 (en) | 2003-09-29 | 2003-09-29 | Method and system for noise reduction in an image |
Publications (2)
Publication Number | Publication Date |
---|---|
US20050069221A1 true US20050069221A1 (en) | 2005-03-31 |
US7668396B2 US7668396B2 (en) | 2010-02-23 |
Family
ID=34376648
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/673,612 Expired - Fee Related US7668396B2 (en) | 2003-09-29 | 2003-09-29 | Method and system for noise reduction in an image |
Country Status (5)
Country | Link |
---|---|
US (1) | US7668396B2 (en) |
EP (1) | EP1668888A4 (en) |
JP (1) | JP2007507134A (en) |
CN (1) | CN1860777A (en) |
WO (1) | WO2005036872A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4524717B2 (en) * | 2008-06-13 | 2010-08-18 | 富士フイルム株式会社 | Image processing apparatus, imaging apparatus, image processing method, and program |
US9307251B2 (en) * | 2009-08-19 | 2016-04-05 | Sharp Laboratories Of America, Inc. | Methods and systems for determining data-adaptive weights for motion estimation in a video sequence |
Citations (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4866395A (en) * | 1988-11-14 | 1989-09-12 | Gte Government Systems Corporation | Universal carrier recovery and data detection for digital communication systems |
US5027203A (en) * | 1989-04-27 | 1991-06-25 | Sony Corporation | Motion dependent video signal processing |
US5093847A (en) * | 1990-12-21 | 1992-03-03 | Silicon Systems, Inc. | Adaptive phase lock loop |
US5115812A (en) * | 1988-11-30 | 1992-05-26 | Hitachi, Ltd. | Magnetic resonance imaging method for moving object |
US5231677A (en) * | 1984-12-28 | 1993-07-27 | Canon Kabushiki Kaisha | Image processing method and apparatus |
US5253056A (en) * | 1992-07-02 | 1993-10-12 | At&T Bell Laboratories | Spatial/frequency hybrid video coding facilitating the derivatives of variable-resolution images |
US5392137A (en) * | 1992-04-30 | 1995-02-21 | Ricoh Company, Ltd. | Image processing apparatus in which filtering is selected for input image characteristics |
US5475434A (en) * | 1993-08-17 | 1995-12-12 | Goldstar Co. Ltd. | Blocking effect attenuation apparatus for high definition television receiver |
US5563950A (en) * | 1995-03-31 | 1996-10-08 | International Business Machines Corporation | System and methods for data encryption using public key cryptography |
US5602589A (en) * | 1994-08-19 | 1997-02-11 | Xerox Corporation | Video image compression using weighted wavelet hierarchical vector quantization |
US5606630A (en) * | 1992-12-28 | 1997-02-25 | Minolta Camera Kabushiki Kaisha | Photographed image reproducing apparatus |
US5635985A (en) * | 1994-10-11 | 1997-06-03 | Hitachi America, Ltd. | Low cost joint HD/SD television decoder methods and apparatus |
US5644361A (en) * | 1994-11-30 | 1997-07-01 | National Semiconductor Corporation | Subsampled frame storage technique for reduced memory size |
US5652749A (en) * | 1995-02-03 | 1997-07-29 | International Business Machines Corporation | Apparatus and method for segmentation and time synchronization of the transmission of a multiple program multimedia data stream |
US5729360A (en) * | 1994-01-14 | 1998-03-17 | Fuji Xerox Co., Ltd. | Color image processing method and system |
US5732391A (en) * | 1994-03-09 | 1998-03-24 | Motorola, Inc. | Method and apparatus of reducing processing steps in an audio compression system using psychoacoustic parameters |
US5737020A (en) * | 1995-03-27 | 1998-04-07 | International Business Machines Corporation | Adaptive field/frame encoding of discrete cosine transform |
US5740028A (en) * | 1993-01-18 | 1998-04-14 | Canon Kabushiki Kaisha | Information input/output control device and method therefor |
US5747796A (en) * | 1995-07-13 | 1998-05-05 | Sharp Kabushiki Kaisha | Waveguide type compact optical scanner and manufacturing method thereof |
US5790686A (en) * | 1995-09-19 | 1998-08-04 | University Of Maryland At College Park | DCT-based motion estimation method |
US5844627A (en) * | 1995-09-11 | 1998-12-01 | Minerya System, Inc. | Structure and method for reducing spatial noise |
US5844545A (en) * | 1991-02-05 | 1998-12-01 | Minolta Co., Ltd. | Image display apparatus capable of combining image displayed with high resolution and image displayed with low resolution |
US5850443A (en) * | 1996-08-15 | 1998-12-15 | Entrust Technologies, Ltd. | Key management system for mixed-trust environments |
US5940130A (en) * | 1994-04-21 | 1999-08-17 | British Telecommunications Public Limited Company | Video transcoder with by-pass transfer of extracted motion compensation data |
US5996029A (en) * | 1993-01-18 | 1999-11-30 | Canon Kabushiki Kaisha | Information input/output control apparatus and method for indicating which of at least one information terminal device is able to execute a functional operation based on environmental information |
US6005624A (en) * | 1996-12-20 | 1999-12-21 | Lsi Logic Corporation | System and method for performing motion compensation using a skewed tile storage format for improved efficiency |
US6005623A (en) * | 1994-06-08 | 1999-12-21 | Matsushita Electric Industrial Co., Ltd. | Image conversion apparatus for transforming compressed image data of different resolutions wherein side information is scaled |
US6011558A (en) * | 1997-09-23 | 2000-01-04 | Industrial Technology Research Institute | Intelligent stitcher for panoramic image-based virtual worlds |
US6014694A (en) * | 1997-06-26 | 2000-01-11 | Citrix Systems, Inc. | System for adaptive video/audio transport over a network |
US6040863A (en) * | 1993-03-24 | 2000-03-21 | Sony Corporation | Method of coding and decoding motion vector and apparatus therefor, and method of coding and decoding picture signal and apparatus therefor |
US6081295A (en) * | 1994-05-13 | 2000-06-27 | Deutsche Thomson-Brandt Gmbh | Method and apparatus for transcoding bit streams with video data |
US6141693A (en) * | 1996-06-03 | 2000-10-31 | Webtv Networks, Inc. | Method and apparatus for extracting digital data from a video stream and using the digital data to configure the video stream for display on a television set |
US6144402A (en) * | 1997-07-08 | 2000-11-07 | Microtune, Inc. | Internet transaction acceleration |
US6160913A (en) * | 1998-03-25 | 2000-12-12 | Eastman Kodak Company | Method and apparatus for digital halftone dots detection and removal in business documents |
US6167084A (en) * | 1998-08-27 | 2000-12-26 | Motorola, Inc. | Dynamic bit allocation for statistical multiplexing of compressed and uncompressed digital video signals |
US6182203B1 (en) * | 1997-01-24 | 2001-01-30 | Texas Instruments Incorporated | Microprocessor |
US6215821B1 (en) * | 1996-08-07 | 2001-04-10 | Lucent Technologies, Inc. | Communication system using an intersource coding technique |
US6219358B1 (en) * | 1998-09-11 | 2001-04-17 | Scientific-Atlanta, Inc. | Adaptive rate control for insertion of data into arbitrary bit rate data streams |
US6222886B1 (en) * | 1996-06-24 | 2001-04-24 | Kabushiki Kaisha Toshiba | Compression based reduced memory video decoder |
US6236683B1 (en) * | 1991-08-21 | 2001-05-22 | Sgs-Thomson Microelectronics S.A. | Image predictor |
US6259741B1 (en) * | 1999-02-18 | 2001-07-10 | General Instrument Corporation | Method of architecture for converting MPEG-2 4:2:2-profile bitstreams into main-profile bitstreams |
US6263022B1 (en) * | 1999-07-06 | 2001-07-17 | Philips Electronics North America Corp. | System and method for fine granular scalable video with selective quality enhancement |
US20010026591A1 (en) * | 1998-07-27 | 2001-10-04 | Avishai Keren | Multimedia stream compression |
US6300973B1 (en) * | 2000-01-13 | 2001-10-09 | Meir Feder | Method and system for multimedia communication control |
US6307939B1 (en) * | 1996-08-20 | 2001-10-23 | France Telecom | Method and equipment for allocating to a television program, which is already conditionally accessed, a complementary conditional access |
US6314138B1 (en) * | 1997-07-22 | 2001-11-06 | U.S. Philips Corporation | Method of switching between video sequencing and corresponding device |
US6323904B1 (en) * | 1996-04-22 | 2001-11-27 | Electrocraft Laboratories Limited | Multifunction video compression circuit |
US6366614B1 (en) * | 1996-10-11 | 2002-04-02 | Qualcomm Inc. | Adaptive rate control for digital video compression |
US6385248B1 (en) * | 1998-05-12 | 2002-05-07 | Hitachi America Ltd. | Methods and apparatus for processing luminance and chrominance image data |
US6438168B2 (en) * | 2000-06-27 | 2002-08-20 | Bamboo Media Casting, Inc. | Bandwidth scaling of a compressed video stream |
US20020114015A1 (en) * | 2000-12-21 | 2002-08-22 | Shinichi Fujii | Apparatus and method for controlling optical system |
US20020145931A1 (en) * | 2000-11-09 | 2002-10-10 | Pitts Robert L. | Method and apparatus for storing data in an integrated circuit |
US6480541B1 (en) * | 1996-11-27 | 2002-11-12 | Realnetworks, Inc. | Method and apparatus for providing scalable pre-compressed digital video with reduced quantization based artifacts |
US20030007099A1 (en) * | 2001-06-19 | 2003-01-09 | Biao Zhang | Motion adaptive noise reduction method and system |
US6526099B1 (en) * | 1996-10-25 | 2003-02-25 | Telefonaktiebolaget Lm Ericsson (Publ) | Transcoder |
US6549561B2 (en) * | 2001-02-21 | 2003-04-15 | Magis Networks, Inc. | OFDM pilot tone tracking for wireless LAN |
US20030093661A1 (en) * | 2001-08-10 | 2003-05-15 | Loh Thiam Wah | Eeprom agent record |
US6584509B2 (en) * | 1998-06-23 | 2003-06-24 | Intel Corporation | Recognizing audio and video streams over PPP links in the absence of an announcement protocol |
US20030152148A1 (en) * | 2001-11-21 | 2003-08-14 | Indra Laksono | System and method for multiple channel video transcoding |
US6633683B1 (en) * | 2000-06-26 | 2003-10-14 | Miranda Technologies Inc. | Apparatus and method for adaptively reducing noise in a noisy input image signal |
US6707937B1 (en) * | 2000-07-14 | 2004-03-16 | Agilent Technologies, Inc. | Interpolation of edge portions of a digital image |
US6714202B2 (en) * | 1999-12-02 | 2004-03-30 | Canon Kabushiki Kaisha | Method for encoding animation in an image file |
US6724726B1 (en) * | 1999-10-26 | 2004-04-20 | Mitsubishi Denki Kabushiki Kaisha | Method of putting a flow of packets of a network for transporting packets of variable length into conformity with a traffic contract |
US6748020B1 (en) * | 2000-10-25 | 2004-06-08 | General Instrument Corporation | Transcoder-multiplexer (transmux) software architecture |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5764698A (en) | 1993-12-30 | 1998-06-09 | International Business Machines Corporation | Method and apparatus for efficient compression of high quality digital audio |
JPH07210670A (en) | 1994-01-21 | 1995-08-11 | Fuji Xerox Co Ltd | Image processor |
EP0698990B1 (en) | 1994-08-25 | 1999-02-17 | STMicroelectronics S.r.l. | Fuzzy device for image noise reduction |
EP0739138A3 (en) | 1995-04-19 | 1997-11-05 | AT&T IPM Corp. | Method and apparatus for matching compressed video signals to a communications channel |
JP3423835B2 (en) | 1996-05-01 | 2003-07-07 | 沖電気工業株式会社 | Compression encoding device with scramble and decompression reproduction device thereof |
JP3328532B2 (en) | 1997-01-22 | 2002-09-24 | シャープ株式会社 | Digital data encoding method |
WO1998038798A1 (en) | 1997-02-26 | 1998-09-03 | Mitsubishi Denki Kabushiki Kaisha | Device, system, and method for distributing video data |
DE69803639T2 (en) | 1997-08-07 | 2002-08-08 | Matsushita Electric Ind Co Ltd | Device and method for detecting a motion vector |
US6310919B1 (en) | 1998-05-07 | 2001-10-30 | Sarnoff Corporation | Method and apparatus for adaptively scaling motion vector information in an information stream decoder |
KR100548891B1 (en) | 1998-06-15 | 2006-02-02 | 마츠시타 덴끼 산교 가부시키가이샤 | Audio coding apparatus and method |
US6625211B1 (en) | 1999-02-25 | 2003-09-23 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for transforming moving picture coding system |
DE19946267C2 (en) | 1999-09-27 | 2002-09-26 | Harman Becker Automotive Sys | Digital transcoding system |
US6647061B1 (en) | 2000-06-09 | 2003-11-11 | General Instrument Corporation | Video size conversion and transcoding from MPEG-2 to MPEG-4 |
FR2813742A1 (en) | 2000-09-05 | 2002-03-08 | Koninkl Philips Electronics Nv | BINARY FLOW CONVERSION METHOD |
JP4517495B2 (en) | 2000-11-10 | 2010-08-04 | ソニー株式会社 | Image information conversion apparatus, image information conversion method, encoding apparatus, and encoding method |
KR100433516B1 (en) | 2000-12-08 | 2004-05-31 | 삼성전자주식회사 | Transcoding method |
US8107524B2 (en) | 2001-03-30 | 2012-01-31 | Vixs Systems, Inc. | Adaptive bandwidth footprint matching for multiple compressed video streams in a fixed bandwidth network |
-
2003
- 2003-09-29 US US10/673,612 patent/US7668396B2/en not_active Expired - Fee Related
-
2004
- 2004-09-29 WO PCT/CA2004/001764 patent/WO2005036872A1/en active Application Filing
- 2004-09-29 CN CNA200480028340XA patent/CN1860777A/en active Pending
- 2004-09-29 EP EP04786682A patent/EP1668888A4/en not_active Withdrawn
- 2004-09-29 JP JP2006527254A patent/JP2007507134A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
EP1668888A1 (en) | 2006-06-14 |
JP2007507134A (en) | 2007-03-22 |
US7668396B2 (en) | 2010-02-23 |
WO2005036872A1 (en) | 2005-04-21 |
CN1860777A (en) | 2006-11-08 |
EP1668888A4 (en) | 2006-11-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7151863B1 (en) | Color clamping | |
US9305337B2 (en) | System, method, and apparatus for smoothing of edges in images to remove irregularities | |
KR101649882B1 (en) | Block noise detection and filtering | |
US6094511A (en) | Image filtering method and apparatus with interpolation according to mapping function to produce final image | |
US6928196B1 (en) | Method for kernel selection for image interpolation | |
US7199837B2 (en) | System for improved ratiometric expansion and method thereof | |
CN100456318C (en) | Method for simutaneously suppressing undershoot and over shoot for increasing digital image | |
JP2002515988A (en) | System and method for the conversion of progressively scanned images into an input format for television | |
US7551805B2 (en) | Converting the resolution of an image using interpolation and displaying the converted image | |
US7277101B2 (en) | Method and system for scaling images | |
TW200803467A (en) | Selective local transient improvement and peaking for video sharpness enhancement | |
JP2000032465A (en) | Nonlinear adaptive image filter eliminating noise such as blocking artifact or the like | |
AU6788998A (en) | Apparatus and methods for selectively feathering the matte image of a composite image | |
JP4949463B2 (en) | Upscaling | |
CN101674448B (en) | Video signal processing device, video signal processing method | |
US20060103892A1 (en) | System and method for a vector difference mean filter for noise suppression | |
WO2014008329A1 (en) | System and method to enhance and process a digital image | |
JP2008500757A (en) | Method and system for enhancing the sharpness of a video signal | |
EP1631068A2 (en) | Apparatus and method for converting interlaced image into progressive image | |
US7668396B2 (en) | Method and system for noise reduction in an image | |
CN101796566A (en) | Image processing device, image processing method, and program | |
JP2008278185A (en) | Data processor and data processing method, and program | |
US20070258653A1 (en) | Unit for and Method of Image Conversion | |
JP5593515B2 (en) | Image enhancement method, image enhancer, image rendering system, and computer program | |
KR20050121148A (en) | Image interpolation method, and apparatus of the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: VIXS SYSTEMS INC., CANADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ZENG, STEVE ZHIHUA;REEL/FRAME:014573/0376 Effective date: 20030926 |
|
AS | Assignment |
Owner name: COMERICA BANK, CANADA Free format text: SECURITY AGREEMENT;ASSIGNOR:VIXS SYSTEMS INC.;REEL/FRAME:022240/0446 Effective date: 20081114 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
FEPP | Fee payment procedure |
Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
AS | Assignment |
Owner name: VIXS SYSTEMS, INC., CANADA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:COMERICA BANK;REEL/FRAME:043601/0817 Effective date: 20170802 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20220223 |