US20020176113A1 - Dynamic image correction and imaging systems - Google Patents

Dynamic image correction and imaging systems

Info

Publication number
US20020176113A1
Authority
US
United States
Prior art keywords
image
pixel
mask
original
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/960,276
Inventor
Albert Edgar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eastman Kodak Co
Original Assignee
Applied Science Fiction Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Applied Science Fiction Inc
Priority to US09/960,276
Assigned to APPLIED SCIENCE FICTION, INC. Assignment of assignors interest (see document for details). Assignors: EDGAR, ALBERT D.
Assigned to RHO VENTURES (QP), L.P. and CENTERPOINT VENTURE PARTNERS, L.P. Security agreement. Assignors: APPLIED SCIENCE FICTION, INC.
Assigned to CENTERPOINT VENTURE PARTNERS, L.P. and RHO VENTURES (QP), L.P. Security interest (see document for details). Assignors: APPLIED SCIENCE FICTION, INC.
Publication of US20020176113A1
Assigned to EASTMAN KODAK COMPANY. Assignment of assignors interest (see document for details). Assignors: APPLIED SCIENCE FICTION, INC.

Classifications

    • G06T5/75
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/407Control or modification of tonal gradation or of extreme levels, e.g. background level
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/409Edge or detail enhancement; Noise or error suppression
    • H04N1/4092Edge or detail enhancement

Definitions

  • the present invention relates generally to imaging systems and image processing and more particularly to dynamic image correction and imaging systems.
  • a variety of methods are commonly employed to capture an image. For example, photographic film may be exposed to light reflected from a desired subject to record a latent image within the film. The film is then developed to generate a “negative” or “positive” from which prints or transparencies can be made and delivered to consumers. The negative, positive, or print can be scanned to produce a digital representation of the subject. Alternately, digital devices, such as digital cameras, video recorders, and the like, may be used to directly capture a digital representation of the desired subject by measuring the reflected light from the subject.
  • Lighting is particularly important when capturing images, and care is often taken to ensure the proper lighting of the subject matter of the image. If too much light is reflected from the subject, the captured image will be over-exposed, and the final image will appear washed-out. If too little light is reflected, the captured image will appear under-exposed, and the final image will appear dark. Similarly, if the proper lighting is not provided from a proper angle, for example when one part of an image is in bright light while another part is in shadow, some of the image might be properly exposed, while the remainder of the image is either under-exposed or over-exposed. Conventional digital devices are particularly prone to having over-exposed and under-exposed portions of an image.
  • a drawback of cutout filter 110 is that image detail within the regions is not properly corrected unless the selected region is truly homogeneous, which is not very likely. As a result, detail within each region is lost.
  • the number of regions selected for filtering may be increased, but selecting more regions greatly increases the time and labor needed to generate the cutout filter 110 .
  • this technique and other conventional techniques tend to create visually unappealing boundaries between the regions.
  • a method of enhancing an image comprises obtaining an image mask of the original image.
  • the image mask and the original image each comprise a plurality of pixels having varying values.
  • the plurality of mask pixels are set to form sharper edges corresponding to areas of more rapidly changing pixel values in the original image.
  • the pixels are further arranged to form less sharp regions corresponding to areas of less rapidly changing pixel values in the original image.
  • the method further comprises combining the image mask with the original image to obtain a masked image.
  • Another embodiment of the present invention provides for a digital file tangibly embodied in a computer readable medium.
  • the digital file is generated by implementing a method comprising obtaining an image mask of an original image.
  • the image mask and the original image each comprise a plurality of pixels having varying values.
  • the plurality of mask pixels are set to form sharper edges corresponding to areas of more rapidly changing pixel values in the original image.
  • the pixels are further arranged to form less sharp regions corresponding to areas of less rapidly changing pixel values in the original image.
  • the method further comprises combining the image mask with the original image to obtain a masked image.
  • An additional embodiment of the present invention provides for a computer readable medium tangibly embodying a program of instructions.
  • the program of instructions is capable of obtaining an image mask of an original image.
  • the image mask and the original image each comprise a plurality of pixels having varying values.
  • the plurality of mask pixels are set to form sharper edges corresponding to areas of more rapidly changing pixel values in the original image.
  • the pixels are further arranged to form less sharp regions corresponding to areas of less rapidly changing pixel values in the original image.
  • the program of instructions is further capable of combining the image mask with the original image to obtain a masked image.
  • Yet another embodiment of the present invention provides for a system comprising an image sensor to convert light reflected from an image into information representative of the image, a processor, memory operably coupled to the processor, and a program of instructions capable of being stored in the memory and executed by the processor.
  • the program of instructions manipulates the processor to obtain an image mask, the image mask and the information representative of the image each including a plurality of pixels having varying values, wherein the values of the plurality of mask pixels are set to form sharper edges corresponding to areas of more rapidly changing pixel values in the original image and less sharp regions corresponding to areas of less rapidly changing pixel values in the original image.
  • the program of instructions also manipulates the processor to combine the image mask with the information representative of the image to obtain a masked image.
  • An advantage of at least one embodiment of the present invention is that an image with improved reproducible detail can be generated without user intervention.
  • An additional advantage of at least one embodiment of the present invention is that an image mask can be automatically applied to an original image to generate an image with improved image detail within a reproducible dynamic range due to the image detail preserved in the image mask.
  • Yet another advantage of at least one embodiment of the present invention is that calculations to improve the image detail in scanned images can be performed relatively quickly, due to a lower processing overhead and less user intervention than conventional methods.
  • FIG. 1 is an illustration showing a conventional cutout filter.
  • FIG. 2 is a block diagram illustrating a method for dynamic image correction according to one embodiment of the present invention.
  • FIG. 3 is a block diagram of an original image and a dynamic image mask according to one embodiment of the present invention.
  • FIG. 4 is a set of graphs showing intensity values of pixels around an edge before and after a blurring algorithm has been applied according to one embodiment of the present invention.
  • FIG. 5 is a block diagram of a method for generating a dynamic image mask according to at least one embodiment of the present invention.
  • FIG. 6 is a representation of a dynamic image mask with properties according to at least one embodiment of the present invention.
  • FIG. 7 is a block diagram illustrating a method of applying a dynamic image mask to an image according to at least one embodiment of the present invention.
  • FIG. 8A is a block diagram illustrating a wrinkle reduction process in accordance with one embodiment of the invention.
  • FIG. 8B-1 is a picture illustrating an original image.
  • FIG. 8B-2 is a picture illustrating the image of FIG. 8B-1 with the wrinkle reduction process applied.
  • FIG. 9 is a block diagram illustrating an image capture system according to at least one embodiment of the present invention.
  • FIG. 10 is a chart illustrating improvements in the dynamic range of various image representations according to at least one embodiment of the present invention.
  • FIGS. 2-9 illustrate a method for dynamic image correction and imaging systems having enhanced images.
  • one embodiment of dynamic image correction utilizes a dynamic image mask created with a blurring algorithm that maintains the sharp boundaries of the image.
  • the dynamic image mask is then applied to the image.
  • the dynamic image mask is used to increase the amount of reproducible detail within an image.
  • the dynamic image mask is used to suppress median frequencies and maintain sharp boundaries.
  • the dynamic image mask can be regionally applied using an electronic brush.
  • various embodiments of the dynamic image mask can be used as a correction map for other correction and enhancement functions.
  • Systems for utilizing digital image correction can include a variety of image capturing or processing systems, such as digital cameras, video cameras, scanners, image processing software, and the like.
  • dynamic image correction 200 includes creating a dynamic image mask B from an original image A.
  • the dynamic image mask B is then combined with original image A to generate an enhanced image C.
  • the enhanced image C has improved image detail over original image A, within a reproducible dynamic range.
  • original image A may contain detail which may not be appropriately represented when output for display or printing, such as containing high contrast over-exposed (bright) regions and under-exposed (shadow) regions. It would be helpful to brighten the detail in the shadow regions and to decrease the brightness of the bright regions without losing image detail. At least one embodiment of the present invention automatically performs this function.
  • conventional methods of simply dividing the original image into bright and shadow regions will not generally suffice to improve complex images. Images generally contain complex and diverse regions of varying contrast levels, and as a result, conventional methods generally produce inadequate results.
  • original image A is provided.
  • Original image A is an electronic representation of a subject and includes one or more characteristic values corresponding to specific locations, or pixels. Each pixel has one or more associated values, or planes, that represent information about a particular location on the subject.
  • the values corresponding to each pixel can be a measure of any suitable characteristic of the subject.
  • the values may represent the color, colors, luminance, incidence angle, x-ray density, or any other value representing a characteristic or combination of characteristics.
  • Original image A can be obtained in any suitable manner and need not correlate directly to conventional color images.
  • One implementation obtains original image A by digitizing an image using a scanner, such as a flatbed scanner, film scanner, and the like.
  • Another implementation obtains original image A by directly capturing the image using a digital device, such as a digital camera, video camera, and the like.
  • the original image A is captured using an imaging device, such as a magnetic resonance imaging system, a radar system, and the like.
  • the characteristic values do not correlate to colors but to other characteristics of the subject matter imaged.
  • the original image A could also be obtained by computer generation or other similar technique.
  • Dynamic image correction 200 does not depend upon how the original image A is obtained, but only that the original image A includes one or more values that represent the image.
  • a dynamic image mask B is generated from original image A.
  • the pixel values of the dynamic image mask B are generated relative to a pixel in the original image A.
  • the pixels generated for dynamic image mask B are calculated using weighted averages of select pixels in original image A, as discussed in greater detail below. It will be appreciated that the pixels generated for dynamic image mask B may be calculated using any number of methods without departing from the spirit or the scope of the present invention.
  • Dynamic image mask B maintains the sharp edges in the original image A while blurring the regions surrounding the sharp edges.
  • rapidly changing characteristics (i.e., values or contrast) in original image A are preserved as sharp edges in dynamic image mask B. At the same time, less rapidly changing values in original image A can be averaged to generate blurred regions in dynamic image mask B.
  • the calculations performed on original image A produce a dynamic image mask B which preserves the boundaries between dissimilar pixels in original image A while blurring areas containing similar pixels, as will be discussed further in FIG. 3.
  • a dynamic image mask B is often calculated for each characteristic value.
  • the red values are used to calculate the blurring and edge parameters of the dynamic image mask B for the red color
  • the blue values are used to calculate the blurring and edge parameters of the dynamic image mask B for the blue color, and so on.
  • the dynamic image mask B can use different characteristics, or planes, to establish the regions and boundaries for different characteristics.
  • the red values could be used to establish the blurring and edge parameters that are applied to each of the red, green, and blue values.
  • a calculated luminance value could be used to calculate the blurring and edge parameters that are then applied to the red, green, and blue values of each pixel.
  • a dynamic image mask B is only calculated for certain characteristics. Using the same example as above, a dynamic image mask B for the colors red and green may be calculated, but the values of the color blue are combined without change, as described in greater detail below.
  • dynamic image mask B is applied to original image A to produce enhanced image C.
  • Dynamic image mask B is generally applied to original image A by use of an overlay technique.
  • a mathematical operation, such as division between the pixel values of original image A and the corresponding pixel values in dynamic image mask B, can be used to generate the pixel values of enhanced image C.
  • the process of generating and applying the dynamic image mask B is performed as part of a set of instructions run by an information processing system.
  • the processes of steps 210 , 220 , and 230 can be performed within an image processing system, implemented by photo-lab technicians, in a system used by a customer without the assistance of a lab technician, incorporated into a scanner, digital camera, video recorder and the like, or performed by a computer system external to the image capturing device.
  • the processes are automated by a program of executable instructions executed by an information processing system such that minimal user interaction is required.
  • the enhanced image C is delivered in the form desired.
  • the form in which the enhanced image C is delivered includes, but is not limited to, a digital file, a photographic print, or a film record.
  • Digital files can be stored on mass storage devices, tape drives, CD recorders, DVD recorders, and/or various forms of volatile or non-volatile memory.
  • Digital files can also be transferred to other systems using a communications adapter, where the file can be sent to the Internet, an intranet, as an e-mail, etc.
  • a digital file can also be prepared for retrieval at an image processing kiosk which allows customers to recover their pictures and print them out in a form of their choosing without the assistance of a film development technician.
  • the enhanced image C can also be displayed as an image on a display or printed using a computer printer.
  • the enhanced image C also can be represented on a form of film record, such as a film negative, positive image, or photographic print.
  • enhanced image C generally contains desirable detail from original image A, so that a larger quantity of image detail from original image A is effectively compressed into, and thereby preserved within, a dynamic range capable of being reproduced in print.
  • Referring to FIG. 3, a diagram of an original image and a blurred image is shown, according to one embodiment of the present invention.
  • Original image A is composed of a plurality of pixels, such as pixels numbered 301-325.
  • Dynamic image mask B is composed of corresponding pixels, such as pixels numbered 351-375, calculated from the pixels of original image A. As described in greater detail below, the pixel values of dynamic image mask B are calculated using an averaging function that accounts for sharp edges.
  • a sharp edge is generally defined by a variation between pixel values greater than a certain sharpness threshold, or Gain.
  • the sharpness threshold allows the pixels to be differentiated into regions for purposes of averaging calculations.
  • the sharpness threshold is varied by a user. In other embodiments, the sharpness threshold is fixed within the software.
  • the pixels calculated for dynamic image mask B correspond to averages taken over regions of pixels in original image A, taking into account the sharpness threshold, or Gain.
  • pixel 363 corresponds to calculations performed around pixel 313.
  • in one embodiment, pixel 363 is calculated by averaging the values of pixels 311-315, 303, 308, 318, and 323.
  • in another embodiment, pixel 363 is calculated by averaging the values of pixels 307-309, 312-314, and 317-319.
  • the pixels are assigned a weight based on their relative distance from pixel 313. In this embodiment, pixels that are relatively closer have a greater impact on the averaging calculation than pixels that are relatively remote.
  • the weight function, w_N, can be used to apply a separate weight to each of the pixel values. Only values of w_N between zero and one are accepted. Accordingly, if the value of w_N is returned as a negative value, the returned weight for the pixel being weighed is zero.
  • pixel_N is the contrast value of the pixel being weighed. centerpixel is the value of the central pixel, around which the blurring is being performed.
  • Gain is a threshold value used to determine a contrast threshold for a sharp edge.
  • the value of Gain can be decreased as the pixel being weighed is further from the central pixel. Lowering the value of Gain allows small changes in the contrast between pixel_N and centerpixel to result in a negative w_N, and thus a weight of zero. Accordingly, in one embodiment, the farther the pixel is from the centerpixel, the smaller Gain gets and the more likely it is that the value of w_N will be negative and the pixel will be assigned a weight of zero.
  • Gain is preferably chosen to decrease slowly as the distance from the central pixel increases.
  • the values of Gain used can be adapted for the desired application; however, it has been found that slower changes in Gain provide images with more pleasing detail than sharper changes in Gain. Furthermore, the weight function itself can be altered without departing from the scope of the present invention.
  • a sum of each of the pixel values, multiplied by their relative weights, can be calculated.
  • the sum can then be divided by the sum of the individual weights to generate the weighted average of the pixels, which can be used for the pixels of dynamic image mask B.
  • the minimum weight calculated from the pixels adjacent to the central pixel can also be used as a multiplier for the weights of the surrounding pixels. Multiplying by the weight of an adjacent pixel allows the blurring to be effectively turned “off” if the contrast around the central pixel is changing too rapidly. A sketch of this weighting scheme follows.
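  • The weight equation itself did not survive extraction, so the following Python sketch assumes the form w_N = 1 - |pixel_N - centerpixel| / Gain, with negative weights clamped to zero and Gain decaying slowly with distance; the decay rate and the function name are illustrative, not the patent's. The radius of 4 and Gain of 40 used as defaults are the preferred values noted below.
```python
import numpy as np

def sandblast_pixel(img, y, x, radius=4, gain=40.0):
    """One 'sandblaster' weighted average around pixel (y, x).

    Assumed weight: w_N = 1 - |pixel_N - centerpixel| / local_gain,
    where negative weights are clamped to zero and local_gain shrinks
    slowly as the distance from the central pixel grows.
    """
    center = float(img[y, x])
    num = den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if not (0 <= yy < img.shape[0] and 0 <= xx < img.shape[1]):
                continue  # edge handling is discussed further below
            dist = max(abs(dy), abs(dx))
            local_gain = gain / (1.0 + 0.25 * dist)  # slow, illustrative decay
            w = max(0.0, 1.0 - abs(float(img[yy, xx]) - center) / local_gain)
            num += w * float(img[yy, xx])
            den += w
    return num / den if den else center
```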
  • This embodiment of the processes performed to generate dynamic image mask B can be likened to a sandblaster.
  • a sandblaster can be used to soften, or blur, the textures it is working over. Accordingly, the blurring algorithm as described above will be herein referred to as the sandblaster algorithm.
  • a sandblaster has an effective radius over which it is used, with the material closer to the center of the sandblasting radius affected most.
  • a radius is selected and measured from the central pixel.
  • the pressure of a sandblaster can be adjusted to effect more change.
  • similarly, the Gain value in the described algorithm can be altered to effect more or less blurring.
  • the preferred radius is 4 and the preferred Gain is 40.
  • the sandblaster algorithm can be performed in one-dimensional increments. For example, to calculate the value of pixel 362, the pixels surrounding pixel 312 are considered. In one embodiment of the present invention, the averaged pixel values are determined using the neighboring vertical pixels and then the neighboring horizontal pixel values, as described above. Alternatively, windows can be generated and applied to average the pixels around the central pixel together, in both the horizontal and vertical directions. Color images can comprise multiple image planes, wherein the multiple image planes may include a plane for each color: a red plane, a green plane, and a blue plane. In a preferred embodiment, the sandblaster algorithm is only performed on one plane at a time.
  • the sandblaster algorithm can be calculated taking other image planes into account, calculating in the values of pixels relative to the central pixel from different color planes.
  • performing multi-dimensional calculations over an image may increase the processing time.
  • pixels which are near an image edge, such as pixel 311, may lack desired values from pixels lying beyond the limits of original image A.
  • in one embodiment, the pixels along the edge use their own values to reproduce pixel values beyond the image edge, for calculation with the sandblaster algorithm.
  • alternatively, zeroes may be used for values lying outside the edges of original image A. Both options are sketched below.
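  • Both edge strategies can be illustrated with NumPy padding, where the pad width matches the blur radius (a minimal sketch; the array here is a stand-in):
```python
import numpy as np

image = np.arange(25.0).reshape(5, 5)  # stand-in for one image plane
radius = 4
# Edge pixels reproduce their own values beyond the image boundary
padded_edge = np.pad(image, radius, mode="edge")
# Alternatively, zeroes are used for values lying outside the image
padded_zero = np.pad(image, radius, mode="constant", constant_values=0)
```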
  • Referring to FIG. 4, a graph of intensities across a row of pixels is shown, before and after the sandblaster blurring algorithm has been applied, according to at least one embodiment of the present invention.
  • Graph 450 represents the intensity values in an original image A around an edge representing contrasting intensity.
  • Graph 460 represents the intensities for dynamic image mask B, among the same pixels as graph 450 .
  • Two distinct intensity levels are identifiable in graph 450 .
  • a low intensity region can be identified among pixels 451-454 and a high intensity region can be identified by pixel 455.
  • the radius used to blur the pixels around the central pixel described in FIG. 3 is one factor in how much blurring will be performed. If too large a radius is used, little blurring may result. For example, if the pixel considered for blurring were pixel 452 and the radius were set large enough, the blurred value of pixel 452 may not change much. With the radius set large enough, pixel 452 will be averaged with many pixels above its intensity, such as pixel 451. Pixel 452 will also be averaged with many pixels below its intensity, such as pixel 453. If the radius is too large, there could be enough pixels with intensities above pixel 452 and enough pixels with intensities below pixel 452 that the value for pixel 452 will remain unchanged, since the intensity value of pixel 452 lies between the high and low extremes.
  • a decimated representation of the original image A can contain half the resolution of the original image A. Some of the detail in the original image is lost in the decimated representation. Performing blurring on the decimated image with a specific radius can relate to covering twice the radius in the original image.
  • graph 460 shows the intensities in dynamic image mask B produced using the sandblaster blurring algorithm.
  • the blurring is enough to bring down the intensity of pixel 452 in the original image to pixel 462 in the blurred representation.
  • Pixel 455 in a separate intensity level is increased in intensity to pixel 465 in the blurred representation.
  • the blurring is turned off for pixels along an edge. Turning off the blurring allows the sharpness among edges to be preserved in the blurred representation, preserving edges between regions with a high contrast of intensities.
  • Pixel 454 lies along an edge, where the intensity for pixels nearby, such as pixel 455 , is much higher. The intensity of pixel 454 is not changed, preserving the difference in contrast between pixel 454 and the pixels of higher intensity, such as pixel 455 .
  • Referring to FIG. 5, a block diagram of a method for generating another embodiment of a dynamic image mask B is illustrated.
  • the sandblaster algorithm can be used to create a blurred image with sharp edges and blurred regions.
  • a pyramidal decomposition is performed on the original image, as shown in FIG. 5.
  • in step 510, the original image A is received.
  • next, the image size is reduced.
  • in one embodiment, the image size is reduced by half.
  • the reduction in image size may be performed using standard digital image decimation.
  • in one embodiment, the decimation is performed by discarding every other pixel in the original image from step 510, as in the sketch below.
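  • In NumPy terms, this discard-every-other-pixel decimation is a single slicing operation (a sketch over a stand-in 2-D plane):
```python
import numpy as np

image = np.arange(64.0).reshape(8, 8)  # stand-in for the image from step 510
decimated = image[::2, ::2]            # keep every other row and column
assert decimated.shape == (4, 4)       # half the resolution in each dimension
```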
  • in step 525, the sandblaster algorithm, as discussed in FIG. 3, is performed on the decimated image to create a blurred image.
  • the decimated image contains half the resolution of the original image A. Some of the detail in the original image A is lost to the decimated image.
  • the effective radius covered by the algorithm can relate to twice the radius in the original image. Since some of the detail from the original image A is not present in the decimated image, more blurring can result with the sandblaster algorithm.
  • the effective blur radius and the amount of detail blurred increase in inverse proportion to the change in resolution in the decimated images. For example, performing the sandblaster algorithm in step 525 on the reduced image of step 535 has twice the effective radius of performing the same algorithm on the original image, while the reduced image has half the resolution of the original image.
  • in step 536, the blurred image is decimated once again.
  • in step 526, the decimated image from step 536 is blurred using the sandblaster algorithm. Further decimation steps 537-539 and sandblaster steps 527-529 are consecutively performed on the outputs of the previous steps.
  • in step 550, the blurred image from sandblaster step 529 is subtracted from the decimated output of decimation step 539.
  • in step 560, the mixed output from step 550 is up-sampled.
  • the image is increased to twice its pixel resolution. Increasing the image size may be performed by repeating the image values of present pixels to fill new pixels.
  • Interpolation may also be performed to determine the values of the new pixels. Both up-sampling options are sketched below.
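  • Both up-sampling options might look as follows; the scipy interpolation call is one common choice, not one named by the patent:
```python
import numpy as np
from scipy.ndimage import zoom

small = np.arange(16.0).reshape(4, 4)
# Repeat the values of present pixels to fill the new pixels
replicated = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
# Or interpolate to determine the values of the new pixels
interpolated = zoom(small, 2, order=1)  # bilinear interpolation
```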
  • the up-sampled image from step 560 is added to the blurred image from step 528 .
  • the combined image information is subtracted from the decimated output from step 538 .
  • the calculations in step 552 are performed to recover image detail that may have been lost.
  • Mixer steps 554 and 552, consecutively performed with up-sampling steps 562-566, attempt to generate the mask data.
  • in step 558, a mixer is used to combine the up-sampled image data from step 566 with the blurred image data from step 525.
  • the output from the mixer in step 558 is then up-sampled, in step 580 , to produce the image mask of the received image.
  • the dynamic image mask B is then prepared for delivery and use, as in step 590. A structural sketch of this pyramid follows.
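  • The following Python sketch shows one structural reading of steps 510-590. The exact mixer arithmetic is only partially recoverable from the text above, so the subtract/add pattern below (and the helper names) should be treated as assumptions; sandblast_pixel is the sketch given earlier, and image dimensions are assumed divisible by two at every level.
```python
import numpy as np

def sandblast(plane, radius=4, gain=40.0):
    # Whole-image wrapper around the earlier sandblast_pixel sketch
    out = np.empty_like(plane, dtype=float)
    for y in range(plane.shape[0]):
        for x in range(plane.shape[1]):
            out[y, x] = sandblast_pixel(plane, y, x, radius, gain)
    return out

def pyramidal_mask(original, levels=4):
    # Decimate repeatedly (steps 535-539), blurring each level (steps 525-529)
    decimated, blurred = [], []
    level = np.asarray(original, dtype=float)
    for _ in range(levels):
        level = level[::2, ::2]          # discard every other pixel
        decimated.append(level)
        blurred.append(sandblast(level))
    # Coarsest mixer (step 550): blurred output subtracted from decimated output
    out = decimated[-1] - blurred[-1]
    # Walk back up (mixers 552-558, with up-sampling steps 560-566)
    for d, b in zip(reversed(decimated[:-1]), reversed(blurred[:-1])):
        up = np.repeat(np.repeat(out, 2, axis=0), 2, axis=1)
        out = d - (up + b)               # assumed mixing, per steps 552/558
    # Step 580: final up-sample to the resolution of the original image
    return np.repeat(np.repeat(out, 2, axis=0), 2, axis=1)
```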
  • in one embodiment, the resultant image mask is a monochrome mask, which is applied to the intensities of the individual color planes in the original image.
  • a monochrome image plane can be calculated from the separate image color planes. For example, in one embodiment, the values of the monochrome image mask are determined using the following equation: OUT = MAX(R, G).
  • OUT refers to the pixel being calculated in the monochromatic image mask.
  • MAX(R, G) is a function in which the maximum intensity between the intensity value of the pixel in the red plane and the intensity value of the pixel in the green plane is chosen.
  • the formula can be appended to include the blue plane: OUT = MAX(R, G, 50% B).
  • 50% B is half of the intensity value in the blue plane.
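  • A minimal sketch of this calculation (the formulas above were reassembled from the surrounding definitions, since the original equations did not survive extraction):
```python
import numpy as np

def monochrome_mask(r, g, b):
    # OUT = MAX(R, G, 50% B): the maximum of the red and green intensities,
    # with the blue plane contributing at half weight
    return np.maximum(np.maximum(r, g), 0.5 * b)
```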
  • the dynamic image mask B may also be made to represent image intensities, such as the intensity among black and white values. It will be appreciated that while full color image masks may be used, they will require more processing overhead than using monochrome masks.
  • Referring to FIG. 6, a dynamic image mask B created according to at least one embodiment of the present invention is illustrated, in comparison to the prior-art conventional cutout filter shown in FIG. 1.
  • the dynamic image mask B shown in FIG. 6 will be generally referred to as revelation mask 650 .
  • the conventional image mask shown in FIG. 1 (prior-art) will be generally referred to as conventional filter 110 .
  • the revelation mask 650 maintains some of the detail lost to conventional image masks. Edges are preserved between regions of rapidly changing contrasts. For example, light region 690 , generated to brighten detail within windows in the original image, maintains edges to show sharp contrast to the darker region 680 , which is generated to darken details in the walls shown in the original image. It should be noted that while edges are maintained between regions of rapidly changing contrasts, blurring is accomplished within the regions. For example, the details in the roof of the original image contain dark and light areas with a gradual shift in contrast. In conventional filter 110 , dark region 127 is generated to maintain contrast with the lighter areas in the tower on the roof.
  • revelation mask 650 maintains the gradual shift in contrast, as can be noted by the blurred shift in intensity between the tower region 655 and the lighter region 670. This allows the roof in the original image to maintain a gradual shift in intensity contrast while maintaining the sharp contrast of dark region 655 against the darker region 660, which represents the background sky in the original image.
  • Referring to FIG. 7, a method for generating an enhanced image C in accordance with one embodiment of the present invention is illustrated.
  • Image information related to an original image A is mathematically combined with information from dynamic image mask B.
  • the combined data is used to create the enhanced image C.
  • the enhanced image C is generated on a pixel by pixel basis.
  • Each corresponding pixel from original image A and dynamic image mask B is combined to form a pixel in masked image 710 .
  • pixel data from pixel 715 of original image A is combined with pixel information from pixel 735 of dynamic image mask B, using mathematical manipulation, such as overlay function 720.
  • the combined data is used to represent the corresponding pixel of enhanced image C.
  • Overlay function 720 is a function used to overlay the pixel information between original image A and dynamic image mask B.
  • OUT refers to the value of the pixel generated for the masked image.
  • IN refers to the value of the pixel taken from original image A.
  • MASK refers to the value of the corresponding pixel in dynamic image mask B.
  • in one embodiment, the value of pixel 714 is divided by 3/4 the value of pixel 734, with the addition of an offset.
  • the offset, 1/4, is chosen to prevent an error from occurring due to dividing by zero.
  • the offset can also be chosen to lighten shadows in the resultant masked image 710. A sketch of this overlay follows.
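  • The overlay equation itself is not reproduced in this text; from the description of pixels 714 and 734, a plausible form is OUT = IN / (3/4 MASK + 1/4), sketched here with pixel values assumed normalized to the range zero to one:
```python
def overlay(in_value, mask_value):
    # Denominator: 3/4 of the mask value plus the 1/4 offset. The offset
    # prevents division by zero and can lighten shadows in the result.
    return in_value / (0.75 * mask_value + 0.25)
```
  • Under this reading, a dark mask pixel (MASK near zero) brightens the corresponding output pixel by up to a factor of four, while a mask pixel near one leaves it nearly unchanged.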
  • dynamic image mask B can be a monochromatic mask.
  • the dynamic image mask B can be used to control the white and black levels in images.
  • Grayscale contrast is the contrast over large areas in an image.
  • Image contrast refers to the contrast of details within an image.
  • overlay function 720 is altered according to settings made by a user. Independent control of the image contrast and grayscale contrast can be provided.
  • Control can be used to produce images using low image contrast in highlights and high image contrast in shadows. Additionally, functions can be added to control the generation of the dynamic image mask B. Control can be offered over the pressure (Gain) and radius (region) applied through the sandblaster algorithm (described in FIG. 3). Additionally, control over the histogram of the image can be offered through control over the image contrast and the grayscale contrast. A normalized image can be generated in which histogram leveling can be performed without destroying image contrast.
  • the controls, functions, and algorithms described herein can be performed within an information processing system. It will be appreciated that other systems may be employed, such as through image processing kiosks, to produce enhanced image C, in keeping with the scope of the present invention.
  • Referring to FIG. 8A, a wrinkle reduction process 800 in accordance with one embodiment of the present invention is illustrated. As described in greater detail below, this embodiment of the wrinkle reduction process 800 operates to suppress median frequencies without suppressing high definition detail or low frequency contrast. As a result, people have a younger look without sacrificing detail.
  • a dynamic image mask B is calculated from original image A, as shown by block 802 .
  • the dynamic image mask B is calculated using a radius of 5 and a Gain of 64, as discussed in FIG. 3.
  • the dynamic image mask B is then passed through a low pass filter 804 .
  • the low pass filter 804 is preferably a “soft focus” filter.
  • the low pass filter 804 is calculated as the average of a Gaussian average with a radius of one and a Gaussian average with a radius of three. Other types of low pass filters may be used without departing from the scope of the present invention.
  • the original image A is also passed through a high pass filter 806 .
  • the high pass filter 806 is calculated as the inverse of the average of a Gaussian average with a blur radius of one and a Gaussian average with a blur radius of three.
  • Other types of high pass filters may be used without departing from the scope of the present invention.
  • the results from the low pass filter 804 and the high pass filter 806 are then added together to form a median mask 808, as sketched below.
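  • A sketch of this median mask construction, assuming the “Gaussian average with a radius of one/three” maps to Gaussian blurs with sigmas of one and three, and reading the “inverse” of the blur average as the original minus that average:
```python
from scipy.ndimage import gaussian_filter

def median_mask(original, dynamic_mask):
    # Low pass filter 804: average of two Gaussian blurs of dynamic image mask B
    low = (gaussian_filter(dynamic_mask, 1.0) + gaussian_filter(dynamic_mask, 3.0)) / 2.0
    # High pass filter 806: the original minus the same blur average (our reading)
    blur = (gaussian_filter(original, 1.0) + gaussian_filter(original, 3.0)) / 2.0
    high = original - blur
    # The two results are added together to form median mask 808
    return low + high
```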
  • the median mask 808 can then be applied to the original image A using, for example, applicator 810 to produce an enhanced image.
  • the applicator 810 is an electronic brush that can be varied by radius to apply the median mask 808 only to those areas of the original image A specified by the user.
  • Other types of applicators 810 may be used to apply the median mask 808 to the original image A.
  • FIG. 8B-1 illustrates an untouched original image 820.
  • FIG. 8B-2 illustrates the same image after having the wrinkle reduction process 800 applied to the image 820.
  • the wrinkle reduction process 800 reduces the visible effects of age on the person in the image, without sacrificing the minute detail of the image and without apparent blurring or softening of the details. This creates an image that is more pleasing to the eye and, most importantly, more pleasing to the person in the picture.
  • the same process can be applied to other parts of the image to produce similar results.
  • the wrinkle reduction process 800, when applied to clothing, produces the appearance of a freshly pressed shirt or pants without affecting the details or appearing blurry.
  • Referring to FIG. 9, an image capture system 900 used to implement one or more embodiments of the present invention is illustrated.
  • Image capture system 900 includes any device capable of capturing data representative of an image and subsequently processing the data according to the teachings set forth herein.
  • image capture system 900 could include a digital camera, video recorder, a scanner, image processing software, and the like.
  • An embodiment where image capture system 900 includes a digital camera is discussed subsequently for ease of illustration. The following discussion may be applied to other embodiments of image capture system 900 without departing from the spirit or scope of the present invention.
  • Image capture system 900 includes, but is not limited to, image sensor 910 , analog-to-digital (A/D) convertor 920 , color decoder 930 , color management system 940 , storage system 950 , and/or display 960 .
  • image capture system 900 is connected to printer 980 via a serial cable, printer cable, universal serial bus, networked connection, and the like.
  • Image sensor 910, in one embodiment, captures an image and converts the captured image into electrical information representative of the image.
  • Image sensor 910 could include an image sensor on a digital camera, such as a charge coupled device (CCD) sensor, complementary metal oxide semiconductor sensor, and the like.
  • a CCD sensor converts photons reflected off of or transmitted through a subject into stored electrical charge at the location of each photosite of the CCD sensor.
  • the stored electrical charge of each photosite is then used to obtain a value associated with the photosite.
  • Each photosite could have a one-to-one correspondence with the pixels of the resulting image, or multiple photosites could be used in conjunction to determine the value of one or more pixels.
  • image sensor 910 sends electrical information representing a captured image to A/D convertor 920 in analog form, which converts the electrical information from an analog form to a digital form.
  • image sensor 910 captures an image and outputs the electrical information representing the image in digital form. It will be appreciated that, in this case, A/D convertor 920 would not be necessary.
  • photosites on image sensors, such as CCDs, record intensity values rather than color. Accordingly, a number of methods may be used to convert the intensity values of the photosites (i.e., a black-and-white image) into corresponding color values for each photosite.
  • one method of obtaining color information is to use a beam splitter to focus the image onto more than one image sensor.
  • each image sensor has a filter associated with a color.
  • image sensor 910 could include three CCD sensors, where one CCD sensor is filtered for red light, another CCD sensor is filtered for green light, and the third sensor is filtered for blue light.
  • Another method is to use a rotating device having separate color filters between the light source (the image) and image sensor 910 . As each color filter rotates in front of image sensor 910 , a separate image corresponding to the color filter is captured.
  • a rotating disk could have a filter for each of the primary colors red, blue and green. In this case, the disk would rotate a red filter, a blue filter, and a green filter sequentially in front of image sensor 910 , and as each filter was rotated in front, a separate image would be captured.
  • a permanent filter could be placed over each individual photosite.
  • image sensor 910 By breaking up image sensor 910 into a variety of different photosites associated with different colors, the actual color associated with a specific point or pixel of a captured element may be interpolated.
  • a common pattern used is the Bayer filter pattern, where rows of red and green sensitive photosites are alternated with rows of blue and green photosites.
  • in the Bayer filter pattern, there are often many more green color sensitive photosites than blue or red color sensitive photosites, as the human eye is more sensitive to green than the other colors, so more green color information should be present for a captured image to be perceived as “true color” by the human eye. An illustrative layout is sketched below.
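  • An illustrative Bayer layout in code (the exact phase of the pattern varies between sensors; this is one common arrangement):
```python
import numpy as np

def bayer_pattern(height, width):
    pattern = np.empty((height, width), dtype="<U1")
    pattern[0::2, 0::2] = "R"  # rows of red and green photosites...
    pattern[0::2, 1::2] = "G"
    pattern[1::2, 0::2] = "G"  # ...alternate with rows of green and blue
    pattern[1::2, 1::2] = "B"
    return pattern             # half of all photosites are green-sensitive
```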
  • color decoder 930 receives the digital output representing an image from A/D convertor 920 and converts the information from intensity values (black-and-white) to color values.
  • image sensor 910 could utilize a Bayer filter pattern as discussed previously.
  • the black-and-white digital output from A/D convertor 920 could be interpolated or processed to generate data representative of one or more color images.
  • color decoder 930 could generate data representative of one or more full color images, one or more monochrome images, and the like.
  • color management system 940 processes the data for output and/or storage. For example, color management system 940 could attenuate the dynamic range of the data from color decoder 930. This may be done to reduce the amount of data associated with a captured image. Color management system 940 could also format the data into a variety of formats, such as a Joint Photographic Experts Group (JPEG) format, a tagged image file format (TIFF), a bitmap format, and the like. Color management system 940 may perform a number of other processes or methods to prepare the data representative of an image for display or output, such as compressing the data, converting the data from an analog to a digital format, etc.
  • after color management system 940 processes the data representative of an image, the data, in one embodiment, is stored on storage 950 and/or displayed on display 960.
  • Storage 950 could include memory, such as removable flash memory for a digital camera, a storage disk, such as a hard drive or a floppy disk, and the like.
  • Display 960 could include a liquid crystal display (LCD), a cathode ray tube (CRT) display, and other devices used to display or preview captured images.
  • the data representative of an image could be processed by printer driver 970 to be printed by printer 980.
  • Printer 980 could include a photograph printer, a desktop printer, a copier machine, a fax machine, a laser printer, and the like.
  • Printer driver 970 could be collocated, physically or logically, with printer 980, on a computer connected to printer 980, and the like. It will be appreciated that one or more of the elements of image capture system 900 may be implemented as a state machine, as combinational logic, as software executable on a data processor, and the like. It will also be appreciated that the methods or processes performed by one or more of the elements of image capture system 900 may be performed by a single device or system. For example, color decoder 930 and color management system 940 could be implemented as a monolithic microprocessor or as a combined set of executable instructions.
  • Image capture system 900 can be used to implement one or more methods of various embodiments of the present invention.
  • the methods herein, referred to collectively as the image mask method, may be implemented at one or more stages of the image capturing process of image capture system 900.
  • the image mask method may be applied at stage 925 between the output of digital data from A/D convertor 920 and the input of color decoder 930 .
  • stage 925 may be the optimal location for application of the image mask method. For example, if data representative of an image output from image sensor 910 is monochrome (or black-and-white) information yet to be decoded into color information, less information may need to be processed using the image mask method than after conversion of the data to color information.
  • the image mask method does not affect the accuracy or operation of color decoder 930 .
  • the image mask method may be applied at stage 935 between color decoder 930 and color management system 940 .
  • the location of stage 935 may not be as optimal as stage 925 , since there may be more data to process between color decoder 930 and color management system 940 .
  • color decoder 930 could generate data for each of the primary colors, resulting in three times the information to be processed by the image mask method at stage 935.
  • An image mask method may also be implemented at stage 945 between color management system 940 and storage 950 and/or display 960 .
  • application of the image mask method at stage 945 may not generate results as favorable as at stages 925 , 935 .
  • the image mask method may be implemented at stages 965 , 975 .
  • the image mask method may be implemented by printer driver 970
  • the image mask method may be implemented between printer driver 970 and printer 980 .
  • the connection between a system connected to printer driver 970 such as a computer, and printer 980 could include software and/or hardware to implement the image mask method.
  • the data representative of a captured image to be printed may have reduced dynamic range and/or loss of other information as a result of processing by color management system 940 .
  • the image mask method performed at stages 925 , 935 , 945 , 965 , and/or 975 is implemented as a set of instructions executed by processor 942 .
  • Processor 942 can include a microprocessor, a state machine, combinational logic circuitry, and the like.
  • the set of instructions is stored in and retrieved from memory 943, where memory 943 can include random access memory, read only memory, flash memory, a storage device, and the like.
  • processor 942, in one embodiment, also executes instructions for performing the operations of one or more of the elements of image capture system 900.
  • processor 942 could execute instructions to perform the color decoding operations performed by color decoder 930 and then execute the set of instructions representative of the image mask method at stage 935 .
  • although stage 925 is often the optimal location for implementation of the image mask method, for reasons discussed previously, it may be difficult to implement the image mask method at this location.
  • image sensor 910 , A/D convertor 920 , and color decoder 930 could be implemented as a monolithic electronic circuit. In this case it might prove difficult to modify the circuit to implement the method.
  • more than one element of image capture system 900 such as color decoder 930 and color management system 940 , may be implemented as a single software application.
  • the software application may be proprietary software where modification is prohibited, or the source code of the software may not be available, making modification of the software application difficult.
  • although the image mask method may not always be implemented in the optimal location, application of the image mask method in the most suitable available location often will result in improved image quality and detail.
  • application of the image mask method at stage 945 results in data representative of an image having improved quality and/or detail over the data output by color management system 940 .
  • the improved image data may result in an improved image for display on display 960, subsequent display when retrieved from storage 950, or physical replication by printer 980.
  • the image mask method may be employed more than once.
  • the image mask method may be employed at stage 925 to perform an initial compression of the dynamic range of the image, and then again at stage 945 to further compress the image's dynamic range.
  • Referring to FIG. 10, a chart showing various improvements in image types is illustrated, according to at least one embodiment of the present invention.
  • an implementation of at least one embodiment of the present invention may be used to improve the dynamic range of representations of captured images.
  • the horizontal axis of chart 1000 represents the dynamic range of various types of image representations.
  • the dynamic range of images as presented to the human eye (i.e., “real life”) is represented by range 1006.
  • the dynamic range decreases sequentially from real life (range 1006 ) to printed transparencies (range 1005 ), CRT displays (range 1004 ), glossy photographic prints (range 1003 ), matte photographic prints (range 1002 ), and LCD displays (range 1001 ).
  • the sequence of dynamic ranges of the various image representations is a general comparison and should not be taken as absolute in all cases. For example, a CRT display (range 1004) could have a dynamic range greater than printed transparencies (range 1005).
  • by use of the image mask method, the dynamic range of the representation of an image may be improved. For example, image information having a dynamic range comparable to a glossy photographic print (range 1003) may be compressed into a smaller reproducible range, such as that of an LCD display (range 1001). Likewise, image information having a dynamic range equivalent to a CRT display (range 1004) may be compressed into a dynamic range usable for matte photographic prints (range 1002), and so on.
  • an image mask method, as disclosed herein may be used to improve the dynamic range used for display of a captured image, thereby improving the quality of the display of the captured image.
  • One of the preferred implementations of the invention is as sets of computer readable instructions resident in the random access memory of one or more processing systems configured generally as described in FIGS. 1-10.
  • the set of instructions may be stored in another computer readable memory, for example, in a hard disk drive or in a removable memory such as an optical disk for eventual use in a CD drive or DVD drive or a floppy disk for eventual use in a floppy disk drive.
  • the set of instructions can be stored in the memory of another image processing system and transmitted over a local area network or a wide area network, such as the Internet, where the transmitted signal could be a signal propagated through a medium such as an ISDN line, or the signal may be propagated through an air medium and received by a local satellite to be transferred to the processing system.
  • a signal may be a composite signal comprising a carrier signal, and contained within the carrier signal is the desired information containing at least one computer program instruction implementing the invention, and may be downloaded as such when desired by the user.
  • the physical storage and/or transfer of the sets of instructions physically changes the medium upon which they are stored, electrically, magnetically, or chemically, so that the medium carries computer readable information.

Abstract

A method, system and software are disclosed for applying an image mask for improving image detail in a digital image. An electronic representation of an image is scanned or captured using an image capture device. A dynamic image mask is generated from the electronic representation of the image. The dynamic image mask has sharp edges which are representative of rapidly changing boundaries in the original image and blurred regions in less rapidly changing areas. The dynamic image mask is applied to the electronic representation of the original image to produce an enhanced image. The enhanced image may have certain advantages. For example, in some embodiments, the enhanced image can be viewed on a display with much more viewing detail than conventional systems provide.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of the following U.S. Provisional Patent Applications: Serial No. 60/234,520, filed on Sep. 21, 2000, and entitled “Method of Generating an Image Mask for Improving Image Detail;” Serial No. 60/234,408, filed on Sep. 21, 2000, and entitled “Method of Applying An Image Mask For Improving Image Detail;” and Serial No. 60/285,591, filed on Apr. 19, 2001, and entitled “Method and System and Software for Applying an Image Mask for Improving Image Detail;” of common assignee herewith.[0001]
  • FIELD OF THE INVENTION
  • The present invention relates generally to imaging systems and image processing and more particularly to dynamic image correction and imaging systems. [0002]
  • BACKGROUND OF THE INVENTION
  • A variety of methods are commonly employed to capture an image. For example, photographic film may be exposed to light reflected from a desired subject to record a latent image within the film. The film is then developed to generate a “negative” or “positive” from which prints or transparencies can be made and delivered to consumers. The negative, positive, or print can be scanned to produce a digital representation of the subject. Alternately, digital devices, such as digital cameras, video recorders, and the like, may be used to directly capture a digital representation of the desired subject by measuring the reflected light from the subject. [0003]
  • Lighting is particularly important when capturing images, and care is often taken to ensure the proper lighting of the subject matter of the image. If too much light is reflected from the subject, the captured image will be over-exposed, and the final image will appear washed-out. If too little light is reflected, the captured image will appear under-exposed, and the final image will appear dark. Similarly, if the proper lighting is not provided from a proper angle, for example when one part of an image is in bright light while another part is in shadow, some of the image might be properly exposed, while the remainder of the image is either under-exposed or over-exposed. Conventional digital devices are particularly prone to having over-exposed and under-exposed portions of an image. [0004]
  • If during an image capture process the subject is over-exposed or under-exposed, the mistake can sometimes be minimized in the processing (or development) and/or printing process. Typically, when an image is captured on film, the negative contains much more image detail than can be reproduced in a photographic print, and so a photographic print includes only a portion of the information available to be printed. Similarly, images captured directly by digital devices often have considerably more image detail than can be reproduced or output. By choosing the proper portion of the image detail to print, the final processed image may compensate for the mistakes made during image capture. However, particularly in the case in which some areas of an image are under-exposed and other areas of an image are over-exposed, it is difficult to correct both the under-exposed and over-exposed portions of the image. [0005]
  • Conventional correction techniques for reducing the effects of over-exposed and under-exposed regions are generally performed by hand and can be extremely expensive. One conventional correction technique is to apply a cutout filter. In this technique, the image is divided into large, homogeneous regions, and a filter is applied to each of these regions. Referring now to FIG. 1, a conventional cutout filter 110 is shown. The original image is of a castle. Assume that in the original image, the sky 160 lacks detail and is washed out, while the castle 120 is in shadow. The cutout filter 110 has a dark sky 160 and a light castle 120, so that when applied to the original image, the sky 160 in the resultant image will be darker, and the castle 120 will be lighter, thereby improving “gross” image detail. [0006]
  • A drawback of cutout filter 110 is that image detail within the regions is not properly corrected unless the selected region is truly homogeneous, which is not very likely. As a result, detail within each region is lost. The number of regions selected for filtering may be increased, but selecting more regions greatly increases the time and labor needed to generate the cutout filter 110. In addition, this technique and other conventional techniques tend to create visually unappealing boundaries between the regions. [0007]
  • SUMMARY OF THE INVENTION
  • In accordance with one implementation of the present invention, a method of enhancing an image is provided. In one embodiment, the method comprises obtaining an image mask of the original image. The image mask and the original image each comprise a plurality of pixels having varying values. The plurality of mask pixels are set to form sharper edges corresponding to areas of more rapidly changing pixel values in the original image. The pixels are further arranged to form areas of less sharp regions corresponding to areas of less rapidly changing pixel values in the original image. The method further comprises combining the image mask with the original image to obtain a masked image. [0008]
  • Another embodiment of the present invention provides for a digital file tangibly embodied in a computer readable medium. The digital file is generated by implementing a method comprising obtaining an image mask of an original image. The image mask and the original image each comprise a plurality of pixels having varying values. The plurality of mask pixels are set to form sharper edges corresponding to areas of more rapidly changing pixel values in the original image. The pixels are further arranged to form areas of less sharp regions corresponding to areas of less rapidly changing pixel values in the original image. The method further comprises combining the image mask with the original image to obtain a masked image. [0009]
  • An additional embodiment of the present invention provides for a computer readable medium tangibly embodying a program of instructions. The program of instructions is capable of obtaining an image mask of an original image. The image mask and the original image each comprise a plurality of pixels having varying values. The plurality of mask pixels are set to form sharper edges corresponding to areas of more rapidly changing pixel values in the original image. The pixels are further arranged to form areas of less sharp regions corresponding to areas of less rapidly changing pixel values in the original image. The program of instructions is further capable of combining the image mask with the original image to obtain a masked image. [0010]
  • Yet another embodiment of the present invention provides for a system comprising an image sensor to convert light reflected from an image into information representative of the image, a processor, memory operably coupled to the processor, and a program of instructions capable of being stored in the memory and executed by the processor. The program of instructions manipulates the processor to obtain an image mask, the image mask and the information representative of the image each including a plurality of pixels having varying values, wherein the values of the plurality of mask pixels are set to form sharper edges corresponding to areas of more rapidly changing pixel values in the original image and less sharp regions corresponding to areas of less rapidly changing pixel values in the original image. The program of instructions also manipulates the processor to combine the image mask with the information representative of the image to obtain a masked image. [0011]
  • An advantage of at least one embodiment of the present invention is that an image mask to improve reproducible detail can be generated without user intervention. [0012]
  • An additional advantage of at least one embodiment of the present invention is that an image mask can be automatically applied to an original image to generate an image with improved image detail within a reproducible dynamic range due to the image detail preserved in the image mask. [0013]
  • Yet another advantage of at least one embodiment of the present invention is that calculations to improve the image detail in scanned images can be performed relatively quickly, due to a lower processing overhead and less user intervention than conventional methods. [0014]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Other objects, advantages, features and characteristics of the present invention, as well as methods, operation and functions of related elements of structure, and the combination of parts and economies of manufacture, will become apparent upon consideration of the following description and claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures, and wherein: [0015]
  • FIG. 1 is an illustration showing a conventional cutout filter; [0016]
  • FIG. 2 is a block diagram illustrating a method for dynamic image correction according to one embodiment of the present invention; [0017]
  • FIG. 3 is a block diagram of an original image and a dynamic image mask according to one embodiment of the present invention; [0018]
  • FIG. 4 is a set of graphs showing intensity values of pixels around an edge before and after a blurring algorithm has been applied according to one embodiment of the present invention; [0019]
  • FIG. 5 is a block diagram of a method for generating a dynamic image mask according to at least one embodiment of the present invention; [0020]
  • FIG. 6 is a representation of a dynamic image mask with properties according to at least one embodiment of the present invention; [0021]
  • FIG. 7 is a block diagram illustrating a method of applying a dynamic image mask to an image according to at least one embodiment of the present invention; [0022]
  • FIG. 8A is a block diagram illustrating a wrinkle reduction process in accordance with one embodiment of the invention; [0023]
  • FIG. 8B-1 is a picture illustrating an original image; [0024]
  • FIG. 8B-2 is a picture illustrating the image of FIG. 8B-1 with the wrinkle reduction process applied; [0025]
  • FIG. 9 is a block diagram illustrating an image capture system according to at least one embodiment of the present invention; and [0026]
  • FIG. 10 is a chart illustrating improvements in the dynamic range of various image representations according to at least one embodiment of the present invention. [0027]
  • DETAILED DESCRIPTION OF THE INVENTION
  • FIGS. 2-9 illustrate a method for dynamic image correction and imaging systems having enhanced images. As described in greater detail below, one embodiment of dynamic image correction utilizes a dynamic image mask generated with a blurring algorithm that maintains the sharp boundaries of the image. The dynamic image mask is then applied to the image. In some implementations, the dynamic image mask is used to increase the amount of reproducible detail within an image. In another implementation, the dynamic image mask is used to suppress median frequencies while maintaining sharp boundaries. In this implementation, the dynamic image mask can be regionally applied using an electronic brush. In yet other implementations, various embodiments of the dynamic image mask can be used as a correction map for other correction and enhancement functions. Systems utilizing dynamic image correction can include a variety of image capturing or processing systems, such as digital cameras, video cameras, scanners, image processing software, and the like. [0028]
  • Referring now to FIG. 2, one method of dynamic image correction 200 is described. In this embodiment, dynamic image correction 200 includes creating a dynamic image mask B from an original image A. The dynamic image mask B is then combined with original image A to generate an enhanced image C. In one embodiment, the enhanced image C has improved image detail over original image A, within a reproducible dynamic range. For example, original image A may contain detail which cannot be appropriately represented when output for display or printing, such as high-contrast over-exposed (bright) regions and under-exposed (shadow) regions. It would be helpful to brighten the detail in the shadow regions and to decrease the brightness of the bright regions without losing image detail. At least one embodiment of the present invention automatically performs this function. In contrast, conventional methods of simply dividing the original image into bright and shadow regions will not generally suffice to improve complex images. Images generally contain complex and diverse regions of varying contrast levels, and as a result, conventional methods generally produce inadequate results. [0029]
  • In step 210, original image A is provided. Original image A is an electronic representation of a subject and includes one or more characteristic values corresponding to specific locations, or pixels. Each pixel has one or more associated values, or planes, that represent information about a particular location on the subject. For original image A, the values corresponding to each pixel can be a measure of any suitable characteristic of the subject. For example, the values may represent color, luminance, incidence angle, x-ray density, or any other characteristic or combination of characteristics. [0030]
  • Original image A can be obtained in any suitable manner and need not correlate directly to conventional color images. One implementation obtains original image A by digitizing an image using a scanner, such as a flatbed scanner, a film scanner, and the like. Another implementation obtains original image A by directly capturing the image using a digital device, such as a digital camera, a video camera, and the like. In yet another implementation, the original image A is captured using an imaging device, such as a magnetic resonance imaging system, a radar system, and the like. In this embodiment, the characteristic values do not correlate to colors but to other characteristics of the subject matter imaged. The original image A could also be obtained by computer generation or another similar technique. Dynamic image correction 200 does not depend upon how the original image A is obtained, but only requires that the original image A include one or more values that represent the image. [0031]
  • In step 220, a dynamic image mask B is generated from original image A. In the preferred embodiment, the pixel values of the dynamic image mask B are generated relative to a pixel in the original image A. In at least one embodiment, the pixels generated for dynamic image mask B are calculated using weighted averages of select pixels in original image A, as discussed in greater detail below. It will be appreciated that the pixels generated for dynamic image mask B may be calculated using any number of methods without departing from the spirit or the scope of the present invention. [0032]
  • Dynamic image mask B maintains the sharp edges in the original image A while blurring the regions surrounding the sharp edges. In effect, rapidly changing characteristics, i.e., values or contrast, in original image A are used to determine sharp edges in dynamic image mask B. At the same time, less rapidly changing values in original image A can be averaged to generate blurred regions in dynamic image mask B. In effect, the calculations performed on original image A produce a dynamic image mask B which preserves the boundaries between dissimilar pixels in original image A while blurring areas containing similar pixels, as will be discussed further in FIG. 3. [0033]
  • A dynamic image mask B is often calculated for each characteristic value. For example, in the case of an original image having red, green, and blue values for each pixel, the red values are used to calculate the blurring and edge parameters of the dynamic image mask B for the red color, the blue values are used to calculate the blurring and edge parameters of the dynamic image mask B for the blue color, and so on. The dynamic image mask B can use different characteristics, or planes, to establish the regions and boundaries for different characteristics. For example, in the case of an original image having red, green, and blue color values for each pixel, the red values could be used to establish the blurring and edge parameters that are applied to each of the red, green, and blue values. Similarly, a calculated luminance value could be used to calculate the blurring and edge parameters that are then applied to the red, green, and blue values of each pixel. In other embodiments, a dynamic image mask B is only calculated for certain characteristics. Using the same example as above, a dynamic image mask B for the colors red and green may be calculated, but the values of the color blue are combined without change, as described in greater detail below. [0034]
  • In step 230, dynamic image mask B is applied to original image A to produce enhanced image C. Dynamic image mask B is generally applied to original image A by use of an overlay technique. As discussed further in FIG. 7, a mathematical operation, such as division between the pixel values of original image A and the corresponding pixel values in dynamic image mask B, can be used to generate the pixel values of enhanced image C. [0035]
  • In general, the process of generating and applying the dynamic image mask B is performed as part of a set of instructions run by an information processing system. The processes of steps 210, 220, and 230 can be performed within an image processing system, implemented by photo-lab technicians, in a system used by a customer without the assistance of a lab technician, incorporated into a scanner, digital camera, video recorder and the like, or performed by a computer system external to the image capturing device. In at least one embodiment, the processes are automated by a program of executable instructions executed by an information processing system such that minimal user interaction is required. [0036]
  • In step 240, the enhanced image C is delivered in the form desired. The form in which the enhanced image C is delivered includes, but is not limited to, a digital file, a photographic print, or a film record. Digital files can be stored on mass storage devices, tape drives, CD recorders, DVD recorders, and/or various forms of volatile or non-volatile memory. Digital files can also be transferred to other systems using a communications adapter, where the file can be sent over the Internet or an intranet, as an e-mail, etc. A digital file can also be prepared for retrieval at an image processing kiosk, which allows customers to recover their pictures and print them out in a form of their choosing without the assistance of a film development technician. The enhanced image C can also be displayed on a display or printed using a computer printer. The enhanced image C also can be represented on a form of film record, such as a film negative, a positive image, or a photographic print. In conventional printing processes, when an image is printed, a large portion of the dynamic range is lost. In contrast, enhanced image C generally preserves desirable detail from original image A, so that a larger quantity of image detail from original image A is effectively compressed into a dynamic range capable of being reproduced in print. [0037]
  • Referring now to FIG. 3, a diagram of an original image and a blurred image is shown, according to one embodiment of the present invention. Original image A is composed of a plurality of pixels, such as pixels numbered 301-325. Dynamic image mask B is composed of corresponding pixels, such as pixels numbered 351-375, calculated from the pixels of original image A. As described in greater detail below, the pixel values of dynamic image mask B are calculated using an averaging function that accounts for sharp edges. [0038]
  • A sharp edge is generally defined by a variation between pixel values greater than a certain sharpness threshold, or Gain. In effect, the sharpness threshold allows the pixels to be differentiated into regions for purposes of averaging calculations. In some embodiments, the sharpness threshold is varied by a user. In other embodiments, the sharpness threshold is fixed within the software. [0039]
  • The pixels calculated for dynamic image mask B correspond to averages taken over regions of pixels in original image A, taking into account the sharpness threshold, or Gain. For example, pixel 363 corresponds to calculations performed around pixel 313. In one embodiment, provided that pixels 301-325 are similar, i.e., their difference is below the sharpness threshold, pixel 363 is calculated by averaging the values of pixels 311-315, 303, 308, 318, and 323. In another embodiment, pixel 363 is calculated by averaging the values of pixels 307-309, 312-314, and 317-319. Any suitable number or selection of pixels may be used for the averaging process without departing from the scope of the present invention. In the preferred embodiment, the pixels are assigned a weight based on their relative distance from pixel 313. In this embodiment, pixels that are relatively closer have a greater impact on the averaging calculation than pixels that are relatively remote. [0040]
  • In one embodiment of the present invention, the dynamic image mask B is calculated using a weighting function as described by the following equation: [0041]

$$w_N = 1 - \frac{\left|\mathrm{pixel}_N - \mathrm{centerpixel}\right|}{\mathrm{Gain}}$$
  • The weight function, wN, can be used to apply a separate weight to each of the pixel values. Only values of wN between zero and one are accepted. Accordingly, if the value of wN is returned as a negative value, the returned weight for the pixel being weighted is zero. Using the first example above, if pixel 363 were being calculated, wN could be used to apply a weight to each of the pixels 311-315, 303, 308, 318, and 323. PixelN is the contrast value of the pixel being weighted. Centerpixel is the value of the central pixel, around which the blurring is being performed. Gain is a threshold value used to determine a contrast threshold for a sharp edge. For example, if pixel 363 is being calculated and the difference in contrast between pixel 313 and pixel 308 is 15, with Gain set to 10, the returned value of wN is negative. Accordingly, since negative values are not allowed, pixel 308 is assigned a weight of zero, keeping the value of pixel 308 from affecting the calculation of pixel 363. [0042]
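  • As a concrete illustration, the weight computation can be expressed in a few lines of Python. This is a sketch only: the function name and the explicit clamp to the range [0, 1] are additions here, following the statement above that only values of wN between zero and one are accepted.

```python
def sandblast_weight(pixel_n: float, center_pixel: float, gain: float) -> float:
    # Raw weight: 1 minus the contrast difference scaled by Gain.
    w = 1.0 - abs(pixel_n - center_pixel) / gain
    # Only weights between zero and one are accepted; a negative raw
    # weight (a sharp edge) is clamped to zero.
    return max(0.0, min(1.0, w))

# The example from the text: a contrast difference of 15 with Gain set
# to 10 yields a negative raw weight, so the neighbor is weighted to zero.
assert sandblast_weight(115.0, 100.0, 10.0) == 0.0
```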
  • The value of Gain can be decreased as the pixel being weighted is farther from the central pixel. Lowering the value of Gain allows small changes in the contrast between pixelN and centerpixel to result in a negative wN, and thus be weighted to zero. Accordingly, in one embodiment, the farther the pixel is from the centerpixel, the smaller Gain gets and the more likely it is that the value of wN will be negative and the pixel will be assigned a weight of zero. Gain is preferably chosen to decrease slowly as the distance from the central pixel increases. The values of Gain used can be adapted for the desired application; however, it has been found that slower changes in Gain provide images with more pleasing detail than sharper changes in Gain. Furthermore, the weight function itself can be altered without departing from the scope of the present invention. [0043]
  • Once the weights of the surrounding pixels have been calculated, a sum of each of the pixel values, multiplied by their relative weights, can be calculated. The sum can then be divided by the sum of the individual weights to generate the weighted average of the pixels, which can be used for the pixels of dynamic image mask B. The minimum weight calculated among the pixels adjacent to the central pixel can also be multiplied into the weight of each of the pixels surrounding the central pixel. Multiplying by the weight of an adjacent pixel allows the blurring to be effectively turned “off” if the contrast around the central pixel is changing too rapidly. For example, if the difference in contrast between a central pixel and an adjacent pixel is large enough to warrant a sharp edge in dynamic image mask B, the weight of the adjacent pixel will be zero, forcing all other weights to zero and allowing the central pixel to retain its value, effectively creating a sharp edge in dynamic image mask B. [0044]
  • This embodiment of the processes performed to generate dynamic image mask B can be likened to a sandblaster. A sandblaster can be used to soften, or blur, the textures it is working over. Accordingly, the blurring algorithm described above will herein be referred to as the sandblaster algorithm. A sandblaster has an effective radius over which it is used, with the material closer to the center of the sandblasting radius affected most. In the blurring algorithm described, a radius is selected and measured from the central pixel. The pressure of a sandblaster can be adjusted to effect more change; the Gain value in the described algorithm can be altered to effect more or less blurring. In at least one embodiment, the preferred radius is 4 and the preferred Gain is 40. [0045]
  • The sandblaster algorithm can be performed in one-dimensional increments. For example, to calculate the value of pixel 362, the pixels surrounding pixel 312 are considered. In one embodiment of the present invention, the averaged pixel values are determined using the neighboring vertical pixels and then the neighboring horizontal pixel values, as described above. Alternatively, windows can be generated and applied to average the pixels around the central pixel together, in both the horizontal and vertical directions. Color images can comprise multiple image planes, such as a red plane, a green plane, and a blue plane. In a preferred embodiment, the sandblaster algorithm is performed on only one plane at a time. Alternatively, the sandblaster algorithm can be calculated taking other image planes into account, incorporating the values of pixels relative to the central pixel from different color planes. However, it should be noted that performing multi-dimensional calculations over an image may increase the processing time. Additionally, pixels which are near an image edge, such as pixel 311, may lack values from pixels beyond the limits of original image A. In one embodiment, the pixels along the edge replicate their values beyond the image edge for calculation with the sandblaster algorithm. Alternatively, zeroes may be used for values lying outside the edges of original image A. [0046]
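  • Combining the weight function, the minimum-adjacent-weight gating, and the edge handling described above, a single one-dimensional sandblaster pass might look as follows in Python (with NumPy). This is a hedged sketch rather than the patented implementation: the function name, the padding call, and the exact placement of the gate are assumptions drawn from the prose, and the defaults follow the stated preferred radius of 4 and Gain of 40.

```python
import numpy as np

def sandblast_1d(row: np.ndarray, radius: int = 4, gain: float = 40.0) -> np.ndarray:
    """One horizontal sandblaster pass over a single image row."""
    # Edge pixels replicate their values beyond the image boundary;
    # mode="constant" (zeros) is the alternative mentioned in the text.
    padded = np.pad(row.astype(float), radius, mode="edge")
    out = np.empty(len(row))
    for i in range(len(row)):
        center = padded[i + radius]
        window = padded[i:i + 2 * radius + 1]
        # Weight each neighbor by its contrast similarity to the center.
        weights = np.clip(1.0 - np.abs(window - center) / gain, 0.0, 1.0)
        # Gate by the minimum weight of the immediately adjacent pixels:
        # a sharp edge next to the center turns the blurring off.
        weights *= min(weights[radius - 1], weights[radius + 1])
        weights[radius] = 1.0  # the center always contributes fully
        out[i] = np.dot(weights, window) / weights.sum()
    return out
```

  A vertical pass applies the same function down each column, so a full two-dimensional blur can be performed as two one-dimensional increments, as described above.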
  • Referring now to FIG. 4, a graph of intensities across a row of pixels, before and after the sandblaster blurring algorithm has been applied, is shown, according to at least one embodiment of the present invention. Graph 450 represents the intensity values in an original image A around an edge of contrasting intensity. Graph 460 represents the intensities for dynamic image mask B, among the same pixels as graph 450. [0047]
  • Two distinct intensity levels are identifiable in graph 450. A low intensity region can be identified among pixels 451-454, and a high intensity region can be identified by pixel 455. The radius used to blur the pixels around the central pixel described in FIG. 3 is one factor in how much blurring will be performed. If too large a radius is used, little blurring may result. For example, if the pixel considered for blurring were pixel 452 and the radius was set large enough, the blurred value of pixel 452 may not change much. With the radius set large enough, pixel 452 will be averaged with many pixels above its intensity, such as pixel 451. Pixel 452 will also be averaged with many pixels below its intensity, such as pixel 453. If the radius is too large, there could be enough pixels with intensities above pixel 452 and enough pixels with intensities below pixel 452 that the value of pixel 452 will remain unchanged, since the intensity value of pixel 452 lies between the high and low extremes. [0048]
  • Little blurring could also result from selecting too small a radius for blurring. With a small radius, only the intensity values of pixels immediately adjacent to the selected pixel will be considered. For example, select pixel 452 as the central pixel. If the radius is too small, reaching only as far as pixel 451, pixels 453 and 454 may not be considered in the blurring around pixel 452. Selection of the radius has a drastic effect on how much blurring is accomplished. The blurring radius must be large enough to average over a sufficient region of pixels, while being small enough that the averaging still effects blurring. In one embodiment, the blurring radius can be controlled automatically. As shown in FIG. 5, blurring can be performed over decimated representations of an original image using pyramidal decomposition. By performing a blurring algorithm and decimating the image, the effective radius of the blur is automatically increased as the image resolution is decreased. A decimated representation of the original image A can contain half the resolution of the original image A. Some of the detail in the original image is lost in the decimated representation. Performing blurring on the decimated image with a specific radius can relate to covering twice that radius in the original image. [0049]
  • Graph 460 shows the intensities in the dynamic image mask B produced using the sandblaster blurring algorithm. As can be seen, the blurring is enough to bring down the intensity of pixel 452 in the original image to pixel 462 in the blurred representation. Pixel 455, in a separate intensity level, is increased in intensity to pixel 465 in the blurred representation. In at least one embodiment, the blurring is turned off for pixels along an edge. Turning off the blurring allows the sharpness of edges to be preserved in the blurred representation, preserving edges between regions with a high contrast of intensities. Pixel 454 lies along an edge, where the intensity of nearby pixels, such as pixel 455, is much higher. The intensity of pixel 454 is not changed, preserving the difference in contrast between pixel 454 and the pixels of higher intensity, such as pixel 455. [0050]
  • Referring now to FIG. 5, a block diagram of a method for generating another embodiment of a dynamic image mask B is illustrated. In this embodiment, the sandblaster algorithm can be used to create a blurred image with sharp edges and blurred regions. To improve the detail captured by an image mask incorporating the sandblaster blurring algorithm, a pyramidal decomposition is performed on the original image, as shown in FIG. 5. In step 510, the original image A is received. [0051]
  • In step 535, the image size is reduced. In at least one embodiment, the image size is reduced by half. The reduction in image size may be performed using a standard digital image decimation. In one embodiment, the decimation is performed by discarding every other pixel in the original image from step 510. [0052]
  • In step 525, the sandblaster algorithm, as discussed in FIG. 3, is performed on the decimated image to create a blurred image. As previously discussed for FIG. 4, the decimated image contains half the resolution of the original image A, and some of the detail in the original image A is lost in the decimated image. By performing the sandblaster algorithm on the decimated image, the effective radius covered by the algorithm corresponds to twice the radius in the original image. Since some of the detail from the original image A is not present in the decimated image, more blurring can result from the sandblaster algorithm. As the images described herein are decimated, the effective blur radius and the amount of detail blurred increase in inverse proportion to the change in resolution of the decimated images. For example, performing the sandblaster algorithm in step 525 on the reduced image of step 535 has twice the effective radius of performing the same algorithm on the original image, while the reduced image has half the resolution of the original image. [0053]
  • In step 536, the blurred image is decimated once again. In step 526, the decimated image from step 536 is blurred using the sandblaster algorithm. Further decimation steps 537-539 and sandblaster steps 527-529 are consecutively performed on the outputs of the previous steps. In step 550, the blurred image from sandblaster step 529 is subtracted from the decimated output of decimation step 539. In step 560, the mixed output from step 550 is up-sampled. In one embodiment, the image is increased to twice its pixel resolution. Increasing the image size may be performed by repeating the values of present pixels to fill new pixels. Interpolation may also be performed to determine the values of the new pixels. In step 552, the up-sampled image from step 560 is added to the blurred image from step 528, and the combined image information is subtracted from the decimated output of step 538. The calculations in step 552 are performed to recover image detail that may otherwise be lost. Mixer steps 552 and 554, consecutively performed with up-sampling steps 562-566, generate the intermediate mask data. In step 558, a mixer is used to combine the up-sampled image data from step 566 with the blurred image data from step 525. The output from the mixer in step 558 is then up-sampled, in step 580, to produce the image mask of the received image. The dynamic image mask B is then prepared for delivery and use, as in step 590. [0054]
  • It will be appreciated that more or less blurring may be performed among the steps of the pyramidal decomposition described herein. It should be noted that by not performing the blurring algorithm on the original image, a significant amount of processing time may be saved. Calculations based on the decimated images can be performed faster and with less overhead than calculations based on the original image, while still producing detailed image masks. The image masks produced using the described method preferably include sharp edges based on rapidly changing boundaries found in the original image A, and blurred regions among less rapidly changing areas. It should also be appreciated that more or fewer steps may be performed as part of the pyramidal decomposition described herein, without departing from the scope of the present invention. [0055]
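  • To make the data flow of FIG. 5 concrete, the following Python sketch walks one possible implementation of the pyramidal decomposition. The mixer arithmetic at each level is an interpretation of the steps described above (subtract the blurred image at the deepest level, then repeatedly up-sample, add the next level's blurred image, and subtract from that level's decimated image); the helper names, the level count, the nearest-neighbor up-sampling, and the requirement that image dimensions divide evenly at every level are all assumptions here, and `sandblast_1d` is the one-dimensional pass sketched earlier.

```python
import numpy as np

def decimate(img: np.ndarray) -> np.ndarray:
    # Discard every other pixel in each direction, halving the resolution.
    return img[::2, ::2]

def upsample(img: np.ndarray) -> np.ndarray:
    # Double the resolution by repeating pixel values; interpolation
    # could be used instead, as the text notes.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def sandblast_2d(img: np.ndarray, radius: int = 4, gain: float = 40.0) -> np.ndarray:
    # A row pass followed by a column pass, per the earlier 1-D sketch.
    rows = np.array([sandblast_1d(r, radius, gain) for r in img])
    return np.array([sandblast_1d(c, radius, gain) for c in rows.T]).T

def build_mask(image: np.ndarray, levels: int = 4,
               radius: int = 4, gain: float = 40.0) -> np.ndarray:
    decimated, blurred = [], []
    current = image
    for _ in range(levels):                      # downward leg
        current = decimate(current)              # steps 535-539
        decimated.append(current)
        current = sandblast_2d(current, radius, gain)  # steps 525-529
        blurred.append(current)
    mask = decimated[-1] - blurred[-1]           # deepest mixer (step 550)
    for level in range(levels - 2, -1, -1):      # upward leg (mixers 552-558)
        mask = decimated[level] - (upsample(mask) + blurred[level])
    return upsample(mask)                        # final up-sampling (step 580)
```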
  • In the described embodiment, pyramidal decomposition is performed along a single image color plane. It will be appreciated that additional color planes may also be processed in the steps shown. Furthermore, multi-dimensional processing, wherein information from different color planes or planes of brightness is processed concurrently, may also be performed. According to at least one embodiment of the present invention, the resultant image mask is a monochrome mask, which is applied to the intensities of the individual image color planes in the original image. A monochrome image plane can be calculated from separate image color planes. For example, in one embodiment, the values of the monochrome image mask are determined using the following equation: [0056]
  • OUT=MAX(R,G).
  • OUT refers to the pixel being calculated in the monochromatic image mask. MAX(R,G) is a function that chooses the maximum between the intensity value of the pixel in the red plane and the intensity value of the pixel in the green plane. In the case of a dynamic image mask pixel which contains more than 80% of its intensity from the blue plane, the formula can be appended to include: [0057]
  • OUT=OUT+50% B.
  • wherein 50% B is half of the intensity value in the blue plane. The dynamic image mask B may also be made to represent image intensities, such as the intensity among black and white values. It will be appreciated that while full color image masks may be used, they will require more processing overhead than using monochrome masks. [0058]
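  • A short Python sketch of this monochrome reduction is shown below. The test for a pixel drawing more than 80% of its intensity from the blue plane is an interpretation here (blue compared against the sum of the three planes), and the function name is illustrative only.

```python
import numpy as np

def monochrome_mask(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    out = np.maximum(r, g)                    # OUT = MAX(R, G)
    total = r + g + b
    mostly_blue = b > 0.8 * total             # > 80% of intensity from blue
    return np.where(mostly_blue, out + 0.5 * b, out)  # OUT = OUT + 50% B
```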
  • Referring now to FIG. 6, a dynamic image mask B with properties representative of at least one embodiment of the present invention is illustrated, in comparison to the prior-art conventional cutout filter shown in FIG. 1. The dynamic image mask B shown in FIG. 6 will be generally referred to as revelation mask 650. The conventional image mask shown in FIG. 1 (prior art) will be generally referred to as conventional filter 110. [0059]
  • The revelation mask 650 maintains some of the detail lost to conventional image masks. Edges are preserved between regions of rapidly changing contrasts. For example, light region 690, generated to brighten detail within the windows in the original image, maintains edges that show sharp contrast to the darker region 680, which is generated to darken details in the walls shown in the original image. It should be noted that while edges are maintained between regions of rapidly changing contrasts, blurring is accomplished within the regions. For example, the details in the roof of the original image contain dark and light areas with a gradual shift in contrast. In conventional filter 110, dark region 127 is generated to maintain contrast with the lighter areas in the tower on the roof. When conventional filter 110 is overlaid with the original image, the resultant image will show a sharp contrast difference between dark region 127 and light region 120, which does not maintain the gradual shift found in the original image. In comparison, revelation mask 650 maintains the gradual shift in contrast, as can be noted by the blurred shift in intensity between the tower region 655 and the lighter region 670. This allows the roof in the original image to keep its gradual shift in intensity, while the sharp contrast of tower region 655 against the darker region 660, representing the background sky in the original image, is maintained. [0060]
  • Referring now to FIG. 7, a method for generating an enhanced image C in accordance with one embodiment of the present invention is illustrated. Image information related to an original image A is mathematically combined with information from dynamic image mask B. The combined data is used to create the enhanced image C. [0061]
  • The enhanced image C is generated on a pixel-by-pixel basis. Each corresponding pixel from original image A and dynamic image mask B is combined to form a pixel in masked image 710. For example, pixel data from pixel 715 of original image A is combined with pixel information from pixel 735 of dynamic image mask B using a mathematical manipulation, such as overlay function 720. The combined data is used to represent pixel 715 of enhanced image C. [0062]
  • Overlay function 720 is used to overlay the pixel information of original image A with that of dynamic image mask B. In one embodiment of the present invention, overlay function 720 involves mathematical manipulation and is defined by the equation: [0063]

$$\mathrm{OUT} = \frac{\mathrm{IN}}{\tfrac{3}{4}\,\mathrm{MASK} + \tfrac{1}{4}}$$
  • OUT refers to the value of the corresponding pixel in enhanced image C. IN refers to the value of the pixel taken from original image A. MASK refers to the value of the corresponding pixel in dynamic image mask B. For example, to produce the output value of pixel 714, the value of pixel 714 is divided by ¾ the value of pixel 734, with the addition of an offset. The offset, ¼, is chosen to prevent an error from occurring due to dividing by zero. The offset can also be chosen to lighten shadows in the resultant masked image 710. [0064]
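  • In code, the overlay function reduces to a single element-wise division. The sketch below assumes pixel values normalized to the range [0, 1]; note that a uniform mask value of 1 leaves the input unchanged, since ¾ + ¼ = 1.

```python
import numpy as np

def overlay(original: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # OUT = IN / (3/4 * MASK + 1/4); the 1/4 offset prevents division
    # by zero and lightens shadows in the resultant image.
    return original / (0.75 * mask + 0.25)
```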
  • In one embodiment, the application of dynamic image mask B to original image A is performed through software run on an information processing system. As previously discussed, dynamic image mask B can be a monochromatic mask. The dynamic image mask B can be used to control the white and black levels in images. Grayscale contrast is the contrast over large areas in an image. Image contrast refers to the contrast of details within an image. Through manipulation of the proportion of the value of MASK and the offset used in overlay function 720, the grayscale contrast and the image contrast can be altered to best enhance the enhanced image C. In one embodiment of the present invention, overlay function 720 is altered according to settings made by a user. Independent control of the image contrast and grayscale contrast can be provided. Control can be used to produce images using low image contrast in highlights and high image contrast in shadows. Additionally, functions can be added to control the generation of the dynamic image mask B. Control can be offered over the pressure (Gain) and radius (region) used by the sandblaster algorithm (described in FIG. 3). Additionally, control over the histogram of the image can be offered through control over the image contrast and the grayscale contrast. A normalized image can be generated in which histogram leveling can be performed without destroying image contrast. The controls, functions, and algorithms described herein can be performed within an information processing system. It will be appreciated that other systems may be employed, such as image processing kiosks, to produce enhanced image C, in keeping with the scope of the present invention. [0065]
  • Referring to FIG. 8A, a wrinkle reduction process 800 in accordance with one embodiment of the present invention is illustrated. As described in greater detail below, this embodiment of the wrinkle reduction process 800 operates to suppress median frequencies without suppressing high definition detail or low frequency contrast. As a result, people in an image have a younger look without any sacrifice of detail. [0066]
  • In the embodiment illustrated, a dynamic image mask B is calculated from original image A, as shown by block 802. In the preferred embodiment, the dynamic image mask B is calculated using a radius of 5 and a Gain of 64, as discussed in FIG. 3. The dynamic image mask B is then passed through a low pass filter 804. The low pass filter 804 is preferably a “soft focus” filter. In one embodiment, the low pass filter 804 is calculated as the average of a Gaussian average with a radius of one and a Gaussian average with a radius of three. Other types of low pass filters may be used without departing from the scope of the present invention. [0067]
  • The original image A is also passed through a high pass filter 806. In one embodiment, the high pass filter 806 is calculated as the inverse of the average of a Gaussian average with a blur of one and a Gaussian average with a blur of three. Other types of high pass filters may be used without departing from the scope of the present invention. [0068]
  • The results from the low pass filter 804 and the high pass filter 806 are then added together to form a median mask 808. The median mask 808 can then be applied to the original image A using, for example, applicator 810 to produce an enhanced image. In the preferred embodiment, the applicator 810 is an electronic brush that can be varied by radius to apply the median mask 808 only to those areas of the original image A specified by the user. Other types of applicators 810 may be used to apply the median mask 808 to the original image A. [0069]
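  • Putting the blocks of FIG. 8A together, a Python sketch of the wrinkle reduction pipeline follows, reusing `build_mask` and `overlay` from the earlier sketches. It is hedged in several places: SciPy's Gaussian sigma stands in for the stated filter radius, the high pass is interpreted as the original minus its soft-focus average (the “inverse”), and the median mask is applied globally rather than through a user-directed electronic brush.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def soft_focus(img: np.ndarray) -> np.ndarray:
    # Average of a Gaussian average with radius one and one with radius three.
    return 0.5 * (gaussian_filter(img, 1.0) + gaussian_filter(img, 3.0))

def wrinkle_reduction(original: np.ndarray) -> np.ndarray:
    mask = build_mask(original, radius=5, gain=64.0)  # radius 5, Gain 64
    low = soft_focus(mask)                  # low pass filter 804
    high = original - soft_focus(original)  # high pass filter 806
    median_mask = low + high                # median mask 808
    return overlay(original, median_mask)   # applicator 810, applied globally
```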
  • FIG. 8B-1 illustrates an untouched original image 820, and FIG. 8B-2 illustrates the same image after having the wrinkle reduction process 800 applied to the image 820. As can be seen, the wrinkle reduction process 800 reduces the visible effects of age on the person in the image, without sacrificing the minute detail of the image and without apparent blurring or softening of the details. This creates an image more pleasing to the eye and, most importantly, more pleasing to the person in the picture. The same process can be applied to other parts of the image to produce similar results. For example, when applied to clothing, the wrinkle reduction process 800 produces the appearance of a freshly pressed shirt or pants without affecting the details or appearing blurry. Although only a few of the applications of the wrinkle reduction process 800 and dynamic image mask B have been illustrated, it should be understood that they may be used for any suitable purpose or combination without departing from the scope of the present invention. [0070]
  • Referring to FIG. 9, an image capture system 900 used to implement one or more embodiments of the present invention is illustrated. Image capture system 900 includes any device capable of capturing data representative of an image and subsequently processing the data according to the teachings set forth herein. For example, image capture system 900 could include a digital camera, video recorder, a scanner, image processing software, and the like. An embodiment where image capture system 900 includes a digital camera is discussed subsequently for ease of illustration. The following discussion may be applied to other embodiments of image capture system 900 without departing from the spirit or scope of the present invention. [0071]
  • Image capture system 900 includes, but is not limited to, image sensor 910, analog-to-digital (A/D) convertor 920, color decoder 930, color management system 940, storage system 950, and/or display 960. In at least one embodiment, image capture system 900 is connected to printer 980 via a serial cable, printer cable, universal serial bus, networked connection, and the like. Image sensor 910, in one embodiment, captures an image and converts the captured image into electrical information representative of the image. Image sensor 910 could include an image sensor on a digital camera, such as a charge coupled device (CCD) sensor, a complementary metal oxide semiconductor (CMOS) sensor, and the like. For example, a CCD sensor converts photons reflected off of or transmitted through a subject into stored electrical charge at the location of each photosite of the CCD sensor. The stored electrical charge of each photosite is then used to obtain a value associated with the photosite. Each photosite could have a one-to-one correspondence with the pixels of the resulting image, or multiple photosites may be used in conjunction to determine the value of one or more pixels. [0072]
  • In one embodiment, image sensor 910 sends electrical information representing a captured image in analog form to A/D convertor 920, which converts the electrical information from an analog form to a digital form. Alternatively, in one embodiment, image sensor 910 captures an image and outputs the electrical information representing the image in digital form. It will be appreciated that, in this case, A/D convertor 920 would not be necessary. [0073]
  • It will be appreciated that photosites on image sensors, such as CCDs, often only measure the magnitude or intensity of the light striking a photosite. In this case, a number of methods may be used to convert the intensity values of the photosites (i.e. a black and white image) into corresponding color values for each photosite. For example, one method of obtaining color information is to use a beam splitter to focus the image onto more than one image sensor. In this case, each image sensor has a filter associated with a color. For example, image sensor 910 could include three CCD sensors, where one CCD sensor is filtered for red light, another CCD sensor is filtered for green light, and the third sensor is filtered for blue light. Another method is to use a rotating device having separate color filters between the light source (the image) and image sensor 910. As each color filter rotates in front of image sensor 910, a separate image corresponding to the color filter is captured. For example, a rotating disk could have a filter for each of the primary colors red, blue and green. In this case, the disk would rotate a red filter, a blue filter, and a green filter sequentially in front of image sensor 910, and as each filter was rotated in front, a separate image would be captured. [0074]
  • Alternatively, a permanent filter could be placed over each individual photosite. By breaking up image sensor 910 into a variety of different photosites associated with different colors, the actual color associated with a specific point or pixel of a captured element may be interpolated. For example, a common pattern used is the Bayer filter pattern, where rows of red and green sensitive photosites are alternated with rows of blue and green photosites. In the Bayer filter pattern, there are often many more green-sensitive photosites than blue- or red-sensitive photosites, as the human eye is more sensitive to green than to the other colors, so more green color information should be present for a captured image to be perceived as “true color” by the human eye. [0075]
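  • As a small illustration of the layout described here, the snippet below builds Boolean site masks for one common Bayer variant (an RGGB layout, matching the alternating red/green and green/blue rows in the text); the helper name is illustrative only.

```python
import numpy as np

def bayer_masks(height: int, width: int):
    y, x = np.mgrid[0:height, 0:width]
    red = (y % 2 == 0) & (x % 2 == 0)    # red/green rows are the even rows
    blue = (y % 2 == 1) & (x % 2 == 1)   # green/blue rows are the odd rows
    green = ~(red | blue)                # twice as many green sites
    return red, green, blue
```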
  • Accordingly, in one embodiment, color decoder 930 receives the digital output representing an image from A/D convertor 920 and converts the information from intensity values (black-and-white) to color values. For example, image sensor 910 could utilize a Bayer filter pattern as discussed previously. In this case, the black-and-white digital output from A/D convertor 920 could be interpolated or processed to generate data representative of one or more color images. For example, color decoder 930 could generate data representative of one or more full color images, one or more monochrome images, and the like. [0076]
  • Using the data representative of an image generated by color decoder 930, in one embodiment, color management system 940 processes the data for output and/or storage. For example, color management system 940 could attenuate the dynamic range of the data from color decoder 930. This may be done to reduce the amount of data associated with a captured image. Color management system 940 could also format the data into a variety of formats, such as a Joint Photographic Experts Group (JPEG) format, a tagged image file format (TIFF), a bitmap format, and the like. Color management system 940 may perform a number of other processes or methods to prepare the data representative of an image for display or output, such as compressing the data, converting the data from an analog to a digital format, etc. [0077]
  • After color management system 940 processes data representative of an image, the data, in one embodiment, is stored on storage 950 and/or displayed on display 960. Storage 950 could include memory, such as removable flash memory for a digital camera, or a storage disk, such as a hard drive or a floppy disk, and the like. Display 960 could include a liquid crystal display (LCD), a cathode ray tube (CRT) display, and other devices used to display or preview captured images. In an alternative embodiment, the data representative of an image could be processed by printer driver 970 to be printed by printer 980. Printer 980 could include a photograph printer, a desktop printer, a copier machine, a fax machine, a laser printer, and the like. Printer driver 970 could be collocated, physically or logically, with printer 980, on a computer connected to printer 980, and the like. It will be appreciated that one or more of the elements of image capture system 900 may be implemented as a state machine, as combinational logic, as software executable on a data processor, and the like. It will also be appreciated that the method or processes performed by one or more of the elements of image capture system 900 may be performed by a single device or system. For example, color decoder 930 and color management system 940 could be implemented as a monolithic microprocessor or as a combined set of executable instructions. [0078]
  • Image capture system 900 can be used to implement one or more methods of various embodiments of the present invention. The methods, herein referred to collectively as the image mask method, may be implemented at one or more stages of the image capturing process of image capture system 900. In one embodiment, the image mask method may be applied at stage 925, between the output of digital data from A/D convertor 920 and the input of color decoder 930. In many cases, stage 925 may be the optimal location for application of the image mask method. For example, if data representative of an image output from image sensor 910 is monochrome (or black-and-white) information yet to be decoded into color information, less information may need to be processed using the image mask method than after conversion of the data to color information. For example, if the data were to be decoded into the three primary colors (red, blue, green), three times the information may need to be processed, as there are three colors associated with each pixel of a captured image. The image mask method, according to at least one embodiment discussed previously, does not affect the accuracy or operation of color decoder 930. [0079]
  • Alternatively, the image mask method may be applied at stage 935, between color decoder 930 and color management system 940. In some situations, the location of stage 935 may not be as optimal as stage 925, since there may be more data to process between color decoder 930 and color management system 940. For example, color decoder 930 could generate data for each of the primary colors, resulting in three times the information to be processed by the image mask method at stage 935. An image mask method may also be implemented at stage 945, between color management system 940 and storage 950 and/or display 960. However, since the data output by color management system 940 often has been processed in ways that compress or discard information and dynamic range, application of the image mask method at stage 945 may not generate results as favorable as at stages 925, 935. [0080]
  • If the captured image is to be printed, the image mask method may be implemented at stages 965, 975. At stage 965, the image mask method may be implemented by printer driver 970, while at stage 975, the image mask method may be implemented between printer driver 970 and printer 980. For example, the connection between a system connected to printer driver 970, such as a computer, and printer 980 could include software and/or hardware to implement the image mask method. However, as discussed with reference to stage 945, the data representative of a captured image to be printed may have reduced dynamic range and/or loss of other information as a result of processing by color management system 940. [0081]
  • In at least one embodiment, the image mask method performed at stages 925, 935, 945, 965, and/or 975 is implemented as a set of instructions executed by processor 942. Processor 942 can include a microprocessor, a state machine, combinational logic circuitry, and the like. In one implementation, the set of instructions is stored in and retrieved from memory 943, where memory 943 can include random access memory, read only memory, flash memory, a storage device, and the like. Note that processor 942, in one embodiment, also executes instructions for performing the operations of one or more of the elements of image capture system 900. For example, processor 942 could execute instructions to perform the color decoding operations performed by color decoder 930 and then execute the set of instructions representative of the image mask method at stage 935. [0082]
  • It will be appreciated that the cost or effort to implement the image mask method at an optimal or desired stage (stages 925-975) may be prohibitive, resulting in the implementation of the image mask method at an alternate stage. For example, although stage 925 is often the optimal location for implementation of the image mask method, for reasons discussed previously, it may be difficult to implement the image mask method at this location. For example, image sensor 910, A/D convertor 920, and color decoder 930 could be implemented as a monolithic electronic circuit. In this case it might prove difficult to modify the circuit to implement the method. Alternatively, more than one element of image capture system 900, such as color decoder 930 and color management system 940, may be implemented as a single software application. In this case, the software application may be proprietary software where modification is prohibited, or the source code of the software may not be available, making modification of the software application difficult. [0083]
  • In the event that the image mask method may not be implemented in the optimal location, application of the image mask method in a more suitable location often will still result in improved image quality and detail. For example, even though the dynamic range of data representative of an image may be reduced after processing by color management system 940, application of the image mask method at stage 945, in one embodiment, results in data representative of an image having improved quality and/or detail over the data output by color management system 940. The improved image data may result in an improved image for display on display 960, for subsequent display when retrieved from storage 950, or for physical replication by printer 980. It will be appreciated that the image mask method may be employed more than once. For example, the image mask method may be employed at stage 925 to perform an initial compression of the dynamic range of the image, and then again at stage 945 to further compress the image's dynamic range. [0084]
  • Referring now to FIG. 10, a chart showing various improvements in image types is illustrated according to at least one embodiment of the present invention. As discussed previously, an implementation of at least one embodiment of the present invention may be used to improve the dynamic range of representations of captured images. The horizontal axis of chart 1000 represents the dynamic range of various types of image representations. The dynamic range of images, as presented to the human eye (i.e. “real life”), is represented by range 1006. The dynamic range decreases sequentially from real life (range 1006) to printed transparencies (range 1005), CRT displays (range 1004), glossy photographic prints (range 1003), matte photographic prints (range 1002), and LCD displays (range 1001). Note that the sequence of dynamic ranges of various image representations is a general comparison, and the sequence of dynamic ranges should not be taken as absolute in all cases. For example, there could exist a CRT display (range 1004) which could have a dynamic range greater than printed transparencies (range 1005). [0085]
  • According to at least one embodiment, by applying an image mask method disclosed herein, the dynamic range of the representation of an image may be improved. For example, by applying an image mask method sometime before data representing an image is displayed on an LCD monitor, image information having a dynamic range comparable to a glossy photographic print (range 1003) could be compressed for display on an LCD monitor having a dynamic range 1001, resulting in an improved display image. Likewise, image information having a dynamic range equivalent to a CRT display (range 1004) may be compressed into a dynamic range usable for matte photographic prints (range 1002), and so on. As a result, an image mask method, as disclosed herein, may be used to improve the dynamic range used for display of a captured image, thereby improving the quality of the display of the captured image. [0086]
  • One of the preferred implementations of the invention is as sets of computer readable instructions resident in the random access memory of one or more processing systems configured generally as described in FIGS. 1-10. Until required by the processing system, the set of instructions may be stored in another computer readable memory, for example, in a hard disk drive or in a removable memory such as an optical disk for eventual use in a CD drive or DVD drive or a floppy disk for eventual use in a floppy disk drive. Further, the set of instructions can be stored in the memory of another image processing system and transmitted over a local area network or a wide area network, such as the Internet, where the transmitted signal could be a signal propagated through a medium such as an ISDN line, or the signal may be propagated through an air medium and received by a local satellite to be transferred to the processing system. Such a signal may be a composite signal comprising a carrier signal, and contained within the carrier signal is the desired information containing at least one computer program instruction implementing the invention, and may be downloaded as such when desired by the user. One skilled in the art would appreciate that the physical storage and/or transfer of the sets of instructions physically changes the medium upon which it is stored electrically, magnetically, or chemically so that the medium carries computer readable information. The preceding detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims. [0087]
  • [0088] In the preceding detailed description of the figures, reference has been made to the accompanying drawings which form a part thereof, and in which is shown by way of illustration specific preferred embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, chemical and electrical changes may be made without departing from the spirit or scope of the invention. To avoid detail not necessary to enable those skilled in the art to practice the invention, the description may omit certain information known to those skilled in the art. Furthermore, many other varied embodiments that incorporate the teachings of the invention may be easily constructed by those skilled in the art. Accordingly, the present invention is not intended to be limited to the specific form set forth herein, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents, as can be reasonably included within the spirit and scope of the invention. The preceding detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.

Claims (48)

What is claimed is:
1. A method for enhancing a digital image comprising:
providing a digital original image comprised of a plurality of pixels, wherein each pixel includes an original value corresponding to a characteristic of the image;
calculating a dynamic image mask value for each pixel by averaging the original value of a pixel with the original values of the pixels proximate that pixel having original values lower than a threshold sharpness; and
applying the dynamic image mask value to the original value for each corresponding pixel using a mathematical function to produce an enhanced image.
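By way of illustration, a minimal Python sketch of the method of claim 1, under stated assumptions: a grayscale floating-point image in 0..1, a hypothetical square neighborhood of radius 2, a hard cutoff at the sharpness threshold rather than the weighted falloff of claims 6-7, and a reading of the proximate-pixel condition as a difference from the center pixel below the threshold, consistent with the Gain weighting of claim 7. The combining step uses the division of claims 10-11.

```python
import numpy as np

def dynamic_image_mask(image: np.ndarray, radius: int = 2,
                       threshold: float = 0.25) -> np.ndarray:
    # Claim 1's mask: average each pixel with nearby pixels whose values
    # stay within the sharpness threshold of it, excluding sharp edges.
    h, w = image.shape
    padded = np.pad(image, radius, mode="edge")
    mask = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            keep = np.abs(window - image[y, x]) < threshold
            mask[y, x] = window[keep].mean()
    return mask

def enhance(image: np.ndarray) -> np.ndarray:
    # Apply the mask with a mathematical function (claim 11's division).
    return image / (0.75 * dynamic_image_mask(image) + 0.25)
```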
2. The method of claim 1, wherein providing a digital original image comprises capturing a digital original image using a digital capture device.
3. The method of claim 1, wherein providing a digital original image comprises capturing a digital original image using an imaging system.
4. The method of claim 1, wherein the original value corresponding to a characteristic of the image comprises an intensity value corresponding to a color.
5. The method of claim 1, wherein the original value corresponding to a characteristic of the image comprises an intensity value corresponding to a range of frequencies.
6. The method of claim 1, wherein averaging the original value of a pixel with the original values of the pixels proximate that pixel having original values lower than a threshold sharpness comprises averaging the original value of a pixel with only the weighted original values of the pixels proximate that pixel having original values lower than a threshold sharpness.
7. The method of claim 6, wherein the weighted original values are determined according to the following formula:
w_N = 1 - ((pixel_N - centerpixel) / Gain),
wherein pixel_N is the value of the pixel being weighted, centerpixel is the value of the central pixel, and wherein Gain is the threshold sharpness.
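As a worked illustration of this weighting (the absolute difference and the clamp to non-negative weights are assumptions, made so the weight stays in 0..1 on either side of the center value): with Gain = 0.25, a neighbor equal to the center pixel receives weight 1, one differing by 0.125 receives weight 0.5, and one differing by 0.25 or more is excluded entirely. A one-line Python equivalent:

```python
def neighbor_weight(pixel_n: float, centerpixel: float, gain: float) -> float:
    # Claim 7's falloff; abs() and the clamp at 0 are assumptions made so
    # that weights stay in 0..1 on both sides of the center value.
    return max(0.0, 1.0 - abs(pixel_n - centerpixel) / gain)

# neighbor_weight(0.5, 0.5, 0.25)   -> 1.0  (identical pixels)
# neighbor_weight(0.625, 0.5, 0.25) -> 0.5  (halfway to the threshold)
# neighbor_weight(0.8, 0.5, 0.25)   -> 0.0  (past the threshold: excluded)
```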
8. The method of claim 1, wherein the original values used to determine whether the difference is less than the sharpness threshold correspond to different characteristics than the original values used in averaging.
9. The method of claim 1, wherein calculating a dynamic image mask value includes performing a pyramidal decomposition on the original image.
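To illustrate claim 9, here is a rough Python sketch of a pyramidal decomposition, with 2×2 block averaging standing in for whatever decimation filter an implementation would actually use. Computing the mask average at a coarse pyramid level and interpolating it back to full resolution approximates a large-radius average at a fraction of the cost.

```python
import numpy as np

def downsample(img: np.ndarray) -> np.ndarray:
    # Halve the resolution by averaging 2x2 blocks (one pyramid step).
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid(img: np.ndarray, levels: int = 4) -> list:
    # Fine-to-coarse pyramid; a mask computed at a coarse level can be
    # upsampled back to full resolution to serve as the dynamic image mask.
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out
```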
10. The method of claim 1, wherein the mathematical function comprises division.
11. The method of claim 1, wherein the mathematical function comprises:
OUT = IN / ((3/4) MASK + (1/4)),
wherein OUT is the value of the pixel being calculated in the enhanced image, IN is the value of the corresponding pixel in the original image, and MASK is the value of the corresponding pixel in the dynamic image mask.
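A quick numeric check of this function: where the mask is bright (MASK = 1), OUT = IN / (3/4 + 1/4) = IN, so highlights pass through unchanged; where the mask is dark (MASK = 0.2), OUT = IN / 0.4 = 2.5 IN, so shadow regions are lifted 2.5×. Large-scale brightness differences are thus compressed, while local detail, which makes IN vary around MASK, is preserved.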
12. The method of claim 1, further comprising performing histogram leveling on the enhanced image.
13. The method of claim 1, wherein the enhanced image includes an image contrast and a grayscale contrast.
14. The method of claim 13, wherein the image contrast and the grayscale contrast can be controlled independently of each other.
15. The method of claim 1, wherein the dynamic image mask value may be proportionally varied by a user.
16. A system comprising:
a sensor system operable to produce electronic signals corresponding to certain characteristics of a subject;
a processor operable to receive the electronic signals and produce image values for each pixel; and
a memory media having software stored thereon, wherein the software is operable to:
calculate a dynamic image mask value for each pixel by averaging the image value of a pixel with the image values of the pixels proximate that pixel having image values lower than a threshold sharpness; and
apply the dynamic image mask value to the image value for each corresponding pixel using a mathematical function to produce an enhanced image.
17. The system of claim 16, wherein the sensor system operates to measure light from the subject.
18. The system of claim 16, wherein the sensor system operates to measure a magnetic resonance pulse.
19. The system of claim 16, further comprising a printer operable to print the enhanced image.
20. The system of claim 19, wherein the printer comprises a photographic printer.
21. The system of claim 16, further comprising a digital output device operable to store the enhanced image.
22. The system of claim 16, wherein the system comprises a digital device selected from the group consisting of a digital camera and a video camera.
23. The system of claim 16, wherein the system comprises an imaging system selected from the group consisting of a magnetic resonance imaging system and a radar system.
24. The system of claim 16, wherein the software is loaded into an image capturing device.
25. The system of claim 16, wherein the system comprises a printer device.
26. Software tangibly embodied in a computer readable medium, said software operable to produce an enhanced image by implementing a method comprising:
generating a dynamic image mask from a digital original image, the dynamic image mask and the original image each comprising a plurality of pixels having varying values, wherein the values of the plurality of dynamic image mask pixels are set to form sharper edges corresponding to areas of more rapidly changing pixel values in the original image and less sharp regions corresponding to areas of less rapidly changing pixel values in the original image; and
combining the dynamic image mask with the original image to produce the enhanced image.
27. The software of claim 26, wherein:
the original image includes an amount of image detail encoded in a physically reproducible dynamic range; and
wherein the enhanced image includes an increased amount of detail encoded in the physically reproducible dynamic range.
28. The software of claim 26, wherein combining the dynamic image mask with the original image is performed through mathematical manipulation.
29. The software of claim 28, wherein the mathematical manipulation includes division.
30. The software of claim 26, wherein the pixels in the dynamic image mask are generated according to the equation,
OUT = IN / ((3/4) MASK + (1/4)),
wherein OUT is the value of the pixel being calculated in the enhanced image, IN is the value of the corresponding pixel in the original image, and MASK is the value of the corresponding pixel in the dynamic image mask.
31. The software of claim 26, further comprising histogram leveling.
32. The software of claim 26, wherein the value of a pixel in the dynamic image mask is generated by averaging the value of a central pixel corresponding to the pixel in the original image with weighted values of a plurality of neighboring pixels in the original image.
33. The software of claim 32, wherein the weighting of the plurality of neighboring pixels is dependent on a proximity of the neighboring pixels to the central pixel and a contrast of the plurality of neighboring pixels to the central pixel.
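The dual dependence in claim 33 on proximity and contrast corresponds, in later terminology, to the weighting of a bilateral-style filter. A brief hypothetical sketch of one such combined weight follows; the Gaussian spatial falloff, the sigma_space parameter, and the defaults are assumptions, while the contrast term reuses the Gain falloff of claims 7 and 34.

```python
import math

def combined_weight(dx: int, dy: int, pixel_n: float, centerpixel: float,
                    sigma_space: float = 2.0, gain: float = 0.25) -> float:
    # Proximity term: nearer neighbors count more (assumed Gaussian falloff).
    spatial = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma_space ** 2))
    # Contrast term: the linear falloff toward the Gain threshold.
    contrast = max(0.0, 1.0 - abs(pixel_n - centerpixel) / gain)
    return spatial * contrast
```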
34. The software of claim 26, wherein the weight of pixels in the dynamic image mask is determined according to the following formula:
w_N = 1 - ((pixel_N - centerpixel) / Gain),
wherein pixel_N is the value of the pixel being weighted, centerpixel is the value of the central pixel, and wherein Gain is a threshold contrast value for determining a sharp edge.
35. The software of claim 26, wherein the value of a pixel in the dynamic image mask is generated based on values of a different characteristic of the image.
36. The software of claim 26, wherein generating the dynamic image mask includes performing a pyramidal decomposition on the original image.
37. The software of claim 26, wherein the software is resident on a computer.
38. The software of claim 26, wherein the software is resident on a digital camera.
39. A system comprising:
an image sensor to convert light reflected from an image into information representative of the image;
a processor;
memory operably coupled to said processor; and
a program of instructions capable of being stored in said memory and executed by said processor, said program of instructions to manipulate said processor to:
obtain a dynamic image mask, the dynamic image mask and the information representative of the image each including a plurality of pixels having varying values, wherein the values of the plurality of dynamic image mask pixels are set to form sharper edges corresponding to areas of more rapidly changing pixel values in the original image and less sharp regions corresponding to areas of less rapidly changing pixel values in the original image; and
combine the image mask with the information representative of the image to obtain a masked image.
40. The system of claim 39, further including a color decoder, operably connected to said image sensor, to generate color information from the information representative of the image.
41. The system of claim 40, wherein said program of instructions is executed on an output of said image sensor, and wherein a result of said executed program of instructions is input to said color decoder.
42. The system of claim 39, further including a color management system, operably connected to said color decoder, to process said color information.
43. The system of claim 42, wherein said program of instructions is executed on an output of said color decoder, and wherein a result of said executed program of instructions is input to said color management system.
44. The system of claim 43, wherein said output of said color decoder is information representative of a red portion of the image, a green portion of the image, and a blue portion of the image.
45. The system of claim 42, further including a storage system, operably connected to said color management system, to store the color information.
46. The system of claim 45, wherein said program of instructions is executed on an output of said color management system, and wherein a result of said executed program of instructions is input to said storage system.
47. The system of claim 39, further including a display, operable to display a representation of the information representative of the image.
48. The system of claim 47, wherein said program of instructions is executed on an output of a color management system, and wherein a result of said executed program of instructions is input to said display.
US09/960,276 2000-09-21 2001-09-21 Dynamic image correction and imaging systems Abandoned US20020176113A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/960,276 US20020176113A1 (en) 2000-09-21 2001-09-21 Dynamic image correction and imaging systems

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US23452000P 2000-09-21 2000-09-21
US23440800P 2000-09-21 2000-09-21
US28559101P 2001-04-19 2001-04-19
US09/960,276 US20020176113A1 (en) 2000-09-21 2001-09-21 Dynamic image correction and imaging systems

Publications (1)

Publication Number Publication Date
US20020176113A1 true US20020176113A1 (en) 2002-11-28

Family

ID=27398563

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/960,239 Expired - Fee Related US7016080B2 (en) 2000-09-21 2001-09-21 Method and system for improving scanned image detail
US09/960,276 Abandoned US20020176113A1 (en) 2000-09-21 2001-09-21 Dynamic image correction and imaging systems

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/960,239 Expired - Fee Related US7016080B2 (en) 2000-09-21 2001-09-21 Method and system for improving scanned image detail

Country Status (7)

Country Link
US (2) US7016080B2 (en)
EP (1) EP1323292A2 (en)
JP (1) JP2004517384A (en)
CN (1) CN1474997A (en)
AU (1) AU2001294669A1 (en)
TW (1) TW538382B (en)
WO (1) WO2002025928A2 (en)

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020103006A1 (en) * 2001-01-31 2002-08-01 Steven Doe Liquid crystal display device
US20040051794A1 (en) * 2002-09-12 2004-03-18 Pentax Corporation Filter process
US20040066458A1 (en) * 2002-07-12 2004-04-08 Hiroyuki Kawamura Imaging system
EP1443459A2 (en) * 2003-02-03 2004-08-04 Noritsu Koki Co., Ltd. Image processing method and apparatus for correcting photographic images
US20050057484A1 (en) * 2003-09-15 2005-03-17 Diefenbaugh Paul S. Automatic image luminance control with backlight adjustment
US20050286794A1 (en) * 2004-06-24 2005-12-29 Apple Computer, Inc. Gaussian blur approximation suitable for GPU
US7016080B2 (en) * 2000-09-21 2006-03-21 Eastman Kodak Company Method and system for improving scanned image detail
US20060182363A1 (en) * 2004-12-21 2006-08-17 Vladimir Jellus Method for correcting inhomogeneities in an image, and an imaging apparatus therefor
US20060232823A1 (en) * 2005-04-13 2006-10-19 Hooper David S Image contrast enhancement
US20060285164A1 (en) * 2005-06-21 2006-12-21 Chun-Yi Wang Method for Processing Multi-layered Image Data
US20070244844A1 (en) * 2006-03-23 2007-10-18 Intelliscience Corporation Methods and systems for data analysis and feature recognition
US20080031548A1 (en) * 2006-03-23 2008-02-07 Intelliscience Corporation Systems and methods for data transformation
US20080033984A1 (en) * 2006-04-10 2008-02-07 Intelliscience Corporation Systems and methods for data point processing
WO2008022222A2 (en) * 2006-08-15 2008-02-21 Intelliscience Corporation Systems and methods for data transformation
US20080170801A1 (en) * 2005-09-05 2008-07-17 Algosoft Limited Automatic digital film and video restoration
CN101593267A (en) * 2008-05-27 2009-12-02 三星电子株式会社 The method of display label, display tag system and writing display tag information
US20100202262A1 (en) * 2009-02-10 2010-08-12 Anchor Bay Technologies, Inc. Block noise detection and filtering
US20100309344A1 (en) * 2009-06-05 2010-12-09 Apple Inc. Chroma noise reduction for cameras
US20100309345A1 (en) * 2009-06-05 2010-12-09 Apple Inc. Radially-Based Chroma Noise Reduction for Cameras
US20110242367A1 (en) * 2010-03-31 2011-10-06 Samsung Electronics Co., Ltd. Image processing method and photographing apparatus using the same
US8175992B2 (en) 2008-03-17 2012-05-08 Intelliscience Corporation Methods and systems for compound feature creation, processing, and identification in conjunction with a data analysis and feature recognition system wherein hit weights are summed
WO2013078182A1 (en) * 2011-11-21 2013-05-30 Georgetown University System and method for enhancing the legibility of degraded images
KR20130069494A (en) * 2011-12-16 2013-06-26 지멘스 악티엔게젤샤프트 Method to create an mr image of an examination subject with a magnetic resonance system, as well as a corresponding magnetic resonance system
US20130230244A1 (en) * 2012-03-02 2013-09-05 Chintan Intwala Continuously Adjustable Bleed for Selected Region Blurring
US8559746B2 (en) 2008-09-04 2013-10-15 Silicon Image, Inc. System, method, and apparatus for smoothing of edges in images to remove irregularities
US8625885B2 (en) 2006-03-23 2014-01-07 Intelliscience Corporation Methods and systems for data analysis and feature recognition
CN103679656A (en) * 2013-10-21 2014-03-26 厦门美图网科技有限公司 Automatic image sharpening method
US20160078601A1 (en) * 2014-09-12 2016-03-17 Tmm, Inc. Image upsampling using local adaptive weighting
US10510153B1 (en) * 2017-06-26 2019-12-17 Amazon Technologies, Inc. Camera-level image processing
US10580149B1 (en) * 2017-06-26 2020-03-03 Amazon Technologies, Inc. Camera-level image processing
US20230200639A1 (en) * 2021-12-27 2023-06-29 Novasight Ltd. Method and device for treating / preventing refractive errors as well as for image processing and display

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6927804B2 (en) * 2002-09-09 2005-08-09 Eastman Kodak Company Reducing color aliasing artifacts from color digital images
US20040116796A1 (en) * 2002-12-17 2004-06-17 Jianying Li Methods and apparatus for scoring a substance
EP1605402A2 (en) * 2004-06-10 2005-12-14 Sony Corporation Image processing device and method, recording medium, and program for blur correction
US8442311B1 (en) 2005-06-30 2013-05-14 Teradici Corporation Apparatus and method for encoding an image generated in part by graphical commands
US7782339B1 (en) 2004-06-30 2010-08-24 Teradici Corporation Method and apparatus for generating masks for a multi-layer image decomposition
JP2006098803A (en) * 2004-09-29 2006-04-13 Toshiba Corp Moving image processing method, moving image processing apparatus and moving image processing program
JP4893079B2 (en) * 2006-04-14 2012-03-07 ソニー株式会社 Boundary value table optimization device, liquid discharge head, liquid discharge device, and computer program
JP2008067230A (en) * 2006-09-08 2008-03-21 Sony Corp Image processing apparatus, image processing method, and program
TWI408486B (en) * 2008-12-30 2013-09-11 Ind Tech Res Inst Camera with dynamic calibration and method thereof
EP2333623A1 (en) * 2009-12-11 2011-06-15 Siemens Aktiengesellschaft Monitoring system for data acquisition in a production environment
US20110210960A1 (en) * 2010-02-26 2011-09-01 Google Inc. Hierarchical blurring of texture maps
US8285069B2 (en) 2010-03-30 2012-10-09 Chunghwa Picture Tubes, Ltd. Image processing device and method thereof
CN101882306B (en) * 2010-06-13 2011-12-21 浙江大学 High-precision joining method of uneven surface object picture
TWI492096B (en) 2010-10-29 2015-07-11 Au Optronics Corp 3d image interactive system and position-bias compensation method of the same
US9031346B2 (en) * 2011-01-07 2015-05-12 Tp Vision Holding B.V. Method for converting input image data into output image data, image conversion unit for converting input image data into output image data, image processing apparatus, display device
US9646366B2 (en) * 2012-11-30 2017-05-09 Change Healthcare Llc Method and apparatus for enhancing medical images
EP3613012A1 (en) * 2017-04-19 2020-02-26 Schneider Electric IT Corporation Systems and methods of proximity detection for rack enclosures
CN110830727B (en) * 2018-08-07 2021-06-22 浙江宇视科技有限公司 Automatic exposure ratio adjusting method and device
CN112394536B (en) * 2019-07-31 2022-04-29 华为技术有限公司 Optical anti-shake device and control method
TWI768709B (en) * 2021-01-19 2022-06-21 福邦科技國際股份有限公司 Dual image fusion method and device

Citations (98)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2404138A (en) * 1941-10-06 1946-07-16 Alvin L Mayer Apparatus for developing exposed photographic prints
US3250689A (en) * 1965-05-03 1966-05-10 Robert G Seyl Simplified method of measuring corrosion using reference electrode
US3520690A (en) * 1965-06-25 1970-07-14 Fuji Photo Film Co Ltd Process for controlling dye gradation in color photographic element
US3587435A (en) * 1969-04-24 1971-06-28 Pat P Chioffe Film processing machine
US3615479A (en) * 1968-05-27 1971-10-26 Itek Corp Automatic film processing method and apparatus therefor
US3615498A (en) * 1967-07-29 1971-10-26 Fuji Photo Film Co Ltd Color developers containing substituted nbenzyl-p-aminophenol competing developing agents
US3747120A (en) * 1971-01-11 1973-07-17 N Stemme Arrangement of writing mechanisms for writing on paper with a coloredliquid
US3833161A (en) * 1972-02-08 1974-09-03 Bosch Photokino Gmbh Apparatus for intercepting and threading the leader of convoluted motion picture film or the like
US3903541A (en) * 1971-07-27 1975-09-02 Meister Frederick W Von Apparatus for processing printing plates precoated on one side only
US3946398A (en) * 1970-06-29 1976-03-23 Silonics, Inc. Method and apparatus for recording with writing fluids and drop projection means therefor
US3959048A (en) * 1974-11-29 1976-05-25 Stanfield James S Apparatus and method for repairing elongated flexible strips having damaged sprocket feed holes along the edge thereof
US4026756A (en) * 1976-03-19 1977-05-31 Stanfield James S Apparatus for repairing elongated flexible strips having damaged sprocket feed holes along the edge thereof
US4081577A (en) * 1973-12-26 1978-03-28 American Hoechst Corporation Pulsed spray of fluids
US4142107A (en) * 1977-06-30 1979-02-27 International Business Machines Corporation Resist development control system
US4215927A (en) * 1979-04-13 1980-08-05 Scott Paper Company Lithographic plate processing apparatus
US4249985A (en) * 1979-03-05 1981-02-10 Stanfield James S Pressure roller for apparatus useful in repairing sprocket holes on strip material
US4265545A (en) * 1979-07-27 1981-05-05 Intec Corporation Multiple source laser scanning inspection system
US4501480A (en) * 1981-10-16 1985-02-26 Pioneer Electronic Corporation System for developing a photo-resist material used as a recording medium
US4564280A (en) * 1982-10-28 1986-01-14 Fujitsu Limited Method and apparatus for developing resist film including a movable nozzle arm
US4594598A (en) * 1982-10-26 1986-06-10 Sharp Kabushiki Kaisha Printer head mounting assembly in an ink jet system printer
US4636808A (en) * 1985-09-09 1987-01-13 Eastman Kodak Company Continuous ink jet printer
US4666307A (en) * 1984-01-19 1987-05-19 Fuji Photo Film Co., Ltd. Method for calibrating photographic image information
US4670779A (en) * 1984-01-10 1987-06-02 Sharp Kabushiki Kaisha Color-picture analyzing apparatus with red-purpose and green-purpose filters
US4736221A (en) * 1985-10-18 1988-04-05 Fuji Photo Film Co., Ltd. Method and device for processing photographic film using atomized liquid processing agents
US4741621A (en) * 1986-08-18 1988-05-03 Westinghouse Electric Corp. Geometric surface inspection system with dual overlap light stripe generator
US4745040A (en) * 1976-08-27 1988-05-17 Levine Alfred B Method for destructive electronic development of photo film
US4755844A (en) * 1985-04-30 1988-07-05 Kabushiki Kaisha Toshiba Automatic developing device
US4777102A (en) * 1976-08-27 1988-10-11 Levine Alfred B Method and apparatus for electronic development of color photographic film
US4796061A (en) * 1985-11-16 1989-01-03 Dainippon Screen Mfg. Co., Ltd. Device for detachably attaching a film onto a drum in a drum type picture scanning recording apparatus
US4814630A (en) * 1987-06-29 1989-03-21 Ncr Corporation Document illuminating apparatus using light sources A, B, and C in periodic arrays
US4821114A (en) * 1986-05-02 1989-04-11 Dr. Ing. Rudolf Hell Gmbh Opto-electronic scanning arrangement
US4845551A (en) * 1985-05-31 1989-07-04 Fuji Photo Film Co., Ltd. Method for correcting color photographic image data on the basis of calibration data read from a reference film
US4851311A (en) * 1987-12-17 1989-07-25 Texas Instruments Incorporated Process for determining photoresist develop time by optical transmission
US4857430A (en) * 1987-12-17 1989-08-15 Texas Instruments Incorporated Process and system for determining photoresist development endpoint by effluent analysis
US4875067A (en) * 1987-07-23 1989-10-17 Fuji Photo Film Co., Ltd. Processing apparatus
US4994918A (en) * 1989-04-28 1991-02-19 Bts Broadcast Television Systems Gmbh Method and circuit for the automatic correction of errors in image steadiness during film scanning
US5027146A (en) * 1989-08-31 1991-06-25 Eastman Kodak Company Processing apparatus
US5034767A (en) * 1987-08-28 1991-07-23 Hanetz International Inc. Development system
US5101286A (en) * 1990-03-21 1992-03-31 Eastman Kodak Company Scanning film during the film process for output to a video monitor
US5124216A (en) * 1990-07-31 1992-06-23 At&T Bell Laboratories Method for monitoring photoresist latent images
US5155596A (en) * 1990-12-03 1992-10-13 Eastman Kodak Company Film scanner illumination system having an automatic light control
US5196285A (en) * 1990-05-18 1993-03-23 Xinix, Inc. Method for control of photoresist develop processes
US5200817A (en) * 1991-08-29 1993-04-06 Xerox Corporation Conversion of an RGB color scanner into a colorimetric scanner
US5212512A (en) * 1990-11-30 1993-05-18 Fuji Photo Film Co., Ltd. Photofinishing system
US5231439A (en) * 1990-08-03 1993-07-27 Fuji Photo Film Co., Ltd. Photographic film handling method
US5235352A (en) * 1991-08-16 1993-08-10 Compaq Computer Corporation High density ink jet printhead
US5255408A (en) * 1992-02-11 1993-10-26 Eastman Kodak Company Photographic film cleaner
US5296923A (en) * 1991-01-09 1994-03-22 Konica Corporation Color image reproducing device and method
US5334247A (en) * 1991-07-25 1994-08-02 Eastman Kodak Company Coater design for low flowrate coating applications
US5350664A (en) * 1993-02-12 1994-09-27 Eastman Kodak Company Photographic elements for producing blue, green, and red exposure records of the same hue and methods for the retrieval and differentiation of the exposure records
US5350651A (en) * 1993-02-12 1994-09-27 Eastman Kodak Company Methods for the retrieval and differentiation of blue, green and red exposure records of the same hue from photographic elements containing absorbing interlayers
US5357307A (en) * 1992-11-25 1994-10-18 Eastman Kodak Company Apparatus for processing photosensitive material
US5391443A (en) * 1991-07-19 1995-02-21 Eastman Kodak Company Process for the extraction of spectral image records from dye image forming photographic elements
US5414779A (en) * 1993-06-14 1995-05-09 Eastman Kodak Company Image frame detection
US5416550A (en) * 1990-09-14 1995-05-16 Eastman Kodak Company Photographic processing apparatus
US5418119A (en) * 1993-07-16 1995-05-23 Eastman Kodak Company Photographic elements for producing blue, green and red exposure records of the same hue
US5418597A (en) * 1992-09-14 1995-05-23 Eastman Kodak Company Clamping arrangement for film scanning apparatus
US5436738A (en) * 1992-01-22 1995-07-25 Eastman Kodak Company Three dimensional thermal internegative photographic printing apparatus and method
US5440365A (en) * 1993-10-14 1995-08-08 Eastman Kodak Company Photosensitive material processor
US5447811A (en) * 1992-09-24 1995-09-05 Eastman Kodak Company Color image reproduction of scenes with preferential tone mapping
US5448380A (en) * 1993-07-31 1995-09-05 Samsung Electronics Co., Ltd. color image processing method and apparatus for correcting a color signal from an input image device
US5452018A (en) * 1991-04-19 1995-09-19 Sony Electronics Inc. Digital color correction system having gross and fine adjustment modes
US5496669A (en) * 1992-07-01 1996-03-05 Interuniversitair Micro-Elektronica Centrum Vzw System for detecting a latent image using an alignment apparatus
US5516608A (en) * 1994-02-28 1996-05-14 International Business Machines Corporation Method for controlling a line dimension arising in photolithographic processes
US5519510A (en) * 1992-07-17 1996-05-21 International Business Machines Corporation Electronic film development
US5546477A (en) * 1993-03-30 1996-08-13 Klics, Inc. Data compression and decompression
US5550566A (en) * 1993-07-15 1996-08-27 Media Vision, Inc. Video capture expansion card
US5552904A (en) * 1994-01-31 1996-09-03 Samsung Electronics Co., Ltd. Color correction method and apparatus using adaptive region separation
US5563717A (en) * 1995-02-03 1996-10-08 Eastman Kodak Company Method and means for calibration of photographic media using pre-exposed miniature images
US5568270A (en) * 1992-12-09 1996-10-22 Fuji Photo Film Co., Ltd. Image reading apparatus which varies reading time according to image density
US5596415A (en) * 1993-06-14 1997-01-21 Eastman Kodak Company Iterative predictor-based detection of image frame locations
US5627016A (en) * 1996-02-29 1997-05-06 Eastman Kodak Company Method and apparatus for photofinishing photosensitive film
US5649260A (en) * 1995-06-26 1997-07-15 Eastman Kodak Company Automated photofinishing apparatus
US5664255A (en) * 1996-05-29 1997-09-02 Eastman Kodak Company Photographic printing and processing apparatus
US5664253A (en) * 1995-09-12 1997-09-02 Eastman Kodak Company Stand alone photofinishing apparatus
US5667944A (en) * 1995-10-25 1997-09-16 Eastman Kodak Company Digital process sensitivity correction
US5678116A (en) * 1994-04-06 1997-10-14 Dainippon Screen Mfg. Co., Ltd. Method and apparatus for drying a substrate having a resist film with a miniaturized pattern
US5726773A (en) * 1994-11-29 1998-03-10 Carl-Zeiss-Stiftung Apparatus for scanning and digitizing photographic image objects and method of operating said apparatus
US5729631A (en) * 1993-11-30 1998-03-17 Polaroid Corporation Image noise reduction system using a wiener variant filter in a pyramid image representation
US5739897A (en) * 1994-08-16 1998-04-14 Gretag Imaging Ag Method and system for creating index prints on and/or with a photographic printer
US5771318A (en) * 1996-06-27 1998-06-23 Siemens Corporate Research, Inc. Adaptive edge-preserving smoothing filter
US5771107A (en) * 1995-01-11 1998-06-23 Mita Industrial Co., Ltd. Image processor with image edge emphasizing capability
US5790277A (en) * 1994-06-08 1998-08-04 International Business Machines Corporation Duplex film scanning
US5867606A (en) * 1997-08-12 1999-02-02 Hewlett-Packard Company Apparatus and method for determining the appropriate amount of sharpening for an image
US5870172A (en) * 1996-03-29 1999-02-09 Blume; Stephen T. Apparatus for producing a video and digital image directly from dental x-ray film
US5880819A (en) * 1995-06-29 1999-03-09 Fuji Photo Film Co., Ltd. Photographic film loading method, photographic film conveying apparatus, and image reading apparatus
US5892595A (en) * 1996-01-26 1999-04-06 Ricoh Company, Ltd. Image reading apparatus for correct positioning of color component values of each picture element
US5930388A (en) * 1996-10-24 1999-07-27 Sharp Kabushiki Kaisha Color image processing apparatus
US5959720A (en) * 1996-03-22 1999-09-28 Eastman Kodak Company Method for color balance determination
US6065824A (en) * 1994-12-22 2000-05-23 Hewlett-Packard Company Method and apparatus for storing information on a replaceable ink container
US6069714A (en) * 1996-12-05 2000-05-30 Applied Science Fiction, Inc. Method and apparatus for reducing noise in electronic film development
US6088084A (en) * 1997-10-17 2000-07-11 Fuji Photo Film Co., Ltd. Original carrier and image reader
US6089687A (en) * 1998-03-09 2000-07-18 Hewlett-Packard Company Method and apparatus for specifying ink volume in an ink container
US6101273A (en) * 1995-10-31 2000-08-08 Fuji Photo Film Co., Ltd. Image reproducing method and apparatus
US6102508A (en) * 1996-09-27 2000-08-15 Hewlett-Packard Company Method and apparatus for selecting printer consumables
US6200738B1 (en) * 1998-10-29 2001-03-13 Konica Corporation Image forming method
US6370279B1 (en) * 1997-04-10 2002-04-09 Samsung Electronics Co., Ltd. Block-based image processing method and apparatus therefor
US6707940B1 (en) * 2000-03-31 2004-03-16 Intel Corporation Method and apparatus for image segmentation

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE682559A (en) 1965-06-16 1966-11-14
US3617282A (en) 1970-05-18 1971-11-02 Eastman Kodak Co Nucleating agents for photographic reversal processes
JPS5459343A (en) * 1977-10-20 1979-05-12 Green Cross Corp Food additives for supplying food and feed deficient in fiber substance
US4301469A (en) 1980-04-30 1981-11-17 United Technologies Corporation Run length encoder for color raster scanner
US4490729A (en) 1982-09-15 1984-12-25 The Mead Corporation Ink jet printer
US4607779A (en) * 1983-08-11 1986-08-26 National Semiconductor Corporation Non-impact thermocompression gang bonding method
JPS6089723A (en) 1983-10-21 1985-05-20 Canon Inc Color information detector
DE3581010D1 (en) 1984-07-09 1991-02-07 Sigma Corp DEVELOPMENT END POINT PROCEDURE.
US4623236A (en) 1985-10-31 1986-11-18 Polaroid Corporation Photographic processing composition applicator
CA1309166C (en) 1988-05-20 1992-10-20 Toshinobu Haruki Image sensing apparatus having automatic iris function of automatically adjusting exposure in response to video signal
US5267030A (en) 1989-12-22 1993-11-30 Eastman Kodak Company Method and associated apparatus for forming image data metrics which achieve media compatibility for subsequent imaging application
GB9100194D0 (en) 1991-01-05 1991-02-20 Ilford Ltd Roll film assembly
US5081692A (en) * 1991-04-04 1992-01-14 Eastman Kodak Company Unsharp masking using center weighted local variance for image sharpening and noise suppression
JP2654284B2 (en) 1991-10-03 1997-09-17 富士写真フイルム株式会社 Photo print system
JP2936085B2 (en) * 1991-11-19 1999-08-23 富士写真フイルム株式会社 Image data processing method and apparatus
US5266805A (en) 1992-05-05 1993-11-30 International Business Machines Corporation System and method for image recovery
US5371542A (en) 1992-06-23 1994-12-06 The United States Of America As Represented By The Secretary Of The Navy Dual waveband signal processing system
CA2093840C (en) 1992-07-17 1999-08-10 Albert D. Edgar Duplex film scanning
JPH07125902A (en) 1993-10-29 1995-05-16 Minolta Co Ltd Image printer
US5477345A (en) 1993-12-15 1995-12-19 Xerox Corporation Apparatus for subsampling chrominance
JPH07274004A (en) * 1994-03-29 1995-10-20 Dainippon Screen Mfg Co Ltd Sharpness emphasizing device for picture
JPH0877341A (en) 1994-08-29 1996-03-22 Xerox Corp Equipment and method for color image processing
US5587752A (en) 1995-06-05 1996-12-24 Eastman Kodak Company Camera, system and method for producing composite photographic image
US5695914A (en) 1995-09-15 1997-12-09 Eastman Kodak Company Process of forming a dye image
US5698382A (en) 1995-09-25 1997-12-16 Konica Corporation Processing method for silver halide color photographic light-sensitive material
US5845007A (en) * 1996-01-02 1998-12-01 Cognex Corporation Machine vision method and apparatus for edge-based image histogram analysis
AU727503B2 (en) * 1996-07-31 2000-12-14 Canon Kabushiki Kaisha Image filtering method and apparatus
US5691118A (en) 1996-10-10 1997-11-25 Eastman Kodak Company Color paper processing using two acidic stop solutions before and after bleaching
EP0917347A3 (en) * 1997-11-17 2000-12-13 Xerox Corporation Dynamically adjustable unsharp masking for digital image processing
JP2004517384A (en) * 2000-09-21 2004-06-10 アプライド・サイエンス・フィクション Dynamic image correction and image system
JP4281311B2 (en) * 2001-09-11 2009-06-17 セイコーエプソン株式会社 Image processing using subject information
ATE453205T1 (en) * 2001-10-10 2010-01-15 Applied Materials Israel Ltd METHOD AND DEVICE FOR AUTOMATIC IMAGE GENERATION SUITABLE FOR ALIGNMENT OF A CHARGED PARTICLE BEAM COLUMN

Patent Citations (99)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2404138A (en) * 1941-10-06 1946-07-16 Alvin L Mayer Apparatus for developing exposed photographic prints
US3250689A (en) * 1965-05-03 1966-05-10 Robert G Seyl Simplified method of measuring corrosion using reference electrode
US3520690A (en) * 1965-06-25 1970-07-14 Fuji Photo Film Co Ltd Process for controlling dye gradation in color photographic element
US3615498A (en) * 1967-07-29 1971-10-26 Fuji Photo Film Co Ltd Color developers containing substituted nbenzyl-p-aminophenol competing developing agents
US3615479A (en) * 1968-05-27 1971-10-26 Itek Corp Automatic film processing method and apparatus therefor
US3587435A (en) * 1969-04-24 1971-06-28 Pat P Chioffe Film processing machine
US3946398A (en) * 1970-06-29 1976-03-23 Silonics, Inc. Method and apparatus for recording with writing fluids and drop projection means therefor
US3747120A (en) * 1971-01-11 1973-07-17 N Stemme Arrangement of writing mechanisms for writing on paper with a coloredliquid
US3903541A (en) * 1971-07-27 1975-09-02 Meister Frederick W Von Apparatus for processing printing plates precoated on one side only
US3833161A (en) * 1972-02-08 1974-09-03 Bosch Photokino Gmbh Apparatus for intercepting and threading the leader of convoluted motion picture film or the like
US4081577A (en) * 1973-12-26 1978-03-28 American Hoechst Corporation Pulsed spray of fluids
US3959048A (en) * 1974-11-29 1976-05-25 Stanfield James S Apparatus and method for repairing elongated flexible strips having damaged sprocket feed holes along the edge thereof
US4026756A (en) * 1976-03-19 1977-05-31 Stanfield James S Apparatus for repairing elongated flexible strips having damaged sprocket feed holes along the edge thereof
US4745040A (en) * 1976-08-27 1988-05-17 Levine Alfred B Method for destructive electronic development of photo film
US4777102A (en) * 1976-08-27 1988-10-11 Levine Alfred B Method and apparatus for electronic development of color photographic film
US4142107A (en) * 1977-06-30 1979-02-27 International Business Machines Corporation Resist development control system
US4249985A (en) * 1979-03-05 1981-02-10 Stanfield James S Pressure roller for apparatus useful in repairing sprocket holes on strip material
US4215927A (en) * 1979-04-13 1980-08-05 Scott Paper Company Lithographic plate processing apparatus
US4265545A (en) * 1979-07-27 1981-05-05 Intec Corporation Multiple source laser scanning inspection system
US4501480A (en) * 1981-10-16 1985-02-26 Pioneer Electronic Corporation System for developing a photo-resist material used as a recording medium
US4594598A (en) * 1982-10-26 1986-06-10 Sharp Kabushiki Kaisha Printer head mounting assembly in an ink jet system printer
US4564280A (en) * 1982-10-28 1986-01-14 Fujitsu Limited Method and apparatus for developing resist film including a movable nozzle arm
US4670779A (en) * 1984-01-10 1987-06-02 Sharp Kabushiki Kaisha Color-picture analyzing apparatus with red-purpose and green-purpose filters
US4666307A (en) * 1984-01-19 1987-05-19 Fuji Photo Film Co., Ltd. Method for calibrating photographic image information
US4755844A (en) * 1985-04-30 1988-07-05 Kabushiki Kaisha Toshiba Automatic developing device
US4845551A (en) * 1985-05-31 1989-07-04 Fuji Photo Film Co., Ltd. Method for correcting color photographic image data on the basis of calibration data read from a reference film
US4636808A (en) * 1985-09-09 1987-01-13 Eastman Kodak Company Continuous ink jet printer
US4736221A (en) * 1985-10-18 1988-04-05 Fuji Photo Film Co., Ltd. Method and device for processing photographic film using atomized liquid processing agents
US4796061A (en) * 1985-11-16 1989-01-03 Dainippon Screen Mfg. Co., Ltd. Device for detachably attaching a film onto a drum in a drum type picture scanning recording apparatus
US4821114A (en) * 1986-05-02 1989-04-11 Dr. Ing. Rudolf Hell Gmbh Opto-electronic scanning arrangement
US4741621A (en) * 1986-08-18 1988-05-03 Westinghouse Electric Corp. Geometric surface inspection system with dual overlap light stripe generator
US4814630A (en) * 1987-06-29 1989-03-21 Ncr Corporation Document illuminating apparatus using light sources A, B, and C in periodic arrays
US4875067A (en) * 1987-07-23 1989-10-17 Fuji Photo Film Co., Ltd. Processing apparatus
US5034767A (en) * 1987-08-28 1991-07-23 Hanetz International Inc. Development system
US4857430A (en) * 1987-12-17 1989-08-15 Texas Instruments Incorporated Process and system for determining photoresist development endpoint by effluent analysis
US4851311A (en) * 1987-12-17 1989-07-25 Texas Instruments Incorporated Process for determining photoresist develop time by optical transmission
US4994918A (en) * 1989-04-28 1991-02-19 Bts Broadcast Television Systems Gmbh Method and circuit for the automatic correction of errors in image steadiness during film scanning
US5027146A (en) * 1989-08-31 1991-06-25 Eastman Kodak Company Processing apparatus
US5101286A (en) * 1990-03-21 1992-03-31 Eastman Kodak Company Scanning film during the film process for output to a video monitor
US5196285A (en) * 1990-05-18 1993-03-23 Xinix, Inc. Method for control of photoresist develop processes
US5292605A (en) * 1990-05-18 1994-03-08 Xinix, Inc. Method for control of photoresist develop processes
US5124216A (en) * 1990-07-31 1992-06-23 At&T Bell Laboratories Method for monitoring photoresist latent images
US5231439A (en) * 1990-08-03 1993-07-27 Fuji Photo Film Co., Ltd. Photographic film handling method
US5416550A (en) * 1990-09-14 1995-05-16 Eastman Kodak Company Photographic processing apparatus
US5212512A (en) * 1990-11-30 1993-05-18 Fuji Photo Film Co., Ltd. Photofinishing system
US5155596A (en) * 1990-12-03 1992-10-13 Eastman Kodak Company Film scanner illumination system having an automatic light control
US5296923A (en) * 1991-01-09 1994-03-22 Konica Corporation Color image reproducing device and method
US5452018A (en) * 1991-04-19 1995-09-19 Sony Electronics Inc. Digital color correction system having gross and fine adjustment modes
US5391443A (en) * 1991-07-19 1995-02-21 Eastman Kodak Company Process for the extraction of spectral image records from dye image forming photographic elements
US5334247A (en) * 1991-07-25 1994-08-02 Eastman Kodak Company Coater design for low flowrate coating applications
US5235352A (en) * 1991-08-16 1993-08-10 Compaq Computer Corporation High density ink jet printhead
US5200817A (en) * 1991-08-29 1993-04-06 Xerox Corporation Conversion of an RGB color scanner into a colorimetric scanner
US5436738A (en) * 1992-01-22 1995-07-25 Eastman Kodak Company Three dimensional thermal internegative photographic printing apparatus and method
US5255408A (en) * 1992-02-11 1993-10-26 Eastman Kodak Company Photographic film cleaner
US5496669A (en) * 1992-07-01 1996-03-05 Interuniversitair Micro-Elektronica Centrum Vzw System for detecting a latent image using an alignment apparatus
US5519510A (en) * 1992-07-17 1996-05-21 International Business Machines Corporation Electronic film development
US5418597A (en) * 1992-09-14 1995-05-23 Eastman Kodak Company Clamping arrangement for film scanning apparatus
US5447811A (en) * 1992-09-24 1995-09-05 Eastman Kodak Company Color image reproduction of scenes with preferential tone mapping
US5357307A (en) * 1992-11-25 1994-10-18 Eastman Kodak Company Apparatus for processing photosensitive material
US5568270A (en) * 1992-12-09 1996-10-22 Fuji Photo Film Co., Ltd. Image reading apparatus which varies reading time according to image density
US5350651A (en) * 1993-02-12 1994-09-27 Eastman Kodak Company Methods for the retrieval and differentiation of blue, green and red exposure records of the same hue from photographic elements containing absorbing interlayers
US5350664A (en) * 1993-02-12 1994-09-27 Eastman Kodak Company Photographic elements for producing blue, green, and red exposure records of the same hue and methods for the retrieval and differentiation of the exposure records
US5546477A (en) * 1993-03-30 1996-08-13 Klics, Inc. Data compression and decompression
US5596415A (en) * 1993-06-14 1997-01-21 Eastman Kodak Company Iterative predictor-based detection of image frame locations
US5414779A (en) * 1993-06-14 1995-05-09 Eastman Kodak Company Image frame detection
US5550566A (en) * 1993-07-15 1996-08-27 Media Vision, Inc. Video capture expansion card
US5418119A (en) * 1993-07-16 1995-05-23 Eastman Kodak Company Photographic elements for producing blue, green and red exposure records of the same hue
US5448380A (en) * 1993-07-31 1995-09-05 Samsung Electronics Co., Ltd. color image processing method and apparatus for correcting a color signal from an input image device
US5440365A (en) * 1993-10-14 1995-08-08 Eastman Kodak Company Photosensitive material processor
US5729631A (en) * 1993-11-30 1998-03-17 Polaroid Corporation Image noise reduction system using a wiener variant filter in a pyramid image representation
US5552904A (en) * 1994-01-31 1996-09-03 Samsung Electronics Co., Ltd. Color correction method and apparatus using adaptive region separation
US5516608A (en) * 1994-02-28 1996-05-14 International Business Machines Corporation Method for controlling a line dimension arising in photolithographic processes
US5678116A (en) * 1994-04-06 1997-10-14 Dainippon Screen Mfg. Co., Ltd. Method and apparatus for drying a substrate having a resist film with a miniaturized pattern
US5790277A (en) * 1994-06-08 1998-08-04 International Business Machines Corporation Duplex film scanning
US5739897A (en) * 1994-08-16 1998-04-14 Gretag Imaging Ag Method and system for creating index prints on and/or with a photographic printer
US5726773A (en) * 1994-11-29 1998-03-10 Carl-Zeiss-Stiftung Apparatus for scanning and digitizing photographic image objects and method of operating said apparatus
US6065824A (en) * 1994-12-22 2000-05-23 Hewlett-Packard Company Method and apparatus for storing information on a replaceable ink container
US5771107A (en) * 1995-01-11 1998-06-23 Mita Industrial Co., Ltd. Image processor with image edge emphasizing capability
US5563717A (en) * 1995-02-03 1996-10-08 Eastman Kodak Company Method and means for calibration of photographic media using pre-exposed miniature images
US5649260A (en) * 1995-06-26 1997-07-15 Eastman Kodak Company Automated photofinishing apparatus
US5880819A (en) * 1995-06-29 1999-03-09 Fuji Photo Film Co., Ltd. Photographic film loading method, photographic film conveying apparatus, and image reading apparatus
US5664253A (en) * 1995-09-12 1997-09-02 Eastman Kodak Company Stand alone photofinishing apparatus
US5667944A (en) * 1995-10-25 1997-09-16 Eastman Kodak Company Digital process sensitivity correction
US6101273A (en) * 1995-10-31 2000-08-08 Fuji Photo Film Co., Ltd. Image reproducing method and apparatus
US5892595A (en) * 1996-01-26 1999-04-06 Ricoh Company, Ltd. Image reading apparatus for correct positioning of color component values of each picture element
US5627016A (en) * 1996-02-29 1997-05-06 Eastman Kodak Company Method and apparatus for photofinishing photosensitive film
US5959720A (en) * 1996-03-22 1999-09-28 Eastman Kodak Company Method for color balance determination
US5870172A (en) * 1996-03-29 1999-02-09 Blume; Stephen T. Apparatus for producing a video and digital image directly from dental x-ray film
US5664255A (en) * 1996-05-29 1997-09-02 Eastman Kodak Company Photographic printing and processing apparatus
US5771318A (en) * 1996-06-27 1998-06-23 Siemens Corporate Research, Inc. Adaptive edge-preserving smoothing filter
US6102508A (en) * 1996-09-27 2000-08-15 Hewlett-Packard Company Method and apparatus for selecting printer consumables
US5930388A (en) * 1996-10-24 1999-07-27 Sharp Kabushiki Kaisha Color image processing apparatus
US6069714A (en) * 1996-12-05 2000-05-30 Applied Science Fiction, Inc. Method and apparatus for reducing noise in electronic film development
US6370279B1 (en) * 1997-04-10 2002-04-09 Samsung Electronics Co., Ltd. Block-based image processing method and apparatus therefor
US5867606A (en) * 1997-08-12 1999-02-02 Hewlett-Packard Company Apparatus and method for determining the appropriate amount of sharpening for an image
US6088084A (en) * 1997-10-17 2000-07-11 Fuji Photo Film Co., Ltd. Original carrier and image reader
US6089687A (en) * 1998-03-09 2000-07-18 Hewlett-Packard Company Method and apparatus for specifying ink volume in an ink container
US6200738B1 (en) * 1998-10-29 2001-03-13 Konica Corporation Image forming method
US6707940B1 (en) * 2000-03-31 2004-03-16 Intel Corporation Method and apparatus for image segmentation

Cited By (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7016080B2 (en) * 2000-09-21 2006-03-21 Eastman Kodak Company Method and system for improving scanned image detail
US20020103006A1 (en) * 2001-01-31 2002-08-01 Steven Doe Liquid crystal display device
US20040066458A1 (en) * 2002-07-12 2004-04-08 Hiroyuki Kawamura Imaging system
US20040051794A1 (en) * 2002-09-12 2004-03-18 Pentax Corporation Filter process
US7683944B2 (en) * 2002-09-12 2010-03-23 Hoya Corporation Filter process for obtaining a soft focus picture image
EP1443459A2 (en) * 2003-02-03 2004-08-04 Noritsu Koki Co., Ltd. Image processing method and apparatus for correcting photographic images
US20040184672A1 (en) * 2003-02-03 2004-09-23 Kenji Murakami Image processing method and apparatus for correcting photographic images
EP1443459A3 (en) * 2003-02-03 2004-09-29 Noritsu Koki Co., Ltd. Image processing method and apparatus for correcting photographic images
US20050057484A1 (en) * 2003-09-15 2005-03-17 Diefenbaugh Paul S. Automatic image luminance control with backlight adjustment
US20050286794A1 (en) * 2004-06-24 2005-12-29 Apple Computer, Inc. Gaussian blur approximation suitable for GPU
US7397964B2 (en) * 2004-06-24 2008-07-08 Apple Inc. Gaussian blur approximation suitable for GPU
US20060182363A1 (en) * 2004-12-21 2006-08-17 Vladimir Jellus Method for correcting inhomogeneities in an image, and an imaging apparatus therefor
US7672498B2 (en) * 2004-12-21 2010-03-02 Siemens Aktiengesellschaft Method for correcting inhomogeneities in an image, and an imaging apparatus therefor
US20060232823A1 (en) * 2005-04-13 2006-10-19 Hooper David S Image contrast enhancement
US8014034B2 (en) 2005-04-13 2011-09-06 Acd Systems International Inc. Image contrast enhancement
US8228560B2 (en) 2005-04-13 2012-07-24 Acd Systems International Inc. Image contrast enhancement
US20070036456A1 (en) * 2005-04-13 2007-02-15 Hooper David S Image contrast enhancement
US8928947B2 (en) 2005-04-13 2015-01-06 Acd Systems International Inc. Image contrast enhancement
US20060285164A1 (en) * 2005-06-21 2006-12-21 Chun-Yi Wang Method for Processing Multi-layered Image Data
US7769244B2 (en) * 2005-09-05 2010-08-03 Algosoft-Tech Usa, Llc. Automatic digital film and video restoration
US20080170801A1 (en) * 2005-09-05 2008-07-17 Algosoft Limited Automatic digital film and video restoration
US20080031548A1 (en) * 2006-03-23 2008-02-07 Intelliscience Corporation Systems and methods for data transformation
US20100017353A1 (en) * 2006-03-23 2010-01-21 Intelliscience Corporation Methods and systems for data analysis and feature recognition
US8625885B2 (en) 2006-03-23 2014-01-07 Intelliscience Corporation Methods and systems for data analysis and feature recognition
US20070244844A1 (en) * 2006-03-23 2007-10-18 Intelliscience Corporation Methods and systems for data analysis and feature recognition
US20080033984A1 (en) * 2006-04-10 2008-02-07 Intelliscience Corporation Systems and methods for data point processing
WO2008022222A3 (en) * 2006-08-15 2008-08-21 Intelliscience Corp Systems and methods for data transformation
WO2008022222A2 (en) * 2006-08-15 2008-02-21 Intelliscience Corporation Systems and methods for data transformation
US8175992B2 (en) 2008-03-17 2012-05-08 Intelliscience Corporation Methods and systems for compound feature creation, processing, and identification in conjunction with a data analysis and feature recognition system wherein hit weights are summed
US8400280B2 (en) * 2008-05-27 2013-03-19 Samsung Electronics Co., Ltd. Display tag, display tag system having display tag, and method for writing display tag information
US20090295549A1 (en) * 2008-05-27 2009-12-03 Samsung Electronics Co., Ltd. Display tag, display tag system having display tag, and method for writing display tag information
CN101593267A (en) * 2008-05-27 2009-12-02 三星电子株式会社 The method of display label, display tag system and writing display tag information
US9305337B2 (en) 2008-09-04 2016-04-05 Lattice Semiconductor Corporation System, method, and apparatus for smoothing of edges in images to remove irregularities
US8559746B2 (en) 2008-09-04 2013-10-15 Silicon Image, Inc. System, method, and apparatus for smoothing of edges in images to remove irregularities
US20100202262A1 (en) * 2009-02-10 2010-08-12 Anchor Bay Technologies, Inc. Block noise detection and filtering
US8452117B2 (en) * 2009-02-10 2013-05-28 Silicon Image, Inc. Block noise detection and filtering
US8891897B2 (en) 2009-02-10 2014-11-18 Silicon Image, Inc. Block noise detection and filtering
US8274583B2 (en) 2009-06-05 2012-09-25 Apple Inc. Radially-based chroma noise reduction for cameras
US8284271B2 (en) * 2009-06-05 2012-10-09 Apple Inc. Chroma noise reduction for cameras
US20100309345A1 (en) * 2009-06-05 2010-12-09 Apple Inc. Radially-Based Chroma Noise Reduction for Cameras
US20100309344A1 (en) * 2009-06-05 2010-12-09 Apple Inc. Chroma noise reduction for cameras
US20110242367A1 (en) * 2010-03-31 2011-10-06 Samsung Electronics Co., Ltd. Image processing method and photographing apparatus using the same
US8681244B2 (en) * 2010-03-31 2014-03-25 Samsung Electronics Co., Ltd Image processing method using blurring and photographing apparatus using the same
WO2013078182A1 (en) * 2011-11-21 2013-05-30 Georgetown University System and method for enhancing the legibility of degraded images
US8995782B2 (en) 2011-11-21 2015-03-31 Georgetown University System and method for enhancing the legibility of degraded images
US9361676B2 (en) 2011-11-21 2016-06-07 Georgetown University System and method for enhancing the legibility of degraded images
KR101683689B1 (en) 2011-12-16 2016-12-07 지멘스 악티엔게젤샤프트 Method to create an mr image of an examination subject with a magnetic resonance system, as well as a corresponding magnetic resonance system
KR20130069494A (en) * 2011-12-16 2013-06-26 지멘스 악티엔게젤샤프트 Method to create an mr image of an examination subject with a magnetic resonance system, as well as a corresponding magnetic resonance system
US9297871B2 (en) 2011-12-16 2016-03-29 Siemens Aktiengesellschaft Magnetic resonance system and method to generate a magnetic resonance image of an examination subject
US20130230244A1 (en) * 2012-03-02 2013-09-05 Chintan Intwala Continuously Adjustable Bleed for Selected Region Blurring
US9019310B2 (en) 2012-03-02 2015-04-28 Adobe Systems Incorporated Methods and apparatus for applying complex continuous gradients to images
US8693776B2 (en) * 2012-03-02 2014-04-08 Adobe Systems Incorporated Continuously adjustable bleed for selected region blurring
US8831371B2 (en) 2012-03-02 2014-09-09 Adobe Systems Incorporated Methods and apparatus for applying blur patterns to images
US8824793B2 (en) 2012-03-02 2014-09-02 Adobe Systems Incorporated Methods and apparatus for applying a bokeh effect to images
CN103679656A (en) * 2013-10-21 2014-03-26 厦门美图网科技有限公司 Automatic image sharpening method
US20160078601A1 (en) * 2014-09-12 2016-03-17 Tmm, Inc. Image upsampling using local adaptive weighting
US9600868B2 (en) * 2014-09-12 2017-03-21 Tmm, Inc. Image upsampling using local adaptive weighting
US10510153B1 (en) * 2017-06-26 2019-12-17 Amazon Technologies, Inc. Camera-level image processing
US10580149B1 (en) * 2017-06-26 2020-03-03 Amazon Technologies, Inc. Camera-level image processing
US20230200639A1 (en) * 2021-12-27 2023-06-29 Novasight Ltd. Method and device for treating / preventing refractive errors as well as for image processing and display
US11918287B2 (en) * 2021-12-27 2024-03-05 Novasight Ltd. Method and device for treating / preventing refractive errors as well as for image processing and display

Also Published As

Publication number Publication date
US7016080B2 (en) 2006-03-21
WO2002025928A2 (en) 2002-03-28
JP2004517384A (en) 2004-06-10
AU2001294669A1 (en) 2002-04-02
US20020126327A1 (en) 2002-09-12
TW538382B (en) 2003-06-21
CN1474997A (en) 2004-02-11
EP1323292A2 (en) 2003-07-02
WO2002025928A3 (en) 2003-01-16

Similar Documents

Publication Publication Date Title
US20020176113A1 (en) Dynamic image correction and imaging systems
US6822762B2 (en) Local color correction
EP0398861B1 (en) Method for adaptively sharpening electronic images
US8363123B2 (en) Image pickup apparatus, color noise reduction method, and color noise reduction program
Mann Comparametric equations with practical applications in quantigraphic image processing
US7302110B2 (en) Image enhancement methods and apparatus therefor
US7068853B2 (en) Tone scale adjustment of digital images
US6366318B1 (en) CFA correction for CFA images captured at partial resolution
US7769241B2 (en) Method of sharpening using panchromatic pixels
US6792160B2 (en) General purpose image enhancement algorithm which augments the visual perception of detail in digital images
EP1139284B1 (en) Method and apparatus for performing local color correction
JPH0922460A (en) Image processing method and device therefor
JP2003134352A (en) Image processing method and apparatus, and program therefor
JP2007096509A (en) Image processing apparatus and image processing method
JP4600424B2 (en) Development processing apparatus for undeveloped image data, development processing method, and computer program for development processing
JP2011228807A (en) Image processing program, image processing apparatus, and image processing method
JPH10214339A (en) Picture filtering method
US20060056722A1 (en) Edge preserving method and apparatus for image processing
US6384937B1 (en) Image processing method and apparatus
JP3729118B2 (en) Image processing method, image processing apparatus, image processing program, and computer-readable recording medium recording the same
JP2020145553A (en) Image processing apparatus, image processing method and program
JP4032200B2 (en) Image data interpolation method, image data interpolation device, and computer readable recording medium recording image data interpolation program
US5633734A (en) Method and apparatus for modifying a fluorescent portion of a digital image
JPH08223425A (en) Image processing method and its device
US20040012799A1 (en) Method for making an exposure adjustment on a rendered image

Legal Events

Date Code Title Description

AS Assignment
Owner name: APPLIED SCIENCE FICTION, INC., TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EDGAR, ALBERT D.;REEL/FRAME:012562/0417
Effective date: 20011115

AS Assignment
Owner name: RHO VENTURES (QP), L.P., NEW YORK
Free format text: SECURITY INTEREST;ASSIGNOR:APPLIED SCIENCE FICTION, INC.;REEL/FRAME:012997/0113
Effective date: 20020723

Owner name: CENTERPOINT VENTURE PARTNERS, L.P., TEXAS
Free format text: SECURITY AGREEMENT;ASSIGNOR:APPLIED SCIENCE FICTION, INC.;REEL/FRAME:012997/0211
Effective date: 20020723

Owner name: CENTERPOINT VENTURE PARTNERS, L.P., TEXAS
Free format text: SECURITY INTEREST;ASSIGNOR:APPLIED SCIENCE FICTION, INC.;REEL/FRAME:012997/0113
Effective date: 20020723

Owner name: RHO VENTURES (QP), L.P., NEW YORK
Free format text: SECURITY AGREEMENT;ASSIGNOR:APPLIED SCIENCE FICTION, INC.;REEL/FRAME:012997/0211
Effective date: 20020723

AS Assignment
Owner name: CENTERPOINT VENTURE PARTNERS, L.P., TEXAS
Free format text: SECURITY AGREEMENT;ASSIGNOR:APPLIED SCIENCE FICTION, INC.;REEL/FRAME:013506/0065
Effective date: 20030213

Owner name: RHO VENTURES (QP), L.P., NEW YORK
Free format text: SECURITY AGREEMENT;ASSIGNOR:APPLIED SCIENCE FICTION, INC.;REEL/FRAME:013506/0065
Effective date: 20030213

AS Assignment
Owner name: EASTMAN KODAK COMPANY, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:APPLIED SCIENCE FICTION, INC.;REEL/FRAME:014293/0774
Effective date: 20030521

STCB Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION