US20080298719A1 - Sub-resolution alignment of images - Google Patents

Sub-resolution alignment of images

Info

Publication number
US20080298719A1
Authority
US
United States
Prior art keywords
image
images
integrated circuit
acquired
gray levels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/144,495
Inventor
Madhumita Sengupta
Mamta Sinha
Theodore R. Lundquist
William Thompson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FEI EFA Inc
Original Assignee
DCG Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DCG Systems Inc filed Critical DCG Systems Inc
Priority to US12/144,495
Publication of US20080298719A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T7/0004: Industrial image inspection
    • G06T7/001: Industrial image inspection using an image reference approach
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32: Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/24: Aligning, centring, orientation detection or correction of the image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30108: Industrial image inspection
    • G06T2207/30148: Semiconductor; IC; Wafer

Definitions

  • the present application relates to sub-resolution alignment of images, for example, such as used in probing and editing integrated circuits.
  • An integrated circuit (“IC”) integrates a large number of electronic circuit elements on a single semiconductor substrate with high density: today's technology allows a minimum feature size on the order of 0.1 micron.
  • circuit elements can be probed and edited.
  • LVP: laser voltage probing
  • FIB: focused ion beam
  • a circuit element first is located on the substrate of an IC under test.
  • this step includes aligning corresponding features of two different images of the IC under test.
  • the first image can be an acquired image that describes the actual position of the circuit.
  • the second image can be derived from a computer-aided design (“CAD”) image that lays out the complicated map of circuit elements.
  • CAD: computer-aided design
  • a CAD image is an ideal representation of the IC and typically is generated by a human operator using a CAD system. Once the acquired image is aligned, or registered, with the CAD image, a conventional system can navigate, that is, steer, an IC probing device to a circuit element to be probed.
  • an IC can be imaged, for example, by infrared (“IR”) light.
  • IR light can image the IC from the silicon side, i.e., through the substrate.
  • silicon side imaging may use IR light with a wavelength of about one micron.
  • Using an IR wavelength of about one micron results in an acquired image of roughly the same resolution as the wavelength of the IR light used for imaging. That is, the resulting IR image has a resolution of about one micron.
  • Such an IR image typically cannot adequately be used to resolve sub-resolution features, i.e., circuit elements that are smaller than the IR wavelength.
  • an attempt can be made to align an IR image with the corresponding CAD image with sub-resolution accuracy.
  • a human operator can try to align an IR image with a CAD image visually.
  • This method typically gives an optimal accuracy of about one micron, which is essentially the same as the resolution of the IR image, and is typically insufficient for LVP or FIB editing.
  • To align IR and CAD images with sufficient accuracy, one can try standard alignment techniques, such as intensity correlation, edge detection, or binary correlation algorithms. These techniques tend to give limited accuracy as well, because IR images may be distorted by light diffraction and other optical effects. Alignment of an IR and a CAD image may be further complicated by substantial intensity variations. Intensity in the IR image can depend on several parameters, including the thickness and reflectivity of different layers. Furthermore, IR images may have optical ghosts that may cause an alignment method to produce incorrect results.
  • the present inventors discovered techniques for aligning images with sub-resolution precision.
  • An implementation of such techniques aligns features in a high-resolution CAD image of an IC design with corresponding features in a lower-resolution acquired (e.g., IR) image of the actual IC.
  • Implementations of systems and techniques for achieving sub-resolution alignment of images may include various combinations of the following features.
  • a plurality of images, including a first image and a second image having a higher resolution than the first image, may be aligned by generating an oversampled cross correlation image.
  • the oversampled cross correlation image corresponds to relative displacements of the first and second images.
  • an offset value can be determined. The offset value corresponds to a misalignment of the first and second images.
  • images, including a first image and a second image having a higher resolution than the first image, may be aligned by achieving a sub-resolution alignment of the first and second images.
  • the sub-resolution alignment may be achieved by performing a cross correlation of the images and a frequency domain interpolation of the images.
  • integrated circuit devices may be inspected by obtaining a computer-generated representation of a physical layout of an integrated circuit design, and acquiring an image of an integrated circuit device corresponding to the integrated circuit design.
  • the acquired image may have a resolution that is lower than a resolution of the computer-generated representation.
  • An oversampled cross correlation image can be generated.
  • the oversampled cross correlation image corresponds to displacements of the computer-generated representation and the acquired image.
  • an offset value can be determined.
  • the determined offset value corresponds to a misalignment of the computer-generated representation and the acquired image.
  • the computer-generated representation and the acquired image can be aligned based on the determined offset value.
  • the integrated circuit device can be probed based on a result of the alignment.
  • alignment of a plurality of images can be facilitated by pre-processing the second image to optimize one or more properties of the second image, and generating an oversampled cross correlation image.
  • the oversampled cross correlation image corresponds to displacements of the first and second images.
  • an offset value can be determined. The determined offset value corresponds to a misalignment of the first and second images.
  • the first and second images can be aligned. Alignment of the first and second images can be achieved to a precision greater than the resolution of the first image. After aligning the first and second images, another iteration of generating an oversampled cross correlation image and determining an offset value for the first and second images can be performed.
  • Generating the oversampled cross correlation image can include generating a cross correlation image.
  • the cross correlation image corresponds to relative displacements of the first and second images.
  • the cross correlation image can be oversampled to generate the oversampled cross correlation image.
  • Oversampling the cross correlation image can include generating sub-pixel points for the oversampled cross correlation image.
  • frequencies missing from the cross correlation image can be excluded. Excluding frequencies can include using a zero padding technique to set high frequency components to zero.
  • Generating sub-pixel points can use a spatial domain technique. At least one of the first and second images can be oversampled; then, the first and second images can be cross-correlated to generate the oversampled cross correlation image.
  • the first and second images can represent a common object.
  • the common object can include a physical layout of an integrated circuit. Based on the determined offset value, an apparatus can be navigated to a specified point on the integrated circuit with a precision greater than the resolution of the first image.
  • the integrated circuit can be probed at the specified point.
  • the integrated circuit can be probed with laser voltage probing.
  • the integrated circuit can be edited at the specified point.
  • the integrated circuit can be edited with focused ion beam.
  • the common object can represent a voltage contrast of an integrated circuit.
  • At least one of the first and second images can include an acquired image.
  • the acquired image can be an acquired image of an integrated circuit.
  • the acquired image can be acquired from the silicon side or the front side of the integrated circuit.
  • the acquired image can be an optically acquired image.
  • the optically acquired image can be an infrared image.
  • the acquired image can be a voltage contrast image, a scanning electron microscope image, a FIB image, or can be acquired by an electron beam prober.
  • the second image can include an ideal reference image.
  • the ideal reference image can be an image of an integrated circuit generated by a computer-aided design system.
  • Generating the oversampled cross correlation image can include calculating correlation values that characterize relative displacements and corresponding overlaps of the first and second images.
  • the correlation values can be calculated using Fast Fourier Transform techniques.
  • Determining the offset value can include determining a location of a maximum correlation value between the first and second images. The maximum correlation value can be used as a confidence factor characterizing confidence in the offset value.
  • Pre-processing can include one or more of adjusting rotation, adjusting magnification, adjusting intensity, and filtering.
  • Adjusting rotation can include calculating angular mismatch between the first and the second image.
  • Magnification can be adjusted using a 3-point alignment technique.
  • Adjusting intensity can include normalizing intensities of the first and second images by a histogram equalization technique.
  • Intensity can be adjusted by matching gray-scale levels in corresponding regions of the first and second images.
  • Filtering can include applying a low pass filter to the second image.
  • Filtering can include filtering with a point spread function. The point spread function can simulate optical ghosting in one of the first and second images.
  • a second oversampled cross correlation image can be generated.
  • the second oversampled cross correlation image corresponds to relative displacements of the second image and a third image.
  • the third image can be aligned with the first and second images based on the second oversampled cross correlation image.
  • an image alignment system can include an image acquisition system, a pre-processor, a cross correlator, an interpolator, and an alignment component.
  • the image acquisition system can be capable of acquiring a first image of an object.
  • the pre-processor can be configured to optimize properties of a second image of the object.
  • the second image may have a greater resolution than the first image.
  • the cross correlator can be configured to generate a cross correlation image corresponding to displacements of the first image and the pre-processed second image.
  • the interpolator can be configured to determine, based on the cross correlation image, an offset value corresponding to a misalignment of the first and second images.
  • the alignment component can be configured to align the first and second images based on the determined offset value.
  • the object can be an integrated circuit.
  • the image acquisition system can include an infrared imaging device and/or a focused ion beam device.
  • the pre-processor can be configured to perform one or more of the following operations: rotation adjustment, magnification adjustment, intensity adjustment and filtering.
  • the pre-processor can be configured to perform filtering based on a point spread function.
  • One or more of the following can be implemented in software: the pre-processor, the cross-correlator, the interpolator and the alignment component.
  • the alignment component can include elements to digitally align the first and second images.
  • the interpolator can include an oversampler configured to oversample the cross correlation image.
  • a lower-resolution image can be aligned with a higher-resolution image so that the alignment has an accuracy that exceeds the lower resolution.
  • the lower-resolution image can be a FIB image, an electron beam image, or an optical, e.g., IR image;
  • the higher-resolution image can be a computer generated, e.g., a CAD image, or an acquired image, e.g., a FIB image.
  • An IR image can be used to locate a circuit element of an IC, even if the circuit element is smaller than the resolution of the IR image.
  • the images can be aligned with sub-resolution precision, if one or both images are distorted by optical effects, e.g., by optical ghosts. Alignment of the images can be automated even when the images have different resolutions or distortions.
  • FIG. 1 shows a block diagram of a system capable of aligning an optical image with a CAD image with sub-resolution accuracy.
  • FIG. 2 is a flowchart that shows a method for aligning two images in accordance with an implementation of the application.
  • FIG. 3 is a flowchart showing a method for pre-processing images in an implementation of the application.
  • FIG. 4A shows an example CAD image.
  • FIG. 4B shows an example optical image corresponding to the CAD image in FIG. 4A .
  • FIG. 4C shows an alignment of the images in FIGS. 4A and 4B .
  • FIG. 5 is a graph of a point spread function.
  • FIG. 6A shows an example CAD image.
  • FIG. 6B shows an example optical image corresponding to the CAD image in FIG. 6A .
  • FIG. 6C shows an alignment of the images in FIGS. 6A and 6B .
  • FIG. 7A shows a graph of an offset value calculation with an unfiltered CAD image.
  • FIG. 7B shows a graph of an offset value calculation with a filtered CAD image.
  • FIG. 1 shows a block diagram of an alignment system 100 for aligning, with sub-resolution accuracy, an optical image and a CAD image. More generally, the alignment system 100 aligns features in a higher-resolution reference image, e.g., a CAD image of an integrated circuit 175 , with corresponding features in a lower-resolution acquired image, e.g., an optical image of the IC 175 .
  • the IC 175 can be held by a sample holder 170 , and imaged using IR light 165 by an optical system 160 , such as the Schlumberger IDS OptFIB integrated diagnostic system.
  • the IC 175 can have a flip-chip design.
  • the optical system 160 can image the IC 175 from the silicon side, i.e., through the substrate. Alternatively, the IC 175 can be imaged from the front side where circuit elements are exposed.
  • the alignment system 100 receives the optical image from the optical system 160 , and the CAD image from a CAD storage device 105 . Furthermore, after aligning the optical and CAD images, the alignment system 100 can provide information to a controller 150 to locate a circuit element of the IC 175 .
  • the controller 150 may manipulate the sample holder 170 to move the IC 175 into a desired position, or the optical system to image the IC 175 in a desired way.
  • the controller 150 can be implemented in a microprocessor-controlled device.
  • the alignment system 100 can be implemented, e.g., as computer software running on a computer. As explained with reference to FIG. 2 , the alignment system 100 uses a pre-processor 110 , a cross correlator 120 , an oversampler 130 , and a maximum calculator 140 to align acquired images with reference images.
  • a method 200 can align two different images of the same object including, e.g., a reference image and an acquired image.
  • the reference image has a higher resolution than the acquired image.
  • the reference image can be a theoretical, or ideal, image, such as a CAD image of an IC, such as shown in FIG. 4A .
  • the acquired image can be a lower-resolution measured image such as a FIB image, an image measured by an electron-beam prober, a voltage contrast image, or an optical image, such as an IR or a visible light image of the IC.
  • the reference image can be an acquired image with higher resolution, such as a high resolution FIB, voltage contrast, or scanning electron microscope image.
  • One or both of the two images may be distorted by optical effects or other noise that deteriorates image quality.
  • the method 200 can align images having optical ghosts.
  • the method 200 first pre-processes the images to decrease the differences between the two images ( 210 ).
  • the pre-processor 110 pre-processes the optical image or the CAD image by rotation adjustment 112 , magnification adjustment 114 , intensity adjustment 116 , and/or filtering 118 . Methods for using the components 112 - 118 are discussed with reference to FIGS. 3-6C .
  • a cross correlation image is generated from the two images ( 220 ).
  • the cross correlator 120 can produce a cross correlation image from the pre-processed optical and CAD images. Examples of determining the cross correlation of images are shown in FIGS. 7A-7B .
  • the cross correlation image is oversampled, i.e., interpolated, to obtain sub-resolution accuracy ( 230 ).
  • the cross correlation image is oversampled by the oversampler 130 .
  • an offset value is calculated to characterize the misalignment of the two images ( 240 ).
  • the maximum calculator 140 calculates an offset value that maximizes the correlation between the optical and CAD images of the IC 175 .
  • an interpolator can calculate an offset value directly from the cross correlation image.
  • the two images are aligned (step 250 ); for example, the two images can be aligned digitally by the alignment system 100 .
  • the controller 150 can navigate the optical system 160 , the sample holder 170 , or any other, e.g., LVP or electron-beam, probing device to a particular circuit element of the IC 175 .
  • FIG. 3 is a flowchart showing details of pre-processing images ( 210 ).
  • pre-processing can include adjustments of rotation ( 312 ), magnification ( 314 ), intensity ( 316 ), and resolution ( 318 ). As discussed below in detail, these adjustments can be performed, e.g., by the pre-processing device 110 , or, alternatively, by a human operator. If one of the two images is an optical image of the integrated circuit 175 , the controller 150 can instruct the optical system 160 or the sample holder 170 to perform operations as part of the pre-processing adjustments of the optical image.
  • preprocessing starts with a rotation adjustment to adjust the relative orientation of the two images in order to correct orientation mismatches ( 312 ).
  • Rotation adjustment can include computing an angle θ to characterize the difference between angular orientations of the two images.
  • the angle computation can be performed, e.g., by the rotation adjustment component 112 .
  • rotation adjustment 112 is performed by computing the angle θ using Radon transforms as described by R. Bracewell, "The Fourier Transform and its Applications", McGraw-Hill (1986), ("Bracewell"). For aligning an optical image and a CAD image of an IC 175 , rotation adjustment 112 involves sending the value of the angle θ to the controller 150 .
  • the controller can rotate the sample holder 170 holding the IC 175 by the angle θ.
  • a human operator can rotate the sample holder 170 .
  • the rotation adjusting device 112 can digitally rotate the optical image, or the CAD image, or both.
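By way of illustration, the angle search can be sketched with a projection-profile criterion: the 1-D projection of an image is sharpest (highest variance) when the projection direction matches its dominant edge orientation, which is the same cue a Radon-transform search exploits. This is a simplified stand-in for the Bracewell method the text cites; the function names and the variance criterion are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def projection_variance(img, theta):
    """Variance of the 1-D projection of img along direction theta (radians).
    The variance peaks when the projection direction lines up with dominant
    stripes or edges in the image."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # coordinate perpendicular to the projection direction
    xr = (xx - cx) * np.cos(theta) + (yy - cy) * np.sin(theta)
    bins = np.round(xr - xr.min()).astype(int)
    sums = np.bincount(bins.ravel(), weights=img.ravel())
    counts = np.bincount(bins.ravel())
    profile = sums[counts > 0] / counts[counts > 0]  # mean intensity per bin
    return float(np.var(profile))

def estimate_rotation(img, candidate_angles):
    """Return the candidate angle whose projection profile is sharpest."""
    return max(candidate_angles, key=lambda t: projection_variance(img, t))
```

In a full system both images would be scored this way and the difference of their best angles used as the correction θ.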
  • Pre-processing can include adjusting magnifications to equalize two images for alignment ( 314 ). If they have different magnifications, the two images can be aligned only locally: when a small sub-region of the images is aligned, other sub-regions can remain misaligned. Consequently, aligning different sub-regions can give different offset values. For example, as shown for an inverter section of an IC in FIGS. 4A-C , a CAD image ( FIG. 4A ) cannot be properly aligned globally with a corresponding optical (IR) image ( FIG. 4B ) due to a magnification mismatch (as shown in FIG. 4C ). Due to the magnification mismatch between FIGS. 4A and 4B , computed offsets may vary by as much as 12 pixels when attempting to align different sub-regions.
  • IR: infrared (optical)
  • magnification adjustment 114 matches magnification between two images, such as an optical image and a CAD image.
  • the magnification adjustment component 114 can use a 3-point alignment and layout overlay techniques.
  • a 3-point alignment technique includes locking an optical image to a CAD image at three positions and then adjusting the magnification of one or both of the two images.
  • An overlay technique includes overlaying an optical image over a CAD image and then similarly adjusting the magnification of one or both of the images.
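The text does not spell out the arithmetic behind locking at three points; one least-squares reading, assuming rotation has already been corrected so that only a uniform scale s and translation t remain (opt = s · cad + t), is sketched below with hypothetical names.

```python
import numpy as np

def fit_scale_offset(cad_pts, opt_pts):
    """Least-squares uniform scale s and translation t mapping CAD points
    onto optical points, from three (or more) locked correspondences."""
    cad = np.asarray(cad_pts, dtype=float)
    opt = np.asarray(opt_pts, dtype=float)
    # center both point sets so the scale can be fit independently of t
    cad_c = cad - cad.mean(axis=0)
    opt_c = opt - opt.mean(axis=0)
    s = (cad_c * opt_c).sum() / (cad_c ** 2).sum()
    t = opt.mean(axis=0) - s * cad.mean(axis=0)
    return s, t
```

The fitted s would then drive the magnification change, digitally or through the optical system.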
  • Magnification adjustment can be done, for example, digitally by the magnification adjustment component 114 , or manually by a human operator.
  • the controller 150 can instruct the optical system 160 to change the magnification of the optical image.
  • the magnification adjustment component 114 can automatically match magnification when the CAD magnification is known with sufficient accuracy.
  • pre-processing can include intensity adjustments to equalize intensities of the two images for alignment ( 316 ).
  • an optical image and a CAD image may have inconsistent intensities for corresponding regions (see, e.g., FIGS. 4A and 4B ). Equalizing these intensities tends to improve alignment quality.
  • intensities may vary due to the substrate thickness. For example, a thick silicon substrate absorbs much of the reflected light and this absorption leads to low intensity.
  • different layers of an IC may have different reflectivity causing different intensities in the optical image.
  • intensity adjustment 116 involves assigning gray levels to layers displayed in a CAD image of an IC.
  • the assigned gray levels can be matched with intensities in a corresponding optical image.
  • the intensity adjusting device 116 can assign a black color to features representing diffusion regions, and a white color to features representing bright metal areas.
  • a human operator can assist in assigning gray levels to features on the CAD image.
  • the intensity adjusting device 116 can change the intensity of the optical image by manipulation of the optical system 160 through the controller 150 .
  • the intensity adjusting device 116 can normalize intensities of the two images, for example, by using known histogram equalization techniques, such as disclosed in J. Russ, “Image Processing Handbook”, IEEE Press (1999).
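A minimal sketch of the histogram equalization step in NumPy, assuming 8-bit gray levels; `equalize_histogram` is an illustrative name, and the Russ handbook cited above describes the standard technique in full.

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Remap gray levels so their cumulative distribution is roughly uniform.
    Applying this to both images normalizes their intensities for alignment."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())    # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # gray-level lookup table
    return lut[img.astype(int)]
```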
  • pre-processing may include adjusting resolution to equalize two images for alignment ( 318 ).
  • the filter 118 can adjust resolution by applying a filtering technique to one of the two images; optionally, both images can be filtered.
  • the filter 118 can use various different filtering techniques, such as a general high or low pass frequency filter or a point spread function (“PSF”).
  • PSF is also known as an impulse response function or a transfer function. If one of the two images is an optical image, the PSF may be characteristic of the optical system that acquired the optical image.
  • a point spread function can characterize an IR optical microscope used for imaging integrated circuits. Typically, such as in the function shown in FIG. 5, a point spread function spreads in a cylindrically symmetric way around a center point, where the PSF assumes a maximum value. But, depending on aberrations in the optical system, a PSF can be asymmetric as well.
  • a point spread function may have diffraction maximums, or lobes, circling the center point.
  • a point spread function can be obtained from a diffraction image of a point source. The diffraction image can be calculated from theoretical models, or measured in an experiment.
  • the filter component 118 can lower the resolution of a higher resolution image to match the resolution of a lower resolution image.
  • the filter component 118 can filter the higher resolution image with a general low-pass frequency filter or a point spread function.
  • the filter component 118 can convolve the point spread function with the higher resolution image. Convolution can be implemented, for example, by direct integration or using Fourier transforms, as described in detail, e.g., in "Bracewell".
  • the filter 118 can use fast Fourier transformation ("FFT") to implement the convolution. As a result of the convolution, the higher resolution image is turned into a convolved image.
  • FFT: fast Fourier transformation
  • the convolved image can have a resolution that matches the resolution of the lower resolution image.
  • Because the resolution match is reached by lowering the resolution of the higher resolution image, noise is not enhanced, and the result is independent of the image quality of the lower resolution, typically acquired, image.
  • convolution generates a convolved image that satisfies the Nyquist condition for sub-resolution offset computation.
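The convolution-based resolution matching described above can be sketched as follows, with a Gaussian standing in for a measured or modeled point spread function; the function names are illustrative, and the FFT implementation makes the convolution circular.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """A simple Gaussian stand-in for a measured or modeled diffraction PSF."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

def blur_with_psf(img, psf):
    """Convolve a high-resolution image with a PSF via FFT, lowering its
    resolution toward that of an optically acquired image."""
    psf = psf / psf.sum()               # preserve total intensity
    kh, kw = psf.shape
    pad = np.zeros(img.shape, dtype=float)
    pad[:kh, :kw] = psf
    # center the kernel at the origin so the output is not shifted
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))
```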
  • the filter 118 can sharpen a lower resolution image to match the resolution of a higher resolution image.
  • the filter 118 can use a general or a special high pass frequency filter.
  • the special high pass filter can be based on a PSF that characterizes the optical device that acquired the optical image. High pass filtering, however, can accentuate high frequency noise in the optical image, especially when the optical image has a low signal to noise ratio.
  • the filter 118 can use other filtering techniques, such as noise reduction techniques explained in more detail by K. Watson in “Processing remote sensing images using the 2-D FFT-noise reduction and other applications”, Geophysics 58 (1993) 835, (“Watson”).
  • the method 200 can obtain a cross correlation image ( 220 ).
  • the cross correlating device 120 can calculate a cross correlation image from two images, denoted by f and g, according to Equation (1):

    c(x,y) = Σ_i Σ_j f(i,j) · g(i+x, j+y)   (1)
  • an ordered pair (x,y) refers to an image pixel with x coordinate x and y coordinate y; similarly, (i,j) denotes a pixel with x coordinate i and y coordinate j; c(x,y) refers to the (x,y) pixel of the cross correlation image; f(x,y) and g(x,y) refer to the (x,y) pixel of the image f and the image g, respectively.
  • a pixel c(x,y) characterizes an overlap of the two images for relative displacements x in the x direction, and y in the y direction.
  • the cross correlating device 120 can calculate the cross correlation image by directly following the summation in Equation (1).
  • the two images, f and g can be Fourier transformed first to obtain two Fourier images, F and G, respectively.
  • the cross correlation image can be obtained by inverse Fourier transformation from the product of one of the Fourier images, say F, and the complex conjugate of the other Fourier image, in this case, G*.
  • Fourier transformations can be performed by FFT methods.
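The FFT route to Equation (1) can be sketched as below. With this sign convention the correlation peak sits at the displacement of g relative to f, with indices wrapping circularly; the function name is illustrative.

```python
import numpy as np

def cross_correlate(f, g):
    """Circular cross correlation of two equally sized images via FFT:
    inverse-transform the product of one spectrum's conjugate with the
    other spectrum."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    return np.real(np.fft.ifft2(np.conj(F) * G))
```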
  • a cross correlation image can characterize an overlap, or correlation, of two images for alignment.
  • one of the two images can be a CAD image, as shown in FIG. 6A ; the other image can be an optical image, as shown in FIG. 6B .
  • a cross correlation image can be calculated from the two images as described above.
  • pixels can characterize correlations for various displacements.
  • the x coordinate describes the relative displacement of the two images; the y coordinate characterizes a correlation value, or overlap, in some normalized units. The greater the overlap between the two images, the larger the correlation value.
  • a cross correlation image can be oversampled, or interpolated ( 230 ).
  • the oversampler 130 can oversample a cross correlation image of CAD and optical images. Oversampling produces an oversampled image that has extra sub-pixel points generated from the original points of the cross correlation image. As shown in FIGS. 7A and 7B , the oversampler 130 can generate sub-pixel points that are, for example, 0.25 or 0.1 pixel apart.
  • the oversampler 130 can oversample an image with spatial domain interpolation techniques, such as nearest neighbor, bilinear, or cubic spline interpolation.
  • the generated sub-pixel points can introduce new frequency components, or alter existing ones.
  • the original cross correlation image has no high frequency component that exceeds the inverse of the distance between pixels.
  • Sub-pixel points can artificially introduce non-zero values for such high frequency components.
  • an oversampling technique can generate sub-pixel points, for example, by a zero padding technique.
  • zero padding sets new high frequency components to zero.
  • the oversampler 130 may perform the following: Fourier transform a cross correlation image; enlarge the Fourier space of the cross correlation image by adding new high frequency components; set the new high frequency components to zero; and inverse Fourier transform the enlarged image to provide an oversampled image (see reference “Watson”).
  • the oversampler 130 can Fourier transform an image by using FFT.
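The zero-padding steps just listed might look like this in NumPy; the factor**2 scale compensates for the larger inverse-transform normalization, and, as the text notes, no frequency absent from the original spectrum is introduced.

```python
import numpy as np

def oversample_zero_pad(img, factor):
    """Band-limited interpolation: FFT the image, embed the spectrum in a
    larger array of zeros (the new high-frequency bins stay zero), and
    inverse-FFT. Original pixel values are preserved at their grid points."""
    h, w = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))   # DC moved to the array center
    H, W = h * factor, w * factor
    big = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    big[top:top + h, left:left + w] = spec     # new high-frequency bins stay 0
    # factor**2 compensates for the larger inverse-FFT normalization
    return np.real(np.fft.ifft2(np.fft.ifftshift(big))) * factor**2
```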
  • the oversampler 130 can use a spatial domain interpolation technique, such as Sinc interpolation (see, e.g., “Bracewell”), that does not introduce new high frequency components.
  • the oversampler 130 can oversample one or both of the two images. For example, when the alignment system 100 aligns an optical image of an IC with a corresponding CAD image, the oversampler 130 can generate sub-pixel points for the optical image. With these sub-pixel points for the optical image, the cross correlator 120 can calculate sub-pixel points for a cross correlation image without further oversampling.
  • the method 200 can calculate an offset value from a cross correlation image ( 240 ).
  • the maximum calculator 140 can find a maximum correlation value in a cross correlation image of two images.
  • FIGS. 7A-7B show graphs representing cross correlation images obtained from the images in FIG. 6A and in FIG. 6B , respectively.
  • the location of the maximum correlation value provides an offset value that describes a displacement to align the two images.
  • the maximum calculator 140 can find a maximum of a cross correlation image with or without oversampling. Without oversampling, as shown, for example, in FIG. 7A, the maximum location gives an integer pixel (−6 pixels, in FIG. 7A) as an offset value. With oversampling, however, the highest maximum can be located with sub-pixel precision (−5.6 pixels, in FIG. 7A), as described by the sub-pixel points obtained from oversampling the cross correlation image.
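As a hedged illustration (the function name and the 1-D simplification are assumptions), locating the highest maximum with sub-pixel precision can combine Fourier-domain oversampling with a simple argmax over the dense points:

```python
import numpy as np

def subpixel_peak(correlation, factor):
    """Oversample a 1-D cross correlation by Fourier zero padding and
    return (offset, confidence): the location of the highest maximum in
    fractional pixels, and the correlation value found there."""
    n = len(correlation)
    spectrum = np.fft.fft(correlation)
    half = n // 2
    padded = np.concatenate([
        spectrum[:half],
        np.zeros(n * (factor - 1), dtype=complex),
        spectrum[half:],
    ])
    dense = np.fft.ifft(padded).real * factor
    peak = int(np.argmax(dense))
    offset = peak / factor
    if offset > n / 2:        # lags past midpoint are negative shifts
        offset -= n
    return offset, dense[peak]

# A correlation peaked between samples is located at 2.5, not 2 or 3.
c = np.cos(2 * np.pi * (np.arange(16) - 2.5) / 16)
offset, confidence = subpixel_peak(c, 4)
```

Without oversampling, the integer argmax of `c` would land on pixel 2 or 3; the oversampled search resolves the peak at 2.5 pixels, mirroring the −6 versus −5.6 pixel distinction described for FIG. 7A.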
  • a maximum provides an offset value as the location of the maximum.
  • the correlation value at the maximum can be interpreted as a confidence factor characterizing confidence in the offset value.
  • cross correlation images are calculated for different sub-regions of the two images to be aligned. For each sub-region, an offset value and a confidence factor are obtained.
  • the confidence factor can be used for selecting the sub-region that is used for alignment: the higher the confidence factor, the more likely that the corresponding offset value is close to the correct one.
  • the confidence factor can be given to a human operator for selecting a sub-region for alignment.
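A minimal sketch of confidence-based selection, assuming each sub-region has already produced an offset and a confidence factor; the tuple layout is hypothetical:

```python
def select_offset(roi_results):
    """Given (offset_x, offset_y, confidence) per sub-region, return
    the offset whose confidence factor is highest: the higher the
    confidence, the more likely the offset is close to the correct one."""
    best = max(roi_results, key=lambda r: r[2])
    return best[0], best[1]

# The second sub-region wins: its correlation maximum was the largest.
chosen = select_offset([(1.2, 0.4, 0.31), (0.8, 0.7, 0.92), (4.0, 4.0, 0.10)])
```

The same ranked list could equally be shown to a human operator choosing a sub-region manually.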
  • the misalignment may be due to an optical ghost in the optical image.
  • the optical ghost may cause more than one maximum in the cross correlation image (see, e.g., FIG. 7A ).
  • the height of these maximums can depend on a point spread function that is used during image pre-processing in resolution adjustment 318 .
  • the highest maximum obtained can differ depending on whether resolution is adjusted with or without a PSF.
  • the cross correlation image may show a highest maximum located at the ghost position, shown around −5.6 pixels in FIG. 7A.
  • the relative height might change to make the true maximum, shown in FIG. 7B around −0.8 pixels, higher than the maximum for the ghost.
  • Appropriate alignment is typically obtained with a PSF characterizing the optical system that produced the ghosted optical image.
  • two images can be aligned ( FIG. 2 ; 250 ).
  • the cross correlator 120 can provide a second cross correlation image.
  • the alignment system 100 can apply the steps 230 - 250 of the method 200 a second time.
  • the second alignment can provide a better alignment than the first alignment. Potentially, further iterations may produce better results.
  • a first pair of images may include the first and second images.
  • the third image can be aligned with one of the images of the first pair; optionally, the third image can also be aligned with the other image of the first pair.
  • a combined image can be generated from the aligned first pair, and the third image can be aligned with the combined image.
  • an IR image of a test IC can be aligned with a corresponding CAD image with sub-resolution accuracy.
  • the IR image is measured from the silicon side through the substrate.
  • the test IC has test features with linear dimensions below one micron.
  • the IR image shows the test features with a resolution of about one micron.
  • the IR image is divided into twenty different regions of interest (ROI), each ROI having 256×256 pixels, with each pixel corresponding to 0.189 micron.
  • Each ROI is independently aligned with a corresponding region of the CAD image using a preferred implementation of the method 200 ( FIG. 2 ).
  • the CAD image is resolution adjusted with a point spread function ( FIG. 5 ).
  • a second alignment repeats the steps 230 - 250 of the method 200 to improve alignment accuracy between the IR and CAD images.
  • the test features are exposed, and the test IC is imaged with a FIB to provide a high resolution image.
  • the high resolution FIB image is aligned both with the CAD image and the IR image to estimate errors in the alignment of the IR and CAD images.
  • the estimated errors are Ex in the x direction and Ey in the y direction; the total error is E = √(Ex² + Ey²).
  • sub-resolution alignment methods and apparatuses have been described for images used in IC probing and editing systems. Nevertheless, it will be understood that the application can be implemented for sub-resolution alignment of images for other systems as well: for example, lithographic systems, scanning electron microscopy, or laser scanning microscopy.
  • computational aspects described here can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Where appropriate, aspects of these systems and techniques can be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions by operating on input data and generating output.
  • a computer system can be used having a display device such as a monitor or LCD screen for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer system.
  • the computer system can be programmed to provide a graphical user interface through which computer programs interact with users.

Abstract

A plurality of images, including a first image and a second image having a higher resolution than the first image, are aligned by generating an oversampled cross correlation image that corresponds to relative displacements of the first and second images, and, based on the oversampled cross correlation image, determining an offset value that corresponds to a misalignment of the first and second images. The first and second images are aligned to a precision greater than the resolution of the first image, based on the determined offset value. Enhanced results are achieved by performing another iteration of generating an oversampled cross correlation image and determining an offset value for the first and second images. Generating the oversampled cross correlation image may involve generating a cross correlation image that corresponds to relative displacements of the first and second images, and oversampling the cross correlation image to generate the oversampled cross correlation image.

Description

    RELATED APPLICATION
  • This application claims priority to U.S. Provisional Patent Application No. 60/294,716, filed May 30, 2001.
  • BACKGROUND
  • The present application relates to sub-resolution alignment of images, for example, such as used in probing and editing integrated circuits.
  • An integrated circuit (“IC”) integrates a large number of electronic circuit elements on a single semiconductor substrate with high density: today's technology allows a minimum feature size on the order of 0.1 micron. During the design, prototyping, and testing of an IC, circuit elements can be probed and edited. To probe or edit an IC using, for example, laser voltage probing (“LVP”) or focused ion beam (“FIB”), a circuit element is first located on the substrate of an IC under test. Typically, this step includes aligning corresponding features of two different images of the IC under test. The first image can be an acquired image that describes the actual position of the circuit. The second image can be derived from a computer-aided design (“CAD”) image that lays out the complicated map of circuit elements. In general, a CAD image is an ideal representation of the IC and typically is generated by a human operator using a CAD system. Once the acquired image is aligned, or registered, with the CAD image, a conventional system can navigate, that is, steer, an IC probing device to a circuit element to be probed.
  • To acquire an image for alignment, an IC can be imaged, for example, by infrared (“IR”) light. Typically used for an IC with a flip-chip design, IR light can image the IC from the silicon side, i.e., through the substrate. To see through the substrate, which can be several hundred microns thick, silicon side imaging may use IR light with a wavelength of about one micron. Using an IR wavelength of about one micron, however, results in an acquired image of roughly the same resolution as the wavelength of the IR light used for imaging. That is, the resulting IR image has a resolution of about one micron. Such an IR image typically cannot adequately be used to resolve sub-resolution features, i.e., circuit elements that are smaller than the IR wavelength.
  • To locate sub-resolution features for IC probing or editing, an attempt can be made to align an IR image with the corresponding CAD image with sub-resolution accuracy. For example, a human operator can try to align an IR image with a CAD image visually. This method, however, typically gives an optimal accuracy of about one micron, which is essentially the same as the resolution of the IR image, and typically insufficient for LVP or FIB editing. For aligning IR and CAD images with sufficient accuracy, one can try standard alignment techniques, such as intensity correlation, edge detection or binary correlation algorithms. These techniques tend to give limited accuracy as well, because IR images may be distorted by light diffraction and other optical effects. Alignment of an IR and a CAD image may be further complicated by substantial intensity variations. Intensity on the IR image can depend on several parameters, including thickness and reflectivity of different layers. Furthermore, IR images may have optical ghosts that may cause an alignment method to produce incorrect results.
  • SUMMARY
  • The present inventors discovered techniques for aligning images with sub-resolution precision. An implementation of such techniques aligns features in a high-resolution CAD image of an IC design with corresponding features in a lower-resolution acquired (e.g., IR) image of the actual IC. Implementations of systems and techniques for achieving sub-resolution alignment of images may include various combinations of the following features.
  • In general, in one aspect, a plurality of images, including a first image and a second image having a higher resolution than the first image, may be aligned by generating an oversampled cross correlation image. The oversampled cross correlation image corresponds to relative displacements of the first and second images. Based on the oversampled cross correlation image, an offset value can be determined. The offset value corresponds to a misalignment of the first and second images.
  • In general, in another aspect, images, including a first image and a second image having a higher resolution than the first image, may be aligned by achieving a sub-resolution alignment of the first and second images. The sub-resolution alignment may be achieved by performing a cross correlation of the images and a frequency domain interpolation of the images.
  • In general, in another aspect, integrated circuit devices may be inspected by obtaining a computer-generated representation of a physical layout of an integrated circuit design, and acquiring an image of an integrated circuit device corresponding to the integrated circuit design. The acquired image may have a resolution that is lower than a resolution of the computer-generated representation. An oversampled cross correlation image can be generated. The oversampled cross correlation image corresponds to displacements of the computer-generated representation and the acquired image. Based on the oversampled cross correlation image, an offset value can be determined. The determined offset value corresponds to a misalignment of the computer-generated representation and the acquired image. With a precision exceeding the resolution of the acquired image, the computer-generated representation and the acquired image can be aligned based on the determined offset value. The integrated circuit device can be probed based on a result of the alignment.
  • In general, in another aspect, alignment of a plurality of images, including a first image and a second image having a higher resolution than the first image, can be facilitated by pre-processing the second image to optimize one or more properties of the second image, and generating an oversampled cross correlation image. The oversampled cross correlation image corresponds to displacements of the first and second images. Based on the oversampled cross correlation image, an offset value can be determined. The determined offset value corresponds to a misalignment of the first and second images.
  • Advantageous implementations may include one or more of the following features. Based on the determined offset value, the first and second images can be aligned. Alignment of the first and second images can be achieved to a precision greater than the resolution of the first image. After aligning the first and second images, another iteration of generating an oversampled cross correlation image and determining an offset value for the first and second images can be performed.
  • Generating the oversampled cross correlation image can include generating a cross correlation image. The cross correlation image corresponds to relative displacements of the first and second images. The cross correlation image can be oversampled to generate the oversampled cross correlation image. Oversampling the cross correlation image can include generating sub-pixel points for the oversampled cross correlation image. In the oversampled cross correlation image, frequencies missing from the cross correlation image can be excluded. Excluding frequencies can include using a zero padding technique to set high frequency components to zero. Generating sub-pixel points can use a spatial domain technique. At least one of the first and second images can be oversampled; then, the first and second images can be cross-correlated to generate the oversampled cross correlation image.
  • The first and second images can represent a common object. The common object can include a physical layout of an integrated circuit. Based on the determined offset value, an apparatus can be navigated to a specified point on the integrated circuit with a precision greater than the resolution of the first image. The integrated circuit can be probed at the specified point. The integrated circuit can be probed with laser voltage probing. The integrated circuit can be edited at the specified point. The integrated circuit can be edited with focused ion beam. The common object can represent a voltage contrast of an integrated circuit.
  • At least one of the first and second images can include an acquired image. The acquired image can be an acquired image of an integrated circuit. The acquired image can be acquired from the silicon side or the front side of the integrated circuit. The acquired image can be an optically acquired image. The optically acquired image can be an infrared image. The acquired image can be a voltage contrast image, a scanning electron microscope image, a FIB image, or can be acquired by an electron beam prober. The second image can include an ideal reference image. The ideal reference image can be an image of an integrated circuit generated by a computer-aided design system.
  • Generating the oversampled cross correlation image can include calculating correlation values that characterize relative displacements and corresponding overlaps of the first and second images. The correlation values can be calculated using Fast Fourier Transform techniques. Determining the offset value can include determining a location of a maximum correlation value between the first and second images. The maximum correlation value can be used as a confidence factor characterizing confidence in the offset value.
  • Prior to generating the oversampled cross correlation image, one or each of the first and second images can be pre-processed to reduce mismatch between the first and the second image. Pre-processing can include one or more of adjusting rotation, adjusting magnification, adjusting intensity, and filtering. Adjusting rotation can include calculating angular mismatch between the first and the second image. Magnification can be adjusted using a 3-point alignment technique. Adjusting intensity can include normalizing intensities of the first and second images by a histogram equalization technique. Intensity can be adjusted by matching gray-scale levels in corresponding regions of the first and second images. Filtering can include applying a low pass filter to the second image. Filtering can include filtering with a point spread function. The point spread function can simulate optical ghosting in one of the first and second images.
  • A second oversampled cross correlation image can be generated. The second oversampled cross correlation image corresponds to relative displacements of the second image and a third image. The third image can be aligned with the first and second images based on the second oversampled cross correlation image.
  • In general, in another aspect, an image alignment system can include an image acquisition system, a pre-processor, a cross correlator, an interpolator, and an alignment component. The image acquisition system can be capable of acquiring a first image of an object. The pre-processor can be configured to optimize properties of a second image of the object. The second image may have a greater resolution than the first image. The cross correlator can be configured to generate a cross correlation image corresponding to displacements of the first image and the pre-processed second image. The interpolator can be configured to determine, based on the cross correlation image, an offset value corresponding to a misalignment of the first and second images. The alignment component can be configured to align the first and second images based on the determined offset value.
  • Advantageous implementations can include one or more of the following features. The object can be an integrated circuit. The image acquisition system can include an infrared imaging device and/or a focused ion beam device. The pre-processor can be configured to perform one or more of the following operations: rotation adjustment, magnification adjustment, intensity adjustment and filtering. The pre-processor can be configured to perform filtering based on a point spread function. One or more of the following can be implemented in software: the pre-processor, the cross-correlator, the interpolator and the alignment component. The alignment component can include elements to digitally align the first and second images. The interpolator can include an oversampler configured to oversample the cross correlation image.
  • The systems and techniques described here may be implemented in a method or as an apparatus, including a computer program product, and provide one or more of the following advantages. A lower-resolution image can be aligned with a higher-resolution image so that the alignment has an accuracy that exceeds the lower resolution. The lower-resolution image can be a FIB image, an electron beam image, or an optical, e.g., IR image; the higher-resolution image can be a computer generated, e.g., a CAD image, or an acquired image, e.g., a FIB image. An IR image can be used to locate a circuit element of an IC, even if the circuit element is smaller than the resolution of the IR image. The images can be aligned with sub-resolution precision, if one or both images are distorted by optical effects, e.g., by optical ghosts. Alignment of the images can be automated even when the images have different resolutions or distortions.
  • The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features and advantages will be apparent from the description and drawings, and from the claims.
  • DRAWING DESCRIPTIONS
  • FIG. 1 shows a block diagram of a system capable of aligning an optical image with a CAD image with sub-resolution accuracy.
  • FIG. 2 is a flowchart that shows a method for aligning two images in accordance with an implementation of the application.
  • FIG. 3 is a flowchart showing a method for pre-processing images in an implementation of the application.
  • FIG. 4A shows an example CAD image.
  • FIG. 4B shows an example optical image corresponding to the CAD image in FIG. 4A.
  • FIG. 4C shows an alignment of the images in FIGS. 4A and 4B.
  • FIG. 5 is a graph of a point spread function.
  • FIG. 6A shows an example CAD image.
  • FIG. 6B shows an example optical image corresponding to the CAD image in FIG. 6A.
  • FIG. 6C shows an alignment of the images in FIGS. 6A and 6B.
  • FIG. 7A shows a graph of an offset value calculation with an unfiltered CAD image.
  • FIG. 7B shows a graph of an offset value calculation with a filtered CAD image.
  • Like reference symbols in the various drawings indicate like elements.
  • DETAILED DESCRIPTION
  • FIG. 1 shows a block diagram of an alignment system 100 for aligning, with sub-resolution accuracy, an optical image and a CAD image. More generally, the alignment system 100 aligns features in a higher-resolution reference image, e.g., a CAD image of an integrated circuit 175, with corresponding features in a lower-resolution acquired image, e.g., an optical image of the IC 175. The IC 175 can be held by a sample holder 170, and imaged using IR light 165 by an optical system 160, such as the Schlumberger IDS OptFIB integrated diagnostic system. The IC 175 can have a flip-chip design. The optical system 160 can image the IC 175 from the silicon side, i.e., through the substrate. Alternatively, the IC 175 can be imaged from the front side where circuit elements are exposed.
  • As shown in FIG. 1, in one implementation, the alignment system 100 receives the optical image from the optical system 160, and the CAD image from a CAD storage device 105. Furthermore, after aligning the optical and CAD images, the alignment system 100 can provide information to a controller 150 to locate a circuit element of the IC 175. The controller 150 may manipulate the sample holder 170 to move the IC 175 into a desired position, or the optical system to image the IC 175 in a desired way. The controller 150 can be implemented in a microprocessor-controlled device. The alignment system 100 can be implemented, e.g., as computer software running on a computer. As explained with reference to FIG. 2, the alignment system 100 uses a pre-processor 110, a cross correlator 120, an oversampler 130, and a maximum calculator 140 to align acquired images with reference images.
  • As shown in FIG. 2, in one implementation, a method 200 can align two different images of the same object including, e.g., a reference image and an acquired image. In general, the reference image has a higher resolution than the acquired image. The reference image can be a theoretical, or ideal, image, such as a CAD image of an IC, such as shown in FIG. 4A. The acquired image can be a lower-resolution measured image such as a FIB image, an image measured by an electron-beam prober, a voltage contrast image, or an optical image, such as an IR or a visible light image of the IC. Alternatively, the reference image can be an acquired image with higher resolution, such as a high resolution FIB, voltage contrast, or scanning electron microscope image. One or both of the two images may be distorted by optical effects or other noise that deteriorates image quality. Furthermore, the method 200 can align images having optical ghosts.
  • As implemented, the method 200 first pre-processes the images to decrease the differences between the two images (210). In the implementation shown in FIG. 1, the pre-processor 110 pre-processes the optical image or the CAD image by rotation adjustment 112, magnification adjustment 114, intensity adjustment 116, and/or filtering 118. Methods for using the components 112-118 are discussed with reference to FIGS. 3-6C.
  • Next, a cross correlation image is generated from the two images (220). For example, the cross correlator 120 can produce a cross correlation image from the pre-processed optical and CAD images. Examples of determining the cross correlation of images are shown in FIGS. 7A-7B.
  • Next, the cross correlation image is oversampled, i.e., interpolated, to obtain sub-resolution accuracy (230). The cross correlation image is oversampled by the oversampler 130. From the oversampled cross correlation image, an offset value is calculated to characterize the misalignment of the two images (240). The maximum calculator 140 calculates an offset value that maximizes the correlation between the optical and CAD images of the IC 175. In an alternative implementation, an interpolator can calculate an offset value directly from the cross correlation image. Finally, based on the offset value, the two images are aligned (250); for example, the two images can be aligned digitally by the alignment system 100. Once the optical and CAD images are aligned, or equivalently, registered, the controller 150 can navigate the optical system 160, the sample holder 170, or any other, e.g., LVP or electron-beam, probing device to a particular circuit element of the IC 175.
  • FIG. 3 is a flowchart showing details of pre-processing images (210). To decrease mismatch of two images for alignment, pre-processing can include adjustments of rotation (312), magnification (314), intensity (316), and resolution (318). As discussed below in detail, these adjustments can be performed, e.g., by the pre-processing device 110, or, alternatively, by a human operator. If one of the two images is an optical image of the integrated circuit 175, the controller 150 can instruct the optical system 160 or the sample holder 170 to perform operations as part of the pre-processing adjustments of the optical image.
  • In one implementation, preprocessing starts with a rotation adjustment to adjust the relative orientation of the two images in order to correct orientation mismatches (312). Rotation adjustment can include computing an angle θ to characterize the difference between angular orientations of the two images. The angle computation can be performed, e.g., by the rotation adjustment component 112. In one implementation, rotation adjustment 112 is performed by computing the angle θ using Radon transforms as described by R. Bracewell, “The Fourier Transform and its Applications”, McGraw-Hill (1986), (“Bracewell”). For aligning an optical image and a CAD image of an IC 175, rotation adjustment 112 involves sending the value of the angle θ to the controller 150. Then, the controller can rotate the sample holder 170 holding the IC 175 by the angle θ. Optionally, a human operator can rotate the sample holder 170. Alternatively, the rotation adjusting device 112 can digitally rotate the optical image, or the CAD image, or both.
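The application cites Radon transforms (per "Bracewell") for the angle computation; the following is only a crude numpy stand-in that searches candidate angles by matching column-sum projections. All names, the nearest-neighbor rotation, and the search strategy are assumptions for illustration:

```python
import numpy as np

def rotate_nn(img, theta):
    """Rotate an image about its center by theta radians using
    nearest-neighbor sampling; points mapped from outside become 0."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Inverse-map each output pixel back into the source image.
    xs = np.cos(theta) * (xx - cx) + np.sin(theta) * (yy - cy) + cx
    ys = -np.sin(theta) * (xx - cx) + np.cos(theta) * (yy - cy) + cy
    xi, yi = np.rint(xs).astype(int), np.rint(ys).astype(int)
    ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out = np.zeros_like(img, dtype=float)
    out[ok] = img[yi[ok], xi[ok]]
    return out

def estimate_rotation(ref, acq, angles):
    """Return the candidate angle that, applied to the acquired image,
    makes its column-sum projection agree best with the reference's."""
    ref_proj = ref.sum(axis=0)
    scores = [np.dot(ref_proj, rotate_nn(acq, a).sum(axis=0)) for a in angles]
    return float(angles[int(np.argmax(scores))])

# Toy check: vertical stripes rotated by 0.2 rad are best undone at -0.2.
img = np.zeros((64, 64))
img[:, ::8] = 1.0
acq = rotate_nn(img, 0.2)
theta = estimate_rotation(img, acq, np.linspace(-0.3, 0.3, 13))
```

The recovered angle θ could then be sent to the controller 150 to rotate the sample holder 170, or used to rotate one image digitally, as the text describes.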
  • Pre-processing can include adjusting magnifications to equalize two images for alignment (314). If they have different magnifications, the two images can be aligned only locally: when a small sub-region of the images is aligned, other sub-regions can remain misaligned. Consequently, aligning different sub-regions can give different offset values. For example, as shown for an inverter section of an IC in FIGS. 4A-C, a CAD image (FIG. 4A) cannot be properly aligned globally with a corresponding optical (IR) image (FIG. 4B) due to a magnification mismatch (as shown in FIG. 4C). Due to the magnification mismatch between FIGS. 4A and 4B, computed offsets may vary by as much as 12 pixels when attempting to align different sub-regions.
  • In the implementation shown in FIG. 1, magnification adjustment 114 matches magnification between two images, such as an optical image and a CAD image. For matching magnification, the magnification adjustment component 114 can use 3-point alignment and layout overlay techniques. A 3-point alignment technique includes locking an optical image to a CAD image at three positions and then adjusting the magnification of one or both of the two images. An overlay technique includes overlaying an optical image over a CAD image and then similarly adjusting the magnification of one or both of the images. Magnification adjustment can be done, for example, digitally by the magnification adjustment component 114, or manually by a human operator. Alternatively, the controller 150 can instruct the optical system 160 to change the magnification of the optical image. Optionally, the magnification adjustment component 114 can automatically match magnification when the CAD magnification is known with sufficient accuracy.
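One way to realize a 3-point alignment numerically is a least-squares fit; this sketch assumes three matched point pairs are available and that a single magnification factor plus a translation suffices (the function name and model are illustrative, not the application's method):

```python
import numpy as np

def fit_scale(optical_pts, cad_pts):
    """Fit cad ≈ m * optical + (tx, ty) from >= 3 matched point pairs,
    giving a single magnification factor m and a translation."""
    opt = np.asarray(optical_pts, dtype=float)
    cad = np.asarray(cad_pts, dtype=float)
    n = len(opt)
    A = np.zeros((2 * n, 3))                  # unknowns: [m, tx, ty]
    b = np.empty(2 * n)
    A[0::2, 0], A[0::2, 1] = opt[:, 0], 1.0   # x equations: m*x + tx
    A[1::2, 0], A[1::2, 2] = opt[:, 1], 1.0   # y equations: m*y + ty
    b[0::2], b[1::2] = cad[:, 0], cad[:, 1]
    m, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
    return m, (tx, ty)

# Three locked positions related by magnification 1.5 and offset (2, 3).
m, (tx, ty) = fit_scale([(0, 0), (10, 0), (0, 10)],
                        [(2, 3), (17, 3), (2, 18)])
```

With more than three pairs the same fit averages out locking errors, which is one motivation for solving it as least squares rather than exactly.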
  • Next, pre-processing can include intensity adjustments to equalize intensities of the two images for alignment (316). For example, an optical image and a CAD image may have inconsistent intensities for corresponding regions (see, e.g., FIGS. 4A and 4B). Equalizing these intensities tends to improve alignment quality. In an optical (IR) image of a flip-chip IC, intensities may vary due to the substrate thickness. For example, a thick silicon substrate absorbs much of the reflected light and this absorption leads to low intensity. Furthermore, different layers of an IC may have different reflectivity causing different intensities in the optical image.
  • To match intensity variations, in one implementation, intensity adjustment 116 involves assigning gray levels to layers displayed in a CAD image of an IC. The assigned gray levels can be matched with intensities in a corresponding optical image. For example, in an IR optical image of an IC, diffusion regions appear as dark areas, and metal regions appear as bright areas. Accordingly, in the corresponding CAD image, the intensity adjusting device 116 can assign a black color to features representing diffusion regions, and a white color to features representing bright metal areas. Alternatively, a human operator can assist in assigning gray levels to features on the CAD image. Optionally, the intensity adjusting device 116 can change the intensity of the optical image by manipulation of the optical system 160 through the controller 150. Finally, the intensity adjusting device 116 can normalize intensities of the two images, for example, by using known histogram equalization techniques, such as disclosed in J. Russ, “Image Processing Handbook”, IEEE Press (1999).
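A minimal global histogram equalization in numpy, as one of the known histogram equalization techniques the text refers to; the exact technique used by the intensity adjusting device 116 is not specified here, and the sketch assumes a non-constant integer-valued image:

```python
import numpy as np

def equalize(image, levels=256):
    """Globally histogram-equalize an integer grayscale image so its
    gray levels spread over the full [0, levels) range, helping to
    normalize intensities between two images before correlation."""
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])  # normalize to [0, 1]
    lut = np.rint(cdf * (levels - 1)).astype(image.dtype)
    return lut[image]                          # remap via lookup table

# Two gray levels clustered near black are spread toward the full range.
out = equalize(np.array([[10, 10], [20, 20]], dtype=np.uint8))
```

Applying the same remapping to both images tends to make corresponding regions comparably bright, which is the goal of step 316.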
  • Next, pre-processing may include adjusting resolution to equalize two images for alignment (318). In one implementation, the filter 118 can adjust resolution by applying a filtering technique to one of the two images; optionally, both images can be filtered. The filter 118 can use various different filtering techniques, such as a general high or low pass frequency filter or a point spread function (“PSF”). The PSF is also known as an impulse response function or a transfer function. If one of the two images is an optical image, the PSF may be characteristic to an optical system that acquired the optical image. As shown in FIG. 5, a point spread function can characterize an IR optical microscope used for imaging integrated circuits. Typically, such as in the function shown in FIG. 5, a point spread function spreads in a cylindrically symmetric way around a center point, where the PSF assumes a maximum value. But, depending on aberrations in the optical system, a PSF can be asymmetric as well. As shown in FIG. 5, a point spread function may have diffraction maximums, or lobes, circling the center point. A point spread function can be obtained from a diffraction image of a point source. The diffraction image can be calculated from theoretical models, or measured in an experiment.
  • In one implementation, the filter component 118 can lower the resolution of a higher resolution image to match the resolution of a lower resolution image. To lower the resolution of the higher resolution image, the filter component 118 can filter the higher resolution image with a general low-pass frequency filter or a point spread function. To perform filtering with the PSF, the filter component 118 can convolute the point spread function with the higher resolution image. Convolution can be implemented, for example, by direct integration or using Fourier transforms, as described in detail, e.g., in “Bracewell”. In particular, the filter 118 can use fast Fourier transformation (“FFT”) to implement the convolution. As a result of the convolution, the higher resolution image is turned into a convoluted image. The convoluted image can have a resolution that matches the resolution of the lower resolution image. Advantageously, when the resolution match is reached by lowering the resolution of the higher resolution image, noise is not enhanced, and the result is independent of image quality of the lower resolution, typically acquired, image. Furthermore, by lowering the resolution of the higher resolution image before offset calculation, convolution generates a convoluted image that satisfies the Nyquist condition for sub-resolution offset computation.
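Filtering with a PSF by FFT-based convolution can be sketched as follows; the example assumes the PSF is sampled on the same grid as the image and centered in the array (circular, i.e., periodic, convolution — a simplification relative to a production implementation):

```python
import numpy as np

def blur_with_psf(image, psf):
    """Convolve an image with a point spread function via FFTs
    (circular convolution). psf must be image-shaped and centered."""
    psf = psf / psf.sum()            # preserve total image intensity
    kernel = np.fft.ifftshift(psf)   # move the PSF center to (0, 0)
    return np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)).real

# Blurring a point source reproduces the (normalized) PSF itself.
yy, xx = np.mgrid[0:16, 0:16]
psf = np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2) / 4.0)
point = np.zeros((16, 16))
point[8, 8] = 1.0
out = blur_with_psf(point, psf)
```

Convolving the higher resolution (e.g., CAD) image this way lowers its resolution to match the acquired image without amplifying noise, as the text notes.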
  • In an alternative implementation, the filter 118 can sharpen a lower resolution image to match the resolution of a higher resolution image, using a general or a special high-pass frequency filter. For example, if the lower resolution image is an optical image, the special high-pass filter can be based on a PSF that characterizes the optical device that acquired the optical image. High-pass filtering, however, can accentuate high frequency noise in the optical image, especially when the optical image has a low signal-to-noise ratio. Optionally, the filter 118 can use other filtering techniques, such as the noise reduction techniques explained in more detail by K. Watson in “Processing remote sensing images using the 2-D FFT-noise reduction and other applications”, Geophysics 58 (1993) 835 (“Watson”).
  • Referring back to FIG. 2, after pre-processing two images for alignment, the method 200 can obtain a cross correlation image (220). In one implementation, the cross correlating device 120 can calculate a cross correlation image from two images, denoted by f and g, according to Equation (1):

  • c(x,y) = Σ_{i,j} f(i,j) g(i-x, j-y).  (1)
  • In Equation (1), an ordered pair (x,y) refers to an image pixel with x coordinate x and y coordinate y; similarly, (i,j) denotes a pixel with x coordinate i and y coordinate j; c(x,y) refers to the (x,y) pixel of the cross correlation image; f(x,y) and g(x,y) refer to the (x,y) pixel of the image f and the image g, respectively. According to Equation (1), a pixel c(x,y) of the cross correlation image characterizes the overlap of the two images for a relative displacement of x in the x direction and y in the y direction. The cross correlating device 120 can calculate the cross correlation image by directly evaluating the summation in Equation (1). Alternatively, the two images, f and g, can be Fourier transformed first to obtain two Fourier images, F and G, respectively. As explained in more detail in textbooks such as “Bracewell”, the cross correlation image can then be obtained by inverse Fourier transformation of the product of one Fourier image, say F, and the complex conjugate of the other, in this case G*. Optionally, the Fourier transformations can be performed by FFT methods.
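Equation (1) and its Fourier-space equivalent can be checked with NumPy; note that the FFT route computes Eq. (1) with periodic (wrap-around) indexing at the image edges, so a displacement shows up modulo the image size.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((16, 16))
g = np.roll(f, (3, 5), axis=(0, 1))   # g is f displaced by 3 rows and 5 columns

# Cross correlation via Fourier transforms: c = IFFT(F * conj(G)),
# which realizes Eq. (1) with circular wrap-around.
F, G = np.fft.fft2(f), np.fft.fft2(g)
c = np.real(np.fft.ifft2(F * np.conj(G)))

# c[0, 0] is the zero-displacement overlap, sum(f * g).
# The peak of c recovers the relative displacement modulo the image size:
# here (-3, -5) mod 16, i.e. pixel (13, 11).
peak = np.unravel_index(np.argmax(c), c.shape)
```

For strongly structured images the peak is sharp; for noisy inputs the later pre-processing and oversampling steps matter.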
  • As illustrated in FIGS. 6-7, a cross correlation image can characterize an overlap, or correlation, of two images for alignment. For example, one of the two images can be a CAD image, as shown in FIG. 6A; the other image can be an optical image, as shown in FIG. 6B. A cross correlation image can be calculated from the two images as described above. As shown, e.g., for the x direction in FIG. 7, in a cross correlation image, pixels can characterize correlations for various displacements. In FIG. 7, using one pixel as a unit, the x coordinate describes the relative displacements of the two images; in some normalized units, the y coordinate characterizes a correlation value, or overlap. The greater the overlap between the two images, the larger the correlation value.
  • To reach sub-resolution accuracy, a cross correlation image can be oversampled, or interpolated (230). In one implementation, the oversampler 130 can oversample a cross correlation image of CAD and optical images. Oversampling produces an oversampled image that has extra sub-pixel points generated from the original points of the cross correlation image. As shown in FIGS. 7A and 7B, the oversampler 130 can generate sub-pixel points that are, for example, 0.25 or 0.1 pixel apart. The oversampler 130 can oversample an image with spatial domain interpolation techniques, such as nearest neighbor, bilinear, or cubic spline interpolation. Most spatial domain techniques, however, can distort the high frequency content of the cross correlation image: the generated sub-pixel points can introduce new frequency components or alter existing ones. For example, the original cross correlation image has no high frequency component that exceeds the inverse of the distance between pixels; sub-pixel points, however, can artificially introduce non-zero values for such high frequency components.
  • To avoid artificial introduction of new high frequency components, an oversampling technique can generate sub-pixel points, for example, by a zero padding technique. In the oversampled image, zero padding sets new high frequency components to zero. For zero padding, the oversampler 130 may perform the following: Fourier transform a cross correlation image; enlarge the Fourier space of the cross correlation image by adding new high frequency components; set the new high frequency components to zero; and inverse Fourier transform the enlarged image to provide an oversampled image (see reference “Watson”). Optionally, the oversampler 130 can Fourier transform an image by using FFT. Alternatively, the oversampler 130 can use a spatial domain interpolation technique, such as Sinc interpolation (see, e.g., “Bracewell”), that does not introduce new high frequency components.
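The four zero-padding steps above can be sketched in one dimension with NumPy. The function name is hypothetical, and the sketch is simplified: a production version would also split the Nyquist bin for even-length real inputs.

```python
import numpy as np

def oversample_zero_pad(signal, factor):
    """Interpolate by zero padding in the Fourier domain (1-D sketch)."""
    n = len(signal)
    spectrum = np.fft.fft(signal)                 # 1. Fourier transform
    padded = np.zeros(n * factor, dtype=complex)  # 2. enlarge Fourier space
    half = n // 2
    padded[:half] = spectrum[:half]               # keep low frequencies
    padded[-half:] = spectrum[-half:]             # 3. new bins stay zero
    # 4. inverse transform; rescale so original samples keep their values.
    return np.real(np.fft.ifft(padded)) * factor

x = np.cos(2 * np.pi * np.arange(8) / 8.0)  # one cycle over 8 samples
y = oversample_zero_pad(x, 4)               # 32 samples, 0.25-pixel spacing
```

Every fourth point of the oversampled signal reproduces an original sample, and the new sub-pixel points lie on the band-limited interpolant rather than introducing new high frequency content.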
  • In one implementation, before the cross correlator 120 calculates a cross correlation image from two images to be aligned, the oversampler 130 can oversample one or both of the two images. For example, when the alignment system 100 aligns an optical image of an IC with a corresponding CAD image, the oversampler 130 can generate sub-pixel points for the optical image. With these sub-pixel points for the optical image, the cross correlator 120 can calculate sub-pixel points for a cross correlation image without further oversampling.
  • Referring back to FIG. 2, the method 200 can calculate an offset value from the cross correlation image obtained in step 220. In one implementation, the maximum calculator 120 can find a maximum correlation value in a cross correlation image of two images. For example, FIGS. 7A-7B show graphs representing cross correlation images obtained from the images in FIG. 6A and FIG. 6B. The location of the maximum correlation value provides an offset value that describes the displacement needed to align the two images. The maximum calculator 120 can find a maximum of a cross correlation image with or without oversampling. Without oversampling, as shown, for example, in FIG. 7A, the maximum location gives an integer pixel value (−6 pixels in FIG. 7A) as an offset value. With oversampling, however, the highest maximum can be located with sub-pixel precision (−5.6 pixels in FIG. 7A), as described by the sub-pixel points obtained from oversampling the cross correlation image.
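Converting a correlation peak location into a signed offset is a small but easy-to-get-wrong step, because circular correlation reports negative displacements near the far edge of the image. A minimal sketch, with a hypothetical helper name:

```python
import numpy as np

def peak_offset(corr):
    """Signed displacement from the peak of a circular cross correlation image."""
    ny, nx = corr.shape
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices from [0, n) to signed offsets in [-n/2, n/2).
    return ((py + ny // 2) % ny - ny // 2,
            (px + nx // 2) % nx - nx // 2)

corr = np.zeros((8, 8))
corr[7, 2] = 1.0            # peak in row 7, column 2
dy, dx = peak_offset(corr)  # row 7 of an 8-row image reads as -1
```

The same mapping applies to an oversampled correlation image, with the result divided by the oversampling factor to express the offset in original pixels.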
  • In a cross correlation image, a maximum provides an offset value as the location of the maximum. At the same time, the correlation value at the maximum can be interpreted as a confidence factor characterizing confidence in the offset value. In one implementation, cross correlation images are calculated for different sub-regions of the two images to be aligned. For each sub-region, an offset value and a confidence factor are obtained. The confidence factor can be used for selecting the sub-region that is used for alignment: the higher the confidence factor, the more likely that the corresponding offset value is close to the correct one. Optionally, the confidence factor can be presented to a human operator for selecting a sub-region for alignment.
  • As shown in FIG. 6C for aligning the CAD image in FIG. 6A and the optical image in FIG. 6B, substantial misalignment may occur even with oversampling. The misalignment may be due to an optical ghost in the optical image. When a ghosted optical image is used to calculate a cross correlation image, the optical ghost may cause more than one maximum in the cross correlation image (see, e.g., FIG. 7A). The height of these maximums can depend on the point spread function used during image pre-processing in the resolution adjustment (318). For example, as shown in FIGS. 7A and 7B, the highest maximum obtained can differ depending on whether resolution is adjusted with or without a PSF. When resolution is not adjusted with a PSF, the cross correlation image may show its highest maximum at the ghost position, around −5.6 pixels in FIG. 7A. When resolution is adjusted with a PSF, the relative heights can change to make the true maximum, around −0.8 pixels in FIG. 7B, higher than the ghost maximum. Appropriate alignment is typically obtained with a PSF characterizing the optical system that produced the ghosted optical image.
  • With the help of an offset value, two images can be aligned (FIG. 2; 250). In one implementation, after aligning the two images, the cross correlator 120 can provide a second cross correlation image. Then, the alignment system 100 can apply the steps 230-250 of the method 200 a second time. The second alignment can provide a better alignment than the first alignment. Potentially, further iterations may produce better results.
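The align-then-iterate loop described here can be sketched with whole-pixel shifts. This is a simplification of the patent's method, which refines to sub-pixel offsets via oversampling; the helper names are illustrative.

```python
import numpy as np

def correlate(f, g):
    """Cross correlation of f and g via FFTs (circular)."""
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.conj(np.fft.fft2(g))))

def align(f, g, iterations=2):
    """Iteratively shift g onto f; return the aligned image and total offset."""
    total = np.zeros(2, dtype=int)
    for _ in range(iterations):
        c = correlate(f, g)
        peak = np.array(np.unravel_index(np.argmax(c), c.shape))
        n = np.array(c.shape)
        step = (peak + n // 2) % n - n // 2  # signed whole-pixel offset
        g = np.roll(g, tuple(step), axis=(0, 1))  # apply the offset to g
        total += step
    return g, tuple(total)

rng = np.random.default_rng(1)
f = rng.random((16, 16))
g = np.roll(f, (2, -3), axis=(0, 1))  # misalign g by (2, -3)
aligned, offset = align(f, g)
```

After the first pass the residual offset is zero, so the second pass leaves the images unchanged; with sub-pixel refinement, later passes can continue to improve the result.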
  • Furthermore, more than two images can be aligned as well. For example, three images can be aligned by selecting pairs of images for alignment. A first pair of images may include the first and second images. After aligning the images of the first pair, the third image can be aligned with one of the images of the first pair; optionally, the third image can also be aligned with the other image of the first pair. Alternatively, a combined image can be generated from the aligned first pair, and the third image can be aligned with the combined image.
  • TABLE 1

            Ex      Ey      Ex     Ey     E
    ROI    (pix)   (pix)   (nm)   (nm)   (nm)    CC     H
     1      0.4     0.1     76     19     78    0.62   0.86
     2      0.4     0.1     76     19     78    0.62   0.86
     3      0.2     0.2     38     38     54    0.56   0.68
     4      0.3     0.1     57     19     60    0.57   0.63
     5      0.1     0.1     19     19     26    0.69   0.83
     6      0.1     0.4     19     76     78    0.69   0.83
     7      0.1     0.4     19     76     78    0.69   0.84
     8      0.1     0.4     19     76     78    0.69   0.84
     9      0.1     0.4     19     76     78    0.69   0.84
    10      0.1     0.3     19     57     60    0.69   0.85
    11      0.1     0.2     19     38     42    0.69   0.85
    12      0.1     0.2     19     38     42    0.69   0.85
    13      0.0     0.2      0     38     38    0.69   0.86
    14      0.0     0.3      0     57     57    0.70   0.86
    15      0.0     0.2      0     38     38    0.70   0.87
    16      0.0     0.3      0     57     57    0.70   0.87
    17      0.1     0.3     19     57     60    0.65   0.84
    18      0.2     0.3     38     57     68    0.66   0.84
    19      0.5     0.1     95     19     96    0.71   0.92
    20      0.5     0.0     95      0     95    0.71   0.94

    μ = 63 nm
    σ = 19 nm
  • As shown in Table 1, in one implementation, an IR image of a test IC can be aligned with a corresponding CAD image with sub-resolution accuracy. The IR image is acquired from the silicon side, through the substrate. The test IC, in flip-chip packaging, has test features with linear dimensions below one micron; the IR image, however, shows the test features with a resolution of about one micron. To estimate the statistical properties of alignment with the CAD image, the IR image is divided into twenty different regions of interest (ROI), each ROI having 256×256 pixels, with each pixel corresponding to 0.189 micron. Each ROI is independently aligned with the corresponding region of the CAD image using a preferred implementation of the method 200 (FIG. 2). In the preferred implementation, the CAD image is resolution adjusted with a point spread function (FIG. 5). Furthermore, after a first alignment, a second alignment repeats the steps 230-250 of the method 200 to improve alignment accuracy between the IR and CAD images.
  • Next, to calculate the accuracy of the alignment, the sample features are exposed, and the sample IC is imaged with a FIB to provide a high resolution image. The high resolution FIB image is aligned both with the CAD image and with the IR image to estimate the errors in the alignment of the IR and CAD images. As shown in Table 1, the estimated errors are Ex in the x direction and Ey in the y direction; E is the total error, E = √(Ex² + Ey²). For the alignment of the CAD and IR images, the average alignment error is μ = 63 nm with a standard deviation σ = 19 nm. Consequently, since the average alignment error is less than 0.1 micron, i.e., 1/10 of the resolution of the IR image, the IR and CAD images are aligned with sub-resolution accuracy.
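The error arithmetic behind Table 1 can be reproduced directly; the values below are from ROI 1, with the 0.189-micron pixel size stated above (tabulated nm values are rounded).

```python
import numpy as np

pixel_nm = 189.0               # 0.189 micron per pixel
ex_pix, ey_pix = 0.4, 0.1      # ROI 1 errors in pixels

ex_nm = ex_pix * pixel_nm      # 75.6 nm, tabulated as 76 nm
ey_nm = ey_pix * pixel_nm      # 18.9 nm, tabulated as 19 nm
e_nm = np.hypot(ex_nm, ey_nm)  # total error sqrt(Ex^2 + Ey^2), about 78 nm
```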
  • Furthermore, as shown in Table 1, confidence factors (CC) are normalized as 1.0 for perfect alignment, and the information content value (H) corresponds to Shannon's entropy as described, e.g., in “Bracewell”.
  • Various implementations of sub-resolution alignment methods and apparatuses have been described for images used in IC probing and editing systems. Nevertheless, it will be understood that the techniques can be applied to sub-resolution alignment of images in other systems as well: for example, lithographic systems, scanning electron microscopy, or laser scanning microscopy.
  • The computational aspects described here can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Where appropriate, aspects of these systems and techniques can be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions by operating on input data and generating output.
  • To provide for interaction with a user, a computer system can be used having a display device such as a monitor or LCD screen for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer system. The computer system can be programmed to provide a graphical user interface through which computer programs interact with users.
  • Other embodiments are within the scope of the following claims.

Claims (28)

1.-93. (canceled)
94. A method for aligning images of an integrated circuit comprising:
acquiring an image of the integrated circuit under test;
applying a point spread function to a computer aided design image of the integrated circuit, wherein the point spread function is characteristic of a system that acquires the acquired image, to generate a modified image; and
registering the acquired image with the modified image.
95. The method of claim 94, wherein the acquired image is one of an optical image, an infrared image, a voltage contrast image, a scanning electron microscope image, a focused ion beam image, and an electron beam prober image.
96. The method of claim 94, further comprising assigning gray levels to layers of the computer aided design image.
97. The method of claim 96, wherein assigning gray levels to layers of the computer aided design image data comprises matching intensities of the gray levels with regions of the acquired image.
98. The method of claim 96, wherein assigning gray levels to layers of the computer aided design image data comprises assigning gray levels according to differing regions of the integrated circuit in the acquired image.
99. The method of claim 98, wherein one of the differing regions is a diffusion region.
100. The method of claim 98, wherein one of the differing regions is a metal region.
101. The method of claim 98, further comprising locating a circuit element on a substrate of the integrated circuit.
102. The method of claim 94, further comprising locating a circuit element on a substrate of the integrated circuit.
103. The method of claim 102, further comprising probing the circuit element.
104. The method of claim 96, further comprising selecting one of the layers based on an analysis of the applied gray levels.
105. The method of claim 94, wherein the image is acquired from a silicon side of the integrated circuit.
106. The method of claim 94, wherein the image is acquired from a front side of the integrated circuit.
107. The method of claim 94, wherein registering the acquired image with the modified image comprises automatically registering the acquired image with the modified image.
108. The method of claim 96, further comprising automatically selecting one of the layers based on the applied gray levels.
109. A machine-readable storage device that provides executable instructions which, when executed by a programmable processor, cause the processor to perform a method comprising:
acquiring an image of the integrated circuit under test;
applying a point spread function to a computer aided design image of the integrated circuit, wherein the point spread function is characteristic of a system that acquires the acquired image, to generate a modified image; and
registering the acquired image with the computer aided design image.
110. The machine-readable storage device of claim 109, wherein the acquired image is one of an optical image, an infrared image, a voltage contrast image, a scanning electron microscope image, a focused ion beam image and an electron beam prober image.
111. The machine-readable storage device of claim 109, further comprising assigning gray levels to layers of the computer aided design image.
112. The machine-readable storage device of claim 111, wherein assigning gray levels to layers of the computer aided design image data comprises matching intensities of the gray levels with regions of the acquired image.
113. The machine-readable storage device of claim 111, wherein assigning gray levels to layers of the computer aided design image data comprises assigning gray levels to areas of the integrated circuit based on the acquired image.
114. The machine-readable storage device of claim 113, wherein one of the regions is a diffusion region.
115. The machine-readable storage device of claim 113, wherein one of the regions is a metal region.
116. The machine-readable storage device of claim 113, further comprising locating a circuit element on a substrate of the integrated circuit.
117. The machine-readable storage device of claim 109, further comprising locating a circuit element on a substrate of the integrated circuit.
118. The machine-readable storage device of claim 111, further comprising selecting one of the layers based on an analysis of the applied gray levels.
119. The machine-readable storage device of claim 109, wherein the image is acquired from a silicon side of the integrated circuit.
120. The machine-readable storage device of claim 109, wherein the image is acquired from a front side of the integrated circuit.
US12/144,495 2001-05-30 2008-06-23 Sub-resolution alignment of images Abandoned US20080298719A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/144,495 US20080298719A1 (en) 2001-05-30 2008-06-23 Sub-resolution alignment of images

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US29471601P 2001-05-30 2001-05-30
US10/159,527 US6848087B2 (en) 2001-05-30 2002-05-30 Sub-resolution alignment of images
US10/946,667 US7409653B2 (en) 2001-05-30 2004-09-21 Sub-resolution alignment of images
US12/144,495 US20080298719A1 (en) 2001-05-30 2008-06-23 Sub-resolution alignment of images

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/946,667 Continuation US7409653B2 (en) 2001-05-30 2004-09-21 Sub-resolution alignment of images

Publications (1)

Publication Number Publication Date
US20080298719A1 true US20080298719A1 (en) 2008-12-04

Family

ID=23134618

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/159,527 Expired - Fee Related US6848087B2 (en) 2001-05-30 2002-05-30 Sub-resolution alignment of images
US10/946,667 Expired - Lifetime US7409653B2 (en) 2001-05-30 2004-09-21 Sub-resolution alignment of images
US12/144,495 Abandoned US20080298719A1 (en) 2001-05-30 2008-06-23 Sub-resolution alignment of images

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US10/159,527 Expired - Fee Related US6848087B2 (en) 2001-05-30 2002-05-30 Sub-resolution alignment of images
US10/946,667 Expired - Lifetime US7409653B2 (en) 2001-05-30 2004-09-21 Sub-resolution alignment of images

Country Status (4)

Country Link
US (3) US6848087B2 (en)
EP (1) EP1390814A2 (en)
AU (1) AU2002312182A1 (en)
WO (1) WO2002097535A2 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090190827A1 (en) * 2008-01-25 2009-07-30 Fuji Jukogyo Kabushiki Kaisha Environment recognition system
US20090190800A1 (en) * 2008-01-25 2009-07-30 Fuji Jukogyo Kabushiki Kaisha Vehicle environment recognition system
US20110135216A1 (en) * 2009-12-09 2011-06-09 Canon Kabushiki Kaisha Image processing method, image processing apparatus, and image pickup apparatus for correcting degradation component of image
US20110149045A1 (en) * 2008-04-29 2011-06-23 Alexander Wuerz-Wessel Camera and method for controlling a camera
US20120121183A1 (en) * 2009-05-04 2012-05-17 Maneesha Joshi Apparatus and Method for Lane Marking Analysis
US20120218290A1 (en) * 2011-02-28 2012-08-30 Varian Medical Systems International Ag Method and system for interactive control of window/level parameters of multi-image displays
US20140098245A1 (en) * 2012-10-10 2014-04-10 Microsoft Corporation Reducing ghosting and other image artifacts in a wedge-based imaging system
WO2018208871A1 (en) * 2017-05-11 2018-11-15 Kla-Tencor Corporation A learning based approach for aligning images acquired with different modalities
WO2022128374A1 (en) * 2020-12-16 2022-06-23 Asml Netherlands B.V. Topology-based image rendering in charged-particle beam inspection systems
US20220301135A1 (en) * 2019-06-03 2022-09-22 Hamamatsu Photonics K.K. Method for inspecting semiconductor and semiconductor inspecting device
US11593953B2 (en) 2018-11-01 2023-02-28 Intelligent Imaging Innovations, Inc. Image processing using registration by localized cross correlation (LXCOR)
WO2023110292A1 (en) * 2021-12-15 2023-06-22 Asml Netherlands B.V. Auto parameter tuning for charged particle inspection image alignment

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2383221A (en) * 2001-12-13 2003-06-18 Sony Uk Ltd Method of identifying a codeword used as a watermark
JP2004031709A (en) * 2002-06-27 2004-01-29 Seiko Instruments Inc Waferless measuring recipe generating system
US7110602B2 (en) * 2002-08-21 2006-09-19 Raytheon Company System and method for detection of image edges using a polar algorithm process
DE10333712A1 (en) * 2003-07-23 2005-03-03 Carl Zeiss Failure reduced depiction method e.g. for digital cameras, microscopes, involves illustrating object by optical mechanism and has illustration unit to collect intensity values
EP1524625A2 (en) * 2003-10-17 2005-04-20 Matsushita Electric Industrial Co., Ltd. Enhancement of interpolated image
US7283677B2 (en) * 2004-08-31 2007-10-16 Hewlett-Packard Development Company, L.P. Measuring sub-wavelength displacements
US7512260B2 (en) * 2004-09-06 2009-03-31 Omron Corporation Substrate inspection method and apparatus
DE102005014794B4 (en) * 2005-03-31 2009-01-15 Advanced Micro Devices, Inc., Sunnyvale A method of testing a multi-sample semiconductor sample
CN101061721B (en) * 2005-06-07 2010-05-26 松下电器产业株式会社 Monitoring system, monitoring method, and camera terminal
FR2889874B1 (en) * 2005-08-16 2007-09-21 Commissariat Energie Atomique METHOD FOR MEASURING THE TRAVEL SPEED
EP1996917A2 (en) * 2006-03-10 2008-12-03 Corning Incorporated Optimized method for lid biosensor resonance detection
JP4707605B2 (en) * 2006-05-16 2011-06-22 三菱電機株式会社 Image inspection method and image inspection apparatus using the method
US8045786B2 (en) * 2006-10-24 2011-10-25 Kla-Tencor Technologies Corp. Waferless recipe optimization
KR101385428B1 (en) 2006-12-15 2014-04-14 칼 짜이스 에스엠에스 게엠베하 Method and apparatus for determining the position of a structure on a carrier relative to a reference point of the carrier
DE102007033815A1 (en) * 2007-05-25 2008-11-27 Carl Zeiss Sms Gmbh Method and device for determining the relative overlay shift of superimposed layers
GB0807411D0 (en) * 2008-04-23 2008-05-28 Mitsubishi Electric Inf Tech Scale robust feature-based indentfiers for image identification
US7999944B2 (en) * 2008-10-23 2011-08-16 Corning Incorporated Multi-channel swept wavelength optical interrogation system and method for using same
CN102150418B (en) * 2008-12-22 2014-10-15 松下电器产业株式会社 Image enlargement apparatus, method, and integrated circuit
DE102009035290B4 (en) * 2009-07-30 2021-07-15 Carl Zeiss Smt Gmbh Method and device for determining the relative position of a first structure to a second structure or a part thereof
EP2407928A1 (en) * 2010-07-16 2012-01-18 STMicroelectronics (Grenoble 2) SAS Fidelity measurement of digital images
EP2428795A1 (en) * 2010-09-14 2012-03-14 Siemens Aktiengesellschaft Apparatus and method for automatic inspection of through-holes of a component
EP2686830B1 (en) * 2011-03-15 2015-01-21 Siemens Healthcare Diagnostics Inc. Multi-view stereo systems and methods for tube inventory in healthcare diagnostics
US8611692B2 (en) 2011-09-26 2013-12-17 Northrop Grumman Systems Corporation Automated image registration with varied amounts of a priori information using a minimum entropy method
TWI500924B (en) 2011-11-16 2015-09-21 Dcg Systems Inc Apparatus and method for polarization diversity imaging and alignment
US9041793B2 (en) * 2012-05-17 2015-05-26 Fei Company Scanning microscope having an adaptive scan
KR20140102038A (en) * 2013-02-13 2014-08-21 삼성전자주식회사 Video matching device and video matching method
TWI563470B (en) * 2013-04-03 2016-12-21 Altek Semiconductor Corp Super-resolution image processing method and image processing device thereof
US9304089B2 (en) 2013-04-05 2016-04-05 Mitutoyo Corporation System and method for obtaining images with offset utilized for enhanced edge resolution
JP6229323B2 (en) * 2013-06-13 2017-11-15 富士通株式会社 Surface inspection method, surface inspection apparatus, and surface inspection program
US9715725B2 (en) * 2013-12-21 2017-07-25 Kla-Tencor Corp. Context-based inspection for dark field inspection
US9384537B2 (en) * 2014-08-31 2016-07-05 National Taiwan University Virtual spatial overlap modulation microscopy for resolution improvement
WO2016085560A1 (en) * 2014-11-25 2016-06-02 Cypress Semiconductor Corporation Methods and sensors for multiphase scanning in the fingerprint and touch applications

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6314212B1 (en) * 1999-03-02 2001-11-06 Veeco Instruments Inc. High precision optical metrology using frequency domain interpolation
US6504947B1 (en) * 1998-04-17 2003-01-07 Nec Corporation Method and apparatus for multi-level rounding and pattern inspection

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4730158A (en) * 1986-06-06 1988-03-08 Santa Barbara Research Center Electron-beam probing of photodiodes
US4805123B1 (en) * 1986-07-14 1998-10-13 Kla Instr Corp Automatic photomask and reticle inspection method and apparatus including improved defect detector and alignment sub-systems
US5550937A (en) 1992-11-23 1996-08-27 Harris Corporation Mechanism for registering digital images obtained from multiple sensors having diverse image collection geometries
US5548326A (en) 1993-10-06 1996-08-20 Cognex Corporation Efficient image registration
US5755501A (en) * 1994-08-31 1998-05-26 Omron Corporation Image display device and optical low-pass filter
US5995681A (en) * 1997-06-03 1999-11-30 Harris Corporation Adjustment of sensor geometry model parameters using digital imagery co-registration process to reduce errors in digital imagery geolocation data
FR2777374B1 (en) * 1998-04-10 2000-05-12 Commissariat Energie Atomique METHOD OF RECORDING TWO DIFFERENT IMAGES OF THE SAME OBJECT
US6282309B1 (en) * 1998-05-29 2001-08-28 Kla-Tencor Corporation Enhanced sensitivity automated photomask inspection system
US6256767B1 (en) * 1999-03-29 2001-07-03 Hewlett-Packard Company Demultiplexer for a molecular wire crossbar network (MWCN DEMUX)


Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8244027B2 (en) * 2008-01-25 2012-08-14 Fuji Jukogyo Kabushiki Kaisha Vehicle environment recognition system
US20090190800A1 (en) * 2008-01-25 2009-07-30 Fuji Jukogyo Kabushiki Kaisha Vehicle environment recognition system
US20090190827A1 (en) * 2008-01-25 2009-07-30 Fuji Jukogyo Kabushiki Kaisha Environment recognition system
US8437536B2 (en) 2008-01-25 2013-05-07 Fuji Jukogyo Kabushiki Kaisha Environment recognition system
US20110149045A1 (en) * 2008-04-29 2011-06-23 Alexander Wuerz-Wessel Camera and method for controlling a camera
US8929660B2 (en) * 2009-05-04 2015-01-06 Tomtom North America, Inc. Apparatus and method for lane marking analysis
US20120121183A1 (en) * 2009-05-04 2012-05-17 Maneesha Joshi Apparatus and Method for Lane Marking Analysis
US8798389B2 (en) * 2009-12-09 2014-08-05 Canon Kabushiki Kaisha Image processing method, image processing apparatus, and image pickup apparatus for correcting degradation component of image
US20110135216A1 (en) * 2009-12-09 2011-06-09 Canon Kabushiki Kaisha Image processing method, image processing apparatus, and image pickup apparatus for correcting degradation component of image
US10854173B2 (en) 2011-02-28 2020-12-01 Varian Medical Systems International Ag Systems and methods for interactive control of window/level parameters of multi-image displays
US11315529B2 (en) 2011-02-28 2022-04-26 Varian Medical Systems International Ag Systems and methods for interactive control of window/level parameters of multi-image displays
US10152951B2 (en) * 2011-02-28 2018-12-11 Varian Medical Systems International Ag Method and system for interactive control of window/level parameters of multi-image displays
US20120218290A1 (en) * 2011-02-28 2012-08-30 Varian Medical Systems International Ag Method and system for interactive control of window/level parameters of multi-image displays
US20140098245A1 (en) * 2012-10-10 2014-04-10 Microsoft Corporation Reducing ghosting and other image artifacts in a wedge-based imaging system
US9436980B2 (en) * 2012-10-10 2016-09-06 Microsoft Technology Licensing, Llc Reducing ghosting and other image artifacts in a wedge-based imaging system
US10733744B2 (en) 2017-05-11 2020-08-04 Kla-Tencor Corp. Learning based approach for aligning images acquired with different modalities
WO2018208871A1 (en) * 2017-05-11 2018-11-15 Kla-Tencor Corporation A learning based approach for aligning images acquired with different modalities
US11593953B2 (en) 2018-11-01 2023-02-28 Intelligent Imaging Innovations, Inc. Image processing using registration by localized cross correlation (LXCOR)
US20220301135A1 (en) * 2019-06-03 2022-09-22 Hamamatsu Photonics K.K. Method for inspecting semiconductor and semiconductor inspecting device
EP3958210A4 (en) * 2019-06-03 2023-05-03 Hamamatsu Photonics K.K. Method for inspecting semiconductor and semiconductor inspecting device
JP7413376B2 (en) 2019-06-03 2024-01-15 浜松ホトニクス株式会社 Semiconductor testing method and semiconductor testing equipment
WO2022128374A1 (en) * 2020-12-16 2022-06-23 Asml Netherlands B.V. Topology-based image rendering in charged-particle beam inspection systems
WO2023110292A1 (en) * 2021-12-15 2023-06-22 Asml Netherlands B.V. Auto parameter tuning for charged particle inspection image alignment

Also Published As

Publication number Publication date
WO2002097535A2 (en) 2002-12-05
US7409653B2 (en) 2008-08-05
US20020199164A1 (en) 2002-12-26
WO2002097535A3 (en) 2003-11-27
EP1390814A2 (en) 2004-02-25
AU2002312182A1 (en) 2002-12-09
US6848087B2 (en) 2005-01-25
US20050044519A1 (en) 2005-02-24

Similar Documents

Publication Publication Date Title
US6848087B2 (en) Sub-resolution alignment of images
Pham et al. Robust fusion of irregularly sampled data using adaptive normalized convolution
US6266452B1 (en) Image registration method
Laidler et al. TFIT: A Photometry Package Using Prior Information for Mixed‐Resolution Data Sets
US6268611B1 (en) Feature-free registration of dissimilar images using a robust similarity metric
US7990462B2 (en) Simple method for calculating camera defocus from an image scene
US6678404B1 (en) Automatic referencing for computer vision applications
GB2532541A (en) Depth map generation
US20060002635A1 (en) Computing a higher resolution image from multiple lower resolution images using model-based, robust bayesian estimation
US20110129154A1 (en) Image Processing Apparatus, Image Processing Method, and Computer Program
JPH10509817A (en) Signal restoration method and apparatus
EP0968482A1 (en) Multi-view image registration with application to mosaicing and lens distortion correction
JP2010511257A (en) Panchromatic modulation of multispectral images
AU2011253779A1 (en) Estimation of shift and small image distortion
Hsu et al. Automatic seamless mosaicing of microscopic images: enhancing appearance with colour degradation compensation and wavelet‐based blending
Mahmoudzadeh et al. Evaluation of interpolation effects on upsampling and accuracy of cost functions-based optimized automatic image registration
JP4130546B2 (en) Color filter array image reconstruction device
EP2057848A2 (en) System and method for optical section image line removal
Qian et al. Enhancing spatial resolution of hyperspectral imagery using sensor's intrinsic keystone distortion
US20070206847A1 (en) Correction of vibration-induced and random positioning errors in tomosynthesis
JP2024507089A (en) Image correspondence analysis device and its analysis method
WO1991020054A1 (en) Patterned part inspection
WO2005053314A2 (en) Inspection apparatus and method
Valdes The Reduction of CCD Mosaic Data
CN112991205A (en) Coronal image enhancement method, system, medium, equipment and application

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION