WO2006058735A1 - Enhanced optical coherence tomography for anatomical mapping - Google Patents

Enhanced optical coherence tomography for anatomical mapping

Info

Publication number
WO2006058735A1
Authority
WO
WIPO (PCT)
Prior art keywords
recited
intensity
sample
oct
outputs
Prior art date
Application number
PCT/EP2005/012801
Other languages
French (fr)
Inventor
Robert W. Knighton
Shuliang Jiao
Giovanni Gregori
Carmen A. Puliafito
Original Assignee
Carl Zeiss Meditec Ag
University Of Miami
Priority date
Filing date
Publication date
Family has litigation
Application filed by Carl Zeiss Meditec AG and University of Miami
Priority to EP05815426.1A (EP1833359B1)
Priority to CA2584958A (CA2584958C)
Publication of WO2006058735A1


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B9/00Measuring instruments characterised by the use of optical techniques
    • G01B9/02Interferometers
    • G01B9/02041Interferometers characterised by particular imaging or detection techniques
    • G01B9/02044Imaging in the frequency domain, e.g. by using a spectrometer
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/102Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for optical coherence tomography [OCT]
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B9/00Measuring instruments characterised by the use of optical techniques
    • G01B9/02Interferometers
    • G01B9/02083Interferometers characterised by particular signal processing and presentation
    • G01B9/02089Displaying the signal, e.g. for user interaction
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B9/00Measuring instruments characterised by the use of optical techniques
    • G01B9/02Interferometers
    • G01B9/0209Low-coherence interferometers
    • G01B9/02091Tomographic interferometers, e.g. based on optical coherence


Abstract

A system, method and apparatus for anatomical mapping utilizing optical coherence tomography. In the present invention, 3-dimensional fundus intensity imagery can be acquired from a scanning of light back-reflected from an eye. The scanning can include spectral domain scanning, as an example. A fundus intensity image can be acquired in real-time. The 3-dimensional data set can be reduced to generate an anatomical mapping, such as an edema mapping and a thickness mapping. Optionally, a partial fundus intensity image can be produced from the scanning of the eye to generate an en face view of the retinal structure of the eye without first requiring a full segmentation of the 3-D data set. Advantageously, the system, method and apparatus of the present invention can provide quantitative three-dimensional information about the spatial location and extent of macular edema and other pathologies. This three-dimensional information can be used to determine the need for treatment, monitor the effectiveness of treatment and identify the return of fluid that may signal the need for retreatment.

Description

ENHANCED OPTICAL COHERENCE TOMOGRAPHY FOR ANATOMICAL MAPPING
BACKGROUND OF THE INVENTION
Technical Field
[0001] The present invention relates to coherent waveform based imaging, and more particularly to an optical coherence tomography (OCT) imaging system.
Statement of the Related Art
[0002] Many imaging systems utilize coherent waveforms to obtain information regarding target objects of interest. Examples include OCT, ultrasound diagnostics, and synthetic aperture radar. OCT is a low-coherence interferometer-based noninvasive medical imaging modality that can provide high-resolution sectional images of biological tissues (see for example, US5321501, US5459570, Huang, D. et al. (1991). "Optical coherence tomography." Science 254(5035): 1178-81). Since first introduced, OCT has been used in a variety of medical research and diagnostic applications. One successful application of OCT imagery can include the use of OCT in the dermatological imaging of the skin. Another successful application of OCT imagery can include sectional imaging of the retina in the field of ophthalmology. In this regard, time domain based OCT has produced cross-sectional images of the retina of the eye that have proven value to ophthalmologists (see for example, Swanson, E. A. et al. (1993). "In-vivo retinal imaging by optical coherence tomography." Optics Letters 18(21): 1864-1866; Izatt, J. A. et al. (1993). "Ophthalmic diagnostics using optical coherence tomography". Ophthalmic Technologies III, SPIE, 1877: 136-144, Los Angeles, CA, USA). Notwithstanding, time domain OCT instruments cannot acquire sufficient data to completely characterize important retinal pathologies. [0003] The limitations of time domain OCT are the natural result of the inherent difficulties in acquiring and processing imagery of an unstable target — the human eye. For example, although ophthalmic OCT has been commercialized for several years, the spatial registration of an OCT image to fundus landmarks has not been achieved satisfactorily. In this regard, fundus landmarks can be used to relate different structural abnormalities at
different retinal locations. Precise spatial registration of OCT sections to tissue location also can be important when interpreting other medical images. Yet, the unavoidable eye movement of a patient during image acquisition can complicate the ability to achieve precise spatial registration due to the unavoidable distortion of the OCT image. [0004] As an alternative to OCT, the scanning laser ophthalmoscope (SLO) provides en face fundus images familiar to ophthalmologists (see for example, Sharp, P. F. et al (2004) "The scanning laser ophthalmoscope — a review of its role in bioscience and medicine" Physics in Medicine and Biology 49: 1085-1096). In this regard, en face views are familiar to ophthalmologists not only from direct observations, but also from fundus photographs and fluorescein angiography. The strength of an en face view is that structural abnormalities at different retinal locations can be related to each other and to major retinal landmarks such as the fovea and optic nerve head. In any case, combining OCT with SLO (SLO/OCT) provides one possible means for precise spatial registration of the OCT image while providing an en face fundus image (see for example, US5975697, US6769769, CA2390072, US20040036838, US20040233457, and WO2004102112).
[0005] Specifically, time domain SLO/OCT systems utilize two-dimensional transverse scans to provide sectional images in planes perpendicular to the depth of the sample. In an SLO/OCT system, the fundus image can be acquired by splitting the reflected sample light during the transverse scan into two detection channels. A first channel can accommodate OCT while the second channel can be utilized in acquiring an intensity image (see for example, US5975697, US6769769, CA2390072, US20040036838, US20040233457, and WO2004102112). As an alternative approach, the sectional images can be summed along the depth of the image (see for example, Hitzenberger, C. K. et al. (2003). "Three-dimensional imaging of the human retina by high-speed optical coherence tomography." Optics Express 11(21): 2753-2761; Puliafito C. A. "Summary and Significance" American Academy of Ophthalmology Subspecialty Day on retina, Section X: Ocular Imaging, New Orleans, Oct. 23, 2004, 3:06 pm). The approach of two detection channels can require a more complicated setup and the signal-to-noise ratio of the OCT may be reduced by a partial sacrifice of the back-reflected sample light. By comparison, in the approach of summing the OCT images along their depth, accuracy can be sacrificed when the eye moves between different sections.
SUMMARY OF THE INVENTION
[0006] The present invention is an OCT method, system and apparatus which addresses the foregoing deficiencies of time domain ocular imaging. In particular, what is provided is a novel method, system and apparatus for anatomical mapping using spectral domain OCT. As well, time-domain OCT and swept source frequency OCT also can be applied to provide the enhanced anatomical mapping of the present invention. In this invention, the term spectral domain OCT (SD-OCT) is sometimes used to include both spectrometer based spectral or Fourier domain OCT, and tunable laser based swept source OCT since their basic principle of operation is very similar (see for example, Choma, M. A. et al. (2003). "Sensitivity advantage of swept source and Fourier domain optical coherence tomography." Optics Express 11(18): 2183-2189). In accordance with the present invention, a fundus intensity image can be acquired from a spectral domain scanning of light back-reflected from an eye. The term "intensity image" as defined in this invention is a two-dimensional en-face image extracted from a 3D OCT data block by integrating the OCT signal over a depth range (Z-axis) greater than the axial resolution of the OCT system. For example, a fundus image can be extracted by the method disclosed herein from OCT data covering the fundus of the human eye. The fundus intensity image actually represents a reduction in total information content. Specifically, the squaring and summing over the spectra results in a loss of all depth information. Yet, the loss of depth information can achieve a shortcut to a particularly useful item of distilled information for the purpose of medical diagnostics. [0007] A 3-D OCT data set can be reduced to generate an ocular mapping, including edema mappings and thickness mappings. Optionally, a partial fundus intensity image can be produced from the spectral domain scanning of the eye to generate an en face view of the retinal structure of the eye without first requiring a full segmentation of the 3-D OCT data. The partial intensity image is a two-dimensional en-face image extracted from a 3D OCT data block by integrating the OCT signal over a depth range greater than the axial resolution of the OCT system, but including only selected regions of the 3-D anatomical structure for integration.
[0008] The present invention can differ from conventional, time domain OCT technology because the mapping system of the present invention can be optimized to obtain the
information necessary for assessing macular edema while simultaneously producing a fundus intensity image. As part of the optimization, the present invention utilizes optimized sampling grids. For example, in the case of volume measurements of fluid-filled spaces, which require several samples spread over the areal extent of the space, a raster scan can be utilized having evenly spaced points on a square grid.
[0009] Importantly, in accordance with the present invention, the data set acquired by the spectral domain scan can contain all of the 3-D information about retinal structure needed to estimate the volumes of fluid-filled spaces. Accordingly, 3-D segmentation algorithms can be applied to the intensity data set to outline spatial structures. As yet another important aspect of the invention, intensity information can be extracted from each measured spectrum to provide an en face view or an image of the fundus that can be used immediately to judge image data quality. In this regard, an operator can then decide whether or not to take another image. Because the intensity image is generated from the same data as the subsequent 3-D data set, the intensity image provides all necessary landmarks for precisely locating on the fundus the lesions revealed by the cross-sectional OCT scan. The intensity image also provides the landmarks needed for orienting the 3-D data set to other fundus images, such as fundus photographs and fluorescein angiograms.
[0010] As an additional advantage, because the fundus intensity image can be generated before segmentation of the 3-D data set, the fundus intensity image can be used to guide the segmentation. For example, retinal blood vessels represent discontinuities in the layered retinal architecture that often disrupt segmentation of cross-sectional images. The fundus intensity image can be processed by ordinary 2-D algorithms to identify the blood vessel positions. Subsequent processing of the OCT cross-sectional images then can be modified at each vessel location. For instance, an algorithm designed to follow layers can skip past a vessel and resume segmentation on the other side of the vessel. Additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The aspects of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] A more complete understanding of the present invention, and the attendant advantages and features thereof, will be more readily understood by reference to the following detailed description when considered in conjunction with the accompanying drawings, wherein:
[0012] Figure 1 is a schematic illustration of an OCT imaging system configured for anatomical mapping in accordance with the present invention; and,
[0013] Figure 2 is a flow chart illustrating a process for anatomical mapping in the OCT imaging system of Figure 1.
DETAILED DESCRIPTION OF THE INVENTION
[0014] The present invention is a system, method and apparatus for anatomical mapping through the production of a 3-D data set, for instance through imagery produced utilizing OCT. In accordance with the present invention, a fundus intensity image can be acquired from a spectral domain scanning of light back-reflected from an eye. The 3-D data set can be reduced to generate an ocular mapping, including edema mappings and thickness mappings. Optionally, a partial fundus intensity image can be produced from the spectral domain scanning of the eye to generate an en face view of the retinal structure of the eye without first requiring a full segmentation of the 3-D data set.
[0015] In further illustration, Figure 1 is a schematic illustration of an OCT imaging system configured for anatomical mapping in accordance with the present invention. As shown in Figure 1, a low coherent light source 105 can be provided. The low coherent light source 105 can be a super-luminescent diode, such as a super-luminescent diode having a center wavelength sufficient to penetrate the vitreous portions of the eye 140 — typically between 800 and 900 nm. In this context, those skilled in the art consider such a light source to be a broadband light source. The light source 105 can be coupled to a beam splitter 115 by way of optional fiber-based isolator 110.
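As an aside not stated in the patent, the axial resolution that such a broadband source can support is commonly estimated, for a Gaussian-shaped spectrum, from the center wavelength λ0 and bandwidth Δλ as

$$ \Delta z \approx \frac{2\ln 2}{\pi}\,\frac{\lambda_0^2}{\Delta\lambda}, $$

so that, purely for illustration, λ0 = 850 nm and Δλ = 50 nm would give Δz of roughly 6.4 µm in air; these bandwidth values are assumptions, not taken from the disclosure.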
[0016] Specifically, the beam splitter 115 can be a 2 x 2 3dB fiber coupler configured to split light emanating from the light source 105 into sample and reference arms. The sample
arm can be coupled into the optical head of a light delivery system 165. The light delivery system 165 can include an x-y scanner 130 and optics for delivering the sample light into the eye 140 and collecting the back-reflected sample light. Optionally, the back-reflected sample light can be partially reflected through operation of a dichroic mirror 135 into a video camera 145 for real time viewing of the fundus.
[0017] By comparison to the sample arm, the reference arm can include a variable neutral density filter 125 used to adjust the reference intensity before and after the light reaches the optical reference system 170. Polarization controllers 120 can be used in both the reference and sample arms for fine tuning of the sample and reference polarization states to achieve maximal interference. In an exemplary configuration of the detection arm, a spectrometer 160 can be coupled to a line scan camera 155 to detect the combined reference and sample light. As is known in the art, a spectrometer will include an optical element such as a diffraction grating for angularly dispersing the light as a function of wavelength. The dispersed light falls on the pixels of the line scan camera. The light intensities measured by the camera pixels contain information corresponding to the distribution of reflection sites along the Z-axis depth in the sample.
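As a minimal numerical sketch (not part of the disclosure) of how the camera spectrum encodes depth: the fringe produced by a single reflector oscillates across the detector with a frequency proportional to its delay, so a Fourier transform of the recorded spectrum localizes the reflector along Z. The frequency axis, source shape and reflectivity below are illustrative placeholders.

```python
import numpy as np

# One reflector at optical delay tau modulates the source spectrum with a
# cosine whose fringe frequency grows with depth (all numbers illustrative).
n_pix = 2048
nu = np.linspace(0.0, 1.0, n_pix)                 # optical frequency (arb. units)
G_s = np.exp(-((nu - 0.5) / 0.15) ** 2)           # source spectral density

for tau in (50.0, 150.0):                         # two reflector depths
    spectrum = G_s * (1.0 + 0.2 * np.cos(2 * np.pi * nu * tau))
    a_scan = np.abs(np.fft.rfft(spectrum))
    a_scan[:10] = 0.0                             # suppress the slowly varying source term
    print(tau, np.argmax(a_scan))                 # FFT peak index tracks the reflector depth
```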
[0018] Both the line scan camera 155 and the video camera 145 can be communicatively linked to a computer 150. The computer 150 can be coupled both to a data acquisition processor 190 configured to acquire data from the line scan camera 155, and also to a data reduction processor 195 configured to reduce the acquired data to produce anatomical maps 185A and partial fundus imagery 185B. The 3-D data sets would also be used to create cross-sectional OCT images 185C as known in the art.
[0019] By operation of the data acquisition processor 190, both a 3-D data set 180 and an intensity image 175 can be acquired. The intensity image 175 can be acquired in real time directly from the data acquisition processor using either digital or analog processing. Complex data processing associated with generating OCT images is not required. Consequently, an operator can discard the image if it is determined that the image is of too low quality without permitting further data processing. The 3-D data set 180, by comparison, can be produced by operation of the data reduction processor 195 from the image acquired by the data acquisition process 190. Notably, an intensity image 175 can also be derived from the 3-D data set. An intensity image created either directly from the data acquisition or from
the 3-D data set can be useful for data registration to other imaging modalities. Moreover, blood vessels can be identified as an initial step in a segmentation process. In a swept source spectral domain OCT embodiment, the output wavelength of the light source can be varied or scanned, for example, via frequency modulation. Only a single photodetector is required rather than a spectrometer and a line scan camera. The intensity data as a function of wavelength is extracted over time.
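For the swept-source variant just described, the photodetector time series recorded during one wavelength sweep plays the same role as one line-camera spectrum, so the data stream can simply be reshaped into one spectrum per a-scan before the intensity extraction described below is applied. The array sizes in this sketch are illustrative assumptions.

```python
import numpy as np

# Swept-source sketch: reshape the single-detector stream into per-sweep spectra.
samples_per_sweep = 1024                 # samples digitized during one sweep (assumed)
n_ascans = 256 * 256                     # one sweep per a-scan (assumed grid size)

detector_stream = np.random.rand(n_ascans * samples_per_sweep)     # stand-in data
spectra = detector_stream.reshape(n_ascans, samples_per_sweep)     # one row per sweep
intensity_per_ascan = spectra.sum(axis=-1)   # same summation as the spectrometer case
```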
[0020] In a more particular illustration, Figure 2 is a flow chart illustrating a process for anatomical mapping in the OCT imaging system of Figure 1. Beginning first in block 210, the inherent speed of a spectral domain OCT can be utilized to sample the entire macula of an eye on a square grid. In a specific aspect of the invention, a total of 65,536 a-scans can be collected in slightly over one second by following a left-to-right, top-to-bottom scan pattern in which adjacent a-scans are separated by 23 μm, which provides adequate lateral resolution to sample fluid-filled spaces of the eye.
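A small sketch of this sampling grid follows, under the assumption that the 65,536 a-scans form a 256 x 256 square grid; the 23 µm spacing is taken from the text, while the origin and units are arbitrary.

```python
import numpy as np

N = 256                       # a-scans per row and per column (256 * 256 = 65,536)
spacing_um = 23.0             # separation between adjacent a-scans, from the text

x = np.arange(N) * spacing_um
y = np.arange(N) * spacing_um

# Left-to-right, top-to-bottom ordering of beam positions on the retina.
xx, yy = np.meshgrid(x, y)                       # rows advance top-to-bottom
scan_positions = np.column_stack([xx.ravel(), yy.ravel()])

print(scan_positions.shape)                      # (65536, 2)
print("field of view: %.2f mm square" % (x[-1] / 1000.0))   # about 5.9 mm
```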
[0021] In block 220, a fundus image can be generated directly from intensity data included within the spectra produced by the spectral domain scan. In this regard, it is to be noted that each captured spectrum contains at least two components from which can be extracted a signal proportional to the total reflected intensity. This intensity signal can be used to produce a 2-D image of the fundus that advantageously can appear similar to the image from an SLO. Various methods can be employed to obtain the intensity signal in "real-time", so that the fundus image can be painted on a display screen as rapidly as it is acquired.
[0022] The method for extracting intensity information can be understood by considering the primary mathematical expression for the spectral intensity of the interference fringes falling on the line scan camera:
$$ G_D(\nu) = G_s(\nu)\left\{ 1 + \sum_n R_{sn} + 2 \sum_{n \neq m} \sqrt{R_{sn} R_{sm}}\, \cos[2\pi\nu(\tau_n - \tau_m)] + 2 \sum_n \sqrt{R_{sn}}\, \cos[2\pi\nu(\tau_n - \tau_r)] \right\}, $$
where G_D(ν) is the detected spectral intensity; ν is the frequency of the light; R_sn is the intensity reflection of a small region in the sample; G_s(ν) is the spectral density of the light source; the reflection of the reference arm is unity; distances are represented by propagation times τ_n, τ_m in the sample arm and τ_r in the reference arm; and summation is across all axial depths in the sample beam. The third term in
brackets is the mutual interference for all light scattered within the sample and the last term contains the interference between the scattered sample light and the reference light from which an a-scan is calculated. It is important to note that, in embodiments where the detector is a spectrometer, the parameter ν also represents position on the line scan camera. Hence, the primary equation can be broken conceptually into two parts: the first two terms represent slow variation across the linear array of camera pixels while the last two terms represent oscillations (interference fringes). Similarly, in the embodiments where the source is swept through a range of frequencies ν, the first two terms in the primary equation represent slow variation in time of the detected signal, while the last two terms represent oscillations. [0023] The new insight that leads to generation of a fundus image is to recognize that the desired information about total reflected sample intensity resides in both the second and fourth terms of the primary equation. This permits multiple methods for extracting the intensity. The method selected for a particular application will depend on desired speed and accuracy. Several examples of methodologies for obtaining a fundus intensity image follow, in which F(x,y) is the output of the processing method for an a-line at scan point x,y on the fundus.
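A minimal simulation of the primary equation for a single a-line is sketched below; the Gaussian source spectrum, the reflector delays and the reflectances are invented placeholders, and the function name is illustrative.

```python
import numpy as np

n_pix = 1024
nu = np.linspace(1.0, 1.2, n_pix)            # optical frequency (arbitrary units)
G_s = np.exp(-((nu - 1.1) / 0.04) ** 2)      # Gaussian source spectral density

tau_r = 0.0                                  # reference-arm delay
tau_n = np.array([30.0, 55.0, 80.0])         # sample reflector delays (arb. units)
R_sn = np.array([1e-4, 5e-4, 2e-4])          # intensity reflectances of those regions

def spectral_fringes(nu, G_s, tau_n, R_sn, tau_r):
    """Detected spectral intensity: DC, sample autocorrelation and cross terms."""
    dc = 1.0 + R_sn.sum()
    auto = np.zeros_like(nu)
    for i in range(len(tau_n)):              # mutual interference of sample light
        for j in range(len(tau_n)):
            if i != j:
                auto += 2 * np.sqrt(R_sn[i] * R_sn[j]) * np.cos(
                    2 * np.pi * nu * (tau_n[i] - tau_n[j]))
    cross = np.zeros_like(nu)                # sample-reference interference
    for R, t in zip(R_sn, tau_n):
        cross += 2 * np.sqrt(R) * np.cos(2 * np.pi * nu * (t - tau_r))
    return G_s * (dc + auto + cross)

G_D = spectral_fringes(nu, G_s, tau_n, R_sn, tau_r)
```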
[0024] In a first example, the primary equation can be summed across ν (in practice, across the pixels of the line scan camera). Notably, the cosine terms in the primary equation which have many cycles across the spectrum will sum to (approximately) zero. Thus, applying the summation can yield
$$ F(x, y) = \sum_\nu G_D(\nu) \approx G_s \Big( 1 + \sum_n R_{sn} \Big), $$
where G5 is the total source power. The first term can, in principle, be ignored for display purposes leaving ∑Rsn , which is proportional to the sum of all reflected intensities in the n sample, i.e., the desired fundus intensity. Digital summation of the camera output can be very fast and this method may be most suitable for painting the fundus image into a display window as rapidly as it is acquired. Various techniques can be used, if necessary, to compensate for small variations in G5 with x,y. It is clear that other low-pass filter techniques ZM190PCT
besides simple summation can also be applied to the primary equation to yield the above result.
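A sketch of this first method follows: each captured spectrum is simply summed across the camera pixels, and a background spectrum (recorded, for example, with the sample arm blocked) can optionally be subtracted to remove the source-power offset. The array shapes and names are assumptions for illustration.

```python
import numpy as np

def fundus_from_summation(spectra, background=None):
    """Method 1 sketch: sum each spectrum over the detector pixels (over nu).

    spectra    : (ny, nx, n_pix) raw line-scan camera spectra
    background : optional (n_pix,) spectrum with the sample arm blocked;
                 subtracting its sum removes the source-power offset so the
                 result tracks the sum of the reflected intensities.
    """
    F = spectra.sum(axis=-1)
    if background is not None:
        F = F - background.sum()
    return F

# Synthetic example: 256 x 256 a-scans with 1024-pixel spectra.
fundus = fundus_from_summation(np.random.rand(256, 256, 1024))
```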
[0025] In another example, the fundus intensity can also be derived from the fourth term of the primary equation by separating the oscillatory component from the slow variation and recognizing that, for retina and other low reflectance samples, the third term is small relative to the fourth term. One method to achieve this is to apply a Fourier transform to the primary equation, to discard the low frequency terms, and then to square and sum the high frequency terms. By Parseval's theorem this is equivalent in the camera output signal to high pass filtering the primary equation to remove the low frequency component, then squaring the remaining oscillatory component and summing over the spectrum. [0026] Neglecting the third term of the equation, the high pass filtering, squaring and summing of the primary equation can produce:
$$ F(x, y) \approx 2 \sum_\nu G_s^2(\nu) \sum_n R_{sn} \;\propto\; \sum_n R_{sn}, $$
which can be displayed directly to produce an intensity image. It is noted that one can modify the parameters of the high-pass filter in order to change the contrast of the fundus image by changing the relative contributions of shallower and deeper structures. The choice of domain in which to operate depends on considerations of speed and ease of implementation. Operations on the camera output can be carried out digitally or, for highest speed, by analog processing. Similarly, in a third exemplary method for obtaining a fundus intensity image, the observation that a spectrometer normally discards the zero order beam can be utilized such that the intensity of the zero order beam also can have the form of the equation yielded by the summation of the primary equation. Thus, the intensity signal can be obtained directly with an auxiliary photodetector and displayed with conventional methods. [0027] Returning now to Figure 2, regardless of how the fundus intensity image is generated, in block 230 the fundus intensity image can be displayed. In this regard, the fundus intensity image can provide a plurality of benefits. First, the display of the fundus intensity image can provide immediate feedback on scan quality to the operator. Scans contaminated by eye movements, blinks or other artifacts can be repeated before data
processing. Second, because the fundus intensity image is generated from the same data as the subsequent 3-D data set, the fundus intensity image can provide all necessary landmarks for orienting the 3-D data to other fundus images, such as fundus photographs and fluorescein angiography. Third, the fundus intensity image serves to locate features that are seen in cross-sectional images.
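Referring back to the Fourier-transform method of paragraphs [0025]-[0026], a minimal sketch is given below: each spectrum is Fourier transformed, the low-frequency bins representing the slow variation are discarded, and the squared magnitudes of the remaining bins are summed. The number of bins treated as low frequency is an illustrative tuning parameter, not a value from the disclosure.

```python
import numpy as np

def fundus_from_highpass(spectra, n_dc_bins=8):
    """Method 2 sketch: FFT each spectrum, drop the slowly varying bins,
    then sum the squared magnitudes of the remaining (fringe) bins."""
    spec_fft = np.fft.rfft(spectra, axis=-1)
    spec_fft[..., :n_dc_bins] = 0.0          # discard the low frequency terms
    return np.sum(np.abs(spec_fft) ** 2, axis=-1)

fundus = fundus_from_highpass(np.random.rand(256, 256, 1024))
```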
[0028] In block 240, the acquired set of spectral domain scans can be converted into a data set that contains the axial (Z-axis) reflectance of the sample. As part of the conversion process, the acquired spectra can be transformed from points linear in wavelength to points linear in frequency (wave number). This transformation can be accomplished mathematically. Alternatively, the transformation can be accomplished by building a line scan camera that has unequally-spaced pixels. Specifically, the modified line scan camera can incorporate pixels spaced according to the reciprocal of the distance along the light detector chip in the camera. The consequent variation in pixel size will cause variation in pixel gain, but this can be compensated electronically or digitally. The transformation would therefore take place instantly in hardware, greatly speeding up the generation of the spatial data. The product of the transformation may then be used in the conversion 240 to produce a set of Z-axis intensity information data for each unique X and Y position of the beam on the sample, i.e., a 3-D data set. As noted above, this 3-D data can be converted directly into a full intensity image, for example, by squaring and summing the intensity data at each X/Y position. The 3-D data is also used to generate conventional OCT cross-sectional images 185C. The 3-D data can also be subjected to a partial segmentation process (block 250) in order to outline and quantify fluid-filled spaces and other retinal features. This segmented data can also be used (block 260) to generate ocular maps, such as "edema maps" and "thickness maps" that can be used for diagnosis and treatment. The partial segmentation can also be used to generate a partial fundus image. The partial fundus image is generated in the same way as the full fundus image from the 3-D data; however, the depth location and the range around that depth are limited and selected through the segmentation process. As discussed below, a partial fundus image can be created to highlight certain features or landmarks in the eye. Note that a full segmentation process would be one that identifies many (or all) retinal layers and/or structures. A partial fundus image requires finding only one surface, either a specific retinal layer, like the retinal pigment epithelium, or
a more general description of retinal tilt and curvature such as the locus of the centroids of each a-scan.
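A sketch of the conversion described in block 240 follows: each acquired spectrum is resampled from points linear in wavelength to points linear in wavenumber and Fourier transformed to yield the Z-axis profile at its (x, y) position, and the full intensity image is then formed by squaring and summing along Z. The wavelength calibration array and all sizes are assumptions for illustration.

```python
import numpy as np

def spectra_to_volume(spectra, wavelengths):
    """Block 240 sketch: convert acquired spectra into a 3-D data set.

    spectra     : (ny, nx, n_pix) spectra sampled at the given wavelengths
    wavelengths : (n_pix,) camera-pixel-to-wavelength calibration (nm, assumed)
    Returns the Z-axis reflectance amplitude for each beam position.
    """
    k = 2 * np.pi / wavelengths                          # wavenumber per pixel
    order = np.argsort(k)                                # np.interp needs increasing x
    k_lin = np.linspace(k.min(), k.max(), k.size)        # evenly spaced wavenumbers

    resampled = np.apply_along_axis(
        lambda s: np.interp(k_lin, k[order], s[order]), -1, spectra)
    resampled -= resampled.mean(axis=-1, keepdims=True)  # suppress the DC term
    return np.abs(np.fft.rfft(resampled, axis=-1))       # Z-axis profile per (x, y)

wavelengths = np.linspace(800.0, 900.0, 1024)            # assumed calibration
volume = spectra_to_volume(np.random.rand(64, 64, 1024), wavelengths)
full_intensity_image = np.sum(volume ** 2, axis=-1)      # square and sum along Z
```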
[0029] Notably, the retina can be tilted and curved with respect to the Cartesian coordinate system of the acquired 3-D data set. Reference surfaces identified during segmentation can be used as boundaries to create a partial intensity image that helps identify the tilt and curvature. The reference surfaces can capture this tilt and curvature as mathematical surfaces within the acquired data set. These reference surfaces can be generated by combining lines segmented on individual longitudinal cross sectional scans (B-scans). Reference surfaces also can typically correspond to known anatomical layers of the retina. Likely examples include the retinal pigment epithelium (RPE) and the inner limiting membrane (ILM). The reference surfaces can be used to visualize pathology or to define layers within the data set that have the variation due to retinal tilt and curvature removed. Data points within these layers can be used to form en face images or partial intensity images that represent retinal structures localized to a particular depth or anatomic layer. [0030] A characteristic of the data set for the partial fundus intensity image is that it has thickness, in general greater than the axial resolution of the OCT. This and the fact that the layer is curved and tilted in a way intended to capture anatomy distinguishes it from "C-scans" (the tangential slices produced by SLO/OCT instruments), which are perpendicular to the scanning beam. Hence, C-scans are thin, the thickness being defined by the axial resolution, and they are not usually aligned with retinal structures. [0031] Specifically, intensity in axial scans can be expressed as
$$ a(x, y, \tau) = \sum_n \sqrt{R_n}\, \Gamma\big[\tau \pm 2(\tau_n - \tau_r)\big], $$
where distance along the z-axis is represented by propagation time τ = nz/c, with n the refractive index of tissue and c the speed of light in vacuum; τ_n and τ_r are the propagation times for light reflected by the nth scatterer in the sample and the reference mirror, respectively; R_n is the normalized intensity reflection of the nth scatterer; and summation is across all axial depths in the sample beam. Γ(τ) is the autocorrelation function of the light source and expresses the axial resolution of the system. Note that Γ[τ ± 2(τ_n − τ_r)] falls rapidly
to zero as (τ_n − τ_r) becomes greater than the axial resolution. For each depth τ, therefore, the reflected intensity comes from only a thin layer of the R_n.
[0032] Thus, a tissue layer defined as above can be expressed by its upper boundary
T0 (x, y)and lower boundary T1 (x, y) . The total reflectance of the scatterers in the tissue layer at each point (x,y) can be obtained by squaring
Figure imgf000014_0001
and then summing from r0 to τx . The result is the desired partial fundus intensity image
Figure imgf000014_0002
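A sketch of the partial fundus intensity image defined by the equation above follows: at each (x, y) the squared axial-scan amplitudes are summed only between the segmented upper and lower boundary surfaces. The boundary arrays (labelled here with hypothetical ILM and RPE indices) are assumed to come from a prior segmentation step.

```python
import numpy as np

def partial_fundus_image(volume, z_upper, z_lower):
    """Sum of squared a-scan amplitudes between two segmented surfaces.

    volume  : (ny, nx, nz) axial-scan amplitudes (the 3-D data set)
    z_upper : (ny, nx) depth index of the upper boundary surface at each (x, y)
    z_lower : (ny, nx) depth index of the lower boundary surface at each (x, y)
    """
    z = np.arange(volume.shape[-1])
    inside = (z >= z_upper[..., None]) & (z <= z_lower[..., None])
    return np.sum(np.where(inside, volume ** 2, 0.0), axis=-1)

# Illustrative example with a synthetic volume and flat hypothetical surfaces.
volume = np.random.rand(64, 64, 512)
ilm = np.full((64, 64), 100)          # hypothetical upper-surface indices
rpe = np.full((64, 64), 300)          # hypothetical lower-surface indices
P = partial_fundus_image(volume, ilm, rpe)
```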
[0033] Still, it is to be understood that the partial fundus intensity image (as well as the fundus intensity image) can be obtained without the explicit use of the above equation. In fact, any monotonic transformation of the a-line intensities, when summed, can produce an apparent partial fundus intensity image, though the relative intensities of the different structures can differ. For example, the squaring step can be omitted prior to the summing step and the resulting partial fundus intensity image can suffice. Strictly speaking, even a monotonic transformation is not required, as a transformation having a "little wiggle" can provide a similar result. Notwithstanding, the above equation can provide a true intensity and serves as a preferred example of the technique. Note that the generation of a partial fundus image need not be restricted to a summation between two surfaces, but could also be defined by an integral under a z-profile that follows the reference surface (e.g., the two-surface example defines a rectangular z-profile).
[0034] The partial fundus intensity image can be applied to the registration of OCT with other modalities. Specifically, in addition to its use for display of pathology, a partial fundus intensity image can be used to provide high contrast images of the retinal blood vessels that are useful for aligning the OCT data set to images from other clinical modalities (such as fundus photographs). Due to scattering by red blood cells, major retinal blood vessels cast
strong shadows onto the underlying tissue. By placing one boundary of the slab a distance below the ILM that excludes the light scattered by the blood vessels themselves, then forming the partial fundus intensity image from all tissue below that boundary, these shadows are emphasized and the blood vessel pattern stands out in stark relief against the brighter retinal reflection. This type of partial fundus intensity image can be referred to as a shadowgram as other partial fundus intensity images can be used to emphasize intra-retinal or sub-retinal structures by the reflected light of the structures rather than the shadows produced by the incidence of light upon the structures.
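A sketch of such a shadowgram under the same assumed conventions, with the upper boundary placed a fixed number of pixels below a segmented ILM surface (the offset is an illustrative value intended to exclude the light scattered by the vessels themselves) and the lower boundary at the bottom of the scan:

```python
import numpy as np

def shadowgram(a_lines, ilm_surface, offset_px=40):
    """Sum reflectance from all tissue deeper than ILM + offset so that the
    shadows cast by the retinal vessels dominate the en face image."""
    z = np.arange(a_lines.shape[0])[:, None, None]
    below = z >= (ilm_surface[None, :, :] + offset_px)
    return np.where(below, a_lines ** 2, 0.0).sum(axis=0)
```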
[0035] The fundus intensity image also solves the specific problem of image registration for assessment of the retinal nerve fiber layer in glaucoma diagnosis; more accurate registration will reduce measurement variance and improve the detection of glaucoma progression.
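As a hedged illustration of this registration step, the sketch below estimates only a translation between the OCT-derived vessel image and a fundus photograph using phase correlation; a clinical registration would typically also account for rotation, scale, and distortion, and the function names are illustrative rather than part of the specification:

```python
import numpy as np

def estimate_translation(fixed, moving):
    """Phase correlation between two same-sized 2-D images; returns the
    (dy, dx) shift that best aligns `moving` onto `fixed`."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross_power = F * np.conj(M)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > fixed.shape[0] // 2:                 # map wrapped indices to signed shifts
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return dy, dx
```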
[0036] The method of the present invention can be realized in hardware, software, or a combination of hardware and software. An implementation of the method can be realized in a centralized fashion in one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system, or other apparatus adapted for carrying out the methods described herein, is suited to perform the functions described herein.

[0037] A typical combination of hardware and software could be a general-purpose computer system with a computer program that, when loaded and executed, controls the computer system such that it carries out the methods described herein. The present invention can also be embedded in a computer program product which comprises all the features enabling the implementation of the methods described herein and which, when loaded in a computer system, is able to carry out these methods.
[0038] Computer program or application in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form. Significantly, this invention can be embodied in other specific forms without departing from the spirit or essential attributes thereof, and
accordingly, reference should be had to the following claims, rather than to the foregoing specification, as indicating the scope of the invention.
The following references are incorporated herein by reference:
US PATENT DOCUMENTS US5321501 US5459570 US5975697 US6769769 US20040036838 US20040233457
FOREIGN PATENT DOCUMENTS CA2390072 WO2004102112
OTHER PUBLICATIONS
Bower, B. A. et al. "Rapid Volumetric Imaging of the Human Retina in vivo Using a Low-Cost, Spectral-Domain Optical Coherence Tomography System" ARVO annual meeting, May 02, 2005, 8:45 AM - 9:00 AM, #1050
Choma, M. A. et al. (2003). "Sensitivity advantage of swept source and Fourier domain optical coherence tomography. " Optics Express 11(18): 2183-2189
de Boer, J.F. et al. "Ultra-High Speed and Ultra-High Resolution Structural, Flow Velocity, and Polarization Sensitive Sd/Fd-Oct of the Human Retina" ARVO annual meeting, May 02, 2005, 12:15 PM -12:30 PM, #1116
Ferguson, R.D. et al. "Enhanced Retinal Imaging With Tracking Optical Coherence Tomography (TOCT)" ARVO annual meeting, May 02, 2005, 12:45 PM - 1:00 PM, #1118
Fujimoto, J. "Advances in High Resolution and Spectral OCT" Ocular Imaging 2005, Palm Beach, Florida, Dec. 4, 2005, 11:00am
Gregori, G. et al. "3-D OCT Maps of Retinal Pathologies" ARVO annual meeting, May 02, 2005, 10:00 AM -10:15 AM, #1055
Hitzenberger, C. K. et al. (2003). "Three-dimensional imaging of the human retina by highspeed optical coherence tomography." Optics Express 11(21): 2753-2761
Huang, D. et al. (1991). "Optical coherence tomography." Science 254(5035): 1178-81
Izatt, J. A. et al. (1993). "Ophthalmic diagnostics using optical coherence tomography". Ophthalmic Technologies III, SPIE, 1877: 136-144, Los Angeles, CA, USA
Jiao, S., et al. "Simultaneous acquisition of sectional and fundus ophthalmic images with spectral-domain optical coherence tomography" Photonics West, Jan 22, 2005, Ophthalmic Technologies XV, Program # 5688-81.
Jiao, S. et al. (2005) "Simultaneous acquisition of sectional and fundus ophthalmic images with spectral-domain optical coherence tomography" Optics Express 13, 444-452
Jiao, S. et al. "Macula Mapping and Simultaneous Acquisition of Sectional and Fundus Ophthalmic Images With Three-Dimensional Spectral-Domain Optical Coherence Tomography" ARVO annual meeting, May 02, 2005, 11:45 AM -12:00 PM, #1114
Ko, T.H. et al. "Three Dimensional Retinal Imaging of Small Animals With High-speed, Ultrahigh Resolution Optical Coherence Tomography" ARVO annual meeting, May 02, 2005, 9:00 AM - 9:15 AM, #1051
Puliafito C. A. "Summary and Significance" American Academy of Ophthalmology Subspecialty Day on retina, Section X: Ocular Imaging, New Orleans, Oct. 23, 2004, 3:06 pm
Sacu, S. et al. "Imaging of Pigment Epithelial Disease Using Three-dimensional (3D) Ultrahigh Resolution (UHR) Optical Coherence Tomography (OCT)" ARVO annual meeting, May 03, 2005, 11:15 AM - 1:00 PM, #2563/B116
Schmidt-Erfurth, U.M. et al. "Three-Dimensional Ultrahigh Resolution Optical Coherence Tomography (3D UHR OCT): A Video Presentation" ARVO annual meeting, May 02, 2005, 12:00 PM -12:15 PM, #1115
Scholda, C.D. et al. "Visualization of the Vitreoretinal Interface Using Three-Dimensional Ultrahigh Resolution Optical Coherence Tomography" ARVO annual meeting, May 02, 2005, 9:45 AM -10:00 AM, #1054
Sharp, P. F. et al. (2004) "The scanning laser ophthalmoscope — a review of its role in bioscience and medicine" Physics in Medicine and Biology 49: 1085-1096
Srinivasan, V.J. et al. "Intraretinal Thickness Mapping using Three-Dimensional, High-Speed Ultrahigh Resolution OCT" ARVO annual meeting, May 02, 2005, 11:30 AM -11:45 AM, #1113
Swanson, E. A. et al. (1993). "In vivo retinal imaging by optical coherence tomography." Optics Letters 18(21): 1864-1866
Werner, J. S. et al. "Three-Dimensional Retinal Imaging With High Speed and High Resolution OCT" ARVO annual meeting, May 02, 2005, 9:15 AM - 9:30 AM, #1052

Claims

1. An apparatus for obtaining images of an object comprising: a spectral domain optical coherence tomography (OCT) system, said system including a light source, a beam splitter for dividing the light along a sample path and a reference path, said sample path further including a scanner for scanning the light in at least one of the X and Y directions over the sample; a detector for receiving light returned from both the sample and the reference paths and generating a plurality of sets of outputs, each set corresponding to varying intensities at different wavelengths of said source at a particular X/Y position of said light on said sample, said intensities including information about the reflection distribution along the sample in the Z axis; and a processor for directly converting each set of outputs into a single intensity value corresponding to the integrated intensity in the Z axis for the associated X/Y position of the light on the sample, said single intensity values being suitable for generating a two dimensional image intensity map of the object.
2. An apparatus as recited in claim 1, further including a monitor for displaying the intensity values as a two dimensional intensity map.
3. An apparatus as recited in claim 1, wherein said processor converts the detector outputs to intensity values by one of (a) summing the outputs, (b) squaring and then summing the outputs, (c) high pass filtering or high pass filtering and then squaring and summing the results.
4. An apparatus as recited in claim 1, wherein the two dimensional intensity map is used to register an image obtained with another imaging modality.
5. An apparatus as recited in claim 1, wherein the processor further converts the detector outputs into a 3-D data set with Z-axis intensity information for each X/Y position, said 3-D data sets being suitable for generating OCT cross-sectional images.
6. An apparatus as recited in claim 5, wherein the two dimensional intensity map is used to register an OCT cross-sectional image.
7. An apparatus as recited in claim 5, wherein the intensity map is displayed in parallel with an OCT cross-sectional image.
8. An apparatus as recited in claim 5, wherein the processor further functions to segment the 3-D data sets based on landmarks.
9. An apparatus as recited in claim 8, wherein the landmarks are identified from the information generated from the intensity map.
10. An apparatus as recited in claim 8, wherein the landmarks include a pair of physical boundaries within the sample.
11. An apparatus as recited in claim 8, wherein the segmented data is used to generate a partial intensity image.
12. An apparatus as recited in claim 8, wherein the segmented data is used to generate an ocular map.
13. An apparatus as recited in claim 8, wherein the segmented data is used to quantify fluid filled spaces.
14. An apparatus as recited in claim 1, wherein the light source is a broadband light source and the detector is a spectrometer.
15. An apparatus as recited in claim 14, wherein the spectrometer includes a grating and a detector array.
16. An apparatus as recited in claim 1, wherein the source is tunable and swept and intensity at different wavelengths is obtained over time.
17. An apparatus for obtaining images of an object comprising: a spectral domain optical coherence tomography (SD-OCT) system, said system including a scanner for scanning a light beam in at least one of the X and Y directions over the sample, said system being one of a swept source SD-OCT system and a spectrometer based SD-OCT system, said system generating a plurality of sets of outputs, each set corresponding to varying intensities at different wavelengths of said source at a particular X/Y position of said light on said sample, said intensities including information about the reflection distribution along the sample in the Z axis; and a processor for directly converting each set of outputs into a single intensity value corresponding to the integrated intensity in the Z axis for the associated X/Y position of the light on the sample, said single intensity values being suitable for generating a two dimensional image intensity map of the object.
18. An apparatus as recited in claim 17, further including a monitor for displaying the intensity values as a two dimensional intensity map.
19. An apparatus as recited in claim 17, wherein said processor converts the detector outputs to intensity values by one of (a) summing the outputs, (b) squaring and then summing the outputs, (c) high pass filtering or high pass filtering and then squaring and summing the results.
20. An apparatus as recited in claim 17, wherein the light source is a broadband light source and the detector is a spectrometer.
21. A method of generating intensity images of an object utilizing outputs generated by a spectral domain optical coherence tomography (SD-OCT) system, said system including a scanner for scanning a light beam in at least one of the X and Y directions over
the sample, said system being one of a swept source SD-OCT system and a spectrometer based SD-OCT system, said system generating a plurality of sets of outputs, each set corresponding to varying intensities at different wavelengths of said source at a particular X/Y position of said light on said sample, said intensities including information about the reflection distribution along the sample in the Z axis, said method comprising the steps of: directly converting each set of outputs into a single intensity value corresponding to the integrated intensity in the Z axis for the associated X/Y position of the light on the sample; and generating a two-dimensional intensity map of the integrated intensity values.
22. A method as recited in claim 21, further including the step of displaying the intensity map on a monitor.
23. A method as recited in claim 21, wherein the step of converting outputs into single intensity values is performed by one of (a) summing the outputs, (b) squaring and then summing the outputs, (c) high pass filtering or high pass filtering and then squaring and summing the results.
24. A method as recited in claim 21, further including the step of using the two dimensional intensity map to register an image obtained with another imaging modality.
25. A method as recited in claim 21, further including the step of converting the detector outputs into a 3-D data set with Z-axis intensity information for each X/Y position, said 3-D data sets being suitable for generating OCT cross-sectional images.
26. A method as recited in claim 25, further including the step of using the two dimensional intensity map to register an OCT cross-sectional image.
27. A method as recited in claim 25, further including the step of displaying the intensity map in parallel with an OCT cross-sectional image.
28. A method as recited in claim 25, further including the step of segmenting the 3-D data sets based on landmarks.
29. A method as recited in claim 28, wherein the landmarks are identified from the information generated from the intensity map.
30. A method as recited in claim 28, wherein the landmarks include a pair of physical boundaries within the sample.
31. A method as recited in claim 28, wherein the segmented data is used to generate a partial intensity image.
32. A method as recited in claim 28, wherein the segmented data is used to generate an ocular map.
33. A method as recited in claim 28, wherein the segmented data is used to quantify fluid filled spaces.
34. An apparatus for obtaining images of an object comprising: a spectral domain optical coherence tomography (OCT) system, said system including a light source, a beam splitter for dividing the light along a sample path and a reference path, said sample path further including a scanner for scanning the light in at least one of the X and Y directions over the sample; a detector for receiving light returned from both the sample and the reference paths and generating a plurality of sets of outputs, each set corresponding to varying intensities at different wavelengths of said source at a particular X/Y position of said light on said sample, said intensities including information about the reflection distribution along the sample in the Z axis; and a processor for converting the outputs of each set of outputs into a 3-D data set including Z-axis intensity information for each X/Y position, said 3-D data sets being suitable for generating OCT cross-sectional images, said processor further for
converting at least some of the 3-D data sets into a single intensity value corresponding to the integrated intensity across at least a portion of the Z axis for the associated X/Y position of the light on the sample, said single intensity values being suitable for generating a two dimensional image intensity map of the object.
35. An apparatus as recited in claim 34, further including a monitor for displaying the intensity values as a two dimensional intensity map.
36. An apparatus as recited in claim 34, wherein the two dimensional intensity map is used to register an OCT cross-sectional image.
37. An apparatus as recited in claim 34, wherein the intensity map is displayed in parallel with an OCT cross-sectional image.
38. An apparatus as recited in claim 34, wherein the two dimensional intensity map is used to register an image obtained with another imaging modality.
39. An apparatus as recited in claim 34, wherein the processor further functions to segment the 3-D data sets based on landmarks.
40. An apparatus as recited in claim 39, wherein the landmarks are identified from the information generated from the intensity map.
41. An apparatus as recited in claim 39, wherein the landmarks include a pair of physical boundaries within the sample.
42. An apparatus as recited in claim 39, wherein the segmented data is used to generate a partial intensity image.
43. An apparatus as recited in claim 34, wherein the light source is a broadband light source and the detector is a spectrometer.
44. An apparatus for obtaining images of an object comprising: a spectral domain optical coherence tomography (SD-OCT) system, said system including a scanner for scanning a light beam in at least one of the X and Y directions over the sample, said system being one of a swept source SD-OCT system and a spectrometer based SD-OCT system, said system generating a plurality of sets of outputs, each set corresponding to varying intensities at different wavelengths of said source at a particular X/Y position of said light on said sample, said intensities including information about the reflection distribution along the sample in the Z axis; and a processor for converting the outputs of each set of outputs into a 3-D data set including Z-axis intensity information for each X/Y position, said 3-D data sets being suitable for generating OCT cross-sectional images, said processor further for converting at least some of the 3-D data sets into a single intensity value corresponding to the integrated intensity across at least a portion of the Z axis for the associated X/Y position of the light on the sample, said single intensity values being suitable for generating a two dimensional image intensity map of the object.
45. An apparatus as recited in claim 44, further including a monitor for displaying the intensity values as a two dimensional intensity map.
46. An apparatus as recited in claim 44, wherein the two dimensional intensity map is used to register an OCT cross-sectional image.
47. An apparatus as recited in claim 44, wherein the intensity map is displayed in parallel with an OCT cross-sectional image.
48. An apparatus as recited in claim 44, wherein the two dimensional intensity map is used to register an image obtained with another imaging modality.
49. An apparatus as recited in claim 44, wherein the processor further functions to segment the 3-D data sets based on landmarks.
50. An apparatus as recited in claim 49, wherein the landmarks are identified from the information generated from the intensity map.
51. An apparatus as recited in claim 49, wherein the landmarks include a pair of physical boundaries within the sample.
52. An apparatus as recited in claim 49, wherein the segmented data is used to generate a partial intensity image.
53. An apparatus as recited in claim 44, wherein the light source is a broadband light source and the detector is a spectrometer.
54. A method of generating intensity images of an object utilizing the output from a spectral domain optical coherence tomography (SD-OCT) system, said system including a scanner for scanning a light beam in at least one of the X and Y directions over the sample, said system being one of a swept source SD-OCT system and a spectrometer based SD-OCT system, said system generating a plurality of sets of outputs, each set corresponding to varying intensities at different wavelengths of said source at a particular X/Y position of said light on said sample, said intensities including information about the reflection distribution along the sample in the Z axis, said method comprising the steps of: converting the outputs of each set of outputs into a 3-D data set including Z-axis intensity information for each X/Y position, said 3-D data sets being suitable for generating OCT cross-sectional images; converting at least some of the 3-D data sets into a single intensity value corresponding to the integrated intensity across at least a portion of the Z axis for the associated X/Y position of the light on the sample; and generating a two dimensional image map of the integrated intensity values.
55. A method as recited in claim 54, further including the step of displaying the intensity map on a monitor.
56. A method as recited in claim 54, further including the step of using the two dimensional intensity map to register an OCT cross-sectional image.
57. A method as recited in claim 54, further including the step of displaying the intensity map in parallel with an OCT cross-sectional image.
58. A method as recited in claim 54, further including the step of using the two dimensional intensity map to register an image obtained with another imaging modality.
59. A method as recited in claim 54, further including the step of segmenting the 3-D data sets based on landmarks.
60. A method as recited in claim 59, wherein the landmarks are identified from the information generated from the intensity map.
61. A method as recited in claim 59, wherein the landmarks include a pair of physical boundaries within the sample.
62. A method as recited in claim 59, wherein the segmented data is used to generate a partial intensity image.
63. An apparatus for obtaining images of an object comprising: a spectral domain optical coherence tomography (OCT) system, said system including a broadband light source, a beam splitter for dividing the light along a sample path and a reference path, said sample path further including a scanner for scanning the light in at least one of the X and Y directions onto the sample; a detector for receiving light returned from both the sample and the reference paths, said detector including a spectrometer and a line scan camera for generating a plurality of sets of outputs, each set corresponding to varying intensities at different wavelengths of said source at a particular X/Y position of said light on said sample, said intensities including information about the reflection distribution along the sample in the Z axis, said detector further including an auxiliary detector element for monitoring zero order light and generating an output corresponding to the integrated intensity in the Z axis for the associated X/Y position of the light on the sample; and a processor for converting each set of outputs from the line scan camera into a 3-D data set including Z-axis intensity information for each X/Y position, said 3-D data sets being suitable for generating cross-sectional images, said processor further for converting the outputs from the auxiliary detector element into a format suitable for generating a two dimensional image intensity map of the object.
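To make the conversion options recited in claims 3, 19 and 23 concrete, the sketch below (assuming the detector outputs for one scan are held in a NumPy array `spectra[k, y, x]`, intensity versus wavelength index k at each beam position; the names, window length and filtering choice are illustrative only, not the claimed implementation) applies each option directly to the spectral outputs to obtain one integrated-intensity value per X/Y position:

```python
import numpy as np

def integrated_intensity(spectra, mode="square_sum"):
    """Directly convert each set of spectral outputs into a single value.

    spectra : array [k, y, x] of detected intensity at each wavelength k.
    mode    : 'sum'           - (a) sum the outputs
              'square_sum'    - (b) square and then sum the outputs
              'hp_square_sum' - (c) high-pass filter (here, subtract a running
                                mean along k), then square and sum the results
    """
    if mode == "sum":
        return spectra.sum(axis=0)
    if mode == "square_sum":
        return (spectra ** 2).sum(axis=0)
    if mode == "hp_square_sum":
        kernel = np.ones(31) / 31.0
        low = np.apply_along_axis(
            lambda s: np.convolve(s, kernel, mode="same"), 0, spectra)
        return ((spectra - low) ** 2).sum(axis=0)
    raise ValueError(f"unknown mode: {mode}")
```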
PCT/EP2005/012801 2004-12-02 2005-12-01 Enhanced optical coherence tomography for anatomical mapping WO2006058735A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP05815426.1A EP1833359B1 (en) 2004-12-02 2005-12-01 Enhanced optical coherence tomography for anatomical mapping
CA2584958A CA2584958C (en) 2004-12-02 2005-12-01 Enhanced optical coherence tomography for anatomical mapping

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63238704P 2004-12-02 2004-12-02
US60/632,387 2004-12-02
US11/219,992 2005-09-06
US11/219,992 US7301644B2 (en) 2004-12-02 2005-09-06 Enhanced optical coherence tomography for anatomical mapping

Publications (1)

Publication Number Publication Date
WO2006058735A1 true WO2006058735A1 (en) 2006-06-08

Family

ID=35797766

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2005/012801 WO2006058735A1 (en) 2004-12-02 2005-12-01 Enhanced optical coherence tomography for anatomical mapping

Country Status (4)

Country Link
US (5) US7301644B2 (en)
EP (2) EP1833359B1 (en)
CA (1) CA2584958C (en)
WO (1) WO2006058735A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008052793A1 (en) * 2006-11-02 2008-05-08 Heidelberg Engineering Gmbh Method and apparatus for retinal diagnosis
EP1969995A1 (en) * 2007-03-14 2008-09-17 Haag-Streit Ag Eye testing device
EP1975549A1 (en) * 2007-03-30 2008-10-01 Kabushiki Kaisha TOPCON Optical image measurement device and optical image measurement method
DE102008063225A1 (en) * 2008-12-23 2010-07-01 Carl Zeiss Meditec Ag Device for Swept Source Optical Coherence Domain Reflectometry
WO2011020482A1 (en) * 2009-08-19 2011-02-24 Optopol Technology S.A. Method and apparatus for optical coherence tomography
WO2011108231A1 (en) * 2010-03-02 2011-09-09 Canon Kabushiki Kaisha Image processing apparatus, control method, and optical coherence tomography system
GB2485345A (en) * 2010-11-05 2012-05-16 Queen Mary & Westfield College Optical coherence tomography scanning to identify a region of interest in a sample
WO2013139481A1 (en) * 2012-03-21 2013-09-26 Ludwig-Maximilians-Universität München Method for reducing the dimensionality of a spatially registered signal derived from the optical properties of a sample, and device therefor
EP1935329B2 (en) 2006-12-22 2015-08-05 Kabushiki Kaisha TOPCON Fundus oculi observation device and fundus oculi image display device
EP3100671A1 (en) * 2015-06-02 2016-12-07 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US9784559B2 (en) 2012-03-21 2017-10-10 Ludwig-Maximilians-Universität München Swept source OCT system and method with phase-locked detection

Families Citing this family (196)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070076255A1 (en) * 2003-10-15 2007-04-05 Dai Nipon Printing Co., Ltd Image output apparatus, image output method, and image display method
US7301644B2 (en) * 2004-12-02 2007-11-27 University Of Miami Enhanced optical coherence tomography for anatomical mapping
US7884945B2 (en) * 2005-01-21 2011-02-08 Massachusetts Institute Of Technology Methods and apparatus for optical coherence tomography scanning
US7365856B2 (en) 2005-01-21 2008-04-29 Carl Zeiss Meditec, Inc. Method of motion correction in optical coherence tomography imaging
US7805009B2 (en) 2005-04-06 2010-09-28 Carl Zeiss Meditec, Inc. Method and apparatus for measuring motion of a subject using a series of partial images from an imaging system
CN101288102B (en) * 2005-08-01 2013-03-20 拜奥普蒂根公司 Methods and systems for analysis of three dimensional data sets obtained from samples
US7668342B2 (en) 2005-09-09 2010-02-23 Carl Zeiss Meditec, Inc. Method of bioimage data processing for revealing more meaningful anatomic features of diseased tissues
JP5371433B2 (en) 2005-09-29 2013-12-18 ザ ジェネラル ホスピタル コーポレイション Optical imaging method and apparatus by spectral coding
WO2007041125A1 (en) 2005-09-29 2007-04-12 Bioptigen, Inc. Portable optical coherence tomography devices and related systems
JP4642681B2 (en) * 2005-09-30 2011-03-02 富士フイルム株式会社 Optical tomographic imaging system
WO2007044612A2 (en) * 2005-10-07 2007-04-19 Bioptigen, Inc. Imaging systems using unpolarized light and related methods and controllers
US10524656B2 (en) 2005-10-28 2020-01-07 Topcon Medical Laser Systems Inc. Photomedical treatment system and method with a virtual aiming device
WO2007059206A2 (en) * 2005-11-15 2007-05-24 Bioptigen, Inc. Spectral domain phase microscopy (sdpm) dual mode imaging systems and related methods
WO2007061769A2 (en) * 2005-11-18 2007-05-31 Duke University Method and system of coregistrating optical coherence tomography (oct) with other clinical tests
GB2435322A (en) * 2006-02-15 2007-08-22 Oti Ophthalmic Technologies Measuring curvature or axial position using OCT
US7768652B2 (en) * 2006-03-16 2010-08-03 Carl Zeiss Meditec, Inc. Methods for mapping tissue with optical coherence tomography data
US7719692B2 (en) * 2006-04-28 2010-05-18 Bioptigen, Inc. Methods, systems and computer program products for optical coherence tomography (OCT) using automatic dispersion compensation
JP4855150B2 (en) * 2006-06-09 2012-01-18 株式会社トプコン Fundus observation apparatus, ophthalmic image processing apparatus, and ophthalmic image processing program
US20070291277A1 (en) 2006-06-20 2007-12-20 Everett Matthew J Spectral domain optical coherence tomography system
US7742174B2 (en) * 2006-07-17 2010-06-22 Bioptigen, Inc. Methods, systems and computer program products for removing undesired artifacts in fourier domain optical coherence tomography (FDOCT) systems using continuous phase modulation and related phase modulators
US8223143B2 (en) * 2006-10-27 2012-07-17 Carl Zeiss Meditec, Inc. User interface for efficiently displaying relevant OCT imaging data
US7830525B2 (en) 2006-11-01 2010-11-09 Bioptigen, Inc. Optical coherence imaging systems having a mechanism for shifting focus and scanning modality and related adapters
WO2008088868A2 (en) 2007-01-19 2008-07-24 Bioptigen, Inc. Methods, systems and computer program products for processing images generated using fourier domain optical coherence tomography (fdoct)
JP4971863B2 (en) * 2007-04-18 2012-07-11 株式会社トプコン Optical image measuring device
US8180131B2 (en) * 2007-05-04 2012-05-15 Bioptigen, Inc. Methods, systems and computer program products for mixed-density optical coherence tomography (OCT) imaging
CN101778593B (en) * 2007-06-15 2013-03-20 南加州大学 Method for analyzing image of optical coherence tomography
US8184885B2 (en) * 2007-07-24 2012-05-22 University Of Pittsburgh - Of The Commonwealth System Of Higher Education System and method for visualizing a structure of interest
JP4940069B2 (en) * 2007-09-10 2012-05-30 国立大学法人 東京大学 Fundus observation apparatus, fundus image processing apparatus, and program
US7798647B2 (en) * 2007-09-18 2010-09-21 Carl Zeiss Meditec, Inc. RNFL measurement analysis
DE202007014435U1 (en) * 2007-10-16 2009-03-05 Gurny, Eric Optical sensor for a measuring device
US8401246B2 (en) * 2007-11-08 2013-03-19 Topcon Medical Systems, Inc. Mapping of retinal parameters from combined fundus image and three-dimensional optical coherence tomography
JP4933413B2 (en) * 2007-12-11 2012-05-16 株式会社トーメーコーポレーション Anterior segment optical coherence tomography apparatus and anterior segment optical coherence tomography method
GB0802290D0 (en) * 2008-02-08 2008-03-12 Univ Kent Canterbury Camera adapter based optical imaging apparatus
US11839430B2 (en) 2008-03-27 2023-12-12 Doheny Eye Institute Optical coherence tomography-based ophthalmic testing methods, devices and systems
US8348429B2 (en) 2008-03-27 2013-01-08 Doheny Eye Institute Optical coherence tomography device, method, and system
JP5739323B2 (en) * 2008-04-14 2015-06-24 オプトビュー,インコーポレーテッド Optical coherence tomography eye registration method
WO2009131655A2 (en) * 2008-04-23 2009-10-29 University Of Pittsburgh - Of The Commonwealth System Of Higher Education Automated assessment of optic nerve head with spectral domain optical coherence tomograph
US8079711B2 (en) 2008-04-24 2011-12-20 Carl Zeiss Meditec, Inc. Method for finding the lateral position of the fovea in an SDOCT image volume
JP5473265B2 (en) * 2008-07-09 2014-04-16 キヤノン株式会社 Multilayer structure measuring method and multilayer structure measuring apparatus
WO2010009450A1 (en) 2008-07-18 2010-01-21 Doheny Eye Institute Optical coherence tomography device, method, and system
JP5371315B2 (en) * 2008-07-30 2013-12-18 キヤノン株式会社 Optical coherence tomography method and optical coherence tomography apparatus
WO2010017356A2 (en) * 2008-08-08 2010-02-11 University Of Pittsburgh - Of The Commonwealth System Of Higher Education Establishing compatibility between two-and three dimensional optical coherence tomography scans
US20100079580A1 (en) * 2008-09-30 2010-04-01 Waring Iv George O Apparatus and method for biomedical imaging
US8500279B2 (en) * 2008-11-06 2013-08-06 Carl Zeiss Meditec, Inc. Variable resolution optical coherence tomography scanner and method for using same
US8974059B2 (en) * 2008-12-03 2015-03-10 University Of Miami Retinal imaging system for the mouse or rat or other small animals
JP5602363B2 (en) * 2008-12-26 2014-10-08 キヤノン株式会社 Optical coherence tomography system
EP2226003B1 (en) * 2009-03-05 2015-05-06 Brainlab AG Medical image registration by means of optical coherence tomography
US8831304B2 (en) * 2009-05-29 2014-09-09 University of Pittsburgh—of the Commonwealth System of Higher Education Blood vessel segmentation with three-dimensional spectral domain optical coherence tomography
JP4850927B2 (en) * 2009-06-02 2012-01-11 キヤノン株式会社 Image processing apparatus, image processing method, and computer program
JP5473429B2 (en) * 2009-06-25 2014-04-16 キヤノン株式会社 Fundus imaging apparatus and control method thereof
US8332016B2 (en) * 2009-08-04 2012-12-11 Carl Zeiss Meditec, Inc. Non-linear projections of 3-D medical imaging data
JP5017328B2 (en) * 2009-08-11 2012-09-05 キヤノン株式会社 Tomographic imaging apparatus, control method therefor, program, and storage medium
JP4822084B2 (en) * 2009-09-30 2011-11-24 ブラザー工業株式会社 Secure print setting program and secure print setting method
WO2011059655A1 (en) * 2009-10-29 2011-05-19 Optovue, Inc. Enhanced imaging for optical coherence tomography
US8744159B2 (en) * 2010-03-05 2014-06-03 Bioptigen, Inc. Methods, systems and computer program products for collapsing volume data to lower dimensional representations thereof using histogram projection
JP5752955B2 (en) * 2010-03-16 2015-07-22 株式会社ニデック Optical tomography system
WO2011153275A1 (en) * 2010-06-01 2011-12-08 Optovue, Inc. Method and apparatus for enhanced eye measurement
JP5657941B2 (en) * 2010-07-30 2015-01-21 株式会社トプコン Optical tomographic imaging apparatus and operating method thereof
US9101293B2 (en) 2010-08-05 2015-08-11 Carl Zeiss Meditec, Inc. Automated analysis of the optic nerve head: measurements, methods and representations
EP2609851B1 (en) * 2010-08-24 2020-02-26 Kowa Company, Ltd. Visual field examination system
CN103348359A (en) 2010-11-17 2013-10-09 光视有限公司 3D retinal disruptions detection using optical coherence tomography
JP5702991B2 (en) 2010-11-19 2015-04-15 キヤノン株式会社 Image processing apparatus and image processing method
US10140699B2 (en) 2010-12-07 2018-11-27 University Of Iowa Research Foundation Optimal, user-friendly, object background separation
JP5842330B2 (en) * 2010-12-27 2016-01-13 株式会社ニデック Fundus photocoagulation laser device
JP6005663B2 (en) 2011-01-20 2016-10-12 ユニバーシティ オブ アイオワ リサーチ ファウンデーション Automatic measurement of arteriovenous ratio in blood vessel images
US9033510B2 (en) 2011-03-30 2015-05-19 Carl Zeiss Meditec, Inc. Systems and methods for efficiently obtaining measurements of the human eye using tracking
KR101792588B1 (en) * 2011-04-12 2017-11-01 삼성전자주식회사 Apparatus and method optical coherence tomography using multiple beams
CA2834289A1 (en) * 2011-04-29 2012-11-01 Optovue, Inc. Improved imaging with real-time tracking using optical coherence tomography
US9226654B2 (en) 2011-04-29 2016-01-05 Carl Zeiss Meditec, Inc. Systems and methods for automated classification of abnormalities in optical coherence tomography images of the eye
US20120281236A1 (en) * 2011-05-04 2012-11-08 The Johns Hopkins University Four-dimensional optical coherence tomography imaging and guidance system
US20140125983A1 (en) * 2011-05-31 2014-05-08 Tornado Medical Systems, Inc. Interferometery on a planar substrate
US8857988B2 (en) 2011-07-07 2014-10-14 Carl Zeiss Meditec, Inc. Data acquisition methods for reduced motion artifacts and applications in OCT angiography
US8433393B2 (en) 2011-07-07 2013-04-30 Carl Zeiss Meditec, Inc. Inter-frame complex OCT data analysis techniques
JP6025311B2 (en) 2011-08-01 2016-11-16 キヤノン株式会社 Ophthalmic diagnosis support apparatus and method
JP5975650B2 (en) * 2012-01-16 2016-08-23 キヤノン株式会社 Image forming method and apparatus
US9101294B2 (en) 2012-01-19 2015-08-11 Carl Zeiss Meditec, Inc. Systems and methods for enhanced accuracy in OCT imaging of the cornea
US8944597B2 (en) 2012-01-19 2015-02-03 Carl Zeiss Meditec, Inc. Standardized display of optical coherence tomography imaging data
JP6200902B2 (en) * 2012-02-03 2017-09-20 オレゴン ヘルス アンド サイエンス ユニバーシティ Optical flow imaging in vivo
JP6143421B2 (en) * 2012-03-30 2017-06-07 キヤノン株式会社 Optical coherence tomography apparatus and method
WO2013157673A1 (en) * 2012-04-18 2013-10-24 Lg Electronics Inc. Optical coherence tomography and control method for the same
WO2013165614A1 (en) 2012-05-04 2013-11-07 University Of Iowa Research Foundation Automated assessment of glaucoma loss from optical coherence tomography
US9192294B2 (en) 2012-05-10 2015-11-24 Carl Zeiss Meditec, Inc. Systems and methods for faster optical coherence tomography acquisition and processing
US9357916B2 (en) 2012-05-10 2016-06-07 Carl Zeiss Meditec, Inc. Analysis and visualization of OCT angiography data
US8876292B2 (en) 2012-07-03 2014-11-04 Nidek Co., Ltd. Fundus imaging apparatus
US8781190B2 (en) * 2012-08-13 2014-07-15 Crystalvue Medical Corporation Image-recognition method for assisting ophthalmic examination instrument
WO2014038812A1 (en) * 2012-09-06 2014-03-13 Samsung Electronics Co., Ltd. Method and apparatus for displaying stereoscopic information related to ultrasound sectional plane of target object
US9677869B2 (en) 2012-12-05 2017-06-13 Perimeter Medical Imaging, Inc. System and method for generating a wide-field OCT image of a portion of a sample
US9025159B2 (en) * 2012-12-10 2015-05-05 The Johns Hopkins University Real-time 3D and 4D fourier domain doppler optical coherence tomography system
US8783868B2 (en) 2012-12-21 2014-07-22 Carl Zeiss Meditec, Inc. Two-dimensional confocal imaging using OCT light source and scan optics
EP2749204B1 (en) * 2012-12-28 2016-03-16 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US9179834B2 (en) 2013-02-01 2015-11-10 Kabushiki Kaisha Topcon Attenuation-based optic neuropathy detection with three-dimensional optical coherence tomography
US9351698B2 (en) 2013-03-12 2016-05-31 Lightlab Imaging, Inc. Vascular data processing and image registration systems, methods, and apparatuses
US9241626B2 (en) 2013-03-14 2016-01-26 Carl Zeiss Meditec, Inc. Systems and methods for improved acquisition of ophthalmic optical coherence tomography data
US10772497B2 (en) 2014-09-12 2020-09-15 Envision Diagnostics, Inc. Medical interfaces and other medical devices, systems, and methods for performing eye exams
US9226856B2 (en) 2013-03-14 2016-01-05 Envision Diagnostics, Inc. Inflatable medical interfaces and other medical devices, systems, and methods
US9420945B2 (en) 2013-03-14 2016-08-23 Carl Zeiss Meditec, Inc. User interface for acquisition, display and analysis of ophthalmic diagnostic data
US20140276025A1 (en) * 2013-03-14 2014-09-18 Carl Zeiss Meditec, Inc. Multimodal integration of ocular data acquisition and analysis
US20140293289A1 (en) * 2013-03-27 2014-10-02 Kabushiki Kaisha Topcon Method for Generating Two-Dimensional Images From Three-Dimensional Optical Coherence Tomography Interferogram Data
JP6415030B2 (en) * 2013-08-07 2018-10-31 キヤノン株式会社 Image processing apparatus, image processing method, and program
US9778021B2 (en) 2013-08-29 2017-10-03 Carl Zeiss Meditec, Inc. Evaluation of optical coherence tomographic data prior to segmentation
US9471975B2 (en) 2013-10-22 2016-10-18 Bioptigen, Inc. Methods, systems and computer program products for dynamic optical histology using optical coherence tomography
US9377291B2 (en) 2013-12-05 2016-06-28 Bioptigen, Inc. Image registration, averaging, and compounding for high speed extended depth optical coherence tomography
US10307056B2 (en) 2013-12-05 2019-06-04 Bioptigen, Inc. Systems and methods for quantitative doppler optical coherence tomography
US9526412B2 (en) 2014-01-21 2016-12-27 Kabushiki Kaisha Topcon Geographic atrophy identification and measurement
US9237847B2 (en) 2014-02-11 2016-01-19 Welch Allyn, Inc. Ophthalmoscope device
US9211064B2 (en) 2014-02-11 2015-12-15 Welch Allyn, Inc. Fundus imaging system
JP6345944B2 (en) * 2014-02-21 2018-06-20 株式会社ミツトヨ Oblique incidence interferometer
WO2015134641A1 (en) * 2014-03-04 2015-09-11 University Of Southern California Extended duration optical coherence tomography (oct) system
US10149610B2 (en) * 2014-04-25 2018-12-11 Carl Zeiss Meditec, Inc. Methods and systems for automatic detection and classification of ocular inflammation
US10398302B2 (en) 2014-05-02 2019-09-03 Carl Zeiss Meditec, Inc. Enhanced vessel characterization in optical coherence tomograogphy angiography
US11071452B2 (en) 2014-06-30 2021-07-27 Nidek Co., Ltd. Optical coherence tomography device, optical coherence tomography calculation method, and optical coherence tomography calculation program
WO2016001868A1 (en) * 2014-07-02 2016-01-07 Si14 S.P.A. A method for acquiring and processing images of an ocular fundus by means of a portable electronic device
US9636011B2 (en) 2014-07-03 2017-05-02 Carl Zeiss Meditec, Inc. Systems and methods for spectrally dispersed illumination optical coherence tomography
US9759544B2 (en) 2014-08-08 2017-09-12 Carl Zeiss Meditec, Inc. Methods of reducing motion artifacts for optical coherence tomography angiography
US10499813B2 (en) 2014-09-12 2019-12-10 Lightlab Imaging, Inc. Methods, systems and apparatus for temporal calibration of an intravascular imaging system
US10258231B2 (en) 2014-12-30 2019-04-16 Optovue, Inc. Methods and apparatus for retina blood vessel assessment with OCT angiography
US10105107B2 (en) 2015-01-08 2018-10-23 St. Jude Medical International Holding S.À R.L. Medical system having combined and synergized data output from multiple independent inputs
US10117568B2 (en) 2015-01-15 2018-11-06 Kabushiki Kaisha Topcon Geographic atrophy identification and measurement
US9700206B2 (en) 2015-02-05 2017-07-11 Carl Zeiss Meditec, Inc. Acquistion and analysis techniques for improved outcomes in optical coherence tomography angiography
US10368734B2 (en) 2015-02-19 2019-08-06 Carl Zeiss Meditec, Inc. Methods and systems for combined morphological and angiographic analyses of retinal features
US11045088B2 (en) 2015-02-27 2021-06-29 Welch Allyn, Inc. Through focus retinal image capturing
US10799115B2 (en) 2015-02-27 2020-10-13 Welch Allyn, Inc. Through focus retinal image capturing
CA2980289A1 (en) 2015-03-20 2016-09-29 Glaukos Corporation Gonioscopic devices
US10115194B2 (en) 2015-04-06 2018-10-30 IDx, LLC Systems and methods for feature detection in retinal images
US9984459B2 (en) 2015-04-15 2018-05-29 Kabushiki Kaisha Topcon OCT angiography calculation with optimized signal processing
CN104835150B (en) * 2015-04-23 2018-06-19 深圳大学 A kind of optical fundus blood vessel geometry key point image processing method and device based on study
US9996921B2 (en) 2015-05-17 2018-06-12 LIGHTLAB IMAGING, lNC. Detection of metal stent struts
US10222956B2 (en) 2015-05-17 2019-03-05 Lightlab Imaging, Inc. Intravascular imaging user interface systems and methods
US10646198B2 (en) 2015-05-17 2020-05-12 Lightlab Imaging, Inc. Intravascular imaging and guide catheter detection methods and systems
US10109058B2 (en) 2015-05-17 2018-10-23 Lightlab Imaging, Inc. Intravascular imaging system interfaces and stent detection methods
JP6606881B2 (en) * 2015-06-16 2019-11-20 株式会社ニデック OCT signal processing apparatus, OCT signal processing program, and OCT apparatus
CN104933715A (en) * 2015-06-16 2015-09-23 山东大学(威海) Registration method applied to retina fundus image
US10136804B2 (en) 2015-07-24 2018-11-27 Welch Allyn, Inc. Automatic fundus image capture system
US10338795B2 (en) 2015-07-25 2019-07-02 Lightlab Imaging, Inc. Intravascular data visualization and interface systems and methods
CN104958061B (en) * 2015-07-28 2016-09-14 北京信息科技大学 The optical fundus OCT image method of binocular stereo vision three-dimensional imaging and system thereof
US11039741B2 (en) 2015-09-17 2021-06-22 Envision Diagnostics, Inc. Medical interfaces and other medical devices, systems, and methods for performing eye exams
US10264963B2 (en) 2015-09-24 2019-04-23 Carl Zeiss Meditec, Inc. Methods for high sensitivity flow visualization
CN105374028B (en) * 2015-10-12 2018-10-02 中国科学院上海光学精密机械研究所 The method of optical coherent chromatographic imaging retinal images layering
US10772495B2 (en) 2015-11-02 2020-09-15 Welch Allyn, Inc. Retinal image capturing
US10402965B1 (en) 2015-11-12 2019-09-03 Carl Zeiss Meditec, Inc. Systems and methods for reducing artifacts in OCT angiography images
JP6894896B2 (en) 2015-11-18 2021-06-30 ライトラボ・イメージング・インコーポレーテッド X-ray image feature detection and alignment systems and methods
CN115998310A (en) 2015-11-23 2023-04-25 光学实验室成像公司 Detection and verification of shadows in intravascular images
WO2017120217A1 (en) 2016-01-07 2017-07-13 Welch Allyn, Inc. Infrared fundus imaging system
EP3213668B1 (en) * 2016-03-03 2021-07-28 Nidek Co., Ltd. Ophthalmic image processing apparatus
CN109643449A (en) 2016-04-14 2019-04-16 光学实验室成像公司 The identification of vessel branch
US10512395B2 (en) 2016-04-29 2019-12-24 Carl Zeiss Meditec, Inc. Montaging of wide-field fundus images
EP3448234A4 (en) 2016-04-30 2019-05-01 Envision Diagnostics, Inc. Medical devices, systems, and methods for performing eye exams and eye tracking
US10631754B2 (en) 2016-05-16 2020-04-28 Lightlab Imaging, Inc. Intravascular absorbable stent detection and diagnostic methods and systems
US10832051B1 (en) * 2016-06-13 2020-11-10 Facebook Technologies, Llc Eye tracking using optical coherence methods
US20180012359A1 (en) * 2016-07-06 2018-01-11 Marinko Venci Sarunic Systems and Methods for Automated Image Classification and Segmentation
WO2018033770A1 (en) * 2016-08-16 2018-02-22 Stroma Medical Corporation Method and apparatus for prediction of post-operative perceived iris color
US10602926B2 (en) 2016-09-29 2020-03-31 Welch Allyn, Inc. Through focus retinal image capturing
US11026581B2 (en) * 2016-09-30 2021-06-08 Industrial Technology Research Institute Optical probe for detecting biological tissue
EP3558091A4 (en) 2016-12-21 2020-12-02 Acucela, Inc. Miniaturized mobile, low cost optical coherence tomography system for home based ophthalmic applications
US10896490B2 (en) * 2016-12-23 2021-01-19 Oregon Health & Science University Systems and methods for reflectance-based projection-resolved optical coherence tomography angiography
JP6829992B2 (en) * 2016-12-28 2021-02-17 株式会社キーエンス Optical scanning height measuring device
JP6866167B2 (en) * 2017-01-18 2021-04-28 キヤノン株式会社 Information processing equipment, information processing methods and programs
US10674906B2 (en) 2017-02-24 2020-06-09 Glaukos Corporation Gonioscopes
USD833008S1 (en) 2017-02-27 2018-11-06 Glaukos Corporation Gonioscope
US10441164B1 (en) 2017-03-09 2019-10-15 Carl Zeiss Meditec, Inc. Correction of decorrelation tail artifacts in a whole OCT-A volume
US10832402B2 (en) 2017-03-10 2020-11-10 Carl Zeiss Meditec, Inc. Methods for detection and enhanced visualization of pathologies in a human eye
US20200375452A1 (en) * 2017-03-27 2020-12-03 The Board Of Trustees Of The University Of Illinois An optical coherence tomography (oct) system and method that measure stimulus-evoked neural activity and hemodynamic responses
US10839515B2 (en) 2017-04-28 2020-11-17 Massachusetts Institute Of Technology Systems and methods for generating and displaying OCT angiography data using variable interscan time analysis
WO2018204748A1 (en) * 2017-05-05 2018-11-08 Massachusetts Institute Of Technology Systems and methods for generating and displaying oct blood flow speeds by merging mutiple integrated spatial samplings government license rights
EP3655748B1 (en) 2017-07-18 2023-08-09 Perimeter Medical Imaging, Inc. Sample container for stabilizing and aligning excised biological tissue samples for ex vivo analysis
US11412928B2 (en) 2017-08-11 2022-08-16 Carl Zeiss Meditec, Inc. Systems and methods for improved ophthalmic imaging
WO2019045144A1 (en) * 2017-08-31 2019-03-07 (주)레벨소프트 Medical image processing apparatus and medical image processing method which are for medical navigation device
US11000187B2 (en) 2017-09-07 2021-05-11 Carl Zeiss Meditec, Inc. Systems and methods for improved montaging of ophthalmic imaging data
US10963046B1 (en) 2018-05-17 2021-03-30 Facebook Technologies, Llc Drift corrected eye tracking
US11096574B2 (en) 2018-05-24 2021-08-24 Welch Allyn, Inc. Retinal image capturing
CN112638233A (en) 2018-06-20 2021-04-09 奥克塞拉有限公司 Miniature mobile low-cost optical coherence tomography system based on home ophthalmic applications
CN110448266B (en) * 2018-12-29 2022-03-04 中国科学院宁波工业技术研究院慈溪生物医学工程研究所 Random laser confocal line scanning three-dimensional ophthalmoscope and imaging method
CN113574542A (en) 2019-02-08 2021-10-29 卡尔蔡司医疗技术公司 Segmentation and classification of geographic atrophy patterns in patients with age-related macular degeneration in wide-field autofluorescence images
CN113396440A (en) 2019-02-14 2021-09-14 卡尔蔡司医疗技术公司 System for OCT image conversion and ophthalmic image denoising and neural network thereof
US20220160228A1 (en) 2019-03-20 2022-05-26 Carl Zeiss Meditec Ag A patient tuned ophthalmic imaging system with single exposure multi-type imaging, improved focusing, and improved angiography image sequence display
JP7409793B2 (en) * 2019-07-02 2024-01-09 株式会社トプコン Optical coherence tomography (OCT) device operating method, OCT data processing device operating method, OCT device, OCT data processing device
EP3760967A3 (en) 2019-07-02 2021-04-07 Topcon Corporation Method of processing optical coherence tomography (oct) data
EP4007518A1 (en) 2019-08-01 2022-06-08 Carl Zeiss Meditec, Inc. Ophthalmic imaging with k-mirror scanning, efficient interferometry, and pupil alignment through spatial frequency analysis
US20220400943A1 (en) 2019-09-06 2022-12-22 Carl Zeiss Meditec, Inc. Machine learning methods for creating structure-derived visual field priors
EP4076135A1 (en) 2019-12-18 2022-10-26 Carl Zeiss Meditec AG Personalized patient interface for ophthalmic devices
JP2023508946A (en) 2019-12-26 2023-03-06 アキュセラ インコーポレイテッド Optical Coherence Tomography Patient Alignment System for Home-Based Ophthalmic Applications
US20230073778A1 (en) 2020-02-21 2023-03-09 Carl Zeiss Meditec, Inc. Oct zonule imaging
EP4128138A1 (en) 2020-03-30 2023-02-08 Carl Zeiss Meditec, Inc. Correction of flow projection artifacts in octa volumes using neural networks
CN115443480A (en) 2020-04-29 2022-12-06 卡尔蔡司医疗技术公司 OCT orthopathology segmentation using channel-encoded slabs
EP4142571A1 (en) 2020-04-29 2023-03-08 Carl Zeiss Meditec, Inc. Real-time ir fundus image tracking in the presence of artifacts using a reference landmark
CN115413350A (en) 2020-04-30 2022-11-29 卡尔蔡司医疗技术公司 Bruch's membrane segmentation in OCT
US10959613B1 (en) 2020-08-04 2021-03-30 Acucela Inc. Scan pattern and signal processing for optical coherence tomography
US11393094B2 (en) 2020-09-11 2022-07-19 Acucela Inc. Artificial intelligence for evaluation of optical coherence tomography images
US11911105B2 (en) 2020-09-30 2024-02-27 Acucela Inc. Myopia prediction, diagnosis, planning, and monitoring device
WO2022112546A1 (en) 2020-11-30 2022-06-02 Carl Zeiss Meditec, Inc. Quality maps for optical coherence tomography angiography
WO2022117661A2 (en) 2020-12-04 2022-06-09 Carl Zeiss Meditec, Inc. Using multiple sub-volumes, thicknesses, and curvatures for oct/octa data registration and retinal landmark detection
US20240081638A1 (en) 2021-02-01 2024-03-14 Carl Zeiss Meditec, Inc. Micro-bench oct design
US20240127446A1 (en) 2021-02-26 2024-04-18 Carl Zeiss Meditec, Inc. Semi-supervised fundus image quality assessment method using ir tracking
EP4312717A1 (en) 2021-03-24 2024-02-07 Acucela Inc. Axial length measurement monitor
CN117795607A (en) 2021-06-17 2024-03-29 卡尔蔡司医疗技术公司 Medical data sharing using blockchain
WO2023126340A1 (en) 2021-12-27 2023-07-06 Carl Zeiss Meditec, Inc. Fluid tracking in wet amd patients using thickness change analysis

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004043245A1 (en) * 2002-11-07 2004-05-27 Pawel Woszczyk A method of fast imaging of objects by means of spectral optical coherence tomography

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5465147A (en) * 1991-04-29 1995-11-07 Massachusetts Institute Of Technology Method and apparatus for acquiring images using a ccd detector array and no transverse scanner
US5321501A (en) * 1991-04-29 1994-06-14 Massachusetts Institute Of Technology Method and apparatus for optical imaging with means for controlling the longitudinal range of the sample
JPH05203908A (en) * 1991-06-07 1993-08-13 Hughes Aircraft Co Single light valve full color projection display device
EP0971624A1 (en) * 1997-03-13 2000-01-19 Biomax Technologies, Inc. Methods and apparatus for detecting the rejection of transplanted tissue
US5921926A (en) * 1997-07-28 1999-07-13 University Of Central Florida Three dimensional optical imaging colposcopy
US5975697A (en) * 1998-11-25 1999-11-02 Oti Ophthalmic Technologies, Inc. Optical mapping apparatus with adjustable depth resolution
CA2390072C (en) * 2002-06-28 2018-02-27 Adrian Gh Podoleanu Optical mapping apparatus with adjustable depth resolution and multiple functionality
US7474407B2 (en) * 2003-02-20 2009-01-06 Applied Science Innovations Optical coherence tomography with 3d coherence scanning
US6927860B2 (en) 2003-05-19 2005-08-09 Oti Ophthalmic Technologies Inc. Optical mapping apparatus with optimized OCT configuration
WO2004111929A2 (en) * 2003-05-28 2004-12-23 Duke University Improved system for fourier domain optical coherence tomography
US7433046B2 (en) * 2004-09-03 2008-10-07 Carl Zeiss Meditec, Inc. Patterned spinning disk based optical phase shifter for spectral domain optical coherence tomography
US7301644B2 (en) * 2004-12-02 2007-11-27 University Of Miami Enhanced optical coherence tomography for anatomical mapping

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004043245A1 (en) * 2002-11-07 2004-05-27 Pawel Woszczyk A method of fast imaging of objects by means of spectral optical coherence tomography

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHOMA M A; SARUNIC M V; CHANGHUEI YANG; IZATT J A: "Sensitivity advantage of swept source and Fourier domain optical coherence tomography", OPTICS EXPRESS, vol. 11, no. 18, 8 September 2003 (2003-09-08), pages 2183 - 2189, XP002370567 *
SHULIANG JIAO; KNIGHTON R; XIANGRUN HUANG; GREGORI G; PULIAFITO C A: "Simultaneous acquisition of sectional and fundus ophthalmic images with spectral-domain optical coherence tomography", OPTICS EXPRESS, vol. 13, no. 2, 24 January 2005 (2005-01-24), pages 444 - 452, XP002370566 *
WOJTKOWSKI ET AL: "Three-dimensional Retinal Imaging with High-Speed Ultrahigh-Resolution Optical Coherence Tomography", OPHTHALMOLOGY, J. B. LIPPINCOTT CO., PHILADELPHIA, PA, US, vol. 112, no. 10, October 2005 (2005-10-01), pages 1734 - 1746, XP005104095, ISSN: 0161-6420 *
WOJTKOWSKI M ET AL: "Ophthalmic imaging by spectral optical coherence tomography", AMERICAN JOURNAL OF OPHTHALMOLOGY, OPHTHALMIC PUBL., CHICAGO, IL,, US, vol. 138, no. 3, September 2004 (2004-09-01), pages 412 - 419, XP004722403, ISSN: 0002-9394 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7971999B2 (en) 2006-11-02 2011-07-05 Heidelberg Engineering Gmbh Method and apparatus for retinal diagnosis
WO2008052793A1 (en) * 2006-11-02 2008-05-08 Heidelberg Engineering Gmbh Method and apparatus for retinal diagnosis
EP1935329B2 (en) 2006-12-22 2015-08-05 Kabushiki Kaisha TOPCON Fundus oculi observation device and fundus oculi image display device
EP1969995A1 (en) * 2007-03-14 2008-09-17 Haag-Streit Ag Eye testing device
US7604351B2 (en) 2007-03-30 2009-10-20 Kabushiki Kaisha Topcon Optical image measurement device and optical image measurement method
EP1975549A1 (en) * 2007-03-30 2008-10-01 Kabushiki Kaisha TOPCON Optical image measurement device and optical image measurement method
DE102008063225A1 (en) * 2008-12-23 2010-07-01 Carl Zeiss Meditec Ag Device for Swept Source Optical Coherence Domain Reflectometry
US8690330B2 (en) 2008-12-23 2014-04-08 Carl Zeiss Meditec Ag Device for swept-source optical coherence domain reflectometry
US9044164B2 (en) 2008-12-23 2015-06-02 Carl Zeiss Meditec Ag Device for swept source optical coherence domain reflectometry
WO2011020482A1 (en) * 2009-08-19 2011-02-24 Optopol Technology S.A. Method and apparatus for optical coherence tomography
WO2011108231A1 (en) * 2010-03-02 2011-09-09 Canon Kabushiki Kaisha Image processing apparatus, control method, and optical coherence tomography system
CN102802505A (en) * 2010-03-02 2012-11-28 佳能株式会社 Image processing apparatus, control method, and optical coherence tomography system
US9025847B2 (en) 2010-03-02 2015-05-05 Canon Kabushiki Kaisha Image processing apparatus, control method, and optical coherence tomography system
GB2485345A (en) * 2010-11-05 2012-05-16 Queen Mary & Westfield College Optical coherence tomography scanning to identify a region of interest in a sample
WO2013139481A1 (en) * 2012-03-21 2013-09-26 Ludwig-Maximilians-Universität München Method for reducing the dimensionality of a spatially registered signal derived from the optical properties of a sample, and device therefor
US9709380B2 (en) 2012-03-21 2017-07-18 Ludwig-Maximilians-Universität München Method for reducing the dimensionality of a spatially registered signal derived from the optical properties of a sample, and device therefor
US9784559B2 (en) 2012-03-21 2017-10-10 Ludwig-Maximilians-Universität München Swept source OCT system and method with phase-locked detection
EP3100671A1 (en) * 2015-06-02 2016-12-07 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20160358347A1 (en) * 2015-06-02 2016-12-08 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US9875559B2 (en) 2015-06-02 2018-01-23 Canon Kabushiki Kaisha Image processing apparatus and image processing method

Also Published As

Publication number Publication date
US20110273667A1 (en) 2011-11-10
US7301644B2 (en) 2007-11-27
US20060119858A1 (en) 2006-06-08
US20080068560A1 (en) 2008-03-20
EP2612589A3 (en) 2013-07-31
EP1833359B1 (en) 2017-06-28
US20100208201A1 (en) 2010-08-19
CA2584958C (en) 2016-01-26
US7505142B2 (en) 2009-03-17
EP1833359A1 (en) 2007-09-19
EP2612589A2 (en) 2013-07-10
CA2584958A1 (en) 2006-06-08
US20090180123A1 (en) 2009-07-16
US7924429B2 (en) 2011-04-12
US8319974B2 (en) 2012-11-27
US7659990B2 (en) 2010-02-09
EP2612589B1 (en) 2018-04-11

Similar Documents

Publication Title
US7301644B2 (en) Enhanced optical coherence tomography for anatomical mapping
US11861830B2 (en) Image analysis
JP4777362B2 (en) Motion correction method in optical coherence tomography imaging
JP5685013B2 (en) Optical tomographic imaging apparatus, control method therefor, and program
JP5166889B2 (en) Quantitative measurement device for fundus blood flow
JP4940070B2 (en) Fundus observation apparatus, ophthalmic image processing apparatus, and program
JP6632267B2 (en) Ophthalmic apparatus, display control method and program
JP6702764B2 (en) Optical coherence tomographic data processing method, program for executing the method, and processing apparatus
JP6143422B2 (en) Image processing apparatus and method
JP6798095B2 (en) Optical coherence tomography equipment and control programs used for it
US9243887B2 (en) Lateral distortion corrected optical coherence tomography system
US20130188132A1 (en) Standardized display of optical coherence tomography imaging data
JP7162553B2 (en) Ophthalmic information processing device, ophthalmic device, ophthalmic information processing method, and program
EP3845119A1 (en) Ophthalmology device, and control method therefor
JP2018068778A (en) Ophthalmologic oct analyzer and ophthalmologic analysis program
JP7246862B2 (en) IMAGE PROCESSING DEVICE, CONTROL METHOD AND PROGRAM OF IMAGE PROCESSING DEVICE
JP6849780B2 (en) Ophthalmic equipment, display control methods and programs
Rosen et al. Multiplanar OCT/confocal ophthalmoscope in the clinic
JP2019180692A (en) Image processing apparatus, image processing method, and program

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2584958

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2005815426

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2005815426

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2005815426

Country of ref document: EP