US20040218804A1 - Image analysis system and method - Google Patents
- Publication number
- US20040218804A1 (application US10/769,150)
- Authority
- US
- United States
- Prior art keywords
- image
- sample
- pixels
- imaging
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- C—CHEMISTRY; METALLURGY
- C30—CRYSTAL GROWTH
- C30B—SINGLE-CRYSTAL GROWTH; UNIDIRECTIONAL SOLIDIFICATION OF EUTECTIC MATERIAL OR UNIDIRECTIONAL DEMIXING OF EUTECTOID MATERIAL; REFINING BY ZONE-MELTING OF MATERIAL; PRODUCTION OF A HOMOGENEOUS POLYCRYSTALLINE MATERIAL WITH DEFINED STRUCTURE; SINGLE CRYSTALS OR HOMOGENEOUS POLYCRYSTALLINE MATERIAL WITH DEFINED STRUCTURE; AFTER-TREATMENT OF SINGLE CRYSTALS OR A HOMOGENEOUS POLYCRYSTALLINE MATERIAL WITH DEFINED STRUCTURE; APPARATUS THEREFOR
- C30B29/00—Single crystals or homogeneous polycrystalline material with defined structure characterised by the material or by their shape
- C30B29/54—Organic compounds
- C30B29/58—Macromolecular compounds
-
- C—CHEMISTRY; METALLURGY
- C30—CRYSTAL GROWTH
- C30B—SINGLE-CRYSTAL GROWTH; UNIDIRECTIONAL SOLIDIFICATION OF EUTECTIC MATERIAL OR UNIDIRECTIONAL DEMIXING OF EUTECTOID MATERIAL; REFINING BY ZONE-MELTING OF MATERIAL; PRODUCTION OF A HOMOGENEOUS POLYCRYSTALLINE MATERIAL WITH DEFINED STRUCTURE; SINGLE CRYSTALS OR HOMOGENEOUS POLYCRYSTALLINE MATERIAL WITH DEFINED STRUCTURE; AFTER-TREATMENT OF SINGLE CRYSTALS OR A HOMOGENEOUS POLYCRYSTALLINE MATERIAL WITH DEFINED STRUCTURE; APPARATUS THEREFOR
- C30B7/00—Single-crystal growth from solutions using solvents which are liquid at normal temperature, e.g. aqueous solutions
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/17—Systems in which incident light is modified in accordance with the properties of the material investigated
- G01N21/25—Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
- G01N21/251—Colorimeters; Construction thereof
- G01N21/253—Colorimeters; Construction thereof for batch operation, i.e. multisample apparatus
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/02—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations
- G01N35/028—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations having reaction cells in the form of microtitration plates
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/0004—Microscopes specially adapted for specific applications
- G02B21/0016—Technical microscopes, e.g. for inspection or measuring in industrial production processes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B01—PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL
- B01L—CHEMICAL OR PHYSICAL LABORATORY APPARATUS FOR GENERAL USE
- B01L9/00—Supporting devices; Holding devices
- B01L9/52—Supports specially adapted for flat sample carriers, e.g. for plates, slides, chips
- B01L9/523—Supports specially adapted for flat sample carriers, e.g. for plates, slides, chips for multisample carriers, e.g. used for microtitration plates
-
- G01N15/1433—
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N1/00—Sampling; Preparing specimens for investigation
- G01N1/28—Preparing specimens for investigation including physical details of (bio-)chemical methods covered elsewhere, e.g. G01N33/50, C12Q
- G01N1/40—Concentrating samples
- G01N1/4022—Concentrating samples by thermal techniques; Phase changes
- G01N2001/4027—Concentrating samples by thermal techniques; Phase changes evaporation leaving a concentrated sample
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N15/00—Investigating characteristics of particles; Investigating permeability, pore-volume, or surface-area of porous materials
- G01N15/10—Investigating individual particles
- G01N15/14—Electro-optical investigation, e.g. flow cytometers
- G01N2015/1493—Particle size
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N2035/00346—Heating or cooling arrangements
- G01N2035/00356—Holding samples at elevated temperature (incubation)
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N2035/00346—Heating or cooling arrangements
- G01N2035/00455—Controlling humidity in analyser
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/00584—Control arrangements for automatic analysers
- G01N35/00722—Communications; Identification
- G01N35/00871—Communications between instruments or with remote terminals
- G01N2035/00881—Communications between instruments or with remote terminals network configurations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/02—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations
- G01N35/04—Details of the conveyor system
- G01N2035/0401—Sample carriers, cuvettes or reaction vessels
- G01N2035/0418—Plate elements with several rows of samples
- G01N2035/042—Plate elements with several rows of samples moved independently, e.g. by fork manipulator
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/02—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations
- G01N35/04—Details of the conveyor system
- G01N2035/0401—Sample carriers, cuvettes or reaction vessels
- G01N2035/0418—Plate elements with several rows of samples
- G01N2035/0425—Stacks, magazines or elevators for plates
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/02—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations
- G01N35/04—Details of the conveyor system
- G01N2035/046—General conveyor features
- G01N2035/0462—Buffers [FIFO] or stacks [LIFO] for holding carriers between operations
- G01N2035/0463—Buffers [FIFO] or stacks [LIFO] for holding carriers between operations in incubators
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/0099—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor comprising robots or similar manipulators
Definitions
- This invention generally relates to systems and methods for analyzing and exploiting images. More particularly, the invention relates to systems and methods for identifying and analyzing images of substances in samples.
- X-ray crystallography is used to determine the three-dimensional structure of macromolecules, e.g., proteins, nucleic acids, etc.
- This technique requires the growth of crystals of the target macromolecule.
- Crystal growth of macromolecules is dependent on several environmental conditions, e.g., temperature, pH, salt, and ionic strength.
- Growing crystals of macromolecules requires identifying the specific environmental conditions that will promote crystallization for any given macromolecule.
- It is insufficient to find conditions that result in any type of crystal growth; rather, the objective is to determine those conditions that yield well-diffracting crystals, i.e., crystal configurations that provide the resolution needed to make the data useful.
- An image may be periodically generated for each sample and provided to a technician, who need not be geographically co-located with the sample, to analyze the image and evaluate crystal growth.
- Automated image evaluation techniques can also be used to analyze the image and evaluate the presence of crystal growth and increase system throughput.
- Current image analysis techniques do not always extract sufficient information from the sample image to accurately evaluate crystal growth. Important information learned from analyzing the image is not automatically exploited, or used for further analysis, to facilitate a user's evaluation of the image. Additionally, in current systems, the results of analyzing the image are not presented in a way that facilitates easy interpretation and efficient decision making.
- The invention comprises a method of evaluating crystal growth in a crystal growth system, comprising: receiving a first image of a sample, said first image generated by an imaging system using a first set of imaging parameters; analyzing information depicted in said first image to determine the contents of said sample; determining whether to generate another image of said sample based on the contents of said sample; providing information to said imaging system to generate a second image of the sample using a second set of imaging parameters, wherein said second set of imaging parameters comprises at least one imaging parameter that is different from an imaging parameter in said first set of imaging parameters; receiving said second image of said sample; and analyzing information depicted in said second image to determine the contents of said sample.
- The different imaging parameter included in the method can be depth of field, illumination brightness level, focus, the area imaged, the center location of the area imaged, illumination source type, magnification, polarization, and/or illumination source position.
- Analyzing said first image comprises determining a region of interest in said first image, wherein said information is used to adjust said second set of imaging parameters so that the imaging system generates a zoomed-in second image of said region of interest.
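The region-of-interest re-imaging step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function and parameter names (`plan_second_image`, `"center"`, `"area"`, `"magnification"`) are hypothetical, and the simple nonzero-pixel bounding box stands in for whatever analysis the system actually performs:

```python
def find_region_of_interest(image):
    """Return the bounding box (row0, col0, row1, col1) of non-background
    (nonzero) pixels, or None if the image contains no features."""
    coords = [(r, c) for r, row in enumerate(image)
              for c, v in enumerate(row) if v > 0]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows) + 1, max(cols) + 1)

def plan_second_image(first_image, base_params):
    """Analyze the first image; if a region of interest is found, return a
    second parameter set that re-centers and zooms in on that region."""
    roi = find_region_of_interest(first_image)
    if roi is None:
        return None  # nothing worth re-imaging
    r0, c0, r1, c1 = roi
    second = dict(base_params)
    second["center"] = ((r0 + r1) / 2, (c0 + c1) / 2)  # re-center on the ROI
    second["area"] = (r1 - r0, c1 - c0)                # shrink the imaged area
    second["magnification"] = base_params["magnification"] * 2  # zoom in
    return second
```

In a full system, the returned parameter set would be handed back to the imaging subsystem as the "second set of imaging parameters" for the follow-up exposure.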
- Analyzing information in the method of evaluating crystal growth in a crystal growth system comprises determining whether said first image depicts the presence of crystals; where said first image comprises pixels, said determining can further comprise classifying said pixels and comparing the number of pixels classified as crystals to a threshold value.
- A method of evaluating crystal growth in a crystal growth system comprises counting the number of said pixels depicting objects in the sample and evaluating said number using a threshold value.
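The pixel-count test above reduces to a simple comparison. A minimal sketch, assuming some upstream classifier has already assigned a label to each pixel (the label strings and the function name are hypothetical; the patent does not fix them):

```python
def crystals_present(pixel_classes, threshold):
    """Return True if the number of pixels classified as 'crystal'
    meets or exceeds the given threshold."""
    crystal_count = sum(1 for c in pixel_classes if c == "crystal")
    return crystal_count >= threshold
```

The same pattern applies to the object-pixel count: count pixels whose class is anything other than background, then compare against a threshold chosen for the sample type.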
- The method of analyzing crystal growth comprises: receiving a first image having pixels depicting crystal growth information of a sample; identifying a first set of pixels in said first image comprising a first region of interest; receiving a second image having pixels depicting crystal growth information of said sample; identifying a second set of pixels in said second image comprising a second region of interest; merging said first set of pixels and said second set of pixels to form a composite image; and analyzing said composite image to identify crystal growth information of said sample.
- said first image is generated by an imaging system using a first set of imaging parameters
- said second image is generated by said imaging system using a second set of imaging parameters
- said second set of imaging parameters comprises at least one imaging parameter that is different from the imaging parameters in said first set of imaging parameters
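One simple way to realize the merging step above is to overlay the second image's region-of-interest pixels onto the first image, keeping first-image pixels everywhere else. This is a hypothetical sketch (the patent does not specify the merge rule), assuming both images are pixel-aligned grids of equal size:

```python
def merge_composite(first, second, roi):
    """Form a composite image by copying pixels of `second` that fall
    inside roi = (row0, col0, row1, col1) onto a copy of `first`."""
    r0, c0, r1, c1 = roi
    composite = [row[:] for row in first]  # copy rows so inputs are untouched
    for r in range(r0, r1):
        for c in range(c0, c1):
            composite[r][c] = second[r][c]
    return composite
```

In practice the two images would come from different imaging-parameter sets (e.g., different focus or polarization), so the composite combines the most informative pixels from each exposure before analysis.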
- A method of analyzing crystal growth information comprises: receiving a first image comprising a set of pixels that depict the contents of a sample; determining information for each pixel in said set of pixels, wherein said information comprises a classification describing the type of sample content depicted by each pixel and a color code associated with each classification; generating a second image based on said information and said set of pixels; displaying said second image; and visually analyzing said second image to determine crystal growth information of the sample.
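The color-coding step amounts to mapping each pixel's classification to a display color. A minimal sketch; the palette, class labels, and RGB choices here are illustrative assumptions, not values specified by the patent:

```python
# Hypothetical palette: one display color per pixel classification.
PALETTE = {
    "crystal": (0, 255, 0),      # green for crystal pixels
    "precipitate": (255, 0, 0),  # red for precipitate pixels
    "clear": (0, 0, 0),          # black for clear/empty drop
}

def render_classified(classes):
    """Map a 2-D grid of per-pixel class labels to an RGB image."""
    return [[PALETTE[label] for label in row] for row in classes]
```

Displaying the rendered image lets a technician see at a glance which regions of the drop were classified as crystal versus precipitate.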
- The invention comprises a system for detecting crystal growth information, comprising: an imaging subsystem with means for generating an image of a sample, wherein said image comprises pixels that depict the content of said sample; an image analyzer subsystem coupled to said imaging subsystem, with means for receiving said image, means for classifying the content of said sample using said pixels, and means for determining whether said sample should be re-imaged based on said classifying; and a scheduler subsystem coupled to said image analyzer subsystem, with means for causing said imaging subsystem to re-image said sample.
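The scheduler subsystem's role — collecting re-imaging requests from the analyzer and driving the imaging subsystem to service them — can be sketched as a small queue. All names here (`Scheduler`, `request_reimage`, the `imager` callable) are hypothetical; the patent claims the arrangement in means-plus-function terms without prescribing an implementation:

```python
class Scheduler:
    """Hypothetical scheduler subsystem: queues re-imaging requests raised
    by the image analyzer and drives the imaging subsystem to service them."""

    def __init__(self, imager):
        self.imager = imager  # callable (sample_id, params) -> image
        self.queue = []

    def request_reimage(self, sample_id, params):
        """Called by the analyzer when a sample should be re-imaged."""
        self.queue.append((sample_id, params))

    def run(self):
        """Service all pending requests in FIFO order; return the images."""
        images = []
        while self.queue:
            sample_id, params = self.queue.pop(0)
            images.append(self.imager(sample_id, params))
        return images
```

A real scheduler would also coordinate plate transport and incubation timing; this sketch only shows the analyzer-to-imager feedback path.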
- The invention comprises a computer-readable medium containing instructions for analyzing samples in a crystal growth system by: receiving a first image of a sample, said first image generated by an imaging system using a first set of imaging parameters; analyzing information depicted in said first image to determine the contents of said sample; determining whether to generate another image of said sample based on the contents of said sample; providing information to said imaging system to generate a second image of the sample using a second set of imaging parameters, wherein said second set of imaging parameters comprises at least one imaging parameter that is different from an imaging parameter in said first set of imaging parameters; receiving said second image of said sample; and analyzing information depicted in said second image to determine the contents of said sample.
- FIG. 1A is a high-level block diagram of an imaging system according to the invention.
- FIG. 1B is a high-level block diagram of another imaging system according to the invention.
- FIG. 2 is a perspective view of an imaging system according to the invention.
- FIG. 3 is a perspective view of the imaging system shown in FIG. 2, viewed from a different angle.
- FIG. 4 is a perspective view of the imaging system shown in FIG. 2, viewed from yet a different angle.
- FIG. 5 is a plan front view of the imaging system shown in FIG. 2.
- FIG. 6 is a plan, right side view of the imaging system shown in FIG. 2.
- FIGS. 7A and 7B are perspective views from different angles of a lens system as can be used with the imaging system shown in FIG. 2.
- FIG. 8 is a perspective view from below of a photo-filter carriage that can be used with the imaging system shown in FIG. 2.
- FIG. 9 is a perspective view of certain components as assembled in the imaging system shown in FIG. 2.
- FIG. 10 is a plan front view of certain components as assembled in the imaging system shown in FIG. 2.
- FIG. 11 is a plan, right side view of the components shown in FIG. 10.
- FIG. 12 is a perspective view of a light source as can be used with the imaging system shown in FIG. 2.
- FIG. 13 is a perspective view of a sample mount with the light source shown in FIG. 12, viewed from a different angle.
- FIG. 14A is a plan top view of the light source shown in FIG. 12.
- FIG. 14B is a cross-sectional view along the plane A-A of the light source shown in FIG. 14A.
- FIG. 15 is an exploded, perspective view of certain components of the sample mount and the light source shown in FIG. 13.
- FIG. 16 is a functional block diagram of an illumination duration control circuit as can be used with the light source shown in FIG. 12.
- FIG. 17 is a functional block diagram of an automated sample analysis system in which the imaging system according to the invention can be used.
- FIG. 18 is a block diagram of an imaging and analysis system.
- FIG. 19 is a block diagram of a computer that includes a Crystal Resolve analysis module, according to one aspect of the invention.
- FIG. 20A is a block diagram of an analysis system process, according to one embodiment of the invention.
- FIG. 20B is a block diagram of an analysis system process, according to one embodiment of the invention.
- FIG. 21 is a flow diagram of an imaging analysis process, according to one embodiment of the invention.
- FIG. 22 is a flow diagram of an imaging analysis and control process, according to one embodiment of the invention.
- FIG. 23 is a flow diagram of an analysis process, according to one embodiment of the invention.
- The imaging and analysis system and methods disclosed here relate to embodiments of an automated sample analysis system having an imaging system described in the related U.S. provisional patent application No. 60/444,519, entitled “AUTOMATED SAMPLE ANALYSIS SYSTEM AND METHOD.”
- An imaging system that can provide images of samples for analysis, in response to control information, is described hereinbelow, followed by a description of a system and processes for analyzing the images.
- The terms “image”, “subimage”, and “pixels” as used herein do not necessarily mean an optical image, subimage, or pixels as usually displayed or printed, but rather include digital or other representations of such an image, subimage, or pixels.
- “Sample” refers to any type of suitable sample, for example, drops, droplets, the contents of a well, the contents of a capillary, a sample in a gel, or any other arrangement containing a sample or material.
- FIG. 1A is a high-level block diagram of an imaging system 100 .
- The imaging system 100 has an assembly 105 that is controlled by controllers and logic 110.
- The assembly 105 includes a stage 115 that holds and transports target samples to be imaged by an image capture device 120.
- The imaging system 100 employs an optics assembly 125 to enhance the view of the target samples before the image capture device 120 obtains the images of the samples.
- An illuminator 130 is configured as part of the assembly 105 to direct light at the samples held in the stage 115.
- The assembly 105 also includes a translator 135 that provides the structural support members and actuators to move any combination of the stage 115, image capture device 120, optics 125, or illuminator 130.
- The translator 135 may be configured to move the combination of components in one, two, or three dimensions.
- The stage 115 remains stationary while the translator 135 moves the image capture device 120 and optics 125 to a desired well position in a sample plate held by the stage 115.
- The translator 135 moves the stage 115 along a first axis, and the image capture device 120 and optics 125 along a second axis that is substantially perpendicular to the first axis.
- The controllers and logic 110 of the imaging system 100 provide instructions to and coordinate the activities of the components of the assembly 105.
- The controllers may include a microprocessor, controller, microcontroller, or any other computing device.
- The logic includes the instructions that cause the controller to perform the tasks or processing described here.
- FIG. 1B is a high-level block diagram of an imaging system 150.
- The imaging system 150 includes an assembly 155 in communication with controllers and logic 160.
- The assembly 155 may also be in communication with a data storage device 190, which itself may be configured for communication with the controllers and logic 160.
- The controllers and logic 160 control and coordinate the activities of the components of the assembly 155.
- The assembly 155 includes a sample plate mount 165 suitably configured to receive micro-titer plates of various configurations and sizes.
- The sample plate mount 165 can be configured to receive any sample matrix that carries samples, regardless of whether the samples are stored in individual sample wells, rest on the surface of the sample matrix (e.g., as droplets), or are embedded in the sample matrix.
- A source of flash lighting 180 is arranged to direct light bursts at the samples stored in the micro-titer plate carried by the sample plate mount 165. An inventive system and method of providing the flash lighting 180 will be discussed with reference to FIG. 16.
- The assembly 155 includes a compound lens 175 that cooperates with a digital camera 170 to acquire images of the samples in the sample plate.
- The compound lens 175 may consist, for example, of an objective lens, a zoom lens, and additional optics chosen to provide the digital camera 170 with the desired image from the light from the samples.
- The compound lens 175 may be motorized (i.e., provided with one or more actuators) so that the controllers and logic 160 can automatically focus the scene, zoom in on the scene, and set the aperture.
- The assembly 155 includes an x-y translator that moves either the sample plate mount 165 or the compound lens 175, or both.
- The x-y translator moves both the digital camera 170 and the compound lens 175.
- The x-y translator 185 is configured to move the sample plate mount 165 in two axes, e.g., x and y coordinates.
- The x-y translator 185 moves the compound lens 175 in two axes, while the sample plate mount 165 remains stationary.
- The x-y translator consists of multiple, separate actuators that move the sample mount 165 or the compound lens 175 independently of one another.
- The assembly 155, the controllers and logic 160, and the data storage 190 are depicted as separate components for schematic purposes only. That is, in some embodiments of the imaging system 150 it is advantageous, for example, to integrate the data storage device 190 into the assembly 155 and to include the controllers and logic 160 as part of one or more of the components shown as being part of the assembly 155.
- The sample mount 165, digital camera 170, compound lens 175, flash lighting 180, and x-y translator 185 need not all be configured as part of a single assembly 155 as shown.
- FIGS. 2-16 depict a specific embodiment of the imaging system.
- The following description of this specific embodiment should not be taken to limit the full scope of the inventive imaging system.
- The imaging system 200 includes a sample plate mount 210 that receives a sample plate 212.
- An x-translator having an actuator 218 (see FIG. 4) is coupled to the sample plate mount 210 to move the sample plate mount 210 into position above a light source 216 and below a lens assembly 230.
- A digital camera 214 is coupled to the lens assembly 230 to capture images of the wells in the sample plate 212.
- A y-translator having an actuator 220 is coupled to the lens assembly 230 to move the lens assembly 230 into position over a desired well of the sample plate 212.
- The digital camera 214, lens assembly 230, sample plate mount 210, light source 216, x-translator 218, and y-translator 220 are mounted on a platform 240 (see FIG. 2).
- The platform 240 generally consists of several structural members, brackets, or walls, e.g., base 242, side wall 244, front wall 250, bracket 252, bracket 246, post 248, and support member 254.
- The light source 216 can be fastened to the base 242.
- Rails 256 and 258 , which support the lens assembly 230 , are fastened to the wall 250 of the platform 240 and to the support member 254 .
- the sample plate mount 210 is supported by a rail 262 and an outport guide 253 of the support member 254 .
- the rail 262 is supported through attachment to the side wall 244 and the post 248 .
- the platform 240 may be constructed of any of several suitable materials, including but not limited to, aluminum, steel, or plastics. Because in some applications it is critical to keep vibration of the platform 240 to a minimum, materials that provide rigidity to the platform 240 are preferred in such applications.
- the rails 256 , 258 , and 262 are preferably manufactured with very smooth surfaces so that they carry the lens assembly 230 or the sample plate mount 210 smoothly, thereby avoiding vibrations.
- the lens assembly 230 may be supported by coupling the linear plain bearings 264 and 266 to the rails 256 and 258 . A similar coupling using a “bushing” 267 (see FIG. 10) supports the sample plate mount 210 on the rail 262 .
- Bearings 264 , 266 , and 267 are chosen to provide smooth bearing surfaces for smooth translation of the load, e.g., the lens assembly 230 or the sample plate mount 210 .
- the sample plate mount 210 may be constructed from any rigid material, e.g., steel, aluminum, or plastics. Preferably the sample plate mount 210 is configured to accommodate, either directly or through the use of adapters, various standard sizes of micro-titer plates. Micro-titer plates that may be used with the sample plate mount 210 include, but are not limited to, crystallography plates manufactured by Linbro, Douglas, Greiner, and Corning. As will be described further below, the sample plate mount 210 is coupled to an actuator 218 for moving the sample plate mount 210 in one axis.
- the imaging system 200 includes two independent translators. Typically, the sample plate mount 210 and the lens assembly 230 move on a plane that is substantially parallel to a plane defined by the sample plate 212 carried by the sample plate mount 210 . In one embodiment, the controllers and logic 110 or 160 can control x-, y-translators to position the sample plate mount 210 and the lens assembly 230 at the coordinates of a specific well of the sample plate 212 .
- An x-axis translator for moving the sample plate mount 210 consists of an actuator 218 (see FIG. 4) that rotates a threaded rod 219 (or “lead screw”) about its axis in clockwise or counter-clockwise directions.
- the actuator 218 is coupled to the rod 219 via a belt (not shown) and pulleys 221 and 221 ′.
- the sample plate mount 210 is fastened to a “bushing” 267 (see FIG. 10) that rides on the rail 262 .
- the sample plate mount 210 is also supported by the outport guide 253 (see FIGS. 6 and 11) of the support member 254 .
- the “bushing” 267 is additionally coupled in a known manner to the rod 219 .
- the actuator 218 turns in one direction, its power is transmitted via the belt and pulleys 221 and 221 ′ to the rod 219 , which then moves the “bushing” 267 and, thereby, moves the sample plate mount 210 in a linear direction.
- a y-axis translator for moving the lens assembly 230 consists of an actuator 220 (see FIG. 3) that rotates a threaded rod 260 about its axis in clockwise or counter-clockwise directions.
- the actuator 220 is coupled to the rod 260 through a slotted disc coupling (not shown).
- the lens assembly 230 is coupled to bearings 264 and 266 that respectively ride on rails 256 and 258 .
- the bearings 264 and 266 are coupled to the rod 260 through plate 255 and the bracket 257 (see FIG. 6) in a known manner.
- the actuator 220 turns in one direction, its power is transmitted via the slotted disc coupling to the rod 260 , which then moves the bearings 264 and 266 and, thereby, moves the lens assembly in a linear direction.
- the actuators 218 and 220 may be direct current gear motors or 3-phase servo motors, for example.
- the type of motors employed as the actuators 218 or 220 will depend on, among other things, the weight of the sample plate mount 210 plus sample plate 212 or the lens assembly 230 and the digital camera 214 . Another factor in determining the type of motor is the desired speed.
- actuators 218 and 220 having a positioning precision of 10-microns are used. Suitable motors may be obtained from PITMANN® of Harleysville, Pa.
- each translator mechanism independently translates along an axis of motion each of the sample plate mount 210 and the lens assembly 230 .
- the imaging system 200 may be configured so that an x-y translator (or set of x-, y-translators) moves the lens assembly in the x-y coordinate area, while the sample plate mount 210 remains stationary over the light source 216 .
- the x-, y-translators employ optical sensors 285 and 287 (see FIG. 5) to sense the start or end positions (“home positions”) of the lens assembly 230 or the sample plate mount 210 .
- the imaging system 200 may also include a z-axis translator (not shown) to lift or lower the sample plate mount 210 , lens assembly 230 , or light source 216 .
- the z-axis translator may consist of, for example, an actuator, a lead screw, one or more rails, and appropriate bearings and fasteners.
- the actuators 218 and 220 may be governed by a controller (not shown). Suitable controllers may be obtained from J R Kerr Automation Engineering of Flagstaff, Arizona. The controller may be configured to interpret high level commands from a computing device. In one embodiment, when a specific axis is addressed, the controller causes the actuator 220 , for example, to move and keeps count of the travel distance and final location. The controller can be programmed to move the actuator 220 at varying speed, torque, and acceleration.
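- As a sketch of the high-level positioning commands described above, the hypothetical controller below converts a well index into a linear coordinate and keeps count of the travel distance and final location. The well pitch, home offset, and all names are illustrative assumptions, not values from this disclosure.

```python
# Hypothetical sketch of high-level axis positioning; all constants are
# illustrative assumptions (a 96-well plate has 9 mm well spacing).
WELL_PITCH_MM = 9.0      # center-to-center spacing of wells (assumed)
HOME_OFFSET_MM = 12.5    # distance from home sensor to well A1 (assumed)

class AxisController:
    """Tracks the position of one axis and issues relative moves."""
    def __init__(self):
        self.position_mm = 0.0   # assume the axis starts at its home position

    def move_to_well(self, well_index: int) -> float:
        """Move the axis so the given well is on the imaging axis.
        Returns the signed distance traveled, in mm."""
        target = HOME_OFFSET_MM + well_index * WELL_PITCH_MM
        travel = target - self.position_mm
        # a real controller would command the actuator here, with
        # configurable speed, torque, and acceleration
        self.position_mm = target
        return travel

x_axis = AxisController()
print(x_axis.move_to_well(0))   # 12.5 mm from home to well column 0
print(x_axis.move_to_well(3))   # 27.0 mm further (3 wells * 9.0 mm)
```

- A real controller would also enforce travel limits using the home sensors described above; that bookkeeping is omitted here.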
- the image capture device can be a film camera, a digital camera, a CMOS camera, a charge coupled device (CCD), and the like, or some other apparatus for capturing an image of an object.
- the embodiments of the imaging system 200 described here employ a digital camera 214 .
- a suitable digital camera 214 is, for example, a CMOS digital camera. However, it should be apparent that several digital photography devices could also be employed.
- the CMOS camera 214 is preferred because it provides random access to the image data and is relatively low cost. In conventional imaging systems for crystallography, a CMOS camera is typically not used because in those systems the level of light is insufficient for this type of camera. In contrast, the imaging system 200 is configured to provide the level of light necessary to allow use of a CMOS camera.
- the digital camera 214 can be a CMOS camera having a pixel resolution of 1280×1024 pixels, a Bayer color filter, a pixel size of 7.5×7.5 microns, and a data interface governed by the IEEE 1394 standard (commonly known as “Firewire”).
- the digital camera 214 may be fully digital and not require a frame grabber.
- the digital camera 214 may also have a centered pixel area, e.g., a 1024×1024 or 800×600 pixel subset of the array, which enhances image quality because the edges of the array, where optical distortions increase, are avoided.
- the digital camera 214 is connected separately to a host computer (not shown) via a Firewire data interface. This allows for rapid transfer of large amounts of image data, e.g., five images per second.
- One embodiment of the lens assembly 230 includes an objective lens 231 , a zoom lens 233 , and an adapter 235 . These optical components are chosen to provide suitable field of view, magnification, and image quality.
- the objective lens 231 , zoom lens 233 , and adapter 235 may be purchased from, for example, Navitar Inc. of Rochester, N.Y.
- the zoom lens 233 may be the “12 ⁇ UltraZoom” zoom lens manufactured by Navitar.
- the zoom lens 233 may provide a 12:1 zoom factor, a focus range of about 12-mm, and an aperture of about 0.14.
- the zoom lens 233 preferably includes adapters for mounting the objective lens 231 .
- the zoom lens 233 may have actuators 233 A, 233 B, and 233 C for providing, respectively, automatic aperture adjustment, autozoom, and autofocus functionality.
- actuators 233 B and 233 C have gear reductions of 262:1.
- the gear reduction ratio is chosen to suit the particular application. For example, a 5752:1 gear ratio for the focus actuator 233 C may be too slow for some applications of the imaging system 200 .
- the actuators 233 A, 233 B, and 233 C may be obtained from Navitar or from MicroMo Electronics, Inc. of Clearwater, Florida.
- the objective lens 231 may be, for example, a 5 ⁇ Mitutoyo Infinity Corrected Long Working Distance Microscope Objective (model M Plan Apo 5) microscope accessory.
- the objective lens 231 is coupled to the zoom lens 233 . Since the light source 216 delivers sufficient light to the sample plate 212 , the lens assembly 230 is configured to allow for setting a small aperture in order to increase the depth of field.
- the objective lens 231 preferably provides a working distance that allows adequate room beneath the lens assembly 230 to manipulate a sample plate 212 and provide a photo-filter carriage 237 in the image path. In one embodiment, the working distance of the objective lens 231 is about 34-mm.
- the adapter 235 serves to allow use of the digital camera 214 .
- the adapter 235 may be, for example, a 1 ⁇ Adapter model number 1-6015 sold by Navitar.
- different combinations of objective lenses 231 and adapters 235 may be used, e.g., a 2 ⁇ Adapter and 2 ⁇ Objective combination.
- the combination of 1 ⁇ Adapter and 5 ⁇ Objective provides a suitable image for most applications of the imaging system 200 .
- it is desirable to use a 0.67 ⁇ Adapter 235 with a 10 ⁇ Objective 231 for example, to provide a higher image resolution.
- the optical components of the lens assembly 230 can be provided with actuators for remote and automatic control.
- controllers and control logic can control the actuators 233 A, 233 B, 233 C, and 233 D.
- the actuators may be, for example, dc motors.
- the actuators 233 A, 233 B, 233 C, and 233 D are preferably provided with encoders to provide position information to the controllers.
- the actuators on the lens assembly 230 are 17-mm direct current motors with 100:1 gear reducers. These motors may be obtained from PITMANN® of Harleysville, Pennsylvania.
- the lens assembly 230 may also include a photo-filter carriage 237 that is configured to hold optical filters (not shown).
- the photo-filter carriage 237 can hold polarization plates or color light filtering plates.
- FIG. 8 illustrates one embodiment of a photo-filter carriage 237 that may be used with the imaging system 200 .
- the photo-filter carriage 237 includes a filter wheel 237 A for receiving one or more photo-filters (not shown) in openings 237 B.
- the photo-filters may be held in place in the filter wheel 237 A in a variety of ways. For example, in the embodiment illustrated in FIG. 8, caps 237 C in cooperation with suitable fasteners hold the photo-filters in place.
- the filter wheel 237 A may be coupled to an actuator 233 D for remote and automatic control of the filter wheel 237 A.
- the actuator 233 D and the filter wheel 237 A may be fastened, in a conventional manner, to a clamp 237 D that is coupled to, for example, the objective lens 231 or the zoom lens 233 (see FIGS. 1 and 9).
- a polarization filter is coupled to a filter wheel so that the polarization filter covers about 90 degrees of the wheel.
- the polarization filter can be rotated so that the applied polarization varies between zero and ninety degrees.
- the use of the polarization filter with a polarized light source can provide analysis of the effect of samples on polarized light. For example, when a polarized light source and the polarization filter are cross-polarized then minimal light should get to the objective lens 231 , unless the sample re-orients the polarized light, such as can happen when the light passes through crystals.
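- The cross-polarization behavior described above follows Malus's law, I = I₀·cos²θ: with the source and analyzer crossed at 90 degrees, essentially no light reaches the objective lens 231 , while a sample that rotates the polarization (such as a birefringent crystal) appears bright against the dark field. A small numeric sketch:

```python
import math

def transmitted_intensity(i0: float, theta_deg: float) -> float:
    """Malus's law: intensity through an analyzer at angle theta
    relative to the incoming light's polarization."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

print(transmitted_intensity(1.0, 0))              # parallel: full transmission
print(round(transmitted_intensity(1.0, 90), 12))  # crossed: 0.0 (dark field)
# A crystal that rotates the polarization by 30 degrees lets light through
# the crossed analyzer, so the crystal shows up bright:
print(round(transmitted_intensity(1.0, 90 - 30), 4))  # 0.25
```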
- the digital camera 214 in combination with the lens assembly 230 provides a broad depth of field to allow imaging of objects such as protein crystals at varying depths within a sample droplet stored in a sample well of a sample plate 212 .
- the lens assembly 230 has a 12:1 zoom lens and, in cooperation with the digital camera 214 , can provide a 1 micron optical resolution.
- the lens assembly 230 and the digital camera 214 may be integrated as a single assembly.
- FIG. 12 shows a perspective view of the light source 216 . Since the crystallization of substances is often highly sensitive to temperature changes, the light source 216 is preferably configured to minimize the amount of heat transferred to the sample plate 212 , e.g., by isolating and removing heat generated by the electronics 1408 and illuminators 1402 (see FIG. 14B).
- the light source 216 includes a housing 1202 adapted to store one or more illuminators 1402 (see FIGS. 14B and 15), cooling elements 1404 , heat reflecting glass 1406 , light diffuser plate 1206 , and corresponding electronics 1405 and 1408 .
- the housing 1202 consists of a plurality of walls that serve as structural support for the internal components and that substantially isolate the internal components from the external environment.
- the housing 1202 can be constructed of a variety of materials including, but not limited to, stainless steel, aluminum, and hard plastics. A material with a low coefficient of heat transfer is preferred so as to substantially keep heat generated within the housing 1202 from reaching the outside through the walls of the housing 1202 .
- cooling elements 1404 are provided.
- one or more of the internal surfaces of the walls of the housing 1202 may be coated with a suitable material that absorbs or reflects various types of radiation and prevents them from reaching the outside of the housing 1202 .
- the top wall 1204 A of the housing 1202 has an opening to receive and support a light diffuser plate 1206 .
- the plate 1206 serves to diffuse light from the illuminators 1402 onto the sample plate 212 .
- the plate 1206 may be, for example, a sheet of translucent plastic.
- a heat reflecting glass (“hot mirror”) 1406 (see FIG. 14B) is provided inside the housing 1202 , adjacent to and below the plate 1206 .
- the heat reflecting glass 1406 prevents most infra-red energy from exiting the housing 1202 .
- the wall 1204 B of the housing 1202 may be provided with a plurality of orifices 1208 that allow a cooling element 1404 , such as a fan, to draw air into the housing 1202 for cooling the internal components.
- a wall 1204 C (see FIG. 14B) of the housing 1202 can be fitted with an opening 1410 for receiving a duct that guides forced air out of the housing 1202 .
- a wall 1204 D (see FIG. 13) of the housing 1202 can be fitted with a power plug 1208 and a communications port 1302 .
- the housing 1202 is preferably adapted to isolate an operator of the imaging system 200 from high voltages that may be used to fire the illuminators 1402 .
- the housing 1202 may be configured in a variety of ways not limited to that detailed above.
- the ventilation openings 1208 on wall 1204 B may be replaced by one or more fans built into the wall 1204 B or the wall 1204 E.
- the ventilation openings 1208 may be located on the bottom wall (not shown) of the housing 1202 , for example.
- the light source 216 includes one or more illuminators 1402 that generate light rays.
- the illuminators 1402 may be of various types, for example, incandescent bulbs, light emitting diodes, or fluorescent tubes including, but not limited to, mercury- or neon-based fluorescent tubes.
- the illuminators 1402 are two xenon tubes. Xenon tubes are well known in the relevant technology and are readily available.
- the xenon tubes 1402 can include borosilicate glass that absorbs ultraviolet radiation. Xenon tubes are preferred because they produce sufficient light to allow use of a CMOS camera 214 in the imaging system 200 . Xenon tubes are also preferred since they provide a broad spectrum of light rays, which enables use of color to enhance detection of crystal growth in the wells of the sample plate 212 .
- the actual dimensions of the illuminators 1402 are chosen to suit the specific application.
- the xenon tubes 1402 are long enough to cover one dimension of the sample plate 212 so that it is not necessary to move the light source 216 when the lens assembly 230 or sample plate mount 210 are repositioned.
- the illuminators 1402 may be supported on a board 1405 , which may also support electronics for control of the illuminators 1402 .
- two illuminators 1402 are positioned to provide different locations of the illumination source, e.g., both on-axis and off-axis lighting of the wells in the sample plate 212 .
- the imaging axis of the lens assembly refers to the principal axis of the lens assembly.
- first and second xenon tubes 1402 can be positioned, respectively, a first and second distance from the imaging axis of the lens assembly 230 .
- the first and second distances are substantially equal in length, and the first xenon tube is positioned opposite the imaging axis from the second xenon tube.
- the xenon tubes 1402 are mounted about an inch on either side of the area directly under the lens assembly 230 .
- This configuration allows the use of an indirect lighting effect when only one xenon tube is fired. That is, when two xenon tubes are positioned off the imaging axis, the controllers and logic 110 or 160 can control the tubes to provide on-axis or off-axis illumination of the sample plate 212 .
- One xenon tube can be fired to provide off-axis illumination of the sample plate 212 .
- off-axis illumination is preferred because it produces shadows on small objects in a sample droplet stored in a well of the sample plate 212 . The shadows caused by off-axis lighting enhance the ability of the controllers and logic 110 or 160 , or an operator, to detect objects in the sample.
- the controllers and logic 160 control the assembly 155 to capture two images of a droplet in a well plate of the sample plate 212 .
- the imaging system 150 captures one image with the light source 216 lighting the sample with a first xenon tube.
- the imaging system 150 captures a second image with the light source 216 lighting the sample with the second xenon tube.
- the controllers and logic 160 can then combine the data from both images and perform an analysis based on the combined data. This results in enhanced characterization of the sample since the combination of the images typically provides more information about crystallization of the sample than a single image acquired with standard back lighting of the scene.
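- The disclosure does not specify how the two images are combined, so the following is only an illustrative assumption with synthetic data: a per-pixel minimum preserves the shadow cast under either lighting direction, so edges that only show up under one off-axis illumination survive into the composite.

```python
import numpy as np

# Two 8-bit grayscale images of the same droplet, lit from opposite sides.
# Synthetic example: an object casts a shadow on a different side in each.
left_lit  = np.full((4, 4), 200, dtype=np.uint8)
right_lit = np.full((4, 4), 200, dtype=np.uint8)
left_lit[1:3, 0] = 60    # shadow on the left side of the object
right_lit[1:3, 3] = 60   # shadow on the right side of the object

# Per-pixel minimum keeps the shadow from either exposure.
composite = np.minimum(left_lit, right_lit)
shadow_mask = composite < 128   # simple threshold to flag shadowed pixels
print(int(shadow_mask.sum()))   # 4 shadowed pixels, 2 from each image
```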
- a source filter 270 may be inserted in a filter slot 272 so that the filter 270 is interposed between the light source 216 and the sample plate 212 .
- the various filters 270 may be inserted and removed from the filter slot 272 by a plate handler.
- the filter 270 may be automatically removed, or exchanged with another filter, by the imaging system 200 .
- the source filter 270 may be any type of filter, such as a wavelength specific filter (e.g. red, blue, yellow, etc.) or a polarization filter.
- the light source 216 includes one or more illuminators 1402 (e.g., fluorescent tubes) adapted to provide flash lighting. That is, the illuminators 1402 are controlled to illuminate the sample plate 212 only momentarily, as the digital camera 214 captures an image of a well in the sample plate 212 .
- This arrangement provides benefits over known devices in which illuminators remain in the on-position throughout the entire time that the sample plate 212 is handled by an imaging system.
- the imaging system 200 since the illuminators 1402 are turned on for only a fraction of a second per image, very little heat radiation is transferred to the wells of the sample plate 212 .
- one benefit of this configuration is that the imaging system 200 can provide high illumination levels for the camera 214 while minimizing energy or radiation transfer to the samples in the sample plate 212 .
- An exemplary control circuit 1600 that provides controlled flash lighting is described below with reference to FIG. 16.
- FIG. 16 is a functional block diagram of an illumination duration (“flash”) control circuit 1600 for an illuminator 1402 .
- the illuminator 1402 can be, for example, a xenon tube having a length greater than the maximum width of the sample plate 212 to be used in the imaging system 100 , 150 , or 200 . By having such a dimension, the illuminator 1402 can be located underneath and along one axis of the sample plate 212 to illuminate all the wells in one row or column of the sample plate 212 without repositioning the illuminator 1402 .
- a first end of the illuminator 1402 is connected to a first capacitor 1602 and a first resistor 1604 .
- the opposite end of the first resistor 1604 is connected to a power supply 1606 .
- the power supply 1606 may be controlled by a dedicated RS232 line, for example.
- the opposite or second end of the first capacitor 1602 that is not connected to the illuminator 1402 is connected to ground or a voltage common.
- the second end of the illuminator 1402 is connected to the anode of a first silicon controlled rectifier (“SCR”) 1607 and a first terminal of a second capacitor 1608 , respectively.
- An SCR is a solid state switching device that can provide fast, variable proportional control of electric power.
- a resistor 1620 is connected between the first terminal of the second capacitor and the cathode of a second SCR 1610 .
- the second terminal of the second capacitor 1608 is connected to an anode of the second SCR 1610 .
- the cathode of the first SCR 1607 is connected to the ground or voltage common potential.
- the cathode of the second SCR 1610 is connected to the cathode of the first SCR 1607 and is similarly connected to ground or the voltage common potential.
- the anode of the second SCR 1610 is also connected to a second resistor 1614 that connects the anode of the second SCR 1610 to the power supply 1606 .
- a trigger 1612 of the illuminator 1402 is connected to the gate of the first SCR 1607 so that both can be triggered simultaneously. This common connection controls the trigger 1612 of the illuminator 1402 and the start of illumination.
- the gate of the second SCR 1610 controls a stop or end of illumination.
- the duration of illumination provided by the illuminator 1402 can be controlled as follows. Initially, the first and second SCRs 1607 and 1610 , respectively, are not conducting. The first capacitor 1602 is charged up to the level of the voltage of the power supply 1606 using the first resistor 1604 . The power supply 1606 can, for example, charge the first capacitor to 300 volts or more.
- the size of the first capacitor 1602 relates to the amount of energy that can be transferred to the illuminator 1402 .
- the illuminator 1402 provides an illumination based in part on the amount of energy provided by the first capacitor 1602 .
- the first capacitor 1602 can be one capacitor or a bank of capacitors.
- the first capacitor 1602 can be, for example, a 600 μF capacitor.
- the sizes of the resistors 1620 and 1614 are determined in part by the desired voltage rise time on the second capacitor 1608 . Smaller resistors 1620 and 1614 allow the second capacitor 1608 to charge quickly. However, the second SCR 1610 can inadvertently trigger if the voltage impulse at its anode is too great. Thus, the values of the resistors 1620 and 1614 are typically chosen to allow the second capacitor 1608 to recharge before the next image flash trigger, but not so quickly as to inadvertently trigger conduction in the second SCR 1610 .
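- As a rough numeric check of the relationships described above: the energy delivered per flash is E = ½CV², and the commutation capacitor recharges through its resistor with time constant R·C. The capacitances and 300-volt charge level come from the text; the resistor value is an illustrative assumption, not a value from this disclosure.

```python
# Back-of-the-envelope sizing for the flash circuit described above.
C_MAIN = 600e-6     # first capacitor 1602, farads (from the text)
C_COMM = 20e-6      # second (commutation) capacitor 1608, farads
V_SUPPLY = 300.0    # charge voltage, volts (from the text)
R_CHARGE = 10e3     # assumed value for resistor 1614, ohms

# Energy available to the flash tube per discharge: E = 1/2 * C * V^2
energy_j = 0.5 * C_MAIN * V_SUPPLY ** 2
print(round(energy_j, 6))    # 27.0 joules per flash

# The commutation capacitor reaches ~99% of supply voltage after about
# five time constants; this bounds how soon the next flash can be
# terminated cleanly.
tau_s = R_CHARGE * C_COMM
print(round(5 * tau_s, 6))   # 1.0 second to recharge
```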
- the resistor 1620 provides an electrical path from the anode of the first SCR 1607 to ground or voltage common to allow the second capacitor 1608 to charge.
- the illuminator 1402 is ready to trigger once the first capacitor 1602 is charged.
- the second capacitor 1608 is charged by the power supply 1606 through the second resistor 1614 concurrent with the charging of the first capacitor 1602 .
- the second capacitor 1608 is chosen to be large enough to generate a reverse potential that shuts off the first SCR 1607 and thus terminates illumination by the illuminator 1402 .
- the second capacitor 1608 can be a single capacitor or can be a bank of capacitors.
- the second capacitor 1608 can be, for example, a 20 ⁇ F capacitor.
- the duration of illumination can be controlled.
- the illuminator 1402 initially illuminates when the trigger signal is provided to the control of the illuminator 1402 and the gate of the first SCR 1607 .
- the illuminator 1402 can include a triggering circuit that triggers the illuminator 1402 in response to a logic signal. If the illuminator 1402 does not include this circuit, an external triggering circuit can be included.
- the first SCR 1607 conducts in response to the trigger signal.
- the first SCR 1607 then continues to conduct even in the absence of a gate signal.
- the first SCR 1607 can be shut off by interrupting the current through the SCR or by reducing the voltage drop across the first SCR 1607 to below the forward voltage of the device.
- the second SCR 1610 is controlled by a stop signal generator 1616 to connect the second capacitor 1608 in parallel with the first SCR 1607 .
- the second capacitor 1608 is charged in opposite polarity to the voltage drop across the first SCR 1607 .
- the voltage from the second capacitor 1608 is placed in opposite polarity across the first SCR 1607 thereby shutting off the first SCR 1607 .
- the second end of the illuminator 1402 and the first terminal of the second capacitor 1608 are pulled to ground via the first SCR 1607 .
- the illuminator 1402 then illuminates in response to the current flowing through the illuminator 1402 .
- the second SCR 1610 controls turn-off of the illuminator 1402 .
- the second SCR 1610 begins to conduct when a stop signal is applied to the gate of the second SCR 1610 . This pulls the second terminal of the second capacitor 1608 to ground.
- the voltage across the second capacitor 1608 momentarily causes the voltage at the anode of the first SCR 1607 to be pushed below the ground or voltage common potential.
- a negative voltage at the anode of the first SCR 1607 results in a loss of current flowing through the first SCR 1607 , which results in shut down of the first SCR 1607 .
- the second capacitor 1608 discharges almost immediately.
- the illuminator 1402 shuts off when the first SCR 1607 turns off because there is no longer a current path through the illuminator 1402 .
- a microprocessor, controller, or microcontroller can be programmed to control the trigger 1612 and stop signal generator 1616 .
- the processor controls the trigger signal to initiate illumination with the illuminator 1402 .
- the processor then controls the stop signal to control termination of the illuminator 1402 .
- the processor can thus control the trigger and stop signals to control the duration of the illumination.
- the processor can control the duration of the illumination (a “flash”) in predetermined intervals or can control the duration of the illumination over a range of time. For example, the processor can control the duration of the flash in microsecond steps across an interval of approximately 20 ⁇ S-600 ⁇ S.
- the processor can control the lower range of the duration of the flash to be 0, 20, 40, 50, 75, 100, 150, 200, 250, 300, 350, 400, 450, 500, or 550 ⁇ S.
- the processor can control the upper range of the duration of the flash to be 40, 50, 75, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550 or 600 ⁇ S.
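- A minimal sketch of the duration control described above, clamping a requested flash time to the 20-600 μS interval and rounding to the microsecond step size. The range limits come from the text; the function name is hypothetical.

```python
FLASH_MIN_US = 20    # shortest supported flash, from the text
FLASH_MAX_US = 600   # longest supported flash, from the text

def quantize_flash_us(requested_us: float) -> int:
    """Clamp a requested flash duration to the supported range and
    round to the 1-microsecond step size described above."""
    clamped = min(max(requested_us, FLASH_MIN_US), FLASH_MAX_US)
    return round(clamped)

print(quantize_flash_us(5))       # below range -> 20
print(quantize_flash_us(137.6))   # rounded to 138
print(quantize_flash_us(10_000))  # above range -> 600
```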
- the digital camera 214 issues the signal to turn on the illuminator 1402 so that the “flash” will be in synchronization with the electronic shutter of the digital camera 214 .
- the power supply 1606 can be a controllable high voltage power supply.
- the microprocessor, controller, or microcontroller can also control the output voltage of the power supply 1606 to further control the illumination provided by the illuminator 1402 .
- the microprocessor can control the output voltage of the power supply 1606 to vary the illumination provided by the illuminator 1402 for the same illumination duration.
- the microprocessor can control the power supply 1606 to a lower output voltage to minimize the illumination.
- the microprocessor can control the power supply 1606 to a higher output voltage, thereby increasing the illumination.
- the microprocessor can control the output voltage of the power supply 1606 over a range of, for example, 180-300 volts.
- the illuminator 1402 may not consistently illuminate for voltages below 180 volts when the illuminator 1402 is a xenon flash tube.
- the microprocessor can control the output voltage of the power supply 1606 using a digital control word.
- the microcontroller can control the output voltage of the power supply 1606 in steps determined in part by the number of bits in the control word and the tunable range of the power supply 1606 .
- the microcontroller can, for example, provide a 10-bit control word, an 8-bit control word, a 6-bit control word, a 4-bit control word, or a 2-bit control word.
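- Assuming a linear mapping of the control word onto the 180-300 volt tunable range (the mapping itself is not specified in the text), the voltage step size follows directly from the word width:

```python
V_MIN, V_MAX = 180.0, 300.0   # tunable range of the power supply 1606

def word_to_volts(word: int, bits: int) -> float:
    """Map an n-bit control word onto the supply's tunable range,
    with word 0 -> V_MIN and the full-scale word -> V_MAX.
    The linear mapping is an illustrative assumption."""
    full_scale = (1 << bits) - 1
    if not 0 <= word <= full_scale:
        raise ValueError("control word out of range")
    return V_MIN + (V_MAX - V_MIN) * word / full_scale

print(word_to_volts(0, 8))                   # 180.0
print(word_to_volts(255, 8))                 # 300.0
print(round(word_to_volts(512, 10), 2))      # 240.06, mid-scale of 10 bits
```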
- the power supply 1606 output voltage can be continuously variable over a predetermined range.
- the microcontroller can control a level of illumination by controlling the illumination duration, the power supply 1606 output voltage, or a combination of the two.
- the microprocessor's ability to control the combination of the two permits a wider range of brightness outputs than if only one parameter were controllable.
- the microprocessor's ability to control both illumination duration and power supply 1606 output voltage is advantageous for different lens zoom conditions. When magnification is low, such as when the lens is zoomed out, a relatively small amount of light is required. When magnification is high, a relatively large amount of light is required to capture an image. Filters and varying apertures may also be used to adjust the amount of light from the light source.
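- A simplified model of why controlling both parameters widens the brightness range: stored energy scales with V², and, to first order, delivered light scales with the fraction of the discharge allowed before the stop signal. This is a rough approximation for illustration, not a xenon-tube physics model.

```python
# Relative flash output under the simplifying assumptions stated above.
def relative_output(volts: float, duration_us: float) -> float:
    v_norm = (volts / 300.0) ** 2   # stored energy ~ V^2, normalized to 300 V
    t_norm = duration_us / 600.0    # duration normalized to the 600 uS maximum
    return v_norm * t_norm

dim    = relative_output(180, 20)    # shortest flash at the lowest voltage
bright = relative_output(300, 600)   # longest flash at the highest voltage
print(round(bright / dim, 1))        # 83.3x range from the two knobs combined
# Duration alone gives 30x (600/20); voltage alone gives ~2.8x ((300/180)^2).
```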
- the imaging system 200 includes software modules that control and direct the lens assembly 230 to perform the following functions.
- the imaging system 200 is configured to automatically control the brightness of the image. For example, after the camera 214 captures an image of a well of the sample plate 212 , the software determines whether the brightness is within predetermined thresholds. If the brightness does not fall within the thresholds, the controllers and logic of the imaging system 200 iteratively adjust the illumination intensity of the illuminators 1402 to adjust the brightness of the images until the brightness falls within the thresholds. In some embodiments, the brightness of the image may be evaluated based on a predetermined region (or set of pixels) of the image captured.
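- The iterative adjustment described above can be sketched as a simple feedback loop; the thresholds, step factors, and camera stand-in below are assumptions for illustration, not values from this disclosure.

```python
# Sketch of the iterative brightness loop described above.
TARGET_LO, TARGET_HI = 100, 160   # acceptable mean brightness, 8-bit scale (assumed)

def auto_brightness(capture, intensity=0.5, max_iters=10):
    """Adjust illuminator intensity until the image brightness falls
    within the thresholds; returns the final brightness and intensity."""
    image_mean = capture(intensity)
    for _ in range(max_iters):
        if TARGET_LO <= image_mean <= TARGET_HI:
            break
        # proportional step: brighten if too dark, dim if too bright
        if image_mean < TARGET_LO:
            intensity = min(1.0, intensity * 1.25)
        else:
            intensity = max(0.0, intensity * 0.8)
        image_mean = capture(intensity)
    return image_mean, intensity

# Stand-in camera whose mean brightness is proportional to intensity:
result, level = auto_brightness(lambda i: 255 * i, intensity=0.2)
print(TARGET_LO <= result <= TARGET_HI)   # True
```

- In the real system the `capture` step would flash the illuminators 1402 and measure a predetermined region of the captured image, as described above.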
- the brightness of the illuminators 1402 may be adjusted when capturing a plurality of images of the same sample droplet.
- the controllers and logic 160 control the assembly 155 to capture two images of a droplet in a well plate of the sample plate 212 .
- the imaging system 150 captures one image with the light source 216 lighting the sample with a first brightness level.
- the imaging system 150 captures a second image with the light source 216 lighting the sample with a second brightness level.
- the controllers and logic 160 can then combine the data from both images and perform an analysis based on the combined data, which may result in enhanced characterization of the sample.
- the brightness used for the second image may be logically controlled based on analyzing the brightness of the first image, determining if a lighter or darker second image may result in enhanced characterization of the sample, and adjusting the light source 216 to light the sample accordingly.
- the imaging system 200 can also be configured with software to automatically focus the image.
- An exemplary autofocus routine is as follows. Once the lens assembly 230 is positioned over a sample of the sample plate 210 , the objective lens 231 is moved along its imaging axis to a predetermined starting position. The camera 214 then acquires an image of the sample and/or well at that focus position. In one embodiment, the software obtains a “focus score.” This may be done, for example, by examining the brightness values of a set of pixels (e.g., a 500 ⁇ 3 pixel area) in the captured image, applying a low pass filter, and computing the sum of the squares of the differences in brightness of adjacent pixels for the set of pixels. The position and focus score data points are stored in an array.
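- The focus-score computation described above can be sketched as follows; the 3-tap averaging low-pass filter and the one-dimensional pixel row are simplifying assumptions (the specification mentions, e.g., a 500×3 pixel area and an unspecified low pass filter):

```python
def focus_score(row):
    # simple 3-tap moving-average low-pass filter
    smoothed = [(row[i - 1] + row[i] + row[i + 1]) / 3.0
                for i in range(1, len(row) - 1)]
    # sum of the squares of the differences in brightness of adjacent pixels
    return sum((b - a) ** 2 for a, b in zip(smoothed, smoothed[1:]))

sharp = [0, 1, 0, 1, 0, 1, 0, 1]   # high local contrast: in-focus pattern
flat = [0.5] * 8                   # no local contrast: defocused pattern
assert focus_score(sharp) > focus_score(flat)
```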
- the objective lens 231 is moved to the next predetermined incremental position on its imaging axis, and the process of acquiring an image, computing the focus score, and storing the position and focus score values is repeated. This process continues until the objective lens 231 has been moved to all the predetermined or desired positions, e.g., until it reaches a predetermined end position by incrementally moving in a predetermined step size from the starting position.
- the step size depends at least in part upon a predetermined maximum number of images to be acquired during the autofocus routine.
- the software searches the lens position/focus score array to identify the lens position with the best focus score.
- the software then proceeds to compute the lens positions that are midway from the best focus score position to positions adjacent to it in the array. That is, the software examines the array of positions already imaged, finds the nearest position greater than the lens position associated with the best focus score, and calculates a “midpoint” position between them. A similar process is performed with regard to the nearest lens position that is less than the best focus score position.
- the software acquires images at the midpoint positions and obtains corresponding focus scores.
- the software once again evaluates the array to identify the image with the best focus score, now using a step size that is, for example, one-half of the initial step size. These tasks are repeated until, for example, a maximum number of images acquired during autofocus, or a minimum step size, has been reached.
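- The overall coarse-to-fine search can be sketched as follows, assuming a hypothetical `score_at` function that moves the lens, captures an image, and returns its focus score:

```python
def autofocus(score_at, start, end, step, min_step=0.25):
    scores = {}
    pos = start
    while pos <= end:                      # coarse pass over the full range
        scores[pos] = score_at(pos)
        pos += step
    while step / 2 >= min_step:            # refine around the best position
        step /= 2
        best = max(scores, key=scores.get)
        # score the midpoints on either side of the current best position
        for mid in (best - step, best + step):
            if start <= mid <= end and mid not in scores:
                scores[mid] = score_at(mid)
    return max(scores, key=scores.get)

# Peak focus near position 3.1; the search converges close to it.
best = autofocus(lambda p: -(p - 3.1) ** 2, start=0, end=8, step=2)
```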
- the imaging system 200 performs the processes of autofocusing and automatically adjusting the brightness, as described above, for each well sample of a sample plate 212 received by the imaging system 200 . After the desired brightness and focus are set, the imaging system 200 then captures an image and stores it in, for example, the data storage 190 . In one embodiment, the automatically determined brightness and focus are also stored for each sample. In another embodiment, the software of the imaging system 200 calculates and stores a value associated with the mean of the brightness and focus positions for the aggregate of well samples of the first plate. This value is then associated with each of the position/focus score data points in the array. Subsequent plates are examined using the mean brightness and focus as initial imaging values.
- the imaging system 200 may also include additional functionality related to automatically finding the edges of a droplet in a well of a sample plate 212 .
- the imaging system 200 finds the centroid of the droplet and moves the lens assembly 230 to the centroid.
- the imaging system 200 determines the magnification required to image substantially only that area corresponding to the droplet, adjusts the zoom, and acquires the image.
- the imaging system 200 may be configured to perform automatic adjustment of aperture.
- the imaging system 200 receives settings for either maximum image resolution or maximum depth of field.
- the imaging system 200 determines the corresponding aperture by, for example, looking up one or more tables having values correlating aperture with maximum resolution and/or maximum depth of field.
- magnification data may be part of these tables.
- the imaging system 200 may be configured to perform automatic zoom of a substance in a sample stored in a well of the sample plate 212 .
- the imaging system identifies a “crystal-like object” in the sample, calculates its centroid, moves the lens assembly 230 and digital camera 214 to the centroid, adjusts the zoom level, and captures an image of the “crystal-like object.”
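- Locating the centroid of a crystal-like object can be sketched as averaging the coordinates of classified pixels; the binary mask is a hypothetical input standing in for the classifier output described later:

```python
def centroid(mask):
    """mask: 2-D list of 0/1 flags marking crystal-like pixels."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    n = len(pts)
    # average x and y coordinates of the flagged pixels
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0]]
assert centroid(mask) == (1.5, 1.5)
```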
- the imaging system 200 can be configured to capture an image of a sample or a crystal-like object, perform image analysis of the image, adjust imaging parameters (e.g., focus, depth of field, aperture, zoom, illumination filtering, image filtering, brightness, etc.) and retake an image of the sample or crystal-like object.
- the imaging system 200 can perform this process iteratively until predetermined thresholds (e.g., contrast, edge detection, etc.) are met.
- the images captured in an iterative process can be either analyzed individually, or can be combined with other images and the resulting image analyzed.
- the imaging system 200 receives a sample plate 212 and, for each well sample, performs the following functions: automatic adjustment of brightness and aperture, autofocus, automatic detection of the sample droplet, and acquisition and storage of images.
- the imaging system 200 stores the aperture, brightness, focus position, drop position and/or size. The imaging system 200 may then use mean values of these factors as initial imaging settings for subsequent plates.
- an illumination source filter 270 (FIG. 2) may be inserted in the filter slot 272 so that the filter 270 is interposed between the light source 216 and the sample plate 212 .
- the various filters 270 may be inserted and removed from the filter slot 272 by a plate handler. Thus, the filter 270 may be automatically removed or exchanged by the imaging system 200 .
- an image filter (such as those that may be placed in the photo-filter carriage 237 ) may be interposed between the sample droplet in the sample plate 212 and the objective lens 231 .
- the image filter includes a polarization filter that provides a variable amount of polarization on the light incident on the objective lens 231 . The use of these filters can be automatically controlled by imaging software routines and/or determined by operator defined variables.
- the motorized control of aperture, focus, and zoom of the lens assembly 230 in conjunction with remote control of the light source 216 allows dynamic optimization of contrast, field of view, depth of field, and resolution.
- FIG. 17 depicts a functional block diagram of an automated sample analysis system 1700 having an imaging system 100 , 150 , or 200 .
- the system 1700 includes controllers and logic 1760 for controlling various subsystems housed in a cabinet 1702 .
- the system 1700 can further include a shelf access door 1712 for allowing access to a removable shelf system 1720 and/or a stationary shelf system 1722 .
- a removable shelf access door 1710 can be provided.
- the system 1700 can include a transport assembly 1730 that can consist of a plate handler 1732 , an elevator assembly 1734 , and a rotatable platform 1736 .
- the system 1700 can further include an environmental control subsystem 1765 that employs a refrigeration unit 1762 and/or a heater 1764 .
- the system 1700 also includes an imaging system 200 as has been described above.
- the imaging system 200 having subcomponents 210 , 214 , 216 , 218 , 220 , and 230 , which are fully detailed above with reference to FIGS. 2-16, can be housed in the cabinet 1702 .
- This arrangement ensures that the samples in the sample plates remain at all times within the confines of a controlled environment. That is, once a sample plate is stored in the cabinet 1702 , it is unnecessary to expose the sample plate to the environment external to the cabinet since the system 1700 is capable of automatically (i.e., without operator intervention) carrying out the imaging of the sample within the cabinet 1702 .
- Embodiments of an automated sample analysis system 1700 having an imaging system in accordance with the invention are described in the related United States Provisional Patent Application entitled “AUTOMATED SAMPLE ANALYSIS SYSTEM AND METHOD,” having U.S. Patent Application No. 60/444,519, which is referenced above.
- FIG. 18 depicts a block diagram of an imaging and analysis system 1800 , according to one embodiment of the invention.
- the imaging system 1805 can be an imaging system 100 , 150 , or 200 as described above, or another suitable imaging system that provides similar functionality to the imaging systems described herein.
- the system 1800 includes an imaging system controller 1820 that provides logical control of the imaging system 1805 to, for example, direct the imaging system 1805 to image a particular sample on a particular sample plate 212 , all the samples on the sample plate 212 , or image a subset of the samples.
- the imaging controller 1820 may also control the imaging parameters used by the imaging system 1805 .
- imaging parameters can include, for example, focus, depth of field, aperture, zoom, illumination filtering, image filtering and brightness.
- the system 1800 also includes an image storage device 1810 that stores images of samples captured by the imaging system 1805 .
- the image storage device 1810 can be any suitable computer accessible storage medium capable of storing digital images, e.g., a random access memory (RAM), hard disk, floppy disk, optical disk, compact disc, or magnetic tape.
- FIG. 18 shows the image storage device 1810 separate from the imaging system 1805 .
- the image storage device 1810 can be included in the imaging system 1805 , or it may be included in a system that may also include an image analyzer 1815 , the imaging system controller 1820 , or a scheduler 1825 .
- a computer includes all the control, scheduling, analysis and imaging software for the system 1800 .
- the software for the system 1800 may reside and run on a plurality of computers that are in communication with each other.
- the imaging system 1805 may be configured to provide captured images directly to the image analyzer 1815 , or it may be configured to store images on the image storage device 1810 and provide them to the image analyzer 1815 as directed by the imaging system controller 1820 .
- the scheduler 1825 communicates with the image analyzer 1815 and the imaging system controller 1820 to control the analysis and imaging of samples based on user provided input. For example, the scheduler can schedule the imaging of a particular droplet or a plurality of droplets on a sample plate, and coordinate the imaging of said droplet or plurality of droplets with its subsequent analysis.
- the scheduler 1825 can use a database 1830 to store information relating to scheduling the images and image specific information, for example, the size of pixels in each of the stored images, in a suitable format for quick retrieval. Knowing the pixel size can allow the analyzer 1815 to reduce sampling to an appropriate density and size for particular objects in the image.
- the information in the database 1830 can be available with each request to process an image.
- the database 1830 can reside on the same computer as the scheduler 1825 or on a separate computing device.
- the scheduler 1825 provides an analysis request to the image analyzer 1815 .
- the analysis request includes an image list, including the resolution of each image and the absolute X,Y location of its center.
- the image list typically contains only one image but may contain a plurality of images.
- the analysis request can also contain an analysis method including a list of parameters that specify options controlling how to analyze the image(s) and what to report.
- the analysis request can include the Uniform Resource Locator (“URL”) of a definition file 1835 , i.e., an electronic address that may be on the Internet, such as an ftp site, gopher server, or Web page.
- the definition file 1835 defines parameters used by the image analyzer 1815 , e.g., neural network dimensions, weights and training resolution (e.g., pixel granularity, or the spacing between pixels, of images used to train the neural network).
- the definition file 1835 may be a single file or a plurality of files, but will be referred to hereinafter in the singular.
- the image analyzer 1815 also receives an analysis method file(s) 1840 .
- the analysis method file may be a single file or a plurality of files, but will be referred to hereinafter in the singular.
- the analysis method file 1840 includes parameters that can be used by the various image analysis modules contained in the image analyzer 1815 , e.g., a content analysis module 1930 , a notable regions module 1935 , and a crystal object analysis module 1940 (FIG. 19), described below, according to one embodiment.
- the image analyzer 1815 can also include functionality that determines the content of an image in terms of objects and/or regions of, for example, crystals or precipitate, or clear regions, that is, regions that do not show any features.
- the image analyzer 1815 includes a neural network to identify features, e.g., crystals, precipitate, and edges, that are depicted in the image, according to one embodiment.
- the image analyzer 1815 is configured to identify objects and regions of interest in an image quickly enough to allow the system 1800 to re-image specific objects or regions, if desired, while the corresponding sample plate is still in the imaging system 1805 .
- the image analyzer 1815 provides an analysis response to the scheduler 1825 .
- the analysis response typically includes the parameters used for the analysis and the results of the particular analysis performed, e.g., the count of crystal, precipitate, clear and edge samples, regions of crystals, and/or a list and description of objects found in the image.
- the analysis results can be reviewed using an output display 1845 that can be co-located with the scheduler or at a remote location.
- the output displays may be coupled to the system 1800 via a web server, or via a LAN or other small network topology.
- Embodiments of a remote output display in accordance with the invention are described in related United States Provisional Patent Application entitled “REMOTE CONTROL OF AUTOMATED LABS,” having Application No. 60/444,585.
- FIG. 19 depicts a computer 1900 that includes a processor 1905 in communication with memory 1910 , e.g., a hard disk and/or random access memory (RAM).
- the processor 1905 is also in communication with an image analysis module 1960 that can include various modules configured to perform the functionality of the image analyzer 1815 (FIG. 18) described herein.
- the computer 1900 may contain conventional computer electronics that are not shown, including a communications bus, a power supply, data storage devices, and various interfaces and drive electronics. Although not shown in FIG. 19, it is contemplated that in some embodiments, the computer 1900 may include a video display (e.g., monitor), a keyboard, a mouse, loudspeakers or a microphone, a printer, devices allowing the use of removable media including, but not limited to, magnetic tapes and magnetic and optical disks, and interface devices that allow the computer 1900 to communicate with another computer, including but not limited to a computer network, a LAN, an intranet, or a WAN, e.g., the Internet.
- the computer 1900 is in communication with an imaging storage device, for example, image storage device 1810 (FIG. 18), and is configured to receive an image of a sample from the storage device and determine the contents of the sample, using one or more analysis processes.
- the computer 1900 can be co-located with the image storage device, located near the image storage device, e.g., in the same building, or geographically separated from the image storage device.
- the computer 1900 can receive the image from the image storage device via, e.g., a direct electronic connection or through a network connection, including a local area network or a wide area network such as the Internet. It is also contemplated that the computer 1900 can receive the image via a suitable type of removable media, e.g., a 3.5″ floppy disk, compact disc, ZIP drive, magnetic tape, etc.
- the computer 1900 can be implemented with a wide range of computer platforms using conventional general purpose single chip or multichip microprocessors, digital signal processors, embedded microprocessors, microcontrollers and the like.
- the computer 1900 can operate independently, or as part of a computing system.
- the computer 1900 may include stand-alone computers as well as personal computers, workstations, servers, clients, mini-computers, main-frame computers, laptop computers, or a network of individual computers.
- the configuration of the computer 1900 may be based, for example, on Intel Corporation's family of microprocessors, such as the PENTIUM family and Microsoft Corporation's WINDOWS operating systems such as WINDOWS NT, WINDOWS 2000, or WINDOWS XP.
- the computer 1900 includes one or more modules or subsystems that incorporate the analysis processes described herein.
- each module can be implemented in hardware or software, or a combination thereof, and comprise various subroutines, procedures, definitional statements, and macros that perform certain tasks.
- all the modules are typically separately compiled and linked into a single executable program.
- the processes performed by each module may be arbitrarily redistributed to one of the other modules, combined together with other processes in a single module, or made available in, for example, a shareable dynamic link library.
- a module may be configured to reside on the addressable storage medium and configured to execute on one or more processors.
- a module may include, by way of example, other subsystems, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. It is also contemplated that the computer 1900 may be implemented with a wide range of operating systems such as Unix, Linux, Microsoft DOS, Macintosh OS, OS/2 and the like.
- the analysis module 1960 can include a pre-processing module 1925 that can filter the received image prior to further processing.
- the image may be filtered to remove “noise” such as speckles, high frequency noise or low frequency noise that may have been introduced by any of the preceding steps including the imaging step.
- Filtering methods to remove high frequency or low frequency noise are well known in image processing, and many different methods may be used to achieve suitable results. For example, according to one embodiment in a filtering procedure that removes speckle, for each pixel, the mean and standard deviation of every other pixel along the perimeter of a 5 ⁇ 5 pixel area centered on a pixel are computed. If the center pixel varies by more than a threshold multiplied by the standard deviation, then it is replaced by the mean value. Then the slope of the 5 ⁇ 5 image pixel intensities is calculated and the center pixel is replaced by the mean value of pixels interpolated on a line across the calculated slope.
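- The speckle-removal step can be sketched as follows; the choice of perimeter offsets is one plausible reading of "every other pixel along the perimeter," and the follow-up slope-interpolation step is omitted for brevity:

```python
from statistics import mean, pstdev

def despeckle(img, threshold=2.0):
    """Replace outlier pixels with the mean of every other pixel along the
    perimeter of the centered 5x5 neighborhood (simplified sketch)."""
    h, w = len(img), len(img[0])
    # every other perimeter cell of a 5x5 window (8 of the 16 perimeter cells)
    offsets = [(-2, -2), (-2, 0), (-2, 2), (0, -2),
               (0, 2), (2, -2), (2, 0), (2, 2)]
    out = [row[:] for row in img]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            ring = [img[y + dy][x + dx] for dy, dx in offsets]
            m, s = mean(ring), pstdev(ring)
            if abs(img[y][x] - m) > threshold * s:
                out[y][x] = m              # replace speckle with the ring mean
    return out
```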
- the analysis module 1960 also includes one or more modules that perform image analysis to determine information about the sample contents, including content analysis module 1930 , notable regions analysis module 1935 , and crystal object analysis module 1940 .
- the content analysis module 1930 determines the count of crystal, precipitate, clear and edge pixels in the image, and can be optionally enabled to operate only inside a specific region of the sample.
- the notable regions analysis module 1935 determines a list of regions of a specified pixel type, e.g., crystal, precipitate, clear and edge pixels.
- the crystal object analysis module 1940 determines objects containing crystal pixels that meet certain criteria, for example, size, area, or density.
- FIG. 19 also shows analysis module 1960 includes a report inner/outer non-clear ratio module 1945 that determines the ratio of non-clear pixel density inside a sample region over non-clear pixel density outside a sample region.
- the analysis module 1960 also includes a graphical output analysis module 1950 that generates a color-coded image depicting each of the various features found in a sample image in a specified color. These modules are further described hereinbelow.
- Other analysis modules 1955 that incorporate different image analysis processes may also be included in the analysis module 1960 .
- an analysis module 1955 can analyze the change in two or more images of the same sample taken at two different times.
- the analysis module 1955 can receive the count of pixels that are classified as crystal, precipitate, clear or edge pixels in an image of a particular region of a sample at a time T1 and save the count information with a reference to the region of a sample imaged. When the same region of a sample is re-imaged at a later time T2, the analysis module 1955 receives the count of pixels that are classified as crystal, precipitate, clear and edge pixels in the image of the sample region at time T2. The analysis module 1955 can compare the count information from time T1 and T2 to determine if the droplet contains a crystal(s). One analysis method compares the total number of pixels classified as crystal pixels at time T1 and T2 to determine if the sample contains crystal.
- Another comparison method compares the percentage of crystal pixels at time T1 to the percentage of crystal pixels at time T2. If the count or the percentage of crystal pixels increases beyond a threshold value, the sample will be deemed to contain crystals.
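- The percentage-based comparison can be sketched as follows; the 5-point growth threshold is an arbitrary illustrative value:

```python
def contains_crystals(counts_t1, counts_t2, threshold_pct=5.0):
    """counts_*: per-class pixel counts (crystal, precipitate, clear, edge)
    for the same sample region imaged at times T1 and T2."""
    pct1 = 100.0 * counts_t1["crystal"] / sum(counts_t1.values())
    pct2 = 100.0 * counts_t2["crystal"] / sum(counts_t2.values())
    # deem the sample to contain crystals if the crystal percentage grew
    return (pct2 - pct1) > threshold_pct

t1 = {"crystal": 10, "precipitate": 90, "clear": 880, "edge": 20}
t2 = {"crystal": 120, "precipitate": 80, "clear": 780, "edge": 20}
assert contains_crystals(t1, t2)
```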
- similar comparisons may be performed for the other pixel classifications, e.g., precipitate, clear and edge pixels.
- a time-based comparison method where the count information is saved for one image and compared to a second subsequent image, can be used with any sample processing algorithm.
- the analysis module 1955 may analyze crystal growth in a series of two or more images using a grid approach.
- two images I1 and I2 are divided up into grids, and the corresponding grids in each image are compared for change in the number of crystal pixels, using, for example, the actual number of pixels or the percentage of crystal pixels.
- the pixel count information can be kept for each image and used to compare to other images taken at a different time.
- the method can include analyzing every pixel, or skipping one or more pixels between the pixels analyzed.
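- The grid approach can be sketched as follows, assuming per-pixel crystal masks for the two images and an illustrative cell size:

```python
def changed_cells(mask1, mask2, cell=4):
    """Return the grid cells whose crystal-pixel count grew between the
    mask of image I1 (mask1) and the mask of image I2 (mask2)."""
    h, w = len(mask1), len(mask1[0])
    changed = []
    for y0 in range(0, h, cell):
        for x0 in range(0, w, cell):
            c1 = sum(mask1[y][x] for y in range(y0, min(y0 + cell, h))
                                 for x in range(x0, min(x0 + cell, w)))
            c2 = sum(mask2[y][x] for y in range(y0, min(y0 + cell, h))
                                 for x in range(x0, min(x0 + cell, w)))
            if c2 > c1:                       # crystal count grew in this cell
                changed.append((y0 // cell, x0 // cell))
    return changed
```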
- a scheduler module 1915 and an imaging system controller module 1920 are also included in computer 1900 , according to one embodiment. These modules are configured to include functionality that schedules the imaging of sample plates/droplet samples and subsequent analysis of the images, and controls the imaging system 100 , 150 , 200 , 1805 , as described herein, e.g., for scheduler 1825 and imaging system controller 1820 , respectively.
- the image analysis software package may include support software that performs training and configuring of perception and analysis functionality, e.g., for a neural network.
- Some of the algorithms included in the image analysis software modules may use stochastic processing and may include the use of a pseudo-random number generation to find answers. All such functions can be provided a random number generator seed in request parameters received by the software module.
- the image analysis modules can be configured so that an analysis method using a pseudo-random number does not affect the results of a different analysis method or software module.
- the image analysis software works with an image size of, for example, 800 by 600 pixels, a zoomed-in resolution of 2,046 pixels/mm (0.5 ⁇ m/pixel), and a zoomed-out resolution of 186 pixels/mm, (5.4 ⁇ m/pixel), or 1,024 by 1,024 pixels, a zoomed-in resolution of 2,460 pixels/mm (0.41 ⁇ m/pixel), and a zoomed-out resolution of 220 pixels/mm, (4.5 ⁇ m/pixel).
- the image analysis modules may optionally use the same neural network for both zoomed-in and zoomed-out images; however, the quality of the results may suffer if only one neural network is used, and it may be advantageous to train multiple neural networks, e.g., one for zoomed-in images and one for zoomed-out images.
- the image analysis software can also be adapted to other image sizes and pixel resolutions; however, training new neural networks may be necessary to suitably process these images. If the resolution of the images varies, each definition file may include its training resolution, that is, the spacing between sampled pixels that was used to train the neural network. This information allows the algorithms to consider how to adapt images of varying resolution for use with the neural networks.
- the analysis module receives an analysis request (FIG. 18) containing an image list that includes the images to be analyzed.
- the analysis request also includes, for each image, its resolution in pixels/mm and the absolute X-Y location of the center of the image. Typically, there is only one image in the image list, however, multi-image methods may also be used.
- the analysis request also includes an analysis method, which is a collection of parameters that specify options controlling how to analyze the images and what to report. In specifying the analysis method, a URL of the definition file is included.
- the definition file defines the neural network's dimensions, weights and training resolution, i.e., a pixel granularity of the images that were used to train the neural network. Examples of the parameters are first described generally below, and then specifically as they relate to the content analysis module 1930 , notable regions analysis module 1935 , and the crystal object analysis module 1940 , according to one embodiment.
- the analysis request may include parameters that specify how a working copy of the image is prepared for all subsequent processing.
- parameters can include options for a color to grayscale conversion of the image, and resizing of the image using pixel interpolation methods.
- the parameters may specify the output of an image, for example, they may specify whether and how an image file representing the pixel interpretation should be generated. This generated image file may be visually displayed and further evaluated by a user.
- the parameters are also used by the analysis modules, e.g., in the content analysis module, the parameters specify whether an image is scanned and analyzed to determine statistics of its contents in terms of crystal, precipitate, clear and edge features. These parameters specify whether crystal-like objects should be searched for and reported. Options may include a scan grid, an ID criteria and the maximum number of objects to find.
- the parameters may also be used by the notable region analysis module 1935 to specify whether notable regions in an image should be reported and, if so, the scan grid in micrometers, the size that is the width times the height in micrometers, the ID criteria, and the quantity of regions to report.
- the crystal object analysis module 1940 can use the parameters to specify whether effective contiguous subregions of crystals are identified and reported as crystal objects, how this identification should be performed, and the quantity of crystal objects to identify.
- the parameters can also specify whether to report the inner/outer non-clear ratio. If this ratio is to be specified, the output includes a ratio of the non-clear pixel density inside a sample region over the non-clear pixel density outside of the sample region. For example, the ratio would be 3.0 if every 100th pixel inside of a sample region is non-clear and every 300th pixel outside of a sample region is non-clear. According to one embodiment, ratios above 1 billion are truncated to that value.
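- The worked example above can be expressed directly in code; the per-pixel class labels are hypothetical inputs standing in for the classifier output:

```python
def inner_outer_ratio(inner_labels, outer_labels, cap=1e9):
    """Ratio of non-clear pixel density inside the sample region to the
    density outside it; ratios above 1 billion are truncated to that value."""
    inner = sum(1 for c in inner_labels if c != "clear") / len(inner_labels)
    outer = sum(1 for c in outer_labels if c != "clear") / len(outer_labels)
    return cap if outer == 0 else min(inner / outer, cap)

# every 100th pixel non-clear inside, every 300th outside -> ratio 3.0
inner = (["clear"] * 99 + ["crystal"]) * 3
outer = ["clear"] * 299 + ["precipitate"]
assert abs(inner_outer_ratio(inner, outer) - 3.0) < 1e-9
```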
- Image sampling parameters may include, for example, a color processing parameter which specifies how each pixel is converted to a floating point intensity value, or it may specify the linear grayscale for image conversion. If the image is already grayscale, pixels are converted to black, e.g., 0.0, or to white, e.g., 1.0. If color is selected, the pixels are linearly converted to 0.0 to 1.0 with equal channel weighting for each color.
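- The color-processing conversion can be sketched as follows, assuming 8-bit channels:

```python
def to_intensity(pixel):
    """Convert a pixel to a floating point intensity in [0.0, 1.0]:
    linear grayscale (black = 0.0, white = 1.0), or linear conversion
    with equal channel weighting for each color."""
    if isinstance(pixel, tuple):                 # (r, g, b), 0-255 each
        return sum(pixel) / (3 * 255.0)          # equal channel weighting
    return pixel / 255.0                         # already grayscale

assert to_intensity((255, 255, 255)) == 1.0
assert to_intensity(0) == 0.0
```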
- Pixel interpolation parameters may include, for example, no pixel interpolation, that is, only a closest pixel method will be used for pixel interpolation. This is generally the fastest interpolation method but typically results in reduced image quality.
- Interpolation methods that may be selected include bilinear and cubic spline interpolation, which yield higher quality images but are more computationally complex and take more time or resources to generate.
- the re-size parameter includes options of 1:1, that is, the image is not resized, automatic, where the image is resized to match the training resolution using the specified interpolation method, and scale factor, where the image is re-sized using this factor and specified interpolation method.
- the analysis modules 1930 , 1935 , 1940 are configured to receive an analysis request from a scheduler module 1915 and generate a response, as described below.
- the content analysis module 1930 determines counts of types of pixels in the sample images, e.g., crystal, precipitate, clear and edge pixels, as depicted in the image.
- the content analysis module 1930 is implemented as a neural network.
- the content analysis module 1930 receives a set of parameters that include parameters that indicate whether this module should be enabled, whether the content analysis should take place inside the sample region only or inside and outside the sample region, and the number of pixels to be skipped during the image analysis. If enable is set to NO, no analysis by the content analysis module 1930 is done and nothing is reported. If enable is set to YES, then the content analysis module analyzes the sample image. If the inside-sample-region-only option is set to YES, the edge of the sample region is found first, and the analysis is done only within the sample region edge. If inside-sample-region-only is set to NO, then checking is done inside and outside the sample region.
- a process for identifying the edge of a sample region is described hereinbelow in reference to FIG. 20, according to one embodiment. If the number of pixels to be skipped is set to 0, all the pixels in the image will be used. If the number of pixels to be skipped is set to 1, every other pixel in the image will be used for the content analysis, if 2, every third pixel will be used, etc. The default parameter for skipped pixels is typically set to 0.
- the response of the content analysis module 1930 includes an “echo” of the parameters used during the content analysis processing, and the counts of each pixel type, i.e., the crystal, precipitate, clear and edge pixels found in the image. If the inside-sample-region-only option is enabled, the edge count can be used to assess how well the edge of the sample region was found. If it is not enabled, the edge count may be ignored.
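- The counting behavior described above, including the skip-pixels parameter, can be sketched as follows. The function name and label representation are illustrative assumptions; the patent classifies pixels with a neural network, while this fragment simply tallies pre-classified labels.

```python
from collections import Counter

def count_pixel_types(labels, skip=0):
    """Count pixels of each type (e.g. 'crystal', 'precipitate', 'clear',
    'edge') in a 2-D grid of per-pixel labels.  skip=0 uses every pixel,
    skip=1 every other pixel, skip=2 every third pixel, and so on, as
    described for the skip-pixels parameter."""
    step = skip + 1
    counts = Counter()
    for row in labels[::step]:
        for label in row[::step]:
            counts[label] += 1
    return counts
```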
- the notable region analysis module 1935 processes an image and determines regions of a specified size that include the minimum levels of crystal, precipitate or non-clear pixels.
- the request parameters for the notable region analysis module 1935 can include an enable parameter which is set to either “YES” or “NO” that determines if notable region analysis should be performed and reported.
- the request parameters can also include a region size or area that is used to determine the size of the smallest region the notable region analysis module will identify.
- a skip-pixel parameter can be included to control the number of pixels that will be skipped during processing, where “0” means to check all of the pixels, “1” means to sample every other pixel, that is, sample the pixels with one unsampled pixel between them, etc. Typically, the default value for skip-pixel is “0.”
- the request parameters can also include the maximum number of regions to report and the minimum percentages of crystal pixels, precipitate pixels and non-clear pixels to report. Typically, pixels determined to be edge-type pixels are ignored.
- the notable region analysis module 1935 can be configured to identify regions with the highest percentage of each specified pixel type. If a region contains less than the minimum percentage of pixels, it is not saved and the search for regions ends. Regions typically do not go outside of the input image. Newly found regions generally do not overlap existing regions.
- the report of results from the notable regions analysis module includes all the request parameters and a list of the regions identified. The results for each region can include its absolute position, size, the number of crystal pixels and the total pixels sampled, not including edge pixels.
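- The notable-region search can be sketched as a greedy scan: repeatedly take the best remaining window of the requested size, stop when the best falls below the minimum percentage, and never let a new region overlap an accepted one. The function name and the square-window simplification are mine; the patent describes the behavior, not this exact algorithm.

```python
import numpy as np

def find_notable_regions(is_type, region_size, min_fraction, max_regions):
    """Greedily pick non-overlapping region_size x region_size windows of a
    boolean pixel-type mask, ranked by the fraction of matching pixels.
    Stops when the best remaining window is below min_fraction or when
    max_regions have been reported."""
    h, w = is_type.shape
    area = region_size * region_size
    taken = np.zeros_like(is_type, dtype=bool)
    regions = []
    while len(regions) < max_regions:
        best, best_pos = -1.0, None
        for y in range(h - region_size + 1):
            for x in range(w - region_size + 1):
                win = (slice(y, y + region_size), slice(x, x + region_size))
                if taken[win].any():      # new regions may not overlap old ones
                    continue
                frac = is_type[win].sum() / area
                if frac > best:
                    best, best_pos = frac, (y, x)
        if best_pos is None or best < min_fraction:
            break                          # below the minimum: search ends
        y, x = best_pos
        taken[y:y + region_size, x:x + region_size] = True
        regions.append((y, x, best))       # absolute position plus fraction
    return regions
```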
- the crystal object analysis module 1940 identifies small regions in the image that are rich in crystal pixels.
- the small regions, or objects, comprise one or more “cells.”
- the request parameters for the crystal object analysis module can include an enablement parameter which determines if this analysis should be performed and reported.
- the request parameters also include a skip-pixels parameter that operates as previously described above, parameters that control the size of the cells identified, for example, a cell-minimum-size parameter to control the smallest width or height of a cell, a cell minimum area which indicates the smallest overall area of a cell, a cell minimum density parameter which indicates the proportion from 0 to 1 of crystal pixels the cell must contain in order to be reported, and an object-minimum-size parameter which indicates one or more dimensions that the overall object must achieve in order to be reported.
- the request parameters can also include a pseudo-random generator seed which is used for the crystal object analysis stochastic processing.
- the crystal object analysis module 1940 typically includes the limitation that the center of a cell cannot be inside another cell.
- Identified cells that touch are grouped and identified as a single crystal object, and the largest overall dimension of the crystal object is computed. If the largest overall dimension is less than the minimum size, the object is discarded.
- the crystal object analysis processing can also compute an object area as the sum of the cell density times the cell area, and further compute the object centroid.
- the results from the crystal object analysis module 1940 can include all the request parameters provided to the module, and a list of the objects identified and their descriptions. The list is sorted in descending order by object area. Each object description includes the object area (μm²), the centroid (X, Y in μm) and a list of the cells that make up the object. Each cell is described with its absolute position and size (μm), crystal pixel count and total pixel count.
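- The cell-grouping and object measurements above can be sketched as follows. The cell record layout (a dict with position, size, crystal count and total count) is a hypothetical representation; the patent specifies the behavior (touching cells merge, object area is the sum of cell density times cell area, objects below the minimum size are discarded, results sort by descending area) but not a data structure.

```python
def group_cells(cells, min_object_size):
    """Group touching cells into crystal objects; compute each object's
    area (sum of cell density x cell area) and an area-weighted centroid.
    Each cell: dict with x, y, w, h (um), crystal count, total count."""
    def touches(a, b):
        return (a["x"] <= b["x"] + b["w"] and b["x"] <= a["x"] + a["w"] and
                a["y"] <= b["y"] + b["h"] and b["y"] <= a["y"] + a["h"])

    unassigned = list(cells)
    objects = []
    while unassigned:
        group = [unassigned.pop()]
        grew = True
        while grew:                      # absorb every cell touching the group
            grew = False
            for c in list(unassigned):
                if any(touches(c, g) for g in group):
                    unassigned.remove(c)
                    group.append(c)
                    grew = True
        xs = [g["x"] for g in group] + [g["x"] + g["w"] for g in group]
        ys = [g["y"] for g in group] + [g["y"] + g["h"] for g in group]
        largest_dim = max(max(xs) - min(xs), max(ys) - min(ys))
        if largest_dim < min_object_size:
            continue                     # object below minimum size: discard
        area = sum((g["crystal"] / g["total"]) * g["w"] * g["h"] for g in group)
        cell_area = sum(g["w"] * g["h"] for g in group)
        cx = sum((g["x"] + g["w"] / 2) * g["w"] * g["h"] for g in group) / cell_area
        cy = sum((g["y"] + g["h"] / 2) * g["w"] * g["h"] for g in group) / cell_area
        objects.append({"area": area, "centroid": (cx, cy), "cells": group})
    return sorted(objects, key=lambda o: o["area"], reverse=True)
```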
- the graphical output module 1950 generates a representation of the analyzed image which can be displayed and further analyzed. For example, grayscale and/or color coding pixel characteristics may be adjusted by the graphical output module 1950 .
- the analysis request for the graphical output module 1950 includes an image path parameter that defines where the image to be analyzed is found. If the image path parameter is empty, no further processing is done.
- a base value parameter indicates whether a “base image,” i.e., an image used to generate the representation of the analyzed image, is either black, gray or white. If the base value is gray, the base image begins as a grayscale rendition of the resampled image. Otherwise, the base image begins as a white or black image, as indicated by the base value.
- the parameters include a gray “min” value and a gray “max” value, which are typically from 0 to 1, and specify the linear grayscale compression.
- adjusting the gray min or max values can control the color coding contrast or flatten the image, and they are typically set to defaults of 0 for the gray min and 0.75 for the gray max.
- An opaque parameter indicates whether a pixel in the base image should be replaced with the color coding associated with the particular type of corresponding pixel in the analyzed image. For example, if the opaque parameter is set to YES or the base parameter equals black or white, the appropriate color coding replaces the pixel. If the opaque parameter is set to NO, the color for a base image pixel is generated by OR'ing the color with the corresponding pixel in the analyzed image.
- a crystal color parameter provided in the analysis request sets the color coding value for pixels identified as crystals, a precipitate color parameter sets the color coding for precipitate pixels, and an edge color sets the color coding for pixels identified as edges.
- the default values for the crystal color parameter may be blue, the precipitate color parameter may be green and the edge color parameter may be red.
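- The opaque-versus-OR behavior can be shown with a small fragment. This is a simplified sketch: it reduces the decision to a single opaque flag, whereas the module also forces replacement when the base image is black or white.

```python
def apply_color_coding(base_pixel, code_color, opaque):
    """Color-code one base-image pixel given the RGB code for its pixel
    type.  With opaque=True the code replaces the pixel; otherwise the
    code is OR'ed with the pixel channel-by-channel, so the underlying
    grayscale detail still shows through the coding."""
    if opaque:
        return code_color
    return tuple(b | c for b, c in zip(base_pixel, code_color))
```

With the default blue crystal code, OR'ing a mid-gray base pixel saturates only the blue channel, leaving the red and green channels carrying the original image detail.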
- the graphical output module 1950 writes the color coded image file to the image path specified in the request parameters, unless the path parameter is empty or invalid.
- the generated color-coded image file typically does not contain region annotations, but annotations can be superimposed on the image file by another process, if desired.
- the graphical output module 1950 provides an analysis report to the scheduler module 1915 that includes the request parameters that were used to produce the color coded image file.
- the analysis modules 1930 , 1935 , 1940 can function as service functions that are capable of quickly identifying objects and/or regions within an image, so that a scheduler module 1915 can dispatch control information to the imaging controller module 1920 , which in turn directs the imaging system to re-image specific areas of a droplet using at least one different imaging parameter, (e.g., the magnification or zoom level may be different, a different configuration of lighting, such as, off-axis lighting may be used, etc.), while the sample plate containing the sample just analyzed is in the imaging device.
- an analysis module 1960 can analyze at least 10,000 images per day under typical conditions, where the images are less than or equal to 1.0 megapixel, i.e., the equivalent of processing each image in 8.64 seconds, and where one instance of the image analysis software is running on one PC.
- the analysis module may be packaged and distributed in a Java 2 file. Java message service may be used to receive requests and send the responses from the analysis module(s). Extensible markup language (XML) may also be used for the analysis requests and responses.
- Test images are used with training software to train the neural networks to analyze crystal growth in sample droplets.
- Training software allows the user to create, open, display, edit and save lists of images in training/test set files, and is described herein according to one embodiment of the invention.
- the test images include identified subimages containing edge, crystal, precipitate and clear pixels within a wide variety of images. For each image, the user can designate “training subimages” as crystal, precipitate, edge or clear. The resolution of the subimages can be user-adjustable.
- the software can include a single-click designation action that efficiently designates the subimages as crystal, precipitate, edge or clear.
- the images containing the designated training subimages can be saved as a set of training files.
- the training software can display training subimages in table form and/or as color-coded markers on an image. Subimages may be moved by either dragging the marker or editing the table. Subimages may also be deleted either from the image or from the table.
- the training software can be configured to allow a user to define the neural network dimensionality, select a training set file and another file for testing, and perform iterative training and testing using the selected sets of files. Training data, e.g., neural network weights, training and test error, and the number of iterations is saved in a definition file.
- the intensity levels of pixels in a selected image area are provided as an input to the neural network.
- the neural network identifies each pixel as a particular type of pixel, e.g., edge, clear, crystal or precipitate.
- the results are compared to what is actually correct, and corresponding error values are calculated. Small adjustments are made to the weights within the neural network based on the error values, and then another test image containing a designated subimage is provided as an input to the neural network. This process is performed for other test images and can be repeated for many thousands of iterations, where each time the weights may be slightly adjusted to provide a more accurate output.
- an image of a sample droplet is provided as an input to the neural network.
- the output of the neural network includes a rating for each pixel that indicates a degree of confidence that the pixel depicts each of the different pixel classifications, for example, edge, crystal, precipitate, and clear.
- the rating is typically between zero and one, where zero indicates the lowest degree of confidence and a 1 indicates the highest degree of confidence.
- the overall content of an image can be determined by counting the number of pixels of each classification, computed as a percentage of the crystal, precipitate, edge and clear pixels contained in the image.
- one analysis option identifies edges of a drop within the image, and may be used with quick and coarse resolution search parameters to first identify the edge of the drop, and then the interior of the drop may be analyzed with a higher resolution search.
- a supervised learning type of neural network is used to classify the subimages as crystal, precipitate, edge of drop or clear, using the pixel intensity, not the pixel hue.
- the entire image is scanned, sampling subimages on a host-specified grid, where the spacing of the grid is in millimeters, not pixels. The resolution of the images is provided as a parameter received from the host.
- Pie charts can be generated graphically showing the results of the neural network analysis.
- Each image analysis method file contains neural network definitions, e.g., “dimensions” and “weights.”
- the method file also includes parameters that specify the analysis options including whether to perform drop edge detection, and if drop edge detection is selected, the sample grid spacing used to find the edges of the drop, and the sample grid spacing to find crystals within the drop. For example, drop edge detection finds the edge of a drop quickly with a relatively coarse grid spacing scan and then uses a relatively fine grid spacing scan inside the drop, according to one embodiment.
- a database can be used to associate the image analysis file with the image analysis results, so that if a better image analysis method is available at a later time, an image may be re-analyzed using the later analysis method.
- the analysis modules can use a neural network to classify the contents of an image.
- a fast operator can be used to identify if a pixel has a particular crystal characteristic.
- One embodiment of an edge detection process is described below and illustrated in FIG. 20A. Color or black and white images of a sample droplet can be generated and used for identifying crystals.
- the edge detection process 2000 receives the image of a sample that may contain crystals.
- the process 2000 determines if the image received is a color image. If the image is a color image, it is converted to a grayscale image at step 2015 .
- the image may be filtered at step 2020 to remove or minimize undesirable characteristics such as speckle or other types of image “noise” during subsequent processing.
- the edge detection process 2000 uses the gradient of the intensity of the pixels in the image to identify edges.
- gradient information is calculated from a 3 ⁇ 3 set of pixels using a calculation based on the best fit of a plane through the image points.
- the gradient of intensity of the pixel in the center of the 3 ⁇ 3 set of pixels is the direction and magnitude of the maximum slope of the plane.
- the use of a 3 ⁇ 3 set of pixels helps to eliminate some of the effects of image noise on the process.
- Gradient information is calculated for selected pixels in the image. All the pixels in the image may be selected, or a subset of the pixels, e.g., an area of interest in the image which may be smaller than the whole image, may be selected.
- Gradient information is calculated for each selected pixel and stored in three arrays of the same dimensions as the received image.
- the first array contains the cosine of the angle of the gradient direction.
- the second array contains the sine of the angle of the gradient direction.
- the third array contains the magnitude, or steepness, of the gradient. Pixels with a calculated magnitude less than a given threshold have their gradient information set to zero so they are eliminated from further processing.
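- The three gradient arrays can be computed as sketched below. For a 3×3 neighborhood with unit pixel spacing, the least-squares plane fit z = a·dx + b·dy + c reduces to a = (right column sum − left column sum)/6 and b = (bottom row sum − top row sum)/6, since Σdx² = 6 over the nine points. The function name and loop structure are illustrative, not the patent's implementation.

```python
import numpy as np

def gradient_info(image, threshold):
    """For each interior pixel, fit a plane to its 3x3 neighborhood by
    least squares, and store the cosine and sine of the maximum-slope
    direction and the gradient magnitude in three arrays the same size
    as the image.  Pixels below the magnitude threshold stay zero, so
    they are eliminated from further processing."""
    h, w = image.shape
    cos_a = np.zeros((h, w))
    sin_a = np.zeros((h, w))
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = image[y - 1:y + 2, x - 1:x + 2]
            gx = (patch[:, 2].sum() - patch[:, 0].sum()) / 6.0
            gy = (patch[2, :].sum() - patch[0, :].sum()) / 6.0
            m = np.hypot(gx, gy)
            if m >= threshold:
                cos_a[y, x] = gx / m
                sin_a[y, x] = gy / m
                mag[y, x] = m
    return cos_a, sin_a, mag
```

Averaging over the nine points this way is what gives the plane fit its noise resistance compared with a single pixel difference.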
- edge pixels are identified using the gradient information.
- An edge pixel can be defined as a pixel for which the magnitude of the gradient of the image is a local maximum in the direction of the gradient. These pixels represent the points at which the rate of change in intensity is the greatest.
- a separate array of pixels is used (of the same dimensions as the original image) to store this information for further processing.
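- The local-maximum test can be realized as non-maximum suppression along the gradient direction. Rounding the gradient direction to the nearest 8-connected neighbor is my simplification; interpolating the neighbor magnitudes is a common refinement.

```python
import numpy as np

def edge_pixels(mag, cos_a, sin_a):
    """Mark pixels whose gradient magnitude is a local maximum in the
    direction of the gradient, i.e. at least as large as the magnitudes
    one pixel along and one pixel against the (rounded) gradient.  The
    result is stored in a separate array of the same dimensions."""
    h, w = mag.shape
    edges = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if mag[y, x] == 0:
                continue                 # thresholded out earlier
            dx = int(round(cos_a[y, x]))
            dy = int(round(sin_a[y, x]))
            if (mag[y, x] >= mag[y + dy, x + dx] and
                    mag[y, x] >= mag[y - dy, x - dx]):
                edges[y, x] = True
    return edges
```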
- edge pixels are formed into groups based on the direction of their gradient.
- a threshold on the difference in direction is used to include or exclude pixels from a group.
- Each pixel in a group should be adjacent to another pixel in the group.
- the edge pixels are labeled identifying the group to which they belong.
- the group(s) with crystal characteristics are selected and at step 2045 the selected groups are provided to another analysis process for aid in further analysis of the image.
- FIG. 20B includes the same steps 2005 - 2035 as in FIG. 20A, and then uses the crystal characteristic “straightness” to determine whether a group of pixels depict a crystal.
- edge pixels are formed into a group(s), as described above for FIG. 20A.
- the edge detection process 2000 determines the “straightness” of each labeled group of pixels using linear regression, according to one embodiment. The correlation from the linear regression and the number of pixels in the group are used to determine the “straightness” of the group.
- the straightness can be defined as the product of the count of pixels in the group and the reciprocal of 1.0 minus the fourth power of the correlation coefficient for the group, according to one embodiment. If the count of pixels is below a given threshold, the count is set to zero.
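- The straightness definition above can be written directly. Note that a perfectly collinear group gives a correlation of 1 and an unbounded score; this sketch returns infinity in that case, where a real implementation would presumably clamp (the patent does not say). The minimum-count threshold is the one described above.

```python
import numpy as np

def straightness(xs, ys, min_count=5):
    """Straightness of a pixel group: the pixel count times the
    reciprocal of (1 - r**4), where r is the correlation coefficient of
    a linear regression through the pixel coordinates.  Groups with
    fewer than min_count pixels score zero (count set to zero)."""
    n = len(xs)
    if n < min_count:
        return 0.0
    r = np.corrcoef(xs, ys)[0, 1]     # Pearson correlation coefficient
    denom = 1.0 - r ** 4
    return float("inf") if denom == 0.0 else n / denom
```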
- the edge detection process 2000 generates an image, hereinafter referred to as a “lines image,” using the previously calculated straightness information.
- the lines image is the same shape and size as the subset of pixels selected for edge detection.
- the intensity value for a pixel in the lines image is set to the straightness value of the group that its corresponding pixel belongs to.
- the lines image containing information indicating where “straight” pixels may be found, is provided to an analysis module to aid in crystal identification.
- the scheduler 1825 controls the imaging of samples by communicating to the imaging system controller 1815 the necessary information for imaging a particular plate and the droplet samples on that plate.
- the imaging system controller 1815 directs the imaging system 1805 to generate the images of the particular plate and droplet sample at a specified time or in a specified sequence, and the images are stored on the image storage device 1810 .
- the scheduler 1825 sends an analysis request to the image analyzer 1815 , and the corresponding image for that sample is provided to the image analyzer 1815 .
- the image analyzer 1815 determines the contents of the image using one or more of the various analysis modules, and provides results to the scheduler 1825 in an analysis response.
- FIG. 21 shows a process 2100 that uses the results of analyzing an image for subsequent imaging of the same sample, according to one embodiment of the invention.
- a first image of a sample is generated using a first set of imaging parameters, which may include for example, focus, depth of field, aperture, zoom, illumination filtering, image filtering, and/or brightness.
- An analysis process receives the first image at step 2110 and analyzes the first image in accordance with the analysis request at step 2115 .
- the process 2100 determines whether crystal formation in the first image is suspected, the presence of which can make an additional image of the sample desirable. For example, to determine if an additional image is desired, a score can be computed for the image.
- the score can be based upon user-adjustable thresholds and weighting factors, allowing the user to tailor preferences with experienced personal judgment. If the overall score exceeds a specified threshold, reimaging is warranted and an appropriate reimaging request is dispatched. Scoring and thresholds may be a function of apparent image content and/or of system bandwidth and scheduling issues. The more available the system resources, e.g., the imaging subsystem, the more likely zoomed-in reimaging is to occur.
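- One plausible form for such a score is sketched below. The patent does not give a formula; the per-class thresholds and weights, and the structure of combining them, are illustrative assumptions.

```python
def reimaging_score(content, weights, thresholds):
    """Hypothetical reimaging score: each pixel-class fraction that meets
    its user-adjustable threshold contributes its fraction times a
    user-adjustable weight.  Reimaging would be dispatched when the
    returned score exceeds a separate cutoff."""
    score = 0.0
    for cls, frac in content.items():
        if frac >= thresholds.get(cls, 0.0):
            score += weights.get(cls, 0.0) * frac
    return score
```

A scheduler could additionally scale the cutoff by current imaging-subsystem load, so that zoomed-in reimaging happens more readily when resources are idle.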
- the analysis of the first image at step 2120 can be done using a relatively fast running process, e.g., determining the inner/outer non-clear ratio for the droplet sample, and a further, more thorough analysis can be done at step 2140 , according to one embodiment.
- information is provided to the imaging system that allows the same sample to be re-imaged to create a second image of the sample.
- Subsequent images generated of the same sample can use imaging parameters that are different than those used to generate the first image, that is, at least one value of an imaging parameter used to generate the second image is different than the values of the imaging parameters used to create the first image.
- the process 2100 receives the second image of the sample and analyzes the second image at step 2140 using, for example, the analysis methods described herein. Analysis results are output for evaluation or display at step 2145 .
- subsequently generated images can more clearly show the presence of crystal formation. For example, if the formation of crystal in the sample droplet is suspected as a result of analyzing the first image, information can be communicated to the imaging system to zoom-in on the area where the crystal formation is suspected and re-image the droplet using a higher magnification.
- Other imaging parameters e.g., focus, depth of field, aperture, zoom, illumination filtering, image filtering, and brightness, can also be changed to obtain an image that may better depict the contents of the sample.
- Timely analysis of the first image can result in a relatively large time savings if a subsequent image of a particular sample is desired.
- the process for handling a sample plate containing the sample e.g., fetching the correct plate from a storage location, placing the plate in the imaging device, and returning the plate to its storage location, is very time consuming.
- minimizing the amount of plate handling during image generation increases image generation and analysis throughput.
- the images generated from the samples on a sample plate are completely analyzed before the plate is removed from the imaging device. If desired, additional subsequent images of a sample contained on that plate can then be generated without incurring the time required to re-fetch the plate.
- a certain percentage of the images are analyzed before the plate is removed. While this may not allow every sample to be re-imaged without re-fetching the plate, e.g., the analysis of the last sample imaged may not be completed before the plate is removed, it may still result in an overall time savings as it may allow quick re-imaging of most of the samples, if desired, while not unduly delaying the removal of the plate from the imaging device.
- FIG. 22 illustrates a process 2200 that includes generating two images of a sample, where each image is generated using a set of imaging parameters that has at least one different imaging parameter than those used for the other image, according to one embodiment of the invention.
- a first image is generated using a first set of imaging parameters.
- the first image is received by an analysis process which determines one or more regions of interest in the first image at step 2215 .
- the analysis process may be, for example, an edge detection process or a process implemented in one of the analysis modules, both of which are described hereinabove.
- a second image is generated using a second set of imaging parameters where the second set of imaging parameters includes at least one imaging parameter that is different than the first set of imaging parameters.
- One or more imaging parameters may be changed to generate the second image. For example:
- the focal plane may be set to a different height relative to the droplet sample;
- the illumination of the sample may be changed, including using a different direction of illumination (e.g., lighting the sample from alternate sides and off-axis lighting) or a different illumination brightness level;
- the magnification or zoom level used may be changed; and
- different filtering may be used for each image (e.g., polarizing filters).
- the second image is received by an analysis process, and analyzed to determine a region or regions of interest at step 2230 .
- the regions of interest from the first and second images are combined to form a composite image.
- the composite image is the same size as the first and second images.
- the first and second images are analyzed to determine the portion or portions of each image that will be used to form the composite image.
- the composite image is generated by copying the values of the pixels from each region of interest in the first and second images into one composite image.
- the composite image is analyzed for the presence of crystal formation by a user, or automatically by an automatic or interactive analysis method, e.g., using the content analysis module, the notable regions analysis module, the crystal object analysis module, or a report inner/outer non-clear ratio module, as previously described, and the results are output at step 2245 .
- process 2200 shows a process to form a composite image using two images generated with different imaging parameters
- more than two images may also be generated and used to form composite images, where each image is generated using at least one different imaging parameter, according to another embodiment.
- a plurality of images are generated for a sample where the focal plane for each image is set at a different “height” relative to the sample.
- the resulting images may show varying sharpness in corresponding locations. The sharpness of the corresponding portions of the images are compared to determine which portion of each image should form the composite image.
- the portion of each image that best satisfies specified sharpness criteria may be selected from the plurality of images to form the composite image.
- the size of a portion of an image compared to the other images may be as small as a single pixel or several pixels, and may be as large as tens of pixels or hundreds of pixels, or even larger.
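- The focal-stack compositing described above can be sketched as follows. Local intensity variance is used here as the sharpness criterion; that is one plausible choice among several (the patent only says the portion "best satisfies specified sharpness criteria"), and the tile size corresponds to the portion size discussed above.

```python
import numpy as np

def focus_composite(images, tile=2):
    """Combine a stack of same-size images taken at different focal
    heights: for each tile-by-tile portion, copy the pixels from the
    image in which that portion is sharpest, measured here by local
    intensity variance."""
    stack = np.stack(images)             # shape: (n_images, H, W)
    n, h, w = stack.shape
    out = np.zeros((h, w))
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            tiles = stack[:, y:y + tile, x:x + tile]
            best = np.argmax([t.var() for t in tiles])
            out[y:y + tile, x:x + tile] = tiles[best]
    return out
```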
- FIG. 23 illustrates a process 2300 for visual evaluation of crystal growth by a user, according to another embodiment of the invention.
- process 2300 receives an image of a sample.
- the process 2300 classifies the pixels of the image according to their depiction of the contents of the sample, e.g., the pixels are classified as depicting crystal, precipitate, clear or an edge.
- the pixels of the image may be classified by processes incorporated into the content analysis module 1930 , the notable regions analysis module 1935 , the crystal object analysis module 1940 , as described above, or another suitable analysis process.
- process 2300 generates a second image that is color-coded using the pixel classification information from step 2310 .
- Step 2315 may be performed by the above-described graphical output analysis module 1950 .
- Pixels that were classified as edge, precipitate or crystal pixels are depicted as a particular color, e.g., red for crystal pixels, green for precipitate pixels, and blue for edge pixels.
- One or all the classified pixels may be depicted according to a color-code scheme.
- the second image can have opaque color-coded information, or translucent color-coded information that also shows the original image through the color.
- the second image is typically the same size and shape as the image received at step 2305 .
- the color-coded second image is visually displayed, for example, on a computer monitor or on a printout.
- the second image is visually analyzed to determine crystal growth information of the droplet sample. Displaying the color-coded image to a user facilitates efficient interpretation of the contents of the image and allows the presence of crystals in the image to be easily and quickly visualized.
Abstract
Description
- This application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Application No. 60/474,989 filed on May 30, 2003, and entitled IMAGE ANALYSIS SYSTEM AND METHOD, U.S. Provisional Application No. 60/444,586 filed on Jan. 31, 2003, and entitled AUTOMATED IMAGING SYSTEM AND METHOD, U.S. Provisional Application No. 60/444,519 filed on Jan. 31, 2003, and entitled AUTOMATED SAMPLE ANALYSIS SYSTEM AND METHOD, and U.S. Provisional Patent Application No. 60/444,585 filed on Jan. 31, 2003, and entitled REMOTE CONTROL OF AUTOMATED LABS, the entirety of which are incorporated herein by reference.
- 1. Field of the Invention
- This invention generally relates to systems and methods for analyzing and exploiting images. More particularly, the invention relates to systems and methods for identifying and analyzing images of substances in samples.
- 2. Description of the Related Technology
- X-ray crystallography is used to determine the three-dimensional structure of macromolecules, e.g., proteins, nucleic acids, etc. This technique requires the growth of crystals of the target macromolecule. Typically, crystal growth of macromolecules is dependent on several environmental conditions, e.g., temperature, pH, salt, and ionic strength. Hence, growing crystals of macromolecules requires identifying the specific environmental conditions that will promote crystallization for any given macromolecule. Moreover, it is insufficient to find conditions that result in any type of crystal growth; rather, the objective is to determine those conditions that yield well-diffracting crystals, i.e., crystal configurations that provide the resolution desired to make the data useful.
- Modern chemistry and biology laboratories produce and analyze multiple samples concurrently in order to accelerate the crystal growth development cycle. The samples are often produced and stored in a sample storage container, such as the individual wells in a well plate. Alternatively, drops of multiple samples are placed at discrete locations on a plate, without the need for wells to contain the sample. In either case, hundreds, thousands, or more, different sample drops may be placed on a single analysis plate. Similarly, a single laboratory may house thousands, millions, or more, samples on plates for analysis. Thus, the number of drops to monitor and analyze may be extremely large.
- In the screening experiments, samples under investigation are periodically evaluated to determine if suitable crystallization of the sample has taken place. In a conventional laboratory, a technician manually locates and removes each plate or sample storage receptacle from a storage location and views each sample well under a microscope to determine if the desired biological changes have occurred. In most cases, the plates are stored in laboratories within a controlled environment. For example, in protein crystallization analysis, samples are often incubated for long periods of time at controlled temperatures to induce production of crystals. Thus, the technician must locate, remove, and view the samples under a microscope in a refrigerated room. Further increasing the demand for technician labor, hundreds or thousands of samples in sample wells may need to be periodically viewed or otherwise analyzed to determine the existence of crystals in a sample well.
- As an alternative, an image may be periodically generated for each sample and provided to a technician, who need not be geographically co-located with the sample, to analyze the image to evaluate crystal growth. Automated image evaluation techniques can also be used to analyze the image and evaluate the presence of crystal growth and increase system throughput. However, current image analysis techniques do not always receive sufficient information from the sample image to accurately evaluate crystal growth. Important information learned as a result of analyzing the image is not automatically exploited, or used for further analysis to facilitate a user's evaluation of the image. Additionally, in current systems, the results of analyzing the image are not adequately provided to facilitate easy interpretation and efficient decision making.
- Accordingly, there is a need in the industry for systems and methods that overcome the aforementioned problems in the current art.
- This invention relates to systems and methods for automation of the monitoring of samples to determine crystal growth. According to one embodiment, the invention comprises a method of evaluating crystal growth in a crystal growth system, comprising receiving a first image of a sample, said first image generated by an imaging system using a first set of imaging parameters, analyzing information depicted in said first image to determine the contents of said sample, determining whether to generate another image of said sample based on the contents of said sample, providing information to said imaging system to generate a second image of the sample using a second set of imaging parameters, wherein said second set of imaging parameters comprises at least one imaging parameter that is different from an imaging parameter in said first set of imaging parameters, receiving said second image of said sample, and analyzing information depicted in said second image to determine the contents of said sample. According to other embodiments, the different imaging parameter included in the method can be depth-of-field, illumination brightness level, focus, the area imaged, the center location of the area imaged, illumination source type, magnification, polarization, and/or illumination source position. According to other embodiments of the method of evaluating crystal growth, analyzing said first image comprises determining a region of interest in said first image, and said information is used to adjust said second set of imaging parameters so that the imaging system generates a zoomed-in second image of said region of interest.
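The analyze-then-re-image flow described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the names (`ImagingParameters`, `acquire`, `find_region_of_interest`) and the zoom rule (magnification inversely proportional to region size) are assumptions for the sketch.

```python
# Hypothetical sketch of the adaptive re-imaging loop: analyze a first
# image, decide whether a second image with a different parameter set is
# warranted, and recenter/zoom on a detected region of interest.

from dataclasses import dataclass, replace as with_params

@dataclass(frozen=True)
class ImagingParameters:
    magnification: float = 1.0
    center: tuple = (0.5, 0.5)   # normalized center of the imaged area
    brightness: int = 100

def find_region_of_interest(image):
    """Stand-in for the image analysis step; returns a normalized
    (cx, cy, size) region containing candidate crystals, or None."""
    return image.get("roi")  # assumed precomputed for this sketch

def evaluate_sample(acquire):
    """acquire(params) -> image; returns the final image analyzed."""
    first_params = ImagingParameters()
    first = acquire(first_params)
    roi = find_region_of_interest(first)
    if roi is None:
        return first  # nothing of interest; no re-imaging needed
    cx, cy, size = roi
    # The second parameter set differs in at least one parameter:
    # recenter on the region and increase magnification to zoom in.
    second_params = with_params(first_params,
                                center=(cx, cy),
                                magnification=1.0 / size)
    return acquire(second_params)
```

In practice `acquire` would drive the imaging system; here it can be any callable that returns an image given a parameter set.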
- According to another embodiment, analyzing information in the method of evaluating crystal growth in a crystal growth system comprises determining whether said first image depicts the presence of crystals, and can further comprise, wherein said first image comprises pixels, and said determining comprises classifying said pixels and comparing the number of pixels classified as crystals to a threshold value.
- According to another embodiment, a method of evaluating crystal growth in a crystal growth system comprises counting the number of said pixels depicting objects in the sample and evaluating said number using a threshold value.
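The two threshold tests above — counting pixels classified as crystals, and counting pixels depicting any object — can be illustrated with a toy classifier. The labels, intensity cutoffs, and threshold value below are assumptions for the sketch; the patent contemplates, e.g., a trained neural network for the per-pixel classification step.

```python
# Illustrative sketch (not the patent's values): classify each pixel,
# then decide whether the sample contains crystals by comparing the
# count of "crystal" pixels against a threshold.

CRYSTAL, PRECIPITATE, CLEAR = "crystal", "precipitate", "clear"

def classify_pixel(intensity):
    """Toy per-pixel classifier standing in for a trained model."""
    if intensity > 200:
        return CRYSTAL
    if intensity > 120:
        return PRECIPITATE
    return CLEAR

def sample_contains_crystals(pixels, threshold=50):
    labels = [classify_pixel(p) for p in pixels]
    return labels.count(CRYSTAL) >= threshold

def count_object_pixels(pixels):
    """Count pixels depicting any object (crystal or precipitate); this
    count can likewise be evaluated against a threshold."""
    return sum(1 for p in pixels if classify_pixel(p) != CLEAR)
```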
- According to another embodiment of the invention, the method of analyzing crystal growth comprises receiving a first image having pixels depicting crystal growth information of a sample, identifying a first set of pixels in said first image comprising a first region of interest, receiving a second image having pixels depicting crystal growth information of said sample, identifying a second set of pixels in said second image comprising a second region of interest, merging said first set of pixels and said second set of pixels to form a composite image, and analyzing said composite image to identify crystal growth information of said sample. According to another embodiment said first image is generated by an imaging system using a first set of imaging parameters, said second image is generated by said imaging system using a second set of imaging parameters, and wherein said second set of imaging parameters comprises at least one imaging parameter that is different from the imaging parameters in said first set of imaging parameters.
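The merging step above can be sketched as a per-pixel selection between two regions of interest taken from two images of the same sample (e.g., acquired with different focus or illumination parameters). The "keep the sharper pixel" rule and the `sharpness` scoring function are assumptions for illustration; the patent does not prescribe a particular merge rule.

```python
# Hedged sketch of forming a composite image from two regions of
# interest: at each pixel position, keep whichever source pixel scores
# higher under a caller-supplied sharpness measure.

def merge_regions(region_a, region_b, sharpness):
    """region_a/region_b: equally sized 2-D lists of pixel values from
    two images of the same sample; sharpness(p) scores how in-focus a
    pixel appears. Returns the composite 2-D list."""
    composite = []
    for row_a, row_b in zip(region_a, region_b):
        composite.append([
            a if sharpness(a) >= sharpness(b) else b
            for a, b in zip(row_a, row_b)
        ])
    return composite
```

The composite can then be analyzed as a single image, as the embodiment describes.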
- According to another embodiment of the invention, a method of analyzing crystal growth information comprises receiving a first image comprising a set of pixels that depict the contents of a sample, determining information for each pixel in said set of pixels, wherein said information comprises a classification describing the type of sample content depicted by said each pixel, and a color code associated with each classification, generating a second image based on said information and said set of pixels, displaying said second image, and visually analyzing said second image to determine crystal growth information of the sample.
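Generating the color-coded second image amounts to mapping each pixel's classification to its associated color code. A minimal sketch, with an assumed palette (the patent does not specify particular colors):

```python
# Sketch of producing a displayable color-coded image from per-pixel
# classifications; the palette below is an illustrative assumption.

COLOR_CODES = {
    "crystal": (255, 0, 0),      # red
    "precipitate": (0, 0, 255),  # blue
    "clear": (0, 0, 0),          # black
}

def color_coded_image(classifications):
    """classifications: 2-D list of per-pixel class labels.
    Returns a 2-D list of RGB tuples suitable for display."""
    return [[COLOR_CODES[label] for label in row]
            for row in classifications]
```

Displaying this image lets a user see at a glance where crystals were detected, supporting the visual analysis step.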
- According to another embodiment, the invention comprises a system for detecting crystal growth information comprising an imaging subsystem with means for generating an image of a sample, wherein said image comprises pixels that depict the content of said sample, an image analyzer subsystem coupled to said imaging subsystem with means for receiving said image, means for classifying the content of said sample using said pixels, and means for determining whether said sample should be re-imaged based on said classifying; and a scheduler subsystem coupled to said image analyzer subsystem with means for causing said imaging subsystem to re-image said sample.
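The wiring between the analyzer and scheduler subsystems can be sketched as a simple queue: when the analyzer finds a result inconclusive, it asks the scheduler to queue the sample for re-imaging. The class and function names below are assumptions for the sketch, not the patent's interfaces.

```python
# Hypothetical sketch of analyzer/scheduler cooperation: inconclusive
# samples are queued so the imaging subsystem can re-image them.

from collections import deque

class Scheduler:
    """Minimal stand-in for the scheduler subsystem."""
    def __init__(self):
        self.queue = deque()

    def request_reimage(self, sample_id):
        self.queue.append(sample_id)

def analyze_and_schedule(sample_id, image, classify, scheduler):
    """classify(image) -> label; schedule re-imaging of anything that
    cannot yet be classified conclusively."""
    label = classify(image)
    if label == "inconclusive":
        scheduler.request_reimage(sample_id)
    return label
```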
- According to another embodiment, the invention comprises a computer-readable medium containing instructions for analyzing samples in a crystal growth system, by receiving a first image of a sample, said first image generated by an imaging system using a first set of imaging parameters, analyzing information depicted in said first image to determine the contents of said sample, determining whether to generate another image of said sample based on the contents of said sample, providing information to said imaging system to generate a second image of the sample using a second set of imaging parameters, wherein said second set of imaging parameters comprises at least one imaging parameter that is different from an imaging parameter in said first set of imaging parameters, receiving said second image of said sample, and analyzing information depicted in said second image to determine the contents of said sample.
- The above and other aspects, features, and advantages of the invention will be better understood by referring to the following detailed description, which should be read in conjunction with the accompanying drawings, in which:
- FIG. 1A is a high-level block diagram of an imaging system according to the invention.
- FIG. 1B is a high-level block diagram of another imaging system according to the invention.
- FIG. 2 is a perspective view of an imaging system according to the invention.
- FIG. 3 is a perspective view of the imaging system shown in FIG. 2, viewed from a different angle.
- FIG. 4 is a perspective view of the imaging system shown in FIG. 2, viewed from yet a different angle.
- FIG. 5 is a plan front view of the imaging system shown in FIG. 2.
- FIG. 6 is a plan, right side view of the imaging system shown in FIG. 2.
- FIGS. 7A and 7B are perspective views from different angles of a lens system as can be used with the imaging system shown in FIG. 2.
- FIG. 8 is a perspective view from below of a photo-filter carriage that can be used with the imaging system shown in FIG. 2.
- FIG. 9 is a perspective view of certain components as assembled in the imaging system shown in FIG. 2.
- FIG. 10 is a plan front view of certain components as assembled in the imaging system shown in FIG. 2.
- FIG. 11 is a plan, right side view of the components shown in FIG. 10.
- FIG. 12 is a perspective view of a light source as can be used with the imaging system shown in FIG. 2.
- FIG. 13 is a perspective view of a sample mount with the light source shown in FIG. 12, viewed from a different angle.
- FIG. 14A is a plan top view of the light source shown in FIG. 12.
- FIG. 14B is a cross-sectional view along the plane A-A of the light source shown in FIG. 14A.
- FIG. 15 is an exploded, perspective view of certain components of the sample mount and the light source shown in FIG. 13.
- FIG. 16 is a functional block diagram of an illumination duration control circuit as can be used with the light source shown in FIG. 12.
- FIG. 17 is a functional block diagram of an automated sample analysis system in which the imaging system according to the invention can be used.
- FIG. 18 is a block diagram of an imaging and analysis system.
- FIG. 19 is a block diagram of a computer that includes a Crystal Resolve analysis module, according to one aspect of the invention.
- FIG. 20A is a block diagram of an analysis system process, according to one embodiment of the invention.
- FIG. 20B is a block diagram of an analysis system process, according to one embodiment of the invention.
- FIG. 21 is a flow diagram of an imaging analysis process, according to one embodiment of the invention.
- FIG. 22 is a flow diagram of an imaging analysis and control process, according to one embodiment of the invention.
- FIG. 23 is a flow diagram of an analysis process, according to one embodiment of the invention.
- Embodiments of the invention will now be described with reference to the accompanying figures, wherein like numerals refer to like elements throughout. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner, simply because it is being utilized in conjunction with a detailed description of certain specific embodiments of the invention. Furthermore, embodiments of the invention may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the inventions herein described.
- The imaging and analysis system and methods disclosed here relate to embodiments of an automated sample analysis system having an imaging system that is described in the related U.S. provisional patent application No. 60/444,519, entitled “AUTOMATED SAMPLE ANALYSIS SYSTEM AND METHOD.” An imaging system that can provide images of samples for analysis, in response to control information, is described hereinbelow, followed by a description of a system and processes for analyzing the images. It should be noted here that the terms “image”, “subimage”, and “pixels” as used herein do not necessarily mean an optical image, subimage, or pixels as displayed or printed, but rather include digital or other representations of such an image, subimage, or pixels. It should also be noted that the term “sample” as used herein refers to any type of suitable sample, for example, drops, droplets, the contents of a well, the contents of a capillary, a sample in gel, or any other means of containing a sample or material.
- FIG. 1A is a high-level block diagram of an imaging system 100. In this embodiment, the imaging system 100 has an assembly 105 that is controlled by controllers and logic 110. The assembly 105 includes a stage 115 that holds and transports target samples to be imaged by an image capture device 120. The imaging system 100 employs an optics assembly 125 to enhance the view of the target samples before the image capture device 120 obtains the images of the samples. An illuminator 130 is configured as part of the assembly 105 to direct light at the samples held in the stage 115.
- The assembly 105 also includes a translator 135 that provides the structural support members and actuators to move any one combination of the stage 115, image capture device 120, optics 125, or illuminator 130. The translator 135 may be configured to move the combination of components in one, two, or three dimensions. As will be discussed in detail below, in some embodiments the stage 115 remains stationary while the translator 135 moves the image capture device 120 and optics 125 to a desired well position in a sample plate held by the stage 115. In other embodiments of the imaging system 100, the translator 135 moves the stage 115 in a first axis and the image capture device 120 and optics 125 in a second axis which is substantially perpendicular to the first axis.
- The controllers and logic 110 of the imaging system 100 provide instructions to and coordinate the activities of the components of the assembly 105. The controllers may include a microprocessor, controller, microcontroller, or any other computing device. The logic includes the instructions that cause the controller to perform the tasks or processing described here.
- FIG. 1B is a high-level block diagram of an imaging system 150. The imaging system 150 includes an assembly 155 in communication with controllers and logic 160. The assembly 155 may also be in communication with a data storage device 190, which itself may be configured for communication with the controllers and logic 160. The controllers and logic 160 control and coordinate the activities of the components of the assembly 155.
- In this embodiment, the assembly 155 includes a sample plate mount 165 suitably configured to receive micro-titer plates of various configurations and sizes. Alternatively, the sample plate mount 165 can be configured to receive any sample matrix that carries samples, regardless of whether the samples are stored in individual sample wells, rest on the surface of the sample matrix (e.g., as droplets), or are embedded in the sample matrix. A source of flash lighting 180 is arranged to direct light bursts to the samples stored in the micro-titer plate carried by the sample plate mount 165. An inventive system and method of providing the flash lighting 180 will be discussed with reference to FIG. 16.
- The assembly 155 includes a compound lens 175 that cooperates with a digital camera 170 to acquire images of the samples in the sample plate. The compound lens 175 may consist, for example, of an objective lens, a zoom lens, and additional optics chosen to provide the digital camera 170 with the desired image from the light from the samples. In one embodiment, as will be discussed further below, the compound lens 175 may be motorized (i.e., provided with one or more actuators) so that the controllers and logic 160 can automatically focus the scene, zoom on the scene, and set the aperture.
- In this embodiment, the assembly 155 includes an x-y translator that moves either the sample plate mount 165 or the compound lens 175, or both. Of course, if the digital camera 170 is coupled to the compound lens 175, the x-y translator moves both the digital camera 170 and the compound lens 175. In some embodiments, the x-y translator 185 is configured to move the sample plate mount 165 in two axes, e.g., x and y coordinates. Alternatively, the x-y translator 185 moves the compound lens 175 in two axes, while the sample plate mount 165 remains stationary. In yet another embodiment, the x-y translator consists of multiple and separate actuators that move the sample mount 165 or the compound lens 175 independently of one another.
- It should be noted that the assembly 155, the controllers and logic 160, and the data storage 190 are depicted as separate components for schematic purposes only. That is, in some embodiments of the imaging system 150 it is advantageous to, for example, integrate the data storage device 190 into the assembly 155 and to include the controllers and logic 160 as part of one or more of the components shown as being part of the assembly 155. Similarly, the sample mount 165, digital camera 170, compound lens 175, flash lighting 180, and x-y translator 185 need not all be configured as part of a single assembly 155 as shown.
- Exemplary ways of using and constructing embodiments of the imaging system are described below.
- Illustrative Embodiment
- With reference to FIGS. 2-6 and 9-11, perspective and plan views of an imaging system 200 according to the invention are illustrated. The imaging system 200 includes a sample plate mount 210 that receives a sample plate 212. An x-translator having an actuator 218 (see FIG. 4) is coupled to the sample plate mount 210 to move the sample plate mount 210 into position above a light source 216 and below a lens assembly 230. A digital camera 214 is coupled to the lens assembly 230 to capture images of the wells in the sample plate 212. A y-translator having an actuator 220 (see FIG. 3) is coupled to the lens assembly 230 to move the lens assembly 230 into position over a desired well of the sample plate 212.
- Support Platform
- The digital camera 214, lens assembly 230, sample plate mount 210, light source 216, x-translator 218, and y-translator 220 are mounted on a platform 240 (see FIG. 2). The platform 240 generally consists of several structural members, brackets, or walls, e.g., base 242, side wall 244, front wall 250, bracket 252, bracket 246, post 248, and support member 254. The light source 216 can be fastened to the base 242. Rails supporting the lens assembly 230 are fastened to the wall 250 of the platform 240 and to the support member 254. The sample plate mount 210 is supported by a rail 262 and an outport guide 253 of the support member 254. The rail 262 is supported through attachment to the side wall 244 and the post 248. Of course, there are multiple, equivalent alternatives to providing support for and configuring the lens assembly 230, sample plate mount 210, light source 216, and x- and y-translators on the platform 240.
- The platform 240 may be constructed of any of several suitable materials, including, but not limited to, aluminum, steel, or plastics. Because in some applications it is critical to keep vibration of the platform 240 to a minimum, materials that provide rigidity to the platform 240 are preferred in such applications. The rails are preferably configured to guide the lens assembly 230 or the sample plate mount 210 in a smooth fashion, thereby avoiding vibrations. As illustrated in FIGS. 3 and 9, the lens assembly 230 may be supported by coupling linear plain bearings to the rails, and the sample plate mount 210 by coupling it to the rail 262. The bearings facilitate smooth movement of the lens assembly 230 or the sample plate mount 210.
- Sample Plate Mount
- The sample plate mount 210 may be constructed from any rigid material, e.g., steel, aluminum, or plastics. Preferably the sample plate mount 210 is configured to accommodate, either directly or through the use of adapters, various standard sizes of micro-titer plates. Micro-titer plates that may be used with the sample plate mount 210 include, but are not limited to, crystallography plates manufactured by Linbro, Douglas, Greiner, and Corning. As will be described further below, the sample plate mount 210 is coupled to an actuator 218 for moving the sample plate mount 210 in one axis.
- Translators
- The imaging system 200 includes two independent translators. Typically, the sample plate mount 210 and the lens assembly 230 move on a plane that is substantially parallel to a plane defined by the sample plate 212 carried by the sample plate mount 210. In one embodiment, the controllers and logic position the sample plate mount 210 and the lens assembly 230 at the coordinates of a specific well of the sample plate 212.
- An x-axis translator for moving the sample plate mount 210 consists of an actuator 218 (see FIG. 4) that rotates a threaded rod 219 (or “lead screw”) about its axis in clockwise or counter-clockwise directions. In the embodiment shown in FIGS. 3, 4, and 10, the actuator 218 is coupled to the rod 219 via a belt (not shown) and pulleys 221 and 221′. The sample plate mount 210 is fastened to a “bushing” 267 (see FIG. 10) that rides on the rail 262. The sample plate mount 210 is also supported by the outport guide 253 (see FIGS. 6 and 11) of the support member 254. The “bushing” 267 is additionally coupled in a known manner to the rod 219. When the actuator 218 turns in one direction, its power is transmitted via the belt and pulleys 221 and 221′ to the rod 219, which then moves the “bushing” 267 and, thereby, moves the sample plate mount 210 in a linear direction.
- A y-axis translator for moving the lens assembly 230 consists of an actuator 220 (see FIG. 3) that rotates a threaded rod 260 about its axis in clockwise or counter-clockwise directions. In the embodiment shown in FIGS. 3, 6, and 9, for example, the actuator 220 is coupled to the rod 260 through a slotted disc coupling (not shown). The lens assembly 230 is coupled to bearings that ride on the rails; the bearings are coupled to the rod 260 through a plate 255 and the bracket 257 (see FIG. 6) in a known manner. When the actuator 220 turns in one direction, its power is transmitted via the slotted disc coupling to the rod 260, which then moves the bearings and, thereby, the lens assembly 230 in a linear direction. - The
actuators 218 and 220 may be of various types. One factor in choosing the actuators is the weight to be moved, e.g., the sample plate mount 210 plus sample plate 212, or the lens assembly 230 and the digital camera 214. Another factor in determining the type of motor is the desired speed.
- In the embodiment of the x-, y-translators described above, each translator mechanism independently translates the sample plate mount 210 or the lens assembly 230 along its axis of motion. However, it should be noted that in other embodiments of the imaging system 200, it may be desirable to maintain the lens assembly stationary and only move the sample plate mount 210, which would then have one or more translators to position the sample plate mount 210 anywhere in an x-y coordinate area. Similarly, the imaging system 200 may be configured so that an x-y translator (or set of x-, y-translators) moves the lens assembly in the x-y coordinate area, while the sample plate mount 210 remains stationary over the light source 216. In one embodiment, the x-, y-translators employ optical sensors 285 and 287 (see FIG. 5) to sense the start or end positions (“home positions”) of the lens assembly 230 or the sample plate mount 210.
- In yet another embodiment, the imaging system 200 may also include a z-axis translator (not shown) to lift or lower the sample plate mount 210, lens assembly 230, or light source 216. The z-axis translator may consist of, for example, an actuator, a lead screw, one or more rails, and appropriate bearings and fasteners.
- The actuators 218 and 220 may be driven by a programmable controller that commands the actuator 220, for example, to move and keeps count of the travel distance and final location. The controller can be programmed to move the actuator 220 at varying speed, torque, and acceleration.
- Image Capture Device
- In some embodiments of the imaging system 200, the image capture device can be a film camera, a digital camera, a CMOS camera, a charge coupled device (CCD), and the like, or some other apparatus for capturing an image of an object. The embodiments of the imaging system 200 described here employ a digital camera 214. A suitable digital camera 214 is, for example, a CMOS digital camera. However, it should be apparent that several digital photography devices could also be employed. The CMOS camera 214 is preferred because it provides random access to the image data and is relatively low cost. In conventional imaging systems for crystallography, a CMOS camera is typically not used because in those systems the level of light is insufficient for this type of camera. In contrast, the imaging system 200 is configured to provide the level of light necessary to allow use of a CMOS camera.
- The digital camera 214 can be a CMOS camera having a pixel resolution of 1280×1024 pixels, a Bayer color filter, a pixel size of 7.5×7.5 microns, and a data interface governed by the IEEE 1394 standard (commonly known as “Firewire”). The digital camera 214 may be fully digital and not require a frame grabber. The digital camera 214 may also have a centered pixel area, e.g., a 1024×1024 or 800×600 pixel subset of the array, which enhances the image quality since the edges of the array, where optical distortions increase, are avoided. In one embodiment, the digital camera 214 is connected separately to a host computer (not shown) via a Firewire data interface. This allows for rapid transfer of large amounts of image data, e.g., five images per second.
- Lens Assembly
- One embodiment of the lens assembly 230 includes an objective lens 231, a zoom lens 233, and an adapter 235. These optical components are chosen to provide a suitable field of view, magnification, and image quality. The objective lens 231, zoom lens 233, and adapter 235 may be purchased from, for example, Navitar Inc. of Rochester, N.Y.
- In one embodiment, the zoom lens 233 may be the “12×UltraZoom” zoom lens manufactured by Navitar. The zoom lens 233 may provide a 12:1 zoom factor, a focus range of about 12 mm, and an aperture of about 0.14. The zoom lens 233 preferably includes adapters for mounting the objective lens 231. The zoom lens 233 may have actuators 233A, 233B, and 233C for providing, respectively, automatic aperture adjustment, autozoom, and autofocus functionality. In one embodiment, actuators 233B and 233C have gear reductions of 262:1. Of course, the gear reduction ratio is chosen to suit the particular application. For example, a 5752:1 gear ratio for the focus actuator 233C may be too slow for some applications of the imaging system 200. The actuators 233A, 233B, and 233C may be obtained from Navitar or from MicroMo Electronics, Inc. of Clearwater, Florida.
- The objective lens 231 may be, for example, a 5× Mitutoyo Infinity Corrected Long Working Distance Microscope Objective (model M Plan Apo 5). The objective lens 231 is coupled to the zoom lens 233. Since the light source 216 delivers sufficient light to the sample plate 212, the lens assembly 230 is configured to allow setting a small aperture in order to increase the depth of field. The objective lens 231 preferably provides a working distance that allows adequate room beneath the lens assembly 230 to manipulate a sample plate 212 and provide a photo-filter carriage 237 in the image path. In one embodiment, the working distance of the objective lens 231 is about 34 mm.
- The adapter 235 serves to allow use of the digital camera 214. The adapter 235 may be, for example, a 1× Adapter, model number 1-6015, sold by Navitar. Of course, different combinations of objective lenses 231 and adapters 235 may be used, e.g., a 2× Adapter and 2× Objective combination. The combination of 1× Adapter and 5× Objective provides a suitable image for most applications of the imaging system 200. In some embodiments, it is desirable to use a 0.67× Adapter 235 with a 10× Objective 231, for example, to provide a higher image resolution.
- The optical components of the lens assembly 230 can be provided with actuators for remote and automatic control. To allow software control of the optical components, controllers and control logic (not shown) can control the actuators 233A, 233B, 233C, and 233D. The actuators (e.g., DC motors) may be coupled to the aperture, magnification, and focus mechanisms of the zoom lens 233, as well as to the photo-filter carriage 237. In some embodiments, the actuators 233A, 233B, 233C, and 233D are preferably provided with encoders to provide position information to the controllers. In one embodiment, the actuators on the lens assembly 230 are 17-mm direct-current motors with 100:1 gear reducers. These motors may be obtained from PITMANN® of Harleysville, Pennsylvania.
- The lens assembly 230 may also include a photo-filter carriage 237 that is configured to hold optical filters (not shown). For example, the photo-filter carriage 237 can hold polarization plates or color light filtering plates. FIG. 8 illustrates one embodiment of a photo-filter carriage 237 that may be used with the imaging system 200. The photo-filter carriage 237 includes a filter wheel 237A for receiving one or more photo-filters (not shown) in openings 237B. The photo-filters may be held in place in the filter wheel 237A in a variety of ways. For example, in the embodiment illustrated in FIG. 8, caps 237C in cooperation with suitable fasteners hold the photo-filters in place. The filter wheel 237A may be coupled to an actuator 233D for remote and automatic control of the filter wheel 237A. The actuator 233D and the filter wheel 237A may be fastened, in a conventional manner, to a clamp 237D that is coupled to, for example, the objective lens 231 or the zoom lens 233 (see FIGS. 1 and 9). In one embodiment, a polarization filter is coupled to a filter wheel so that the polarization filter covers about 90 degrees of the wheel. In this embodiment, the polarization filter can be rotated so that the applied polarization varies between zero and ninety degrees. Thus, the use of the polarization filter with a polarized light source can provide analysis of the effect of samples on polarized light. For example, when a polarized light source and the polarization filter are cross-polarized, minimal light should reach the objective lens 231 unless the sample re-orients the polarized light, as can happen when the light passes through crystals.
- The digital camera 214 in combination with the lens assembly 230 provides a broad depth of field to allow imaging of objects such as protein crystals at varying depths within a sample droplet stored in a sample well of a sample plate 212. In one embodiment, the lens assembly 230 has a 12:1 zoom lens and, in cooperation with the digital camera 214, can provide a 1 micron optical resolution. In some embodiments, the lens assembly 230 and the digital camera 214 may be integrated as a single assembly.
- Light Source
- The light source 216 will now be described with reference to FIGS. 12-15. FIG. 12 shows a perspective view of the light source 216. Since the crystallization of substances is often highly sensitive to temperature changes, the light source 216 is preferably configured to minimize the amount of heat transferred to the sample plate 212, e.g., by isolating and removing heat generated by the electronics 1408 and illuminators 1402 (see FIG. 14B).
- Housing
- With reference to FIGS. 12, 14B, and 15, the light source 216 includes a housing 1202 adapted to store one or more illuminators 1402 (see FIGS. 14B and 15), cooling elements 1404, heat reflecting glass 1406, a light diffuser plate 1206, and corresponding electronics.
- In the embodiment of the light source 216 shown in FIGS. 12-14B, the top wall 1204A of the housing 1202 has an opening to receive and support a light diffuser plate 1206. The plate 1206 serves to diffuse light from the illuminators 1402 onto the sample plate 212. The plate 1206 may be, for example, a sheet of translucent plastic. In one embodiment, inside the housing 1202, adjacent to and below the plate 1206, a heat reflecting glass (“hot mirror”) 1406 (see FIG. 14B) is provided. The heat reflecting glass 1406 prevents most infra-red energy from exiting the housing 1202.
- The wall 1204B of the housing 1202 may be provided with a plurality of orifices 1208 that allow a cooling element 1404, such as a fan, to draw air into the housing 1202 for cooling the internal components. A wall 1204C (see FIG. 14B) of the housing 1202 can be fitted with an opening 1410 for receiving a duct that guides forced air out of the housing 1202. A wall 1204D (see FIG. 13) of the housing 1202 can be fitted with a power plug 1208 and a communications port 1302. The housing 1202 is preferably adapted to isolate an operator of the imaging system 200 from high voltages that may be used to fire the illuminators 1402.
- Of course, the housing 1202 may be configured in a variety of ways not limited to that detailed above. For example, the ventilation openings 1208 on wall 1204B may be replaced by one or more fans built into the wall 1204B or the wall 1204E. Moreover, depending on the specific location of the light source 216 in any given application of the imaging system 200, the ventilation openings 1208 may be located on the bottom wall (not shown) of the housing 1202, for example.
- Illuminators
- With reference to FIGS. 14B and 15, the
light source 216 includes one ormore illuminators 1402 that generate light rays. Theilluminators 1402 may be various types, for example, incandescent bulbs, light emitting diodes, or fluorescent tubes of various types including, but not limited to, mercury- or neon-based fluorescent tubes. In one embodiment, theilluminators 1402 are two xenon tubes. Xenon tubes are well known in the relevant technology and are readily available. Thexenon tubes 1402 can include borosilicate glass that absorbs ultra-violate radiation. Xenon tubes are preferred because they produce sufficient light to allow use of aCMOS camera 214 in theimaging system 200. Xenon tubes are also preferred since they provide a broad spectrum of light rays, which enables use of color to enhance detection of crystal growth in the wells of thesample plate 212. - The actual dimensions of the
illuminators 1402 are chosen to suit the specific application. For example, in the imaging system 200 the xenon tubes 1402 are long enough to cover one dimension of the sample plate 212 so that it is not necessary to move the light source 216 when the lens assembly 230 or sample plate mount 210 is repositioned. As shown in FIG. 14B, the illuminators 1402 may be supported on a board 1405, which may also support electronics for control of the illuminators 1402. - Off-Axis Lighting
- In one embodiment, two
illuminators 1402 are positioned to provide different locations of the illumination source, e.g., both on-axis and off-axis lighting of the wells in the sample plate 212. As used here, the imaging axis of the lens assembly means the principal axis of the lens assembly. For example, first and second xenon tubes 1402 can be positioned, respectively, a first and a second distance from the imaging axis of the lens assembly 230. Typically, the first and second distances are substantially equal in length, and the first xenon tube is positioned opposite the imaging axis from the second xenon tube. - In one embodiment, the
xenon tubes 1402 are mounted about an inch on either side of the area directly under the lens assembly 230. This configuration allows the use of an indirect lighting effect when only one xenon tube is fired. That is, when two xenon tubes are positioned off the imaging axis, the controllers and logic can select how the tubes illuminate the sample plate 212. One xenon tube can be fired to provide off-axis illumination of the sample plate 212. When the two xenon tubes are fired simultaneously, a more conventional backlit scene is obtained. In some applications, off-axis illumination is preferred because it produces shadows on small objects in a sample droplet stored in a well of the sample plate 212. The shadows caused by off-axis lighting enhance the ability of the controllers and logic to detect such small objects. - In one embodiment, for example the
imaging system 150 shown in FIG. 1B, the controllers and logic 160 control the assembly 155 to capture two images of a droplet in a well plate of the sample plate 212. The imaging system 150 captures one image with the light source 216 lighting the sample with a first xenon tube. The imaging system 150 captures a second image with the light source 216 lighting the sample with the second xenon tube. The controllers and logic 160 can then combine the data from both images and perform an analysis based on the combined data. This results in enhanced characterization of the sample since the combination of the images typically provides more information about crystallization of the sample than a single image acquired with standard back lighting of the scene. - Filters
- In one embodiment, a source filter 270 (FIG. 2) may be inserted in a filter slot 272 so that the filter 270 is interposed between the
light source 216 and the sample plate 212. The various filters 270 may be inserted and removed from the filter slot 272 by a plate handler. Thus, the filter 270 may be automatically removed, or exchanged with another filter, by the imaging system 200. The source filter 270 may be any type of filter, such as a wavelength specific filter (e.g. red, blue, yellow, etc.) or a polarization filter. - Flash Mode
- In one embodiment of the
imaging system 200, the light source 216 includes one or more illuminators 1402 (e.g., fluorescent tubes) adapted to provide flash lighting. That is, the illuminators 1402 are controlled to illuminate the sample plate 212 only momentarily as the digital camera 214 captures an image of a well in the sample plate 212. This arrangement provides benefits over known devices in which illuminators remain in the on-position throughout the entire time that the sample plate 212 is handled by an imaging system. In the imaging system 200, since the illuminators 1402 are turned on for only a fraction of a second per image, very little heat radiation is transferred to the wells of the sample plate 212. Hence, one benefit of this configuration is that the imaging system 200 can provide high illumination levels for the camera 214 while minimizing energy or radiation transfer to the samples in the sample plate 210. An exemplary control circuit 1600 that provides controlled flash lighting is described below with reference to FIG. 16. - Flash Lighting Circuitry
- FIG. 16 is a functional block diagram of an illumination duration (“flash”)
control circuit 1600 for an illuminator 1402. Although only one illuminator 1402 and control circuit 1600 are shown, multiple illuminators 1402 can be used and independently controlled using additional control circuits 1600. The illuminator 1402 can be, for example, a xenon tube having a length greater than the maximum width of the sample plate 212 to be used in the imaging system 200. The illuminator 1402 can be located underneath and along one axis of the sample plate 212 to illuminate all the wells in one row or column of the sample plate 212 without repositioning the illuminator 1402. - A first end of the
illuminator 1402 is connected to a first capacitor 1602 and a first resistor 1604. The opposite end of the first resistor 1604 is connected to a power supply 1606. The power supply 1606 may be controlled by a dedicated RS232 line, for example. The opposite or second end of the first capacitor 1602 that is not connected to the illuminator 1402 is connected to ground or a voltage common. - The second end of the
illuminator 1402 is connected to the anode of a first silicon controlled rectifier (“SCR”) 1607 and to a first terminal of a second capacitor 1608. An SCR is a solid state switching device that can provide fast, variable proportional control of electric power. A resistor 1620 is connected between the first terminal of the second capacitor and the cathode of a second SCR 1610. The second terminal of the second capacitor 1608 is connected to an anode of the second SCR 1610. The cathode of the first SCR 1607 is connected to the ground or voltage common potential. The cathode of the second SCR 1610 is connected to the cathode of the first SCR 1607 and is similarly connected to ground or the voltage common potential. The anode of the second SCR 1610 is also connected to a second resistor 1614 that connects the anode of the second SCR 1610 to the power supply 1606. - A
trigger 1612 of the illuminator 1402 is connected to the gate of the first SCR 1607 so that both can be triggered simultaneously. This common connection controls the trigger 1612 of the illuminator 1402 and the start of illumination. The gate of the second SCR 1610 controls a stop or end of illumination. - The duration of illumination provided by the
illuminator 1402 can be controlled as follows. Initially, the first and second SCRs 1607, 1610 are non-conducting, and the first capacitor 1602 is charged up to the level of the voltage of the power supply 1606 through the first resistor 1604. The power supply 1606 can, for example, charge the first capacitor to 300 volts or more. - The size of the
first capacitor 1602 relates to the amount of energy that can be transferred to the illuminator 1402. The illuminator 1402 provides an illumination based in part on the amount of energy provided by the first capacitor 1602. The first capacitor 1602 can be one capacitor or a bank of capacitors. The first capacitor 1602 can be, for example, a 600 μF capacitor. - The size of the
resistors 1614, 1620 relates to the rate at which the second capacitor 1608 charges. Smaller resistors allow the second capacitor 1608 to charge quickly. However, the second SCR 1610 can inadvertently trigger if the voltage impulse at its anode is too great. Thus, the value of the resistors 1614, 1620 is chosen to allow the second capacitor 1608 to recharge before the next image flash trigger, but not to recharge so quickly as to inadvertently trigger conduction in the second SCR 1610. - The
resistor 1620 provides an electrical path from the anode of the first SCR 1607 to ground or voltage common to allow the second capacitor 1608 to charge. - The
illuminator 1402 is ready to trigger once the first capacitor 1602 is charged. The second capacitor 1608 is charged by the power supply 1606 through the second resistor 1614 concurrently with the charging of the first capacitor 1602. The second capacitor 1608 is chosen to be large enough to generate a current potential that shuts off the first SCR 1607 and, thus, terminates illumination by the illuminator 1402. The second capacitor 1608 can be a single capacitor or can be a bank of capacitors. The second capacitor 1608 can be, for example, a 20 μF capacitor. - After the first and
second capacitors 1602, 1608 are charged, the illuminator 1402 initially illuminates when the trigger signal is provided to the control of the illuminator 1402 and the gate of the first SCR 1607. The illuminator 1402 can include a triggering circuit that triggers the illuminator 1402 in response to a logic signal. If the illuminator 1402 does not include this circuit, an external triggering circuit can be included. - The
first SCR 1607 conducts in response to the trigger signal. The first SCR 1607 then continues to conduct even in the absence of a gate signal. The first SCR 1607 can be shut off by interrupting the current through the SCR or by reducing the voltage drop across the first SCR 1607 to below the forward voltage of the device. - The
second SCR 1610 is controlled by a stop signal generator 1616 to connect the second capacitor 1608 in parallel with the first SCR 1607. However, the second capacitor 1608 is charged in opposite polarity to the voltage drop across the first SCR 1607. Thus, when the second SCR 1610 initially conducts, the voltage from the second capacitor 1608 is placed in opposite polarity across the first SCR 1607, thereby shutting off the first SCR 1607. - After the
first SCR 1607 is triggered by a gate signal and begins to conduct, the second end of the illuminator 1402 and the first terminal of the second capacitor 1608 are pulled to ground via the first SCR 1607. The illuminator 1402 then illuminates in response to the current flowing through the illuminator 1402. The second SCR 1610 controls turn-off of the illuminator 1402. The second SCR 1610 begins to conduct when a stop signal is applied to the gate of the second SCR 1610. This pulls the second terminal of the second capacitor 1608 to ground. Because a capacitor resists instantaneous voltage changes, the voltage across the second capacitor 1608 momentarily causes the voltage at the anode of the first SCR 1607 to be pushed below the ground or voltage common potential. A negative voltage at the anode of the first SCR 1607 results in a loss of current flowing through the first SCR 1607, which results in shut down of the first SCR 1607. The second capacitor 1608 discharges almost immediately. The illuminator 1402 shuts off when the first SCR 1607 turns off because there is no longer a current path through the illuminator 1402. - Thus, a microprocessor, controller, or microcontroller can be programmed to control the
trigger 1612 and stop signal generator 1616. The processor controls the trigger signal to initiate illumination with the illuminator 1402. The processor then controls the stop signal to control termination of the illumination. The processor can thus control the trigger and stop signals to control the duration of the illumination. The processor can control the duration of the illumination (a “flash”) in predetermined intervals or can control the duration of the illumination over a range of time. For example, the processor can control the duration of the flash in microsecond steps across an interval of approximately 20 μs-600 μs. Alternatively, the processor can control the lower range of the duration of the flash to be 0, 20, 40, 50, 75, 100, 150, 200, 250, 300, 350, 400, 450, 500, or 550 μs. In another alternative, the processor can control the upper range of the duration of the flash to be 40, 50, 75, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, or 600 μs. In one embodiment, the digital camera 214 issues the signal to turn on the illuminator 1402 so that the “flash” will be in synchronization with the electronic shutter of the digital camera 214. - The
power supply 1606 can be a controllable high voltage power supply. The microprocessor, controller, or microcontroller can also control the output voltage of the power supply 1606 to further control the illumination provided by the illuminator 1402. For example, the microprocessor can control the output voltage of the power supply 1606 to vary the illumination provided by the illuminator 1402 for the same illumination duration. Thus, for a given illumination duration, the microprocessor can control the power supply 1606 to a lower output voltage to minimize the illumination. Similarly, for the same illumination duration, the microprocessor can control the power supply 1606 to a higher output voltage, thereby increasing the illumination. - The microprocessor can control the output voltage of the
power supply 1606 over a range of, for example, 180-300 volts. The illuminator 1402 may not consistently illuminate for voltages below 180 volts when the illuminator 1402 is a xenon flash tube. The microprocessor can control the output voltage of the power supply 1606 using a digital control word. Thus, the microcontroller can control the output voltage of the power supply 1606 in steps determined in part by the number of bits in the control word and the tunable range of the power supply 1606. The microcontroller can, for example, provide a 10-bit control word, an 8-bit control word, a 6-bit control word, a 4-bit control word, or a 2-bit control word. Alternatively, the power supply 1606 output voltage can be continuously variable over a predetermined range. - Thus, the microcontroller can control a level of illumination by controlling the illumination duration, the
power supply 1606 output voltage, or a combination of the two. The microprocessor's ability to control the combination of the two permits a wider range of brightness outputs than if only one parameter were controllable. The microprocessor's ability to control both illumination duration and power supply 1606 output voltage is advantageous for different lens zoom conditions. When magnification is low, such as when the lens is zoomed out, a relatively small amount of light is required. When magnification is high, a relatively large amount of light is required to capture an image. Filters and varying apertures may also be used to adjust the amount of light from the light source. - Operation
- The
imaging system 200 includes software modules that control and direct the lens assembly 230 to perform the following functions. In one embodiment, the imaging system 200 is configured to automatically control the brightness of the image. For example, after the camera 214 captures an image of a well of the sample plate 212, the software determines whether the brightness is within predetermined thresholds. If the brightness does not fall within the thresholds, the controllers and logic of the imaging system 200 iteratively adjust the illumination intensity of the illuminators 1402 to adjust the brightness of the images until the brightness falls within the thresholds. In some embodiments, the brightness of the image may be evaluated based on a predetermined region (or set of pixels) of the image captured. - The brightness of the
illuminators 1402 may be adjusted when capturing a plurality of images of the same sample droplet. In one embodiment, for example, the imaging system 150 shown in FIG. 1B, the controllers and logic 160 control the assembly 155 to capture two images of a droplet in a well plate of the sample plate 212. The imaging system 150 captures one image with the light source 216 lighting the sample with a first brightness level. The imaging system 150 captures a second image with the light source 216 lighting the sample with a second brightness level. In one embodiment, the controllers and logic 160 can then combine the data from both images and perform an analysis based on the combined data, which may result in enhanced characterization of the sample. In some embodiments, the brightness used for the second image may be logically controlled based on analyzing the brightness of the first image, determining whether a lighter or darker second image may result in enhanced characterization of the sample, and adjusting the light source 216 to light the sample accordingly. - The
imaging system 200 can also be configured with software to automatically focus the image. An exemplary autofocus routine is as follows. Once the lens assembly 230 is positioned over a sample of the sample plate 210, the objective lens 231 is moved along its imaging axis to a predetermined starting position. The camera 214 then acquires an image of the sample and/or well at that focus position. In one embodiment, the software obtains a “focus score.” This may be done, for example, by examining the brightness values of a set of pixels (e.g., a 500×3 pixel area) in the captured image, applying a low pass filter, and computing the sum of the squares of the differences in brightness of adjacent pixels for the set of pixels. The position and focus score data points are stored in an array. The objective lens 231 is moved to the next predetermined incremental position on its imaging axis, and the process of acquiring an image, computing the focus score, and storing the position and focus score values is repeated. This process continues until the objective lens 231 has been moved to all the predetermined or desired positions, e.g., until it reaches a predetermined end position by incrementally moving in a predetermined step size from the starting position. In one embodiment, the step size depends at least in part upon a predetermined maximum number of images to be acquired during the autofocus routine. - Next, the software searches the lens position/focus score array to identify the lens position with the best focus score. In one embodiment, the software then proceeds to compute the lens positions that are midway from the best focus score position to positions adjacent to it in the array. That is, the software examines the array of positions already imaged, finds the nearest position greater than the lens position associated with the best focus score, and calculates a “midpoint” position between them.
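The focus-score computation described above can be sketched as follows. This is a minimal illustration: the 3-tap moving average standing in for the low pass filter, the function name, and the use of a single row of pixels are assumptions for the sketch, not the disclosed implementation.

```python
def focus_score(pixels):
    """Sharpness proxy: apply a simple low pass filter (a 3-tap moving
    average), then sum the squared brightness differences between
    adjacent filtered pixels. Sharper images yield larger scores."""
    n = len(pixels)
    smoothed = [
        (pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, n - 1)]) / 3.0
        for i in range(n)
    ]
    return sum((b - a) ** 2 for a, b in zip(smoothed, smoothed[1:]))
```

A high-contrast (in-focus) set of pixels scores higher than a low-contrast (out-of-focus) one, which is what allows the routine to rank lens positions by score.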
A similar process is performed with regard to the nearest lens position that is less than the best focus score position. The software then acquires images at the midpoint positions and obtains corresponding focus scores. The software once again evaluates the array to identify the image with the best focus score, now using a step size that is one-half of the previous step size. These tasks are repeated until, for example, a maximum number of images acquired during autofocus, or a minimum step size, has been reached.
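The coarse sweep and midpoint refinement described above can be sketched as follows. The `score_at(position)` callable is an assumption standing in for acquiring an image at a lens position and computing its focus score; the halving step schedule and the minimum-step termination mirror the routine in the text.

```python
def autofocus(score_at, start, end, step, min_step=1):
    """Score lens positions on a coarse grid, then repeatedly halve the
    step size and score the midpoints on either side of the best
    position found so far, until the minimum step size is reached."""
    scores = {pos: score_at(pos) for pos in range(start, end + 1, step)}
    while step > min_step:
        step = max(step // 2, min_step)
        best = max(scores, key=scores.get)  # position with best score so far
        for mid in (best - step, best + step):
            if start <= mid <= end and mid not in scores:
                scores[mid] = score_at(mid)
    return max(scores, key=scores.get)
```

With a synthetic score that peaks at one lens position, the routine converges on that position while scoring far fewer positions than an exhaustive fine-step sweep would require.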
- In some embodiments, the
imaging system 200 performs the processes of autofocusing and automatically adjusting the brightness, as described above, for each well sample of a sample plate 212 received by the imaging system 200. After the desired brightness and focus are set, the imaging system 200 then captures an image and stores it in, for example, the data storage 190. In one embodiment, the automatically determined brightness and focus are also stored for each sample. In another embodiment, the software of the imaging system 200 calculates and stores a value associated with the mean of the brightness and focus positions for the aggregate of well samples of the first plate. This value is then associated with each of the position/focus score data points in the array. Subsequent plates are examined using the mean brightness and focus as initial imaging values. - The
imaging system 200 may also include additional functionality related to automatically finding the edges of a droplet in a well of a sample plate 212. In one embodiment, after the edges of the drop have been found, the imaging system 200 finds the centroid of the droplet and moves the lens assembly 230 to the centroid. The imaging system 200 then determines the magnification required to image substantially only that area corresponding to the droplet, adjusts the zoom, and acquires the image. - In another embodiment, the
imaging system 200 may be configured to perform automatic adjustment of the aperture. In this embodiment, the imaging system 200 receives settings for either maximum image resolution or maximum depth of field. The imaging system 200 then determines the corresponding aperture by, for example, looking up one or more tables having values correlating aperture with maximum resolution and/or maximum depth of field. Of course, magnification data may be part of these tables. - In yet another embodiment, the
imaging system 200 may be configured to perform automatic zoom on a substance in a sample stored in a well of the sample plate 212. In one embodiment, for example, the imaging system identifies a “crystal-like object” in the sample, calculates its centroid, moves the lens assembly 230 and digital camera 214 to the centroid, adjusts the zoom level, and captures an image of the “crystal-like object.” In another embodiment, the imaging system 200 can be configured to capture an image of a sample or a crystal-like object, perform image analysis of the image, adjust imaging parameters (e.g., focus, depth of field, aperture, zoom, illumination filtering, image filtering, brightness, etc.), and retake an image of the sample or crystal-like object. The imaging system 200 can perform this process iteratively until predetermined thresholds (e.g., contrast, edge detection, etc.) are met. In some embodiments, the images captured in an iterative process can be either analyzed individually or combined with other images, with the resulting image analyzed. - Thus, in one embodiment of the
imaging system 200, the imaging system receives a sample plate 212 and for each well sample performs the following functions: automatic adjustment of brightness and aperture, autofocus, automatic detection of the sample droplet, and acquisition and storage of images. The imaging system 200 stores the aperture, brightness, focus position, and drop position and/or size. The imaging system 200 may then use mean values of these factors as initial imaging settings for subsequent plates. - To increase the amount of data available for analysis of the sample, or crystal detection, in some embodiments an illumination source filter 270 (FIG. 2) may be inserted in the filter slot 272 so that the filter 270 is interposed between the
light source 216 and the sample plate 212. In one embodiment, the various filters 270 may be inserted and removed from the filter slot 272 by a plate handler. Thus, the filter 270 may be automatically removed or exchanged by the imaging system 200. Alternatively, or additionally, an image filter (such as those that may be placed in the photo-filter carriage 237) may be interposed between the sample droplet in the sample plate 212 and the objective lens 231. In one embodiment, the image filter includes a polarization filter that provides a variable amount of polarization of the light incident on the objective lens 231. The use of these filters can be automatically controlled by imaging software routines and/or determined by operator-defined variables. - The motorized control of aperture, focus, and zoom of the
lens assembly 230 in conjunction with remote control of the light source 216 (e.g., brightness and direction of illumination) allows dynamic optimization of contrast, field of view, depth of field, and resolution. - Imaging System Integrated with Automated Sample Analysis System
- FIG. 17 depicts a functional block diagram of an automated
sample analysis system 1700 having an imaging system. The system 1700 includes controllers and logic 1760 for controlling various subsystems housed in a cabinet 1702. The system 1700 can further include a shelf access door 1712 for allowing access to a removable shelf system 1720 and/or a stationary shelf system 1722. In one embodiment, a removable shelf access door 1710 can be provided. The system 1700 can include a transport assembly 1730 that can consist of a plate handler 1732, an elevator assembly 1734, and a rotatable platform 1736. The system 1700 can further include an environmental control subsystem 1765 that employs a refrigeration unit 1762 and/or a heater 1764. - In one embodiment, the
system 1700 also includes an imaging system 200 as has been described above. The imaging system 200 and its subcomponents are housed within the cabinet 1702. This arrangement ensures that the samples in the sample plates remain at all times within the confines of a controlled environment. That is, once a sample plate is stored in the cabinet 1702, it is unnecessary to expose the sample plate to the environment external to the cabinet since the system 1700 is capable of automatically (i.e., without operator intervention) carrying out the imaging of the sample within the cabinet 1702. - Embodiments of an automated
sample analysis system 1700 having an imaging system in accordance with the invention are described in the related United States Provisional Patent Application entitled “AUTOMATED SAMPLE ANALYSIS SYSTEM AND METHOD,” having U.S. Patent Application No. 60/444,519, which is referenced above. - Sample Analysis System
- FIG. 18 depicts a block diagram of an imaging and
analysis system 1800, according to one embodiment of the invention. The imaging system 1805 can be an imaging system such as the imaging systems described above. The system 1800 includes an imaging system controller 1820 that provides logical control of the imaging system 1805 to, for example, direct the imaging system 1805 to image a particular sample on a particular sample plate 212, all the samples on the sample plate 212, or a subset of the samples. The imaging controller 1820 may also control the imaging parameters used by the imaging system 1805. Such imaging parameters can include, for example, focus, depth of field, aperture, zoom, illumination filtering, image filtering, and brightness. - The
system 1800 also includes an image storage device 1810 that stores images of samples captured by the imaging system 1805. The image storage device 1810 can be any suitable computer accessible storage medium capable of storing digital images, e.g., a random access memory (RAM), hard disk, floppy disk, optical disk, compact disc, or magnetic tape. The system 1800 shows the image storage device 1810 as separate from the imaging system 1805. In some embodiments, the image storage device 1810 can be included in the imaging system 1805, or it may be included in a system that may also include an image analyzer 1815, the imaging system controller 1820, or a scheduler 1825. In one embodiment, a computer includes all the control, scheduling, analysis, and imaging software for the system 1800. Alternatively, the software for the system 1800 may reside and run on a plurality of computers that are in communication with each other. In some embodiments, the imaging system 1805 may be configured to provide captured images directly to the image analyzer 1815, or it may be configured to typically store images on the image storage device 1810 and provide images to the image analyzer 1815 as directed by the imaging system controller 1820. - The
scheduler 1825 communicates with the image analyzer 1815 and the imaging system controller 1820 to control the analysis and imaging of samples based on user-provided input. For example, the scheduler can schedule the imaging of a particular droplet or a plurality of droplets on a sample plate, and coordinate the imaging of said droplet or plurality of droplets with its subsequent analysis. The scheduler 1825 can use a database 1830 to store information relating to scheduling the images and image-specific information, for example, the size of pixels in each of the stored images, in a suitable format for quick retrieval. Knowing the pixel size can allow the analyzer 1815 to reduce sampling to an appropriate density and size for particular objects in the image. The information in the database 1830 can be available with each request to process an image. The database 1830 can reside on the same computer as the scheduler 1825 or on a separate computing device. - The
scheduler 1825 provides an analysis request to the image analyzer 1815. According to one embodiment, the analysis request includes an image list, including the resolution of each image and the absolute X,Y location of its center. The image list typically contains only one image but may contain a plurality of images. The analysis request can also contain an analysis method including a list of parameters that specify options controlling how to analyze the image(s) and what to report. Additionally, the analysis request can include the Uniform Resource Locator (“URL”) of a definition file 1835, i.e., an electronic address that may be on the Internet, such as an ftp site, gopher server, or Web page. The definition file 1835 defines parameters used by the image analyzer 1815, e.g., neural network dimensions, weights, and training resolution (e.g., pixel granularity, or the spacing between pixels, of images used to train the neural network). The definition file 1835 may be a single file or a plurality of files, but will be referred to hereinafter in the singular. - The
image analyzer 1815 also receives an analysis method file(s) 1840. The analysis method file may be a single file or a plurality of files, but will be referred to hereinafter in the singular. The analysis method file 1840 includes parameters that can be used by the various image analysis modules contained in the image analyzer 1815, e.g., a content analysis module 1930, a notable regions module 1935, and a crystal object analysis module 1940 (FIG. 19), described below, according to one embodiment. The image analyzer 1815 can also include functionality that determines the content of an image in terms of objects and/or regions of, for example, crystals or precipitate, or clear regions, that is, regions that do not show any features. The image analyzer 1815 includes a neural network to identify features, e.g., crystals, precipitate, and edges, that are depicted in the image, according to one embodiment. Preferably, the image analyzer 1815 is configured to identify objects and regions of interest in an image quickly enough to allow the system 1800 to re-image specific objects or regions, if desired, while the corresponding sample plate is still in the imaging system 1805. - The
image analyzer 1815 provides an analysis response to the scheduler 1825. The analysis response, described in further detail below, typically includes the parameters used for the analysis and the results of the particular analysis performed, e.g., the count of crystal, precipitate, clear, and edge samples, regions of crystals, and/or a list and description of objects found in the image. - The analysis results can be reviewed using an
output display 1845 that can be co-located with the scheduler or at a remote location. The output displays may be coupled to the system 1800 via a web server, or via a LAN or other small network topology. Embodiments of a remote output display in accordance with the invention are described in the related United States Provisional Patent Application entitled “REMOTE CONTROL OF AUTOMATED LABS,” having Application No. 60/444,585. - Illustrative Embodiment
- A computer containing analysis and control modules, and methods related to controlling an imaging and analysis system are illustrated and described with reference to FIGS. 19-22, according to one embodiment of the invention. FIG. 19 depicts a
computer 1900 that includes a processor 1905 in communication with memory 1910, e.g., a hard disk and/or random access memory (RAM). The processor 1905 is also in communication with an image analysis module 1960 that can include various modules configured to perform the functionality of the image analyzer 1815 (FIG. 18) described herein. - The
computer 1900 may contain conventional computer electronics that are not shown, including a communications bus, a power supply, data storage devices, and various interfaces and drive electronics. Although not shown in FIG. 19, it is contemplated that in some embodiments the computer 1900 may include a video display (e.g., a monitor), a keyboard, a mouse, loudspeakers or a microphone, a printer, devices allowing the use of removable media including, but not limited to, magnetic tapes and magnetic and optical disks, and interface devices that allow the computer 1900 to communicate with another computer or a computer network, including but not limited to a LAN, an intranet, or a WAN, e.g., the Internet. - The
computer 1900 is in communication with an imaging storage device, for example, image storage device 1810 (FIG. 18), and is configured to receive an image of a sample from the storage device and determine the contents of the sample, using one or more analysis processes. The computer 1900 can be co-located with the image storage device, located near the image storage device, e.g., in the same building, or geographically separated from the image storage device. The computer 1900 can receive the image from the image storage device via, e.g., a direct electronic connection or through a network connection, including a local area network or a wide area network, including the Internet. It is also contemplated that the computer 1900 can receive the image via a suitable type of removable media, e.g., a 3.5″ floppy disk, compact disc, ZIP drive, magnetic tape, etc. - It is contemplated the
computer 1900 can be implemented with a wide range of computer platforms using conventional general purpose single chip or multichip microprocessors, digital signal processors, embedded microprocessors, microcontrollers and the like. The computer 1900 can operate independently, or as part of a computing system. The computer 1900 may include stand-alone computers as well as personal computers, workstations, servers, clients, mini-computers, main-frame computers, laptop computers, or a network of individual computers. The configuration of the computer 1900 may be based, for example, on Intel Corporation's family of microprocessors, such as the PENTIUM family, and Microsoft Corporation's WINDOWS operating systems such as WINDOWS NT, WINDOWS 2000, or WINDOWS XP. - The
computer 1900 includes one or more modules or subsystems that incorporate the analysis processes described herein. As can be appreciated by a skilled technologist, each module can be implemented in hardware or software, or a combination thereof, and can comprise various subroutines, procedures, definitional statements, and macros that perform certain tasks. For example, in a software implementation, all the modules are typically separately compiled and linked into a single executable program. The processes performed by each module may be arbitrarily redistributed to one of the other modules, combined together with other processes in a single module, or made available in, for example, a shareable dynamic link library. A module may be configured to reside on an addressable storage medium and configured to execute on one or more processors. Thus, a module may include, by way of example, other subsystems or components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. It is also contemplated that the computer 1900 may be implemented with a wide range of operating systems such as Unix, Linux, Microsoft DOS, Macintosh OS, OS/2 and the like. - The
analysis module 1960 can include a pre-processing module 1925 that can filter the received image prior to further processing. The image may be filtered to remove “noise” such as speckles, high frequency noise or low frequency noise that may have been introduced by any of the preceding steps, including the imaging step. Filtering methods to remove high frequency or low frequency noise are well known in image processing, and many different methods may be used to achieve suitable results. For example, according to one embodiment of a filtering procedure that removes speckle, for each pixel, the mean and standard deviation of every other pixel along the perimeter of a 5×5 pixel area centered on that pixel are computed. If the center pixel varies from the mean by more than a threshold multiplied by the standard deviation, then it is replaced by the mean value. Then the slope of the 5×5 image pixel intensities is calculated and the center pixel is replaced by the mean value of pixels interpolated on a line across the calculated slope. - The
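speckle-removal step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it applies only the first stage (outlier replacement from the alternating 5×5 perimeter samples), leaves border pixels untouched, omits the slope-interpolation stage, and the default threshold value is an assumption.

```python
import statistics

def despeckle(image, threshold=2.0):  # threshold default is illustrative
    """Replace outlier pixels using every other pixel along the
    perimeter of a 5x5 window centered on each pixel."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    # offsets of every other pixel along the 5x5 perimeter (8 samples)
    ring = [(-2, -2), (-2, 0), (-2, 2), (0, 2), (2, 2), (2, 0), (2, -2), (0, -2)]
    for y in range(2, h - 2):          # border pixels are left untouched here
        for x in range(2, w - 2):
            samples = [image[y + dy][x + dx] for dy, dx in ring]
            mean = statistics.mean(samples)
            stdev = statistics.pstdev(samples)
            # replace the center pixel when it deviates by more than
            # threshold * standard deviation from the perimeter mean
            if abs(image[y][x] - mean) > threshold * stdev:
                out[y][x] = mean
    return out
```

- The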
analysis module 1960 also includes one or more modules that perform image analysis to determine information about the sample contents, including content analysis module 1930, notable regions analysis module 1935, and crystal object analysis module 1940. The content analysis module 1930 determines the count of crystal, precipitate, clear and edge pixels in the image, and can be optionally enabled to operate only inside a specific region of the sample. The notable regions analysis module 1935 determines a list of regions of a specified pixel type, e.g., crystal, precipitate, clear and edge pixels. The crystal object analysis module 1940 determines objects containing crystal pixels that meet certain criteria, for example, size, area, or density. - FIG. 19 also shows
analysis module 1960 includes a report inner/outer non-clear ratio module 1945 that determines the ratio of non-clear pixel density inside a sample region over non-clear pixel density outside a sample region. The analysis module 1960 also includes a graphical output analysis module 1950 that generates a color-coded image depicting each of the various features found in a sample image in a specified color. These modules are further described hereinbelow. Other analysis modules 1955 that incorporate different image analysis processes may also be included in the analysis module 1960. In one example, an analysis module 1955 can analyze the change in two or more images of the same sample taken at two different times. The analysis module 1955 can receive the count of pixels that are classified as crystal, precipitate, clear or edge pixels in an image of a particular region of a sample at a time T1 and save the count information with a reference to the region of the sample imaged. When the same region of a sample is re-imaged at a later time T2, the analysis module 1955 receives the count of pixels that are classified as crystal, precipitate, clear and edge pixels in the image of the sample region at time T2. The analysis module 1955 can compare the count information from times T1 and T2 to determine if the droplet contains a crystal(s). One analysis method compares the total number of pixels classified as crystal pixels at times T1 and T2 to determine if the sample contains crystals. Another comparison method compares the percentage of crystal pixels at time T1 to the percentage of crystal pixels at time T2. If the count or the percentage of crystal pixels increases beyond a threshold value, the sample is deemed to contain crystals. The other pixel classifications (e.g., precipitate, clear and edge) can also be compared and evaluated to facilitate the crystal analysis. 
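The T1/T2 comparison just described can be sketched as follows; the dictionary layout and the threshold values are assumptions for illustration, not values from the specification:

```python
def crystal_growth_detected(counts_t1, counts_t2, count_delta=50, pct_delta=0.05):
    """Compare pixel-classification counts from two imagings of the same
    sample region. counts_t1/counts_t2 map class name -> pixel count.
    Returns True when crystal pixels grew beyond either threshold."""
    c1, c2 = counts_t1["crystal"], counts_t2["crystal"]
    total1 = sum(counts_t1.values())
    total2 = sum(counts_t2.values())
    if c2 - c1 > count_delta:                   # absolute-count comparison
        return True
    if c2 / total2 - c1 / total1 > pct_delta:   # percentage comparison
        return True
    return False
```

Either test firing marks the sample as containing crystals; the other classifications (precipitate, clear, edge) could be compared the same way.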
A time-based comparison method, where the count information is saved for one image and compared to a second subsequent image, can be used with any sample processing algorithm. - In another example, the
analysis module 1955 may analyze crystal growth in a series of two or more images using a grid approach. In this analysis method, two images I1 and I2 are divided into grids, and the corresponding grid cells in each image are compared for a change in the number of crystal pixels, using, for example, the actual number of pixels or the percentage of crystal pixels. The pixel count information can be kept for each image and used to compare to other images taken at a different time. In any of the analysis methods described herein, the method can include analyzing every pixel, or skipping one or more pixels between the pixels analyzed. - A
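sketch of this grid comparison follows, under the assumption that each image is a binary crystal mask (1 = crystal pixel) and that the grid tiles the image evenly; both are simplifications for illustration:

```python
def grid_crystal_change(img1, img2, grid=4):
    """Divide two equally sized binary crystal masks into grid x grid
    cells and report the cells whose crystal-pixel count changed."""
    h, w = len(img1), len(img1[0])
    ch, cw = h // grid, w // grid
    changed = []
    for gy in range(grid):
        for gx in range(grid):
            def count(img):
                # total crystal pixels inside cell (gy, gx)
                return sum(img[y][x]
                           for y in range(gy * ch, (gy + 1) * ch)
                           for x in range(gx * cw, (gx + 1) * cw))
            if count(img1) != count(img2):
                changed.append((gy, gx))
    return changed
```

- A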
scheduler module 1915 and an imaging system controller module 1920 are also included in computer 1900, according to one embodiment. These modules are configured to include functionality that schedules the imaging of sample plates/droplet samples and subsequent analysis of the images, and that controls the imaging system, corresponding to the scheduler 1825 and imaging system controller 1820, respectively. - The image analysis software package may include support software that performs training and configuring of perception and analysis functionality, e.g., for a neural network. Some of the algorithms included in the image analysis software modules may use stochastic processing and may include the use of pseudo-random number generation to find answers. All such functions can be provided a random number generator seed in request parameters received by the software module. When the analysis modules are properly configured, the same results should be obtained for a given image given the same parameters that affect its algorithms and any pre-processing of the image. The image analysis modules can be configured so that an analysis method using a pseudo-random number does not affect the results of a different analysis method or software module.
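The reproducibility property described above (same image, same parameters, same results) can be illustrated with a method-local pseudo-random generator seeded from the request parameters, so that one method's random stream cannot disturb another's; the function and its sampling step are purely illustrative:

```python
import random

def stochastic_analysis(pixels, seed, n=3):
    """Run a stochastic analysis step with a request-supplied seed.
    A dedicated random.Random instance keeps this method's pseudo-random
    stream independent of any other analysis method's."""
    rng = random.Random(seed)
    return sorted(rng.sample(pixels, n))
```

Calling the function twice with the same seed and input yields identical results, which is the behavior the specification requires of properly configured analysis modules.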
- In one embodiment, the image analysis software works with an image size of, for example, 800 by 600 pixels, a zoomed-in resolution of 2,046 pixels/mm (0.5 μm/pixel), and a zoomed-out resolution of 186 pixels/mm (5.4 μm/pixel), or 1,024 by 1,024 pixels, a zoomed-in resolution of 2,460 pixels/mm (0.41 μm/pixel), and a zoomed-out resolution of 220 pixels/mm (4.5 μm/pixel). The image analysis modules may optionally use the same neural network for both zoomed-in and zoomed-out images; however, the quality of the results may suffer if only one neural network is used, and it may be advantageous to train multiple neural networks, e.g., one for zoomed-in images and one for zoomed-out images. The image analysis software can also be adapted to other image sizes and pixel resolutions; however, the training of new neural networks may be necessary in order to suitably process these images. If the resolution of the images varies, each definition file may include its training resolution, that is, the spacing between sampled pixels that was used to train the neural network. This information allows the algorithms to consider how to adapt images of varying resolution for use with the neural networks.
- The analysis module receives an analysis request (FIG. 18) containing an image list that includes the images to be analyzed. The analysis request also includes, for each image, its resolution in pixels/mm and the absolute X-Y location of the center of the image. Typically, there is only one image in the image list; however, multi-image methods may also be used. The analysis request also includes an analysis method, which is a collection of parameters that specify options controlling how to analyze the images and what to report. In specifying the analysis method, a URL of the definition file is included. The definition file defines the neural network's dimensions, weights and training resolution, i.e., a pixel granularity of the images that were used to train the neural network. Examples of the parameters are first described generally below, and then specifically as they relate to the
content analysis module 1930, notable regions analysis module 1935, and the crystal object analysis module 1940, according to one embodiment. - The analysis request may include parameters that specify how a working copy of the image is prepared for all subsequent processing. For example, parameters can include options for a color to grayscale conversion of the image, and resizing of the image using pixel interpolation methods. Also, the parameters may specify the output of an image; for example, they may specify whether and how an image file representing the pixel interpretation should be generated. This generated image file may be visually displayed and further evaluated by a user. The parameters are also used by the analysis modules, e.g., in the content analysis module, the parameters specify whether an image is scanned and analyzed to determine statistics of its contents in terms of crystal, precipitate, clear and edge features. These parameters also specify whether crystal-like objects should be searched for and reported. Options may include a scan grid, ID criteria and the maximum number of objects to find.
- The parameters may also be used by the notable
region analysis module 1935 to specify whether notable regions in an image should be reported and, if so, the scan grid in micrometers, the region size (the width times the height in micrometers), the ID criteria, and the quantity of regions to report. The crystal object analysis module 1940 can use the parameters to specify whether effective contiguous subregions of crystals are identified and reported as crystal objects, how this identification should be performed, and the quantity of crystal objects to identify. - The parameters can also specify whether to report the inner/outer non-clear ratio. If this ratio is to be reported, the output includes a ratio of the non-clear pixel density inside a sample region over the non-clear pixel density outside of the sample region. For example, the ratio would be 3.0 if every 100th pixel inside of a sample region is non-clear and every 300th pixel outside of a sample region is non-clear. According to one embodiment, ratios above 1 billion are truncated to that value.
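The worked example above (1 in 100 inside, 1 in 300 outside, giving a ratio of 3.0) can be sketched as follows; the zero-density guard is an assumption, and the one-billion cap matches the truncation described in the text:

```python
def inner_outer_nonclear_ratio(inner_nonclear, inner_total,
                               outer_nonclear, outer_total):
    """Ratio of non-clear pixel density inside the sample region to the
    density outside it, truncated at one billion."""
    inner_density = inner_nonclear / inner_total
    outer_density = outer_nonclear / outer_total
    if outer_density == 0:
        # no non-clear pixels outside: report the truncation ceiling
        return 1_000_000_000
    return min(inner_density / outer_density, 1_000_000_000)
```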
- Image sampling parameters may include, for example, a color processing parameter which specifies how each pixel is converted to a floating point intensity value, or which may specify the linear grayscale for image conversion. If the image is already grayscale, pixels are converted linearly from black, e.g., 0.0, to white, e.g., 1.0. If color is selected, the pixels are linearly converted to 0.0 to 1.0 with equal channel weighting for each color. Pixel interpolation parameters may include, for example, no pixel interpolation, that is, only a closest pixel method will be used for pixel interpolation. This is generally the fastest interpolation method but typically results in reduced image quality. Interpolation methods that may be selected include bilinear and cubic spline interpolation, which yield higher quality images but are more computationally complex and take more time or resources to generate. The re-size parameter includes options of 1:1, where the image is not resized; automatic, where the image is resized to match the training resolution using the specified interpolation method; and scale factor, where the image is re-sized using this factor and the specified interpolation method.
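The color-processing and automatic re-size options above can be sketched as follows; the 0-255 channel depth and the tuple representation of RGB pixels are assumptions for illustration:

```python
def pixel_intensity(pixel):
    """Convert a pixel to a 0.0-1.0 intensity. RGB tuples get equal
    channel weighting; bare grayscale values are scaled linearly."""
    if isinstance(pixel, tuple):
        return sum(pixel) / (3 * 255.0)
    return pixel / 255.0

def auto_scale_factor(image_resolution, training_resolution):
    """Scale factor for the 'automatic' re-size option: resample so the
    image's pixels/mm matches the network's training resolution."""
    return training_resolution / image_resolution
```

For example, a zoomed-in image at 2,046 pixels/mm fed to a network trained at 1,023 pixels/mm would be scaled by 0.5 before analysis.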
- The
analysis modules receive request parameters from the scheduler module 1915 and generate a response, as described below. The content analysis module 1930 determines counts of types of pixels in the sample images, e.g., crystal, precipitate, clear and edge pixels, as depicted in the image. In the illustrative embodiment described herein, the content analysis module 1930 is implemented as a neural network. - The
content analysis module 1930 receives a set of parameters that include parameters that indicate whether this module should be enabled, whether the content analysis should take place inside the sample region only or inside and outside the sample region, and the number of pixels to be skipped during the image analysis. If enable is set to NO, no analysis by the content analysis module 1930 is done and nothing is reported. If enable is set to YES, then the content analysis module analyzes the sample image. If the inside-sample-region-only option is set to YES, the edge of the sample region is found first, and the analysis is done only within the sample region edge. If inside-sample-region-only is set to NO, then checking is done inside and outside the sample region. A process for identifying the edge of a sample region is described hereinbelow in reference to FIG. 20, according to one embodiment. If the number of pixels to be skipped is set to 0, all the pixels in the image will be used. If the number of pixels to be skipped is set to 1, every other pixel in the image will be used for the content analysis; if 2, every third pixel will be used, etc. The default parameter for skipped pixels is typically set to 0. - The response of the
content analysis module 1930 includes an “echo” of the parameters used during the content analysis processing, and the counts of each pixel type, i.e., crystal, precipitate, clear and edge pixels, found in the image. If the inside-sample-region-only option is enabled, the edge count can be used to assess how well the edge of the sample region was found. If it is not enabled, the edge count may be ignored. - The notable
region analysis module 1935 processes an image and determines regions of a specified size that include the minimum levels of crystal, precipitate or non-clear pixels. The request parameters for the notable region analysis module 1935 can include an enable parameter, which is set to either “YES” or “NO,” that determines if notable region analysis should be performed and reported. The request parameters can also include a region size or area that is used to determine the size of the smallest region the notable region analysis module will identify. A skip-pixel parameter can be included to control the number of pixels that will be skipped during processing, where “0” means to check all of the pixels, “1” means to sample every other pixel, that is, sample the pixels with one unsampled pixel between them, etc. Typically, the default value for skip-pixel is “0.” - The request parameters can also include the maximum number of regions to report and the minimum percentages of crystal pixels, precipitate pixels and non-clear pixels to report. Typically, pixels determined to be edge-type pixels are ignored. The notable
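region scan samples pixels according to the skip-pixel convention above, which can be sketched as follows; the traversal order is an assumption for illustration:

```python
def sampled_pixels(width, height, skip):
    """Yield (x, y) coordinates honoring the skip-pixel parameter:
    skip=0 visits every pixel, skip=1 every other pixel in each
    direction, skip=2 every third, and so on."""
    step = skip + 1
    for y in range(0, height, step):
        for x in range(0, width, step):
            yield (x, y)
```

The notable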
region analysis module 1935 can be configured to identify regions with the highest percentage of each specified pixel type. If a region contains less than the minimum percentage of pixels, it is not saved and the search for regions ends. Regions typically do not go outside of the input image. Newly found regions generally do not overlap existing regions. The report of results from the notable regions analysis module includes all the request parameters and a list of the regions identified. The results for each region can include its absolute position, size, the number of crystal pixels and the total pixels sampled, not including edge pixels. - The crystal
object analysis module 1940 identifies small regions in the image that are rich in crystal pixels. The small regions, or objects, comprise one or more “cells.” The request parameters for the crystal object analysis module can include an enablement parameter which determines if this analysis should be performed and reported. The request parameters also include a skip-pixels parameter that operates as described above, and parameters that control the size of the cells identified, for example, a cell-minimum-size parameter to control the smallest width or height of a cell, a cell-minimum-area parameter which indicates the smallest overall area of a cell, a cell-minimum-density parameter which indicates the proportion from 0 to 1 of crystal pixels the cell must contain in order to be reported, and an object-minimum-size parameter which indicates one or more dimensions that the overall object must achieve in order to be reported. The request parameters can also include a pseudo-random generator seed which is used for the crystal object analysis stochastic processing. The crystal object analysis module 1940 typically includes the limitation that the center of a cell cannot be inside another cell. Identified cells that touch are grouped and identified as a single crystal object, and the largest overall dimension of the crystal object is computed. If the largest overall dimension is less than the minimum size, the object is discarded. The crystal object analysis processing can also compute an object area as the sum of the cell density times the cell area, and further compute the object centroid. The results from the crystal object analysis module 1940 can include all the request parameters provided to the module, a list of objects identified and their descriptions. The list is sorted in descending order by an object's area. Each object description includes the object area (μm2), the centroid (X, Y in μm) and a list of cells that make up each object. 
Each cell is described with its absolute position and size (μm), crystal pixel count and total pixel count. - The
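object area and centroid computations above can be sketched as follows. The area-weighted centroid is an assumption about how the centroid is computed, and the cell record layout is illustrative:

```python
def object_area_and_centroid(cells):
    """Aggregate a crystal object's grouped cells. Each cell is a dict
    with 'x', 'y' (center, um), 'w', 'h' (um) and 'density' (0-1
    fraction of crystal pixels). The object area is the sum of density
    times cell area; the centroid is the area-weighted mean of centers."""
    area = sum(c["density"] * c["w"] * c["h"] for c in cells)
    cx = sum(c["x"] * c["density"] * c["w"] * c["h"] for c in cells) / area
    cy = sum(c["y"] * c["density"] * c["w"] * c["h"] for c in cells) / area
    return area, (cx, cy)
```

- The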
graphical output module 1950 generates a representation of the analyzed image which can be displayed and further analyzed. For example, grayscale and/or color coding pixel characteristics may be adjusted by the graphical output module 1950. The analysis request for the graphical output module 1950 includes an image path parameter that defines where the image to be analyzed is found. If the image path parameter is empty, no further processing is done. A base value parameter indicates whether a “base image,” i.e., an image used to generate the representation of the analyzed image, is either black, gray or white. If the base value is gray, the base image begins as a grayscale rendition of the resampled image. Otherwise, the base image begins as a white or black image, as indicated by the base value. The parameters include a gray “min” value and a gray “max” value, which are typically from 0 to 1, and specify the linear grayscale compression. For example, adjusting the gray min or max values can control the color coding contrast or flatten the image, and they are typically set to defaults of 0 for the gray min and 0.75 for the gray max. - An opaque parameter indicates whether a pixel in the base image should be replaced with the color coding associated with the particular type of corresponding pixel in the analyzed image. For example, if the opaque parameter is set to YES or the base parameter equals black or white, the appropriate color coding replaces the pixel. If the opaque parameter is set to NO, the color for a base image pixel is generated by OR'ing the color with the corresponding pixel in the analyzed image. A crystal color parameter provided in the analysis request sets the color coding value for pixels identified as crystals, a precipitate color parameter sets the color coding for precipitate pixels, and an edge color sets the color coding for pixels identified as edges. 
For example, the default values for the crystal color parameter may be blue, the precipitate color parameter may be green and the edge color parameter may be red. The
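per-pixel opaque/OR behavior described above can be sketched as follows; the RGB tuple representation and channel depth are assumptions for illustration:

```python
def code_pixel(base_rgb, class_rgb, opaque):
    """Apply a classification color (e.g., blue for crystal pixels) to a
    base-image pixel. Opaque mode replaces the pixel outright; otherwise
    the class color is OR'ed with the base pixel, channel by channel."""
    if opaque:
        return class_rgb
    return tuple(b | c for b, c in zip(base_rgb, class_rgb))
```

The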
graphical output module 1950 writes the color coded image file to the image path specified in the request parameters, unless the path parameter is empty or invalid. The generated color-coded image file typically does not contain region annotations, but annotations can be superimposed on the image file by another process, if desired. The graphical output module 1950 provides an analysis report to the scheduler module 1915 that includes the request parameters that were used to produce the color coded image file. - In one embodiment, the
analysis results can prompt the scheduler module 1915 to dispatch control information to the imaging controller module 1920, which in turn directs the imaging system to re-image specific areas of a droplet using at least one different imaging parameter (e.g., the magnification or zoom level may be different, a different configuration of lighting, such as off-axis lighting, may be used, etc.), while the sample plate containing the sample just analyzed is in the imaging device. In one embodiment, an analysis module 1960 can analyze at least 10,000 images per day under typical conditions, where the images are less than or equal to 1.0 mega pixels, i.e., the equivalent of processing each image in 8.64 seconds, and where one instance of the image analysis software is running on one PC. The analysis module may be packaged and distributed in a Java 2 file. Java Message Service may be used to receive requests and send the responses from the analysis module(s). Extensible markup language (XML) may also be used for the analysis requests and responses. - Test images are used with training software to train the neural networks to analyze crystal growth in sample droplets. As general software implementation of a neural network is well known in the art, only the training of a neural network is described, according to one embodiment of the invention. Training software allows the user to create, open, display, edit and save lists of images in training/test set files, and is described herein according to one embodiment of the invention. The test images include identified subimages containing edge, crystal, precipitate and clear pixels within a wide variety of images. For each image, the user can designate “training subimages” as crystal, precipitate, edge or clear. The resolution of the subimages can be user-adjustable. 
To minimize user fatigue during image designation, the software can include a single-click designation action that efficiently designates the subimages as crystal, precipitate, edge or clear. The images containing the designated training subimages can be saved as a set of training files. The training software can display training subimages in table form and/or as color-coded markers on an image. Subimages may be moved by either dragging the marker or editing the table. Subimages may also be deleted either from the image or from the table. The training software can be configured to allow a user to define the neural network dimensionality, select a training set file and another file for testing, and perform iterative training and testing using the selected sets of files. Training data, e.g., neural network weights, training and test error, and the number of iterations is saved in a definition file.
- To train the neural network, the intensity levels of pixels in a selected image area, e.g., a subimage, are provided as an input to the neural network. The neural network identifies each pixel as a particular type of pixel, e.g., edge, clear, crystal or precipitate. The results are compared to what is actually correct, and corresponding error values are calculated. Small adjustments are made to the weights within the neural network based on the error values, and then another test image containing a designated subimage is provided as an input to the neural network. This process is performed for other test images and can be repeated for many thousands of iterations, where each time the weights may be slightly adjusted to provide a more accurate output.
- When the neural network is used for content analysis, an image of a sample droplet is provided as an input to the neural network. The output of the neural network includes a rating for each pixel that indicates a degree of confidence that the pixel depicts each of the different pixel classifications, for example, edge, crystal, precipitate, and clear. The rating is typically between zero and one, where zero indicates the lowest degree of confidence and one indicates the highest degree of confidence. The overall content of an image can be determined by counting the number of pixels of each classification, computed as a percentage of the crystal, precipitate, edge and clear pixels contained in the image.
- When considering the content analysis strategy, accuracy of the results is important, but so is the speed of the analysis. Analysis algorithms can allow the user to balance and prioritize the characteristics of speed and quality. For example, one analysis option identifies edges of a drop within the image, and may be used with quick and coarse resolution search parameters to first identify the edge of the drop, and then the interior of the drop may be analyzed with a higher resolution search.
- According to one embodiment of the invention, a supervised learning type of neural network is used to classify the subimages as crystal, precipitate, edge of drop or clear, using the pixel intensity, not the pixel hue. In one embodiment, the entire image is scanned, sampling subimages on a host-specified grid, where the spacing of the grid is in millimeters, not pixels. The resolution of the images is provided as a parameter received from the host. Pie charts can be generated graphically showing the results of the neural network analysis. According to one embodiment, the outputs of a neural network can be summed for each type of object identified and divided by the sum of all the outputs, for example, the results can be A% crystal, B% precipitate, C% clear, where A+B+C=100%.
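The normalization just described (summed outputs per class, divided by the sum of all outputs, so the percentages add to 100) can be sketched as follows; the rating layout is an assumption for illustration:

```python
def content_percentages(ratings):
    """ratings is a list of per-pixel dicts mapping class name to the
    network's 0-1 confidence output. Sum the outputs per class and
    normalize by the grand total so the percentages sum to 100."""
    totals = {}
    for r in ratings:
        for name, value in r.items():
            totals[name] = totals.get(name, 0.0) + value
    grand = sum(totals.values())
    return {name: 100.0 * v / grand for name, v in totals.items()}
```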
- Each image analysis method file contains neural network definitions, e.g., “dimensions” and “weights.” The method file also includes parameters that specify the analysis options, including whether to perform drop edge detection, and if drop edge detection is selected, the sample grid spacing used to find the edges of the drop, and the sample grid spacing to find crystals within the drop. For example, drop edge detection finds the edge of a drop quickly with a relatively coarse grid spacing scan and then uses a relatively fine grid spacing scan inside the drop, according to one embodiment. A database can be used to associate the image analysis file with the image analysis results, so that if a better image analysis method is available at a later time, an image may be re-analyzed using the later analysis method.
- The analysis modules can use a neural network to classify the contents of an image. To aid the neural network in the classification process, a fast operator can be used to identify if a pixel has a particular crystal characteristic. One embodiment of an edge detection process is described below and illustrated in FIG. 20A. Color or black and white images of a sample droplet can be generated and used for identifying crystals. At
step 2005, the edge detection process 2000 receives the image of a sample that may contain crystals. At step 2010 the process 2000 determines if the image received is a color image. If the image is a color image, it is converted to a grayscale image at step 2015. The image may be filtered at step 2020 to minimize undesirable characteristics such as speckle or other types of image “noise” during subsequent processing. - The
edge detection process 2000 uses the gradient of the intensity of the pixels in the image to identify edges. At step 2025, for a plurality of pixels in the image, gradient information is calculated from a 3×3 set of pixels using a calculation based on the best fit of a plane through the image points. The gradient of intensity of the pixel in the center of the 3×3 set of pixels is the direction and magnitude of the maximum slope of the plane. The use of a 3×3 set of pixels helps to eliminate some of the effects of image noise on the process. Gradient information is calculated for selected pixels in the image. All the pixels in the image may be selected, or a subset of the pixels, e.g., an area of interest in the image which may be smaller than the whole image, may be selected. Gradient information is calculated for each selected pixel and stored in three arrays of the same dimensions as the received image. The first array contains the cosine of the angle of the gradient direction. The second array contains the sine of the angle of the gradient direction. The third array contains the magnitude, or steepness, of the gradient. Pixels with a calculated magnitude less than a given threshold have their gradient information set to zero so they are eliminated from further processing. - At
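this stage, the per-pixel gradient values can be sketched. For unit pixel spacing, a least-squares plane fit over a 3×3 window reduces to a scaled Prewitt operator (gx = Σ dx·I / 6, gy = Σ dy·I / 6); that reduction is standard, but treating it as the exact calculation used here is an assumption:

```python
import math

def gradient_3x3(window):
    """Gradient from a best-fit plane over a 3x3 window (three rows of
    three intensities). Returns (cos, sin, magnitude) of the gradient,
    the three values the process stores per pixel."""
    gx = sum(window[y][x] * (x - 1) for y in range(3) for x in range(3)) / 6.0
    gy = sum(window[y][x] * (y - 1) for y in range(3) for x in range(3)) / 6.0
    mag = math.hypot(gx, gy)
    if mag == 0:
        # flat window: zeroed gradient information, dropped from later steps
        return (0.0, 0.0, 0.0)
    return (gx / mag, gy / mag, mag)
```

- At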
step 2030, edge pixels are identified using the gradient information. An edge pixel can be defined as a pixel for which the magnitude of the gradient of the image is a local maximum in the direction of the gradient. These pixels represent the points at which the rate of change in intensity is the greatest. A separate array of pixels is used (of the same dimensions as the original image) to store this information for further processing. - At
step 2035, edge pixels are formed into groups based on the direction of their gradient. A threshold on the difference in direction is used to include or exclude pixels from a group. Each pixel in a group should be adjacent to another pixel in the group. The edge pixels are labeled identifying the group to which they belong. At step 2040, the group(s) with crystal characteristics are selected and at step 2045 the selected groups are provided to another analysis process for aid in further analysis of the image. - One characteristic that separates a crystal from other objects in an image is the straightness of the edge of the crystal. FIG. 20B includes the same steps 2005-2035 as in FIG. 20A, and then uses the crystal characteristic “straightness” to determine whether a group of pixels depicts a crystal. At
- At step 2035 in FIG. 20B, edge pixels are formed into groups, as described above for FIG. 20A. At step 2040, the edge detection process 2000 determines the "straightness" of each labeled group of pixels using linear regression, according to one embodiment. The correlation from the linear regression and the number of pixels in the group are used to determine the "straightness" of the group. The straightness can be defined as the product of the count of pixels in the group and the reciprocal of 1.0 minus the fourth power of the correlation coefficient for the group, according to one embodiment. If the count of pixels is below a given threshold, the count is set to zero.
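The straightness score and the "lines image" of step 2055 can be sketched together. The guard against a perfect fit (correlation exactly ±1, which would make the denominator zero) and the treatment of exactly axis-aligned groups are assumptions the patent does not address:

```python
import numpy as np

def lines_image(labels, min_count=5, eps=1e-6):
    """Sketch of steps 2040-2055: for each labeled group, straightness =
    (pixel count) * 1 / (1 - r**4), where r is the correlation coefficient
    of the group's pixel coordinates; groups smaller than min_count score
    zero. Each output pixel takes its group's straightness value."""
    out = np.zeros(labels.shape, dtype=float)
    for g in range(1, int(labels.max()) + 1):
        ys, xs = np.nonzero(labels == g)
        if len(xs) < min_count:
            continue                      # count below threshold: score 0
        if xs.std() == 0.0 or ys.std() == 0.0:
            r = 1.0                       # axis-aligned line: perfectly straight
        else:
            r = float(np.corrcoef(xs, ys)[0, 1])
        out[labels == g] = len(xs) / max(1.0 - r ** 4, eps)
    return out
```

A long straight group thus scores far higher than a short or curved one, which is what makes the score useful as a crystal cue.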
- At step 2055, the edge detection process 2000 generates an image, hereinafter referred to as a "lines image," from the previously calculated straightness information. The lines image has the same shape and size as the subset of pixels selected for edge detection. The intensity value of a pixel in the lines image is set to the straightness value of the group to which its corresponding pixel belongs. At step 2060, the lines image, indicating where "straight" pixels may be found, is provided to an analysis module to aid in crystal identification. - Referring now to FIGS. 18 and 21, in a typical imaging and analysis process, the
scheduler 1825 controls the imaging of samples by communicating to the imaging system controller 1815 the information necessary for imaging a particular plate and the droplet samples on that plate. The imaging system controller 1815 directs the imaging system 1805 to generate the images of the particular plate and droplet sample at a specified time or in a specified sequence, and the images are stored on the image storage device 1810. After an image is generated for a particular sample, the scheduler 1825 sends an analysis request to the image analyzer 1815, and the corresponding image for that sample is provided to the image analyzer 1815. The image analyzer 1815 determines the contents of the image using one or more of the various analysis modules and provides results to the scheduler 1825 in an analysis response. - FIG. 21 shows a
process 2100 that uses the results of analyzing an image to guide subsequent imaging of the same sample, according to one embodiment of the invention. At step 2105, a first image of a sample is generated using a first set of imaging parameters, which may include, for example, focus, depth of field, aperture, zoom, illumination filtering, image filtering, and/or brightness. An analysis process receives the first image at step 2110 and analyzes it in accordance with the analysis request at step 2115. At step 2120, the process 2100 determines whether crystal formation is suspected in the first image, the presence of which can make an additional image of the sample desirable. For example, to determine whether an additional image is desired, a score can be computed for the image. The score can be based upon user-adjustable thresholds and weighting factors, allowing the user to tailor the scoring to personal judgment and experience. If the overall score exceeds a specified threshold, reimaging is warranted and an appropriate reimaging request is dispatched. The score and threshold may be functions of apparent image content and/or of system bandwidth and scheduling constraints: the more available system resources (e.g., the imaging subsystem) are, the more likely zoomed-in reimaging occurs. - The analysis of the first image at
step 2120 can be done using a relatively fast-running process, e.g., determining the inner/outer non-clear ratio for the droplet sample, and a further, more thorough analysis can be done at step 2140, according to one embodiment. At step 2125, information is provided to the imaging system that allows the same sample to be re-imaged to create a second image of the sample. Subsequent images of the same sample can use imaging parameters different from those used to generate the first image; that is, at least one value of an imaging parameter used to generate the second image differs from the values used to create the first image. At step 2135, the process 2100 receives the second image of the sample and analyzes it at step 2140 using, for example, the analysis methods described herein. Analysis results are output for evaluation or display at step 2145. - Using the analysis data as feedback to the imaging process and adjusting the imaging parameters accordingly, subsequently generated images can more clearly show the presence of crystal formation. For example, if crystal formation in the sample droplet is suspected as a result of analyzing the first image, information can be communicated to the imaging system to zoom in on the area where the crystal formation is suspected and re-image the droplet at higher magnification. Other imaging parameters, e.g., focus, depth of field, aperture, zoom, illumination filtering, image filtering, and brightness, can also be changed to obtain an image that better depicts the contents of the sample.
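The reimaging decision at step 2120 can be sketched as a weighted score compared against a load-adjusted threshold. The feature names, the weights, and the form of the load adjustment below are illustrative assumptions; the patent requires only a user-tunable weighted score and a threshold that may reflect system bandwidth.

```python
def should_reimage(features, weights, threshold, system_load=0.0):
    """features and weights are dicts keyed by feature name (e.g. a
    non-clear ratio or an edge-straightness score; hypothetical names).
    system_load in [0, 1]: a busier imaging subsystem raises the
    effective threshold, making a zoomed-in reimaging request less
    likely, per the text."""
    score = sum(weights[name] * features[name] for name in weights)
    effective_threshold = threshold * (1.0 + system_load)
    return score > effective_threshold
```

With the same image score, a lightly loaded system triggers reimaging while a heavily loaded one does not.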
- Timely analysis of the first image can result in a relatively large time savings if a subsequent image of a particular sample is desired. The process for handling a sample plate containing the sample, e.g., fetching the correct plate from a storage location, placing the plate in the imaging device, and returning the plate to its storage location, is very time consuming. When thousands of images are scheduled to be generated in one day, minimizing the amount of plate handling during image generation increases image generation and analysis throughput. According to one embodiment, the images generated from the samples on a sample plate are completely analyzed before the plate is removed from the imaging device. If desired, additional subsequent images of a sample contained on that plate can then be generated without incurring the time required to re-fetch the plate. In another embodiment, a certain percentage of the images are analyzed before the plate is removed. While this may not allow every sample to be re-imaged without re-fetching the plate, e.g., the analysis of the last sample imaged may not be completed before the plate is removed, it may still result in an overall time savings as it may allow quick re-imaging of most of the samples, if desired, while not unduly delaying the removal of the plate from the imaging device.
- FIG. 22 illustrates a process 2200 that includes generating two images of a sample, where each image is generated using a set of imaging parameters that differs in at least one imaging parameter from the set used for the other image, according to one embodiment of the invention. At step 2205, a first image is generated using a first set of imaging parameters. At
step 2210, the first image is received by an analysis process, which determines one or more regions of interest in the first image at step 2215. The analysis process may be, for example, an edge detection process or a process implemented in one of the analysis modules, both of which are described hereinabove. - At
step 2220, a second image is generated using a second set of imaging parameters that includes at least one imaging parameter different from the first set. One or more imaging parameters may be changed to generate the second image. For example, the focal plane may be set to a different height relative to the droplet sample; the illumination of the sample may be changed, including using a different direction of illumination (e.g., lighting the sample from alternate sides or off-axis) or a different illumination brightness level; the magnification or zoom level may be changed; and different filtering may be used for each image (e.g., polarizing filters). At step 2225 the second image is received by an analysis process and analyzed to determine a region or regions of interest at step 2230. - At
step 2235, the regions of interest from the first and second images are combined to form a composite image. Typically, the composite image is the same size as the first and second images. The first and second images are analyzed to determine the portion or portions of each image that will be used to form the composite image, and the composite image is generated by copying the values of the pixels from each region of interest in the first and second images into one composite image. At step 2240, the composite image is analyzed for the presence of crystal formation by a user, or automatically by an automatic or interactive analysis method, e.g., using the content analysis module, the notable regions analysis module, the crystal object analysis module, or the report inner/outer non-clear ratio module, as previously described, and the results are output at step 2245. - Although process 2200 shows a process to form a composite image using two images generated with different imaging parameters, more than two images may also be generated and used to form a composite image, where each image is generated using at least one different imaging parameter, according to another embodiment. For example, according to one embodiment, a plurality of images is generated for a sample with the focal plane of each image set at a different "height" relative to the sample. The resulting images may show varying sharpness at corresponding locations. The sharpness of the corresponding portions of the images is compared to determine which portion of each image should form the composite image. The portion of each image that best satisfies specified sharpness criteria, e.g., where a selected set of pixels exhibits the greatest contrast, may be selected from the plurality of images to form the composite image.
The portions of the images being compared may be as small as a single pixel or several pixels, and as large as tens or hundreds of pixels, or even larger.
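The focus-stacking variant above can be sketched as follows, using the per-portion standard deviation as a stand-in for the contrast criterion; the tile size and the contrast measure are assumptions, since the patent leaves the sharpness criterion open:

```python
import numpy as np

def composite_by_sharpness(images, tile=8):
    """Sketch of the multi-focus composite: split the same-size images
    into tile x tile portions, score each portion's contrast (standard
    deviation), and copy the sharpest version of each portion into the
    composite."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    out = np.empty_like(stack[0])
    H, W = out.shape
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            portions = stack[:, y:y + tile, x:x + tile]
            best = int(np.argmax(portions.std(axis=(1, 2))))  # highest contrast
            out[y:y + tile, x:x + tile] = portions[best]
    return out
```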
- FIG. 23 illustrates a
process 2300 for visual evaluation of crystal growth by a user, according to another embodiment of the invention. At step 2305, process 2300 receives an image of a sample. At step 2310, the process 2300 classifies the pixels of the image according to their depiction of the contents of the sample; e.g., the pixels are classified as depicting crystal, precipitate, clear, or an edge. The pixels may be classified by processes incorporated into the content analysis module 1930, the notable regions analysis module 1935, or the crystal object analysis module 1940, as described above, or by another suitable analysis process. At step 2315, process 2300 generates a second image that is color-coded using the pixel classification information from step 2310. Step 2315 may be performed by the above-described graphical output analysis module 1950. To generate the second image, pixels that were classified as edge, precipitate, or crystal pixels are depicted in a particular color, e.g., red for crystal pixels, green for precipitate pixels, and blue for edge pixels. Some or all of the classified pixels may be depicted according to the color-code scheme. The second image can carry opaque color-coded information, or translucent color-coded information that also shows the original image through the color. The second image is typically the same size and shape as the image received at step 2305.
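The color-coded second image of step 2315 can be sketched as follows; the red/green/blue assignments follow the text, while the integer class codes and the alpha blend used for translucency are illustrative assumptions:

```python
import numpy as np

CRYSTAL, PRECIPITATE, EDGE = 1, 2, 3          # assumed class codes
COLORS = {CRYSTAL: (255, 0, 0),               # red for crystal pixels
          PRECIPITATE: (0, 255, 0),           # green for precipitate pixels
          EDGE: (0, 0, 255)}                  # blue for edge pixels

def color_coded(image, classes, alpha=1.0):
    """image: 2-D grayscale array; classes: same-shape array of class
    codes. alpha=1.0 gives opaque color coding; alpha<1.0 is translucent,
    letting the original image show through the color."""
    rgb = np.repeat(np.asarray(image, dtype=float)[..., None], 3, axis=2)
    for code, color in COLORS.items():
        mask = classes == code
        rgb[mask] = (1.0 - alpha) * rgb[mask] + alpha * np.asarray(color, dtype=float)
    return rgb.astype(np.uint8)
```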
- At step 2320, the color-coded second image is visually displayed, for example, on a computer monitor or in a printout. At step 2325, the second image is visually analyzed to determine crystal growth information for the droplet sample. Displaying the color-coded image to a user facilitates efficient interpretation of the contents of the image and allows the presence of crystals in the image to be easily visualized. - The foregoing description details certain embodiments of the invention. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the invention can be practiced in many ways. As is also stated above, the use of particular terminology when describing certain features or aspects of the invention should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the invention with which that terminology is associated. The scope of the invention should therefore be construed in accordance with the appended claims and any equivalents thereof.
Claims (40)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/769,150 US20040218804A1 (en) | 2003-01-31 | 2004-01-30 | Image analysis system and method |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US44451903P | 2003-01-31 | 2003-01-31 | |
US44458603P | 2003-01-31 | 2003-01-31 | |
US44458503P | 2003-01-31 | 2003-01-31 | |
US47498903P | 2003-05-30 | 2003-05-30 | |
US10/769,150 US20040218804A1 (en) | 2003-01-31 | 2004-01-30 | Image analysis system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040218804A1 true US20040218804A1 (en) | 2004-11-04 |
Family
ID=32854497
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/769,462 Active 2028-04-30 US7596251B2 (en) | 2003-01-31 | 2004-01-30 | Automated sample analysis system and method |
US10/769,150 Abandoned US20040218804A1 (en) | 2003-01-31 | 2004-01-30 | Image analysis system and method |
US10/769,470 Abandoned US20040260782A1 (en) | 2003-01-31 | 2004-01-30 | Data communication in a laboratory environment |
US10/769,461 Abandoned US20040253742A1 (en) | 2003-01-31 | 2004-01-30 | Automated imaging system and method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/769,462 Active 2028-04-30 US7596251B2 (en) | 2003-01-31 | 2004-01-30 | Automated sample analysis system and method |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/769,470 Abandoned US20040260782A1 (en) | 2003-01-31 | 2004-01-30 | Data communication in a laboratory environment |
US10/769,461 Abandoned US20040253742A1 (en) | 2003-01-31 | 2004-01-30 | Automated imaging system and method |
Country Status (2)
Country | Link |
---|---|
US (4) | US7596251B2 (en) |
WO (4) | WO2004069984A2 (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5544256A (en) * | 1993-10-22 | 1996-08-06 | International Business Machines Corporation | Automated defect classification system |
US5785898A (en) * | 1993-04-21 | 1998-07-28 | California Institute Of Technology | Potassium lithium tantalate niobate photorefractive crystals |
US5892218A (en) * | 1994-09-20 | 1999-04-06 | Neopath, Inc. | Cytological system autofocus integrity checking apparatus |
US5961716A (en) * | 1997-12-15 | 1999-10-05 | Seh America, Inc. | Diameter and melt measurement method used in automatically controlled crystal growth |
US6151079A (en) * | 1996-07-25 | 2000-11-21 | Hitachi, Ltd. | Image display apparatus having a circuit for magnifying and processing a picture image in accordance with the type of image signal |
US6175652B1 (en) * | 1997-12-31 | 2001-01-16 | Cognex Corporation | Machine vision system for analyzing features based on multiple object images |
US6226032B1 (en) * | 1996-07-16 | 2001-05-01 | General Signal Corporation | Crystal diameter control system |
US6257722B1 (en) * | 1999-05-31 | 2001-07-10 | Nidek Co., Ltd. | Ophthalmic apparatus |
US6267722B1 (en) * | 1998-02-03 | 2001-07-31 | Adeza Biomedical Corporation | Point of care diagnostic systems |
US6529612B1 (en) * | 1997-07-16 | 2003-03-04 | Diversified Scientific, Inc. | Method for acquiring, storing and analyzing crystal images |
US20030150375A1 (en) * | 2002-02-11 | 2003-08-14 | The Regents Of The University Of California | Automated macromolecular crystallization screening |
US6668082B1 (en) * | 1997-08-05 | 2003-12-23 | Canon Kabushiki Kaisha | Image processing apparatus |
US6788411B1 (en) * | 1999-07-08 | 2004-09-07 | Ppt Vision, Inc. | Method and apparatus for adjusting illumination angle |
Family Cites Families (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5419904Y2 (en) * | 1971-11-29 | 1979-07-20 | ||
JPS5126027A (en) * | 1974-08-27 | 1976-03-03 | Canon Kk | |
JPS5136934A (en) * | 1974-09-24 | 1976-03-29 | Canon Kk | |
US4199013A (en) * | 1977-04-01 | 1980-04-22 | Packard Instrument Company, Inc. | Liquid sample aspirating and/or dispensing system |
US4422151A (en) | 1981-06-01 | 1983-12-20 | Gilson Robert E | Liquid handling apparatus |
US4609017A (en) | 1983-10-13 | 1986-09-02 | Coulter Electronics, Inc. | Method and apparatus for transporting carriers of sealed sample tubes and mixing the samples |
US4815845A (en) * | 1986-04-16 | 1989-03-28 | Westinghouse Electric Corp. | Axial alignment aid for remote control operations and related method |
US5105424A (en) * | 1988-06-02 | 1992-04-14 | California Institute Of Technology | Inter-computer message routing system with each computer having separate routing automata for each dimension of the network |
GB8816982D0 (en) * | 1988-07-16 | 1988-08-17 | Probus Biomedical Ltd | Bio-fluid assay apparatus |
US5468110A (en) | 1990-01-24 | 1995-11-21 | Automated Healthcare, Inc. | Automated system for selecting packages from a storage area |
US5199840A (en) | 1990-08-01 | 1993-04-06 | John Castaldi | Automated storage and retrieval system |
JPH04216886A (en) * | 1990-12-17 | 1992-08-06 | Lintec Corp | Self-adhesive sheet resistant to blistering |
GB2269473A (en) * | 1992-08-08 | 1994-02-09 | Ibm | A robotic cassette transfer apparatus |
JP3305322B2 (en) * | 1992-11-06 | 2002-07-22 | バイオログ,インコーポレーテッド | Liquid and suspension analyzers |
JP3314440B2 (en) * | 1993-02-26 | 2002-08-12 | 株式会社日立製作所 | Defect inspection apparatus and method |
US5539975A (en) * | 1993-09-08 | 1996-07-30 | Allen-Bradley Company, Inc. | Control system and equipment configuration for a modular product assembly platform |
US5552890A (en) * | 1994-04-19 | 1996-09-03 | Tricor Systems, Inc. | Gloss measurement system |
US6800452B1 (en) * | 1994-08-08 | 2004-10-05 | Science Applications International Corporation | Automated methods for simultaneously performing a plurality of signal-based assays |
US5921739A (en) | 1997-02-10 | 1999-07-13 | Keip; Charles P. | Indexing parts tray device |
US5985214A (en) * | 1997-05-16 | 1999-11-16 | Aurora Biosciences Corporation | Systems and methods for rapidly identifying useful chemicals in liquid samples |
DE69836562T2 (en) * | 1997-12-23 | 2007-10-04 | Dako Denmark A/S | CASE FOR PROCESSING A SAMPLE APPLIED ON THE SURFACE OF A CARRIER |
US6455861B1 (en) * | 1998-11-24 | 2002-09-24 | Cambridge Research & Instrumentation, Inc. | Fluorescence polarization assay system and method |
US6271022B1 (en) * | 1999-03-12 | 2001-08-07 | Biolog, Inc. | Device for incubating and monitoring multiwell assays |
US6368475B1 (en) * | 2000-03-21 | 2002-04-09 | Semitool, Inc. | Apparatus for electrochemically processing a microelectronic workpiece |
US6203082B1 (en) * | 1999-07-12 | 2001-03-20 | Rd Automation | Mounting apparatus for electronic parts |
US6360792B1 (en) | 1999-10-04 | 2002-03-26 | Robodesign International, Inc. | Automated microplate filling device and method |
US7133906B2 (en) * | 2000-02-17 | 2006-11-07 | Lumenare Networks | System and method for remotely configuring testing laboratories |
US6701845B2 (en) * | 2000-03-17 | 2004-03-09 | Nikon Corporation & Nikon Technologies Inc. | Print system and handy phone |
JP2001284416A (en) * | 2000-03-30 | 2001-10-12 | Nagase & Co Ltd | Low temperature test device |
US6637473B2 (en) | 2000-10-30 | 2003-10-28 | Robodesign International, Inc. | Automated storage and retrieval device and method |
US6985616B2 (en) | 2001-10-18 | 2006-01-10 | Robodesign International, Inc. | Automated verification and inspection device for sequentially inspecting microscopic crystals |
US7352889B2 (en) * | 2000-10-30 | 2008-04-01 | Ganz Brian L | Automated storage and retrieval device and method |
US20020102149A1 (en) | 2001-01-26 | 2002-08-01 | Tekcel, Inc. | Random access storage and retrieval system for microplates, microplate transport and microplate conveyor |
US6627461B2 (en) * | 2001-04-18 | 2003-09-30 | Signature Bioscience, Inc. | Method and apparatus for detection of molecular events using temperature control of detection environment |
CA2451789C (en) * | 2001-06-29 | 2012-03-27 | Meso Scale Technologies, Llc. | Assay plates, reader systems and methods for luminescence test measurements |
DE10157121A1 (en) | 2001-11-21 | 2003-05-28 | Richard Balzer | Dynamic storage and material flow system has part systems coupled at one or more points |
ITMO20020076A1 (en) | 2002-03-29 | 2003-09-29 | Ronflette Sa | AUTOMATED WAREHOUSE |
US6871922B1 (en) * | 2002-10-28 | 2005-03-29 | Feliks Pustilnikov | Rotating shelf assembly |
GB0415307D0 (en) | 2004-07-08 | 2004-08-11 | Rts Thurnall Plc | Automated store |
2004
- 2004-01-30 US US10/769,462 patent/US7596251B2/en active Active
- 2004-01-30 WO PCT/US2004/003239 patent/WO2004069984A2/en active Application Filing
- 2004-01-30 US US10/769,150 patent/US20040218804A1/en not_active Abandoned
- 2004-01-30 WO PCT/US2004/002717 patent/WO2004069409A2/en active Application Filing
- 2004-01-30 WO PCT/US2004/002617 patent/WO2004071067A2/en active Application Filing
- 2004-01-30 WO PCT/US2004/002633 patent/WO2004070653A2/en active Application Filing
- 2004-01-30 US US10/769,470 patent/US20040260782A1/en not_active Abandoned
- 2004-01-30 US US10/769,461 patent/US20040253742A1/en not_active Abandoned
Cited By (107)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7158888B2 (en) | 2001-05-04 | 2007-01-02 | Takeda San Diego, Inc. | Determining structures by performing comparisons between molecular replacement results for multiple different biomolecules |
US20040022427A1 (en) * | 2002-08-01 | 2004-02-05 | Ming Liau Co., Ltd. | Object inspection system |
US7164783B2 (en) * | 2002-08-01 | 2007-01-16 | Ming Liau Co., Ltd. | Object inspection system |
US20090213110A1 (en) * | 2004-06-25 | 2009-08-27 | Shuhei Kato | Image mixing apparatus and pixel mixer |
US20080063280A1 (en) * | 2004-07-08 | 2008-03-13 | Yoram Hofman | Character Recognition System and Method |
US10007855B2 (en) * | 2004-07-08 | 2018-06-26 | Hi-Tech Solutions Ltd. | Character recognition system and method for rail containers |
US20110280448A1 (en) * | 2004-07-08 | 2011-11-17 | Hi-Tech Solutions Ltd. | Character recognition system and method for shipping containers |
US8184852B2 (en) * | 2004-07-08 | 2012-05-22 | Hi-Tech Solutions Ltd. | Character recognition system and method for shipping containers |
US8194913B2 (en) * | 2004-07-08 | 2012-06-05 | Hi-Tech Solutions Ltd. | Character recognition system and method |
WO2006019582A3 (en) * | 2004-07-14 | 2007-05-03 | Artann Lab Inc | Methods and devices for rapid analysis of liquids |
US7364696B1 (en) * | 2004-07-14 | 2008-04-29 | Artann Laboratories, Inc. | Methods and devices for droplet microchromatography |
WO2006019582A2 (en) * | 2004-07-14 | 2006-02-23 | Artann Laboratories, Inc | Methods and devices for rapid analysis of liquids |
US20060024746A1 (en) * | 2004-07-14 | 2006-02-02 | Artann Laboratories, Inc. | Methods and devices for optical monitoring and rapid analysis of drying droplets |
US9633232B2 (en) | 2004-11-15 | 2017-04-25 | Commvault Systems, Inc. | System and method for encrypting secondary copies of data |
US9411986B2 (en) | 2004-11-15 | 2016-08-09 | Commvault Systems, Inc. | System and method for encrypting secondary copies of data |
US7639260B2 (en) * | 2004-12-15 | 2009-12-29 | Xerox Corporation | Camera-based system for calibrating color displays |
US20060126138A1 (en) * | 2004-12-15 | 2006-06-15 | Xerox Corporation | Camera-based system for calibrating color displays |
US20060126134A1 (en) * | 2004-12-15 | 2006-06-15 | Xerox Corporation | Camera-based method for calibrating color displays |
US7639401B2 (en) | 2004-12-15 | 2009-12-29 | Xerox Corporation | Camera-based method for calibrating color displays |
US20060191251A1 (en) * | 2004-12-18 | 2006-08-31 | Peter Pirro | Harvesting machine with an adjustable chopping means |
US20060165305A1 (en) * | 2005-01-24 | 2006-07-27 | Kabushiki Kaisha Toshiba | Image compression method and image compression device |
US20070023190A1 (en) * | 2005-07-29 | 2007-02-01 | Hall David R | Stab Guide |
US7636466B2 (en) | 2006-01-11 | 2009-12-22 | Orbotech Ltd | System and method for inspecting workpieces having microscopic features |
US20070160283A1 (en) * | 2006-01-11 | 2007-07-12 | Orbotech Ltd | System and method for inspecting workpieces having microscopic features |
US8577171B1 (en) * | 2006-07-31 | 2013-11-05 | Gatan, Inc. | Method for normalizing multi-gain images |
US7826652B2 (en) | 2006-12-19 | 2010-11-02 | Cytyc Corporation | Method for forming an optimally exposed image of cytological specimen |
WO2008079590A1 (en) * | 2006-12-19 | 2008-07-03 | Cytyc Corporation | Method for forming an optimally exposed image of cytological specimen |
US8107675B2 (en) * | 2006-12-29 | 2012-01-31 | Cognex Corporation | Trigger system for data reading device |
US20080158365A1 (en) * | 2006-12-29 | 2008-07-03 | Richard Reuter | Trigger system for data reading device |
US20080235719A1 (en) * | 2007-03-16 | 2008-09-25 | Sharma Yugal K | Image analysis for use with automated audio extraction |
US20090043853A1 (en) * | 2007-08-06 | 2009-02-12 | Yahoo! Inc. | Employing pixel density to detect a spam image |
US20110078269A1 (en) * | 2007-08-06 | 2011-03-31 | Yahoo! Inc. | Employing pixel density to detect a spam image |
US8301719B2 (en) | 2007-08-06 | 2012-10-30 | Yahoo! Inc. | Employing pixel density to detect a spam image |
US7882177B2 (en) * | 2007-08-06 | 2011-02-01 | Yahoo! Inc. | Employing pixel density to detect a spam image |
WO2009026258A1 (en) * | 2007-08-17 | 2009-02-26 | Oral Cancer Prevention International Inc. | Feature dependent extended depth of focusing on semi-transparent biological specimens |
US20120099120A1 (en) * | 2009-07-01 | 2012-04-26 | Hiroaki Okamoto | Exposure condition determining method and surface inspection apparatus |
US8665430B2 (en) * | 2009-07-01 | 2014-03-04 | Nikon Corporation | Exposure condition determining method and surface inspection apparatus |
US9582892B2 (en) * | 2009-09-09 | 2017-02-28 | Canon Kabushiki Kaisha | Radiation imaging apparatus, radiation imaging method, and program |
US20110058727A1 (en) * | 2009-09-09 | 2011-03-10 | Canon Kabushiki Kaisha | Radiation imaging apparatus, radiation imaging method, and program |
US8396876B2 (en) | 2010-11-30 | 2013-03-12 | Yahoo! Inc. | Identifying reliable and authoritative sources of multimedia content |
US10843190B2 (en) | 2010-12-29 | 2020-11-24 | S.D. Sight Diagnostics Ltd. | Apparatus and method for analyzing a bodily sample |
US20130073221A1 (en) * | 2011-09-16 | 2013-03-21 | Daniel Attinger | Systems and methods for identification of fluid and substrate composition or physico-chemical properties |
US10640807B2 (en) | 2011-12-29 | 2020-05-05 | S.D. Sight Diagnostics Ltd | Methods and systems for detecting a pathogen in a biological sample |
US11584950B2 (en) | 2011-12-29 | 2023-02-21 | S.D. Sight Diagnostics Ltd. | Methods and systems for detecting entities in a biological sample |
US9449380B2 (en) | 2012-03-20 | 2016-09-20 | Siemens Medical Solutions Usa, Inc. | Medical image quality monitoring and improvement system |
US10876954B2 (en) * | 2012-03-30 | 2020-12-29 | Sony Corporation | Microparticle sorting apparatus and delay time determination method |
JP2014010136A (en) * | 2012-07-03 | 2014-01-20 | Dainippon Screen Mfg Co Ltd | Image analysis device and image analysis method |
US11042663B2 (en) | 2013-03-12 | 2021-06-22 | Commvault Systems, Inc. | Automatic file encryption |
US9734348B2 (en) | 2013-03-12 | 2017-08-15 | Commvault Systems, Inc. | Automatic file encryption |
US9483655B2 (en) | 2013-03-12 | 2016-11-01 | Commvault Systems, Inc. | File backup with selective encryption |
US9990512B2 (en) | 2013-03-12 | 2018-06-05 | Commvault Systems, Inc. | File backup with selective encryption |
US10445518B2 (en) | 2013-03-12 | 2019-10-15 | Commvault Systems, Inc. | Automatic file encryption |
US11928229B2 (en) | 2013-03-12 | 2024-03-12 | Commvault Systems, Inc. | Automatic file encryption |
US11295440B2 (en) | 2013-05-23 | 2022-04-05 | S.D. Sight Diagnostics Ltd. | Method and system for imaging a cell sample |
US11100634B2 (en) | 2013-05-23 | 2021-08-24 | S.D. Sight Diagnostics Ltd. | Method and system for imaging a cell sample |
US10176565B2 (en) | 2013-05-23 | 2019-01-08 | S.D. Sight Diagnostics Ltd. | Method and system for imaging a cell sample |
US11803964B2 (en) | 2013-05-23 | 2023-10-31 | S.D. Sight Diagnostics Ltd. | Method and system for imaging a cell sample |
US20150001087A1 (en) * | 2013-06-26 | 2015-01-01 | Novellus Systems, Inc. | Electroplating and post-electrofill systems with integrated process edge imaging and metrology systems |
US9809898B2 (en) * | 2013-06-26 | 2017-11-07 | Lam Research Corporation | Electroplating and post-electrofill systems with integrated process edge imaging and metrology systems |
US20220390734A1 (en) * | 2013-07-01 | 2022-12-08 | S.D. Sight Diagnostics Ltd. | Method and system for imaging a blood sample |
US10093957B2 (en) * | 2013-07-01 | 2018-10-09 | S.D. Sight Diagnostics Ltd. | Method, kit and system for imaging a blood sample |
US11434515B2 (en) * | 2013-07-01 | 2022-09-06 | S.D. Sight Diagnostics Ltd. | Method and system for imaging a blood sample |
US20160208306A1 (en) * | 2013-07-01 | 2016-07-21 | S.D. Sight Diagnostics Ltd. | Method, kit and system for imaging a blood sample |
US10831013B2 (en) | 2013-08-26 | 2020-11-10 | S.D. Sight Diagnostics Ltd. | Digital microscopy systems, methods and computer program products |
US9822460B2 (en) | 2014-01-21 | 2017-11-21 | Lam Research Corporation | Methods and apparatuses for electroplating and seed layer detection |
US10669644B2 (en) | 2014-01-21 | 2020-06-02 | Lam Research Corporation | Methods and apparatuses for electroplating and seed layer detection |
US10196753B2 (en) | 2014-01-21 | 2019-02-05 | Lam Research Corporation | Methods and apparatuses for electroplating and seed layer detection |
US10407794B2 (en) | 2014-01-21 | 2019-09-10 | Lam Research Corporation | Methods and apparatuses for electroplating and seed layer detection |
US11100637B2 (en) | 2014-08-27 | 2021-08-24 | S.D. Sight Diagnostics Ltd. | System and method for calculating focus variation for a digital microscope |
US11721018B2 (en) | 2014-08-27 | 2023-08-08 | S.D. Sight Diagnostics Ltd. | System and method for calculating focus variation for a digital microscope |
US10482595B2 (en) | 2014-08-27 | 2019-11-19 | S.D. Sight Diagnostics Ltd. | System and method for calculating focus variation for a digital microscope |
US9720849B2 (en) | 2014-09-17 | 2017-08-01 | Commvault Systems, Inc. | Token-based encryption rule generation process |
US9984006B2 (en) | 2014-09-17 | 2018-05-29 | Commvault Systems, Inc. | Data storage systems and methods |
US9405928B2 (en) * | 2014-09-17 | 2016-08-02 | Commvault Systems, Inc. | Deriving encryption rules based on file content |
US9727491B2 (en) | 2014-09-17 | 2017-08-08 | Commvault Systems, Inc. | Token-based encryption determination process |
US11199690B2 (en) | 2015-09-17 | 2021-12-14 | S.D. Sight Diagnostics Ltd. | Determining a degree of red blood cell deformity within a blood sample |
US11796788B2 (en) | 2015-09-17 | 2023-10-24 | S.D. Sight Diagnostics Ltd. | Detecting a defect within a bodily sample |
US11914133B2 (en) | 2015-09-17 | 2024-02-27 | S.D. Sight Diagnostics Ltd. | Methods and apparatus for analyzing a bodily sample |
US10663712B2 (en) | 2015-09-17 | 2020-05-26 | S.D. Sight Diagnostics Ltd. | Methods and apparatus for detecting an entity in a bodily sample |
US10488644B2 (en) | 2015-09-17 | 2019-11-26 | S.D. Sight Diagnostics Ltd. | Methods and apparatus for detecting an entity in a bodily sample |
US11262571B2 (en) | 2015-09-17 | 2022-03-01 | S.D. Sight Diagnostics Ltd. | Determining a staining-quality parameter of a blood sample |
US9735035B1 (en) | 2016-01-29 | 2017-08-15 | Lam Research Corporation | Methods and apparatuses for estimating on-wafer oxide layer reduction effectiveness via color sensing |
US10497592B2 (en) | 2016-01-29 | 2019-12-03 | Lam Research Corporation | Methods and apparatuses for estimating on-wafer oxide layer reduction effectiveness via color sensing |
US11199558B2 (en) * | 2016-03-22 | 2021-12-14 | Beckman Coulter, Inc. | Method, computer program product, and system for establishing a sample tube set |
US11733150B2 (en) | 2016-03-30 | 2023-08-22 | S.D. Sight Diagnostics Ltd. | Distinguishing between blood sample components |
US11808758B2 (en) | 2016-05-11 | 2023-11-07 | S.D. Sight Diagnostics Ltd. | Sample carrier for optical measurements |
US11099175B2 (en) | 2016-05-11 | 2021-08-24 | S.D. Sight Diagnostics Ltd. | Performing optical measurements on a sample |
US11307196B2 (en) | 2016-05-11 | 2022-04-19 | S.D. Sight Diagnostics Ltd. | Sample carrier for optical measurements |
WO2018078447A1 (en) * | 2016-10-27 | 2018-05-03 | Scopio Labs Ltd. | Digital microscope which operates as a server |
US20200041780A1 (en) * | 2016-10-27 | 2020-02-06 | Scopio Labs Ltd. | Digital microscope which operates as a server |
US10935779B2 (en) * | 2016-10-27 | 2021-03-02 | Scopio Labs Ltd. | Digital microscope which operates as a server |
US20190358623A1 (en) * | 2017-02-07 | 2019-11-28 | Shilps Sciences Private Limited | A system for microdroplet manipulation |
US11276163B2 (en) * | 2017-05-02 | 2022-03-15 | Aivitae LLC | System and method for facilitating autonomous control of an imaging system |
US11321827B2 (en) | 2017-05-02 | 2022-05-03 | Aivitae LLC | System and method for facilitating autonomous control of an imaging system |
US20180322629A1 (en) * | 2017-05-02 | 2018-11-08 | Aivitae LLC | System and method for facilitating autonomous control of an imaging system |
US10841507B2 (en) | 2017-06-26 | 2020-11-17 | Tecan Trading Ag | Imaging a well of a microplate |
CN107818559A (en) * | 2017-09-22 | 2018-03-20 | 太原理工大学 | Crystal is inoculated with condition detection method and the harvester of crystal inoculation status image |
US11614609B2 (en) | 2017-11-14 | 2023-03-28 | S.D. Sight Diagnostics Ltd. | Sample carrier for microscopy measurements |
US11609413B2 (en) | 2017-11-14 | 2023-03-21 | S.D. Sight Diagnostics Ltd. | Sample carrier for microscopy and optical density measurements |
US11921272B2 (en) | 2017-11-14 | 2024-03-05 | S.D. Sight Diagnostics Ltd. | Sample carrier for optical measurements |
US10909755B2 (en) * | 2018-05-29 | 2021-02-02 | Global Scanning Denmark A/S | 3D object scanning method using structured light |
US11010591B2 (en) * | 2019-02-01 | 2021-05-18 | Merck Sharp & Dohme Corp. | Automatic protein crystallization trial analysis system |
US20200271682A1 (en) * | 2019-02-27 | 2020-08-27 | Alpha Space Test and Research Alliance, LLC | Systems and Methods for Environmental Factor Interaction Characterization |
US11295430B2 (en) | 2020-05-20 | 2022-04-05 | Bank Of America Corporation | Image analysis architecture employing logical operations |
US11379697B2 (en) | 2020-05-20 | 2022-07-05 | Bank Of America Corporation | Field programmable gate array architecture for image analysis |
CN113340904A (en) * | 2021-06-01 | 2021-09-03 | 贵州中烟工业有限责任公司 | Method for detecting shrinkages of tobacco flakes |
CN113963513A (en) * | 2021-10-13 | 2022-01-21 | 公安部第三研究所 | Robot system for realizing intelligent inspection in chemical industry and control method thereof |
Also Published As
Publication number | Publication date |
---|---|
WO2004069984A3 (en) | 2005-05-26 |
US20040253742A1 (en) | 2004-12-16 |
WO2004069409A3 (en) | 2009-04-02 |
US20040260782A1 (en) | 2004-12-23 |
WO2004071067A2 (en) | 2004-08-19 |
US20040256963A1 (en) | 2004-12-23 |
WO2004071067A3 (en) | 2005-01-27 |
US7596251B2 (en) | 2009-09-29 |
WO2004070653A3 (en) | 2005-01-06 |
WO2004069984A2 (en) | 2004-08-19 |
WO2004070653A2 (en) | 2004-08-19 |
WO2004069409A2 (en) | 2004-08-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20040218804A1 (en) | Image analysis system and method | |
JP6437947B2 (en) | Fully automatic rapid microscope slide scanner | |
US7433025B2 (en) | Automated protein crystallization imaging | |
JP2017194700A5 (en) | ||
JP2017194699A5 (en) | ||
JP2005520174A5 (en) | ||
Dolleiser et al. | A fully automated optical microscope for analysis of particle tracks in solids | |
JP4779149B2 (en) | Operating method of laser scanning microscope |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: DISCOVERY PARTNERS INTERNATIONAL, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AFFLECK, RHETT L.;LEVIN, ROBERT K.;LILLIG, JOHN E.;AND OTHERS;REEL/FRAME:014677/0469 Effective date: 20040429 |
|
AS | Assignment |
Owner name: NEXUS BIOSYSTEMS, INC., CALIFORNIA Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNOR:DISCOVERY PARTNERS INTERNATIONAL, INC.;REEL/FRAME:017366/0939 Effective date: 20060323 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |