WO2012117103A2 - System and method to index and query data from a 3d model - Google Patents


Info

Publication number
WO2012117103A2
Authority
WO
WIPO (PCT)
Prior art keywords
voxels
interest
voxel
location
region
Prior art date
Application number
PCT/EP2012/053668
Other languages
French (fr)
Other versions
WO2012117103A3 (en)
Inventor
André ELISSEEFF
Ulf Holm NIELSEN
Hlynur TRYGGVASON
Olivier SIEGENTHALER
Original Assignee
Nhumi Technologies Ag
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nhumi Technologies Ag filed Critical Nhumi Technologies Ag
Publication of WO2012117103A2 publication Critical patent/WO2012117103A2/en
Publication of WO2012117103A3 publication Critical patent/WO2012117103A3/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29: Geographical information databases

Definitions

  • The present invention relates to a method for indexing and querying data using a 3D model and volumetric information.
  • The invention further relates to a corresponding system and a corresponding computer program product.

Description of related art
  • "Statistically Interesting Geographical Information Based on Queries to a Geographic Search Engine" discloses a system and method to retrieve documents from a query composed of a free text and of a domain.
  • The latter refers to a location of the document, such as an address on a geographical map, but does not refer to any 3D coordinates or to a volume from a 3D model.
  • It furthermore describes a graphical method to define such a location by means of an area drawn by the user on a map.
  • Such an area defines a set of locations, but it does not define a volume, nor does it contain any depth information.
  • The disclosed method and system cannot therefore be applied to access electronic records from a 3D model.
  • 3D Geographic Information Systems use volumetric data, but mostly data related to geospatial coordinates.
  • U.S. Pat. No. 6,915,310 to Gutierrez et al. discloses a system storing 3D coordinates or volumetric information attached to records in a database. The query is a single location in 3D space, and the retrieval method consists in returning the list of documents that are the closest to the query.
  • Another prior-art program discloses a system to manage information using maps and dots over a 2D or 3D representation of human bodies, engines, or a corporation.
  • Such a method does not use any volumetric information and assumes that all information or documents are already mapped on the visual representation.
  • A query can be performed, but it is directly encoded as a node in a graph, and there is no mechanism to create one from a generic user input: the user needs to select an icon on the 3D model or an existing object that is directly linked with a so-called similarity network.
  • The 3D context is therefore not fully used: if the 3D model is rotated, the query will not change, even though the user might be looking at another set of 3D objects. More specifically, the depth information is not used, which can significantly decrease the quality of the results. Consider the case where the 3D model is a virtual body seen from the side: clicking on the arm in the front should retrieve medical records about this arm only. The approach of Eberholst et al. would retrieve documents for both arms, because it does not use the depth information and the fact that one arm is behind and the other one is in front.
  • The Visible Human Project has provided a full 3D representation of the adult male and female human anatomy to the academic community.
  • From the latter, several applications were derived, including brain atlases, that is, maps of the brain where locations are tagged, for instance, with cognitive functions. These define a user interface for users to select what is already shown on screen over a virtual model of the human anatomy, but they cannot be used as such to retrieve electronic records.
  • The proposed method enables the use of any 3D representation to access and work with electronic records.
  • A key feature of the proposed method is that 3D queries are simply defined over the 3D model via a click, a touch, or any other user input, and that queries are represented directly as volumes or parts of the 3D model. This allows the user to intuitively express queries that would otherwise be difficult to create.
  • Consider a patient who feels pain in his throat. The patient does not know the medical name for such pain and cannot write the correct anatomical term for a throat.
  • A simple click on the throat over a virtual body would provide a direct way to query any document for this patient, including those mentioning the throat and those mentioning the larynx, the thyroid cartilage, the thyrohyoid membrane, etc.
  • The association between an electronic record and the 3D model is done via a generic indexing system that can be either automatic or manual, and that associates with each electronic record a volume extracted from the 3D model.
  • The information retrieval then consists in comparing the volumes associated with the electronic records and the volume representing the query.
  • Fig. 1 shows the different components of this invention;
  • Fig. 2 shows voxelization and 3D modelling using polygonal meshes or using a voxel representation;
  • Fig. 3 shows a grid of the 3D space composed of a set of small cubes, also called voxels;
  • Fig. 4 shows the transformation of annotations (i)-(iii) into annotations of type (iv);
  • Fig. 5 shows how the 3D models are represented on a 2D screen;
  • Fig. 6a shows how to create a 3D query with a pointer in a 2D screen;
  • Fig. 6b first shows the results of a query as a bag of voxels overlaid on the 3D model (left side of the figure). The values of the voxels are projected onto the screen (right part of the figure), resulting in an image highlighting the areas of most relevance;
  • Fig. 7 shows the heat map, which is overlaid with lines and numbers referring to the top results of the search;
  • Fig. 8 shows a heat map generated for a 3D model of the human anatomy.
  • This invention refers to a situation where a person, i.e. an end user, is using an electronic device such as a computer or a mobile device, including but not limited to a laptop, a desktop, a smartphone or a tablet PC, and visualizes a 3D model of some object of the real world. It can be for instance a 3D model of the human body, a 3D model of a turbine, or any other 3D model. It can also be any 2D projection of a 3D model or a cross section generated from the 3D model.
  • A set of electronic documents or records related to the 3D model is stored in a storage device that can be accessed, directly or indirectly, from the device where the 3D model is shown to the user.
  • The storage device can furthermore use a database software or any other data storage software system to enable other applications or users to access the electronic documents.
  • The documents can be of any type, including but not limited to text, audio, video, images and other structured or unstructured types. It is assumed in this invention that the electronic documents are related in one way or another, preferably by their content, to the 3D model.
  • In medicine, the electronic documents can refer to the medical histories, the laboratory results, the medical images, or any other medical information of a patient or of a group of patients.
  • In the case of a turbine, the electronic documents can refer to maintenance reports or any interventional report made on specific parts of the turbine. They can also refer to educational materials explaining how the turbine should be repaired or should function in normal modes.
  • The end user then wishes to retrieve documents or records simply by indicating graphically, over the 3D model or over any 2D image generated from the 3D model (projection or cross section), a region of interest, wherein the region of interest comprises a single point or a series of points, areas and/or volumes the user is interested in.
  • The system then retrieves the documents relevant to the defined points, areas or volumes.
  • The notion of relevance is dependent on the application. Examples of relevance include but are not limited to: diseases whose symptoms are expressed on specific points, areas or volumes of a 3D model of the human body; clinical findings that are found on specific body regions; or genes that are over-expressed in specific tissues.
  • A database and/or an Electronic Medical Record software provides all the medical records about the patients.
  • The clinician uses a computer to visualize a 3D model of the human body and is interested in all the diagnoses related to the heart.
  • This invention provides a system so that the clinician can retrieve the documents simply by clicking on the heart of a 3D model of the human body, or of any 2D image derived from the 3D model of the human body. A click on the heart will perform a query and will fetch all documents that have previously been identified as related to the heart.
  • A 3D model is any data stored on a device that models a space in three dimensions (3D).
  • Such 3D models include but are not limited to (i) a set of slice images acquired by medical devices, such as CT scanners or MRI machines, (ii) the output of a three-dimensional scanner, or (iii) any three-dimensional surface.
  • A 3D model is therefore an electronic record.
  • V will always denote a real volume.
  • Examples of real volumes include the human body and all its parts, organs, etc., a car or any other manufactured product with all its components, a building with its corridors and rooms, and so on. This is in contrast with a surface such as a geographical map.
  • The data corresponding to a volume is usually encoded using a coordinate system that defines the position of each point in the volume.
  • The Cartesian system completely defines the 3-dimensional space, as any real volume, surface or point can be parametrized with (x, y, z) ∈ ℝ³ coordinates, where ℝ denotes the set of real numbers.
  • A 3D model stored with the Cartesian system is usually defined by the surface surrounding it.
  • 3D modelling is the process of developing such a surface, also called a 3D model.
  • Common representations of 3D models include polygonal meshes (points in 3D connected by line segments and/or grouped as triangles) and NURBS (spline curves); they are stored in formats such as .fbx, .obj, .max or any other format used to store polygonal meshes with texture information. These surfaces are often associated with texture or colour information.
  • Let M(V) denote the set of 3D models stored as polygonal meshes corresponding to the real volume V.
  • The transformation M can be interpreted as the modelling step, or the parameterization of a real volume into an electronic representation.
  • Fig. 2 depicts the modelling of a sphere 1, which is modelled as a coarse polygonal mesh 8 and 9, denoted by M(V), by the transformation 12.
  • An element of M(V) is usually an approximation of V, as it is built of polygons and not all real volumes can be exactly represented as polygonal meshes (e.g. a sphere).
  • A volume can also be represented by storing each point in ℝ³ that belongs to the volume. As the number of points can be infinite, the volumetric representation is often obtained by considering a regular grid in 3D and by storing the indices of the elements of the grid which intersect with the volume. Such a representation is called a voxel representation of the volume.
  • A voxel is the basic cube forming the grid, as depicted in Fig. 3, and is uniquely defined by a three-dimensional index (i, j, k) ∈ ℤ³, where ℤ is the set of integers, indicating which element of the grid it corresponds to.
  • A voxel can be associated with any kind of data, such as colour information or the object the voxel belongs to. Note that the number of voxels is potentially infinite, as the grid spans the whole space. Most if not all volumes, though, are bounded and therefore fit into a finite grid. For the sake of simplicity, we will assume that all grids and all voxel representations refer to bounded volumes.
  • The set of indices (i, j, k) is then assumed to be finite.
  • A grid is defined by the size of its voxels, which depends on the length e of its edges, indicated by 1 in the figure.
  • Let G_e(V) denote the set of voxels representing the real volume V, obtained from the regular grid where each voxel has edges of length e; G_e(V) is also called a voxel representation of the volume.
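The grid G_e(V) can be sketched in a few lines of Python. Everything here is illustrative: the patent does not prescribe an implementation, and the grid is assumed to be a Cartesian grid anchored at the origin with cubic voxels of edge length e.

```python
import math

def voxel_index(point, e):
    """Return the integer grid index (i, j, k) of the voxel of edge
    length e containing the given 3D point (grid anchored at the origin)."""
    x, y, z = point
    return (math.floor(x / e), math.floor(y / e), math.floor(z / e))

def voxelize_points(points, e):
    """G_e of a point set: the set of grid voxels intersected by the points."""
    return {voxel_index(p, e) for p in points}

print(voxel_index((0.5, 1.2, -0.1), 1.0))  # → (0, 1, -1)
```

A finer grid (smaller e) gives a more faithful voxel representation at the cost of more voxels.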
  • Fig. 2 shows the voxel representation 7 of the sphere 3.
  • A voxel representation (6 in Fig. 2) can be built manually by using a voxel editor. A human user can for instance manually define each voxel and tag it with colour information. Note that a voxel representation can also be obtained from a polygonal mesh m as G_e(m). This is depicted by 10 in Fig. 2. It consists of all the cubes of the grid that intersect with the elements of the mesh m.
  • Both the polygonal mesh and the voxel representation can furthermore be split into sub parts. This is often the case for 3D models of the human anatomy or of manufactured products where the vertices of the mesh or the voxels are grouped together when they belong to a single object (e.g. single anatomical structure or a subpart of the manufactured object).
  • Let m denote a polygonal mesh and m_1, …, m_c its subparts,
  • and let v denote a corresponding voxel representation and v_1, …, v_c its corresponding voxel subparts.
  • A 3D model and its subparts are assumed to be attached to short textual descriptions, referred to as "labels" in the context of this invention.
  • A label can be a free text, or a unique identifier from a terminology, or other data that provides more information about the 3D model or the volume it represents.
  • Each subpart of the 3D body could for instance be tagged with a set of unique identifiers from a standard medical dictionary such as the Systematized Nomenclature of MEDicine - Clinical Terms or the Foundational Model of Anatomy. Both dictionaries can indeed be used to describe the medical name of an anatomical structure with a controlled vocabulary that is shared across medical software applications.
  • A 3D model is therefore defined as
  • The data related to this invention is any electronic record relevant to the 3D model at hand. It includes but is not limited to images, video, audio files, and texts. In the case of medicine, the electronic records can consist of all the records representing the medical history of patients, such as medical images, medical notes, medications, lab test values and more. They can also represent all the articles of a medical encyclopaedia describing any information about a specific organ or the whole human anatomy.
  • These electronic records are (i) annotated with the same labels t_1, …, t_m as the ones used to tag the 3D model, or (ii) annotated with parts of the 3D model m_1, …, m_c, or (iii) annotated with vertices of the polygonal mesh, or (iv) annotated with individual voxels directly.
  • Any of the four annotation types (i)-(iv) is defined by (a) a tag or a term or a part of the 3D model as aforementioned, (b) a location in the electronic record where that annotation is relevant, and (c) a score or a number measuring how relevant the annotation is.
  • The part (a) of the annotation will be either (i) a term in t_1, …, t_m, or (ii) an element of m_1, …, m_c, or (iii) a set of vertices from the polygonal mesh of the 3D model m, or (iv) a set of voxels extracted from the 3D model m.
  • The Electronic Record column contains a reference to the electronic record. This can be a URL, or the row of a database, or any other pointer to where the electronic record is stored.
  • The Location column refers to an offset in the electronic record.
  • Such an offset depends on the type of the electronic record (e.g. video, text or audio). In this example, locations would typically correspond to offsets in a text starting from the beginning of the electronic record. The location can also be an anchor identifying a section of the electronic record when the latter is an HTML page, or any other object that identifies a part of the electronic record.
  • The Term/Tag column contains the entities which are used to annotate the electronic records: they store the tag information and can be plain text or references to the elements of a terminology.
  • The Score column contains a measure of the relevance of the tag to the electronic record: the higher, the more relevant. At this stage, such a score is assumed to be given by a third system, including a user manually assessing the relevance of the tag to the content of the electronic record, or an algorithm.
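The four columns described above can be sketched as a flat table of tuples. The field names and the emr:// record references below are hypothetical, chosen only to mirror the Electronic Record / Location / Term-Tag / Score columns.

```python
from collections import namedtuple

# One row of the annotation table; field names mirror the four columns
# (illustrative names, not taken from the patent text).
Annotation = namedtuple("Annotation", ["record", "location", "tag", "score"])

table = [
    Annotation("emr://patient42/note7", 120, "Heart structure", 0.9),
    Annotation("emr://patient42/note7", 345, "Kidney", 0.6),
]

# Highest-scoring annotation for this record:
best = max(table, key=lambda a: a.score)
print(best.tag)  # → Heart structure
```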
  • Annotations of type (i) can be produced with state-of-the-art information retrieval systems, including a text processing unit that can extract tags from textual documents or metadata.
  • The annotations of types (ii) to (iv) are specific to the proposed approach and can be derived from annotations of type (i), as explained below in the description of the Indexing Module.
  • Fig. 1 shows the overall system to index and retrieve electronic records from a 3D model.
  • The system is composed of an indexing module, a querying module, a retrieval module and a presentation module, which are described in the following.
  • Annotations and Electronic Records are objects that serve as input or output of the aforementioned modules.
  • The purpose of the indexing module is to attach a list of relevant annotations of type (iv), as described previously, to the electronic records d.
  • This module generates (electronic record, annotation) pairs encoded as tuples (d, l, s, v), where d is an electronic record, l is a location, s is a score in ℝ, and v is a voxel.
  • The purpose is to index the electronic records with volumetric information directly, where this information is extracted from the 3D model and from the existing annotations.
  • The indexing module therefore takes a set of existing annotations of type (i)-(iii) as input.
  • Each such annotation already has a score associated with it, which represents its relevance for the electronic record it is associated with: the higher, the more relevant.
  • In the annotation table, a_j would be stored in the Term/Tag column, l_j in the Location column and s_j in the Score column.
  • The corresponding values in the Electronic Record column will always be d, as the annotations are selected for a single electronic record at this stage.
  • Such a map replaces, for instance, an annotation with a label such as "Heart structure" by an annotation with a volume of the 3D heart from the virtual model of the human body.
  • The map g can be defined in at least two ways, as shown by the following two embodiments.
  • In the first embodiment, the map takes as input annotations containing tags w_j that correspond to parts of the 3D model or to voxels directly. If the w_j are graphical elements such as sets of vertices or 3D objects, the output is the set of voxels derived from the voxelization step, as depicted by 6 or 10 in Fig. 2. Such a map is represented by the maps from 3 to 6 or from 4 to 7 in Fig. 4 (the location information l has been removed to make the description simpler).
  • If a tag corresponds to a subpart m_i of the 3D model, the returned set of voxels is also derived from the voxelization of m_i.
  • The w_j can also be anatomical terms such as "heart structure".
  • The map can reduce the score to indicate that the voxels represent more than just a kidney.
  • In the second embodiment, the map is extended to receive as input any term or label from a controlled vocabulary that is not necessarily directly linked to the 3D model. This is an extension of the first embodiment.
  • The parts w_j of the annotations taken as input are texts or labels from a controlled vocabulary or a set of reference words.
  • The annotations can be done using the disease vocabulary of the Systematized Nomenclature of MEDicine.
  • Such a vocabulary contains the standard names for most existing diseases.
  • Other controlled vocabularies, such as the International Classification of Diseases, could be used.
  • A medical electronic record is usually annotated with such vocabularies to store the diagnosis of a patient in a standard form.
  • The difference with the first embodiment is that the controlled vocabulary is not used a priori to tag the 3D model directly.
  • The "part of" map can be used to map w_j to "Heart", which is one of the terms t_j used to tag the 3D heart.
  • Other maps can be used, based on the application and the type of annotations.
  • The composition of such a map with the map described in the first embodiment creates a new map from generic tags onto a set of voxels, as represented in Fig. 4 by the maps from 1 to 2 and from 2 to 3.
  • The resulting scores assigned to the annotations with voxels can be different from the initial scores of the annotations used as input.
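A minimal sketch of the composed map g of the second embodiment: a hypothetical "part of" vocabulary map is chained with a label-to-voxels map obtained from voxelization. All vocabulary entries, voxel indices and the 0.8 score-reduction factor are made up for illustration.

```python
# Hypothetical "part of" map from generic clinical terms to the labels
# tagging the 3D model, and a label -> voxels map produced by voxelization.
part_of = {"Myocardial infarction": "Heart", "Renal failure": "Kidney"}
label_to_voxels = {
    "Heart":  {(10, 4, 7), (10, 5, 7)},
    "Kidney": {(8, 2, 5)},
}

def g(term, score):
    """Composed map: generic term -> (set of voxels, adjusted score)."""
    label = part_of.get(term)
    if label is None:
        return set(), 0.0
    # The score may be reduced to reflect the indirection through "part of".
    return label_to_voxels[label], 0.8 * score

voxels, s = g("Renal failure", 1.0)
print(voxels, s)  # → {(8, 2, 5)} 0.8
```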
  • The output of the indexing module is a list of tuples, which will be called the "Voxel Index Table" in the following sections.
  • The Voxel Index Table associates with each location within an electronic record a set of voxel-score pairs (v, s), which can be represented as a vector in ℝⁿ as follows: a voxel is uniquely defined by its coordinates (i, j, k) in the grid, as represented in Fig. 3. These coordinates can be re-indexed from 1 to the size of the set of such coordinates. Let us denote by n the size of this set and by f(i, j, k) ∈ {1, …, n} the new numbering of the voxels.
  • A voxel representation can then be encoded as a vector in ℝⁿ, where each coordinate represents a triplet (i, j, k) and therefore refers to a unique voxel in the voxel grid represented in Fig. 3: a single voxel at location (i, j, k) with a score s is then represented by a vector with zeros everywhere except at coordinate f(i, j, k), where the value is set to s.
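The re-indexing f(i, j, k) and the resulting vector in ℝⁿ can be sketched as follows, assuming a bounded grid of known shape; row-major flattening is one possible choice of f, not the one prescribed by the patent.

```python
def flatten(i, j, k, dims):
    """Re-index a voxel coordinate (i, j, k) of a bounded grid of shape
    dims = (ni, nj, nk) into a single number in 0..n-1 (the map f)."""
    ni, nj, nk = dims
    return (i * nj + j) * nk + k

def bag_of_voxels(voxel_scores, dims):
    """Encode {(i, j, k): score} as a dense vector in R^n with zeros
    everywhere except at the flattened voxel coordinates."""
    ni, nj, nk = dims
    v = [0.0] * (ni * nj * nk)
    for (i, j, k), s in voxel_scores.items():
        v[flatten(i, j, k, dims)] = s
    return v

v = bag_of_voxels({(0, 1, 0): 0.5}, dims=(2, 2, 2))
print(v)  # → [0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0]
```

In practice a sparse representation (only the non-zero coordinates) would be stored, since most voxels of the grid carry no score.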
  • A voxel can a priori be associated with more than one score: the voxel can indeed be in the voxel representations of several parts of the 3D model, such as "Kidney" and "Urinary System" in a 3D model of the human anatomy. Both parts can be used to tag an electronic record on renal failure, leading to overlapping voxels stored in the Voxel Index Table.
  • The scores can be combined either by taking the maximum, or by considering an average, or by any other statistic best representing the set of scores (e.g. taking the median).
  • The set of voxel-score pairs (v, s) associated with each datum can therefore be understood as a real vector in ℝⁿ, where all values are set to zero except for the coordinates corresponding to the voxels.
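The score-combination statistics mentioned above (maximum, average, median) can be sketched as:

```python
import statistics

def combine_scores(scores, how="max"):
    """Combine the scores of overlapping voxels (e.g. a voxel tagged both
    "Kidney" and "Urinary System") into a single relevance value."""
    if how == "max":
        return max(scores)
    if how == "mean":
        return statistics.mean(scores)
    if how == "median":
        return statistics.median(scores)
    raise ValueError(how)

print(combine_scores([0.6, 0.9], "max"))   # → 0.9
print(combine_scores([0.6, 0.9], "mean"))  # → 0.75
```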
  • Besides the Voxel Index Table, the indexing module generates all the Bags of Voxels associated with the electronic records and their annotations. Such data is called the "Bag of Voxels Table" and is another representation of the "Voxel Index Table".
  • One row in the Bag of Voxels Table corresponds to an electronic record and a location. Many rows can therefore refer to the same electronic record.
  • This indexing approach does not exclude an indexing where no location information is used.
  • In that case, the location variable l is set to a constant, shared across all electronic records, that represents the whole content of each electronic record.
  • The second module is the querying module: the user interacts with a projection plane to define a query, which is then compared to the Bag of Voxels representations of the electronic records.
  • The 3D model is projected onto what is called a projection plane.
  • A projection plane is defined as a plane, represented by 1 in Fig. 5, onto which the 3D model, including colour or texture information, is projected. The user sees the projection plane on the screen. The projection is shown by the dotted line going from the 3D scene 2 to a view point defined by the camera 3 and intersecting the plane 1 where the projection is defined.
  • The basic step to define a 3D query is to create a volume that will intersect the existing 3D model. This can be done by interacting with the projection plane, as described in Fig. 6.
  • The user first draws a closed curve on the projection plane 1. This can be a rectangle, or a circle, or any other closed curve that is deemed relevant by the user. Such a curve can be defined with a mouse pointer, or by touching the screen when a touch screen is used.
  • Given the rectangle 3, the projection process is inverted to create the volume 4 whose projection leads to the surface defined by the closed curve 3.
  • This defines the parallelepiped 4 touching the sphere from below.
  • The whole 3D scene containing all the 3D models is bounded (it is represented in Fig. 6 by the cube 2 including the 3D sphere). The intersection of the sphere and the volume is depicted as 5 in Fig. 6.
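Inverting the projection can be sketched for the simplest case: an orthographic projection along the z axis, where a rectangle drawn on the projection plane extrudes to an axis-aligned parallelepiped. The orthographic camera is an assumption made for illustration; a perspective camera would produce a frustum instead.

```python
import itertools
import math

def box_voxels(rect, z_range, e=1.0):
    """Invert an orthographic projection: a rectangle (x0, x1, y0, y1)
    drawn on the projection plane, extruded along the viewing axis over
    z_range = (z0, z1), yields an axis-aligned box; return the grid
    voxels (edge length e) inside it."""
    x0, x1, y0, y1 = rect
    z0, z1 = z_range
    rng = lambda a, b: range(math.floor(a / e), math.ceil(b / e))
    return set(itertools.product(rng(x0, x1), rng(y0, y1), rng(z0, z1)))

vox = box_voxels((0, 2, 0, 1), (0, 1), e=1.0)
print(len(vox))  # → 2
```

Intersecting this voxel set with the voxelized 3D model yields the region depicted as 5 in Fig. 6.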
  • Such a Bag of Voxels can be composed of 0-1 coordinates, the non-zero coordinates corresponding to the voxels that are in the volume defined by the user.
  • The Bag of Voxels can also have real coordinates that indicate how relevant each voxel is to the query (the greater, the more relevant). Such relevance can for instance be based on the distance of the voxel to the projection plane.
  • Let d(v) denote the distance from a voxel v to the projection plane where the user has defined the query (as indicated by 3 in Fig. 6).
  • The result of the querying module is thus a Bag of Voxels (q_1, …, q_n) ∈ ℝⁿ, where q_j is non-zero when it corresponds to a voxel intersecting the volume of interest and where the value of q_j is determined by how far the voxel is from the projection plane.
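A sketch of this querying step. The text only states that q_j depends on the distance d(v) to the projection plane, so the weight exp(−d) used below is one possible decreasing function, not the prescribed one; the plane is assumed to be z = plane_z, with distance measured in voxel units.

```python
import math

def query_bag(voxels_in_volume, plane_z):
    """Bag of Voxels for a query: each voxel intersecting the user-defined
    volume gets a weight decreasing with its distance to the projection
    plane (assumed here to be the plane z = plane_z)."""
    return {v: math.exp(-abs(v[2] - plane_z)) for v in voxels_in_volume}

q = query_bag({(3, 3, 0), (3, 3, 1), (3, 3, 4)}, plane_z=0)
# The voxel closest to the plane gets the largest weight.
print(max(q, key=q.get))  # → (3, 3, 0)
```

This depth weighting is what lets a click on the front arm of a body seen from the side outrank the occluded back arm.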
  • The information retrieval step (retrieving step) consists of computing similarities between the Bag of Voxels representation of the query and those of the (electronic record, location) pairs stored in the Bag of Voxels Table.
  • The retrieving module then returns the (electronic record, location) pairs from the Bag of Voxels Table whose Bags of Voxels are the most similar to the query.
  • The notion of similarity is very generic in this context and can, by analogy with the bag-of-words approach in the standard information retrieval domain, correspond to many mathematical definitions.
  • Such similarity functions include the cosine or the dot product between the real vectors; they also include sim(b_1, b_2) = exp(−‖b_1 − b_2‖²), where b_1 and b_2 are two Bag of Voxels vectors and exp is the exponential function.
  • Another option is the VF.IMF weighting (Voxel Frequency with Inverse Model Frequency), defined by analogy with the TF.IDF weighting (Term Frequency with Inverse Document Frequency) used in the Bag of Words representation to retrieve documents from textual queries.
  • The i-th coordinate of the VF.IMF representation of a Bag of Voxels b is VF.IMF_i = b_i × log(IMF_i), where b_i is the value of the i-th coordinate of the Bag of Voxels representation and IMF_i = #D / #D_i, where
  • #D is the total number of (electronic record, location) pairs, that is, the total number of Bags of Voxels in the Bag of Voxels Table, and
  • #D_i is the number of Bags of Voxels whose i-th coordinate is non-zero.
  • A set of similarity measures over two Bags of Voxels b_1 and b_2 can therefore be defined as f(VF.IMF_1, VF.IMF_2), where VF.IMF_1 (resp. VF.IMF_2) is the VF.IMF representation of the vector b_1 (resp. b_2).
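The VF.IMF weighting and a cosine similarity over sparse Bag of Voxels vectors can be sketched as follows; dictionaries keyed by voxel index stand in for the vectors in ℝⁿ.

```python
import math

def vf_imf(bags):
    """VF.IMF weighting: bags is a list of Bag of Voxels dicts
    {voxel: value}; each value is multiplied by log(#D / #D_i),
    the analogue of the IDF term in TF.IDF."""
    n_docs = len(bags)  # #D
    df = {}             # #D_i per voxel
    for b in bags:
        for v in b:
            df[v] = df.get(v, 0) + 1
    return [{v: x * math.log(n_docs / df[v]) for v, x in b.items()}
            for b in bags]

def cosine(b1, b2):
    """Cosine similarity between two sparse Bag of Voxels vectors."""
    dot = sum(x * b2.get(v, 0.0) for v, x in b1.items())
    n1 = math.sqrt(sum(x * x for x in b1.values()))
    n2 = math.sqrt(sum(x * x for x in b2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

bags = [{(1, 1, 1): 1.0, (1, 1, 2): 1.0}, {(1, 1, 1): 1.0}]
w = vf_imf(bags)
# (1,1,1) appears in every bag, so its IMF weight is log(2/2) = 0:
print(w[1])  # → {(1, 1, 1): 0.0}
```

As with IDF in text retrieval, voxels that occur in almost every record (e.g. the whole-body volume) are down-weighted relative to discriminative ones.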
  • The retrieval module is therefore, mutatis mutandis, identical to the retrieval module that is part of an information retrieval system where the query is textual and the electronic records are tagged or annotated with textual terms.
  • The main difference here is that the indexing and the querying are done with a voxel representation and a Bag of Voxels rather than a Bag of Words representation.
  • The number r is the number of Bags of Voxels which are deemed relevant to the query at hand, i.e. that have a similarity above a pre-defined threshold.
  • The system can retrieve the list of (electronic record, location) pairs that are the most relevant.
  • The "relevance score" of an electronic record can then be derived by aggregating the similarities over all (electronic record, location) pairs corresponding to that electronic record. Such aggregation can be performed using the maximum value over the similarities, or any other statistic representing that set of similarities.
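Aggregating (electronic record, location) similarities into one relevance score per record, using the maximum as suggested above, can be sketched as:

```python
def record_relevance(similarities, how=max):
    """Aggregate (record, location) similarities into one relevance score
    per electronic record, e.g. by taking the maximum."""
    per_record = {}
    for (record, location), sim in similarities.items():
        per_record.setdefault(record, []).append(sim)
    return {r: how(sims) for r, sims in per_record.items()}

sims = {("note7", 120): 0.9, ("note7", 345): 0.4, ("note9", 10): 0.7}
print(record_relevance(sims))  # → {'note7': 0.9, 'note9': 0.7}
```

Passing a different aggregator (e.g. statistics.mean) changes the statistic without changing the indexing or retrieval steps.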
  • The set of results returned by the retrieval module can be represented either as a list that is shown to the user as plain text, or as a set of voxels that are shown over the 3D model by colouring each voxel according to a value representing the relevance of the voxel to the query.
  • The user would typically have access to the list of electronic records sorted by descending similarity, as returned by the retrieval module.
  • The location information is then used to highlight the part of the electronic record that is the most relevant to the query.
  • Such a visualization is depicted as 1 in Fig. 7.
  • To build the voxel overlay, it is necessary to compute a value for each voxel from the results returned by the retrieval module.
  • Such a value will represent the relevance of the voxel with respect to the query.
  • A value for each voxel can be derived by aggregating the similarities of all the Bags of Voxels containing a non-zero value for the voxel of interest. Such aggregation can be the maximum value over the similarities, or any other statistic, such as the median or the average, representing that set of similarities.
  • The list of (voxel, relevance score) pairs defines a new Bag of Voxels v, which can be represented as an overlay on the original 3D model, highlighting the areas which are the most relevant. This is seen in Fig. 6b, where the bag of voxels v is represented as 3, the dark cubes representing the values of the coordinates of the vector v. Note that the voxels corresponding to the zero coordinates of v are not represented. The voxels v are laid over the 3D model, leading to a representation 4 where the voxelization of the 3D model is also depicted. The user sees a "projection" 6 of the bag of voxels on the projection plane 5. This is depicted by 7 in the right side of Fig. 6b.
  • One point on the projection plane corresponds to all the voxels which are projected onto this point, that is, to the set of voxels intersecting the line perpendicular to the projection plane and going through the point.
  • The relevance scores of the individual voxels intersecting the line are then combined with a function to compute the colour or intensity of the point on the projection plane. This can be done for instance as follows: let (v_i1, …, v_ik) be the relevance scores of the k voxels intersecting the line perpendicular to the point or pixel i in the projection plane. The colour c_i of point i can then be computed as:
  • c_i = Σ_j w_ij v_ij, a weighted sum of the relevance scores, where w_i1, …, w_ik are real numbers weighting the contributions of the relevance scores;
  • c_i = max_j w_ij v_ij, a weighted maximum over all the relevance scores;
  • or, more generally, c_i = f(v_i1, …, v_ik), a total function of the relevance scores of the voxels, where f is any function taking all the relevance scores with the corresponding list of voxels as input.
  • Note that the term "colour" is used here to denote a single real number that will later be used to compute the actual colour or intensity of the pixel on screen; it does not yet refer to a representation of a colour such as an RGB encoding.
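The per-pixel colour c_i can be sketched directly from the weighted-sum and weighted-maximum formulas above; the example weights, which could for instance decrease with voxel depth, are made up for illustration.

```python
def pixel_color(scores, weights=None, how="sum"):
    """Combine the relevance scores of the voxels projected onto one pixel
    into a single 'color' value c_i: a weighted sum or a weighted maximum."""
    if weights is None:
        weights = [1.0] * len(scores)
    weighted = [w * v for w, v in zip(weights, scores)]
    return sum(weighted) if how == "sum" else max(weighted)

# Three voxels project onto the same pixel; under the weighted maximum,
# the heavily down-weighted 0.9 score no longer dominates.
print(pixel_color([0.4, 0.9, 0.2], weights=[1.0, 0.5, 0.25], how="max"))  # → 0.45
```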
  • the projection of the bag of voxels results in a set of colors or intensities attached to each pixel of the projection plane. These colors are further depicted onto a screen with a pre-defined color mapping that provides an image as depicted by 7 in Fig. 6a. Such image shows the areas of highest relevance based on the 3D query and for the projection plane. As the user manipulates the 3D model and the projection plane changes with the camera, the scores, as computed by this module, changes.
  • the presentation step can also offer to specifically highlight the top k results, k being defined by the end user. This is done by adding visual signs on the projection plane, showing a set of numbers or signs which refer to a textual description of the top electronic records retrieved by the system, as indicated by 2 in Fig. 7. As described previously, the top k data are all associated with a bag of voxels, which is highlighted on the heat map by graphical signs.
  • a paging mechanism is added to look at the next top k results. It is depicted by 3 in the bottom part of Fig. 7 where the user can click to show the next results. It is similar to navigating web search engines except that it applies here to the results shown on the 3D model.
  • Fig. 8 shows an example of a projection plane for several electronic medical records, each representing an anatomical structure and associated to a score (the PRR column).
  • the projection plane is shown on the left hand side as the projection of a 3D model of the human anatomy.
  • the most relevant results are shown on the projection plane over the body with lines linked to numbered labels.
  • the pixels are dark when they correspond to highly relevant projected voxels.
  • the duodenum for instance is the most relevant body structure.
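The top-k presentation with paging described in the bullets above reduces to sorting records by descending score and slicing pages; a minimal sketch (the record names and scores are illustrative, not taken from Fig. 7 or 8):

```python
# Page through search results sorted by descending relevance score,
# as in the paging mechanism of the presentation step.

def top_k_page(results, k, page):
    """Return the `page`-th group (0-based) of k results, best first."""
    ranked = sorted(results, key=lambda r: r[1], reverse=True)
    return ranked[page * k:(page + 1) * k]

results = [("duodenum", 0.95), ("stomach", 0.70), ("liver", 0.60),
           ("pancreas", 0.50), ("spleen", 0.30)]

print(top_k_page(results, k=2, page=0))  # [('duodenum', 0.95), ('stomach', 0.7)]
print(top_k_page(results, k=2, page=1))  # [('liver', 0.6), ('pancreas', 0.5)]
```

Clicking "next results" simply increments `page`, analogously to navigating result pages in a web search engine.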

Abstract

A method for searching electronic records using a three-dimensional model consisting of voxels, comprising the steps of: associating, for a plurality of locations in at least one electronic record, said location with a set of voxels of said location, wherein the set of voxels of said location is a subset of the set of voxels of the three-dimensional model; receiving a search request for searching for electronic documents by receiving a region of interest of the three-dimensional model; creating a set of voxels of interest on the basis of the region of interest, wherein the set of voxels of interest is a subset of the set of voxels of the three-dimensional model; determining the relevance of each location for said region of interest by comparing the set of voxels of interest and the set of voxels of said location; displaying the locations on the basis of the determination result.

Description

System and method to index and query data from a 3D model
Reference data
[0001] This application claims priority of Swiss Patent Application CH00365/11 filed on March 03, 2011, the contents whereof are hereby incorporated. Field of the invention
[0002] The present invention is related to a method for indexing and querying data using a 3D model and volumetric information. The invention is further related to a corresponding system and a corresponding computer program product. Description of related art
[0003] The search for electronic documents using images or illustrations has so far mostly been done with geographic information systems. U.S. Pat. No. 8,015,183 to John R. Frank, "System and Method for Providing Statistically Interesting Geographical Information Based on Queries To a Geographic Search Engine", discloses a system and method to retrieve documents from a query composed of a free text and of a domain. The latter refers to a location of the document such as an address on a geographical map but does not refer to any 3D coordinates or a volume from a 3D model. It furthermore describes a graphical method to define such a location by means of an area defined by the user on a map. Such an area defines a set of locations but does not define a volume, nor does it contain any depth information. The disclosed method and system cannot therefore be applied to access electronic records from a 3D model.
[0004] Several patents related to Geographic Information Systems and related search engines, such as U.S. Pat. No. 7,707,140 to Peter Leishman et al., "Information Retrieval System and Method Employing Spatially Selective Features", or U.S. Pat. No. 7,801,897 to Daniel Egnor, "Indexing Documents According to Geographical Relevance", use longitude and latitude or 2D locations to index and retrieve electronic records. The mapping between the geographical location and the electronic records is furthermore done using manual tagging (e.g. letting users locate where pictures have been taken) or using addresses directly attached to the records. Such manual tagging cannot be assumed when the electronic records relate to general topics such as medicine or motor engines, which restricts the applicability of such inventions to a limited set of electronic records.
[0005] Most inventions related to geographic systems are two-dimensional in nature. The documents are tagged to a single location and there is no overlap between two locations. The methods and systems developed for geographic data cannot therefore be applied as such to 3D or volumetric data: there is no notion of depth nor of volume. On the other hand, a new type of Geographic Information Systems, called 3D Geographic Information Systems, uses volumetric data, but mostly data related to geo-spatial coordinates. For instance, U.S. Pat. No. 6,915,310 to Gutierrez et al., "Three-dimensional Volumetric Geospatial Querying", discloses a system storing 3D coordinates or volumetric information attached to records in a database. The query is a single location in 3D space and the retrieval method consists in returning the list of documents which are the closest to the query. This has the disadvantage that a user cannot easily define a 3D coordinate from the 2D screen, because a point on the screen corresponds to a whole line of points along the third dimension. This approach is limited to geo-spatial coordinates and does not apply to arbitrary 3D models such as virtual 3D human bodies.
[0006] The 3D nature of data is also used in content-based 3D shape retrieval, where 3D models are retrieved based on a user query. U.S. Pat. Application No. 12/323,494 to Tsuneya Kurihara, "3D Model Retrieval Method and System", discloses a method and a system to retrieve 3D models from a query that is itself a 3D model. The method consists of translating 3D models into a series of 2D images. The application of such an approach is limited to 3D model retrieval and cannot be extended to electronic records. The user does not define a simple query quickly but either needs to create a 3D model or needs to use an existing 3D shape, as described in U.S. Pat. Application No. 10/763,741 to Ramani et al., "Methods, Systems, and Data Structures for Performing Searches on Three Dimensional Objects": the user defines a query by selecting an existing 3D model or by providing a 2D image. The system then returns a list of 3D models that best match either the 3D model or the 2D image (as a projection of the 3D model).
[0007] A query over a graphical representation in 2D has been disclosed in U.S. Pat. Application No. 10/865,024 to Yu, "Mapping Assessment Program": it discloses a system to manage information using maps and dots over a 2D or 3D representation of human bodies, engines, or a corporation. Such a method does not use any volumetric information and assumes that all information or documents are already mapped onto the visual representation. Its application to arbitrary electronic records is therefore impossible, and the retrieval function is consequently limited to what has already been tagged with a part of the visual representation.
[0008] Specifically for medical informatics, the search for electronic records using a human body has been implemented by so-called symptom checker tools. The company WebMD (registered trademark) has recently provided on its website a sketch of a 3D body that visitors can click to find what disease might be associated with their ailments in specific areas of the body. Such a tool is only used as a first filter, though, and there is no query associated with a click on the human body. The first question the tool asks is: "please click on the area of interest". It then lists a set of topics that are relevant for that area. Such a set has been associated with the area beforehand, and no similarity between the query and the returned set of documents is calculated. The same holds true for the Healthline BodyMaps (registered trademark) tool that lets the user click on a 2D image of the human body to directly access the health articles that are relevant to the clicked area. The link between the articles and the virtual body is not made based on a query but established beforehand, which significantly limits the accuracy of the information retrieval. [0009] The usage of a 3D body to show or retrieve a set of documents has already been defined in former applications such as PCT Application No. PCT/IB2008/053635, "System and Method for Analyzing Electronic Data Records", Eberholst et al. It defines a system where a user can see the content of electronic records on a 3D model and can interact with it. A query can be performed, but it is directly encoded as a node in a graph and there is no mechanism to create one from a generic user input: the user needs to select an icon on the 3D model or an existing object that is directly linked with a so-called similarity network. The 3D context is therefore not fully used: if the 3D model is rotated, the query will not change even though the user might be looking at another set of 3D objects.
More specifically, the depth information is not used, which can significantly decrease the quality of the results: consider the case where the 3D model is a virtual body seen from the side. Clicking on the arm in the front should retrieve medical records about this arm only. The approach of Eberholst et al. would retrieve documents for both arms, because it does not use the depth information and the fact that one arm is behind and the other one in front.
[0010] Lastly, the usage of human anatomy to structure information and to provide intuitive access to medical records dates back to the mid-1990s with the Visible Human Project (registered trademark) from the U.S. National Library of Medicine. The Visible Human Project has provided a full 3D representation of the adult male and female human anatomy to the academic community, which derived several applications from it, including brain atlases, that is, maps of the brain where locations are tagged with, for instance, cognitive functions. These applications define a user interface for users to select what is already shown on screen over a virtual model of the human anatomy, but they cannot be used as such to retrieve electronic records.
[0011] The problem of querying a set of electronic records from a 3D model or one of its 2D projections using volumetric data is therefore still an open issue, which this invention addresses.
Brief summary of the invention [0012] According to the invention, these aims are achieved by means of claims 1, 16 and 17.
[0013] By defining queries that cover volumes and 3D objects rather than 2D surfaces such as the existing geographical maps, the proposed method enables the usage of any 3D representation to access and work with electronic records. A key feature of the proposed method is that 3D queries are simply defined over the 3D model via a click or a touch or any other user input, and that queries are represented directly as volumes or parts of the 3D model. This allows the user to express queries intuitively, which would otherwise be difficult to create. Consider the case where a patient feels pain in his throat. The patient does not know the medical name for such pain and cannot write the correct anatomical term for a throat. A simple click on the throat over a virtual body would provide a direct way to query any document for this patient, including those mentioning throat and those mentioning larynx, thyroid cartilage, thyrohyoid membrane, etc.
[0014] Contrarily to existing indexing methods that have been applied to geographic systems or visual representations, the association between an electronic record and the 3D model is done via a generic indexing system that can be either automatic or manual, and that associates to each electronic record a volume extracted from the 3D model. The information retrieval then consists in comparing the volumes associated with the electronic records and the volume representing the query. This makes the proposed approach particularly relevant for users who express queries by pointing at an object, because they do not necessarily know, or cannot refer to, a term or a name defining what they look for.
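As a concrete illustration of comparing the query volume with the volumes indexed for the records, one possible choice, among others, is a set-overlap score such as the Jaccard similarity between two sets of voxel indices; the patent text itself does not prescribe a particular comparison function, and the voxel sets below are made up:

```python
# Score a record by the overlap between its voxel set and the query's
# voxel set, both given as sets of (i, j, k) grid indices.

def jaccard(voxels_a, voxels_b):
    """Jaccard similarity |A & B| / |A | B| of two voxel sets."""
    if not voxels_a and not voxels_b:
        return 0.0
    return len(voxels_a & voxels_b) / len(voxels_a | voxels_b)

query_voxels = {(1, 2, 3), (1, 2, 4), (1, 3, 3)}    # e.g. clicked region
record_voxels = {(1, 2, 3), (1, 2, 4), (9, 9, 9)}   # e.g. one record's volume

print(jaccard(query_voxels, record_voxels))  # 0.5
```

Records sharing more voxels with the query volume score higher, which matches the intuition that a click on the throat should rank larynx-related records above unrelated ones.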
Brief Description of the Drawings
[0015] The invention will be better understood with the aid of the description of an embodiment given by way of example and illustrated by the figures, in which: Fig. 1 shows the different components of this invention;
Fig. 2 shows voxelization and 3D modelling using polygonal meshes or using a voxel representation;
Fig. 3 shows a grid of the 3D space composed of a set of small cubes also called voxels;
Fig. 4 shows the transformation of annotations (i)-(iii) into annotations of type (iv);
Fig. 5 shows how the 3D models are represented on a 2D screen;
Fig. 6a shows how to create a 3D query with a pointer in a 2D screen;
Fig. 6b shows first the results of a query as a bag of voxels overlaid on the 3D model (left side of the figure). The values of the voxels are projected onto the screen (right part of the figure) resulting in an image highlighting the areas of most relevance; Fig. 7 shows the heat map, which is overlaid with lines and numbers referring to the top results of the search;
Fig. 8 shows a heat map generated for a 3D model of the human anatomy.
Detailed Description of possible embodiments of the Invention [0016] This invention refers to a situation where a person, i.e. an end user, is using an electronic device such as a computer or a mobile device, including but not limited to a laptop, a desktop, a smartphone or a tablet PC, and visualizes a 3D model of some object of the real world. It can be for instance a 3D model of the human body, a 3D model of a turbine, or any other 3D model. It can also be any 2D projection of a 3D model or a cross section generated from the 3D model.
[0017] A set of electronic documents or records related to the 3D model is stored in a storage device that can be accessed directly or indirectly from the device where the 3D model is shown to the user. The storage device can furthermore use database software or any other data storage software system to enable other applications or users to access the electronic documents. The documents can be of any type including but not limited to text, audio, video, images and other structured or unstructured types. It is assumed in this invention that the electronic documents are related in one way or another, preferably by their content, to the 3D model. In the example of a 3D model of the human anatomy, the electronic documents can refer to the medical histories, the laboratory results, the medical images, or any other medical information of a patient or of a group of patients. They can also refer to medication whose side effects are related to some body parts. In the example of the turbine, the electronic documents can refer to maintenance reports or any interventional report made on specific parts of the turbine. They can also refer to educational materials explaining how the turbine should be repaired or how it should function in normal modes.
[0018] The end user then wishes to retrieve documents or records simply by indicating graphically, over the 3D model or over any 2D image generated from the 3D model (projection or cross section), a region of interest, wherein the region of interest comprises a single point or a series of points, areas and/or volumes the user is interested in. The system then retrieves the documents relevant to the defined points, areas or volumes. The notion of relevance depends on the application. Examples of relevance include but are not limited to: diseases whose symptoms are expressed at specific points, areas or volumes of a 3D model of the human body, clinical findings that are found on specific body regions, or genes that are over-expressed in specific tissues. [0019] As an example, we consider the case where a clinician is accessing the electronic medical records of a patient. A database and/or an Electronic Medical Record software provides all the medical records about the patient. The clinician uses a computer to visualize a 3D model of the human body and is interested in all the diagnoses related to the heart. This invention provides a system so that the clinician can retrieve the documents simply by clicking on the heart of a 3D model of the human body or of any 2D image derived from the 3D model of the human body. A click on the heart will perform a query and fetch all documents that have been previously identified as related to the heart.
[0020] As another example, consider the case where the 3D model is a turbine and the electronic records are the maintenance reports of all the parts of the turbine. The user is interested in the last maintenance intervention where the rotor disc was involved. The user clicks on the rotor disc part of a 3D model of a turbine and directly accesses all relevant reports from the maintenance database, sorted by order of relevance.
[0021] Before we start describing the system and method to implement such a query mechanism and to compute the relevance of the electronic documents, let us first define in more detail what is meant by a 3D model and by electronic records.
[0022] A 3D model is any data stored on a device that models a space in three dimensions (3D). Such 3D models include but are not limited to (i) a set of slice images acquired by medical devices, such as CT scanners or MRI machines, (ii) the output of a three-dimensional scanner, or (iii) any three-dimensional surface. A 3D model is therefore an electronic record, contrarily to what will be called a real volume, the latter being the real object or real space the 3D model refers to. In the following, V will always denote a real volume.
[0023] Examples of real volumes include the human body and all its parts, organs, etc., a car or any other manufactured product with all its components, a building with its corridors and rooms, and so on. This is in contrast with a surface such as a geographical map.
[0024] The data corresponding to a volume is usually encoded using a coordinate system that defines the position of each point in the volume. Let us denote by (x, y, z) the three-dimensional Cartesian coordinate system whose origin is (0, 0, 0). The Cartesian system completely defines the three-dimensional space, as any real volume, surface or point can be parametrized with coordinates (x, y, z) ∈ ℝ³, where ℝ denotes the set of real numbers.
[0025] A 3D model stored with the Cartesian system is usually defined by the surface surrounding it. 3D modelling is the process of developing such a surface, also called a 3D model. Common representations of 3D models include polygonal meshes (points in 3D connected by line segments and/or grouped as triangles) and NURBS (spline curves), and they are stored in formats such as .fbx, .obj, .max or any other format used to store polygonal meshes with texture information. These surfaces are often associated with texture or colour information.
[0026] Most real volumes or real objects have already been modelled in computers using such representations. Computer Aided Design software typically provides the methods and tools to create such representations. [0027] Let us denote by M(V) the set of 3D models stored as polygonal meshes corresponding to the real volume V; the transformation M can be interpreted as the modelling step or the parameterization of a real volume into an electronic representation. Fig. 2 depicts the modelling of a sphere 1, which is modelled as coarse polygonal meshes 8 and 9, denoted by M(V), by the transformation 12. An element of M(V) is usually an approximation of V, as it is built of polygons and not all real volumes can be exactly represented as polygonal meshes (e.g. the sphere). It can be seen that 8 is a coarse approximation of 1. When M(V) models exactly the real volume V, we can identify M(V) as V. More generally, for each element m ∈ M(V), we identify m as the real volume that m models exactly. Such identification 5 and 11 will be denoted by M⁻¹(m). Fig. 2 represents this identity with the middle line 11 linking the mesh 8 with the real volume 2.
[0028] A volume can also be represented by storing each point in ℝ³ that belongs to the volume. As the number of points can be infinite, the volumetric representation is often obtained by considering a regular grid in 3D and by storing the indices of the elements of the grid which intersect with the volume. Such a representation is called a voxel representation of the volume. A voxel is the basic cube forming the grid, as depicted in Fig. 3, and is uniquely defined by a three-dimensional index (i, j, k) ∈ ℤ³, where ℤ is the set of integers, indicating which element of the grid it corresponds to. The coloured cube 2 in Fig. 3 shows the voxel corresponding to the index (i, j, k): it corresponds to the ith (resp. jth, kth) cube on the x-axis (resp. y-axis, z-axis) as indicated in 3. A voxel can be associated with any kind of data, such as colour information or an object the voxel belongs to. [0029] Note that the number of voxels is potentially infinite as the grid spans the whole space. Most if not all volumes, though, are bounded and therefore fit into a finite grid. For the sake of simplicity, we will assume that all grids and all voxel representations refer to bounded volumes. The list of indices (i, j, k) is then assumed to be finite. [0030] Assume the axes (x, y, z) have been set, including the origin, as depicted in Fig. 3. A grid is defined by the size of its voxels, which depends on the length e of their edges, indicated by 1 in the figure. Let us denote by Ge(V) the set of voxels representing the real volume V and obtained by the regular grid where each voxel has edges of length e; Ge(V) is also called a voxel representation of the volume. Fig. 2 shows the voxel representation 7 of the sphere 3.
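For a regular grid anchored at the origin, the voxel containing a Cartesian point follows directly from the edge length; a small sketch of this correspondence (the helper below is our own illustration, not part of the patent text):

```python
# Map a Cartesian point (x, y, z) to the index (i, j, k) of the voxel of
# edge length e that contains it, for a regular grid anchored at (0, 0, 0).
import math

def voxel_index(x, y, z, e):
    """Index (i, j, k) in Z^3 of the voxel containing the point."""
    return (math.floor(x / e), math.floor(y / e), math.floor(z / e))

print(voxel_index(2.5, 0.1, -1.2, e=1.0))  # (2, 0, -2)
print(voxel_index(2.5, 0.1, -1.2, e=0.5))  # (5, 0, -3)
```

Flooring (rather than truncating) keeps the indexing consistent for negative coordinates, so the grid covers the whole space as described above.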
[0031] A voxel representation (6 in Fig. 2) can be built manually by using a voxel editor. A human user can for instance manually define each voxel and tag it with colour information. Note that a voxel representation can also be obtained from a polygonal mesh m as Ge(m). This is depicted by 10 in Fig. 2. It consists of all the cubes of the grid that intersect with the elements of the mesh m.
[0032] Both the polygonal mesh and the voxel representation can furthermore be split into subparts. This is often the case for 3D models of the human anatomy or of manufactured products, where the vertices of the mesh or the voxels are grouped together when they belong to a single object (e.g. a single anatomical structure or a subpart of the manufactured object). In what follows, we denote by m and m_1,..,m_c a polygonal mesh and its subparts, and by v and v_1,..,v_c a corresponding voxel representation and its subparts. Note that often v_i = Ge(m_i), that is, the subparts of a voxel representation of a 3D model m are the voxel representations of the subparts of the 3D model.
[0033] A 3D model and its subparts are assumed to be attached to short textual descriptions referred to as "labels" in the context of this invention. A label can be a free text or a unique identifier from a terminology or other data that provides more information about the 3D model or the volume it represents. In the case of a 3D model of the human anatomy, each subpart of the 3D body could for instance be tagged with a set of unique identifiers from a standard medical dictionary such as the Systematized Nomenclature of MEDicine - Clinical Terms or the Foundational Model of Anatomy. Both dictionaries can indeed be used to describe the medical name of an anatomical structure with a controlled vocabulary that is shared across medical software applications. A 3D model is therefore defined as m = (m_1,..,m_c) and a list of n_i labels t_i1,..,t_ini associated to each part m_i, i = 1,..,c. [0034] The data related to this invention is any electronic record relevant to the 3D model at hand. It includes but is not limited to images, video, audio files, and texts. In the case of medicine, the electronic records can consist of all the records representing the medical history of patients, such as medical images, medical notes, medication, lab test values and more. They can also represent all the articles of a medical encyclopaedia describing any information about a specific organ or the whole human anatomy. [0035] It is assumed in this invention that these electronic records are (i) annotated with the same labels t_i1,..,t_ini as the ones used to tag the 3D model, or (ii) annotated with parts of the 3D model m_1,..,m_c, or (iii) annotated with vertices of the polygonal mesh, or (iv) annotated with individual voxels directly. Any of the four annotation types (i)-(iv) is defined by (a) a tag or a term or a part of the 3D model as aforementioned, (b) a location in the electronic record where that annotation is relevant and (c) a score or a number measuring how relevant the annotation is. Depending on the annotation type (i) to (iv), the part (a) of the annotation will be either (i) a term in t_i1,..,t_ini, or (ii) an element of m_1,..,m_c, or (iii) a set of vertices from the polygonal mesh of the 3D model m, or (iv) a set of voxels extracted from the 3D model m. The following Table 1 provides an example of such an annotation for case (i). The Electronic record column contains a reference to the electronic record. This can be a URL or the row of a database or any other pointer to where the electronic record is stored. The Location column refers to an offset in the electronic record. Such an offset depends on the type of the electronic record (e.g. video, text or audio).
In this example, locations would typically correspond to offsets in a text starting from the beginning of the electronic record. It can also be an anchor identifying a section of the electronic record when the latter is an HTML page, or any other object that identifies a part of the electronic record. The Term/Tag column contains the entities which are used to annotate the electronic records: they store the tag information and can be plain text or references to the elements of a terminology. Lastly, the Score column contains a measure of the relevance of the tag to the electronic record: the higher, the more relevant. At this stage, such a score is assumed to be given by a third system, including a user manually assessing the relevance of the tag to the content of the electronic record, or an algorithm comparing the content of the electronic record with the words describing the tag. The number of times the word 'Heart' is mentioned in the electronic record, for instance, can be used to compute a score for the tag "Heart Structure". Note that the generation of annotations of type (i) can be done with state-of-the-art information retrieval systems including a text processing unit that can extract tags from textual documents or metadata. The annotations of type (ii) to (iv) are specific to the proposed approach and can be derived from annotations of type (i), as explained below in the description of the Indexing Module.
Electronic record | Location | Term/Tag        | Score
d1                | 0-15     | Heart structure | 0.95
d1                | 19-10    | Aortic valve    | 0.8
d2                | 20-25    | Bronchial tree  | 0.5
d1                | 20-23    | D20.3:ICD10     | 0.7

Table 1.

[0036] Fig. 1 shows the overall system to index and retrieve electronic records from a 3D model. The system is composed of an indexing module, a querying module, a retrieval module and a presentation module, which are described in the following. The Voxel Query, 3D Model, Voxel representation, Annotations and Electronic Records are objects that serve as input or output of the aforementioned modules.
Indexing module
[0037] The purpose of the indexing module is to attach a list of relevant annotations of type (iv), as described previously, to the electronic records d. Such a module generates (electronic record, annotation) pairs encoded as tuples (d, l, s, v), where d is an electronic record, l is location information in the electronic record, s is a score in the real set ℝ and v is a voxel. The purpose is to index the electronic records with volumetric information directly, where this information is extracted from the 3D model and from the existing annotations. The indexing module therefore takes a set of existing annotations of type (i)-(iii) as input. Such an annotation already has a score associated with it, which represents its relevance for the electronic record it is associated with: the higher, the more relevant.
[0038] Let us assume that an electronic record d is annotated with a set of annotations {(w_j, l_j, s_j)}, j = 1,..,p, having tags, 3D models or vertices w_1,..,w_p (depending on the type of the annotation) at locations l_1,..,l_p, each having a score s_j, j = 1,..,p. In the previous example of Table 1, w_j would be stored in the Term/Tag column, l_j in the Location column and s_j in the Score column. The corresponding values in the Electronic record column will always be d, as the annotations are selected for a single electronic record at this stage. The indexing task consists of associating a set of annotations of type (iv) to d. This is done by mapping the set {(w_j, l_j, s_j)}, j = 1,..,p, onto the voxels. A map is a generic function g taking the set {(w_j, l_j, s_j)}, j = 1,..,p, as input and returning a set of voxels as output with a list of locations and scores {(v_j, l_j, s'_j)}, j = 1,..,q, where v_j denotes either a single voxel or a set of voxels. Such a map replaces for instance an annotation with a label such as "Heart structure" by an annotation with a volume of the 3D heart from the virtual model of the human body. The map g can be defined in at least two ways, as shown by the following two embodiments.
[0039] In a first embodiment, the map takes as input annotations containing tags w_j that correspond to parts of the 3D model or to voxels directly. If the w_j are graphical elements such as sets of vertices or 3D objects, the output is the set of voxels derived from the voxelization step, as depicted in 6 or 10 in Fig. 2. Such a map is represented by the maps from 3 to 6 or from 4 to 7 in Fig. 4 (the location information l has been removed to make the description simpler). If the w_j are any elements of the tags t_i1,..,t_ini associated to the part m_i of the 3D model, then the returned set of voxels is also derived from the voxelization of m_i. Consider for instance the case where the w_j are anatomical terms such as "heart structure", "femur", "kidney", etc., which are all associated with the related parts of a 3D model of the human anatomy. An annotation of a document such as ("Kidney", 1-10, 0.9) will be mapped into the set of annotations (v_j, 1-10, 0.9), j = 1,..,q, where the v_j, j = 1,..,q, are the voxels representing the 3D kidney. This first embodiment is represented by the maps from 2 to 5 in Fig. 4. Note that the scores attached to each voxel can change a priori based on how the tags t_i1,..,t_ini relate to the model. The annotation "Kidney" might be considered more relevant to the voxels v_j if "Kidney" is the only term attached to those voxels. If the voxels are also attached to "Arterial System", such a map can reduce the score to indicate that the voxels represent more than just a kidney. [0040] In a second embodiment, the map is extended to receive as input any term or label from a controlled vocabulary that is not necessarily directly linked to the 3D model. This is an extension of the first embodiment to let the indexing module accept more generic annotations. In this second embodiment, the parts w_j of the annotations taken as input are texts or labels from a controlled vocabulary or a set of reference words. Following the example of medicine and the usage of a 3D model of the human body, the annotations can be done using the disease vocabulary of the Systematized Nomenclature of MEDicine. Such a vocabulary contains the standard names for most existing diseases. Other controlled vocabularies such as the International Classification of Diseases could be used. A medical electronic record is usually annotated with such vocabularies to store the diagnosis of a patient in a standard form. The difference with the first embodiment is that the controlled vocabulary is not used a priori to tag the 3D model directly. It is assumed here that a map between these w_j and the tags t_i1,..,t_ini associated with the parts of the 3D model is available. In our previous example, such a map is provided by the "Finding Site" relationship in the controlled vocabulary of the Systematized Nomenclature of MEDicine, which links any disease to the anatomical site where the disease is expressed (e.g. "renal failure" is linked to the kidney). Any map providing a link between the w_j and the tags t_i1,..,t_ini can be used. When w_j is related to the 3D model but not used to tag any part of the 3D model, maps such as the "part of" relationship can be used, indicating which t_j associated to the 3D objects contain w_j. Assume for instance that we have a plain 3D heart without any subparts and that w_j is "Mitral Valve". The "part of" map can be used to map w_j into "Heart", which is one of the terms t_j used to tag the 3D heart. Other maps can be used based on the application and the type of annotations. The composition of such a map with the map described in the first embodiment creates a new map from generic tags onto a set of voxels, as represented in Fig. 4 by the maps from 1 to 2 and from 2 to 3.
As in the first embodiment, note that the scores finally assigned to the voxels can differ from the initial scores of the annotations used as input.
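The two-stage mapping just described (generic term to model tags, then tags to voxels) amounts to composing two dictionaries. A minimal Python sketch, where the vocabulary entries, tag names and voxel coordinates are all hypothetical placeholders:

```python
# Hypothetical "Finding Site"-style map: controlled-vocabulary term -> tags of the 3D model.
finding_site = {"renal failure": ["Kidney"], "mitral stenosis": ["Heart"]}

# Map from tags of 3D model parts to their voxel sets (grid indices), as in the first embodiment.
tag_to_voxels = {
    "Kidney": {(1, 2, 3), (1, 2, 4)},
    "Heart": {(5, 5, 5), (5, 5, 6)},
}

def term_to_voxels(term):
    """Compose the two maps: generic term -> tags -> union of the tags' voxel sets."""
    voxels = set()
    for tag in finding_site.get(term, []):
        voxels |= tag_to_voxels.get(tag, set())
    return voxels
```

An unknown term simply maps to the empty voxel set, which mirrors an annotation that cannot be located on the model.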
[0041] The indexing steps are then done as follows:

• For each document d:
• Map the annotations {wj, lj, sj}j=1,..,p attached to d to {Vj, lj, s'j}j=1,..,q
• For each v in Vj: store the tuple (d, lj, s'j, v)

[0042] The resulting list of tuples establishes a link between electronic records and voxels. Note that the same (electronic record, location) pair (d, l) can be associated with the same voxel v several times with different scores. An example of output of such indexing is represented below as a table; the voxels are encoded here by their indices in 3D. As mentioned before, the Voxel Index table can also be created by manual indexing instead of such a mapping.
Electronic record   Location   Voxel     Score
d1                  0-15       (1,2,3)   0.95
d1                  19-10      (0,5,9)   0.8
d2                  20-25      (1,9,3)   0.5
d1                  20-23      (0,0,1)   0.7
Table 2. The output of the indexing module is a list of tuples which will be called the "Voxel Index Table" in the following sections.

[0043] The Voxel Index Table associates to each location within an electronic record a set of voxel-score pairs (v, s), which can be represented as a vector in ℝn as follows: a voxel is uniquely defined by its coordinate (i, j, k) in the grid as represented in Fig. 3. Such coordinates can be re-indexed from 1 to the size of the set of such coordinates. Let us denote by n the size of this set and by f(i, j, k) ∈ {1,..,n} the new numbering of the voxels. A voxel representation can then be encoded as a vector in ℝn, where each coordinate represents a triplet (i, j, k) and therefore refers to a unique voxel in the voxel grid represented in Fig. 3: a single voxel at location (i, j, k) with a score s is then represented by a vector with zeros everywhere except at coordinate f(i, j, k), where the value is set to s.

A voxel can a priori be associated with more than one score: the voxel can indeed be in the voxel representation of several parts of the 3D model, such as "Kidney" and "Urinary System" in a 3D model of the human anatomy. Both parts can be used to tag an electronic record on renal failure, leading to overlapping voxels stored in the Voxel Index Table. In that case, the scores can be combined either by taking the maximum, by considering an average, or by any other statistics best representing the set of scores (e.g. taking the median). The set of voxel-score pairs (v, s) associated with each location can therefore be understood as a real vector in ℝn, where all values are set to zero except for the coordinates corresponding to the voxels. Such a representation will be called a "Bag of Voxels" and is specific to the approach developed in this invention.
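The linear re-numbering f(i, j, k) and the combination of overlapping scores can be sketched as follows; the grid dimensions and the choice of the maximum as combining statistic are assumptions for illustration:

```python
# Hypothetical voxel grid dimensions.
NX, NY, NZ = 10, 10, 10

def f(i, j, k):
    """Re-index the voxel coordinate (i, j, k) into a single index (0-based here)."""
    return (i * NY + j) * NZ + k

def bag_of_voxels(voxel_scores):
    """Build the n-dimensional Bag of Voxels vector from (coordinate, score) pairs.

    Overlapping scores for the same voxel are combined by taking the maximum,
    one of the statistics suggested in the text.
    """
    n = NX * NY * NZ
    vec = [0.0] * n
    for (i, j, k), s in voxel_scores:
        idx = f(i, j, k)
        vec[idx] = max(vec[idx], s)  # combine overlapping scores
    return vec

vec = bag_of_voxels([((1, 2, 3), 0.95), ((1, 2, 3), 0.5), ((0, 5, 9), 0.8)])
```

The resulting vector is zero everywhere except at the two coordinates f(1, 2, 3) and f(0, 5, 9), matching the sparse "Bag of Voxels" reading of the Voxel Index Table.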
[0044] Besides the Voxel Index Table, the indexing module generates all the Bags of Voxels associated with the electronic records and their annotations. This data is called the "Bag of Voxels Table" and is another representation of the "Voxel Index Table". One row in the Bag of Voxels Table corresponds to an electronic record and a location; many rows can therefore refer to the same electronic record.
[0045] This indexing approach does not exclude an indexing where no location information is used. In that case, the location variable l is set to a constant across all electronic records that represents the whole content of each electronic record.
Querying module
[0046] The second module is the querying module: a user interacts with a projection plane to define a query, which is then compared to the Bag of Voxels representations of the electronic records.

[0047] To show a 3D model on the 2D screen of an electronic device, the 3D model is projected onto what is called a projection plane. A projection plane is defined as a plane, represented by 1 in Fig. 5, onto which the 3D model, including colour or texture information, is projected. The user sees the projection plane on the screen. The projection is shown by the dotted line going from the 3D scene 2 to a view point defined by the camera 3 and intersecting the plane 1 where the projection is defined.
[0048] The basic step to define a 3D query is to create a volume that will intersect the existing 3D model. This can be done by interacting with the projection plane as described in Fig. 6.
[0049] In a first step, the user defines a closed curve on the projection plane 1. This can be a rectangle, a circle or any other closed curve deemed relevant by the user. Such a curve can be defined with a mouse pointer, or by touching the screen when a touch screen is used. Once the curve is defined - here the rectangle 3 - the projection process is inverted to create the volume 4 whose projection leads to the surface defined by the closed curve 3. In Fig. 6, this defines the parallelepiped 4 touching the sphere from below. We assume here that the whole 3D scene containing all the 3D models is bounded (it is represented in Fig. 6 by the cube 2 including the 3D sphere). The intersection of the sphere and the volume is depicted as 5 in Fig. 6. The same grid as the one used for indexing or for creating the voxel representation of the 3D model is then applied to define a voxel representation of the intersection 5 between the sphere and the volume defined by the user. This is illustrated by 6 in Fig. 6. Such a voxel representation is furthermore mapped into a Bag of Voxels vector 7 as described above for the indexing module.
[0050] Such a Bag of Voxels can be composed of 0-1 coordinates, the non-zero coordinates corresponding to the voxels that are in the volume defined by the user. The Bag of Voxels can also have real coordinates that indicate how relevant each voxel is to the query (the greater the value, the more relevant). Such relevance can for instance be based on the distance of the voxel to the projection plane. Denoting by d(v) the distance from a voxel v to the projection plane where the user has defined the query (as indicated by 3 in Fig. 6), the value for the coordinate corresponding to v in the Bag of Voxels can be defined as:

1/(1 + d(v))

More generally, any function of the distance d(v) can be used.
[0051] The result of the querying module is thus a Bag of Voxels (q1,..,qn) ∈ ℝn, where qj is non-zero when it corresponds to a voxel intersecting the volume of interest and where the value of qj is defined by how far the voxel is from the projection plane.
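A sketch of the query-side construction under simplifying assumptions: the inverse-projected volume is approximated by an axis-aligned box, the voxels are unit cubes identified by their grid coordinates, and the distance d(v) is measured along the viewing axis to a projection plane assumed at z = 0:

```python
def query_bag_of_voxels(box_min, box_max, grid, plane_z=0.0):
    """Voxels of `grid` inside the axis-aligned query box, weighted by 1/(1 + d(v)).

    box_min, box_max: opposite corners of the (assumed axis-aligned)
    inverse-projected volume. d(v) is approximated by the distance of the
    voxel center to the projection plane z = plane_z along the viewing axis.
    """
    bag = {}
    for (i, j, k) in grid:
        center = (i + 0.5, j + 0.5, k + 0.5)
        if all(box_min[a] <= center[a] <= box_max[a] for a in range(3)):
            d = abs(center[2] - plane_z)       # distance to projection plane
            bag[(i, j, k)] = 1.0 / (1.0 + d)   # relevance weight
    return bag

grid = [(i, j, k) for i in range(4) for j in range(4) for k in range(4)]
q = query_bag_of_voxels((0, 0, 0), (2, 2, 4), grid)
```

Voxels outside the box get no coordinate at all (i.e. a zero coordinate in the vector view), and voxels closer to the projection plane receive a larger weight.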
Retrieving module
[0052] Once the query has been translated into a Bag of Voxels, the information retrieval step (retrieving step) consists of computing similarities between the Bag of Voxels representations of the query and of the (electronic record, location) pairs stored in the Bag of Voxels Table. The retrieving module then returns the (electronic record, location) pairs from the Bag of Voxels Table whose Bags of Voxels are the most similar to the query. The notion of similarity is very generic in this context and can, by analogy with the bag of words approach in the standard information retrieval domain, correspond to many mathematical definitions. Such similarity functions include the cosine similarity or the dot product between the real vectors; they also include:

exp(-||b1 - b2||²)

where b1 and b2 are two Bag of Voxels vectors, exp is the exponential function and ||.|| is the Euclidean norm.
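The dot product, the cosine similarity and the exponential similarity over the Euclidean norm can all be written in a few lines of plain Python (no external libraries assumed):

```python
import math

def dot(b1, b2):
    """Dot product of two equal-length Bag of Voxels vectors."""
    return sum(x * y for x, y in zip(b1, b2))

def cosine(b1, b2):
    """Cosine similarity; returns 0.0 for an all-zero vector."""
    n1, n2 = math.sqrt(dot(b1, b1)), math.sqrt(dot(b2, b2))
    return dot(b1, b2) / (n1 * n2) if n1 and n2 else 0.0

def gaussian_similarity(b1, b2):
    """exp(-||b1 - b2||^2): 1.0 for identical vectors, decaying with distance."""
    sq = sum((x - y) ** 2 for x, y in zip(b1, b2))
    return math.exp(-sq)
```

All three are symmetric in their arguments and take their maximum on identical vectors, which is what the retrieval step needs to rank (electronic record, location) pairs.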
As a further example, we consider below a similarity computed from a new representation defined as the Voxel Frequency . Inverse Model Frequency (VF.IMF), which is the equivalent for the Bag of Voxels of the Term Frequency . Inverse Document Frequency used in the Bag of Words representation to retrieve documents from textual queries. The VF.IMF is defined as a new vector (f1,..,fn) ∈ ℝn where:

fi = VFi · log(IMFi), with VFi = xi and IMFi = #D / #Di

where xi is the value of the i-th coordinate of the Bag of Voxels representation, #D is the total number of (electronic record, location) pairs, that is the total number of Bags of Voxels in the Bag of Voxels Table, and #Di is the number of Bags of Voxels whose i-th coordinate is non-zero. A similarity over the Bags of Voxels and their VF.IMF representations can then be derived using the cosine similarity, the dot product or any positive definite bilinear function f over ℝn: considering two Bags of Voxels (or their VF.IMF representations) p, q ∈ ℝn, we define f(p,q) as:

f(p,q) = ∑i,j aij qi pj

where aij is a function of (i,j), i=1,..,n, j=1,..,n, encoding how close voxel i is to voxel j. A set of similarity measures over two Bags of Voxels b1 and b2 can therefore be defined as:

f(VF.IMF1, VF.IMF2)

where VF.IMF1 (resp. VF.IMF2) is the VF.IMF representation of the vector b1 (resp. b2).
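A sketch of the VF.IMF computation, assuming, by analogy with TF.IDF, that VFi is the coordinate value xi and that IMFi is the ratio #D / #Di; these readings are a reconstruction for illustration, not a definitive implementation:

```python
import math

def vf_imf(bags):
    """Compute the VF.IMF representation of each Bag of Voxels.

    bags: list of equal-length vectors, one per (electronic record, location)
    pair in the Bag of Voxels Table. Assumes VF_i = x_i and IMF_i = #D / #D_i;
    coordinates that are zero in every bag are left at 0.
    """
    n = len(bags[0])
    D = len(bags)                                              # #D
    Di = [sum(1 for b in bags if b[i] != 0) for i in range(n)]  # #D_i per coordinate
    out = []
    for b in bags:
        out.append([b[i] * math.log(D / Di[i]) if Di[i] else 0.0
                    for i in range(n)])
    return out

bags = [[1.0, 0.0, 0.5], [0.0, 0.0, 0.7]]
reps = vf_imf(bags)
```

As with TF.IDF, a voxel that is non-zero in every bag contributes nothing (log 1 = 0), while a voxel that appears in few bags is up-weighted.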
[0053] The retrieval module is therefore, mutatis mutandis, identical to the retrieval module that is part of an information retrieval system in which the query is textual and the electronic records are tagged or annotated with textual terms. The main difference here is that the indexing and the querying are done with a voxel representation and a Bag of Voxels rather than a Bag of Words representation.
[0054] The result of the retrieval module is a set {bj, fj}j=1,..,r of Bags of Voxels from the Bag of Voxels Table and of associated similarities fj. The larger the similarity, the more relevant the Bag of Voxels. The number r is the number of Bags of Voxels which are deemed relevant to the query at hand, i.e. that have a similarity above a pre-defined threshold. From the Bags of Voxels, the system can retrieve the list of (electronic record, location) pairs that are the most relevant. The "relevance score" of an electronic record can then be derived by aggregating the similarities over all (electronic record, location) pairs corresponding to that electronic record. Such aggregation can be performed using the maximum value over the similarities or any other statistics representing that set of similarities.

Presentation module
[0055] The set of results returned by the retrieval module can be represented either as a list that is shown to the user with plain text, or as a set of voxels that are shown over the 3D model by coloring each voxel according to a value representing the relevance of the voxel to the query.
[0056] In the first case, the user would typically have access to the list of electronic records sorted by descending similarity as returned by the retrieval module. The location information is then used to highlight the part of the electronic record that is the most relevant to the query. Such visualization is depicted as 1 in Fig. 7.
[0057] In the second case, it is necessary to compute a value for each voxel from the result {bj, fj}j=1,..,r returned by the retrieval module. Such a value will represent the relevance of the voxel with respect to the query. As for the relevance score of the electronic records, a value for each voxel can be derived by aggregating the similarities of all the Bags of Voxels containing a non-zero value for the voxel of interest. Such aggregation can be the maximum value over the similarities or any other statistics, such as the median or the average, representing that set of similarities. The list of (voxel, relevance score) pairs defines a new Bag of Voxels v which can be represented as an overlay to the original 3D model, highlighting the areas which are the most relevant. This is seen in Fig. 6b where the bag of voxels v is represented as 3, the dark cubes representing the values of the coordinates of the vector v. Note that the voxels corresponding to the zero coordinates of v are not represented. The voxels v are laid over the 3D model 3, leading to a representation 4 where the voxelization of the 3D model is also depicted.

[0058] The user sees a "projection" 6 of the bag of voxels on the projection plane 5. This is depicted by 7 in the right side of Fig. 6b. One point on the projection plane corresponds to all the voxels which are projected to this point, that is, to the set of voxels intersecting the line perpendicular to the projection plane and going through the point. The relevance scores of the individual voxels intersecting the line are then combined with a function to compute the color or intensity of the point on the projection plane. This can be done for instance as follows: let (vi1,..,vik) be the relevance scores of the k voxels intersecting the line perpendicular to the point or pixel i in the projection plane; the color ci of point i can be computed as:
• A weighted sum of the relevance scores, where wi1,..,wik are real numbers weighting the contribution of the relevance scores in the sum:

ci = ∑j wij vij

• A weighted maximum over all the relevance scores, where wi1,..,wik are real numbers weighting the contribution of the relevance scores in the maximum:

ci = maxj wij vij

• More generally, a total function of the relevance scores of the voxels, where f is any function taking all the relevance scores with the corresponding list of voxels as input:

ci = f(vi1,..,vik)

Note that the term color is used here to denote a single real number that will be used later on to compute the real color or intensity of the pixel on screen; it does not yet refer to a representation of a color such as an RGB encoding.
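The weighted-sum and weighted-maximum combinations above can be sketched as a single helper; uniform weights are a hypothetical default:

```python
def pixel_color(relevances, weights=None, mode="sum"):
    """Combine the relevance scores v_i1..v_ik of the voxels projecting onto
    one pixel into a single color value c_i.

    mode="sum" gives the weighted sum, mode="max" the weighted maximum;
    uniform weights of 1.0 are assumed when none are given.
    """
    if weights is None:
        weights = [1.0] * len(relevances)
    weighted = [w * v for w, v in zip(weights, relevances)]
    if mode == "sum":
        return sum(weighted)
    if mode == "max":
        return max(weighted, default=0.0)
    raise ValueError("mode must be 'sum' or 'max'")

c_sum = pixel_color([0.2, 0.5, 0.1])              # weighted sum, approximately 0.8
c_max = pixel_color([0.2, 0.5, 0.1], mode="max")  # weighted maximum
```

A pixel with no intersecting voxels gets the color 0.0, i.e. no highlight on the projection plane.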
[0059] The projection of the bag of voxels results in a set of colors or intensities attached to each pixel of the projection plane. These colors are further rendered on a screen with a pre-defined color mapping that provides an image as depicted by 7 in Fig. 6a. Such an image shows the areas of highest relevance based on the 3D query and for the projection plane. As the user manipulates the 3D model and the projection plane changes with the camera, the scores computed by this module change.
[0060] To further highlight the top results, the presentation step can also specifically highlight the top k results, k being defined by the end user. This is done by adding visual signs on the projection plane, showing a set of numbers or signs which refer to a textual description of the top electronic records retrieved by the system, as indicated by 2 in Fig. 7. As described previously, the top k data are all associated with a bag of voxels, which are highlighted on the heat map by graphical signs represented by lines in Fig. 7. Other signs can be used, such as text boxes or icons directly attached to the bag of voxels, as depicted in Fig. 8. The purpose of these graphical signs is to highlight the top results directly on the projection plane and to link these top results with a reference to a textual description of the related data. These references are represented as numbers in Fig. 7 and Fig. 8. The numbers can be replaced by text boxes containing a short text describing the results.

[0061] The overlay of top results on the projection plane ensures that the most relevant data are graphically shown while the whole set of results is still represented on the 3D model. If required, the projection plane can also be computed only for the top k results.
[0062] A paging mechanism is added to look at the next top k results. It is depicted by 3 in the bottom part of Fig. 7 where the user can click to show the next results. It is similar to navigating web search engines except that it applies here to the results shown on the 3D model.
[0063] Fig. 8 shows an example of a projection plane for several electronic medical records, each representing an anatomical structure and associated with a score (the PRR column). The projection plane is shown on the left-hand side as the projection of a 3D model of the human anatomy. The most relevant results are shown on the projection plane over the body with lines linked to numbered labels. The pixels are dark when they correspond to highly relevant projected voxels. The duodenum, for instance, is the most relevant body structure.

Claims
1. A method for searching electronic records using a three-dimensional model consisting of voxels, comprising the steps of:
associating for a plurality of locations in at least one electronic record said location with a set of voxels of said location, wherein the set of voxels of said location is a subset of the set of voxels of the three-dimensional model;
receiving a search request for searching for electronic documents by receiving a region of interest of the three-dimensional model;
creating a set of voxels of interest on the basis of the region of interest, wherein the set of voxels of interest is a subset of the set of voxels of the three-dimensional model;
determining the relevance of each location for said region of interest by comparing the set of voxels of interest and the set of voxels of said location;
displaying the locations on the basis of the determination result.
2. Method according to claim 1, wherein the sets of voxels of the plurality of locations and said set of voxels of interest are each represented by a vector with a predetermined number of coordinates, wherein each coordinate represents one voxel of the three-dimensional model, and wherein the relevance of one location for said region of interest is determined by a function of the vector of the region of interest and of the vector of said one location.
3. Method according to claim 2, wherein each coordinate of said vector of one location comprises a parameter representing the relevance of the association between said location corresponding to the vector and said voxel corresponding to the coordinate.
4. Method according to claims 2 and 3, wherein each coordinate of said vector of the region of interest comprises a parameter representing the relevance of said voxel corresponding to the coordinate compared to the region of interest.
5. Method according to any one of claims 2 to 4, wherein said parameter representing the relevance of said voxel corresponding to the coordinate compared to the region of interest is zero if the voxel does not intersect with the region of interest.
6. Method according to claim 5, wherein said parameter representing the relevance of said voxel corresponding to the coordinate compared to the region of interest is a function of the distance of said voxel corresponding to the coordinate to a projection plane of the three-dimensional model, if the voxel intersects with the region of interest.
7. Method according to any one of claims 2 to 6, wherein the locations are presented in the region of interest of the three-dimensional model by presenting each voxel on the basis of a function of the coordinate corresponding to said voxel of the vector of each location and/or of the coordinate corresponding to said voxel of the vector of the region of interest.
8. Method according to any one of claims 1 to 7, wherein only a subset of locations is presented, namely the most relevant locations determined.
9. Method according to any one of claims 1 to 8, wherein the locations are presented as a list arranged in an order based on said determined relevance.
10. Method according to any one of claims 1 to 9, wherein the locations are presented in the region of interest of the three-dimensional model by presenting each voxel on the basis of the locations associated with said voxel.
11. Method according to any one of claims 1 to 10, wherein the locations are associated with the voxels of the three-dimensional model on the basis of terms and/or identifiers associated with the voxels and terms and/or identifiers associated with the location in the electronic record.
12. Method according to any one of claims 1 to 11, wherein an association of a location with a voxel comprises a parameter representing the relevance of said association.
13. Method according to any one of claims 1 to 12, wherein the three-dimensional model has subparts defining a subset of the voxels of the three-dimensional model, wherein each subpart is related to at least one term, preferably to a plurality of terms.
14. Method according to any one of claims 1 to 13, wherein the region of interest is defined by displaying a projection of the three-dimensional model on a two-dimensional projection plane, by receiving a user input defining an area on said projection plane, and by defining the projection of said area in the direction substantially perpendicular to the projection plane as said region of interest.
15. Method according to any one of claims 1 to 13, wherein the region of interest is a volume of the three-dimensional model.
16. Computer program suitable for carrying out the method steps according to any one of claims 1 to 14 on a processor.
17. An apparatus for searching electronic records using a three-dimensional model consisting of voxels, comprising:
associating means for associating, for a plurality of locations in at least one electronic record, said location with a set of voxels of said location, wherein the set of voxels of said location is a subset of the set of voxels of the three-dimensional model;
receiving means for receiving a search request for searching for electronic documents by receiving a region of interest of the three-dimensional model;
voxel creating means for creating a set of voxels of interest on the basis of the region of interest, wherein the set of voxels of interest is a subset of the set of voxels of the three-dimensional model;
determining means for determining the relevance of each location for said region of interest by comparing the set of voxels of interest and the set of voxels of said location;
display means for displaying the locations on the basis of the determination result.
PCT/EP2012/053668 2011-03-03 2012-03-02 System and method to index and query data from a 3d model WO2012117103A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CH3652011 2011-03-03
CH0365/11 2011-03-03

Publications (2)

Publication Number Publication Date
WO2012117103A2 true WO2012117103A2 (en) 2012-09-07
WO2012117103A3 WO2012117103A3 (en) 2013-02-28

Family

ID=45808905

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2012/053668 WO2012117103A2 (en) 2011-03-03 2012-03-02 System and method to index and query data from a 3d model

Country Status (1)

Country Link
WO (1) WO2012117103A2 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6915310B2 (en) 2002-03-28 2005-07-05 Harris Corporation Three-dimensional volumetric geo-spatial querying
US7707140B2 (en) 2002-10-09 2010-04-27 Yahoo! Inc. Information retrieval system and method employing spatially selective features
US7801897B2 (en) 2004-12-30 2010-09-21 Google Inc. Indexing documents according to geographical relevance
US8015183B2 (en) 2006-06-12 2011-09-06 Nokia Corporation System and methods for providing statstically interesting geographical information based on queries to a geographic search engine

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080065685A1 (en) * 2006-08-04 2008-03-13 Metacarta, Inc. Systems and methods for presenting results of geographic text searches
EP2194466A1 (en) * 2008-11-28 2010-06-09 SEARCHTEQ GmbH Method and system for indexing data in a search engine or a database for speed-optimized proximity queries with different radii

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140118349A1 (en) * 2012-10-31 2014-05-01 Gulfstream Aerospace Corporation Systems and methods for presenting vehicle component information
WO2014071035A2 (en) * 2012-10-31 2014-05-08 Gulfstream Aerospace Corporation Systems and methods for presenting vehicle component information
WO2014071035A3 (en) * 2012-10-31 2014-06-26 Gulfstream Aerospace Corporation Systems and methods for presenting vehicle component information
US9501869B2 (en) 2012-10-31 2016-11-22 Gulfstream Aerospace Corporation Systems and methods for presenting vehicle component information
EP2996057A1 (en) * 2014-09-12 2016-03-16 Oulun Ammattikorkeakoulu Oy Healthcare related information management
CN106157329B (en) * 2015-04-20 2021-08-17 中兴通讯股份有限公司 Self-adaptive target tracking method and device
CN105930497A (en) * 2016-05-06 2016-09-07 浙江工业大学 Image edge and line feature based three-dimensional model retrieval method
CN110188228A (en) * 2019-05-28 2019-08-30 北方民族大学 Cross-module state search method based on Sketch Searching threedimensional model
CN110188228B (en) * 2019-05-28 2021-07-02 北方民族大学 Cross-modal retrieval method based on sketch retrieval three-dimensional model

Also Published As

Publication number Publication date
WO2012117103A3 (en) 2013-02-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12707564

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12707564

Country of ref document: EP

Kind code of ref document: A2