US7734556B2 - Method and system for discovering knowledge from text documents using associating between concepts and sub-concepts - Google Patents

Method and system for discovering knowledge from text documents using associating between concepts and sub-concepts

Info

Publication number
US7734556B2
US7734556B2 (application US10/532,163)
Authority
US
United States
Prior art keywords
key
concepts
data
semi
relations
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US10/532,163
Other versions
US20060026203A1 (en)
Inventor
Ah Hwee Tan
Rajaraman Kanagasabai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agency for Science Technology and Research Singapore
Original Assignee
Agency for Science Technology and Research Singapore
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency for Science Technology and Research Singapore filed Critical Agency for Science Technology and Research Singapore
Assigned to AGENCY FOR SCIENCE, TECHNOLOGY AND RESEARCH reassignment AGENCY FOR SCIENCE, TECHNOLOGY AND RESEARCH ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KANAGASABAI, RAJARAMAN, TAN, AH HWEE
Publication of US20060026203A1 publication Critical patent/US20060026203A1/en
Application granted granted Critical
Publication of US7734556B2 publication Critical patent/US7734556B2/en
Adjusted expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295Named entity recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition

Definitions

  • the invention relates generally to the field of natural language processing, text mining, and knowledge discovery. In particular, it relates to a method and a system for extracting and discovering knowledge from text documents.
  • Meta-data which are condensed and typically semi-structured representations of text content, can be considered as the raw form of knowledge and are essentially facts specified in the text documents. Meta-data do not include knowledge that is not mentioned explicitly in the text. In addition, there is usually too much information extracted by the conventional methods and systems and it is a painstaking process for a user to organize and discover wisdom from the extracted information.
  • a critical problem associated with the foregoing proposals lies in the common and attendant inability of the proposed systems and methods to derive new or hidden knowledge from text documents that is often the critical differentiating factor in gaining an edge over competitors.
  • a method for discovering knowledge from text documents comprising the steps of:
  • a computer program product comprising a computer usable medium having computer readable program code means embodied in the medium for discovering knowledge from text documents, the computer program product comprising:
  • a system for knowledge discovery from free-text documents comprising:
  • FIG. 1 illustrates a knowledge discovery system according to an embodiment of the invention
  • FIG. 2A illustrates a flow diagram of a knowledge discovery method according to a further embodiment of the invention
  • FIG. 2B illustrates an exemplary application of the knowledge discovery method of FIG. 2A for discovering relationships between product attributes and diseases
  • FIG. 3 illustrates an exemplary flow diagram of a meta-data extraction process of FIG. 2A ;
  • FIG. 4 illustrates an exemplary architecture for an associative discoverer of FIG. 1 for the learning of the association between sub-concepts in the sub-concept space and concepts in the concept space, in which an F 1 a field serves as the input field for the sub-concepts, an F 1 b field serves as the input field for the concepts, and clusters are formed in an F 2 field that represent the associative mappings from the sub-concept space to the concept space;
  • FIG. 5 illustrates a category choice process performed in the associative discoverer of FIG. 4 ;
  • FIG. 6 illustrates a template matching process and a template learning process performed in the associative discoverer of FIG. 4 ;
  • FIG. 7 illustrates an exemplary flow diagram of an associative discovery process of FIG. 2A ;
  • FIG. 8 illustrates a correspondence process between a cluster in the associative discoverer of FIG. 4 and an IF-THEN rule, in which template vectors w j a and w j b encoded by a cluster j can be interpreted as a rule mapping a set of antecedents represented by w j a to consequents represented by w j b ; and
  • FIG. 9 illustrates a general-purpose computer by which the embodiments of the invention are preferably implemented.
  • a method and a system for knowledge discovery relate to the discovery of new or hidden knowledge from free-text documents in a domain.
  • semi-structured meta-data are first extracted from unstructured free-text documents.
  • the semi-structured meta-data typically comprise entities as well as the relationships between the entities known as relations.
  • the embodiments involve the use of a domain knowledge base dependent on taxonomy, a concept hierarchy network, ontology, a database, or a thesaurus, from which attributes or the like features of the entities can be obtained.
  • the knowledge discovery method or system then uncovers the hidden knowledge in the relevant domain by analyzing the relationships between the attributes and the entities, mapping from an attribute space to an entity space.
  • a domain refers to an application context in which the embodiments operate or function. Relevant fields of application include knowledge management, business intelligence, scientific discovery, bio-informatics, semantic web, and intelligent agents.
  • a knowledge discovery system 10 in accordance with an embodiment of the invention is described, comprising a meta-data extractor 20 , an intermediate meta-data store 30 , a meta-data filter 40 , a domain knowledge base 50 , a meta-data transformer 60 , an associative discoverer 70 , a knowledge interpreter 80 , and a user interface 90 .
  • the meta-data extractor 20 allows a user of the knowledge discovery system 10 to extract semi-structured meta-data from free-text documents, which can be stored in the meta-data store 30 on a permanent or temporary basis.
  • the semi-structured meta-data can be in the Noun-Verb-Noun form as commonly referred to in the field of Natural Language Processing, or in the form of Concept-Relation-Concept (CRC) triples as proposed in U.S. Pat. No. 6,263,335, or in the form of Subject-Action-Object (SAO) tuples as proposed in International Patent Publication No. WO 01/01289, or in other known forms.
  • a noun, a concept, a subject, or an object is an entity or the like individual element in a domain.
  • an entity can be a company, a product, a person, a protein, and etc.
  • an entity is exemplified by a concept of which an attribute is a sub-concept.
  • the embodiments of the invention are not restricted to application to concepts and sub-concepts but include application to other types of entities and attributes of the entities.
  • the meta-data filter 40 identifies key concepts and relations based on the occurrence frequencies of the key concepts and users' preferences.
  • the meta-data transformer 60 then converts a concept to a plurality of sub-concepts based on the domain knowledge base 50 .
  • the sub-concepts may be related to the company profile, such as CEO profile, countries of operations, business sectors, financial ratios and etc.
  • the domain knowledge base 50 may be dependent on a conventional relational or object-oriented database, taxonomy, a thesaurus such as WORDNET, and/or a conceptual model such as one described in “Concept Hierarchy Memory Model”, published in “International Journal of Neural Systems”, Vol. 8 No. 3, pp. 437-446 (1996).
  • the features or attributes of the concepts or sub-concepts as specified in the domain knowledge base 50 can be predefined manually or generated automatically through other term extraction or thesaurus building algorithms known in the art.
  • the associative discoverer 70 may embody a statistical method, a symbolic machine-learning algorithm, or a neural network model, capable of supervised and/or unsupervised learning.
  • the neural network may comprise, for example, an Adaptive Resonance Theory Map (ARTMAP) system, such as one described in "Fuzzy ARTMAP: A Neural Network Architecture for Incremental Supervised Learning of Analog Multidimensional Maps", published in "IEEE Transactions on Neural Networks", Vol. 3, No. 5, pp. 698-713 (1992), or an Adaptive Resonance Associative Map (ARAM) system, such as one described in "Adaptive Resonance Associative Map", published in "Neural Networks", Vol. 8, No. 3, pp. 437-446 (1995).
  • the user interface 90 may comprise a graphical user interface, keyboard, keypad, mouse, voice command recognition system, or any combination thereof, and may permit graphical visualization of information groupings.
  • the knowledge discovery method can be executed using a computer system, such as a personal computer or the like processing means known in the art.
  • the knowledge discovery system 10 can be a stand-alone system, or can be incorporated into a computer system, in which case the user interface 90 can be the graphical or other user interface of the computer system and the domain knowledge base 50 may be in any conventional recordable storage format, for example a file in a storage device, such as magnetic or optical storage media, or in a storage area of a computer system.
  • An exemplary implementation of an embodiment of the invention is a business intelligence system in which concepts may refer to companies, products, and people, relations may refer to launch events, key hires and etc, and sub-concepts may refer to company profile, product features, and personal profile stored in an enterprise database or taxonomy.
  • the knowledge discovery system 10 in this context may serve to uncover hidden relations among company profiles and, as an example, stock price performance, or hidden relations among personal profile and company.
  • Yet another exemplary application of an embodiment of the invention is a scientific discovery system in which concepts may refer to genes, plants, or diseases, relations may refer to protein interaction, localization, or disorder association, and sub-concepts may refer to DNA sequences, plant attributes and etc.
  • the knowledge discovery system 10 in this context can be used to uncover hidden relations between, as an example, DNA sequences and a specific disease, in terms of identifying key DNA segments that have a strong link to the disease.
  • a meta-data extraction step 110 first scans text documents to extract meta-data, for example, in the form of concept-relation-concept 3-tuples.
  • a meta-data filtering step 130 next removes irrelevant 3-tuples by focusing on the prominent or key concepts and relations that the user deems as important.
  • a meta-data transformation step 140 reads an external domain knowledge base 50 dependent on taxonomy, ontology, a database, or a concept hierarchy network and derives the sub-concepts for the concepts related to a key concept.
  • a pattern formulation step 150 then forms training samples, each consisting of a vector representing sub-concept and a vector representing a key concept.
  • An associative discovery step 160 subsequently processes vector pairs and learns the underlying associations in the form of ARAM clusters.
  • a knowledge interpretation step 170 further extracts knowledge in the form of IF-THEN rules from ARAM.
  • the meta-data extraction step 110 scans consumer product reports to extract meta-data consisting of associations between product attributes and diseases.
  • the meta-data filtering step 130 removes irrelevant 3-tuples by focusing on the causal relation between consumer products and cancer.
  • the meta-data transformation step 140 reads external product knowledge dependent on taxonomy, ontology, or a database, and derives the product attributes for each product.
  • the pattern formulation step 150 forms training samples consisting of a vector representing product attributes and another vector representing cancer.
  • the associative discovery step 160 processes vector pairs and learns the underlying associations in the form of ARAM clusters.
  • the knowledge interpretation step 170 extracts knowledge in the form of IF-THEN rules from ARAM.
  • examples of the rules discovered may be the association of specific manufacturers with hazardous (cancer causing) products, and the identification of product ingredients, or combinations thereof, which consistently led to cancer.
  • the knowledge discovery method is described in greater detail hereinafter by way of exemplary methods or models for implementing each processing step.
  • the meta-data extractor 20 assumes that the input documents are in plain text format. If the input documents are in a different format, the required conversion is first performed in a pre-processing step 102 . For example, menu bars or formatting specifications such as HTML tags as found on web pages are removed to keep only the main body of the content. Text in image formats is converted to plain text using an optical character recognition system. Speech audio signals are converted to text using a speech recognition system. Captions and transcriptions in video are converted to text using a character recognition system and a speech recognition system respectively.
  • a name entity recognition step 104 next extracts all entities such as person names, company names, dates and numerical data (e.g. 10,000) from the preprocessed texts and creates an index.
  • the index stores the frequency of each entity together with the identities of the source documents.
  • NE recognition also identifies different variations of the same entity, e.g. "George Bush", "Bush, George" and "Mr. Bush", through a co-reference resolution algorithm. All variations of an entity are reduced to a standard form, and the documents are annotated with this standard form for further processing.
  • NC and VC are extended forms of Noun Phrase and Verb Phrase respectively, defined by the regular expressions NC: (ADJP)?(NP)+((IN|VBG)*(ADJP|NP))* and VC: (ADVP)?(VP)+(IN|ADVP)*(VP)*, where ADJP is an adjective, NP is a noun phrase, IN is a preposition, VBG is a verb gerund, ADVP is an adverb, and VP is a verb phrase.
  • NVN 3-tuples offer a low-level representation for capturing the agent/verb/object relationships in text.
  • text is tagged using a Part-of-Speech (POS) tagger.
  • a rule-based algorithm is employed to extract the NVN 3-tuples.
  • for example, the rule {[NC1][VC1][NC2], [DT][NC3]} → (NC1,VC1,NC2), (NC2,IS-A,NC3), when invoked on the sentence "Bill Gates released Windows XP, an operating system for PC's", would result in the NVN 3-tuples (Bill Gates, released, Windows XP) and (Windows XP, IS-A, operating system for PC's).
  • the set of the parsing rules can be constructed and validated based on a collection of documents.
  • the rules aim to take care of a large proportion of the major sentence forms.
  • when no rule is found to match the sentence, a special rule is used to extract only the (NC,VC,NC) 3-tuple that appears at the beginning of the sentence. In the above example, this corresponds to (Bill Gates, released, Windows XP). Though approximate, this often extracts the main action conveyed by the sentence.
  • a sense disambiguation step 108 identifies the specific meanings of noun clauses (NC) and verb clauses (VC) through the use of WordNet. For every word, WordNet distinguishes between its different word senses by providing separate synsets and associating a sense with each synset. The context of the words in a NC/VC is then used to compute a distance measure and pick the correct word sense for the NC/VC.
  • verb clauses are unified according to the respective meanings. For example, “causes”, “leads to”, and “results in” are all different forms of expressing a causal relation.
  • Clustering of noun clauses and unification of verb clauses complete the meta-data extraction step 110 by transforming the syntactic based NVN 3-tuples to semantic based Concept-Relation-Concept (CRC) 3-tuples type of meta-data representation.
  • the meta-data filtering step 130 allows a user to focus on subsets of the 3-tuples by identifying key concepts and relations for the purpose of knowledge discovery.
  • Key concepts and relations can be provided directly by the user. For example, the user can identify “cancer” as a key concept if he/she is interested in discovering factors related to cancer.
  • key concepts/relations can be identified automatically through simple statistical methods, as are known in the state-of-the-art. For example, a concept/relation can be referred to as a key concept/relation if it is contained in more than half of the CRC 3-tuples extracted. This approach enables a user to discover important concepts/relations previously unknown to the user.
  • given a set of CRC 3-tuples (A 1 ,R,B), (A 2 ,R,B), . . . , and (A n ,R,B) produced by the meta-data extractor 20 , where B is a key concept and R is a key relation identified by the meta-data filter 40 , and A 1 , A 2 , . . . , A n denote the concepts that are related to B under relation R, the meta-data transformation step 140 first obtains the sub-concept representation of A 1 , A 2 , . . . , A n from the domain knowledge base 50 .
  • the domain knowledge base 50 may be dependent on a conventional relational or object-oriented database, taxonomy, a thesaurus (such as WORDNET), and/or a conceptual model.
  • the relations are position specific.
  • the pattern formulation step 150 formulates an example consisting of the sub-representation of A i and the associated concept B in the form of ({a i1 , a i2 , . . . , a iM }/B) for processing in the associative discovery step 160 described below.
  • ARAM is a family of neural network models that performs incremental supervised learning of clusters (pattern classes) and multidimensional maps of both binary and analog patterns.
  • An ARAM system can be visualized as two overlapping Adaptive Resonance Theory (ART) modules consisting of two input fields F 1 a 71 and F 1 b 72 with a cluster field F 2 73 .
  • the input field F 1 a 71 serves to represent the attribute vector A and the input field F 1 b 72 serves to represent the concept vector B.
  • Each F 2 cluster node j is associated with an adaptive template vector w j a and a corresponding adaptive template vector w j b for learning the mapping from attributes to concepts. Initially, all cluster nodes are uncommitted and all weights are set equal to 1. After a cluster node is selected for encoding, it becomes committed.
  • the system first searches for an F 2 cluster J encoding a template vector w j a and a template vector w j b paired therewith that are closest to the input vectors A and B, respectively, according to a similarity function. Specifically, for each F 2 cluster j, the clustering engine calculates a similarity score based on the input vectors A and B, and the template vectors w j a and w j b , respectively.
  • An example of a similarity function is given below as the category choice function, eqn. (2).
  • the F 2 cluster that has the maximal similarity score is then selected and indexed at J.
  • the system performs template matching to verify that the template vectors w j a and w j b of the selected cluster J match well with the input information vectors A and B, respectively, according to another similarity function, e.g. eqn. (3) below. If so, the system performs template learning to modify the template vectors w j a and w j b of the F 2 cluster J to encode the input vectors A and B, respectively. Otherwise, the cluster is reset and the system repeats the process until a match is found.
  • the ART modules used in ARAM may be of a type that categorizes binary patterns, analog patterns, or a combination of the two patterns (referred to as “fuzzy ART”), as is known in the art. Described below is a fuzzy ARAM model composed of two overlapping fuzzy ART modules.
  • Fuzzy ARAM dynamics are determined by the choice parameters ⁇ a >0 and ⁇ b >0; the learning rates ⁇ a in [0,1] and ⁇ b in [0,1]; the vigilance parameters ⁇ a 74 in [0,1] and ⁇ b 75 in [0,1]; and a contribution parameter ⁇ in [0,1].
  • the choice parameters ⁇ a and ⁇ b control the bias towards choosing a F 2 cluster whose template vectors have a larger norm or magnitude.
  • the learning rates ⁇ a and ⁇ b control how fast the template vectors w j a and w j b adapt to the input vectors A and B, respectively.
  • the category choice function is defined as T j = γ|A^w j a |/(α a +|w j a |)+(1−γ)|B^w j b |/(α b +|w j b |)  (2), where, for vectors p and q, the fuzzy AND operation is defined by (p^q) i = min (p i ,q i ), and the norm is defined by |p| = Σ i p i .
  • the system is said to make a choice when at most one F 2 node can become active.
  • if either of the match functions m J a and m J b fails to meet its vigilance criterion, a mismatch reset 212 occurs in which the value of the choice function T J is set to 0 for the duration of the input presentation. The search process repeats, selecting a new index J until resonance is achieved.
  • initially, the vigilance parameter ρ a equals a baseline vigilance value. If a reset occurs in the cluster field F 2 , a match tracking process 214 increases ρ a until it is slightly larger than the match function m J a . The search process then selects another F 2 node J under the revised vigilance criterion.
  • the associative discoverer 70 learns hidden relationships in terms of mappings between the attribute set {a i1 , a i2 , . . . , a in } and the concept B.
  • the knowledge uncovered may be in the form of {f i1 , f i2 , . . . , f in } → B, where {f i1 , f i2 , . . . , f in }, a subset of {a i1 , a i2 , . . . , a in }, indicates the key features that are related to B.
  • the Knowledge Interpretation step 170 extracts symbolic knowledge in the form of IF-THEN rules from an ARAM.
  • a rule extraction algorithm, such as the one described in "Rule Extraction: From Neural Architecture to Symbolic Representation", published in "Connection Science", Vol. 7, No. 1, pp. 3-26 (1995), may be used for this purpose.
  • as illustrated in FIG. 8 , in a fuzzy ARAM network, each cluster node in the F 2 field roughly corresponds to a rule. Each node has an associated weight vector that can be directly translated into a verbal description of the antecedents in the corresponding rule. Each such node is also associated with a weight template vector in the F 1 b field, which in turn encodes a prediction. Learned weight vectors, one for each F 2 node, constitute a set of rules that link antecedents to consequents. The number of rules equals the number of F 2 nodes that become active during learning.
  • a rule pruning procedure aims to select a small set of rules from trained ARAM networks based on their confidence factors.
  • an antecedent pruning procedure aims to remove antecedents from rules while preserving accuracy.
  • the rule pruning algorithm derives a confidence factor for each F 2 cluster node in terms of its usage frequency in a training set and its predictive accuracy on a predicting set. Usage and accuracy roughly correspond to support and confidence, respectively, as used in the field of association rule mining.
  • the confidence factor identifies good rules with nodes that are frequently and correctly used. This allows pruning of ARAM to remove rules with low confidence. Overall performance is actually improved when the pruning algorithm removes rules that were created to handle misleading special cases.
  • a j = P j /max{P J : node J predicts outcome k}.
  • clusters can be pruned from the network using one of the following strategies (a minimal Python sketch of these pruning strategies appears at the end of this list):
  • Threshold Pruning: This is the simplest type of pruning, where the F 2 nodes with confidence factors below a given threshold τ are removed from the network. A typical setting for τ is 0.5. This method is fast and provides an initial elimination of unwanted nodes. To avoid over-pruning, it is sometimes useful to specify a minimum number of clusters to be preserved in the system.
  • Local Pruning removes clusters one at a time from an ARAM network. The baseline system performance on the training and the predicting sets is first determined. Then the algorithm deletes the cluster with the lowest confidence factor. The cluster is replaced, however, if its removal degrades system performance on the training and predicting sets.
  • a variant of the local pruning strategy updates baseline performance each time a cluster is removed. This option, called hill-climbing, gives slightly larger rule sets but better predictive accuracy.
  • a hybrid strategy first prunes the ARAM systems using threshold pruning and then applies local pruning on the remaining smaller set of rules.
  • a non-zero weight to an F 2 cluster node translates into an antecedent in the corresponding rule.
  • the antecedent pruning procedure calculates an error factor for each antecedent in each rule based on its performance on the training and predicting sets.
  • each antecedent of the rule that also appears in the current input has its error factor increased in proportion to the smaller of its magnitudes in the rule and in the input vector.
  • a local pruning strategy, similar to the one used for rules, removes redundant antecedents.
  • ARAM learns real-valued weights.
  • the feature values represented by weights w j a are quantized.
  • the algorithm reduces the value of w to V q .
  • the embodiments of the invention are preferably implemented using a computer, such as the general-purpose computer shown in FIG. 9 , or group of computers that are interconnected via a network.
  • the functionality or processing of the knowledge discovery system and method of FIGS. 1 to 8 may be implemented as software, or a computer program, executing on the computer or group of computers.
  • the method or process steps for acquiring, sharing and managing knowledge and information within an organization are effected by instructions in the software that are carried out by the computer or group of computers.
  • the software may be implemented as one or more modules for implementing the process steps.
  • a module is a part of a computer program that usually performs a particular function or related functions.
  • a module can also be a packaged functional hardware unit for use with other components or modules.
  • the software may be stored in a computer readable medium, including the storage devices described below.
  • the software is preferably loaded into the computer or group of computers from the computer readable medium and then carried out by the computer or group of computers.
  • a computer program product includes a computer readable medium having such software or a computer program recorded on it that can be carried out by a computer.
  • the use of the computer program product in the computer or group of computers preferably effects an advantageous system for acquiring, sharing and managing knowledge and information within an organization in accordance with the embodiments of the invention.
  • the system 28 is simply provided for illustrative purposes and other configurations can be employed without departing from the scope and spirit of the invention.
  • Computers with which the embodiment can be practiced include IBM-PC/ATs or compatibles, one of the Macintosh™ family of PCs, a Sun Sparcstation™, a workstation or the like. The foregoing is merely exemplary of the types of computers with which the embodiments of the invention may be practiced.
  • the processes of the embodiments described herein are resident as software or a program recorded on a hard disk drive (generally depicted as block 29 in FIG. 9 ) as the computer readable medium, and read and controlled using the processor 30 .
  • Intermediate storage of the program and any data may be accomplished using the semiconductor memory 31 , possibly in concert with the hard disk drive 29 .
  • the program may be supplied to the user encoded on a CD-ROM or a floppy disk (both generally depicted by block 29 ), or alternatively could be read by the user from the network via a modem device connected to the computer, for example.
  • the software can also be loaded into the computer system 28 from other computer readable medium including magnetic tape, a ROM or integrated circuit, a magneto-optical disk, a radio or infra-red transmission channel between a computer and another device, a computer readable card such as a PCMCIA card, and the Internet and Intranets including email transmissions and information recorded on websites and the like.
  • the foregoing is merely exemplary of relevant computer readable mediums. Other computer readable mediums may be practiced without departing from the scope and spirit of the invention.

Abstract

A method and a system for discovering knowledge from text documents are disclosed, which involve extracting from text documents semi-structured meta-data, wherein the semi-structured meta-data includes a plurality of entities and a plurality of relations between the entities; identifying from the semi-structured meta-data a plurality of key entities and a corresponding plurality of key relations; deriving from a domain knowledge base a plurality of attributes relating to each of the plurality of entities relating to one of the plurality of key entities for forming a plurality of pairs of key entity and a plurality of attributes related thereto; formulating a plurality of patterns, each of the plurality of patterns relating to one of the plurality of pairs of key entity and a plurality of attributes related thereto; analyzing the plurality of patterns using an associative discoverer; and interpreting the output of the associative discoverer for discovering knowledge.

Description

FIELD OF INVENTION
The invention relates generally to the field of natural language processing, text mining, and knowledge discovery. In particular, it relates to a method and a system for extracting and discovering knowledge from text documents.
BACKGROUND
Due to the recent advancement of information technology and the growing popularity of the Internet, a vast amount of information is now available in digital form in both the Internet and the Intranet environments. Such availability of information has provided many opportunities. In the commercial world, for example, online information is an advantageous source of business intelligence that is crucial to a company's survival and adaptability in a highly competitive environment. Unfortunately, a user in this situation is usually faced with too much information and too little useful or actionable knowledge. The processes of extracting and discovering knowledge, or knowledge extraction and discovery, from text documents or the like textual data are thus very important tasks of considerable application potential and impact.
Conventional methods and systems of knowledge extraction and discovery from text documents typically focus on the extraction of information or meta-data from free-text documents. Meta-data, which are condensed and typically semi-structured representations of text content, can be considered as the raw form of knowledge and are essentially facts specified in the text documents. Meta-data do not include knowledge that is not mentioned explicitly in the text. In addition, there is usually too much information extracted by the conventional methods and systems and it is a painstaking process for a user to organize and discover wisdom from the extracted information.
Specifically, in U.S. Pat. No. 6,076,088 and U.S. Pat. No. 6,263,335, both entitled “Information Extraction System and Method using Concept-Relation-Concept (CRC) Triples” by Paik et al, systems are proposed for building subject knowledge bases in the form of Concept-Relation-Concept (CRC) triples from text documents. The systems can acquire new knowledge by automatically identifying new names, events, or concepts from text documents.
In International Patent Publication No. WO 01/01289 entitled "Semantic Processor and Method with Knowledge Analysis of And Extraction from Natural Language Documents" by Tsourikov et al, the use of natural language processing methods is proposed for the extraction of Subject-Action-Object (SAO) tuples from text documents upon a user request. The methods further include normalization and organization of SAO triplets into Problem Folders with Action-Object (AO) portions as the name of the folders containing a list of subjects. In International Patent Publication No. WO 01/82122 entitled "Expanded Search and Display of SAO Knowledge Based Information" by Tsourikov et al, the methods proposed by Tsourikov et al in WO 01/01289 are extended by proposing methods for normalizing SAO triplets through paraphrasing AOs.
A critical problem associated with the foregoing proposals lies in the common and attendant inability of the proposed systems and methods to derive new or hidden knowledge from text documents that is often the critical differentiating factor in gaining an edge over competitors.
There is therefore a need for a method and a system for knowledge extraction and discovery from text documents for addressing such a problem.
SUMMARY
In accordance with a first aspect of the invention, there is provided a method for discovering knowledge from text documents, the method comprising the steps of:
    • extracting from text documents semi-structured meta-data, wherein the semi-structured meta-data includes a plurality of entities and a plurality of relations between the entities;
    • identifying from the semi-structured meta-data a plurality of key entities and a corresponding plurality of key relations;
    • deriving from a domain knowledge base a plurality of attributes relating to each of the plurality of entities relating to one of the plurality of key entities for forming a plurality of pairs of key entity and a plurality of attributes related thereto;
    • formulating a plurality of patterns, each of the plurality of patterns relating to one of the plurality of pairs of key entity and a plurality of attributes related thereto;
    • analyzing the plurality of patterns using an associative discoverer; and
    • interpreting the output of the associative discoverer for discovering knowledge.
In accordance with a second aspect of the invention, there is provided a computer program product comprising a computer usable medium having computer readable program code means embodied in the medium for discovering knowledge from text documents, the computer program product comprising:
    • computer readable program code means for extracting from text documents semi-structured meta-data, wherein the semi-structured meta-data includes a plurality of entities and a plurality of relations between the entities;
    • computer readable program code means for identifying from the semi-structured meta-data a plurality of key entities and a corresponding plurality of key relations;
    • computer readable program code means for deriving from a domain knowledge base a plurality of attributes relating to each of the plurality of entities relating to one of the plurality of key entities for forming a plurality of pairs of key entity and a plurality of attributes related thereto;
    • computer readable program code means for formulating a plurality of patterns, each of the plurality of patterns relating to one of the plurality of pairs of key entity and a plurality of attributes related thereto;
    • computer readable program code means for analyzing the plurality of patterns using an associative discoverer; and
    • computer readable program code means for interpreting the output of the associative discoverer for discovering knowledge.
In accordance with a third aspect of the invention, there is provided a system for knowledge discovery from free-text documents, comprising:
    • means for extracting semi-structured meta-data from the free-text documents;
    • means for identifying key entities and key relations from the semi-structured meta-data;
    • a knowledge base that defines the attributes of entities;
    • means for formulating patterns based on the key entities and the attributes of entities related to the key entities; and
    • means for analyzing the patterns for knowledge.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention are described hereinafter by way of example with reference to the accompanying drawings, in which:
FIG. 1 illustrates a knowledge discovery system according to an embodiment of the invention;
FIG. 2A illustrates a flow diagram of a knowledge discovery method according to a further embodiment of the invention;
FIG. 2B illustrates an exemplary application of the knowledge discovery method of FIG. 2A for discovering relationships between product attributes and diseases;
FIG. 3 illustrates an exemplary flow diagram of a meta-data extraction process of FIG. 2A;
FIG. 4 illustrates an exemplary architecture for an associative discoverer of FIG. 1 for the learning of the association between sub-concepts in the sub-concept space and concepts in the concept space, in which an F1 a field serves as the input field for the sub-concepts, an F1 b field serves as the input field for the concepts, and clusters are formed in an F2 field that represent the associative mappings from the sub-concept space to the concept space;
FIG. 5 illustrates a category choice process performed in the associative discoverer of FIG. 4;
FIG. 6 illustrates a template matching process and a template learning process performed in the associative discoverer of FIG. 4;
FIG. 7 illustrates an exemplary flow diagram of an associative discovery process of FIG. 2A;
FIG. 8 illustrates a correspondence process between a cluster in the associative discoverer of FIG. 4 and an IF-THEN rule, in which template vectors wj a and wj b encoded by a cluster j can be interpreted as a rule mapping a set of antecedents represented by wj a to consequents represented by wj b; and
FIG. 9 illustrates a general-purpose computer by which the embodiments of the invention are preferably implemented.
DETAILED DESCRIPTION
The foregoing problem is addressed by a method and a system described hereinafter for transforming meta-data or the like information extracted from text documents in a domain and thereby discovering knowledge that is new or previously undiscovered in the extracted information and the text documents.
A method and a system for knowledge discovery according to embodiments of the invention described hereinafter relate to the discovery of new or hidden knowledge from free-text documents in a domain. According to the embodiments, semi-structured meta-data are first extracted from unstructured free-text documents. The semi-structured meta-data typically comprise entities as well as the relationships between the entities known as relations. The embodiments involve the use of a domain knowledge base dependent on taxonomy, a concept hierarchy network, ontology, a database, or a thesaurus, from which attributes or the like features of the entities can be obtained. The knowledge discovery method or system then uncovers the hidden knowledge in the relevant domain by analyzing the relationships between the attributes and the entities, mapping from an attribute space to an entity space. A domain refers to an application context in which the embodiments operate or function. Relevant fields of application include knowledge management, business intelligence, scientific discovery, bio-informatics, semantic web, and intelligent agents.
With reference to FIG. 1, using a knowledge discovery method a knowledge discovery system 10 in accordance with an embodiment of the invention is described, comprising a meta-data extractor 20, an intermediate meta-data store 30, a meta-data filter 40, a domain knowledge base 50, a meta-data transformer 60, an associative discoverer 70, a knowledge interpreter 80, and a user interface 90.
The meta-data extractor 20 allows a user of the knowledge discovery system 10 to extract semi-structured meta-data from free-text documents, which can be stored in the meta-data store 30 on a permanent or temporary basis. The semi-structured meta-data can be in the Noun-Verb-Noun form as commonly referred to in the field of Natural Language Processing, or in the form of Concept-Relation-Concept (CRC) triples as proposed in U.S. Pat. No. 6,263,335, or in the form of Subject-Action-Object (SAO) tuples as proposed in International Patent Publication No. WO 01/01289, or in other known forms. A noun, a concept, a subject, or an object, is an entity or the like individual element in a domain. For example, an entity can be a company, a product, a person, a protein, and etc.
For purposes of brevity of description hereinafter, an entity is exemplified by a concept of which an attribute is a sub-concept. However, the embodiments of the invention are not restricted to application to concepts and sub-concepts but include application to other types of entities and attributes of the entities.
To identify knowledge of interest, the meta-data filter 40 identifies key concepts and relations based on the occurrence frequencies of the key concepts and users' preferences. The meta-data transformer 60 then converts a concept to a plurality of sub-concepts based on the domain knowledge base 50. For example in the case of companies, for which the concept is a company profile, the sub-concepts may be related to the company profile, such as CEO profile, countries of operations, business sectors, financial ratios and etc. The domain knowledge base 50 may be dependent on a conventional relational or object-oriented database, taxonomy, a thesaurus such as WORDNET, and/or a conceptual model such as one described in “Concept Hierarchy Memory Model”, published in “International Journal of Neural Systems”, Vol. 8 No. 3, pp. 437-446 (1996). The features or attributes of the concepts or sub-concepts as specified in the domain knowledge base 50 can be predefined manually or generated automatically through other term extraction or thesaurus building algorithms known in the art.
The associative discoverer 70 may embody a statistical method, a symbolic machine-learning algorithm, or a neural network model, capable of supervised and/or unsupervised learning. The neural network may comprise, for example, an Adaptive Resonance Theory Map (ARTMAP) system, such as one described in "Fuzzy ARTMAP: A Neural Network Architecture for Incremental Supervised Learning of Analog Multidimensional Maps", published in "IEEE Transactions on Neural Networks", Vol. 3, No. 5, pp. 698-713 (1992), or an Adaptive Resonance Associative Map (ARAM) system, such as one described in "Adaptive Resonance Associative Map", published in "Neural Networks", Vol. 8, No. 3, pp. 437-446 (1995).
The user interface 90 may comprise a graphical user interface, keyboard, keypad, mouse, voice command recognition system, or any combination thereof, and may permit graphical visualization of information groupings.
The knowledge discovery method can be executed using a computer system, such as a personal computer or the like processing means known in the art. The knowledge discovery system 10 can be a stand-alone system, or can be incorporated into a computer system, in which case the user interface 90 can be the graphical or other user interface of the computer system and the domain knowledge base 50 may be in any conventional recordable storage format, for example a file in a storage device, such as magnetic or optical storage media, or in a storage area of a computer system.
An exemplary implementation of an embodiment of the invention is a business intelligence system in which concepts may refer to companies, products, and people, relations may refer to launch events, key hires and etc, and sub-concepts may refer to company profile, product features, and personal profile stored in an enterprise database or taxonomy. The knowledge discovery system 10 in this context may serve to uncover hidden relations among company profiles and, as an example, stock price performance, or hidden relations among personal profile and company.
Yet another exemplary application of an embodiment of the invention is a scientific discovery system in which concepts may refer to genes, plants, or diseases, relations may refer to protein interaction, localization, or disorder association, and sub-concepts may refer to DNA sequences, plant attributes and etc. The knowledge discovery system 10 in this context can be used to uncover hidden relations between, as an example, DNA sequences and a specific disease, in terms of identifying key DNA segments that have a strong link to the disease.
With reference to FIG. 2A, an instance of the knowledge discovery method in accordance with a further embodiment of the invention is described. In the knowledge discovery method, a meta-data extraction step 110 first scans text documents to extract meta-data, for example, in the form of concept-relation-concept 3-tuples. A meta-data filtering step 130 next removes irrelevant 3-tuples by focusing on the prominent or key concepts and relations that the user deems as important. Next, a meta-data transformation step 140 reads an external domain knowledge base 50 dependent on taxonomy, ontology, a database, or a concept hierarchy network and derives the sub-concepts for the concepts related to a key concept. A pattern formulation step 150 then forms training samples, each consisting of a vector representing sub-concept and a vector representing a key concept. An associative discovery step 160 subsequently processes vector pairs and learns the underlying associations in the form of ARAM clusters. A knowledge interpretation step 170 further extracts knowledge in the form of IF-THEN rules from ARAM.
With reference to FIG. 2B, an application of the knowledge discovery method for discovering the hidden relationships between product attributes and diseases, such as cancer, is described for illustrative purposes. The meta-data extraction step 110 scans consumer product reports to extract meta-data consisting of associations between product attributes and diseases. The meta-data filtering step 130 removes irrelevant 3-tuples by focusing on the causal relation between consumer products and cancer. The meta-data transformation step 140 reads external product knowledge dependent on taxonomy, ontology, or a database, and derives the product attributes for each product. The pattern formulation step 150 forms training samples consisting of a vector representing product attributes and another vector representing cancer. The associative discovery step 160 processes vector pairs and learns the underlying associations in the form of ARAM clusters. The knowledge interpretation step 170 extracts knowledge in the form of IF-THEN rules from ARAM. For this application, examples of the rules discovered may be the association of specific manufacturers with hazardous (cancer causing) products, and the identification of product ingredients, or combinations thereof, which consistently led to cancer.
The knowledge discovery method is described in greater detail hereinafter by way of exemplary methods or models for implementing each processing step.
Meta-Data Extraction
With reference to FIG. 3, which illustrates an exemplary flow diagram of a meta-data extraction process, the meta-data extractor 20 assumes that the input documents are in plain text format. If the input documents are in a different format, the required conversion is first performed in a pre-processing step 102. For example, menu bars or formatting specifications such as HTML tags as found on web pages are removed to keep only the main body of the content. Text in image formats is converted to plain text using an optical character recognition system. Speech audio signals are converted to text using a speech recognition system. Captions and transcriptions in video are converted to text using a character recognition system and a speech recognition system respectively.
Name Entity (NE) Recognition
A name entity recognition step 104 next extracts all entities such as person names, company names, dates and numerical data (e.g. 10,000) from the preprocessed texts and creates an index. The index stores the frequency of each entity together with the identities of the source documents. NE recognition also identifies different variations of the same entity, e.g. "George Bush", "Bush, George" and "Mr. Bush", through a co-reference resolution algorithm. All variations of an entity are reduced to a standard form, and the documents are annotated with this standard form for further processing.
NVN 3-Tuples Extraction
An NVN 3-tuples extraction step 106 then parses sentences and derives syntactic 3-tuples in the form of (NC, VC, NC), where NC is a Noun Clause and VC is a Verb Clause. NC and VC are extended forms of Noun Phrase and Verb Phrase respectively, defined by the regular expressions
NC:−(ADJP)?(NP)+((IN|VBG)*(ADJP|NP))*
VC:−(ADVP)?(VP)+(IN|ADVP)*(VP)*
where ADJP is an adjective, NP is a noun phrase, IN is a preposition, VBG is a Verb gerund, ADVP is an adverb, and VP is a verb phrase.
NVN 3-tuples offer a low-level representation for capturing the agent/verb/object relationships in text. As a first step in the extraction, text is tagged using a Part-of-Speech (POS) tagger. Then a rule-based algorithm is employed to extract the NVN 3-tuples. As an example, consider the sentence: “Bill Gates released Windows XP, an operating system for PC's”. The rule
{[NC1][VC1][NC2], [DT][NC3]}→(NC1,VC1,NC2), (NC2,IS-A,NC3)
when invoked on sentence would result in the NVN 3 tuples:
    • (Bill Gates, released, Windows XP), and
    • (Windows XP, IS-A, operating system for PC's).
The set of the parsing rules can be constructed and validated based on a collection of documents. The rules aim to take care of a large proportion of the major sentence forms. In situations where no rule matches the sentence, a special rule is used to extract only the (NC,VC,NC) 3-tuple that appears at the beginning of the sentence. In the above example, this corresponds to (Bill Gates, released, Windows XP). Though approximate, this often extracts the main action conveyed by the sentence.
Sense Disambiguation
Next, a sense disambiguation step 108 identifies the specific meanings of noun clauses (NC) and verb clauses (VC) through the use of WordNet. For every word, WordNet distinguishes between its different word senses by providing separate synsets and associating a sense with each synset. The context of the words in a NC/VC is then used to compute a distance measure and pick the correct word sense for the NC/VC.
Clustering and Unification
In a clustering and unification step 110, the disambiguated NCs are grouped through a clustering algorithm, as is known in the state of the art, based on their similarities. In this way, the knowledge discovery method also takes care of most of the entities that have not been reduced to the standard form during NE Recognition due to inaccuracies in NE co-reference resolution.
In addition, the verb clauses are unified according to the respective meanings. For example, “causes”, “leads to”, and “results in” are all different forms of expressing a causal relation. Clustering of noun clauses and unification of verb clauses complete the meta-data extraction step 110 by transforming the syntactic based NVN 3-tuples to semantic based Concept-Relation-Concept (CRC) 3-tuples type of meta-data representation.
Meta-Data Filtering
As the meta-data extraction step 110 in FIG. 2A may produce too many CRC 3-tuples, the meta-data filtering step 130 allows a user to focus on subsets of the 3-tuples by identifying key concepts and relations for the purpose of knowledge discovery. Key concepts and relations can be provided directly by the user. For example, the user can identify “cancer” as a key concept if he/she is interested in discovering factors related to cancer. Alternatively, key concepts/relations can be identified automatically through simple statistical methods, as are known in the state-of-the-art. For example, a concept/relation can be referred to as a key concept/relation if it is contained in more than half of the CRC 3-tuples extracted. This approach enables a user to discover important concepts/relations previously unknown to the user.
Meta-Data Transformation
Given a set of CRC 3-tuples, (A1,R,B), (A2,R,B), . . . , and (An,R,B), produced by the meta-data extractor 20, where B is a key concept and R is a key relation identified by the meta-data filter 40, and A1, A2, . . . , An denote the concepts that are related to B under relation R, the meta-data transformation step 140 first obtains the sub-concept representation of A1, A2, . . . , An from the domain knowledge base 50. The domain knowledge base 50, whose main purpose is to provide a sub-level representation for the key concepts identified, may be dependent on a conventional relational or object-oriented database, taxonomy, a thesaurus (such as WORDNET), and/or a conceptual model. Without loss of generality, each concept Ai can be represented by an M-dimensional vector of attributes or features,
Ai=(ai1 , . . . , a iM)  (1)
where aij is a real-valued number between zero and one, indicating the degree of presence of attribute j in concept Ai. Note that in the above representation, the relations are position specific. In other words, (A,R,B) does not necessarily imply (B,R,A). For each concept Ai, the pattern formulation step 150 formulates an example consisting of the sub-representation of Ai and the associated concept B in the form of ({ai1, ai2, . . . , aiM}/B) for processing in the associative discovery step 160 described below.
Associative Discovery
With reference to FIG. 4, an exemplary neural network-based associative discoverer 70 is the Adaptive Resonance Associative Map (ARAM), as described in "Adaptive Resonance Associative Map", published in "Neural Networks", Vol. 8, No. 3, pp. 437-446 (1995). As described in the article cited above, ARAM is a family of neural network models that performs incremental supervised learning of clusters (pattern classes) and multidimensional maps of both binary and analog patterns. An ARAM system can be visualized as two overlapping Adaptive Resonance Theory (ART) modules consisting of two input fields F 1 a 71 and F 1 b 72 with a cluster field F 2 73. For the knowledge discovery method described herein, the input field F 1 a 71 serves to represent the attribute vector A and the input field F 1 b 72 serves to represent the concept vector B. Each F2 cluster node j is associated with an adaptive template vector wj a and a corresponding adaptive template vector wj b for learning the mapping from attributes to concepts. Initially, all cluster nodes are uncommitted and all weights are set equal to 1. After a cluster node is selected for encoding, it becomes committed.
With reference to FIG. 5, given an input vector A with an associated input vector B, the system first searches for an F2 cluster J encoding a template vector w_J^a and a paired template vector w_J^b that are closest to the input vectors A and B, respectively, according to a similarity function. Specifically, for each F2 cluster j, the clustering engine calculates a similarity score based on the input vectors A and B and the template vectors w_j^a and w_j^b, respectively. An example of a similarity function is given below as the category choice function, eqn. (2). The F2 cluster that has the maximal similarity score is then selected and indexed at J.
With reference to FIG. 6, the system performs template matching to verify that the template vectors w_J^a and w_J^b of the selected cluster J match well with the input vectors A and B, respectively, according to another similarity function, e.g. eqn. (3) below. If so, the system performs template learning to modify the template vectors w_J^a and w_J^b of the F2 cluster J to encode the input vectors A and B, respectively. Otherwise, the cluster is reset and the system repeats the process until a match is found. The detailed algorithm is given below.
The ART modules used in ARAM may be of a type that categorizes binary patterns, analog patterns, or a combination of the two patterns (referred to as “fuzzy ART”), as is known in the art. Described below is a fuzzy ARAM model composed of two overlapping fuzzy ART modules.
Fuzzy ARAM dynamics are determined by the choice parameters α_a > 0 and α_b > 0; the learning rates β_a in [0,1] and β_b in [0,1]; the vigilance parameters ρ_a 74 in [0,1] and ρ_b 75 in [0,1]; and a contribution parameter γ in [0,1]. The choice parameters α_a and α_b control the bias towards choosing an F2 cluster whose template vectors have a larger norm or magnitude. The learning rates β_a and β_b control how fast the template vectors w_j^a and w_j^b adapt to the input vectors A and B, respectively. The vigilance parameters ρ_a and ρ_b determine the criteria for a satisfactory match between the input vectors A and B and the template vectors w_j^a and w_j^b, respectively. The contribution parameter γ controls the weighting of the contributions from the F1^a and F1^b fields when selecting an F2 cluster.
With reference to FIG. 7, the dynamics of the associative discoverer 70 are described by way of a flow diagram. Given a pair of F1^a and F1^b input vectors A and B, for each F2 node j, a category choice process 202 computes the choice function T_j as defined by
T_j = γ|A ∧ w_j^a| / (α_a + |w_j^a|) + (1 − γ)|B ∧ w_j^b| / (α_b + |w_j^b|)  (2)
where, for vectors p and q, the fuzzy AND operation is defined by (p ∧ q)_i = min(p_i, q_i), and the norm is defined by |p| = Σ_i p_i. The system is said to make a choice when at most one F2 node can become active. The choice is indexed at J by a select winner process 204, where T_J = max{T_j : for all F2 nodes j}.
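As a hedged illustration of the category choice process 202 and the select winner process 204, the fuzzy AND, the norm, and the choice function of eqn. (2) might be computed as follows; the function names and the use of NumPy arrays are assumptions for illustration, and the default parameter values follow the exemplary settings given later in this description.

```python
import numpy as np

def fuzzy_and(p, q):
    """Fuzzy AND: element-wise minimum, as used in eqn. (2)."""
    return np.minimum(p, q)

def choice_values(A, B, W_a, W_b, alpha_a=0.1, alpha_b=0.1, gamma=0.5):
    """Compute the choice function T_j of eqn. (2) for every F2 node j.
    W_a and W_b hold the template vectors w_j^a and w_j^b as rows."""
    T = np.empty(len(W_a))
    for j, (wa, wb) in enumerate(zip(W_a, W_b)):
        T[j] = (gamma * fuzzy_and(A, wa).sum() / (alpha_a + wa.sum())
                + (1.0 - gamma) * fuzzy_and(B, wb).sum() / (alpha_b + wb.sum()))
    return T

# The winner J is the node with the maximal choice value (select winner process 204):
# J = int(np.argmax(choice_values(A, B, W_a, W_b)))
```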
A template matching process 206 then checks if the selected cluster represents a good match. Specifically, a check 208 is performed to verify if the match functions, m_J^a and m_J^b, meet the vigilance criteria in their respective modules:
m_J^a = |A ∧ w_J^a| / |A| ≥ ρ_a and m_J^b = |B ∧ w_J^b| / |B| ≥ ρ_b.  (3)
Resonance occurs if both criteria are satisfied. Learning then ensues, as defined below. If any of the vigilance constraints is violated, mismatch reset 212 occurs in which the value of the choice function TJ is set to 0 for the duration of the input presentation. The search process repeats, selecting a new index J until resonance is achieved.
Once the search ends, a template learning process 210 updates the template vectors w_J^a and w_J^b, respectively, according to the equations:
w_J^{a(new)} = (1 − β_a)w_J^{a(old)} + β_a(A ∧ w_J^{a(old)})  (4)
and
w_J^{b(new)} = (1 − β_b)w_J^{b(old)} + β_b(B ∧ w_J^{b(old)})  (5)
respectively. For efficient coding of noisy input sets, it is useful to set β_a = β_b = 1 when J is an uncommitted node, and then take β_a < 1 and β_b < 1 after the cluster node is committed. Fast learning corresponds to setting β_a = β_b = 1 for committed nodes.
At the start of each input presentation, the vigilance parameter ρ_a equals a baseline vigilance ρ̄_a. If a reset occurs in the cluster field F2, a match tracking process 214 increases ρ_a until it is slightly larger than the match function m_J^a. The search process then selects another F2 node J under the revised vigilance criterion. An exemplary setting for the other parameters is as follows: α_a = α_b = 0.1, β_a = β_b = 1, ρ̄_a = 0.5, ρ_b = 1, and γ = 0.5.
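Putting eqns. (2) through (5) together, one presentation of an input pair (A, B), with mismatch reset 212 and match tracking 214, could be sketched as below. The sketch reuses the fuzzy_and and choice_values helpers from the earlier sketch, appends an uncommitted node that directly encodes (A, B) when no committed node resonates (equivalent to fast learning on all-ones templates), and uses the exemplary parameter settings above; it is a didactic approximation, not the reference implementation.

```python
import numpy as np

def aram_learn_sample(A, B, W_a, W_b,
                      rho_a_base=0.5, rho_b=1.0,
                      beta_a=1.0, beta_b=1.0,
                      alpha_a=0.1, alpha_b=0.1, gamma=0.5):
    """One ARAM input presentation: choice (eqn. 2), vigilance test (eqn. 3),
    mismatch reset with match tracking, then template learning (eqns. 4-5)."""
    rho_a = rho_a_base
    T = choice_values(A, B, W_a, W_b, alpha_a, alpha_b, gamma)   # category choice 202
    active = np.ones(len(T), dtype=bool)
    while active.any():
        J = int(np.argmax(np.where(active, T, -np.inf)))         # select winner 204
        m_a = fuzzy_and(A, W_a[J]).sum() / A.sum()                # match functions, eqn. (3)
        m_b = fuzzy_and(B, W_b[J]).sum() / B.sum()
        if m_a >= rho_a and m_b >= rho_b:                         # resonance
            W_a[J] = (1 - beta_a) * W_a[J] + beta_a * fuzzy_and(A, W_a[J])  # eqn. (4)
            W_b[J] = (1 - beta_b) * W_b[J] + beta_b * fuzzy_and(B, W_b[J])  # eqn. (5)
            return J, W_a, W_b
        active[J] = False                                         # mismatch reset 212
        rho_a = min(1.0, m_a + 1e-6)                              # match tracking 214
    # No committed node resonates: commit a new node encoding (A, B).
    W_a = np.vstack([W_a, A])
    W_b = np.vstack([W_b, B])
    return len(W_a) - 1, W_a, W_b
```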
Through its supervised learning procedure, the associative discoverer 70 learns hidden relationships in terms of a mapping between the attribute set {a_{i1}, a_{i2}, . . . , a_{iM}} and the concept B. The knowledge uncovered may be in the form {f_{i1}, f_{i2}, . . . , f_{in}} → B, where {f_{i1}, f_{i2}, . . . , f_{in}}, a subset of {a_{i1}, a_{i2}, . . . , a_{iM}}, indicates the key features that are related to B.
Knowledge Interpretation
As the last step of the knowledge discovery method, the Knowledge Interpretation step 170 extracts symbolic knowledge in the form of IF-THEN rules from an ARAM. There is provided one such rule extraction algorithm as described in “Rule Extraction: From Neural Architecture to Symbolic Representation”, published in “Connection Science”, Vol. 7, No. 1, pp. 3-26 (1995). Referring to FIG. 8, in a fuzzy ARAM network, each cluster node in the F2 field roughly corresponds to a rule. Each node has an associated weight vector that can be directly translated into a verbal description of the antecedents in the corresponding rule. Each such node is also associated with a template vector in the F1^b field, which in turn encodes a prediction. The learned weight vectors, one for each F2 node, constitute a set of rules that link antecedents to consequents. The number of rules equals the number of F2 nodes that become active during learning.
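A minimal sketch of this rule extraction, in which each F2 node's non-zero w_j^a weights become antecedents and its w_j^b template is read off as the predicted concept, is given below; the name-lookup arguments and the optional quantization hook are illustrative assumptions rather than structures defined in this specification.

```python
def extract_rules(W_a, W_b, attribute_names, concept_names, quantize=None):
    """Translate each F2 node's template vectors into one IF-THEN rule:
    non-zero weights in w_j^a become antecedents, and w_j^b encodes the
    predicted concept. `quantize` may map a weight to a verbal level."""
    rules = []
    for wa, wb in zip(W_a, W_b):
        antecedents = [(attribute_names[i], quantize(w) if quantize else w)
                       for i, w in enumerate(wa) if w > 0]
        consequent = concept_names[int(max(range(len(wb)), key=lambda i: wb[i]))]
        rules.append((antecedents, consequent))
    return rules
```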
However, large databases typically cause ARAM to generate too many rules to be of practical use. To reduce the complexity of fuzzy ARAM, a rule pruning procedure aims to select a small set of rules from trained ARAM networks based on their confidence factors. To derive concise rules, an antecedent pruning procedure aims to remove antecedents from rules while preserving accuracy.
Rule Pruning
The rule pruning algorithm derives a confidence factor for each F2 cluster node in terms of its usage frequency in a training set and its predictive accuracy on a predicting set. Usage and accuracy roughly correspond to support and confidence, respectively, as used in the field of association rule mining. The confidence factor identifies good rules with nodes that are frequently and correctly used. This allows the ARAM network to be pruned by removing rules with low confidence. Overall performance is actually improved when the pruning algorithm removes rules that were created to handle misleading special cases.
Specifically, the pruning algorithm evaluates an F2 cluster j in terms of a confidence factor C_j:
C_j = γU_j + (1 − γ)A_j,  (6)
where U_j is the usage of node j, A_j is its accuracy, and γ in [0,1] is a weighting factor.
For a cluster j that predicts outcome k, its usage U_j equals the fraction of training set patterns with outcome k coded by node j (F_j), divided by the maximum fraction of training patterns coded by any node J (F_J):
U_j = F_j / max{F_J : all nodes J}.  (7)
For a cluster j that predicts outcome k, its accuracy A_j equals the percent of predicting set patterns predicted correctly by node j (P_j), divided by the maximum percent of patterns predicted correctly by any node J (P_J) that predicts outcome k:
A_j = P_j / max{P_J : node J predicts outcome k}.  (8)
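Eqns. (6) to (8) might be computed as in the following sketch, where the usage and accuracy statistics are assumed to have been tallied beforehand on the training and predicting sets; the argument names and the default γ = 0.5 are illustrative choices.

```python
import numpy as np

def confidence_factors(F, P, outcomes, gamma=0.5):
    """Compute C_j = gamma * U_j + (1 - gamma) * A_j (eqns. 6-8) for every F2 node.
    F[j]: fraction of training patterns coded by node j; P[j]: percent of
    predicting-set patterns predicted correctly by node j; outcomes[j]: the
    outcome node j predicts."""
    F, P = np.asarray(F, float), np.asarray(P, float)
    U = F / F.max()                                              # eqn. (7)
    A = np.empty_like(P)
    for j, k in enumerate(outcomes):
        peers = [P[J] for J, kk in enumerate(outcomes) if kk == k]
        A[j] = P[j] / max(peers) if max(peers) > 0 else 0.0      # eqn. (8)
    return gamma * U + (1 - gamma) * A                           # eqn. (6)
```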
After confidence factors are determined, clusters can be pruned from the network using one of the following strategies:
Threshold Pruning—This is the simplest type of pruning where the F2 nodes with confidence factors below a given threshold τ are removed from the network. A typical setting for τ is 0.5. This method is fast and provides an initial elimination of unwanted nodes. To avoid over-pruning, it is sometimes useful to specify a minimum number of clusters to be preserved in the system.
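A sketch of threshold pruning, including the optional minimum number of preserved clusters mentioned above, could look as follows; the function name, argument names, and defaults are illustrative assumptions.

```python
def threshold_prune(confidence, tau=0.5, min_keep=1):
    """Threshold pruning: drop F2 nodes whose confidence factor falls below tau,
    but always keep at least `min_keep` of the highest-confidence nodes to
    avoid over-pruning."""
    order = sorted(range(len(confidence)), key=lambda j: confidence[j], reverse=True)
    kept = [j for j in order if confidence[j] >= tau]
    if len(kept) < min_keep:
        kept = order[:min_keep]
    return sorted(kept)
```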
Local Pruning—Local pruning removes clusters one at a time from an ARAM network. The baseline system performance on the training and the predicting sets is first determined. Then the algorithm deletes the cluster with the lowest confidence factor. The cluster is replaced, however, if its removal degrades system performance on the training and predicting sets.
A variant of the local pruning strategy updates baseline performance each time a cluster is removed. This option, called hill-climbing, gives slightly larger rule sets but better predictive accuracy. A hybrid strategy first prunes the ARAM systems using threshold pruning and then applies local pruning on the remaining smaller set of rules.
Antecedent Pruning
During rule extraction, a non-zero weight of an F2 cluster node translates into an antecedent in the corresponding rule. The antecedent pruning procedure calculates an error factor for each antecedent in each rule based on its performance on the training and predicting sets. When a rule makes a predictive error, each antecedent of the rule that also appears in the current input has its error factor increased in proportion to the smaller of its magnitudes in the rule and in the input vector. After the error factor for each antecedent is determined, a local pruning strategy, similar to the one for rules, removes redundant antecedents.
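The error-factor accumulation for antecedents might be sketched as below, assuming that each prediction records which F2 node (rule) fired; the argument layout and names are illustrative assumptions.

```python
import numpy as np

def antecedent_error_factors(W_a, inputs, predictions, targets):
    """Accumulate an error factor for every antecedent (weight) of every rule:
    whenever a rule mispredicts, each antecedent also present in the current
    input has its error increased by the smaller of its magnitude in the rule
    and in the input vector. `predictions` holds (node index, predicted outcome)
    pairs for each input."""
    errors = np.zeros_like(np.asarray(W_a, dtype=float))
    for x, (j, pred), target in zip(inputs, predictions, targets):
        if pred != target:                               # predictive error by rule j
            errors[j] += np.minimum(W_a[j], x)           # min of rule and input magnitudes
    return errors
```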
Quantizing Weight Values
When learning analog patterns or with slow learning, ARAM learns real-valued weights. In order to describe the rules in words rather than real numbers, the feature values represented by the weights w_j^a are quantized. A quantization level Q is defined as the number of feature values used in the extracted fuzzy rules. For example, with Q=3, feature values are described as low, medium, or high in the fuzzy rules. Quantization by truncation divides the range of [0,1] into Q intervals and assigns a quantization point to the lower bound of each interval; i.e., for q=1, . . . , Q, let V_q = (q−1)/Q. When a weight w falls in interval q, the algorithm reduces the value of w to V_q. Quantization by round-off distributes Q quantization points evenly in the range of [0,1], with one at each end point; i.e., for q=1, . . . , Q, let V_q = (q−1)/(Q−1). The algorithm then rounds a weight w to the nearest V_q value.
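Both quantization schemes can be sketched directly from the definitions above; the function names are illustrative.

```python
def quantize_truncate(w, Q=3):
    """Quantization by truncation: map w in [0, 1] to V_q = (q - 1) / Q,
    the lower bound of the interval containing w."""
    q = min(int(w * Q) + 1, Q)          # interval index q = 1..Q
    return (q - 1) / Q

def quantize_round(w, Q=3):
    """Quantization by round-off: round w to the nearest of Q points spread
    evenly over [0, 1], V_q = (q - 1) / (Q - 1)."""
    points = [(q - 1) / (Q - 1) for q in range(1, Q + 1)]
    return min(points, key=lambda v: abs(v - w))

# With Q = 3 the round-off points 0.0, 0.5, and 1.0 can be verbalized as
# low, medium, and high in the extracted fuzzy rules.
```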
The embodiments of the invention are preferably implemented using a computer, such as the general-purpose computer shown in FIG. 9, or a group of computers that are interconnected via a network. In particular, the functionality or processing of the knowledge discovery system and method of FIGS. 1 to 8 may be implemented as software, or a computer program, executing on the computer or group of computers. The method or process steps for acquiring, sharing and managing knowledge and information within an organization are effected by instructions in the software that are carried out by the computer or group of computers. The software may be implemented as one or more modules for implementing the process steps. A module is a part of a computer program that usually performs a particular function or related functions. A module can also be a packaged functional hardware unit for use with other components or modules.
In particular, the software may be stored in a computer readable medium, including the storage devices described below. The software is preferably loaded into the computer or group of computers from the computer readable medium and then carried out by the computer or group of computers. A computer program product includes a computer readable medium having such software or a computer program recorded on it that can be carried out by a computer. The use of the computer program product in the computer or group of computers preferably effects an advantageous system for acquiring, sharing and managing knowledge and information within an organization in accordance with the embodiments of the invention.
The system 28 is simply provided for illustrative purposes and other configurations can be employed without departing from the scope and spirit of the invention. Computers with which the embodiments can be practiced include IBM-PC/ATs or compatibles, one of the Macintosh™ family of PCs, a Sun Sparcstation™, a workstation or the like. The foregoing is merely exemplary of the types of computers with which the embodiments of the invention may be practiced. Typically, the processes of the embodiments, described hereinbefore, are resident as software or a program recorded on a hard disk drive (generally depicted as block 29 in FIG. 9) as the computer readable medium, and are read and controlled using the processor 30. Intermediate storage of the program and any data may be accomplished using the semiconductor memory 31, possibly in concert with the hard disk drive 29.
In some instances, the program may be supplied to the user encoded on a CD-ROM or a floppy disk (both generally depicted by block 29), or alternatively it could be read by the user from the network via a modem device connected to the computer, for example. Still further, the software can also be loaded into the computer system 28 from other computer readable media, including magnetic tape, a ROM or integrated circuit, a magneto-optical disk, a radio or infra-red transmission channel between the computer and another device, a computer readable card such as a PCMCIA card, and the Internet and Intranets including email transmissions and information recorded on websites and the like. The foregoing is merely exemplary of relevant computer readable media; other computer readable media may be used without departing from the scope and spirit of the invention.
In the foregoing manner, there are described a system and a method for transforming meta-data or information extracted from text documents and thereby discovering knowledge that is new or previously not mentioned in the extracted information and the original documents. Although only a number of embodiments of the invention are disclosed, it will be apparent to one skilled in the art in view of this disclosure that numerous changes and/or modifications can be made without departing from the scope and spirit of the invention.

Claims (39)

1. A method for discovering knowledge from a set of text documents using a processor, the method comprising:
extracting semi-structured meta-data from the set of text documents using a meta-data extractor, the semi-structured meta-data comprising a plurality of concepts and a plurality of relations between the concepts;
filtering the semi-structured meta-data to identify a set of key concepts and a corresponding set of key relations between the key concepts, the set of key concepts corresponding to the plurality of concepts;
deriving at least one set of sub-concepts corresponding to the set of key concepts based upon data within a domain knowledge base, using a meta-data transformer;
formulating a plurality of training samples, each training sample including a vector representing a sub-concept and a vector representing a key concept; and
analyzing the plurality of training samples using an associative discoverer to derive a set of associations between a set of vectors representing a sub-concept and at least one vector representing a key concept,
wherein neither the set of text documents nor the semi-structured meta-data mention the set of associations, and
wherein the set of associations corresponds to discovered knowledge that is extractable by a knowledge interpreter.
2. The method as in claim 1, wherein extracting semi-structured meta-data from the set of text documents comprises extracting text content from documents containing at least one type of text, image, audio, and video information.
3. The method as in claim 1, wherein filtering the semi-structured meta-data comprises selecting the set of key concepts according to frequency of appearance of the set of key concepts in the semi-structured meta-data.
4. The method as in claim 1, wherein identifying the set of key relations comprises selecting the set of key relations according to frequency of appearance of the set of key relations in the semi-structured meta-data.
5. The method as in claim 1, wherein data within the domain knowledge base relate to at least one of taxonomy, a concept hierarchy network, ontology, a thesaurus, a relational database, and an object-oriented database.
6. The method as in claim 1, wherein formulating the plurality of training samples comprises formulating concatenated vector representations of the set of sub-concepts and the set of key concepts relating to the corresponding set of key relations.
7. The method as in claim 1, wherein analyzing the plurality of training samples using the associative discoverer comprises the step of analyzing the plurality of training samples using at least one of a neural network, a statistical system, and a symbolic machine learning system.
8. The method as in claim 7, wherein analyzing the plurality of training samples comprises the step of analyzing the plurality of training samples using an Adaptive Resonance Associative Map.
9. The method as in claim 1, wherein extraction of discovered knowledge by the knowledge interpreter comprises discovering the semantic relations between the set of sub-concepts and the set of key concepts.
10. The method as in claim 1, further comprising the step of using a user interface for displaying the semi-structured meta-data, the set of key concepts, the set of key relations, the set of sub-concepts, and the knowledge discovered.
11. The method as in claim 1, further comprising the step of using a user interface for obtaining user instruction for the set of key concepts and the set of key relations.
12. A computer program product comprising a computer usable medium having computer readable program code means embodied in the medium for discovering knowledge from a set of text documents, the computer program product comprising:
computer readable program code means for extracting semi-structured meta-data from the set of text documents using a meta-data extractor, the semi-structured meta-data comprising a plurality of concepts and a plurality of relations between the concepts;
computer readable program code means for filtering the semi-structured meta-data to identify a set of key concepts and a corresponding set of key relations between the key concepts, the set of key concepts corresponding to the plurality of concepts;
computer readable program code means for deriving at least one set of sub-concepts corresponding to the set of key concepts based upon data within a domain knowledge base, using a meta-data transformer;
computer readable program code means for formulating a plurality of training samples, each training sample including a vector representing a sub-concept and a vector representing a key concept; and
computer readable program code means for analyzing the plurality of training samples using an associative discoverer to derive a set of associations between a set of vectors representing a sub-concept and at least one vector representing a key concept,
wherein neither the set of text documents nor the semi-structured meta-data mention the set of associations, and
wherein the set of associations corresponds to discovered knowledge that is extractable by a knowledge interpreter.
13. The computer program product as in claim 12, wherein the computer readable program code means for extracting semi-structured meta-data from the set of text documents comprises computer readable program code means for extracting text content from documents containing at least one of text, image, audio, and video information.
14. The computer program product as in claim 12, wherein the computer readable program code means for filtering the semi-structured meta-data comprises computer readable program code means for selecting the set of key concepts according to frequency of appearance of the set of key concepts in the semi-structured meta-data.
15. The computer program product as in claim 12, wherein the computer readable program code means for identifying the set of key relations comprises computer readable program code means for selecting the set of key relations according to frequency of appearance of the set of key relations in the semi-structured meta-data.
16. The computer program product as in claim 12, wherein the data within the domain knowledge base relate to at least one of taxonomy, a concept hierarchy network, ontology, a thesaurus, a relational database, and an object-oriented database.
17. The computer program product as in claim 12, wherein the computer readable program code means for formulating the plurality of training samples comprises computer readable program code means for formulating concatenated vector representations of the set of sub-concepts and the set of key concepts relating to the corresponding set of key relations.
18. The computer program product as in claim 12, wherein the computer readable program code means for analyzing the plurality of training samples using the associative discoverer comprises computer readable program code means for analyzing the plurality of training samples using at least one of a neural network, a statistical system, and a symbolic machine learning system.
19. The computer program product as in claim 18, wherein the computer readable program code means for analyzing the plurality of training samples comprises computer readable program code means for analyzing the plurality of training samples using an Adaptive Resonance Associative Map.
20. The computer program product as in claim 12, wherein extraction of discovered knowledge by the knowledge interpreter comprises discovering the semantic relations between the set of sub-concepts and the set of key concepts.
21. The computer program product as in claim 12, further comprising computer readable program code means for using a user interface for displaying the semi-structured meta-data, the set of key concepts, the set of key relations, the set of sub-concepts, and the knowledge discovered.
22. The computer program product as in claim 12, further comprising computer readable program code means for using a user interface for obtaining user instruction for the set of key concepts and the set of key relations.
23. A system for knowledge discovery from a set of free-text documents, the system comprising:
means for extracting semi-structured meta-data from the set of free-text documents;
means for filtering the semi-structured meta-data to identify a set of key concepts and a corresponding set of key relations between the key concepts, the set of key concepts corresponding to the plurality of concepts;
means for deriving at least one set of sub-concepts corresponding to the set of key concepts based upon data within a domain knowledge base,
means for formulating training samples, each training sample including a vector representing a sub-concept and a vector representing a key concept; and
means for deriving a set of associations between a set of vectors representing a sub-concept and at least one vector representing a key concept,
wherein neither the set of text documents nor the semi-structured meta-data mention the set of associations, and
wherein the set of associations corresponds to discovered knowledge that is extractable by a knowledge interpreter.
24. The system according to claim 23 wherein the semi-structured meta-data comprises definition of concepts and semantic relations among the concepts.
25. The system according to claim 23 wherein the semi-structured meta-data is stored in at least one of a permanent and temporary storage.
26. The system according to claim 23 wherein the set of free-text documents comprise text, image, audio, video, or any combination thereof.
27. The system according to claim 23 wherein the means for filtering the semi-structured meta-data comprises means for selecting the set of key concepts according to frequency of appearance of the set of key concepts in the semi-structured meta-data.
28. The system according to claim 23 wherein the means for identifying the set of key relations comprises means for selecting the set of key relations according to frequency of appearance of the set of key relations in the semi-structured meta-data.
29. The system according to claim 23 wherein the domain knowledge base comprises a taxonomy, a concept hierarchy network, an ontology, a thesaurus, a relational database, an object-oriented database, or any combination thereof.
30. The system according to claim 23 wherein the training samples comprises concatenated vector representations of the set of sub-concepts and the set of key concepts relating to the corresponding set of key relations.
31. The system according to claim 23 wherein the means for deriving the set of associations comprises a neural network, a statistical system, a symbolic machine learning system, or any combination thereof.
32. The system according to claim 23 wherein the means for deriving the set of associations comprises an Adaptive Resonance Associative Map for analyzing the plurality of training samples.
33. The system according to claim 23 wherein the discovered knowledge comprises implicit hidden key relations between the sub-concepts and the key concepts.
34. The system according to claim 23 wherein the system further comprises a user interface for displaying the semi-structured meta-data, the key concepts, the key relations, the sub-concepts, and the knowledge discovered.
35. The system according to claim 23 wherein the knowledge discovery system further comprises a user interface for obtaining user's instruction for the key concepts and the key relations.
36. The method of claim 1 wherein filtering the semi-structured meta-data includes automatically performing a statistical analysis upon the semi-structured meta-data.
37. The method of claim 1 wherein the domain knowledge base is dependent upon one of a taxonomy, an ontology, a database, and a concept hierarchy network.
38. The method of claim 1 wherein analyzing the plurality of training samples comprises mapping a plurality of sub-concepts to a key concept.
39. The method of claim 1 further comprising deriving a set of IF-THEN rules corresponding to the set of associations, the set of IF-THEN rules corresponding to new symbolic knowledge.