US20130318075A1 - Dictionary refinement for information extraction - Google Patents

Dictionary refinement for information extraction

Info

Publication number
US20130318075A1
US20130318075A1 (application US13/480,974)
Authority
US
United States
Prior art keywords
dictionary
results
extractor
extracted
entries
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/480,974
Inventor
Laura Chiticariu
Vitaly Feldman
Frederick R. Reiss
Sudeepa Roy
Huaiyu Zhu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US13/480,974
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. Assignment of assignors interest (see document for details). Assignors: REISS, FREDERICK R.; ROY, SUDEEPA; CHITICARIU, LAURA; FELDMAN, VITALY; ZHU, HUAIYU
Priority to US13/598,946 (published as US8775419B2)
Publication of US20130318075A1
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/237 Lexical tools
    • G06F40/242 Dictionaries

Definitions

  • Extracting structured information from unstructured text is an essential component of many important applications including business intelligence, social media analytics, semantic search, and regulatory compliance.
  • The success of these applications is tightly connected with the quality of the extracted results. Incorrect or missing results may often render the application useless.
  • In one embodiment, the system is a dictionary refinement system.
  • The system includes: an extractor configured to match a dictionary to a collection of text to obtain a set of extracted results, wherein the extracted results are labeled as correct results or incorrect results; a processor configured to: process the extracted results using an algorithm configured to set a score for the extractor above a score threshold, wherein the score threshold balances a precision and a recall of the extractor; and output a set of candidate dictionary entries corresponding to a full set of dictionary entries, wherein the candidate dictionary entries are candidates to be removed from the dictionary based on the extracted results.
  • Other embodiments of the system are also described.
  • In one embodiment, the computer program product includes a computer readable storage medium to store a computer readable program, wherein the computer readable program, when executed by a processor within a computer, causes the computer to perform operations for refining a dictionary for information extraction.
  • The operations include: inputting a set of extracted results from execution of an extractor comprising the dictionary on a collection of text, wherein the extracted results are labeled as correct results or incorrect results; processing the extracted results using an algorithm configured to set a score of the extractor above a score threshold, wherein the score threshold balances a precision and a recall of the extractor; and outputting a set of candidate dictionary entries corresponding to a full set of dictionary entries, wherein the candidate dictionary entries are candidates to be removed from the dictionary based on the extracted results.
  • Other embodiments of the computer program product are also described.
  • In one embodiment, the method is a method for refining a dictionary for information extraction.
  • The method includes: inputting a set of extracted results from execution of an extractor comprising the dictionary on a collection of text, wherein the extracted results are labeled as correct results or incorrect results; processing the extracted results using an algorithm configured to set a score of the extractor above a score threshold, wherein the score threshold balances a precision and a recall of the extractor; and outputting a set of candidate dictionary entries corresponding to a full set of dictionary entries, wherein the candidate dictionary entries are candidates to be removed from the dictionary based on the extracted results.
  • Other embodiments of the method are also described.
  • FIG. 1 depicts a schematic diagram of one embodiment of a dictionary refinement system.
  • FIG. 2 depicts a flowchart diagram of one embodiment of a method for determining candidate dictionary entries.
  • FIG. 3 depicts a flowchart diagram of one embodiment of a method for refining a dictionary.
  • FIG. 4 depicts a flowchart diagram of one embodiment of a method for refining a dictionary for information extraction.
  • FIG. 5 depicts a schematic diagram of one embodiment of a computer system for implementation of one or more aspects of the functionality described herein.
  • The system uses statistical modeling and refinement optimization to balance the precision and recall of an extractor for efficient, accurate information extraction.
  • The system may use extracted results that have been labeled as correct or incorrect to determine candidate entries 138 to be removed from the dictionary to provide the highest precision for avoiding false positives while minimizing any decrease in recall.
  • These candidate entries 138 may also be analyzed by a user to determine which entries should be removed from the dictionary.
  • When creating an extractor, developers may start by writing an initial extractor that includes an initial set of basic features and rules that combine the features to extract the desired entities.
  • The extractor may be executed on a document collection, the results may be examined to determine the cause of incorrect results, and the features and rules may then be refined to remove the incorrect results. This process may be repeated as many times as necessary to obtain satisfactory performance of the extractor.
  • Generally, removing the sources of false positives from the extractor helps produce a higher precision in the extracted results.
  • Specifically, refining dictionaries used in an extractor by removing the sources (words or phrases) of false positives can improve the quality of the extractor.
  • The system and method described herein allow the refinement of dictionaries to improve the precision (minimization of false positives) of the extractor while maintaining a sufficient level of recall (avoidance of discarding correct answers) for the extractor.
  • Refining the dictionary may be divided into two sub-problems: statistical modeling and refinement optimization.
  • The primary goal of the statistical modeling problem is to estimate the precision of each individual dictionary entry in an extractor, given a set of extracted entities that have been labeled as “correct” or “incorrect”. Labeling the outputs of extractors may be an expensive task requiring large amounts of human effort. Dictionaries frequently contain thousands of entries, so very little information about individual entries may be available even with a large collection of labeled data. Consequently, the extractor may need to be capable of coping with very sparse labeled data in order to be usable in practice.
  • The refinement optimization problem involves using the outputs of parameter estimation to choose the best set of entries to remove from the dictionary in order to improve the quality of the extractor. Balancing the requirements of precision and recall allows the maximization of an F-score (the harmonic mean of precision and recall) for the extractor.
  • In some embodiments, the F-score maximization may be subject to a limit on the number of entries removed from the dictionary, or the maximum allowable decrease in recall.
  • FIG. 1 depicts a schematic diagram of one embodiment of a dictionary refinement system 100 .
  • The depicted dictionary refinement system 100 includes various components, described in more detail below, that are capable of performing the functions and operations described herein.
  • In one embodiment, at least some of the components of the dictionary refinement system 100 are implemented in a computer system.
  • For example, the functionality of one or more components of the dictionary refinement system 100 may be implemented by computer program instructions stored on a computer memory device 102 and executed by a processing device 104 such as a CPU.
  • The dictionary refinement system 100 may include other components, such as input/output devices 106, a disk storage drive 108, an extractor 110, a dictionary 112, and a processor 114.
  • Some or all of the components of the dictionary refinement system 100 may be stored on a single computing device or on a network of computing devices, including a wireless communication network.
  • The dictionary refinement system 100 may include more or fewer components or subsystems than those depicted herein.
  • In some embodiments, the dictionary refinement system 100 may be used to implement the methods described herein, as depicted in FIG. 4.
  • In one embodiment, the extractor 110 is applied to a collection of text: dictionary entries 124 in the dictionary 112 are matched to the collection of text, and the extractor 110 outputs extracted results 116.
  • The extractor may include multiple dictionaries 112 and may be applied to the collection of text based on a set of predefined rules 118. At least some of the extracted results 116 may be labeled by a user to identify correct results 120 (true positives) and incorrect results 122 (false positives).
  • In one embodiment, the precision 130 of the extractor 110 is the fraction of true positives among the total number of extracted results 116.
  • The precision of each dictionary entry 124 is the probability that an extracted entity will be correct, given that the entity is based, in whole or in part, on a match of the dictionary entry 124.
  • An extractor 110 with high precision 130 outputs few incorrect results 122 .
  • An expected mention that is not identified by the extractor 110 is referred to herein as a missing result or a false negative.
  • The term “recall” 128 is broadly interpreted herein to include the fraction of true positives among the total number of expected occurrences. An extractor 110 with high recall 128 misses very few expected results.
  • In one embodiment, the recall 128 and precision 130 are balanced to maximize a score of the extractor, for example, by setting the score above a score threshold that may be predetermined based on a desired balance of the precision 130 and recall 128.
  • The score may be an F-score, an F-measure, or some other measure of scoring the dictionary.
  • The term “F-score” 132, or “F-measure”, is broadly interpreted herein to include combining precision 130 and recall 128 into a single measure that is computed as the harmonic mean of precision 130 and recall 128, depicted as 2PR/(P + R), where P is the precision 130 and R is the recall 128.
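As a concrete illustration of these definitions, the following minimal Python sketch (with hypothetical counts, not taken from the patent) computes precision, recall, and F-score from labeled extractor output:

    def precision(true_pos, false_pos):
        # Fraction of true positives among all extracted results.
        return true_pos / (true_pos + false_pos)

    def recall(true_pos, false_neg):
        # Fraction of true positives among all expected occurrences.
        return true_pos / (true_pos + false_neg)

    def f_score(p, r):
        # Harmonic mean of precision and recall: 2PR / (P + R).
        return 2 * p * r / (p + r) if (p + r) > 0 else 0.0

    # Hypothetical labeled output: 80 correct, 20 incorrect, 10 missed.
    p, r = precision(80, 20), recall(80, 10)
    print(f"P={p:.3f} R={r:.3f} F={f_score(p, r):.3f}")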
  • In various embodiments, the refinement optimization problem includes executing a refinement algorithm 126 under two constraints.
  • First, because the refinement of an extractor 110 is often done with human supervision, the problem may include a size constraint 134 to limit the number of dictionary entries 124 to be examined at a time.
  • For an extractor E, the size constraint 134 may include finding a set S of size at most k to maximize the F-score 132 of the resulting extractor E′.
  • Alternatively, the extractor 110 may be refined such that the recall 128 does not fall below a certain limit, using a recall constraint 136.
  • The recall constraint 136 includes finding a set S such that the F-score 132 of E′ is maximized, while at the same time the recall 128 of E′ does not decrease by more than a fixed budget. In other embodiments, the refinement optimization problem may be approached without constraints on size or recall 128.
  • The algorithm 126 produces a set of candidate dictionary entries 138 that may be removed from the dictionary 112 to improve performance of the extractor 110.
  • By using statistical modeling and refinement optimization, the system may maximize the quality of the extractor 110 in general and avoid overfitting to the labeled dataset.
  • The system may use a model for refining the dictionary 112 by estimating the parameters of the model, including the precision 130 of each individual dictionary entry, given the set of extracted results 116 that have been labeled as correct or incorrect.
  • FIG. 2 depicts a flowchart diagram of one embodiment of a method 200 for determining candidate dictionary entries 138 . While the method 200 is described in conjunction with the dictionary refinement system 100 of FIG. 1 , embodiments of the method 200 may be implemented with other types of dictionary refinement systems 100 .
  • In one embodiment, the extractor 110 receives a collection of text 202 to be matched to a set of one or more dictionaries 112 according to a set of rules associated with the extractor 110.
  • Each dictionary 112 may include a set of dictionary entries 124 corresponding to a given subject or grouping of words and phrases. Some of the entries 124 in the dictionaries 112 may overlap with other dictionaries 112 , depending on the subjects or groupings for the dictionaries 112 .
  • The extractor 110 is applied to the collection of text 202 and outputs the extracted results 116.
  • For each result 116, the system 100 examines the rules and dictionaries 112 of the extractor 110 and determines which dictionaries 112 are involved in producing the extracted result 116, and also determines the provenance of the extracted result 116. Some or all of the extracted results 116 are then given labels 204 as correct results 120 or incorrect results 122 based on a user input. The incorrect results 122 are false positives output by the extractor 110.
  • For example, given a dictionary 112 containing “first name” entries, a second dictionary 112 containing “last name” entries, and a third dictionary 112 containing “full name” entries, the collection of text 202 may include a phrase “Mark Calendar” that is marked as a name based on matches in one or more of the dictionaries 112.
  • The extracted results 116 are input into the processor 114, and the processor 114 uses an algorithm 126 that maximizes the F-score 132 for the extractor 110.
  • The algorithm 126 produces a set of candidate dictionary entries 138 for each dictionary 112 that are output by the processor 114.
  • The set of candidate dictionary entries 138 includes candidates that may be removed from the dictionary 112 to maximize the F-score 132 for the extractor 110.
  • Because the labels 204 for a given false positive may be determined by multiple dictionary entries 124, the label 204 may not be used to estimate the precision 130 of a dictionary entry directly.
  • Using the false positive example given above, it is not clear whether the false positive is because “Mark” is an incorrect first name, “Calendar” is an incorrect last name, or both.
  • Furthermore, the same dictionary entry can contribute to different results, some correct and some incorrect. For example, “Mark” may contribute to an incorrect result “Mark Calendar”, as well as a correct result “Mark Smith”.
  • In this case, the processor 114 determines the candidate dictionary entries 138 by using the provenance of each false positive to model the complex dependencies between the dictionary entries 124 and the extracted results 116, along with an algorithm 126 for estimating precisions 130 based on an expectation-maximization (EM) algorithm.
  • In one embodiment, the algorithm 126 is configured to determine which dictionary entries 124 may be removed to result in the highest quality improvement of the extractor 110.
  • In another example, two dictionary entries, “Chelsea” and “Mark”, are both ambiguous as a person name. If “Chelsea” is labeled as an incorrect result 60 times and as a correct result 40 times, and “Mark” is labeled as an incorrect result 9 times and as a correct result 1 time, the precision 130 of “Mark” (10%) is lower than that of “Chelsea” (40%). However, removing “Chelsea” removes more incorrect results 122, possibly leading to a higher overall quality improvement for the extractor 110, as the sketch below illustrates.
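The tradeoff in this example can be checked numerically. The sketch below uses the labeled counts for “Chelsea” and “Mark” from the example, plus hypothetical totals for the rest of the extractor's output (900 correct, 100 incorrect, 50 missed); under these assumed totals, removing “Chelsea” yields the larger F-score gain even though “Mark” has the lower precision:

    def f_score(tp, fp, fn):
        p = tp / (tp + fp)
        r = tp / (tp + fn)
        return 2 * p * r / (p + r)

    entries = {"Chelsea": (40, 60), "Mark": (1, 9)}  # (correct, incorrect)
    base_tp, base_fp, base_fn = 900, 100, 50        # hypothetical remainder
    tp = base_tp + sum(g for g, b in entries.values())
    fp = base_fp + sum(b for g, b in entries.values())

    print(f"keep both      : F={f_score(tp, fp, base_fn):.4f}")
    for name, (g, b) in entries.items():
        # Removing an entry drops its true and false positives; its true
        # positives become missed results, so recall decreases.
        print(f"remove {name:7s} : F={f_score(tp - g, fp - b, base_fn + g):.4f}")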
  • FIG. 3 depicts a flowchart diagram of one embodiment of a method for refining a dictionary 112 . While the method is described in conjunction with the dictionary refinement system 100 of FIG. 1 , embodiments of the method may be implemented with other types of dictionary refinement systems 100 .
  • In one embodiment, after the processor 114 outputs the candidate dictionary entries 138, the processor 114 receives a user input 302 that selects one or more of the candidate dictionary entries 138 for removal from the dictionaries 112.
  • The processor 114 may then read the dictionary entries 124 currently stored in the dictionaries 112, remove the selected dictionary entries, and modify the dictionaries 112 according to the new set of dictionary entries 124. This may allow a user to manually refine the candidate dictionary entries 138 by determining which entries from the set of candidate dictionary entries 138 are actually removed from the dictionaries 112.
  • The dictionary refinement system 100 processes 415 the extracted results 116 using an algorithm 126 configured to maximize an F-score 132 for the extractor 110, for example, by setting the F-score above a score threshold.
  • The score threshold for the maximized F-score 132 balances the precision 130 and recall 128 of the extractor 110.
  • In some embodiments, the system may process the extracted results 116 by computing the set of candidate dictionary entries 138 that maximizes the F-score 132 under a maximum size constraint 134 for the set of candidate dictionary entries 138.
  • Alternatively, the system may process the extracted results 116 by computing the set of candidate dictionary entries 138 that maximizes the F-score 132 within an allocated recall constraint 136.
  • The dictionary refinement system 100 outputs 420 a set of candidate dictionary entries 138 corresponding to a full set of dictionary entries 124 of the dictionary 112.
  • The candidate dictionary entries 138 are candidates to be removed from the dictionary 112 based on the extracted results 116.
  • The dictionary refinement system 100 receives 425 a user input 302 to select dictionary entries from the set of candidate dictionary entries 138.
  • The dictionary refinement system 100 then removes 430 the selected dictionary entries from the dictionary 112.
  • In some embodiments, the system may obtain extracted results 116 using a plurality of specialized dictionaries.
  • Each dictionary 112 may produce extracted results 116 labeled as correct results 120 or incorrect results 122, which may then be processed for each corresponding dictionary 112 to determine which dictionary entries 124 should be removed for the greatest improvement of the performance of the extractor 110.
  • In the statistical model, the dictionary A contains a set of n entries 124.
  • Each entry w has a fixed precision p_w ∈ [0,1].
  • An occurrence of an entry w is Good if it is a correct match for the annotation used in a query; otherwise the occurrence is Bad.
  • For example, a match for ‘Ford’, ‘Chelsea’ or ‘Mark’ is Good for the Person annotator if the match corresponds to a person name, and Bad otherwise.
  • A human annotator labels a subset of the occurrences explicitly as Good or Bad.
  • It is assumed that each occurrence of w in the given corpus was chosen to be Good with probability p_w and Bad with probability 1 − p_w, randomly and independently of the other occurrences and of whether the label 204 is given to the refinement algorithm 126.
  • Let t_w denote the number of occurrences of w in the given corpus, g_w the number of times the entry was labeled Good, and b_w the number of times the entry was labeled Bad.
  • The empirical frequencies t_w / Σ_{w∈A} t_w may be referred to as true frequencies. Consequently, the goal of the parameter estimation problem is estimating the precisions 130.
  • In one embodiment, estimating the precision 130 for w includes observing the precision 130 of other entries 124. For example, if other entries 124 with a large number of labels 204 have precision 130 close to 80%, then w is also more likely to have precision 130 close to 80%. This dependency may be expressed in the model as described below.
  • The precision 130 of each word is assumed to be chosen randomly and independently from a fixed and unknown distribution H over [0,1].
  • The distribution H represents a prior belief about p_w. This allows the use of the given labels 204 for w to perform Bayesian updates so as to obtain the posterior distribution H_w.
  • The posterior distribution H_w represents the knowledge of p_w and can be used to derive an estimate of p_w. Taking the mean of H_w provides a simple and near-optimal way to use H_w.
  • The prior distribution H is not given to the algorithm 126, and a suitable H may need to be found given the available labels 204.
  • The distribution H may be modeled using beta distributions. This may be a convenient distribution for Bayesian updates using the labels 204.
  • The beta distribution also allows easy estimation of parameters.
  • The beta distribution Beta(α, β) has two parameters α, β > 0, and the probability density function (PDF) of the distribution is c·x^(α−1)·(1−x)^(β−1), where c is the normalizing constant.
  • Bayesian updates under a Beta(α, β) prior improve the estimate of the obtained precision p_w.
  • The system may use the uniform prior case, with Beta(1,1) as the prior.
  • The mean of the prior may be estimated as μ̂ ≈ (1/n)·Σ_{w∈A} g_w / (g_w + b_w), the average of the empirical precisions.
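A minimal sketch of this Bayesian update, assuming a Beta(α, β) prior over each entry's precision: with g_w Good labels and b_w Bad labels, the posterior is Beta(α + g_w, β + b_w), and its mean is used as the estimate of p_w. The label counts below are hypothetical.

    def posterior_mean(g_w, b_w, alpha=1.0, beta=1.0):
        # The beta prior is conjugate to the Bernoulli labels, so the update
        # is in closed form; Beta(1, 1) is the uniform prior case.
        return (alpha + g_w) / (alpha + beta + g_w + b_w)

    labels = {"Chelsea": (40, 60), "Mark": (1, 9), "Smith": (12, 0)}  # (g_w, b_w)
    for w, (g, b) in labels.items():
        raw = g / (g + b)            # empirical precision g_w / (g_w + b_w)
        est = posterior_mean(g, b)   # mean of the posterior Beta(1+g_w, 1+b_w)
        print(f"{w:8s} empirical={raw:.2f} posterior mean={est:.2f}")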
  • The optimization problem may be considered for single dictionary refinement, assuming that the true values of p_w and f_w are given as input for all w ∈ A.
  • The standard notions of precision 130, recall 128 and F-score 132 may be used to measure the quality of the solution for the refinement optimization problem.
  • For a set S of removed entries, precision (P_S), recall (R_S), and F-score (F_S) are defined as P_S = P_{A∖S}, R_S = R_{A∖S}, and F_S = F_{A∖S}.
  • The F-score 132 is the harmonic mean of P_S and R_S and is used to balance the precision 130 with the recall 128 of the refined dictionary.
  • The F-score 132 may be maximized under two constraints: the size constraint 134 and the recall constraint 136.
  • F_S is a non-linear function of the precisions p̂_w, w ∈ S, since both the numerator and the denominator depend on the set S being removed, thereby making the analysis of the optimization problem non-trivial.
  • The goal is to maximize F_S.
  • Using a guessed value for the maximum achievable F-score, the system first checks whether there is a set S of entries such that F_S is at least the guessed value.
  • A subset S is desired such that F_S ≥ F_A. Consequently, the minimum value of the guess is F_A and the maximum value is 1.
  • The algorithm 126 is presented in Algorithm 1, shown below, in terms of a parameter ε, where ε is the desired accuracy of the algorithm.
  • A simpler implementation of the verification step uses a min-heap, which gives O(n + k log n) time, whereas simple sorting gives O(n log n) time.
  • Under the recall constraint 136, the algorithm sorts the entries 124 in increasing order of precision p_w and selects entries 124 in this order until the recall budget is exhausted or there is no improvement of the F-score 132 from selecting the next entry.
  • The algorithm runs in time O(n log n); a sketch of this greedy procedure follows.
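The following Python sketch implements the sort-based greedy refinement just described, under a recall budget. It assumes the entry precisions p_w and frequencies f_w have already been estimated; the example dictionary at the end is hypothetical.

    def refine(entries, recall_budget):
        # entries: list of (p_w, f_w) pairs; returns the entries to remove.
        entries = sorted(entries, key=lambda e: e[0])   # increasing precision
        good = sum(p * f for p, f in entries)           # sum of p_w * f_w
        total = sum(f for _, f in entries)              # sum of f_w
        good_all = good                                 # recall denominator

        def f_score(g, t):
            p, r = g / t, g / good_all
            return 2 * p * r / (p + r)

        best, removed = f_score(good, total), []
        for p_w, f_w in entries:
            g, t = good - p_w * f_w, total - f_w
            # Stop when the recall budget is exhausted or F stops improving.
            if t == 0 or 1 - g / good_all > recall_budget or f_score(g, t) <= best:
                break
            good, total, best = g, t, f_score(g, t)
            removed.append((p_w, f_w))
        return removed, best

    # Hypothetical dictionary: (precision, frequency) per entry.
    dictionary = [(0.10, 10), (0.40, 100), (0.85, 200), (0.95, 500)]
    print(refine(dictionary, recall_budget=0.10))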
  • The system and method described herein are capable of refining and optimizing an extractor 110 using more than one dictionary 112.
  • Every provenance expression φ may be assumed to have a true frequency f(φ) ∈ [0,1] and a true precision p(φ) ∈ [0,1].
  • The label 204 of φ is Good with probability p(φ) and Bad with probability 1 − p(φ), randomly and independently of other occurrences and of whether the label 204 of φ is given.
  • The precision p(φ) of results φ may be estimated from a limited amount of labeled data.
  • A natural approach to finding the precisions 130 of provenance expressions is to estimate them empirically. The problem with this approach is that the possible number of such provenance expressions is very large, and it is likely that very few (if any) labels 204 would be available for most of the provenance expressions.
  • Intuitively, individual dictionary entries 124 have similar precision 130 across different provenance expressions. This intuition may be represented by strengthening the model described herein in the following way.
  • Every entry w has a fixed (and unknown) precision 130 denoted by p_w.
  • For any given occurrence φ such that w ∈ Prov(φ), the match of w for φ is correct with probability p_w and incorrect with probability 1 − p_w, independent of the other occurrences and of the other entries 124 in the provenance of φ.
  • It is assumed that the AQL rule is correct, i.e., the label 204 of φ is Good if and only if its provenance Prov(φ) evaluates to true with the matches of the dictionary entries 124 in Prov(φ) (Good ≡ true and Bad ≡ false).
  • The goal is to estimate the values of the precisions p_w given a set of occurrences φ along with their labels 204 and provenance expressions Prov(φ).
  • The Expectation-Maximization (EM) algorithm may be used to solve this problem.
  • The EM algorithm is a widely-used technique for maximum likelihood estimation of the parameters of a probabilistic model with hidden variables. The algorithm estimates the parameters iteratively, either for a given number of steps or until some convergence criteria are met.
  • The entries 124 are indexed arbitrarily as w_1, …, w_n.
  • There are N labeled occurrences φ_1, …, φ_N. It is assumed that φ_1, …, φ_N also denote the labels 204 of the occurrences, so each φ_i is Boolean, where φ_i = 1 (resp. 0) if the label 204 is Good (resp. Bad). If w_i ∈ Prov(φ_j), then φ_j ∈ Succ(w_i).
  • Each provenance expression Prov(φ_j) takes b inputs y_{j1}, …, y_{jb} and produces the label φ_j.
  • The entry corresponding to y_{jl} is denoted by Prov_{jl} ∈ {w_1, …, w_n}.
  • The vector of vectors y = ⟨y_{jl}⟩ is the hidden data, and the parameter vector at iteration t is denoted θ_t.
  • In the expectation step, c_{w_i, φ_j, t} = E[y_{jl} | φ_j, θ_t] is computed, where φ_j ∈ Succ(w_i) and Prov_{jl} = w_i. It may be shown that the update rule for the parameters p_i has a nice closed form: the updated value of p_i is the average of c_{w_i, φ_j, t} over all occurrences φ_j ∈ Succ(w_i).
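A compact EM sketch for this model, under the simplifying assumption that each provenance is a conjunction (AND) of entry matches, so a result is Good only if every match is correct. The occurrences below are hypothetical; in the system, provenances come from the extractor's rules.

    from collections import defaultdict

    # Labeled occurrences: (entries in the provenance, label), 1=Good, 0=Bad.
    occurrences = [
        (("mark", "calendar"), 0),
        (("mark", "smith"), 1),
        (("chelsea", "smith"), 1),
        (("chelsea", "clinton"), 0),
    ]

    p = defaultdict(lambda: 0.5)  # initial precision estimate per entry

    for _ in range(50):  # fixed number of EM iterations
        num = defaultdict(float)  # sum of expectations c per entry
        cnt = defaultdict(int)    # |Succ(w)|: occurrences entry w appears in
        for prov, label in occurrences:
            all_correct = 1.0
            for w in prov:
                all_correct *= p[w]  # P(every match in prov is correct)
            for w in prov:
                if label == 1:
                    c = 1.0  # Good under AND-provenance forces y_w = 1
                else:
                    # E-step: P(y_w = 1 | not all matches correct)
                    others = all_correct / p[w]
                    c = p[w] * (1 - others) / (1 - all_correct)
                num[w] += c
                cnt[w] += 1
        for w in cnt:  # M-step: average the expectations over Succ(w)
            p[w] = num[w] / cnt[w]

    for w in sorted(cnt):
        print(f"p_{w} = {p[w]:.2f}")

Note how blame for the Bad result “Mark Calendar” shifts to “calendar”, because “mark” also participates in a Good result; this is the provenance-based blame assignment described above.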
  • R_S = Σ_{φ∈surv(S)} p(φ)·f(φ) / Σ_φ p(φ)·f(φ)
  • P_S = Σ_{φ∈surv(S)} p(φ)·f(φ) / Σ_{φ∈surv(S)} f(φ), where surv(S) denotes the set of results whose provenance survives the removal of the entry set S.
  • Both the greedy and the EP-based algorithms compute the precision 130 of tuples from the precisions of entries 124 under an independence assumption.
  • The greedy algorithms select the next entry that gives the maximum improvement in F-score 132.
  • The algorithm stops if no further improvement in F-score 132 is possible by deleting any entry, or when the given size or recall budget is exhausted.
  • The EP-based algorithms exploit the precision 130 of individual dictionary entries 124.
  • The dictionary entries 124 may be treated as if they come from a single dictionary (however, note that the actual provenances were used by the EM algorithm to estimate the precision 130 of the entries 124).
  • These algorithms use the selection criteria of the incremental algorithms for the single-dictionary case, i.e., maximize ΔF for the size constraint 134 and ΔF/ΔR for the recall constraint 136, where ΔF and ΔR denote the changes in F-score 132 and recall 128 from deleting one additional entry.
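A sketch of the incremental greedy selection for the multi-dictionary case under the size constraint 134, using the provenance-level formulas for P_S and R_S above: a result survives only if none of its provenance entries has been removed. The results and their estimated precisions and frequencies are hypothetical.

    def f_and_recall(results, removed):
        # results: list of (provenance entries, p_phi, f_phi) triples.
        surv = [(p, f) for prov, p, f in results if not set(prov) & removed]
        good_all = sum(p * f for _, p, f in results)
        good = sum(p * f for p, f in surv)
        total = sum(f for _, f in surv)
        if good == 0 or total == 0:
            return 0.0, 0.0
        prec, rec = good / total, good / good_all
        return 2 * prec * rec / (prec + rec), rec

    def greedy_refine(results, entries, k):
        removed = set()
        best_f, _ = f_and_recall(results, removed)
        while len(removed) < k and entries - removed:  # size constraint: at most k
            # Pick the deletion with the largest F-score gain (delta F).
            delta, w = max(
                (f_and_recall(results, removed | {w})[0] - best_f, w)
                for w in entries - removed
            )
            if delta <= 0:
                break  # no single deletion improves the F-score further
            removed.add(w)
            best_f += delta
        return removed, best_f

    # Hypothetical labeled results with provenance, precision, and frequency.
    results = [
        (("mark", "calendar"), 0.05, 5),
        (("mark", "smith"), 0.95, 20),
        (("chelsea", "clinton"), 0.30, 10),
        (("ford",), 0.60, 15),
    ]
    entries = {"mark", "calendar", "smith", "chelsea", "clinton", "ford"}
    print(greedy_refine(results, entries, k=2))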
  • FIG. 5 depicts a schematic diagram of one embodiment of a computer system 500 for implementation of one or more aspects of the functionality described herein.
  • The illustrated computer system 500 is only one example of a suitable computer architecture and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. Regardless, the computer system 500 is capable of being implemented to perform any or all of the functionality set forth hereinabove.
  • The depicted computer system 500 includes a computer processing device 502, which is operational with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server 502 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
  • The computer processing device 502 may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system.
  • Program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types.
  • Embodiments of the computer processing device 502 may be practiced locally, remotely, or in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media, including memory storage devices.
  • The computer processing device 502 includes components and functionality typical of a general-purpose computing device.
  • The components of the computer processing device 502 may include, but are not limited to, one or more processors or processing units 504, a system memory 506, and a bus 508 that couples various system components including the system memory 506 to the processor 504.
  • The bus 508 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
  • Examples of such bus architectures include the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
  • The computer processing device 502 typically includes a variety of computer system readable media (also referred to as computer readable media and/or computer usable media). Such media may be any available media that is accessible by the computer processing device 502.
  • Embodiments of the computer readable media may include one or more of the following types of media: volatile and non-volatile media, removable and non-removable media.
  • The system memory 506 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 510 and/or cache memory 512.
  • The computer processing device 502 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
  • For example, a storage system 514 can be provided for reading from and writing to non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”).
  • A magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”) and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media may also be provided.
  • In such instances, each can be connected to the bus 508 by one or more data media interfaces.
  • The memory 506 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
  • A program/utility 516, having a set (at least one) of program modules 518, is stored in the memory 506.
  • The program modules 518 generally carry out one or more of the functions and/or methodologies of the embodiments described herein.
  • The memory 506 also may store an operating system, one or more application programs, other program modules, and/or program data. Each of the operating system, one or more application programs, other program modules, and program data, or some combination thereof, may include an implementation of a personal computer and/or networking environment.
  • The computer processing device 502 may also communicate with one or more external devices 520 such as a keyboard, a pointing device, a display 522, etc.; one or more devices that enable a user to interact with the computer processing device 502; and/or any devices (e.g., network card, modem, etc.) that enable the computer processing device 502 to communicate with one or more other computing devices. Such communication can occur via input/output (I/O) interfaces 524. Additionally, the computer processing device 502 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via a network adapter 526.
  • The network adapter 526 communicates with the other components of the computer processing device 502 via the bus 508.
  • It should be understood that, although not shown, other hardware and/or software components could be used in conjunction with embodiments of the computer processing device 502. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems.
  • Embodiments of the invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment containing both hardware and software elements.
  • In one embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • An embodiment of a dictionary refinement system 100 includes at least one processor coupled directly or indirectly to memory elements through a system bus such as a data, address, and/or control bus.
  • The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • Aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • The computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • A computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • The remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • I/O devices can be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters also may be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
  • Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Abstract

A method for refining a dictionary for information extraction, the method including: inputting a set of extracted results from execution of an extractor comprising the dictionary on a collection of text, wherein the extracted results are labeled as correct results or incorrect results; processing the extracted results using an algorithm configured to set a score of the extractor above a score threshold, wherein the score threshold balances a precision and a recall of the extractor; and outputting a set of candidate dictionary entries corresponding to a full set of dictionary entries, wherein the candidate dictionary entries are candidates to be removed from the dictionary based on the extracted results.

Description

    BACKGROUND
  • Extracting structured information from unstructured text is an essential component of many important applications including business intelligence, social media analytics, semantic search, and regulatory compliance. The success of these applications is tightly connected with the quality of the extracted results. Incorrect or missing results may often render the application useless.
  • Building high-quality information extraction rules to extract structured information from unstructured text is a difficult and time-consuming process. Exhaustive dictionaries of words and phrases are integral to any information extraction system. One of the most important parts of this process can include refining the dictionaries by selectively removing dictionary entries that lead to false positives. Sophisticated extractors that use greater numbers of fine-grained dictionaries to improve accuracy also increase the difficulty of refining the dictionaries for efficient and accurate extraction due to the size and number of dictionaries.
  • SUMMARY
  • Embodiments of a system are described. In one embodiment, the system is a dictionary refinement system. The system includes: an extractor configured to match a dictionary to a collection of text to obtain a set of extracted results, wherein the extracted results are labeled as correct results or incorrect results; a processor configured to: process the extracted results using an algorithm configured to set a score for the extractor above a score threshold, wherein the score threshold balances a precision and a recall of the extractor; and output a set of candidate dictionary entries corresponding to a full set of dictionary entries, wherein the candidate dictionary entries are candidates to be removed from the dictionary based on the extracted results. Other embodiments of the system are also described.
  • Embodiments of a computer program product are also described. In one embodiment, the computer program product includes a computer readable storage medium to store a computer readable program, wherein the computer readable program, when executed by a processor within a computer, causes the computer to perform operations for refining a dictionary for information extraction. The operations include: inputting a set of extracted results from execution of an extractor comprising the dictionary on a collection of text, wherein the extracted results are labeled as correct results or incorrect results; processing the extracted results using an algorithm configured to set a score of the extractor above a score threshold, wherein the score threshold balances a precision and a recall of the extractor; and outputting a set of candidate dictionary entries corresponding to a full set of dictionary entries, wherein the candidate dictionary entries are candidates to be removed from the dictionary based on the extracted results. Other embodiments of the computer program product are also described.
  • Embodiments of a method are also described. In one embodiment, the method is a method for refining a dictionary for information extraction. The method includes: inputting a set of extracted results from execution of an extractor comprising the dictionary on a collection of text, wherein the extracted results are labeled as correct results or incorrect results; processing the extracted results using an algorithm configured to set a score of the extractor above a score threshold, wherein the score threshold balances a precision and a recall of the extractor; and outputting a set of candidate dictionary entries corresponding to a full set of dictionary entries, wherein the candidate dictionary entries are candidates to be removed from the dictionary based on the extracted results. Other embodiments of the method are also described.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 depicts a schematic diagram of one embodiment of a dictionary refinement system.
  • FIG. 2 depicts a flowchart diagram of one embodiment of a method for determining candidate dictionary entries.
  • FIG. 3 depicts a flowchart diagram of one embodiment of a method for refining a dictionary.
  • FIG. 4 depicts a flowchart diagram of one embodiment of a method for refining a dictionary for information extraction.
  • FIG. 5 depicts a schematic diagram of one embodiment of a computer system for implementation of one or more aspects of the functionality described herein.
  • Throughout the description, similar reference numbers may be used to identify similar elements.
  • DETAILED DESCRIPTION
  • It will be readily understood that the components of the embodiments as generally described herein and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
  • The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
  • Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussions of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment.
  • Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention.
  • Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present invention. Thus, the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
  • While many embodiments are described herein, at least some of the described embodiments present a system and method for refining at least one dictionary for information extraction. More specifically, the system uses statistical modeling and refinement optimization to balance the precision and recall of an extractor for efficient, accurate information extraction. The system may use extracted results that have been labeled as correct or incorrect to determine candidate entries 138 to be removed from the dictionary to provide the highest precision for avoiding false positives while minimizing any decrease in recall. These candidate entries 138 may also be analyzed by a user to determine which entries should be removed from the dictionary.
  • In general, developing and maintaining high-quality extractors is a laborious and time consuming process. When creating an extractor, developers may start by writing an initial extractor that includes an initial set of basic features and rules that combine the features to extract the desired entities. The extractor may be executed on a document collection, the results may be examined to determine the cause of incorrect results, and the features and rules may then be refined to remove the incorrect results. This process may be repeated as many times as necessary to obtain satisfactory performance of the extractor. Generally, removing the sources of false positives from the extractor helps produce a higher precision in the extracted results. Specifically, refining dictionaries used in an extractor by removing the sources (words or phrases) of false positives can improve the quality of the extractor. The system and method described herein allow the refinement of dictionaries to improve the precision (minimization of false positives) of the extractor while maintaining a sufficient level of recall (avoidance of discarding correct answers) for the extractor.
  • Refining the dictionary may be divided into two sub-problems: statistical modeling and refinement optimization. The primary goal of the statistical modeling problem is to estimate the precision of each individual dictionary entry in an extractor, given a set of extracted entities that have been labeled as “correct” or “incorrect”. Labeling the outputs of extractors may be an expensive task requiring large amounts of human effort. Dictionaries frequently contain thousands of entries, so very little information about individual entries may be available even with a large collection of labeled data. Consequently, the extractor may need to be capable of coping with very sparse labeled data in order to be usable in practice.
  • The refinement optimization problem involves using the outputs of parameter estimation to choose the best set of entries to remove from the dictionary in order to improve the quality of the extractor. Balancing the requirements of precision and recall allows the maximization of an F-score (the harmonic mean of precision and recall) for the extractor. In some embodiments, the F-score maximization may be subject to a limit on the number of entries removed from the dictionary, or the maximum allowable decrease in recall.
  • FIG. 1 depicts a schematic diagram of one embodiment of a dictionary refinement system 100. The depicted dictionary refinement system 100 includes various components, described in more detail below, that are capable of performing the functions and operations described herein. In one embodiment, at least some of the components of the dictionary refinement system 100 are implemented in a computer system. For example, the functionality of one or more components of the dictionary refinement system 100 may be implemented by computer program instructions stored on a computer memory device 102 and executed by a processing device 104 such as a CPU. The dictionary refinement system 100 may include other components, such as input/output devices 106, a disk storage drive 108, an extractor 110, a dictionary 112, and a processor 114. Some or all of the components of the dictionary refinement system 100 may be stored on a single computing device or on a network of computing devices, including a wireless communication network. The dictionary refinement system 100 may include more or fewer components or subsystems than those depicted herein. In some embodiments, the dictionary refinement system 100 may be used to implement the methods described herein as depicted in FIG. 4.
  • In one embodiment, the processor 114 is wholly contained within the processing device 104. In another embodiment, the processor 114 includes one or more separate devices that may be spread among a network of computers, such that the processing capabilities are shared by multiple computing devices and/or executed simultaneously. In various embodiments, the extractor is implemented in the processing device 104 or the processor 114. In one embodiment, the dictionary 112 is contained on the storage disk 108 on the same computing device as the processing device 104, though the dictionary 112 may be contained on any number of storage disks 108. In one embodiment, the dictionary refinement system 100 includes more than one dictionary 112. Each dictionary 112 may be a specialized dictionary for any given subject or grouping of information.
  • In one embodiment, the extractor 110 is applied to a collection of text: dictionary entries 124 in the dictionary 112 are matched to the collection of text, and the extractor 110 outputs extracted results 116. The extractor may include multiple dictionaries 112 and may be applied to the collection of text based on a set of predefined rules 118. At least some of the extracted results 116 may be labeled by a user to identify correct results 120 (true positives) and incorrect results 122 (false positives). In one embodiment, the precision 130 of the extractor 110 is the fraction of true positives among the total number of extracted results 116. The precision of each dictionary entry 124 is the probability that an extracted entity will be correct, given that the entity is based, in whole or in part, on a match of the dictionary entry 124. An extractor 110 with high precision 130 outputs few incorrect results 122. An expected mention that is not identified by the extractor 110 is referred to herein as a missing result or a false negative. The term “recall” 128 is broadly interpreted herein to include the fraction of true positives among the total number of expected occurrences. An extractor 110 with high recall 128 misses very few expected results. In one embodiment, the recall 128 and precision 130 are balanced to maximize a score of the extractor, for example, by setting the score above a score threshold that may be predetermined based on a desired balance of the precision 130 and recall 128. The score may be an F-score, an F-measure, or some other measure of scoring the dictionary. The term “F-score” 132, or “F-measure”, is broadly interpreted herein to include combining precision 130 and recall 128 into a single measure that is computed as the harmonic mean of precision 130 and recall 128, depicted as
  • 2PR / (P + R)
  • where P is the precision 130 and R is the recall 128.
  • In various embodiments, the refinement optimization problem includes executing a refinement algorithm 126 under two constraints. First, because the refinement of an extractor 110 is often done with human supervision, the problem may include a size constraint 134 to limit the number of dictionary entries 124 to be examined at a time. For an extractor E, the size constraint 134 may include finding a set S of size at most k to maximize the F-score 132 of the resulting extractor E′. Alternatively, the extractor 110 may be refined such that the recall 128 does not fall below a certain limit using a recall constraint 136. The recall constraint 136 includes finding a set S such that the F-score 132 of E′ is maximized, while at the same time the recall 128 of E′ does not decrease by more than a fixed budget. In other embodiments, the refinement optimization problem may be approached without constraints on size or recall 128. The algorithm 126 produces a set of candidate dictionary entries 138 that may be removed from the dictionary 112 to improve performance of the extractor 110.
  • Maximizing the quality of the extractor 110 on the entirety of the labeled dataset for the extracted results 116 may not be useful in practice. Instead, by using statistical modeling and refinement optimization, the system may maximize the quality of the extractor 110 in general and avoid overfitting to the labeled dataset. The system may use a model for refining the dictionary 112 by estimating the parameters of the model including the precision 130 of each individual dictionary entry, given the set of extracted results 116 that have been labeled as correct or incorrect.
  • FIG. 2 depicts a flowchart diagram of one embodiment of a method 200 for determining candidate dictionary entries 138. While the method 200 is described in conjunction with the dictionary refinement system 100 of FIG. 1, embodiments of the method 200 may be implemented with other types of dictionary refinement systems 100.
  • In one embodiment, the extractor 110 receives a collection of text 202 to be matched to a set of one or more dictionaries 112 according to a set of rules associated with the extractor 110. Each dictionary 112 may include a set of dictionary entries 124 corresponding to a given subject or grouping of words and phrases. Some of the entries 124 in the dictionaries 112 may overlap with other dictionaries 112, depending on the subjects or groupings for the dictionaries 112.
  • The extractor 110 is applied to the collection of text 202 and outputs the extracted results 116. For each result 116, the system 100 examines the rules and dictionaries 112 of the extractor 110 and determines which dictionaries 112 are involved in producing the extracted result 116, and also determines the provenance of the extracted result 116. Some or all of the extracted results 116 are then given labels 204 as correct results 120 or incorrect results 122 based on a user input. The incorrect results 122 are false positives output by the extractor 110. For example, given a dictionary 112 containing “first name” entries, a second dictionary 112 containing “last name” entries, and a third dictionary 112 containing “full name” entries, the collection of text 202 may include a phrase “Mark Calendar” that is marked as a name based on matches in one or more of the dictionaries 112.
  • The extracted results 116 are input into the processor 114, and the processor 114 uses an algorithm 126 that maximizes the F-score 132 for the extractor 110. The algorithm 126 produces a set of candidate dictionary entries 138 for each dictionary 112 that are output by the processor 114. The candidate dictionary entries 138 are entries whose removal from the dictionary 112 would maximize the F-score 132 for the extractor 110.
  • Because the labels 204 for a given false positive may be determined by multiple dictionary entries 124, the label 204 may not be used to estimate the precision 130 of a dictionary entry directly. Using the false positive example given above, it is not clear whether the false positive is because “Mark” is an incorrect first name, “Calendar” is an incorrect last name, or both. Furthermore, the same dictionary entry can contribute to different results, some correct and some incorrect. For example, “Mark” may contribute to an incorrect result “Mark Calendar”, as well as a correct result for “Mark Smith”. In this case, the processor 114 determines the candidate dictionary entries 138 by using the provenance of each false positive to model the complex dependencies between the dictionary entries 124 and the extracted results 116, along with an algorithm 126 for estimating precisions 130 based on an expectation-maximization (EM) algorithm.
  • In one embodiment, the algorithm 126 is configured to determine which dictionary entries 124 may be removed to result in the highest quality improvement of the extractor 110. For example, two dictionary entries “Chelsea” and “Mark” are both ambiguous as a person name. If “Chelsea” is labeled as an incorrect result 60 times and as a correct result 40 times, and “Mark” is labeled as an incorrect result 9 times and as a correct result 1 time, the precision 130 of “Mark” (10%) is lower than that of “Chelsea” (40%). However, removing “Chelsea” removes more incorrect results 122, possibly leading to a higher overall quality improvement for the extractor 110.
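  • The following Python sketch, using the hypothetical counts from the example above, illustrates why per-entry precision 130 alone is not the right signal for choosing removals:

    labels = {"Chelsea": {"good": 40, "bad": 60}, "Mark": {"good": 1, "bad": 9}}

    for entry, c in labels.items():
        precision = c["good"] / (c["good"] + c["bad"])
        # Removing the entry drops c["bad"] false positives but also
        # sacrifices c["good"] true positives; the net effect on the
        # extractor's F-score depends on both quantities.
        print(entry, f"precision={precision:.0%}",
              f"false positives removed={c['bad']}")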
  • FIG. 3 depicts a flowchart diagram of one embodiment of a method for refining a dictionary 112. While the method is described in conjunction with the dictionary refinement system 100 of FIG. 1, embodiments of the method may be implemented with other types of dictionary refinement systems 100.
  • In one embodiment, after the processor 114 outputs the candidate dictionary entries 138, the processor 114 then receives a user input 302 that selects one or more of the candidate dictionary entries 138 for removal from the dictionaries 112. The processor 114 may then read the dictionary entries 124 currently stored in the dictionaries 112, remove the selected dictionary entries, and modify the dictionaries 112 according to the new set of dictionary entries 124. This may allow a user to manually refine the candidate dictionary entries 138 by determining which entries from the set of candidate dictionary entries 138 are actually removed from the dictionaries 112.
  • FIG. 4 depicts a flowchart diagram of one embodiment of a method 400 for refining a dictionary 112 for information extraction. While the method 400 is described in conjunction with the dictionary refinement system 100 of FIG. 1, embodiments of the method 400 may be implemented with other types of dictionary refinement systems 100.
  • In one embodiment, the dictionary refinement system 100 inputs a set of extracted results 116 from matching the dictionary 112 to the collection of text 202. The extracted results 116 are labeled 410 as correct results 120 or incorrect results 122. In some embodiments, the extracted results 116 that are labeled include only a portion of the entities from the collection of text 202 matched to entries 124 in the dictionary 112. In one embodiment, the system uses 405 a set of predetermined rules 118 and a dictionary 112 to determine the extracted results 116 for the collection of text 202. In one embodiment, the correct results 120 and incorrect results 122 are labeled based on a user input 302.
  • The dictionary refinement system 100 processes 415 the extracted results 116 using an algorithm 126 configured to maximize an F-score 132 for the extractor 110, for example, by setting the F-score above a score threshold. The score threshold for the maximized F-score 132 balances the precision 130 and recall 128 of the extractor 110. The system may process the extracted results 116 by computing the set of candidate dictionary entries 138 that maximize the F-score 132 under a maximum size constraint 134 for the set of candidate dictionary entries 138. The system may process the extracted results 116 by computing the set of candidate dictionary entries 138 that maximize the F-score 132 within an allocated recall constraint 136. The recall constraint 136 determines a minimum coverage of the dictionary 112, which may help the system avoid false negatives. The system may process the extracted results 116 by estimating the precision 130 of each dictionary entry in the full set of dictionary entries 124 using the extracted results 116. The algorithm 126 used by the system may be the EM algorithm.
  • The dictionary refinement system 100 outputs 420 a set of candidate dictionary entries 138 corresponding to a full set of dictionary entries 124 of the dictionary 112. The candidate dictionary entries 138 are candidates to be removed from the dictionary 112 based on the extracted results 116. In one embodiment, the dictionary refinement system 100 receives 425 a user input 302 to select dictionary entries from the set of candidate dictionary entries 138. The dictionary refinement system 100 then removes 430 the selected dictionary entries from the dictionary 112.
  • In some embodiments, the system may obtain extracted results 116 using a plurality of specialized dictionaries. Each dictionary 112 may produce extracted results 116 labeled as correct results 120 or incorrect results 122, which may then be processed for each corresponding dictionary 112 to determine which dictionary entries 124 should be removed for greatest improvement of the performance of the extractor 110.
  • In one embodiment, for a single dictionary case, the dictionary A contains a set of n entries 124. A given partially labeled corpus may be a random sample of entries from A sampled independently according to their relative frequency, denoted by f_w, i.e., any occurrence in the corpus is a match for entry w with probability f_w, and \sum_{w\in A} f_w = 1.
  • In addition, each entry has a fixed precision p_w \in [0,1]. An occurrence of an entry w is Good if it is a correct match for the annotation used in a query; otherwise the occurrence is Bad. For example, a match for ‘Ford’, ‘Chelsea’ or ‘Mark’ is Good for the Person annotator if the match corresponds to a person name, and Bad otherwise. In practice, a human annotator labels a subset of the occurrences explicitly as Good or Bad. In one embodiment, it is assumed that each occurrence of w in the given corpus was chosen to be Good with probability p_w and Bad with probability 1−p_w, randomly and independently of the other occurrences and of whether the label 204 is given to the refinement algorithm 126. For an entry w, let t_w denote the number of occurrences of w in the given corpus, g_w denote the number of times the entry was labeled Good, and b_w denote the number of times the entry was labeled Bad.
  • For the collections of text 202 in which the total number of occurrences is much larger than the number of labeled occurrences, the empirical frequencies t_w / \sum_{w'\in A} t_{w'} may be treated as the true frequencies. Consequently, the goal of the parameter estimation problem is estimating the precisions 130.
  • In one embodiment, estimating the precision 130 for w includes observing the precision 130 of other entries 124. For example, if other entries 124 with a large number of labels 204 have precision 130 close to 80%, then w is also more likely to have precision 130 close to 80%. This dependency may be expressed in the model as described below. The precision 130 of each word is assumed to be chosen randomly and independently from a fixed and unknown distribution Π over [0,1]. In a Bayesian analysis, when estimating p_w, the distribution Π represents a prior belief about p_w. This allows the use of the given labels 204 for w to perform Bayesian updates so as to obtain the posterior distribution Π_w. The posterior distribution Π_w represents the updated knowledge of p_w and can be used to derive an estimate of p_w. Taking the mean of Π_w provides a simple and effective way to use Π_w.
  • In some embodiments, it is assumed that the prior distribution Π is not given to the algorithm 126, and a suitable Π may need to be found given the available labels 204; that is, the goal is to find the distribution Π from which each precision p_w is assumed to be drawn randomly and independently. The distribution Π may be modeled using beta distributions. This is a convenient family for Bayesian updates using the labels 204. The beta distribution also allows easy estimation of parameters.
  • A beta distribution Beta(α,β) has two parameters α,β>0, and the probability density function (PDF) of the distribution is c\,\theta^{\alpha-1}(1-\theta)^{\beta-1}, where c is the normalizing constant. The mean of the distribution is \frac{\alpha}{\alpha+\beta}.
  • If a Good (or Bad) label 204 is observed, the posterior Π_w updates to Beta(α+1,β) (or Beta(α,β+1), respectively). More generally, after observing g_w Good and b_w Bad labels 204, the posterior Π_w is Beta(α+g_w, β+b_w); if g_w = b_w = 0, the posterior Π_w is the same as the prior Π.
  • Better estimates of the parameters of the prior Beta(α,β) improve the estimate of the obtained precision p_w. The system may use a uniform prior with Beta(1,1). Alternatively, the available empirical precisions \frac{g_w}{g_w+b_w} may be used to compute the prior using the standard method of moments. Let

\hat\mu = \frac{1}{n}\sum_{w\in A}\frac{g_w}{g_w+b_w}

be the sample mean of the observed precisions and \hat\sigma^2 their sample variance, and take

\hat\alpha = \hat\mu\left(\frac{\hat\mu(1-\hat\mu)}{\hat\sigma^2}-1\right) \quad \text{and} \quad \hat\beta = (1-\hat\mu)\left(\frac{\hat\mu(1-\hat\mu)}{\hat\sigma^2}-1\right)

which are considered as the parameters.
  • The mean of the posterior distribution Π_w, which equals

\frac{\alpha + g_w}{\alpha + g_w + \beta + b_w},

is used to estimate p_w. This simplification may not affect the quality of the refinement optimization significantly because the F-score 132 of a dictionary 112 is determined by large sums of precisions 130 multiplied by frequencies. A large sum of precisions 130, each drawn independently from the corresponding distribution Π_w, is strongly concentrated around the expectation of the sum, which depends only on the mean of each Π_w.
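  • As an illustration, the following is a minimal Python sketch of this estimation procedure; the entry names and labeled counts are hypothetical, and the method-of-moments fit assumes a non-degenerate sample variance:

    import statistics

    def fit_beta_prior(empirical_precisions):
        # Method-of-moments fit of Beta(alpha, beta) to the observed
        # per-entry precisions g_w / (g_w + b_w).
        mu = statistics.mean(empirical_precisions)
        var = statistics.pvariance(empirical_precisions)
        common = mu * (1 - mu) / var - 1
        return mu * common, (1 - mu) * common

    def posterior_mean_precision(alpha, beta, good, bad):
        # Mean of the posterior Beta(alpha + g_w, beta + b_w).
        return (alpha + good) / (alpha + good + beta + bad)

    # Hypothetical labeled counts (g_w, b_w) for three entries.
    counts = {"Ford": (12, 3), "Chelsea": (40, 60), "Mark": (1, 9)}
    alpha, beta = fit_beta_prior([g / (g + b) for g, b in counts.values()])
    for w, (g, b) in counts.items():
        print(w, round(posterior_mean_precision(alpha, beta, g, b), 3))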
  • In one embodiment, the optimization problem may be considered for single dictionary refinement, assuming that the true values of p_w, f_w are given as input for all w \in A. The standard notions of precision 130, recall 128 and F-score 132 may be used to measure the quality of the solution for the refinement optimization problem. For a subset of entries S, precision (P_S), recall (R_S), and F-score (F_S) are defined as

R_S = \frac{\sum_{w\in S} p_w f_w}{\sum_{w\in A} p_w f_w}, \quad P_S = \frac{\sum_{w\in S} p_w f_w}{\sum_{w\in S} f_w}, \quad F_S = \frac{2\sum_{w\in S} p_w f_w}{\sum_{w\in A} p_w f_w + \sum_{w\in S} f_w}.
  • F-score 132 is the harmonic mean of P_S and R_S and is used to balance the precision 130 with the recall 128 of the refined dictionary. When a subset S is removed from the dictionary A, the residual precision 130, recall 128 and F-score 132 are denoted by P_{\bar S} = P_{A\setminus S}, R_{\bar S} = R_{A\setminus S} and F_{\bar S} = F_{A\setminus S}. The F-score 132 may be maximized under two constraints:
      • 1. Size constraint: Given an integer k ≤ n, find a subset S that maximizes F_{\bar S}, where |S| ≤ k.
      • 2. Recall constraint: Given a fraction ρ ≤ 1, find a subset S that maximizes F_{\bar S}, where the residual recall R_{\bar S} ≥ ρ.
        The F-score 132 may alternatively be maximized with no size or recall budget constraints.
  • F_{\bar S} is a non-linear function of the precisions \hat{p}_w, w \in S, since both the numerator and the denominator depend on the set S being removed, thereby making the analysis of the optimization problem non-trivial.
  • For the size constraint 134, the goal is to maximize

F_{\bar S} = \frac{2\sum_{w\in A\setminus S} p_w f_w}{\sum_{w\in A} p_w f_w + \sum_{w\in A\setminus S} f_w},
  • where |S| ≤ k. Determining whether there exists a dictionary 112 with F-score 132 of at least θ allows the algorithm 126 to overcome the non-linearity of the objective function. Accordingly, the algorithm 126 guesses a value θ and then checks whether θ is a feasible F-score 132 for some S. The maximum value of the F-score 132 is then found by doing a binary search.
  • To check whether θ is a feasible F-score 132, the system first checks whether there is a set S of entries such that
  • F_{\bar S} = \frac{2\left(\sum_{w\in A} p_w f_w - \sum_{w\in S} p_w f_w\right)}{\sum_{w\in A} p_w f_w + \sum_{w\in A} f_w - \sum_{w\in S} f_w} \ge \theta.
  • Rearranging the terms yields

\sum_{w\in S} f_w(\theta - 2p_w) \ge \sum_{w\in A} f_w(\theta - (2-\theta)p_w).

The right hand side of the inequality is independent of S, so the system selects the (at most) k entries with the highest non-negative values of f_w(\theta - 2p_w) and checks whether their sum is at least \sum_{w\in A} f_w(\theta - (2-\theta)p_w).
  • A subset S is desired such that F_{\bar S} ≥ F_A. Consequently, the minimum value of the guess is F_A and the maximum value is 1. The algorithm 126 is presented in Algorithm 1, shown below, in terms of a parameter Δ, where Δ is the desired accuracy of the algorithm.
  • Algorithm 1: Algorithm for size constraint (given k and parameter Δ)
    1:   Let θ_low = F_A and θ_high = 1
    2:   while θ_high − θ_low > Δ do
    3:     Let θ = (θ_high + θ_low) / 2 be the current guess.
    4:     Sort the entries w in descending order of f_w(θ − 2p_w).
    5:     Let S be the top l ≤ k entries in the sorted order such that
           f_w(θ − 2p_w) ≥ 0 for all w ∈ S.
    6:     if Σ_{w∈S} f_w(θ − 2p_w) ≥ Σ_{w∈A} f_w(θ − (2 − θ)p_w) then
    7:       θ is feasible; set θ_low = F_{\bar S} and continue.
    8:     else
    9:       θ is not feasible; set θ_high = θ and continue.
    10:    end if
    11:  end while
    12:  Output the set S defining the most recent feasible θ_low.
  • A linear-time O(n) algorithm for checking the feasibility includes: (i) use the standard linear time selection algorithm to find the k-th highest entry, say u, according to f_w(θ−2p_w), (ii) do a linear scan to choose the entries w such that f_w(θ−2p_w) > f_u(θ−2p_u), and then choose entries such that f_w(θ−2p_w) = f_u(θ−2p_u) to get k entries total, (iii) discard the selected entries with negative values of f_w(θ−2p_w) and output the remaining (at most k) entries as the set S. However, a simpler implementation of the verification uses a min-heap that gives O(n + k log n) time, whereas simple sorting gives O(n log n) time.
  • Since values of the guesses are between 0 and 1 and the algorithm stops when the upper and lower bounds are less than Δ away, at most log(1/Δ) steps will be required. This means that there is an implementation of the algorithm with running time O(n log(1/Δ)). Setting Δ to a sufficiently low value may allow the algorithm to find the optimal solution. Specifically, there is an optimal algorithm for maximizing the residual F-score 132 for single dictionary refinement under a size constraint 134. The algorithm runs in time O(n(log n+B)), where B is the number of bits used to represent each of the pw and fw values given to the algorithm.
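  • A runnable Python sketch of Algorithm 1 follows, using the simpler sorting-based feasibility check; the per-entry precisions and frequencies are hypothetical inputs, not values from the description above:

    def refine_size_constrained(p, f, k, delta=1e-6):
        # Binary search for the largest feasible residual F-score, returning
        # the set S (|S| <= k) whose removal achieves it.
        entries = list(p)
        total_pf = sum(p[w] * f[w] for w in entries)
        total_f = sum(f[w] for w in entries)

        def feasible(theta):
            # Top <= k entries with non-negative f_w * (theta - 2 p_w).
            scored = sorted(((f[w] * (theta - 2 * p[w]), w) for w in entries),
                            reverse=True)
            S = [w for score, w in scored[:k] if score >= 0]
            lhs = sum(f[w] * (theta - 2 * p[w]) for w in S)
            rhs = sum(f[w] * (theta - (2 - theta) * p[w]) for w in entries)
            return lhs >= rhs, S

        low = 2 * total_pf / (total_pf + total_f)  # F-score of the full dictionary
        high, best = 1.0, []
        while high - low > delta:
            theta = (high + low) / 2
            ok, S = feasible(theta)
            if ok:
                low, best = theta, S
            else:
                high = theta
        return best

    p = {"Smith": 0.95, "Ford": 0.7, "Chelsea": 0.4, "Mark": 0.1}  # hypothetical
    f = {"Smith": 0.5, "Ford": 0.2, "Chelsea": 0.2, "Mark": 0.1}   # sums to 1
    print(refine_size_constrained(p, f, k=1))  # ['Mark']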
  • A simple and efficient algorithm that gives a nearly optimal solution when used on a large corpus where frequencies of individual entries 124 are small is described below. The algorithm sorts the entries 124 in increasing order of precision p_w, and selects entries 124 in this order until the recall budget is exhausted or there is no improvement of the F-score 132 from selecting the next entry. The algorithm runs in time O(n log n).
  • To obtain a lower bound on the F-score 132 of the solution produced by the algorithm, let w_1, . . . , w_n be the entries 124 sorted by precision 130 and p_1 ≤ . . . ≤ p_n be the corresponding precisions 130. Let S* be the set of entries 124 whose removal gives the optimal F-score 132 such that R_{\bar S^*} ≥ ρ. Let r^* = \sum_{i\in S^*} p_i f_i and let l be the largest index for which \sum_{i>l} p_i f_i \ge r^*. Then the set S returned by the algorithm satisfies

F_{\bar S} \ge \frac{2\sum_{i\in \bar S^*} p_i f_i}{\sum_{i\in \bar S^*} f_i + \sum_i p_i f_i + f_{\max}/p_{l+1}}.
  • The lower bound guaranteed by the algorithm differs from the optimal F-score F_{\bar S^*} only by the addition of the error term f_{\max}/p_{l+1} to the denominator. Individual frequencies are likely to be small when the given corpus and the dictionary 112 are large. At the same time, l, and hence p_{l+1}, are determined by the recall budget. Therefore, the error term f_{\max}/p_{l+1} is likely to be much smaller than the denominator for a large dictionary 112.
  • While it is not necessarily optimal in general, without the recall budget (i.e., with ρ=0) this algorithm finds the solution with the globally optimal F-score 132. The optimal solution can also be found using Algorithm 1 with k=n.
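  • A Python sketch of this incremental algorithm under a recall budget follows; the p_w and f_w values are again hypothetical:

    def refine_recall_constrained(p, f, rho):
        # Remove entries in increasing order of precision until the residual
        # recall would drop below rho or the F-score stops improving.
        total_pf = sum(p[w] * f[w] for w in p)
        kept_pf, kept_f = total_pf, sum(f.values())

        def f_score(pf, fr):
            return 2 * pf / (total_pf + fr)

        removed = []
        for w in sorted(p, key=p.get):  # ascending precision
            new_pf, new_f = kept_pf - p[w] * f[w], kept_f - f[w]
            if new_pf / total_pf < rho:  # recall budget would be exceeded
                break
            if f_score(new_pf, new_f) <= f_score(kept_pf, kept_f):
                break  # deleting the next entry no longer helps
            kept_pf, kept_f = new_pf, new_f
            removed.append(w)
        return removed

    p = {"Smith": 0.95, "Ford": 0.7, "Chelsea": 0.4, "Mark": 0.1}
    f = {"Smith": 0.5, "Ford": 0.2, "Chelsea": 0.2, "Mark": 0.1}
    print(refine_recall_constrained(p, f, rho=0.9))  # ['Mark']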
  • While the algorithms above are described primarily in conjunction with a single dictionary case, the system and method described herein are capable of refining and optimizing an extractor 110 using more than one dictionary 112. For example, in one embodiment there are b dictionaries A_1, . . . , A_b, and there are n entries in total in A = \bigcup_{l=1}^{b} A_l. Any occurrence τ is produced by matches of one or more dictionary entries 124 combined by the given extraction rule; all such dictionary entries w are said to be in the provenance of τ. How the entries 124 produce τ is captured by the provenance expression Prov(τ) of τ; Prov(τ) is a Boolean expression in which the entries 124 are treated as variables (every entry in A corresponds to a unique Boolean variable). Given two Boolean expressions φ_1 and φ_2, φ_1 ≡ φ_2 if the variable sets in φ_1 and φ_2 are the same and the truth tables of φ_1 and φ_2 on these variables are also the same. For the same provenance expression φ, there may be multiple occurrences τ such that Prov(τ) = φ. This is analogous to the single dictionary case, where the trivial provenance expression φ = w for any entry w has one or more occurrences. Note that with extraction rules based on SELECT-PROJECT-JOIN-UNION queries, the provenance expressions are monotone.
  • The statistical model of a single dictionary 112 is extended to the multiple dictionary case. Every provenance expression φ may be assumed to have a true frequency f(φ) \in [0,1] and a true precision p(φ) \in [0,1]. As before, \sum_{\varphi} f(\varphi) = 1, where the sum is over all possible Boolean expressions on the set of entries 124, and any occurrence τ has Prov(τ) = φ with probability f(φ). In addition, the label 204 of τ is Good with probability p(φ) and Bad with probability 1−p(φ), randomly and independently of other occurrences and of whether the label 204 of τ is given.
  • In practice, unlabeled data is sufficiently large, so the frequencies of results are estimated using their empirical frequencies
  • \hat f(\varphi) = \frac{|\{\tau : \mathrm{Prov}(\tau) = \varphi\}|}{\sum_{\psi} |\{\tau : \mathrm{Prov}(\tau) = \psi\}|},
  • and the hat may be dropped. The precision p(φ) of results φ may be estimated from a limited amount of labeled data. A natural approach to finding the precisions 130 of provenance expressions is to estimate them empirically. The problem with this approach is that the possible number of such provenance expressions is very large, and it is likely that very few (if any) labels 204 would be available for most of the provenance expressions. At the same time, it is quite likely that individual dictionary entries 124 have similar precision 130 across different provenance expressions. This intuition may be represented by strengthening the model described herein in the following way.
  • It may be assumed that, as in the single dictionary case, every entry w has a fixed (and unknown) precision 130 denoted by p_w. For any given occurrence τ such that w \in Prov(τ), the match of w for τ is correct with probability p_w and incorrect with probability 1−p_w, independent of the other occurrences and of the other entries 124 in the provenance of τ. Further, it may be assumed that the AQL rule is correct, i.e., the label 204 of τ is Good if and only if its provenance Prov(τ) evaluates to true with the matches of the dictionary entries 124 in Prov(τ) (Good ≡ true and Bad ≡ false). Computing the probability of any Boolean expression φ given the probabilities of the individual variables is in general #P-hard, and the classes of queries for which the probability of the Boolean provenance can be efficiently computed have been extensively studied in the literature. However, the Boolean provenance expressions described herein involve a small number of variables (typically ≤ 10). Thus, p(φ) may be computed given p_w by an exhaustive enumeration of satisfying assignments of φ and using the assumption of independence of variables.
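  • A small Python sketch of this exhaustive enumeration follows; the provenance expression and precisions are hypothetical, and a provenance is represented as a Boolean function over an assignment of entry variables:

    from itertools import product

    def provenance_probability(variables, prov, p):
        # P[phi evaluates to true], enumerating all assignments to the entry
        # variables and weighting each by the independent match probabilities.
        total = 0.0
        for bits in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, bits))
            if prov(assignment):
                weight = 1.0
                for w, correct in assignment.items():
                    weight *= p[w] if correct else 1 - p[w]
                total += weight
        return total

    # Conjunctive FN-LN provenance: Good only if both matches are correct.
    p = {"Mark": 0.1, "Calendar": 0.05}  # hypothetical precisions
    print(provenance_probability(["Mark", "Calendar"],
                                 lambda a: a["Mark"] and a["Calendar"], p))
    # prints 0.005 (= 0.1 * 0.05)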
  • Here, the goal is to estimate the values of precision pw given a set of occurrences τ along with their labels 204 and provenance expressions Prov(τ). The Expectation-Maximization (EM) algorithm may be used to solve this problem.
  • The EM algorithm is a widely-used technique for maximum likelihood estimation of parameters of a probabilistic model under hidden variables. This algorithm estimates the parameters iteratively either for a given number of steps or until some convergence criteria are met.
  • The following notations present the update rules of the EM algorithm for the problem described herein. The entries 124 are indexed arbitrarily as w_1, . . . , w_n. Each entry w_i has a true precision p_i = p_{w_i}. There are N labeled occurrences τ_1, . . . , τ_N. It is assumed that τ_1, . . . , τ_N also denote the labels 204 of the occurrences. So each τ_j is Boolean, where τ_j = 1 (resp. 0) if the label 204 is Good (resp. Bad). If w_i \in Prov(τ_j), then τ_j \in Succ(w_i).
  • For simplicity in presentation, it may be assumed that entries 124 from exactly b dictionaries 112 are involved in the provenance expression φ_j = Prov(τ_j) for each occurrence τ_j, although this implementation works for general cases. Hence, each φ_j takes b inputs y_{j1}, . . . , y_{jb} and produces τ_j. Each y_{jl} is Boolean, where y_{jl} = 1 (resp. 0) if the match of the dictionary entry corresponding to y_{jl} is correct (resp. incorrect) while producing the label 204 for τ_j. The entry corresponding to y_{jl} is denoted by Prov_{jl} \in {w_1, . . . , w_n}.
  • To illustrate the notations, consider the following example extraction rule expressed in the Annotation Query Language (AQL):
  • create view FirstLast as
      select Merge(F.match, L.match) as match
      from FirstName F, LastName L
      where FollowsTok(F.match, L.match, 0, 0)
  • The result is a person name if it is a match from the first-name (FN) dictionary, followed by a match from the last-name (LN) dictionary. This rule is called the FN-LN rule. In this example, b = 2 and for every occurrence τ_j, τ_j = φ_j(y_{j1}, y_{j2}) = y_{j1} y_{j2}. For a Good occurrence “John Smith”, τ_j = 1, y_{j1} = 1 (for “John”), and y_{j2} = 1 (for “Smith”), with Prov_{j1} = “John” and Prov_{j2} = “Smith”. For a Bad occurrence “Mark Calendar”, τ_j = 0, y_{j1} = 1 (for “Mark”), and y_{j2} = 0 (for “Calendar”).
  • The vector \vec{x} = ⟨τ_1, . . . , τ_N⟩ is the observed data, the vector of vectors \vec{y} = ⟨y_{jl}⟩, j \in [1, N], l \in [1, b], is the hidden data, and the vector \vec{θ} = {p_1, . . . , p_n} is the vector of unknown parameters.
  • The parameter vector at iteration t is denoted \vec{θ}_t. Let c_{w_i j, t} = E[y_{jl} | τ_j, \vec{θ}_t], where τ_j \in Succ(w_i) and Prov_{jl} = w_i. It may be shown that the update rule for the parameter p_i has a closed form:
  • p_i = \frac{C_1}{C_1 + C_2},
  • where C_1 = \sum c_{w_i j, t} and C_2 = \sum (1 − c_{w_i j, t}), and each sum is over 1 ≤ j ≤ N such that τ_j \in Succ(w_i). These parameter values constitute \vec{θ}_{t+1}, the estimate of the parameters in the (t+1)-th round.
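  • A Python sketch of this EM loop for the special case of conjunctive provenance (the FN-LN rule above) follows; the occurrences, initial precisions, and iteration count are hypothetical:

    def em_conjunction(occurrences, init_p, iterations=50):
        # occurrences: list of (label, [entries]) with label 1 = Good, 0 = Bad,
        # where the provenance is the conjunction of the listed entries.
        p = dict(init_p)
        for _ in range(iterations):
            c1 = {w: 0.0 for w in p}  # expected correct matches per entry
            c2 = {w: 0.0 for w in p}  # expected incorrect matches per entry
            for label, entries in occurrences:
                prod_all = 1.0
                for w in entries:
                    prod_all *= p[w]
                for w in entries:
                    if label == 1:
                        e = 1.0  # a Good conjunction implies every match is correct
                    else:
                        # E[y_w | tau = 0]: w is correct, yet not all matches are.
                        prod_others = prod_all / p[w] if p[w] > 0 else 0.0
                        e = p[w] * (1 - prod_others) / (1 - prod_all)
                    c1[w] += e
                    c2[w] += 1 - e
            p = {w: c1[w] / (c1[w] + c2[w]) if c1[w] + c2[w] > 0 else p[w]
                 for w in p}
        return p

    occ = [(1, ["Mark", "Smith"]), (0, ["Mark", "Calendar"]), (1, ["John", "Smith"])]
    init = {w: 0.5 for w in ["Mark", "Smith", "John", "Calendar"]}
    print(em_conjunction(occ, init))  # "Calendar" converges toward low precision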
  • In the single dictionary case, every occurrence τ of an entry w has Prov(τ) = w, and when w is deleted only those occurrences are removed from the result set. However, in the multiple-dictionary case, if an entry w is deleted, multiple occurrences τ such that w \in Prov(τ) can disappear from the result set. When a subset of entries S ⊆ A is removed, it may be seen that a provenance expression φ disappears if and only if, after assigning all variables for entries 124 in S the value false and all variables for entries 124 in A\S the value true, the Boolean provenance φ evaluates to false. Denote by surv(S) the set of provenance expressions φ that survive (do not disappear) after a given set S is deleted. For example, if there are three occurrences with provenance expressions uv, u+v, uw+uv, then when S = {u} is deleted, the set surv(S) will only contain the occurrence with provenance expression u+v. Hence the residual recall (R_{\bar S}) and the residual precision (P_{\bar S}) are defined as (F_{\bar S} is their harmonic mean):
  • R_{\bar S} = \frac{\sum_{\varphi\in \mathrm{surv}(S)} p(\varphi) f(\varphi)}{\sum_{\varphi} p(\varphi) f(\varphi)}, \quad P_{\bar S} = \frac{\sum_{\varphi\in \mathrm{surv}(S)} p(\varphi) f(\varphi)}{\sum_{\varphi\in \mathrm{surv}(S)} f(\varphi)}
  • The above definitions for multiple dictionaries generalize the definitions for single dictionary refinement optimization.
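  • A Python sketch of computing surv(S) and the residual scores follows, using the uv, u+v, uw+uv example above; the p(φ) and f(φ) values are hypothetical:

    def residual_scores(results, S):
        # results: list of (prov, variables, p_phi, f_phi); prov is a Boolean
        # function over {entry: bool}. Deleted entries are set to False, all
        # others to True; a result survives iff its provenance stays true.
        total_pf = sum(p * f for _, _, p, f in results)
        surv = [(p, f) for prov, vs, p, f in results
                if prov({v: v not in S for v in vs})]
        surv_pf = sum(p * f for p, f in surv)
        surv_f = sum(f for _, f in surv)
        recall = surv_pf / total_pf
        precision = surv_pf / surv_f if surv_f else 0.0
        return recall, precision

    results = [
        (lambda a: a["u"] and a["v"], ["u", "v"], 0.9, 0.4),                  # uv
        (lambda a: a["u"] or a["v"], ["u", "v"], 0.8, 0.3),                   # u+v
        (lambda a: a["u"] and (a["w"] or a["v"]), ["u", "v", "w"], 0.7, 0.3)  # uw+uv
    ]
    print(residual_scores(results, S={"u"}))  # only u+v survives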
  • Since the multiple dictionary refinement problem is non-deterministic polynomial-time (NP)-hard under both size and recall constraints 134, 136, several simple and efficient algorithms are proposed and evaluated. These algorithms take the precisions 130 of individual dictionary entries 124 (which may be obtained using the EM algorithm) and a set of occurrences with their provenance expressions as input, and produce a subset of entries 124 across all dictionaries 112 to be removed. The types of algorithms evaluated here are (1) greedy and (2) entry-precision-based, or EP-based for short.
  • To compute the residual F-score 132, both the greedy and EP-based algorithms compute the precision 130 of tuples from the precisions of entries 124 under the independence assumption. The greedy algorithms select the next entry that gives the maximum improvement in F-score 132. The algorithm stops if no further improvement in F-score 132 is possible by deleting any entry or when the given size or recall budget is exhausted.
  • On the other hand, the EP-based algorithms exploit the precision 130 of individual dictionary entries 124. The dictionary entries 124 may be treated as if they come from a single dictionary (note, however, that the actual provenances were used by the EM algorithm to estimate the precision 130 of entries 124). These algorithms use the selection criteria of the incremental algorithms for the single-dictionary case, i.e., maximize ΔF for the size constraint 134 and ΔF/ΔR for the recall constraint 136, where ΔF, ΔR denote the changes in F-score 132 and recall 128 from deleting one additional entry. It may be shown that, in the single-dictionary case, selection according to these criteria can be approximated by selecting entries 124 in increasing order of f_w(p_w − F/2) for the size constraint 134, and of p_w for the recall constraint 136, where F is the current value of the F-score 132 (the proof appears in the full version). In the multiple-dictionary case, p_w is considered as the given precision 130 of entry w, and f_w as the total frequency of provenance expressions that include w. An entry is selected for removal from the top of such a sorted order if it gives an improvement in F-score 132. The selection continues until the given size or recall budget is exhausted. For optimization under the size constraint 134, the value of the F-score 132 is also recomputed after each entry is selected. A sketch of the greedy variant follows.
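  • The following Python sketch illustrates the greedy scheme; the scoring callback below is a single-dictionary stand-in for the provenance-based residual F-score computation, and the entries, precisions, and frequencies are hypothetical:

    def greedy_refinement(entries, f_score_after, budget):
        # Repeatedly delete the entry giving the largest F-score improvement,
        # stopping when no deletion helps or the size budget is exhausted.
        S = set()
        current = f_score_after(S)
        while len(S) < budget:
            best_entry, best_f = None, current
            for w in entries - S:
                candidate = f_score_after(S | {w})
                if candidate > best_f:
                    best_entry, best_f = w, candidate
            if best_entry is None:
                break  # no single deletion improves the F-score
            S.add(best_entry)
            current = best_f
        return S

    p = {"u": 0.2, "v": 0.9, "w": 0.6}
    f = {"u": 0.3, "v": 0.5, "w": 0.2}
    total_pf = sum(p[x] * f[x] for x in p)

    def f_after(S):
        pf = sum(p[x] * f[x] for x in p if x not in S)
        fr = sum(f[x] for x in p if x not in S)
        return 2 * pf / (total_pf + fr) if fr else 0.0

    print(greedy_refinement(set(p), f_after, budget=2))  # {'u'}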
  • FIG. 5 depicts a schematic diagram of one embodiment of a computer system 500 for implementation of one or more aspects of the functionality described herein. The illustrated computer system 500 is only one example of a suitable computer architecture and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. Regardless, the computer system 500 is capable of being implemented to perform any or all of the functionality set forth hereinabove.
  • The depicted computer system 500 includes a computer processing device 502, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the computer processing device 502 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
  • The computer processing device 502 may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Embodiments of the computer processing device 502 may be practiced locally, remotely, or in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
  • In one embodiment, the computer processing device 502 includes components and functionality typical of a general-purpose computing device. The components of the computer processing device 502 may include, but are not limited to, one or more processors or processing units 504, a system memory 506, and a bus 508 that couples various system components including the system memory 506 to the processor 504.
  • The bus 508 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
  • The computer processing device 502 typically includes a variety of computer system readable media (also referred to as computer readable media and/or computer usable media). Such media may be any available media that is accessible by the computer processing device 502. Embodiments of the computer readable media may include one or more of the following types of media: volatile and non-volatile media, removable and non-removable media.
  • The system memory 506 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 510 and/or cache memory 512. The computer processing device 502 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, a storage system 514 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to the bus 508 by one or more data media interfaces. As will be further depicted and described below, the memory 506 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
  • In some embodiments, a program/utility 516, having a set (at least one) of program modules 518, is stored in the memory 506. The program modules 518 generally carry out one or more of the functions and/or methodologies of the embodiments described herein. The memory 506 also may store an operating system, one or more application programs, other program modules, and/or program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a personal computer and/or networking environment.
  • The computer processing device 502 may also communicate with one or more external devices 520 such as a keyboard, a pointing device, a display 522, etc.; one or more devices that enable a user to interact with the computer processing device 502; and/or any devices (e.g., network card, modem, etc.) that enable the computer processing device 502 to communicate with one or more other computing devices. Such communication can occur via input/output (I/O) interfaces 524. Additionally, the computer processing device 502 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via a network adapter 526. As depicted, the network adapter 526 communicates with the other components of the computer processing device 502 via the bus 508. It should be understood that, although not shown, other hardware and/or software components could be used in conjunction with embodiments of the computer processing device 502. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems.
  • Embodiments of the invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment containing both hardware and software elements. In one embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • An embodiment of a dictionary refinement system 100 includes at least one processor coupled directly or indirectly to memory elements through a system bus such as a data, address, and/or control bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Additionally, network adapters also may be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
  • In the above description, specific details of various embodiments are provided. However, some embodiments may be practiced with less than all of these specific details. In other instances, certain methods, procedures, components, structures, and/or functions are described in no more detail than is necessary to enable the various embodiments of the invention, for the sake of brevity and clarity.
  • The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
  • Although specific embodiments of the invention have been described and illustrated, the invention is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the invention is to be defined by the claims appended hereto and their equivalents.

Claims (14)

1. A computer program product, comprising:
a computer readable storage medium to store a computer readable program, wherein the computer readable program, when executed by a processor within a computer, causes the computer to perform operations for refining a dictionary for information extraction, the operations comprising:
inputting a set of extracted results from execution of an extractor matching the dictionary to a collection of text, wherein the extracted results are labeled as correct results or incorrect results;
processing the extracted results using an algorithm configured to set a score of the extractor above a score threshold, computed as a harmonic mean of a recall of the extractor and a precision of each dictionary entry, wherein the recall comprises a fraction of true positives among a total number of expected occurrences, wherein the precision comprises a probability that an extracted entity is correct; and
outputting a set of candidate dictionary entries corresponding to a full set of dictionary entries, wherein the candidate dictionary entries are candidates to be removed from the dictionary based on the extracted results.
2. The computer program product of claim 1, wherein the computer readable program, when executed on the computer, causes the computer to perform additional operations, comprising:
obtaining extracted results from the extractor using a plurality of specialized dictionaries, wherein each dictionary produces extracted results labeled as correct results or incorrect results.
3. The computer program product of claim 1, wherein inputting a set of extracted results further comprises:
using the extractor comprising a set of predetermined rules and matching a plurality of dictionaries to the collection of text to determine the extracted results; and
labeling the correct results and the incorrect results based on a user input.
4. The computer program product of claim 1, wherein processing the extracted results further comprises:
computing the set of candidate dictionary entries that set the score above the score threshold, wherein the set of candidate dictionary entries comprises a maximum size constraint.
5. The computer program product of claim 1, wherein processing the extracted results further comprises:
computing the set of candidate dictionary entries that set the score above the score threshold within an allocated recall constraint, wherein the recall constraint determines a minimum coverage of the dictionary.
6. The computer program product of claim 1, wherein processing the extracted results further comprises:
estimating the precision of each dictionary entry in the full set of dictionary entries using the extracted results, wherein the algorithm is a statistical estimation algorithm.
7. The computer program product of claim 6, wherein the algorithm is an expectation-maximization (EM) algorithm.
8-14. (canceled)
15. A dictionary refinement system, comprising:
an extractor configured to match a dictionary to a collection of text to obtain a set of extracted results, wherein the extracted results are labeled as correct results or incorrect results;
a processor configured to:
process the extracted results using an algorithm configured to set a score of the extractor above a score threshold computed as a harmonic mean of a recall of the extractor and a precision of each dictionary entry, wherein the recall comprises a fraction of true positives among a total number of expected occurrences, wherein the precision comprises a probability that an extracted entity is correct; and
output a set of candidate dictionary entries corresponding to a full set of dictionary entries, wherein the candidate dictionary entries are candidates to be removed from the dictionary based on the extracted results.
16. The system of claim 15, wherein the extractor is further configured to:
obtain extracted results from the extractor using a plurality of specialized dictionaries, wherein the extracted results corresponding to each dictionary are labeled as correct results or incorrect results.
17. The system of claim 15, wherein inputting a set of extracted results further comprises:
using the extractor comprising a set of predetermined rules and matching a plurality of dictionaries to the collection of text to determine the extracted results; and
labeling the correct results and the incorrect results based on a user input.
18. The system of claim 15, wherein processing the extracted results further comprises:
computing the set of candidate dictionary entries that set the score above the score threshold, wherein the set of candidate dictionary entries comprises a maximum size constraint.
19. The system of claim 15, wherein processing the extracted results further comprises:
computing the set of candidate dictionary entries that set the score above the score threshold within an allocated recall constraint, wherein the recall constraint determines a minimum coverage of the dictionary.
20. The system of claim 15, wherein the extractor is further configured to:
estimate the precision of each dictionary entry in the full set of dictionary entries using the extracted results, wherein the algorithm is a statistical estimation algorithm.
US13/480,974 2012-05-25 2012-05-25 Dictionary refinement for information extraction Abandoned US20130318075A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/480,974 US20130318075A1 (en) 2012-05-25 2012-05-25 Dictionary refinement for information extraction
US13/598,946 US8775419B2 (en) 2012-05-25 2012-08-30 Refining a dictionary for information extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/480,974 US20130318075A1 (en) 2012-05-25 2012-05-25 Dictionary refinement for information extraction

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/598,946 Continuation US8775419B2 (en) 2012-05-25 2012-08-30 Refining a dictionary for information extraction

Publications (1)

Publication Number Publication Date
US20130318075A1 true US20130318075A1 (en) 2013-11-28

Family

ID=49622393

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/480,974 Abandoned US20130318075A1 (en) 2012-05-25 2012-05-25 Dictionary refinement for information extraction
US13/598,946 Expired - Fee Related US8775419B2 (en) 2012-05-25 2012-08-30 Refining a dictionary for information extraction

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/598,946 Expired - Fee Related US8775419B2 (en) 2012-05-25 2012-08-30 Refining a dictionary for information extraction

Country Status (1)

Country Link
US (2) US20130318075A1 (en)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8793199B2 (en) * 2012-02-29 2014-07-29 International Business Machines Corporation Extraction of information from clinical reports
US9047347B2 (en) 2013-06-10 2015-06-02 Sap Se System and method of merging text analysis results
US10373711B2 (en) 2014-06-04 2019-08-06 Nuance Communications, Inc. Medical coding system with CDI clarification request notification
US10366687B2 (en) 2015-12-10 2019-07-30 Nuance Communications, Inc. System and methods for adapting neural network acoustic models
US10878190B2 (en) 2016-04-26 2020-12-29 International Business Machines Corporation Structured dictionary population utilizing text analytics of unstructured language dictionary text
US10628522B2 (en) * 2016-06-27 2020-04-21 International Business Machines Corporation Creating rules and dictionaries in a cyclical pattern matching process
WO2018057639A1 (en) 2016-09-20 2018-03-29 Nuance Communications, Inc. Method and system for sequencing medical billing codes
US11133091B2 (en) 2017-07-21 2021-09-28 Nuance Communications, Inc. Automated analysis system and method
US11024424B2 (en) * 2017-10-27 2021-06-01 Nuance Communications, Inc. Computer assisted coding systems and methods
US11379669B2 (en) * 2019-07-29 2022-07-05 International Business Machines Corporation Identifying ambiguity in semantic resources

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030191626A1 (en) * 2002-03-11 2003-10-09 Yaser Al-Onaizan Named entity translation
US6804665B2 (en) * 2001-04-18 2004-10-12 International Business Machines Corporation Method and apparatus for discovering knowledge gaps between problems and solutions in text databases
US20080091434A1 (en) * 2001-12-03 2008-04-17 Scientific Atlanta Building a Dictionary Based on Speech Signals that are Compressed
US8417709B2 (en) * 2010-05-27 2013-04-09 International Business Machines Corporation Automatic refinement of information extraction rules
US20130159318A1 (en) * 2011-12-16 2013-06-20 Microsoft Corporation Rule-Based Generation of Candidate String Transformations

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5995922A (en) 1996-05-02 1999-11-30 Microsoft Corporation Identifying information related to an input word in an electronic dictionary
AU2002307847A1 (en) 2001-02-22 2002-10-21 Volantia Holdings Limited System and method for extracting information
US7113950B2 (en) 2002-06-27 2006-09-26 Microsoft Corporation Automated error checking system and method
US7272560B2 (en) 2004-03-22 2007-09-18 Sony Corporation Methodology for performing a refinement procedure to implement a speech recognition dictionary
JP4322765B2 (en) 2004-09-28 2009-09-02 株式会社東芝 Knowledge dictionary refinement device
WO2009000103A1 (en) * 2007-06-25 2008-12-31 Google Inc. Word probability determination
US20090240501A1 (en) 2008-03-19 2009-09-24 Microsoft Corporation Automatically generating new words for letter-to-sound conversion
US8214346B2 (en) 2008-06-27 2012-07-03 Cbs Interactive Inc. Personalization engine for classifying unstructured documents

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6804665B2 (en) * 2001-04-18 2004-10-12 International Business Machines Corporation Method and apparatus for discovering knowledge gaps between problems and solutions in text databases
US20080091434A1 (en) * 2001-12-03 2008-04-17 Scientific Atlanta Building a Dictionary Based on Speech Signals that are Compressed
US20030191626A1 (en) * 2002-03-11 2003-10-09 Yaser Al-Onaizan Named entity translation
US7249013B2 (en) * 2002-03-11 2007-07-24 University Of Southern California Named entity translation
US20080114583A1 (en) * 2002-03-11 2008-05-15 University Of Southern California Named entity translation
US7580830B2 (en) * 2002-03-11 2009-08-25 University Of Southern California Named entity translation
US8417709B2 (en) * 2010-05-27 2013-04-09 International Business Machines Corporation Automatic refinement of information extraction rules
US20130159318A1 (en) * 2011-12-16 2013-06-20 Microsoft Corporation Rule-Based Generation of Candidate String Transformations

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573289A (en) * 2017-03-09 2018-09-25 株式会社东芝 Information processing unit, information processing method and recording medium
CN107656975A (en) * 2017-09-05 2018-02-02 华南师范大学 A kind of appraisal procedure of thematic map, system and device

Also Published As

Publication number Publication date
US8775419B2 (en) 2014-07-08
US20130318076A1 (en) 2013-11-28

Similar Documents

Publication Publication Date Title
US8775419B2 (en) Refining a dictionary for information extraction
CN110717039B (en) Text classification method and apparatus, electronic device, and computer-readable storage medium
AU2019261735B2 (en) System and method for recommending automation solutions for technology infrastructure issues
US11093854B2 (en) Emoji recommendation method and device thereof
US20200202171A1 (en) Systems and methods for rapidly building, managing, and sharing machine learning models
Zhang et al. Text chunking based on a generalization of Winnow.
US9070090B2 (en) Scalable string matching as a component for unsupervised learning in semantic meta-model development
US20120303661A1 (en) Systems and methods for information extraction using contextual pattern discovery
US8983826B2 (en) Method and system for extracting shadow entities from emails
Srikumar et al. On amortizing inference cost for structured prediction
CN113515629A (en) Document classification method and device, computer equipment and storage medium
Ambert et al. K-information gain scaled nearest neighbors: a novel approach to classifying protein-protein interaction-related documents
US20200117574A1 (en) Automatic bug verification
US10977290B2 (en) Transaction categorization system
CN102789473A (en) Identifier retrieval method and equipment
Vanamala et al. Topic modeling and classification of Common Vulnerabilities And Exposures database
US8650180B2 (en) Efficient optimization over uncertain data
WO2021042529A1 (en) Article abstract automatic generation method, device, and computer-readable storage medium
US20170140010A1 (en) Automatically Determining a Recommended Set of Actions from Operational Data
CN113011156A (en) Quality inspection method, device and medium for audit text and electronic equipment
US20210117448A1 (en) Iterative sampling based dataset clustering
Polpinij A method of non-bug report identification from bug report repository
US20230075290A1 (en) Method for linking a cve with at least one synthetic cpe
US20220188512A1 (en) Maintenance of a data glossary
US11210471B2 (en) Machine learning based quantification of performance impact of data veracity

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHITICARIU, LAURA;FELDMAN, VITALY;REISS, FREDERICK R;AND OTHERS;SIGNING DATES FROM 20120713 TO 20120716;REEL/FRAME:028732/0682

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE