CA2710882A1 - Managing an archive for approximate string matching

Info

Publication number
CA2710882A1
Authority
CA
Canada
Prior art keywords
string
strings
records
archive
representations
Prior art date
Legal status
Granted
Application number
CA2710882A
Other languages
French (fr)
Other versions
CA2710882C (en)
Inventor
Arlen Anderson
Current Assignee
Ab Initio Technology LLC
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Publication of CA2710882A1
Application granted
Publication of CA2710882C
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/903 Querying
    • G06F16/90335 Query processing
    • G06F16/90344 Query processing by using string matching techniques
    • G06F16/10 File systems; File servers
    • G06F16/11 File system administration, e.g. details of archiving or snapshots
    • G06F16/113 Details of archiving
    • G06F16/20 Information retrieval; Database structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2457 Query processing with adaptation to user needs
    • G06F16/30 Information retrieval; Database structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/3332 Query translation
    • G06F16/3338 Query expansion

Abstract

In one aspect, in general, a method is described for managing an archive for determining approximate matches associated with strings occurring in records. The method includes: processing records to determine a set of string representations that correspond to strings occurring in the records; generating, for each of at least some of the string representations in the set, a plurality of close representations that are each generated from at least some of the same characters in the string; and storing entries in the archive that each represent a potential approximate match between at least two strings based on their respective close representations.

Description

MANAGING AN ARCHIVE FOR APPROXIMATE STRING MATCHING
BACKGROUND
The invention relates to managing an archive for approximate string matching.
Various techniques for approximate string matching (also called "fuzzy" or "inexact" string matching or searching) are used for finding strings that match a given pattern string within some tolerance according to a string metric (also called a "similarity function"). The strings being searched may be substrings of a larger string called a "text"
or may be strings contained in records of a database, for example. One category of string metric is the "edit distance." An example of an edit distance is the Levenshtein distance, which counts the minimum number of edit operations (insertion, deletion, or substitution of a character) needed to convert one string into another. Approximate string matching includes on-line matching, in which the text to be searched cannot be processed (or "indexed") before the matching begins, and off-line matching, in which the text can be processed before the matching begins.
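
For illustration, the following is a minimal sketch of the Levenshtein distance described above, using the standard dynamic-programming formulation (the function name and test words are illustrative):

    def levenshtein(a: str, b: str) -> int:
        """Minimum number of single-character insertions, deletions, or
        substitutions needed to convert string a into string b."""
        # previous[j] holds the distance between the processed prefix of a and b[:j].
        previous = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            current = [i]
            for j, cb in enumerate(b, start=1):
                current.append(min(
                    previous[j] + 1,               # delete ca from a
                    current[j - 1] + 1,            # insert cb into a
                    previous[j - 1] + (ca != cb),  # substitute (or match)
                ))
            previous = current
        return previous[-1]

    assert levenshtein("MOORGATE", "MOOGRATE") == 2   # two substitutions (or delete + insert)
    assert levenshtein("MOORGATE", "MOORGRATE") == 1  # one insertion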

SUMMARY
In one aspect, in general, a method is described for managing an archive for determining approximate matches associated with strings occurring in records.
The method includes: processing records to determine a set of string representations that correspond to strings occurring in the records; generating, for each of at least some of the string representations in the set, a plurality of close representations that are each generated from at least some of the same characters in the string; and storing entries in an archive that each represent a potential approximate match between at least two strings based on their respective close representations.
Aspects can include one or more of the following features.
Each string representation comprises a string.
Each close representation consists of at least some of the same characters in the string.

Generating the plurality of close strings for a given string in the set includes generating close strings that each have a different character deleted from the given string.
Generating the plurality of close strings for a given string in the set includes generating close strings that each have a single character deleted from the given string.
Generating the plurality of close strings for a given string in the set includes generating close strings at least some of which have multiple characters deleted from the given string.
Generating close strings that each have a different character deleted from the given string includes generating close strings that each have a single character deleted from the given string if the given string is shorter than a predetermined length, and generating close strings at least some of which have multiple characters deleted from the given string if the given string is longer than the predetermined length.
The method further includes determining, for each of at least some of the string representations in the set, a frequency of occurrence of the corresponding string in the records.
The method further includes generating, for each of at least some of the string representations in the set, a significance value that represents a significance of the corresponding string based on a sum that includes the frequency of occurrence of the string and the frequencies of occurrence of at least some strings represented in the archive as a potential approximate match to the string.
The significance value is generated based on an inverse of the sum.
The method further includes determining whether different phrases that include multiple strings correspond to an approximate match by determining whether strings within the phrases correspond to an approximate match, wherein the strings within the phrases are selected based on their corresponding significance values.
The significance value of a string within a phrase is based on the sum, and is based on at least one of a length of the string, a position of the string in the phrase, a field of a record in which the string occurs, and a source of a record in which the field occurs.
The method further includes generating, for each of at least some of the entries in the archive, a score associated with the entry that quantifies a quality of the potential approximate match between at least two strings.
The method further includes determining whether strings associated with an entry correspond to an approximate match by comparing the score associated with the entry to a threshold.
The score is based on a correspondence between the respective close representations used to determine the potential approximate match between the at least two strings.
Processing the records to determine a set of string representations that correspond to strings occurring in the records includes modifying a string occurring in at least one record to generate a modified string to include in the set of string representations.
Modifying the string includes removing or replacing punctuation.
Modifying the string includes encoding the string into a different representation.
Modifying the string includes encoding the string into a numerical representation.
Encoding the string into a numerical representation includes mapping each character in the string to a prime number and representing the string as the product of the prime numbers mapped to the characters in the string.
The archive includes at least some entries that represent a potential approximate match between at least two strings based on input from a user.
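
As a hedged illustration of the numerical encoding described above, the sketch below maps the characters A to Z to the first 26 primes; the description does not specify a particular character-to-prime assignment, so the mapping is illustrative only. One consequence of a product encoding is that it is insensitive to character order.

    # Hypothetical assignment of the characters A..Z to the first 26 primes.
    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53,
              59, 61, 67, 71, 73, 79, 83, 89, 97, 101]
    CHAR_TO_PRIME = {chr(ord('A') + i): p for i, p in enumerate(PRIMES)}

    def prime_encode(word: str) -> int:
        """Encode a word as the product of the primes mapped to its characters."""
        product = 1
        for ch in word.upper():
            product *= CHAR_TO_PRIME[ch]
        return product

    # The product ignores character order, so transposed spellings encode identically.
    assert prime_encode("MOORGATE") == prime_encode("MOOGRATE")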
In another aspect, in general, a computer program, stored on a computer-readable medium, is described for managing an archive for determining approximate matches associated with strings occurring in records. The computer program includes instructions for causing a computer to: process records to determine a set of string representations that correspond to strings occurring in the records; generate, for each of at least some of the string representations in the set, a plurality of close representations that are each generated from at least some of the same characters in the string; and store entries in an archive that each represent a potential approximate match between at least two strings based on their respective close representations.
In another aspect, in general, a system is described for managing an archive for determining approximate matches associated with strings occurring in records.
The system includes: means for processing records to determine a set of string representations that correspond to strings occurring in the records; means for generating, for each of at least some of the string representations in the set, a plurality of close representations that are each generated from at least some of the same characters in the string;
and means for storing entries in an archive that each represent a potential approximate match between at least two strings based on their respective close representations.
In another aspect, in general, a system is described for managing an archive for determining approximate matches associated with strings occurring in records.
The system includes: a data source storing records; a computer system configured to process the records in the data source to determine a set of string representations that correspond to strings occurring in the records, and generate, for each of at least some of the string representations in the set, a plurality of close representations that are each generated from at least some of the same characters in the string; and a data store coupled to the computer system to store an archive including entries that each represent a potential approximate match between at least two strings based on their respective close representations.
Aspects can have one or more of the following advantages.
In typical database applications, given fields of different records match when their contents are identical. Operations like join and rollup typically group records into sets based on matching keys occurring in specified fields. In some applications, however, it is useful to be able to perform a join or rollup using approximate string matching to compare keys. Two records are said to be an approximate match if their corresponding key fields are sufficiently close under a predetermined criterion. For example, when the operation is being performed using more than one data source, for keys consisting of a word or phrase, the exact spelling of words in each source may not agree or one phrase may contain words not present in the other.
An archive is maintained to store close pairs of strings appearing within records of one or more data sources. These pairs and associated information such as scores provided by the archive increase the efficiency of join, rollup, and other operations that use approximate string matching. In some implementations, the archive is accessible from a component of a computation graph that performs the operations on the data from the data sources, as described in more detail below.

Other features and advantages of the invention will become apparent from the following description, and from the claims.

DESCRIPTION OF DRAWINGS
FIG. 1 is a block diagram of a system for executing graph-based computations.
FIG. 2 is a diagram of a computation graph.
FIG. 3 is a flowchart of a pre-processing procedure.
DESCRIPTION
1 System Overview
The techniques for approximate string matching (or "fuzzy matching") can be applied to a variety of types of systems, including different forms of database systems storing datasets. As used herein, a dataset includes any collection of data that enables portions of data to be organized as records having values for respective fields (also called "attributes" or "columns"). The database system and stored datasets can take any of a variety of forms, such as a sophisticated database management system or a file system storing simple flat files. One aspect of a database system is the type of record structure it uses for records within a dataset (which can include the field structure used for fields within each record). In some systems, the record structure of a dataset may simply define individual text documents as records, with the contents of each document representing values of one or more fields. In some systems, there is no requirement that all the records within a single dataset have the same structure (e.g., field structure).
Complex computations can often be expressed as a data flow through a directed graph, with components of the computation being associated with the vertices of the graph and data flows between the components corresponding to links (arcs, edges) of the graph. A system that implements such graph-based computations is described in U.S.
Patent 5,966,072, EXECUTING COMPUTATIONS EXPRESSED AS GRAPHS, incorporated herein by reference. One approach to executing a graph-based computation is to execute a number of processes, each associated with a different vertex of the graph, and to establish communication paths between the processes according to the links of the graph. For example, the communication paths can use TCP/IP or UNIX domain sockets, or use shared memory to pass data between the processes.
Referring to FIG. 1, a system 100 for executing graph-based computations includes a development environment 104 coupled to a data store 102 and a runtime environment 108 coupled to the data store 102. A developer 101 builds applications using the development environment 104. An application is associated with one or more computation graphs specified by data structures in the data store 102 which may be written to the data store as a result of the developer's use of the development environment 104. A data structure 103 for a computation graph 105 specifies, for example, the vertices (components or datasets) of a computation graph and links (representing flows of work elements) between the vertices. The data structures can also include various characteristics of the components, datasets, and flows of the computation graphs (also called "dataflow graphs").
The runtime environment 108 may be hosted on one or more general-purpose computers under the control of a suitable operating system, such as the UNIX
operating system. For example, the runtime environment 108 can include a multiple-node parallel computing environment including a configuration of computer systems using multiple central processing units (CPUs), either local (e.g., multiprocessor systems such as SMP computers), or locally distributed (e.g., multiple processors coupled as clusters or MPPs), or remote, or remotely distributed (e.g., multiple processors coupled via LAN or WAN networks), or any combination thereof.
The runtime environment 108 is configured to receive control inputs from the data store 102 and/or from a user 107 for executing and configuring computations.
The control inputs can include commands to process particular datasets using corresponding computation graphs, which are specified in the stored graph data structures.
The user 107 can interact with the runtime environment 108, for example, using a command line or graphical interface.
The runtime environment 108 includes a pre-execution module 110 and an execution module 112. The pre-execution module 110 performs any pre-processing procedures and prepares and maintains resources for executing computation graphs, such as a dictionary 111 and an archive 114 used for approximate string matching.
The dictionary 111 stores words and associated information about words appearing in a dataset. The archive 114 stores various results from pre-processing based on words, phrases, or records of the dataset. The dictionary 111 and archive 114 can be implemented in any of a variety of formats and can be organized as single collections of data or as multiple dictionaries and archives. The execution module 112 schedules and controls execution of the processes assigned to a computation graph for performing the computations of the components. The execution module 112 can interact with external computing resources coupled to the system 100 that are accessed during processing associated with the graph components, such as a data source 116 providing records from a database system.
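
One possible shape for entries in the dictionary 111 and the archive 114, as just described, is sketched below; the field names are illustrative rather than prescribed.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class DictionaryEntry:
        word: str
        frequency: int = 0                # count of occurrences in the source records
        renormalized_frequency: int = 0   # adjusted by counts of fuzzy-matched words
        positions: List[int] = field(default_factory=list)      # positions within phrases
        contexts: Dict[str, int] = field(default_factory=dict)  # e.g. {"address": 12}

    @dataclass
    class ArchiveEntry:
        word_pair: Tuple[str, str]
        fuzzy_match_score: int            # lower scores indicate closer matches
        user_modified: bool = False       # score overridden by a user
        user_inserted: bool = False       # pair added manually (e.g., synonyms)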
Referring to FIG. 2, a simple example of a computation graph 105 includes a rollup component 200 that performs a rollup operation, a first input dataset 202, a second input dataset 204, and an output dataset 206. The input datasets provide flows of work elements (e.g., database records) to the rollup component 200, and the output dataset 206 receives a flow of work elements (e.g., aggregated database records) generated by the rollup component 200. The input datasets 202 and 204 and the output dataset 206 represent stored data (e.g., database files) in a storage medium accessible to the runtime environment 108, such as the data source 116. The rollup component 200 compares key field values of the records received from the input datasets and generates aggregated records based on approximate matches between the key field values.
FIG. 3 shows a pre-processing procedure 300 performed by the pre-execution module 110 to prepare the dictionary 111 and archive 114 used by the computation graphs. The procedure 300 receives (302) configuration information from a user indicating which sources are to be processed and from which fields of those sources words are to be read. The indication of which fields to read may be implicit, according to a default setting, e.g., to read all fields. The procedure 300 compiles (304) the dictionary 111 by reading the selected fields of the records of the selected source(s) and storing the words that appear in the fields in the dictionary, updating statistics such as word frequency counts. The procedure 300 compiles (306) potential fuzzy matches to be stored in the archive 114 by generating (308) close words for the words in the dictionary and finding (310) potential fuzzy matches according to comparison of respective close words. Along with a pair of words that are a potential fuzzy match, a score is stored that can be used during graph execution to determine whether the potential fuzzy match is an actual fuzzy match. The procedure 300 updates (312) the dictionary 111 by computing significance scores for the words appearing in the dictionary 111 based on results stored in the archive 114. For example, for each word in dictionary 111, the procedure 300 renormalizes (314) the word frequency counts based on potential fuzzy matches, as described in more detail below. These renormalized word frequency counts can then be used to compute (316) a significance score that can be used when matching phrases and records of the source.
In some implementations, the pre-processing procedure 300 is repeated each time a new source is available, or new records are received into an existing source. The procedure can be invoked by a user, or automatically invoked at repeated intervals or in response to certain events.
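
A high-level sketch of the dictionary and archive compilation steps of procedure 300 follows, assuming records are represented as simple dictionaries keyed by field name; the close-word comparison used here anticipates the deletion join described in section 2.

    from collections import Counter
    from itertools import combinations

    def deletion_variants(word):
        """All strings obtained by deleting a single character from word."""
        return {word[:i] + word[i + 1:] for i in range(len(word))}

    def preprocess(records, selected_fields):
        # (304) compile word frequency counts over the selected fields
        counts = Counter(
            word
            for record in records
            for f in selected_fields
            for word in str(record.get(f, "")).upper().split()
        )
        # (306)-(310) potential fuzzy matches: word pairs sharing a deletion variant
        matches = set()
        for a, b in combinations(counts, 2):
            if (a in deletion_variants(b) or b in deletion_variants(a)
                    or deletion_variants(a) & deletion_variants(b)):
                matches.add((a, b))
        # Steps (312)-(316), renormalization and significance scoring, are sketched in section 2.5.
        return counts, matches

    counts, matches = preprocess(
        [{"name": "MIDLAND BANK"}, {"name": "MIDLAND BNAK"}], ["name"])
    print(counts, matches)   # BANK and BNAK are reported as a potential fuzzy match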

2 Fuzzy Matching
A challenge facing many businesses is to reconcile two (or more) datasets using fields, like "name" or "address," having equivalent values that may not be exactly the same. Reconciling the datasets may involve answering various questions about the data such as the following. Is a company name in one dataset present in another? If so, do they have the same address? If the company names from the two datasets are exactly the same, the corresponding address fields can be compared using a join operation, which finds all records having a matching key (here, the company name). But what if the names are not exactly the same? One name might end with the word COMPANY while another abbreviates it to CO and a third simply drops it altogether. Articles like OF
or THE may be in one name but not in another. Words might be misspelled (e.g., COMPNY for COMPANY). The name in one source might contain additional information like the personal name of a contact, or an account number.
There are no firm formatting rules for something like a business name, even within a single dataset, let alone among datasets of different provenance. The challenge is to find a match, generally a set of possible matches, even when the names are not identical. Such matching can be performed using approximate string matching, also known as "fuzzy" matching. The matches are fuzzy because they are tolerant of errors or discrepancies.
When an operation uses fuzzy matching, two words or phrases are said to match if they are found to be equivalent, though not necessarily identical, within some range of acceptable differences or discrepancies. For example, the exact spelling of the words may not agree or one phrase may contain words not present in the other. A
score can be used to quantify the quality of the agreement. With fuzzy matching it is possible to perform familiar operations such as comparison, join, rollup, and lookup when the keys that would correspond to a match in given fields of records are not necessarily identical but are equivalent.
To increase the speed of fuzzy matching, the pre-execution module 110 periodically processes datasets from data sources that have been identified as potentially accessible by computation graphs. The pre-execution module 110 reads data appearing within selected fields of records of the datasets. The user can select all available fields to be processed or a selected subset of the available fields. In some cases, the data in a field may correspond to a single word, and in some cases the data may correspond to a phrase containing multiple words. (As used herein, a "word" is any string comprising a sequence of characters from some character set, where multiple words in a phrase are separated by spaces or other delimiters such as commas.) Rules for decomposing a field into words are configurable and the rules can be specified when the field is initially selected. In some cases, a phrase with multiple words can be processed in a similar manner as a single word even though there are embedded spaces (e.g., city names or UK
postcodes) by decomposing the phrase into a "multi-word" that treats embedded spaces as characters, as described in more detail below. This allows concatenated and broken words to be identified under fuzzy matching. For example, "john allen" would match "johnallen" or even "johnal len".
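
A small illustration of this behavior under the single-deletion close-word comparison described in section 2.1 follows; the helper treats an embedded space as an ordinary character and is an assumption, not a prescribed implementation.

    def deletion_variants(word):
        """All strings formed by deleting one character; a space is just another character."""
        return {word[:i] + word[i + 1:] for i in range(len(word))}

    # Deleting the space from "john allen" yields "johnallen", and "john allen" and
    # "johnal len" share that same deletion variant, so the concatenated and broken
    # forms are reported as potential fuzzy matches.
    assert "johnallen" in deletion_variants("john allen")
    assert deletion_variants("john allen") & deletion_variants("johnal len")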
The pre-execution module 110 identifies a set of words occurring in the records and stores representations of the words (also called "word representations" or "string representations") in the dictionary 111. Statistics for each word can also be stored in the dictionary 111. For example, a frequency of occurrence of a word in the records of a given dataset, or of all the datasets, can be stored in the dictionary 111.
Information about the context of a word can be stored in the dictionary 111. For example, the context can include a field or group of fields in which the word occurred. Other information about a word, such as statistics on its position within a phrase, can also be stored in the dictionary 111. The representation of a word can be the word itself, or an encoded representation of the word that is in a different form such as a different character set or in a non-string representation (e.g., a number or alphanumeric encoding), as described in more detail below.
The processing of the pre-execution module 110 includes generating, for a given word, a set of "close words" (or "close representations"). The close words for a given word are stored in the dictionary 111 in association with the given word. The close words are used to generate potential fuzzy matches between pairs of words, as described in more detail below.
The results of the processing are stored in the archive 114 for use by graphs executed by the execution module 112. Some graphs may use the archive 114 in the process of determining whether given records should be processed (e.g., in a join or rollup component). Measuring the similarity of records is a data quality activity that arises in many contexts. For example, two sources may share a common key that is used in a join operation to bring records together from both sources. The fields in the records in which the key may occur need to be compared. Multiple words may be present in a field and often multiple fields are used to hold certain kinds of data, like a name or address. The placement of words among fields may not be consistent across sources, extra words may be present in one source or the other, words may be out of order, and there may be typographical errors. Various scoring functions can be used to compute a base score characterizing a quality of match between records, which is then weighted by penalties for various kinds of inconsistencies and errors. The weights associated with different kinds of errors are adjustable, and the contributions of each penalty to the score for any comparison are reportable. The scoring functions differ by how they compare words for being a fuzzy match and whether they weight words (statistically) in computing the score. Scores for records can involve weighted scoring across several separately scored individual or concatenated fields.
There are three levels of scoring in the matching process: words, phrases and records. Scoring of words is typically done during a pre-processing stage and determines whether two words are a potential fuzzy match or not based on a predetermined criterion and associates the potential fuzzy match with a "fuzzy match score." Scoring of phrases may take into account not only fuzzy matched words, but also the possibility that words are missing from one or both phrases, that words occur out of order or with extra words between them. The scoring algorithms are configurable so that the importance of different sources of discrepancy can be adjusted. The third level of scoring is of entire records. These are scored by combining the scores of different fields in a context-sensitive fashion. Missing or inconsistent information can be given appropriate weight.
The archive 114 stores the pairs of words found to be a potential fuzzy match along with the corresponding fuzzy match score. The set of potential fuzzy matches and their fuzzy match scores can be modified by users to replace the computed fuzzy match score with a different fuzzy match score and to add pairs of potentially fuzzy matching words not related by the predetermined criterion.
The archive 114 also stores a "significance score" that represents the relative importance of a word to a phrase containing the word for the purposes of phrase comparison. The significance score uses the inverse frequency of occurrence of a word in the dataset, but adjusts this value using a frequency of variants related by fuzzy match (determined using the score archive), and optionally uses additional information deduced from length of word, position in the phrase, source, and context (e.g., field where word occurs). Adjusting the significance score based on the context in which a word occurs may be useful, for example, because data may not be placed in the correct field and such errors may need to be identified. In some cases the strings occurring in fields, like an address field, may be received in an unstructured format, but may nevertheless contain structure. These strings can be parsed and individual elements identified.
Relative significance by context can be used to indicate whether a given word is more likely to appear in one context or another. For example, in an address field, LONDON may occur as a city rather than as part of a street name. Coordination of contextual information assists parsing: if LONDON is followed by another city name, it is likely to be part of a street name; if it immediately precedes the zip code, it is likely the city.
In some implementations, the archive 114 stores a "phrase comparison score"
that measures the fuzzy match quality between two phrases. The pre-execution module 110 overlaps phrases to find lists of shared and unused words and uses their relative significance, alignment, and word order to compute the phrase comparison score.
The information stored in the archive 114 can be further processed in a variety of ways after it is generated. For example, within the score archive, probable false positives can be identified using the results of self-scoring of gold reference data and possibly ruled out by n-gram analysis applied during the initial scoring. Other scores can also be included. For example, words or phrases from multiple fields can be scored independently and their scores combined to score a full record.
The archive 114 is accessed by different kinds of operations that use a fuzzy match. Some operations include a "normal version" that uses an exact match, and a "fuzzy version" that uses a fuzzy match. A fuzzy version of a rollup (or aggregation) operation associates similar records into groups. This is useful for consolidating records where variant forms of the keys identifying a unique entity are present in the raw data.
Simply defining the groups of records associated with the unique entities may be a significant result. Once the groups have been defined, typical aggregation activities are then supported. Because fuzzy match is a scored equivalence relation, association to a group will also be scored. This has the effect that, contrary to the familiar case, a given entity will not necessarily be a member of a unique group. Aggregations (like numeric totals) over the members of a group will not then be exact but will only be known within error bounds reflecting the uncertainty of group membership. Aggregation across all groups will however remain exact because membership in the whole is certain.
Components can also use the score archive to identify misspellings and other errors in individual words. The identified errors could be confirmed by a user (e.g., from a list produced from the archive), and a data correction component could correct the errors in the data. This error correction capability based on fuzzy matches can be an extension to data profiler capabilities, as described in more detail in U.S.
Patent Application No. 10/941,402 entitled "Data Profiling," incorporated herein by reference.
2.1 Matching Criterion
Any of a variety of criteria can be used to measure the quality of a match.
Consider two words, MOORGATE and MOOGRATE. One type of criterion is a distance metric. There are a number of different distance metrics, or ways to measure a distance, between two words. One of the simplest is the Hamming distance, which counts the number of positions for which the corresponding characters are different.
Since this number depends on alignment, the alignment for which the Hamming distance is minimum is used. This corresponds to the minimum number of substitutions required to convert one word to the other, as shown in the following example.
M O O R G A T E
M O O G R A T E

In this example, there are two positions for which corresponding characters differ, requiring two substitutions to convert one word to the other.
Since alignment is important in computing the Hamming distance, an insertion that "breaks" the alignment can produce a large distance, as in the following example.
M O O R G A T E
M O O R G R A T E

In this example, a single insertion in the original word produced a Hamming distance of four. For some applications, the Hamming distance overstates the significance of an insertion.
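
The alignment-minimized Hamming distance discussed above can be sketched as follows; the treatment of overhanging characters (counting each as a mismatch) is an assumption made for illustration.

    def hamming_min_alignment(a: str, b: str) -> int:
        """Count mismatching characters in the best overlap of the two words,
        plus any characters of the longer word left outside the overlap."""
        if len(a) > len(b):
            a, b = b, a
        overhang = len(b) - len(a)
        best = None
        for offset in range(overhang + 1):
            mismatches = sum(ca != cb for ca, cb in zip(a, b[offset:offset + len(a)]))
            distance = mismatches + overhang
            best = distance if best is None else min(best, distance)
        return best

    assert hamming_min_alignment("MOORGATE", "MOOGRATE") == 2   # two differing positions
    assert hamming_min_alignment("MOORGATE", "MOORGRATE") == 4  # one insertion breaks the alignment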
An alternative to the Hamming distance is the "edit distance." The edit distance is a count of the minimum number of edits (where an "edit" is one of insertion, deletion, or substitution) needed to convert one word into the other, preserving alignment as much as possible. The pair of words MOORGATE and MOOGRATE has an edit distance of 2 because it requires one insertion and one deletion (or two substitutions) to convert one word into the other. The pair of words MOORGATE and MOORGRATE has an edit distance of 1 because it requires one insertion to convert one word into the other.
There are various algorithms for computing the edit distance. By assigning different weights to insertion, deletion and substitution (and even transposition), the edit distance score can be made to reflect the importance of the different kinds of errors. A
complication with the edit distance is that it is generally more expensive to compute than the Hamming distance.
A "close word comparison" is an alternative to the edit distance described above, and is fast to compute. Since the frequency of typographical and transcription errors is relatively low in real data, finding more than one error in a single word is rare. (Of course, more systematic errors such as replacing one syllable with a phonetically similar one can involve multiple letters, but these can be treated as non-matching words.) Instead of computing an edit distance between two given words, the pre-execution module 110 uses a "deletion join" procedure to implement the close word comparison to determine whether or not the given words are a potential fuzzy match. The deletion join procedure starts by constructing all variant words obtained by deleting a single character from each word to obtain a set of close words called "deletion variants" for each word.
The deletion join procedure then compares the two sets of deletion variants to find whether any of the deletion variants match. If they do, the original words are designated as potential fuzzy match. The deletion join procedure can be characterized as finding words related by a limited set of equivalent changes involving a single deletion, a single insertion, or a deletion followed by an insertion (which covers both substitution and transposition). Generation of deletion variants and the deletion join procedure are described in more detail in examples below. Other forms of close word comparison are possible. For example, each word can have a set of close words that are "close"
according to a different set of equivalent changes.
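
The deletion join procedure just described can be sketched as follows, representing each deletion set as (original word, deleted position, variant) tuples; the representation and names are illustrative.

    from collections import defaultdict
    from itertools import combinations

    def deletion_set(word):
        """The word itself (deletion position 0) plus every single-character deletion variant."""
        return [(word, 0, word)] + [
            (word, i + 1, word[:i] + word[i + 1:]) for i in range(len(word))]

    def deletion_join(words):
        """Pair up words whose deletion sets share a variant; exact matches are ignored."""
        by_variant = defaultdict(list)
        for word in words:
            for original, pos, variant in deletion_set(word):
                by_variant[variant].append(original)
        pairs = set()
        for originals in by_variant.values():
            for w1, w2 in combinations(originals, 2):
                if w1 != w2:    # identical words are exact matches, not fuzzy matches
                    pairs.add(frozenset((w1, w2)))
        return pairs

    # LONDON~LODON (deletion), LONDON~LOMDON (substitution), LONDON~LODNON (transposition), ...
    print(deletion_join(["LONDON", "LODON", "LOMDON", "LODNON"]))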
The deletion join procedure can also use deletion variants in which multiple characters are deleted, but the number of false positives generally rises with the number of characters deleted. A false positive is when one naturally occurring ("real") word matches another naturally occurring word. For example, CLARKE and CLAIRE are related by a deletion followed by an insertion. Indeed an error could transcribe one word into the other, but because both words are real, the likelihood is that each word occurs naturally without error. Without information about which words are naturally occurring, a given fuzzy matching algorithm cannot tell directly which words are real (though there is some possibility of inferring it, as described below), so false positives can be hard for the algorithm to detect. The situation is subtle because some changes may be errors while others may not. One approach for dealing with false positives is to attach to each word a further measure, called "significance score," which initially is based on factors like the frequency with which the word appears in the data (the inverse of the word frequency count or renormalized word frequency count). For example, if one naturally occurring word is easy to mistake for another, the significance of both is reduced, reflecting loss of confidence in their distinguishability.
A potential problem with false positives is that they can lead to records that should be distinguished being reported as possible matches. One advantage of using single deletions in the deletion variants is that as the length of words increases (e.g., beyond five or so characters), the number of false positives falls rapidly as there are typically fewer long words appearing in the records, and the long words are typically more widely separated. In some implementations, the number of deletions used to generate deletion variants can depend on the length of the word. For example, for words with no more than five characters a single character is deleted, and for words with more than five characters up to two characters are deleted (e.g., the deletion variants include all words with a single character deleted and all words with two characters deleted).
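
One possible reading of this length-dependent rule is sketched below, with five characters as the cutoff given in the example above; the exact cutoff and implementation are configurable.

    from itertools import combinations

    def deletion_variants(word: str, max_deletions: int) -> set:
        """All variants of word with up to max_deletions characters removed."""
        variants = set()
        for k in range(1, max_deletions + 1):
            for idx in combinations(range(len(word)), k):
                variants.add("".join(c for i, c in enumerate(word) if i not in idx))
        return variants

    def close_words(word: str) -> set:
        """Single deletions for short words, up to two deletions for longer words."""
        return deletion_variants(word, 1 if len(word) <= 5 else 2)

    assert close_words("MEET") == {"EET", "MET", "MEE"}   # short word: single deletions only
    assert "LOON" in close_words("LONDON")                # longer word: two deletions allowed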
The pre-execution module 110 can include features to reduce the overall computation time of large numbers of evaluations and to speed up ad hoc evaluations.
The words appearing in the records and their deletion variants are stored in the dictionary 111 and the results of the deletion comparisons of the deletion join procedure are pre-computed and stored in the archive 114, where they are available without further evaluation being required. This leverages a simple observation about name/address data:
the number of words in data based on natural languages is comparatively small (e.g., <<
1 million) relative to the number of records which may eventually be processed. Rather than repeat the same relatively expensive computation every time it occurs, the archive 114 stores the result (close pairs and an associated fuzzy match score) for reuse. Because the number of possible words is relatively small, and only observed variants are stored, the volume of the archive 114 is manageable.
The archive 114 has further benefits. A scoring algorithm for phrases will report a fuzzy match between words if their fuzzy match score in the archive 114 is less than a predetermined threshold (assuming a lower score indicates a higher quality match). By enabling a user to manually adjust the fuzzy match scores of word pairs in the archive 114, undesirable matches (e.g., false positives) can be turned off.
Furthermore, words that would not match under the close word comparison can be added as matches in the archive 114 by adding the pair with the appropriate fuzzy match score. For example, adding the pair HSBC MIDLAND creates a fuzzy match between "HSBC BANK" and "MIDLAND BANK". This provides a method for applying synonyms based on business meaning (here, MIDLAND is a former name for HSBC). Multiword associations (like IBM for INTERNATIONAL BUSINESS MACHINES) can also be added as fuzzy matches in the archive by manually adding the entry and setting the fuzzy match score to indicate a fuzzy match. In some implementations, multiword associations are stored and/or processed separately from associations between single words. For example, in some cases, multiword associations may be used during standardization and not necessarily during phrase comparison scoring.
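
The way a component might consult the archive 114 can be sketched as follows: a stored pair is treated as a fuzzy match when its score is below a threshold, a user can raise a score to turn off a false positive, and a user can insert a pair such as a synonym. The class and threshold value are illustrative.

    class FuzzyMatchArchive:
        def __init__(self, threshold: int = 3):
            self.threshold = threshold
            self.scores = {}   # frozenset({word1, word2}) -> fuzzy match score

        def store(self, w1: str, w2: str, score: int) -> None:
            self.scores[frozenset((w1, w2))] = score

        def is_fuzzy_match(self, w1: str, w2: str) -> bool:
            """Lower scores indicate closer matches; unknown pairs are not matches."""
            score = self.scores.get(frozenset((w1, w2)))
            return score is not None and score < self.threshold

    archive = FuzzyMatchArchive()
    archive.store("LONDON", "LODON", 1)      # produced by the deletion join
    archive.store("CLARKE", "CLAIRE", 99)    # user raises the score to turn off a false positive
    archive.store("HSBC", "MIDLAND", 0)      # user-inserted synonym pair
    assert archive.is_fuzzy_match("LONDON", "LODON")
    assert not archive.is_fuzzy_match("CLARKE", "CLAIRE")
    assert archive.is_fuzzy_match("MIDLAND", "HSBC")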
The matching process for words, phrases, and records generally includes identifying candidates as a potential fuzzy match and judging the quality of a match between candidates to determine actual fuzzy matches.
Matching records may involve matching words from corresponding fields, or in some cases, matching phrases of words from one or more fields. For example, since the number of words present in records from different sources when matching names or addresses needn't be the same, fuzzy matching in this context can be performed by "phrase matching." The pre-execution module 110 could choose to treat a space as an additional character, so phrases simply become longer words. When this technique is used, the pre-execution module 110 may perform additional processing to handle optional words to account for the fact that regions within the longer word with spaces treated as characters might change.

2.2 Standardization and Parsing
In some implementations of fuzzy matching, phrases are standardized before comparison. This reduces variability in predictable ways, for example, by dropping common words, like OF or THE, or by replacing common abbreviations with a full word, such as replacing CO with COMPANY. This standardization can increase the strength of matches and improve performance of the matching process in some approaches.
The difficulty is that some information may be lost during the standardization, or a false identification may be introduced: CORP is likely to be an abbreviation of CORPORATION, but it could be a misspelling of CORPS.
Standardization is configurable so that in some cases standardization is not used or is held to a minimum and other mechanisms are used to handle the issues of common words and replacement of abbreviations. In some implementations, all phrases are standardized to uppercase or lowercase (while preserving character variations like accents) because case is typically inconsistent between different sources and is largely a spurious distinction. The intent is to preserve the integrity of the original data and to leave any compensation for systematic variation within a field, like abbreviations or synonyms, for later steps in the processing. The reason for this is that while a word like ST in an address field might often mean STREET, it does not always, so it is potentially troublesome to change it too soon.
Provision can be made for special treatment of punctuation characters since these are often inconsistent between sources. For example, many punctuation characters are either optional or indicate formatting in an otherwise unstructured field.
Three examples of types of processing of punctuation are: replace with empty, replace with space, and replace with line break. Other forms of standardization can include replacing certain predetermined words or phrases with other words or phrases, such as replacing a word with a synonym. Users can control the replacement of punctuation and words using rules in configurable lookup files.
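
A minimal sketch of configurable standardization follows; the particular rule tables are illustrative and would normally be supplied by users in lookup files.

    # Illustrative rules: each punctuation character maps to empty or to a space
    # (a line-break replacement is not shown); word replacements are held to a minimum.
    PUNCT_RULES = {".": "", "#": "", ",": " "}
    WORD_RULES = {}

    def standardize(phrase: str) -> str:
        out = phrase.upper()                 # case is largely a spurious distinction
        for ch, repl in PUNCT_RULES.items():
            out = out.replace(ch, repl)
        return " ".join(WORD_RULES.get(w, w) for w in out.split())

    assert standardize("S.I.ID.") == "SIID"          # cf. the cleansed words in section 2.4
    assert standardize("Holdings###") == "HOLDINGS"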
Parsing assigns meaning to different words or portions of phrases from any number of fields. For example, parsing may be used when handling addresses which may appear coalesced into one or two fields in some sources while they are split across eight or ten fields in other sources. Parsing fields that are expected to include known elements can be facilitated using reference sources that provide auxiliary information for identifying and validating the elements. For example, parsing address fields can be handled using postal address files (PAF files) as reference sources to identify and validate individual elements of an address.
2.3 Word Frequency and Context
The pre-execution module 110 scans the records of a given source for words appearing in the records, and, in some cases, limits the scanning to selected fields of the records. The words that occur in selected fields of the records of the given source are stored in the dictionary 111. In some implementations, each entry in the dictionary stores a word, the word's frequency, position statistics for the word, and the word's context.
The frequency is a count of the number of times the word appears in the records of the source (e.g., a word may appear multiple times in a given record). The frequency can be an aggregate count over all fields or multiple counts each representing how many times the word appears in a given field. The frequency can also be renormalized as described in more detail below. If a word appears within a phrase of a given field, the position of the word in the phrase is computed for that phrase. The position statistics for a given word in the dictionary 111 include, for example, the mean and standard deviation of this position over all the phrases in which the word appeared.
A classifier called "context" is stored in the dictionary 111 to support logical groupings of fields. A user can specify contexts when the fields to be processed are selected. The use of contexts makes it possible to compare sources with dissimilar record structures, without requiring standardization to a common format. This improves the quality of comparisons between sources because source-specific information which can be deduced from the presence of words in particular fields is not lost through standardization to the common format. A given field may appear in multiple contexts, allowing both control of the granularity of comparisons and tolerance for ambiguous placement of data.
For example, an "address" context may be used to group fields that contain words that are part of an address. The field names associated with a context may differ between sources. For example, the fields address_line1, address_line2, address_line3 in one source might all be part of the address context while in another source the address context might include building_number, street, city, and postcode. Word frequency counts and significance score of a given word occurring in different fields can be aggregated to represent the frequency for the associated context.
The dictionary 111 is used for identifying the words in the archive 114 representing potential fuzzy matches and for identifying the significance of each word represented by a significance score. The dictionary 111 can be maintained using any of a variety of file formats that are efficiently accessible. For example, the dictionary 111 can be stored in an indexed compressed (concatenated) flat file format, such as the format described in U.S. Application Serial No. 11/555,458, incorporated herein by reference.
The dictionary 111 can be maintained in a single file, or in multiple files, either for different purposes, sources, field encodings or for accelerating access, for example.
Other information about a word can be included in the dictionary 111. For example, "importance position" is the position the word would have in the field if the words in the field were sorted in descending order by importance: this puts important words early in the phrase. For example, a phrase in the original order may be "Bank of America." The phrase with words sorted by importance may be "America Bank of."
The position of the word "Bank" in the original phrase is 1st (of 3). The importance position of the word "Bank" is 2nd (of 3).
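
Computing the importance position can be sketched as follows, under the assumption that importance is given by each word's significance score (the example values are illustrative):

    def importance_positions(phrase_words, significance):
        """Map each word to its 1-based position when the phrase is sorted by
        descending importance (here, descending significance score)."""
        ranked = sorted(phrase_words, key=lambda w: significance[w], reverse=True)
        return {w: ranked.index(w) + 1 for w in phrase_words}

    # Illustrative significance values: rarer words are more significant.
    significance = {"BANK": 0.02, "OF": 0.001, "AMERICA": 0.2}
    positions = importance_positions(["BANK", "OF", "AMERICA"], significance)
    assert positions["BANK"] == 2   # 1st in the original phrase, 2nd by importance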

2.4 Fuzzy Match Score
Potential fuzzy matches of words within and between sources are pre-computed and stored in the archive 114 along with a fuzzy match score for each potential match between a pair of words that characterizes the quality of the match. Since the number of distinct words in a source is typically much less than the number of all words in the source, this pre-computation step accelerates the later comparison and scoring of fields by eliminating redundant fuzzy comparison of words. Initially only words that constitute a potential fuzzy match (according to a predetermined criterion such as the close word comparison technique) are stored in the archive 114. Users can amend and extend the archive 114 by manually adjusting fuzzy match scores or by adding match pairs that are not identified in the initial population of fuzzy match scores for words in the archive 114 based on the predetermined criterion.
In some implementations, the archive 114 is populated with fuzzy match scores using the deletion join procedure. Rather than compute a full edit distance between each pair of words, which would be expensive computationally, only nearby words are compared in the deletion join procedure. This is achieved in the following way. For each word in the word dictionary 111 (or for a portion of the dictionary 111, e.g., for a given source, context and/or field), every variant formed by deleting a single character is made. A "deletion set" for a given original word contains a list of entries each having a key for the original word ("word key"), the original word ("original"), the deletion variant ("deletion_var"), and the position of the character ("deletionpos") that has been deleted from the original word. The deletion set can be stored in the dictionary 111 along with the original word, or can be discarded after being used by the pre-execution module 110 to generate the potential fuzzy matches that are stored in the archive 114. The original word is included in the deletion set along with the deletion variants and has a deleted character position of 0. For example, the following is a deletion set for the word LONDON:

word key    deletionpos    deletion_var    original
1           0              LONDON          LONDON
1           1              ONDON           LONDON
1           2              LNDON           LONDON
1           3              LODON           LONDON
1           4              LONON           LONDON
1           5              LONDN           LONDON
1           6              LONDO           LONDON

Note that the pair (word key, deletionpos) is a unique "key" identifying a given deletion variant.
This deletion join procedure can be extended to more deletions, recording the sequence of deletion positions, but since more than one deletion sequence can lead to the same word, the "key" for a given deletion variant is no longer unique. (There is however a canonical key determined by requiring the deletions be done in a particular order, say starting from the left in the original word, and always indicating the deletion position from the previous variant.) Thus, the deletion variant LOON generated by deleting two characters from the original word LONDON (both times in the third position) would have a deletion set entry:
word key    deletionpos    deletion_var    original
1           3,3            LOON            LONDON

The deletion join procedure determines potential fuzzy matches from words within one or more dictionaries by performing a join operation on the deletion_var word.
The quality of the fuzzy match is scored by comparing the positions of the deleted characters. In one example of a procedure for computing a fuzzy match score, points are assigned for different types of changes as follows: 1 point each for a deletion, 1 point for changing the first letter, 1 point for changing the last letter, 1 point if the characters deleted are separated by more than one position. The weight associated with each type of change is adjustable. If the deletion position of one word is 0 and the other is not, this is a single insertion or deletion. If the deletion position is the same, it is a substitution.
Matches having the same word key and deletionpos are ignored since these are exact matches. Matches that indicate a deletion of a paired letter are also ignored as being uninformative (e.g., MEET -> MET by deleting either character 2 or 3).
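
The scoring rules just listed can be sketched as a function over a pair of deletion-set entries that matched on the same deletion variant; the signature is illustrative and the point values follow the list above.

    def fuzzy_match_score(orig1: str, pos1: int, orig2: str, pos2: int) -> int:
        """pos1 and pos2 are the deleted-character positions (0 means nothing was deleted)."""
        score = 0
        score += 1 if pos1 else 0              # one point for each actual deletion
        score += 1 if pos2 else 0
        if pos1 and pos2 and abs(pos1 - pos2) > 1:
            score += 1                          # deleted characters more than one position apart
        if orig1[0] != orig2[0]:
            score += 1                          # first letter changed
        if orig1[-1] != orig2[-1]:
            score += 1                          # last letter changed
        return score

    # LONDON and LODON match on the variant LODON: one deletion, score 1.
    assert fuzzy_match_score("LONDON", 3, "LODON", 0) == 1
    # LONDON and ONDOON match on the variant ONDON: score 4, as in the example below.
    assert fuzzy_match_score("LONDON", 1, "ONDOON", 4) == 4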
The following is an example of a series of selected entries from respective deletion sets for the original words LONDON, LODON, LOMDON, and LODNON.
word key    deletionpos    deletion_var    original

In this example, some of the deletion variant entries have been suppressed because they do not lead to interesting matches. The join operation pairs a first entry with a second entry that has the same value of deletion_var. The resulting potential fuzzy matches between pairs of original words are:

First entry    Second entry    Potential fuzzy match

Respectively, the exemplary potential fuzzy matches above represent a word0-deletion, a substitution, a transposition, a transposition obtained by a different path, and a word0-insertion (or word1-deletion). Each pair of words in the archive 114 representing a potential fuzzy match has an associated fuzzy match score indicating a quality of the match.
Using the procedure described above, the fuzzy match scores for these pairs are as follows:

Potential fuzzy match    Fuzzy match score

As another example of scoring a pair identified as a potential fuzzy match, the words ONDOON and LONDON would have a fuzzy match score of 4 (1 for first letter, 1 for deleting L, 1 for deleting O, 1 for non-adjacent deletions).
The join operation can be performed on a dictionary 111 with words from any number of sources including a single source. In some cases, the dictionary 111 includes a section with words from a first source (source0) and a section with words from a second source (source1). The deletion join procedure can be performed using selected sources and the results can be stored in the archive 114 along with an indication of which sources were selected and other information such as in what order they were compared.
For example, each pair of words in the archive 114 representing a potential fuzzy match can also be associated with an entry_descriptor. The entry_descriptor can be implemented as a bit-mapped field which indicates the origin of the pair. One bit indicates that the pair was produced by a graph using this deletion join method. Other bits indicate whether the score has been modified by a user, whether the pair has been inserted by a user (e.g., to introduce synonyms), or whether the pair was generated by cleansing of embedded punctuation characters (called "cleansed").
Cleansing of punctuation characters (potentially including foreign-language diacriticals) arises because experience shows that multiple punctuation characters can occur embedded within a single word. Sometimes these are effectively optional, like periods in initials; sometimes they look like trailing garbage characters.
Since more than one character is often involved, such cleansings would not be picked up by the single-deletion method while multiple deletions would bring in too many false positives. The following are a couple of examples of cleansed words:
original word    cleansed word
S.I.ID.          SIID
HOLDINGS###      HOLDINGS
Words from a single source can be scored against each other. When there is more than one source, words from each source can be scored against those of other sources (e.g., independent of field or context). This identifies potential fuzzy matches that may not occur within the source itself. Since misspellings and typographical errors are relatively isolated events, they typically vary across sources.
The archive 114 can store potential fuzzy match results across all sources, for example, in an indexed compressed (concatenated) flat file format such as the format described in U.S. Application Serial No. 11/555,458, incorporated herein by reference.
The use of multiple archives 114, both in multifile format and for different purposes, is possible (e.g., for the multiword scores described below). As new sources are brought in, the words are scored against their own deletion sets added to the dictionary 111 and against existing deletion sets already in the dictionary 111. New potential fuzzy match pairs and scores can be concatenated to the end of the existing archive 114 in sorted order. A later merge process can reorganize the archive for performance.
In some implementations, if an entry in the archive 114 for a word pair is already present, the pair is discarded and not scored again. This may be done, for example, for performance and to allow users to modify the score produced by the deletion join procedure. Since the archive 114 is cumulative, the number of words that need to be scored declines over time as the body of observed pairs grows.
Modifying the score of an entry is useful to turn off false positives individually.
Components of computation graphs are able to use the scores in the archive 114 to determine whether a word pair is a fuzzy match based on whether their score in the score archive is below a given threshold. Raising the score for a given pair of words in the archive 114 above the threshold effectively turns the match off (indicating that the identified potential fuzzy match is not an actual fuzzy match). Judicious scoring of word pairs makes it possible to selectively turn on and off sets of words by adjusting the threshold. In some implementations, one or more thresholds are configured to depend on context, and the score for a given pair can also depend on context (e.g., using additional information stored in the archive 114).
Users can also add word pairs to the archive 114 that would not be identified in the deletion join procedure. This is useful for adding synonyms or abbreviations to the score archive. For example, STREET and ST would not be identified as a potential fuzzy match by the deletion join procedure, but this may be an identification that is desirable.
It is also possible to add nickname identifications like ROBERT and BOB (this is a natural example of a context-dependent score: these might be considered a match in a personal name context but not in other contexts).
The fuzzy match scores in the archive can be updated by feedback from the results of further processing (e.g., accepted matches between phrases in which those words appear).

2.5 Word frequency renormalization and significance score

After the archive 114 is populated with at least some word pairs, it can be used to "renormalize" the word frequency counts in the dictionary 111. The frequency of each word is adjusted by adding the counts of all words that are related to that word as a potential fuzzy match. The resulting renormalized frequency is used to compute the "significance score" of a word, which in turn will be used when matching phrases. The less frequent a word is in the data, the more significant it is in the sense that it is distinguishable from other words.
The difficulty with applying the frequency concept of significance to the raw word frequency counts is illustrated by misspelled words. Misspellings are rare and therefore are disproportionately significant. By adjusting their counts with the count of the more frequently occurring words they match with, their true relative significance is brought out. The high frequency matching words should not necessarily be thought of as "correctly spelled" because this implies a correctness to the matching word which may not apply. Not all low frequency words are misspelled and not all matching high frequency words are the corrected spelling even of misspelled words. For example, NORTE may be a misspelling of NORTH or it may simply be north in Spanish.
LABLE might be a misspelling of LABEL, but it could also be a misspelling of TABLE; both will occur as high frequency matches.
Significance carries a strong connotation of distinguishability. If a word matches with multiple high frequency words, as LABLE does, it is considered less significant because there is greater scope for it to be mistaken for another word.
In the archive 114, the renormalized word frequency count can be stored, along with other information such as a list of the words used to perform the renormalization (e.g., for diagnostic purposes). The following are examples of words that are potential fuzzy matches to the word AVENUE, along with an exemplary phrase showing the context in which the word appears, and a count of the number of times the word appears in an exemplary data source. In this exemplary data source, the word AVENUE itself occurs 10,500 times.

word       context                          count
AVENE      237 Park Avene                   1
AVNENUE    255 5th Avnenue SW               1
AVENUES    Philadelphia & Reading Avenues   11
AVENUS     1900 9th Avenus                  1
AVENIE     3401 Hillview Avenie             1
VENUE      725 North Mathilda Venue         1
ANENUE     3330 Evergreen Anenue            1
AVENUE.    5200 NW 33rd Avenue. Suite 215   1

The word AVENUE would show as having 16 potential fuzzy matches, and the count of 10,500 would be adjusted by the sum of the counts of those matched words.
Each of these matched words would have their counts adjusted by the 10500 associated with AVENUE plus any other words identified as a potential fuzzy match to that matched word. Typically, misspelled words link with fewer fuzzy matches than correctly spelled words.
The process by which a word's frequency count is renormalized is illustrated in the following example. In this example, a dictionary for source0 includes the words MÉXICO (original frequency count of 11) and MEXICO (original frequency count of 259), which appear in a field called "legal_address," and a dictionary for source1 includes the words MEXCIO (original frequency count of 2) and MEXICO (original frequency count of 311), which appear in a field called "taddress3" (note the accented E in source0).
In this example, the pairs of words stored in the archive 114 include potential fuzzy matches between source0 and source1 based on a join operation between the deletion sets of the dictionary for source0 and the deletion sets of the dictionary for source1. (A later example will augment the archive with the potential fuzzy matches from a self-join of the deletion sets for each source.) So the following two pairs of potential fuzzy matches occur in the archive 114:

MÉXICO MEXICO
MEXICO MEXCIO

An example of the renormalization process is as follows. The dictionaries for both source0 and source1 are processed, for example, starting with the word MÉXICO in source0. This word is looked up in the archive 114 to find the list of potential fuzzy matches occurring in source1. Each potential fuzzy match is then looked up in the original dictionary for source0. The resulting counts are added to the original count.

This process applied to the example above yields the following results for renormalization of the word frequency counts in the dictionary for source0:
first input word and original count from source0: MÉXICO 11
look up in archive, return {MEXICO}
look up each in source0 dictionary, add counts:
source0: MEXICO 259
found: {MEXICO}
renormalized count for MÉXICO = 11 + 259 = 270

second input word and original count from source0: MEXICO 259
look up in archive, return {MÉXICO, MEXCIO}
look up each in source0 dictionary, add counts:
source0: MÉXICO 11
source0: MEXCIO not found
found: {MÉXICO}
renormalized count for MEXICO = 259 + 11 = 270

Suppose there were an additional entry in the source0 dictionary for the word MEXICA with an original word frequency count of 5.
This word MEXICA does not have a potential fuzzy match to anything in source1, so it does not appear in the archive 114 and therefore will not participate in the renormalization for source0 (relative to source1). If, however, the archive 114 has been extended with the self-join on deletion sets for the source0 dictionary with this additional entry, there would be the following additional entry in the archive 114:

MEXICA MEXICO

The lookup on the first word MÉXICO would then add MEXICA to the set of found potential fuzzy matches. The renormalization for MÉXICO would then be performed as follows:

source0: MEXICO 259
source0: MEXICA 5
found: {MEXICA, MEXICO}
renormalized count for MÉXICO = 11 + 5 + 259 = 275

The renormalized word frequency count is now higher, reflecting the presence of the additional potential fuzzy match MEXICA. The significance score is then calculated from the renormalized word frequency counts, for example, as the log of the total number of non-empty records divided by the renormalized word frequency count. In this version of the significance score, the more frequently a word and its variants occur, the lower the significance score. A negative value indicates that a word occurs more often than once per record.
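The renormalization and the significance score can be sketched as follows; this is a minimal illustration whose data structures mirror the worked example above, and the use of the natural logarithm is an assumption since the description does not fix the base.

    import math

    def renormalize(dictionary, archive_matches):
        """Add to each word's count the counts of its potential fuzzy matches.

        dictionary:      word -> original frequency count for one source
        archive_matches: word -> words recorded in the archive as its potential
                         fuzzy matches (as in the worked example above)
        """
        renormalized = {}
        for word, count in dictionary.items():
            extra = sum(dictionary.get(m, 0) for m in archive_matches.get(word, ()))
            renormalized[word] = count + extra
        return renormalized

    def significance(renormalized_count, non_empty_records):
        """Log of total non-empty records over the renormalized count."""
        return math.log(non_empty_records / renormalized_count)

    source0 = {"MÉXICO": 11, "MEXICO": 259, "MEXICA": 5}
    matches = {"MÉXICO": {"MEXICO", "MEXICA"}, "MEXICO": {"MÉXICO", "MEXCIO"}}
    counts = renormalize(source0, matches)
    print(counts["MÉXICO"], counts["MEXICO"])  # 275 and 270, as in the example above
    print(significance(counts["MÉXICO"], non_empty_records=100000))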
The renormalized word frequency counts can be used to identify likely misspellings or conversely likely false positives. Simply put, misspellings are expected to be rare while false positives are not. A simple ratio test indicates which words have counts much less than the renormalized word frequency count. These are likely misspellings. Even higher confidence might be achievable if ngram frequencies were consulted. An ngram is an n-letter word fragment. The frequency distribution of ngrams across the full data indicates the frequency of occurrence of the different letter combinations. (This distribution is of course language-specific.) The idea is that during production of the archive, the location of the change between two words is known. The two-letter and three-letter (and higher) word fragments spanning the location of the change can be identified and their frequencies looked up. If the ngram frequencies associated with the change in one word are much lower than in the other, this indicates the former word is likely misspelled.
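A rough sketch of that ngram comparison follows; the window size, the frequency table, and the decision ratio are illustrative assumptions.

    def ngrams_spanning(word, position, n):
        """Return the n-grams of the word that span the given character position."""
        return [word[i:i + n] for i in range(max(0, position - n + 1),
                                             min(position, len(word) - n) + 1)]

    def likely_misspelled(word_a, word_b, change_pos, ngram_freq, n=3, ratio=0.1):
        """Guess which of two fuzzy-matched words is the misspelling.

        Compares the frequencies of the n-grams spanning the change location;
        if one word's n-grams are much rarer than the other's (by the given
        ratio), that word is flagged. Returns the flagged word or None.
        """
        freq_a = sum(ngram_freq.get(g, 0) for g in ngrams_spanning(word_a, change_pos, n))
        freq_b = sum(ngram_freq.get(g, 0) for g in ngrams_spanning(word_b, change_pos, n))
        if freq_b and freq_a < ratio * freq_b:
            return word_a
        if freq_a and freq_b < ratio * freq_a:
            return word_b
        return None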
On the other hand, multiple variants each with relatively high counts are likely to be naturally occurring variants-that is, false positives. From the standpoint of scoring, the existence of false positive matches reduces the significance of a word by adding relatively larger counts to it. In some cases, this may be desirable because it indicates the words could be mistaken. In other cases however, the words are so dissimilar looking that it is unlikely an error produced them-certainly not at the level of the relative counts.
Significance can be truly relative. When multiple sources are involved, some potential fuzzy match words may not occur in all sources. This means that adjusted counts can vary between sources. Similarly, the field or context associated with a word may be relevant. Which adjustments are made can be adapted to the pairing of sources, fields, and contexts, for example. In the relative case, only potential fuzzy matches that actually occur in the appropriate source/field/context when two sources are compared would be used to adjust the counts. It is expected the contribution from source-specific variants would generally be a small effect.

2.6 Encoding

The close words found in the deletion join procedure are based on the arrangement of characters in words essentially unaltered from their appearance in the original dataset. The close word comparison can also be carried out in a "space of words" that is altered by using "word encodings." The set of close words that are found may be different when word encodings are used. A word encoding maps words to a new representation. The mapping can be one-to-one, one-to-many, or many-to-one.
Some encodings may transform a word to a different character set, and some encodings may transform a word to a numerical representation. A word encoding modifies the space of words so that distances between words according to a given metric may change.
Words that may not have been close in terms of their natural character representations may be close after a word encoding is applied.
For example, in some implementations, the pre-execution module 110 performs a "prime encoding" in which each character in the character set is encoded as a prime number (e.g., with each letter in the alphabet being mapped to a different prime number, ignoring case) and the encoded word is the product of the prime numbers of the characters. Because multiplication is commutative (i.e., independent of the order of the factors), two encoded words are the same provided they consist of the same characters, irrespective of their order. This encoding is insensitive to transposition, or indeed to scrambling, and is an example of a many-to-one mapping.
For a given encoding, a variation of the deletion join procedure can perform deletion of a character before the encoding to generate close words, or can perform a close-word-operation after the encoding to generate close words. A variation of the deletion join procedure for the prime encoding can be performed in which the module 110 divides the encoded product by a prime number to delete a corresponding character in a deletion variant. For some encodings (like the prime encoding), the close-word-operation after encoding yields the same result as if character deletion had taken place before encoding, but for other encodings, the close-word-operation may yield a different result than if character deletion had taken place before encoding.
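A minimal sketch of the prime encoding and of deletion by division is shown below; the particular assignment of primes to letters is an illustrative assumption.

    # Map A..Z to the first 26 primes (an illustrative assignment).
    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53,
              59, 61, 67, 71, 73, 79, 83, 89, 97, 101]
    PRIME_OF = {chr(ord('A') + i): p for i, p in enumerate(PRIMES)}

    def prime_encode(word):
        """Encode a word as the product of the primes of its characters (case-insensitive)."""
        product = 1
        for ch in word.upper():
            product *= PRIME_OF[ch]
        return product

    def deletion_variants(encoded_word, word):
        """Delete one character from an encoded word by dividing out its prime."""
        return {encoded_word // PRIME_OF[ch] for ch in set(word.upper())}

    # Transposed words encode identically, since multiplication is commutative.
    assert prime_encode("LABEL") == prime_encode("LABLE")
    # Dividing by the prime for 'L' corresponds to deleting one 'L' before encoding.
    assert prime_encode("ABEL") in deletion_variants(prime_encode("LABEL"), "LABEL")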
For some languages, like Japanese, which use multiple alphabets or character sets (e.g., computer byte codes that identify characters), the encoding may include standardizing the choice of alphabet or character set before encoding.
2.7 Multiwords

A multiword is a phrase, containing embedded spaces, that is treated as a word. In previous examples, phrases have been parsed into words without embedded spaces before scoring. This overlooks two potential sources of errors: spaces can be inserted within a word, and the space between words can be dropped. Another example is handling synonyms relating a phrase to a single word, like an acronym.
To allow embedded spaces is to weaken the identification of space as a separator.
This is done by extending the parsing of a phrase to contain not only single words, but also all adjacent pairs and triples, etc., of words. The phrase is decomposed into all of its subphrases (multiwords) shorter than a specified length. All of the embedded spaces within a multiword (mword) are deleted to form the concatenated-word (cword).
This is the analog of a word formed by deletion. The cword becomes the key for the multiword dictionary and the multiword archive. When multiwords are compared, their cwords are matched and then the original mwords are scored. For now, the possibility of misspelled words within an mword is ignored. To treat this case, the archive is consulted when scoring the mword pairs.
As an example, consider a source with the following three names:
JOHN A SMITH
JO HNA SMITH
JOHNA SMITH
An mword decomposition of the first entry to length 3 would give the set of mwords: {JOHN, A, SMITH, JOHN A, A SMITH, JOHN A SMITH}.
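A brief sketch of the mword/cword construction described in this section; the maximum subphrase length and the return structure are illustrative choices.

    def mword_decomposition(phrase, max_len=3):
        """Return (mword, cword) pairs for all subphrases up to max_len words.

        The mword keeps its embedded spaces; the cword has them deleted and
        serves as the key for the multiword dictionary and archive.
        """
        words = phrase.split()
        pairs = []
        for length in range(1, max_len + 1):
            for start in range(len(words) - length + 1):
                mword = " ".join(words[start:start + length])
                pairs.append((mword, mword.replace(" ", "")))
        return pairs

    # The first entry yields the six mwords listed above; the cword JOHNASMITH
    # also arises from 'JO HNA SMITH' and 'JOHNA SMITH', so the three name
    # variants can be matched through their cwords.
    for mword, cword in mword_decomposition("JOHN A SMITH"):
        print(mword, "->", cword)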

3 Implementations

The approximate string matching approach described herein can be implemented using software for execution on a computer. For instance, the software forms procedures in one or more computer programs that execute on one or more programmed or programmable computer systems (which may be of various architectures such as distributed, client/server, or grid) each including at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device or port, and at least one output device or port. The software may form one or more modules of a larger program, for example, that provides other services related to the design and configuration of computation graphs. The nodes and elements of the graph can be implemented as data structures stored in a computer readable medium or other organized data conforming to a data model stored in a data repository.
The software may be provided on a storage medium, such as a CD-ROM, readable by a general or special purpose programmable computer or delivered (encoded in a propagated signal) over a communication medium of a network to the computer where it is executed. All of the functions may be performed on a special purpose computer, or using special-purpose hardware, such as coprocessors. The software may be implemented in a distributed manner in which different parts of the computation specified by the software are performed by different computers. Each such computer program is preferably stored on or downloaded to a storage media or device (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer system to perform the procedures described herein. The inventive system may also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer system to operate in a specific and predefined manner to perform the functions described herein.
A number of embodiments of the invention have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. For example, some of the steps described above may be order independent, and thus can be performed in an order different from that described.
It is to be understood that the foregoing description is intended to illustrate and not to limit the scope of the invention, which is defined by the scope of the appended claims. For example, a number of the function steps described above may be performed in a different order without substantially affecting overall processing. Other embodiments are within the scope of the following claims.

Claims (30)

1. (Original) A method for managing an archive for determining approximate matches associated with strings occurring in records, the method including:

processing records to determine a set of string representations that correspond to strings occurring in the records;

generating, for each of at least some of the string representations in the set, a plurality of close representations that are each generated from at least some of the same characters in the string; and storing entries in an archive that each represent a potential approximate match between at least two strings based on their respective close representations.
2. The method of claim 1, wherein each string representation comprises a string.
3. The method of claim 2, wherein each close representation consists of at least some of the same characters in the string.
4. The method of claim 3, wherein generating the plurality of close strings for a given string in the set includes generating close strings that each have a different character deleted from the given string.
5. The method of claim 4, wherein generating the plurality of close strings for a given string in the set includes generating close strings that each have a single character deleted from the given string.
6. The method of claim 5, wherein generating the plurality of close strings for a given string in the set includes generating close strings at least some of which have multiple characters deleted from the given string.
7. The method of claim 4, wherein generating close strings that each have a different character deleted from the given string includes generating close strings that each have a single character deleted from the given string if the given string is shorter than a predetermined length, and generating close strings at least some of which have multiple characters deleted from the given string if the given string is longer than the predetermined length.
8. The method of claim 1, further including determining, for each of at least some of the string representations in the set, a frequency of occurrence of the corresponding string in the records.
9. The method of claim 8, further including generating, for each of at least some of the string representations in the set, a significance value that represents a significance of the corresponding string based on a sum that includes the frequency of occurrence of the string and the frequencies of occurrence of at least some strings represented in the archive as a potential approximate match to the string.
10. The method of claim 9, wherein the significance value is generated based on an inverse of the sum.
11. The method of claim 9, further including determining whether different phrases that include multiple strings correspond to an approximate match by determining whether strings within the phrases correspond to an approximate match, wherein the strings within the phrases are selected based on their corresponding significance values.
12. The method of claim 11, wherein the significance value of a string within a phrase is based on the sum, and is based on at least one of a length of the string, a position of the string in the phrase, a field of a record in which the string occurs, and a source of a record in which the field occurs.
13. The method of claim 1, further including generating, for each of at least some of the entries in the archive, a score associated with the entry that quantifies a quality of the potential approximate match between at least two strings.

14. The method of claim 13, further including determining whether strings associated with an entry correspond to an approximate match by comparing the score associated with the entry to a threshold.
15. The method of claim 13, wherein the score is based on a correspondence between the respective close representations used to determine the potential approximate match between the at least two strings.
16. The method of claim 1, wherein processing the records to determine a set of string representations that correspond to strings occurring in the records includes modifying a string occurring in at least one record to generate a modified string to include in the set of string representations.
17. The method of claim 16, wherein modifying the string includes removing or replacing punctuation.
18. The method of claim 16, wherein modifying the string includes encoding the string into a different representation.
19. The method of claim 18, wherein modifying the string includes encoding the string into a numerical representation.
20. The method of claim 19, wherein encoding the string into a numerical representation includes mapping each character in the string to a prime number and representing the string as the product of the prime numbers mapped to the characters in the string.
21. The method of claim 1, wherein the archive includes at least some entries that represent a potential approximate match between at least two strings based on input from a user.
22. A computer program, stored on a computer-readable medium, for managing an archive for determining approximate matches associated with strings occurring in records, the computer program including instructions for causing a computer to:

process records to determine a set of string representations that correspond to strings occurring in the records;

generate, for each of at least some of the string representations in the set, a plurality of close representations that are each generated from at least some of the same characters in the string; and store entries in an archive that each represent a potential approximate match between at least two strings based on their respective close representations.
23. A system for managing an archive for determining approximate matches associated with strings occurring in records, the system including:

means for processing records to determine a set of string representations that correspond to strings occurring in the records;

means for generating, for each of at least some of the string representations in the set, a plurality of close representations that are each generated from at least some of the same characters in the string; and means for storing entries in an archive that each represent a potential approximate match between at least two strings based on their respective close representations.
24. A system for managing an archive for determining approximate matches associated with strings occurring in records, the system including:

a data source storing records;

a computer system configured to process the records in the data source to determine a set of string representations that correspond to strings occurring in the records, and generate, for each of at least some of the string representations in the set, a plurality of close representations that are each generated from at least some of the same characters in the string; and a data store coupled to the computer system to store an archive including entries that each represent a potential approximate match between at least two strings based on their respective close representations.
25. The method of claim 1 wherein each of the entries includes the strings between which there is a potential approximate match and a score that quantifies a quality of the potential approximate match between the strings.
26. The method of claim 1, further including generating, for each of at least some of the string representations in the set, a significance value that represents a significance of the corresponding string based on a frequency of occurrence of the corresponding string.
27. The method of claim 13, further including identifying potential approximate matches which are probable false positives using the entries in the archive.
28. The method of claim 27, wherein a probable false positive potential approximate match between a first string and a second string is identified based on a frequency of occurrence of the first string in the records and a frequency of occurrence of the second string in the records.
29. The method of claim 27, wherein a probable false positive potential approximate match is identified based on n-gram frequencies stored in the archive.
30. The method of claim 27, further including adjusting the score associated with the entry representing a potential approximate match in response to identification of the potential approximate match as a probable false positive.
CA2710882A 2008-01-16 2008-12-30 Managing an archive for approximate string matching Active CA2710882C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US12/015,085 US8775441B2 (en) 2008-01-16 2008-01-16 Managing an archive for approximate string matching
US12/015,085 2008-01-16
PCT/US2008/088530 WO2009091494A1 (en) 2008-01-16 2008-12-30 Managing an archive for approximate string matching

Publications (2)

Publication Number Publication Date
CA2710882A1 true CA2710882A1 (en) 2009-07-23
CA2710882C CA2710882C (en) 2017-01-17

Family

ID=40851547

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2710882A Active CA2710882C (en) 2008-01-16 2008-12-30 Managing an archive for approximate string matching

Country Status (8)

Country Link
US (2) US8775441B2 (en)
EP (1) EP2235621A4 (en)
JP (1) JP5603250B2 (en)
KR (1) KR101564385B1 (en)
CN (2) CN105373365B (en)
AU (1) AU2008348066B2 (en)
CA (1) CA2710882C (en)
WO (1) WO2009091494A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10489461B2 (en) 2014-08-20 2019-11-26 Oracle International Corporation Multidimensional spatial searching for identifying substantially similar data fields

Families Citing this family (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7877350B2 (en) * 2005-06-27 2011-01-25 Ab Initio Technology Llc Managing metadata for graph-based computations
JP5894724B2 (en) * 2006-08-10 2016-03-30 アビニシオ テクノロジー エルエルシー Distributed service of graph type calculation
KR101758670B1 (en) 2007-07-26 2017-07-18 아브 이니티오 테크놀로지 엘엘시 Transactional graph-based computation with error handling
US8775441B2 (en) 2008-01-16 2014-07-08 Ab Initio Technology Llc Managing an archive for approximate string matching
WO2009094649A1 (en) * 2008-01-24 2009-07-30 Sra International, Inc. System and method for variant string matching
US8095773B2 (en) 2008-02-26 2012-01-10 International Business Machines Corporation Dynamic address translation with translation exception qualifier
KR101491581B1 (en) * 2008-04-07 2015-02-24 삼성전자주식회사 Correction System for spelling error and method thereof
EP2342684A4 (en) 2008-10-23 2017-09-20 Ab Initio Technology LLC Fuzzy data operations
US9135396B1 (en) 2008-12-22 2015-09-15 Amazon Technologies, Inc. Method and system for determining sets of variant items
CN102317911B (en) 2009-02-13 2016-04-06 起元技术有限责任公司 Management role performs
US8856879B2 (en) 2009-05-14 2014-10-07 Microsoft Corporation Social authentication for account recovery
US9124431B2 (en) * 2009-05-14 2015-09-01 Microsoft Technology Licensing, Llc Evidence-based dynamic scoring to limit guesses in knowledge-based authentication
US8667329B2 (en) * 2009-09-25 2014-03-04 Ab Initio Technology Llc Processing transactions in graph-based applications
WO2011088195A1 (en) 2010-01-13 2011-07-21 Ab Initio Technology Llc Matching metadata sources using rules for characterizing matches
AU2011268459B2 (en) 2010-06-15 2014-09-18 Ab Initio Technology Llc Dynamically loading graph-based computations
US8798366B1 (en) 2010-12-28 2014-08-05 Amazon Technologies, Inc. Electronic book pagination
US9846688B1 (en) 2010-12-28 2017-12-19 Amazon Technologies, Inc. Book version mapping
US9069767B1 (en) 2010-12-28 2015-06-30 Amazon Technologies, Inc. Aligning content items to identify differences
KR101889120B1 (en) 2011-01-28 2018-08-16 아브 이니티오 테크놀로지 엘엘시 Generating data pattern information
US9881009B1 (en) 2011-03-15 2018-01-30 Amazon Technologies, Inc. Identifying book title sets
US9317544B2 (en) 2011-10-05 2016-04-19 Microsoft Corporation Integrated fuzzy joins in database management systems
EP2780836A1 (en) * 2011-11-15 2014-09-24 Ab Initio Technology LLC Data clustering based on variant token networks
US8788471B2 (en) 2012-05-30 2014-07-22 International Business Machines Corporation Matching transactions in multi-level records
US10108521B2 (en) 2012-11-16 2018-10-23 Ab Initio Technology Llc Dynamic component performance monitoring
US9507682B2 (en) 2012-11-16 2016-11-29 Ab Initio Technology Llc Dynamic graph performance monitoring
GB2508223A (en) 2012-11-26 2014-05-28 Ibm Estimating the size of a joined table in a database
GB2508603A (en) 2012-12-04 2014-06-11 Ibm Optimizing the order of execution of multiple join operations
US9274926B2 (en) 2013-01-03 2016-03-01 Ab Initio Technology Llc Configurable testing of computer programs
US9063944B2 (en) 2013-02-21 2015-06-23 International Business Machines Corporation Match window size for matching multi-level transactions between log files
US9317499B2 (en) * 2013-04-11 2016-04-19 International Business Machines Corporation Optimizing generation of a regular expression
US9146946B2 (en) * 2013-05-09 2015-09-29 International Business Machines Corporation Comparing database performance without benchmark workloads
CN104182383B (en) * 2013-05-27 2019-01-01 腾讯科技(深圳)有限公司 A kind of text statistical method and equipment
US20140350919A1 (en) * 2013-05-27 2014-11-27 Tencent Technology (Shenzhen) Company Limited Method and apparatus for word counting
US20150046152A1 (en) * 2013-08-08 2015-02-12 Quryon, Inc. Determining concept blocks based on context
US10043182B1 (en) * 2013-10-22 2018-08-07 Ondot System, Inc. System and method for using cardholder context and preferences in transaction authorization
CA3128713C (en) 2013-12-05 2022-06-21 Ab Initio Technology Llc Managing interfaces for dataflow graphs composed of sub-graphs
US10521441B2 (en) * 2014-01-02 2019-12-31 The George Washington University System and method for approximate searching very large data
MY173084A (en) * 2014-05-23 2019-12-25 Mimos Berhad Adaptive-window edit distance algorithm computation
US10764265B2 (en) * 2014-09-24 2020-09-01 Ent. Services Development Corporation Lp Assigning a document to partial membership in communities
US9805099B2 (en) * 2014-10-30 2017-10-31 The Johns Hopkins University Apparatus and method for efficient identification of code similarity
US9679024B2 (en) * 2014-12-01 2017-06-13 Facebook, Inc. Social-based spelling correction for online social networks
JP2015062146A (en) * 2015-01-05 2015-04-02 富士通株式会社 Information generation program, information generation apparatus, and information generation method
US9646061B2 (en) 2015-01-22 2017-05-09 International Business Machines Corporation Distributed fuzzy search and join with edit distance guarantees
US20170004120A1 (en) * 2015-06-30 2017-01-05 Facebook, Inc. Corrections for natural language processing
US9904672B2 (en) 2015-06-30 2018-02-27 Facebook, Inc. Machine-translation based corrections
US10657134B2 (en) 2015-08-05 2020-05-19 Ab Initio Technology Llc Selecting queries for execution on a stream of real-time data
US10140200B2 (en) * 2015-10-15 2018-11-27 King.Dom Ltd. Data analysis
IL242218B (en) * 2015-10-22 2020-11-30 Verint Systems Ltd System and method for maintaining a dynamic dictionary
CN105446957B (en) 2015-12-03 2018-07-20 小米科技有限责任公司 Similitude determines method, apparatus and terminal
EP3779674B1 (en) 2015-12-21 2023-02-01 AB Initio Technology LLC Sub-graph interface generation
WO2017197402A2 (en) * 2016-05-13 2017-11-16 Maana, Inc. Machine-assisted object matching
US11176180B1 (en) * 2016-08-09 2021-11-16 American Express Travel Related Services Company, Inc. Systems and methods for address matching
US10228955B2 (en) * 2016-09-29 2019-03-12 International Business Machines Corporation Running an application within an application execution environment and preparation of an application for the same
US10402489B2 (en) 2016-12-21 2019-09-03 Facebook, Inc. Transliteration of text entry across scripts
US10810380B2 (en) 2016-12-21 2020-10-20 Facebook, Inc. Transliteration using machine translation pipeline
US10394960B2 (en) 2016-12-21 2019-08-27 Facebook, Inc. Transliteration decoding using a tree structure
US10546062B2 (en) * 2017-11-15 2020-01-28 International Business Machines Corporation Phonetic patterns for fuzzy matching in natural language processing
US11294943B2 (en) 2017-12-08 2022-04-05 International Business Machines Corporation Distributed match and association of entity key-value attribute pairs
US11163952B2 (en) * 2018-07-11 2021-11-02 International Business Machines Corporation Linked data seeded multi-lingual lexicon extraction
US11693860B2 (en) 2019-01-31 2023-07-04 Optumsoft, Inc. Approximate matching
US11269905B2 (en) * 2019-06-20 2022-03-08 International Business Machines Corporation Interaction between visualizations and other data controls in an information system by matching attributes in different datasets
US11886794B2 (en) * 2020-10-23 2024-01-30 Saudi Arabian Oil Company Text scrambling/descrambling
US11556593B1 (en) 2021-07-14 2023-01-17 International Business Machines Corporation String similarity determination
US11615243B1 (en) * 2022-05-27 2023-03-28 Intuit Inc. Fuzzy string alignment
KR20240025272A (en) 2022-08-18 2024-02-27 한국전력공사 An approximate answer extraction System and Method based on unstructured data for natural language processing

Family Cites Families (90)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02129756A (en) 1988-11-10 1990-05-17 Nippon Telegr & Teleph Corp <Ntt> Word collating device
US5179643A (en) * 1988-12-23 1993-01-12 Hitachi, Ltd. Method of multi-dimensional analysis and display for a large volume of record information items and a system therefor
US5388259A (en) * 1992-05-15 1995-02-07 Bell Communications Research, Inc. System for accessing a database with an iterated fuzzy query notified by retrieval response
JPH0644309A (en) 1992-07-01 1994-02-18 Nec Corp Data base managing system
JPH0944518A (en) 1995-08-02 1997-02-14 Adoin Kenkyusho:Kk Method for structuring image data base, and method and device for retrieval from image data base
US5832182A (en) * 1996-04-24 1998-11-03 Wisconsin Alumni Research Foundation Method and system for data clustering for very large databases
JPH10275159A (en) 1997-03-31 1998-10-13 Nippon Telegr & Teleph Corp <Ntt> Information retrieval method and device therefor
JP3466054B2 (en) 1997-04-18 2003-11-10 富士通株式会社 Grouping and aggregation operation processing method
US6026398A (en) * 1997-10-16 2000-02-15 Imarket, Incorporated System and methods for searching and matching databases
JPH11184884A (en) 1997-12-24 1999-07-09 Ntt Data Corp System for judging the same person and method therefor
US6581058B1 (en) * 1998-05-22 2003-06-17 Microsoft Corporation Scalable system for clustering of large databases having mixed data attributes
US6285995B1 (en) 1998-06-22 2001-09-04 U.S. Philips Corporation Image retrieval system using a query image
US6742003B2 (en) * 2001-04-30 2004-05-25 Microsoft Corporation Apparatus and accompanying methods for visualizing clusters of data and hierarchical cluster classifications
JP2000029899A (en) 1998-07-14 2000-01-28 Hitachi Software Eng Co Ltd Matching method for building and map, and recording medium
US6493709B1 (en) * 1998-07-31 2002-12-10 The Regents Of The University Of California Method and apparatus for digitally shredding similar documents within large document sets in a data processing environment
US6658626B1 (en) * 1998-07-31 2003-12-02 The Regents Of The University Of California User interface for displaying document comparison information
US7356462B2 (en) * 2001-07-26 2008-04-08 At&T Corp. Automatic clustering of tokens from a corpus for grammar acquisition
US6317707B1 (en) * 1998-12-07 2001-11-13 At&T Corp. Automatic clustering of tokens from a corpus for grammar acquisition
US6456995B1 (en) * 1998-12-31 2002-09-24 International Business Machines Corporation System, method and computer program products for ordering objects corresponding to database operations that are performed on a relational database upon completion of a transaction by an object-oriented transaction system
AU780926B2 (en) * 1999-08-03 2005-04-28 Bally Technologies, Inc. Method and system for matching data sets
WO2001031479A1 (en) 1999-10-27 2001-05-03 Zapper Technologies Inc. Context-driven information retrieval
JP2001147930A (en) 1999-11-19 2001-05-29 Mitsubishi Electric Corp Character string comparing method and information retrieving device using character string comparison
US7328211B2 (en) * 2000-09-21 2008-02-05 Jpmorgan Chase Bank, N.A. System and methods for improved linguistic pattern matching
DE10048478C2 (en) * 2000-09-29 2003-05-28 Siemens Ag Method of accessing a storage unit when searching for substrings
US6931390B1 (en) * 2001-02-27 2005-08-16 Oracle International Corporation Method and mechanism for database partitioning
JP3605052B2 (en) 2001-06-20 2004-12-22 本田技研工業株式会社 Drawing management system with fuzzy search function
US20030033138A1 (en) * 2001-07-26 2003-02-13 Srinivas Bangalore Method for partitioning a data set into frequency vectors for clustering
US7043647B2 (en) 2001-09-28 2006-05-09 Hewlett-Packard Development Company, L.P. Intelligent power management for a rack of servers
US6570511B1 (en) * 2001-10-15 2003-05-27 Unisys Corporation Data compression method and apparatus implemented with limited length character tables and compact string code utilization
US7213025B2 (en) 2001-10-16 2007-05-01 Ncr Corporation Partitioned database system
US20030120630A1 (en) * 2001-12-20 2003-06-26 Daniel Tunkelang Method and system for similarity search and clustering
US7369984B2 (en) * 2002-02-01 2008-05-06 John Fairweather Platform-independent real-time interface translation by token mapping without modification of application code
WO2003067497A1 (en) 2002-02-04 2003-08-14 Cataphora, Inc A method and apparatus to visually present discussions for data mining purposes
AU2003243533A1 (en) * 2002-06-12 2003-12-31 Jena Jordahl Data storage, retrieval, manipulation and display tools enabling multiple hierarchical points of view
US6961721B2 (en) * 2002-06-28 2005-11-01 Microsoft Corporation Detecting duplicate records in database
US20050226511A1 (en) 2002-08-26 2005-10-13 Short Gordon K Apparatus and method for organizing and presenting content
US7043476B2 (en) * 2002-10-11 2006-05-09 International Business Machines Corporation Method and apparatus for data mining to discover associations and covariances associated with data
US7392247B2 (en) 2002-12-06 2008-06-24 International Business Machines Corporation Method and apparatus for fusing context data
US20040139072A1 (en) * 2003-01-13 2004-07-15 Broder Andrei Z. System and method for locating similar records in a database
US7912842B1 (en) 2003-02-04 2011-03-22 Lexisnexis Risk Data Management Inc. Method and system for processing and linking data records
US7287019B2 (en) * 2003-06-04 2007-10-23 Microsoft Corporation Duplicate data elimination system
US20050120011A1 (en) * 2003-11-26 2005-06-02 Word Data Corp. Code, method, and system for manipulating texts
US7493294B2 (en) * 2003-11-28 2009-02-17 Manyworlds Inc. Mutually adaptive systems
US7283999B1 (en) * 2003-12-19 2007-10-16 Ncr Corp. Similarity string filtering
US7472113B1 (en) * 2004-01-26 2008-12-30 Microsoft Corporation Query preprocessing and pipelining
GB0413743D0 (en) * 2004-06-19 2004-07-21 Ibm Method and system for approximate string matching
US7917480B2 (en) * 2004-08-13 2011-03-29 Google Inc. Document compression system and method for use with tokenspace repository
US8407239B2 (en) * 2004-08-13 2013-03-26 Google Inc. Multi-stage query processing system and method for use with tokenspace repository
US20080040342A1 (en) * 2004-09-07 2008-02-14 Hust Robert M Data processing apparatus and methods
US8725705B2 (en) * 2004-09-15 2014-05-13 International Business Machines Corporation Systems and methods for searching of storage data with reduced bandwidth requirements
US7523098B2 (en) * 2004-09-15 2009-04-21 International Business Machines Corporation Systems and methods for efficient data searching, storage and reduction
US7290084B2 (en) * 2004-11-02 2007-10-30 Integrated Device Technology, Inc. Fast collision detection for a hashed content addressable memory (CAM) using a random access memory
EP1866808A2 (en) 2005-03-19 2007-12-19 ActivePrime, Inc. Systems and methods for manipulation of inexact semi-structured data
US9110985B2 (en) * 2005-05-10 2015-08-18 Neetseer, Inc. Generating a conceptual association graph from large-scale loosely-grouped content
US7584205B2 (en) 2005-06-27 2009-09-01 Ab Initio Technology Llc Aggregating data with complex operations
US7658880B2 (en) * 2005-07-29 2010-02-09 Advanced Cardiovascular Systems, Inc. Polymeric stent polishing method and apparatus
US7672833B2 (en) * 2005-09-22 2010-03-02 Fair Isaac Corporation Method and apparatus for automatic entity disambiguation
US7890533B2 (en) * 2006-05-17 2011-02-15 Noblis, Inc. Method and system for information extraction and modeling
US8175875B1 (en) * 2006-05-19 2012-05-08 Google Inc. Efficient indexing of documents with similar content
US7634464B2 (en) 2006-06-14 2009-12-15 Microsoft Corporation Designing record matching queries utilizing examples
US20080140653A1 (en) * 2006-12-08 2008-06-12 Matzke Douglas J Identifying Relationships Among Database Records
US7739247B2 (en) * 2006-12-28 2010-06-15 Ebay Inc. Multi-pass data organization and automatic naming
EP2122506A4 (en) * 2007-01-10 2011-11-30 Sysomos Inc Method and system for information discovery and text analysis
US8694472B2 (en) 2007-03-14 2014-04-08 Ca, Inc. System and method for rebuilding indices for partitioned databases
US7711747B2 (en) * 2007-04-06 2010-05-04 Xerox Corporation Interactive cleaning for automatic document clustering and categorization
JP4203967B1 (en) * 2007-05-28 2009-01-07 パナソニック株式会社 Information search support method and information search support device
US7769778B2 (en) * 2007-06-29 2010-08-03 United States Postal Service Systems and methods for validating an address
US7788276B2 (en) * 2007-08-22 2010-08-31 Yahoo! Inc. Predictive stemming for web search with statistical machine translation models
US7925652B2 (en) * 2007-12-31 2011-04-12 Mastercard International Incorporated Methods and systems for implementing approximate string matching within a database
US8775441B2 (en) 2008-01-16 2014-07-08 Ab Initio Technology Llc Managing an archive for approximate string matching
US8032546B2 (en) * 2008-02-15 2011-10-04 Microsoft Corp. Transformation-based framework for record matching
US8266168B2 (en) * 2008-04-24 2012-09-11 Lexisnexis Risk & Information Analytics Group Inc. Database systems and methods for linking records and entity representations with sufficiently high confidence
US7958125B2 (en) * 2008-06-26 2011-06-07 Microsoft Corporation Clustering aggregator for RSS feeds
WO2010028437A1 (en) * 2008-09-10 2010-03-18 National Ict Australia Limited Identifying relationships between users of a communications domain
US8150169B2 (en) * 2008-09-16 2012-04-03 Viewdle Inc. System and method for object clustering and identification in video
EP2342684A4 (en) 2008-10-23 2017-09-20 Ab Initio Technology LLC Fuzzy data operations
US20100169311A1 (en) * 2008-12-30 2010-07-01 Ashwin Tengli Approaches for the unsupervised creation of structural templates for electronic documents
JP5173898B2 (en) 2009-03-11 2013-04-03 キヤノン株式会社 Image processing method, image processing apparatus, and program
US20100274770A1 (en) * 2009-04-24 2010-10-28 Yahoo! Inc. Transductive approach to category-specific record attribute extraction
US8161048B2 (en) * 2009-04-24 2012-04-17 At&T Intellectual Property I, L.P. Database analysis using clusters
US8195626B1 (en) 2009-06-18 2012-06-05 Amazon Technologies, Inc. Compressing token-based files for transfer and reconstruction
US20100333116A1 (en) * 2009-06-30 2010-12-30 Anand Prahlad Cloud gateway system for managing data storage to cloud storage sites
US8713018B2 (en) * 2009-07-28 2014-04-29 Fti Consulting, Inc. System and method for displaying relationships between electronically stored information to provide classification suggestions via inclusion
US8433715B1 (en) * 2009-12-16 2013-04-30 Board Of Regents, The University Of Texas System Method and system for text understanding in an ontology driven platform
US8375061B2 (en) * 2010-06-08 2013-02-12 International Business Machines Corporation Graphical models for representing text documents for computer analysis
US8346772B2 (en) * 2010-09-16 2013-01-01 International Business Machines Corporation Systems and methods for interactive clustering
US8463742B1 (en) 2010-09-17 2013-06-11 Permabit Technology Corp. Managing deduplication of stored data
US8606771B2 (en) * 2010-12-21 2013-12-10 Microsoft Corporation Efficient indexing of error tolerant set containment
US8612386B2 (en) * 2011-02-11 2013-12-17 Alcatel Lucent Method and apparatus for peer-to-peer database synchronization in dynamic networks
EP2780836A1 (en) 2011-11-15 2014-09-24 Ab Initio Technology LLC Data clustering based on variant token networks

Also Published As

Publication number Publication date
US20090182728A1 (en) 2009-07-16
CA2710882C (en) 2017-01-17
CN105373365A (en) 2016-03-02
WO2009091494A1 (en) 2009-07-23
KR20100116595A (en) 2010-11-01
CN101978348A (en) 2011-02-16
AU2008348066B2 (en) 2015-03-26
JP5603250B2 (en) 2014-10-08
KR101564385B1 (en) 2015-10-29
CN105373365B (en) 2019-02-05
US20150066862A1 (en) 2015-03-05
EP2235621A1 (en) 2010-10-06
EP2235621A4 (en) 2012-08-29
US8775441B2 (en) 2014-07-08
JP2011511341A (en) 2011-04-07
US9563721B2 (en) 2017-02-07
AU2008348066A1 (en) 2009-07-23
CN101978348B (en) 2015-11-25

Similar Documents

Publication Publication Date Title
CA2710882C (en) Managing an archive for approximate string matching
Boytsov Indexing methods for approximate dictionary searching: Comparative analysis
US6470347B1 (en) Method, system, program, and data structure for a dense array storing character strings
US8855998B2 (en) Parsing culturally diverse names
KR102031392B1 (en) Data clustering based on candidate queries
US6963871B1 (en) System and method for adaptive multi-cultural searching and matching of personal names
EP0277356B1 (en) Spelling error correcting system
US20040107205A1 (en) Boolean rule-based system for clustering similar records
US20040107189A1 (en) System for identifying similarities in record fields
JPH079655B2 (en) Spelling error detection and correction method and apparatus
JPH08241335A (en) Method and system for vague character string retrieval using fuzzy indeterminative finite automation
Rogers et al. Searching for historical word forms in text databases using spelling‐correction methods: Reverse error and phonetic coding methods
AU2015202043B2 (en) Managing an archive for approximate string matching
AT&T dbs-010.dvi
CN113032371A (en) Database grammar analysis method and device and computer equipment
CN117331963B (en) Data access processing method and device, electronic equipment and storage medium
US7676330B1 (en) Method for processing a particle using a sensor structure
Schay A generic framework for the matching of similar names
Luján-Mora et al. Reducing inconsistency in data warehouses
Jaisunder et al. Significance of character ‘H’in soundex patterns on Indian Names
US20080275842A1 (en) Method for processing counts when an end node is encountered
Taghva et al. Ocrspell: An interactive spelling correction system for ocr errors in text
Coles et al. Advanced Search Techniques
Weizenbaum Advanced Search Techniques
Cristani et al. Ontology-Driven Approximate Duplicate Elimination of Postal Addresses

Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20131216