WO2007041117A1 - Weighted linear model - Google Patents

Weighted linear model

Info

Publication number
WO2007041117A1
Authority
WO
WIPO (PCT)
Application number
PCT/US2006/037562
Other languages
French (fr)
Inventor
Robert C. Moore
Wen-Tau Yih
Galen Andrew
Kristina Toutanova
Original Assignee
Microsoft Corporation
Priority claimed from US11/242,290 external-priority patent/US7957953B2/en
Application filed by Microsoft Corporation filed Critical Microsoft Corporation
Publication of WO2007041117A1 publication Critical patent/WO2007041117A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/40: Processing or translation of natural language
    • G06F 40/42: Data-driven translation
    • G06F 40/45: Example-based machine translation; Alignment
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/40: Processing or translation of natural language
    • G06F 40/42: Data-driven translation
    • G06F 40/47: Machine-assisted translation, e.g. using translation memory

Definitions

  • Machine translation is a process by which a textual input in a first language is automatically translated, using a computerized machine translation system, into a textual output in a second language.
  • Some such systems operate using word based translation. In those systems, each word in the input text, in the first language, is translated into some number of corresponding words in the output text, in the second language.
  • Better performing systems, however, are referred to as phrase-based translation systems.
  • In order to train either of these two types of systems (and many other machine translation systems), current training systems often access a parallel bilingual corpus; that is, a text in one language and its translation into another language.
  • the training systems first align text fragments in the bilingual corpus such that a text fragment (e.g., a sentence) in the first language is aligned with a text fragment (e.g., a sentence) in the second language that is the translation of the text fragment in the first language.
  • This is referred to as a bilingual sentence-aligned data corpus.
  • In order to train the machine translation system, the training system must also know the individual word alignments within the aligned sentences. In other words, even though sentences have been identified as translations of one another in the bilingual, sentence-aligned corpus, the machine translation training system must also know which words in each sentence of the first language translate to which words in the aligned sentence in the second language.
  • HMM: Hidden Markov Model
  • Generative models require a generative "story" as to how the observed data is generated by an inter-related set of stochastic processes.
  • the generative story for models 1 and 2 mentioned above and the HMM alignment model is that a target language translation of a given source language sentence is generated by first choosing a length for the target language sentence, then for each target sentence position, choosing a source sentence word, and then choosing the corresponding target language word.
  • the present invention addresses one, some, or all of these problems.
  • these problems are not to be used to limit the scope of the invention in any way, and the invention can be used to address different problems, other than those mentioned, in machine translation.
  • a weighted linear word alignment model linearly combines weighted features to score a word alignment for a bilingual, aligned pair of text fragments.
  • the features are each weighted by a feature weight.
  • One of the features is a word association metric generated from surface statistics.
  • FIG. 1 is a block diagram of one exemplary environment in which the present invention can be practiced.
  • FIG. 2 is a block diagram of one embodiment of a word alignment system.
  • FIG. 3 is a flow diagram illustrating one embodiment of operation of the system shown in FIG. 2.
  • FIG. 4A is a flow diagram illustrating one embodiment for indexing association types.
  • FIG. 4B is a flow diagram illustrating one embodiment for generating a list of possible association types for a sentence pair.
  • FIG. 5A is a flow diagram illustrating how a best alignment is identified in more detail.
  • FIGS. 5B-1 to 5B-3 are flow diagrams illustrating one embodiment in which potential alignments are incrementally generated and pruned.
  • FIG. 5C is a flow diagram illustrating one embodiment for adding a new link to an existing alignment in a first model.
  • FIG. 5D is a flow diagram illustrating an embodiment of adding a new link to an existing alignment in a second model.
  • FIG. 6 is a flow diagram illustrating one embodiment for optimizing parameters for the model.
  • FIG. 7 is a block diagram of a classifier using conditional log odds.
  • the present invention deals with bilingual word alignment. However, before describing the present invention in greater detail, one illustrative environment in which the present invention can be used will be discussed.
  • FIG. 1 illustrates an example of a suitable computing system environment 100 on which embodiments may be implemented.
  • the computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100.
  • Embodiments are operational with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with various embodiments include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, telephony systems, distributed computing environments that include any of the above systems or devices, and the like.
  • Embodiments may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • Some embodiments are designed to be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules are located in both local and remote computer storage media including memory storage devices.
  • an exemplary system for implementing some embodiments includes a general-purpose computing device in the form of a computer 110.
  • Components of computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120.
  • The system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
  • Computer 110 typically includes a variety of computer readable media.
  • Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media.
  • Computer readable media may comprise computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
  • the system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132.
  • a basic input/output system 133 (BIOS) containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131.
  • RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120.
  • FIG. 1 illustrates operating system 134, application programs 135, other program modules 136, and program data 137.
  • the computer 110 may also include other removable/nonremovable volatile/nonvolatile computer storage media.
  • FIG. 1 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152, and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media.
  • Other removable/nonremovable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
  • the hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140, and magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150.
  • Hard disk drive 141 is illustrated as storing operating system 144, application programs 145, other program modules 146, and program data 147. Note that these components can either be the same as or different from operating system 134, application programs 135, other program modules 136, and program data 137. Operating system 144, application programs 145, other program modules 146, and program data 147 are given different numbers here to illustrate that, at a minimum, they are different copies.
  • a user may enter commands and information into the computer 110 through input devices such as a keyboard 162, a microphone 163, and a pointing device 161, such as a mouse, trackball or touch pad.
  • Other input devices may include a joystick, game pad, satellite dish, scanner, or the like.
  • These and other input devices are often connected to the processing unit 120 through a user input interface 160 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB) .
  • a monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190.
  • computers may also include other peripheral output devices such as speakers 197 and printer 196, which may be connected through an output peripheral interface 195.
  • the computer 110 is operated in a networked environment using logical connections to one or more remote computers, such as a remote computer 180.
  • the remote computer 180 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110.
  • the logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks.
  • Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
  • When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet.
  • the modem 172 which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism.
  • program modules depicted relative to the computer 110, or portions thereof may be stored in the remote memory storage device.
  • FIG. 1 illustrates remote application programs 185 as residing on remote computer 180. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • FIG. 2 is a block diagram of a word alignment system 200 in accordance with one embodiment of the invention.
  • Word alignment system 200 includes word alignment component 202 and word alignment model 204.
  • System 200 is also shown coupled to an optional bilingual corpus 206 and text fragment alignment component 208.
  • Index generator 210 is coupled to a store of word association types 212.
  • Bilingual corpus 206 illustratively includes bilingual data in which text in the first language is found, along with a translation of that text into a second language.
  • bilingual corpus 206 will illustratively include a relatively large amount of English language text along with a French translation of that text.
  • A relatively small amount of bilingual corpus 206 is word-aligned by a person fluent in both languages.
  • bilingual corpus 206 might consist of 500,000 pairs, each pair having an English sentence and its French translation, of which 200 to 300 pairs have been word-aligned by hand.
  • text fragment alignment component 208 first accesses bilingual corpus 206 to generate pairs of aligned text fragments from bilingual corpus 206.
  • the text fragments are sentences, although the text fragments could be other fragments such as clauses, etc.
  • Text fragment alignment component 208 thus outputs a first text fragment 214 in a first language E (such as English) and a second text fragment 216 in a second language F (such as French) which is the translation of the first text fragment 214.
  • the bilingual, aligned text fragments 214 and 216 are then input to word alignment component 202.
  • Either text fragment alignment component 208 or a different component illustratively calculates values of a statistical measure of the strength of word associations in the text-fragment-aligned data. These values are referred to as word association scores and are indicative of a strength of association between a bilingual pair of words, or a bilingual cluster of words. Each pair or cluster of words is referred to as a word association type and is shown in FIG. 2 as being stored, along with its word association score, in a word association type data store 212. These scores can be used to determine feature values in the word alignment model described below. Two different exemplary word association scores that can be used are discussed in detail below, and a number of others are also mentioned. In addition, one or more other features (in addition to those computed from the word association scores) are also defined for possible word alignments. This set of features 218 is used in word alignment model 204.
  • index generator 210 accesses all of the various word association types identified in the training data
  • index generator 210 generates an index 220 of word association types.
  • the index 220 and the word alignment model 204 are available to word alignment component 202 in order to generate a word alignment 222 for a pair of aligned text fragments 214 and 216.
  • FIG. 3 is a flow diagram illustrating the operation of system 200 in greater detail.
  • Word alignment component 202 first selects a sentence pair (or text fragment pair) . This is indicated by block 300 in FIG. 3.
  • Word alignment component 202 then accesses the set of association types through index 220. This is indicated by block 302.
  • Word alignment component 202 then generates a list of all possible association types that apply to the selected sentence pair. This is indicated by block 304 in FIG. 3.
  • Word alignment component 202 then sorts the list of possible association types based on their association scores. This is indicated by block 306 in FIG. 3. Finally, word alignment component 202 identifies the best alignment according to word alignment model 204, by accessing word alignment model 204, and employing the various features 218 in model 204. This is indicated by block 308 in FIG. 3.
  • Model 204 is generated based on discriminative training of a weighted linear combination of a relatively small number of features.
  • Model 204 simply multiplies the values of each of the features by a corresponding weight to give a score for that feature, and sums the feature scores to give an overall score for the alignment. The possible alignment having the best overall score is selected as the word alignment for that sentence pair.
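  • As a concrete illustration of this scoring scheme, the following sketch (Python, with made-up feature names and weight values, not taken from the patent) computes the weighted linear score for each candidate alignment and picks the best one.

```python
# Minimal sketch of weighted linear alignment scoring.
# Feature names and weights are illustrative placeholders, not values from the patent.

def score_alignment(feature_values, weights):
    """Overall score = sum over features of (feature value * feature weight)."""
    return sum(weights[name] * value for name, value in feature_values.items())

def best_alignment(candidates, weights):
    """candidates: list of (alignment, feature_values); returns the highest-scoring alignment."""
    return max(candidates, key=lambda c: score_alignment(c[1], weights))[0]

# Example usage with made-up numbers:
weights = {"word_association": 1.0, "non_monotonicity": -0.5, "unlinked_words": -0.3}
candidates = [
    ([(0, 0), (1, 1)], {"word_association": 95.2, "non_monotonicity": 0, "unlinked_words": 2}),
    ([(0, 1), (1, 0)], {"word_association": 60.7, "non_monotonicity": 1, "unlinked_words": 2}),
]
print(best_alignment(candidates, weights))  # -> [(0, 0), (1, 1)]
```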
  • the weights can be optimized using a modified version of averaged perceptron learning as described below with respect to FIG. 6. This is a relatively quick process.
  • no generative "story” needs to be invented to explain how the features generate the data, so new features can be easily added to the model without having to change the overall structure of the model.
  • word alignment model 204 can be any of a variety of different implementations incorporating a variety of different features.
  • word alignment model 204 incorporates a feature computed from the different word association scores, mentioned above, intended to indicate how likely various pairs of words or groups of words are to be mutual translations, plus additional features measuring how much word reordering is required by a given alignment, and how many words are left unlinked in that alignment.
  • embodiments of model 204 can also include a feature measuring how often one word is linked to several words in the alignment under analysis.
  • The term "alignment" is used to mean an overall word alignment of a sentence pair.
  • The term "link", on the other hand, is used to mean the alignment of a particular pair of words or small group of words.
  • model 204 uses a statistical measure of word association in order to perform bilingual word alignment.
  • word here and subsequently should be taken very broadly to include any relatively fixed sequence of characters (including a single character) for which a translation relationship can be meaningfully considered. For example, a single punctuation character such as a period or comma may be treated as a word.
  • In Chinese, for example, words are conventionally considered to include usually no more than one or two characters. For the purposes of bilingual word alignment, however, it has sometimes proved useful to treat each individual Chinese character as a single word.
  • LLR: log likelihood ratio
  • the log likelihood ratio statistic is a measure of the strength of association between a particular English word and a particular French word.
  • the log likelihood ratio is computed from bilingual, aligned sentences.
  • The LLR statistic takes into account how often an English word occurs in the English sentences, how often a French word occurs in the French sentences, and how often they occur together in an aligned sentence pair.
  • One way of calculating LLR scores for words in the training corpus is as follows (Eq. 2):

    $$\mathrm{LLR}(f,e) = \sum_{f' \in \{f,\neg f\}} \; \sum_{e' \in \{e,\neg e\}} C(f',e') \, \log \frac{p(f' \mid e')}{p(f')}$$
  • In Equation 2, f and e refer to the words (in French and in English, respectively) whose degree of association is being measured.
  • When f and e appear in the formula, they mean that those words occur in the respective target and source sentences of an aligned sentence pair; ¬f and ¬e mean that the corresponding words do not occur in the respective sentences; f' and e' are variables ranging over these values; and C(f', e') is the observed joint count for the values of f' and e'.
  • The probabilities in Equation 2, p(f' | e') and p(f'), illustratively refer to maximum likelihood estimates.
  • Any negatively associated word pairs are discarded by requiring that p(f,e) > p(f)·p(e). Also, any word pairs with an LLR score of less than 1 can be discarded as well.
  • the word association scores are used to compute word association features 230 used in model 204, and the value of the principal word association feature for an alignment is simply the sum of all the individual log-likelihood ratio scores for the word pairs linked by the alignment.
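  • The sketch below shows one way the LLR score of Equation 2 and the principal word association feature might be computed from sentence-pair co-occurrence counts; the counting interface is an illustrative assumption, not the patent's implementation.

```python
import math

def llr(c_e, c_f, c_ef, n):
    """Log-likelihood-ratio association score for an (e, f) word pair.

    c_e:  number of aligned sentence pairs whose English side contains e
    c_f:  number of aligned sentence pairs whose French side contains f
    c_ef: number of aligned sentence pairs containing both e and f
    n:    total number of aligned sentence pairs
    Implements LLR(f,e) = sum over f',e' of C(f',e') * log( p(f'|e') / p(f') ).
    """
    score = 0.0
    for e_present in (True, False):
        for f_present in (True, False):
            # Joint count C(f', e') for this cell of the contingency table.
            if e_present and f_present:
                c = c_ef
            elif e_present:
                c = c_e - c_ef
            elif f_present:
                c = c_f - c_ef
            else:
                c = n - c_e - c_f + c_ef
            ce = c_e if e_present else n - c_e
            cf = c_f if f_present else n - c_f
            if c == 0 or ce == 0:
                continue                      # zero cells contribute nothing
            p_f_given_e = c / ce              # maximum likelihood estimate of p(f'|e')
            p_f = cf / n                      # maximum likelihood estimate of p(f')
            score += c * math.log(p_f_given_e / p_f)
    return score

def word_association_feature(linked_word_pairs, llr_scores):
    """Principal word association feature: sum of LLR scores of the linked (e, f) word pairs."""
    return sum(llr_scores[(e, f)] for e, f in linked_word_pairs)
```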
  • the log-likelihood ratio-based model also includes a plurality of other features.
  • One set of features is referred to as non-monotonicity features 232. It may be observed that in closely related languages, word alignments of sentences that are mutual translations tend to be approximately monotonic (i.e., corresponding words tend to be in nearly corresponding sentence positions). Even for distantly related languages, the number of crossing links is far less than chance, since phrases tend to be translated as contiguous chunks. To model these tendencies, non-monotonicity features 232 provide a measure of the monotonicity of a given alignment.
  • To measure non-monotonicity, one side of the alignment is arbitrarily designated as the source, and the other as the target.
  • the word pairs in the alignment are sorted, first by source word position, then by target word position. (That is, the ordering is determined primarily by source word position, and target word position is considered only if the source word positions are the same.)
  • the alignment is traversed, looking only at the target word positions.
  • the points of non-monotonicity in the alignment are places where there are backward jumps in this sequence of target word positions.
  • Suppose a sorted alignment contains the following pairs of linked word positions: ((1,1) (2,4) (2,5) (3,2) (5,6)).
  • the first term in this sequence (1,1) means that the first word in the source sentence is aligned with the first word in the target sentence.
  • the second term (2,4) means that the second word in the source sentence is aligned with the fourth word in the target sentence.
  • the third term (2,5) means that the second word in the source sentence is also aligned with the fifth word in the target sentence.
  • The fourth term (3,2) means that the third word in the source sentence is aligned with the second word in the target sentence, and the last term (5,6) means that the fifth word in the source sentence is aligned with the sixth word in the target sentence.
  • the sequence of target word positions in this sorted alignment is (1,4,5,2,6). Therefore, there is one point of non-monotonicity where target word position 2 follows target word position 5.
  • The particular way in which the degree of non-monotonicity of an alignment is measured can vary. For instance, in one embodiment, the magnitudes of the backward jumps in the target word sequence are summed, and this sum is the measure of non-monotonicity. In another embodiment, the number of backward jumps is counted, and the number of jumps is indicative of the non-monotonicity. Finally, rather than choosing between those various embodiments, both of them can be used.
  • the non- monotonicity features 232 in word alignment model 204 are illustratively comprised of one or both of these measures of non- monotonicity, or a different set of measures of non-monotonicity.
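  • A minimal sketch of the two non-monotonicity measures, assuming links are given as (source position, target position) pairs; it reproduces the worked example above.

```python
def non_monotonicity(links):
    """Sum of backward-jump magnitudes and number of backward jumps.

    links: (source_position, target_position) pairs.
    The links are sorted by source position, then target position, and the
    resulting sequence of target positions is scanned for backward jumps.
    """
    ordered = sorted(links)                     # primary key: source, secondary: target
    targets = [t for _, t in ordered]
    jump_sum = 0
    jump_count = 0
    for prev, cur in zip(targets, targets[1:]):
        if cur < prev:                          # a backward jump in target order
            jump_sum += prev - cur
            jump_count += 1
    return jump_sum, jump_count

# Worked example from the text: sorted links ((1,1) (2,4) (2,5) (3,2) (5,6))
# give the target sequence (1, 4, 5, 2, 6), with one backward jump of magnitude 3.
print(non_monotonicity([(1, 1), (2, 4), (2, 5), (3, 2), (5, 6)]))  # -> (3, 1)
```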
  • Word alignment links tend to be 1-to-1. Indeed, word alignment results can often be improved by restricting more general models to permit only 1-to-1 links between words.
  • one embodiment of the invention defines a 1-to-many feature as the number of links connecting two words such that exactly one of them participates in at least one other link.
  • The system can also define a many-to-many feature as the number of links that connect two words that both participate in other links.
  • Multiple link features 234 in word alignment model 204 can be either or both of these features.
  • The 1-to-many feature is the only one used in multiple link features 234, while the many-to-many feature is not used directly in the model, but is simply used to reduce the number of alignments that must be considered, as any alignments having a non-zero value of the many-to-many feature are discarded.
  • the unlinked word features 236 simply count the total number of unlinked words in both sentences in an aligned sentence pair. This is used to control the number of words that get linked to something in the aligned sentence pair.
  • The rank of an association with respect to a word in a sentence pair can be defined to be the number of association types for that word that have higher association scores.
  • There are two features 231 that are based on association score rank. One feature totals the sum of the association ranks with respect to both words involved in each link. The second feature sums the minimum of the association ranks with respect to both words involved in each link.
  • The only features relating to word order are those measuring non-monotonicity.
  • The likelihoods of various forward jump distances are not modeled. If alignments are dense enough, measuring non-monotonicity models this indirectly. That is, if every word is aligned, it is impossible to have large forward jumps without correspondingly large backwards jumps, because something has to link to the words that are jumped over. If word alignments are sparse, however, due to free translation, it is possible to have alignments with very different forward jumps, but the same backwards jumps.
  • a jump distance difference feature 233 is used that sums the differences between the distance between consecutive aligned source words and the distance between the closest target words they are aligned to. In another embodiment, jump distance difference feature 233 sums the differences between the distance between consecutive aligned target words and the distance between the closest source words they are aligned to.
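  • One plausible reading of jump distance difference feature 233 is sketched below: for each pair of consecutive aligned source words, it compares the source-side distance with the smallest distance between the target words linked to them. The exact definition may differ; this is an illustrative assumption only.

```python
def jump_distance_difference(links):
    """Illustrative jump-distance-difference feature (source-side version).

    links: (source_position, target_position) pairs.
    For each pair of consecutive aligned source positions, sum the absolute
    difference between the source distance and the distance between the
    closest target words they are linked to.
    """
    by_source = {}
    for s, t in links:
        by_source.setdefault(s, []).append(t)
    aligned_sources = sorted(by_source)
    total = 0
    for s1, s2 in zip(aligned_sources, aligned_sources[1:]):
        source_dist = s2 - s1
        target_dist = min(abs(t2 - t1) for t1 in by_source[s1] for t2 in by_source[s2])
        total += abs(source_dist - target_dist)
    return total
```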
  • This is modeled by including two many-to-one jump distance features 235.
  • One feature sums, for each word w, the number of words not linked to w that fall between the first and last words linked to w.
  • the other feature counts only such words that are linked to some word other than w.
  • The point of the second of these features is that it is likely not as detrimental to have a function word, not linked to anything, fall between two words linked to the same word.
  • An exact match feature 237 sums the number of words linked to identical words. This can be included because proper names or specialized terms are often the same in both languages, and it can be advantageous to take advantage of this to link such words even when they are too rare to have a high association score.
  • Benefits may be gained by including lexical features 239 that count the links between particular high frequency words. Such features can cover all pairs of the five most frequent non-punctuation words in each language, for instance.
  • features are included for all bilingual word pairs that have at least two occurrences in the labeled training data.
  • features can be included for counting the number of unlinked occurrences of each word having at least two occurrences in the labeled training data.
  • the model can be trained by first optimizing the weights for all other features, then optimizing the weights for the lexical features 239, with the other weights held fixed to the optimum values without lexical features 239.
  • In another embodiment, the word association features 230 are not simply the sum of log-likelihood ratio-based word association statistics. Instead, those statistics are replaced with the logarithm of the estimated conditional probability of two words (or combinations of words) being linked, given that they co-occur in a pair of aligned sentences. These estimates are derived from the best alignments according to another, simpler model. For example, if "former" occurs 100 times in English sentences whose French translations contain "ancien", and the simpler alignment model links them in 60 of those sentence pairs, the conditional link probability (CLP) can be estimated for this word pair as 60/100, or 0.6. However, it may be more desirable to adjust the probabilities by subtracting a small fixed discount d from the link count as follows (Eq. 3):

    $$\mathrm{CLP}_d(f,e) = \frac{\mathrm{links}(f,e) - d}{\mathrm{cooc}(f,e)}$$

    where links(f,e) is the number of times f and e are linked by the simpler model and cooc(f,e) is the number of times they co-occur in aligned sentence pairs.
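  • Under the reading above (a link count, a co-occurrence count, and a small fixed discount), the conditional link probability estimate can be sketched as follows; the discount value and function name are illustrative.

```python
import math

def conditional_link_probability(link_count, cooccurrence_count, discount=0.2):
    """Discounted estimate of p(linked | co-occur) for a word pair.

    link_count:          times the simpler model linked the two words
    cooccurrence_count:  times the two words co-occurred in aligned sentence pairs
    discount:            small fixed discount subtracted from the link count (illustrative value)
    """
    return max(link_count - discount, 0.0) / cooccurrence_count

# Example from the text: "former"/"ancien" linked in 60 of 100 co-occurrences.
clp = conditional_link_probability(60, 100)
log_clp_feature = math.log(clp)   # the model uses the logarithm of this probability
print(round(clp, 3), round(log_clp_feature, 3))
```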
  • The LLR-based model considers each word-to-word link separately, but allows multiple links per word, as long as they lead to alignments consisting only of 1-to-1 and 1-to-many links (in either direction).
  • In the CLP-based model, conditional probabilities are allowed for both 1-to-1 and 1-to-many clusters, but all clusters are required to be disjoint. For instance, the conditional probability of linking
  • The CLP-based model incorporates the same additional features as the LLR-based model, except that it omits the 1-to-many feature, since it is assumed that the 1-to-1 versus 1-to-many tradeoff is already modeled in the conditional link probabilities for particular 1-to-1 and 1-to-many clusters.
  • The 1-to-many feature may be retained in the CLP-based model, in case it turns out that the conditional link probability estimates are more reliable for 1-to-1 clusters than for 1-to-many clusters, or vice versa.
  • There are a variety of different bases for estimating the conditional link probabilities. For instance, one estimate of the conditional link probabilities can be derived from the LLR-based model described above, optimized on an annotated development set. Another estimate can be derived from a heuristic alignment model. It should also be noted that, in addition to the LLR-based model and the CLP-based model, other weighted linear models using word association scores based on surface statistics can be used as well. By "surface statistics" is meant any of the various association metrics that can be defined on a contingency table.
  • a contingency table for two words is a two-by-two matrix in which the four cells of the matrix indicate a count where neither of the words is present, where one of the words is present but the other is not and vice versa, and where both words are present.
  • There are a variety of association metrics which can be calculated from such a matrix, including the χ² statistic, the Dice coefficient, or any of a wide variety of other metrics.
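  • For illustration only, the sketch below derives two such surface statistics, the Dice coefficient and the χ² statistic, from the four cells of a contingency table; the cell naming is an assumption made for readability.

```python
def dice_coefficient(both, only_e, only_f, neither):
    """Dice coefficient from a 2x2 contingency table of sentence-pair counts."""
    return 2.0 * both / ((both + only_e) + (both + only_f))

def chi_squared(both, only_e, only_f, neither):
    """Chi-squared statistic for the same 2x2 contingency table."""
    n = both + only_e + only_f + neither
    num = n * (both * neither - only_e * only_f) ** 2
    den = (both + only_e) * (both + only_f) * (only_e + neither) * (only_f + neither)
    return num / den if den else 0.0

# Example: e and f co-occur in 60 of 1000 sentence pairs, and each occurs alone in 40.
print(round(dice_coefficient(60, 40, 40, 860), 3), round(chi_squared(60, 40, 40, 860), 1))
```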
  • Some embodiments include additional features.
  • One such feature is a symmetrized non-monotonicity feature 241, in which the previous non-monotonicity feature that sums the magnitude of backwards jumps is symmetrized by averaging the sum of backwards jumps in the target sentence order relative to the source sentence order with the sum of the backwards jumps in the source sentence order relative to the target sentence order.
  • the feature that counts the number of backwards jumps can be omitted.
  • A multi-link feature 243 counts the number of link clusters that are not one-to-one. This enables modeling whether the link scores for these clusters are more or less reliable than the link scores for one-to-one clusters.
  • Another feature is an empirically parameterized jump distance feature 245, which incorporates a measure of the jump distances between alignment links that is more sophisticated than simply measuring the difference in source and target distances.
  • the (signed) source and target distances between all pairs of links are measured in the simpler alignment of the full training data that is used to estimate the conditional link probability and conditional link odds features. From this, the odds of each possible target distance given the corresponding source distance are estimated as:
  • The feature values include the sum of the scaled log odds of the jumps between consecutive links in a hypothesized alignment, computed in both source sentence and target sentence order. This feature is applied only when both the source and target jump distances are non-zero, so that it applies only to jumps between clusters, not to jumps on the "many" side of a many-to-one cluster. In one embodiment these feature values are linearly scaled in order to get good results (in terms of training set alignment error rate).
  • Additional embodiments of the empirically parameterized jump distance feature are based on the probability, rather than the odds, of each possible target distance given the corresponding source distance and/or each possible source distance given the corresponding target distance, or other quantities computed using the frequency with which a given jump distance between two words in the source language occurs with a given jump distance between the words in the target language linked to those two source language words.
  • FIG. 4A is a flow diagram illustrating one embodiment for indexing association types using index generator 210.
  • index generator 210 selects two words (one in each language) in each association type. Reading in the association types, choosing an association type and selecting the words in the association type is indicated by blocks 350 and 352 in FIG. 4A.
  • Index generator 210 then indexes the given association type by the selected words. This is indicated by block 354, and results in the index of word association types 220 shown in FIG. 2. This is used by word alignment component 202 as discussed below.
  • index generator 210 may illustratively generate index 220 prior to runtime. It may illustratively be done at set up time or at any other time as desired.
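  • A minimal sketch of the indexing step: each association type is keyed by one word from each language so it can be looked up quickly at alignment time. The data layout is an assumption, not the patent's implementation.

```python
from collections import defaultdict

def build_index(association_types):
    """Index association types by one (English word, French word) key pair.

    association_types: list of (english_words, french_words, score) tuples,
    where english_words and french_words are tuples of one or more words.
    The first word on each side is used as the index key (an illustrative choice).
    """
    index = defaultdict(list)
    for e_words, f_words, score in association_types:
        key = (e_words[0], f_words[0])
        index[key].append((e_words, f_words, score))
    return index
```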
  • FIG. 4B is a flow diagram better illustrating how the list of all possible association types that apply to the selected sentence pair can be generated.
  • Word alignment component 202 generates all possible word pairs in the pair of aligned sentences 214 and 216. The word pairs represent all combinations of words, one being from the language E text fragment 214 and the other being from the language F text fragment 216. Generating all possible word pairs for the aligned sentences (or text fragments) is indicated by block 356 in FIG. 4B. Word alignment component 202 then selects one of the word pairs from the list. This is indicated by block 358.
  • Word alignment component 202 determines whether there is an index entry for the selected word pair. In doing so, word alignment component 202 accesses index 220 to determine whether it contains an entry for the selected word pair from the list of word pairs generated from the aligned text fragments 214 and 216. Checking for the index entry is indicated by block 360 in FIG. 4B.
  • word alignment component 202 determines whether there are any more possible word pairs in the list to be considered. If so, processing reverts to block 358 where another word pair is selected. Determination of whether there are more word pairs to be considered is indicated by block 362 in FIG. 4B.
  • Word alignment component 202 determines whether the index entry is for a 1-to-1 association type. In other words, component 202 determines whether the index entry is only a link between a single word in text fragment 214 and a single word in text fragment 216, where neither of the words has additional links specified by the association type. This is indicated by block 364. If the index entry is for a 1-to-1 association type, then the association type represented by the index entry is simply added to the list of possible association types generated for the aligned text fragments 214 and 216. This is indicated by block 366 in FIG. 4B.
  • Word alignment component 202 determines whether the other words in the association type represented by the index entry (other than those which are listed in the index entry) occur in the pair of aligned text fragments 214 and 216. This is indicated by block 368 in FIG. 4B. In other words, if the index entry represents a 1-to-many association type, then component 202 determines whether all of the words in the association type occur in the pair of aligned text fragments. If the words do occur in the pair of aligned text fragments 214 and 216, then the association type represented by the index entry is also added to the list of possible association types for the aligned text fragments 214 and 216.
  • many-to-many association types are not considered. In that case, those association types can be omitted from index 220, in which case the many-to-many association type will never be selected. Other ways of omitting many-to-many association types can be used as well, and it may in some cases be desirable to use such association types, in which case they are left in and treated as a 1-to-many association type at this point.
  • the list of possible association types for the aligned text fragments 214 and 216 is sorted based on association scores, from strongest association score to weakest association score. This is indicated by block 370 in FIG. 4B.
  • the possible association types can be determined incrementally as the possible word pairs are generated. That is, as each word pair is generated, the operations indicated in blocks 360, 364, 368, and 366 are performed for that word pair, before the next possible word pair is generated.
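  • Continuing the indexing sketch above, the following hypothetical helper mirrors the FIG. 4B flow: enumerate the bilingual word pairs in the sentence pair, look each one up in the index, keep association types whose remaining words also occur in the sentence pair, and sort by association score.

```python
def possible_association_types(e_sentence, f_sentence, index):
    """Collect association types applicable to one aligned sentence pair, sorted by score."""
    e_set, f_set = set(e_sentence), set(f_sentence)
    found = {}
    for e_word in e_set:
        for f_word in f_set:                      # all bilingual word pairs
            for e_words, f_words, score in index.get((e_word, f_word), []):
                # 1-to-1 types are kept directly; 1-to-many types are kept only
                # if all of their words occur in the sentence pair.
                if set(e_words) <= e_set and set(f_words) <= f_set:
                    found[(e_words, f_words)] = score
    return sorted(found.items(), key=lambda item: item[1], reverse=True)
```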
  • word alignment component 202 then identifies the best alignment according to word alignment model 204 using the list of possible association types.
  • FIG. 5A is a flow diagram illustrating the process of identifying the best alignment (also represented by blocks 306 and 308 in FIG. 3) in more detail.
  • Word alignment component 202 first initializes a list of existing alignments to contain only an empty alignment along with its overall score. Since an empty alignment has no links, the overall score for an empty alignment will simply be the total number of words in both sentences, multiplied by the unlinked word feature weight. This is indicated by block 400 in FIG. 5A. Component 202 then selects the highest scoring association type not yet considered from the list of possible association types just generated as described with respect to FIG. 4. Selecting one of the possible association types is indicated by block 402 in FIG. 5A.
  • Component 202 then incrementally adds all possible instances of the selected association type to copies of each of the alignments in a list of current alignments, keeping the previous alignments as well (before each instance of the association type is added) . This is indicated by block 404 in FIG. 5A. If there is more than one instance, in the aligned text fragments 214 and 216, of the selected association type being processed, then component 202 picks one instance and tries adding that instance to each of the alignments, and repeats that process for each of the instances. As each instance is considered, the alignments created by adding earlier instances are included in the existing potential alignments that component 202 adds the new instance to.
  • The set of alignments is pruned by component 202, as new alignments are generated as indicated by block 404 in FIG. 5A.
  • One embodiment in which the alignments are pruned as they are generated will be described in greater detail below with respect to FIGS. 5B-1 to 5B-3.
  • Component 202 iterates through the sorted list of association types, from best to worst, creating new alignments that add links for all instances of the association type currently being considered to existing alignments, potentially keeping both the old and new alignments in the set of possible alignments being generated. This continues until there are no more association types in the list to consider. This is indicated by block 408 in FIG. 5A.
  • The particular way in which the new links are added is also described in greater detail below with respect to FIGS. 5C and 5D.
  • component 202 simply outputs the best scoring word alignment 222 (shown in FIG. 2) based upon the score generated by word alignment model 204 (also shown in FIG. 2) . This is indicated by block 410 in FIG. 5A.
  • FIGS. 5B-1 to 5B-3 are flow diagrams better illustrating how the list of potential alignments might be incrementally generated and pruned during the processing described with respect to block 404 of FIG. 5A.
  • a possible link "I" that is an instance of the selected association type is selected in the sentence pair. This is indicated by block 504 in FIG. 5B.
  • the set of recent alignments is initialized to be empty. This is indicated by block 506 in FIG. 5B.
  • An existing alignment "A" is selected, and a set of new alignments is created.
  • The new alignments are ways of adding "I" to "A". Selecting the existing alignment and creating a set of new alignments is indicated by blocks 508 and 510 in FIG. 5B.
  • an alignment is selected from the set of new alignments. This is indicated by block 512 in FIG. 5B.
  • Component 202 determines whether A' already exists in the set of recent alignments, or whether it has any many-to-many links in it, or whether it has any one-to-many links with more than a predetermined number "M" of branches. This is indicated by block 514 in FIG. 5B.
  • The value of "M" can be set empirically, or in any other desired way, and it is believed that a value of approximately 3 works well.
  • If, at block 514, word alignment component 202 determines that the selected alignment A' either already exists in the set of recent alignments, or has many-to-many links in it, or has any one-to-many links with more than M branches, then processing moves to block 516 where component 202 determines whether there are any more alignments A' to consider. However, if, at block 514, component 202 determines that A' does not already exist in the set of recent alignments, and it does not have any many-to-many links in it, and it does not have any one-to-many links with more than "M" branches, then word alignment component 202 computes the score for the alignment A' according to the model 204. Computing the score is indicated by block 518 in FIG. 5B.
  • Word alignment component 202 determines whether the score for the alignment A' is worse than the best score computed so far by more than a pruning threshold amount. This is indicated by block 520 in FIG. 5B. If so, then the alignment A' is discarded, and processing again moves to block 516 where component 202 determines whether there are any more alignments A' to be processed. Discarding the alignment is indicated by block 522 in FIG. 5B.
  • the pruning threshold can be any desired value and can be set empirically or otherwise.
  • If word alignment component 202 determines that the score for the alignment A' is not worse than the best score so far by more than the pruning threshold, then component 202 adds the alignment A' to the list of recent alignments. This is indicated by block 524 in FIG. 5B. If the score for the alignment A' is the best score so far, then component 202 notes that as well. This is indicated by block 526. Component 202 then determines whether there are more existing alignments "A" to be processed. If so, processing reverts back to block 508. If not, however, component 202 adds the recent alignments to the set of existing alignments. This is indicated by block 534 in FIG. 5B.
  • The set of existing alignments is then sorted based on the model score and is pruned back to the N-best existing alignments based on the model score. Sorting the set of existing alignments is indicated by block 536 and pruning to the N-best existing alignments is indicated by block 538 in FIG. 5B. While the value of N can be any desired or empirically determined value, it is believed that a value in a range of approximately 10-20 works well.
  • Component 202 determines whether there are more possible links "I" that are instances of the selected association type in the sentence pair currently being processed. If so, processing reverts back to block 504. Determining whether there are more existing alignments "A" is indicated by block 528, and determining whether there are more possible links "I" is indicated by block 530.
  • When component 202 determines that there are no more instances of the association type to be processed, component 202 has completed the processing indicated by block 404.
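  • The pruning described in FIGS. 5A-5B can be pictured with the simplified sketch below, which keeps at most N alignments and discards any candidate scoring worse than the current best by more than a threshold; scoring and link generation are abstracted away, and the default values are illustrative.

```python
def prune(scored_alignments, beam_size=20, threshold=50.0):
    """Keep the N best alignments whose scores are within `threshold` of the best.

    scored_alignments: list of (score, alignment) pairs; higher scores are better.
    beam_size and threshold are illustrative values to be set empirically.
    """
    if not scored_alignments:
        return []
    scored_alignments.sort(key=lambda pair: pair[0], reverse=True)
    best = scored_alignments[0][0]
    kept = [(s, a) for s, a in scored_alignments if best - s <= threshold]
    return kept[:beam_size]
```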
  • FIG. 5C illustrates one embodiment for creating a set of new alignments that add a link to an existing alignment (represented by block 510 in FIG. 5B) in more detail.
  • FIG. 5C illustrates the process for the embodiment in which word alignment model 204 uses the log-likelihood ratio metric in computing the word association features 230.
  • Component 202 first initializes the set of new alignments to be empty. This is indicated by block 540 in FIG. 5C. Next, for each existing link I' in alignment "A" that conflicts with the instance "I", component 202 adds an alignment to the set of new alignments that has a link for the instance "I" plus all the links in the alignment "A", except the existing link I'. This is indicated by block 542 in FIG. 5C. Component 202 then adds an alignment to the set of new alignments that has "I" plus all the links in "A". This is indicated by block 544 in FIG. 5C.
  • An extra pruning technique can also be used with the LLR-based model.
  • The association scores for each pair of one of these English words and one of these French words are as follows: 4125.02019332218 (Minister, ministre); 2315.88778082931 (Prime, premier); 1556.9205658087 (Prime, ministre); 1436.06392959541 (Minister, premier).
  • FIG. 5D is a flow diagram illustrating how new links are added to existing alignments in the embodiment in which model 204 uses the conditional link probabilities in computing the word association features 230.
  • FIG. 6 is a flow diagram illustrating one embodiment for optimizing the feature weights (λ_i) for the features used by model 204.
  • the optimization technique is a modified version of averaged perceptron learning. Perceptron learning and averaged perceptron learning techniques are known.
  • Perceptron learning iterates through the annotated training data multiple times, comparing, for each sentence pair, the best alignment (a_hyp) according to the current model with the reference alignment (a_ref).
  • The weight for each feature is incremented by the difference between the value of the feature for the reference alignment and the value of the feature for the best alignment according to the model, as follows (Eq. 6):

    $$\lambda_i \leftarrow \lambda_i + \bigl( f_i(a_{\mathrm{ref}}) - f_i(a_{\mathrm{hyp}}) \bigr)$$
  • the feature weights for the final model are the average of the weight values over all the data, rather than simply the values after the final sentence pair of the final iteration.
  • The present system averages the weight values over each pass through the data, rather than over all passes. It is believed that this leads to faster convergence. After each pass of perceptron learning through the data, another pass is made through the data with feature weights fixed to their average value for the previous learning pass, in order to evaluate current performance of the model. The system iterates over this procedure until a local optimum is found.
  • A fixed weight is provided for the word association feature 230. It is believed that this feature is of significant importance in the model, and the weight can be fixed to any desired or empirically determined value. In one embodiment, the weight is fixed to 1.0. Allowing all weights to vary allows many equivalent sets of weights that differ only by a constant scale factor. Fixing one weight thus eliminates a spurious apparent degree of freedom.
  • the present system thus employs a version of perceptron learning that uses a learning rate parameter.
  • The perceptron update rules involve incrementing each weight by the difference in the feature values being compared. If the feature values are discrete, however, the minimum difference may be too large compared to the unweighted association score. Therefore, the present system multiplies the feature value difference by a learning rate parameter η to allow smaller increments when needed, as follows:

    $$\lambda_i \leftarrow \lambda_i + \eta \bigl( f_i(a_{\mathrm{ref}}) - f_i(a_{\mathrm{hyp}}) \bigr)$$
  • The learning rate can be set to any empirically determined value. In one embodiment, the learning rate is set to 0.01, although different rates can be used and optimizations on the rate can be performed as desired.
  • the LLR scores can become very large (such as 100,000 for a 500,000 pair corpus) but small differences can be significant. Thus, small differences in the weighting values are also likely to be significant.
  • The present system uses a progression of learning rates, starting at a relatively large value (which can be empirically determined, and one example of which is approximately 1000) and reducing it successively until a final desired learning rate is reached.
  • The level of reduction can be empirically determined or set as desired.
  • The learning rate is reduced successively by an order of magnitude until a learning rate of 1 is reached.
  • Other values can be used as well.
  • the feature weights are reinitialized to the optimum values found with the previous learning rate.
  • FIG. 6 is a flow diagram illustrating one embodiment in which the parameter weights (λ_i) are trained or optimized. First, all of the weights, and the learning rate, are initialized. This is indicated by block 560 in FIG. 6. In one embodiment, the weight for the word association feature 230 is initialized to 1.0 and the weights for the other features are initialized to 0.0.
  • a training sample sentence pair, annotated with its correct word alignment, is then processed as described above with respect to the previous figures, in order to obtain a best guess at a word alignment for the sentence pair. This is indicated by block 562.
  • The weights (λ_i) are then adjusted based on the difference in feature values between the correct alignment and the best guess. This is indicated by block 566 in FIG. 6, and is shown in Eq. 5 above.
  • the error rate is checked to determine whether it is still decreasing since the last time it was checked. This check is performed using the average values for the feature weights since the last time the error rate was checked, applied to a specified set of annotated sentence pairs. This set may be the entire set of training sentence pairs used in adjusting the feature weights, a subset of that set, or an independent set of annotated sentence pairs. This is indicated by block 569 in FIG. 6. If the error rate is still decreasing, as indicated by block 570, then processing reverts back to block 562 where training continues by processing additional training samples. In other words, it is determined that the learning rate is still set at a level such that, as learning continues, the model is getting better and the error rate associated with the model is decreasing. Therefore, processing continues using the current learning rate.
  • If the error rate has started to increase (or is at least no longer decreasing), then it is determined that training has flattened out with respect to the current learning rate. It is thus determined whether there are any additional learning rates to try during the training process. This is indicated by block 572. If not, training is complete and the weights that yielded the lowest error rate are used. However, if, at block 572, it is determined that there are more learning rates to try, then the learning rate is set to its next lowest value, and the feature weights are reset to the values that have yielded the lowest error rate so far. This is indicated by blocks 574 and 576. Processing then continues at block 562, in which training samples are again processed in order to continue training the model feature weights λ_i.
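  • A compact sketch of the training procedure described above (per-pass averaged perceptron with a decreasing learning-rate schedule). The `decode`, `features`, and `error_rate` callbacks stand in for the alignment search, feature extraction, and error-rate evaluation; they are assumed interfaces, not part of the patent.

```python
def perceptron_train(data, feature_names, decode, features, error_rate,
                     learning_rates=(1000, 100, 10, 1)):
    """Per-pass averaged perceptron with a decreasing learning-rate schedule (sketch).

    data:       list of (sentence_pair, reference_alignment)
    decode:     (sentence_pair, weights) -> best alignment under the current weights
    features:   (sentence_pair, alignment) -> {feature_name: value}
    error_rate: weights -> alignment error rate on a specified set of annotated pairs
    """
    weights = {name: 0.0 for name in feature_names}
    weights["word_association"] = 1.0             # this weight is held fixed
    best_weights, best_err = dict(weights), float("inf")

    for rate in learning_rates:
        weights = dict(best_weights)              # restart from the best weights so far
        while True:
            total = {name: 0.0 for name in weights}
            for pair, ref in data:                # one perceptron pass through the data
                hyp = decode(pair, weights)
                ref_feats, hyp_feats = features(pair, ref), features(pair, hyp)
                for name in weights:
                    if name == "word_association":
                        continue                  # fixed weight, never updated
                    weights[name] += rate * (ref_feats.get(name, 0.0) - hyp_feats.get(name, 0.0))
                for name in weights:
                    total[name] += weights[name]
            averaged = {name: total[name] / len(data) for name in weights}
            err = error_rate(averaged)            # evaluate the per-pass average
            if err < best_err:
                best_err, best_weights = err, dict(averaged)
            else:
                break                             # no longer improving at this learning rate
    return best_weights
```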
  • It will be noted that perceptron learning may be used to optimize the model feature weights, and that other learning methods, such as maximum entropy modeling or maximum margin methods, including support vector machines, may be used to optimize the feature weights. If the number of feature weights is small, direct optimization methods such as Powell's method or the downhill simplex method may also be used.
  • a support vector machine (SVM) method for structured output spaces can be used.
  • the method can be based on known large margin methods for structured and interdependent output variables.
  • this method tries to find the hyperplane that separates the training examples with the largest margin.
  • the optimal hyperplane can be efficiently approximated given the desired error rate, using a cutting plane algorithm. In each iteration of the algorithm, it adds the "best" incorrect predictions given the current model as constraints, and optimizes the weight vector subject only to them.
  • This algorithm does not pose special restrictions on the output structure, as long as "decoding" can be done efficiently. This can be beneficial because several features mentioned above are believed to be very effective in this task, but are difficult to incorporate into structured learning methods that require decomposable features. This method also allows a variety of loss functions, but can also use only simple 0-1 loss, which in this context means whether or not the alignment of a sentence pair is completely correct.
  • In the embodiment in which an SVM method is used, the SVM method has a number of free parameters, which can be tuned in a number of different ways.
  • One way is by minimizing training set AER.
  • Another is five-fold cross validation. In this method, training is performed five times on 80% of the training data and testing on the other 20%, with five disjoint subsets used for testing. The parameter values yielding the best average AER on the five test subsets of the training set are used to train the final model on the entire training set.
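  • A short sketch of the five-fold tuning loop just described; the `train_and_aer` callback, which trains on one subset and returns AER on another for a given parameter setting, is an assumed interface.

```python
def five_fold_tune(data, candidate_params, train_and_aer):
    """Pick the parameter setting with the lowest average AER over five held-out folds."""
    k = 5
    folds = [data[i::k] for i in range(k)]        # five disjoint subsets
    best_params, best_aer = None, float("inf")
    for params in candidate_params:
        aers = []
        for i in range(k):
            test = folds[i]
            train = [x for j, fold in enumerate(folds) if j != i for x in fold]
            aers.append(train_and_aer(train, test, params))
        avg = sum(aers) / k
        if avg < best_aer:
            best_aer, best_params = avg, params
    return best_params
```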
  • FIG. 7 shows a classifier 600 that uses conditional log odds to define features and that can be used as a binary classifier, a multi-class classifier or a structured classifier.
  • Classifier 600 receives an input 602 and generates a classified output 604.
  • One application for classifier 600 is for sequence segmentation, such as segmenting East Asian characters into words .
• Such a log-odds feature provides much greater benefit than the corresponding log-probability feature, and it is useful to include such a feature even when the model also includes indicator-function features for every word in the training corpus.
• wordcount(x_{a..b}) is the number of times the character sequence x_{a..b} (the characters from position a through position b) forms a word in the training set.
• nonwordcount(x_{a..b}) is the number of times x_{a..b} occurs not segmented as a single word. (A sketch of the resulting feature is given below.)
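A log-odds segmentation feature of the kind described might be computed roughly as follows. The add-one smoothing used here to keep the ratio finite is an assumption for illustration; the text above does not specify how zero counts are handled.

    import math

    def segment_log_odds(span, wordcount, nonwordcount):
        # Log odds that the character sequence `span` forms a word, from training-set counts.
        #   wordcount[span]    : times the span occurs segmented as a single word
        #   nonwordcount[span] : times the span occurs not segmented as a single word
        w = wordcount.get(span, 0)
        nw = nonwordcount.get(span, 0)
        return math.log((w + 1.0) / (nw + 1.0))   # add-one smoothing (assumed)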
• The word alignment problem and the sequence segmentation problem described above are both instances of structured classification problems, because a word alignment or a segmentation of a sequence can both be viewed as structured labels of the inputs, with the individual alignment links or word segment boundaries being partial labels.
  • Log-conditional-odds features were also found very useful in a multi-class classification model for another natural language prediction problem, which can be another application for classifier 600, in which a fixed set of more than two unstructured labels is used.
• The exemplary problem is to predict Japanese case markers given the rest of the words in a Japanese sentence. Case markers are words which indicate grammatical relations (such as subject, object, and location) of the complement noun phrase to the predicate.
• The following Japanese sentence shows an example of the use of case markers in Japanese.
• The case markers that need to be predicted are kara (from) for the second phrase and ni for the third phrase (the case markers are underlined below).
• This task can be viewed as a multi-class classification problem where the goal is to predict one case marker for each phrase in the Japanese sentence, given a set of features from the context of the phrase, and possibly features from a corresponding
• The number of classes to be predicted is 19, which includes 18 case markers and a class NONE, meaning that a phrase has no case marker.
• One exemplary model for this task uses nominal-valued features, such as HeadPOS (the part-of-speech tag of the head word of the phrase), Headword, PrevHeadWord, NextHeadWord (head words of surrounding phrases), as well as features from a syntactic dependency tree.
• HeadPOS: the part-of-speech tag of the head word of the phrase
• Headword: the head word of the phrase
• PrevHeadWord: the head word of the preceding phrase
• NextHeadWord: the head word of the following phrase
  • Feature type 0 is added to model the prior likelihood of every class.
• Each of the log-odds features in this equation represents the logarithm of the probability of class y_i given some feature, divided by the probability of the complement of y_i (denoted by ¬y_i) given that feature.
• The complement of class y_i is the set of all classes other than y_i.
• The odds values were estimated by smoothed relative frequency estimates from the training data.
• The unsmoothed relative frequency estimates are:
• Add-α smoothing is used to improve this estimate.
• The add-α estimate is as follows:
• k denotes the number of classes as before.
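The equations referred to in the items above are not reproduced in this text. One plausible reconstruction, based only on the surrounding definitions (and therefore an assumption rather than a quotation of the original equations), is:

    \mathrm{odds}(y_i \mid x_j) \;=\; \frac{C(x_j, y_i)}{C(x_j, \lnot y_i)},
    \qquad
    \mathrm{odds}_{\alpha}(y_i \mid x_j) \;=\; \frac{C(x_j, y_i) + \alpha}{C(x_j, \lnot y_i) + (k-1)\,\alpha}

where C(x_j, y_i) is the number of training instances with feature value x_j and class y_i, and C(x_j, ¬y_i) is the number with feature value x_j and any class other than y_i; the placement of the (k − 1) factor is a guess motivated by the remark that k denotes the number of classes.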
• The parameters λ_{j,y_i} of the model can be trained using a standard technique. For instance, the sum of the conditional log-likelihoods of the training data instances can be maximized, and a Gaussian prior on the parameters can be included. Comparing this model using log-odds features to one using log-probability features, it was found that the model using log-odds features outperformed the latter model. In particular, the model using log-probability features had the following form:
• Equation 12 closely corresponds to the method of discriminatively training parameters (the λ_{j,y_i}) for weighting log-probability features from generative models.
• The probability features P(x_j | y_i) were estimated by smoothed relative frequency estimates from the training data, using add-α smoothing.
• The smoothed relative frequency estimate for P(x_j | y_i) is:
• V denotes the number of possible values for the j-th nominal feature x_j.
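Again, the equation itself is not reproduced in the text above. A plausible form, based on the surrounding description and on standard add-α smoothing over the V possible feature values (an assumption, not a quotation of the original), is:

    P_{\alpha}(x_j \mid y_i) \;=\; \frac{C(x_j, y_i) + \alpha}{C(y_i) + V\,\alpha}

and the log-probability model of Equation 12 would then plausibly score a class as roughly \lambda_{0,y_i} + \sum_j \lambda_{j,y_i} \log P(x_j \mid y_i), with the λ parameters trained discriminatively as described.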
• A range of values for the α smoothing parameter can be tried in any known manner for both Equations 9 and 12, to find approximately optimal values for them. This can be done empirically, using a systematic approach, or otherwise.
• Conditional log odds can be used in classification. This can be done in binary classification, multi-class classification (with a fixed set of more than 2 classes), and structured classification (such as bilingual word alignment or word segmentation).
• The logarithm of the ratio of the probability of some label given a feature and the probability of the complement of that label given the feature produces the same results as the logarithm of the ratio of the probability of the feature given the label and the probability of the feature given the complement of the label. That is, log [P(y_i | x_j) / P(¬y_i | x_j)] can be replaced by log [P(x_j | y_i) / P(x_j | ¬y_i)]. Even in cases where the

Abstract

A weighted linear word alignment model linearly combines weighted features to score a word alignment for a bilingual, aligned pair of text fragments. The features are each weighted by a feature weight. One of the features is a word association metric, which may be generated from surface statistics.

Description

WEIGHTED LINEAR MODEL
BACKGROUND
Machine translation is a process by which a textual input in a first language is automatically translated, using a computerized machine translation system, into a textual output in a second language. Some such systems operate using word based translation. In those systems, each word in the input text, in the first language, is translated into some number of corresponding words in the output text, in the second language. Better performing systems, howeyer, are referred to as phrase- based translation systems. In order to train either of these two types of systems (and many other machine translation systems), current training systems often access a parallel bilingual corpus; that is, a text in one language and its translation into another language. The training systems first align text fragments in the bilingual corpus such that a text fragment (e.g., a sentence) in the first language is aligned with a text fragment (e.g., a sentence) in the second language that is the translation of the text fragment in the first language. When the text fragments are aligned sentences, this is referred to as a bilingual sentence- aligned data corpus .
In order to train the machine translation system, the training system must also know the individual word alignments within the aligned sentences. In other words, even though sentences have been identified as translations of one another in the bilingual, sentence-aligned corpus, the machine translation training system must also know which words in each sentence of the first language translate to which words in the aligned sentence in the second language.
One current approach to word alignment makes use of five translation models. This approach to word alignment is sometimes augmented by a Hidden Markov Model (HMM) based model .
These word alignment models are less than ideal, however, in a number of different ways. For instance, although the standard models can theoretically be trained without supervision, in practice various parameters are introduced that should be optimized using annotated data. In the models that include an HMM model, supervised optimization of a number of parameters is suggested, including the probability of jumping to the empty word in the Hidden Markov Model (HMM), as well as smoothing parameters for the distortion probabilities and fertility probabilities of the more complex models. Since the values of these parameters affect the values of the translation, alignment, and fertility probabilities trained by the expectation maximization (EM) algorithm, there is no effective way to optimize them other than to run the training procedure with a particular combination of values and to evaluate the accuracy of the resulting alignments. Since evaluating each combination of parameter values in this way can take hours to days on a large training corpus, it is likely that these parameters are rarely, if ever, truly jointly optimized for a particular alignment task.
Another problem associated with these models is the difficulty of adding features to them, because they are standard generative models. Generative models require a generative "story" as to how the observed data is generated by an inter-related set of stochastic processes. For example, the generative story for models 1 and 2 mentioned above and the HMM alignment model is that a target language translation of a given source language sentence is generated by first choosing a length for the target language sentence, then for each target sentence position, choosing a source sentence word, and then choosing the corresponding target language word.
One prior system attempted to add a fertility component to create models 3, 4 and 5 mentioned above. However, this generative story did not fit any longer, because it did not include the number of target language words needed to align to each source language word as a separate decision. Therefore, to model this explicitly, a different generative "story" was required. Thus, a relatively large amount of additional work is required in order to add features. In addition, the higher accuracy models are mathematically complex, and also difficult to train, because they do not permit a dynamic programming solution. It can thus take many hours of processing time on current standard computers to train the models and produce an alignment of a large parallel corpus.
The present invention addresses one, some, or all of these problems. However, these problems are not to be used to limit the scope of the invention in any way, and the invention can be used to address different problems, other than those mentioned, in machine translation.
The discussion above is merely provided for general background information and is not intended to be used as an aid in determining the scope of the claimed subject matter.
SUMMARY
A weighted linear word alignment model linearly combines weighted features to score a word alignment for a bilingual, aligned pair of text fragments. The features are each weighted by a feature weight. One of the features is a word association metric generated from surface statistics.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of one exemplary environment in which the present invention can be practiced.
FIG. 2 is a block diagram of one embodiment of a word alignment system.
FIG. 3 is a flow diagram illustrating one embodiment of operation of the system shown in FIG. 2.
FIG. 4A is a flow diagram illustrating one embodiment for indexing association types.
FIG. 4B is a flow diagram illustrating one embodiment for generating a list of possible association types for a sentence pair.
FIG. 5A is a flow diagram illustrating how a best alignment is identified in more detail.
FIGS. 5B-1 to 5B-3 are flow diagrams illustrating one embodiment in which potential alignments are incrementally generated and pruned.
FIG. 5C is a flow diagram illustrating one embodiment for adding a new link to an existing alignment in a first model.
FIG. 5D is a flow diagram illustrating an embodiment of adding a new link to an existing alignment in a second model.
FIG. 6 is a flow diagram illustrating one embodiment for optimizing parameters for the model.
FIG. 7 is a block diagram of a classifier using conditional log odds.
DETAILED DESCRIPTION
The present invention deals with bilingual word alignment. However, before describing the present invention in greater detail, one illustrative environment in which the present invention can be used will be discussed.
FIG. 1 illustrates an example of a suitable computing system environment 100 on which embodiments may be implemented.
The computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100.
Embodiments are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with various embodiments include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, telephony systems, distributed computing environments that include any of the above systems or devices, and the like.
Embodiments may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Some embodiments are designed to be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules are located in both local and remote computer storage media including memory storage devices.
With reference to FIG. 1, an exemplary system for implementing some embodiments includes a general-purpose computing device in the form of a computer 110. Components of computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120. The system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks
(DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation, FIG. 1 illustrates operating system 134, application programs 135, other program modules 136, and program data 137. The computer 110 may also include other removable/nonremovable volatile/nonvolatile computer storage media. By way of example only, FIG. 1 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152, and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media. Other removable/nonremovable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140, and magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150.
The drives and their associated computer storage media discussed above and illustrated in FIG. 1 provide storage of computer readable instructions, data structures, program modules and other data for the computer 110. In FIG. 1, for example, hard disk drive 141 is illustrated as storing operating system 144, application programs 145, other program modules 146, and program data 147. Note that these components can either be the same as or different from operating system 134, application programs 135, other program modules 136, and program data 137. Operating system 144, application programs 145, other program modules 146, and program data 147 are given different numbers here to illustrate that, at a minimum, they are different copies.
A user may enter commands and information into the computer 110 through input devices such as a keyboard 162, a microphone 163, and a pointing device 161, such as a mouse, trackball or touch pad. Other input devices (not shown) may include a joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 120 through a user input interface 160 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB) . A monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190. In addition to the monitor, computers may also include other peripheral output devices such as speakers 197 and printer 196, which may be connected through an output peripheral interface 195.
The computer 110 is operated in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110. The logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 1 illustrates remote application programs 185 as residing on remote computer 180. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
FIG. 2 is a block diagram of a word alignment system 200 in accordance with one embodiment of the invention. Word alignment system 200 includes word alignment component 202 and word alignment model 204. System 200 is also shown coupled to an optional bilingual corpus 206 and text fragment alignment component 208. Similarly, system 200 is shown with index generator 210 coupled to a store of word association types 212.
Bilingual corpus 206 illustratively includes bilingual data in which text in the first language is found, along with a translation of that text into a second language. For instance, using the English and French languages as an example, bilingual corpus 206 will illustratively include a relatively large amount of English language text along with a French translation of that text. A relatively small amount of bilingual corpus 206 is word- aligned by a person fluent in both languages . Illustratively, bilingual corpus 206 might consist of 500,000 pairs, each pair having an English sentence and its French translation, of which 200 to 300 pairs have been word-aligned by hand. In order to word-align all the sentences in corpus 206, text fragment alignment component 208 first accesses bilingual corpus 206 to generate pairs of aligned text fragments from bilingual corpus 206. In one illustrative embodiment, the text fragments are sentences, although the text fragments could be other fragments such as clauses, etc.
Text fragment alignment component 208 thus outputs a first text fragment 214 in a first language E (such as English) and a second text fragment 216 in a second language F (such as French) which is the translation of the first text fragment 214. The bilingual, aligned text fragments 214 and 216 (such as bilingual, aligned sentences) are then input to word alignment component 202.
Either text fragment alignment component 208, or a different component, illustratively calculates values of a statistical measure of the strength of word associations in the text-fragment-aligned data. These values are referred to as word association scores and are indicative of a strength of association between a bilingual pair of words, or a bilingual cluster of words. Each pair or cluster of words is referred to as a word association type and is shown in FIG. 2 as being stored, along with its word association score, in a word association type data store 212. These scores can be used to determine feature values in the word alignment model described below. Two different exemplary word association scores that can be used are discussed in detail below, and a number of others are also mentioned. In addition, one or more other features (in addition to those computed from the word association scores) are also defined for possible word alignments. This set of features 218 is used in word alignment model 204.
In one embodiment, index generator 210 accesses all of the various word association types identified in the training data (and stored in word association type data store 212) and indexes those word association types. This is described in greater detail below with respect to FIG. 4. In any case, index generator 210 generates an index 220 of word association types. The index 220 and the word alignment model 204 are available to word alignment component 202 in order to generate a word alignment 222 for a pair of aligned text fragments 214 and 216.
FIG. 3 is a flow diagram illustrating the operation of system 200 in greater detail. Word alignment component 202 first selects a sentence pair (or text fragment pair) . This is indicated by block 300 in FIG. 3. Word alignment component 202 then accesses the set of association types through index 220. This is indicated by block 302. Word alignment component 202 then generates a list of all possible association types that apply to the selected sentence pair. This is indicated by block 304 in FIG. 3.
Word alignment component 202 then sorts the list of possible association types based on their association scores. This is indicated by block 306 in FIG. 3. Finally, word alignment component 202 identifies the best alignment according to word alignment model 204, by accessing word alignment model 204, and employing the various features 218 in model 204. This is indicated by block 308 in FIG. 3.
In one embodiment, model 204 is generated based on discriminative training of a weighted linear combination of a relatively small number of features. For a given parallel sentence pair, for each possible word alignment considered, model 204 simply multiplies the value of each of the features by a corresponding weight to give a score for that feature, and sums the feature scores to give an overall score for the alignment. The possible alignment having the best overall score is selected as the word alignment for that sentence pair. Thus, for a sentence pair e, f (where e is the sentence in English and f is the sentence in French), model 204 identifies an alignment â such that:

Eq. 1:  â = argmax_a Σ_{i=1}^{n} λ_i · f_i(a, e, f)

where the f_i are the features and the λ_i are the corresponding feature weights.
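In code, scoring and selecting alignments under Equation 1 amounts to a weighted sum followed by an argmax, roughly as sketched below; the feature functions and the candidate-generation step are assumed to be supplied elsewhere.

    def alignment_score(alignment, e_sentence, f_sentence, feature_funcs, weights):
        # Weighted linear score of one candidate alignment (Equation 1):
        # sum over features of feature weight times feature value.
        return sum(weights[name] * func(alignment, e_sentence, f_sentence)
                   for name, func in feature_funcs.items())

    def best_alignment(candidates, e_sentence, f_sentence, feature_funcs, weights):
        # argmax over a set of candidate alignments.
        return max(candidates,
                   key=lambda a: alignment_score(a, e_sentence, f_sentence,
                                                 feature_funcs, weights))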
The weights can be optimized using a modified version of averaged perceptron learning as described below with respect to FIG. 6. This is a relatively quick process. In addition, no generative "story" needs to be invented to explain how the features generate the data, so new features can be easily added to the model without having to change the overall structure of the model.
The specific implementation of word alignment model 204 can be any of a variety of different implementations incorporating a variety of different features. In one embodiment described herein, word alignment model 204 incorporates a feature computed from the different word association scores, mentioned above, intended to indicate how likely various pairs of words or groups of words are to be mutual translations, plus additional features measuring how much word reordering is required by a given alignment, and how many words are left unlinked in that alignment. As discussed below, embodiments of model 204 can also include a feature measuring how often one word is linked to several words in the alignment under analysis.
In the following discussion, and as used above, the term "alignment" is used to mean an overall word alignment of a sentence pair. The term "link" on the other hand is used to mean the alignment of a particular pair of words or small group of words .
In any case, one embodiment of model 204 uses a statistical measure of word association in order to perform bilingual word alignment. The term "word" here and subsequently should be taken very broadly to include any relatively fixed sequence of characters (including a single character) for which a translation relationship can be meaningfully considered. For example, a single punctuation character such as a period or comma may be treated as a word. In the Chinese language, words are conventionally considered to include usually no more than one or two characters. For the purposes of bilingual word alignment, however, it has sometimes proved useful to treat each individual Chinese character as a single word. On the other hand, many languages, including English, include fixed phrases, such as "in spite of", "according to", or "more than", which function as a single unit and might desirably be treated as single words for purposes of bilingual word alignment or translation. One might also consider breaking what are conventionally regarded as single words into a stem and an inflectional marker (or series of markers) and using each of those as a basic unit for word alignment. For example, the English word "went" might be decomposed into "go" followed by an inflectional marker that might be represented as "+PAST". In what follows, it is simply assumed that the system is dealing with bilingual text segments that have been "tokenized", i.e., broken up, and perhaps transformed, in some way into discrete tokens that we may treat as words for alignment purposes.
While any statistical measure indicative of the strength of association between words can be used, one illustrative statistical measure is referred to as the log likelihood ratio (LLR) statistic. Assume, for instance, that the two languages being discussed are English and French. The log likelihood ratio statistic is a measure of the strength of association between a particular English word and a particular French word. Basically, the log likelihood ratio is computed from bilingual, aligned sentences. The LLR statistic takes into account how often an English word occurs in the English sentences, how often a French word occurs in the French sentences, and how often they occur together in an aligned sentence pair. One way of calculating LLR scores for words in the training corpus is as follows:

Eq. 2:  LLR(f, e) = Σ_{f? ∈ {f, ¬f}} Σ_{e? ∈ {e, ¬e}} C(f?, e?) · log( p(f? | e?) / p(f?) )

In Equation 2, f and e refer to the words (in French and in English, respectively) whose degree of association is being measured. When the terms f and e are used, it means that those words occur in the respective target and source sentences of an aligned sentence pair, and ¬f and ¬e mean that the corresponding words do not occur in the respective sentences, whereas f? and e? are variables ranging over these values, and C(f?, e?) is the observed joint count for the values of f? and e?. The probabilities in Equation 2, p(f?|e?) and p(f?), illustratively refer to maximum likelihood estimates. The LLR score computed using Equation 2 for a pair of words is high if the words have either a strong positive association or a strong negative association. Therefore, in accordance with one embodiment, any negatively associated word pairs, i.e., those for which p(f,e) < p(f)·p(e), are discarded. Also, any word pairs with an LLR score of less than 1 can be discarded as well.
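One way to compute the LLR statistic of Equation 2 from sentence-pair counts is sketched below, using maximum likelihood estimates for the probabilities; the particular function signature is an assumption for illustration.

    import math

    def llr_score(c_ef, c_e, c_f, n):
        # Log-likelihood-ratio association score for a word pair (e, f).
        #   c_ef : aligned sentence pairs containing both e and f
        #   c_e  : pairs containing e;  c_f : pairs containing f;  n : total pairs
        score = 0.0
        for f_present in (True, False):
            for e_present in (True, False):
                # observed joint count C(f?, e?) for this cell of the contingency table
                c_joint = (c_ef if (f_present and e_present) else
                           c_f - c_ef if f_present else
                           c_e - c_ef if e_present else
                           n - c_e - c_f + c_ef)
                if c_joint == 0:
                    continue
                c_e_marginal = c_e if e_present else n - c_e
                p_f_given_e = c_joint / c_e_marginal          # p(f? | e?), maximum likelihood
                p_f = (c_f if f_present else n - c_f) / n     # p(f?), maximum likelihood
                score += c_joint * math.log(p_f_given_e / p_f)
        return score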
In this particular embodiment of model 204, the word association scores are used to compute word association features 230 used in model 204, and the value of the principal word association feature for an alignment is simply the sum of all the individual log-likelihood ratio scores for the word pairs linked by the alignment. The log-likelihood ratio-based model also includes a plurality of other features.
For instance, one set of features is referred to as non-monotonicity features 232. It may be observed that in closely related languages, word alignments of sentences that are mutual translations tend to be approximately monotonic (i.e., corresponding words tend to be in nearly corresponding sentence positions). Even for distantly related languages, the number of crossing links is far less than chance, since phrases tend to be translated as contiguous chunks. To model these tendencies, non-monotonicity features 232 provide a measure of the monotonicity (or more accurately the non-monotonicity) of the alignment under consideration. To find the points of non-monotonicity of a word alignment, one of the languages in the alignment is arbitrarily designated as the source, and the other as the target. The word pairs in the alignment are sorted, first by source word position, then by target word position. (That is, the ordering is determined primarily by source word position, and target word position is considered only if the source word positions are the same.) The alignment is traversed, looking only at the target word positions. The points of non-monotonicity in the alignment are places where there are backward jumps in this sequence of target word positions.
For example, suppose a sorted alignment contains the following pairs of linked word positions ( (1,1) (2,4) (2,5) (3,2) (5,6) ). The first term in this sequence (1,1) means that the first word in the source sentence is aligned with the first word in the target sentence. The second term (2,4) means that the second word in the source sentence is aligned with the fourth word in the target sentence. The third term (2,5) means that the second word in the source sentence is also aligned with the fifth word in the target sentence. The fourth term (3,2) means that the third word in the source sentence is aligned with the second word in the target sentence, and the last term (5,6) means that the fifth word in the source sentence is aligned with the sixth word in the target sentence. The sequence of target word positions in this sorted alignment is (1,4,5,2,6). Therefore, there is one point of non-monotonicity where target word position 2 follows target word position 5. The particular way in which the degree of non-monotonicity of an alignment is measured can vary. For instance, in one embodiment, the magnitudes of the backward jumps in the target word sequence are summed, and this sum is the measure of non-monotonicity. In another embodiment, the number of backward jumps is counted, and the number of jumps is indicative of the non-monotonicity. Finally, rather than choosing between those various embodiments, both of them can be used. Thus, the non-monotonicity features 232 in word alignment model 204 are illustratively comprised of one or both of these measures of non-monotonicity, or a different set of measures of non-monotonicity.
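The two non-monotonicity measures described above can be computed from a sorted list of links as in the following sketch (links are assumed to be (source position, target position) pairs, as in the example).

    def nonmonotonicity(links):
        # Returns (sum of backward-jump magnitudes, number of backward jumps) for an
        # alignment given as (source_pos, target_pos) pairs.
        ordered = sorted(links)                  # sort by source position, then target position
        jump_sum, jump_count = 0, 0
        for (_, prev_t), (_, cur_t) in zip(ordered, ordered[1:]):
            if cur_t < prev_t:                   # a backward jump in the target positions
                jump_sum += prev_t - cur_t
                jump_count += 1
        return jump_sum, jump_count

    # The example from the text: one backward jump (5 -> 2) of magnitude 3.
    assert nonmonotonicity([(1, 1), (2, 4), (2, 5), (3, 2), (5, 6)]) == (3, 1)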
Another set of features is referred to as a set of multiple link features 234 in word alignment model 204. It has often been observed that word alignment links tend to be 1-to-1. Indeed, word alignment results can often be improved by restricting more general models to permit only 1-to-1 links between words.
In order to model the tendency for links to be 1-to-1, one embodiment of the invention defines a 1-to-many feature as the number of links connecting two words such that exactly one of them participates in at least one other link. The system can also define a many-to-many feature as the number of links that connect two words that both participate in other links. Multiple link features 234 in word alignment model 204 can be either or both of these features. However, in one embodiment, the 1-to-many feature is the only one used in multiple link features 234, while the many-to-many feature is not used directly in the model, but is simply used to reduce the number of alignments that must be considered, as any alignments having a non-zero value of the many-to-many feature are discarded.
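The 1-to-many and many-to-many counts defined above can be computed as follows; the representation of an alignment as a set of (source position, target position) pairs is an assumption carried over from the earlier example.

    from collections import Counter

    def link_multiplicity_features(links):
        # 1-to-many: links where exactly one endpoint participates in at least one other link.
        # many-to-many: links where both endpoints participate in other links.
        src_degree = Counter(s for s, _ in links)
        tgt_degree = Counter(t for _, t in links)
        one_to_many = many_to_many = 0
        for s, t in links:
            src_multi = src_degree[s] > 1
            tgt_multi = tgt_degree[t] > 1
            if src_multi and tgt_multi:
                many_to_many += 1
            elif src_multi or tgt_multi:
                one_to_many += 1
        return one_to_many, many_to_many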
Another exemplary feature used in word alignment model 204 is referred to as a set of unlinked word features 236. The unlinked word features 236 simply count the total number of unlinked words in both sentences in an aligned sentence pair. This is used to control the number of words that get linked to something in the aligned sentence pair. The rank of an association with respect to a word in a sentence pair can be defined to be the number of association types (word-type to word-type) for that word that have higher association scores, such that words of both types occur in the sentence pair. In one embodiment, there are two association score rank features 231 that are based on association score rank. One feature totals the association ranks with respect to both words involved in each link. The second feature sums the minimum of the association ranks with respect to both words involved in each link.
So far, as discussed above, the only features relating to word order are those measuring non-monotonicity. The likelihoods of various forward jump distances are not modeled. If alignments are dense enough, measuring non-monotonicity models this indirectly. That is, if every word is aligned, it is impossible to have large forward jumps without correspondingly large backwards jumps, because something has to link to the words that are jumped over. If word alignments are sparse, however, due to free translation, it is possible to have alignments with very different forward jumps, but the same backwards jumps. To differentiate such alignments, in one embodiment, a jump distance difference feature 233 is used that sums the differences between the distance between consecutive aligned source words and the distance between the closest target words they are aligned to. In another embodiment, jump distance difference feature 233 sums the differences between the distance between consecutive aligned target words and the distance between the closest source words they are aligned to.
It may be that the likelihood of a large forward jump on either the source or target side of an alignment is much less if the jump is between two words that are both linked to the same word of the other language. In one embodiment, this is modeled by including two many-to-one jump distance features 235. One feature sums, for each word w, the number of words not linked to w that fall between the first and last words linked to w. The other feature counts only such words that are linked to some word other than w. The point of the second of these features is that it is likely not as detrimental to have a function word that is not linked to anything between two words linked to the same word.
In other embodiments, an exact match feature 237 sums the number of words linked to identical words. This can be included because proper names or specialized terms are often the same in both languages, and it can be advantageous to take advantage of this to link such words even when they are too rare to have a high association score. In one embodiment, benefits may be gained by including lexical features 239 that count the links between particular high frequency words. Such features can cover all pairs of the five most frequent non-punctuation words in each language, for instance. In one embodiment, features are included for all bilingual word pairs that have at least two occurrences in the labeled training data. In addition, features can be included for counting the number of unlinked occurrences of each word having at least two occurrences in the labeled training data.
In training the present model, it was believed that using so many lexical features 239 might result in over-fitting to the training data. To try to prevent this, the model can be trained by first optimizing the weights for all other features, then optimizing the weights for the lexical features 239, with the other weights held fixed to the optimum values without lexical features 239.
In accordance with another embodiment of word alignment model 204, word association features 230 are not simply the sum of log-likelihood ratio-based word association statistics. Instead, those statistics are replaced with the logarithm of the estimated conditional probability of two words (or combinations of words) being linked, given that they co-occur in a pair of aligned sentences. These estimates are derived from the best alignments according to another, simpler model. For example, if "former" occurs 100 times in English sentences whose French translations contain "ancien", and the simpler alignment model links them in 60 of those sentence pairs, the conditional link probability (CLP) can be estimated for this word pair as 60/100, or 0.6. However, it may be more desirable to adjust the probabilities by subtracting a small fixed discount from the link count as follows:

Eq. 3:  LP_d(f, e) = (links_1(f, e) − d) / cooc(f, e)

where LP_d(f, e) represents the estimated conditional link probability for the words f and e, links_1(f, e) is the number of times they are linked by the simpler alignment model, d is the discount, and cooc(f, e) is the number of times they co-occur. This adjustment prevents assigning high probabilities to links between pairs of words that rarely co-occur. Illustratively, this discount may have a value between 0 and 1.
One difference between the LLR-based model and the CLP-based model is that the LLR-based model considers each word-to-word link separately, but allows multiple links per word, as long as they lead to alignments consisting only of 1-to-1 and 1-to-many links (in either direction). In the CLP-based model, however, conditional probabilities are allowed for both 1-to-1 and 1-to-many clusters, but all clusters are required to be disjoint. For instance, the conditional probability of linking "not" (in English) to "ne...pas" (in French) can be estimated by considering the number of sentence pairs in which "not" occurs in the English sentence and both "ne" and "pas" occur in the French sentence, compared to the number of times "not" is linked to both "ne" and "pas" in pairs of corresponding sentences. However, when this estimate is made in the CLP-based model, a link between "not" and "ne...pas" is not counted if the same instance of "not", "ne" or "pas" is linked to any other words. The CLP-based model incorporates the same additional features as the LLR-based model, except that it omits the 1-to-many feature since it is assumed that the 1-to-1 versus the 1-to-many tradeoff is already modeled in the conditional link probabilities for particular 1-to-1 and 1-to-many clusters. In other embodiments, the 1-to-many feature may be retained in the CLP-based model, in case it turns out that the conditional link probability estimates are more reliable for 1-to-1 clusters than for 1-to-many clusters, or vice versa.
There are a variety of different bases for estimating the conditional link probabilities. For instance, one estimate of the conditional link probabilities can be derived from the LLR-based model described above, optimized on an annotated development set. Another estimate can be derived from a heuristic alignment model. It should also be noted that, in addition to the LLR-based model and the CLP-based model, other weighted linear models using word association scores based on surface statistics can be used as well. By "surface statistics" it is meant any of the different association metrics that can be defined on a contingency table. In other words, a contingency table for two words is a two-by-two matrix in which the four cells of the matrix indicate a count where neither of the words is present, where one of the words is present but the other is not and vice versa, and where both words are present. There are many different association metrics which can be calculated from such a matrix, including the χ² statistic, the Dice coefficient, or any of a wide variety of other metrics. In another embodiment, the estimated conditional probability of a cluster of words being linked is replaced with the estimated conditional odds of a cluster of words being linked, as follows:

Eq. 4:  LO(w_1, ..., w_k) = (links_1(w_1, ..., w_k) + 1) / (cooc(w_1, ..., w_k) − links_1(w_1, ..., w_k) + 1)

where LO(w_1, ..., w_k) represents the estimated conditional link odds for the cluster of words w_1, ..., w_k. In this exemplary embodiment, "add-one" smoothing is used in place of a discount.
Some embodiments include additional features. One such feature is a symmetrized non-monotonicity feature 241, in which the previous non-monotonicity feature that sums the magnitude of backwards jumps is symmetrized by averaging the sum of backwards jumps in the target sentence order relative to the source sentence order with the sum of the backwards jumps in the source sentence order relative to the target sentence order. In this exemplary embodiment, the feature that counts the number of backwards jumps can be omitted.
A multi-link feature 243 counts the number of link clusters that are not one-to-one. This enables modeling whether the link scores for these clusters are more or less reliable than the link scores for one-to-one clusters.
Another feature is an empirically parameterized jump distance feature 245, which incorporates a measure of the jump distances between alignment links that is more sophisticated than simply measuring the difference in source and target distances. The (signed) source and target distances between all pairs of links are measured in the simpler alignment of the full training data that is used to estimate the conditional link probability and conditional link odds features. From this, the odds of each possible target distance given the corresponding source distance are estimated as:
JO(d_t | d_s) = (C(t_dist = d_t ∧ s_dist = d_s) + 1) / (C(t_dist ≠ d_t ∧ s_dist = d_s) + 1)
Similarly, the odds of each possible source distance given the corresponding target distance are estimated. The feature values include the sum of the scaled log odds of the jumps between consecutive links in a hypothesized alignment, computed in both source sentence and target sentence order. This feature is applied only when both the source and target jump distances are non-zero, so that it applies only to jumps between clusters, not to jumps on the "many" side of a many-to-one cluster. In one embodiment, these feature values are linearly scaled in order to get good results (in terms of training set alignment error rate (AER)) when using perceptron training. It has been found empirically that good results can be obtained in terms of training set AER by dividing each log odds estimate by the largest absolute value of any such estimate computed.
Additional embodiments of the empirically parameterized jump distance feature are based on the probability, rather than the odds, of each possible target distance given the corresponding source distance and/or each possible source distance given the corresponding target distance, or on other quantities computed using the frequency with which a given jump distance between two words in the source language occurs with a given jump distance between the words in the target language linked to those two source language words.
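A rough sketch of how the jump-distance log-odds table might be built and scaled is given below; the input representation (a list of (source distance, target distance) pairs observed between links in the simpler alignment) and the handling of unseen jumps are assumptions for illustration.

    import math
    from collections import Counter

    def jump_log_odds_table(jump_pairs):
        # Estimate JO(d_t | d_s) = (C(t_dist = d_t and s_dist = d_s) + 1) /
        #                          (C(t_dist != d_t and s_dist = d_s) + 1)
        # and scale each log-odds value by the largest absolute value computed.
        joint = Counter(jump_pairs)                        # C(s_dist, t_dist)
        src_marginal = Counter(s for s, _ in jump_pairs)   # C(s_dist)
        log_odds = {}
        for (s_dist, t_dist), c in joint.items():
            log_odds[(s_dist, t_dist)] = math.log((c + 1.0) /
                                                  (src_marginal[s_dist] - c + 1.0))
        # scale factor, as described in the text; guard against an all-zero table
        max_abs = max((abs(v) for v in log_odds.values()), default=1.0) or 1.0
        return {key: value / max_abs for key, value in log_odds.items()}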
While the discriminative models discussed above are relatively straightforward to describe, finding the optimal alignment according to these models is non-trivial. Adding a link for a new pair of words can affect the non-monotonicity scores, the 1-to-many score, and the unlinked word score differently, depending on what other links are present in the alignment.
However, a beam search procedure can be used which is highly effective in finding good alignments, when used with these models. This was discussed in brief with respect to FIG. 3 in which an aligned sentence pair is selected, and the set of all possible association types is accessed to generate a list of all possible association types that apply to the selected sentence pair.
FIG. 4A is a flow diagram illustrating one embodiment for indexing association types using index generator 210. First, as the complete set of association types and scores are read in from data store 212, index generator 210 selects two words (one in each language) in each association type. Reading in the association types, choosing an association type and selecting the words in the association type is indicated by blocks 350 and 352 in FIG. 4A.
Index generator 210 then indexes the given association type by the selected words. This is indicated by block 354, and results in the index of word association types 220 shown in FIG. 2. This is used by word alignment component 202 as discussed below.
It should be noted that index generator 210 may illustratively generate index 220 prior to runtime. It may illustratively be done at set up time or at any other time as desired. FIG. 4B is a flow diagram better illustrating how the list of all possible association types that apply to the selected sentence pair can be generated. Word alignment component 202 generates all possible word pairs in the pair of aligned sentences 214 and 216. The word pairs represent all combinations of words, one being from the language E text fragment 214 and the other being from the language F text fragment 216. Generating all possible word pairs for the aligned sentences (or text fragments) is indicated by block 356 in FIG. 4B. Word alignment component 202 then selects one of the word pairs from the list. This is indicated by block 358. Word alignment component 202 then determines whether there is an index entry for the selected word pair. In doing so, word alignment component 202 accesses index 220 to determine whether it contains an entry for the selected word pair from the list of word pairs generated from the aligned text fragments 214 and 216. Checking for the index entry is indicated by block 360 in FIG. 4B.
If there is no index entry, then word alignment component 202 determines whether there are any more possible word pairs in the list to be considered. If so, processing reverts to block 358 where another word pair is selected. Determination of whether there are more word pairs to be considered is indicated by block 362 in FIG. 4B.
If, at block 360, word alignment component 202 determines that there is an index entry in index 220 for the selected word pair, then word alignment component 202 determines whether the index entry is for a 1-to-1 association type. In other words, component 202 determines whether the index entry is only a link between a single word in text fragment 214 and a single word in text fragment 216, where neither of the words has additional links specified by the association type. This is indicated by block 364. If the index entry is for a 1-to-1 association type, then the association type represented by the index entry is simply added to the list of possible association types generated for the aligned text fragments 214 and 216. This is indicated by block 366 in FIG. 4B. If, at block 364, it is determined that the index entry is not for a 1-to-1 association type, then word alignment component 202 determines whether the other words in the association type represented by the index entry (other than those which are listed in the index entry) occur in the pair of aligned text fragments 214 and 216. This is indicated by block 368 in FIG. 4B. In other words, if the index entry represents a 1-to-many association type, then component 202 determines whether all of the words in the association type occur in the pair of aligned text fragments. If the words do occur in the pair of aligned text fragments 214 and 216, then the association type represented by the index entry is also added to the list of possible association types for the aligned text fragments 214 and 216.
It will be noted that, in accordance with one embodiment, many-to-many association types are not considered. In that case, those association types can be omitted from index 220, in which case the many-to-many association type will never be selected. Other ways of omitting many-to-many association types can be used as well, and it may in some cases be desirable to use such association types, in which case they are left in and treated as a 1-to-many association type at this point.
Once all of the word pairs have been considered as determined at block 362, then the list of possible association types for the aligned text fragments 214 and 216 is sorted based on association scores, from strongest association score to weakest association score. This is indicated by block 370 in FIG. 4B.
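Putting the steps of FIG. 4B together, the list of applicable association types for one sentence pair might be assembled as in the sketch below. The index is assumed to map an (E word, F word) key to association-type objects that expose the words they involve on each side (e_side, f_side) and an association score; these attribute names are assumptions introduced for the example, not names from the text.

    def possible_association_types(e_words, f_words, index):
        # e_words, f_words: the tokens of the two aligned text fragments.
        # index: dict mapping (e_word, f_word) -> list of association types, each with
        #        .e_side, .f_side (tuples of words) and .score.
        applicable, seen = [], set()
        for e in set(e_words):
            for f in set(f_words):
                for assoc in index.get((e, f), ()):     # block 360: is there an index entry?
                    if id(assoc) in seen:
                        continue
                    # blocks 364/368: for 1-to-many types, every word of the type must occur
                    if all(w in e_words for w in assoc.e_side) and \
                       all(w in f_words for w in assoc.f_side):
                        applicable.append(assoc)        # block 366: add to the list
                        seen.add(id(assoc))
        # block 370: sort from strongest to weakest association score
        return sorted(applicable, key=lambda a: a.score, reverse=True)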
In another embodiment of the present invention, instead of first generating all possible word pairs in the sentence pair as in block 356, and then determining which ones index a possible association type for the sentence pair, the possible association types can be determined incrementally as the possible word pairs are generated. That is, as each word pair is generated, the operations indicated in blocks 360, 364, 368, and 366 are performed for that word pair, before the next possible word pair is generated. Once this list of possible association types for the pair of aligned sentences 214 and 216 under consideration has been generated, word alignment component 202 then identifies the best alignment according to word alignment model 204 using the list of possible association types. FIG. 5A is a flow diagram illustrating the process of identifying the best alignment (also represented by blocks 306 and 308 in FIG. 3) in more detail.
Word alignment component 202 first initializes a list of existing alignments to contain only an empty alignment along with its overall score. Since an empty alignment has no links, the overall score for an empty alignment will simply be the total number of words in both sentences, multiplied by the unlinked word feature weight. This is indicated by block 400 in FIG. 5A. Component 202 then selects the highest scoring association type not yet considered from the list of possible association types just generated as described with respect to FIG. 4. Selecting one of the possible association types is indicated by block 402 in FIG. 5A.
Component 202 then incrementally adds all possible instances of the selected association type to copies of each of the alignments in a list of current alignments, keeping the previous alignments as well (before each instance of the association type is added) . This is indicated by block 404 in FIG. 5A. If there is more than one instance, in the aligned text fragments 214 and 216, of the selected association type being processed, then component 202 picks one instance and tries adding that instance to each of the alignments, and repeats that process for each of the instances. As each instance is considered, the alignments created by adding earlier instances are included in the existing potential alignments that component 202 adds the new instance to.
Without pruning, the number of possible alignments generated by component 202 would increase combinatorially. Therefore, the set of alignments is pruned by component 202 as new alignments are generated, as indicated by block 404 in FIG. 5A. One embodiment in which the alignments are pruned as they are generated will be described in greater detail below with respect to FIGS. 5B-1 to 5B-3.
Component 202 iterates through the sorted list of association types, from best to worst, creating new alignments that add links for all instances of the association type currently being considered to existing alignments, potentially keeping both the old and new alignments in the set of possible alignments being generated. This continues until there are no more association types in the list to consider. This is indicated by block 408 in FIG. 5A. The particular way in which the new links are added is also described in greater detail below with respect to FIGS. 5C and 5D.
Once the final set of potential alignments has been generated, component 202 simply outputs the best scoring word alignment 222 (shown in FIG. 2) based upon the score generated by word alignment model 204 (also shown in FIG. 2) . This is indicated by block 410 in FIG. 5A.
FIGS. 5B-1 to 5B-3 (collectively FIG. 5B) are flow diagrams better illustrating how the list of potential alignments might be incrementally generated and pruned during the processing described with respect to block 404 of FIG. 5A.
First, a possible link "I" that is an instance of the selected association type is selected in the sentence pair. This is indicated by block 504 in FIG. 5B.
The set of recent alignments is initialized to be empty. This is indicated by block 506 in FIG. 5B. An existing alignment "A" is selected, and a set of new alignments is created. The new alignments are ways of adding "I" to "A". Selecting the existing alignment and creating a set of new alignments is indicated by blocks 508 and 510 in FIG. 5B.
Once the set of new alignments is created, an alignment (A') is selected from the set of new alignments. This is indicated by block 512 in FIG. 5B. Component 202 then determines whether A' already exists in the set of recent alignments, or whether it has any many-to-many links in it, or whether it has any one-to-many links with more than a predetermined value "M" branches. This is indicated by block 514 in FIG. 5B. The value of "M" can be set empirically, or in any other desired way, and it is believed that a value of approximately 3 works well.
If, at block 514, word alignment component 202 determines that the selected alignment A' either already exists in the set of recent alignments, or has many-to-many links in it, or has any one-to-many links with more than M branches, then processing moves to block 516 where component 202 determines whether there are any more alignments A' to consider. However, if, at block 514, component 202 determines that A' does not already exist in the set of recent alignments, and it does not have any many-to-many links in it, and it does not have any one-to-many links with more than "M" branches, then word alignment component 202 computes the score for the alignment A' according to the model 204. Computing the score is indicated by block 518 in FIG. 5B.
Word alignment component 202 then determines whether the score for the alignment A' is worse than the best score computed so far by more than a pruning threshold amount. This is indicated by block 520 in FIG. 5B. If so, then the alignment A' is discarded, and processing again moves to block 516 where component 202 determines whether there are any more alignments A' to be processed. Discarding the alignment is indicated by block 522 in FIG. 5B. The pruning threshold can be any desired value and can be set empirically or otherwise.
If, at block 520, word alignment component 202 determines that the score for the alignment A' is not worse than the best score so far by more than the pruning threshold, then component 202 adds the alignment A' to the list of recent alignments. This is indicated by block 524 in FIG. 5B. If the score for the alignment A' is the best score so far, then component 202 notes that as well. This is indicated by block 526. Component 202 then determines whether there are more existing alignments "A" to be processed. If so, processing reverts back to block 508. If not, however, component 202 adds the recent alignments to the set of existing alignments. This is indicated by block 534 in FIG. 5B. The set of existing alignments is then sorted based on the model score and is pruned back to the N-best existing alignments based on the model score. Sorting the set of existing alignments is indicated by block 536 and pruning to the N-best existing alignments is indicated by block 538 in FIG. 5B. While the value of N can be any desired or empirically determined value, it is believed that a value in a range of approximately 10-20 works well.
Component 202 then determines whether there are more possible links "I" that are instances of the selected association type in the sentence pair currently being processed. If so, processing reverts back to block 504. Determining whether there are more existing alignments "A" is indicated by block 528, and determining whether there are more possible links "I" is indicated by block 530.
If, at block 530, component 202 determines that there are no more instances of the association type to be processed, then component 202 has completed the processing indicated by block 404.
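The incremental extension and pruning of blocks 504 through 538 might be sketched as follows. This is one simplified reading, not the patented code. Assumptions: an alignment is a frozenset of (source index, target index) links, make_new_alignments stands in for the FIG. 5C or FIG. 5D procedure, score stands in for model 204, and the constants M, N_BEST, and PRUNE_DELTA are illustrative values chosen within the ranges the text suggests or leaves open.

```python
from collections import Counter

M = 3              # maximum branches for one-to-many links (block 514)
N_BEST = 15        # alignments kept after each instance (blocks 536-538)
PRUNE_DELTA = 8.0  # pruning threshold relative to the best score (block 520)

def violates_fanout(alignment):
    """True if the alignment has a many-to-many link or a one-to-many link with more than M branches."""
    src = Counter(s for s, _ in alignment)
    tgt = Counter(t for _, t in alignment)
    for s, t in alignment:
        if src[s] > 1 and tgt[t] > 1:    # many-to-many link
            return True
        if src[s] > M or tgt[t] > M:     # one-to-many link with too many branches
            return True
    return False

def add_instances_with_pruning(alignments, instances, make_new_alignments, score):
    """alignments maps each existing alignment to its model score."""
    for link in instances:                               # block 504
        recent = {}                                      # block 506
        best = max(alignments.values())
        for a in list(alignments):                       # block 508
            for a_new in make_new_alignments(a, link):   # block 510
                if a_new in recent or violates_fanout(a_new):   # block 514
                    continue
                s = score(a_new)                         # block 518
                if s < best - PRUNE_DELTA:               # blocks 520-522: discard
                    continue
                recent[a_new] = s                        # block 524
                best = max(best, s)                      # block 526
        alignments.update(recent)                        # block 534
        # Blocks 536-538: sort and prune back to the N best alignments.
        alignments = dict(sorted(alignments.items(), key=lambda kv: kv[1],
                                 reverse=True)[:N_BEST])
    return alignments
```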
FIG. 5C illustrates in more detail one embodiment for creating a set of new alignments that add a link to an existing alignment (represented by block 510 in FIG. 5B). FIG. 5C illustrates the process for the embodiment in which word alignment model 204 uses the log-likelihood ratio metric in computing the word association features 230.
Component 202 first initializes the set of new alignments to be empty. This is indicated by block 540 in FIG. 5C. Next, for each existing link I' in alignment "A" that conflicts with the instance "I", component 202 adds an alignment to the set of new alignments that has a link for the instance "I" plus all the links in the alignment "A", except the existing link I'. This is indicated by block 542 in FIG. 5C. Component 202 then adds an alignment to the set of new alignments that has "I" plus all the links in "A". This is indicated by block 544 in FIG. 5C.
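A small Python sketch of this FIG. 5C step follows. It assumes an alignment is a frozenset of (source index, target index) links and treats two links as conflicting when they share a source or target word; that reading of "conflicts" is an assumption made for illustration.

```python
def conflicts(link_a, link_b):
    # Assumed definition: links conflict if they share a source or a target word.
    return link_a[0] == link_b[0] or link_a[1] == link_b[1]

def make_new_alignments_llr(alignment, link):
    new_alignments = []                                  # block 540
    for existing in alignment:                           # block 542
        if conflicts(existing, link):
            # A link for "I" plus all links in "A" except the conflicting link I'.
            new_alignments.append((alignment - {existing}) | {link})
    # Block 544: a link for "I" plus all the links in "A".
    new_alignments.append(alignment | {link})
    return new_alignments
```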
An extra pruning technique can also be used with the LLR-based model. In generating the list of possible association types to be used in aligning a given sentence pair, we use only association types which have the best association score for this sentence pair for one of the word types involved in the association. The idea is to discard associations that are not likely to be used. For example, in data from the Canadian Parliament, "Prime Minister" and "premier ministre" frequently occur in parallel sentence pairs. In one illustrative training corpus, the association scores for each pair of one of these English words and one of these French words are as follows:

4125.02019332218  Minister  ministre
2315.88778082931  Prime     premier
1556.9205658087   Prime     ministre
1436.06392959541  Minister  premier
All four pairs have quite high association scores, but in aligning a sentence pair that contains both "Prime Minister" and "premier ministre", we would not consider the associations between "Prime" and "ministre" and between "Minister" and "premier", because in those two pairings, neither word is the most strongly associated with the other for this sentence pair. This pruning step can be applied as the list of possible association types for a selected sentence pair is being generated in block 304, just before block 306.
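One way to implement this extra pruning step is sketched below, assuming association types are given as (source word, target word, score) tuples for the words present in the current sentence pair; the representation is an assumption for illustration.

```python
def prune_association_types(assoc_types):
    """Keep only associations whose score is the best for at least one of the two words."""
    best_for_source, best_for_target = {}, {}
    for src, tgt, score in assoc_types:
        best_for_source[src] = max(best_for_source.get(src, float("-inf")), score)
        best_for_target[tgt] = max(best_for_target.get(tgt, float("-inf")), score)
    return [(src, tgt, score) for src, tgt, score in assoc_types
            if score == best_for_source[src] or score == best_for_target[tgt]]
```

On the example above, Minister-ministre is the best association for both "Minister" and "ministre", and Prime-premier is the best for both "Prime" and "premier", so both are kept; Prime-ministre and Minister-premier are the best for neither word and are discarded.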
FIG. 5D is a flow diagram illustrating how new links are added to existing alignments in the embodiment in which model 204 uses the conditional link probabilities in computing the word association features 230.
Component 202 simply lets the set of new alignments contain only an alignment having a link for the instance "I" plus all links in the alignment "A" that do not conflict with the instance "I". This is indicated by block 546 in FIG. 5D.

FIG. 6 is a flow diagram illustrating one embodiment for optimizing the feature weights (λ_i) for the features used by model 204. Prior to discussing FIG. 6, a brief discussion of perceptron learning is provided. In one embodiment, the optimization technique is a modified version of averaged perceptron learning. Perceptron learning and averaged perceptron learning techniques are known. Briefly, starting with an initial set of feature weight values, perceptron learning iterates through the annotated training data multiple times, comparing, for each sentence pair, the best alignment (a_hyp) according to the current model with the reference alignment (a_ref). At each sentence pair, the weight for each feature is incremented by the difference between the value of the feature for the reference alignment and the value of the feature for the best alignment according to the model, as follows:

λ_i ← λ_i + (f_i(a_ref, e, f) − f_i(a_hyp, e, f))   Eq. 6

The updated feature weights are used to compute a_hyp for the next sentence pair.
Iterating through the data continues until the weights stop changing, because a_ref = a_hyp for each sentence pair, or until some other stopping condition is met. In the averaged perceptron learning technique, the feature weights for the final model are the average of the weight values over all the data, rather than simply the values after the final sentence pair of the final iteration.
In accordance with one embodiment of the optimization technique, the present system averages the weight values over each pass through the data, rather than over all passes. It is believed that this leads to faster convergence. After each pass of perceptron learning through the data, another pass is made through the data with feature weights fixed to their average value for the previous learning pass, in order to evaluate current performance of the model. The system iterates over this procedure until a local optimum is found.
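A bare Python sketch of the Eq. 6 update for one training sentence pair is shown below; feature_vector and best_alignment are hypothetical helpers returning a dictionary of feature values and the model-best alignment, respectively. The per-pass averaging and the learning rate used by the present system appear in the fuller sketch following the FIG. 6 discussion below.

```python
def perceptron_update(weights, e, f, a_ref, feature_vector, best_alignment):
    """Apply the Eq. 6 update for one annotated sentence pair (e, f, a_ref)."""
    a_hyp = best_alignment(weights, e, f)
    ref_feats = feature_vector(a_ref, e, f)
    hyp_feats = feature_vector(a_hyp, e, f)
    for k in weights:
        # Eq. 6: lambda_i <- lambda_i + (f_i(a_ref, e, f) - f_i(a_hyp, e, f))
        weights[k] += ref_feats.get(k, 0.0) - hyp_feats.get(k, 0.0)
    return weights
```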
Also, in accordance with one embodiment of the present system, a fixed weight is provided for the word association feature 230. It is believed that this feature is of significant importance in the model, and the weight can be fixed to any desired or empirically determined value. In one embodiment, the weight is fixed to 1.0. Allowing all weights to vary permits many equivalent sets of weights that differ only by a constant scale factor. Fixing one weight thus eliminates a spurious apparent degree of freedom.
By eliminating this degree of freedom, and fixing one of the weights, the present system thus employs a version of perceptron learning that uses a learning rate parameter. As is known, the perceptron update rules involve incrementing each weight by the difference in the feature values being compared. If the feature values are discrete, however, the minimum difference may be too large compared to the unweighted association score. Therefore, the present system multiplies the feature value difference by a learning rate parameter η to allow smaller increments when needed as follows:
λ_i ← λ_i + η (f_i(a_ref, e, f) − f_i(a_hyp, e, f))   Eq. 7

For the CLP-based model, based on the typical feature values expected, the learning rate can be set to any empirically determined value. In one embodiment, the learning rate is set to 0.01, although different rates can be used and optimizations on the rate can be performed as desired. For the LLR-based model, the LLR scores can become very large (such as 100,000 for a 500,000 pair corpus) but small differences can be significant. Thus, small differences in the weighting values are also likely to be significant. This means that a learning rate small enough to allow convergence on a desired weight value may require a very large number of iterations through the data in order to reach those values. Thus, in accordance with one embodiment, the present system uses a progression of learning rates, starting at a relatively large value (which can be empirically determined, and one example of which is approximately 1000) and reducing the rate successively until a final desired learning rate is reached. Of course, the level of reduction can be empirically determined or set as desired. In one embodiment, the learning rate is reduced successively by an order of magnitude until a learning rate of 1 is reached. Of course, other values can be used as well. At each transition between learning rates, the feature weights are reinitialized to the optimum values found with the previous learning rate. This can be done based on error rate or any other desired measure.

With this in mind, FIG. 6 is a flow diagram illustrating one embodiment in which the parameter weights (λ_i) are trained or optimized. First, all of the weights, and the learning rate, are initialized. This is indicated by block 560 in FIG. 6. In one embodiment, the weight for the word association feature 230 is initialized to 1.0 and the weights for the other features are initialized to 0.0.
A training sample sentence pair, annotated with its correct word alignment, is then processed as described above with respect to the previous figures, in order to obtain a best guess at a word alignment for the sentence pair. This is indicated by block 562.
The best guess is then compared to the known correct alignment for the sentence pair. This is indicated by block 564.
The weights (λ_i) are then adjusted based on the difference in feature values between the correct alignment and the best guess. This is indicated by block 566 in FIG. 6, and is shown in Eq. 5 above.
It is then determined whether enough data has been processed in order to check the error rate. This is indicated by block 568. In other words, it may be desirable not to check the error rate after processing each training sentence pair. Instead, it may be desirable to process a plurality of different training sentence pairs before checking the error rate. Therefore, determining whether enough data has been processed to check the error rate is indicated by block 568. Illustratively, it may be desirable to process all the annotated training sentence pairs once between occurrences of checking the error rate.
If so, then the error rate is checked to determine whether it is still decreasing since the last time it was checked. This check is performed using the average values for the feature weights since the last time the error rate was checked, applied to a specified set of annotated sentence pairs. This set may be the entire set of training sentence pairs used in adjusting the feature weights, a subset of that set, or an independent set of annotated sentence pairs. This is indicated by block 569 in FIG. 6. If the error rate is still decreasing, as indicated by block 570, then processing reverts back to block 562 where training continues by processing additional training samples. In other words, it is determined that the learning rate is still set at a level such that, as learning continues, the model is getting better and the error rate associated with the model is decreasing. Therefore, processing continues using the current learning rate.
However, if, at block 570, the error rate has started to increase (or is at least no longer decreasing), then it is determined that training has flattened out with respect to the current learning rate. It is thus determined whether there are any additional learning rates to try during the training process. This is indicated by block 572. If not, training is complete and the weights that yielded the lowest error rate are used. However, if, at block 572, it is determined that there are more learning rates to try, then the learning rate is set to its next lowest value, and the feature weights are reset to the values that have yielded the lowest error rate so far. This is indicated by blocks 574 and 576. Processing then continues at block 562, in which training samples are again processed in order to continue training the model feature weights λ_i.
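The training schedule of FIG. 6 might be sketched as follows. This is a simplified illustration under several assumptions: decode, feature_vector, and evaluate_error_rate are hypothetical helpers; the error rate is evaluated on the same annotated set for simplicity; the learning rates run from 1000 down to 1 by factors of ten, as one embodiment describes; and the word association feature weight is held fixed at 1.0.

```python
def train_weights(data, feature_names, decode, feature_vector, evaluate_error_rate):
    weights = {name: 0.0 for name in feature_names}     # block 560
    weights["word_association"] = 1.0                   # fixed-weight feature
    best_weights, best_error = dict(weights), float("inf")

    rate = 1000.0
    while rate >= 1.0:                                  # progression of learning rates
        improving = True
        while improving:
            summed = {k: 0.0 for k in weights}
            for e, f, a_ref in data:                    # blocks 562-566: Eq. 7 updates
                a_hyp = decode(weights, e, f)
                ref_feats = feature_vector(a_ref, e, f)
                hyp_feats = feature_vector(a_hyp, e, f)
                for k in weights:
                    if k != "word_association":         # keep the fixed weight at 1.0
                        weights[k] += rate * (ref_feats.get(k, 0.0) - hyp_feats.get(k, 0.0))
                    summed[k] += weights[k]
            averaged = {k: v / len(data) for k, v in summed.items()}
            error = evaluate_error_rate(averaged, data)  # blocks 568-569
            if error < best_error:                       # block 570: still improving
                best_error, best_weights = error, dict(averaged)
            else:
                improving = False                        # flattened out at this rate
        rate /= 10.0                                     # blocks 572-576: next rate,
        weights = dict(best_weights)                     # reset to the best weights so far
    return best_weights
```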
Practitioners skilled in the art will recognize that many other variations of perceptron learning may be used to optimize the model feature weights, and that other learning methods, such as maximum entropy modeling or maximum margin methods, including support vector machines, may be used to optimize the feature weights. If the number of feature weights is small, direct optimization methods such as Powell's method or the downhill simplex method may also be used.
In one alternative embodiment, a support vector machine (SVM) method for structured output spaces can be used. The method can be based on known large margin methods for structured and interdependent output variables. Like standard SVM learning, this method tries to find the hyperplane that separates the training examples with the largest margin. Despite a very large number of possible output labels (e.g., all possible alignments of a given pair of sentences), the optimal hyperplane can be efficiently approximated given the desired error rate, using a cutting plane algorithm. In each iteration of the algorithm, it adds the "best" incorrect predictions given the current model as constraints, and optimizes the weight vector subject only to them.
One advantage of this algorithm is that it does not pose special restrictions on the output structure, as long as "decoding" can be done efficiently. This can be beneficial because several features mentioned above are believed to be very effective in this task, but are difficult to incorporate into structured learning methods that require decomposable features. This method also allows a variety of loss functions, but can also use only simple 0-1 loss, which in this context means whether or not the alignment of a sentence pair is completely correct. In the embodiment in which an SVM method is used, the
SVM method has a number of free parameters, which can be tuned in a number of different ways. One way is by minimizing training set AER. Another is five-fold cross validation. In this method, training is performed five times on 80% of the training data and testing on the other 20%, with five disjoint subsets used for testing. The parameter values yielding the best average AER on the five test subsets of the training set are used to train the final model on the entire training set.
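The five-fold tuning could be organized as in the sketch below; train_svm and compute_aer are hypothetical helpers, candidate_params is whatever grid of free-parameter settings is being tried, and the even split into five folds is an assumption for illustration.

```python
def five_fold_tune(data, candidate_params, train_svm, compute_aer):
    fold_size = len(data) // 5
    folds = [data[i * fold_size:(i + 1) * fold_size] for i in range(5)]
    best_params, best_avg_aer = None, float("inf")
    for params in candidate_params:
        aers = []
        for i in range(5):
            test = folds[i]                              # held-out 20%
            train = [x for j, fold in enumerate(folds) if j != i for x in fold]
            aers.append(compute_aer(train_svm(train, params), test))
        avg_aer = sum(aers) / len(aers)
        if avg_aer < best_avg_aer:
            best_avg_aer, best_params = avg_aer, params
    # The final model is trained on the entire training set with the best parameters.
    return train_svm(data, best_params), best_params
```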
It will also be appreciated that log-conditional-odds-based features are not only useful in bilingual word alignment as discussed above. In addition, log conditional odds can be used to define features in other applications as well. FIG. 7 shows a classifier 600 that uses conditional log odds to define features and that can be used as a binary classifier, a multi-class classifier or a structured classifier. Classifier 600 receives an input 602 and generates a classified output 604. One application for classifier 600 is for sequence segmentation, such as segmenting East Asian characters into words.
For word segmentation, one might want to use as a local feature the log-probability that a segment is a word, given the character sequence it spans. A curious property of this feature is that it induces a counterintuitive asymmetry between the is-word and is-not-word cases: the component generative model can effectively dictate that a certain chunk is not a word, by assigning it a very low probability (driving the feature value to negative infinity), but it cannot dictate that a chunk is a word, because the log-probability is bounded above. If instead the log conditional odds, log [P(y | x) / P(¬y | x)], is used, the asymmetry disappears.

Such a log-odds feature provides much greater benefit than the log-probability, and it is useful to include such a feature even when the model also includes indicator function features for every word in the training corpus.

Therefore, a feature that can be used in word segmentation is the smoothed log conditional odds that a given sub-sequence x_{a,b} = (x_a, ..., x_{b−1}) forms a word, estimated as:

log [ (wordcount(x_{a,b}) + 1) / (nonwordcount(x_{a,b}) + 1) ]   EQ. 8

where wordcount(x_{a,b}) is the number of times x_{a,b} forms a word in the training set, and nonwordcount(x_{a,b}) is the number of times x_{a,b} occurs not segmented into a single word. As in our word alignment features, we use "add-one" smoothing so that neither the numerator nor the denominator of the ratio is ever 0.
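The EQ. 8 feature might be computed as in the sketch below. The counting scheme is an assumption made for illustration: wordcount counts occurrences of a character span as a complete word in the segmented training data, and nonwordcount is its total number of occurrences (up to a maximum span length max_len) minus the word occurrences.

```python
import math
from collections import Counter

def build_counts(segmented_sentences, max_len=6):
    """segmented_sentences: lists of words, each word a string of characters."""
    word_count, span_count = Counter(), Counter()
    for words in segmented_sentences:
        word_count.update(words)
        chars = "".join(words)
        for i in range(len(chars)):
            for j in range(i + 1, min(i + max_len, len(chars)) + 1):
                span_count[chars[i:j]] += 1
    return word_count, span_count

def log_odds_is_word(span, word_count, span_count):
    # EQ. 8: log((wordcount(x_{a,b}) + 1) / (nonwordcount(x_{a,b}) + 1))
    nonword = max(span_count[span] - word_count[span], 0)
    return math.log((word_count[span] + 1) / (nonword + 1))
```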
The word alignment problem and sequence segmentation problems described above are both instances of structured classification problems, because a word alignment or a segmentation of a sequence can both be viewed as structured labels of the inputs, with the individual alignment links or word segment boundaries being partial labels. Log-conditional-odds features were also found very useful in a multi-class classification model for another natural language prediction problem, which can be another application for classifier 600, in which a fixed set of more than two unstructured labels is used. The exemplary problem is to predict Japanese case markers given the rest of the words in a Japanese sentence. Case markers are words which indicate grammatical relations (such as subject, object, and location) of the complement noun phrase to the predicate.
The following Japanese sentence (in Table 1) shows an example of the use of case markers in Japanese. The case markers that need to be predicted are kara (from) for the second phrase and ni for the third phrase (the case markers are underlined below) .
TABLE 1
chiisai   toki   kara   ainshutain   ni   akogarete   ita
small     time   from   Einstein     NI   idolize     PROG-PAST

"(he) has idolized Einstein since (he was) little"
This task can be viewed as a multi-class classification problem where the goal is to predict one case marker for each phrase in the Japanese sentence, given a set of features from the context of the phrase, and possibly features from a corresponding English sentence. In one example, the number of classes to be predicted is 19, which includes 18 case markers and a class NONE, meaning that a phrase has no case marker.
One exemplary model for this task uses nominal-valued features, such as HeadPOS (the part of speech tag of the head word of the phrase), Headword, PrevHeadWord, NextHeadWord (head words of surrounding phrases), as well as features from a syntactic dependency tree. To describe the model in more detail, some notation is now introduced. Denote the context features of a phrase by a vector of nominal features x = (x_1, ..., x_m), where there is one dimension for each of the nominal features included in the model. Denote also by y_1, y_2, ..., y_k the k possible classes (case marker assignments). The form of the model is as follows:
P(y_i | x) ∝ exp( λ_i^0 · log [P(y_i) / P(¬y_i)] + Σ_{j=1}^{m} λ_i^j · log [P(y_i | x_j) / P(¬y_i | x_j)] )   EQ. 9
In EQ. 9, the trainable parameters of the model are the λ_i^j parameters, one parameter for every class y_i, i = 1...k, and for every feature type j, j = 0...m. Feature type 0 is added to model the prior likelihood of every class. Each of the log-odds features in this equation represents the logarithm of the probability of class y_i given some feature, divided by the probability of the complement of y_i (denoted by ¬y_i) given that feature. The complement of class y_i is the set of all classes other than y_i.
The odds values were estimated by smoothed relative frequency estimates from the training data. The unsmoothed relative frequency estimates are:
P(y_i | x_j) / P(¬y_i | x_j) = count(y_i, x_j) / (count(x_j) − count(y_i, x_j))   EQ. 10
In one embodiment, add-α smoothing is used to improve this estimate. The add-α estimate is as follows:
P(y_i | x_j) / P(¬y_i | x_j) = (count(y_i, x_j) + α) / (count(x_j) − count(y_i, x_j) + (k − 1)α)   EQ. 11
In EQ. 11, k denotes the number of classes as before.
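A small sketch of how the smoothed odds of EQ. 11 might be estimated from training counts and turned into the log-odds features of EQ. 9 is given below. The data layout (a list of nominal feature values paired with a class label per instance) and the value α = 0.1 are illustrative assumptions, not values taken from the text.

```python
import math
from collections import Counter

def train_counts(examples):
    """examples: iterable of (feature_value_list, class_label) pairs."""
    joint, feature_count, classes = Counter(), Counter(), set()
    for feature_values, label in examples:
        classes.add(label)
        for x in feature_values:
            joint[(label, x)] += 1
            feature_count[x] += 1
    return joint, feature_count, len(classes)

def log_odds_feature(y, x, joint, feature_count, k, alpha=0.1):
    # EQ. 11: (count(y, x) + alpha) / (count(x) - count(y, x) + (k - 1) * alpha)
    numerator = joint[(y, x)] + alpha
    denominator = feature_count[x] - joint[(y, x)] + (k - 1) * alpha
    return math.log(numerator / denominator)
```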
The parameters λ_i^j of the model can be trained using a standard technique. For instance, the sum of the conditional log-likelihoods of the training data instances can be maximized, and a Gaussian prior on the parameters can be included. Comparing this model using log-odds features to one using log-probability features, it was found that the model using log-odds features outperformed the latter. In particular, the model using log-probability features had the following form:
P(y_i | x) ∝ exp( λ_i^0 · log P(y_i) + Σ_{j=1}^{m} λ_i^j · log P(x_j | y_i) )   EQ. 12
Equation 12 closely corresponds to the method of discriminatively training parameters (the λ_i^j) for weighting log-probability features from generative models.
For the model in EQ. 12, the probability features P(x_j | y_i) were estimated by smoothed relative frequency estimates from the training data, using add-α smoothing. The smoothed relative frequency estimate for P(x_j | y_i) is:
P(x_j | y_i) = (count(y_i, x_j) + α) / (count(y_i) + V·α)
where V denotes the number of possible values for the j-th nominal feature x_j. A range of values for the α smoothing parameter can be tried in any known manner for both Equations 9 and 12, finding approximately optimal values for them. This can be done empirically, using a systematic approach, or otherwise.
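For comparison, the smoothed estimate of P(x_j | y_i) used by the EQ. 12 baseline might be computed as sketched below; as before, the data layout and α = 0.1 are illustrative assumptions.

```python
import math
from collections import Counter

def train_class_counts(examples, num_features):
    """examples: iterable of (feature_value_tuple, class_label) pairs."""
    joint = Counter()        # (label, j, value) counts
    class_count = Counter()  # label counts
    value_sets = [set() for _ in range(num_features)]
    for feature_values, label in examples:
        class_count[label] += 1
        for j, v in enumerate(feature_values):
            joint[(label, j, v)] += 1
            value_sets[j].add(v)
    return joint, class_count, [len(vs) for vs in value_sets]

def log_prob_feature(y, j, v, joint, class_count, num_values, alpha=0.1):
    # Smoothed P(x_j = v | y) = (count(y, x_j = v) + alpha) / (count(y) + V_j * alpha)
    return math.log((joint[(y, j, v)] + alpha) /
                    (class_count[y] + num_values[j] * alpha))
```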
Therefore, conditional log odds (either a single log of conditional odds or a sum of a plurality of logs of estimated conditional odds) can be used in classification. This can be done in binary classification, multi-class classification (with a fixed set of more than 2 classes) and structured classification (such as bilingual word alignment or word segmentation). In some cases of classification, the logarithm of the ratio of the probability of some label given a feature and the probability of not the label, given the feature, produces the same results as the logarithm of the ratio of the probability of the feature given the label and the probability of the feature given not the label. That is, log [P(y | x) / P(¬y | x)] can be replaced by log [P(x | y) / P(x | ¬y)]. Even in cases where the results are not mathematically equivalent, such substitutions may be effective. The present embodiment is intended to cover, in both multi-class classification and structured classification, such substitutions.
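The relationship behind this substitution can be made explicit with a short derivation (added here for illustration; it is not part of the original text). By Bayes' rule, the two log ratios differ only by the prior log odds of the label:

```latex
\log \frac{P(y \mid x)}{P(\neg y \mid x)}
  = \log \frac{P(x \mid y)\,P(y)}{P(x \mid \neg y)\,P(\neg y)}
  = \log \frac{P(x \mid y)}{P(x \mid \neg y)} + \log \frac{P(y)}{P(\neg y)}
```

When the prior log-odds term is constant or is absorbed by a freely weighted class-prior feature (such as feature type 0 in EQ. 9), the two definitions lead to equivalent models; when it is not, the results can differ, which is why such substitutions are described as potentially effective rather than always equivalent.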
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims .

Claims

WHAT IS CLAIMED IS:
1. A word alignment system, comprising: a weighted linear word alignment model (204), linearly combining feature values for a plurality of different features (230-245), given a word alignment for a pair of text fragments (214, 216), each of the different features
(230-245) being weighted by a corresponding feature weight, wherein the plurality of different features (230-245) in the word alignment model (204) comprise a word association metric indicative of a strength of association between words in the pair of text fragments (214, 216), the word association metric being based on one of: surface statistics from a training corpus (206); or a conditional link probability word association metric, a statistic computable from a count of a number of times the words in the pair of text fragments (214, 216) are linked by another word alignment system, and a count of a number of times the words co-occur, in text fragments from a training corpus (206); an automatic training component (202) configured to train
(566) the feature weights; and a word alignment component (202) configured to receive the pair of text fragments (114, 116) and access the word alignment model (204) to identify a best scoring word alignment (222) for the pair of text fragments (114,
116) .
2. The word alignment system of claim 1 wherein the word association metric is based on conditional odds of a set of words being linked in the text fragments (114, 116) .
3. The word alignment system of claim 1 wherein the word association metric is based on the conditional link probability word association metric.
4. The word alignment system of claim 1 wherein the word association metric is based on statistics derived from a contingency table.
5. The word alignment system of claim 1 wherein the word association metric comprises a log-likelihood word association metric.
6. The word alignment system of claim 1 wherein the plurality of different features (230-245) comprise: an exact match feature (237) indicative of a number of times words are linked to identical words .
7. The word alignment system of claim 1 wherein the plurality of different features (230-245) comprise: association score rank features (231) based on association ranks for linked words in the given word alignment.
8. The word alignment system of claim 1 wherein the text fragments (114, 116) are in a source language and a target language and wherein the plurality of different features (230-245) comprise : a jump distance difference feature (233) based on differences between consecutive aligned source or target words in the given word alignment and a distance between target or source words in the target language text fragment (216) that the source or target words are aligned to.
9. The word alignment system of claim 1 wherein the plurality of features (230-245) comprise: many-to-one jump distance features (235) based on a number of words, between a first and last word linked to a given word in the given word alignment, that are not linked to the given word in the given word alignment.
10. The word alignment system of claim 1 wherein the plurality of features (230-245) comprise: lexical features (239) indicative of a count of a number of links between words having a frequency of joint occurrence in the training corpus that exceeds a given threshold.
11. The word alignment system of claim 1 wherein the plurality of features (230-245) comprise: lexical features (239) indicative of a count of a number of unlinked occurrences of words having a frequency of occurrence in the training corpus that exceeds a given threshold.
12. The word alignment system of claim 1 wherein the text fragments (114, 116) are in a source language and a target language and wherein the plurality of features (230-245) comprise: a parameterized jump distance feature (245) based on a count of a number of times that a given jump distance between two words in the source language occurs with a given jump distance between words in the target language linked to the two words in the source language.
13. The word alignment system of claim 1 wherein the given alignment aligns a source language text fragment (114) with a target language text fragment (116), and wherein the plurality of features comprise: a symmetrized non-monotonicity feature (241) based on a sum of magnitudes of backward jumps in word order in the target language text fragment (116) in the given word alignment relative to a word order in the source language text fragment (114), and a sum of magnitudes of backward jumps in word order in the source language text fragment (114) in the given word alignment relative to the word order in the target language text fragment (116).
14. The word alignment system of claim 1 wherein the automatic training component (208) comprises a structured support vector machine component.
15. The word alignment system of claim 1 wherein the plurality of different features (218) comprise: a non-monotonicity feature (232) indicative of a non- monotonicity of the given word alignment.
16. The word alignment system of claim 1 wherein the plurality of different features (218) comprise: an unlinked words feature (236) indicative of a number of unlinked words in the given word alignment.
17. The word alignment system of claim 1 wherein the plurality of different features (218) comprise: a multiple links feature (234) indicative of a number of words in the given alignment that are linked to multiple other words in the given alignment.
18. The word alignment system of claim 1 and further comprising: an index (220) of word association types coupled to the word alignment component (202) .
19. The word alignment system of claim 1 wherein the automatic training component (202) is a discriminative automatic training component.
20. A method of identifying a bilingual word alignment (222) for a pair of aligned text fragments (214, 216), comprising: generating (304) a list of all possible bilingual association types for the pair of aligned text fragments (214, 216); performing a plurality of processing passes to build alignments by adding (404) instances of a selected association type from the list to copies of previous alignments built during a previous processing pass such that, if a given previous alignment includes a conflicting link (I'), conflicting with the selected association type, then generating new alignments by adding (544) to copies of the given previous alignment one or more permutations of adding a link for the selected association type and keeping and removing any conflicting links (I') from the given previous alignment; and
identifying (308) a best alignment (222) of the alignments based on a score from a weighted linear model (204) .
21. A method of identifying a bilingual word alignment for a pair of aligned text fragments (214, 216), comprising: generating (304) a list of all possible bilingual association types for the pair of aligned text fragments (214, 216); sorting (306) the list based on word association scores (230) for the bilingual association types in the list; building (404) possible alignments using the bilingual association types in the sorted list; and identifying (308) a best alignment (222) of the possible alignments based on a score from a weighted linear model
(204) .
22. A method of performing multi-class or structured classification, comprising: selecting (300) an input (602) to be classified; calculating a logarithm of a ratio of a probability that the input (602) has one or more selected features (230-245) given that it has a given label or partial label, and a probability that the input (602) has the one or more selected features (230-245) given that it does not have the given label or partial label; and assigning (308) a multi-class or structured classifier label to the input (602) based on the logarithm of the ratio calculated.
PCT/US2006/037562 2005-10-03 2006-09-26 Weighted linear model WO2007041117A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US11/242,290 US7957953B2 (en) 2005-10-03 2005-10-03 Weighted linear bilingual word alignment model
US11/242,290 2005-10-03
US11/485,015 US20070083357A1 (en) 2005-10-03 2006-07-12 Weighted linear model
US11/485,015 2006-07-12

Publications (1)

Publication Number Publication Date
WO2007041117A1 true WO2007041117A1 (en) 2007-04-12

Family

ID=37906487

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/037562 WO2007041117A1 (en) 2005-10-03 2006-09-26 Weighted linear model

Country Status (2)

Country Link
US (1) US20070083357A1 (en)
WO (1) WO2007041117A1 (en)

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8666725B2 (en) 2004-04-16 2014-03-04 University Of Southern California Selection and use of nonstatistical translation components in a statistical machine translation framework
JP5452868B2 (en) 2004-10-12 2014-03-26 ユニヴァーシティー オブ サザン カリフォルニア Training for text-to-text applications that use string-to-tree conversion for training and decoding
US8676563B2 (en) * 2009-10-01 2014-03-18 Language Weaver, Inc. Providing human-generated and machine-generated trusted translations
US8886517B2 (en) 2005-06-17 2014-11-11 Language Weaver, Inc. Trust scoring for language translation systems
US10319252B2 (en) 2005-11-09 2019-06-11 Sdl Inc. Language capability assessment and training apparatus and techniques
US8943080B2 (en) * 2006-04-07 2015-01-27 University Of Southern California Systems and methods for identifying parallel documents and sentence fragments in multilingual document collections
US8209163B2 (en) * 2006-06-02 2012-06-26 Microsoft Corporation Grammatical element generation in machine translation
US7865352B2 (en) * 2006-06-02 2011-01-04 Microsoft Corporation Generating grammatical elements in natural language sentences
US8886518B1 (en) 2006-08-07 2014-11-11 Language Weaver, Inc. System and method for capitalizing machine translated text
US20080097982A1 (en) * 2006-10-18 2008-04-24 Yahoo! Inc. System and method for classifying search queries
JP5082374B2 (en) * 2006-10-19 2012-11-28 富士通株式会社 Phrase alignment program, translation program, phrase alignment device, and phrase alignment method
US8433556B2 (en) * 2006-11-02 2013-04-30 University Of Southern California Semi-supervised training for statistical word alignment
US9122674B1 (en) 2006-12-15 2015-09-01 Language Weaver, Inc. Use of annotations in statistical machine translation
US8615389B1 (en) 2007-03-16 2013-12-24 Language Weaver, Inc. Generation and exploitation of an approximate language model
US8831928B2 (en) 2007-04-04 2014-09-09 Language Weaver, Inc. Customizable machine translation service
US20080275743A1 (en) * 2007-05-03 2008-11-06 Kadambe Shubha L Systems and methods for planning
US8825466B1 (en) 2007-06-08 2014-09-02 Language Weaver, Inc. Modification of annotated bilingual segment pairs in syntax-based machine translation
US8452585B2 (en) * 2007-06-21 2013-05-28 Microsoft Corporation Discriminative syntactic word order model for machine translation
US8639509B2 (en) * 2007-07-27 2014-01-28 Robert Bosch Gmbh Method and system for computing or determining confidence scores for parse trees at all levels
JP5342760B2 (en) * 2007-09-03 2013-11-13 株式会社東芝 Apparatus, method, and program for creating data for translation learning
US8180624B2 (en) * 2007-09-05 2012-05-15 Microsoft Corporation Fast beam-search decoding for phrasal statistical machine translation
US8060360B2 (en) * 2007-10-30 2011-11-15 Microsoft Corporation Word-dependent transition models in HMM based word alignment for statistical machine translation
US8521516B2 (en) * 2008-03-26 2013-08-27 Google Inc. Linguistic key normalization
US20100017293A1 (en) * 2008-07-17 2010-01-21 Language Weaver, Inc. System, method, and computer program for providing multilingual text advertisments
US9176952B2 (en) * 2008-09-25 2015-11-03 Microsoft Technology Licensing, Llc Computerized statistical machine translation with phrasal decoder
US9098812B2 (en) * 2009-04-14 2015-08-04 Microsoft Technology Licensing, Llc Faster minimum error rate training for weighted linear models
US8990064B2 (en) * 2009-07-28 2015-03-24 Language Weaver, Inc. Translating documents based on content
US8380486B2 (en) * 2009-10-01 2013-02-19 Language Weaver, Inc. Providing machine-generated translations and corresponding trust levels
US8229929B2 (en) 2010-01-06 2012-07-24 International Business Machines Corporation Cross-domain clusterability evaluation for cross-guided data clustering based on alignment between data domains
US8589396B2 (en) * 2010-01-06 2013-11-19 International Business Machines Corporation Cross-guided data clustering based on alignment between data domains
US10417646B2 (en) 2010-03-09 2019-09-17 Sdl Inc. Predicting the cost associated with translating textual content
CN102237081B (en) * 2010-04-30 2013-04-24 国际商业机器公司 Method and system for estimating rhythm of voice
US9201871B2 (en) * 2010-06-11 2015-12-01 Microsoft Technology Licensing, Llc Joint optimization for machine translation system combination
US20120158398A1 (en) * 2010-12-17 2012-06-21 John Denero Combining Model-Based Aligner Using Dual Decomposition
US8655640B2 (en) * 2011-03-02 2014-02-18 Raytheon Bbn Technologies Corp. Automatic word alignment
US11003838B2 (en) 2011-04-18 2021-05-11 Sdl Inc. Systems and methods for monitoring post translation editing
US8694303B2 (en) 2011-06-15 2014-04-08 Language Weaver, Inc. Systems and methods for tuning parameters in statistical machine translation
US8886515B2 (en) 2011-10-19 2014-11-11 Language Weaver, Inc. Systems and methods for enhancing machine translation post edit review processes
US8942973B2 (en) 2012-03-09 2015-01-27 Language Weaver, Inc. Content page URL translation
US10261994B2 (en) 2012-05-25 2019-04-16 Sdl Inc. Method and system for automatic management of reputation of translators
US9201876B1 (en) * 2012-05-29 2015-12-01 Google Inc. Contextual weighting of words in a word grouping
US9152622B2 (en) 2012-11-26 2015-10-06 Language Weaver, Inc. Personalized machine translation via online adaptation
US20160132491A1 (en) * 2013-06-17 2016-05-12 National Institute Of Information And Communications Technology Bilingual phrase learning apparatus, statistical machine translation apparatus, bilingual phrase learning method, and storage medium
US20150026146A1 (en) * 2013-07-17 2015-01-22 Daniel Ivan Mance System and method for applying a set of actions to one or more objects and interacting with the results
US9213694B2 (en) 2013-10-10 2015-12-15 Language Weaver, Inc. Efficient online domain adaptation
CN106156010B (en) * 2015-04-20 2019-10-11 阿里巴巴集团控股有限公司 Translate training method, device, system and translation on line method and device
US10042929B2 (en) 2015-06-09 2018-08-07 International Business Machines Corporation Modification of search subject in predictive search sentences
US9400781B1 (en) * 2016-02-08 2016-07-26 International Business Machines Corporation Automatic cognate detection in a computer-assisted language learning system
US11074420B2 (en) * 2019-06-25 2021-07-27 International Business Machines Corporation Self-learning input-driven brokering of language-translation engines
CN111291185B (en) * 2020-01-21 2023-09-22 京东方科技集团股份有限公司 Information extraction method, device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000019194A (en) * 1998-09-09 2000-04-06 윤종용 K-best part-of-speech tagging apparatus and method based on statistics
WO2000033211A2 (en) * 1998-11-30 2000-06-08 Koninklijke Philips Electronics N.V. Automatic segmentation of a text
KR20010094627A (en) * 2000-03-31 2001-11-01 김영택 Machine translation method decomposing compound noun using markov model
KR20040018008A (en) * 2002-08-24 2004-03-02 한국전자통신연구원 Apparatus for tagging part of speech and method therefor
US20040098247A1 (en) * 2002-11-20 2004-05-20 Moore Robert C. Statistical method and apparatus for learning translation relationships among phrases

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09128396A (en) * 1995-11-06 1997-05-16 Hitachi Ltd Preparation method for bilingual dictionary
DE69837979T2 (en) * 1997-06-27 2008-03-06 International Business Machines Corp. System for extracting multilingual terminology
US7054803B2 (en) * 2000-12-19 2006-05-30 Xerox Corporation Extracting sentence translations from translated documents
US7734459B2 (en) * 2001-06-01 2010-06-08 Microsoft Corporation Automatic extraction of transfer mappings from bilingual corpora
US7191115B2 (en) * 2001-06-20 2007-03-13 Microsoft Corporation Statistical method and apparatus for learning translation relationships among words
WO2004001623A2 (en) * 2002-03-26 2003-12-31 University Of Southern California Constructing a translation lexicon from comparable, non-parallel corpora
US7349839B2 (en) * 2002-08-27 2008-03-25 Microsoft Corporation Method and apparatus for aligning bilingual corpora
US8548794B2 (en) * 2003-07-02 2013-10-01 University Of Southern California Statistical noun phrase translation
US20050216253A1 (en) * 2004-03-25 2005-09-29 Microsoft Corporation System and method for reverse transliteration using statistical alignment
US7672830B2 (en) * 2005-02-22 2010-03-02 Xerox Corporation Apparatus and methods for aligning words in bilingual sentences
US8433556B2 (en) * 2006-11-02 2013-04-30 University Of Southern California Semi-supervised training for statistical word alignment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000019194A (en) * 1998-09-09 2000-04-06 윤종용 K-best part-of-speech tagging apparatus and method based on statistics
WO2000033211A2 (en) * 1998-11-30 2000-06-08 Koninklijke Philips Electronics N.V. Automatic segmentation of a text
KR20010094627A (en) * 2000-03-31 2001-11-01 김영택 Machine translation method decomposing compound noun using markov model
KR20040018008A (en) * 2002-08-24 2004-03-02 한국전자통신연구원 Apparatus for tagging part of speech and method therefor
US20040098247A1 (en) * 2002-11-20 2004-05-20 Moore Robert C. Statistical method and apparatus for learning translation relationships among phrases

Also Published As

Publication number Publication date
US20070083357A1 (en) 2007-04-12

Similar Documents

Publication Publication Date Title
US7957953B2 (en) Weighted linear bilingual word alignment model
US20070083357A1 (en) Weighted linear model
US7680647B2 (en) Association-based bilingual word alignment
US7542893B2 (en) Machine translation using elastic chunks
Lopez Statistical machine translation
Ling et al. Latent predictor networks for code generation
Carlberger et al. Implementing an efficient part‐of‐speech tagger
Wang et al. Decoding algorithm in statistical machine translation
US7536295B2 (en) Machine translation using non-contiguous fragments of text
JP5774751B2 (en) Extracting treelet translation pairs
US6182026B1 (en) Method and device for translating a source text into a target using modeling and dynamic programming
Dahlmeier et al. A beam-search decoder for grammatical error correction
KR101130457B1 (en) Extracting treelet translation pairs
CN104484322A (en) Methods and systems for automated text correction
Bulian et al. Tomayto, tomahto. beyond token-level answer equivalence for question answering evaluation
Axelrod Factored language models for statistical machine translation
EP2759945A2 (en) Sampling and optimization in phrase-based machine translation using an enriched language model representation
Köper et al. Applying multi-sense embeddings for german verbs to determine semantic relatedness and to detect non-literal language
Luong et al. Find the errors, get the better: Enhancing machine translation via word confidence estimation
Costa-jussà An overview of the phrase-based statistical machine translation techniques
JP2011243087A (en) Automatic word mapping device, method and program therefor
CN111814493A (en) Machine translation method, device, electronic equipment and storage medium
Modrzejewski Improvement of the Translation of Named Entities in Neural Machine Translation
Dugonik et al. The usage of differential evolution in a statistical machine translation
Zhang et al. Towards machine translation in semantic vector space

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06815511

Country of ref document: EP

Kind code of ref document: A1