WO2008154104A1 - Generating a phrase translation model by iteratively estimating phrase translation probabilities - Google Patents

Generating a phrase translation model by iteratively estimating phrase translation probabilities

Info

Publication number
WO2008154104A1
Authority
WO
WIPO (PCT)
Application number
PCT/US2008/063403
Other languages
French (fr)
Inventor
Robert C. Moore
Aaron Fred Bobick
Original Assignee
Microsoft Corporation
Application filed by Microsoft Corporation
Publication of WO2008154104A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/40: Processing or translation of natural language
    • G06F 40/42: Data-driven translation
    • G06F 40/44: Statistical methods, e.g. probability models
    • G06F 40/45: Example-based machine translation; Alignment

Abstract

A phrase translation model is trained without assuming a segmentation of training data into non-overlapping phrase pairs. Instead, the training algorithm assumes that any particular phrase instance has only a single phrase instance in another language as its translation in that instance, but that phrases can overlap. The model is trained by computing expected phrase alignment counts, deriving selection probabilities from current estimates of translation probabilities and then re-estimating phrase translation probabilities according to the expected phrase alignment counts computed. The model is trained by iterating over these steps until one or more desired stopping criteria are reached. The trained model can be deployed in a machine translation system.

Description

GENERATING A PHRASE TRANSLATION MODEL BY ITERATIVELY ESTIMATING PHRASE TRANSLATION PROBABILITIES
BACKGROUND
[0001] Machine translation is a process by which a textual input in a first language (a source language) is automatically translated into a textual output in a second language (a target language). Some machine translation systems attempt to translate a textual input word for word, by translating individual words in the source language into individual words in the target language. However, this has led to translations that are not very fluent.
[0002] Therefore, some systems currently translate based on phrases. Machine translation systems that translate sequences of words in the source text, as a whole, into sequences of words in the target language, as a whole, are referred to as phrase-based translation systems.
[0003] During training, these systems receive a word-aligned bilingual corpus of sentence translation pairs, where words in a source training text are aligned with corresponding words in a target training text. Based on the word-aligned bilingual corpus, phrase pairs are extracted that are likely translations of one another. By way of example, using English as the source text and French as the target text, phrase-based translation systems find a sequence of words in English for which a sequence of words in French is a translation of that English word sequence.
[0004] Phrase translation tables are important to these types of phrase-based statistical machine translation systems. The phrase translation tables provide pairs of phrases that are used to construct a large set of potential translations for each input sentence, along with feature values associated with each phrase pair. The feature values can include estimated probabilities that one of the phrases in the phrase pair will translate as the other on a particular occasion. These estimates are used to select a best translation from a given set of potential translations.
[0005] For purposes of the present discussion, a "phrase" can be a single word or any contiguous sequence of words or other tokens, such as punctuation symbols, that are treated as words by the translation system. It need not correspond to a complete linguistic constituent.
[0006] There are a variety of ways of building phrase translation tables. One current system for building phrase translation tables selects, from a word alignment provided for a parallel bilingual training corpus, all pairs of phrases (up to a given length) that meet two criteria. A selected phrase pair must contain at least one pair of words linked by the word alignment and must not contain any words that have word-alignment links to words outside the phrase pair. That is, the word-alignment links must not cross phrase pair boundaries.
[0007] Conditional phrase translation probabilities for the phrase pairs have, in the past, been estimated simply by marginalizing the counts of phrase instances as follows:
$$p(x \mid y) = \frac{C(x,y)}{\sum_{x'} C(x',y)} \qquad \text{Eq. 1}$$
[0008] In Eq. 1, p(x|y) is the probability of a phrase y in a source language being translated as a phrase x in a target language; C(x,y) is the count of the number of instances of target phrase x being paired with source phrase y in the phrase pair instances extracted from the word-aligned corpus; and x' varies over all target phrases that were ever seen in the extracted phrase pairs, paired with source phrase y.
[0009] Phrase translation probabilities according to Eq. 1 are thus estimated based on how often the phrases in a particular pair are identified as translations of each other, compared to how often the phrases in question are identified as translations of other phrases.
[0010] This method can be used to estimate the conditional probabilities of both target phrases given source phrases and source phrases given target phrases, by reversing the roles of source phrases and target phrases in Eq. 1.
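As a concrete illustration of Eq. 1, the following minimal Python sketch estimates one direction of the phrase table from a list of extracted phrase pair instances. It is a sketch rather than code from the patent; the function name and the representation of instances as (target, source) string tuples are assumptions for illustration.

```python
from collections import Counter

def eq1_phrase_table(instances):
    """Estimate p(x|y) by marginalizing phrase pair instance counts (Eq. 1).

    `instances` is a list of (x, y) tuples, one per phrase pair instance
    extracted from the word-aligned corpus, where x is the target phrase
    and y is the source phrase.
    """
    pair_counts = Counter(instances)                  # C(x, y)
    source_totals = Counter(y for _, y in instances)  # sum over x' of C(x', y)
    # p(x|y) = C(x, y) / sum_{x'} C(x', y)
    return {(x, y): c / source_totals[y] for (x, y), c in pair_counts.items()}
```

Running the same code with the tuple elements swapped gives the source-given-target direction, as the preceding paragraph notes.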
[0011] There are a number of problems associated with this system for estimating phrase translation probabilities. A first is that it counts the same instance of a word sequence multiple times if that instance participates in multiple possible phrase pairs, and gives all possible phrase pair instances equal weight, no matter how many other possible phrase pair instances there are for the word sequence instances involved. Thus, the more ambiguity there is about how a particular word sequence instance aligns, the more overall weight that instance receives.
[0012] A second problem is that the method fails to take into account instances of a word sequence that cannot be part of any phrase pair, because of constraints from word alignment links, in estimating the translation probabilities for that word sequence. Thus, if a word sequence "a" has 999 instances in which it cannot be aligned to any phrase because of these constraints, and one instance in which it can be aligned only to word sequence "b", then p(b|a) will be estimated to be 1.0, instead of 0.001, which might seem more plausible.
[0013] An example may be helpful. Assume that a French target phrase almost always aligns to a source English phrase "object linking and embedding". Given the rules of French grammar, the French phrase would likely have the word-for-word translation "linking and embedding of objects". Note that within this French phrase, there is no contiguous sequence of words that corresponds to "object linking". Now suppose that the system attempted to estimate the translation probability for the source (English) language phrase "object linking" and found one French instance of a phrase that translated word-for-word as "linking of objects". Assume further that there existed 999 instances of the French phrase corresponding to "object linking and embedding". However, those 999 instances would not even be identified as containing possible translations of "object linking", because the word alignments would create a crossing pattern in which word-alignment links crossed phrase boundaries. Therefore, the system would only notice the one instance of "object linking" translated as the French instance of "linking of objects" and would not take into account the 999 other instances of "object linking" that occurred, but did not translate to any French phrase. The system would thus calculate the probability of the source language phrase "object linking" being translated as the French translation of "linking of objects" as 1.0, when in fact it should probably be closer to 0.001.
[0014] In addition, this method of estimating phrase translation probabilities for the phrase table does not use information from instances in which word-alignment constraints make the alignment of a word sequence more certain, in order to help decide how to align other instances of the word sequence in the absence of word alignment constraints.
[0015] In order to address some of these difficulties, some prior approaches have attempted to estimate phrase translation probabilities directly, using generative models trained on a parallel corpus by the expectation maximization (EM) algorithm. Such models assume that a source sentence "a" is segmented into some number of phrases, and for each phrase selected in "a", a phrase position is selected in the target sentence "b" that is being generated. For each selected phrase in "a" and the corresponding phrase position in "b", a target phrase is chosen, and the target sentence is read off from the sequence of target phrases.
[0016] This prior method thus assumes that the parallel corpus has a hidden segmentation into non-overlapping phrases, so that no particular word instance participates in more than one phrase instance. These types of systems have performed relatively poorly, and it is believed that the poor translation quality generated by these types of models results from the assumption that the training text has a unique segmentation into non-overlapping phrases.
[0017] The parameters for these models are estimated using the EM algorithm. The E step deals with the problem of counting the same instance of a word sequence multiple times by normalizing fractional counts so that the more ambiguity there is, the lower the resulting fractional counts. It also deals with the issue of using information from instances in which word-alignment constraints make the alignment of a word sequence more certain by weighting the fractional counts by their probability as estimated by the preceding M step.
[0018] However, prior art models following this approach have assumed a uniform probability distribution over all possible segmentations, and therefore fail to take into account instances of a word sequence that cannot be part of any phrase pair because of constraints from word-alignment links. Therefore, given the freedom to select whatever segmentation maximizes the likelihood of any given sentence pair, the EM algorithm tends to favor segmentations that yield source phrases with as few occurrences as possible, since more of the associated conditional probability mass can be concentrated on the target phrase alignments that are possible in the sentence being analyzed.
[0019] Applied to a model of this type, the EM algorithm therefore tends to maximize the probability of the training data by concentrating probability mass on the rarest source phrases it can construct to cover the training data. The resulting probability estimates thus have less generalizability to unseen data than if probability mass were concentrated on more frequently occurring source phrases.
[0020] The discussion above is merely provided for general background information and is not intended to be used as an aid in determining the scope of the claimed subject matter.
SUMMARY
[0021] A phrase translation model is trained without assuming a segmentation of training data into non-overlapping phrase pairs. Instead, the training algorithm assumes that any particular phrase instance has only a single phrase instance in another language as its translation in that instance, but that phrases can overlap.
[0022] The training algorithm estimates phrase translation probabilities according to the model by computing expected phrase alignment counts, deriving selection probabilities from previous estimates of phrase translation probabilities, and then re-estimating phrase translation probabilities according to the expected phrase alignment counts computed. These steps are iterated over until one or more desired stopping criteria are reached. The estimated phrase translation probabilities can be deployed in a machine translation system.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] FIG. 1 is a block diagram of one machine translation training system.
[0024] FIG. 2 is a flow diagram illustrating the overall operation of the system shown in FIG. 1.
[0025] FIG. 3A shows one example of a word-aligned corpus.
[0026] FIG. 3B shows one example of extracted phrase pairs.
[0027] FIG. 4 is a flow diagram illustrating one embodiment of the operation of a model training component shown in FIG. 1.
[0028] FIG. 5 is a flow diagram illustrating one embodiment of estimating phrase alignment probabilities.
[0029] FIG. 6 is a block diagram illustrating a statistical phrase translation model deployed in a machine translation system.
[0030] FIG. 7 is a block diagram of one illustrative computing environment.
DETAILED DESCRIPTION
[0031] FIG. 1 is a block diagram of a machine translation training system 100 in accordance with one embodiment.
System 100 includes word alignment component 102, and model training component 104. Model training component 104 illustratively includes phrase pair extractor 106 and feature value estimation component 108. FIG. 1 also shows that system 100 has access to bilingual corpus 110. In one embodiment, bilingual corpus 110 includes sentence translation pairs. The sentence translation pairs are pairs of sentences, each pair of sentences having one sentence that is in the source language and a translation of that sentence that is in the target language.
[0032] Model training component 104 illustratively generates a phrase translation table 112 for use in a statistical machine translation system 116.
[0033] FIG. 2 is a block diagram illustrating the overall operation of one embodiment of system 100, shown in FIG. 1.
Word alignment component 102 first accesses the sentence pairs in bilingual training corpus 110 and computes a word alignment for each sentence pair in the training corpus 110. Accessing the bilingual corpus 110 is indicated by block 150 in FIG. 2, and performing word alignment is indicated by block 152. The word alignment is a relation between the words in the two sentences in a sentence pair. In one illustrative embodiment, word alignment component 102 is a discriminatively trained word alignment component that generates word aligned bilingual corpus 118. Of course, any other word alignment component 102 can be used as well.
[0034] FIG. 3A illustrates three different sentence pairs 200, 202 and 204. In the examples shown, the sentence pairs include one French sentence and one English sentence, and the lines between the words in the French and English sentences represent the word alignments calculated by word alignment component 102.
[0035] Once a word-aligned, bilingual corpus of sentence translation pairs 118 is generated, model training component 104 generates the phrase translation table 112 for use in statistical phrase translation system 116. First, phrase pair extractor 106 in model training component 104 extracts possible phrase pairs from the word-aligned bilingual corpus 118 for inclusion in the phrase translation table 112. Extracting the possible phrase pairs is indicated by block 154 in FIG. 2.
[0036] In one embodiment, every phrase pair is extracted, up to a given phrase length, that is consistent with the word alignment that is annotated in corpus 118. In one embodiment, each consistent phrase pair has at least one word alignment between words within the phrases, and no words in either phrase (source or target) are aligned with any words outside of the phrases.
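The consistency test just described lends itself to a short sketch. The following is a minimal illustration, assuming the word alignment is given as a set of (i, j) position links; the function and parameter names are not from the patent.

```python
def extract_phrase_pairs(links, src_len, tgt_len, max_len=7):
    """Enumerate phrase span pairs consistent with a word alignment.

    `links` is a set of (i, j) tuples linking source position i to target
    position j. A candidate pair of spans is kept only if it contains at
    least one link, and no link has one end inside the spans and the
    other end outside (i.e., no link crosses the phrase pair boundaries).
    """
    pairs = []
    for i1 in range(src_len):
        for i2 in range(i1, min(i1 + max_len, src_len)):
            for j1 in range(tgt_len):
                for j2 in range(j1, min(j1 + max_len, tgt_len)):
                    has_internal_link = any(
                        i1 <= i <= i2 and j1 <= j <= j2 for i, j in links)
                    has_crossing_link = any(
                        (i1 <= i <= i2) != (j1 <= j <= j2) for i, j in links)
                    if has_internal_link and not has_crossing_link:
                        pairs.append(((i1, i2), (j1, j2)))
    return pairs
```

The quadruple loop is written for clarity rather than speed; a production extractor would prune span combinations using the alignment links directly.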
[0037] FIG. 3B shows some of the phrases that are extracted for the word aligned sentence pairs shown in FIG. 3A. The phrases in FIG. 3B are exemplary only. The possible phrase pairs are illustrated by block 120 in FIG. 1.
[0038] For each extracted phrase pair (s,t) (where s is the source portion of the phrase pair and t is the target portion of the phrase pair), feature value estimation component 108 calculates estimated values of features associated with the phrase pairs. In the embodiment described herein, the estimated values associated with the phrase pairs are translation probabilities that indicate the probability of the source portion of the phrase pair being translated as the target portion, and vice versa. Estimating the phrase translation probabilities for the possible phrase pairs 120 is indicated by block 156 in FIG. 2.
[0039] Model training component 104 then outputs phrase translation table 112, which includes phrase pairs along with translation probabilities. This is indicated by block 158 in FIG. 2.
[0040] The phrase translation table can then be incorporated into a statistical machine translation system 116, in which it is used to provide one or more target phrases that are likely translations of source phrases that match segments of a source text to be translated. The probabilities, according to the phrase translation table, of source phrases translating as particular target phrases, and of target phrases translating as particular source phrases, are used to predict the most likely target language translation of the source language text, which is then output by the machine translation system. How this is done is well known to practitioners skilled in the art of statistical machine translation. Deployment of the phrase table 112 in system 116 is indicated by block 160 in FIG. 2.
[0041] Model training component 104 uses an iterative training procedure, similar to the Expectation Maximization (EM) algorithm, in order to estimate the phrase translation probabilities. FIG. 4 is a flow diagram illustrating this in greater detail.
[0042] Model training component 104 first selects possible phrase translation pairs 120. This is indicated by block 300 in FIG. 4. Feature value estimation component 108 then initializes the translation probability distributions for each of the possible source and target phrases. This is indicated by block 302 in FIG. 4.
[0043] Steps 300 and 302 can be performed in a number of different ways. For instance, component 108 can initialize the translation probability distributions with uniform phrase translation probabilities. In other words, each possible translation for a given phrase can be initially set to have the same probability. Of course, the initial phrase translation probability distributions can be set in a non-uniform way as well. For instance, word alignment component 102 illustratively uses word-translation probabilities that can be obtained from any of a wide variety of word translation models (or word alignment models). The word-translation probabilities can be used to set the initial phrase translation probability distributions in a non-uniform way.
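Uniform initialization (block 304) is straightforward; here is a sketch under the same assumed (target, source) tuple representation used in the earlier example:

```python
def init_uniform(possible_pairs):
    """Block 304: give every candidate translation of a source phrase
    the same initial probability."""
    candidates = {}
    for x, y in possible_pairs:  # x = target phrase, y = source phrase
        candidates.setdefault(y, set()).add(x)
    return {(x, y): 1.0 / len(xs)
            for y, xs in candidates.items()
            for x in xs}
```

A non-uniform initialization (block 306) would instead score each pair from word-translation probabilities supplied by a word alignment model; the details depend on which word translation model is available.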
[0044] Similarly, the selection of possible phrase translation pairs at step 300 in FIG. 4 can be done in a variety of different ways as well. One way is described above with respect to phrase pair extractor 106 shown in FIG. 1. However, if the initial phrase translation probability distributions are not simply set as uniform probability distributions, but instead are set to some reasonable approximation (as discussed above), then phrase pair extractor 106 need not place any initial restrictions on what phrase pairs are considered. Instead, phrase pairs with relatively low phrase translation probabilities are simply pruned as processing continues. Setting the initial phrase translation probabilities to be uniform is indicated by block 304 in FIG. 4, and setting them to be non-uniform is indicated by block 306.
[0045] For each instance of a pair of phrases identified as a possible phrase translation pair, the probability of that pair of phrases being aligned is estimated. This is indicated by block 310 in FIG. 4. (How alignment probabilities are estimated is explained below with respect to FIG. 5.) For each possible phrase translation pair, the alignment probabilities of all instances of the phrase pair are summed to give the expected phrase alignment count for that pair. This is indicated by block 312 in FIG. 4.
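In code, the expected-count computation of blocks 310-312 can be sketched as follows. `enumerate_instances` and `alignment_prob` are assumed callables, not names from the patent: the first yields candidate phrase pair instances within one sentence pair, and the second is the per-instance alignment probability detailed with FIG. 5 below.

```python
from collections import defaultdict

def expected_alignment_counts(word_aligned_corpus, enumerate_instances,
                              alignment_prob):
    """Blocks 310-312: sum per-instance alignment probabilities into the
    expected phrase alignment count E(x, y) for each possible phrase pair.

    Each instance is assumed to be a (phrase, span) tuple, so instance[0]
    is the phrase string used to key the expected counts.
    """
    expected = defaultdict(float)
    for sentence_pair in word_aligned_corpus:
        for x_inst, y_inst in enumerate_instances(sentence_pair):
            p = alignment_prob(x_inst, y_inst, sentence_pair)
            expected[(x_inst[0], y_inst[0])] += p
    return expected
```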
[0046] Once the expected phrase alignment counts have been computed, component 108 re-estimates the phrase translation probabilities according to the expected phrase alignment counts just computed. Re-estimating the phrase translation probabilities is indicated by block 314 in FIG. 4. Re-estimation of phrase translation probabilities is performed by dividing the expected phrase alignment count for a pair of phrases by the total number of instances of the source or target phrase in the corpus (regardless of whether they participate in possible phrase pairs). That is, if the phrase alignment model predicts a number x of expected alignments of source phrase "a" to target phrase "b" in the corpus, and there are y occurrences of "a" and z occurrences of "b" in the corpus, then the probability of an instance of "a" being translated as "b" can be estimated as x/y, and the probability of an instance of "b" being translated as "a" can be estimated as x/z. This is expressed mathematically as follows:

$$p_t(b \mid a) = \frac{E(a,b)}{C(a)}, \qquad p_t(a \mid b) = \frac{E(a,b)}{C(b)} \qquad \text{Eq. 2}$$
[0047] where E denotes expected counts and C denotes observed counts.
[0048] The use of the total observed counts of particular source and target phrases (instead of marginalized expected joint counts) in estimating the conditional phrase translation probabilities causes the conditional phrase translation probability distributions for a given word sequence to generally sum to less than 1.0. In one embodiment, the missing probability mass is interpreted as the probability that a given word sequence does not translate as any contiguous word sequence in the other language. Therefore, this addresses some difficulties with prior systems in which phrase translation probability estimates do not properly take account of word sequences that have no phrase alignment consistent with the word alignment.
[0049] Estimation component 108 iterates over steps 310-314 (estimating alignment probabilities and computing the expected phrase alignment counts, and then re-estimating the phrase translation probabilities) until one or more desired stopping criteria are met. This is indicated by block 316 in FIG. 4.
[0050] The desired stopping criteria can be any desired criteria. For instance, it may be that estimation component 108 simply performs a fixed number of iterations. The fixed number can be empirically determined or determined otherwise. This is indicated by block 318 in FIG. 4. It may also be, however, that the iterations continue until a measurement of model quality stops improving by a desired amount. For instance, it may be that the model training iteratively continues until it no longer decreases the conditional entropy of the phrase translation model as estimated on a held-out sample of source text. Of course, other measurements of model quality, and indeed other stopping criteria, could be used as well. The measured model quality criterion is indicated by block 320 in FIG. 4. Component 108 eventually outputs the final phrase translation table 112 for use in statistical machine translation system 116. Outputting the phrase translation table is indicated by block 322 in FIG. 4, and deploying that table in a machine translation system is indicated by block 324 in FIG. 4. It may be that table 112 contains phrase pairs that have corresponding phrase translation probabilities that meet a threshold, or alternatively no threshold is used.
[0051] The details of how phrase alignment probabilities are estimated (as indicated in block 310 in FIG. 4) are illustrated in FIG. 5. Phrase alignment is modeled as a stochastic process that combines two subprocesses of selection. Each possible source phrase instance can be viewed as independently selecting a possible target phrase instance, and each possible target phrase instance can be viewed as independently selecting a possible source phrase instance. A source phrase instance and a target phrase instance combine to form an aligned phrase pair instance if and only if each selects the other. It will thus be seen that the probability of a source phrase instance and a target phrase instance forming an aligned translation pair instance can be estimated as the product of the estimated probabilities of each selecting the other, since it is stipulated that each selection is independent.
[0052] It will be seen that the model just described does not assume a segmentation of the source text into non-overlapping segments, as prior art iteratively trained models have done. Instead, it makes only the weaker assumption that each specific phrase instance selects only one other phrase instance, and thus can align to at most one other phrase instance. Thus, nothing in the present method of estimating alignment probabilities prevents alignment probabilities from summing to more than 1.0 for phrase instances that are not identical, even if they overlap.
[0053] For example, if an English sentence in a training corpus of sentence translation pairs contains the phrase "the government" and the corresponding French sentence contains the phrase "le gouvernement", there is nothing in the present method of estimating alignment probabilities that would prevent the possible phrase translation pair instances "le gouvernement/the government" and "gouvernement/government" from both having estimated alignment probabilities close to 1.0, so that their sum would be close to 2.0.
[0054] With prior art iterative methods that have assumed a unique segmentation into phrases, the alignment probabilities for these two possible phrase translation pair instances would have to sum to 1.0 or less, because the models these methods are based on do not allow two overlapping phrases to both participate in phrase alignments. Both possible phrase translation pair instances could have some non-zero alignment probability, due to uncertainty about what the segmentation is; however, the total alignment probability for overlapping phrases with prior art iterative methods must always sum to 1.0 or less, because the models these methods are based on assume that possible alignments for each of two overlapping phrases cannot both be correct.
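Before the step-by-step walk-through of FIG. 5, the pieces described so far can be assembled into a sketch of the outer iteration (blocks 310-324 of FIG. 4), continuing the sketches above. `C_src` and `C_tgt` are assumed to hold the total observed occurrence counts of each source and target phrase in the corpus, whether or not those occurrences participate in possible phrase pairs, and a fixed iteration count stands in for whichever stopping criterion (block 318 or block 320) is chosen.

```python
def reestimate(expected, C_src, C_tgt):
    """Block 314 / Eq. 2: p_t(x|y) = E(x,y)/C(y) and p_t(y|x) = E(x,y)/C(x).

    Dividing by total observed counts rather than marginalized expected
    counts lets each conditional distribution sum to less than 1.0; the
    missing mass is the probability of translating as no contiguous
    phrase at all.
    """
    p_fwd = {(x, y): e / C_src[y] for (x, y), e in expected.items()}
    p_rev = {(x, y): e / C_tgt[x] for (x, y), e in expected.items()}
    return p_fwd, p_rev

def train(corpus, enumerate_instances, make_alignment_prob,
          C_src, C_tgt, p_fwd, p_rev, iterations=10):
    """Blocks 310-324: alternate expected counts with re-estimation,
    starting from tables initialized per block 304 or 306."""
    for _ in range(iterations):
        # Build an alignment-probability function around the current
        # translation probability estimates (FIG. 5 / Eq. 3 below).
        align = make_alignment_prob(p_fwd, p_rev)
        expected = expected_alignment_counts(corpus, enumerate_instances, align)
        p_fwd, p_rev = reestimate(expected, C_src, C_tgt)
    return p_fwd, p_rev
```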
[0055] To begin the process of estimating phrase alignment probabilities, a word-aligned sentence pair is selected from a training corpus. This is indicated by block 250 in FIG. 5. Then, all instances of possible phrase translation pairs within the sentence pair are identified. (It will be recalled that the possible phrase translation pairs were previously selected as indicated by block 300 in FIG. 4.) This is indicated by block 252 in FIG. 5.
[0056] Then, the probabilities for each source phrase instance selecting each possible target phrase instance are estimated, as indicated by block 254 in FIG. 5, and for each possible target phrase instance selecting each source phrase instance, indicated by block 256 in FIG. 5, restricted to those permitted by the set of possible translation pairs.
[0057] The estimated probability of a phrase instance y selecting a phrase instance x is proportional to the probability of x translating as y according to the previous translation probability estimates, normalized over the possible non-null choices for x presented by the word-aligned sentence pair.
[0058] This can be expressed symbolically as follows:

$$p_s(x \mid y) = \frac{p_t(x \mid y)}{\sum_{x'} p_t(x' \mid y)} \qquad \text{Eq. 3}$$
[0059] where p_s denotes the selection probability, p_t denotes the translation probability, and x' ranges over the phrase instances within the sentence pair that could possibly align to the phrase instance y according to the set of possible phrase translation pairs.
[0060] After estimating the selection probabilities in each direction for a possible phrase pair instance within the aligned sentence pair, the alignment probability is estimated as the product of the selection probabilities. This is indicated by block 258 in FIG. 5.
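Continuing the sketch, Eq. 3 and the product rule of block 258 might be written as below. Phrase instances are again assumed to be (phrase, span) tuples; `p_fwd[(x, y)]` holds the current estimate of source phrase y translating as target phrase x, and `p_rev[(x, y)]` the reverse direction.

```python
def source_selects(x, y, target_choices, p_fwd):
    """Eq. 3: source instance y selects target instance x with probability
    p_t(x|y) renormalized over the non-null targets x' that y could
    possibly align to in this sentence pair."""
    denom = sum(p_fwd.get((x2[0], y[0]), 0.0) for x2 in target_choices)
    return p_fwd.get((x[0], y[0]), 0.0) / denom if denom else 0.0

def target_selects(x, y, source_choices, p_rev):
    """The mirror image: target instance x selects source instance y."""
    denom = sum(p_rev.get((x[0], y2[0]), 0.0) for y2 in source_choices)
    return p_rev.get((x[0], y[0]), 0.0) / denom if denom else 0.0

def alignment_prob(x, y, target_choices, source_choices, p_fwd, p_rev):
    """Block 258: the two selections are stipulated to be independent, so
    the instances align with the product of the selection probabilities."""
    return (source_selects(x, y, target_choices, p_fwd) *
            target_selects(x, y, source_choices, p_rev))
```

A `make_alignment_prob` wrapper, as in the earlier training-loop sketch, would derive `target_choices` and `source_choices` for each instance from the possible pair instances in the sentence pair before calling `alignment_prob`.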
[0061] After estimating all alignment probabilities for the selected sentence pair, it is determined whether any more sentence pairs remain to be processed. If so, the alignment probability estimation procedure is repeated until no more sentence pairs remain to be processed. This is indicated by block 260 in FIG. 5. The estimated alignment probabilities for each possible phrase translation pair instance are output as indicated by block 262 in FIG. 5.
[0062] FIG. 6 is a block diagram showing phrase translation table 112 in use in a statistical machine translation system 116. FIG. 6 shows that system 116 receives a source language input 350 and translates it into a target language output 352. Of course, the input 350 can be one or more words, phrases, sentences, etc., as can be target language output 352. In translating input 350 into output 352, machine translation system 116 illustratively employs phrase translation table 112.
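At runtime, system 116 consults table 112 for target phrases matching each source segment. A toy lookup, with names assumed for illustration rather than taken from the patent:

```python
def candidate_translations(p_fwd, source_phrase, top_k=5):
    """Return the most probable target phrases for one source segment,
    ranked by p_t(target | source) from the phrase translation table."""
    scored = [(x, p) for (x, y), p in p_fwd.items() if y == source_phrase]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]
```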
[0063] FIG. 7 is a block diagram of one illustrative computing environment 400 in which training system 100 or the runtime system shown in FIG. 6 can be used. The computing system environment 400 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the claimed subject matter. Neither should the computing environment 400 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 400.
[0064] Embodiments are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with various embodiments include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, telephony systems, distributed computing environments that include any of the above systems or devices, and the like.
[0065] Embodiments may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Some embodiments are designed to be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules are located in both local and remote computer storage media including memory storage devices. [0066] With reference to FIG. 7, an exemplary system for implementing some embodiments includes a general-purpose computing device in the form of a computer 410. Components of computer 410 may include, but are not limited to, a processing unit 420, a system memory 430, and a system bus 421 that couples various system components including the system memory to the processing unit 420. The system bus 421 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
[0067] Computer 410 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 410 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 410. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
[0068] The system memory 430 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 431 and random access memory (RAM) 432. A basic input/output system 433 (BIOS), containing the basic routines that help to transfer information between elements within computer 410, such as during start-up, is typically stored in ROM 431. RAM 432 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 420. By way of example, and not limitation, FIG. 7 illustrates operating system 434, application programs 435, other program modules 436, and program data 437. System 100 or the runtime system shown in FIG. 6 can reside at any desired location, such as in modules 436 or elsewhere. [0069] The computer 410 may also include other removable/non-removable volatile/nonvolatile computer storage media. By way of example only, FIG. 7 illustrates a hard disk drive 441 that reads from or writes to nonremovable, nonvolatile magnetic media, a magnetic disk drive 451 that reads from or writes to a removable, nonvolatile magnetic disk 452, and an optical disk drive 455 that reads from or writes to a removable, nonvolatile optical disk 456 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 441 is typically connected to the system bus 421 through a non-removable memory interface such as interface 440, and magnetic disk drive 451 and optical disk drive 455 are typically connected to the system bus 421 by a removable memory interface, such as interface 450.
[0070] The drives and their associated computer storage media discussed above and illustrated in FIG. 7 provide storage of computer readable instructions, data structures, program modules and other data for the computer 410. In FIG. 7, for example, hard disk drive 441 is illustrated as storing operating system 444, application programs 445, other program modules 446, and program data 447. Note that these components can either be the same as or different from operating system 434, application programs 435, other program modules 436, and program data 437. Operating system 444, application programs 445, other program modules 446, and program data 447 are given different numbers here to illustrate that, at a minimum, they are different copies. [0071] A user may enter commands and information into the computer 410 through input devices such as a keyboard 462, a microphone 463, and a pointing device 461, such as a mouse, trackball or touch pad. Other input devices (not shown) may include a joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 420 through a user input interface 460 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 491 or other type of display device is also connected to the system bus 421 via an interface, such as a video interface 490. In addition to the monitor, computers may also include other peripheral output devices such as speakers 497 and printer 496, which may be connected through an output peripheral interface 495.
[0072] The computer 410 is operated in a networked environment using logical connections to one or more remote computers, such as a remote computer 480. The remote computer 480 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 410. The logical connections depicted in FIG. 7 include a local area network (LAN) 471 and a wide area network (WAN) 473, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet. [0073] When used in a LAN networking environment, the computer 410 is connected to the LAN 471 through a network interface or adapter 470. When used in a WAN networking environment, the computer 410 typically includes a modem 472 or other means for establishing communications over the WAN 473, such as the Internet. The modem 472, which may be internal or external, may be connected to the system bus 421 via the user input interface 460, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 410, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 7 illustrates remote application programs 485 as residing on remote computer 480. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used. [0074] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims

WHAT IS CLAIMED IS:
1. A method of estimating phrase translation probabilities for use in a machine translation system, comprising:
selecting (300) an instance of a possible phrase translation pair, from a set of possible phrase translation pair instances occurring in a sentence translation pair (200) in a corpus (110) of sentence translation pairs, the possible phrase translation pair instance (120) having an instance of a first phrase in a first language and an instance of a second phrase in a second language, the first and second phrases being possible translations of one another;
estimating (310) an alignment probability that the first phrase instance and the second phrase instance are aligned to each other in the sentence translation pair (200), not excluding the possibility that another phrase instance in the first language, overlapping the first phrase instance in the first language, and another phrase instance in the second language are also aligned to each other in the sentence translation pair (200);
estimating (314) a phrase translation probability for the selected possible phrase translation pair, over the corpus (110) of sentence translation pairs, based on the sum of the alignment probabilities for all instances of the possible phrase translation pair in the corpus (110) of sentence translation pairs; and
iterating (316) over selecting possible phrase translation pair instances, estimating alignment probabilities, and estimating phrase translation probabilities until a desired stopping criterion (318, 320) is met.
2. The method of claim 1 wherein estimating a probability that the first phrase instance and the second phrase instance are aligned to each other comprises:
generating a first selection probability (254) indicative of how likely the second phrase instance is to be selected as a translation of the first phrase instance given the set of instances of possible phrase translation pairs (120) occurring in the given sentence translation pair (200);
generating a second selection probability (256) indicative of how likely the first phrase instance is to be selected as a translation of the second phrase instance given the set of instances of possible phrase translation pairs (120); and
estimating the alignment probability (258) for the selected possible phrase translation pair instance, based on the first and second selection probabilities (254, 256).
3. The method of claim 2 wherein iterating comprises: iterating (316) over the steps of generating a first selection probability (254), generating a second selection probability (256) and estimating an alignment probability (258) until the desired stopping criterion (318, 320) is reached.
4. The method of claim 2 wherein generating a first selection probability comprises: generating the first selection probability (254) based on a phrase translation probability indicative of a probability that the first phrase is translated as the second phrase.
5. The method of claim 4 wherein generating a first selection probability comprises: normalizing the selection probability over all non-null choices for the second phrase instance in the set of possible phrase translation pairs (120) involving the first phrase instance in the given sentence pair.
6. The method of claim 5 and further comprising: prior to generating a first selection probability (254) and prior to generating a second selection probability (256), setting phrase translation probabilities (302), for all phrase pairs in the set of possible phrase translation pairs (120), to an initial value.
7. The method of claim 6 wherein setting the phrase translation probabilities to an initial value comprises: setting (302) the phrase translation probabilities according to a uniform probability distribution (304).
8. The method of claim 6 wherein setting the phrase translation probabilities to an initial value comprises: setting (302) the phrase translation probabilities according to a non-uniform probability distribution (306).
9. The method of claim 1 and further comprising: after iterating (316), outputting (322) the selected possible phrase translation pair (120), with the phrase translation probability, to a phrase translation table (112) for use in the machine translation system (116).
10. The method of claim 1 wherein iterating comprises: iterating for a predetermined number of iterations (318).
11. The method of claim 1 wherein iterating comprises: iterating (320) until a measurement of translation quality produced by the phrase translation probabilities reaches a desired level.
12. The method of claim 1 and further comprising: prior to selecting (300) a possible phrase translation pair, extracting (154) the set of possible phrase translation pairs (120) from a word alignment of the corpus (110) of sentence translation pairs.
13. The method of claim 1 and further comprising: selecting a possible phrase translation pair (120) instance, estimating an alignment probability (310), estimating a phrase translation probability (314), and iterating (316), for each of the set of possible phrase translation pairs (120).
14. A phrase translation model training system, comprising:
a phrase pair extractor (106) configured to extract a set of possible phrase pairs (120) from a corpus (110) of sentence translation pairs, each possible phrase pair (120) including a phrase in a first language and a phrase in a second language; and
a feature value estimation component (108) configured to set phrase translation probabilities for each possible phrase pair (120) in the set to an initial value, select an instance of a possible phrase translation pair (120) from a sentence translation pair in the corpus (110), estimate an alignment probability (310) for the possible phrase translation pair (120) instance, not excluding a possibility that the phrase translation pair instance overlaps with another aligned phrase translation pair instance, estimate a new phrase translation probability (314) for the possible phrase translation pair (120) based on the alignment probability, and iterate (316) over estimating alignment probabilities (310) and translation probabilities (314) until a stopping criterion is met (318, 320).
15. The phrase translation model training system of claim 14 wherein the feature value estimation component (108) is configured to estimate the new phrase translation probability by: computing a sum of the alignment probabilities of all the instances of the phrase translation pair selected from the corpus, and re-estimating the phrase translation probability (314) based on the sum.
16. The phrase translation model training system of claim 15 and further comprising: a word alignment component (102) configured to word-align sentences in a parallel, bi-lingual training corpus (110) of sentence translation pairs.
17. The phrase translation model training system of claim 15 wherein the feature value estimation component (108) is configured to output the possible phrase pair and the re-estimated translation probability to a phrase translation table (112) used in a statistical machine translation system (116).
18. A method of training a phrase alignment model, comprising:
initializing (302) phrase translation probability values for a set of phrase pairs (120) extracted from a corpus (118) of sentence translation pairs, to an initial value (304, 306);
computing (312) an expected phrase alignment count indicative of an expected number of times a selected phrase pair is aligned in the corpus (118), by computing first and second selection probabilities (254, 256) indicative of how likely each phrase in the selected phrase pair (120) is to be a translation of the other phrase in the phrase pair (120), given the set of phrase pairs occurring in each sentence translation pair in the corpus (118), and given a current value of the phrase translation probabilities;
re-estimating (314) the phrase translation probability values based on the expected phrase alignment count computed;
iterating (316) over the steps of computing an expected phrase alignment count (312) and re-estimating the phrase translation probability values (314) until a desired stopping criterion (318, 320) is met; and
after iterating (316), outputting the selected phrase pair and the phrase translation probability values (322) for the selected phrase pair, to a phrase translation table (112) for use in a machine translation system (116).
19. The method of claim 18 wherein initializing phrase translation probability values comprises: setting the phrase translation probability values to a non-uniform probability distribution (306) based on estimated word translation probability values.
PCT/US2008/063403 2007-06-08 2008-05-12 Generating a phrase translation model by iteratively estimating phrase translation probabilities WO2008154104A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/811,114 2007-06-08
US11/811,114 US7983898B2 (en) 2007-06-08 2007-06-08 Generating a phrase translation model by iteratively estimating phrase translation probabilities

Publications (1)

Publication Number Publication Date
WO2008154104A1 (en) 2008-12-18

Family

ID=40096659

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/063403 WO2008154104A1 (en) 2007-06-08 2008-05-12 Generating a phrase translation model by iteratively estimating phrase translation probabilities

Country Status (2)

Country Link
US (1) US7983898B2 (en)
WO (1) WO2008154104A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009004723A1 (en) * 2007-07-04 2009-01-08 Fujitsu Limited Translation support program, translation support apparatus and method of translation support
JP5342760B2 * 2007-09-03 2013-11-13 Toshiba Corporation Apparatus, method, and program for creating data for translation learning
US8732577B2 (en) 2009-11-24 2014-05-20 Clear Channel Management Services, Inc. Contextual, focus-based translation for broadcast automation software
US8818790B2 (en) * 2010-04-06 2014-08-26 Samsung Electronics Co., Ltd. Syntactic analysis and hierarchical phrase model based machine translation system and method
WO2011163477A2 (en) * 2010-06-24 2011-12-29 Whitesmoke, Inc. Systems and methods for machine translation
US8682643B1 (en) * 2010-11-10 2014-03-25 Google Inc. Ranking transliteration output suggestions
US20120158398A1 (en) * 2010-12-17 2012-06-21 John Denero Combining Model-Based Aligner Using Dual Decomposition
CN103823795B * 2012-11-16 2017-04-12 Canon Inc. Machine translation system, machine translation method and decoder used together with system
US9183197B2 (en) 2012-12-14 2015-11-10 Microsoft Technology Licensing, Llc Language processing resources for automated mobile language translation
US20160132491A1 (en) * 2013-06-17 2016-05-12 National Institute Of Information And Communications Technology Bilingual phrase learning apparatus, statistical machine translation apparatus, bilingual phrase learning method, and storage medium
CN104252439B * 2013-06-26 2017-08-29 Huawei Technologies Co., Ltd. Diary generation method and device
US9442922B2 (en) * 2014-11-18 2016-09-13 Xerox Corporation System and method for incrementally updating a reordering model for a statistical machine translation system
US10460038B2 (en) 2016-06-24 2019-10-29 Facebook, Inc. Target phrase classifier
US10268686B2 (en) * 2016-06-24 2019-04-23 Facebook, Inc. Machine translation system employing classifier
US10963782B2 (en) * 2016-11-04 2021-03-30 Salesforce.Com, Inc. Dynamic coattention network for question answering
EP3791330A1 (en) 2018-05-08 2021-03-17 Google LLC Contrastive sequence-to-sequence data selector
KR102592630B1 * 2018-11-21 2023-10-23 Electronics and Telecommunications Research Institute Simultaneous interpretation system and method using translation unit band corpus
CN111626064A * 2019-02-26 2020-09-04 Ricoh Co., Ltd. Training method and device of neural machine translation model and storage medium
US10719666B1 * 2020-01-31 2020-07-21 Capital One Services, LLC Computer-based systems utilizing textual embedding space software engines for identifying candidate phrases in a text document and methods of use thereof
KR102592623B1 * 2020-10-28 2023-10-23 Electronics and Telecommunications Research Institute Method for learning real-time simultaneous translation model based on alignment information, method and system for simultaneous translation

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5477451A (en) * 1991-07-25 1995-12-19 International Business Machines Corp. Method and system for natural language translation
US6304841B1 (en) * 1993-10-28 2001-10-16 International Business Machines Corporation Automatic construction of conditional exponential models from elementary features
US6885985B2 (en) * 2000-12-18 2005-04-26 Xerox Corporation Terminology translation for unaligned comparable corpora using category based translation probabilities
US7054803B2 (en) * 2000-12-19 2006-05-30 Xerox Corporation Extracting sentence translations from translated documents
US6990439B2 (en) * 2001-01-10 2006-01-24 Microsoft Corporation Method and apparatus for performing machine translation using a unified language model and translation model
ES2343786T3 * 2002-03-27 2010-08-10 University Of Southern California Phrase-based joint probability model for statistical machine translation
US8548794B2 (en) * 2003-07-02 2013-10-01 University Of Southern California Statistical noun phrase translation
US8666725B2 (en) * 2004-04-16 2014-03-04 University Of Southern California Selection and use of nonstatistical translation components in a statistical machine translation framework
US7505894B2 (en) * 2004-11-04 2009-03-17 Microsoft Corporation Order model for dependency structure
US20070016397A1 (en) * 2005-07-18 2007-01-18 Microsoft Corporation Collocation translation using monolingual corpora

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060015318A1 (en) * 2004-07-14 2006-01-19 Microsoft Corporation Method and apparatus for initializing iterative training of translation probabilities

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ASHISH VENUGOPAL ET AL.: "Effective Phrase Translation Extraction from Alignment Models", PROCEEDINGS OF THE 41ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 7 July 2003 (2003-07-07) *
MOORE R.C. ET AL.: "An Iteratively-Trained Segmentation-Free Phrase Translation Model for Statistical Machine Translation", PROCEEDINGS OF THE SECOND WORKSHOP ON STATISTICAL MACHINE TRANSLATION, 23 June 2007 (2007-06-23) *
ZETTLEMOYER L.S. ET AL.: "Selective Phrase Pair Extraction for Improved Statistical Machine Translation", PROCEEDINGS OF NAACL HLT 2007, 26 April 2007 (2007-04-26) *

Also Published As

Publication number Publication date
US20080306725A1 (en) 2008-12-11
US7983898B2 (en) 2011-07-19

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 08769448

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 08769448

Country of ref document: EP

Kind code of ref document: A1