WO2009015057A1 - Cross-lingual query suggestion - Google Patents

Cross-lingual query suggestion Download PDF

Info

Publication number
WO2009015057A1
Authority
WO
WIPO (PCT)
Prior art keywords
query
target language
queries
lingual
cross
Prior art date
Application number
PCT/US2008/070578
Other languages
French (fr)
Inventor
Cheng Niu
Ming Zhou
Original Assignee
Microsoft Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corporation filed Critical Microsoft Corporation
Publication of WO2009015057A1 publication Critical patent/WO2009015057A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/3332Query translation
    • G06F16/3338Query expansion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/3332Query translation
    • G06F16/3337Translation of the query language, e.g. Chinese to English

Definitions

  • Query suggestion helps users of a search engine to better specify their information need by narrowing down or expanding the scope of the search with synonymous queries and relevant queries, or by suggesting related queries that have been frequently used by other users.
  • Search engines, such as Google, Yahoo!, MSN, and Ask Jeeves, have all implemented query suggestion functionality as a valuable addition to their core search method.
  • the same technology has been leveraged to recommend bidding terms to online advertisers in the pay-for-performance search market.
  • Typical methods for query suggestion perform monolingual query suggestion. These methods exploit query logs (of the original query language) and document collections, assuming that in the same period of time, many users share the same or similar interests, which can be expressed in different manners. By suggesting the related and frequently used formulations, it is hoped that the new query can cover more relevant documents.
  • the cross-lingual query suggestion aims to suggest relevant queries for a given query in a different language.
  • the techniques disclosed herein improve CLQS by exploiting the query logs in the language of the suggested queries.
  • the disclosed techniques include a method for learning and determining a cross-lingual similarity measure between two queries in different languages.
  • the disclosed techniques use a discriminative model to learn and estimate the cross-lingual query similarity.
  • the discriminative model starts with first identifying candidate queries in target language using a combination of multiple methods based on monolingual and cross-lingual information such as word translation relations and word co-occurrence statistics.
  • the identified candidate queries in target language may be further expanded using monolingual query suggestion.
  • the resultant candidate queries in target language may be checked against the query log of the target language to select a narrowed set of candidate queries in target language, which are then evaluated using a cross-lingual similarity score for cross-lingual suggestion.
  • One embodiment uses both the query log itself and click- through information associated therewith to identify the most pertinent cross-lingual query to be suggested.
  • the multiple lingual resources are represented by a feature vector in an input feature space which is mapped to a kernel space for estimating the cross-lingual similarity.
  • a support vector machine algorithm may be used for learning the weight vector for such estimation.
  • the disclosed techniques provide an effective means to map the input query of one language to queries of the other language in the query log, and have significance in scenarios of cross-language information retrieval (CLIR) and cross- lingual keyword bidding for search engine advertisement.
  • FIG. 1 is a flowchart illustrating aspects of a cross-lingual query suggestion process disclosed in the present description.
  • FIG. 2 is a flowchart illustrating aspects of another cross-lingual query suggestion process.
  • FIG. 3 is a flowchart illustrating an exemplary process of cross-lingual query suggestion integrating multiple resources of various types of information, including translation information, bilingual information and monolingual information.
  • FIG. 4 is a flowchart illustrating an exemplary process to learn the weight vector w for computing cross-lingual similarities.
  • FIG. 5 shows an exemplary environment for implementing the method of the present disclosure.
  • FIG. 1 is a flowchart illustrating aspects of a cross-lingual query suggestion process disclosed in the present description.
  • an input query in source language is given. This is typically provided by a search engine user.
  • the input query in source language may be provided in various application scenarios, including cross-lingual information retrieval and cross-lingual keyword bidding for search engine advertisement.
  • the process identifies a query in target language from a query log of a search engine.
  • the query in target language is a query written in the target language which is a different language from the source language.
  • the input source language may be French, and the target language may be English.
  • a general condition for a query in target language to be selected in this identification process is that the query in target language and the input query in source language have a cross-lingual similarity satisfying a certain standard.
  • a certain standard is a preset threshold value. Further detail of acquiring candidate queries in target language and computing cross-lingual similarity is given later in this description.
  • the process suggests the query in target language as a cross-lingual query.
  • the suggested query in target language may be used as a search query to retrieve relevant documents from websites in the target language.
  • the information retrieval using the query in target language is performed in addition to the information retrieval using the original input query in source language. That is, the query in target language is used to supplement the original input query in source language in order to broaden the scope of the search in a cross-lingual information retrieval.
  • the query in target language may be used alone to perform an information retrieval from websites in the target language. It is also appreciated that multiple queries in target language may be identified and suggested.
  • FIG. 2 is a flowchart illustrating aspects of another cross-lingual query suggestion process.
  • the process receives an input query in source language.
  • the process provides a set of candidate queries in target language. At least some of the candidate queries in target language are selected from a query log of a search engine. As will be shown later in this description, candidate queries in target language may be provided by Web mining and/or a query expansion using monolingual query suggestion. In one embodiment, candidate queries are limited to those that are also found in a query log.
  • the process ranks the set of candidate queries in target language using a cross-lingual query similarity score. Further detail of computing cross-lingual similarity is given later in this description.
  • the process suggests a query in target language from top- ranking candidate queries in target language as a cross-lingual query.
  • the cross-lingual query suggestion (CLQS) disclosed herein aims to solve the mismatch problem encountered by traditional methods which approach CLQS as a query translation problem, i.e., by suggesting the queries that are translations of the original query.
  • This disclosure proposes to solve the mismatch problem by mapping the input queries in the source language and the queries in the target language, using the query logs of a search engine.
  • the disclosed techniques exploit the fact that the users of search engines in the same period of time have similar interests, and they submit queries on similar topics in different languages.
  • a query written in a source language likely has an equivalent in a query log in the target language.
  • In particular, if the user intends to perform cross-lingual information retrieval (CLIR), the original query input by the user in the source language is even more likely to have its correspondent included in the target language query log.
  • a query log of a search engine usually contains user queries in different languages within a certain period of time.
  • the candidate query is more likely to be an appropriate cross-lingual query that can be suggested to the user.
  • the click-through information is also recorded. With this information, one knows which documents have been selected by users for each query.
  • Given a query in the source language, a CLQS task is to determine one or several similar queries in the target language from the query log.
  • the first issue faced by a cross-lingual query suggestion method is to learn and estimate a similarity measure between two queries in different languages.
  • the cross-lingual similarity measure may be calculated based on both translation information and monolingual similarity information. Each type of the information is applied using one or more tools or resources. In order to provide an up-to-date query similarity measure, it may not be sufficient to use only a static translation resource for translation. Therefore, one embodiment integrates a method to mine possible translations on the Web. This method is particularly useful for dealing with OOV terms.
  • Given a set of resources of different natures, another issue faced by a cross-lingual query suggestion method is how to integrate the resources in a principled manner. This disclosure proposes a discriminative model to learn the appropriate similarity measure.
  • the principle used in the discriminative model is as follows: assuming a reasonable monolingual query similarity measure, for any training query example for which a translation exists, its similarity measure (with any other query) is transposed to its translation. Using this principle, the desired cross-language similarity value for the training query samples may be computed. A discriminative model is then used to learn a cross-language similarity function which fits the best training examples.
  • FIG. 3 is a flowchart illustrating an exemplary process of cross-lingual query suggestion integrating multiple resources of various types of information, including translation information, bilingual information and monolingual information.
  • the process receives an input query in source language.
  • Blocks 320A, 320B and 320C each represent an optional process for utilizing a certain type of information resource to identify and provide candidate queries in target language.
  • Block 320A is a process for providing candidate queries in target language using a bilingual dictionary-based translation score for ranking.
  • the process ranks potential query translations of the input query in source language using term-term cohesion, which is further described later in this description.
  • the potential query translations may be initially constructed from a bilingual dictionary.
  • the process selects a set of top query translations based on the ranking result.
  • the process retrieves from the query log any query containing the same keywords as the top query translation.
  • Block 320B is an alternative process for providing candidate queries in target language using bidirectional translation score for ranking.
  • the process ranks queries in the query log using bidirectional translation probability derived from parallel corpora of the source language and the target language. The bidirectional translation probability is further described later in the present description.
  • the process selects a set of top queries based on ranking result.
  • Block 320C is another alternative process for providing candidate queries in target language using web mining.
  • the process mines candidate queries in target language co-occurring with the input query on the web. The mining method is further described later in the present description.
  • the process selects a set of top ranking mined queries in the target language. In one embodiment, only those queries that are also found in the query log are selected.
  • the results of processes 320A, 320B and 320C are pooled together to form a set of candidate queries in target language 330. It should be noted that any combination (including single) of the three processes 320A, 320B and 320C may be run for acquiring the set of candidate queries in target language 330. Further detail of the three processes 320A, 320B and 320C is described in later sections of the present disclosure.
  • the process expands the set of candidate queries in target language 330 by identifying additional candidate queries in target language having a high monolingual similarity with each initial candidate query in target language. It is noted that the expansion described at block 340 is optional.
  • the process ranks the set of candidate queries in target language 330, or the expanded set of candidate queries in target language if block 340 is performed, using a cross-lingual query similarity score.
  • the computation of the cross-lingual query similarity score will be described in further detail later in the present description.
  • the process suggests a query in target language as a cross-lingual query from the top ranking candidate queries in the target language.
  • the following describes a discriminative model for cross-lingual query similarity estimation, together with several exemplary features (monolingual and cross-lingual information) that are used in the discriminative model.
  • One embodiment of the present CLQS process uses a discriminative model to learn cross-lingual query similarities in a principled manner. The principle is as follows.
  • a cross-lingual correspondent query similarity can be deduced between one query and the other query's translation.
  • their cross-lingual similarity should fit the monolingual similarity between one query and the other query's translation.
  • For example, the similarity between the French query "pages jaunes" (i.e., "yellow page" in English) and the English query "telephone directory" should be equal to the monolingual similarity between the translation of the French query, "yellow page", and "telephone directory".
  • T_qf is the translation of q_f in the target language.
  • a training corpus may be created based on Equation (1). In order to do this, a list of sample query and their translations may be provided. Then an existing monolingual query suggestion system can be used to automatically produce similar queries for each translation. The sample queries and their translations, together with the translations' corresponding similar queries (which are in the same language and generated by the monolingual query suggestion system), constitute a training corpus for cross-lingual similarity estimation.
  • One advantage of this embodiment is that it is fairly easy to make use of arbitrary information sources within a discriminative modeling framework to achieve optimal performance.
  • Support vector machine (SVM) regression algorithm may be used to learn the cross-lingual term similarity function.
  • φ is the mapping from the input feature space onto the kernel space
  • w is a weight vector in the kernel space.
  • the dimensionality of the feature vector φ(f(q_f, q_e)) and the dimensionality of the weight vector w are both determined by the number of different features used in the algorithm. For example, the feature vector and the weight vector w both have four dimensions when four different features are used together in the algorithm.
  • the weight vector w contains the information of weights distributed among the multiple features, and the value of the weight vector w is to be learned by the SVM regression training. Once the weight vector w is learned, Equation (2) can be used to estimate the similarity between queries of different languages. As such, Equations (1) and (2) construct a regression model for cross-lingual query similarity estimation.
  • FIG. 4 is a flowchart illustrating an exemplary process to learn the weight vector w for computing cross-lingual similarities.
  • a process provides a training corpus containing both monolingual information and cross-lingual information.
  • the exemplary training corpus as described above may be used.
  • the exemplary training corpus may include a list of queries, corresponding translations and expanded translations.
  • the process determines the monolingual similarities of the training corpus using a monolingual query similarity measure.
  • the process determines the present cross-lingual similarities of the training corpus by calculating an inner product between a weight vector and a feature vector in a kernel space, as given by Equation (2).
  • the process compares the present cross-lingual similarities with the monolingual similarities of the training corpus.
  • the process determines whether the present cross-lingual similarities fit the monolingual similarities of the training corpus, as indicated in Equation (1). If not, the process goes to block 460. If yes, the process ends at block 470.
  • the standard for fitting may be a preset threshold measuring how close the present cross-lingual similarities and the monolingual similarities are to each other.
  • the process adjusts the weight vector, and returns to block 430 for the next round of fitting.
  • Any monolingual term similarity measure can be used as the regression target.
  • One embodiment selects the monolingual query similarity measure described in Wen, J. R., Nie, J.-Y., and Zhang, H. J., "Query Clustering Using User Logs", ACM Trans. Information Systems, 20(1):59-81, 2002, which reports good performance by using search users' click-through information in query logs.
  • the benefit of using this monolingual similarity is that the similarity is defined in a context similar to the present context. That is, the monolingual similarity is defined according to a user log that reflects users' intention and behavior. Using a monolingual similarity measure such as this, one can expect that the cross-language term similarity learned therefrom also reflects users' intention and expectation.
  • monolingual query similarity is defined by combining both query content-based similarity and click-through commonality in the query log.
  • the content similarity between two queries p and q is defined as similarity_content(p, q) = KN(p, q) / Max(kn(p), kn(q)), where kn(x) is the number of keywords in a query x (e.g., p or q) and KN(p, q) is the number of common keywords in the two queries p and q.
  • α and β are the relative importance of the content-based and the click-through-based similarity measures in the combined monolingual similarity.
  • the threshold is set as 0.9 empirically in one example.
  • Feature 1 Bilingual dictionary-based scoring using term-term cohesion
  • the first feature used is bilingual dictionary-based scoring, which is illustrated below with an example using a built-in-house bilingual dictionary containing 120,000 unique entries to retrieve candidate queries. Since multiple translations may be associated with each source word, co-occurrence based translation disambiguation is performed. The process is as follows.
  • MI(t_ij, t_kl) = P(t_ij, t_kl) log( P(t_ij, t_kl) / (P(t_ij) P(t_kl)) )    (6), where the probabilities are estimated from query log counts.
  • C(x, y) is the number of queries in the query log containing both terms x and y, C(x) is the number of queries containing term x, and N is the total number of queries in the query log.
  • Top ranking query translations are then selected. For example, a set of top-4 query translations is selected and denoted as S(T_q). For each query translation T in S(T_q), the system retrieves from the target language query log available queries containing the same keywords as T does. Preferably, all such available queries are retrieved. The retrieved queries are collected as candidate target queries, and are assigned S_dict(T) as the value of the feature Dictionary-based Translation Score.
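A short Python sketch of this dictionary-based candidate retrieval follows. The bilingual dictionary, query log, and co-occurrence counts are hypothetical toy data; the disclosure does not prescribe a particular implementation.

```python
import math
from itertools import product

# Toy stand-ins for the resources named above (hypothetical data).
BILINGUAL_DICT = {"pages": ["pages", "yellow"], "jaunes": ["yellow", "directory"]}
QUERY_LOG = ["yellow pages", "yellow pages online", "telephone directory", "directory assistance"]

def count_queries_containing(*terms):
    """C(x) or C(x, y): number of logged queries containing all given terms."""
    return sum(all(t in q.split() for t in terms) for q in QUERY_LOG)

def mutual_information(t1, t2):
    """Term-term cohesion MI(t1, t2) = P(t1, t2) * log(P(t1, t2) / (P(t1) * P(t2)))."""
    n = len(QUERY_LOG)
    p_xy = count_queries_containing(t1, t2) / n
    p_x = count_queries_containing(t1) / n
    p_y = count_queries_containing(t2) / n
    if p_xy == 0 or p_x == 0 or p_y == 0:
        return 0.0
    return p_xy * math.log(p_xy / (p_x * p_y))

def rank_query_translations(source_terms, top_k=4):
    """Score every combination of per-term translations by summed pairwise cohesion."""
    alternatives = [BILINGUAL_DICT.get(w, []) for w in source_terms]
    scored = []
    for combo in product(*alternatives):
        score = sum(mutual_information(a, b) for i, a in enumerate(combo) for b in combo[i + 1:])
        scored.append((score, combo))
    return sorted(scored, reverse=True)[:top_k]

def candidates_from_dictionary(source_terms):
    """Retrieve logged queries sharing keywords with a top-ranked translation (feature S_dict)."""
    candidates = {}
    for score, combo in rank_query_translations(source_terms):
        for q in QUERY_LOG:
            if all(t in q.split() for t in combo):
                candidates[q] = max(candidates.get(q, 0.0), score)
    return candidates

print(candidates_from_dictionary(["pages", "jaunes"]))
```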
  • Feature 2 Bidirectional translation score based on parallel corpora
  • the second feature that may be used is bidirectional translation score based on parallel corpora.
  • Parallel corpora are valuable resources for bilingual knowledge acquisition. Different from the bilingual dictionary, the bilingual knowledge learned from parallel corpora assigns a probability for each translation candidate, which is useful information in acquiring dominant query translations.
  • In one example, the Europarl corpus (a set of parallel French and English texts) is used as the parallel corpus.
  • the corpus is sentence aligned first.
  • the word alignments are then derived by training an IBM translation model 1 using GIZA++.
  • the learned bilingual knowledge is used to extract candidate queries from the query log. The process is as follows.
  • Given a pair of queries q_f in the source language and q_e in the target language, the Bi-Directional Translation Score combines the translation probabilities in both directions between q_f and q_e.
  • Each directional translation probability may be computed with IBM translation model 1, using the word translation probabilities learned from the parallel corpora.
  • One purpose of using the bidirectional translation probability is to deal with the fact that common words can be considered as possible translations of many words. By using bidirectional translation, one may test whether the translation words can be translated back to the source words. This helps focus the translation probability on the most specific translation candidates.
  • Top queries are then selected. For example, given an input query q_f, the top ten queries {q_e} having the highest bidirectional translation scores with q_f are retrieved from the query log, and the scores are assigned as the value of the feature Bi-Directional Translation Score.
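A minimal sketch of the bidirectional translation score, assuming word translation probabilities have already been learned from a sentence-aligned parallel corpus (in the disclosure, via IBM Model 1 training with GIZA++). The probability tables and query log below are hypothetical.

```python
# Hypothetical word translation probability tables; in practice these would be
# estimated from a sentence-aligned parallel corpus (e.g., IBM Model 1 / GIZA++).
P_E_GIVEN_F = {("yellow", "jaunes"): 0.6, ("pages", "pages"): 0.7, ("yellow", "pages"): 0.1}
P_F_GIVEN_E = {("jaunes", "yellow"): 0.5, ("pages", "pages"): 0.7, ("pages", "yellow"): 0.1}

def ibm1_translation_prob(target_words, source_words, table):
    """IBM Model 1 style score: product over target words of the averaged
    word translation probabilities from the source words."""
    prob = 1.0
    for e in target_words:
        prob *= sum(table.get((e, f), 1e-6) for f in source_words) / len(source_words)
    return prob

def bidirectional_translation_score(q_f, q_e):
    """Feature 2: combine both directions, favouring target queries that also
    translate back to the source query."""
    f_words, e_words = q_f.split(), q_e.split()
    forward = ibm1_translation_prob(e_words, f_words, P_E_GIVEN_F)
    backward = ibm1_translation_prob(f_words, e_words, P_F_GIVEN_E)
    return forward * backward

query_log = ["yellow pages", "telephone directory", "white pages"]
ranked = sorted(query_log, key=lambda q: bidirectional_translation_score("pages jaunes", q), reverse=True)
print(ranked[:10])  # top queries kept as candidates, per the description above
```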
  • the third feature that may be used is frequency in Web mining snippets and co-occurrence frequency.
  • Web mining has been used to acquire out-of-vocabulary (OOV) terms, which account for a major knowledge bottleneck for query translation and CLIR.
  • Web mining has also been exploited to acquire English-Chinese term translations, based on the observation that Chinese terms may co-occur with their English translations in the same web page.
  • a similar web mining approach is adapted to acquire not only translations but semantically related queries in the target language.
  • a simple method is to send the source query to a search engine (e.g., the Google search engine) to search for Web pages in the target language in order to find related queries in the target language. For instance, by sending the French query "pages jaunes" to search for English pages, English snippets containing the key words "yellow pages" or "telephone directory" will be returned.
  • this simple approach may induce a significant amount of noise due to non-relevant returns from the search engine.
  • the simple approach is modified by using a more structured query.
  • An exemplary query modification is as follows.
  • the original query is used with dictionary-based query keyword translations to perform a search.
  • Both the original query and the dictionary-based query keyword translations are unified by the ∧ (AND) and ∨ (OR) operators into a single Boolean query.
  • For example, for a source query q = {a b c}, where the set of translation entries in the dictionary for a is {a1, a2, a3}, for b is {b1, b2}, and for c is {c1}, one may issue q ∧ (a1 ∨ a2 ∨ a3) ∧ (b1 ∨ b2) ∧ c1 as one web query.
  • Top snippets returned by the modified and unified web query are retrieved to select candidate queries in target language.
  • the selection makes use of the query log to select only those Web mined queries that are also found in the query log. For example, from the returned top 700 snippets, the most frequent 10 target queries that are also in the query log are identified, and are associated with the feature Frequency in the Snippets.
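The sketch below shows how the structured Boolean web query and the snippet-frequency feature could be assembled. The web search call is stubbed with canned snippets, and the dictionary and query log are toy data; only the general scheme follows the description above.

```python
from collections import Counter

def build_boolean_web_query(source_query, dictionary):
    """Combine the original query with dictionary translations of each keyword:
    q AND (a1 OR a2 ...) AND (b1 OR b2 ...) AND ..."""
    groups = []
    for word in source_query.split():
        translations = dictionary.get(word, [])
        if translations:
            groups.append("(" + " OR ".join(translations) + ")")
    return " AND ".join(['"%s"' % source_query] + groups)

def fetch_snippets(web_query, limit=700):
    """Stub for a web search call; a real system would query a search engine."""
    return ["yellow pages for local businesses", "find a telephone directory online",
            "yellow pages and white pages lookup"]

def snippet_frequency_feature(source_query, dictionary, query_log, top_n=10):
    """Count how often each logged target-language query appears in the snippets,
    keeping only queries that are also found in the query log."""
    snippets = fetch_snippets(build_boolean_web_query(source_query, dictionary))
    counts = Counter()
    for q in query_log:
        counts[q] = sum(q in s for s in snippets)
    return [(q, c) for q, c in counts.most_common(top_n) if c > 0]

dictionary = {"pages": ["pages"], "jaunes": ["yellow"]}
log = ["yellow pages", "telephone directory", "organic food"]
print(snippet_frequency_feature("pages jaunes", dictionary, log))
```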
  • CODC Measure may be used to weight the association between the source and target queries.
  • The CODC Measure has been proposed as an association measure based on snippet analysis, in a framework named the Web Search with Double Checking (WSDC) model.
  • In the WSDC model, two objects a and b are considered to have an association if b can be found by using a as a query (forward process) and a can be found by using b as a query (backward process) via web search.
  • the forward process counts the frequency of b in the top N snippets of query a, denoted as freq(b@a).
  • the backward process counts the frequency of a in the top N snippets of query b, denoted as freq(a@b).
  • the CODC association score is defined in terms of freq(b@a) and freq(a@b) together with a parameter α; α is set at 0.15 following an exemplary practice.
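Because the CODC formula itself is not reproduced in this excerpt, the following sketch implements one plausible reading of the double-checking score: zero unless each query appears in the other's snippets, otherwise a product of the two normalized frequencies damped by α = 0.15. The normalization by the number of snippets examined is an assumption.

```python
def codc_score(freq_b_at_a, freq_a_at_b, n_snippets_a, n_snippets_b, alpha=0.15):
    """One plausible reading of the CODC (double-checking) association score:
    zero unless each query is found in the other's snippets, otherwise a damped
    product of the two normalized frequencies. The exact normalization is an
    assumption; the disclosure only states that alpha is set to 0.15."""
    if freq_b_at_a == 0 or freq_a_at_b == 0:
        return 0.0
    forward = freq_b_at_a / n_snippets_a   # how often b appears in a's top-N snippets
    backward = freq_a_at_b / n_snippets_b  # how often a appears in b's top-N snippets
    return (forward * backward) ** alpha

# Example: the two queries confirm each other in both directions.
print(codc_score(freq_b_at_a=12, freq_a_at_b=8, n_snippets_a=700, n_snippets_b=700))
print(codc_score(freq_b_at_a=0, freq_a_at_b=8, n_snippets_a=700, n_snippets_b=700))  # no forward evidence -> 0
```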
  • A query q_e mined from the Web may also be associated with a monolingual query suggestion-based feature: the monolingual similarity between the query q_e and SQ_ML(q_e) is used as the value of q_e's Monolingual Query Suggestion-based Feature.
  • a threshold may be set for selecting additional candidate target queries using Equation (10). For example, if the monolingual similarity between a query q_e and its source query SQ_ML(q_e) meets or is above the threshold, the query q_e is chosen to be a candidate query in target language, in addition to the set of candidate queries Q_0, to be ranked using the cross-lingual query similarity score and suggested as a cross-lingual query (e.g., blocks 230 and 240 in FIG. 2, or blocks 350 and 360 in FIG. 3) in the next steps. This expansion is sketched below.
  • the target language queries q_e used in this part may be from any suitable source. In one embodiment, however, the target language queries q_e used in this part are selected from the query log of the search engine.
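A sketch of the monolingual-suggestion-based expansion: for each logged target-language query, its closest current candidate is found with a monolingual similarity function, and the query is added as an extra candidate when the similarity clears a threshold. The toy similarity and threshold value below are placeholders for the click-through-aware measure described elsewhere in this disclosure.

```python
def expand_with_monolingual_suggestion(candidates, query_log, monolingual_sim, threshold=0.9):
    """Feature 4 sketch: for each logged target-language query q_e, find its closest
    query SQ_ML(q_e) among the current candidates; if the monolingual similarity
    clears the threshold, add q_e as an extra candidate and keep the similarity
    as its Monolingual Query Suggestion-based feature value."""
    expanded = {}
    for q_e in query_log:
        if q_e in candidates:
            continue
        sq_ml, sim = max(((c, monolingual_sim(q_e, c)) for c in candidates), key=lambda x: x[1])
        if sim >= threshold:
            expanded[q_e] = sim
    return expanded

# Toy similarity based on keyword overlap only (a stand-in for the click-through-aware measure).
def toy_sim(p, q):
    a, b = set(p.split()), set(q.split())
    return len(a & b) / max(len(a), len(b))

candidates = {"yellow pages": 1.0}
log = ["yellow pages online", "telephone directory", "yellow pages"]
print(expand_with_monolingual_suggestion(candidates, log, toy_sim, threshold=0.5))
```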
  • Feature 4 is the monolingual query suggestion-based feature described above.
  • In addition to helping provide candidate queries, the four features are also used to learn the cross-lingual query similarity.
  • SVM regression algorithm is used to learn the weights in Equation (2).
  • LibSVM toolkit is used for the regression training.
  • the set of candidate queries in target language are ranked using the cross-lingual query similarity score computed using Equation (2), and the queries with similarity score lower than a threshold will be regarded as non- relevant.
  • the threshold is learned using a development data set by fitting MLQS's output. The ranking and filtering step is sketched below.
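Once the SVM weights are learned, ranking and filtering candidates reduces to evaluating Equation (2) and applying the threshold. The sketch below assumes a linear kernel and uses made-up feature values, weights, and threshold.

```python
# Hypothetical learned weights for the four features (dictionary score, bidirectional
# translation score, web mining frequency/CODC, monolingual-suggestion similarity).
WEIGHTS = [0.3, 0.4, 0.2, 0.5]

def cross_lingual_similarity(feature_vector, weights=WEIGHTS):
    """Equation (2) with a linear kernel: sim_CL = w . f(q_f, q_e)."""
    return sum(w * x for w, x in zip(weights, feature_vector))

def rank_and_filter(candidates, threshold=0.4):
    """Rank candidate target-language queries by the estimated cross-lingual
    similarity and drop those below the relevance threshold."""
    scored = [(cross_lingual_similarity(f), q) for q, f in candidates.items()]
    return [(q, s) for s, q in sorted(scored, reverse=True) if s >= threshold]

candidates = {
    "yellow pages":        [0.9, 0.8, 0.6, 0.9],
    "telephone directory": [0.2, 0.1, 0.4, 0.7],
    "organic food":        [0.0, 0.0, 0.1, 0.0],
}
print(rank_and_filter(candidates))
```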
  • CLQS is primarily used as cross-lingual query suggestion, but may also be used as an alternative tool for query translation. Using the CLQS for query translation may also be useful for testing the effectiveness of the cross-lingual query suggestion.
  • a set of relevant queries ⁇ q e ⁇ in the target language are recommended using the cross-lingual query suggestion system.
  • a monolingual IR system based on the BM25 model is called using each q in {q_e} as a query to retrieve documents.
  • the retrieved documents are re-ranked based on the sum of the BM25 scores associated with each monolingual retrieval.
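The CLIR use of CLQS described above amounts to issuing each suggested query to a monolingual retrieval system and re-ranking documents by their summed BM25 scores. In this sketch the BM25 retrieval call is a stub returning canned results.

```python
from collections import defaultdict

def bm25_retrieve(query):
    """Stub for a monolingual BM25 retrieval system; returns (doc_id, score) pairs."""
    fake_index = {
        "yellow pages": [("doc1", 7.2), ("doc2", 3.1)],
        "telephone directory": [("doc2", 5.4), ("doc3", 2.2)],
    }
    return fake_index.get(query, [])

def clir_by_suggestion(suggested_queries):
    """Retrieve with each suggested target-language query and re-rank documents
    by the sum of their BM25 scores over all retrievals."""
    totals = defaultdict(float)
    for q in suggested_queries:
        for doc_id, score in bm25_retrieve(q):
            totals[doc_id] += score
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(clir_by_suggestion(["yellow pages", "telephone directory"]))
```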
  • CLQS as a translation method is more effective than the traditional query translation method. Based on the observation that the CLIR performance heavily relies on the quality of the suggested queries, the resulting good performance of CLIR is believed to indicate high quality of the suggested queries.
  • The processes described above may be implemented with a computing device, such as a server, a personal computer (PC) or a portable device having a computing unit.
  • FIG. 5 shows an exemplary environment for implementing the method of the present disclosure.
  • Computing system 501 is implemented with computing device 502, which includes processor(s) 510, I/O devices 520, computer readable media (e.g., memory) 530, and a network interface (not shown).
  • the computer device 502 is connected to servers 541, 542 and 543 through networks 590.
  • the computer readable media 530 stores application program modules 532 and data 534 (such as monolingual and cross-lingual data).
  • Application program modules 532 contain instructions which, when executed by processor(s) 510, cause the processor(s) 510 to perform actions of a process described herein (e.g., the processes of FIGS. 1-4).
  • computer readable medium 530 has stored thereupon a plurality of instructions that, when executed by one or more processors 510, cause the processor(s) 510 to identify, for an input query in a source language, a query in a target language from a query log of a search engine, and to suggest the identified query in target language as a cross-lingual query.
  • processor(s) 510 may also perform other actions as described herein, such as computing the cross-lingual similarity using Equation (2).
  • the computer readable media may be any of the suitable memory devices for storing computer data. Such memory devices include, but are not limited to, hard disks, flash memory devices, optical data storage devices, and floppy disks. Furthermore, the computer readable media containing the computer-executable instructions may consist of component(s) in a local system or components distributed over a network of multiple remote systems. The data of the computer-executable instructions may either be delivered in a tangible physical memory device or transmitted electronically.
  • a computing device may be any device that has a processor, an I/O device and a memory (either an internal memory or an external memory), and is not limited to a personal computer.
  • a computer device may be, without limitation, a server, a PC, a game console, a set top box, and a computing unit built in another electronic device such as a television, a display, a printer or a digital camera.
  • the computer device 502 may be a search engine server, or a cluster of such search engine servers.
  • This disclosure describes a new approach to cross-lingual query suggestion (CLQS) by mining relevant queries in different languages from query logs.
  • the system learns a cross-lingual query similarity measure by using a discriminative model exploiting multiple monolingual and bilingual resources.
  • the model is trained based on the principle that cross-lingual similarity should best fit the monolingual similarity between one query and the other query's translation.
  • the CLQS has wide applications on the World Wide Web, such as cross-language search or suggesting relevant bidding terms in a different language.
  • Because the present CLQS exploits up-to-date query logs, it is expected that for most user queries, one can find common formulations on these topics in the query log in the target language. In this sense, the present CLQS also plays a role of adapting the original query formulation to the common formulations of similar topics in the target language.

Abstract

Cross-lingual query suggestion (CLQS) aims to suggest relevant queries in a target language for a given query in a source language. The cross-lingual query suggestion is improved by exploiting the query logs in the target language. The disclosed techniques include a method for learning and determining a similarity measure between two queries in different languages. The similarity measure is based on both translation information and monolingual similarity information, and in one embodiment uses both the query log itself and click-through information associated therewith. Monolingual and cross-lingual information such as word translation relations and word co-occurrence statistics may be used to estimate the cross-lingual query similarity with a discriminative model.

Description

CROSS-LINGUAL QUERY SUGGESTION
BACKGROUND
[0001] Query suggestion helps users of a search engine to better specify their information need by narrowing down or expanding the scope of the search with synonymous queries and relevant queries, or by suggesting related queries that have been frequently used by other users. Search engines, such as Google, Yahoo!, MSN, and Ask Jeeves, have all implemented query suggestion functionality as a valuable addition to their core search method. In addition, the same technology has been leveraged to recommend bidding terms to online advertisers in the pay-for-performance search market.
[0002] Typical methods for query suggestion perform monolingual query suggestion. These methods exploit query logs (of the original query language) and document collections, assuming that in the same period of time, many users share the same or similar interests, which can be expressed in different manners. By suggesting the related and frequently used formulations, it is hoped that the new query can cover more relevant documents.
[0003] The existing techniques for cross-lingual query suggestion are primitive and limited. These techniques approach the issue as a query translation problem. That is, these techniques suggest queries that are translations of the original query. When used as a means for cross-lingual information retrieval (CLIR), for example, the system may perform a query translation followed by a monolingual information retrieval (IR) using the translation of the original query as the search query. Typically, queries are translated either using a bilingual dictionary, some machine translation software, or a parallel corpus. In other query translation methods, out-of-vocabulary (OOV) term translations are mined from the Web using a search engine to alleviate the problem of OOV, which is one of the major bottlenecks for CLIR. In others, bilingual knowledge is acquired based on anchor text analysis. In addition, word co-occurrence statistics in the target language have been leveraged for translation disambiguation.
[0004] Many of these translation techniques rely on static knowledge and data and therefore cannot effectively reflect the quickly shifting interests of Web users. While some translation approaches may help reduce the problem of static knowledge, they have other problems inherent in any CLQS model that simply suggests straight translations of the queries. For instance, a translated term may be a reasonable translation, but it may not be popularly used in the target language. For example, the French query "aliment biologique" is translated into "biologic food" by the Google translation tool, yet the correct formulation nowadays should be "organic food". Therefore, there exist many mismatches between the translated terms and the terms in the target language. These mismatches make the suggested terms in the target language ineffective.
[0005] Furthermore, it is arguable that accurate query translation may not be necessary for CLQS. Indeed, in many cases, it is helpful to introduce words even if they are not direct translations of any query word, but are closely related to the meaning of the query. This observation has led to the development of cross-lingual query expansion (CLQE) techniques, some of which reported the enhancement on CLIR by post-translation expansion, and others developed a cross-lingual relevancy model by leveraging the cross-lingual co-occurrence statistics in parallel texts. However, query expansion cannot be used as a substitute for query suggestion. Although query expansion is related to query suggestion, there is an essential difference between them. While expansion aims to extend the original query with new search terms to narrow the scope of the search, query suggestion aims to suggest full queries that have been formulated by users so that the query integrity and coherence are preserved in the suggested queries.
[0006] Furthermore, there is a lack of a unified framework to combine the wide spectrum of resources and recent advances in mining techniques.
SUMMARY
[0007] The cross-lingual query suggestion (CLQS) aims to suggest relevant queries for a given query in a different language. The techniques disclosed herein improve CLQS by exploiting the query logs in the language of the suggested queries. The disclosed techniques include a method for learning and determining a cross-lingual similarity measure between two queries in different languages.
[0008] In one embodiment, the disclosed techniques use a discriminative model to learn and estimate the cross-lingual query similarity. The discriminative model starts with first identifying candidate queries in target language using a combination of multiple methods based on monolingual and cross-lingual information such as word translation relations and word co-occurrence statistics. The identified candidate queries in target language may be further expanded using monolingual query suggestion. The resultant candidate queries in target language may be checked against the query log of the target language to select a narrowed set of candidate queries in target language, which are then evaluated using a cross-lingual similarity score for cross-lingual suggestion. One embodiment uses both the query log itself and click- through information associated therewith to identify the most pertinent cross-lingual query to be suggested.
[0009] Disclosed are also techniques for integrating multiple lingual resources of different characteristics in a principled manner. The multiple lingual resources are represented by a feature vector in an input feature space which is mapped to a kernel space for estimating the cross-lingual similarity. A support vector machine algorithm may be used for learning the weight vector for such estimation.
[00010] The disclosed techniques provide an effective means to map the input query of one language to queries of the other language in the query log, and have significance in scenarios of cross-language information retrieval (CLIR) and cross- lingual keyword bidding for search engine advertisement.
[00011] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE FIGURES
[00012] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items. [00013] FIG. 1 is a flowchart illustrating aspects of a cross-lingual query suggestion process disclosed in the present description. [00014] FIG. 2 is a flowchart illustrating aspects of another cross-lingual query suggestion process.
[00015] FIG. 3 is a flowchart illustrating an exemplary process of cross-lingual query suggestion integrating multiple resources of various types of information, including translation information, bilingual information and monolingual information.
[00016] FIG. 4 is a flowchart illustrating an exemplary process to learn the weight vector w for computing cross-lingual similarities.
[00017] FIG. 5 shows an exemplary environment for implementing the method of the present disclosure.
DETAILED DESCRIPTION
[00018] The cross-lingual query suggestion techniques are described below with an overview of the processes followed by a further detailed description of the exemplary embodiments. In this description, the order in which a process is described is not intended to be construed as a limitation, and any number of the described process blocks may be combined in any order to implement the method, or an alternate method.
[00019] FIG. 1 is a flowchart illustrating aspects of a cross-lingual query suggestion process disclosed in the present description.
[00020] At block 110, an input query in source language is given. This is typically provided by a search engine user. The input query in source language may be provided in various application scenarios, including cross-lingual information retrieval and cross-lingual keyword bidding for search engine advertisement. [00021] At block 120, for the input query in source language, the process identifies a query in target language from a query log of a search engine. The query in target language is a query written in the target language which is a different language from the source language. For example, the input source language may be French, and the target language may be English. A general condition for a query in target language to be selected in this identification process is that the query in target language and the input query in source language have a cross-lingual similarity satisfying a certain standard. One example of such a standard is a preset threshold value. Further detail of acquiring candidate queries in target language and computing cross-lingual similarity is given later in this description.
[00022] At block 130, the process suggests the query in target language as a cross-lingual query. For example, if the user has provided the original input query in source language for the purpose of cross-lingual information retrieval using a search engine, the suggested query in target language may be used as a search query to retrieve relevant documents from websites in the target language. Typically, the information retrieval using the query in target language is performed in addition to the information retrieval using the original input query in source language. That is, the query in target language is used to supplement the original input query in source language in order to broaden the scope of the search in a cross-lingual information retrieval. However, in some situations, the query in target language may be used alone to perform an information retrieval from websites in the target language. It is also appreciated that multiple queries in target language may be identified and suggested. [00023] FIG. 2 is a flowchart illustrating aspects of another cross-lingual query suggestion process. [00024] At block 210, the process receives an input query in source language.
This is typically provided by a search engine user.
[00025] At block 220, the process provides a set of candidate queries in target language. At least some of the candidate queries in target language are selected from a query log of a search engine. As will be shown later in this description, candidate queries in target language may be provided by Web mining and/or a query expansion using monolingual query suggestion. In one embodiment, candidate queries are limited to those that are also found in a query log.
[00026] At block 230, the process ranks the set of candidate queries in target language using a cross-lingual query similarity score. Further detail of computing cross-lingual similarity is given later in this description.
[00027] At block 240, the process suggests a query in target language from top- ranking candidate queries in target language as a cross-lingual query. [00028] The cross-lingual query suggestion (CLQS) disclosed herein aims to solve the mismatch problem encountered by traditional methods which approach CLQS as a query translation problem, i.e., by suggesting the queries that are translations of the original query. This disclosure proposes to solve the mismatch problem by mapping the input queries in the source language and the queries in the target language, using the query logs of a search engine. The disclosed techniques exploit the fact that the users of search engines in the same period of time have similar interests, and they submit queries on similar topics in different languages. As a result, a query written in a source language likely has an equivalent in a query log in the target language. In particular, if the user intends to perform cross-lingual information retrieval (CLIR), then the original query input by the user in the source language is even more likely to have its correspondent included in the query in target language log.
[00029] A query log of a search engine usually contains user queries in different languages within a certain period of time. In general, if a candidate query for CLQS appears often in the query log of the target language, the candidate query is more likely to be an appropriate cross-lingual query that can be suggested to the user. In addition to the query terms, the click-through information is also recorded. With this information, one knows which documents have been selected by users for each query.
[00030] Given a query in the source language, a CLQS task is to determine one or several similar queries in the target language from the query log. The first issue faced by a cross-lingual query suggestion method is to learn and estimate a similarity measure between two queries in different languages. Although various statistical similarity measures have been studied for monolingual terms, most of them are based on term co-occurrence statistics, and can hardly be applied directly in cross-lingual settings.
[00031] The cross-lingual similarity measure may be calculated based on both translation information and monolingual similarity information. Each type of the information is applied using one or more tools or resources. In order to provide an up-to-date query similarity measure, it may not be sufficient to use only a static translation resource for translation. Therefore, one embodiment integrates a method to mine possible translations on the Web. This method is particularly useful for dealing with OOV terms.
[00032] Given a set of resources of different natures, another issue faced by a cross-lingual query suggestion method is how to integrate the resources in a principled manner. This disclosure proposes a discriminative model to learn the appropriate similarity measure. The principle used in the discriminative model is as follows: assuming a reasonable monolingual query similarity measure, for any training query example for which a translation exists, its similarity measure (with any other query) is transposed to its translation. Using this principle, the desired cross-language similarity value for the training query samples may be computed. A discriminative model is then used to learn a cross-language similarity function which best fits the training examples.
[00033] Based on the above principle, a method of calculating the similarity between a query in the source language and a query in the target language is proposed. The method exploits, in addition to the translation information, a wide spectrum of bilingual and monolingual information, such as term co-occurrences, and query logs with click-through data. A discriminative model is used to learn the cross-lingual query similarity based on a set of manually translated queries. The model is trained by optimizing the cross-lingual similarity to best fit the monolingual similarity between one query and the other query's translation.
[00034] FIG. 3 is a flowchart illustrating an exemplary process of cross-lingual query suggestion integrating multiple resources of various types of information, including translation information, bilingual information and monolingual information. [00035] At block 310, the process receives an input query in source language.
[00036] Blocks 320A, 320B and 320C each represent an optional process for utilizing a certain type of information resource to identify and provide candidate queries in target language. Block 320A is a process for providing candidate queries in target language using a bilingual dictionary-based translation score for ranking. At sub-block 320A- 1, the process ranks potential query translations of the input query in source language using term-term cohesion, which is further described later in this description. The potential query translations may be initially constructed from a bilingual dictionary. At sub-block 320A-2, the process selects a set of top query translations based on the ranking result. At sub-block 320A-3, for each top query translation, the process retrieves from the query log any query containing the same keywords as the top query translation.
[00037] Block 320B is an alternative process for providing candidate queries in target language using bidirectional translation score for ranking. At sub-block 320B- 1, the process ranks queries in the query log using bidirectional translation probability derived from parallel corpora of the source language and the target language. The bidirectional translation probability is further described later in the present description. At sub-block 320B-2, the process selects a set of top queries based on ranking result.
[00038] Block 320C is another alternative process for providing candidate queries in target language using web mining. At sub-block 320C- 1, the process mines candidate queries in target language co-occurring with the input query on the web. The mining method is further described later in the present description. At sub-block 320C-2, the process selects a set of top ranking mined queries in the target language. In one embodiment, only those queries that are also found in the query log are selected. [00039] The results of processes 320A, 320B and 320C are pooled together to form a set of candidate queries in target language 330. It should be noted that any combination (including single) of the three processes 320A, 320B and 320C may be run for acquiring the set of candidate queries in target language 330. Further detail of the three processes 320A, 320B and 320C is described in later sections of the present disclosure.
[00040] At block 340, the process expands the set of candidate queries in target language 330 by identifying additional candidate queries in target language having a high monolingual similarity with each initial candidate query in target language. It is noted that the expansion described at block 340 is optional.
[00041] At block 350, the process ranks the set of candidate queries in target language 330, or the expanded set of candidate queries in target language if block 340 is performed, using a cross-lingual query similarity score. The computation of the cross-lingual query similarity score will be described in further detail later in the present description.
[00042] At block 360, the process suggests a query in target language as a cross-lingual query from the top ranking candidate queries in the target language. [00043] Further details of processes of FIGS. 1-3 are described below from various aspects. The following sections first describe the detail of a discriminative model for cross-lingual query similarity estimation, and then introduce several exemplary features (monolingual and cross-lingual information) that are used in the discriminative model.
Discriminative Model for Estimating Cross-Lingual Query Similarity
[00044] One embodiment of the present CLQS process uses a discriminative model to learn cross-lingual query similarities in a principled manner. The principle is as follows. For a reasonable monolingual query similarity between two queries in the same language, a cross-lingual correspondent query similarity can be deduced between one query and the other query's translation. Specifically, for a pair of queries in different languages, their cross-lingual similarity should fit the monolingual similarity between one query and the other query's translation. For example, the similarity between French query "pages jaunes" (i.e., "yellow page" in English) and English query "telephone directory" should be equal to the monolingual similarity between the translation of the French query, "yellow page", and "telephone directory". [00045] There are many ways to obtain a monolingual similarity measure between query terms, e.g., term co-occurrence based mutual information and χ². Any of these monolingual similarity measures can be used as the target for the cross-lingual similarity function to fit in a training and learning algorithm. [00046] In one embodiment, cross-lingual query similarity estimation is formulated as a regression task as follows: [00047] Given a query in source language q_f, a query in target language q_e, and a monolingual query similarity sim_ML, the corresponding cross-lingual query similarity sim_CL is defined as follows:
sim_CL(q_f, q_e) = sim_ML(T_qf, q_e)    (1)
[00048] where T_qf is the translation of q_f in the target language. [00049] A training corpus may be created based on Equation (1). In order to do this, a list of sample queries and their translations may be provided. Then an existing monolingual query suggestion system can be used to automatically produce similar queries for each translation. The sample queries and their translations, together with the translations' corresponding similar queries (which are in the same language and generated by the monolingual query suggestion system), constitute a training corpus for cross-lingual similarity estimation; this construction is sketched below. One advantage of this embodiment is that it is fairly easy to make use of arbitrary information sources within a discriminative modeling framework to achieve optimal performance.
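A sketch of how such a training corpus yields regression targets per Equation (1): for each sample query and its translation, a monolingual suggestion system proposes related target-language queries, and the monolingual similarity to the translation becomes the cross-lingual similarity target. The suggestion system and similarity function below are toy stand-ins.

```python
def build_training_examples(query_translation_pairs, monolingual_suggest, monolingual_sim):
    """Equation (1) in practice: the cross-lingual similarity target for
    (q_f, q_e) is the monolingual similarity between T_qf and q_e."""
    examples = []
    for q_f, t_qf in query_translation_pairs:
        for q_e in monolingual_suggest(t_qf):
            target = monolingual_sim(t_qf, q_e)   # sim_ML(T_qf, q_e)
            examples.append((q_f, q_e, target))   # (source query, target query, sim_CL target)
    return examples

# Toy stand-ins for the monolingual suggestion system and similarity measure.
def toy_suggest(query):
    return {"yellow pages": ["telephone directory", "yellow pages online"]}.get(query, [])

def toy_sim(p, q):
    a, b = set(p.split()), set(q.split())
    return len(a & b) / max(len(a), len(b))

pairs = [("pages jaunes", "yellow pages")]
print(build_training_examples(pairs, toy_suggest, toy_sim))
```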
[00050] A support vector machine (SVM) regression algorithm may be used to learn the cross-lingual term similarity function. Given a vector of feature functions f between q_f and q_e, sim_CL(q_f, q_e) is represented as an inner product between a weight vector and the feature vector in a kernel space as follows:
sim_CL(q_f, q_e) = w · φ(f(q_f, q_e))    (2)
[00051] where φ is the mapping from the input feature space onto the kernel space, and w is a weight vector in the kernel space. The dimensionality of the feature vector φ(f(q_f, q_e)) and the dimensionality of the weight vector w are both determined by the number of different features used in the algorithm. For example, the feature vector and the weight vector w both have four dimensions when four different features are used together in the algorithm. The weight vector w contains the information of weights distributed among the multiple features, and the value of the weight vector w is to be learned by the SVM regression training. Once the weight vector w is learned, Equation (2) can be used to estimate the similarity between queries of different languages. As such, Equations (1) and (2) construct a regression model for cross-lingual query similarity estimation.
[00052] FIG. 4 is a flowchart illustrating an exemplary process to learn the weight vector w for computing cross-lingual similarities. At block 410, a process provides a training corpus containing both monolingual information and cross-lingual information. The exemplary training corpus as described above may be used. The exemplary training corpus may include a list of queries, corresponding translations and expanded translations.
[00053] At block 420, the process determines the monolingual similarities of the training corpus using a monolingual query similarity measure. [00054] At block 430, the process determines the present cross-lingual similarities of the training corpus by calculating an inner product between a weight vector and a feature vector in a kernel space, as given by Equation (2). [00055] At block 440, the process compares the present cross-lingual similarities with the monolingual similarities of the training corpus. [00056] At block 450, the process determines whether the present cross-lingual similarities fit the monolingual similarities of the training corpus, as indicated in Equation (1). If not, the process goes to block 460. If yes, the process ends at block 470. The standard for fitting may be a preset threshold measuring how close the present cross-lingual similarities and the monolingual similarities are to each other. [00057] At block 460, the process adjusts the weight vector, and returns to block 430 for the next round of fitting.
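The training loop of FIG. 4 can be realized with an off-the-shelf SVM regression implementation. The sketch below uses scikit-learn's SVR (which is built on LibSVM, the toolkit mentioned later in this description) with made-up feature vectors and monolingual-similarity targets; it is an illustration of the setup, not the exact training procedure of the disclosure.

```python
import numpy as np
from sklearn.svm import SVR

# Each row is the feature vector f(q_f, q_e) for one training pair; each target is
# the monolingual similarity sim_ML(T_qf, q_e) from Equation (1). Values are made up.
X = np.array([
    [0.9, 0.8, 0.6, 0.9],   # ("pages jaunes", "yellow pages")
    [0.2, 0.1, 0.4, 0.7],   # ("pages jaunes", "telephone directory")
    [0.0, 0.0, 0.1, 0.0],   # ("pages jaunes", "organic food")
])
y = np.array([0.95, 0.60, 0.05])

# SVR with an RBF kernel plays the role of learning w in the kernel space.
model = SVR(kernel="rbf", C=1.0, epsilon=0.05)
model.fit(X, y)

# Estimating sim_CL for a new candidate pair via Equation (2).
print(model.predict(np.array([[0.5, 0.4, 0.3, 0.8]])))
```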
[00058] It is noted that instead of regression, one may simplify the task as a binary or ordinal classification, in which case CLQS can be categorized according to discontinuous class labels, e.g., relevant and irrelevant, or a series of levels of relevancies, e.g., strongly relevant, weakly relevant, and irrelevant. In either case, one can resort to discriminative classification approaches, such as an SVM or maximum entropy model, in a straightforward way. However, the regression formalism enables one to fully rank the suggested queries based on the similarity score given by Equation (1).
[00059] In the following sections, the monolingual query similarity measure and the feature functions used for SVM regression are further described.
Monolingual Query Similarity Measure Based on Click-through Information
[00060] Any monolingual term similarity measure can be used as the regression target. One embodiment selects the monolingual query similarity measure described in Wen, J. R., Nie, J.-Y., and Zhang, H. J., "Query Clustering Using User Logs", ACM Trans. Information Systems, 20(1):59-81, 2002, which reports good performance by using search users' click-through information in query logs. The benefit of using this monolingual similarity is that it is defined in a context similar to the present context. That is, the monolingual similarity is defined according to a user log that reflects users' intention and behavior. Using a monolingual similarity measure such as this, one can expect that the cross-language term similarity learned therefrom also reflects users' intention and expectation.
[00061] In one embodiment, monolingual query similarity is defined by combining both query content-based similarity and click-through commonality in the query log. [00062] The content similarity between two queries p and q is defined as follows:
    similarity_content(p, q) = KN(p, q) / Max(kn(p), kn(q))    (3)

[00063] where kn(x) is the number of keywords in a query x (e.g., p or q), and KN(p, q) is the number of common keywords in the two queries p and q.
[00064] The click-through based similarity is defined as follows:

    similarity_click-through(p, q) = RD(p, q) / Max(rd(p), rd(q))    (4)

[00065] where rd(x) is the number of clicked URLs for a query x (e.g., p or q), and RD(p, q) is the number of common URLs clicked for the two queries p and q.
[00066] Accordingly, the similarity between two queries is a linear combination of the content-based and click-through-based similarities, and is presented as follows:

    similarity(p, q) = α · similarity_content(p, q) + β · similarity_click-through(p, q)    (5)

[00067] where α and β are the relative importance of the two similarity measures. One embodiment sets α = 0.4 and β = 0.6. Queries whose similarity measure with another query is higher than a threshold will be regarded as relevant monolingual query suggestions (MLQS) for the latter. The threshold is set as 0.9 empirically in one example.
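The following minimal sketch restates Equations (3)-(5) in code; the keyword lists and clicked-URL sets for each query are assumed to be available from the query log, and the dictionary-style inputs are hypothetical.

```python
# Minimal sketch of Equations (3)-(5); each query is represented here as a dict
# with hypothetical "keywords" and "urls" fields taken from the query log.
def content_similarity(p_keywords, q_keywords):
    kn_p, kn_q = set(p_keywords), set(q_keywords)
    return len(kn_p & kn_q) / max(len(kn_p), len(kn_q), 1)        # Equation (3)

def click_through_similarity(p_urls, q_urls):
    rd_p, rd_q = set(p_urls), set(q_urls)
    return len(rd_p & rd_q) / max(len(rd_p), len(rd_q), 1)        # Equation (4)

def monolingual_similarity(p, q, alpha=0.4, beta=0.6):
    # Equation (5): linear combination of the two similarities; queries scoring
    # above a threshold (0.9 in the example) are regarded as relevant MLQS
    return (alpha * content_similarity(p["keywords"], q["keywords"])
            + beta * click_through_similarity(p["urls"], q["urls"]))
```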
Features Used for Learning Cross-Lingual Query Similarity Measure
[00068] This section describes the extraction of candidate relevant queries from the query log with the assistance of various monolingual and bilingual resources utilized as features. Feature functions over the source query and the relevant cross-lingual candidates are defined. Some of the resources used here, such as the bilingual lexicon and parallel corpora, have been traditionally used for query translation. It is noted that the present disclosure employs these resources as an aid for finding relevant candidates in the query log, rather than for acquiring accurate translations.
Feature 1: Bilingual dictionary-based scoring using term-term cohesion
[00069] The first feature used is bilingual dictionary-based scoring, which is illustrated below with an example using an in-house bilingual dictionary containing 120,000 unique entries to retrieve candidate queries. Since multiple translations may be associated with each source word, co-occurrence based translation disambiguation is performed. The process is as follows.
[00070] Given an input query qf = {wf1, wf2, ..., wfn} in the source language, for each query term wfi, a set of unique translations is provided by the bilingual dictionary D as: D(wfi) = {ti1, ti2, ..., tim}. Then the term-term cohesion between the translations of two query terms is measured using mutual information, which is computed as:

    MI(tij, tkl) = P(tij, tkl) · log [ P(tij, tkl) / (P(tij) · P(tkl)) ]    (6)

    where P(tij, tkl) = C(tij, tkl) / N and P(t) = C(t) / N.

[00071] Here C(x, y) is the number of queries in the query log containing both terms x and y, C(x) is the number of queries containing term x, and N is the total number of queries in the query log.
[00072] Based on the term-term cohesion defined in Equation (6), all possible query translations are ranked using the summation of the term-term cohesion:

    Sdict(Tqf) = Σ_{i, k, i≠k} MI(tij, tkl)
[00073] Top ranking query translations are then selected. For example, a set of top-4 query translations is selected and denoted as S(Tqf). For each possible query translation T ∈ S(Tqf), the system retrieves from the target language query log the available queries containing the same keywords as T does. Preferably, all such available queries are retrieved. The retrieved queries are collected as candidate target queries, and each is assigned Sdict(T) as the value of the feature Dictionary-based Translation Score.
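A sketch of Feature 1 under stated assumptions is given below: the bilingual dictionary and the query-log count functions are hypothetical interfaces supplied by the caller, and only the cohesion scoring of Equation (6) and the ranking of paragraphs [00072]-[00073] are illustrated.

```python
# Sketch of Equation (6) and the Sdict ranking of paragraphs [00072]-[00073].
# dictionary, count(term) and count_pair(t1, t2) are hypothetical interfaces
# backed by the bilingual dictionary and the target-language query log.
import math
from itertools import combinations, product

def mutual_information(t1, t2, count, count_pair, n):
    p_xy = count_pair(t1, t2) / n
    if p_xy == 0.0:
        return 0.0
    return p_xy * math.log(p_xy / ((count(t1) / n) * (count(t2) / n)))  # Eq. (6)

def sdict(translation, count, count_pair, n):
    # summation of pairwise term-term cohesion over one candidate translation
    return sum(mutual_information(a, b, count, count_pair, n)
               for a, b in combinations(translation, 2))

def top_query_translations(query_terms, dictionary, count, count_pair, n, k=4):
    # enumerate all term-wise translation combinations and keep the top-k
    candidates = product(*(dictionary[w] for w in query_terms))
    ranked = sorted(candidates,
                    key=lambda t: sdict(t, count, count_pair, n), reverse=True)
    return ranked[:k]   # S(Tqf)
```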
Feature 2: Bidirectional translation score based on parallel corpora
[00074] The second feature that may be used is a bidirectional translation score based on parallel corpora. Parallel corpora are valuable resources for bilingual knowledge acquisition. Unlike the bilingual dictionary, the bilingual knowledge learned from parallel corpora assigns a probability to each translation candidate, which is useful information for acquiring dominant query translations.
[00075] In one embodiment, the Europarl corpus (a set of parallel French and
English texts from the proceedings of the European Parliament) is used. The corpus is sentence aligned first. The word alignments are then derived by training an IBM translation model 1 using GIZA++. The learned bilingual knowledge is used to extract candidate queries from the query log. The process is as follows.
[00076] Given a pair of queries qf in the source language and qe in the target language, the Bi-Directional Translation Score is defined as follows:
    SIBM1(qf, qe) = pIBM1(qe | qf) · pIBM1(qf | qe)    (7)

[00077] where pIBM1(y | x) is the word sequence translation probability given by IBM model 1, which has the following form:

    pIBM1(y | x) = 1 / (|x| + 1)^|y| · Π_{j=1..|y|} Σ_{i=0..|x|} p(yj | xi)    (8)

[00078] where p(yj | xi) is the word-to-word translation probability derived from the word-aligned corpora.
[00079] One purpose of using the bidirectional translation probability is to deal with the fact that common words can be considered possible translations of many words. By using bidirectional translation, one may test whether the translation words can be translated back to the source words. This helps concentrate the translation probability on the most specific translation candidates.
[00080] Based on the above bidirectional translation scoring, top queries are selected. For example, given an input query qf, the top ten queries {qe} having the highest bidirectional translation scores with qf are retrieved from the query log, and SIBM1(qf, qe) calculated in Equation (7) is assigned as the value for the feature Bi-Directional Translation Score.
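The following sketch illustrates Equations (7) and (8) under the assumption that word-to-word probability tables in both directions have already been estimated from the word-aligned parallel corpus; the table format is hypothetical.

```python
# Sketch of Equations (7) and (8); p_e_given_f and p_f_given_e are hypothetical
# word-to-word probability tables keyed by (target_word, source_word), assumed
# to have been estimated from the word-aligned parallel corpus (e.g., GIZA++).
def ibm1_probability(y_words, x_words, p_word):
    # Equation (8): the source side x is padded with a NULL token (position 0)
    x_padded = ["NULL"] + list(x_words)
    prob = 1.0 / (len(x_padded) ** len(y_words))
    for y_j in y_words:
        prob *= sum(p_word.get((y_j, x_i), 1e-9) for x_i in x_padded)
    return prob

def bidirectional_translation_score(qf_words, qe_words, p_e_given_f, p_f_given_e):
    # Equation (7): product of the two directional IBM model 1 probabilities
    return (ibm1_probability(qe_words, qf_words, p_e_given_f)
            * ibm1_probability(qf_words, qe_words, p_f_given_e))
```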
Feature 3: Frequency in Web mining snippets and CODC measure
[00081] The third feature that may be used is the frequency in Web mining snippets together with a co-occurrence measure. Web mining has been used to acquire out-of-vocabulary (OOV) words, which account for a major knowledge bottleneck for query translation and CLIR. For example, web mining has been exploited to acquire English-Chinese term translations based on the observation that Chinese terms may co-occur with their English translations in the same web page. In this disclosure, a similar web mining approach is adapted to acquire not only translations but also semantically related queries in the target language.
[00082] It is assumed that if a query in the target language co-occurs with the source query in many web pages, the two queries are probably semantically related. Therefore, a simple method is to send the source query to a search engine (e.g., the Google search engine) to search for Web pages in the target language in order to find related queries in the target language. For instance, by sending a French query "pages jaunes" to search for English pages, English snippets containing the keywords "yellow pages" or "telephone directory" will be returned. However, this simple approach may introduce a significant amount of noise due to non-relevant returns from the search engine. In order to improve the relevancy of the bilingual snippets, the simple approach is modified by using a more structured query.
[00083] An exemplary query modification is as follows. The original query is used together with dictionary-based translations of its keywords to perform a search. The original query and the dictionary-based keyword translations are combined by the ∧ (AND) and ∨ (OR) operators into a single Boolean query. For example, for a given query q = abc, where the set of translation entries in the dictionary for a is {a1, a2, a3}, for b is {b1, b2}, and for c is {c1}, one may issue q ∧ (a1 ∨ a2 ∨ a3) ∧ (b1 ∨ b2) ∧ c1 as one web query.
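A hypothetical sketch of the Boolean query construction in paragraph [00083] follows; it uses textual AND/OR operators and assumes a simple dictionary of keyword translations.

```python
# Hypothetical sketch of the Boolean query of paragraph [00083], written with
# textual AND/OR operators; keyword_translations is an assumed dictionary
# mapping each source keyword to its dictionary translations.
def build_boolean_web_query(source_query, keyword_translations):
    clauses = ['"' + source_query + '"']
    for word in source_query.split():
        translations = keyword_translations.get(word, [])
        if translations:
            clauses.append("(" + " OR ".join(translations) + ")")
    return " AND ".join(clauses)

# Example: build_boolean_web_query("pages jaunes", {"jaunes": ["yellow"]})
# returns '"pages jaunes" AND (yellow)'
```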
[00084] Top snippets returned by the modified and unified web query are retrieved to select candidate queries in target language. In one embodiment, the selection makes use of the query log to select only those Web mined queries that are also found in the query log. For example, from the returned top 700 snippets, the most frequent 10 target queries that are also in the query log are identified, and are associated with the feature Frequency in the Snippets.
[00085] Furthermore, the Co-Occurrence Double-Check (CODC) Measure may be used to weight the association between the source and target queries. The CODC Measure has been proposed as an association measure based on snippet analysis in a model named Web Search with Double Checking (WSDC). In the WSDC model, two objects a and b are considered to have an association if b can be found by using a as a query (forward process), and a can be found by using b as a query (backward process) by web search. The forward process counts the frequency of b in the top N snippets of query a, denoted as freq(b@a). Similarly, the backward process counts the frequency of a in the top N snippets of query b, denoted as freq(a@b). The CODC association score is then defined as follows:
    SCODC(qf, qe) = 0,    if freq(qe@qf) × freq(qf@qe) = 0
    SCODC(qf, qe) = e^( log[ (freq(qe@qf)/freq(qf)) × (freq(qf@qe)/freq(qe)) ]^α ),    otherwise    (9)

[00086] CODC measures the association of two terms in the range between 0 and 1. Under the two extreme cases, qe and qf have no association when freq(qe@qf) = 0 or freq(qf@qe) = 0, and have the strongest association when freq(qe@qf) = freq(qf) and freq(qf@qe) = freq(qe). In one experiment, α is set at 0.15 following an exemplary practice.
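The sketch below gives one possible reading of Equation (9). Because the placement of the exponent α is ambiguous in the extracted text, the sketch uses ratio**α (equivalently e raised to log(ratio^α)), which reproduces the stated extreme cases; the exact form should be checked against the original equation.

```python
# Sketch of the CODC score of Equation (9). The placement of the exponent alpha
# in the garbled original is ambiguous; this sketch uses ratio ** alpha
# (equivalently e**(log(ratio**alpha))), which reproduces the stated extremes:
# 0 when either snippet frequency is 0, and 1 at the strongest association.
def codc_score(freq_qe_at_qf, freq_qf_at_qe, freq_qf, freq_qe, alpha=0.15):
    if freq_qe_at_qf * freq_qf_at_qe == 0:
        return 0.0
    ratio = (freq_qe_at_qf / freq_qf) * (freq_qf_at_qe / freq_qe)
    return ratio ** alpha
```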
[00087] A query qe mined from the Web may be associated with a feature CODC Measure, with SCODC(qf, qe) as the feature value.
Feature 4: Monolingual Query Suggestion-Based Feature
[00088] The candidate queries in target language retrieved using the above-described bilingual dictionary, parallel corpora and web mining are pooled together as a set of candidate queries Q0. A monolingual query suggestion system is called to produce more related queries in the target language using the set of candidate queries Q0.
[00089] For a query qe, its monolingual source query SQML(qe) is defined as the query in Q0 having the highest monolingual similarity with qe, i.e.,

    SQML(qe) = argmax_{q ∈ Q0} simML(q, qe)    (10)
[00090] The monolingual similarity between the query qe and SQML(qe) is used as the value of qe's Monolingual Query Suggestion-based Feature. A threshold may be set for selecting additional candidate target queries using Equation (10). For example, if the monolingual similarity between a query qe and its source query SQML(qe) meets or exceeds the threshold, the query qe is chosen to be a candidate query in target language, in addition to the set of candidate queries Q0, to be ranked using the cross-lingual query similarity score and suggested as a cross-lingual query (e.g., blocks 230 and 240 in FIG. 2, or blocks 350 and 260 in FIG. 3) in the next steps. For any query that is already in Q0 (i.e., qe ∈ Q0), its Monolingual Query Suggestion-Based Feature is set as 1, the maximum monolingual similarity value.
[00091] For any query qe ∉ Q0, its values of Dictionary-based Translation Score, Bi-Directional Translation Score, Frequency in the Snippet, and CODC Measure are set to be equal to the feature values of SQML(qe).
[00092] The target language queries qe used in this part may be from any suitable source. In one embodiment, however, the target language queries qe used in this part are selected from the query log of the search engine.
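A sketch of the Feature 4 computation under stated assumptions: suggest_monolingual() stands in for an existing monolingual query suggestion system and monolingual_similarity() for the measure of Equations (3)-(5); both interfaces are hypothetical.

```python
# Sketch of Equation (10) and the Feature 4 expansion; suggest_monolingual() and
# monolingual_similarity() are hypothetical interfaces standing in for the
# monolingual query suggestion system and the measure of Equations (3)-(5).
def monolingual_source_query(qe, q0, monolingual_similarity):
    # SQML(qe): the query in Q0 with the highest monolingual similarity to qe
    return max(q0, key=lambda q: monolingual_similarity(q, qe))

def expand_candidate_set(q0, suggest_monolingual, monolingual_similarity, threshold):
    expanded = set(q0)
    for q in q0:
        for qe in suggest_monolingual(q):
            source = monolingual_source_query(qe, q0, monolingual_similarity)
            # keep the suggested query if its similarity to SQML(qe) meets the threshold
            if monolingual_similarity(source, qe) >= threshold:
                expanded.add(qe)
    return expanded
```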
Estimating Cross-lingual Query Similarity
[00093] In the above, four categories of features are used to acquire a set of candidate queries in target language, which includes Q0 and its monolingual expansion if Feature 4 (monolingual query suggestion) is used. The four features are also used to learn the cross-lingual query similarity. For example, an SVM regression algorithm is used to learn the weights in Equation (2). In one embodiment, the LibSVM toolkit is used for the regression training.
[00094] In the prediction stage, the set of candidate queries in target language is ranked using the cross-lingual query similarity score computed using Equation (2), and queries with a similarity score lower than a threshold are regarded as non-relevant. The threshold is learned using a development data set by fitting the MLQS output.
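For illustration, the prediction stage might look like the following sketch, which reuses the regression model and the extract_features() interface assumed in the earlier training sketch.

```python
# Sketch of the prediction stage of paragraph [00094]; model and
# extract_features() are the assumed objects from the earlier training sketch,
# and threshold is fitted on a development data set.
import numpy as np

def rank_candidates(qf, candidates, model, extract_features, threshold):
    scored = [(qe, float(model.predict(np.array([extract_features(qf, qe)]))[0]))
              for qe in candidates]
    relevant = [(qe, s) for qe, s in scored if s >= threshold]
    return sorted(relevant, key=lambda item: item[1], reverse=True)
```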
[00095] Exploitation of the above-described resources and features may be conducted at various levels. For example, the simplest CLQS system may use a dictionary only. The next level may use a dictionary and parallel corpora, a higher level may use a dictionary, parallel corpora and web mining, while a comprehensive CLQS system may combine dictionary, parallel corpora, web mining and monolingual query suggestion together. It is expected that the comprehensive CLQS system will tend to have better performance.
CLIR Based on Cross-Lingual Query Suggestion
[00096] The presently disclosed CLQS is primarily used as cross-lingual query suggestion, but may also be used as an alternative tool for query translation. Using the CLQS for query translation may also be useful for testing the effectiveness of the
CLQS in CLIR tasks.
[00097] Given a source query qf, a set of relevant queries {qe} in the target language is recommended using the cross-lingual query suggestion system. In an exemplary CLIR test, a monolingual IR system based on the BM25 model is called using each q ∈ {qe} as a query to retrieve documents. The retrieved documents are re-ranked based on the sum of the BM25 scores associated with each monolingual retrieval. The results show that the presently described CLQS as a translation method is more effective than the traditional query translation method. Based on the observation that CLIR performance relies heavily on the quality of the suggested queries, the resulting good CLIR performance is believed to indicate the high quality of the suggested queries.
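A sketch of this CLIR procedure is given below; bm25_search() is a hypothetical monolingual retrieval interface returning BM25 scores per document, and the summation-based re-ranking follows paragraph [00097].

```python
# Sketch of the CLIR test of paragraph [00097]; bm25_search(query) is a
# hypothetical monolingual retrieval interface returning {doc_id: BM25 score}.
from collections import defaultdict

def clir_by_clqs(suggested_queries, bm25_search, top_k=1000):
    combined = defaultdict(float)
    for qe in suggested_queries:
        for doc_id, score in bm25_search(qe).items():
            combined[doc_id] += score          # re-rank by summed BM25 scores
    return sorted(combined.items(), key=lambda item: item[1], reverse=True)[:top_k]
```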
Implementation Environment
[00098] The above-described techniques may be implemented with the help of a computing device, such as a server, a personal computer (PC) or a portable device having a computing unit.
[00099] FIG. 5 shows an exemplary environment for implementing the method of the present disclosure. Computing system 501 is implemented with computing device
502, which includes processor(s) 510, I/O devices 520, computer readable media (e.g., memory) 530, and a network interface (not shown). The computing device 502 is connected to servers 541, 542 and 543 through networks 590.
[000100] The computer readable media 530 stores application program modules 532 and data 534 (such as monolingual and cross-lingual data). Application program modules 532 contain instructions which, when executed by processor(s) 510, cause the processor(s) 510 to perform actions of a process described herein (e.g., the processes of FIGS. 1-4).
[000101] For example, in one embodiment, computer readable medium 530 has stored thereupon a plurality of instructions that, when executed by one or more processors 510, causes the processor(s) 510 to:
[000102] (i) identify a query in target language from a query log based on a cross- lingual similarity with an input query in source language; and [000103] (ii) suggest the query in target language as a cross-lingual query. [000104] To perform the above actions, the processor(s) 510 may also perform other actions as described herein, such as computing the cross-lingual similarity using Equation (2).
[000105] It is appreciated that the computer readable media may be any of the suitable memory devices for storing computer data. Such memory devices include, but are not limited to, hard disks, flash memory devices, optical data storages, and floppy disks. Furthermore, the computer readable media containing the computer-executable instructions may consist of component(s) in a local system or components distributed over a network of multiple remote systems. The data of the computer-executable instructions may either be delivered in a tangible physical memory device or transmitted electronically.
[000106] It is also appreciated that a computing device may be any device that has a processor, an I/O device and a memory (either an internal memory or an external memory), and is not limited to a personal computer. For example, a computing device may be, without limitation, a server, a PC, a game console, a set top box, or a computing unit built into another electronic device such as a television, a display, a printer or a digital camera.
[000107] In particular, the computing device 502 may be a search engine server, or a cluster of such search engine servers.
Conclusions
[000108] This disclosure describes a new approach to cross-lingual query suggestion (CLQS) by mining relevant queries in different languages from query logs. The system learns a cross-lingual query similarity measure by using a discriminative model exploiting multiple monolingual and bilingual resources. The model is trained based on the principle that cross-lingual similarity should best fit the monolingual similarity between one query and the other query's translation.
[000109] The CLQS has wide applications on the World Wide Web, such as cross-language search or suggesting relevant bidding terms in a different language. As the present CLQS exploits up-to-date query logs, it is expected that for most user queries, one can find common formulations of these topics in the query log in the target language. In this sense, the present CLQS also plays the role of adapting the original query formulation to the common formulations of similar topics in the target language.
[000110] It is appreciated that the potential benefits and advantages discussed herein are not to be construed as a limitation or restriction to the scope of the appended claims.
[000111] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.

Claims

CLAIMS
What is claimed is:
1. A method for query suggestion, the method comprising: for an input query in source language, identifying a query in target language from a query log of a search engine, the query in target language and the input query in source language having a cross-lingual similarity satisfying a certain standard; and suggesting the query in target language as a cross-lingual query.
2. The method as recited in claim 1, wherein identifying the query in target language from the query log comprises: providing a plurality of candidate queries in target language; and ranking the plurality of candidate queries in target language using a cross- lingual query similarity score.
3. The method as recited in claim 2, wherein the cross-lingual query similarity score of the input query in source language qf and the candidate query in target language qe is computed using the equation simCL(qf, qe) = w · φ(f(qf, qe)), where simCL(qf, qe) is the cross-lingual query similarity score, f(qf, qe) is a feature vector, φ is a mapping from an input feature space onto a kernel space, and w is a weight vector in the kernel space.
4. The method as recited in claim 3, wherein the weight vector w is learned by a regression algorithm using a training set of input queries in source language and corresponding queries in target language.
5. The method as recited in claim 3, wherein the weight vector w is learned by a binary or ordinal classification algorithm in which cross-lingual query suggestions are categorized according to discontinuous class labels.
6. The method as recited in claim 3, wherein the weight vector w is learned by fitting a training set based on a principle that cross-lingual similarity of a pair of queries in two different languages fits the monolingual similarity between one query and a translation of the other query of the pair.
7. The method as recited in claim 3, wherein the feature vector f (q/, qe) includes at least two of the feature functions selected from bilingual dictionary-based translation score, bidirectional translation score, frequency in Web mining snippets, and monolingual query suggestion-based feature.
8. The method as recited in claim 2, wherein identifying the query in target language from the query log further comprises: identifying one or more queries in target language whose cross-lingual query similarity score with the input query meets or exceeds a threshold.
9. The method as recited in claim 8, wherein the threshold is learned using a development data set by fitting a monolingual query suggestion output.
10. The method as recited in claim 2, wherein providing the plurality of candidate queries in target language comprises: ranking potential query translations of the input query in source language using term-term cohesion, the potential query translations being constructed from a bilingual dictionary; selecting a set of top query translations based on ranking result; and for each top query translation, retrieving from the query log at least one query containing the same keywords as the top query translation.
11. The method as recited in claim 2, wherein providing the plurality of candidate queries in target language comprises: ranking queries in the query log using bidirectional translation probability derived from a parallel corpora of the source language and the target language; and selecting a set of top queries based on ranking result.
12. The method as recited in claim 2, wherein providing the plurality of candidate queries in target language comprises: mining candidate queries in the target language on the Web, each candidate query being a translation of the input query in source language or semantically related to the input query in source language.
13. The method as recited in claim 12, further comprising: ranking the candidate queries using a co-occurrence double-check measure; and selecting a set of top candidate queries based on ranking result.
14. The method as recited in claim 2, wherein providing the plurality of candidate queries in target language comprises: providing a set of initial candidate queries in target language; and expanding the set of initial candidate queries in target language by identifying additional candidate queries having a high monolingual similarity with each initial candidate query in target language.
15. A method for query suggestion, the method comprising: receiving an input query in source language; providing a plurality of candidate queries in target language, at least part of the plurality of candidate queries in target language being selected from a query log of a search engine; ranking the plurality of candidate queries in target language using a cross- lingual query similarity score; and from top ranking candidate queries in target language, suggesting a query in target language as a cross-lingual query.
16. The method as recited in claim 15, wherein providing the plurality of candidate queries in target language comprises: ranking potential query translations of the input query in source language using term-term cohesion, the potential query translations being constructed from a bilingual dictionary; selecting a set of top query translations based on ranking result; and for each top query translation, retrieving from the query log at least one query containing the same keywords as the top query translation.
17. The method as recited in claim 15, wherein providing the plurality of candidate queries in target language comprises: ranking queries in the query log using a bidirectional translation probability derived from a parallel corpora of the source language and the target language; and selecting a set of top queries based on ranking result.
18. The method as recited in claim 15, further comprising: mining additional candidate queries in the target language on the Web, each additional candidate query being a translation of the input query in source language or semantically related to the input query in source language.
19. The method as recited in claim 15, further comprising: expanding the plurality of candidate queries in target language by identifying additional candidate queries having a high monolingual similarity with each initial candidate query in target language.
20. One or more computer readable media having stored thereupon a plurality of instructions that, when executed by a processor, causes the processor to: for an input query in source language, identify a query in target language from a query log of a search engine, the query in target language and the input query in source language having a cross-lingual similarity satisfying a certain standard; and suggest the query in target language as a cross-lingual query.
PCT/US2008/070578 2007-07-20 2008-07-20 Cross-lingual query suggestion WO2009015057A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US95102907P 2007-07-20 2007-07-20
US60/951,029 2007-07-20
US12/033,308 2008-02-19
US12/033,308 US8051061B2 (en) 2007-07-20 2008-02-19 Cross-lingual query suggestion

Publications (1)

Publication Number Publication Date
WO2009015057A1 true WO2009015057A1 (en) 2009-01-29

Family

ID=40265686

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/070578 WO2009015057A1 (en) 2007-07-20 2008-07-20 Cross-lingual query suggestion

Country Status (2)

Country Link
US (1) US8051061B2 (en)
WO (1) WO2009015057A1 (en)


Families Citing this family (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7146409B1 (en) * 2001-07-24 2006-12-05 Brightplanet Corporation System and method for efficient control and capture of dynamic database content
US7752266B2 (en) 2001-10-11 2010-07-06 Ebay Inc. System and method to facilitate translation of communications between entities over a network
US8078505B2 (en) 2002-06-10 2011-12-13 Ebay Inc. Method and system for automatically updating a seller application utilized in a network-based transaction facility
US7505964B2 (en) 2003-09-12 2009-03-17 Google Inc. Methods and systems for improving a search ranking using related queries
US9189568B2 (en) * 2004-04-23 2015-11-17 Ebay Inc. Method and system to display and search in a language independent manner
US8639782B2 (en) 2006-08-23 2014-01-28 Ebay, Inc. Method and system for sharing metadata between interfaces
US8661029B1 (en) 2006-11-02 2014-02-25 Google Inc. Modifying search result ranking based on implicit user feedback
US9110975B1 (en) 2006-11-02 2015-08-18 Google Inc. Search result inputs using variant generalized queries
US7908260B1 (en) 2006-12-29 2011-03-15 BrightPlanet Corporation II, Inc. Source editing, internationalization, advanced configuration wizard, and summary page selection for information automation systems
US8938463B1 (en) 2007-03-12 2015-01-20 Google Inc. Modifying search result ranking based on implicit user feedback and a model of presentation bias
US8694374B1 (en) 2007-03-14 2014-04-08 Google Inc. Detecting click spam
US9092510B1 (en) 2007-04-30 2015-07-28 Google Inc. Modifying search result ranking based on a temporal element of user feedback
US9002869B2 (en) 2007-06-22 2015-04-07 Google Inc. Machine translation for query expansion
US8694511B1 (en) 2007-08-20 2014-04-08 Google Inc. Modifying search result ranking based on populations
US8086620B2 (en) * 2007-09-12 2011-12-27 Ebay Inc. Inference of query relationships
US8909655B1 (en) 2007-10-11 2014-12-09 Google Inc. Time based ranking
KR20100134618A (en) * 2008-02-29 2010-12-23 샤프 가부시키가이샤 Information processing device, method, and program
US8745051B2 (en) * 2008-07-03 2014-06-03 Google Inc. Resource locator suggestions from input character sequence
US8312032B2 (en) 2008-07-10 2012-11-13 Google Inc. Dictionary suggestions for partial user entries
US9081765B2 (en) * 2008-08-12 2015-07-14 Abbyy Infopoisk Llc Displaying examples from texts in dictionaries
US8326785B2 (en) * 2008-09-30 2012-12-04 Microsoft Corporation Joint ranking model for multilingual web search
US8396865B1 (en) 2008-12-10 2013-03-12 Google Inc. Sharing search engine relevance data between corpora
US8224839B2 (en) * 2009-04-07 2012-07-17 Microsoft Corporation Search query extension
US9009146B1 (en) 2009-04-08 2015-04-14 Google Inc. Ranking search results based on similar queries
US8577909B1 (en) 2009-05-15 2013-11-05 Google Inc. Query translation using bilingual search refinements
US8572109B1 (en) * 2009-05-15 2013-10-29 Google Inc. Query translation quality confidence
US8577910B1 (en) 2009-05-15 2013-11-05 Google Inc. Selecting relevant languages for query translation
US8538957B1 (en) * 2009-06-03 2013-09-17 Google Inc. Validating translations using visual similarity between visual media search results
US8447760B1 (en) 2009-07-20 2013-05-21 Google Inc. Generating a related set of documents for an initial set of documents
US9026542B2 (en) * 2009-07-25 2015-05-05 Alcatel Lucent System and method for modelling and profiling in multiple languages
AU2009350904B2 (en) * 2009-08-04 2016-07-14 Google Llc Query suggestions from documents
US8498974B1 (en) 2009-08-31 2013-07-30 Google Inc. Refining search results
US8972391B1 (en) 2009-10-02 2015-03-03 Google Inc. Recent interest based relevance scoring
TWI409646B (en) * 2009-10-14 2013-09-21 Inst Information Industry Vocabulary translation system, vocabulary translation method and computer readable-writable storage medium of the same
US8874555B1 (en) 2009-11-20 2014-10-28 Google Inc. Modifying scoring data based on historical changes
US8615514B1 (en) 2010-02-03 2013-12-24 Google Inc. Evaluating website properties by partitioning user feedback
US8924379B1 (en) 2010-03-05 2014-12-30 Google Inc. Temporal-based score adjustments
US8959093B1 (en) 2010-03-15 2015-02-17 Google Inc. Ranking search results based on anchors
US8825648B2 (en) 2010-04-15 2014-09-02 Microsoft Corporation Mining multilingual topics
US8478699B1 (en) * 2010-04-30 2013-07-02 Google Inc. Multiple correlation measures for measuring query similarity
US8635205B1 (en) * 2010-06-18 2014-01-21 Google Inc. Displaying local site name information with search results
US9623119B1 (en) 2010-06-29 2017-04-18 Google Inc. Accentuating search results
US8832083B1 (en) 2010-07-23 2014-09-09 Google Inc. Combining user feedback
NZ702142A (en) 2010-08-05 2016-01-29 Christopher Galassi System and method for multi-dimensional knowledge representation
US8442987B2 (en) * 2010-08-19 2013-05-14 Yahoo! Inc. Method and system for providing contents based on past queries
US8959068B2 (en) * 2010-09-29 2015-02-17 International Business Machines Corporation Dynamic configuration of a persistence provider
US20120117102A1 (en) * 2010-11-04 2012-05-10 Microsoft Corporation Query suggestions using replacement substitutions and an advanced query syntax
EP2639706A4 (en) * 2010-11-10 2014-08-27 Rakuten Inc Related-word registration device, information processing device, related-word registration method, program for related-word registration device, recording medium, and related-word registration system
US10346479B2 (en) 2010-11-16 2019-07-09 Microsoft Technology Licensing, Llc Facilitating interaction with system level search user interface
US8515984B2 (en) * 2010-11-16 2013-08-20 Microsoft Corporation Extensible search term suggestion engine
US10073927B2 (en) 2010-11-16 2018-09-11 Microsoft Technology Licensing, Llc Registration for system level search user interface
US8862595B1 (en) * 2010-11-23 2014-10-14 Google Inc. Language selection for information retrieval
US8645289B2 (en) * 2010-12-16 2014-02-04 Microsoft Corporation Structured cross-lingual relevance feedback for enhancing search results
US9002867B1 (en) 2010-12-30 2015-04-07 Google Inc. Modifying ranking data based on document changes
US20120191745A1 (en) * 2011-01-24 2012-07-26 Yahoo!, Inc. Synthesized Suggestions for Web-Search Queries
KR101850124B1 (en) * 2011-06-24 2018-04-19 구글 엘엘씨 Evaluating query translations for cross-language query suggestion
US8713037B2 (en) * 2011-06-30 2014-04-29 Xerox Corporation Translation system adapted for query translation via a reranking framework
US8543563B1 (en) * 2012-05-24 2013-09-24 Xerox Corporation Domain adaptation for query translation
US9070303B2 (en) * 2012-06-01 2015-06-30 Microsoft Technology Licensing, Llc Language learning opportunities and general search engines
US8892596B1 (en) * 2012-08-08 2014-11-18 Google Inc. Identifying related documents based on links in documents
US9104733B2 (en) * 2012-11-29 2015-08-11 Microsoft Technology Licensing, Llc Web search ranking
US10108699B2 (en) * 2013-01-22 2018-10-23 Microsoft Technology Licensing, Llc Adaptive query suggestion
US9183499B1 (en) 2013-04-19 2015-11-10 Google Inc. Evaluating quality based on neighbor features
US10067913B2 (en) * 2013-05-08 2018-09-04 Microsoft Technology Licensing, Llc Cross-lingual automatic query annotation
US9558176B2 (en) 2013-12-06 2017-01-31 Microsoft Technology Licensing, Llc Discriminating between natural language and keyword language items
GB2542279A (en) * 2014-03-29 2017-03-15 Thomson Reuters Global Resources Improved method, system and software for searching, identifying, retrieving and presenting electronic documents
US20160012124A1 (en) * 2014-07-10 2016-01-14 Jean-David Ruvini Methods for automatic query translation
US10452786B2 (en) * 2014-12-29 2019-10-22 Paypal, Inc. Use of statistical flow data for machine translations between different languages
TWI712899B (en) 2015-07-28 2020-12-11 香港商阿里巴巴集團服務有限公司 Information query method and device
FR3040808B1 (en) * 2015-09-07 2022-07-15 Proxem METHOD FOR THE AUTOMATIC ESTABLISHMENT OF INTER-LANGUAGE REQUESTS FOR A SEARCH ENGINE
KR102407630B1 (en) * 2015-09-08 2022-06-10 삼성전자주식회사 Server, user terminal and a method for controlling thereof
CN105631425B (en) * 2015-12-29 2020-04-07 厦门科拓通讯技术股份有限公司 License plate recognition method and system based on video stream and intelligent digital camera
JP2017167659A (en) * 2016-03-14 2017-09-21 株式会社東芝 Machine translation device, method, and program
US10540357B2 (en) 2016-03-21 2020-01-21 Ebay Inc. Dynamic topic adaptation for machine translation using user session context
US20190034080A1 (en) * 2016-04-20 2019-01-31 Google Llc Automatic translations by a keyboard
CN106919642B (en) * 2017-01-13 2021-04-16 北京搜狗科技发展有限公司 Cross-language search method and device for cross-language search
US10942954B2 (en) * 2017-12-22 2021-03-09 International Business Machines Corporation Dataset adaptation for high-performance in specific natural language processing tasks
CN109388810A (en) * 2018-08-31 2019-02-26 北京搜狗科技发展有限公司 A kind of data processing method, device and the device for data processing
CN109582982A (en) * 2018-12-17 2019-04-05 北京百度网讯科技有限公司 Method and apparatus for translated speech

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6604101B1 (en) * 2000-06-28 2003-08-05 Qnaturally Systems, Inc. Method and system for translingual translation of query and search and retrieval of multilingual information on a computer network
US7146358B1 (en) * 2001-08-28 2006-12-05 Google Inc. Systems and methods for using anchor text as parallel corpora for cross-language information retrieval
US20070027905A1 (en) * 2005-07-29 2007-02-01 Microsoft Corporation Intelligent SQL generation for persistent object retrieval

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5301109A (en) 1990-06-11 1994-04-05 Bell Communications Research, Inc. Computerized cross-language document retrieval using latent semantic indexing
US5787410A (en) 1996-02-20 1998-07-28 Oracle Corporation Method and apparatus for storing and retrieving data in multiple languages simultaneously using a fully-populated sub-table
US5956740A (en) 1996-10-23 1999-09-21 Iti, Inc. Document searching system for multilingual documents
US6055528A (en) 1997-07-25 2000-04-25 Claritech Corporation Method for cross-linguistic document retrieval
US6081774A (en) 1997-08-22 2000-06-27 Novell, Inc. Natural language information retrieval system and method
KR980004126A (en) 1997-12-16 1998-03-30 양승택 Query Language Conversion Apparatus and Method for Searching Multilingual Web Documents
US6370498B1 (en) 1998-06-15 2002-04-09 Maria Ruth Angelica Flores Apparatus and methods for multi-lingual user access
JP3114703B2 (en) * 1998-07-02 2000-12-04 富士ゼロックス株式会社 Bilingual sentence search device
US6381598B1 (en) 1998-12-22 2002-04-30 Xerox Corporation System for providing cross-lingual information retrieval
JP3055545B1 (en) * 1999-01-19 2000-06-26 富士ゼロックス株式会社 Related sentence retrieval device
US6757646B2 (en) 2000-03-22 2004-06-29 Insightful Corporation Extended functionality for an inverse inference engine based web search
JP2003529845A (en) 2000-03-31 2003-10-07 アミカイ・インコーポレイテッド Method and apparatus for providing multilingual translation over a network
US20020111792A1 (en) 2001-01-02 2002-08-15 Julius Cherny Document storage, retrieval and search systems and methods
US7260570B2 (en) 2002-02-01 2007-08-21 International Business Machines Corporation Retrieving matching documents by queries in any national language
US7194455B2 (en) * 2002-09-19 2007-03-20 Microsoft Corporation Method and system for retrieving confirming sentences
US7149688B2 (en) 2002-11-04 2006-12-12 Speechworks International, Inc. Multi-lingual speech recognition with cross-language context modeling
US7558726B2 (en) 2003-05-16 2009-07-07 Sap Ag Multi-language support for data mining models
US7346487B2 (en) * 2003-07-23 2008-03-18 Microsoft Corporation Method and apparatus for identifying translations
JP3856778B2 (en) * 2003-09-29 2006-12-13 株式会社日立製作所 Document classification apparatus and document classification method for multiple languages
US7689412B2 (en) * 2003-12-05 2010-03-30 Microsoft Corporation Synonymous collocation extraction using translation information
US7620539B2 (en) * 2004-07-12 2009-11-17 Xerox Corporation Methods and apparatuses for identifying bilingual lexicons in comparable corpora using geometric processing
US7603353B2 (en) * 2004-10-27 2009-10-13 Harris Corporation Method for re-ranking documents retrieved from a multi-lingual document database
NZ592209A (en) 2005-01-04 2012-12-21 Thomson Reuters Glo Resources Method for for multilingual information retrieval
US7765098B2 (en) * 2005-04-26 2010-07-27 Content Analyst Company, Llc Machine translation using vector space representations
US20070022134A1 (en) 2005-07-22 2007-01-25 Microsoft Corporation Cross-language related keyword suggestion
US7818315B2 (en) * 2006-03-13 2010-10-19 Microsoft Corporation Re-ranking search results based on query log
CN101443759B (en) * 2006-05-12 2010-08-11 北京乐图在线科技有限公司 Multi-lingual information retrieval
US7720856B2 (en) 2007-04-09 2010-05-18 Sap Ag Cross-language searching
US7809714B1 (en) * 2007-04-30 2010-10-05 Lawrence Richard Smith Process for enhancing queries for information retrieval
US8799307B2 (en) * 2007-05-16 2014-08-05 Google Inc. Cross-language information retrieval


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012145521A1 (en) * 2011-04-21 2012-10-26 Google Inc. Localized translation of keywords
US8484218B2 (en) 2011-04-21 2013-07-09 Google Inc. Translating keywords from a source language to a target language
CN102495678A (en) * 2011-11-30 2012-06-13 左盼 Information display method and system based on input method
CN106372187A (en) * 2016-08-31 2017-02-01 中译语通科技(北京)有限公司 Cross-language retrieval method oriented to big data

Also Published As

Publication number Publication date
US20090024613A1 (en) 2009-01-22
US8051061B2 (en) 2011-11-01

Similar Documents

Publication Publication Date Title
US8051061B2 (en) Cross-lingual query suggestion
US7917488B2 (en) Cross-lingual search re-ranking
US7882097B1 (en) Search tools and techniques
US8543563B1 (en) Domain adaptation for query translation
US9542476B1 (en) Refining search queries
Carmel et al. Estimating the query difficulty for information retrieval
US8892550B2 (en) Source expansion for information retrieval and information extraction
JP5727512B2 (en) Cluster and present search suggestions
US8249855B2 (en) Identifying parallel bilingual data over a network
Gao et al. Cross-lingual query suggestion using query logs of different languages
US8332426B2 (en) Indentifying referring expressions for concepts
US7519528B2 (en) Building concept knowledge from machine-readable dictionary
US20130268519A1 (en) Fact verification engine
US20140149401A1 (en) Per-document index for semantic searching
US20130110839A1 (en) Constructing an analysis of a document
Gao et al. Exploiting query logs for cross-lingual query suggestions
EP2192503A1 (en) Optimised tag based searching
US20100010982A1 (en) Web content characterization based on semantic folksonomies associated with user generated content
Popescu et al. Social media driven image retrieval
Mahdabi et al. The effect of citation analysis on query expansion for patent retrieval
Cheng et al. Creating multilingual translation lexicons with regional variations using web corpora
Al-Eroud et al. Evaluating Google queries based on language preferences
Gong et al. Business information query expansion through semantic network
Toba et al. Enhanced unsupervised person name disambiguation to support alumni tracer study
Billerbeck Efficient query expansion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08782111

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08782111

Country of ref document: EP

Kind code of ref document: A1