US20050060138A1 - Language conversion and display - Google Patents

Language conversion and display

Info

Publication number
US20050060138A1
US20050060138A1 (application US10/898,407)
Authority
US
United States
Prior art keywords
text
language
phonetic
input
recited
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/898,407
Inventor
Jian Wang
Gao Zhang
Jian Han
Zheng Chen
Xiaoning Ling
Kai-Fu Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Priority to US10/898,407
Publication of US20050060138A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC reassignment MICROSOFT TECHNOLOGY LICENSING, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/018: Input/output arrangements for oriental characters
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/232: Orthographic correction, e.g. spell checking or vowelisation
    • G06F40/279: Recognition of textual entities
    • G06F40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/40: Processing or translation of natural language
    • G06F40/53: Processing of non-Latin text
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/08: Speech classification or search
    • G10L15/18: Speech classification or search using natural language modelling
    • G10L15/183: Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L15/187: Phonemic context, e.g. pronunciation rules, phonotactical constraints or phoneme n-grams

Definitions

  • the present invention relates to a language input user interface. More particularly, the present invention relates to a language input user interface that may be used in language-specific or multilingual word processing systems, email systems, browsers, and the like, where phonetic text is input and converted to language text.
  • Alphanumeric keyboards work well for languages that employ a small alphabet, such as the Roman character set.
  • Character-based languages are also referred to as symbol languages.
  • language specific keyboards do not exist for character-based languages because it is practically impossible to build a keyboard to support separate keys for so many different characters.
  • language-specific word processing systems allow the user to enter phonetic text from a small character-set keyboard (e.g., a QWERTY keyboard) and convert that phonetic text to language text of a character-based language.
  • “Phonetic text” represents the sounds made when speaking a given language
  • the “language text” represents the actual written characters as they appear in the text.
  • Pinyin is an example of phonetic text
  • Hanzi is an example of the language text.
  • the set of characters needed to express the phonetic text is much smaller than the character set used to express the language text.
  • Existing language input UIs are not very user friendly because they are not easy to learn and they do not accommodate a fast typing speed.
  • some conventional language input user interfaces disassociate the phonetic text input from the converted language text output. For instance, a user may enter phonetic text in one location on the visual display screen and the converted characters of the language text are presented in a separate and distinct location on the screen. The two locations may even have their own local cursor. This dual presentation can be confusing to the user in terms of where entry is actually taking place. Moreover, the user must continuously glance between the locations on the screen.
  • In general, there are two types of language input user interfaces: (1) a code-based user interface and (2) a mode-based user interface.
  • In a code-based user interface, users memorize codes related to words of the language. The codes are input by way of an input device and converted to the desired language text. This type of user interface allows users to input text very quickly once they have memorized the codes. However, the codes are often difficult to memorize and easy to forget.
  • In a mode-based user interface, phonetic text is input and converted to the desired language text.
  • Mode-based user interfaces do not require users to memorize codes, but typically require users to switch modes between inputting and editing language text.
  • One example of a mode-based user interface is employed in Microsoft's “Word”-brand word processing program, which is adapted for languages such as Chinese by utilizing phonetic-to-language conversion.
  • a user is presented with a localized tool bar that enables the user to switch between an inputting mode in which a user inputs phonetic characters (e.g., Chinese Pinyin) and an editing mode in which a user corrects inevitable errors that occasionally occur as a result of the recognition and conversion process.
  • a conversion error occurs when the recognition and conversion engine converts the phonetic text into an incorrect language character. This may be quite common due to the nature of a given language and the accuracy at which the phonetic text can be used to predict an intended character.
  • the user interface typically provides some way for the user to correct the character.
  • In Microsoft's “Word”-brand word processing program for China, for example, a user is presented with a box containing possible alternative characters. If the list is long, the box provides controls to scroll through the list of possible characters.
  • Another drawback of conventional language input UIs is mode switching for inputting different languages.
  • When a user is inputting phonetic text and wants to input text of a second language, the user has to switch modes to input the second language.
  • the localized tool bar offers a control button that enables a user to toggle between entry of a first language (e.g., Chinese Pinyin) and entry of a second language (e.g., English). The user must consciously activate the control to inform the word recognition engine of the intended language.
  • Another concern related to a language input UI is typing errors.
  • The average user of phonetic text input UIs is particularly prone to typographical errors.
  • One reason for the typing errors is that users from different geographic regions often use different dialects of a character-based language. Users misspell phonetic text due to their local dialects. A slight deviation in phonetic text can result in entirely different character text.
  • the present invention concerns a language input user interface that intelligently integrates phonetic text entered by a user and language text converted from the phonetic text into the same screen area.
  • the user interface is modeless in that it does not require a user to switch between input and edit modes.
  • the modeless user interface also accommodates entry of multiple languages without requiring explicit mode switching among the languages. As a result, the user interface is intuitive for users, easy to learn, and is user friendly.
  • the language input user interface includes an in-line input feature that integrates phonetic text with converted language text.
  • the phonetic text being input by a user is displayed in the same line concurrently with previously entered phonetic text and previously converted language text. Displaying input phonetic text in the same line with the previously converted language text allows users to focus their eyes in the same line, thereby making for a more intuitive and natural user interface.
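The in-line integration described above can be sketched in code. This is a hypothetical illustration only; the class and method names are not from the patent. The key point is that converted language text and still-unconverted phonetic text share a single display string, so the user's eyes stay on one line.

```python
# Hypothetical sketch of the in-line input area: converted language text C and
# pending phonetic text P are rendered together on a single line.

class InlineInputArea:
    def __init__(self):
        self.converted = []   # language text C1, C2, ... already converted
        self.pending = []     # phonetic text P1, P2, ... awaiting conversion

    def type_phonetic(self, p):
        self.pending.append(p)

    def convert_pending(self, language_text):
        """Replace the pending phonetic text with its converted language text."""
        self.converted.append(language_text)
        self.pending = []

    def render(self):
        # One line: previously converted text, then the phonetic text in progress.
        return "".join(self.converted) + "".join(self.pending)

area = InlineInputArea()
for p in ["ni", "hao"]:
    area.type_phonetic(p)
print(area.render())        # "nihao" - phonetic text shown in-line
area.convert_pending("你好")
area.type_phonetic("ma")
print(area.render())        # "你好ma" - new phonetic text follows converted text
```

Because both kinds of text come from the same buffer, the input cursor never needs to jump to a separate composition window.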
  • the language input UI supports language text editing operations including: 1) adding language text; 2) deleting language text; and 3) replacing selected language text with one or more replacement language text candidates.
  • the user interface allows a user to select language text and replace it by manually typing in new phonetic text that can then be converted to new language text.
  • the user interface provides one or more lists of language text candidates. A floating list is first presented in conjunction with the selected language text to be changed. In this manner, language text candidates are presented in-place within the sentence structure to allow the user to visualize the corrections in the grammatical context.
  • the list of candidates is presented in a sorted order according to a rank or score of the likelihood that the choice is actually the one originally intended by the user.
  • the hierarchy may be based on probability, character strokes, or other metrics.
  • the top candidate is the one that gives the sentence the highest score, followed by the second candidate that gives the sentence the next highest score, and so on.
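The candidate ordering described above can be sketched as follows. This is an illustrative stand-in, not the patent's implementation: each replacement candidate is substituted into the sentence, the sentence is scored, and candidates are presented highest-score first. The bigram table and scoring function are invented for the example.

```python
# Rank replacement candidates by the score each one gives the whole sentence.

def rank_candidates(sentence, position, candidates, sentence_score):
    """Return candidates sorted so the one giving the sentence the
    highest score comes first."""
    def score(candidate):
        trial = sentence[:position] + [candidate] + sentence[position + 1:]
        return sentence_score(trial)
    return sorted(candidates, key=score, reverse=True)

# Toy scoring model: a dictionary of bigram weights standing in for a
# real statistical language model.
BIGRAM = {("I", "saw"): 0.9, ("I", "sew"): 0.1,
          ("saw", "it"): 0.8, ("sew", "it"): 0.3}

def toy_score(words):
    return sum(BIGRAM.get(pair, 0.0) for pair in zip(words, words[1:]))

ranked = rank_candidates(["I", "sew", "it"], 1, ["saw", "sew"], toy_score)
print(ranked)  # ['saw', 'sew'] - "saw" yields the higher sentence score
```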
  • the list is updated within the context menu. Additionally, the currently visible choices are shown in animated movement in the direction of the scrolling action. The animation helps the user ascertain how much or how fast the list is being scrolled. Once the user selects the replacement text, it is inserted in place of the language text within the sentence, allowing the user to focus on a single line being edited.
  • Another feature of the language input UI is that it allows the user to view previously input phonetic text for the language text being edited. The user can select the previously input phonetic text and upon selection, the previously input phonetic text is displayed in place of the language text. The phonetic text can then be edited and converted to new language text.
  • Another feature of the language input user interface is a sentence-based automatic conversion feature.
  • a sentence-based automatic conversion previously converted language text within a sentence may be further converted automatically to different language text after inputting subsequent phonetic text. Once a sentence is complete, as indicated by a period, the language text in that sentence becomes fixed and is not further converted automatically to different language text as a result of entering input text in a subsequent sentence. It is appreciated that a phrase-based or similar automatic conversion can be used in an alternative embodiment.
  • Another feature of the language input user interface is sentence-based automatic conversion with language text confirmation. After phonetic text is converted to language text, a user can confirm the just converted language text so that the just converted language text will not be further converted automatically in view of the context of the sentence.
  • Another feature of the language input user interface is the ability to handle multiple languages without switching modes. Words or symbols of a second language, when intermixed with the phonetic text, are treated as special language input text and displayed as the second language text. Thus, users are not required to switch modes when inputting different languages.
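The modeless multi-language handling described above can be sketched as a routing decision: input that cannot be a valid phonetic syllable is treated as second-language text and displayed as-is, so the user never toggles an input mode. The syllable set below is a tiny illustrative subset of Pinyin, and all names are assumptions made for the example.

```python
# Route each token: phonetic text goes to the converter; anything else
# (second-language words, symbols) is displayed unchanged.

PINYIN_SYLLABLES = {"ni", "hao", "ma", "wo", "shi"}  # illustrative subset

def classify_token(token):
    """Return ('phonetic', token) for convertible input,
    ('literal', token) for second-language text passed through as-is."""
    if token.lower() in PINYIN_SYLLABLES:
        return ("phonetic", token)
    return ("literal", token)

tokens = ["ni", "hao", "Windows", "ma"]
print([classify_token(t) for t in tokens])
# [('phonetic', 'ni'), ('phonetic', 'hao'), ('literal', 'Windows'), ('phonetic', 'ma')]
```

A real system would make this decision statistically rather than with a fixed lexicon, but the effect is the same: no explicit mode switch.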
  • FIG. 1 is a block diagram of a computer system having a language-specific word processor that implements a language input architecture.
  • the language input architecture includes a language input user interface (UI).
  • FIG. 2 is a diagrammatic illustration of a screen display of one implementation of a language input user interface.
  • FIG. 2 illustrates an in-line input feature of the language input UI.
  • FIG. 3 is a diagrammatic illustration of a screen display of the language input UI, which shows an automatic conversion feature.
  • FIG. 4 is a diagrammatic illustration of a screen display of the language input UI, which shows a sentence-based automatic conversion feature.
  • FIG. 5 is a diagrammatic illustration of a screen display of the language input UI, which shows an in-place error correction feature and a phonetic text hint feature.
  • FIG. 6 is a diagrammatic illustration of a screen display of the language input UI, which shows a second candidate list feature.
  • FIG. 7 is a diagrammatic illustration of a screen display of the language input UI, which shows an in-place phonetic text correction feature.
  • FIG. 8 is a diagrammatic illustration of a screen display of the language UI, which shows a subsequent screen of the in-place phonetic text correction of FIG. 7 .
  • FIG. 9 is a diagrammatic illustration of a screen display of the language UI, which shows a subsequent screen of the in-place phonetic text correction of FIGS. 7 and 8 .
  • FIG. 10 is a diagrammatic illustration of a screen display of the language UI, which shows entry of mixed text containing multiple different languages.
  • FIG. 11 is a flow diagram of a method for inputting text using a language input user interface.
  • FIG. 12 is a flow diagram of an in-line input sub-process.
  • FIG. 13 is a flow diagram of an automatic conversion sub-process.
  • FIG. 14 is a flow diagram of an automatic conversion sub-process with confirmed character text.
  • FIG. 15 is a flow diagram of an in-place error correction sub-process.
  • FIG. 16 is a flow diagram of an in-place error correction sub-process with a second candidate list.
  • FIG. 17 is a flow diagram of a phonetic text hint sub-process.
  • FIG. 18 is a flow diagram of an in-place phonetic text correction sub-process.
  • FIG. 19 is a flow diagram of an in-line inputting mixed language text sub-process.
  • FIG. 20 illustrates exemplary user inputs and resulting screen shots of an exemplary Chinese input user interface, which shows an example of an in-line input feature.
  • FIG. 21 illustrates an exemplary screen display of an exemplary Chinese input user interface, which shows an example of a Pinyin text hint feature.
  • FIG. 22 illustrates exemplary user inputs and resulting screen shots of an exemplary Chinese input user interface, which shows an example of an in-place error correction feature.
  • FIG. 23 illustrates exemplary user inputs and resulting screen shots of an exemplary Chinese input user interface, which shows an example of an in-place Pinyin text correction feature.
  • FIG. 24 illustrates exemplary user inputs and resulting screen shots of an exemplary Chinese input user interface, which shows an example of a mixture input of English/Chinese language feature.
  • FIG. 25 illustrates exemplary user inputs and resulting screen shots of an exemplary Chinese input user interface, which shows an example of a second candidate list feature.
  • FIG. 26 illustrates exemplary user inputs and resulting screen shots of an exemplary Chinese input user interface, which shows an example of a sentence-based automatic conversion with a character confirmation feature.
  • FIG. 27 illustrates a definition of phonetic text (e.g. Chinese Pinyin text) and its corresponding character text (e.g. Chinese character text), and a definition of non-phonetic text (e.g. alphanumeric text).
  • the present invention concerns a language input user interface that facilitates phonetic text input and conversion to language text.
  • the invention is described in the general context of word processing programs executed by a general-purpose computer.
  • the invention may be implemented in many different environments other than word processing (e.g., email systems, browsers, etc.) and may be practiced on many diverse types of devices.
  • FIG. 1 shows an exemplary computer system 100 having a central processing unit (CPU) 102 , a memory 104 , and an input/output (I/O) interface 106 .
  • the CPU 102 communicates with the memory 104 and I/O interface 106 .
  • the memory 104 is representative of both volatile memory (e.g., RAM) and non-volatile memory (e.g., ROM, hard disk, etc.).
  • the computer system 100 has one or more peripheral devices connected via the I/O interface 106 .
  • Exemplary peripheral devices include a mouse 110 , a keyboard 112 (e.g., an alphanumeric QWERTY keyboard, a phonetic keyboard, etc.), a display monitor 114 , a printer 116 , a peripheral storage device 118 , and a microphone 120 .
  • the computer system may be implemented, for example, as a general-purpose computer. Accordingly, the computer system 100 implements a computer operating system (not shown) that is stored in memory 104 and executed on the CPU 102 .
  • the operating system is preferably a multi-tasking operating system that supports a windowing environment.
  • An example of a suitable operating system is a Windows brand operating system from Microsoft Corporation.
  • Although illustrated as a standalone system in FIG. 1, the language input UI may be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network (e.g., LAN, Internet, etc.).
  • program modules may be located in both local and remote memory storage devices.
  • a data or word processing program 130 is stored in memory 104 and executes on CPU 102 . Other programs, data, files, and such may also be stored in memory 104 , but are not shown for ease of discussion.
  • the word processing program 130 is configured to receive phonetic text and convert it automatically to language text. More particularly, the word processing program 130 implements a language input architecture 131 that, for discussion purposes, is implemented as computer software stored in memory and executable on a processor.
  • the word processing program 130 may include other components in addition to the architecture 131 , but such components are considered standard to word processing programs and will not be shown or described in detail.
  • the language input architecture 131 of word processing program 130 has a user interface (UI) 132 , a search engine 134 , a language model 136 , and a typing model 137 .
  • the architecture 131 is language independent.
  • the UI 132 and search engine 134 are generic and can be used for any language.
  • the architecture 131 is adapted to a particular language by changing the language model 136 and the typing model 137 .
  • The architecture 131 is described in greater detail in related applications Ser. No. 09/606,660, entitled “Language Input Architecture For Converting One Text Form to Another Text Form With Tolerance To Spelling, Typographical, And Conversion Errors”, and Ser. No. 09/606,807, entitled “Language Input Architecture For Converting One Text Form to Another Text Form With Modeless Entry”, which are incorporated herein by reference.
  • the search engine 134, language model 136, and typing model 137 together form a phonetic text-to-language text converter 138.
  • text means one or more characters and/or non-character symbols.
  • Phonetic text generally refers to an alphanumeric text representing sounds made when speaking a given language.
  • a “language text” is the characters and non-character symbols representative of a written language.
  • Non-phonetic text is alphanumeric text that does not represent sounds made when speaking a given language. Non-phonetic text might include punctuation, special symbols, and alphanumeric text representative of a written language other than the language text.
  • FIG. 27 shows an example of phonetic text, converted language text, and non-phonetic text.
  • the phonetic text is Chinese Pinyin text, which is translated to “hello”.
  • the exemplary character text is Chinese Hanzi text, which is also translated to “hello”.
  • the exemplary non-phonetic text is a string of alphanumeric symbol text “@3m”.
  • word processor 130 is described in the context of a Chinese-based word processor and the language input architecture 131 is configured to convert Pinyin to Hanzi. That is, the phonetic text is Pinyin and the language text is Hanzi.
  • the language input architecture is language independent and may be used for other languages.
  • the phonetic text may be a form of spoken Japanese
  • the language text is representative of a Japanese written language, such as Kanji.
  • Other examples include Arabic languages, the Korean language, Indian languages, other Asian languages, and so forth.
  • phonetic text may be any alphanumeric text represented in a Roman-based character set (e.g., English alphabet) that represents sounds made when speaking a given language that, when written, does not employ the Roman-based character set.
  • Language text is the written symbols corresponding to the given language.
  • Phonetic text is entered via one or more of the peripheral input devices, such as the mouse 110 , keyboard 112 , or microphone 120 .
  • the computer system may further implement a speech recognition module (not shown) to receive the spoken words and convert them to phonetic text.
  • the UI 132 displays the phonetic text as it is being entered.
  • the UI is preferably a graphical user interface.
  • the user interface 132 passes the phonetic text (P) to the search engine 134 , which in turn passes the phonetic text to the typing model 137 .
  • the typing model 137 generates various typing candidates (TC 1 , . . . , TC N ) that might be suitable edits of the phonetic text intended by the user, given that the phonetic text may include errors.
  • the typing model 137 returns the typing candidates to the search engine 134, which passes them on to the language model 136.
  • the language model 136 generates various conversion candidates (CC 1 , . . . , CC N ) associated with the typing candidates. Conversion from phonetic text to language text is not a one-for-one conversion. The same or similar phonetic text might represent a number of characters or symbols in the language text. Thus, the context of the phonetic text is interpreted before conversion to language text. On the other hand, conversion of non-phonetic text is typically a direct one-to-one conversion wherein the alphanumeric text displayed is the same as the alphanumeric input.
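The one-to-many nature of the conversion can be made concrete with a small example. The mapping below is a tiny illustrative homophone table (real inventories contain thousands of characters per language), invented for this sketch.

```python
# The same phonetic syllable maps to many language-text characters,
# so context must select among the candidates.

HOMOPHONES = {
    "ma": ["妈", "马", "吗", "骂"],   # mother, horse, question particle, scold
    "shi": ["是", "十", "时", "事"],  # to be, ten, time, matter
}

print(len(HOMOPHONES["ma"]))  # 4 candidate characters for a single syllable
```

This is why the architecture scores whole-sentence context rather than converting syllable by syllable.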
  • the conversion candidates (CC 1 , . . . , CC N ) are passed back to the search engine 134 , which performs statistical analysis to determine which of the typing and conversion candidates exhibit the highest probability of being intended by the user.
  • the search engine 134 selects the candidate with the highest probability and returns the language text of the conversion candidate to the UI 132 .
  • the UI 132 then replaces the phonetic text with the language text of the conversion candidate in the same line of the display. Meanwhile, newly entered phonetic text continues to be displayed in the line ahead of the newly inserted language text.
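The pipeline just described (UI to search engine, typing model proposing corrected spellings, language model proposing readings, search engine selecting the most probable result) can be sketched as below. All model internals, dictionaries, and probabilities here are invented stand-ins for the statistical components the patent describes.

```python
# Hedged sketch of the phonetic-to-language conversion pipeline.

def typing_model(phonetic):
    # Propose plausible corrections of possibly mistyped phonetic text
    # (typing candidates TC1..TCN). Toy lookup table for illustration.
    corrections = {"nihoa": ["nihao"], "nihao": ["nihao"]}
    return corrections.get(phonetic, [phonetic])

def language_model(typing_candidate):
    # Propose (language_text, probability) readings for a typing candidate
    # (conversion candidates CC1..CCN). Toy lookup table for illustration.
    readings = {"nihao": [("你好", 0.95), ("尼好", 0.02)]}
    return readings.get(typing_candidate, [(typing_candidate, 0.01)])

def search_engine(phonetic):
    """Return the language text with the highest probability over all
    typing and conversion candidates."""
    best = max(
        (cc for tc in typing_model(phonetic) for cc in language_model(tc)),
        key=lambda pair: pair[1],
    )
    return best[0]

print(search_engine("nihoa"))  # 你好 - the typo is corrected, then converted
```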
  • the user interface 132 presents a first list of other high probability candidates ranked in order of the likelihood that the choice is actually the intended answer. If the user is still dissatisfied with the possible candidates, the UI 132 presents a second list that offers all possible choices.
  • the second list may be ranked in terms of probability or other metric (e.g., stroke count or complexity in Chinese characters).
  • the user interface 132 visually integrates the display of inputted phonetic text along with converted language text in the same line on the screen.
  • Many of the features are described in the context of how they visually appear on a display screen, such as presence and location of a window or a menu or a cursor. It is noted that such features are supported by the user interface 132 alone or in conjunction with an operating system.
  • FIGS. 2-10 illustrate various screen displays of one exemplary implementation of the language input user interface 132 .
  • Symbol “P” is used throughout FIGS. 2-10 to represent phonetic text that has been input and displayed in the UI, but has not yet been converted into language text.
  • Symbol “C” represents converted language text that has been converted from input phonetic text P. Subscripts are used with each of the phonetic text P, for example, P 1 , P 2 , . . . P N , and the converted language text C, for example, C 1 , C 2 , . . . C N , to represent individual ones of the phonetic and converted language text.
  • FIG. 2 shows a screen display 200 presented by the language input UI 132 alone, or in conjunction with an operating system.
  • the screen display 200 resembles a customary graphical window, such as those generated by Microsoft's Windows brand operating system.
  • the graphical window is adapted for use in the context of language input, and presents an in-line input area 202 in which phonetic text is entered and subsequently converted to language text.
  • the in-line area 202 is represented pictorially by the parallel dashed lines.
  • An input cursor 204 marks the present position where the next phonetic text input will occur.
  • the graphical UI may further include a plurality of tool bars, such as tool bars 206, 208, 210, 212, or other functional features depending on the application, e.g., word processor, data processor, spreadsheet, Internet browser, email, operating system, etc. Tool bars are generally known in the word or data processing art and will not be described in detail.
  • the in-line input area 202 integrates input of phonetic text P and output of the converted language text C. This advantageously allows the user to focus attention on a single area of the screen.
  • the phonetic text P is presented in-line in a first direction (e.g., horizontal across the screen).
  • the input cursor 204 is positioned by or in alignment with the converted language text C 1 C 2 and the input phonetic text P 1 P 2 P 3 . In FIG. 2 , the input sequence is from left to right and the input cursor 204 is positioned at the right side of the previously input phonetic text P 1 P 2 P 3 .
  • the language input UI is capable of in-line input in virtually any direction including, but not limited to, vertically, diagonally, etc.
  • Other in-line formats are conceivable including various three dimensional formats wherein the in-line input feature might appear to the user to extend away or toward the user.
  • the converter 138 automatically converts the phonetic text to converted language text C.
  • Typically, a few phonetic text elements P (e.g., one to six) are entered before the phonetic text P is converted to language text C.
  • the converted language text C is presented in the same line as the phonetic text P, as indicated by the in-line area 202 .
  • the most recently input phonetic text P is displayed in-line with the previously converted language text C.
  • phonetic text P 1 P 2 P 3 is displayed in-line with the most recently converted language text C 1 C 2 . Displaying input phonetic text P in the same line with previously converted language text C allows users to focus their eyes in the same line, thereby making the input process more intuitive and natural, as well as allowing faster input.
  • the user interface automatically converts the phonetic text P in real time to language text C without the user having to switch modes. As shown in the example of FIG. 3 , as soon as the user enters phonetic text P 4 , the previous phonetic text P 1 P 2 P 3 is automatically converted to language text C 3 . The user continues inputting phonetic text P 4 P 5 P 6 P 7 without having to switch modes or hesitating.
  • Conversion from phonetic text to language text is an automatic process controlled by the language model 136 .
  • the language text C 3 is selected as having the highest probability among all possible language text and so is used in the automatic conversion. However, the more a user types, the greater the context considered. Accordingly, language text C 3 might be changed to different language text upon further entry of phonetic text such as P 4 P 5 P 6 P 7 .
  • the language input architecture 131 may be configured to minimize how often the converted language text is changed in response to entry of additional input text.
  • the converted language text could change with each entered character of input text, essentially flipping among two or more possible interpretations that have approximately equal likelihood of being intended by the user in the given context.
  • the constant flipping of language text might become visually distracting to the user.
  • the converter 138 may implement one or more probabilistic-based rules that stipulate maintaining the current language text unless there is a significant likelihood that another context is intended. In this way, the converter 138 is reluctant to change the converted language text to a second language text when the second language text is only slightly better from a statistical standpoint. The degree of significance varies with the context. As an example, the converter 138 may be configured to modify the language text only when the modified language text is at least five percentage points more likely than the language text it is replacing.
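The reluctance rule just described can be sketched as a simple hysteresis check. The five-point margin mirrors the example given in the text; the function name and structure are assumptions made for illustration, and a real converter would apply such a rule inside its statistical search.

```python
# Keep the currently displayed language text unless a rival reading is
# significantly more likely, avoiding distracting flip-flops.

MARGIN = 0.05  # rival must be at least five percentage points more likely

def maybe_reconvert(current_text, current_prob, rival_text, rival_prob):
    """Switch to the rival reading only if it clears the margin."""
    if rival_prob >= current_prob + MARGIN:
        return rival_text, rival_prob
    return current_text, current_prob

# Near-equal readings: keep the current text, no visual churn.
print(maybe_reconvert("C3", 0.48, "C3'", 0.50))
# Clearly better reading: switch.
print(maybe_reconvert("C3", 0.48, "C3'", 0.55))
```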
  • the automatic conversion from phonetic text P to language text C is a sentence-based automatic conversion.
  • the language text C in that sentence will not be further converted automatically to different language text C when inputting phonetic text P in a subsequent sentence.
  • the sentence-based automatic conversion feature significantly reduces users' typing errors, as well as prevents a previous sentence from being continuously reconverted automatically.
  • a sentence can be defined in many other ways.
  • a sentence can be defined as a string of text within certain predefined punctuation, such as a string of text between two periods, a string of text between various predefined punctuation, a string of text containing certain text elements, and so forth.
  • Once a user enters punctuation, the string of text entered between that punctuation and a previous punctuation, if any, may be treated as a sentence.
  • the string of converted language text C in that sentence is not further automatically converted as the user inputs phonetic text in subsequent sentences.
  • the automatic conversion can be based on two or more sentences if desired.
  • FIG. 4 illustrates the screen display 200 at a point in which a sentence is confirmed by way of punctuation. Entry of punctuation, in addition to confirming a sentence, will typically result in the phonetic text P at the end of the sentence being automatically converted to language text C. For example, as shown in FIG. 4 , once a comma 400 is entered, phonetic text P 4 P 5 P 6 P 7 is converted to language text C 4 . The string of language text C 1 C 2 C 3 C 4 is now treated as a sentence. Converted language text C 1 C 2 C 3 C 4 will no longer be automatically further converted.
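  • The punctuation-delimited sentence definition above can be sketched as follows. This is an illustrative Python sketch only; the delimiter set and the function name are assumptions, and a real implementation would use whatever predefined punctuation the architecture is configured with.

```python
# Assumed delimiter set; the disclosure permits any predefined punctuation.
SENTENCE_PUNCTUATION = {".", ",", "!", "?", "\u3002", "\uff0c"}

def split_confirmed(text):
    """Split entered text into confirmed sentences and the active tail.

    Everything up to and including the last delimiter is treated as
    confirmed: its converted language text is no longer re-converted.
    Only the tail after the last delimiter remains open to change.
    """
    last = max((text.rfind(p) for p in SENTENCE_PUNCTUATION), default=-1)
    if last == -1:
        return "", text
    return text[:last + 1], text[last + 1:]
```

Entering a comma thus freezes everything before it, while later phonetic text is converted only within the new sentence's context.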
  • a user can expressly confirm one or more of the converted language text C following its conversion from entered phonetic text P.
  • a user can confirm the just converted language text C by entry of a user command at the keyboard (e.g., a space bar entry) so that the just converted language text C will not be further automatically converted in view of the context of the sentence.
  • the language input architecture 131 is designed to contemplate when to convert the phonetic text to the language text.
  • conversion is made when the converter is sufficiently confident that the converted language text was intended by the user.
  • Characterized in the UI context, the issue becomes how many characters of the phonetic text should be displayed at any one time such that eventual conversion results in highly likely language text that is unlikely to be modified as the user enters more phonetic text. Converting too soon results in more errors in the converted language text, thereby forcing the user to correct the converted language text more often. Converting too late creates a distraction in that the user is presented with long strings of phonetic text rather than the desired language text.
  • the language input architecture may be configured to defer conversion until an optimum number of phonetic characters has been entered to ensure high conversion accuracy.
  • the architecture is designed to defer selecting and displaying converted language text in place of the phonetic text until after entry of a minimum number of characters and before entry of a maximum number of characters.
  • a language input architecture tailored for Chinese might be configured to convert Pinyin text to Hanzi text when at least one Pinyin character and at most six Pinyin characters have been entered and displayed in the UI.
  • the language input architecture implements a set of rules to determine, for a given context, the optimum number of phonetic characters that may be entered prior to selecting and displaying the converted language text.
  • the rules may be summarized as follows:
  • Rule 1: Always display the last (i.e., most recently entered) input character.
  • Rule 2: After entry and display of multiple input characters, evaluate the top N conversion candidates for one or more characters in the candidates that may match. If at least one converted character is the same for all N conversion candidates, convert at least one input character forming part of the input text to the matching converted character(s) in the output text.
  • Rule 3: If the first most likely conversion candidate scores significantly higher than the second most likely conversion candidate, convert at least one input character to the character(s) of the first conversion candidate.
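  • Rules 2 and 3 above can be combined in a minimal Python sketch. This is illustrative only: the score-margin threshold and all names are assumptions, and Rule 1 (always displaying the last input character) is assumed to be handled by the UI rather than by this routine.

```python
def conversion_prefix(candidates):
    """Decide which leading characters may be converted and displayed.

    `candidates` holds the top-N conversion hypotheses as
    (converted_text, score) pairs, ordered best first.
    Rule 3: a candidate that scores far above the runner-up is
    converted outright. Rule 2: otherwise, convert only the longest
    prefix on which every candidate agrees.
    """
    if not candidates:
        return ""
    best_text, best_score = candidates[0]
    # Rule 3: clear winner (margin of 2.0 is an assumed threshold).
    if len(candidates) == 1 or best_score - candidates[1][1] >= 2.0:
        return best_text
    # Rule 2: shared prefix across all candidates.
    prefix = []
    for chars in zip(*(text for text, _ in candidates)):
        if len(set(chars)) != 1:
            break
        prefix.append(chars[0])
    return "".join(prefix)
```

When candidates "ABC" and "ABD" score nearly equally, only "AB" would be converted; the disputed third character remains as phonetic text.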
  • FIGS. 5-9 illustrate an exemplary implementation of the modeless editing features supported by the architecture.
  • the user interface enables a user to seamlessly transition from input mode to edit mode without an explicit mode switch operation.
  • the edit mode supports traditional editing functions such as addition, deletion, and replacement of language text.
  • the present invention allows replacement of language text by inputting new phonetic text or by selection of replacement language text from a list of at least one replacement language text candidate.
  • FIG. 5 shows a screen display 200 with various edit features.
  • the user has confirmed the language text C 1 C 2 C 3 C 4 (previously shown in FIG. 4 ) by entering punctuation 400 and now wishes to edit the confirmed language text C 1 C 2 C 3 C 4 .
  • the user repositions the cursor 204 to a desired location within the confirmed language text C 1 C 2 C 3 C 4 . Cursor positioning can be accomplished in many different ways, including but not limited to, arrow keys, mouse click, or verbal command.
  • FIG. 5 illustrates the cursor 204 repositioned in front of the language character C 3 to select this character for editing.
  • the user enters one or more user commands to invoke an edit window or box 500 that is superimposed on or about the in-line area 202 at the point in the text containing the character(s) to be edited.
  • the user command can be accomplished in any of several manners that are well known in the art, including but not limited to, depressing an escape key “ESC” on keyboard 112 .
  • the edit window or box 500 pops up adjacent to the language character C 3 in a second direction (e.g., vertical) orthogonal to the first direction (e.g., horizontal) of the in-line text.
  • the pop-up edit window 500 has two parts: an input text hint window 502 and a scrollable candidate window 504 . These parts are preferably invoked simultaneously by a common user command.
  • the corresponding phonetic text P 1 P 2 P 3 for the character C 3 , which was previously input by a user, appears in the input text hint window 502 directly above and in vertical alignment with the language character C 3 being edited. Displaying the input phonetic text P 1 P 2 P 3 allows a user to see what they previously entered for the language text C 3 and to edit it if necessary.
  • the input text hint window 502 has a scroll up bar 506 disposed at the top. Activation of this scroll up bar 506 causes the phonetic text P 1 P 2 P 3 to slide into the sentence and replace the language text character C 3 .
  • the candidate window 504 contains a scrollable list of at least one replacement language text candidate C 3 a , C 3 b , C 3 c , C 3 d , having the same or similar phonetic text as the language text C 3 .
  • the candidate window 504 is arranged orthogonal to the in-line input area 202 containing the language text C 1 C 2 C 3 C 4 and directly below and in vertical alignment with the language character C 3 .
  • a superscript is used to represent different language text characters, such as C 3 a , C 3 b , C 3 c , and C 3 d .
  • a scroll down bar 508 is presented at the bottom of candidate window 504 .
  • a user can select (e.g., click on) the scroll down bar 508 to view additional replacement language text.
  • One feature of the in-place windows 502 and 504 is that the scrolling operation can be animated to demonstrate the candidates or text moving up or down. This gives the user visual feedback that the list is being scrolled one item at a time.
  • the phonetic text P 1 P 2 P 3 in the input text hint window 502 and the replacement language text candidates C 3 a , C 3 b , C 3 c , C 3 d in the candidate window 504 are additionally referenced by numbers 0, 1, 2, 3, 4, as shown.
  • the numbering method of the replacement language text and the size of the candidate window 504 can be implemented in different ways.
  • the candidate window 504 has a limited size and lists only the top four highest probabilities of replacement language text.
  • the language text candidates C 3 a , C 3 b , C 3 c , C 3 d in the candidate window 504 are preferably arranged in some order or ranking. For instance, an order may be based on a probability or likelihood that the candidate is actually the one intended by the user originally. This probability is computed by the search engine 134 , in conjunction with candidates returned by the language model 136 . If the probability of one replacement language text in a given context is higher than the probability of another replacement language text in the given context, the replacement language text with the higher probability is displayed closer to the language text to be edited and with a lower reference number.
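  • The probability-based ordering of the first candidate window can be sketched as follows. The function name and signature are illustrative assumptions; reference number 0 is reserved for the phonetic text in the hint window, as in the numbering scheme described above.

```python
def rank_candidates(candidates, top_k=4):
    """Order replacement candidates so the most probable one sits
    closest to the character being edited and receives the lowest
    reference number (0 is reserved for the phonetic text itself).

    `candidates` maps each replacement language text to the probability
    computed by the search engine in the current context.
    """
    ordered = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    return [(i + 1, text) for i, (text, _) in enumerate(ordered[:top_k])]
```

With the limited window size described above, only the top four replacements would be listed; scrolling reveals the rest.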
  • a user can optionally select the phonetic text P 1 P 2 P 3 or select one of the replacement language text candidates C 3 a , C 3 b , C 3 c , C 3 d by entering the appropriate reference number to replace the character text C 3 , or through other common techniques (e.g., point-and-click on the selected option). The selected replacement is then substituted for the character C 3 in the in-line text.
  • the pop-up edit window 500 can be configured to automatically disappear, leaving the corrected text.
  • the user may explicitly close the text hint window 502 and the candidate window 504 using conventional methods, such as a mouse click outside the windows 502 and 504 .
  • the text replacement feature implemented by in-place windows 502 and 504 is referred to as the in-place error correction feature.
  • the selected phonetic text P 1 P 2 P 3 or the selected one of the replacement language text C 3 a , C 3 b , C 3 c , C 3 d is displayed in-place of the language text C 3 that is to be replaced.
  • the in-place error correction feature allows a user to focus generally proximate a string of language text containing the language text to be edited.
  • FIG. 6 illustrates a screen display 200 similar to that shown in FIG. 5 , but also showing a second candidate window 600 separate from and adjacent to the first candidate window 504 .
  • the second candidate window 600 lists a larger or perhaps complete list of replacement language text that has the same or similar phonetic text as the corresponding phonetic text P 1 P 2 P 3 of the character text C 3 to be edited.
  • the phonetic text P 1 P 2 P 3 in the input text hint window 502 and the replacement language text C 3 a , C 3 b , C 3 c , C 3 d in the candidate window 504 are also listed in the second candidate window 600 .
  • only the additional replacement candidates are listed in the second candidate window 600 .
  • a user enters a command, such as depressing a right arrow key on the keyboard while active in the candidate window 504 .
  • the user can then select a desired replacement language text by a suitable command, such as a mouse click or a key entry.
  • a user may move a focus 602 from text character to text character.
  • the candidates in second candidate window 600 may also be arranged in some order, although not necessarily according to the same ranking technique used for the first candidate window 504 .
  • sorting by probability score, as is done with the candidates in the first candidate window 504 , may not be as useful for the full candidate window 600 because the variations between many candidates are small and somewhat meaningless. The user may have no intuitive feel for locating a particular candidate in this setting. Accordingly, the second candidate window 600 attempts to rank the candidates in some other way that allows intuitive discovery of the desired candidate.
  • One metric that may be used to rank the candidates in the second candidate window 600 is a measure of the complexity of a character or symbol.
  • the candidates may be listed according to the number of strokes required to form the candidate.
  • a stroke order imposes some tangible feel for a user who is hunting for a desired language text. The user can quickly glance to a particular area of the window 600 that holds characters of seemingly similar complexity.
  • This ranking metric is not intended to cause the user to count or know a precise number of strokes, but only to give a strong, consistent, and visually recognizable sorting order.
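  • The stroke-count ordering for the full candidate window can be sketched as follows. All names and the tiny stroke-count table are illustrative assumptions; in practice the counts would come from a character database such as the Unihan kTotalStrokes data.

```python
def order_by_strokes(candidates, stroke_counts):
    """Sort the full candidate list by visual complexity (stroke count),
    so the user can glance at the region of the window holding
    characters of seemingly similar complexity. Python's sort is
    stable, so ties keep their original relative order.
    """
    return sorted(candidates, key=lambda ch: stroke_counts.get(ch, 0))
```

Simple characters thereby cluster at one end of the window and intricate ones at the other, giving the consistent, visually recognizable order described above.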
  • a user enters a command such as a key entry at the keyboard or a mouse click outside the window 600 .
  • FIGS. 7-9 show a sequence of screen displays 200 at various instances to illustrate an in-place phonetic text correction of the phonetic text P 1 P 2 P 3 shown in FIG. 5 .
  • a user determines that the phonetic text P 1 P 2 P 3 in the input text hint window 502 is incorrect.
  • the correct phonetic text should be P 1 a P 2 P 3 .
  • the user first selects the phonetic text P 1 P 2 P 3 from the input text hint window 502 .
  • FIG. 7 shows that the selected phonetic text P 1 P 2 P 3 is displayed in place of the text character C 3 being edited.
  • the user can then edit the phonetic text by changing P 1 to P 1 a .
  • FIG. 8 shows the UI after the phonetic text is changed to P 1 a .
  • the text hint window 502 is also updated to reflect the change.
  • at least one new replacement language text C 3 j having the same or similar edited phonetic text P 1 a P 2 P 3 is displayed in the candidate window 504 .
  • the user can then select the replacement language text (e.g., C 3 j ) in the candidate window 504 .
  • FIG. 9 shows the selected replacement text C 3 j substituted for the edited phonetic text P 1 a P 2 P 3 .
  • the edited phonetic text can be automatically converted to the most probable new replacement language text.
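  • Refreshing the candidate list after a phonetic edit can be sketched as follows. The toy lexicon structure, mapping each language text to a (phonetic spelling, probability) pair, and all names are illustrative assumptions rather than the disclosed data model.

```python
def replacements_for(edited_phonetic, lexicon):
    """Return language-text candidates whose phonetic spelling matches
    the edited phonetic text, best first, so the candidate window can
    be refreshed after an in-place correction such as changing
    P1P2P3 to P1aP2P3.
    """
    matches = [(text, p) for text, (phon, p) in lexicon.items()
               if phon == edited_phonetic]
    return [text for text, _ in
            sorted(matches, key=lambda tp: tp[1], reverse=True)]
```

The first entry of the returned list corresponds to the most probable new replacement, which the architecture may also substitute automatically as noted above.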
  • the language input architecture may be further configured to distinguish between two or more languages.
  • the first language is detected as phonetic text and converted to language text, whereas the second language is detected as non-phonetic text and kept as is.
  • the UI 132 presents the two languages concurrently in the same line as the user enters text. The technique advantageously eliminates the need to switch between two input modes when inputting multi-language text. As far as a user is concerned, the user interface is modeless.
  • FIG. 10 illustrates the screen display 200 of the user interface and demonstrates an integrated handling and presentation of mixed text of two different languages.
  • Symbol “A” represents characters of a second language text.
  • Second language A is a non-phonetic language wherein the second language text A is displayed as input by the user.
  • the first language is Chinese Hanzi and the second language is English. It will be appreciated that the multiple languages might be any number of different languages.
  • a user might input mixed language text, one of which is phonetic text P (e.g., Pinyin) convertible to language text C (e.g., Hanzi).
  • the phonetic text P of the character-based language is displayed in-line with the language text A until the phonetic text P is automatically converted to language text C, which is displayed in-line with the language text A of the second language.
  • FIG. 10 illustrates the input phonetic text P, the converted language text C, and the second language text A within the same in-line area 202 .
  • Different fonts or colors may be used to distinguish between the phonetic text P and the non-phonetic text A.
  • the phonetic text P is displayed in a first font or color
  • the non-phonetic text A is displayed in a second font or color that is different from the first font or color.
  • other techniques may be used to visually differentiate between the phonetic text P and the non-phonetic text A.
  • FIGS. 11-19 illustrate methods implemented by the language input architecture. The methods are implemented as part of the language input user interface to facilitate convenient entry and editing of phonetic text, as well as editing of converted language text.
  • FIG. 11 illustrates the general process, while FIGS. 12-19 illustrate certain of the operations in more detail. The methods are described with additional reference to the screen displays of FIGS. 2-10 .
  • FIG. 11 shows a method 1100 for inputting text via the language input user interface.
  • the user interface enables a user to input text within the common in-line area 202 .
  • the input text is a phonetic text, such as Chinese Pinyin.
  • the input text is automatically converted to language text of a character-based language, such as Chinese Hanzi (operation 1104 ).
  • One exemplary implementation of this conversion is described above with reference to FIG. 1 . If the reader is interested, a more detailed discussion can be found in the incorporated co-pending applications Ser. No. ______, entitled “Language Input Architecture For Converting One Text Form to Another Text Form With Tolerance To Spelling, Typographical, And Conversion Errors” and Ser. No. ______, entitled “Language Input Architecture For Converting One Text Form to Another Text Form With Modeless Entry.”
  • Operation 1106 determines whether a user desires to edit the language text following conversion, as indicated by repositioning the cursor or an express command. If so (i.e., the “Yes” path from operation 1106 ), the UI receives the user's repositioning of the cursor proximal to the character to be edited (operation 1108 ). As illustrated in FIG. 5 , the cursor may be repositioned in front of the language text character.
  • the UI opens the edit window 500 in response to a user command as shown in FIG. 5 .
  • the edit window 500 includes the first candidate list 504 for replacing the language text. If a suitable replacement candidate is not presented in candidate list 504 , the user may decide to invoke the second candidate list window 600 , as illustrated in FIG. 6 .
  • Operation 1112 determines whether the user has requested the second candidate window 600 . If a suitable candidate is available on the first candidate list 504 , and thus the user decides not to open the second candidate list window (i.e., the “no” branch from operation 1112 ), the user may select replacement language text from the first candidate list window to replace the language text to be edited (operation 1114 ).
  • the UI opens the second candidate list window and allows the user to select replacement language text to replace the language text being edited (operation 1116 ).
  • the selected replacement language text from either the first candidate list window 504 or the second candidate list window 600 is then displayed in place of the language text in the in-line area 202 (operation 1118 ).
  • the operational flow continues in operation 1106 .
  • the UI determines whether the user continues to input text, as indicated by the user repositioning the cursor and continuing to enter characters (operation 1120 ). If the user's actions tend to suggest a continuation of text entry, the cursor is moved back to the input position at the end of the current section (operation 1122 ) and operational flow continues in input in-line operation 1102 . If the user does not wish to continue, the process ends.
  • FIG. 12 illustrates an in-line input sub-process 1200 , which is an exemplary implementation of operations 1102 and 1104 of FIG. 11 .
  • Exemplary screen displays depicting this sub-process are illustrated in FIGS. 2 and 3 .
  • the UI receives an input string of phonetic text (e.g., Pinyin) from an input device (e.g., keyboard, voice recognition).
  • the language input UI displays the phonetic text within the same in-line area 202 as the previously converted language text (operation 1204 ).
  • the phonetic text-to-language text converter 138 converts the string of phonetic text into language text (e.g., Hanzi) in operation 1206 .
  • the language input UI replaces the phonetic text string with the converted language text string and displays the language text in the in-line area 202 (operation 1208 ).
  • Sub-process 1200 then exits.
  • FIG. 13 illustrates an automatic conversion sub-process 1300 , which is another exemplary implementation of operation 1104 .
  • Exemplary screen displays depicting this sub-process are illustrated in FIGS. 3 and 4 .
  • the language input architecture receives a string of phonetic text input by the user via an input device.
  • the language input UI displays the input phonetic text in the in-line area 202 (operation 1304 ).
  • the language input architecture determines whether the phonetic text belongs in the existing sentence or a new sentence. This determination can be based on whether the user has entered some form of punctuation, such as a period or comma.
  • the input phonetic text is automatically converted to language text without considering the content of previous text in the previous sentence, if any (operation 1308 ). Conversely, if the input phonetic text does not belong to a new sentence (i.e. the “existing” path from operation 1306 ), the phonetic text in the sentence is automatically converted within the context of the sentence (operation 1310 ). As part of this conversion, previously converted language text may be further modified as additional text continues to change the intended meaning of the entire sentence. Operational flow exits following the conversion operations 1308 and 1310 .
  • FIG. 14 illustrates an automatic conversion sub-process 1400 in which the user confirms the converted language text.
  • Sub-process 1400 is another exemplary implementation of operation 1104 .
  • the language input architecture receives a string of phonetic text input by the user via an input device.
  • the language input UI displays the input phonetic text in the in-line area 202 (operation 1404 ).
  • the phonetic text of the corresponding unconfirmed language text is automatically converted into language text of a character-based language (operation 1406 ).
  • the language input UI determines whether the user has confirmed the converted language text. If not, the sub-process exits. Otherwise, if the user has confirmed the language text (i.e., the “yes” path from operation 1408 ), the UI confirms the converted language text and removes it from further contextual consideration as additional phonetic text is entered (operation 1410 ). Operational flow then exits.
  • FIGS. 15-18 illustrate different implementations of an in-place error correction sub-process, which is an exemplary implementation of operations 1108 - 1118 of FIG. 11 .
  • the sub-processes of FIGS. 15 and 16 concern use of the first and second candidate lists to correct language text.
  • the sub-processes of FIGS. 17 and 18 are directed to correcting phonetic text using the phonetic text hint window.
  • FIG. 15 illustrates an in-place error correction sub-process 1500 that corrects converted language text by offering alternative language texts in a pop-up candidate window. Exemplary screen displays depicting this sub-process 1500 are illustrated in FIG. 5 .
  • the language input UI selects or identifies the language text to be edited.
  • the UI opens the edit window 500 , including the first candidate window 504 directly below the language text to be edited, to display a list of replacement candidates for the selected language text (operation 1504 ).
  • the UI receives the user's selection of a replacement candidate from the first candidate window 504 .
  • the language input UI displays the selected replacement language text candidate in place of the selected language text within the same in-line area 202 (operation 1508 ). Operational flow then exits.
  • FIG. 16 illustrates an in-place error sub-process 1600 that corrects converted language text by offering a complete list of alternative language texts in a secondary, larger pop-up candidate window. Exemplary screen displays depicting this sub-process 1600 are illustrated in FIG. 6 .
  • the language input UI selects or identifies the language text to be edited.
  • the UI opens the edit window 500 , including the first candidate window 504 directly below the language text to be edited, to display a short list of replacement candidates for the selected language text (operation 1604 ). If the user cannot find an appropriate replacement candidate, the user may invoke a second candidate window 600 of replacement language text candidates (operation 1606 ).
  • the second candidate list contains a larger or more complete list of replacement language text candidates than the first candidate window.
  • the UI receives the user's selection of a replacement candidate from the second candidate window 600 .
  • the language input UI displays the selected replacement language text candidate in place of the selected language text within the same in-line area 202 (operation 1610 ). Operational flow then exits.
  • FIG. 17 illustrates an in-place error sub-process 1700 that corrects the converted language text by editing the previously entered phonetic text via a pop-up hint window. Exemplary screen displays depicting this sub-process 1700 are illustrated in FIG. 7 .
  • the language input UI selects or identifies the language text to be edited.
  • the UI opens the edit window 500 , including the phonetic text hint window 502 directly above the language text to be edited that displays the phonetic text as entered by the user (operation 1704 ).
  • Upon user selection of the phonetic text in the hint window 502 (i.e., the “yes” path from operation 1706 ), the UI displays the phonetic text in place of the language text being edited (operation 1708 ). This allows the user to make corrections to the phonetic text within the in-line area 202 . Operational flow then exits.
  • FIG. 18 illustrates an in-place error sub-process 1800 that corrects the converted language text by editing the previously entered phonetic text and viewing a new set of candidates following the editing. Exemplary screen displays depicting this sub-process 1800 are illustrated in FIGS. 8 and 9 .
  • the language input UI selects or identifies the language text to be edited.
  • the UI opens the edit window 500 , including the phonetic text hint window 502 directly above the selected language text and the first candidate window 504 directly below the language text (operation 1804 ).
  • Upon user selection of the phonetic text in the hint window 502 (i.e., the “yes” path from operation 1806 ), the UI displays the phonetic text in place of the language text being edited (operation 1808 ). The UI receives and displays the user's edits of the phonetic text in the in-line edit area 202 (operation 1810 ). In response to the editing, the UI displays a new list of replacement language text candidates in the first candidate window 504 (operation 1812 ). The user may further invoke the second candidate window 600 if desired.
  • the UI receives the user's selection of a replacement candidate from the new list in the first candidate window 504 .
  • the language input UI displays the selected replacement language text candidate in place of the selected language text within the same in-line area 202 (operation 1816 ). Operational flow then exits.
  • FIG. 19 illustrates a multi-language entry sub-process 1900 in which two or more different languages are entered using the in-line input UI. Exemplary screen displays depicting this sub-process 1900 are illustrated in FIG. 10 .
  • the language input architecture receives a string of mixed phonetic and non-phonetic text input by the user via an input device.
  • the language input UI displays the mixed text within the same in-line area 202 as the previously converted language text (operation 1904 ).
  • the language input architecture determines whether the input text is phonetic text (e.g., Pinyin) as opposed to non-phonetic text (e.g., English). If the input text is phonetic text (i.e., the “yes” path from operation 1906 ), the language input architecture converts the phonetic text to language text (operation 1908 ). The UI displays the language text in place of the entered phonetic text and in-line with the previous text (operation 1910 ). On the other hand, if the input text is non-phonetic text (i.e., the “no” path from operation 1906 ), the language input architecture does not convert it and the UI displays the non-phonetic text in-line with the previous text (operation 1912 ). Operational flow then exits.
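  • The phonetic/non-phonetic determination of operation 1906 can be sketched as a syllable-segmentation test. The tiny syllable set below is an illustrative assumption; a real converter would consult its full Pinyin syllable table, and the function names are hypothetical.

```python
# Hypothetical subset of Pinyin syllables, for illustration only.
PINYIN_SYLLABLES = {"ni", "hao", "ma", "wo", "men"}

def is_phonetic(token):
    """Classify a token as phonetic text (convertible Pinyin) or
    non-phonetic text (e.g., English) by testing whether it can be
    segmented entirely into known Pinyin syllables.
    """
    def segmentable(s):
        if not s:
            return True
        return any(s.startswith(syl) and segmentable(s[len(syl):])
                   for syl in PINYIN_SYLLABLES)
    return token.isalpha() and segmentable(token.lower())
```

Tokens that segment cleanly would follow the “yes” path to conversion (operations 1908-1910), while unsegmentable tokens such as English words would be displayed as entered (operation 1912).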
  • FIGS. 20-26 illustrate an exemplary implementation of the language input architecture and UI in the context of the Chinese language.
  • the phonetic text is Chinese Pinyin and the language text is Chinese Hanzi characters.
  • FIG. 20 illustrates one implementation of a Chinese input user interface showing an example of the in-line input feature.
  • Table 2000 contains two strings of Pinyin text 2002 and 2004 input by the user and corresponding converted Hanzi text 2006 and 2008 as it would appear in the in-line input area.
  • An exemplary display screen 2010 is shown below table 2000 and contains the converted Hanzi text 2008 . Notice that the Pinyin text being input at a cursor bar 2012 is displayed in-line with the converted Chinese text.
  • the other characteristics shown in the display screen 2010 are known in the word processing art.
  • FIG. 21 illustrates a Chinese UI screen 2100 in which converted Hanzi text is presently displayed in the in-line entry area 2102 .
  • the user has moved the cursor to select Chinese character text 2104 for editing and invoked the pop-up edit window 2106 , consisting of a Pinyin text hint window 2108 and a first Hanzi text candidate window 2110 .
  • the Pinyin text 2112 associated with the selected Chinese character text 2104 is displayed in a Pinyin text hint window 2108 .
  • FIG. 22 illustrates one implementation of a Chinese input user interface showing an example of the in-place error correction feature.
  • Table 2200 depicts two user actions in the left column—an action 2202 to open the edit window containing a phonetic hint and a candidate list and an action 2204 to select an item “1” from the candidate list.
  • the right column of table 2200 illustrates corresponding exemplary screen shots 2206 and 2208 .
  • the user selects Chinese character text 2210 for editing by moving the cursor in front of the character text 2210 .
  • the user inputs a command to open an edit window containing a Pinyin text hint window 2212 and a first candidate list window 2214 .
  • the user selects item “1” from the candidate list 2214 and the first candidate 2216 associated with item “1” is substituted for the original selected text 2210 .
  • in screen shot 2208 , the candidates in the list 2214 are updated (i.e., scrolled upward one place) to reflect that the selected candidate 2216 is moved into the in-line entry area.
  • the updating may be animated to visually illustrate that the selected candidate 2216 is moved into the in-line area.
  • FIG. 23 illustrates another implementation of a Chinese input user interface to illustrate in-place correction of the Pinyin text.
  • the left column in table 2300 contains a series of five user actions 2302 - 2310 and the right column shows corresponding exemplary screen shots 2312 - 2320 resulting from the user actions.
  • When a user decides to edit the character text, the user moves the cursor to the front of the character text to be edited (action 2302 ).
  • the user selects Chinese character text 2330 to be edited (UI screen shot 2312 ).
  • the user inputs a command (e.g., pressing the “ESC” key) to invoke the edit window (action 2304 ).
  • a Pinyin text hint window 2332 and a first candidate list window 2334 are opened as shown in the UI screen shot 2314 .
  • the user enters “0” (action 2306 ) to select the Pinyin text 2336 in the Pinyin text hint window 2332 .
  • the selected Pinyin text 2336 is substituted for the selected character text 2330 as shown in the UI screen shot 2316 .
  • the user is free to edit the original Pinyin text.
  • the user adds an additional apostrophe in the Pinyin text 2336 (action 2308 ) to produce text 2336 ′ as shown UI screen shot 2318 .
  • the edited Pinyin text 2336 ′ is shown both in the in-line area as well as the Pinyin text hint window 2332 .
  • the first candidate window 2334 is updated with a new list of character text candidates.
  • a new character text candidate 2338 corresponding to the edited Pinyin text 2336 ′ is displayed in the first candidate list window 2334 .
  • the user selects the desired character text 2338 in the first candidate list window 2334 , for example, by entering “1” (action 2310 ).
  • the selected character text 2338 is displayed in place of the edited Pinyin text 2336 ′, as illustrated in UI screen shot 2320 .
  • the new character text 2338 is effectively substituted for the original language text 2330 .
  • FIG. 24 illustrates another implementation of a Chinese input user interface to illustrate entry of mixed languages, such as Chinese and English.
  • The left column in table 2400 contains a series of two user actions 2402 and 2404 and the right column shows corresponding exemplary screen shots 2406 and 2408 resulting from the user actions.
  • The user inputs mixed Pinyin text 2410 and English text 2412, as indicated by action 2402.
  • The user can enter the mixed text into the language input UI without shifting modes between Chinese entry and English entry. That is, the user simply enters the Pinyin text and English text in the same line without stopping.
  • The Pinyin text 2410 is converted into Chinese text 2414 and displayed within the same in-line area, as illustrated in UI screen shot 2406.
  • The English text 2412 is not converted by the language input architecture, but is displayed as entered.
  • The user inputs mixed Pinyin text 2416, English text 2418, and Pinyin text 2420 without shifting modes (action 2404).
  • The Pinyin text 2416 and 2420 are converted into Chinese text 2422 and 2424, respectively, as shown in UI screen shot 2408.
  • The English text 2418 remains unchanged and is displayed in-line with the converted Chinese text.
  • The phonetic and non-phonetic text may be displayed differently to differentiate between them. For example, compare the Pinyin text (e.g., 2012 in FIG. 20) with the English text (e.g., 2412 or 2418 in FIG. 24) in the mixed text of table 2000 of FIG. 20 and table 2400 of FIG. 24.
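The modeless mixed-language entry described above can be approximated in a short sketch: tokens that parse entirely as Pinyin syllables are routed to the converter, while anything else (e.g., English words) passes through unchanged. The tiny syllable set, the tagging scheme, and the function names below are illustrative assumptions, not part of the patented architecture.

```python
# Hypothetical sketch of modeless mixed-language segmentation. A token is
# treated as Pinyin only if it can be split entirely into known syllables;
# the syllable set here is a tiny illustrative subset.

PINYIN_SYLLABLES = {"ni", "hao", "wo", "men", "shi"}

def is_pinyin(token: str) -> bool:
    """Recursive check: can the token be split entirely into syllables?"""
    if not token:
        return True
    for i in range(len(token), 0, -1):
        if token[:i] in PINYIN_SYLLABLES and is_pinyin(token[i:]):
            return True
    return False

def segment(text: str) -> list[tuple[str, str]]:
    """Tag each whitespace-delimited token as 'pinyin' or 'other'."""
    return [(tok, "pinyin" if is_pinyin(tok.lower()) else "other")
            for tok in text.split()]

# 'nihao' and 'women' are tagged as Pinyin; 'hello' passes through.
print(segment("nihao hello women"))
```

A real implementation would segment within an unbroken character stream rather than on whitespace, but the routing decision is the same: only the phonetic runs are handed to the converter.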
  • FIG. 25 illustrates another implementation of a Chinese input user interface to illustrate the first and second candidate lists for in-place editing.
  • The left column in table 2500 contains a series of two user actions 2502 and 2504 and the right column shows corresponding exemplary screen shots 2506 and 2508 resulting from the user actions.
  • The user selects a Chinese text to be edited and inputs a command to open the Pinyin text hint window 2510 and a first character text candidate list 2512.
  • The windows 2510 and 2512 appear above and below the in-line entry area, respectively, as illustrated in UI screen shot 2506.
  • A second character text candidate list window 2514 is popped open next to the first candidate list 2512, as illustrated in UI screen shot 2508.
  • The user may then select a character text candidate from the second character text candidate list window 2514.
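The two-tier candidate presentation could be modeled roughly as follows: a short first list of the top-ranked alternatives, and a full second list sortable by probability or another metric such as stroke count. The candidate data, field names, and function names are invented for illustration.

```python
# Hypothetical model of the first (short) and second (full) candidate
# lists. Probabilities and stroke counts below are made-up sample data.

candidates = [
    {"text": "马", "prob": 0.50, "strokes": 3},
    {"text": "吗", "prob": 0.30, "strokes": 6},
    {"text": "妈", "prob": 0.15, "strokes": 6},
    {"text": "码", "prob": 0.05, "strokes": 8},
]

def first_list(cands, n=3):
    """Short list: top-n candidates by probability."""
    return [c["text"] for c in sorted(cands, key=lambda c: -c["prob"])[:n]]

def second_list(cands, metric="prob"):
    """Full list: all candidates, sorted by probability or stroke count."""
    reverse = metric == "prob"
    return [c["text"] for c in sorted(cands, key=lambda c: c[metric],
                                      reverse=reverse)]

print(first_list(candidates))              # → ['马', '吗', '妈']
print(second_list(candidates, "strokes"))  # → ['马', '吗', '妈', '码']
```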
  • FIG. 26 illustrates another implementation of a Chinese input user interface to illustrate sentence-based automatic conversion with confirmed character text.
  • The left column in table 2600 contains a series of five user actions 2602-2610 and the right column shows corresponding exemplary screen shots 2612-2620 resulting from the user actions.
  • The user inputs Pinyin text 2622 and 2624.
  • The Pinyin text 2622 is automatically converted into character text 2626, and Pinyin text 2624 remains unconverted until further user input, as illustrated by UI screen shot 2612.
  • The user subsequently inputs Pinyin text 2628.
  • The previously converted character text 2626 is now automatically converted into different Chinese character text 2630 as a result of the changing context introduced by the addition of Pinyin text 2628. This modification of the converted character text is illustrated in UI screen shot 2614. Pinyin text 2624 and 2628 remain unconverted at this point and continue to be illustrated in-line with the modified language text.
  • The user inputs a confirmation command (e.g., pressing the space bar) to confirm the just converted character text 2630.
  • The Pinyin text 2624 and 2628 are automatically converted into Chinese text 2632 and 2634, respectively, based on the context of the sentence so far. This is illustrated in screen shot 2616.
  • Suppose instead that the character text 2630 is not confirmed by user action 2606 (e.g., the user does not press the space bar), and the user enters the additional Pinyin text without confirming character text 2630. In this case, the character text 2626 remains unchanged and is not modified to text 2630, as illustrated by UI screen shot 2620. This is because the automatic conversion from Pinyin text to character text is sentence-based and character text 2626 is part of the sentence. As long as the sentence is active (i.e., no punctuation has ended the sentence and no new sentence has yet been started), the previously converted character text in the current sentence is subject to further modification unless the user confirms it.

Abstract

A language input architecture receives input text (e.g., phonetic text of a character-based language) entered by a user from an input device (e.g., keyboard, voice recognition). The input text is converted to an output text (e.g., written language text of a character-based language). The language input architecture has a user interface that displays the output text and unconverted input text in line with one another. As the input text is converted, it is replaced in the UI with the converted output text. In addition to this in-line input feature, the UI enables in-place editing or error correction without requiring the user to switch modes from an entry mode to an edit mode. To assist with this in-place editing, the UI presents pop-up windows containing the phonetic text from which the output text was converted as well as first and second candidate lists that contain small and large sets of alternative candidates that might be used to replace the current output text. The language input user interface also allows a user to enter a mixed text of different languages.

Description

    RELATED CASES
  • This is a divisional of U.S. patent application Ser. No. 09/606,811, entitled “Language Input User Interface”, which was filed Jun. 28, 2000, and is assigned to Microsoft Corporation.
  • This divisional claims benefit of U.S. Provisional Application No. 60/163,588, filed Nov. 5, 1999.
  • This divisional is also co-pending with U.S. patent application Ser. No. 09/606,660, filed on Jun. 28, 2000, entitled “Language Input Architecture For Converting One Text Form to Another Text Form With Tolerance To Spelling, Typographical, And Conversion Errors” and U.S. patent application Ser. No. 09/606,807, filed on Jun. 28, 2000, entitled “Language Input Architecture For Converting One Text Form to Another Text Form With Modeless Entry”. Both of these co-pending applications are incorporated herein by reference.
  • TECHNICAL FIELD
  • The present invention relates to a language input user interface. More particularly, the present invention relates to a language input user interface that may be used in language-specific or multilingual word processing systems, email systems, browsers, and the like, where phonetic text is input and converted to language text.
  • BACKGROUND
  • Language-specific word processing systems that utilize alphanumeric keyboards (e.g., the English QWERTY keyboard) have existed for many years. Alphanumeric keyboards work well for languages that employ a small alphabet, such as the Roman character set. Unfortunately, not all languages have a small character base. For instance, character-based languages (also referred to as symbol languages), such as Chinese, Japanese, Korean, and the like, may have thousands of characters. Language-specific keyboards do not exist for character-based languages because it is practically impossible to build a keyboard with separate keys for so many different characters.
  • Rather than designing expensive language- and dialect-specific keyboards, language-specific word processing systems allow the user to enter phonetic text from a small character-set keyboard (e.g., a QWERTY keyboard) and convert that phonetic text to language text of a character-based language. “Phonetic text” represents the sounds made when speaking a given language, whereas the “language text” represents the actual written characters as they appear in the text. In the Chinese language, for example, Pinyin is an example of phonetic text and Hanzi is an example of the language text. Typically, the set of characters needed to express the phonetic text is much smaller than the character set used to express the language text. By converting the phonetic text to language text, many different languages can be processed by a language-specific word processor using conventional computers and standard QWERTY keyboards.
  • To facilitate user entry of phonetic text, language-specific word processing systems often employ a language input user interface (UI). Existing language input UIs, however, are not very user friendly because they are not easy to learn and they do not accommodate a fast typing speed. As an example of such unfriendliness, some conventional language input user interfaces disassociate the phonetic text input from the converted language text output. For instance, a user may enter phonetic text in one location on the visual display screen and the converted characters of the language text are presented in a separate and distinct location on the screen. The two locations may even have their own local cursor. This dual presentation can be confusing to the user in terms of where entry is actually taking place. Moreover, the user must continuously glance between the locations on the screen.
  • As a result, existing language input UIs are often used only by professional typists, not by everyday personal computer (PC) users. In character-based language countries, these concerns have a significant effect on the popularity of PC use.
  • In general, there are two types of language input user interfaces: (1) a code-based user interface and (2) a mode-based user interface. In a code-based user interface, users memorize codes related to words of the language. The codes are input by way of an input device and converted to the desired language text. This type of user interface allows users to input text very fast once they have memorized the codes. However, these codes are often difficult to memorize and easy to forget.
  • In a mode-based user interface, phonetic text is input and converted to the desired language text. Mode-based user interfaces do not require users to memorize codes, but typically require users to switch modes between inputting and editing language text. One example of a mode-based user interface is employed in Microsoft's “Word”-brand word processing program that is adapted for foreign languages by utilizing phonetic-to-language conversion, such as Chinese. When entering phonetic text in the “Word” program, a user is presented with a localized tool bar that enables the user to switch between an inputting mode in which a user inputs phonetic characters (e.g., Chinese Pinyin) and an editing mode in which a user corrects inevitable errors that occasionally occur as a result of the recognition and conversion process.
  • One drawback with such traditional interfaces is that users must be aware of the current mode—input or edit—and take additional steps that are extraneous to text entry (i.e., clicking a tool bar control button) to switch between the modes. This interface thus causes extra work for the user and diverts the user's attention from text entry to other peripheral control aspects, thereby significantly reducing input speed.
  • Another problem associated with mode-based user interfaces concerns how to handle, from a user interface perspective, the inevitable conversion errors. A conversion error occurs when the recognition and conversion engine converts the phonetic text into an incorrect language character. This may be quite common due to the nature of a given language and the accuracy at which the phonetic text can be used to predict an intended character. After the user converts to the editing mode, the user interface typically provides some way for the user to correct the character. In Microsoft's “Word”-brand word processing program for China, for example, a user is presented with a box containing possible alternative characters. If the list is long, the box provides controls to scroll through the list of possible characters.
  • Another drawback of traditional mode-based user interfaces is that they require mode switching for inputting different languages. When a user is inputting phonetic text and wants to input text of a second language, the user has to switch modes to input the second language. For instance, in the context of Microsoft “Word”, the localized tool bar offers a control button that enables a user to toggle between entry of a first language (e.g., Chinese Pinyin) and entry of a second language (e.g., English). The user must consciously activate the control to inform the word recognition engine of the intended language.
  • Another concern related to a language input UI, particularly from the perspective of non-professional typists, is typing errors. The average user of a phonetic text input UI is particularly prone to typographical errors. One reason for these errors is that users from different geographic regions often speak different dialects of a character-based language and misspell phonetic text due to their local dialects. A slight deviation in phonetic text can result in entirely different character text.
  • Accordingly, there is a need for an improved language input user interface.
  • SUMMARY
  • The present invention concerns a language input user interface that intelligently integrates phonetic text entered by a user and language text converted from the phonetic text into the same screen area. The user interface is modeless in that it does not require a user to switch between input and edit modes. The modeless user interface also accommodates entry of multiple languages without requiring explicit mode switching among the languages. As a result, the user interface is intuitive for users, easy to learn, and is user friendly.
  • In one implementation, the language input user interface (UI) includes an in-line input feature that integrates phonetic text with converted language text. In particular, the phonetic text being input by a user is displayed in the same line concurrently with previously entered phonetic text and previously converted language text. Displaying input phonetic text in the same line with the previously converted language text allows users to focus their eyes in the same line, thereby making for a more intuitive and natural user interface.
  • The language input UI supports language text editing operations including: 1) adding language text; 2) deleting language text; and 3) replacing selected language text with one or more replacement language text candidates. The user interface allows a user to select language text and replace it by manually typing in new phonetic text that can then be converted to new language text. Alternatively, the user interface provides one or more lists of language text candidates. A floating list is first presented in conjunction with the selected language text to be changed. In this manner, language text candidates are presented in-place within the sentence structure to allow the user to visualize the corrections in the grammatical context. The list of candidates is presented in a sorted order according to a rank or score of the likelihood that the choice is actually the one originally intended by the user. The hierarchy may be based on probability, character strokes, or other metrics. The top candidate is the one that gives the sentence the highest score, followed by the second candidate that gives the sentence the next highest score, and so on.
  • As the user scrolls through the list, the list is updated within the context menu. Additionally, the currently visible choices are shown in animated movement in the direction of the scrolling action. The animation helps the user ascertain how much or how fast the list is being scrolled. Once the user selects the replacement text, it is inserted in place of the language text within the sentence, allowing the user to focus on a single line being edited.
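The ranking described above, in which each candidate is scored by the sentence it would produce, can be sketched as follows. The bigram-weight scoring function is a toy stand-in for the statistical language model, and all names and weights are illustrative assumptions.

```python
# Hypothetical sketch of ranking replacement candidates by the score the
# whole sentence would receive with each candidate substituted in.

def sentence_score(sentence: tuple[str, ...]) -> float:
    # Placeholder: a real implementation would query the language model.
    weights = {("ni", "hao"): 2.0, ("ni", "hao3"): 1.0}
    return sum(weights.get(bigram, 0.1)
               for bigram in zip(sentence, sentence[1:]))

def rank_candidates(sentence, position, candidates):
    """Return candidates sorted by descending whole-sentence score."""
    def with_candidate(c):
        s = list(sentence)
        s[position] = c
        return sentence_score(tuple(s))
    return sorted(candidates, key=with_candidate, reverse=True)

print(rank_candidates(["ni", "hao"], 1, ["hao3", "hao"]))
# → ['hao', 'hao3']  (the candidate yielding the higher sentence score first)
```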
  • Another feature of the language input UI is that it allows the user to view previously input phonetic text for the language text being edited. The user can select the previously input phonetic text and upon selection, the previously input phonetic text is displayed in place of the language text. The phonetic text can then be edited and converted to new language text.
  • Another feature of the language input user interface is a sentence-based automatic conversion feature. In a sentence-based automatic conversion, previously converted language text within a sentence may be further converted automatically to different language text after inputting subsequent phonetic text. Once a sentence is complete, as indicated by a period, the language text in that sentence becomes fixed and is not further converted automatically to different language text as a result of entering input text in a subsequent sentence. It is appreciated that a phrase-based or similar automatic conversion can be used in an alternative embodiment.
  • Another feature of the language input user interface is sentence-based automatic conversion with language text confirmation. After phonetic text is converted to language text, a user can confirm the just converted language text so that the just converted language text will not be further converted automatically in view of the context of the sentence.
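Sentence-based automatic conversion with confirmation might be modeled as below: confirmed segments are frozen, while unconfirmed segments can be re-converted as the sentence context grows. The `Segment` type, the toy converter, and all names are assumptions for illustration only.

```python
# Hypothetical sketch of confirmation locking during sentence-based
# re-conversion: only unconfirmed segments are re-converted.

from dataclasses import dataclass

@dataclass
class Segment:
    pinyin: str
    hanzi: str
    confirmed: bool = False

def reconvert(segments: list[Segment], convert) -> None:
    """Re-run conversion over the unconfirmed segments of the sentence."""
    context = [s.hanzi for s in segments if s.confirmed]
    for seg in segments:
        if not seg.confirmed:
            seg.hanzi = convert(seg.pinyin, context)

# Toy converter: output encodes how much confirmed context exists.
toy = lambda py, ctx: f"{py.upper()}@{len(ctx)}"

segs = [Segment("ma", "?"), Segment("shang", "?")]
segs[0].confirmed = True
segs[0].hanzi = "马"
reconvert(segs, toy)
print([s.hanzi for s in segs])  # the confirmed segment is untouched
```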
  • Another feature of the language input user interface is the ability to handle multiple languages without switching modes. Words or symbols of a second language, when intermixed with the phonetic text, are treated as special language input text and displayed as the second language text. Thus, users are not required to switch modes when inputting different languages.
  • These and various other features as well as advantages which characterize the present invention will be apparent from reading the following detailed description and a review of the associated drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The same numbers are used throughout the Figures to reference like components and features.
  • FIG. 1 is a block diagram of a computer system having a language-specific word processor that implements a language input architecture. The language input architecture includes a language input user interface (UI).
  • FIG. 2 is a diagrammatic illustration of a screen display of one implementation of a language input user interface. FIG. 2 illustrates an in-line input feature of the language input UI.
  • FIG. 3 is a diagrammatic illustration of a screen display of the language input UI, which shows an automatic conversion feature.
  • FIG. 4 is a diagrammatic illustration of a screen display of the language input UI, which shows a sentence-based automatic conversion feature.
  • FIG. 5 is a diagrammatic illustration of a screen display of the language input UI, which shows an in-place error correction feature and a phonetic text hint feature.
  • FIG. 6 is a diagrammatic illustration of a screen display of the language input UI, which shows a second candidate list feature.
  • FIG. 7 is a diagrammatic illustration of a screen display of the language input UI, which shows an in-place phonetic text correction feature.
  • FIG. 8 is a diagrammatic illustration of a screen display of the language UI, which shows a subsequent screen of the in-place phonetic text correction of FIG. 7.
  • FIG. 9 is a diagrammatic illustration of a screen display of the language UI, which shows a subsequent screen of the in-place phonetic text correction of FIGS. 7 and 8.
  • FIG. 10 is a diagrammatic illustration of a screen display of the language UI, which shows entry of mixed text containing multiple different languages.
  • FIG. 11 is a flow diagram of a method for inputting text using a language input user interface.
  • FIG. 12 is a flow diagram of an in-line input sub-process.
  • FIG. 13 is a flow diagram of an automatic conversion sub-process.
  • FIG. 14 is a flow diagram of an automatic conversion sub-process with confirmed character text.
  • FIG. 15 is a flow diagram of an in-place error correction sub-process.
  • FIG. 16 is a flow diagram of an in-place error correction sub-process with a second candidate list.
  • FIG. 17 is a flow diagram of a phonetic text hint sub-process.
  • FIG. 18 is a flow diagram of an in-place phonetic text correction sub-process.
  • FIG. 19 is a flow diagram of an in-line inputting mixed language text sub-process.
  • FIG. 20 illustrates exemplary user inputs and resulting screen shots of an exemplary Chinese input user interface, which shows an example of an in-line input feature.
  • FIG. 21 illustrates an exemplary screen display of an exemplary Chinese input user interface, which shows an example of a Pinyin text hint feature.
  • FIG. 22 illustrates exemplary user inputs and resulting screen shots of an exemplary Chinese input user interface, which shows an example of an in-place error correction feature.
  • FIG. 23 illustrates exemplary user inputs and resulting screen shots of an exemplary Chinese input user interface, which shows an example of an in-place Pinyin text correction feature.
  • FIG. 24 illustrates exemplary user inputs and resulting screen shots of an exemplary Chinese input user interface, which shows an example of a mixture input of English/Chinese language feature.
  • FIG. 25 illustrates exemplary user inputs and resulting screen shots of an exemplary Chinese input user interface, which shows an example of a second candidate list feature.
  • FIG. 26 illustrates exemplary user inputs and resulting screen shots of an exemplary Chinese input user interface, which shows an example of a sentence-based automatic conversion with a character confirmation feature.
  • FIG. 27 illustrates a definition of phonetic text (e.g., Chinese Pinyin text) and its corresponding character text (e.g., Chinese character text), and a definition of non-phonetic text (e.g., alphanumeric text).
  • DETAILED DESCRIPTION
  • The present invention concerns a language input user interface that facilitates phonetic text input and conversion to language text. For discussion purposes, the invention is described in the general context of word processing programs executed by a general-purpose computer. However, the invention may be implemented in many different environments other than word processing (e.g., email systems, browsers, etc.) and may be practiced on many diverse types of devices.
  • System Architecture
  • FIG. 1 shows an exemplary computer system 100 having a central processing unit (CPU) 102, a memory 104, and an input/output (I/O) interface 106. The CPU 102 communicates with the memory 104 and I/O interface 106. The memory 104 is representative of both volatile memory (e.g., RAM) and non-volatile memory (e.g., ROM, hard disk, etc.).
  • The computer system 100 has one or more peripheral devices connected via the I/O interface 106. Exemplary peripheral devices include a mouse 110, a keyboard 112 (e.g., an alphanumeric QWERTY keyboard, a phonetic keyboard, etc.), a display monitor 114, a printer 116, a peripheral storage device 118, and a microphone 120. The computer system may be implemented, for example, as a general-purpose computer. Accordingly, the computer system 100 implements a computer operating system (not shown) that is stored in memory 104 and executed on the CPU 102. The operating system is preferably a multi-tasking operating system that supports a windowing environment. An example of a suitable operating system is a Windows brand operating system from Microsoft Corporation.
  • It is noted that other computer system configurations may be used, such as hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. In addition, although a standalone computer is illustrated in FIG. 1, the language input UI may be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network (e.g., LAN, Internet, etc.). In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
  • A data or word processing program 130 is stored in memory 104 and executes on CPU 102. Other programs, data, files, and such may also be stored in memory 104, but are not shown for ease of discussion. The word processing program 130 is configured to receive phonetic text and convert it automatically to language text. More particularly, the word processing program 130 implements a language input architecture 131 that, for discussion purposes, is implemented as computer software stored in memory and executable on a processor. The word processing program 130 may include other components in addition to the architecture 131, but such components are considered standard to word processing programs and will not be shown or described in detail.
  • The language input architecture 131 of word processing program 130 has a user interface (UI) 132, a search engine 134, a language model 136, and a typing model 137. The architecture 131 is language independent. The UI 132 and search engine 134 are generic and can be used for any language. The architecture 131 is adapted to a particular language by changing the language model 136 and the typing model 137. A more detailed discussion of the architecture is found in co-pending applications Ser. No. 09/606,660, entitled “Language Input Architecture For Converting One Text Form to Another Text Form With Tolerance To Spelling, Typographical, And Conversion Errors” and Ser. No. 09/606,807, entitled “Language Input Architecture For Converting One Text Form to Another Text Form With Modeless Entry”, which are incorporated herein by reference.
  • The search engine 134, language model 136, and typing model 137 together form a phonetic text-to-language text converter 138. For purposes of this disclosure, “text” means one or more characters and/or non-character symbols. “Phonetic text” generally refers to alphanumeric text representing sounds made when speaking a given language. A “language text” is the characters and non-character symbols representative of a written language. “Non-phonetic text” is alphanumeric text that does not represent sounds made when speaking a given language. Non-phonetic text might include punctuation, special symbols, and alphanumeric text representative of a written language other than the language text.
  • FIG. 27 shows an example of phonetic text, converted language text, and non-phonetic text. In this example, the phonetic text is Chinese Pinyin text, which is translated to “hello”. The exemplary character text is Chinese Hanzi text, which is also translated to “hello”. The exemplary non-phonetic text is a string of alphanumeric symbol text “@3m”. For discussion purposes, word processor 130 is described in the context of a Chinese-based word processor and the language input architecture 131 is configured to convert Pinyin to Hanzi. That is, the phonetic text is Pinyin and the language text is Hanzi.
  • However, the language input architecture is language independent and may be used for other languages. For example, the phonetic text may be a form of spoken Japanese, whereas the language text is representative of a Japanese written language, such as Kanji. Many other examples exist including, but not limited to, Arabic languages, Korean language, Indian language, other Asian languages, and so forth.
  • Perhaps more generally stated, phonetic text may be any alphanumeric text represented in a Roman-based character set (e.g., English alphabet) that represents sounds made when speaking a given language that, when written, does not employ the Roman-based character set. Language text is the written symbols corresponding to the given language.
  • Phonetic text is entered via one or more of the peripheral input devices, such as the mouse 110, keyboard 112, or microphone 120. In this manner, a user is permitted to input phonetic text using keyed entry or oral speech. In the case of oral input, the computer system may further implement a speech recognition module (not shown) to receive the spoken words and convert them to phonetic text. The following discussion assumes that entry of text via keyboard 112 is performed on a full size, standard alphanumeric QWERTY keyboard.
  • The UI 132 displays the phonetic text as it is being entered. The UI is preferably a graphical user interface. The user interface 132 passes the phonetic text (P) to the search engine 134, which in turn passes the phonetic text to the typing model 137. The typing model 137 generates various typing candidates (TC1, . . . , TCN) that might be suitable edits of the phonetic text intended by the user, given that the phonetic text may include errors. The typing model 137 returns the typing candidates to the search engine 134, which passes them on to the language model 136. The language model 136 generates various conversion candidates (CC1, . . . , CCN) written in the language text that might be representative of a converted form of the phonetic text intended by the user. The conversion candidates are associated with the typing candidates. Conversion from phonetic text to language text is not a one-for-one conversion. The same or similar phonetic text might represent a number of characters or symbols in the language text. Thus, the context of the phonetic text is interpreted before conversion to language text. On the other hand, conversion of non-phonetic text will typically be a direct one-to-one conversion wherein the alphanumeric text displayed is the same as the alphanumeric input.
  • The conversion candidates (CC1, . . . , CCN) are passed back to the search engine 134, which performs statistical analysis to determine which of the typing and conversion candidates exhibit the highest probability of being intended by the user. Once the probabilities are computed, the search engine 134 selects the candidate with the highest probability and returns the language text of the conversion candidate to the UI 132. The UI 132 then replaces the phonetic text with the language text of the conversion candidate in the same line of the display. Meanwhile, newly entered phonetic text continues to be displayed in the line ahead of the newly inserted language text.
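The pipeline just described, in which the typing model proposes spelling-corrected typing candidates, the language model proposes scored conversion candidates, and the search engine keeps the highest-scoring result, can be sketched schematically. All three model stubs below are invented placeholders, not the actual models of the architecture.

```python
# Schematic sketch of the phonetic-to-language conversion pipeline.
# Each function is a toy stub standing in for a statistical model.

def typing_model(phonetic: str) -> list[str]:
    # Stand-in: the real model generates likely corrections of typos.
    return [phonetic, phonetic.replace("nihow", "nihao")]

def language_model(typing_candidate: str) -> list[tuple[str, float]]:
    # Stand-in: the real model scores language-text conversions.
    table = {"nihao": [("你好", 0.9)], "nihow": [("你好", 0.2)]}
    return table.get(typing_candidate, [(typing_candidate, 0.0)])

def search_engine(phonetic: str) -> str:
    """Pick the conversion with the highest probability over all pairs."""
    best_text, best_score = phonetic, float("-inf")
    for tc in typing_model(phonetic):
        for text, score in language_model(tc):
            if score > best_score:
                best_text, best_score = text, score
    return best_text

# The misspelled input is recovered via the corrected typing candidate.
print(search_engine("nihow"))  # → 你好
```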
  • If the user wishes to change language text from the one selected by the search engine 134, the user interface 132 presents a first list of other high probability candidates ranked in order of the likelihood that the choice is actually the intended answer. If the user is still dissatisfied with the possible candidates, the UI 132 presents a second list that offers all possible choices. The second list may be ranked in terms of probability or other metric (e.g., stroke count or complexity in Chinese characters).
  • Language Input User Interface
  • The remaining discussion is particularly directed to features of the user interface 132. In particular, the user interface 132 visually integrates the display of inputted phonetic text along with converted language text in the same line on the screen. Many of the features are described in the context of how they visually appear on a display screen, such as presence and location of a window or a menu or a cursor. It is noted that such features are supported by the user interface 132 alone or in conjunction with an operating system.
  • FIGS. 2-10 illustrate various screen displays of one exemplary implementation of the language input user interface 132. Symbol “P” is used throughout FIGS. 2-10 to represent phonetic text that has been input and displayed in the UI, but has not yet been converted into language text. Symbol “C” represents converted language text that has been converted from input phonetic text P. Subscripts are used with each of the phonetic text P, for example, P1, P2, . . . PN, and the converted language text C, for example, C1, C2, . . . , CN, to represent individual ones of the phonetic and converted language text.
  • Integrated In-Line Text Input/Output
  • FIG. 2 shows a screen display 200 presented by the language input UI 132 alone, or in conjunction with an operating system. In this illustration, the screen display 200 resembles a customary graphical window, such as those generated by Microsoft's Windows brand operating system. The graphical window is adapted for use in the context of language input, and presents an in-line input area 202 in which phonetic text is entered and subsequently converted to language text. The in-line area 202 is represented pictorially by the parallel dashed lines.
  • An input cursor 204 marks the present position where the next phonetic text input will occur. The graphical UI may further include a plurality of tool bars, such as tool bars 206, 208, 210, 212, or other functional features depending on the application, e.g., word processor, data processor, spread sheet, internet browser, email, operating system, etc. Tool bars are generally known in the word or data processing art and will not be described in detail.
  • The in-line input area 202 integrates input of phonetic text P and output of the converted language text C. This advantageously allows the user to focus attention on a single area of the screen. As the user enters phonetic text (via key entry or voice), the phonetic text P is presented in-line in a first direction (e.g., horizontal across the screen). The input cursor 204 is positioned by or in alignment with the converted language text C1C2 and the input phonetic text P1P2P3. In FIG. 2, the input sequence is from left to right and the input cursor 204 is positioned at the right side of the previously input phonetic text P1P2P3. It will be appreciated that it is within the scope of the present invention to input text in the same direction in which a given language is read, and that the “left to right” input sequence discussed in the present implementation is merely one example. Further, it will be appreciated that the language input UI is capable of in-line input in virtually any direction including, but not limited to, vertically, diagonally, etc. Other in-line formats are conceivable including various three dimensional formats wherein the in-line input feature might appear to the user to extend away or toward the user.
  • Automatic Conversion
  • As the user inputs phonetic text P, the converter 138 automatically converts the phonetic text to converted language text C. Typically, a few of the phonetic text elements P (e.g., one to six phonetic text elements P) are entered before the phonetic text P is converted to language text C.
  • As conversion is made, the converted language text C is presented in the same line as the phonetic text P, as indicated by the in-line area 202. As the user continues to enter phonetic text, the most recently input phonetic text P is displayed in-line with the previously converted language text C. In FIG. 2, for example, phonetic text P1P2P3 is displayed in-line with the most recently converted language text C1C2. Displaying input phonetic text P in the same line with previously converted language text C allows users to focus their eyes in the same line, thereby making the input process more intuitive and natural, as well as allowing faster input.
  • As the user continues to enter phonetic text P, the user interface automatically converts the phonetic text P in real time to language text C without the user having to switch modes. As shown in the example of FIG. 3, as soon as the user enters phonetic text P4, the previous phonetic text P1P2P3 is automatically converted to language text C3. The user continues inputting phonetic text P4P5P6P7 without having to switch modes or hesitate.
  • Conversion from phonetic text to language text is an automatic process controlled by the language model 136. The language text C3 is selected as having the highest probability among all possible language text and so is used in the automatic conversion. However, the more a user types, the greater the context considered. Accordingly, language text C3 might be changed to different language text upon further entry of phonetic text such as P4P5P6P7.
  • The language input architecture 131 may be configured to minimize how often the converted language text is changed in response to entry of additional input text. In some contexts, it is possible that the converted language text could change with each entered character of input text, essentially flipping among two or more possible interpretations that have approximately equal likelihood of being intended by the user in the given context. The constant flipping of language text might become visually distracting to the user.
  • To minimize textual flipping, the converter 138 may implement one or more probabilistic-based rules that stipulate maintaining the current language text unless there is a significant likelihood that another context is intended. In this way, the converter 138 is reluctant to change the converted language text to a second language text when the second language text is only slightly better from a statistical standpoint. The degree of significance varies with the context. As an example, the converter 138 may be configured to modify the language text only when the modified language text is at least five percentage points more likely than the language text it is replacing.
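The hysteresis rule described above can be sketched as follows. This sketch is illustrative only and not part of the disclosure; the five-point margin is the example figure given, and the probability representation is an assumption:

```python
def maybe_update(current_text, current_prob, new_text, new_prob, margin=0.05):
    """Keep the displayed conversion unless the challenger is at least
    `margin` (e.g., five percentage points) more likely, minimizing
    visually distracting flips between near-equal interpretations."""
    if new_text != current_text and new_prob >= current_prob + margin:
        return new_text, new_prob
    return current_text, current_prob

# A challenger only 2 points better does not replace the display:
print(maybe_update("你", 0.50, "尼", 0.52))  # → ('你', 0.5)
# A challenger 10 points better does:
print(maybe_update("你", 0.50, "尼", 0.60))  # → ('尼', 0.6)
```

The margin makes the converter "reluctant" in exactly the sense described: a statistically marginal improvement leaves the display unchanged.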
  • Sentence-Based and Confirmed Automatic Conversion
  • Users may not feel comfortable if a very long string of text (e.g., a paragraph of text) is subject to conversion. In one implementation of the user interface, the automatic conversion from phonetic text P to language text C is a sentence-based automatic conversion. In other words, once a sentence is complete, the language text C in that sentence will not be further converted automatically to different language text C when inputting phonetic text P in a subsequent sentence. The sentence-based automatic conversion feature significantly reduces users' typing errors as well as preventing a previous sentence from being continuously converted automatically.
  • It is appreciated that a sentence can be defined in many other ways. For example, a sentence can be defined as a string of text within certain predefined punctuation, such as a string of text between two periods, a string of text between various predefined punctuation, a string of text containing certain text elements, and so forth. Once a user enters punctuation, the string of text entered between the punctuation and a previous punctuation, if any, may be treated as a sentence. The string of converted language text C in that sentence is not further automatically converted as the user inputs phonetic text in subsequent sentences. A person skilled in the art will appreciate that the automatic conversion can be based on two or more sentences if desired.
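The punctuation-delimited definition of a sentence can be sketched as follows. This is an illustrative sketch, not part of the disclosure; the delimiter set is an assumed example:

```python
SENTENCE_PUNCTUATION = set("。，.,!?；;")  # assumed example delimiter set

def confirm_sentences(text):
    """Split input at punctuation. Everything up to and including the
    last delimiter is 'confirmed' (no longer subject to automatic
    reconversion); the trailing fragment remains the active sentence."""
    confirmed, active = "", ""
    for ch in text:
        active += ch
        if ch in SENTENCE_PUNCTUATION:
            confirmed += active
            active = ""
    return confirmed, active

confirmed, active = confirm_sentences("C1C2C3C4，P4P5")
print(confirmed)  # → C1C2C3C4，
print(active)     # → P4P5
```

Only the active fragment would be passed to the converter as further phonetic text is entered.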
  • FIG. 4 illustrates the screen display 200 at a point in which a sentence is confirmed by way of punctuation. Entry of punctuation, in addition to confirming a sentence, will typically result in the phonetic text P at the end of the sentence being automatically converted to language text C. For example, as shown in FIG. 4, once a comma 400 is entered, phonetic text P4P5P6P7 is converted to language text C4. The string of language text C1C2C3C4 is now treated as a sentence. Converted language text C1C2C3C4 will no longer be automatically further converted.
  • In addition to sentence-based automatic conversion, a user can expressly confirm one or more of the converted language text C following its conversion from entered phonetic text P. A user can confirm the just converted language text C by entry of a user command at the keyboard (e.g., a space bar entry) so that the just converted language text C will not be further automatically converted in view of the context of the sentence. A detailed example of this feature is discussed later with reference to FIGS. 20 and 24.
  • Deferred Conversion
  • In many languages, users are typically more accustomed to reading and correcting language text than phonetic text. As phonetic text is entered, the user commonly waits for conversion before attempting to discern whether the entered text is accurate. This is particularly true for the Chinese user, who prefers reading and correcting Chinese Hanzi characters as opposed to Pinyin characters.
  • In view of this user characteristic, the language input architecture 131 is designed to contemplate when to convert the phonetic text to the language text. Generally, conversion is made when the converter is sufficiently confident that the converted language text was intended by the user. Characterized in the UI context, the issue becomes how many characters of the phonetic text should be displayed at any one time such that eventual conversion results in highly likely language text that is unlikely to be modified as the user enters more phonetic text. Converting too soon results in more errors in the converted language text, thereby forcing the user to correct the converted language text more often. Converting too late creates a distraction in that the user is presented with long strings of phonetic text rather than the desired language text.
  • As a compromise between converting too early and converting too late, the language input architecture may be configured to defer conversion until an optimum number of phonetic characters are entered to ensure a high conversion accuracy. In practice, the architecture is designed to defer selecting and displaying converted language text in place of the phonetic text until after entry of a minimum number of characters and before entry of a maximum number of characters. As one example, a language input architecture tailored for Chinese might be configured to convert Pinyin text to Hanzi text when at least one Pinyin character and at most six Pinyin characters have been entered and displayed in the UI.
  • According to one implementation, the language input architecture implements a set of rules to determine, for a given context, the optimum number of phonetic characters that may be entered prior to selecting and displaying the converted language text. The rules may be summarized as follows:
  • Rule 1: Always display the last (i.e., most recently entered) input character.
  • Rule 2: After entry and display of multiple input characters, evaluate top N conversion candidates for one or more characters in the candidates that may match. If at least one converted character is the same for all N conversion candidates, convert at least one input character forming part of the input text to the matching converted character(s) in the output text.
  • Rule 3: If the first most likely conversion candidate scores significantly higher than the second most likely conversion candidate, convert at least one input character to the character(s) of the first conversion candidate.
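Rules 2 and 3 above can be sketched as follows. This is an illustrative sketch only, not part of the disclosure; Rule 1 (always displaying the most recently entered input character) is assumed to be handled by the display layer, and the dominance ratio is an assumed threshold, since the patent does not specify a number:

```python
def commit_prefix(candidates, ratio=2.0):
    """Return the prefix of converted characters that is safe to display.

    `candidates` is a list of (converted_string, score) pairs, best first.
    Rule 2: commit the leading characters on which all top-N candidates agree.
    Rule 3: commit the whole top candidate if it dominates the runner-up.
    """
    if not candidates:
        return ""
    strings = [c[0] for c in candidates]
    # Rule 2: length of the common prefix among all candidates.
    agree = 0
    for i in range(min(len(s) for s in strings)):
        if all(s[i] == strings[0][i] for s in strings):
            agree = i + 1
        else:
            break
    if agree:
        return strings[0][:agree]
    # Rule 3: dominant first candidate (assumed 2x score threshold).
    if len(candidates) == 1 or candidates[0][1] >= ratio * candidates[1][1]:
        return strings[0]
    return ""

print(commit_prefix([("你好吗", 0.5), ("你好嘛", 0.3)]))  # → 你好 (Rule 2)
print(commit_prefix([("你好吗", 0.8), ("尼豪马", 0.1)]))  # → 你好吗 (Rule 3)
```

When neither rule fires, nothing is committed and the raw phonetic text remains displayed, matching the deferred-conversion behavior described above.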
  • Modeless Editing
  • FIGS. 5-9 illustrate an exemplary implementation of the modeless editing features supported by the architecture. The user interface enables a user to seamlessly transition from input mode to edit mode without an explicit mode switch operation. Moreover, the edit mode supports traditional editing functions such as addition, deletion, and replacement of language text. The present invention allows replacement of language text by inputting new phonetic text or by selection of replacement language text from a list of at least one replacement language text candidate.
  • In-Place Error Correction
  • FIG. 5 shows a screen display 200 with various edit features. For discussion purposes, assume that the user has confirmed the language text C1C2C3C4 (previously shown in FIG. 4) by entering punctuation 400 and now wishes to edit the confirmed language text C1C2C3C4. The user repositions the cursor 204 to a desired location within the confirmed language text C1C2C3C4. Cursor positioning can be accomplished in many different ways, including but not limited to, arrow keys, mouse click, or verbal command. FIG. 5 illustrates the cursor 204 repositioned in front of the language character C3 to select this character for editing.
  • Once the cursor 204 is positioned to the front of the language character C3, the user enters one or more user commands to invoke an edit window or box 500 that is superimposed on or about the in-line area 202 at the point in the text containing the character(s) to be edited. The user command can be accomplished in any of several manners that are well known in the art, including but not limited to, depressing an escape key “ESC” on keyboard 112.
  • In the illustrated implementation, the edit window or box 500 pops up adjacent to the language character C3 in a second direction (e.g., vertical) orthogonal to the first direction (e.g., horizontal) of the in-line text. The pop-up edit window 500 has two parts: an input text hint window 502 and a scrollable candidate window 504. These parts are preferably invoked simultaneously by a common user command. The corresponding phonetic text P1P2P3 for the character C3, which was previously input by a user, appears in the input text hint window 502 directly above and in vertical alignment with the language character C3 being edited. Displaying the input phonetic text P1P2P3 allows a user to see what they had previously entered for the language text C3 and to edit it if necessary. The input text hint window 502 has a scroll up bar 506 disposed at the top. Activation of this scroll up bar 506 causes the phonetic text P1P2P3 to slide into the sentence and replace the language text character C3.
  • The candidate window 504 contains a scrollable list of at least one replacement language text candidate C3 a, C3 b, C3 c, C3 d, having the same or similar phonetic text as the language text C3. The candidate window 504 is arranged orthogonal to the in-line input area 202 containing the language text C1C2C3C4 and directly below and in vertical alignment with the language character C3. A superscript is used to represent different language text characters, such as C3 a, C3 b, C3 c, and C3 d. When there are more candidates than can be displayed in candidate window 504, a scroll down bar 508 is presented at the bottom of candidate window 504. A user can select (e.g., click on) the scroll down bar 508 to view additional replacement language text. One feature of the in-place windows 502 and 504 is that the scrolling operation can be animated to demonstrate the candidates or text moving up or down. This gives the user visual feedback that the list is being scrolled one item at a time.
  • The phonetic text P1P2P3 in the input text hint window 502 and the replacement language text candidates C3 a, C3 b, C3 c, C3 d in the candidate window 504 are additionally referenced by numbers 0, 1, 2, 3, 4, as shown. The numbering method of the replacement language text and the size of the candidate window 504 can be implemented in different ways. In one implementation, the candidate window 504 has a limited size and lists only the top four highest probabilities of replacement language text.
  • The language text candidates C3 a, C3 b, C3 c, C3 d in the candidate window 504 are preferably arranged in some order or ranking. For instance, an order may be based on a probability or likelihood that the candidate is actually the one intended by the user originally. This probability is computed by the search engine 134, in conjunction with candidates returned by the language model 136. If the probability of one replacement language text in a given context is higher than the probability of another replacement language text in the given context, the replacement language text with the higher probability is displayed closer to the language text to be edited and with a lower reference number.
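The ranking and numbering of the candidate window can be sketched as follows. This sketch is illustrative and not part of the disclosure; the tuple representation of candidates and the window size of four are taken from the example described above:

```python
def build_candidate_window(phonetic, candidates, size=4):
    """Rank replacement candidates by probability (highest first, closest
    to the edited text) and assign reference numbers; the phonetic text
    itself is entry 0, per the described UI."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)[:size]
    window = [(0, phonetic)]
    window += [(i + 1, text) for i, (text, _) in enumerate(ranked)]
    return window

window = build_candidate_window(
    "P1P2P3",
    [("C3a", 0.40), ("C3c", 0.15), ("C3b", 0.30), ("C3d", 0.10), ("C3e", 0.05)],
)
print(window)  # → [(0, 'P1P2P3'), (1, 'C3a'), (2, 'C3b'), (3, 'C3c'), (4, 'C3d')]
```

The fifth candidate falls off the limited-size window and would only be reachable via the scroll down bar 508 or the second candidate window.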
  • A user can optionally select the phonetic text P1P2P3 or select one of the replacement language text C3 a, C3 b, C3 c, C3 d by entering the appropriate reference number to replace the character text C3 or through other common techniques (point and click on the selected option). The selected replacement is then substituted for the character C3 in the in-line text. Once the user elects a candidate, the pop-up edit window 500 can be configured to automatically disappear, leaving the corrected text. Alternatively, the user may explicitly close the text hint window 502 and the candidate window 504 using conventional methods, such as a mouse click outside the windows 502 and 504.
  • The text replacement feature implemented by in-place windows 502 and 504 is referred to as the in-place error correction feature. The selected phonetic text P1P2P3 or the selected one of the replacement language text C3 a, C3 b, C3 c, C3 d is displayed in place of the language text C3 that is to be replaced. The in-place error correction feature allows a user to focus generally proximate a string of language text containing the language text to be edited.
  • Second Candidate List
  • FIG. 6 illustrates a screen display 200 similar to that shown in FIG. 5, but also showing a second candidate window 600 separate from and adjacent to the first candidate window 504. The second candidate window 600 lists a larger or perhaps complete list of replacement language text that has the same or similar phonetic text as the corresponding phonetic text P1P2P3 of the character text C3 to be edited. The phonetic text P1P2P3 in the input text hint window 502 and the replacement language text C3 a, C3 b, C3 c, C3 d in the candidate window 504 are also listed in the second candidate window 600. In an alternative embodiment, only the additional replacement candidates are listed in the second candidate window 600.
  • To open the second candidate window 600, a user enters a command, such as depressing a right arrow key on the keyboard while active in the candidate window 504. The user can then select a desired replacement language text by a suitable command, such as a mouse click or a key entry. A user may move a focus 602 from text character to text character.
  • The candidates in the second candidate window 600 may also be arranged in some order, although not necessarily according to the same ranking technique used for the first candidate window 504. Sorting by probability score, as is done with the candidates in the first candidate window 504, may not be as useful for the full candidate window 600 because the variations between many candidates are small and somewhat meaningless. The user may have no intuitive feel for locating a particular candidate in this setting. Accordingly, the second candidate window 600 attempts to rank the candidates in some other way that allows intuitive discovery of the desired candidate.
  • One metric that may be used to rank the candidates in the second candidate window 600, particularly in the context of the Japanese and Chinese languages, is a measure of the complexity of a character or symbol. For a list of Chinese text candidates, for instance, the candidates may be listed according to the number of strokes required to form the candidate. A stroke order imposes some tangible feel for a user who is hunting for a desired language text. The user can quickly glance to a particular area of the window 600 that holds characters of seemingly similar complexity. This ranking metric is not intended to cause the user to count or know a precise number of strokes, but only to give a strong, consistent, and visually recognizable sorting order.
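The stroke-count ordering can be sketched as follows. This sketch is illustrative only and not part of the disclosure; the stroke counts shown are a tiny hypothetical sample, where a real system would consult a full stroke-count table (e.g., derived from the Unihan database):

```python
# Hypothetical stroke counts for a handful of characters.
STROKES = {"一": 1, "口": 3, "好": 6, "高": 10, "薄": 16}

def rank_by_complexity(candidates):
    """Order candidates by stroke count so visually simple characters
    appear first, giving the user a consistent place to look; unknown
    characters sort to the end."""
    return sorted(candidates, key=lambda c: STROKES.get(c, 99))

print(rank_by_complexity(["薄", "口", "高", "一"]))  # → ['一', '口', '高', '薄']
```

The absolute counts do not matter to the user; only the stable simple-to-complex ordering does, as the passage above notes.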
  • To close the window 600, a user enters a command such as a key entry at the keyboard or a mouse click outside the window 600. It is appreciated that the control of opening and closing windows and of scrolling up/down and left/right within them is known in the art and is not described in detail.
  • In-Place Phonetic Text Correction
  • FIGS. 7-9 show a sequence of screen displays 200 at various instances to illustrate an in-place phonetic text correction of the phonetic text P1P2P3 shown in FIG. 5. In this example, a user determines that the phonetic text P1P2P3 in the input text hint window 502 is incorrect. The correct phonetic text should be P1 aP2P3. To correct the phonetic text, the user first selects the phonetic text P1P2P3 from the input text hint window 502.
  • FIG. 7 shows that the selected phonetic text P1P2P3 is displayed in place of the text character C3 being edited. The user can then edit the phonetic text by changing P1 to P1 a.
  • FIG. 8 shows the UI after the phonetic text is changed to P1 a. The text hint window 502 is also updated to reflect the change. As a result of the edit operation, at least one new replacement language text C3 j having the same or similar edited phonetic text P1 aP2P3 is displayed in the candidate window 504. The user can then select the replacement language text (e.g. C3 j) in the candidate window 504.
  • FIG. 9 shows the selected replacement text C3 j substituted for the edited phonetic text P1 aP2P3. In an alternative embodiment, the edited phonetic text can be automatically converted to the most probable new replacement language text.
  • Mixed Language Entry
  • The language input architecture may be further configured to distinguish between two or more languages. The first language is detected as phonetic text and converted to language text, whereas the second language is detected as non-phonetic text and kept as is. The UI 132 presents the two languages concurrently in the same line as the user enters text. The technique advantageously eliminates the need to switch between two input modes when inputting multi-language text. As far as a user is concerned, the user interface is modeless.
  • FIG. 10 illustrates the screen display 200 of the user interface and demonstrates an integrated handling and presentation of mixed text of two different languages. Symbol “A” represents characters of a second language text. Second language A is a non-phonetic language wherein the second language text A is displayed as input by the user. As an example, the first language is Chinese Hanzi and the second language is English. It will be appreciated that the multiple languages might be any number of different languages.
  • In one implementation, a user might input mixed language text, one of which is phonetic text P (e.g., Pinyin) convertible to language text C (e.g., Hanzi). The phonetic text P of the character-based language is displayed in-line with the language text A until the phonetic text P is automatically converted to language text C, which is displayed in-line with the language text A of the second language. FIG. 10 illustrates the input phonetic text P, the converted language text C, and the second language text A within the same in-line area 202.
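One way the architecture might distinguish phonetic from non-phonetic input can be sketched as follows. This is a toy classifier offered purely as an illustration, not the patent's detection method: it treats a token as phonetic text if it parses entirely into known Pinyin syllables, using a tiny assumed syllable set:

```python
# Tiny sample of Pinyin syllables; a real system would use the full inventory.
PINYIN_SYLLABLES = {"ni", "hao", "ma", "zhong", "wen"}

def is_phonetic(token, syllables=PINYIN_SYLLABLES):
    """Return True if the token can be segmented entirely into valid
    Pinyin syllables (dynamic-programming reachability check)."""
    token = token.lower()
    if not token:
        return False
    reachable = {0}  # positions reachable by a valid syllable segmentation
    for end in range(1, len(token) + 1):
        if any(start in reachable and token[start:end] in syllables
               for start in range(end)):
            reachable.add(end)
    return len(token) in reachable

print(is_phonetic("nihao"))  # → True  (ni + hao)
print(is_phonetic("hello"))  # → False (not segmentable into Pinyin)
```

A token classified as phonetic would be routed to the converter 138, while a non-phonetic token (e.g., English) would be displayed as entered, yielding the modeless mixed-language behavior described above.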
  • Different fonts or colors may be used to distinguish between the phonetic text P and the non-phonetic text A. As an example, the phonetic text P is displayed in a first font or color, while the non-phonetic text A is displayed in a second font or color that is different from the first font or color. In addition to fonts and colors, other techniques may be used to visually differentiate between the phonetic text P and the non-phonetic text A.
  • General UI Operation
  • FIGS. 11-19 illustrate methods implemented by the language input architecture. The methods are implemented as part of the language input user interface to facilitate convenient entry and editing of phonetic text, as well as editing of converted language text. FIG. 11 illustrates the general process, while FIGS. 12-19 illustrate certain of the operations in more detail. The methods are described with additional reference to the screen displays of FIGS. 2-10.
  • FIG. 11 shows a method 1100 for inputting text via the language input user interface. At operation 1102, the user interface enables a user to input text within the common in-line area 202. In the described implementation, the input text is a phonetic text, such as Chinese Pinyin. The input text is automatically converted to language text of a character-based language, such as Chinese Hanzi (operation 1104). One exemplary implementation of this conversion is described above with reference to FIG. 1. If the reader is interested, a more detailed discussion can be found in the incorporated co-pending applications Ser. No. ______, entitled “Language Input Architecture For Converting One Text Form to Another Text Form With Tolerance To Spelling, Typographical, And Conversion Errors” and Ser. No. ______, entitled “Language Input Architecture For Converting One Text Form to Another Text Form With Modeless Entry.”
  • Operation 1106 determines whether a user desires to edit the language text following conversion, as indicated by repositioning the cursor or an express command. If so (i.e., the “Yes” path from operation 1106), the UI receives the user's repositioning of the cursor proximal to the character to be edited (operation 1108). As illustrated in FIG. 5, the cursor may be repositioned in front of the language text character.
  • At operation 1110, the UI opens the edit window 500 in response to a user command as shown in FIG. 5. The edit window 500 includes the first candidate list 504 for replacing the language text. If a suitable replacement candidate is not presented in candidate list 504, the user may decide to invoke the second candidate list window 600, as illustrated in FIG. 6. Operation 1112 determines whether the user has requested the second candidate window 600. If a suitable candidate is available on the first candidate list 504, and thus the user decides not to open the second candidate list window (i.e., the “no” branch from operation 1112), the user may select replacement language text from the first candidate list window to replace the language text to be edited (operation 1114).
  • On the other hand, if the user invokes the second candidate window (i.e., the “yes” branch from operation 1112), the UI opens the second candidate list window and allows the user to select replacement language text to replace the language text being edited (operation 1116). The selected replacement language text from either the first candidate list window 504 or the second candidate list window 600 is then displayed in place of the language text in the in-line area 202 (operation 1118). The operational flow continues in operation 1106.
  • If a user does not desire to edit text (i.e. the “no” path from operation 1106), the UI determines whether the user continues to input text, as indicated by the user repositioning the cursor and continuing to enter characters (operation 1120). If the user's actions tend to suggest a continuation of text entry, the cursor is moved back to the input position at the end of the current section (operation 1122) and operational flow continues in input in-line operation 1102. If the user does not wish to continue, the process ends.
  • In-Line Input: Operations 1102 and 1104
  • FIG. 12 illustrates an in-line input sub-process 1200, which is an exemplary implementation of operations 1102 and 1104 of FIG. 11. Exemplary screen displays depicting this sub-process are illustrated in FIGS. 2 and 3.
  • At operation 1202, the UI receives an input string of phonetic text (e.g., Pinyin) from an input device (e.g., keyboard, voice recognition). The language input UI displays the phonetic text within the same in-line area 202 as the previously converted language text (operation 1204). The phonetic text-to-language text converter 138 converts the string of phonetic text into language text (e.g., Hanzi) in operation 1206. The language input UI replaces the phonetic text string with the converted language text string and displays the language text in the in-line area 202 (operation 1208). Sub-process 1200 then exits.
  • Sentence-Based Conversion: Operation 1104
  • FIG. 13 illustrates an automatic conversion sub-process 1300, which is another exemplary implementation of operation 1104. Exemplary screen displays depicting this sub-process are illustrated in FIGS. 3 and 4.
  • At operation 1302, the language input architecture receives a string of phonetic text input by the user via an input device. The language input UI displays the input phonetic text in the in-line area 202 (operation 1304). At operation 1306, the language input architecture determines whether the phonetic text belongs in the existing sentence or a new sentence. This determination can be based on whether the user has entered some form of punctuation, such as a period or comma.
  • If input phonetic text belongs to a new sentence (i.e. the “new” path from operation 1306), the input phonetic text is automatically converted to language text without considering the content of previous text in the previous sentence, if any (operation 1308). Conversely, if the input phonetic text does not belong to a new sentence (i.e. the “existing” path from operation 1306), the phonetic text in the sentence is automatically converted within the context of the sentence (operation 1310). As part of this conversion, previously converted language text may be further modified as additional text continues to change the intended meaning of the entire sentence. Operational flow exits following the conversion operations 1308 and 1310.
  • Confirmed Conversion: Operation 1104
  • FIG. 14 illustrates an automatic conversion sub-process 1400 in which the user confirms the converted language text. Sub-process 1400 is another exemplary implementation of operation 1104.
  • At operation 1402, the language input architecture receives a string of phonetic text input by the user via an input device. The language input UI displays the input phonetic text in the in-line area 202 (operation 1404). The phonetic text of the corresponding unconfirmed language text is automatically converted into language text of a character-based language (operation 1406).
  • At operation 1408, the language input UI determines whether the user has confirmed the converted language text. If not, the sub-process exits. Otherwise, if the user has confirmed the language text (i.e., the “yes” path from operation 1408), the UI confirms the converted language text and removes it from further contextual consideration as additional phonetic text is entered (operation 1410). Operational flow then exits.
  • In-Place Error Correction: Operations 1108-1118
  • FIGS. 15-18 illustrate different implementations of an in-place error correction sub-process, which is an exemplary implementation of operations 1108-1118 of FIG. 11. The sub-processes of FIGS. 15 and 16 concern use of the first and second candidate lists to correct language text. The sub-processes of FIGS. 17 and 18 are directed to correcting phonetic text using the phonetic text hint window.
  • FIG. 15 illustrates an in-place error correction sub-process 1500 that corrects converted language text by offering alternative language texts in a pop-up candidate window. Exemplary screen displays depicting this sub-process 1500 are illustrated in FIG. 5.
  • At operation 1502, in response to the user moving the cursor proximally to previously entered language text (e.g., in front of a character), the language input UI selects or identifies the language text to be edited. The UI opens the edit window 500, including the first candidate window 504 directly below the language text to be edited, to display a list of replacement candidates for the selected language text (operation 1504).
  • At operation 1506, the UI receives the user's selection of a replacement candidate from the first candidate window 504. The language input UI displays the selected replacement language text candidate in place of the selected language text within the same in-line area 202 (operation 1508). Operational flow then exits.
  • FIG. 16 illustrates an in-place error sub-process 1600 that corrects converted language text by offering a complete list of alternative language texts in a secondary, larger pop-up candidate window. Exemplary screen displays depicting this sub-process 1600 are illustrated in FIG. 6.
  • At operation 1602, in response to the user moving the cursor proximally to previously entered language text (e.g., in front of a character), the language input UI selects or identifies the language text to be edited. The UI opens the edit window 500, including the first candidate window 504 directly below the language text to be edited, to display a short list of replacement candidates for the selected language text (operation 1604). If the user cannot find an appropriate replacement candidate, the user may invoke a second candidate window 600 of replacement language text candidates (operation 1606). The second candidate list contains a larger or more complete list of replacement language text candidates than the first candidate window.
  • At operation 1608, the UI receives the user's selection of a replacement candidate from the second candidate window 600. The language input UI displays the selected replacement language text candidate in place of the selected language text within the same in-line area 202 (operation 1610). Operational flow then exits.
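The relationship between the two candidate windows may be sketched as follows; the ranking scores, the list-size cutoff, and the alternate ordering for the second list are assumptions for illustration (the disclosure states only that the second list is larger or more complete than the first):

```python
# Sketch of the two-tier candidate lists (operations 1604-1610).

def first_candidate_list(all_candidates, scores, size=5):
    """Short list for the first candidate window 504: the top-ranked
    replacement candidates."""
    ranked = sorted(all_candidates, key=lambda c: -scores.get(c, 0.0))
    return ranked[:size]

def second_candidate_list(all_candidates):
    """Complete list for the second candidate window 600; shown here
    with a different ordering (e.g., a fixed collation) than the first."""
    return sorted(all_candidates)
```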
  • FIG. 17 illustrates an in-place error sub-process 1700 that corrects the converted language text by editing the previously entered phonetic text via a pop-up hint window. Exemplary screen displays depicting this sub-process 1700 are illustrated in FIG. 7.
  • At operation 1702, in response to the user moving the cursor proximally to previously entered language text (e.g., in front of a character), the language input UI selects or identifies the language text to be edited. The UI opens the edit window 500, including the phonetic text hint window 502 directly above the language text to be edited that displays the phonetic text as entered by the user (operation 1704).
  • Upon user selection of the phonetic text in the hint window 502 (i.e., the “yes” path from operation 1706), the UI displays the phonetic text in place of the language text being edited (operation 1708). This allows the user to make corrections to the phonetic text within the in-line area 202. Operational flow then exits.
  • FIG. 18 illustrates an in-place error sub-process 1800 that corrects the converted language text by editing the previously entered phonetic text and viewing a new set of candidates following the editing. Exemplary screen displays depicting this sub-process 1800 are illustrated in FIGS. 8 and 9.
  • At operation 1802, in response to the user moving the cursor proximally to previously entered language text (e.g., in front of a character), the language input UI selects or identifies the language text to be edited. The UI opens the edit window 500, including the phonetic text hint window 502 directly above the selected language text and the first candidate window 504 directly below the language text (operation 1804).
  • Upon user selection of the phonetic text in the hint window 502 (i.e., the “yes” path from operation 1806), the UI displays the phonetic text in place of the language text being edited (operation 1808). The UI receives and displays the user's edits of the phonetic text in the in-line edit area 202 (operation 1810). In response to the editing, the UI displays a new list of replacement language text candidates in the first candidate window 504 (operation 1812). The user may further invoke the second candidate window 600 if desired.
  • At operation 1814, the UI receives the user's selection of a replacement candidate from the new list in the first candidate window 504. The language input UI displays the selected replacement language text candidate in place of the selected language text within the same in-line area 202 (operation 1816). Operational flow then exits.
  • Multi-Language Entry
  • FIG. 19 illustrates a multi-language entry sub-process 1900 in which two or more different languages are entered using the in-line input UI. Exemplary screen displays depicting this sub-process 1900 are illustrated in FIG. 10.
  • At operation 1902, the language input architecture receives a string of mixed phonetic and non-phonetic text input by the user via an input device. The language input UI displays the mixed text within the same in-line area 202 as the previously converted language text (operation 1904).
  • At operation 1906, the language input architecture determines whether the input text is phonetic text (e.g., Pinyin) as opposed to non-phonetic text (e.g., English). If the input text is phonetic text (i.e., the “yes” path from operation 1906), the language input architecture converts the phonetic text to language text (operation 1908). The UI displays the language text in place of the entered phonetic text and in-line with the previous text (operation 1910). On the other hand, if the input text is non-phonetic text (i.e., the “no” path from operation 1906), the language input architecture does not convert it and the UI displays the non-phonetic text in-line with the previous text (operation 1912). Operational flow then exits.
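The dispatch of operation 1906 may be sketched as follows. The syllable test and the toy syllable set below are simplified stand-ins; the disclosed architecture's actual phonetic/non-phonetic discrimination is not limited to this approach:

```python
# Sketch of the mixed-entry dispatch (operations 1906-1912): tokens that
# look like Pinyin are converted; others (e.g., English) pass through.

PINYIN_SYLLABLES = {"ni", "hao", "ma", "zhong", "wen"}  # toy subset

def is_phonetic(token: str) -> bool:
    # Assumption: a token is phonetic if it is a known syllable.
    return token.lower() in PINYIN_SYLLABLES

def process_token(token, convert):
    if is_phonetic(token):
        return convert(token)  # operations 1908/1910: convert and display
    return token               # operation 1912: display as entered
```

With this dispatch, the user need not shift modes: each token is routed to conversion or pass-through automatically.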
  • Exemplary Chinese-Based Implementation
  • FIGS. 20-26 illustrate an exemplary implementation of the language input architecture and UI in the context of the Chinese language. In this context, the phonetic text is Chinese Pinyin and the language text is Chinese Hanzi characters.
  • FIG. 20 illustrates one implementation of a Chinese input user interface showing an example of the in-line input feature. Table 2000 contains two strings of Pinyin text 2002 and 2004 input by the user and corresponding converted Hanzi text 2006 and 2008 as it would appear in the in-line input area. An exemplary display screen 2010 is shown below table 2000 and contains the converted Hanzi text 2008. Notice that the Pinyin text being input at a cursor bar 2012 is displayed in-line with the converted Chinese text. The other characteristics shown in the display screen 2010 are known in the word processing art.
  • FIG. 21 illustrates a Chinese UI screen 2100 in which converted Hanzi text is presently displayed in the in-line entry area 2102. The user has moved the cursor to select Chinese character text 2104 for editing and invoked the pop-up edit window 2106, consisting of a Pinyin text hint window 2108 and a first Hanzi text candidate window 2110. The Pinyin text 2112 associated with the selected Chinese character text 2104 is displayed in a Pinyin text hint window 2108.
  • FIG. 22 illustrates one implementation of a Chinese input user interface showing an example of the in-place error correction feature. Table 2200 depicts two user actions in the left column—an action 2202 to open the edit window containing a phonetic hint and a candidate list and an action 2204 to select an item “1” from the candidate list. In response to the user actions in the left column, the right column of table 2200 illustrates corresponding exemplary screen shots 2206 and 2208.
  • With respect to screen shot 2206, the user selects Chinese character text 2210 for editing by moving the cursor in front of the character text 2210. The user inputs a command to open an edit window containing a Pinyin text hint window 2212 and a first candidate list window 2214. Next, the user selects item “1” from the candidate list 2214 and the first candidate 2216 associated with item “1” is substituted for the original selected text 2210. Notice also, in screen shot 2208, that the candidates in the list 2214 are updated (i.e., scrolled upward one place) to reflect that the selected candidate 2216 has moved into the in-line entry area. The updating may be animated to visually illustrate that the selected candidate 2216 is moved into the in-line area.
  • FIG. 23 illustrates another implementation of a Chinese input user interface to illustrate in-place correction of the Pinyin text. The left column in table 2300 contains a series of five user actions 2302-2310 and the right column shows corresponding exemplary screen shots 2312-2320 resulting from the user actions. When a user decides to edit the character text, the user moves the cursor to the front of the character text to be edited (action 2302). Suppose the user selects Chinese character text 2330 to be edited (UI screen shot 2312). After moving the cursor in front of the character text 2330, the user inputs a command (e.g., pressing the “ESC” key) to invoke the edit window (action 2304). As a result, a Pinyin text hint window 2332 and a first candidate list window 2334 are opened as shown in the UI screen shot 2314.
  • Next, the user enters “0” (action 2306) to select the Pinyin text 2336 in the Pinyin text hint window 2332. The selected Pinyin text 2336 is substituted for the selected character text 2330 as shown in the UI screen shot 2316. At this point, the user is free to edit the original Pinyin text.
  • Suppose the user adds an additional apostrophe in the Pinyin text 2336 (action 2308) to produce text 2336′ as shown in UI screen shot 2318. The edited Pinyin text 2336′ is shown both in the in-line area as well as the Pinyin text hint window 2332. Following the editing, the first candidate window 2334 is updated with a new list of character text candidates. In this example, a new character text candidate 2338 corresponding to the edited Pinyin text 2336′ is displayed in the first candidate list window 2334.
  • Finally, the user selects the desired character text 2338 in the first candidate list window 2334, for example, by entering “1” (action 2310). As a result, the selected character text 2338 is displayed in place of the edited Pinyin text 2336′, as illustrated in UI screen shot 2320. In this manner, the new character text 2338 is effectively substituted for the original language text 2330.
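The FIG. 23 editing sequence may be sketched end to end as follows. The function and parameter names are illustrative assumptions; `candidates_for` stands in for the conversion engine's candidate generation:

```python
# Sketch of the in-place Pinyin editing flow (actions 2302-2310):
# recover the original Pinyin, let the user edit it, regenerate the
# candidate list, and substitute the user's choice.

def edit_in_place(selected_char, pinyin_of, candidates_for, edit_fn, choice_index):
    pinyin = pinyin_of[selected_char]    # hint window recall (action 2306)
    edited = edit_fn(pinyin)             # user edits the Pinyin (action 2308)
    candidates = candidates_for(edited)  # refreshed first candidate list
    return candidates[choice_index]      # selection replaces text (action 2310)
```

For example, inserting an apostrophe into the recovered Pinyin (as in text 2336′) changes the syllable segmentation and therefore the candidates regenerated for the first candidate list.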
  • FIG. 24 illustrates another implementation of a Chinese input user interface to illustrate entry of mixed languages, such as Chinese and English. The left column in table 2400 contains a series of two user actions 2402 and 2404 and the right column shows corresponding exemplary screen shots 2406 and 2408 resulting from the user actions.
  • Suppose the user inputs mixed Pinyin text 2410 and English text 2412 as indicated by action 2402. The user can enter the mixed text into the language input UI without shifting modes between Chinese entry and English entry. That is, the user simply enters the Pinyin text and English text in the same line without stopping. The Pinyin text 2410 is converted into Chinese text 2414 and displayed within the same in-line area, as illustrated in UI screen shot 2406. The English text 2412 is not converted by the language input architecture, but is displayed as entered.
  • Subsequently, the user inputs mixed Pinyin text 2416, English text 2418, and Pinyin text 2420 without shifting modes (action 2404). The Pinyin text 2416 and 2420 are converted into Chinese text 2422 and 2424, respectively, as shown in UI screen shot 2408. The English text 2418 remains unchanged and is displayed in-line with the converted Chinese text.
  • According to one implementation, the phonetic and non-phonetic text may be displayed differently to differentiate between them. For example, compare the mixed text in table 2000 of FIG. 20 and table 2400 of FIG. 24. The Pinyin text (e.g., 2012 in FIG. 20) is displayed in a narrow, bold font, whereas the English text (e.g., 2412 or 2418 in FIG. 24) is displayed in a thin, courier-type font.
  • FIG. 25 illustrates another implementation of a Chinese input user interface to illustrate the first and second candidate lists for in-place editing. The left column in table 2500 contains a series of two user actions 2502 and 2504 and the right column shows corresponding exemplary screen shots 2506 and 2508 resulting from the user actions.
  • At action 2502, the user selects a Chinese text to be edited and inputs a command to open the Pinyin text hint window 2510 and a first character text candidate list 2512. The windows 2510 and 2512 appear above and below the in-line entry area, respectively, as illustrated in UI screen shot 2506.
  • Next, at action 2504, the user inputs a command to open a second character text candidate list. A second character text candidate list window 2514 is popped open next to the first candidate list 2512, as illustrated in UI screen shot 2508. The user may then select a character text candidate from the second character text candidate list window 2514.
  • FIG. 26 illustrates another implementation of a Chinese input user interface to illustrate sentence-based automatic conversion with confirmed character text. The left column in table 2600 contains a series of five user actions 2602-2610 and the right column shows corresponding exemplary screen shots 2612-2620 resulting from the user actions.
  • At action 2602, the user inputs Pinyin text 2622 and 2624. The Pinyin text 2622 is automatically converted into character text 2626 and Pinyin text 2624 remains unconverted until further user input, as illustrated by UI screen shot 2612. At action 2604, the user subsequently inputs Pinyin text 2628. The previously converted character text 2626 is now automatically converted into different Chinese character text 2630 as a result of changing context introduced by the addition of Pinyin text 2628. This modification of the converted character text is illustrated in UI screen shot 2614. Pinyin text 2624 and 2628 remain unconverted at this point, and continue to be illustrated in-line with the modified language text.
  • Next, at action 2606, the user inputs a confirmation command (e.g., pressing the space bar) to confirm the just converted character text 2630. Meanwhile, the Pinyin text 2624 and 2628 are automatically converted into Chinese text 2632 and 2634, respectively, based on the context in the sentence so far. This is illustrated in screen shot 2616.
  • Subsequently, at action 2608, the user enters additional Pinyin text (not shown) in the same sentence and the Pinyin text is converted into character text 2636, as illustrated in UI screen shot 2618. Notice that the confirmed character text 2630 is not changed by subsequent entry of the Pinyin text.
  • For comparison purposes, suppose the character text 2630 is not confirmed by user action 2606 (e.g., the user does not press the space bar). Instead, the user enters the additional Pinyin text without confirmation of character text 2630. In this case, the character text 2626 remains unchanged and is not modified to text 2630, as illustrated by UI screen shot 2620. This is because the automatic conversion from Pinyin text to character text is sentence-based and character text 2626 is part of the sentence. As long as the sentence is active (i.e., no punctuation has ended the sentence or no new sentence has yet been started), the previously converted character text in the current sentence is subject to further modification unless the user confirms the converted character text.
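The mutability rule just described reduces to a simple predicate; the function below is an illustrative assumption summarizing the rule, not a disclosed implementation:

```python
# Sketch of the sentence-based mutability rule of FIG. 26: converted
# character text remains subject to re-conversion only while it is
# unconfirmed and its sentence is still active.

def may_reconvert(confirmed: bool, sentence_closed: bool) -> bool:
    """True only for unconfirmed text within a still-active sentence."""
    return not confirmed and not sentence_closed
```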
  • Conclusion
  • Although the description above uses language that is specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the invention.

Claims (52)

1. A method comprising:
receiving input text entered by a user;
converting the input text to an output text; and
displaying the input text and the output text within a common entry line.
2. A method as recited in claim 1, wherein the input text comprises a phonetic text and the output text comprises a character-based language text.
3. A method as recited in claim 1, wherein the input text comprises Chinese Pinyin and the output text comprises Chinese Hanzi.
4. A method as recited in claim 1, wherein the displaying comprises displaying the input text and the output text together within a common horizontal line.
5. A method as recited in claim 1, wherein the displaying comprises depicting the output text in place of the input text from which the output text was converted.
6. A method as recited in claim 1, further comprising modifying the output text as additional input text is entered.
7. A method as recited in claim 6, further comprising ceasing to further modify the output text as additional input text is entered in response to user entry of punctuation.
8. A method as recited in claim 6, further comprising ceasing to further modify the output text as additional input text is entered in response to user confirmation of the output text.
9. A method as recited in claim 6, further comprising ceasing, in response to user confirmation of the output text, to modify the output text while leaving unconverted input text active for modification.
10. A method as recited in claim 1, further comprising selectively modifying the output text as additional input text is entered such that no modification is made if such modification results in only a minor improvement.
11. A method as recited in claim 1, further comprising enabling a user to edit the output text within the common entry line without switching from an entry mode to an edit mode.
12. A method as recited in claim 1, further comprising, in response to user selection of output text for editing, depicting an edit window adjacent to the selected output text in the entry line.
13. A method as recited in claim 12, wherein the entry line is oriented in a first direction and further comprising orienting the edit window in a second direction orthogonal to the first direction.
14. A method as recited in claim 1, further comprising, in response to user selection of output text for editing, depicting an input text hint window adjacent to the selected output text in the entry line, the input text hint window containing the input text from which the selected output text was converted.
15. A method as recited in claim 1, further comprising, in response to user selection of output text for editing, depicting a first candidate list adjacent to the selected output text in the entry line, the first candidate list containing one or more alternate output text candidates that may be substituted for the selected output text.
16. A method as recited in claim 15, further comprising ordering the output text candidates within the first candidate list according to a ranking.
17. A method as recited in claim 15, wherein the first candidate list is scrollable, and further comprising animating movement of the output text candidates as the list is scrolled.
18. A method as recited in claim 15, further comprising depicting a second candidate list containing a more complete set of output text candidates than the first candidate list.
19. A method as recited in claim 18, further comprising arranging the output text candidates in the second candidate list according to complexity of character construction.
20. A method as recited in claim 18, further comprising:
ordering the output text candidates in the first candidate list according to a first metric; and
arranging the output text candidates in the second candidate list according to a second metric different than the first metric.
21. A method as recited in claim 1, wherein the entry line is oriented in a first direction, and further comprising, in response to user selection of output text for editing:
depicting an input text hint window above the selected output text in a second direction orthogonal to the first direction, the input text hint window containing the input text from which the selected output text was converted; and
depicting a first candidate window below the selected output text in the second direction, the first candidate window containing one or more alternate output text candidates that may be substituted for the selected output text.
22. A method as recited in claim 1, wherein the input text comprises phonetic and non-phonetic text, further comprising:
converting the phonetic text to language text; and
displaying the language text, the non-phonetic text, and newly entered phonetic text within the common entry line.
23. A method as recited in claim 1, further comprising enabling a user to enter input text containing at least two languages without switching between a first entry mode for a first language and a second entry mode for a second language.
24. A method as recited in claim 1, wherein the input text comprises individual input characters, further comprising converting at least one of the input characters to the output text when at least one input character is displayed and at most six input characters are displayed.
25. A method as recited in claim 1, wherein the input text comprises individual input characters, further comprising:
evaluating at least two conversion candidates for matching characters; and
if at least one character from both conversion candidates matches, converting at least one input character to the matching character.
26. A method as recited in claim 1, wherein the input text comprises individual input characters, further comprising always displaying a most recently entered input character.
27. A method as recited in claim 1, wherein the input text comprises individual input characters, further comprising converting at least one input character to the output text of a first most likely conversion candidate if the first most likely conversion candidate scores significantly higher than a second most likely conversion candidate.
28. One or more computer-readable media having computer-executable instructions that, when executed on a processor, direct a computer to perform the method as recited in claim 1.
29. A method comprising:
displaying phonetic text as a user enters the phonetic text; and
displaying language text upon conversion from the phonetic text, the language text being presented in place of the phonetic text from which the language text is converted so that the language text and any unconverted phonetic text are displayed together.
30. A method as recited in claim 29, wherein the phonetic text comprises a Chinese Pinyin and the language text comprises a Chinese Hanzi.
31. A method as recited in claim 29, further comprising displaying the unconverted phonetic text and the language text together within a common horizontal line.
32. A method as recited in claim 29, further comprising modifying the language text as additional phonetic text is entered.
33. A method as recited in claim 32, further comprising ceasing to further modify the language text as additional phonetic text is entered in response to user entry of punctuation.
34. A method as recited in claim 32, further comprising ceasing to further modify the language text as additional phonetic text is entered in response to user confirmation of the language text.
35. A method as recited in claim 32, further comprising ceasing, in response to user confirmation of the language text, to modify the language text while leaving unconverted phonetic text active for modification.
36. A method as recited in claim 29, further comprising modifying the language text to second language text as additional phonetic text is entered if the second language text is significantly more likely to have been intended.
37. A method as recited in claim 29, further comprising enabling a user to edit the language text without switching from an entry mode to an edit mode.
38. A method as recited in claim 29, further comprising, in response to user selection of language text for editing, displaying an edit window adjacent to the selected language text.
39. A method as recited in claim 29, further comprising, in response to user selection of language text for editing:
displaying a phonetic text hint proximal to the selected language text, the phonetic text hint containing the phonetic text from which the selected language text was converted; and
displaying a reduced-set candidate list proximal to the selected language text, the candidate list containing a reduced set of one or more alternate language text candidates that may be substituted for the selected language text.
40. A method as recited in claim 39, further comprising ordering the language text candidates within the candidate list according to a ranking.
41. A method as recited in claim 39, wherein the candidate list is scrollable, and further comprising animating movement of the language text candidates as the list is scrolled.
42. A method as recited in claim 39, further comprising displaying a full-set candidate list containing a more complete set of language text candidates than the reduced-set candidate list.
43. A method as recited in claim 42, further comprising arranging the language text candidates in the full-set candidate list according to complexity of character construction.
44. A method as recited in claim 42, further comprising:
ordering the language text candidates in the reduced-set candidate list according to a first metric; and
arranging the language text candidates in the full-set candidate list according to a second metric different than the first metric.
45. A method as recited in claim 29, wherein the phonetic text comprises individual characters, further comprising converting at least one of the phonetic characters to the language text when at least one phonetic character is displayed and at most six phonetic characters are displayed.
46. One or more computer-readable media having computer-executable instructions that, when executed on a processor, direct a computer to perform the method as recited in claim 29.
47. A method comprising:
presenting a user interface to receive phonetic text and non-phonetic text entered by a user;
converting the phonetic text to a language text; and
displaying together the language text, the non-phonetic text, and unconverted phonetic text.
48. A method as recited in claim 47, further comprising displaying the language text, the non-phonetic text, and the unconverted phonetic text in-line within a common horizontal line.
49. A method as recited in claim 47, further comprising displaying the non-phonetic text differently than the unconverted phonetic text so that the non-phonetic text appears differently than the unconverted phonetic text.
50. A method as recited in claim 47, further comprising displaying the non-phonetic text in a first font and the unconverted phonetic text in a second font different from the first font.
51. A method as recited in claim 47, further comprising displaying the non-phonetic text in a first color and the unconverted phonetic text in a second color different from the first color.
52. One or more computer-readable media having computer-executable instructions that, when executed on a processor, direct a computer to perform the method as recited in claim 47.
US10/898,407 1999-11-05 2004-07-23 Language conversion and display Abandoned US20050060138A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/898,407 US20050060138A1 (en) 1999-11-05 2004-07-23 Language conversion and display

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16358899P 1999-11-05 1999-11-05
US09/606,811 US7403888B1 (en) 1999-11-05 2000-06-28 Language input user interface
US10/898,407 US20050060138A1 (en) 1999-11-05 2004-07-23 Language conversion and display

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/606,811 Division US7403888B1 (en) 1999-11-05 2000-06-28 Language input user interface

Publications (1)

Publication Number Publication Date
US20050060138A1 true US20050060138A1 (en) 2005-03-17

Family

ID=39619596

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/606,811 Expired - Fee Related US7403888B1 (en) 1999-11-05 2000-06-28 Language input user interface
US10/898,407 Abandoned US20050060138A1 (en) 1999-11-05 2004-07-23 Language conversion and display

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/606,811 Expired - Fee Related US7403888B1 (en) 1999-11-05 2000-06-28 Language input user interface

Country Status (5)

Country Link
US (2) US7403888B1 (en)
JP (2) JP4920154B2 (en)
CN (1) CN100593167C (en)
AU (1) AU1361401A (en)
WO (1) WO2001033324A2 (en)

Cited By (117)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020099552A1 (en) * 2001-01-25 2002-07-25 Darryl Rubin Annotating electronic information with audio clips
US20030073451A1 (en) * 2001-05-04 2003-04-17 Christian Kraft Communication terminal having a predictive text editor application
US20030206189A1 (en) * 1999-12-07 2003-11-06 Microsoft Corporation System, method and user interface for active reading of electronic content
US20040083198A1 (en) * 2002-07-18 2004-04-29 Bradford Ethan R. Dynamic database reordering system
US20050034056A1 (en) * 2000-04-21 2005-02-10 Microsoft Corporation Method and apparatus for displaying multiple contexts in electronic documents
US20050052406A1 (en) * 2003-04-09 2005-03-10 James Stephanick Selective input system based on tracking of motion parameters of an input device
US20060020882A1 (en) * 1999-12-07 2006-01-26 Microsoft Corporation Method and apparatus for capturing and rendering text annotations for non-modifiable electronic content
US20060242586A1 (en) * 2005-04-20 2006-10-26 Microsoft Corporation Searchable task-based interface to control panel functionality
US20060274051A1 (en) * 2003-12-22 2006-12-07 Tegic Communications, Inc. Virtual Keyboard Systems with Automatic Correction
US20070057949A1 (en) * 2005-09-15 2007-03-15 Microsoft Corporation Enlargement of font characters
US20070106664A1 (en) * 2005-11-04 2007-05-10 Minfo, Inc. Input/query methods and apparatuses
US20070132834A1 (en) * 2005-12-08 2007-06-14 International Business Machines Corporation Speech disambiguation in a composite services enablement environment
US20070136449A1 (en) * 2005-12-08 2007-06-14 International Business Machines Corporation Update notification for peer views in a composite services delivery environment
US20070133769A1 (en) * 2005-12-08 2007-06-14 International Business Machines Corporation Voice navigation of a visual view for a session in a composite services enablement environment
US20070136448A1 (en) * 2005-12-08 2007-06-14 International Business Machines Corporation Channel presence in a composite services enablement environment
US20070136420A1 (en) * 2005-12-08 2007-06-14 International Business Machines Corporation Visual channel refresh rate control for composite services delivery
US20070185957A1 (en) * 2005-12-08 2007-08-09 International Business Machines Corporation Using a list management server for conferencing in an ims environment
US20070214122A1 (en) * 2006-03-10 2007-09-13 Microsoft Corporation Searching command enhancements
US20070244691A1 (en) * 2006-04-17 2007-10-18 Microsoft Corporation Translation of user interface text strings
US20070277118A1 (en) * 2006-05-23 2007-11-29 Microsoft Corporation Providing suggestion lists for phonetic input
US20070276650A1 (en) * 2006-05-23 2007-11-29 Microsoft Corporation Techniques for customization of phonetic schemes
US20070294618A1 (en) * 2004-04-27 2007-12-20 Yoshio Yamamoto Character Input System Including Application Device and Input Server
US20080015841A1 (en) * 2000-05-26 2008-01-17 Longe Michael R Directional Input System with Automatic Correction
US20080052064A1 (en) * 2006-08-25 2008-02-28 Nhn Corporation Method for searching for chinese character using tone mark and system for executing the method
US20080082335A1 (en) * 2006-09-28 2008-04-03 Howard Engelsen Conversion of alphabetic words into a plurality of independent spellings
US7423647B2 (en) * 2001-01-16 2008-09-09 Lg Electronics Inc. Apparatus and methods of selecting special characters in a mobile communication terminal
US20090006075A1 (en) * 2007-06-29 2009-01-01 Microsoft Corporation Phonetic search using normalized string
US20090112574A1 (en) * 2007-10-30 2009-04-30 Yu Zou Multi-language interfaces switch system and method therefor
US20090213134A1 (en) * 2003-04-09 2009-08-27 James Stephanick Touch screen and graphical user interface
US20100010973A1 (en) * 2008-07-09 2010-01-14 International Business Machines Corporation Vector Space Lightweight Directory Access Protocol Data Search
US7730391B2 (en) 2000-06-29 2010-06-01 Microsoft Corporation Ink thickness rendering for electronic annotations
US20100217581A1 (en) * 2007-04-10 2010-08-26 Google Inc. Multi-Mode Input Method Editor
US20100250251A1 (en) * 2009-03-30 2010-09-30 Microsoft Corporation Adaptation for statistical language model
US7809838B2 (en) 2005-12-08 2010-10-05 International Business Machines Corporation Managing concurrent data updates in a composite services delivery system
US7818432B2 (en) 2005-12-08 2010-10-19 International Business Machines Corporation Seamless reflection of model updates in a visual page for a visual channel in a composite services delivery system
US7827288B2 (en) 2005-12-08 2010-11-02 International Business Machines Corporation Model autocompletion for composite services synchronization
US7877486B2 (en) 2005-12-08 2011-01-25 International Business Machines Corporation Auto-establishment of a voice channel of access to a session for a composite service from a visual channel of access to the session for the composite service
US7880730B2 (en) 1999-05-27 2011-02-01 Tegic Communications, Inc. Keyboard system with automatic correction
US7890635B2 (en) 2005-12-08 2011-02-15 International Business Machines Corporation Selective view synchronization for composite services delivery
US20110093255A1 (en) * 2006-03-31 2011-04-21 Research In Motion Limited Handheld electronic device including toggle of a selected data source, and associated method
US20110153325A1 (en) * 2009-12-23 2011-06-23 Google Inc. Multi-Modal Input on an Electronic Device
US20110193797A1 (en) * 2007-02-01 2011-08-11 Erland Unruh Spell-check for a keyboard system with automatic correction
US20120019446A1 (en) * 2009-03-20 2012-01-26 Google Inc. Interaction with ime computing device
US20120029910A1 (en) * 2009-03-30 2012-02-02 Touchtype Ltd System and Method for Inputting Text into Electronic Devices
US8189563B2 (en) 2005-12-08 2012-05-29 International Business Machines Corporation View coordination for callers in a composite services enablement environment
US8201087B2 (en) 2007-02-01 2012-06-12 Tegic Communications, Inc. Spell-check for a keyboard system with automatic correction
US8200475B2 (en) 2004-02-13 2012-06-12 Microsoft Corporation Phonetic-based text input method
US8259923B2 (en) 2007-02-28 2012-09-04 International Business Machines Corporation Implementing a contact center using open standards and non-proprietary components
US20120245921A1 (en) * 2011-03-24 2012-09-27 Microsoft Corporation Assistance Information Controlling
US20120262488A1 (en) * 2009-12-23 2012-10-18 Nokia Corporation Method and Apparatus for Facilitating Text Editing and Related Computer Program Product and Computer Readable Medium
US20130144820A1 (en) * 2006-06-30 2013-06-06 Research In Motion Limited Method of learning a context of a segment of text, and associated handheld electronic device
US20130278506A1 (en) * 2010-06-10 2013-10-24 Michael William Murphy Novel character specification system and method that uses a limited number of selection keys
US8594305B2 (en) 2006-12-22 2013-11-26 International Business Machines Corporation Enhancing contact centers with dialog contracts
US20140006009A1 (en) * 2006-05-09 2014-01-02 Blackberry Limited Handheld electronic device including automatic selection of input language, and associated method
US8627197B2 (en) 1999-12-07 2014-01-07 Microsoft Corporation System and method for annotating an electronic document independently of its content
US8672682B2 (en) 2006-09-28 2014-03-18 Howard A. Engelsen Conversion of alphabetic words into a plurality of independent spellings
US20140317744A1 (en) * 2010-11-29 2014-10-23 Biocatch Ltd. Device, system, and method of user segmentation
EP2713255A4 (en) * 2012-06-04 2015-03-04 Huawei Device Co Ltd Method and electronic device for prompting character input
US20150088486A1 (en) * 2013-09-25 2015-03-26 International Business Machines Corporation Written language learning using an enhanced input method editor (ime)
US8996356B1 (en) * 2012-04-10 2015-03-31 Google Inc. Techniques for predictive input method editors
US9046932B2 (en) 2009-10-09 2015-06-02 Touchtype Ltd System and method for inputting text into electronic devices based on text and text category predictions
US20150154958A1 (en) * 2012-08-24 2015-06-04 Tencent Technology (Shenzhen) Company Limited Multimedia information retrieval method and electronic device
US9055150B2 (en) 2007-02-28 2015-06-09 International Business Machines Corporation Skills based routing in a standards based contact center using a presence server and expertise specific watchers
CN104714940A (en) * 2015-02-12 2015-06-17 深圳市前海安测信息技术有限公司 Method and device for identifying unregistered word in intelligent interaction system
US9189472B2 (en) 2009-03-30 2015-11-17 Touchtype Limited System and method for inputting text into small screen devices
US20150331590A1 (en) * 2013-01-25 2015-11-19 Hewlett-Packard Development Company, L.P. User interface application launcher and method thereof
US9247056B2 (en) 2007-02-28 2016-01-26 International Business Machines Corporation Identifying contact center agents based upon biometric characteristics of an agent's speech
US9286288B2 (en) 2006-06-30 2016-03-15 Blackberry Limited Method of learning character segments during text input, and associated handheld electronic device
US20160125753A1 (en) * 2014-11-04 2016-05-05 Knotbird LLC System and methods for transforming language into interactive elements
US20160142465A1 (en) * 2014-11-19 2016-05-19 Diemsk Jean System and method for generating visual identifiers from user input associated with perceived stimuli
US9372672B1 (en) * 2013-09-04 2016-06-21 Tg, Llc Translation in visual context
US20160179774A1 (en) * 2014-12-18 2016-06-23 International Business Machines Corporation Orthographic Error Correction Using Phonetic Transcription
US20160217782A1 (en) * 2013-10-10 2016-07-28 Kabushiki Kaisha Toshiba Transliteration work support device, transliteration work support method, and computer program product
US9424240B2 (en) 1999-12-07 2016-08-23 Microsoft Technology Licensing, Llc Annotations for electronic content
US9424246B2 (en) 2009-03-30 2016-08-23 Touchtype Ltd. System and method for inputting text into electronic devices
US20160282956A1 (en) * 2015-03-24 2016-09-29 Google Inc. Unlearning techniques for adaptive language models in text entry
US20170111704A1 (en) * 1998-11-30 2017-04-20 Rovi Guides, Inc. Interactive television program guide with selectable languages
US9972317B2 (en) 2004-11-16 2018-05-15 Microsoft Technology Licensing, Llc Centralized method and system for clarifying voice commands
US20180160173A1 (en) * 2016-12-07 2018-06-07 Alticast Corporation System for providing cloud-based user interfaces and method thereof
US20180210558A1 (en) * 2014-06-17 2018-07-26 Google Inc. Input method editor for inputting names of geographic locations
US10069852B2 (en) 2010-11-29 2018-09-04 Biocatch Ltd. Detection of computerized bots and automated cyber-attack modules
US10191654B2 (en) 2009-03-30 2019-01-29 Touchtype Limited System and method for inputting text into electronic devices
US10216410B2 (en) 2015-04-30 2019-02-26 Michael William Murphy Method of word identification that uses interspersed time-independent selection keys
US10332071B2 (en) 2005-12-08 2019-06-25 International Business Machines Corporation Solution for adding context to a text exchange modality during interactions with a composite services application
US10372310B2 (en) 2016-06-23 2019-08-06 Microsoft Technology Licensing, Llc Suppression of input images
US10432999B2 (en) * 2017-04-14 2019-10-01 Samsung Electronics Co., Ltd. Display device, display system and method for controlling display device
US10474815B2 (en) 2010-11-29 2019-11-12 Biocatch Ltd. System, device, and method of detecting malicious automatic script and code injection
US10523680B2 (en) * 2015-07-09 2019-12-31 Biocatch Ltd. System, device, and method for detecting a proxy server
US10579784B2 (en) 2016-11-02 2020-03-03 Biocatch Ltd. System, device, and method of secure utilization of fingerprints for user authentication
US10586036B2 (en) 2010-11-29 2020-03-10 Biocatch Ltd. System, device, and method of recovery and resetting of user authentication factor
US10621585B2 (en) 2010-11-29 2020-04-14 Biocatch Ltd. Contextual mapping of web-pages, and generation of fraud-relatedness score-values
US10685355B2 (en) 2016-12-04 2020-06-16 Biocatch Ltd. Method, device, and system of detecting mule accounts and accounts used for money laundering
US10719765B2 (en) 2015-06-25 2020-07-21 Biocatch Ltd. Conditional behavioral biometrics
US10728761B2 (en) 2010-11-29 2020-07-28 Biocatch Ltd. Method, device, and system of detecting a lie of a user who inputs data
US10747305B2 (en) 2010-11-29 2020-08-18 Biocatch Ltd. Method, system, and device of authenticating identity of a user of an electronic device
US10776476B2 (en) 2010-11-29 2020-09-15 Biocatch Ltd. System, device, and method of visual login
US10834590B2 (en) 2010-11-29 2020-11-10 Biocatch Ltd. Method, device, and system of differentiating between a cyber-attacker and a legitimate user
US10897482B2 (en) 2010-11-29 2021-01-19 Biocatch Ltd. Method, device, and system of back-coloring, forward-coloring, and fraud detection
US10917431B2 (en) 2010-11-29 2021-02-09 Biocatch Ltd. System, method, and device of authenticating a user based on selfie image or selfie video
US10949514B2 (en) 2010-11-29 2021-03-16 Biocatch Ltd. Device, system, and method of differentiating among users based on detection of hardware components
US10949757B2 (en) 2010-11-29 2021-03-16 Biocatch Ltd. System, device, and method of detecting user identity based on motor-control loop model
US10970394B2 (en) 2017-11-21 2021-04-06 Biocatch Ltd. System, device, and method of detecting vishing attacks
US11055395B2 (en) 2016-07-08 2021-07-06 Biocatch Ltd. Step-up authentication
US11054989B2 (en) 2017-05-19 2021-07-06 Michael William Murphy Interleaved character selection interface
US11093898B2 (en) 2005-12-08 2021-08-17 International Business Machines Corporation Solution for adding context to a text exchange modality during interactions with a composite services application
US11126794B2 (en) * 2019-04-11 2021-09-21 Microsoft Technology Licensing, Llc Targeted rewrites
US20210329030A1 (en) * 2010-11-29 2021-10-21 Biocatch Ltd. Device, System, and Method of Detecting Vishing Attacks
US20210375264A1 (en) * 2020-05-28 2021-12-02 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for speech recognition, and storage medium
US11210674B2 (en) 2010-11-29 2021-12-28 Biocatch Ltd. Method, device, and system of detecting mule accounts and accounts used for money laundering
US11223619B2 (en) 2010-11-29 2022-01-11 Biocatch Ltd. Device, system, and method of user authentication based on user-specific characteristics of task performance
US11264007B2 (en) 2017-07-20 2022-03-01 Panasonic Intellectual Property Management Co., Ltd. Translation device, translation method, and program
US11269977B2 (en) 2010-11-29 2022-03-08 Biocatch Ltd. System, apparatus, and method of collecting and processing data in electronic devices
US11416214B2 (en) 2009-12-23 2022-08-16 Google Llc Multi-modal input on an electronic device
US11526654B2 (en) * 2019-07-26 2022-12-13 See Word Design, LLC Reading proficiency system and method
US11606353B2 (en) 2021-07-22 2023-03-14 Biocatch Ltd. System, device, and method of generating and utilizing one-time passwords
US11809831B2 (en) * 2020-01-08 2023-11-07 Kabushiki Kaisha Toshiba Symbol sequence converting apparatus and symbol sequence conversion method
US11922007B2 (en) 2018-11-29 2024-03-05 Michael William Murphy Apparatus, method and system for inputting characters to an electronic device

Families Citing this family (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7884804B2 (en) * 2003-04-30 2011-02-08 Microsoft Corporation Keyboard with input-sensitive display device
US7119794B2 (en) * 2003-04-30 2006-10-10 Microsoft Corporation Character and text unit input correction system
FI115274B (en) * 2003-12-19 2005-03-31 Nokia Corp Electronic device e.g. palm computer selects language package for use in voice user interface used for controlling device functions
US7478033B2 (en) * 2004-03-16 2009-01-13 Google Inc. Systems and methods for translating Chinese pinyin to Chinese characters
US20050289463A1 (en) * 2004-06-23 2005-12-29 Google Inc., A Delaware Corporation Systems and methods for spell correction of non-roman characters and words
US9471566B1 (en) * 2005-04-14 2016-10-18 Oracle America, Inc. Method and apparatus for converting phonetic language input to written language output
US7831423B2 (en) * 2006-05-25 2010-11-09 Multimodal Technologies, Inc. Replacing text representing a concept with an alternate written form of the concept
US8626486B2 (en) * 2006-09-05 2014-01-07 Google Inc. Automatic spelling correction for machine translation
CN101149660B (en) * 2006-09-21 2011-08-17 乐金电子(中国)研究开发中心有限公司 Dummy keyboard suitable for bi-directional writing language and its implementing method
US8078451B2 (en) 2006-10-27 2011-12-13 Microsoft Corporation Interface and methods for collecting aligned editorial corrections into a database
JP2008152670A (en) * 2006-12-19 2008-07-03 Fujitsu Ltd Translation input support program, storage medium recording the same, translation input support apparatus, and translation input support system
US20080221866A1 (en) * 2007-03-06 2008-09-11 Lalitesh Katragadda Machine Learning For Transliteration
CN101286154B (en) * 2007-04-09 2016-08-10 谷歌股份有限公司 Input method editor user profiles
CN104866469B (en) 2007-04-11 2018-10-02 谷歌有限责任公司 Input Method Editor with secondary language mode
US8457946B2 (en) * 2007-04-26 2013-06-04 Microsoft Corporation Recognition architecture for generating Asian characters
US8103498B2 (en) * 2007-08-10 2012-01-24 Microsoft Corporation Progressive display rendering of processed text
EP2031486A1 (en) * 2007-08-31 2009-03-04 Research In Motion Limited Handheld electric device and associated method providing advanced text editing function in a text disambiguation environment
CN101398715B (en) * 2007-09-24 2010-06-02 普天信息技术研究院有限公司 Multi-type character mixing input method
US8010465B2 (en) * 2008-02-26 2011-08-30 Microsoft Corporation Predicting candidates using input scopes
US8289283B2 (en) * 2008-03-04 2012-10-16 Apple Inc. Language input interface on a device
CN101576783B (en) * 2008-05-09 2012-11-28 诺基亚公司 User interface, equipment and method for hand input
US9355090B2 (en) * 2008-05-30 2016-05-31 Apple Inc. Identification of candidate characters for text input
JP5224283B2 (en) * 2008-11-21 2013-07-03 キヤノンソフトウェア株式会社 Information processing system, information processing apparatus, information processing system control method, information processing apparatus control method, and program
US20110035209A1 (en) * 2009-07-06 2011-02-10 Macfarlane Scott Entry of text and selections into computing devices
TWI412955B (en) * 2009-08-19 2013-10-21 Inventec Appliances Corp Method of prompting stroke order for chinese character, electronic device, and computer program product thereof
US9317116B2 (en) * 2009-09-09 2016-04-19 Immersion Corporation Systems and methods for haptically-enhanced text interfaces
US7809550B1 (en) * 2009-10-08 2010-10-05 Joan Barry Barrows System for reading chinese characters in seconds
JP2013520878A (en) * 2010-02-18 2013-06-06 スレイマン アルカジ, Configurable multilingual keyboard
KR20110100394A (en) * 2010-03-04 2011-09-14 삼성전자주식회사 Apparatus and method for editing imoticon in portable terminal
EP2367118A1 (en) 2010-03-15 2011-09-21 GMC Software AG Method and devices for generating two-dimensional visual objects
US8463592B2 (en) * 2010-07-27 2013-06-11 International Business Machines Corporation Mode supporting multiple language input for entering text
US8438008B2 (en) * 2010-08-03 2013-05-07 King Fahd University Of Petroleum And Minerals Method of generating a transliteration font
US9058105B2 (en) * 2010-10-31 2015-06-16 International Business Machines Corporation Automated adjustment of input configuration
US8352245B1 (en) 2010-12-30 2013-01-08 Google Inc. Adjusting language models
US8296142B2 (en) 2011-01-21 2012-10-23 Google Inc. Speech recognition using dock context
US8738356B2 (en) 2011-05-18 2014-05-27 Microsoft Corp. Universal text input
JP5901333B2 (en) * 2012-02-13 2016-04-06 三菱電機株式会社 Character input device, character input method, and character input program
US8965693B2 (en) * 2012-06-05 2015-02-24 Apple Inc. Geocoded data detection and user interfaces for same
JP2015022590A (en) * 2013-07-19 2015-02-02 株式会社東芝 Character input apparatus, character input method, and character input program
US9842592B2 (en) 2014-02-12 2017-12-12 Google Inc. Language models using non-linguistic context
CN103885608A (en) 2014-03-19 2014-06-25 百度在线网络技术(北京)有限公司 Input method and system
US9412365B2 (en) 2014-03-24 2016-08-09 Google Inc. Enhanced maximum entropy models
US20160147741A1 (en) * 2014-11-26 2016-05-26 Adobe Systems Incorporated Techniques for providing a user interface incorporating sign language
US10134394B2 (en) 2015-03-20 2018-11-20 Google Llc Speech recognition using log-linear model
CN108475503B (en) * 2015-10-15 2023-09-22 交互智能集团有限公司 System and method for multilingual communication sequencing
US9978367B2 (en) 2016-03-16 2018-05-22 Google Llc Determining dialog states for language models
EP3488440A4 (en) * 2016-07-21 2020-01-22 Oslabs PTE. Ltd. A system and method for multilingual conversion of text data to speech data
US10832664B2 (en) 2016-08-19 2020-11-10 Google Llc Automated speech recognition using language models that selectively use domain-specific model components
KR101791930B1 (en) 2016-09-23 2017-10-31 (주)신성이노테크 Character Input Apparatus
KR101717488B1 (en) * 2016-09-23 2017-03-17 (주)신성이노테크 Method and Apparatus for Inputting Characters
US20180089172A1 (en) * 2016-09-27 2018-03-29 Intel Corporation Communication system supporting blended-language messages
CN106601254B (en) 2016-12-08 2020-11-06 阿里巴巴(中国)有限公司 Information input method and device and computing equipment
US10311860B2 (en) 2017-02-14 2019-06-04 Google Llc Language model biasing system
CN108803890B (en) * 2017-04-28 2024-02-06 北京搜狗科技发展有限公司 Input method, input device and input device
JP2019066917A (en) * 2017-09-28 2019-04-25 京セラドキュメントソリューションズ株式会社 Electronic device and translation support method
US10635305B2 (en) * 2018-02-01 2020-04-28 Microchip Technology Incorporated Touchscreen user interface with multi-language support
CN109917982B (en) * 2019-03-21 2021-04-02 科大讯飞股份有限公司 Voice input method, device, equipment and readable storage medium
CN112651854A (en) * 2020-12-23 2021-04-13 讯飞智元信息科技有限公司 Voice scheduling method and device, electronic equipment and storage medium

Citations (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4791587A (en) * 1984-12-25 1988-12-13 Kabushiki Kaisha Toshiba System for translation of sentences from one language to another
US4800522A (en) * 1985-05-14 1989-01-24 Sharp Kabushiki Kaisha Bilingual translation system capable of memorizing learned words
US4833610A (en) * 1986-12-16 1989-05-23 International Business Machines Corporation Morphological/phonetic method for ranking word similarities
US4864503A (en) * 1987-02-05 1989-09-05 Toltran, Ltd. Method of using a created international language as an intermediate pathway in translation between two national languages
US5175803A (en) * 1985-06-14 1992-12-29 Yeh Victor C Method and apparatus for data processing and word processing in Chinese using a phonetic Chinese language
US5214583A (en) * 1988-11-22 1993-05-25 Kabushiki Kaisha Toshiba Machine language translation system which produces consistent translated words
US5218536A (en) * 1988-05-25 1993-06-08 Franklin Electronic Publishers, Incorporated Electronic spelling machine having ordered candidate words
US5258909A (en) * 1989-08-31 1993-11-02 International Business Machines Corporation Method and apparatus for "wrong word" spelling error detection and correction
US5278943A (en) * 1990-03-23 1994-01-11 Bright Star Technology, Inc. Speech animation and inflection system
US5319552A (en) * 1991-10-14 1994-06-07 Omron Corporation Apparatus and method for selectively converting a phonetic transcription of Chinese into a Chinese character from a plurality of notations
US5369576A (en) * 1991-07-23 1994-11-29 Oce-Nederland, B.V. Method of inflecting words and a data processing unit for performing such method
US5384701A (en) * 1986-10-03 1995-01-24 British Telecommunications Public Limited Company Language translation system
US5459739A (en) * 1992-03-18 1995-10-17 Oclc Online Computer Library Center, Incorporated Merging three optical character recognition outputs for improved precision using a minimum edit distance function
US5510998A (en) * 1994-06-13 1996-04-23 Cadence Design Systems, Inc. System and method for generating component models
US5535119A (en) * 1992-06-11 1996-07-09 Hitachi, Ltd. Character inputting method allowing input of a plurality of different types of character species, and information processing equipment adopting the same
US5568383A (en) * 1992-11-30 1996-10-22 International Business Machines Corporation Natural language translation system and document transmission network with translation loss information and restrictions
US5572423A (en) * 1990-06-14 1996-11-05 Lucent Technologies Inc. Method for correcting spelling using error frequencies
US5594642A (en) * 1993-12-22 1997-01-14 Object Technology Licensing Corp. Input methods framework
US5646840A (en) * 1992-11-09 1997-07-08 Ricoh Company, Ltd. Language conversion system and text creating system using such
US5671426A (en) * 1993-06-22 1997-09-23 Kurzweil Applied Intelligence, Inc. Method for organizing incremental search dictionary
US5704007A (en) * 1994-03-11 1997-12-30 Apple Computer, Inc. Utilization of multiple voice sources in a speech synthesizer
US5715469A (en) * 1993-07-12 1998-02-03 International Business Machines Corporation Method and apparatus for detecting error strings in a text
US5732276A (en) * 1994-08-04 1998-03-24 Nec Corporation Machine translation device
US5774588A (en) * 1995-06-07 1998-06-30 United Parcel Service Of America, Inc. Method and system for comparing strings with entries of a lexicon
US5806021A (en) * 1995-10-30 1998-09-08 International Business Machines Corporation Automatic segmentation of continuous text using statistical approaches
US5835924A (en) * 1995-01-30 1998-11-10 Mitsubishi Denki Kabushiki Kaisha Language processing apparatus and method
US5893133A (en) * 1995-08-16 1999-04-06 International Business Machines Corporation Keyboard for a system and method for processing Chinese language text
US5907705A (en) * 1996-10-31 1999-05-25 Sun Microsystems, Inc. Computer implemented request to integrate (RTI) system for managing change control in software release stream
US5930755A (en) * 1994-03-11 1999-07-27 Apple Computer, Inc. Utilization of a recorded sound sample as a voice source in a speech synthesizer
US5933525A (en) * 1996-04-10 1999-08-03 Bbn Corporation Language-independent and segmentation-free optical character recognition system and method
US5974371A (en) * 1996-03-21 1999-10-26 Sharp Kabushiki Kaisha Data processor for selectively translating only newly received text data
US5974413A (en) * 1997-07-03 1999-10-26 Activeword Systems, Inc. Semantic user interface
US5987403A (en) * 1996-05-29 1999-11-16 Sugimura; Ryoichi Document conversion apparatus for carrying out a natural conversion
US6002390A (en) * 1996-11-25 1999-12-14 Sony Corporation Text input device and method
US6003049A (en) * 1997-02-10 1999-12-14 Chiang; James Data handling and transmission systems employing binary bit-patterns based on a sequence of standard decomposed strokes of ideographic characters
US6047300A (en) * 1997-05-15 2000-04-04 Microsoft Corporation System and method for automatically correcting a misspelled word
US6054941A (en) * 1997-05-27 2000-04-25 Motorola, Inc. Apparatus and method for inputting ideographic characters
US6131102A (en) * 1998-06-15 2000-10-10 Microsoft Corporation Method and system for cost computation of spelling suggestions and automatic replacement
US6148285A (en) * 1998-10-30 2000-11-14 Nortel Networks Corporation Allophonic text-to-speech generator
US6154758A (en) * 1994-05-13 2000-11-28 Apple Computer, Inc. Text conversion method for computer systems
US6173252B1 (en) * 1997-03-13 2001-01-09 International Business Machines Corp. Apparatus and methods for Chinese error check by means of dynamic programming and weighted classes
US6204848B1 (en) * 1999-04-14 2001-03-20 Motorola, Inc. Data entry apparatus having a limited number of character keys and method
US6219646B1 (en) * 1996-10-18 2001-04-17 Gedanken Corp. Methods and apparatus for translating between languages
US6246976B1 (en) * 1997-03-14 2001-06-12 Omron Corporation Apparatus, method and storage medium for identifying a combination of a language and its character code system
US6256630B1 (en) * 1994-10-03 2001-07-03 Phonetic Systems Ltd. Word-containing database accessing system for responding to ambiguous queries, including a dictionary of database words, a dictionary searcher and a database searcher
US20020021838A1 (en) * 1999-04-19 2002-02-21 Liaison Technology, Inc. Adaptively weighted, partitioned context edit distance string matching
US6356866B1 (en) * 1998-10-07 2002-03-12 Microsoft Corporation Method for converting a phonetic character string into the text of an Asian language
US6374210B1 (en) * 1998-11-30 2002-04-16 U.S. Philips Corporation Automatic segmentation of a text
US6393388B1 (en) * 1996-05-02 2002-05-21 Sony Corporation Example-based translation method and system employing multi-stage syntax dividing
US6401065B1 (en) * 1999-06-17 2002-06-04 International Business Machines Corporation Intelligent keyboard interface with use of human language processing
US6487533B2 (en) * 1997-07-03 2002-11-26 Avaya Technology Corporation Unified messaging system with automatic language identification for text-to-speech conversion
US6490563B2 (en) * 1998-08-17 2002-12-03 Microsoft Corporation Proofreading with text to speech feedback
US20030061031A1 (en) * 2001-09-25 2003-03-27 Yasuo Kida Japanese virtual dictionary
US20030088398A1 (en) * 2001-11-08 2003-05-08 Jin Guo User interface of a keypad entry system for korean text input
US6562078B1 (en) * 1999-06-29 2003-05-13 Microsoft Corporation Arrangement and method for inputting non-alphabetic language
US6573844B1 (en) * 2000-01-18 2003-06-03 Microsoft Corporation Predictive keyboard
US20030119551A1 (en) * 2001-12-20 2003-06-26 Petri Laukkanen Method and apparatus for providing Hindi input to a device using a numeric keypad
US6646572B1 (en) * 2000-02-18 2003-11-11 Mitsubishi Electric Research Laboratories, Inc. Method for designing optimal single pointer predictive keyboards and apparatus therefore
US6686907B2 (en) * 2000-12-21 2004-02-03 International Business Machines Corporation Method and apparatus for inputting Chinese characters
US6848080B1 (en) * 1999-11-05 2005-01-25 Microsoft Corporation Language input architecture for converting one text form to another text form with tolerance to spelling, typographical, and conversion errors
US20050057512A1 (en) * 2003-07-17 2005-03-17 Min-Wen Du Browsing based Chinese input method
US7165019B1 (en) * 1999-11-05 2007-01-16 Microsoft Corporation Language input architecture for converting one text form to another text form with modeless entry
US7191393B1 (en) * 1998-09-25 2007-03-13 International Business Machines Corporation Interface for providing different-language versions of markup-language resources
US7277732B2 (en) * 2000-10-13 2007-10-02 Microsoft Corporation Language input system for mobile devices

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4383307A (en) 1981-05-04 1983-05-10 Software Concepts, Inc. Spelling error detector apparatus and methods
GB2158776A (en) 1984-02-24 1985-11-20 Chang Chi Chen Method of computerised input of Chinese words in keyboards
JPS619758A (en) * 1984-06-25 1986-01-17 Ricoh Co Ltd Kana-to-kanji conversion processor
JPS63155262A (en) * 1986-12-18 1988-06-28 Ricoh Co Ltd Japanese processing system
US5095432A (en) 1989-07-10 1992-03-10 Harris Corporation Data processing system implemented process and compiling technique for performing context-free parsing algorithm based on register vector grammar
US5270927A (en) 1990-09-10 1993-12-14 At&T Bell Laboratories Method for conversion of phonetic Chinese to character Chinese
US5267345A (en) 1992-02-10 1993-11-30 International Business Machines Corporation Speech recognition apparatus which predicts word classes from context and words from word classes
JPH0689302A (en) 1992-09-08 1994-03-29 Hitachi Ltd Dictionary memory
US5521816A (en) 1994-06-01 1996-05-28 Mitsubishi Electric Research Laboratories, Inc. Word inflection correction system
JPH0877173A (en) 1994-09-01 1996-03-22 Fujitsu Ltd System and method for correcting character string
CA2170669A1 (en) 1995-03-24 1996-09-25 Fernando Carlos Neves Pereira Grapheme-to phoneme conversion with weighted finite-state transducers
US5875443A (en) 1996-01-30 1999-02-23 Sun Microsystems, Inc. Internet-based spelling checker dictionary system with automatic updating
US5956739A (en) 1996-06-25 1999-09-21 Mitsubishi Electric Information Technology Center America, Inc. System for text correction adaptive to the text being corrected
JP2806452B2 (en) * 1996-12-19 1998-09-30 オムロン株式会社 Kana-kanji conversion device and method, and recording medium
US7047493B1 (en) 2000-03-31 2006-05-16 Brill Eric D Spell checker with arbitrary length string-to-string transformations to improve noisy channel spelling correction
US7076731B2 (en) 2001-06-02 2006-07-11 Microsoft Corporation Spelling correction system and method for phrasal strings using dictionary looping

US5646840A (en) * 1992-11-09 1997-07-08 Ricoh Company, Ltd. Language conversion system and text creating system using such
US5568383A (en) * 1992-11-30 1996-10-22 International Business Machines Corporation Natural language translation system and document transmission network with translation loss information and restrictions
US5671426A (en) * 1993-06-22 1997-09-23 Kurzweil Applied Intelligence, Inc. Method for organizing incremental search dictionary
US5715469A (en) * 1993-07-12 1998-02-03 International Business Machines Corporation Method and apparatus for detecting error strings in a text
US5594642A (en) * 1993-12-22 1997-01-14 Object Technology Licensing Corp. Input methods framework
US5930755A (en) * 1994-03-11 1999-07-27 Apple Computer, Inc. Utilization of a recorded sound sample as a voice source in a speech synthesizer
US5704007A (en) * 1994-03-11 1997-12-30 Apple Computer, Inc. Utilization of multiple voice sources in a speech synthesizer
US6154758A (en) * 1994-05-13 2000-11-28 Apple Computer, Inc. Text conversion method for computer systems
US5510998A (en) * 1994-06-13 1996-04-23 Cadence Design Systems, Inc. System and method for generating component models
US5732276A (en) * 1994-08-04 1998-03-24 Nec Corporation Machine translation device
US6256630B1 (en) * 1994-10-03 2001-07-03 Phonetic Systems Ltd. Word-containing database accessing system for responding to ambiguous queries, including a dictionary of database words, a dictionary searcher and a database searcher
US5835924A (en) * 1995-01-30 1998-11-10 Mitsubishi Denki Kabushiki Kaisha Language processing apparatus and method
US5774588A (en) * 1995-06-07 1998-06-30 United Parcel Service Of America, Inc. Method and system for comparing strings with entries of a lexicon
US5893133A (en) * 1995-08-16 1999-04-06 International Business Machines Corporation Keyboard for a system and method for processing Chinese language text
US6073146A (en) * 1995-08-16 2000-06-06 International Business Machines Corporation System and method for processing Chinese language text
US5806021A (en) * 1995-10-30 1998-09-08 International Business Machines Corporation Automatic segmentation of continuous text using statistical approaches
US5974371A (en) * 1996-03-21 1999-10-26 Sharp Kabushiki Kaisha Data processor for selectively translating only newly received text data
US5933525A (en) * 1996-04-10 1999-08-03 Bbn Corporation Language-independent and segmentation-free optical character recognition system and method
US6393388B1 (en) * 1996-05-02 2002-05-21 Sony Corporation Example-based translation method and system employing multi-stage syntax dividing
US5987403A (en) * 1996-05-29 1999-11-16 Sugimura; Ryoichi Document conversion apparatus for carrying out a natural conversion
US6219646B1 (en) * 1996-10-18 2001-04-17 Gedanken Corp. Methods and apparatus for translating between languages
US5907705A (en) * 1996-10-31 1999-05-25 Sun Microsystems, Inc. Computer implemented request to integrate (RTI) system for managing change control in software release stream
US6002390A (en) * 1996-11-25 1999-12-14 Sony Corporation Text input device and method
US6003049A (en) * 1997-02-10 1999-12-14 Chiang; James Data handling and transmission systems employing binary bit-patterns based on a sequence of standard decomposed strokes of ideographic characters
US6173252B1 (en) * 1997-03-13 2001-01-09 International Business Machines Corp. Apparatus and methods for Chinese error check by means of dynamic programming and weighted classes
US6246976B1 (en) * 1997-03-14 2001-06-12 Omron Corporation Apparatus, method and storage medium for identifying a combination of a language and its character code system
US6047300A (en) * 1997-05-15 2000-04-04 Microsoft Corporation System and method for automatically correcting a misspelled word
US6054941A (en) * 1997-05-27 2000-04-25 Motorola, Inc. Apparatus and method for inputting ideographic characters
US5974413A (en) * 1997-07-03 1999-10-26 Activeword Systems, Inc. Semantic user interface
US6487533B2 (en) * 1997-07-03 2002-11-26 Avaya Technology Corporation Unified messaging system with automatic language identification for text-to-speech conversion
US6131102A (en) * 1998-06-15 2000-10-10 Microsoft Corporation Method and system for cost computation of spelling suggestions and automatic replacement
US6490563B2 (en) * 1998-08-17 2002-12-03 Microsoft Corporation Proofreading with text to speech feedback
US7191393B1 (en) * 1998-09-25 2007-03-13 International Business Machines Corporation Interface for providing different-language versions of markup-language resources
US6356866B1 (en) * 1998-10-07 2002-03-12 Microsoft Corporation Method for converting a phonetic character string into the text of an Asian language
US6148285A (en) * 1998-10-30 2000-11-14 Nortel Networks Corporation Allophonic text-to-speech generator
US6374210B1 (en) * 1998-11-30 2002-04-16 U.S. Philips Corporation Automatic segmentation of a text
US6204848B1 (en) * 1999-04-14 2001-03-20 Motorola, Inc. Data entry apparatus having a limited number of character keys and method
US20020021838A1 (en) * 1999-04-19 2002-02-21 Liaison Technology, Inc. Adaptively weighted, partitioned context edit distance string matching
US6401065B1 (en) * 1999-06-17 2002-06-04 International Business Machines Corporation Intelligent keyboard interface with use of human language processing
US6562078B1 (en) * 1999-06-29 2003-05-13 Microsoft Corporation Arrangement and method for inputting non-alphabetic language
US6848080B1 (en) * 1999-11-05 2005-01-25 Microsoft Corporation Language input architecture for converting one text form to another text form with tolerance to spelling, typographical, and conversion errors
US7165019B1 (en) * 1999-11-05 2007-01-16 Microsoft Corporation Language input architecture for converting one text form to another text form with modeless entry
US6573844B1 (en) * 2000-01-18 2003-06-03 Microsoft Corporation Predictive keyboard
US6646572B1 (en) * 2000-02-18 2003-11-11 Mitsubishi Electric Research Laboratories, Inc. Method for designing optimal single pointer predictive keyboards and apparatus therefor
US7277732B2 (en) * 2000-10-13 2007-10-02 Microsoft Corporation Language input system for mobile devices
US6686907B2 (en) * 2000-12-21 2004-02-03 International Business Machines Corporation Method and apparatus for inputting Chinese characters
US20030061031A1 (en) * 2001-09-25 2003-03-27 Yasuo Kida Japanese virtual dictionary
US20030088398A1 (en) * 2001-11-08 2003-05-08 Jin Guo User interface of a keypad entry system for Korean text input
US20030119551A1 (en) * 2001-12-20 2003-06-26 Petri Laukkanen Method and apparatus for providing Hindi input to a device using a numeric keypad
US20050057512A1 (en) * 2003-07-17 2005-03-17 Min-Wen Du Browsing based Chinese input method

Cited By (216)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9813772B2 (en) * 1998-11-30 2017-11-07 Rovi Guides, Inc. Interactive television program guide with selectable languages
US20170111704A1 (en) * 1998-11-30 2017-04-20 Rovi Guides, Inc. Interactive television program guide with selectable languages
US20100277416A1 (en) * 1999-05-27 2010-11-04 Tegic Communications, Inc. Directional input system with automatic correction
US7880730B2 (en) 1999-05-27 2011-02-01 Tegic Communications, Inc. Keyboard system with automatic correction
US8294667B2 (en) 1999-05-27 2012-10-23 Tegic Communications, Inc. Directional input system with automatic correction
US8441454B2 (en) 1999-05-27 2013-05-14 Tegic Communications, Inc. Virtual keyboard system with automatic correction
US8576167B2 (en) 1999-05-27 2013-11-05 Tegic Communications, Inc. Directional input system with automatic correction
US20090284471A1 (en) * 1999-05-27 2009-11-19 Tegic Communications, Inc. Virtual Keyboard System with Automatic Correction
US8466896B2 (en) 1999-05-27 2013-06-18 Tegic Communications, Inc. System and apparatus for selectable input with a touch screen
US9400782B2 (en) 1999-05-27 2016-07-26 Nuance Communications, Inc. Virtual keyboard system with automatic correction
US9557916B2 (en) 1999-05-27 2017-01-31 Nuance Communications, Inc. Keyboard system with automatic correction
US20060020882A1 (en) * 1999-12-07 2006-01-26 Microsoft Corporation Method and apparatus for capturing and rendering text annotations for non-modifiable electronic content
US20030206189A1 (en) * 1999-12-07 2003-11-06 Microsoft Corporation System, method and user interface for active reading of electronic content
US7496856B2 (en) * 1999-12-07 2009-02-24 Microsoft Corporation Method and apparatus for capturing and rendering text annotations for non-modifiable electronic content
US9424240B2 (en) 1999-12-07 2016-08-23 Microsoft Technology Licensing, Llc Annotations for electronic content
US8555198B2 (en) 1999-12-07 2013-10-08 Microsoft Corporation Annotations for electronic content
US7028267B1 (en) * 1999-12-07 2006-04-11 Microsoft Corporation Method and apparatus for capturing and rendering text annotations for non-modifiable electronic content
US8627197B2 (en) 1999-12-07 2014-01-07 Microsoft Corporation System and method for annotating an electronic document independently of its content
US20090271381A1 (en) * 1999-12-07 2009-10-29 Beezer John L Annotations for Electronic Content
US20050034056A1 (en) * 2000-04-21 2005-02-10 Microsoft Corporation Method and apparatus for displaying multiple contexts in electronic documents
US20080126073A1 (en) * 2000-05-26 2008-05-29 Longe Michael R Directional Input System with Automatic Correction
US7778818B2 (en) 2000-05-26 2010-08-17 Tegic Communications, Inc. Directional input system with automatic correction
US8976115B2 (en) 2000-05-26 2015-03-10 Nuance Communications, Inc. Directional input system with automatic correction
US20080015841A1 (en) * 2000-05-26 2008-01-17 Longe Michael R Directional Input System with Automatic Correction
US7730391B2 (en) 2000-06-29 2010-06-01 Microsoft Corporation Ink thickness rendering for electronic annotations
US7714868B2 (en) 2001-01-16 2010-05-11 Lg Electronics Inc. Apparatus and methods of selecting special characters in a mobile communication terminal
US20090132948A1 (en) * 2001-01-16 2009-05-21 Lg Electronics Inc. Apparatus and methods of selecting special characters in a mobile communication terminal
US7453462B2 (en) * 2001-01-16 2008-11-18 Lg Electronics Inc. Apparatus and methods of selecting special characters in a mobile communication terminal
US7423647B2 (en) * 2001-01-16 2008-09-09 Lg Electronics Inc. Apparatus and methods of selecting special characters in a mobile communication terminal
US20020099552A1 (en) * 2001-01-25 2002-07-25 Darryl Rubin Annotating electronic information with audio clips
US7224989B2 (en) * 2001-05-04 2007-05-29 Nokia Corporation Communication terminal having a predictive text editor application
US20030073451A1 (en) * 2001-05-04 2003-04-17 Christian Kraft Communication terminal having a predictive text editor application
US7580829B2 (en) * 2002-07-18 2009-08-25 Tegic Communications, Inc. Apparatus and method for reordering of multiple language databases for text disambiguation
US20040083198A1 (en) * 2002-07-18 2004-04-29 Bradford Ethan R. Dynamic database reordering system
US20050198023A1 (en) * 2002-07-18 2005-09-08 Christina James Apparatus and method for reordering of multiple language databases for text disambiguation
US8237682B2 (en) 2003-04-09 2012-08-07 Tegic Communications, Inc. System and process for selectable input with a touch screen
US7750891B2 (en) 2003-04-09 2010-07-06 Tegic Communications, Inc. Selective input system based on tracking of motion parameters of an input device
US7821503B2 (en) 2003-04-09 2010-10-26 Tegic Communications, Inc. Touch screen and graphical user interface
US20050052406A1 (en) * 2003-04-09 2005-03-10 James Stephanick Selective input system based on tracking of motion parameters of an input device
US20090213134A1 (en) * 2003-04-09 2009-08-27 James Stephanick Touch screen and graphical user interface
US8237681B2 (en) 2003-04-09 2012-08-07 Tegic Communications, Inc. Selective input system and process based on tracking of motion parameters of an input object
US8456441B2 (en) 2003-04-09 2013-06-04 Tegic Communications, Inc. Selective input system and process based on tracking of motion parameters of an input object
US20060274051A1 (en) * 2003-12-22 2006-12-07 Tegic Communications, Inc. Virtual Keyboard Systems with Automatic Correction
US8570292B2 (en) 2003-12-22 2013-10-29 Tegic Communications, Inc. Virtual keyboard system with automatic correction
US8200475B2 (en) 2004-02-13 2012-06-12 Microsoft Corporation Phonetic-based text input method
US20070294618A1 (en) * 2004-04-27 2007-12-20 Yoshio Yamamoto Character Input System Including Application Device and Input Server
US7992095B2 (en) * 2004-04-27 2011-08-02 Panasonic Corporation Character input system including application device and input server
US9972317B2 (en) 2004-11-16 2018-05-15 Microsoft Technology Licensing, Llc Centralized method and system for clarifying voice commands
US10748530B2 (en) 2004-11-16 2020-08-18 Microsoft Technology Licensing, Llc Centralized method and system for determining voice commands
US20060242586A1 (en) * 2005-04-20 2006-10-26 Microsoft Corporation Searchable task-based interface to control panel functionality
US7703037B2 (en) 2005-04-20 2010-04-20 Microsoft Corporation Searchable task-based interface to control panel functionality
US7242404B2 (en) * 2005-09-15 2007-07-10 Microsoft Corporation Enlargement of font characters
US20070057949A1 (en) * 2005-09-15 2007-03-15 Microsoft Corporation Enlargement of font characters
US7453463B2 (en) 2005-09-15 2008-11-18 Microsoft Corporation Enlargement of font characters
US20080012881A1 (en) * 2005-09-15 2008-01-17 Microsoft Corporation Enlargement of font characters
US20070106664A1 (en) * 2005-11-04 2007-05-10 Minfo, Inc. Input/query methods and apparatuses
US10332071B2 (en) 2005-12-08 2019-06-25 International Business Machines Corporation Solution for adding context to a text exchange modality during interactions with a composite services application
US7809838B2 (en) 2005-12-08 2010-10-05 International Business Machines Corporation Managing concurrent data updates in a composite services delivery system
US7827288B2 (en) 2005-12-08 2010-11-02 International Business Machines Corporation Model autocompletion for composite services synchronization
US7890635B2 (en) 2005-12-08 2011-02-15 International Business Machines Corporation Selective view synchronization for composite services delivery
US7921158B2 (en) 2005-12-08 2011-04-05 International Business Machines Corporation Using a list management server for conferencing in an IMS environment
US11093898B2 (en) 2005-12-08 2021-08-17 International Business Machines Corporation Solution for adding context to a text exchange modality during interactions with a composite services application
US20070136420A1 (en) * 2005-12-08 2007-06-14 International Business Machines Corporation Visual channel refresh rate control for composite services delivery
US20070185957A1 (en) * 2005-12-08 2007-08-09 International Business Machines Corporation Using a list management server for conferencing in an ims environment
US7818432B2 (en) 2005-12-08 2010-10-19 International Business Machines Corporation Seamless reflection of model updates in a visual page for a visual channel in a composite services delivery system
US8189563B2 (en) 2005-12-08 2012-05-29 International Business Machines Corporation View coordination for callers in a composite services enablement environment
US7877486B2 (en) 2005-12-08 2011-01-25 International Business Machines Corporation Auto-establishment of a voice channel of access to a session for a composite service from a visual channel of access to the session for the composite service
US20070132834A1 (en) * 2005-12-08 2007-06-14 International Business Machines Corporation Speech disambiguation in a composite services enablement environment
US8005934B2 (en) 2005-12-08 2011-08-23 International Business Machines Corporation Channel presence in a composite services enablement environment
US20070136448A1 (en) * 2005-12-08 2007-06-14 International Business Machines Corporation Channel presence in a composite services enablement environment
US7792971B2 (en) 2005-12-08 2010-09-07 International Business Machines Corporation Visual channel refresh rate control for composite services delivery
US20070133769A1 (en) * 2005-12-08 2007-06-14 International Business Machines Corporation Voice navigation of a visual view for a session in a composite services enablement environment
US20070136449A1 (en) * 2005-12-08 2007-06-14 International Business Machines Corporation Update notification for peer views in a composite services delivery environment
US9632650B2 (en) 2006-03-10 2017-04-25 Microsoft Technology Licensing, Llc Command searching enhancements
US7925975B2 (en) 2006-03-10 2011-04-12 Microsoft Corporation Searching for commands to execute in applications
US8370743B2 (en) 2006-03-10 2013-02-05 Microsoft Corporation Searching command enhancements
US20070214122A1 (en) * 2006-03-10 2007-09-13 Microsoft Corporation Searching command enhancements
US8190421B2 (en) * 2006-03-31 2012-05-29 Research In Motion Limited Handheld electronic device including toggle of a selected data source, and associated method
US8589145B2 (en) 2006-03-31 2013-11-19 Blackberry Limited Handheld electronic device including toggle of a selected data source, and associated method
US20110093255A1 (en) * 2006-03-31 2011-04-21 Research In Motion Limited Handheld electronic device including toggle of a selected data source, and associated method
US20070244691A1 (en) * 2006-04-17 2007-10-18 Microsoft Corporation Translation of user interface text strings
US9442921B2 (en) * 2006-05-09 2016-09-13 Blackberry Limited Handheld electronic device including automatic selection of input language, and associated method
US20140006009A1 (en) * 2006-05-09 2014-01-02 Blackberry Limited Handheld electronic device including automatic selection of input language, and associated method
US7801722B2 (en) 2006-05-23 2010-09-21 Microsoft Corporation Techniques for customization of phonetic schemes
US20070277118A1 (en) * 2006-05-23 2007-11-29 Microsoft Corporation Microsoft Patent Group Providing suggestion lists for phonetic input
US20070276650A1 (en) * 2006-05-23 2007-11-29 Microsoft Corporation Techniques for customization of phonetic schemes
US20130144820A1 (en) * 2006-06-30 2013-06-06 Research In Motion Limited Method of learning a context of a segment of text, and associated handheld electronic device
US9171234B2 (en) * 2006-06-30 2015-10-27 Blackberry Limited Method of learning a context of a segment of text, and associated handheld electronic device
US9286288B2 (en) 2006-06-30 2016-03-15 Blackberry Limited Method of learning character segments during text input, and associated handheld electronic device
US8271265B2 (en) * 2006-08-25 2012-09-18 Nhn Corporation Method for searching for Chinese character using tone mark and system for executing the method
US20080052064A1 (en) * 2006-08-25 Nhn Corporation Method for searching for Chinese character using tone mark and system for executing the method
US8672682B2 (en) 2006-09-28 2014-03-18 Howard A. Engelsen Conversion of alphabetic words into a plurality of independent spellings
US20080082335A1 (en) * 2006-09-28 2008-04-03 Howard Engelsen Conversion of alphabetic words into a plurality of independent spellings
US8594305B2 (en) 2006-12-22 2013-11-26 International Business Machines Corporation Enhancing contact centers with dialog contracts
US20110193797A1 (en) * 2007-02-01 2011-08-11 Erland Unruh Spell-check for a keyboard system with automatic correction
US8201087B2 (en) 2007-02-01 2012-06-12 Tegic Communications, Inc. Spell-check for a keyboard system with automatic correction
US9092419B2 (en) 2007-02-01 2015-07-28 Nuance Communications, Inc. Spell-check for a keyboard system with automatic correction
US8225203B2 (en) 2007-02-01 2012-07-17 Nuance Communications, Inc. Spell-check for a keyboard system with automatic correction
US8892996B2 (en) 2007-02-01 2014-11-18 Nuance Communications, Inc. Spell-check for a keyboard system with automatic correction
US9055150B2 (en) 2007-02-28 2015-06-09 International Business Machines Corporation Skills based routing in a standards based contact center using a presence server and expertise specific watchers
US9247056B2 (en) 2007-02-28 2016-01-26 International Business Machines Corporation Identifying contact center agents based upon biometric characteristics of an agent's speech
US8259923B2 (en) 2007-02-28 2012-09-04 International Business Machines Corporation Implementing a contact center using open standards and non-proprietary components
US8543375B2 (en) * 2007-04-10 2013-09-24 Google Inc. Multi-mode input method editor
US20100217581A1 (en) * 2007-04-10 2010-08-26 Google Inc. Multi-Mode Input Method Editor
US8831929B2 (en) 2007-04-10 2014-09-09 Google Inc. Multi-mode input method editor
US20090006075A1 (en) * 2007-06-29 2009-01-01 Microsoft Corporation Phonetic search using normalized string
US8583415B2 (en) * 2007-06-29 2013-11-12 Microsoft Corporation Phonetic search using normalized string
US20090112574A1 (en) * 2007-10-30 2009-04-30 Yu Zou Multi-language interfaces switch system and method therefor
US8335682B2 (en) * 2007-10-30 2012-12-18 Sercomm Corporation Multi-language interfaces switch system and method therefor
US8918383B2 (en) * 2008-07-09 2014-12-23 International Business Machines Corporation Vector space lightweight directory access protocol data search
US20100010973A1 (en) * 2008-07-09 2010-01-14 International Business Machines Corporation Vector Space Lightweight Directory Access Protocol Data Search
US20120113011A1 (en) * 2009-03-20 2012-05-10 Genqing Wu Ime text entry assistance
US20120019446A1 (en) * 2009-03-20 2012-01-26 Google Inc. Interaction with ime computing device
US10073829B2 (en) * 2009-03-30 2018-09-11 Touchtype Limited System and method for inputting text into electronic devices
US8798983B2 (en) * 2009-03-30 2014-08-05 Microsoft Corporation Adaptation for statistical language model
KR101679445B1 (en) 2009-03-30 2016-11-24 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Adaptation for statistical language model
US20120029910A1 (en) * 2009-03-30 2012-02-02 Touchtype Ltd System and Method for Inputting Text into Electronic Devices
US9424246B2 (en) 2009-03-30 2016-08-23 Touchtype Ltd. System and method for inputting text into electronic devices
US10191654B2 (en) 2009-03-30 2019-01-29 Touchtype Limited System and method for inputting text into electronic devices
US10402493B2 (en) 2009-03-30 2019-09-03 Touchtype Ltd System and method for inputting text into electronic devices
US10445424B2 (en) 2009-03-30 2019-10-15 Touchtype Limited System and method for inputting text into electronic devices
US9189472B2 (en) 2009-03-30 2015-11-17 Touchtype Limited System and method for inputting text into small screen devices
US20100250251A1 (en) * 2009-03-30 2010-09-30 Microsoft Corporation Adaptation for statistical language model
US9659002B2 (en) * 2009-03-30 2017-05-23 Touchtype Ltd System and method for inputting text into electronic devices
US20140350920A1 (en) 2009-03-30 2014-11-27 Touchtype Ltd System and method for inputting text into electronic devices
US9046932B2 (en) 2009-10-09 2015-06-02 Touchtype Ltd System and method for inputting text into electronic devices based on text and text category predictions
US10157040B2 (en) 2009-12-23 2018-12-18 Google Llc Multi-modal input on an electronic device
US20110161080A1 (en) * 2009-12-23 2011-06-30 Google Inc. Speech to Text Conversion
US20110153324A1 (en) * 2009-12-23 2011-06-23 Google Inc. Language Model Selection for Speech-to-Text Conversion
US10713010B2 (en) 2009-12-23 2020-07-14 Google Llc Multi-modal input on an electronic device
US20110153325A1 (en) * 2009-12-23 2011-06-23 Google Inc. Multi-Modal Input on an Electronic Device
US9251791B2 (en) 2009-12-23 2016-02-02 Google Inc. Multi-modal input on an electronic device
US9495127B2 (en) 2009-12-23 2016-11-15 Google Inc. Language model selection for speech-to-text conversion
US11914925B2 (en) 2009-12-23 2024-02-27 Google Llc Multi-modal input on an electronic device
US20120262488A1 (en) * 2009-12-23 2012-10-18 Nokia Corporation Method and Apparatus for Facilitating Text Editing and Related Computer Program Product and Computer Readable Medium
US9031830B2 (en) * 2009-12-23 2015-05-12 Google Inc. Multi-modal input on an electronic device
US11416214B2 (en) 2009-12-23 2022-08-16 Google Llc Multi-modal input on an electronic device
US9047870B2 (en) 2009-12-23 2015-06-02 Google Inc. Context based language model selection
US20150022455A1 (en) * 2010-06-10 2015-01-22 Michael William Murphy Novel character specification system and method that uses a limited number of selection keys
US9880638B2 (en) * 2010-06-10 2018-01-30 Michael William Murphy Character specification system and method that uses a limited number of selection keys
US20130278506A1 (en) * 2010-06-10 2013-10-24 Michael William Murphy Novel character specification system and method that uses a limited number of selection keys
US8878789B2 (en) * 2010-06-10 2014-11-04 Michael William Murphy Character specification system and method that uses a limited number of selection keys
US11223619B2 (en) 2010-11-29 2022-01-11 Biocatch Ltd. Device, system, and method of user authentication based on user-specific characteristics of task performance
US10917431B2 (en) 2010-11-29 2021-02-09 Biocatch Ltd. System, method, and device of authenticating a user based on selfie image or selfie video
US10621585B2 (en) 2010-11-29 2020-04-14 Biocatch Ltd. Contextual mapping of web-pages, and generation of fraud-relatedness score-values
US10586036B2 (en) 2010-11-29 2020-03-10 Biocatch Ltd. System, device, and method of recovery and resetting of user authentication factor
US11838118B2 (en) * 2010-11-29 2023-12-05 Biocatch Ltd. Device, system, and method of detecting vishing attacks
US10747305B2 (en) 2010-11-29 2020-08-18 Biocatch Ltd. Method, system, and device of authenticating identity of a user of an electronic device
US11580553B2 (en) 2010-11-29 2023-02-14 Biocatch Ltd. Method, device, and system of detecting mule accounts and accounts used for money laundering
US11425563B2 (en) 2010-11-29 2022-08-23 Biocatch Ltd. Method, device, and system of differentiating between a cyber-attacker and a legitimate user
US11330012B2 (en) * 2010-11-29 2022-05-10 Biocatch Ltd. System, method, and device of authenticating a user based on selfie image or selfie video
US10834590B2 (en) 2010-11-29 2020-11-10 Biocatch Ltd. Method, device, and system of differentiating between a cyber-attacker and a legitimate user
US11314849B2 (en) 2010-11-29 2022-04-26 Biocatch Ltd. Method, device, and system of detecting a lie of a user who inputs data
US11269977B2 (en) 2010-11-29 2022-03-08 Biocatch Ltd. System, apparatus, and method of collecting and processing data in electronic devices
US11250435B2 (en) 2010-11-29 2022-02-15 Biocatch Ltd. Contextual mapping of web-pages, and generation of fraud-relatedness score-values
US10776476B2 (en) 2010-11-29 2020-09-15 Biocatch Ltd. System, device, and method of visual login
US10069852B2 (en) 2010-11-29 2018-09-04 Biocatch Ltd. Detection of computerized bots and automated cyber-attack modules
US10897482B2 (en) 2010-11-29 2021-01-19 Biocatch Ltd. Method, device, and system of back-coloring, forward-coloring, and fraud detection
US10474815B2 (en) 2010-11-29 2019-11-12 Biocatch Ltd. System, device, and method of detecting malicious automatic script and code injection
US11210674B2 (en) 2010-11-29 2021-12-28 Biocatch Ltd. Method, device, and system of detecting mule accounts and accounts used for money laundering
US20210329030A1 (en) * 2010-11-29 2021-10-21 Biocatch Ltd. Device, System, and Method of Detecting Vishing Attacks
US10728761B2 (en) 2010-11-29 2020-07-28 Biocatch Ltd. Method, device, and system of detecting a lie of a user who inputs data
US20140317744A1 (en) * 2010-11-29 2014-10-23 Biocatch Ltd. Device, system, and method of user segmentation
US10949757B2 (en) 2010-11-29 2021-03-16 Biocatch Ltd. System, device, and method of detecting user identity based on motor-control loop model
US10949514B2 (en) 2010-11-29 2021-03-16 Biocatch Ltd. Device, system, and method of differentiating among users based on detection of hardware components
US9965297B2 (en) * 2011-03-24 2018-05-08 Microsoft Technology Licensing, Llc Assistance information controlling
US20120245921A1 (en) * 2011-03-24 2012-09-27 Microsoft Corporation Assistance Information Controlling
US8996356B1 (en) * 2012-04-10 2015-03-31 Google Inc. Techniques for predictive input method editors
US9262412B2 (en) 2012-04-10 2016-02-16 Google Inc. Techniques for predictive input method editors
EP2713255A4 (en) * 2012-06-04 2015-03-04 Huawei Device Co Ltd Method and electronic device for prompting character input
US20150154958A1 (en) * 2012-08-24 2015-06-04 Tencent Technology (Shenzhen) Company Limited Multimedia information retrieval method and electronic device
US9704485B2 (en) * 2012-08-24 2017-07-11 Tencent Technology (Shenzhen) Company Limited Multimedia information retrieval method and electronic device
US20150331590A1 (en) * 2013-01-25 2015-11-19 Hewlett-Packard Development Company, L.P. User interface application launcher and method thereof
US9372672B1 (en) * 2013-09-04 2016-06-21 Tg, Llc Translation in visual context
US20150088486A1 (en) * 2013-09-25 2015-03-26 International Business Machines Corporation Written language learning using an enhanced input method editor (ime)
US9384191B2 (en) * 2013-09-25 2016-07-05 International Business Machines Corporation Written language learning using an enhanced input method editor (IME)
US20160217782A1 (en) * 2013-10-10 2016-07-28 Kabushiki Kaisha Toshiba Transliteration work support device, transliteration work support method, and computer program product
US9928828B2 (en) * 2013-10-10 2018-03-27 Kabushiki Kaisha Toshiba Transliteration work support device, transliteration work support method, and computer program product
US10386935B2 (en) * 2014-06-17 2019-08-20 Google Llc Input method editor for inputting names of geographic locations
US20180210558A1 (en) * 2014-06-17 2018-07-26 Google Inc. Input method editor for inputting names of geographic locations
US20160125753A1 (en) * 2014-11-04 2016-05-05 Knotbird LLC System and methods for transforming language into interactive elements
US10002543B2 (en) * 2014-11-04 2018-06-19 Knotbird LLC System and methods for transforming language into interactive elements
US20160142465A1 (en) * 2014-11-19 2016-05-19 Diemsk Jean System and method for generating visual identifiers from user input associated with perceived stimuli
US9503504B2 (en) * 2014-11-19 2016-11-22 Diemsk Jean System and method for generating visual identifiers from user input associated with perceived stimuli
US20160179774A1 (en) * 2014-12-18 2016-06-23 International Business Machines Corporation Orthographic Error Correction Using Phonetic Transcription
US9582489B2 (en) * 2014-12-18 2017-02-28 International Business Machines Corporation Orthographic error correction using phonetic transcription
CN104714940A (en) * 2015-02-12 2015-06-17 深圳市前海安测信息技术有限公司 Method and device for identifying unregistered word in intelligent interaction system
US9703394B2 (en) * 2015-03-24 2017-07-11 Google Inc. Unlearning techniques for adaptive language models in text entry
US20160282956A1 (en) * 2015-03-24 2016-09-29 Google Inc. Unlearning techniques for adaptive language models in text entry
US10452264B2 (en) 2015-04-30 2019-10-22 Michael William Murphy Systems and methods for word identification that use button press type error analysis
US10216410B2 (en) 2015-04-30 2019-02-26 Michael William Murphy Method of word identification that uses interspersed time-independent selection keys
US10719765B2 (en) 2015-06-25 2020-07-21 Biocatch Ltd. Conditional behavioral biometrics
US11238349B2 (en) 2015-06-25 2022-02-01 Biocatch Ltd. Conditional behavioural biometrics
US10834090B2 (en) * 2015-07-09 2020-11-10 Biocatch Ltd. System, device, and method for detection of proxy server
US11323451B2 (en) 2015-07-09 2022-05-03 Biocatch Ltd. System, device, and method for detection of proxy server
US10523680B2 (en) * 2015-07-09 2019-12-31 Biocatch Ltd. System, device, and method for detecting a proxy server
US10372310B2 (en) 2016-06-23 2019-08-06 Microsoft Technology Licensing, Llc Suppression of input images
US11055395B2 (en) 2016-07-08 2021-07-06 Biocatch Ltd. Step-up authentication
US10579784B2 (en) 2016-11-02 2020-03-03 Biocatch Ltd. System, device, and method of secure utilization of fingerprints for user authentication
US10685355B2 (en) 2016-12-04 2020-06-16 Biocatch Ltd. Method, device, and system of detecting mule accounts and accounts used for money laundering
US20180160173A1 (en) * 2016-12-07 2018-06-07 Alticast Corporation System for providing cloud-based user interfaces and method thereof
US10567837B2 (en) * 2016-12-07 2020-02-18 Alticast Corporation System for providing cloud-based user interfaces and method thereof
US10432999B2 (en) * 2017-04-14 2019-10-01 Samsung Electronics Co., Ltd. Display device, display system and method for controlling display device
US11082737B2 (en) * 2017-04-14 2021-08-03 Samsung Electronics Co., Ltd. Display device, display system and method for controlling display device
US11054989B2 (en) 2017-05-19 2021-07-06 Michael William Murphy Interleaved character selection interface
US11494075B2 (en) 2017-05-19 2022-11-08 Michael William Murphy Interleaved character selection interface
US11853545B2 (en) 2017-05-19 2023-12-26 Michael William Murphy Interleaved character selection interface
US11264007B2 (en) 2017-07-20 2022-03-01 Panasonic Intellectual Property Management Co., Ltd. Translation device, translation method, and program
US10970394B2 (en) 2017-11-21 2021-04-06 Biocatch Ltd. System, device, and method of detecting vishing attacks
US11922007B2 (en) 2018-11-29 2024-03-05 Michael William Murphy Apparatus, method and system for inputting characters to an electronic device
US11126794B2 (en) * 2019-04-11 2021-09-21 Microsoft Technology Licensing, Llc Targeted rewrites
US11526654B2 (en) * 2019-07-26 2022-12-13 See Word Design, LLC Reading proficiency system and method
US11809831B2 (en) * 2020-01-08 2023-11-07 Kabushiki Kaisha Toshiba Symbol sequence converting apparatus and symbol sequence conversion method
US11756529B2 (en) * 2020-05-28 2023-09-12 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for speech recognition, and storage medium
US20210375264A1 (en) * 2020-05-28 2021-12-02 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for speech recognition, and storage medium
US11606353B2 (en) 2021-07-22 2023-03-14 Biocatch Ltd. System, device, and method of generating and utilizing one-time passwords

Also Published As

Publication number Publication date
CN1387639A (en) 2002-12-25
WO2001033324A2 (en) 2001-05-10
AU1361401A (en) 2001-05-14
US7403888B1 (en) 2008-07-22
JP2011060308A (en) 2011-03-24
JP5021802B2 (en) 2012-09-12
CN100593167C (en) 2010-03-03
JP2003513389A (en) 2003-04-08
WO2001033324A9 (en) 2002-08-08
JP4920154B2 (en) 2012-04-18
WO2001033324A3 (en) 2002-02-14

Similar Documents

Publication Publication Date Title
US7403888B1 (en) Language input user interface
US20210073467A1 (en) Method, System and Apparatus for Entering Text on a Computing Device
US7562296B2 (en) Correction widget
JP4463795B2 (en) Reduced keyboard disambiguation system
US7149970B1 (en) Method and system for filtering and selecting from a candidate list generated by a stochastic input method
US7385531B2 (en) Entering text into an electronic communications device
CN109844696B (en) Multi-language character input device
JP2007133884A5 (en)
MXPA04008910A (en) Entering text into an electronic communications device.
US20070288240A1 (en) User interface for text-to-phone conversion and method for correcting the same
US7616190B2 (en) Asian language input using keyboard
EP1347362B1 (en) Entering text into an electronic communications device
JP2005196250A (en) Information input support device and information input support method
JP3762300B2 (en) Text input processing apparatus and method, and program
JP2002207728A (en) Phonogram generator, and recording medium recorded with program for realizing the same
JP2010152874A (en) Electronic device and control method of electronic device
JP7109498B2 (en) voice input device
JP2000148747A (en) Conversion candidate display method, record medium for program for japanese syllabary-to-chinese character conversion by same method, and japanese syllbary-to- chinese character conversion device
JPS61175855A (en) Kana to kanji converting device
JP2004086449A (en) Chinese language phonetic orthography input device with comparison function for inputting imperfect or vague phonetic orthography
JPH0414380B2 (en)
KR20040016715A (en) Chinese phonetic transcription input system and method with comparison function for imperfect and fuzzy phonetic transcriptions
JPH08212211A (en) Editing method
AU2015221542A1 (en) Method system and apparatus for entering text on a computing device
JP2002099375A (en) Character input device and recording medium

Legal Events

Date Code Title Description

Code: STCB
Title: Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

Code: AS
Title: Assignment
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034766/0001
Effective date: 20141014