WO2006085776A1 - Aid for individuals with a reading disability - Google Patents

Aid for individuals with a reading disability

Info

Publication number
WO2006085776A1
Authority
WO
WIPO (PCT)
Prior art keywords
text
line element
line
screen
reading
Prior art date
Application number
PCT/NO2006/000058
Other languages
French (fr)
Other versions
WO2006085776A9 (en)
Inventor
Hilde Elisabeth Tallaksen
Original Assignee
Applica Attend As
Priority date
Filing date
Publication date
Application filed by Applica Attend As
Publication of WO2006085776A1
Publication of WO2006085776A9

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B17/00Teaching reading
    • G09B17/02Line indicators or other guides or masks

Definitions

  • the present invention relates to aids for individuals with a reading disability, especially dyslexics. More particularly, the invention relates to an apparatus for helping individuals with a reading disability, a computer-implemented method for execution by a processor in such an apparatus, together with a computer program for such an apparatus.
  • a large proportion of the population has a reading disability, and the largest single group amongst these consists of dyslexics. There is therefore a need for a technical aid which in practice makes it possible for those with a reading disability, such as dyslexics, to acquire textual information in an efficient manner.
  • SE-C-515 805 describes an aid for increasing a user's text reading rate.
  • a text is displayed on a screen with a cursor which automatically and almost continuously moves through the text. The speed of the cursor's movement can be adjusted by the user.
  • US-5 802 533 describes a text processing method for improving a reader's reading experience.
  • an attribute such as a degree of difficulty is derived from a text.
  • the text is then presented to the reader, the presentation rate in particular being varied according to the degree of difficulty.
  • Reading Pen II developed by Wizcom Technologies, is a text recording pen designed for individuals with a reading disability, such as dyslexics.
  • This pen comprises a scanner for input of a single line of text, a screen for displaying an enlarged, scanned-in word, and a text-to-speech device.
  • the disadvantage of this solution is that it requires the user to have control over his reading position in the text before and after the word or line concerned has been scanned in. It therefore requires the user to be capable of keeping "the thread" of the text, which can be difficult for the target group with impaired reading ability.
  • An object of the present invention is to provide a computer-implemented method for execution by a processor in an apparatus for aiding those with a reading disability, a computer program for execution of the method, together with an apparatus for aiding those with a reading disability.
  • fig. 1 is a block diagram illustrating the schematic construction of an apparatus according to the invention
  • fig. 2 is a flowchart schematically illustrating a method according to the invention
  • figs. 3, 4, 5, 6 and 7 are schematic front views of an apparatus according to the invention, illustrating various aspects of the operation of the apparatus and the method
  • fig. 8 is a schematic view illustrating the mounting of two CIS sensors on an end surface of the apparatus
  • fig. 9 is a schematic cross-sectional view of the apparatus.
  • Fig. 1 is a block diagram illustrating the schematic construction of an apparatus according to the invention.
  • the apparatus 100 is an apparatus for aiding individuals with a reading disability, such as those with dyslexia.
  • the apparatus is microprocessor-based and therefore comprises a centrally arranged processor 110 such as an Intel PXA270.
  • the processor is connected in the normal manner to a working memory (RAM) 130 for storing volatile data and a Flash memory 120 for storing executable code and fixed/non-volatile data.
  • the executable code comprises instructions which, when executed by the processor 110, cause the processor 110 to implement a method according to the invention, for example a method as described below with reference to figure 2.
  • the apparatus is battery-operated and therefore includes a chargeable battery 194, controlled by a charge controller 190 supplied by a power supply 192.
  • the apparatus 100 comprises an input unit for input of textual information. More specifically, this input unit comprises an electro-optical imaging unit 150 for input of image information, and a conversion unit for converting the image information to textual information.
  • the electro-optical imaging unit 150 comprises at least one line scanner, and preferably two line scanners.
  • Each line scanner is preferably a CIS sensor.
  • Each CIS sensor is arranged to provide image information corresponding with a text data portion extending over a plurality of text lines.
  • the preferably two CIS sensors are arranged in parallel, in a direction at right angles to the ideal direction of motion during scanning (see also fig. 8 and fig. 9).
  • An example of a suitable CIS sensor is P1404MC.
  • the CIS sensors 150 deliver analog signals. They are therefore connected to an A/D converter 152, for example of type ht82v36.
  • An analog multiplexer (not shown), for example of type MAX4542, enables only one A/D converter to be used for two CIS sensors.
  • the input image information is kept or temporarily stored by the processor 110 in a memory portion in the working memory 130, for example in so-called bitmap format.
  • the conversion unit for converting the image information to textual information preferably comprises a regular OCR process (optical character recognition process) which is executed by the processor 110 by means of instructions stored in the memory.
  • the resulting textual information is also stored in the memory.
  • OCR processes are well-known to those skilled in the art and require no specific mention here.
  • the apparatus 100 comprises a screen 140, preferably an LCD-type colour screen.
  • the screen is preferably touch-sensitive.
  • the screen may be of the LQ035Q7DB02A type, which is a transflective LCD screen with a 52 x 72 mm viewing area, 240 x 320 pixels resolution, 3 x 6 bit colour resolution and LED-type background illumination.
  • the apparatus 100 further comprises user operating elements 160, including a forward key (164, illustrated in figs. 3-7) and a backward key (162, illustrated in figs. 3-7). These elements may advantageously be implemented as physical pushbutton switches.
  • the user operating elements are advantageously mounted on the front of the apparatus, i.e. on the same side as the screen's 140 display surface.
  • the forward key 164 is advantageously placed near the right-hand, lower part of the front panel of the apparatus 100, while the backward key is advantageously placed near the left- hand, lower part.
  • the user operating elements 160 may be implemented as virtual keys on the screen 140, if this is of the touch-sensitive type.
  • the apparatus 100 advantageously comprises a text-to-speech device.
  • This will typically be composed of a text-to-speech process realised in software, i.e. in the form of instructions contained in the memory for execution by the processor 110.
  • the text-to-speech device further comprises the audio-processor 170, which is further connected to an amplifier and loudspeaker 174, in addition to an audio output 172.
  • the processor further comprises a serial communication interface, which is connected to a USB connection.
  • the communication interface may be connected to an RS232 connection.
  • the processor 110 may furthermore be in operative connection with a WLAN module or other type of wireless communication device.
  • a programmable logic circuit or PLD 156 provides the necessary scatter logic for implementing operative connection between the processor 110 and the CIS sensors 150, the A/D converter 152 and the screen 140 respectively.
  • the PLD 156 provides clock signals for the screen 140, for the A/D converter 152 and for the CIS sensors 150.
  • the pixel clock rate for the screen may typically be 6 MHz, the clock rate for the A/D converter 1 MHz and the clock rate for the CIS sensors 0.5 MHz.
  • the PLD also comprises an FIFO memory structure (queue) for input and temporary storage of data delivered by the A/D converter.
  • the PLD also forms an interface with the processor 110.
  • the PLD is normally arranged to deliver an interrupt signal to the processor if the FIFO queue is almost full.
  • the PLD also forms an interface between processor 110 and screen 140.
  • An example of a suitable PLD circuit is Xilinx XC2C128.
  • Fig. 2 is a flowchart schematically illustrating a method according to the invention.
  • the illustrated method is computer-implemented and executed by a processor in an apparatus for aiding individuals with a reading disability.
  • the method is explained in connection with the apparatus described above with reference to figure 1 above, and the method may particularly advantageously be implemented by the processor 110 in this apparatus.
  • the method is implemented as a result of the processor 110 executing a computer program comprising instructions contained in a memory, normally comprised of the Flash memory 120 in the apparatus.
  • the detailed formulation of the computer program's instructions is considered to be a commonplace task for a skilled person on the basis of the present description of a method according to the invention.
  • a text data portion or a segment thereof is processed and displayed on a screen in a new and distinctive manner, which has been shown to be particularly appropriate for a user with a reading disability, and especially a dyslexic.
  • the user can control the processing and display by means of user operating elements.
  • the method comprises inputting a text data portion in a memory, segmenting the portion of text into line elements, reading user operating elements, selecting a line element based on the reading of the user operating elements, and displaying a connected segment of the portion of text on a screen, where the selected line element is placed in a central area of the screen.
  • the method starts at the initiation step 202.
  • First of all the step 210 is performed by inputting a text data portion in a memory. This is preferably done by scanning in 212 a text from a printed medium, such as for example a book, a magazine or a newspaper.
  • the scanning step 212 results in data in image format such as bit-map data, which are temporarily stored in a memory portion, typically comprised of the working memory 130.
  • a bit-map image is generated from the raw data delivered from two CIS sensors.
  • the scanning step 212 includes a pre-processing step, where a calculation is made of the speed and direction in which the scanner has been moved, and to what extent it may have been rotated during the movement.
  • the electro-optical imaging unit in the apparatus comprises two line scanners, i.e. two CIS sensors 150. This permits the establishment of correlating scan lines from one CIS sensor in the data from the other CIS sensor, in addition to the formation of a correlation data set.
  • a rotation data set is also determined from analysis of the scanner's angular movement.
  • a corrected, rectangular image is built up on the basis of the correlation data set and the rotation data set.
  • the pre-processing step therefore includes steps for adapting the resulting imaging data into an image that is as right-angled and rectangular as possible, with the most correct height/width ratio possible.
  • the use of the two parallel-mounted CIS sensors 150 in combination with the above-mentioned pre-processing step, corrects the defects which otherwise would result from the user's manually controlled movement between scanner and the object (the printed medium) being scanned.
  • This movement is generally (i.e. in practice) not ideally rectilinear, and it is not generally performed at a constant rate.
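The speed estimation from the two parallel sensors can be illustrated with a minimal Python sketch. This is not taken from the patent; the line rate, scoring function and names are illustrative assumptions. The idea is that the trailing CIS sensor passes over the same image strip as the leading sensor a few scan periods later, so the lag that best aligns the two data sets reveals the scanning speed:

```python
SENSOR_DISTANCE_MM = 10.0  # the distance 104 between the CIS sensors
SCAN_PERIOD_S = 0.002      # hypothetical time between successive scan lines

def estimate_lag(lines_a, lines_b, max_lag):
    """Find the lag (in scan periods) at which lines from the leading
    sensor best match lines captured later by the trailing sensor,
    scored by mean squared pixel difference."""
    best_lag, best_score = None, float("inf")
    for lag in range(1, max_lag + 1):
        score, n = 0.0, 0
        for t in range(len(lines_a) - lag):
            a, b = lines_a[t], lines_b[t + lag]
            score += sum((x - y) ** 2 for x, y in zip(a, b))
            n += 1
        if n and score / n < best_score:
            best_lag, best_score = lag, score / n
    return best_lag

def estimate_speed_mm_per_s(lines_a, lines_b, max_lag=50):
    """Scanning speed follows from the fixed sensor spacing and the lag."""
    lag = estimate_lag(lines_a, lines_b, max_lag)
    return SENSOR_DISTANCE_MM / (lag * SCAN_PERIOD_S)
```

A real implementation would additionally estimate rotation from how the lag varies along the sensor, which is the rotation data set mentioned above.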
  • an optical character recognition process 214, a so-called OCR process, is then executed, with the imaging data obtained in the scanning step 212 as input.
  • the OCR process comprises instructions normally contained in the Flash memory 120.
  • the OCR process 214 results in text data stored in a memory portion, typically comprised of the working memory 130.
  • the memory portion which temporarily stored the data from the scanning step 212 can then advantageously be released. Suitable OCR processes are well-known in the art and can be selected by a skilled person.
  • the character recognition process 214 may alternatively comprise a transmission of the imaging data to an external computer where the OCR process takes place, and receipt of the resulting text data from the external computer.
  • the segmentation step 220 is then performed, where the input text data are segmented into line elements. This is done by forming text lines of suitable length, adapted to the width of the screen and a pre-selected text size. For optimal perception in the target group of individuals with impaired reading ability, it has been shown to be expedient to divide the text into line elements, each comprising a number of the order of between 20 and 40 characters, preferably between 26 and 34 characters, and particularly preferred approximately 30 characters. In the segmentation into line elements, certain rules must be observed, particularly for handling of long words. In the simplest case the rule is employed that each line element should be kept within the given maximum number of characters, it should only contain whole words, and no word should be divided.
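The whole-word segmentation rule of step 220 can be sketched as follows. This is a minimal sketch, not taken from the patent; the function name and the 30-character default are illustrative:

```python
def segment_into_line_elements(text, max_chars=30):
    """Split text into line elements of at most max_chars characters,
    containing only whole words: no word is divided, and a word longer
    than max_chars simply gets a line element of its own."""
    lines, current = [], ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines
```

A real implementation would additionally derive max_chars from the width of the screen and the pre-selected text size, as the segmentation step prescribes.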
  • Step 222 is then implemented, where the first line element is selected.
  • a connected segment of the portion of text is displayed on the screen. This is done in such a way that the selected line element is at all times placed in a central area of the screen.
  • central should be understood to refer particularly to a central position in the vertical direction.
  • 5 line elements are preferably displayed at any time, and in such a manner that the selected line element preferably constitutes the middle, i.e. the third, of these 5 line elements. This has been shown to result in a high degree of comprehensibility by the target group of individuals with a reading disability. The user can therefore focus on the central area of the screen, thereby aiding his/her orientation in the text.
  • the text located above and below can be considered as a supporting text to give an idea of what came before and what is coming next.
  • This supporting text can therefore be toned down, while the selected, central line element can advantageously be highlighted with colour and/or contrasts. It is particularly advantageous to use black letters on a blue background for the selected, central line element, while grey letters on a white background are used for the rest of the line elements.
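The placement rule described above, with the selected line element fixed in the middle row and blank rows substituted beyond the ends of the text, can be sketched as follows (illustrative names, not from the patent):

```python
def visible_window(line_elements, selected, n_lines=5):
    """Return the rows to display: the selected line element sits in the
    middle row, and positions before the first or after the last line
    element of the text are shown as blank rows."""
    mid = n_lines // 2
    rows = []
    for offset in range(-mid, n_lines - mid):
        i = selected + offset
        rows.append(line_elements[i] if 0 <= i < len(line_elements) else "")
    return rows
```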
  • Step 230 is then implemented, where user operating elements 160 are read.
  • the forward key 164 and the backward key 162 in particular are read.
  • a selection process 240 is then implemented for selecting a line element in the input text data, based on the reading of the user operating elements. For the reading- impaired user, it has been shown to be advantageous for the text to be presented in such a manner that the relevant line element, on which the reader is focussing at any time, is located in a central position on the screen. The selection process 240 is therefore aimed at selecting the relevant line element on which the reader is focussed, by means of the user operating elements.
  • in step 242 it is decided whether the forward key is activated. If so, in step 244 the next line element is chosen as the selected line element, and the sequence continues at the decision step 260.
  • in step 246 it is decided whether the backward key is activated. If this is the case, in step 248 the previous line element is chosen as the selected line element, and the sequence continues again at the decision step 260.
  • step 260 it is decided whether the display is completed. If it is not completed, the process is returned to the display step 224. If the display is completed, the process is concluded, step 298.
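The decision steps 242-248 amount to a small state update, sketched here under the assumption that selection is clamped at the ends of the text (the function name is illustrative, not from the patent):

```python
def select_line_element(selected, n_elements, forward_pressed, backward_pressed):
    """Steps 242/244 and 246/248: move to the next or previous line
    element when the corresponding key is activated, staying within
    the bounds of the segmented text."""
    if forward_pressed and selected < n_elements - 1:
        return selected + 1
    if backward_pressed and selected > 0:
        return selected - 1
    return selected
```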
  • the segmentation step 220 further comprises segmenting the portion of text into word elements.
  • the step of selecting a line element further comprises selecting a word element, and the display step further comprises highlighting the selected word element.
  • This version involves selecting a line element which at all times is located in the centre of the screen, but in addition a word element is selected in the chosen line element by means of the forward and backward keys, and this word element is highlighted.
  • in the display step 224 not only the chosen word element is highlighted, but also the part of the chosen line element extending from the start of the line element, to and including the chosen word element.
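Splitting a line element into the highlighted prefix (up to and including the chosen word element) and the remainder can be sketched as follows (illustrative, not from the patent):

```python
def split_at_word(line_element, word_index):
    """Return (highlighted, rest): the part of the line element from its
    start up to and including the word at word_index, and the rest."""
    words = line_element.split(" ")
    highlighted = " ".join(words[: word_index + 1])
    rest = " ".join(words[word_index + 1 :])
    return highlighted, rest
```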
  • processor 110 can also execute additional necessary or advantageous processes:
  • Log data can be stored indicating the use of the apparatus, for example data that a user can employ for measuring or determining his/her results/progress when using the apparatus as an aid.
  • Texts that are scanned in can be numbered and stored sequentially in the Flash memory. After selection, stored texts can be displayed and/or deleted by the user.
  • a program process which interacts with the audio-processor 170 to form synthetic speech, thus enabling text data to be presented as synthetic speech in addition to the display on the screen.
  • the processor 110 is arranged to present synthetic speech according to the line/word elements resulting from the user's operation of the forward/backward keys.
  • the processor 110 is arranged to operate in automatic speech mode, so that the synthetic speech is presented sequentially without these keys being operated. Both of these embodiments are advantageously implemented, and operating elements such as virtual keys 166 allow the user to choose which of the embodiments is to be the active one.
  • Suitable text-to-speech processes are known in the art, can be selected by the skilled person and will not be described in greater detail here.
  • Figures 3, 4, 5 and 6 are schematic front views of an apparatus according to the invention, illustrating the use of the invention.
  • Figure 3 illustrates the apparatus 100 being employed by a user, particularly a user with a reading disability.
  • a printed text is scanned by the user, and the image has been converted to a text data portion which is contained in a memory in the apparatus.
  • the text data portion has been segmented into a number of line elements, and three of these (302, 304, 306) are displayed on the screen.
  • the chosen line element is therefore the first line element 302 in the text.
  • this line element 302 will be located in the centre of the screen, particularly in the vertical sense, i.e. in the position of the middle line of the 5 lines displayed.
  • the first two lines on the screen are therefore blank, whereupon the line element 302 is displayed in a highlighted form, and then the line elements 304, 306 are displayed preferably in a toned-down form.
  • Figure 4 illustrates the apparatus 100 with the text segment that is displayed after the forward key 164 has been activated by the reading-impaired user once, i.e. after one pass of the "next line element" step 244.
  • the line element 304 has then become the selected line element which is displayed in a highlighted manner and in a central position on the screen.
  • Figure 5 illustrates the apparatus 100 with the text segment that is displayed after the forward key 164 has been activated by the reading-impaired user twice, i.e. after two passes of the "next line element" step 244.
  • the line element 306 has then become the selected line element which is displayed in a highlighted manner and in a central position on the screen.
  • Figure 6 illustrates the apparatus 100 with the text segment that is displayed after the forward key 164 has been activated by the reading-impaired user three times, i.e. after three passes of the "next line element" step 244.
  • the line element 308 has then become the selected line element which is displayed in a highlighted manner and in a central position on the screen.
  • the screen displays the line elements 304, 306, 308, 310 and a blank line.
  • Figure 7 illustrates the apparatus 100 with the text segment that is displayed after the forward key 164 has been activated by the reading-impaired user four times, i.e. after four passes of the "next line element" step 244.
  • the line element 310 has then become the selected line element which is displayed in a highlighted manner and in a central position on the screen.
  • the screen displays the line elements 306, 308, 310 and then blank lines.
  • Fig. 8 is a schematic view illustrating the mounting of two CIS sensors on an end surface of the apparatus 100.
  • the two electro-optical line scanners are mounted in parallel, in a direction at right angles to the ideal direction of motion 102 during scanning. It should be understood that the direction of motion may also be the opposite of the direction of the arrow 102.
  • the distance 104 between the CIS sensors 150 is normally in the range between 5 mm and 15 mm, particularly advantageously around 10 mm.
  • Fig. 9 is a schematic cross-sectional view of the apparatus 100, viewed from the side.
  • Figure 9 therefore illustrates the CIS sensors 150, the direction of motion 102, the screen 140, the battery 194 and a printed circuit board 106 containing electronic components corresponding to most of the components mentioned earlier with reference to fig. 1.
  • the components of the apparatus are contained in a housing 108.
  • the electro-optical imaging unit comprises optical line scanners such as CIS sensors, i.e. one-dimensional scanners which require a movement over the text area concerned as a basis for generating two-dimensional image information.
  • the electro-optical imaging unit may alternatively comprise a digital camera, which in one imaging operation maps the whole of the portion of text concerned, or substantial, two-dimensional parts of the portion of text.
  • the input unit for input of textual information is particularly specified as being an electro-optical imaging unit followed by a conversion unit for converting to textual information. It will be appreciated that this is expedient in order to achieve a hand-held apparatus for optical reading and subsequent presentation of text. It will be understood, however, that the invention may also comprise text input units which do not involve optical imaging, where the text, for example, can be retrieved directly from a digital communication source such as a computer, a digital storage medium or a network element in a computer network.
  • a digital communication source such as a computer, a digital storage medium or a network element in a computer network.
  • the OCR process can be executed externally.
  • an apparatus according to the invention may, in addition to the components illustrated in fig. 1, be supplied with a transceiver, such as a WLAN module, in order to achieve two-way wireless communication with a computer network.
  • the apparatus 100 can thereby communicate operatively with an external computer such as a PC via such a WLAN connection.
  • Some functions can therefore be carried out by being executed in the external computer instead of in the processor 110. It will be particularly relevant to employ this network connection for transferring the scanned-in data in the form of imaging data to the external computer, so that the OCR analysis is executed in the external computer, whereupon the resulting text data are transferred back to the apparatus 100 via the communication link.
  • the WLAN module may otherwise be employed in general for providing wireless communication instead of or in addition to the USB or RS232 connection illustrated in fig. 1.
  • a possible example is to have the text-to-speech process carried out by means of the external computer.
  • the screen displays five lines of text, where the central line on which the user focuses, and which is highlighted, is located as the middle line, i.e. the third line.
  • a different number of text lines, for example 3, 4, 6, 7 or 8, is also possible.
  • the central area to which the invention refers need not be exactly the middle line, and if the number of displayed lines is an even number, the central area will have to deviate from the exact vertical centre.
  • the central area should therefore generally be understood to refer to an area of the screen 140 that is vertically located closer to the centre than one of the screen's upper or lower edges.
  • an ARM processor, for example of the Intel XScale type (such as the suggested Intel PXA270 processor), is a suitable choice, since its attributes, particularly performance/power consumption, are suitable for use in hand-held, battery-operated units.
  • microprocessors, including microcontrollers, may be freely chosen within the scope of the invention based on the requirements that are considered to be practical for the implementation concerned.
  • the same applies to other circuits specified in the detailed description, including the A/D converter, multiplexer, memory circuits, PLD and display module.
  • the computer program comprising instructions which cause the processor to execute the method is contained in a memory.
  • the invention also comprises a computer program of this kind which is stored on a medium, for example an optical storage medium such as a CD-ROM, or which is carried by a propagated signal, for example by means of communication between computers in a network such as the Internet.
  • An example of this is the signal that is transmitted during downloading via the network of such a computer program from a server.
  • the step of highlighting a selected line element comprises the use of colours and/or contrasts between letters and background.
  • the highlighting may consist in controlling the screen's backlighting, particularly if an LCD screen is employed where the backlighting (for example of the LED type) can be controlled line by line. This kind of solution will also be advantageous with regard to achieving a saving in power consumption and thereby increased battery life.

Abstract

The invention relates to an apparatus (100) for aiding individuals with a reading disability. The apparatus comprises an input unit for input of textual information, particularly a scanner (150) containing two CIS sensors. The apparatus further comprises user operating elements (160), a screen (140), a memory (120, 130) and a processor (110). The processor (110) is arranged to OCR-process (214) the scanned information in order thereby to generate a text data portion. The processor (110) is further arranged to segment (220) the portion of text into line elements, to read (230) the user operating elements, to select (240) a line element based on the reading of the user operating elements (160), and to display (224) a connected segment of the portion of text on the screen, the selected line element being located and highlighted in a central area of the screen. The invention also relates to a method and a computer program which are suitable to be executed by such an apparatus.

Description

Aid for individuals with a reading disability
Technical field
The present invention relates to aids for individuals with a reading disability, especially dyslexics. More particularly, the invention relates to an apparatus for helping individuals with a reading disability, a computer-implemented method for execution by a processor in such an apparatus, together with a computer program for such an apparatus.
Background of the invention
A large proportion of the population has a reading disability, and the largest single group amongst these consists of dyslexics. There is therefore a need for a technical aid which in practice makes it possible for those with a reading disability, such as dyslexics, to acquire textual information in an efficient manner.
Aids for those with a reading disability are known in the prior art.
SE-C-515 805 describes an aid for increasing a user's text reading rate. A text is displayed on a screen with a cursor which automatically and almost continuously moves through the text. The speed of the cursor's movement can be adjusted by the user.
US-5 802 533 describes a text processing method for improving a reader's reading experience. In the method an attribute such as a degree of difficulty is derived from a text. Depending on this attribute, the text is then presented to the reader, the presentation rate in particular being varied according to the degree of difficulty.
Reading Pen II, developed by Wizcom Technologies, is a text recording pen designed for individuals with a reading disability, such as dyslexics. This pen comprises a scanner for input of a single line of text, a screen for displaying an enlarged, scanned-in word, and a text-to-speech device. The disadvantage of this solution is that it requires the user to have control over his reading position in the text before and after the word or line concerned has been scanned in. It therefore requires the user to be capable of keeping "the thread" of the text, which can be difficult for the target group with impaired reading ability.
Summary of the invention
An object of the present invention is to provide a computer-implemented method for execution by a processor in an apparatus for aiding those with a reading disability, a computer program for execution of the method, together with an apparatus for aiding those with a reading disability. In particular, it is an object of the invention to provide solutions such as those mentioned above, which result in increased comprehensibility by the target group with impaired reading ability, especially dyslexics.
It is a further object to provide solutions such as those mentioned above, which eliminate or reduce the drawbacks of the prior art.
It is an object hereby to provide solutions which do not require the user to exert him/herself unnecessarily in order to remain focussed on the text image.
The invention will become apparent from the following independent claims. Advantageous embodiments are indicated in the dependent claims.
Brief description of the drawings
The invention will now be described in greater detail with reference to an embodiment, illustrated by the attached drawings, in which
fig. 1 is a block diagram illustrating the schematic construction of an apparatus according to the invention,
fig. 2 is a flowchart schematically illustrating a method according to the invention,
figs. 3, 4, 5, 6 and 7 are schematic front views of an apparatus according to the invention, illustrating various aspects of the operation of the apparatus and the method,
fig. 8 is a schematic view illustrating the mounting of two CIS sensors on an end surface of the apparatus, and
fig. 9 is a schematic cross-sectional view of the apparatus.

Detailed description of the invention
Fig. 1 is a block diagram illustrating the schematic construction of an apparatus according to the invention.
The apparatus 100 is an apparatus for aiding individuals with a reading disability, such as those with dyslexia. The apparatus is microprocessor-based and therefore comprises a centrally arranged processor 110 such as an Intel PXA270. The processor is connected in the normal manner to a working memory (RAM) 130 for storing volatile data and a Flash memory 120 for storing executable code and fixed/non-volatile data.
The executable code comprises instructions which, when executed by the processor 110, cause the processor 110 to implement a method according to the invention, for example a method as described below with reference to figure 2. The apparatus is battery-operated and therefore includes a chargeable battery 194, controlled by a charge controller 190 supplied by a power supply 192.
In general the apparatus 100 comprises an input unit for input of textual information. More specifically, this input unit comprises an electro-optical imaging unit 150 for input of image information, and a conversion unit for converting the image information to textual information.
The electro-optical imaging unit 150 comprises at least one line scanner, and preferably two line scanners. Each line scanner is preferably a CIS sensor. Each CIS sensor is arranged to provide image information corresponding to a text data portion extending over a plurality of text lines. The two CIS sensors are preferably arranged in parallel, in a direction at right angles to the ideal direction of motion during scanning (see also fig. 8 and fig. 9). An example of a suitable CIS sensor is the P1404MC.
The CIS sensors 150 deliver analog signals. They are therefore connected to an A/D converter 152, for example of type ht82v36. An analog multiplexer (not shown), for example of type MAX4542, enables only one A/D converter to be used for two CIS sensors.
The input image information is temporarily stored by the processor 110 in a memory portion of the working memory 130, for example in so-called bitmap format.
The conversion unit for converting the image information to textual information preferably comprises a regular OCR process (optical character recognition process) which is executed by the processor 110 by means of instructions stored in the memory. The resulting textual information is also stored in the memory. OCR processes are well-known to those skilled in the art and require no specific mention here.
The apparatus 100 comprises a screen 140, preferably an LCD-type colour screen. The screen is preferably touch-sensitive. For example, the screen may be of the LQ035Q7DB02A type, which is a transflective LCD screen with a 52 x 72 mm viewing area, 240 x 320 pixels resolution, 3 x 6 bit colour resolution and LED-type background illumination.
The apparatus 100 further comprises user operating elements 160, including a forward key (164, illustrated in figs. 3-7) and a backward key (162, illustrated in figs. 3-7). These elements may advantageously be implemented as physical pushbutton switches.
The user operating elements are advantageously mounted on the front of the apparatus, i.e. on the same side as the display surface of the screen 140. The forward key 164 is advantageously placed near the right-hand, lower part of the front panel of the apparatus 100, while the backward key is advantageously placed near the left-hand, lower part.
Alternatively, the user operating elements 160 may be implemented as virtual keys on the screen 140, if this is of the touch-sensitive type.
Several virtual keys 166 may also advantageously be provided on the screen 140, intended for extra functions.
In addition the apparatus 100 advantageously comprises a text-to-speech device. This will typically be composed of a text-to-speech process realised in software, i.e. in the form of instructions contained in the memory for execution by the processor 110. The text-to-speech device further comprises the audio-processor 170, which is further connected to an amplifier and loudspeaker 174, in addition to an audio output 172.
The processor further comprises a serial communication interface, which is connected to a USB connection. Alternatively or in addition, the communication interface may be connected to an RS232 connection. As indicated later in this description, the processor 110 may furthermore be in operative connection with a WLAN module or other type of wireless communication device.
A programmable logic circuit or PLD 156 provides the necessary glue logic for implementing operative connection between the processor 110 and the CIS sensors 150, the A/D converter 152 and the screen 140 respectively.
The PLD 156 provides clock signals for the screen 140, for the A/D converter 152 and for the CIS sensors 150. The pixel clock rate for the screen may typically be 6 MHz, the clock rate for the A/D converter 1 MHz and the clock rate for the CIS sensors 0.5 MHz. The PLD also comprises a FIFO memory structure (queue) for input and temporary storage of data delivered by the A/D converter, and is normally arranged to deliver an interrupt signal to the processor 110 if the FIFO queue is almost full. The PLD further forms an interface between the processor 110 and the screen 140. An example of a suitable PLD circuit is the Xilinx XC2C128.
The formulation of the PLD circuit's detailed functions, including the programming of the circuit and the transfer of the resulting logic by means of, e.g. a JTAG interface, are regarded as being commonplace tasks for those skilled in the art, based on the present description.
Fig. 2 is a flowchart schematically illustrating a method according to the invention. The illustrated method is computer-implemented and executed by a processor in an apparatus for aiding individuals with a reading disability. In the following description the method is explained in connection with the apparatus described above with reference to figure 1 above, and the method may particularly advantageously be implemented by the processor 110 in this apparatus.
The method is implemented as a result of the processor 110 executing a computer program comprising instructions contained in a memory, normally comprised of the Flash memory 120 in the apparatus. The detailed formulation of the computer program's instructions is considered to be a commonplace task for a skilled person on the basis of the present description of a method according to the invention.
By means of the method according to the invention, a text data portion or a segment thereof is processed and displayed on a screen in a new and distinctive manner, which has been shown to be particularly appropriate for a user with a reading disability, and especially a dyslexic. The user can control the processing and display by means of user operating elements.
In order to achieve the technical effect produced by the invention, the method comprises inputting a text data portion in a memory, segmenting the portion of text into line elements, reading user operating elements, selecting a line element based on the reading of the user operating elements, and displaying a connected segment of the portion of text on a screen, where the selected line element is placed in a central area of the screen.
The method starts at the initiation step 202.
First of all the step 210 is performed by inputting a text data portion in a memory. This is preferably done by scanning in 212 a text from a printed medium, such as for example a book, a magazine or a newspaper. The scanning step 212 results in data in image format such as bit-map data, which are temporarily stored in a memory portion, typically comprised of the working memory 130.
In the scanning step 212 a bit-map image is generated from the raw data delivered from two CIS sensors.
For this purpose the scanning step 212 includes a pre-processing step, where a calculation is made of the speed and direction in which the scanner has been moved, and to what extent it may have been rotated during the movement. This is achieved by exploiting the advantageous feature that the electro-optical imaging unit in the apparatus comprises two line scanners, i.e. two CIS sensors 150. This permits the establishment of correlating scan lines from one CIS sensor in the data from the other CIS sensor, in addition to the formation of a correlation data set. A rotation data set is also determined from analysis of the scanner's angular movement. A corrected, rectangular image is built up on the basis of the correlation data set and the rotation data set.
The pre-processing step therefore includes steps for adapting the resulting imaging data into as right-angled and rectangular an image as possible, with the most correct height/width ratio possible.
The use of the two parallel-mounted CIS sensors 150, in combination with the above-mentioned pre-processing step, corrects the defects which otherwise would result from the user's manually controlled movement between scanner and the object (the printed medium) being scanned. This movement is generally (i.e. in practice) not ideally rectilinear, and it is not generally performed at a constant rate.
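The correlation step can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes each sensor delivers a stream of scan lines as lists of pixel values, and it finds, by a simple sum-of-absolute-differences match, the lag at which one sensor saw what the other sensor sees now. With the sensors a known distance apart (such as the approximately 10 mm mentioned with reference to fig. 8), this lag gives the scanning speed.

```python
def estimate_lag(lines_a, lines_b, max_lag=50):
    """Find, for the newest scan line from sensor B, the lag (in
    sample periods) at which sensor A delivered the most similar
    line. With the sensors a known distance d apart, the scan speed
    is then approximately d / (lag * sample_period).
    Similarity is measured as a sum of absolute pixel differences."""
    target = lines_b[-1]
    best_lag, best_score = 0, float("inf")
    for lag in range(1, min(max_lag, len(lines_a)) + 1):
        candidate = lines_a[-lag]
        score = sum(abs(p - q) for p, q in zip(target, candidate))
        if score < best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Repeating this per scan line yields a speed profile along the sweep, from which a correlation data set of the kind described above can be built; rotation estimation would additionally compare the lags at the two ends of the sensors, which is omitted here.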
After the scanning step 212, an optical character recognition process 214, a so-called OCR process, is executed, with the imaging data obtained in the scanning step 212 as input. The OCR process comprises instructions normally contained in the Flash memory 120. The OCR process 214 results in text data stored in a memory portion, typically comprised of the working memory 130. The memory portion which temporarily stored the data from the scanning step 212 can then advantageously be released. Suitable OCR processes are well-known in the art and can be selected by a skilled person.
The character recognition process 214 may alternatively comprise a transmission of the imaging data to an external computer where the OCR process takes place, and receipt of the resulting text data from the external computer. An alternative of this kind is discussed further at the end of this description.
The segmentation step 220 is then performed, where the input text data are segmented into line elements. This is done by forming text lines of suitable length, adapted to the width of the screen and a pre-selected text size. For optimal perception in the target group of individuals with impaired reading ability, it has been shown to be expedient to divide the text into line elements, each comprising a number of the order of between 20 and 40 characters, preferably between 26 and 34 characters, and particularly preferred approximately 30 characters. In the segmentation into line elements, certain rules must be observed, particularly for handling of long words. In the simplest case the rule is employed that each line element should be kept within the given maximum number of characters, it should only contain whole words, and no word should be divided.
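The segmentation rule described above — whole words only, within a given maximum number of characters — can be sketched as a simple greedy word-wrap. The function name and the fallback for over-long words are illustrative assumptions, not taken from the patent:

```python
def segment_into_line_elements(text, max_chars=30):
    """Greedily pack whole words into line elements of at most
    max_chars characters; words are never divided. A single word
    longer than max_chars gets a line element of its own (an
    assumed fallback for the long-word case)."""
    lines, current = [], ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if len(candidate) <= max_chars or not current:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines
```

With max_chars set to around 30, as preferred above, this yields line elements adapted to the screen width and pre-selected text size.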
Step 222 is then implemented, where the first line element is selected.
Furthermore, in the display step 224, a connected segment of the portion of text is displayed on the screen. This is done in such a way that the selected line element is at all times placed in a central area of the screen. In this context "central" should be understood to refer particularly to a central position in the vertical direction. 5 line elements are preferably displayed at any time, and in such a manner that the selected line element preferably constitutes the middle, i.e. the third, of these 5 line elements. This has been shown to result in a high degree of comprehensibility by the target group of individuals with a reading disability. The user can therefore focus on the central area of the screen, thereby aiding his/her orientation in the text. The text located above and below can be considered as a supporting text to give an idea of what came before and what is coming next. This supporting text can therefore be toned down, while the selected, central line element can advantageously be highlighted with colour and/or contrasts. It is particularly advantageous to use black letters on a blue background for the selected, central line element, while grey letters on a white background are used for the rest of the line elements.
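The display layout described above can be sketched as follows. This is an illustrative model, assuming line elements are plain strings and that blank rows pad the window at the start and end of the text:

```python
def display_window(line_elements, selected, window=5):
    """Return the `window` rows shown on the screen: the selected
    line element sits in the middle row; rows falling before the
    start or past the end of the text are blank."""
    half = window // 2
    rows = []
    for offset in range(-half, half + 1):
        i = selected + offset
        rows.append(line_elements[i] if 0 <= i < len(line_elements) else "")
    return rows
```

For example, with the first line element selected, the first two rows are blank and the selected element occupies the middle row; highlighting of the middle row (e.g. black on blue versus grey on white) would be applied in the rendering step, which is not modelled here.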
Step 230 is then implemented, where user operating elements 160 are read. The forward key 164 and the backward key 162 in particular are read.
A selection process 240 is then implemented for selecting a line element in the input text data, based on the reading of the user operating elements. For the reading-impaired user, it has been shown to be advantageous for the text to be presented in such a manner that the relevant line element, on which the reader is focussing at any time, is located in a central position on the screen. The selection process 240 is therefore aimed at selecting the relevant line element on which the reader is focussed, by means of the user operating elements.
First of all in the selection process 240 it is decided in step 242 whether the forward key is activated. If so, in step 244 the next line element is chosen as the selected line element, and the sequence continues at the decision step 260.
If not, i.e. if the forward key is not activated, it is decided in step 246 whether the backward key is activated. If this is the case, in step 248 the previous line element is chosen as the selected line element, and the sequence continues again at the decision step 260.
If neither the forward nor the backward keys were operated, no change is made in the line element concerned; in other words the same line element is selected as before (step 249), and the sequence continues at the decision step 260.
In the decision step 260 it is decided whether the display is completed. If it is not completed, the process is returned to the display step 224. If the display is completed, the process is concluded, step 298.
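A minimal sketch of the selection steps 242-249 follows; the clamping of the selection to the first and last line element is an assumption not spelled out in the flowchart:

```python
def select_line(selected, forward_pressed, backward_pressed, n_lines):
    """One pass of the selection process: the forward key advances
    the selection, the backward key retreats it, and otherwise the
    selection is unchanged. Clamping at the text boundaries is an
    assumed behaviour."""
    if forward_pressed:
        return min(selected + 1, n_lines - 1)
    if backward_pressed:
        return max(selected - 1, 0)
    return selected
```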
In one particular embodiment, which is not specially illustrated in fig. 2, the segmentation step 220 further comprises segmenting the portion of text into word elements. The step of selecting a line element further comprises selecting a word element, and the display step further comprises highlighting the selected word element.
In this particular version the step of selecting a line element and a word element advantageously includes the following sub-steps:
- if the forward key 164 is activated, selecting the next word element as the chosen word element,
- if the forward key 164 is activated and the chosen word element is the last in a line element, selecting the next line element as the chosen line element,
- if the backward key 162 is activated, selecting the previous word element as the chosen word element, and
- if the backward key 162 is activated and the first word element in a line is selected, selecting the previous line element as the chosen line element.
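The four sub-steps above can be sketched as a single function; the representation of the text as a list of per-line word lists, and the choice of which word becomes current when crossing a line boundary backwards, are illustrative assumptions:

```python
def select_word(line_words, line, word, forward, backward):
    """Word-level selection: forward/backward move one word at a
    time, crossing into the next/previous line element at line
    boundaries. line_words is a list of word lists, one per line
    element; (line, word) identifies the chosen word."""
    if forward:
        if word + 1 < len(line_words[line]):
            return line, word + 1
        if line + 1 < len(line_words):
            return line + 1, 0          # last word: move to next line
    elif backward:
        if word > 0:
            return line, word - 1
        if line > 0:
            # moving to the previous line selects its last word (an assumption)
            return line - 1, len(line_words[line - 1]) - 1
    return line, word
```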
This version involves selecting a line element which at all times is located in the centre of the screen, but in addition a word element is selected in the chosen line element by means of the forward and backward keys, and this word element is highlighted.
In a further preferred variant of this particular embodiment, in the display step 224 not only the chosen word element is highlighted, but the part of the chosen line element extending from the start of the line element, to and including the chosen word element.
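This variant can be sketched as follows, using bracket markers in place of the colour/contrast highlighting; the marker syntax is purely illustrative:

```python
def render_line(words, chosen_word_index):
    """Render one line element with the part from the start of the
    line up to and including the chosen word highlighted (marked
    here with square brackets)."""
    head = " ".join(words[: chosen_word_index + 1])
    tail = " ".join(words[chosen_word_index + 1 :])
    return "[" + head + "]" + ((" " + tail) if tail else "")
```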
In addition to what has been described in the above, the processor 110 can also execute additional necessary or advantageous processes:
- Logging process. Log data can be stored indicating the use of the apparatus, for example data that a user can employ for measuring or determining his/her results/progress when using the apparatus as an aid.
- Memory handling process. Texts that are scanned in can be numbered and stored sequentially in the Flash memory. After selection, stored texts can be displayed and/or deleted by the user.
- Communication processes. Provide transfer of data stored in the memory to an external computer or a network, and corresponding transfer from the external computer or network to the memory in the apparatus 100. Can also be employed for updating program/executable code that has to be stored in the Flash memory in the apparatus.
- Text-to-speech process. A program process which interacts with the audio-processor 170 to form synthetic speech, thus enabling text data to be presented as synthetic speech in addition to the display on the screen. In one embodiment the processor 110 is arranged to present synthetic speech according to the line/word elements resulting from the user's operation of the forward/backward keys. In a second embodiment the processor 110 is arranged to operate in automatic speech mode, so that the synthetic speech is presented sequentially without these keys being operated. Both of these embodiments are advantageously implemented, and operating elements such as virtual keys 166 allow the user to choose which of the embodiments is to be the active one.
Suitable text-to-speech processes are known in the art, can be selected by the skilled person and will not be described in greater detail here.
Figures 3, 4, 5 and 6 are schematic front views of an apparatus according to the invention, illustrating the use of the invention.
Figure 3 illustrates the apparatus 100 being employed by a user, particularly a user with a reading disability. A printed text is scanned by the user, and the image has been converted to a text data portion which is contained in a memory in the apparatus.
The text data portion has been segmented into a number of line elements, and three of these (302, 304, 306) are displayed on the screen.
To begin with, as illustrated in figure 3, none of the operating elements (forward key 164 or backward key 162) have been activated by the user. The chosen line element is therefore the first line element 302 in the text. According to the invention this line element 302 will be located in the centre of the screen, particularly in the vertical sense, i.e. in the position of the middle line of the 5 lines displayed. The first two lines on the screen are therefore blank, whereupon the line element 302 is displayed in a highlighted form, and then the line elements 304, 306 are displayed, preferably in a toned-down form.
Figure 4 illustrates the apparatus 100 with the text segment that is displayed after the forward key 164 has been activated by the reading-impaired user once, i.e. after one pass of the "next line element" step 244. The line element 304 has then become the selected line element which is displayed in a highlighted manner and in a central position on the screen.
Figure 5 illustrates the apparatus 100 with the text segment that is displayed after the forward key 164 has been activated by the reading-impaired user twice, i.e. after two passes of the "next line element" step 244. The line element 306 has then become the selected line element which is displayed in a highlighted manner and in a central position on the screen.
Figure 6 illustrates the apparatus 100 with the text segment that is displayed after the forward key 164 has been activated by the reading-impaired user three times, i.e. after three passes of the "next line element" step 244. The line element 308 has then become the selected line element which is displayed in a highlighted manner and in a central position on the screen. The screen displays the line elements 304, 306, 308, 310 and a blank line.
Figure 7 illustrates the apparatus 100 with the text segment that is displayed after the forward key 164 has been activated by the reading-impaired user four times, i.e. after four passes of the "next line element" step 244. The line element 310 has then become the selected line element which is displayed in a highlighted manner and in a central position on the screen. The screen displays the line elements 306, 308, 310 and then blank lines.
It will be appreciated that the line element that is selected by the reading-impaired user by means of the operating elements 162, 164 is at all times highlighted and located in the centre of the screen. This distinguishing feature of the present invention has been shown to result in a high degree of comprehensibility by users with a reading disability, particularly dyslexics.
Fig. 8 is a schematic view illustrating the mounting of two CIS sensors on an end surface of the apparatus 100.
As illustrated, the two electro-optical line scanners, more specifically the CIS sensors 150, are mounted in parallel, in a direction at right angles to the ideal direction of motion 102 during scanning. It should be understood that the direction of motion may also be the opposite of the direction of the arrow 102. The distance 104 between the CIS sensors 150 is normally in the range between 5 mm and 15 mm, particularly advantageously around 10 mm.
Fig. 9 is a schematic cross-sectional view of the apparatus 100, viewed from the side. Figure 9 therefore illustrates the CIS sensors 150, the direction of motion 102, the screen 140, the battery 194 and a printed circuit board 106 containing electronic components corresponding to most of the components mentioned earlier with reference to fig. 1. The components of the apparatus are contained in a housing 108.
Variations/alternatives
The above detailed description of the invention is presented as a non-limiting example of a preferred embodiment. Those skilled in the art, however, will realize that numerous variations and alternatives exist within the scope of the invention.
For example, it is stated that the electro-optical imaging unit comprises optical line scanners such as CIS sensors, i.e. one-dimensional scanners which require a movement over the text area concerned as a basis for generating two-dimensional image information. It will be realized by those skilled in the art that the electro-optical imaging unit may alternatively comprise a digital camera, which in one imaging operation maps the whole of the portion of text concerned, or substantial, two-dimensional parts of the portion of text.
Furthermore, the input unit for input of textual information is particularly specified as being an electro-optical imaging unit followed by a conversion unit for converting to textual information. It will be appreciated that this is expedient in order to achieve a hand-held apparatus for optical reading and subsequent presentation of text. It will be understood, however, that the invention may also comprise text input units which do not involve optical imaging, where the text, for example, can be retrieved directly from a digital communication source such as a computer, a digital storage medium or a network element in a computer network.
As mentioned earlier, the OCR process can be executed externally. To achieve this, an apparatus according to the invention, in addition to the components illustrated in fig. 1, is supplied with a transceiver, such as a WLAN module, in order to achieve two-way wireless communication with a computer network. The apparatus 100 can thereby communicate operatively with an external computer such as a PC via such a WLAN connection. Some functions can therefore be carried out by being executed in the external computer instead of in the processor 110. It will be particularly relevant to employ this network connection for transferring the scanned-in data in the form of imaging data to the external computer, so that the OCR analysis is executed in the external computer, whereupon the resulting text data are transferred back to the apparatus 100 via the communication link. This will permit a more powerful and faster OCR analysis than if the processing takes place in the local ARM processor 110, which has a weaker performance. Such a solution may appear seamless and invisible to the user, since the solution results in an apparent improvement in the characteristics of the apparatus when it is brought within the coverage area of a WLAN network where an external computer offering this function is provided.
The WLAN module may otherwise be employed in general for providing wireless communication instead of or in addition to the USB or RS232 connection illustrated in fig. 1. A possible example is to have the text-to-speech process carried out by means of the external computer.
For the sake of simplicity, the reading of user operating elements, particularly the forward and backward keys, is indicated as a process step in a sequential process. Those skilled in the art will easily appreciate that the registration of the activation of such an operating element may advantageously be practised as a hardware interrupt initiated by the operating element.
As an example, particular mention is made of the fact that the screen displays five lines of text, where the central line on which the user focuses, and which is highlighted, is located as the middle line, i.e. the third line. Within the scope of the invention it should be understood that a different number of text lines, for example 3, 4, 6, 7 or 8 is also possible. The central area to which the invention refers need not be exactly the middle line, and if the number of displayed lines is an even number, the central area will have to deviate from the exact vertical centre. The central area should therefore generally be understood to refer to an area of the screen 140 that is vertically located closer to the centre than one of the screen's upper or lower edges.
Those skilled in the art will appreciate that an ARM processor, for example of the type Intel Xscale (such as the suggested Intel PXA270 processor) is a suitable choice, since its attributes, particularly performance/power consumption, are suitable for use in hand-held, battery-operated units. Nevertheless, the skilled person will realize that other microprocessors, including microcontrollers, may be freely chosen within the scope of the invention based on the requirements that are considered to be practical for the implementation concerned. The same applies for the other circuits specified in the detailed description, including A/D converter, multiplexer, memory circuits, PLD and display module.
It is stated that the computer program comprising instructions which cause the processor to execute the method is contained in a memory. It should be understood, however, that the invention also comprises a computer program of this kind which is stored on a medium, for example an optical storage medium such as a CD-ROM, or which is carried by a propagated signal, for example by means of communication between computers in a network such as the Internet. An example of this is the signal that is transmitted during downloading via the network of such a computer program from a server.
From the above detailed description it is apparent that the step of highlighting a selected line element comprises the use of colours and/or contrasts between letters and background. Alternatively or in addition the highlighting may consist in controlling the screen's backlighting, particularly if an LCD screen is employed where the backlighting (for example of the LED type) can be controlled line by line. This kind of solution will also be advantageous with regard to achieving a saving in power consumption and thereby increased battery life.
Further variants and modifications will be obvious to those skilled in the art. The scope of the invention will therefore become apparent from the following patent claims and their equivalents.

Claims

1. A computer-implemented method for execution by a processor in an apparatus (100) for aiding a user with a reading disability, characterised in that the method comprises the following steps:
- inputting (210) a text data portion in a memory,
- segmenting (220) the portion of text into line elements,
- reading (230) user operating elements (160),
- selecting (240) a line element based on the reading of the user operating elements (160),
- displaying (224) a connected segment of the portion of text on the screen, the selected line element being placed in a central area of the screen, with the result that the screen displays the text data in a suitable manner for the reading-impaired user, depending on the user's operation of the user operating elements (160).
2. A method according to claim 1, where the display step (224) comprises placing the selected line element in the central area of the screen at all times.
3. A method according to claim 1 or 2, where the display step (224) comprises placing the selected line element in a central position in the screen's vertical direction.
4. A method according to one of the claims 1-3, where the step (230) of reading user operating elements (160) comprises reading a forward key (164) and a backward key (162).
5. A method according to claim 4, where the step (240) of selecting a line element comprises:
- if the forward key (164) is activated, selecting (244) the next line element, and
- if the backward key (162) is activated, selecting (248) the previous line element.
6. A method according to one of the claims 1-5, where the display step (224) further comprises highlighting the selected line element.
7. A method according to claim 4, where the segmentation step (220) further comprises segmenting the portion of text into word elements, where the step (240) of selecting a line element further comprises selecting a word element, and where the display step (224) further comprises highlighting the selected word element.
8. A method according to claim 7, where the step (240) of selecting a line element and a word element comprises:
- if the forward key (164) is activated, selecting the next word element as the chosen word element,
- if the forward key (164) is activated and the chosen word element is the last in a line element, selecting the next line element as the chosen line element,
- if the backward key (162) is activated, selecting the previous word element as the chosen word element, and
- if the backward key (162) is activated and the first word element in a line is selected, selecting the previous line element as the chosen line element.
9. A method according to one of the claims 1-8, where the step (224) of displaying a connected segment comprises displaying 5 line elements, where the selected line element constitutes the middle element of the 5 line elements.
10. A method according to one of the claims 1-9, where the step (210) of reading a text data portion in a memory comprises
- providing (212) an image of a piece of textual information,
- providing (214) the text data portion by converting the image to text data.
11. A method according to claim 10, where the step (212) of providing an image of a piece of textual information comprises
- inputting raw data from a first and a second parallel-mounted CIS sensor,
- pre-processing the said raw data, including determining correlation data and rotation data on the basis of the said raw data, and determining a rectangular image on the basis of the said rotation data and the said correlation data.
12. A computer program, contained in a memory, stored on a medium or carried by a propagated signal, characterised in that it comprises instructions which, when executed by a processor, cause the processor to execute a method as specified in one of the claims 1-11.
13. An apparatus (100) for aiding individuals with a reading disability, comprising
- an input unit (150, 152, 156) for inputting textual information,
- user operating elements (160),
- a screen (166),
- a memory (120, 130), and
- a processor (110) connected to the input unit, the user operating elements and the screen, characterised in that the processor (110) is arranged for executing a method as specified in one of the claims 1-11.
14. An apparatus according to claim 13, where the input unit (150, 152, 156) comprises an electro-optical imaging unit (150) for inputting image information, arranged for mapping a text data portion extending over a plurality of text lines.
15. An apparatus according to claim 11, 13 or 14, where the electro-optical imaging unit (150) comprises two CIS sensors.
16. An apparatus according to claim 11, 13 or 14, where the electro-optical imaging unit (150) comprises a digital camera.
17. An apparatus according to one of the claims 11-16, further comprising a text-to-speech device, arranged for presenting synthetic speech according to the line or word elements resulting from the user's operation of the forward/backward keys.
PCT/NO2006/000058 2005-02-14 2006-02-14 Aid for individuals with a reading disability WO2006085776A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NO20050783 2005-02-14
NO20050783A NO20050783L (en) 2005-02-14 2005-02-14 Aid for the Disabled

Publications (2)

Publication Number Publication Date
WO2006085776A1 true WO2006085776A1 (en) 2006-08-17
WO2006085776A9 WO2006085776A9 (en) 2007-01-11

Family

ID=35229581

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NO2006/000058 WO2006085776A1 (en) 2005-02-14 2006-02-14 Aid for individuals with a reading disability

Country Status (2)

Country Link
NO (1) NO20050783L (en)
WO (1) WO2006085776A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4797544A (en) * 1986-07-23 1989-01-10 Montgomery James R Optical scanner including position sensors
US5595445A (en) * 1995-12-27 1997-01-21 Bobry; Howard H. Hand-held optical scanner
US20030164819A1 (en) * 2002-03-04 2003-09-04 Alex Waibel Portable object identification and translation system
WO2003088134A1 (en) * 2002-04-11 2003-10-23 Carroll King Schuller Reading machine
US6707581B1 (en) * 1997-09-17 2004-03-16 Denton R. Browning Remote information access system which utilizes handheld scanner

Also Published As

Publication number Publication date
NO20050783L (en) 2006-08-15
NO20050783D0 (en) 2005-02-14
WO2006085776A9 (en) 2007-01-11

Similar Documents

Publication Publication Date Title
EP2300989B1 (en) Method and apparatus for automatically magnifying a text based image of an object
US5913072A (en) Image processing system in which image processing programs stored in a personal computer are selectively executed through user interface of a scanner
CN102685466B (en) Adaptive video catches solution code system
US8036895B2 (en) Cooperative processing for portable reading machine
EP2306270A1 (en) Character input method and system, electronic device and keyboard thereof
US8538087B2 (en) Aiding device for reading a printed text
EP1368803A4 (en) Cellular phone with built in optical projector for display of data
KR20120069699A (en) Real-time camera dictionary
EP0863475A2 (en) Character recognition device
JP2019008482A (en) Braille character tactile sense presentation device and image forming apparatus
JPH11272690A (en) Data display device, method therefor and recording medium recorded with data displaying program
EP2299387A1 (en) Device and method for recognizing and reading text out loud
JP4661909B2 (en) Information display device and program
WO2006085776A1 (en) Aid for individuals with a reading disability
US20140225997A1 (en) Low vision device and method for recording and displaying an object on a screen
JP2012080316A (en) Image composing device and image composing program
KR100506222B1 (en) Code recognition apparatus and method
TWI277937B (en) Image enlarging device
JP2002298078A (en) Character display, its control method, record medium, and program
JP2006268699A (en) Code reader
JPH04282609A (en) Extremely thin input/output integrated information processor
JP2000020677A (en) Image capture/massage display device
WO1999060516A1 (en) Recording of information
JPS63211972A (en) Image processor
EP0878951A1 (en) Personal computer based image processing system for controlling image processing through a user interface of a scanner

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06716734

Country of ref document: EP

Kind code of ref document: A1