US20060234602A1 - Figurine using wireless communication to harness external computing power - Google Patents
- Publication number
- US20060234602A1 (application Ser. No. 11/146,907)
- Authority
- US
- United States
- Prior art keywords
- figurine
- data
- computer
- translation
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H3/00—Dolls
- A63H3/003—Dolls specially adapted for a particular function not connected with dolls
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H3/00—Dolls
- A63H3/28—Arrangements of sound-producing means in dolls; Means in dolls for producing sounds
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H30/00—Remote-control arrangements specially adapted for toys, e.g. for toy vehicles
- A63H30/02—Electrical arrangements
- A63H30/04—Electrical arrangements using wireless transmission
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H2200/00—Computerized interactive toys, e.g. dolls
Definitions
- the invention relates to figurines such as stuffed animals, teddy bears, dolls, toy robots, action figures, and the like, and more particularly, to figurines that include electronics.
- the term “figurine” refers to a doll, a teddy bear, a stuffed animal, a toy robot, a toy statue, an action figure, and the like.
- Figurines are commonly used by children to pass the time and facilitate imaginative thought.
- more advanced computerized figurines have been developed. These more advanced figurines, for example, may incorporate electronics that allow the figurine to interact with the child.
- the invention is directed to a system including a figurine that utilizes wireless communication to harness computing power of an external computer.
- applications that require intensive processing power can be seemingly executed by the figurine, with the intensive processing actually being performed by the external computer.
- the figurine may capture input and wirelessly transfer the input to an external computer, which processes the input.
- the external computer returns output to the figurine, which presents the output to a child.
- the invention provides a system comprising a figurine that captures input from a user and wirelessly communicates the input.
- the input can be image data, for example, or audio data such as speech data.
- the system also includes a computer that receives the input from the figurine, generates a response to the input, and wirelessly communicates the response to the figurine. The figurine then outputs the response to the user.
- the invention provides a system comprising a figurine that captures speech data from a user and wirelessly communicates the speech data.
- the system also includes a computer that receives the speech data from the figurine, generates a translation of the speech data, and wirelessly communicates the translation to the figurine.
- the figurine outputs the translation to the user.
- the invention provides a system comprising a figurine that captures image data from a user and wirelessly communicates the image data, wherein the image data includes one or more words or phrases.
- the system also includes a computer that receives the image data from the figurine, generates a translation of the words or phrases, and wirelessly communicates the translation to the figurine.
- the figurine outputs the translation to the user.
- the invention provides a system comprising a figurine that captures image data from a user and wirelessly communicates the image data, wherein the image data includes one or more words or phrases.
- the system also includes a computer that receives the image data from the figurine, generates audio data corresponding to the words or phrases, and wirelessly communicates the audio data to the figurine.
- the figurine outputs the audio data to the user.
- the invention provides an interactive toy figurine comprising a data capture device and a wireless transmitter/receiver to wirelessly transfer data captured by the data capture device and receive output associated with the data captured by the data capture device.
- the data capture device may be an image capture device to capture image data, such as a camera deployed in one or both of the eyes of the toy figurine, or elsewhere.
- a method comprises capturing speech data from a user at a figurine, and wirelessly communicating the speech data to an external computer. The method also comprises receiving from the external computer a response to the speech data, and outputting the response to the user from the figurine.
- a method comprises capturing speech data from a user at a figurine, and wirelessly communicating the speech data to an external computer. The method also comprises receiving from the external computer a translation of the speech data, and outputting the translation to the user from the figurine.
- a method comprises capturing image data with a figurine, and wirelessly communicating the image data to an external computer.
- the image data includes one or more words or phrases.
- the method also comprises receiving from the external computer a translation of the words or phrases, and outputting the translation from the figurine.
- a method comprises capturing image data with a figurine, and wirelessly communicating the image data to an external computer.
- the image data includes one or more words or phrases.
- the method also comprises receiving from the external computer audio data corresponding to the words or phrases, and outputting the audio data from the figurine.
- a system comprises a figurine that captures input and wirelessly communicates the input.
- the system also includes a computer that receives the input from the figurine, generates output based on the input, and wirelessly communicates the output to the figurine.
- the figurine presents the output to a user.
- a system comprises a figurine communicatively coupled to a computer, which is in turn communicatively coupled to a server via a network.
- the figurine provides input to the computer and receives output from the computer.
- the computer can receive software updates from the server such that functionality of the figurine can be changed or expanded via computer software upgrades.
- upgrades may also be loaded on the computer via a conventional disk or other storage medium, in which case, communication with the server would not be necessary.
- a system comprises a figurine communicatively coupled to a computer.
- the system includes one or more system compatible objects that the figurine can interact with, harnessing the power of the computer.
- the compatible objects may include indicia identifiable by the figurine, which can ensure that the software on the computer can provide useful interaction between the figurine and the object.
- a system comprises a figurine, a computer, and a parent unit.
- the parent unit may comprise a software module on the computer, or a separate hardware device.
- the parent unit allows parents to exert parental control over the functionality of the figurine by interacting with software modules on the computer that control operation and interactive features of the figurine.
- the parent unit may also function as a baby monitor, e.g., a smart baby monitor that can generate an alarm if a baby in proximity to the figurine ceases to breathe, or has other detectable problems.
- FIG. 1 is a conceptual diagram illustrating a figurine wirelessly communicating with a computer.
- FIGS. 2 and 3 are block diagrams of a figurine wirelessly communicating with a computer.
- FIGS. 4-6 are flow diagrams according to embodiments of the invention, illustrating application of the invention to translation of spoken or written messages.
- FIG. 7 is a conceptual diagram illustrating a figurine wirelessly communicating with a computer via a wireless hub.
- FIG. 8 is a conceptual diagram illustrating a figurine wirelessly communicating with a computer through a network.
- FIG. 9 is a conceptual diagram illustrating a figurine wirelessly communicating with a computer and a compatible object.
- FIG. 10 is a conceptual diagram illustrating a figurine wirelessly communicating with a computer, with a parents' unit.
- FIG. 11 is a conceptual diagram illustrating a system in which a server communicates with clients that wirelessly communicate with figurines.
- the invention is directed to a system including a figurine that utilizes wireless communication to harness computing power of an external computer.
- certain applications that require intensive processing power can be seemingly executed by the figurine, with the intensive processing actually being performed external to the figurine in another computer.
- the figurine may capture audio data, video data or both, and wirelessly transfer the captured data to the external computer, either directly or via a network.
- Audio data includes, but is not limited to, speech data, voice data and music data.
- the external computer receives the data as input, processes the data, generates output based on the input data, and transfers the output to the figurine. The output can then be presented to a child as though the figurine processed and generated the output directly.
- FIG. 1 is a diagram illustrating a system 10 according to an embodiment of the invention.
- System 10 generally includes a figurine 12 such as a doll, teddy bear, stuffed animal, toy robot, toy statue, action figure or the like.
- System 10 also includes an external computer 14 such as a personal computer (PC), Macintosh, workstation, laptop, notebook, palm computer, or any other computer external to figurine 12 .
- Figurine 12 and external computer 14 communicate either directly or indirectly via one or more wireless communication links 16 .
- external computer 14 may be networked to one or more wireless hubs or other devices that facilitate wireless communication.
- Figurine 12 harnesses the computing power of external computer 14 in order to facilitate execution of processor-intensive and/or memory-intensive applications.
- a child can interact with figurine 12 .
- figurine 12 can facilitate learning and provide instruction and guidance to the child.
- the computing power and memory needed in figurine 12 can be significantly reduced. Accordingly, the need to protect processors and/or memory from misuse by a child handling figurine 12 can also be reduced. Also, the power used by figurine 12 can be reduced, prolonging battery life within figurine 12 .
- a child can utter the word “travel” to figurine 12 , which captures the utterance and wirelessly communicates the captured speech to external computer 14 .
- External computer 14 parses the captured speech and generates one or more definitions, which are communicated back to figurine 12 .
- Figurine 12 can output the definition by, for example, responding “the word ‘travel’ means to go on a trip.”
- figurine 12 may be capable of holding intelligent conversation with the child by harnessing the computing power of external computer 14 .
- the child may speak to figurine 12 , which captures the speech and wirelessly communicates the captured speech to external computer 14 .
- External computer 14 parses the speech and generates one or more responses, which are communicated back to figurine 12 .
- Figurine 12 can then output the responses to the child.
- Software executing on external computer 14 may adapt over time to the questions posed by the child, and may also be upgradeable. Upgrades to software on external computer 14 , for example, may cause figurine 12 to appear to grow intellectually with the child.
- a child can utter the words “I love you” to figurine 12 , which captures the utterance and wirelessly communicates the captured speech to external computer 14 .
- External computer 14 parses the child's message and generates one or more responses, which are communicated back to figurine 12 .
- Figurine 12 can output the response by, for example, responding “I love you, too.” Or the child can utter the question “What is a triangle?” to figurine 12 , which can respond “A triangle is a shape that has three sides.”
- figurine 12 may facilitate translation of words spoken by the child.
- the child may speak to figurine 12 , which captures the speech and wirelessly communicates the captured speech to external computer 14 .
- External computer 14 parses the speech and identifies a translation of the words or phrases spoken by the child.
- External computer 14 then communicates the translation back to figurine 12 , so that figurine 12 can output the translation to the child.
- figurine 12 serves as an interpreter.
- a child can utter the words “Thank you” to figurine 12 , which captures the utterance and wirelessly communicates the captured speech to external computer 14 .
- External computer 14 parses the utterance and identifies a translation of the phrase.
- External computer 14 then communicates the translation back to figurine 12 , and figurine 12 can output the translation by, for example, responding with “‘Gracias’ means ‘thank you’ in Spanish.”
- figurine 12 may facilitate translation of written words.
- one or more image capture devices such as digital cameras, may be located in the eyes 18 of figurine 12 .
- the child may present words or phrases to figurine 12 by directing the eyes 18 of figurine 12 toward the words or phrases.
- the child may press a button (not shown) on figurine 12 to capture the words or phrases being “viewed” by the figurine.
- a captured image of the words or phrases can be wirelessly communicated to external computer 14 .
- External computer 14 parses the words or phrases, and identifies a translation of the words or phrases.
- External computer 14 then communicates the translation back to figurine 12 , so that figurine 12 can output the translation.
- a display (not shown) may also be incorporated into figurine 12 to present the child with the captured words being translated.
- the display, for example, may be located anywhere on figurine 12 , but is preferably located on the back of figurine 12 so that the words can be viewed by the child as eyes 18 of figurine 12 are directed away from the child, toward a page to be read.
- figurine 12 harnesses the computing power of external computer 14 to perform image processing unrelated to words and phrases.
- One or more image capture devices such as digital cameras, may be located in the eyes 18 of figurine 12 , and can capture image data to be processed by external computer 14 .
- Image processing can include recognition of faces, objects, colors, numbers, places, activities, and the like.
- figurine 12 can seem to recognize the person or persons interacting with figurine 12 .
- Figurine 12 can use the recognition in its interaction by, for instance, calling a child by name.
- figurine 12 can seem to recognize objects and attributes of objects such as shape, type or quantity.
- figurine 12 can teach a child to recognize shapes, count objects, become familiar with colors, and the like.
- Voice recognition applications refer to applications that identify who is talking and may allow for programmed figurine interaction only with those persons associated with a recognized voice.
- Speech recognition applications refer to applications that recognize what is being said and may be generally used with any voice.
- the invention may utilize both speech and voice applications together in order to determine what is being said and who is speaking. This can improve interaction with figurine 12 such that figurine 12 may only respond to the child for which it is programmed to respond.
- with voice recognition, a child could say “sing me a song” or “tell me a story” and the figurine may select a song or story from a library and respond to the recognized voice, as directed. Responses from the figurine to others, however, may be limited or prohibited if the requesting voice is not recognized.
- FIG. 2 is a functional block diagram of system 10 including a figurine 12 and an external computer 14 .
- figurine 12 uses wireless communication in order to harness the processing horsepower of external computer 14 .
- complex applications can be performed by computer 14 , yet presented by figurine 12 to a user.
- Sounds or images detected by input device 22 may be processed locally by local central processing unit (CPU) 24 in order to facilitate communication of the data to external computer 14 .
- local CPU 24 may package the captured input for transmission to external computer 14 .
- Local CPU 24 may also control transmitter/receiver 26 to cause transmission of data indicative of the sounds or images detected by input device 22 .
- Local CPU 24 may comprise a relatively simple controller implemented in an application-specific integrated circuit (ASIC). If images are captured by figurine 12 , local CPU 24 may compress the image file to simplify wireless transfer of the image file. In any case, transmitter/receiver 26 transfers data collected by figurine 12 so that the data can be processed external to figurine 12 .
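The packaging step performed by local CPU 24 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the frame layout (a length-prefixed JSON header followed by a zlib-compressed body) is invented for the example.

```python
import json
import zlib


def package_capture(payload: bytes, kind: str) -> bytes:
    """Compress captured data and prepend a small header, roughly as
    local CPU 24 might before handing a frame to transmitter/receiver 26.
    (Frame format is a hypothetical choice for this sketch.)"""
    body = zlib.compress(payload)
    header = json.dumps({"kind": kind, "size": len(body)}).encode()
    return len(header).to_bytes(2, "big") + header + body


def unpack_capture(frame: bytes) -> tuple[str, bytes]:
    """Reverse of package_capture, as the external computer might run
    after receiving the frame."""
    hlen = int.from_bytes(frame[:2], "big")
    header = json.loads(frame[2 : 2 + hlen])
    body = frame[2 + hlen : 2 + hlen + header["size"]]
    return header["kind"], zlib.decompress(body)


# A dummy "image" capture: compressible data shrinks for wireless transfer.
frame = package_capture(b"\x00" * 1024, "image")
kind, data = unpack_capture(frame)
```

The compression step matters because wireless bandwidth, not processing power, is the figurine's scarce resource in this design.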
- transmitter/receiver 26 of figurine 12 and transmitter/receiver 27 of external computer 14 may conform to any of a wide variety of wireless communication protocols. Examples include, but are not limited to a wireless networking standard such as one of the IEEE 802.11 standards, a standard according to the Bluetooth Special Interest Group, or the like.
- many wireless networking standards have been developed, and additional extensions to the IEEE 802.11 standard, as well as other wireless standards, will likely emerge in the future.
- the invention is not limited to the type of wireless communication techniques used, and could also be implemented with wireless protocols similar to those used for cell phone communication or direct two-way pagers, or any other wireless protocol, either known or later developed.
- Transmitter/receiver 27 of external computer 14 receives data sent by transmitter/receiver 26 of figurine 12 .
- Remote CPU 28 performs extensive processing on the data to generate output.
- remote CPU 28 may comprise a general purpose microprocessor that executes software to generate the output. The output is then transmitted back to figurine 12 . Output device 23 of figurine 12 can then present the output to the user.
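The division of labor described above — simple capture on the figurine, heavy processing on remote CPU 28 — can be illustrated with a toy dispatcher. The handler names and the DEFINITIONS table are hypothetical stand-ins for the software the patent leaves unspecified.

```python
# Hypothetical dictionary standing in for software on external computer 14.
DEFINITIONS = {"travel": "the word 'travel' means to go on a trip"}


def remote_process(kind: str, data: str) -> str:
    """Stand-in for remote CPU 28: dispatch input wirelessly received
    from the figurine to a handler and return the output to transmit
    back via transmitter/receiver 27."""
    handlers = {
        "define": lambda w: DEFINITIONS.get(
            w.lower(), "I don't know that word yet."
        ),
    }
    return handlers[kind](data)


# Figurine side: capture the utterance, offload it, present the reply.
reply = remote_process("define", "travel")
```

The figurine itself only needs the thin capture/transmit/output loop; everything inside `remote_process` can be upgraded on the computer without touching the toy.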
- the local electronics of figurine 12 can be greatly simplified.
- the need for intensive processing power and a large amount of memory in figurine 12 can be avoided.
- the need to protect powerful processors and memory from misuse by a child handling the figurine can also be avoided.
- Battery power in figurine 12 can also be extended by performing processing tasks externally in computer 14 .
- software upgrades may be easily implemented for execution by remote CPU 28 without requiring upgrade of the components of figurine 12 .
- figurine 12 may present a speech recognition application to the child, e.g., a program that teaches the child the meanings of one or more words or phrases. In that case, the child may speak to figurine 12 , and input device 22 can capture the speech.
- Local CPU 24 packages the speech and causes transmitter/receiver 26 to wirelessly communicate the captured speech to external computer 14 .
- Remote CPU 28 parses the speech and generates one or more meanings, which are communicated back to figurine 12 by transmitter/receiver 27 .
- Output device 23 of figurine 12 can then output the meanings to the child.
- figurine 12 may be capable of holding intelligent conversation with the child by harnessing the remote CPU 28 of external computer 14 .
- the child may speak to figurine 12 , and input device 22 can capture the speech.
- Local CPU 24 packages the speech and causes transmitter/receiver 26 to wirelessly communicate the captured speech to external computer 14 .
- Remote CPU 28 parses the speech and generates one or more responses, which are communicated back to figurine 12 by transmitter/receiver 27 .
- Output device 23 of figurine 12 can then output the responses to the child.
- figurine 12 may respond with an intelligent answer.
- figurine 12 may help the child with reading.
- input device 22 in the form of an image capture device may capture images of a page.
- Local CPU 24 packages the image and causes transmitter/receiver 26 to wirelessly communicate the captured image to external computer 14 .
- Local CPU 24 may also compress the image prior to transmission.
- remote CPU 28 parses the image and generates one or more meanings, which are communicated back to figurine 12 by transmitter/receiver 27 .
- remote CPU 28 may perform character recognition on the image in order to identify characters, and may then decipher the meaning of the identified characters using one or more dictionaries stored in memory and accessible by remote CPU 28 .
- Output device 23 of figurine 12 can then output the meanings to the child. In this sense, figurine 12 may appear to be reading to the child.
- Output device 23 may include speakers for verbal output and possibly a display to present the child with the captured words being read by figurine 12 .
- figurine 12 may facilitate translation of words spoken by the child.
- the child may speak to figurine 12
- input device 22 can capture the speech.
- Local CPU 24 packages the speech and causes transmitter/receiver 26 to wirelessly communicate the captured speech to external computer 14 .
- Remote CPU 28 parses the speech and identifies a translation of the words or phrases spoken by the child, which is communicated back to figurine 12 by transmitter/receiver 27 .
- Output device 23 of figurine 12 can then output the translation to the child.
- figurine 12 serves as an interpreter.
- Interaction between a user and figurine 12 can be proactive as well as reactive.
- external computer 14 can cause figurine 12 to take action that is not in response to action by a user.
- figurine 12 may serve as an alarm clock, telling a child that it is time to get out of bed.
- Figurine 12 may also proactively remind a user of the day's appointments, birthdays of friends or relatives, and the like.
- figurine 12 may remind a user to take medication. Any such alarms or reminders may be standard audio tones, music, or possibly programmed or recorded audio of a familiar voice, making figurine 12 speak with a pleasant tone to the user when providing reminders.
- a parent's voice may be recorded such that figurine 12 speaks with such recordings.
- Voice emulation software may also be used by computer 14 so that figurine 12 speaks new words or phrases in a voice that emulates that of the parents.
- FIG. 3 is a more detailed block diagram of system 30 illustrating application of the invention to one of the example applications described above, in particular, translation of written words.
- System 30 may correspond to system 10 ( FIGS. 1 and 2 ).
- image capture device 33 may capture images of a page.
- image capture device 33 may comprise a digital camera located in the eyes of figurine 32 so that when a child directs the eyes of figurine 32 toward a page and presses an actuator, the image of the page is captured.
- the actuator for example, may be disposed on the back of figurine 32 so that when the eyes of figurine 32 are directed toward a page, the actuator is easily accessible.
- the image capture device and actuator may be deployed in other locations on figurine 32 .
- Local CPU 34 packages the image and causes transmitter/receiver 36 to wirelessly communicate the captured image to external computer 31 .
- Remote CPU 38 parses the image and generates a translation, which is communicated back to figurine 32 by transmitter/receiver 37 .
- Remote CPU 38 may invoke software modules 39 , 40 to perform optical character recognition and translation, respectively. Once the image has been translated and the translation has been communicated back to figurine 32 , output device 35 of figurine 32 can then output the translation.
- Optical character recognition module 39 may recognize English, and translator module 40 may translate from English to Spanish. Any other languages, however, could also be supported.
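The two-stage pipeline — optical character recognition module 39 feeding translator module 40 — might be sketched like this. The `ocr` and `translate_en_es` functions are trivial stand-ins for real OCR and translation engines, and the phrase table is invented for illustration.

```python
def ocr(image_lines: list[str]) -> str:
    """Stand-in for optical character recognition module 39. Here the
    'image' is already lines of recognized text, so we just join them;
    a real module would decode pixel data."""
    return " ".join(image_lines)


# Hypothetical English-to-Spanish lookup table for the sketch.
EN_TO_ES = {"the": "el", "dog": "perro", "cat": "gato"}


def translate_en_es(text: str) -> str:
    """Stand-in for translator module 40 (English to Spanish),
    translating word by word and passing unknown words through."""
    return " ".join(EN_TO_ES.get(w, w) for w in text.lower().split())


def process_page(image_lines: list[str]) -> str:
    """The remote CPU's chained pipeline: OCR, then translation."""
    return translate_en_es(ocr(image_lines))
```

Chaining the modules on the external computer is what lets the split described next — OCR on the figurine, translation remotely — remain an optional optimization rather than a requirement.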
- optical character recognition may be performed locally at figurine 32 , with the more processor-intensive translation being performed by external computer 31 .
- the various modules and components described herein may be implemented in hardware, software, firmware, or any combination.
- the invention is not limited to any particular hardware or software implementation. If implemented in software, the modules may be stored on a computer readable medium such as memory or a non-volatile storage medium.
- a computer readable medium such as memory or a non-volatile storage medium.
- one advantage of the techniques described herein is that the need for large amounts of memory in a figurine can be avoided. Instead, the memory needed to execute very memory-intensive applications is included in an external computer.
- the exemplary applications described above are not exclusive of one another.
- External computer 31 can execute any combination of translation, voice and speech recognition, image processing or other types of software modules, and the invention is not limited to systems that perform a single application to the exclusion of other applications.
- an advantage of the invention is its versatility. The invention can be adapted to implement one or more applications as desired by the user.
- FIGS. 4-6 are flow diagrams according to some embodiments of the invention, illustrating application of the invention to one of the example applications described above, in particular, translation of spoken or written messages.
- figurine 12 performs speech capture ( 41 ), and then transmits speech data to external computer 14 ( 42 ).
- External computer 14 receives the speech data ( 43 ) and may perform speech recognition ( 44 ) to identify the spoken words or phrases.
- External computer 14 performs translation with respect to the identified spoken words or phrases ( 45 ) and transmits the translation back to figurine 12 ( 46 ).
- Figurine 12 receives the translation ( 47 ) and outputs the translation to the user.
- Figurine 12 may drive an output device such as a display screen, thereby providing a written output.
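The FIG. 4 flow can be summarized in a short sketch; the numbered comments track the figure's reference numerals, while the recognizer and phrasebook are hypothetical placeholders for real speech-recognition and translation software.

```python
# Hypothetical phrasebook for the translation step (45).
PHRASEBOOK = {"thank you": "gracias", "good morning": "buenos dias"}


def recognize(speech_data: bytes) -> str:
    # (44) stand-in for real speech recognition on external computer 14
    return speech_data.decode().lower()


def translate(phrase: str) -> str:
    # (45) dictionary lookup in place of a full translation engine
    return PHRASEBOOK.get(phrase, phrase)


def figurine_round_trip(speech_data: bytes) -> str:
    """One pass through FIG. 4: the figurine captures (41) and transmits
    (42); the computer receives (43), recognizes (44), translates (45),
    and transmits back (46); the figurine receives and outputs (47)."""
    phrase = recognize(speech_data)   # (43)-(44)
    return translate(phrase)          # (45)-(46)


output = figurine_round_trip(b"Thank you")   # (41)-(42), then (47)
```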
- figurine 12 captures an image ( 51 ), and then transmits the image to external computer 14 ( 52 ).
- External computer 14 receives the image ( 53 ) and decodes the image ( 54 ), e.g., by performing optical character recognition.
- External computer then translates the characters to generate a translation ( 55 ).
- External computer 14 transmits the translation back to figurine 12 ( 56 ).
- Figurine 12 receives the translation ( 57 ) and outputs the translation to the user ( 58 ).
- figurine 12 acts as a translator of written words or phrases, invoking an external computer 14 to reduce the local processing at figurine 12 .
- the translation may be output in audio, video, or both.
- figurine 12 captures an image that includes written words or phrases ( 61 ), and then transmits the images to external computer 14 ( 62 ).
- External computer 14 receives the image ( 63 ) and decodes the image ( 64 ) e.g., by performing optical character recognition to identify written words or phrases.
- External computer 14 then generates an audio signal ( 65 ) as a function of the identified words or phrases.
- External computer 14 transmits the audio signal back to figurine 12 ( 66 ).
- Figurine 12 receives the audio signal ( 67 ) and outputs the audio to the user. In this manner, figurine 12 appears to be reading the written words or phrases, by invoking an external computer 14 to reduce the local processing at figurine 12 .
- FIG. 8 illustrates another system 80 , similar to system 10 , in which figurine 82 wirelessly communicates to take advantage of computing power of external computer 84 .
- figurine 82 wirelessly communicates with external computer 84 via a wireless hub 85 that couples to external computer 84 via network 86 .
- wireless hub 85 communicates wirelessly with figurine 82 , and is coupled to external computer 84 via network 86 .
- Network 86 may comprise a small local area network (LAN), a wide area network, or even a global network such as the Internet.
- Communication between hub 85 and external computer 84 may be, but need not be, wireless.
- the wireless capabilities of figurine 82 allow for communication with external computer 84 , thereby allowing figurine 82 to make use of the processing capabilities of external computer 84 .
- when a figurine is configured to communicate with a global network such as the Internet, as depicted in FIG. 8 or 11 , the figurine can serve as an input-output device for interaction with the network and other stations or servers coupled to network 86 .
- figurine 82 reports information obtained from one or more network servers (not shown). For example, a child could ask figurine 82 , “What is the weather forecast for today?” The request is relayed to external computer 84 , which accesses a server via network 86 that can provide the local forecast. Upon retrieving the local forecast, external computer 84 supplies that information to figurine 82 , which answers the child's question.
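The forecast example might look like the following sketch, where `network_lookup` stands in for a query to a server reachable via network 86 (a real system would fetch the data over the network rather than from a local table).

```python
def network_lookup(query: str) -> str:
    """Stand-in for a server on network 86; the table below is
    invented data for the sketch."""
    servers = {"weather": "Sunny, high of 72"}
    return servers.get(query, "no data")


def answer_child(question: str) -> str:
    """External computer 84's role: map the child's question to a
    network query, then relay the result back to figurine 82."""
    if "weather" in question.lower():
        return "Today's forecast: " + network_lookup("weather")
    return "I'm not sure."
```

The figurine never knows a network is involved; it simply transmits the captured question and speaks whatever answer comes back.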
- the invention supports wireless communication in other configurations as well.
- the figurine need not be directly coupled to the external computer in order to take advantage of the computing resources of the external device.
- Figurine 82 may communicate wirelessly with any intermediate device, such as another figurine, a wireless access point, or a computer that does not serve as external computer 84 .
- FIG. 9 is a diagram illustrating a system 90 according to an additional embodiment of the invention.
- System 90 includes a figurine 92 and an external computer 94 , which communicate either directly or indirectly via one or more wireless communication links.
- system 90 includes a compatible object 95 , embodied in FIG. 9 as a book.
- Compatible object 95 can be an object of any type, but in a typical implementation, compatible object 95 is an accessory for figurine 92 .
- Compatible object 95 includes a wireless identifier, by which a detector in figurine 92 can detect the presence of compatible object 95 .
- a wireless identifier is a radio frequency identification (RFID) tag 96 .
- RFID tag 96 may be hidden in compatible object 95 and not readily observable to a user.
- An RFID tag reader 98 in figurine 92 detects and reads RFID tag 96 . Bar codes or other indicia might also be used, in which case reader 98 would facilitate the reading of such indicia.
- RFID tag reader 98 may “interrogate” RFID tag 96 by directing an electromagnetic (i.e., radio) signal to RFID tag 96 .
- RFID tag 96 may include, but need not include, an independent power source.
- RFID tag 96 receives power from the interrogating signal from RFID tag reader 98 .
- RFID tag 96 may perform certain operations, which may include transmitting data stored in the memory of the RFID tag 96 to RFID tag reader 98 .
- the transmitted data may include an identification of compatible object 95 .
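The interrogation and compatibility check can be sketched minimally as below; the tag IDs and memory contents are invented, and a real RFID exchange would of course happen over radio rather than through a dictionary lookup.

```python
# Hypothetical tag memory: the data a tag might return when interrogated.
TAG_MEMORY = {
    0x1F: {"object": "book", "title": "The Tortoise and the Hare"},
}


def interrogate(tag_id: int) -> dict:
    """Stand-in for RFID tag reader 98: the interrogating signal powers
    RFID tag 96, which replies with the data stored in its memory."""
    return TAG_MEMORY.get(tag_id, {"object": "unknown"})


def is_compatible(tag_id: int) -> bool:
    """Figurine 92 / external computer 94 side: accept only objects
    whose identity is recognized, avoiding frustrating mismatches."""
    return interrogate(tag_id).get("object") != "unknown"
```

Once `interrogate` returns a known identity, the external computer can load content keyed to that object, e.g., commentary for the identified book.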
- figurine 92 may exert better control over the objects used to interact with figurine 92, and can help ensure that a child will not become frustrated, e.g., if figurine 92 were used with an incompatible book or object.
- RFID tag reader 98 identifies RFID tag 96
- external computer 94 becomes aware of compatible object 95 proximate to figurine 92 .
- External computer 94 can use the identity of compatible object 95 to communicate with a user more effectively.
- exemplary compatible object 95 in FIG. 9 is a book.
- external computer 94 learns the identity of the book, external computer 94 can generate output appropriate for that book.
- Figurine 92 may, for example, direct the attention of a child to illustrations shown in the book, and explain how the illustrations pertain to the story.
- Figurine 92 can also describe how the story relates to other books, such as other books dealing with the same characters, or figurine 92 can explain background information about the story or its author.
- FIG. 10 is a diagram illustrating a system 100 according to another embodiment of the invention.
- System 100 includes a figurine 102 and an external computer 104 , which communicate either directly or indirectly via one or more wireless communication links.
- system 100 includes a parents' unit 106 .
- Parents' unit 106 may comprise any device, including, but not limited to, a television, a computer, a telephone, a speaker, a video monitor and the like.
- Parents' unit 106 may communicate with external computer 104 in any fashion, such as by an electrical connection, by an optical link, or by radio frequency.
- System 100 is configured to serve as a child monitoring system.
- a parent can deploy figurine 102 proximate to a child so that figurine 102 can capture video information or audio information or both about that child.
- Figurine 102 transmits the captured information to external computer 104 .
- External computer 104 in turn sends information to parents' unit 106 .
- the parent may also communicate in real time to the child through figurine 102 , e.g., by speaking into a microphone of parents' unit 106 .
- Parents' unit 106 may be a separate unit, or may be implemented as a software module that executes directly on external computer 104 .
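The two-way monitoring relay above can be sketched as follows. All class and method names are hypothetical; a real system would carry the audio over wireless links rather than direct calls.

```python
# Illustrative sketch: captured audio flows figurine -> external computer ->
# parents' unit, and a parent's speech flows back the other way in real time.

class ParentsUnit:
    def __init__(self):
        self.received = []

    def play(self, audio):
        self.received.append(audio)

class ExternalComputer:
    def __init__(self, parents_unit):
        self.parents_unit = parents_unit

    def forward_to_parent(self, audio):
        self.parents_unit.play(audio)

class Figurine:
    def __init__(self, computer):
        self.computer = computer
        self.spoken = []

    def capture(self, audio):
        # Wireless transfer of captured audio to the external computer.
        self.computer.forward_to_parent(audio)

    def speak(self, audio):
        # Real-time audio from the parent, output through the figurine.
        self.spoken.append(audio)

unit = ParentsUnit()
computer = ExternalComputer(unit)
figurine = Figurine(computer)

figurine.capture("child babbling")            # monitoring direction
figurine.speak("Time for your nap, sweetie")  # parent-to-child direction
```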
- FIG. 11 is a diagram illustrating a system 110 according to another embodiment of the invention.
- System 110 is a server-client system in which a server 112 supplies one or more functionalities to one or more client figurine-computer systems 114 , 116 .
- Server 112 manages a database 113 that stores software that can provide figurines with one or more functionalities.
- Client figurine-computer systems 114 , 116 download one or more functionalities from server 112 via a network 118 .
- Network 118 may comprise any network, including a global network such as the Internet.
- Examples of functionalities include, but are not limited to, the functionalities described herein.
- the owner of client figurine-computer system 114 may desire that her child's figurine 122 should be capable of helping teach her child about numbers, letters, basic shapes and basic colors. In addition, her child's figurine 122 should be capable of reciting stories suitable for a child four years of age. Accordingly, the owner of client figurine-computer system 114 downloads software for such functionalities from server 112 via network 118 .
- the software is stored locally at external computer 120 .
- the owner of client figurine-computer system 116 may desire that his child's figurine 124 should be capable of reading a book, helping teach his child to speak and write in English and Spanish, and playing games appropriate for a child six years of age. Accordingly, the owner of client figurine-computer system 116 downloads software for such functionalities from server 112 via network 118 .
- each parent can customize his or her child's figurine for the child's age, needs or desires. As the child develops, the parent can obtain more advanced functionality. Further, as new functionalities are developed and added to database 113 , the parents can download the new functionalities. As a result, the figurines seem to “grow” with the children, and can be enabled to perform new or more sophisticated functions. Because the new and more advanced functionality can be executed in external computer 120 , the need to upgrade figurine 122 may be avoided, which can be important to a child that has become emotionally attached to figurine 122 .
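The server-client customization above can be sketched as follows, using the two parents' configurations from the example. The module names and database layout are illustrative assumptions only; the point is that capability lives on the external computer and can grow without any change to the figurine itself.

```python
# Hypothetical sketch: parents select functionality modules from a server
# database, and the external computer stores them locally.

SERVER_DATABASE = {
    "letters_and_numbers": "teach letters, numbers, shapes, colors",
    "stories_age_4": "recite stories suitable for age four",
    "bilingual_en_es": "teach speaking and writing in English and Spanish",
    "games_age_6": "play games appropriate for age six",
}

class ExternalComputer:
    def __init__(self):
        self.installed = {}

    def download(self, module_names):
        # Download selected functionality from the server via the network.
        for name in module_names:
            if name in SERVER_DATABASE:
                self.installed[name] = SERVER_DATABASE[name]

    def supports(self, module_name):
        return module_name in self.installed

# First parent's configuration (four-year-old):
computer_a = ExternalComputer()
computer_a.download(["letters_and_numbers", "stories_age_4"])

# Second parent's configuration (six-year-old):
computer_b = ExternalComputer()
computer_b.download(["bilingual_en_es", "games_age_6"])
```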
- FIG. 1 may depict a single figurine with a single external computer
- the invention encompasses embodiments in which a single external computer interacts with two or more figurines.
- a parent with two children can, for example, give a different figurine to each child, and each figurine can communicate wirelessly with the same or different external computers.
- Each child will perceive that each figurine operates independently of the other.
- each figurine may be separately empowered with functionality appropriate for each child.
- the invention may offer one or more advantages.
- a child's toy can be very versatile, capable of a wide range of functionality.
- the functionality can be customized to the child, and can change as the child develops.
- the invention supports an interesting and adaptable system that can help a child learn a wide range of subjects, making interaction with the figurine not only fun, but educational as well.
Abstract
The invention is directed toward a figurine that utilizes wireless communication to harness computing power of an external computer. The figurine may capture visual or audible input and wirelessly transfer the input to the external computer, either directly or via a network. The external computer processes the input, generates output, and transfers the output to the figurine. The output can then be presented to a child as though the figurine processed and generated the output directly.
Description
- This Application claims the benefit of U.S. Provisional Patent Application No. 60/578,101, filed Jun. 8, 2004, the entire content of which is incorporated herein by reference.
- This invention was made with Government support under contract number N00014-02-C-0122 awarded by the Office of Naval Research. The government has certain rights in the invention.
- The invention relates to figurines such as stuffed animals, teddy bears, dolls, toy robots, action figures, and the like, and more particularly, to figurines that include electronics.
- In this disclosure, the term “figurine” refers to a doll, a teddy bear, a stuffed animal, a toy robot, a toy statue, an action figure, and the like. Figurines are commonly used by children to pass the time and facilitate imaginative thought. In recent times, more advanced computerized figurines have been developed. These more advanced figurines, for example, may incorporate electronics that allow the figurine to interact with the child.
- In general, the invention is directed to a system including a figurine that utilizes wireless communication to harness computing power of an external computer. In particular, applications that require intensive processing power can be seemingly executed by the figurine, with the intensive processing actually being performed by the external computer. The figurine may capture input and wirelessly transfer the input to an external computer, which processes the input. The external computer returns output to the figurine, which presents the output to a child.
- Speech recognition applications, speech interpretation applications, image processing applications, voice recognition applications, and language translation applications are some examples of applications that typically require intensive processing power and large amounts of memory. The invention contemplates a figurine that utilizes wireless communication to harness computing power of an external computer in order to facilitate the presentation of speech recognition applications, speech interpretation applications, image processing applications, voice recognition applications, and language translation applications through the figurine. By performing the intensive processing external to the figurine, the internal electronics of the figurine can be greatly simplified. In particular, the need for intensive processing power and a large amount of memory in the figurine can be avoided. Accordingly, the need to protect powerful processors and memory from misuse by a child handling the figurine can also be avoided. In addition, battery life in the figurine may be extended by using the techniques described herein.
- In one embodiment, the invention provides a system comprising a figurine that captures input from a user and wirelessly communicates the input. The input can be image data, for example, or audio data such as speech data. The system also includes a computer that receives the speech data from the figurine, generates a response to the speech data, and wirelessly communicates the response to the figurine. The figurine then outputs the response to the user.
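The capture-respond round trip in this embodiment can be sketched as follows. This is a minimal sketch under assumed names: the response table is purely illustrative (the sample utterances are taken from the examples later in this description), and a real system would generate responses with speech-recognition software rather than a lookup.

```python
# Hypothetical sketch: the figurine captures speech data, the computer
# generates a response, and the figurine outputs it.

RESPONSES = {
    "i love you": "I love you, too.",
    "what is a triangle?": "A triangle is a shape that has three sides.",
}

def computer_generate_response(speech_data):
    # The computer parses the speech data and generates a response.
    return RESPONSES.get(speech_data.lower(), "Can you say that again?")

def figurine_round_trip(captured_speech):
    # Wireless send to the computer, then output of the returned response.
    return computer_generate_response(captured_speech)

print(figurine_round_trip("I love you"))
```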
- In another embodiment, the invention provides a system comprising a figurine that captures speech data from a user and wirelessly communicates the speech data. The system also includes a computer that receives the speech data from the figurine, generates a translation of the speech data, and wirelessly communicates the translation to the figurine. The figurine outputs the translation to the user.
- In another embodiment, the invention provides a system comprising a figurine that captures image data from a user and wirelessly communicates the image data, wherein the image data includes one or more words or phrases. The system also includes a computer that receives the image data from the figurine, generates a translation of the words or phrases, and wirelessly communicates the translation to the figurine. The figurine outputs the translation to the user.
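The image-to-translation flow of this embodiment can be sketched as follows, with hypothetical names throughout: the character-recognition step is stubbed out (a real system would run OCR on the captured image on the external computer), and translation is reduced to a table lookup using the "Gracias" example from later in this description.

```python
# Hedged sketch: figurine captures an image of a phrase, the external
# computer extracts the text and looks up a translation, and the figurine
# voices the result.

TRANSLATIONS = {"thank you": ("Gracias", "Spanish")}

def recognize_words(image_data):
    # Stand-in for character recognition on the captured image.
    return image_data["text"]

def translate(phrase):
    word, language = TRANSLATIONS[phrase.lower()]
    return f"'{word}' means '{phrase.lower()}' in {language}."

def figurine_translate(image_data):
    phrase = recognize_words(image_data)  # done on the external computer
    return translate(phrase)              # returned for the figurine to output

print(figurine_translate({"text": "Thank you"}))
```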
- In another embodiment, the invention provides a system comprising a figurine that captures image data from a user and wirelessly communicates the image data, wherein the image data includes one or more words or phrases. The system also includes a computer that receives the image data from the figurine, generates audio data corresponding to the words or phrases, and wirelessly communicates the audio data to the figurine. The figurine outputs the audio data to the user.
- In another embodiment, the invention provides an interactive toy figurine comprising a data capture device and a wireless transmitter/receiver to wirelessly transfer data captured by the data capture device and receive output associated with the data captured by the data capture device. For example, the data capture device may be an image capture device to capture image data, such as a camera deployed in one or both of the eyes of the toy figurine, or elsewhere.
- In another embodiment, a method comprises capturing speech data from a user at a figurine, and wirelessly communicating the speech data to an external computer. The method also comprises receiving from the external computer a response to the speech data, and outputting the response to the user from the figurine.
- In another embodiment, a method comprises capturing speech data from a user at a figurine, and wirelessly communicating the speech data to an external computer. The method also comprises receiving from the external computer a translation of the speech data, and outputting the translation to the user from the figurine.
- In another embodiment, a method comprises capturing image data with a figurine, and wirelessly communicating the image data to an external computer. The image data includes one or more words or phrases. The method also comprises receiving from the external computer a translation of the words or phrases, and outputting the translation from the figurine.
- In another embodiment, a method comprises capturing image data with a figurine, and wirelessly communicating the image data to an external computer. The image data includes one or more words or phrases. The method also comprises receiving from the external computer audio data corresponding to the words or phrases, and outputting the audio data from the figurine.
- In another embodiment, a system comprises a figurine that captures input and wirelessly communicates the input. The system also includes a computer that receives the input from the figurine, generates output based on the input, and wirelessly communicates the output to the figurine. The figurine presents the output to a user.
- In another embodiment, a system comprises a figurine communicatively coupled to a computer, which is in turn communicatively coupled to a server via a network. The figurine provides input to the computer and receives output from the computer. The computer can receive software updates from the server such that functionality of the figurine can be changed or expanded via computer software upgrades. Of course, upgrades may also be loaded on the computer via a conventional disk or other storage medium, in which case, communication with the server would not be necessary.
- In another embodiment, a system comprises a figurine communicatively coupled to a computer. In addition, the system includes one or more system compatible objects that the figurine can interact with, harnessing the power of the computer. The compatible objects may include indicia identifiable by the figurine, which can ensure that the software on the computer can provide useful interaction between the figurine and the object.
- In another embodiment, a system comprises a figurine, a computer, and a parent unit. The parent unit may comprise a software module on the computer, or a separate hardware device. In any case, the parent unit allows parents to exert parental control over the functionality of the figurine by interacting with software modules on the computer that control operation and interactive features of the figurine. The parent unit may also function as a baby monitor, e.g., a smart baby monitor that can generate an alarm if a baby in proximity to the figurine ceases to breathe, or has other detectable problems.
- The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
-
FIG. 1 is a conceptual diagram illustrating a figurine wirelessly communicating with a computer. -
FIGS. 2 and 3 are block diagrams of a figurine wirelessly communicating with a computer. -
FIGS. 4-6 are flow diagrams according to embodiments of the invention, illustrating application of the invention to translation of spoken or written messages. -
FIG. 7 is a conceptual diagram illustrating a figurine wirelessly communicating with a computer via a wireless hub. -
FIG. 8 is a conceptual diagram illustrating a figurine wirelessly communicating with a computer through a network. -
FIG. 9 is a conceptual diagram illustrating a figurine wirelessly communicating with a computer and a compatible object. -
FIG. 10 is a conceptual diagram illustrating a figurine wirelessly communicating with a computer, with a parents' unit. -
FIG. 11 is a conceptual diagram illustrating a system in which a server communicates with clients that wirelessly communicate with figurines. - The invention is directed to a system including a figurine that utilizes wireless communication to harness computing power of an external computer. In particular, certain applications that require intensive processing power can be seemingly executed by the figurine, with the intensive processing actually being performed external to the figurine in another computer. The figurine may capture audio data, video data or both, and wirelessly transfer the captured data to the external computer, either directly or via a network. Audio data includes, but is not limited to, speech data, voice data and music data. The external computer receives the data as input, processes the data, generates output based on the input data, and transfers the output to the figurine. The output can then be presented to a child as though the figurine processed and generated the output directly.
- Speech recognition applications, voice recognition applications, speech interpretation applications, and language translation applications are some examples of applications that may require intensive processing power and a large amount of memory. The invention contemplates a figurine that utilizes wireless communication to harness computing power of an external computer in order to facilitate the presentation of speech recognition applications, voice recognition applications, speech interpretation applications, and language translation applications through the figurine.
-
FIG. 1 is a diagram illustrating a system 10 according to an embodiment of the invention. System 10 generally includes a figurine 12 such as a doll, teddy bear, stuffed animal, toy robot, toy statue, action figure or the like. System 10 also includes an external computer 14 such as a personal computer (PC), Macintosh, workstation, laptop, notebook, palm computer, or any other computer external to figurine 12. Figurine 12 and external computer 14 communicate either directly or indirectly via one or more wireless communication links 16. In some cases, external computer 14 may be networked to one or more wireless hubs or other devices that facilitate wireless communication. -
Figurine 12 harnesses the computing power of external computer 14 in order to facilitate execution of processor-intensive and/or memory-intensive applications. A child can interact with figurine 12. Accordingly, figurine 12 can facilitate learning and provide instruction and guidance to the child. As figurine 12 harnesses the computing power of external computer 14 in order to facilitate execution of these applications, the computing power and memory needed in figurine 12 can be significantly reduced. Accordingly, the need to protect processors and/or memory from misuse, by a child handling figurine 12, can also be reduced. Also, the power used by figurine 12 can be reduced, prolonging battery life within figurine 12. - In one example,
figurine 12 may present a speech recognition application to the child, e.g., a program that teaches the child the meanings of one or more words or phrases. In that case, the child may speak to figurine 12, which captures the speech and wirelessly communicates the captured speech to external computer 14. External computer 14 parses the speech and generates one or more meanings, which are communicated back to figurine 12. Figurine 12 can then output the meanings to the child in any number of ways. - By way of illustration, a child can utter the word “travel” to
figurine 12, which captures the utterance and wirelessly communicates the captured speech to external computer 14. External computer 14 parses the captured speech and generates one or more definitions, which are communicated back to figurine 12. Figurine 12 can output the definition by, for example, responding “the word ‘travel’ means to go on a trip.” - In another example,
figurine 12 may be capable of holding intelligent conversation with the child by harnessing the computing power of external computer 14. In that case, the child may speak to figurine 12, which captures the speech and wirelessly communicates the captured speech to external computer 14. External computer 14 parses the speech and generates one or more responses, which are communicated back to figurine 12. Figurine 12 can then output the responses to the child. Thus, if the child asks a question to figurine 12, figurine 12 may respond with an intelligent answer. Software executing on external computer 14 may adapt over time to the questions posed by the child, and may also be upgradeable. Upgrades to software on external computer 14, for example, may cause figurine 12 to appear to grow intellectually with the child. - By way of illustration, a child can utter the words “I love you” to
figurine 12, which captures the utterance and wirelessly communicates the captured speech to external computer 14. External computer 14 parses the child's message and generates one or more responses, which are communicated back to figurine 12. Figurine 12 can output the response by, for example, responding “I love you, too.” Or the child can utter the question “What is a triangle?” to figurine 12, which can respond “A triangle is a shape that has three sides.” - In another example,
figurine 12 may help the child with reading. For example, one or more image capture devices, such as digital cameras, may be located in figurine 12, such as in one or more eyes 18 of figurine 12. The child may present a page of a book to figurine 12 by directing the eyes 18 of figurine 12 toward the page and pressing a button (not shown). In that case, a captured image of the page can be wirelessly communicated to external computer 14. External computer 14 parses the page and identifies the words printed on the page. External computer 14 then communicates back to figurine 12, so that figurine 12 can output the words written on the page. In this sense, figurine 12 may appear to be reading to the child. In that case, a display (not shown) may also be incorporated into figurine 12 to present the child with the captured words being read by figurine 12. Therefore, figurine 12 may aid in the learning process of the child by helping to teach the child to read. - In another example,
figurine 12 may facilitate translation of words spoken by the child. For example, the child may speak to figurine 12, which captures the speech and wirelessly communicates the captured speech to external computer 14. External computer 14 parses the speech and identifies a translation of the words or phrases spoken by the child. External computer 14 then communicates back to figurine 12, so that figurine 12 can output the translations to the child. In this example, figurine 12 serves as an interpreter. - For purposes of illustration, a child can utter the words “Thank you” to
figurine 12, which captures the utterance and wirelessly communicates the captured speech to external computer 14. External computer 14 parses the utterance and identifies a translation of the phrase. External computer 14 then communicates the translation back to figurine 12, and figurine 12 can output the translations by, for example, responding with “‘Gracias’ means ‘thank you’ in Spanish.” - In another example,
figurine 12 may facilitate translation of written words. For example, one or more image capture devices, such as digital cameras, may be located in the eyes 18 of figurine 12. The child may present words or phrases to figurine 12 by directing the eyes 18 of figurine 12 toward the words or phrases. The child may press a button (not shown) on figurine 12 to capture the words or phrases being “viewed” by the figurine. In that case, a captured image of the words or phrases can be wirelessly communicated to external computer 14. External computer 14 parses the words or phrases, and identifies a translation of the words or phrases. External computer 14 then communicates back to figurine 12, so that figurine 12 can output the translation. In that case, a display (not shown) may also be incorporated into figurine 12 to present the child with the captured words being translated. The display, for example, may be located anywhere on figurine 12, but is preferably located on the back of figurine 12 so that the words can be viewed by the child as eyes 18 of figurine 12 are directed away from the child towards a page to be read. - In a further example,
figurine 12 harnesses the computing power of external computer 14 to perform image processing unrelated to words and phrases. One or more image capture devices, such as digital cameras, may be located in the eyes 18 of figurine 12, and can capture image data to be processed by external computer 14. Image processing can include recognition of faces, objects, colors, numbers, places, activities, and the like. When external computer 14 runs face recognition software, for example, figurine 12 can seem to recognize the person or persons interacting with figurine 12. Figurine 12 can use the recognition in its interaction by, for instance, calling a child by name. When external computer 14 runs object recognition software, figurine 12 can seem to recognize objects and attributes of objects such as shape, type or quantity. In an exemplary application, figurine 12 can teach a child to recognize shapes, count objects, become familiar with colors, and the like. - In yet another example,
figurine 12 harnesses the computing power of external computer 14 to perform voice recognition in order to identify the speaker. Figurine 12 can use the voice recognition in its interaction by, for instance, calling a child by name. When external computer 14 runs voice recognition software, figurine 12 can seem to recognize the speaker. Typically, voice recognition applications would be used along with speech recognition applications. - Voice recognition applications refer to applications that identify who is talking and may allow for programmed figurine interaction only with those persons associated with a recognized voice. Speech recognition applications refer to applications that recognize what is being said and may be generally used with any voice. In some cases, the invention may utilize both speech and voice applications together in order to determine what is being said and who is speaking. This can improve interaction with
figurine 12 such that figurine 12 may only respond to the child for which it is programmed to respond. With voice recognition, a child could say “sing me a song” or “tell me a story” and the figurine may select a song or story from a library and respond to the recognized voice, as directed. Responses from the figurine to others, however, may be limited or prohibited if the requesting voice is not recognized. - Interaction between a user and
figurine 12 can be proactive as well as reactive. In other words, external computer 14 can cause figurine 12 to take action that is not in response to action by a user. For example, figurine 12 may serve as an alarm clock, telling a child that it is time to get out of bed. Figurine 12 may also proactively remind a user of the day's appointments, birthdays of friends or relatives, and the like. Thus, first output may be provided, which is responsive to input to the figurine, and computer 14 can be programmed to proactively cause figurine 12 to output second output to a user, e.g., an alarm or reminder. -
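The proactive (non-user-initiated) output described above can be sketched as follows. The times and messages are illustrative assumptions; the external computer would poll a schedule like this periodically and push any due message through the figurine without waiting for input.

```python
# Hedged sketch of proactive output: the external computer checks a
# schedule and pushes an alarm or reminder through the figurine.

import datetime

SCHEDULE = {
    datetime.time(7, 0): "Time to get out of bed!",
    datetime.time(15, 30): "Don't forget Grandma's birthday today.",
}

def proactive_output(now):
    # Called periodically by the external computer; returns any message
    # the figurine should speak at this moment, without user input.
    return SCHEDULE.get(now.replace(second=0, microsecond=0).time())

wake_up = datetime.datetime(2005, 6, 8, 7, 0, 0)
print(proactive_output(wake_up))
```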
FIG. 2 is a functional block diagram of system 10 including a figurine 12 and an external computer 14. Again, figurine 12 uses wireless communication in order to harness the processing horsepower of external computer 14. In this manner, complex applications can be performed by computer 14, yet presented by figurine 12 to a user. -
Figurine 12 includes one or more input devices 22 to capture input from a user, e.g., a child. Figurine 12 also includes one or more output devices 23 to present output to the user. Input device 22 may comprise, for example, a sound-detecting transducer such as a microphone, or an image capture device, such as a digital camera. A button or other actuator may be disposed on figurine 12 to turn on the microphone or to cause the digital camera to take a picture. Output device 23 may comprise a sound-generating transducer such as a speaker, or possibly a display screen. - Sounds or images detected by
input device 22 may be processed locally by local central processing unit (CPU) 24 in order to facilitate communication of the data to external computer 14. For example, local CPU 24 may package the captured input for transmission to external computer 14. Local CPU 24 may also control transmitter/receiver 26 to cause transmission of data indicative of the sounds or images detected by input device 22. Local CPU 24, for example, may comprise a relatively simple controller implemented in an application specific integrated circuit (ASIC). If images are captured by figurine 12, local CPU 24 may compress the image file to simplify wireless transfer of the image file. In any case, transmitter/receiver 26 transfers data collected by figurine 12 so that the data can be processed external to figurine 12. - The wireless communication between transmitter/
receiver 26 of figurine 12 and transmitter/receiver 27 of external computer 14 may conform to any of a wide variety of wireless communication protocols. Examples include, but are not limited to, a wireless networking standard such as one of the IEEE 802.11 standards, a standard according to the Bluetooth Special Interest Group, or the like. The IEEE 802.11 standards include, for example, the original 802.11 standard having data transfer rates of 1-2 Megabits per second (Mbps) in a 2.4-2.483 Gigahertz (GHz) frequency band, as well as the IEEE 802.11b standard (sometimes referred to as 802.11 wireless fidelity or 802.11 Wi-Fi) that utilizes binary phase shift keying (BPSK) for 1.0 Mbps transmission, and quadrature phase shift keying (QPSK) for 2.0, 5.5 and 11.0 Mbps transmission, the IEEE 802.11g standard that utilizes orthogonal frequency division multiplexing (OFDM) in the 2.4 GHz frequency band to provide data transmission at rates up to 54 Mbps, and the IEEE 802.11a standard that utilizes OFDM in a 5 GHz frequency band to provide data transmission at rates up to 54 Mbps. These and other wireless networks have been developed. Additional extensions to the IEEE 802.11 standard, as well as other wireless standards, will likely emerge in the future. The invention is not limited to the type of wireless communication techniques used, and could also be implemented with wireless protocols similar to those used for cell phone communication or direct two-way pagers, or any other wireless protocol, either known or later developed. - Transmitter/
receiver 27 of external computer 14 receives data sent by transmitter/receiver 26 of figurine 12. Remote CPU 28 performs extensive processing on the data to generate output. For example, remote CPU 28 may comprise a general purpose microprocessor that executes software to generate the output. The output is then transmitted back to figurine 12. Output device 23 of figurine 12 can then present the output to the user. - By performing the intensive processing in
external computer 14, the local electronics of figurine 12 can be greatly simplified. In particular, the need for intensive processing power and a large amount of memory in figurine 12 can be avoided. Accordingly, the need to protect powerful processors and memory from misuse by a child handling the figurine can also be avoided. Battery power in figurine 12 can also be extended by performing processing tasks externally in computer 14. Moreover, software upgrades may be easily implemented for execution by remote CPU 28 without requiring upgrade of the components of figurine 12. - The processing tasks performed in
remote CPU 28 of external computer 14 generally depend on the given application being presented to the user by figurine 12. In one example, figurine 12 may present a speech recognition application to the child, e.g., a program that teaches the child the meanings of one or more words or phrases. In that case, the child may speak to figurine 12, and input device 22 can capture the speech. Local CPU 24 packages the speech and causes transmitter/receiver 26 to wirelessly communicate the captured speech to external computer 14. Remote CPU 28 parses the speech and generates one or more meanings, which are communicated back to figurine 12 by transmitter/receiver 27. Output device 23 of figurine 12 can then output the meanings to the child. - In another example,
figurine 12 may be capable of holding intelligent conversation with the child by harnessing the remote CPU 28 of external computer 14. In that case, the child may speak to figurine 12, and input device 22 can capture the speech. Local CPU 24 packages the speech and causes transmitter/receiver 26 to wirelessly communicate the captured speech to external computer 14. Remote CPU 28 parses the speech and generates one or more responses, which are communicated back to figurine 12 by transmitter/receiver 27. Output device 23 of figurine 12 can then output the responses to the child. Thus, if the child asks a question to figurine 12, figurine 12 may respond with an intelligent answer. - In another example,
figurine 12 may help the child with reading. In that case, input device 22 in the form of an image capture device may capture images of a page. Local CPU 24 packages the image and causes transmitter/receiver 26 to wirelessly communicate the captured image to external computer 14. Local CPU 24 may also compress the image prior to transmission. Once external computer 14 has received the captured image, remote CPU 28 parses the image and generates one or more meanings, which are communicated back to figurine 12 by transmitter/receiver 27. For example, remote CPU 28 may perform character recognition on the image in order to identify characters, and may then decipher the meaning of the identified characters using one or more dictionaries stored in memory and accessible by remote CPU 28. Output device 23 of figurine 12 can then output the meanings to the child. In this sense, figurine 12 may appear to be reading to the child. Output device 23 may include speakers for verbal output and possibly a display to present the child with the captured words being read by figurine 12. - In another example,
figurine 12 may facilitate translation of words spoken by the child. In that case, the child may speak to figurine 12, and input device 22 can capture the speech. Local CPU 24 packages the speech and causes transmitter/receiver 26 to wirelessly communicate the captured speech to external computer 14. Remote CPU 28 parses the speech and identifies a translation of the words or phrases spoken by the child, which is communicated back to figurine 12 by transmitter/receiver 27. Output device 23 of figurine 12 can then output the translation to the child. In this example, figurine 12 serves as an interpreter. - In a further example,
figurine 12 harnesses the computing power of external computer 14 to perform other types of image processing. One or more image capture devices can capture image data to be processed by external computer 14. Image processing can include recognition of faces, objects, colors, numbers, places, activities, and the like. When external computer 14 runs face recognition software, for example, figurine 12 can seem to recognize the person or persons interacting with figurine 12. Figurine 12 can use the recognition in its interaction by, for instance, calling a child by name. When external computer 14 runs object recognition software, figurine 12 can seem to recognize objects and attributes of objects such as shape, type or quantity. In an exemplary application, figurine 12 can teach a child to recognize shapes, count objects, become familiar with colors, and the like. - Interaction between a user and
figurine 12 can be proactive as well as reactive. In other words, external computer 14 can cause figurine 12 to take action that is not in response to action by a user. For example, figurine 12 may serve as an alarm clock, telling a child that it is time to get out of bed. Figurine 12 may also proactively remind a user of the day's appointments, birthdays of friends or relatives, and the like. For elderly applications, figurine 12 may remind a user to take medication. Any such alarms or reminders may be standard audio tones, music, or possibly programmed or recorded audio of a familiar voice, making figurine 12 speak with a pleasant tone to the user when providing reminders. For example, a parent's voice may be recorded such that figurine 12 speaks with such recordings. Voice emulation software may also be used by computer 14 so that figurine 12 speaks new words or phrases in a voice that emulates that of the parents. -
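Each of the example applications above follows the same capture-transmit-process-respond loop: the figurine packages captured input, the external computer parses it and generates output, and the figurine presents that output. A minimal sketch of that loop, in which the `Message` class, the function names, and the tiny word table are illustrative assumptions rather than anything taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Message:
    kind: str       # e.g. "speech" or "image"
    payload: str

def package_input(raw: str) -> Message:
    """Local CPU 24's role: bundle captured input for wireless transfer."""
    return Message(kind="speech", payload=raw)

def process_request(msg: Message) -> str:
    """Remote CPU 28's role: parse the input and generate a meaning.
    A dictionary lookup stands in for real speech recognition."""
    meanings = {"dog": "A dog is an animal that barks."}
    return meanings.get(msg.payload.strip().lower(),
                        "I don't know that word yet!")

# Round trip: figurine -> external computer -> figurine.
response = process_request(package_input("dog"))
print(response)
```

All of the processor-intensive work sits in `process_request`, which is why the figurine itself can stay simple.
-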
FIG. 3 is a more detailed block diagram of system 30 illustrating application of the invention to one of the example applications described above, in particular, translation of written words. System 30 may correspond to system 10 (FIGS. 1 and 2). In this case, image capture device 33 may capture images of a page. For example, image capture device 33 may comprise a digital camera located in the eyes of figurine 32 so that when a child directs the eyes of figurine 32 toward a page and presses an actuator, the image of the page is captured. The actuator, for example, may be disposed on the back of figurine 32 so that when the eyes of figurine 32 are directed toward a page, the actuator is easily accessible. The image capture device and actuator, however, may be deployed in other locations on figurine 32. -
Local CPU 34 packages the image and causes transmitter/receiver 36 to wirelessly communicate the captured image to external computer 31. Remote CPU 38 parses the image and generates a translation, which is communicated back to figurine 32 by transmitter/receiver 37. Remote CPU 38 may invoke software modules for optical character recognition 39 and translation 40. Once the image has been translated and the translation has been communicated back to figurine 32, output device 35 of figurine 32 can then output the translation. - Different optical character recognition modules and translator modules may also be invoked for different languages. Optical
character recognition module 39, for example, may recognize English, and translator module 40 may translate from English to Spanish. Any other languages, however, could also be supported. In another embodiment, optical character recognition may be performed locally at figurine 32, with the more processor-intensive translation being performed by external computer 31. - Other exemplary applications can be supported in a manner similar to that depicted in
FIG. 3. In the context of face recognition, for example, image capture device 33 may capture one or more images of a face. Local CPU 34 packages the image and causes transmitter/receiver 36 to wirelessly communicate the captured image to external computer 31. Remote CPU 38 executes face recognition software modules to identify the face in the image. Once the face has been identified, remote CPU 38 may then incorporate that identity into the output of figurine 32 by, for example, referring to the user by name. Voice recognition could also be used to cause figurine 32 to refer to the user by name. - In addition,
remote CPU 38 can execute shape recognition software modules, color recognition software modules, object recognition software modules, and quantification software modules. Remote CPU 38 can use such software modules to help a user recognize shapes, count objects, become familiar with colors, and the like. Although a user perceives all action occurring through figurine 32, processor-intensive image processing is actually being performed remotely by external computer 31. An object recognition module, for example, may be designed to recognize currency (such as coins) and allow the figurine to teach a child how to accurately count change. - The various modules and components described herein may be implemented in hardware, software, firmware, or any combination thereof. The invention is not limited to any particular hardware or software implementation. If implemented in software, the modules may be stored on a computer readable medium such as memory or a non-volatile storage medium. Indeed, one advantage of the techniques described herein is that the need for large amounts of memory in a figurine can be avoided. Instead, the memory needed to execute very memory-intensive applications is included in an external computer. Further, the exemplary applications described above are not exclusive of one another.
External computer 31 can execute any combination of translation, voice and speech recognition, image processing or other types of software modules, and the invention is not limited to systems that perform a single application to the exclusion of other applications. On the contrary, an advantage of the invention is its versatility. The invention can be adapted to implement one or more applications as desired by the user. -
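The division of labor in FIG. 3, an optical character recognition module (39) feeding a per-language-pair translation module (40), might be sketched as follows. The function names and the tiny word table are invented for illustration, and plain strings stand in for captured image data:

```python
def recognize_characters(page_image: str) -> list[str]:
    """Stand-in for OCR module 39: treat the 'image' as raw text."""
    return page_image.split()

# Separate translator tables can be registered per language pair,
# mirroring the idea of swapping in modules for different languages.
TRANSLATORS = {
    ("en", "es"): {"the": "el", "dog": "perro", "book": "libro"},
}

def translate(words: list[str], source: str = "en", target: str = "es") -> str:
    """Stand-in for translation module 40: word-by-word lookup."""
    table = TRANSLATORS[(source, target)]
    return " ".join(table.get(w.lower(), w) for w in words)

translation = translate(recognize_characters("the dog"))
print(translation)  # "el perro"
```

Because the two stages are separate functions, the split suggested above (OCR locally at the figurine, translation at the external computer) falls out naturally.
-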
FIGS. 4-6 are flow diagrams according to some embodiments of the invention, illustrating application of the invention to the example applications described above, in particular, translation of spoken or written messages. In the technique shown in FIG. 4, figurine 12 performs speech capture (41), and then transmits speech data to external computer 14 (42). External computer 14 receives the speech data (43) and may perform speech recognition (44) to identify the spoken words or phrases. External computer 14 performs translation with respect to the identified spoken words or phrases (45) and transmits the translation back to figurine 12 (46). Figurine 12 receives the translation (47) and outputs the translation to the user. Figurine 12 may drive an output device such as a display screen, thereby providing a written output. A user may find it more desirable, however, to have figurine 12 drive a speaker in figurine 12, thereby providing an audible output, such as a synthesized speech recitation of the translation. In this manner, figurine 12 acts as a translator of spoken words or phrases, invoking an external computer 14 to reduce the local processing at figurine 12. Also, as mentioned above, in addition to performing speech recognition, computer 14 may also perform voice recognition so that figurine 12 only responds to recognized voices, or responds differently, e.g., by identifying different persons, in response to computer 14 recognizing different voices. - In the technique shown in
FIG. 5, figurine 12 captures an image (51), and then transmits the image to external computer 14 (52). External computer 14 receives the image (53) and decodes the image (54), e.g., by performing optical character recognition. External computer 14 then translates the characters to generate a translation (55). External computer 14 then transmits the translation back to figurine 12 (56). Figurine 12 receives the translation (57) and outputs the translation to the user (58). In this manner, figurine 12 acts as a translator of written words or phrases, invoking an external computer 14 to reduce the local processing at figurine 12. The translation may be output in audio, video, or both. - In the technique shown in
FIG. 6, figurine 12 captures an image that includes written words or phrases (61), and then transmits the image to external computer 14 (62). External computer 14 receives the image (63) and decodes the image (64), e.g., by performing optical character recognition to identify written words or phrases. External computer 14 then generates an audio signal (65) as a function of the identified words or phrases. External computer 14 then transmits the audio signal back to figurine 12 (66). Figurine 12 receives the audio signal (67) and outputs it to the user. In this manner, figurine 12 appears to be reading the written words or phrases, by invoking an external computer 14 to reduce the local processing at figurine 12. - In the various embodiments described herein, a figurine is described that utilizes wireless communication to harness the computing power of an external computer. However, the figurine need not be directly coupled to the external computer in order to take advantage of the computing resources of the external device.
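The read-aloud flow of FIG. 6 can be sketched as a short pipeline on the external computer's side. The helper names are hypothetical, and a tagged string stands in for the synthesized audio signal:

```python
def decode_image(image: str) -> list[str]:
    """Steps 63-64: optical character recognition (simulated by
    splitting a plain string that stands in for the captured image)."""
    return image.split()

def generate_audio(words: list[str]) -> str:
    """Step 65: stand-in for speech synthesis of the identified words."""
    return "SPOKEN[" + " ".join(words) + "]"

def read_aloud(image: str) -> str:
    """The external computer's end of the FIG. 6 pipeline, steps 63-66;
    the result is what would be transmitted back to the figurine."""
    return generate_audio(decode_image(image))

print(read_aloud("once upon a time"))  # SPOKEN[once upon a time]
```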
FIG. 7 is a block diagram of a system 70, similar to system 10. In system 70, however, figurine 72 wirelessly communicates with external computer 74 via a wireless hub 75. In particular, wireless hub 75 communicates wirelessly with figurine 72, and is coupled to external computer 74. -
FIG. 8 illustrates another system 80, similar to system 10, in which figurine 82 wirelessly communicates to take advantage of the computing power of external computer 84. In system 80, figurine 82 wirelessly communicates with external computer 84 via a wireless hub 85 that couples to external computer 84 via network 86. In particular, wireless hub 85 communicates wirelessly with figurine 82, and is coupled to external computer 84 via network 86. Network 86 may comprise a small local area network (LAN), a wide area network, or even a global network such as the Internet. Communication between hub 85 and external computer 84 may be, but need not be, wireless. Importantly, the wireless capabilities of figurine 82 allow for communication with external computer 84, thereby allowing figurine 82 to make use of the processing capabilities of external computer 84. - When a figurine is configured to communicate with a global network such as the Internet, such as is depicted in
FIG. 8 or 11, the figurine can serve as an input-output device for interaction with the network and other stations or servers coupled to network 86. In a typical application, figurine 82 reports information obtained from one or more network servers (not shown). For example, a child could ask figurine 82, "What is the weather forecast for today?" The request is relayed to external computer 84, which accesses a server via network 86 that can provide the local forecast. Upon retrieving the local forecast, external computer 84 supplies that information to figurine 82, which answers the child's question. - The invention supports wireless communication in other configurations as well. The figurine need not be directly coupled to the external computer in order to take advantage of the computing resources of the external device.
Figurine 82 may communicate wirelessly with any intermediate device, such as another figurine, a wireless access point, or a computer that does not serve as external computer 84. -
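The request-relay behavior described above for the weather-forecast question amounts to routing a child's question to a matching network service and returning the answer for the figurine to speak. A hedged sketch, in which the keyword-matching rule and the services table are assumptions made for illustration:

```python
def relay_request(question: str, services: dict) -> str:
    """External computer 84's role: route a question to a matching
    network service, or fall back to a friendly default."""
    for keyword, fetch in services.items():
        if keyword in question.lower():
            return fetch()
    return "I'm not sure, let's find out together!"

# A lambda stands in for a query to a real server on network 86.
services = {"weather": lambda: "Sunny, with a high of 75 degrees."}

answer = relay_request("What is the weather forecast for today?", services)
print(answer)
```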
FIG. 9 is a diagram illustrating a system 90 according to an additional embodiment of the invention. System 90 includes a figurine 92 and an external computer 94, which communicate either directly or indirectly via one or more wireless communication links. In addition, system 90 includes a compatible object 95, embodied in FIG. 9 as a book. Compatible object 95 can be an object of any type, but in a typical implementation, compatible object 95 is an accessory for figurine 92. -
Compatible object 95 includes a wireless identifier, by which a detector in figurine 92 can detect the presence of compatible object 95. An example of such a wireless identifier is a radio frequency identification (RFID) tag 96. RFID tag 96 may be hidden in compatible object 95 and not readily observable to a user. An RFID tag reader 98 in figurine 92 detects and reads RFID tag 96. Bar codes or other indicia might also be used, in which case reader 98 would facilitate the reading of such indicia. -
RFID tag 96 is a wireless electronic device that communicates with RFID tag reader 98. RFID tag 96 may include an integrated circuit (not shown) and a coil (not shown). The coil may act as a source of power, as a receiving antenna, and as a transmitting antenna. The coil may be coupled to a capacitor to store power when interrogated in order to drive the integrated circuit. The integrated circuit may include wireless communications components and memory. RFID tag reader 98 may include an antenna and a transceiver. -
RFID tag reader 98 may "interrogate" RFID tag 96 by directing an electromagnetic (i.e., radio) signal to RFID tag 96. RFID tag 96 may include, but need not include, an independent power source. In a typical embodiment, RFID tag 96 receives power from the interrogating signal from RFID tag reader 98. Upon power-on, RFID tag 96 may perform certain operations, which may include transmitting data stored in the memory of RFID tag 96 to RFID tag reader 98. The transmitted data may include an identification of compatible object 95. In this manner, the manufacturer of figurine 92 may exert better control over those objects that will be used to interact with figurine 92, and can help ensure that a child will not become frustrated, e.g., if figurine 92 were used with an incompatible book or object. When RFID tag reader 98 identifies RFID tag 96, external computer 94 becomes aware of compatible object 95 proximate to figurine 92. External computer 94 can use the identity of compatible object 95 to communicate with a user more effectively. For example, exemplary compatible object 95 in FIG. 9 is a book. When external computer 94 learns the identity of the book, external computer 94 can generate output appropriate for that book. Figurine 92 may, for example, direct the attention of a child to illustrations shown in the book, and explain how the illustrations pertain to the story. Figurine 92 can also describe how the story relates to other books, such as other books dealing with the same characters, or figurine 92 can explain background information about the story or its author. -
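The interrogation handshake can be sketched as a lookup keyed by the identity the tag transmits back to the reader. The tag ID format and the compatibility table below are invented for illustration:

```python
# Known compatible objects, keyed by the identity stored in each tag's
# memory; an unknown ID is treated as an incompatible object.
COMPATIBLE_OBJECTS = {
    "tag-0x1A2B": {"type": "book", "title": "The Tortoise and the Hare"},
}

def interrogate(tag_id: str):
    """Reader 98's role: energize the tag, receive its stored identity,
    and check it against the table of compatible objects."""
    return COMPATIBLE_OBJECTS.get(tag_id)  # None means incompatible

obj = interrogate("tag-0x1A2B")
print(obj["title"] if obj else "incompatible object")
```

Once the lookup succeeds, the external computer can select output tailored to that specific book, as described above.
-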
FIG. 10 is a diagram illustrating a system 100 according to another embodiment of the invention. System 100 includes a figurine 102 and an external computer 104, which communicate either directly or indirectly via one or more wireless communication links. In addition, system 100 includes a parents' unit 106. Parents' unit 106 may comprise any device, including, but not limited to, a television, a computer, a telephone, a speaker, a video monitor, and the like. Parents' unit 106 may communicate with external computer 104 in any fashion, such as by an electrical connection, by an optical link, or by radio frequency. -
System 100 is configured to serve as a child monitoring system. A parent can deploy figurine 102 proximate to a child so that figurine 102 can capture video information or audio information or both about that child. Figurine 102 transmits the captured information to external computer 104. External computer 104 in turn sends information to parents' unit 106. In this example, the parent may also communicate in real time with the child through figurine 102, e.g., by speaking into a microphone of parents' unit 106. Parents' unit 106 may be a separate unit, or may be implemented as a software module that executes directly on external computer 104. - In one application,
external computer 104 simply relays captured information to parents' unit 106. For example, captured audio and video data showing the child's location, condition and activity may be relayed to parents' unit 106. In another application, external computer 104 can also process the captured audio and video data and provide useful information to parents' unit 106. For example, external computer 104 can process audio data captured via figurine 102 and determine whether the child is crying, sleeping, breathing abnormally, and the like. External computer 104 can also process video data captured via figurine 102 and determine whether the child is awake or has gotten out of bed or the like. -
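A deliberately crude sketch of the kind of audio analysis external computer 104 might perform: the mean absolute amplitude of the captured samples is mapped to a coarse status for parents' unit 106. The thresholds are invented, and real crying or breathing detection would require far more sophisticated signal processing:

```python
def classify_audio(samples: list[float],
                   cry_level: float = 0.6,
                   quiet_level: float = 0.05) -> str:
    """Map normalized audio samples (-1.0 to 1.0) to a coarse status
    that the external computer could forward to the parents' unit."""
    if not samples:
        return "no signal"
    level = sum(abs(s) for s in samples) / len(samples)
    if level > cry_level:
        return "possible crying"
    if level < quiet_level:
        return "quiet (possibly sleeping)"
    return "normal activity"

print(classify_audio([0.9, 0.8, 0.95]))   # possible crying
print(classify_audio([0.0, 0.01, 0.0]))   # quiet (possibly sleeping)
```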
FIG. 11 is a diagram illustrating a system 110 according to another embodiment of the invention. System 110 is a server-client system in which a server 112 supplies one or more functionalities to one or more client figurine-computer systems. Server 112 manages a database 113 that stores software that can provide figurines with one or more functionalities. Client figurine-computer systems communicate with server 112 via a network 118. Network 118 may comprise any network, including a global network such as the Internet. - Examples of functionalities include, but are not limited to, the functionalities described herein. The owner of client figurine-
computer system 114, for example, may desire that her child's figurine 122 should be capable of helping teach her child about numbers, letters, basic shapes and basic colors. In addition, her child's figurine 122 should be capable of reciting stories suitable for a child four years of age. Accordingly, the owner of client figurine-computer system 114 downloads software for such functionalities from server 112 via network 118. The software is stored locally at external computer 120. By contrast, the owner of client figurine-computer system 116 may desire that his child's figurine 124 should be capable of reading a book, helping teach his child to speak and write in English and Spanish, and playing games appropriate for a child six years of age. Accordingly, the owner of client figurine-computer system 116 downloads software for such functionalities from server 112 via network 118. - With
system 110, each parent can customize his or her child's figurine for the child's age, needs or desires. As the child develops, the parent can obtain more advanced functionality. Further, as new functionalities are developed and added to database 113, the parents can download the new functionalities. As a result, the figurines seem to "grow" with the children, and can be enabled to perform new or more sophisticated functions. Because the new and more advanced functionality can be executed in external computer 120, the need to upgrade figurine 122 may be avoided, which can be important to a child that has become emotionally attached to figurine 122. - Although the figures may depict a single figurine with a single external computer, the invention encompasses embodiments in which a single external computer interacts with two or more figurines. A parent with two children can, for example, give a different figurine to each child, and each figurine can communicate wirelessly with the same or different external computers. Each child will perceive that each figurine operates independently of the other. Furthermore, each figurine may be separately empowered with functionality appropriate for each child.
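The per-child customization of system 110 can be sketched as a module registry kept at the external computer: modules downloaded from the server accumulate there, so the figurine's abilities grow without any change to the toy itself. The class and module names below are illustrative assumptions:

```python
class FigurineProfile:
    """The external computer's record of which downloaded modules a
    particular child's figurine may run."""

    def __init__(self, child_age: int):
        self.child_age = child_age
        self.modules: set[str] = set()

    def download(self, module: str) -> None:
        """Simulate fetching a functionality from the server's database
        and storing it locally at the external computer."""
        self.modules.add(module)

    def supports(self, module: str) -> bool:
        return module in self.modules

# A four-year-old's figurine, configured as in the example above.
profile = FigurineProfile(child_age=4)
profile.download("shapes-and-colors")
profile.download("stories-age-4")
print(profile.supports("shapes-and-colors"))  # True
```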
- The invention may offer one or more advantages. A child's toy can be very versatile, capable of a wide range of functionality. Furthermore, the functionality can be customized to the child, and can change as the child develops. Moreover, the invention supports an interesting and adaptable system that can help a child learn a wide range of subjects, making interaction with the figurine not only fun, but educational as well.
- Several embodiments of the invention have been described. Various modifications may be made without departing from the scope of the invention. These and other embodiments are within the scope of the following claims.
Claims (29)
1. A system comprising:
a figurine that captures input and wirelessly communicates the input; and
a computer that receives the input from the figurine, generates output based on the input, and wirelessly communicates the output to the figurine, wherein the figurine presents the output to a user.
2. The system of claim 1, wherein:
the figurine captures speech data from a user and wirelessly communicates the speech data; and
the computer receives the speech data from the figurine, generates an audible response to the speech data, and wirelessly communicates the response to the figurine, wherein the figurine outputs the audible response to the user.
3. The system of claim 2, wherein the computer receives additional data via a network, and wherein the computer generates output based on the additional data.
4. The system of claim 1, wherein:
the figurine captures voice data from a user and wirelessly communicates the voice data; and
the computer receives the voice data from the figurine, identifies a person associated with the voice data, and generates an audible response to the voice data identifying the person.
5. The system of claim 1, wherein:
the captured input comprises speech data from a user; and
the computer receives the speech data from the figurine, generates a translation of the speech data, and wirelessly communicates the translation to the figurine, wherein the figurine outputs the translation to the user.
6. The system of claim 1, wherein:
the captured input comprises image data including one or more words or phrases; and
the computer receives the image data from the figurine, generates a translation of the words or phrases, and wirelessly communicates the translation to the figurine, wherein the figurine outputs the translation to the user.
7. The system of claim 1, wherein:
the captured input comprises image data including one or more words or phrases; and
the computer receives the image data from the figurine, generates audio data corresponding to the words or phrases, and wirelessly communicates the audio data to the figurine, wherein the figurine outputs the audio data to the user.
8. The system of claim 1, further comprising a wireless hub, wherein the figurine wirelessly communicates with the computer via the wireless hub.
9. The system of claim 8, further comprising the Internet, wherein the figurine wirelessly communicates with the computer over the Internet via the wireless hub.
10. The system of claim 1, wherein:
the figurine captures image data including an identifiable face or object; and
the computer receives the image data from the figurine, determines an identification that identifies the identifiable face or object, and wirelessly communicates the identification to the figurine.
11. The system of claim 1, wherein:
the figurine captures additional data from a compatible object; and
the computer generates output based on the additional data.
12. The system of claim 1, wherein the output is first output and wherein the computer is programmed to proactively cause the figurine to output second output to a user.
13. An interactive toy figurine comprising:
a data capture device to capture audio or video data; and
a wireless transmitter/receiver to wirelessly transfer data captured by the data capture device and receive output associated with the data captured by the data capture device.
14. The interactive toy figurine of claim 13, wherein the data capture device comprises an image capture device, the figurine further comprising a display to display at least one of an image captured by the image capture device and the output.
15. The interactive toy figurine of claim 13, further comprising a speaker to output audio data associated with the data captured by the data capture device.
16. The interactive toy figurine of claim 13, wherein the data captured by the data capture device include one or more written words or phrases, and the audio data comprise an audible recitation of the words or phrases.
17. The interactive toy figurine of claim 13, wherein the data captured by the data capture device include one or more words or phrases in a first language, and the audio data comprise a translation of the words or phrases in a second language.
18. A method comprising:
capturing data from a user at a figurine;
wirelessly communicating the data to an external computer;
receiving from the external computer a response to the data; and
outputting the response to the user from the figurine.
19. The method of claim 18, wherein the data comprise speech data captured from the user, wherein outputting the response comprises outputting an audible response to the speech data.
20. The method of claim 19, wherein receiving the response comprises receiving a translation of the speech data and wherein outputting the response comprises outputting the translation to the user from the figurine.
21. The method of claim 18, wherein the data comprise image data including one or more words or phrases, wherein receiving the response comprises receiving audio data corresponding to the words or phrases, and wherein outputting the response comprises outputting the audio data from the figurine.
22. The method of claim 18, wherein the data comprise image data including one or more words or phrases in a first language, wherein receiving the response comprises receiving a translation of the words or phrases in a second language, and wherein outputting the response comprises outputting the translation from the figurine.
23. The method of claim 22, wherein outputting the translation includes driving a speaker to generate an audible recitation of the translation.
24. The method of claim 22, wherein outputting the translation includes driving a display to generate a visual translation.
25. A system comprising:
a computer; and
a figurine communicatively coupled to the computer, wherein the figurine provides input to the computer, receives output from the computer and outputs the output to a user, and wherein functionality of the figurine is expandable via upgrades to the computer.
26. A system comprising:
a figurine that captures input and wirelessly communicates the input; and
a parents unit that receives the input and generates an alarm based on the input.
27. The system of claim 26, further comprising a computer that receives the input from the figurine, forwards the input to the parents unit and causes the parents unit to generate the alarm.
28. The system of claim 26, wherein the input comprises breathing information associated with a child and the parents unit generates the alarm if the child stops breathing.
29. A system comprising:
a computer; and
a figurine communicatively coupled to the computer;
one or more system compatible objects, wherein the figurine interacts with the one or more objects by harnessing computing power of the computer, and wherein the system compatible objects include indicia identifiable by the figurine so that software on the computer can ensure compatible interaction between the figurine and the object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/146,907 US20060234602A1 (en) | 2004-06-08 | 2005-06-07 | Figurine using wireless communication to harness external computing power |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US57810104P | 2004-06-08 | 2004-06-08 | |
US11/146,907 US20060234602A1 (en) | 2004-06-08 | 2005-06-07 | Figurine using wireless communication to harness external computing power |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060234602A1 true US20060234602A1 (en) | 2006-10-19 |
Family
ID=35510282
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/146,907 Abandoned US20060234602A1 (en) | 2004-06-08 | 2005-06-07 | Figurine using wireless communication to harness external computing power |
Country Status (8)
Country | Link |
---|---|
US (1) | US20060234602A1 (en) |
EP (1) | EP1765478A2 (en) |
JP (1) | JP2008506510A (en) |
CN (1) | CN101193684A (en) |
BR (1) | BRPI0511898A (en) |
CA (1) | CA2569731A1 (en) |
MX (1) | MXPA06014212A (en) |
WO (1) | WO2005123210A2 (en) |
US20180280793A1 (en) * | 2017-03-29 | 2018-10-04 | Disney Enterprises, Inc. | Registration of Wireless Encounters Between Wireless Devices |
US20180361263A1 (en) * | 2011-05-17 | 2018-12-20 | Zugworks, Inc | Educational device |
US20190240564A1 (en) * | 2008-06-03 | 2019-08-08 | Tweedletech, Llc | Intelligent game system for putting intelligence into board and tabletop games including miniatures |
US10405745B2 (en) | 2015-09-27 | 2019-09-10 | Gnana Haranth | Human socializable entity for improving digital health care delivery |
US10616310B2 (en) | 2015-06-15 | 2020-04-07 | Dynepic, Inc. | Interactive friend linked cloud-based toy |
US10783799B1 (en) * | 2016-12-17 | 2020-09-22 | Sproutel, Inc. | System, apparatus, and method for educating and reducing stress for patients with illness or trauma using an interactive location-aware toy and a distributed sensor network |
US20230018066A1 (en) * | 2020-11-20 | 2023-01-19 | Aurora World Corporation | Apparatus and system for growth type smart toy |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5020593B2 (en) * | 2006-10-16 | 2012-09-05 | 株式会社日立ソリューションズ | Foreign language learning communication system |
TWI338588B (en) * | 2007-07-31 | 2011-03-11 | Ind Tech Res Inst | Method and apparatus for robot behavior series control based on rfid technology |
US9610500B2 (en) | 2013-03-15 | 2017-04-04 | Disney Enterprises, Inc. | Managing virtual content based on information associated with toy objects |
EP2777786A3 (en) * | 2013-03-15 | 2014-12-10 | Disney Enterprises, Inc. | Managing virtual content based on information associated with toy objects |
US9011194B2 (en) | 2013-03-15 | 2015-04-21 | Disney Enterprises, Inc. | Managing virtual content based on information associated with toy objects |
KR101504699B1 (en) * | 2013-04-09 | 2015-03-20 | 얄리주식회사 | Phonetic conversation method and device using wired and wireless communication |
KR101458460B1 (en) * | 2013-05-27 | 2014-11-12 | 주식회사 매직에듀 | 3-dimentional character and album system using the same |
JP6174543B2 (en) * | 2014-03-07 | 2017-08-02 | 摩豆科技有限公司 | Doll control method and interactive doll operation method by application, and apparatus for doll control and operation |
KR102156536B1 (en) | 2014-06-23 | 2020-09-16 | 신에쓰 가가꾸 고교 가부시끼가이샤 | Crosslinked organopolysiloxane and method for producing same, mist suppressant, and solvent-free silicone composition for release paper |
TWI559966B (en) * | 2014-11-04 | 2016-12-01 | Mooredoll Inc | Method and device of community interaction with toy as the center |
JP6680125B2 (en) * | 2016-07-25 | 2020-04-15 | トヨタ自動車株式会社 | Robot and voice interaction method |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5324225A (en) * | 1990-12-11 | 1994-06-28 | Takara Co., Ltd. | Interactive toy figure with sound-activated and pressure-activated switches |
US5607336A (en) * | 1992-12-08 | 1997-03-04 | Steven Lebensfeld | Subject specific, word/phrase selectable message delivering doll or action figure |
US5945656A (en) * | 1997-05-27 | 1999-08-31 | Lemelson; Jerome H. | Apparatus and method for stand-alone scanning and audio generation from printed material |
US6159101A (en) * | 1997-07-24 | 2000-12-12 | Tiger Electronics, Ltd. | Interactive toy products |
US6227931B1 (en) * | 1999-07-02 | 2001-05-08 | Judith Ann Shackelford | Electronic interactive play environment for toy characters |
US6443796B1 (en) * | 2000-06-19 | 2002-09-03 | Judith Ann Shackelford | Smart blocks |
US20030027636A1 (en) * | 2001-07-26 | 2003-02-06 | Eastman Kodak Company | Intelligent toy with internet connection capability |
US6554679B1 (en) * | 1999-01-29 | 2003-04-29 | Playmates Toys, Inc. | Interactive virtual character doll |
US6719604B2 (en) * | 2000-01-04 | 2004-04-13 | Thinking Technology, Inc. | Interactive dress-up toy |
US6773344B1 (en) * | 2000-03-16 | 2004-08-10 | Creator Ltd. | Methods and apparatus for integration of interactive toys with interactive television and cellular communication systems |
US20040155781A1 (en) * | 2003-01-22 | 2004-08-12 | Deome Dennis E. | Interactive personal security system |
US6947571B1 (en) * | 1999-05-19 | 2005-09-20 | Digimarc Corporation | Cell phones with optical capabilities, and related applications |
US20060154559A1 (en) * | 2002-09-26 | 2006-07-13 | Kenji Yoshida | Information reproduction/i/o method using dot pattern, information reproduction device, mobile information i/o device, and electronic toy |
US7261612B1 (en) * | 1999-08-30 | 2007-08-28 | Digimarc Corporation | Methods and systems for read-aloud books |
- 2005
- 2005-06-07 WO PCT/US2005/019933 patent/WO2005123210A2/en active Application Filing
- 2005-06-07 MX MXPA06014212A patent/MXPA06014212A/en not_active Application Discontinuation
- 2005-06-07 JP JP2007527640A patent/JP2008506510A/en active Pending
- 2005-06-07 US US11/146,907 patent/US20060234602A1/en not_active Abandoned
- 2005-06-07 BR BRPI0511898-0A patent/BRPI0511898A/en not_active IP Right Cessation
- 2005-06-07 EP EP05758005A patent/EP1765478A2/en not_active Withdrawn
- 2005-06-07 CN CNA2005800267810A patent/CN101193684A/en active Pending
- 2005-06-07 CA CA002569731A patent/CA2569731A1/en not_active Abandoned
Cited By (67)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070073436A1 (en) * | 2005-09-26 | 2007-03-29 | Sham John C | Robot with audio and video capabilities for displaying advertisements |
US20080192580A1 (en) * | 2007-02-08 | 2008-08-14 | Mga Entertainment, Inc. | Animated Character Alarm Clock |
US7551523B2 (en) | 2007-02-08 | 2009-06-23 | Isaac Larian | Animated character alarm clock |
US20100314456A1 (en) * | 2007-12-12 | 2010-12-16 | Nokia Corporation | Wireless association |
US7956749B2 (en) * | 2007-12-12 | 2011-06-07 | Nokia Corporation | Wireless association |
US20090197504A1 (en) * | 2008-02-06 | 2009-08-06 | Weistech Technology Co., Ltd. | Doll with communication function |
US20190240564A1 (en) * | 2008-06-03 | 2019-08-08 | Tweedletech, Llc | Intelligent game system for putting intelligence into board and tabletop games including miniatures |
US10953314B2 (en) * | 2008-06-03 | 2021-03-23 | Tweedletech, Llc | Intelligent game system for putting intelligence into board and tabletop games including miniatures |
US20100099493A1 (en) * | 2008-10-20 | 2010-04-22 | Ronen Horovitz | System and method for interactive toys based on recognition and tracking of pre-programmed accessories |
US8894461B2 (en) * | 2008-10-20 | 2014-11-25 | Eyecue Vision Technologies Ltd. | System and method for interactive toys based on recognition and tracking of pre-programmed accessories |
US9712359B2 (en) * | 2009-04-30 | 2017-07-18 | Humana Inc. | System and method for communication using ambient communication devices |
US10135653B2 (en) * | 2009-04-30 | 2018-11-20 | Humana Inc. | System and method for communication using ambient communication devices |
US20160359651A1 (en) * | 2009-04-30 | 2016-12-08 | Humana Inc. | System and method for communication using ambient communication devices |
US20100325781A1 (en) * | 2009-06-24 | 2010-12-30 | David Lopes | Pouch pets networking |
US20110124264A1 (en) * | 2009-11-25 | 2011-05-26 | Garbos Jennifer R | Context-based interactive plush toy |
US8568189B2 (en) | 2009-11-25 | 2013-10-29 | Hallmark Cards, Incorporated | Context-based interactive plush toy |
US8911277B2 (en) | 2009-11-25 | 2014-12-16 | Hallmark Cards, Incorporated | Context-based interactive plush toy |
US9421475B2 (en) | 2009-11-25 | 2016-08-23 | Hallmark Cards Incorporated | Context-based interactive plush toy |
US20110223827A1 (en) * | 2009-11-25 | 2011-09-15 | Garbos Jennifer R | Context-based interactive plush toy |
US20110230116A1 (en) * | 2010-03-19 | 2011-09-22 | Jeremiah William Balik | Bluetooth speaker embed toyetic |
WO2012088524A1 (en) * | 2010-12-23 | 2012-06-28 | Lcaip, Llc | Smart stuffed animal with air flow ventilation system |
US8414347B2 (en) | 2010-12-23 | 2013-04-09 | Lcaip, Llc | Smart stuffed animal with air flow ventilation system |
US9089782B2 (en) * | 2010-12-23 | 2015-07-28 | Lcaip, Llc. | Smart stuffed toy with air flow ventilation system |
US20140357150A1 (en) * | 2010-12-23 | 2014-12-04 | Lcaip, Llc | Smart stuffed toy with air flow ventilation system |
US8801490B2 (en) | 2010-12-23 | 2014-08-12 | Lcaip, Llc | Smart stuffed toy with air flow ventilation system |
US20120185254A1 (en) * | 2011-01-18 | 2012-07-19 | Biehler William A | Interactive figurine in a communications system incorporating selective content delivery |
WO2012103202A1 (en) * | 2011-01-25 | 2012-08-02 | Bossa Nova Robotics Ip, Inc. | System and method for online-offline interactive experience |
CN103003783A (en) * | 2011-02-01 | 2013-03-27 | 松下电器产业株式会社 | Function extension device, function extension method, function extension program, and integrated circuit |
US20180361263A1 (en) * | 2011-05-17 | 2018-12-20 | Zugworks, Inc | Educational device |
US11179648B2 (en) * | 2011-05-17 | 2021-11-23 | Learning Squared, Inc. | Educational device |
US20130078886A1 (en) * | 2011-09-28 | 2013-03-28 | Helena Wisniewski | Interactive Toy with Object Recognition |
GB2496169B (en) * | 2011-11-04 | 2014-03-12 | Commotion Ltd | Toy |
GB2496169A (en) * | 2011-11-04 | 2013-05-08 | Commotion Ltd | Toy having target pieces with radio ID |
US9492762B2 (en) | 2012-05-08 | 2016-11-15 | Funfare, Llc | Sensor configuration for toy |
US9565402B2 (en) * | 2012-10-30 | 2017-02-07 | Baby-Tech Innovations, Inc. | Video camera device and method to monitor a child in a vehicle |
US9769433B2 (en) * | 2012-10-30 | 2017-09-19 | Baby-Tech Innovations, Inc. | Video camera device and method to monitor a child in a vehicle |
US10178357B2 (en) * | 2012-10-30 | 2019-01-08 | Giuseppe Veneziano | Video camera device and method to monitor a child in a vehicle |
US20140118548A1 (en) * | 2012-10-30 | 2014-05-01 | Baby-Tech Innovations, Inc. | Video camera device and child monitoring system |
WO2014070722A1 (en) * | 2012-10-30 | 2014-05-08 | Baby-Tech Innovations, Inc. | Video camera device and child monitoring system |
US20190098262A1 (en) * | 2012-10-30 | 2019-03-28 | Giuseppe Veneziano | Video camera device and method to monitor a child in a vehicle by secure video transmission using blockchain encryption |
US20170104963A1 (en) * | 2012-10-30 | 2017-04-13 | Baby-Tech Innovations, Inc. | Video camera device and method to monitor a child in a vehicle |
US10602096B2 (en) * | 2012-10-30 | 2020-03-24 | Giuseppe Veneziano | Video camera device and method to monitor a child in a vehicle by secure video transmission using blockchain encryption |
US20200186756A1 (en) * | 2012-10-30 | 2020-06-11 | Giuseppe Veneziano | Video camera device and method to monitor a child in a vehicle by secure video transmission using blockchain encryption and sim card wifi transmission |
US10887559B2 (en) * | 2012-10-30 | 2021-01-05 | Giuseppe Veneziano | Video camera device and method to monitor a child in a vehicle by secure video transmission using blockchain encryption and SIM card WiFi transmission |
US20170324938A1 (en) * | 2012-10-30 | 2017-11-09 | Baby-Tech Innovations, Inc. | Video camera device and method to monitor a child in a vehicle |
US20180117484A1 (en) * | 2012-11-15 | 2018-05-03 | LOL Buddies Enterprises | System and Method for Providing a Toy Operable for Receiving and Selectively Vocalizing Various Electronic Communications from Authorized Parties, and For Providing a Configurable Platform Independent Interactive Infrastructure for Facilitating Optimal Utilization Thereof |
US11020680B2 (en) * | 2012-11-15 | 2021-06-01 | Shana Lee McCart-Pollak | System and method for providing a toy operable for receiving and selectively vocalizing various electronic communications from authorized parties, and for providing a configurable platform independent interactive infrastructure for facilitating optimal utilization thereof |
US20140349547A1 (en) * | 2012-12-08 | 2014-11-27 | Retail Authority LLC | Wirelessly controlled action figures |
US20140162230A1 (en) * | 2012-12-12 | 2014-06-12 | Aram Akopian | Exercise demonstration devices and systems |
US20140256214A1 (en) * | 2013-03-11 | 2014-09-11 | Raja Ramamoorthy | Multi Function Toy with Embedded Wireless Hardware |
US20140329433A1 (en) * | 2013-05-06 | 2014-11-06 | Israel Carrero | Toy Stuffed Animal with Remote Video and Audio Capability |
US9406240B2 (en) * | 2013-10-11 | 2016-08-02 | Dynepic Inc. | Interactive educational system |
US20150290548A1 (en) * | 2014-04-09 | 2015-10-15 | Mark Meyers | Toy messaging system |
WO2015174571A1 (en) * | 2014-05-16 | 2015-11-19 | 수상에스티(주) | Newborn monitoring system |
US9833725B2 (en) * | 2014-06-16 | 2017-12-05 | Dynepic, Inc. | Interactive cloud-based toy |
US20150360139A1 (en) * | 2014-06-16 | 2015-12-17 | Krissa Watry | Interactive cloud-based toy |
US9931572B2 (en) | 2014-09-15 | 2018-04-03 | Future of Play Global Limited | Systems and methods for interactive communication between an object and a smart device |
US20160149719A1 (en) * | 2014-11-21 | 2016-05-26 | Panasonic Intellectual Property Management Co., Ltd. | Monitoring system |
US9705696B2 (en) * | 2014-11-21 | 2017-07-11 | Panasonic Intellectual Property Management Co., Ltd. | Monitoring system |
US10616310B2 (en) | 2015-06-15 | 2020-04-07 | Dynepic, Inc. | Interactive friend linked cloud-based toy |
US10405745B2 (en) | 2015-09-27 | 2019-09-10 | Gnana Haranth | Human socializable entity for improving digital health care delivery |
US20180158458A1 (en) * | 2016-10-21 | 2018-06-07 | Shenetics, Inc. | Conversational voice interface of connected devices, including toys, cars, avionics, mobile, iot and home appliances |
US10783799B1 (en) * | 2016-12-17 | 2020-09-22 | Sproutel, Inc. | System, apparatus, and method for educating and reducing stress for patients with illness or trauma using an interactive location-aware toy and a distributed sensor network |
US11398160B1 (en) * | 2016-12-17 | 2022-07-26 | Sproutel, Inc. | System, apparatus, and method for educating and reducing stress for patients with illness or trauma using an interactive location-aware toy and a distributed sensor network |
US20180280793A1 (en) * | 2017-03-29 | 2018-10-04 | Disney Enterprises, Inc. | Registration of Wireless Encounters Between Wireless Devices |
US10441879B2 (en) * | 2017-03-29 | 2019-10-15 | Disney Enterprises, Inc. | Registration of wireless encounters between wireless devices |
US20230018066A1 (en) * | 2020-11-20 | 2023-01-19 | Aurora World Corporation | Apparatus and system for growth type smart toy |
Also Published As
Publication number | Publication date |
---|---|
CA2569731A1 (en) | 2005-12-29 |
WO2005123210A3 (en) | 2008-02-14 |
MXPA06014212A (en) | 2007-03-12 |
CN101193684A (en) | 2008-06-04 |
JP2008506510A (en) | 2008-03-06 |
WO2005123210A2 (en) | 2005-12-29 |
EP1765478A2 (en) | 2007-03-28 |
BRPI0511898A (en) | 2008-01-15 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
US20060234602A1 (en) | Figurine using wireless communication to harness external computing power | |
US20220111300A1 (en) | Educational device | |
US11327556B2 (en) | Information processing system, client terminal, information processing method, and recording medium | |
US8172637B2 (en) | Programmable interactive talking device | |
KR102306624B1 (en) | Persistent companion device configuration and deployment platform | |
US8591302B2 (en) | Systems and methods for communication | |
US10957325B2 (en) | Method and apparatus for speech interaction with children | |
CN105126355A (en) | Child companion robot and child companioning system | |
KR100666487B1 (en) | Educational toy using rfid tag recognition | |
US20110112826A1 (en) | System and method for simulating expression of message | |
CN109074117A (en) | Built-in storage and cognition insight are felt with the computer-readable cognition based on personal mood made decision for promoting memory | |
JPH11511859A (en) | Educational and entertainment device with dynamic configuration and operation | |
CN110609620A (en) | Human-computer interaction method and device based on virtual image and electronic equipment | |
JP2003205483A (en) | Robot system and control method for robot device | |
US20180272240A1 (en) | Modular interaction device for toys and other devices | |
CN107705640A (en) | Interactive teaching method, terminal and computer-readable storage medium based on audio | |
CN109891357A (en) | Emotion intelligently accompanies device | |
JP2008185994A (en) | Sound reproduction system | |
WO2019190817A1 (en) | Method and apparatus for speech interaction with children | |
KR102652008B1 (en) | Method and apparatus for providing a multimodal-based english learning service applying native language acquisition principles to a user terminal using a neural network | |
US20200368630A1 (en) | Apparatus and System for Providing Content to Paired Objects | |
CN211454826U (en) | Children education robot | |
CN112017484A (en) | Logic thinking training and interaction machine | |
KR20230081026A (en) | Apparatus and method for providing audiovisual content for the disabled | |
KR20170117856A (en) | Interactive system of objects using rf card |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: SPEECHGEAR, INC., MINNESOTA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: PALMQUIST, ROBERT D.; REEL/FRAME: 017048/0441; Effective date: 20050829 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |