US8564684B2 - Emotional illumination, and related arrangements - Google Patents

Emotional illumination, and related arrangements

Info

Publication number
US8564684B2
US8564684B2 (application US13/212,119)
Authority
US
United States
Prior art keywords
camera
image data
user
phone
capturing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US13/212,119
Other versions
US20130044233A1 (en)
Inventor
Yang Bai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Digimarc Corp
Original Assignee
Digimarc Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Digimarc Corp
Priority to US13/212,119
Assigned to DIGIMARC CORPORATION. Assignment of assignors interest (see document for details). Assignors: BAI, YANG
Publication of US20130044233A1
Priority to US14/058,595 (US20140148219A1)
Application granted
Publication of US8564684B2
Current legal status: Expired - Fee Related
Adjusted expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 - Circuitry for compensating brightness variation in the scene
    • H04N 23/76 - Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 1/00 - Substation equipment, e.g. for use by subscribers
    • H04M 1/02 - Constructional features of telephone sets
    • H04M 1/0202 - Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M 1/026 - Details of the structure or mounting of specific components
    • H04M 1/0264 - Details of the structure or mounting of specific components for a camera module assembly
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/61 - Control of cameras or camera modules based on recognised objects
    • H04N 23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 - Circuitry for compensating brightness variation in the scene
    • H04N 23/74 - Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means

Abstract

A smartphone senses a user's emotional reaction to certain output (e.g., an output from a smartphone's attempt to read a barcode printed in a newspaper). The phone then tailors its operation based on the sensed reaction (e.g., it may turn on a torch to better illuminate the newspaper, or vary image processing or decoding parameters).

Description

TECHNICAL FIELD
The present technology concerns smartphones and other processor-equipped devices.
BACKGROUND AND INTRODUCTION OF THE TECHNOLOGY
Frown/smile detection is used by some consumer cameras to automatically identify good images. (The technology can be used to trigger image capture when a favorable facial expression is sensed, or to select from among a series of images, to pick a favorable image therefrom. It is sometimes termed a “smile shutter.”) See, e.g., US patent publications US20070201725, US20080309796, US20090002512, and US20100110265.
Related technology has also been proposed for games, in which a user's facial expression is sensed, and mimicked on an avatar that corresponds to the user in a game. See, e.g., Microsoft's US20110007142. Neven et al. have done related work, shown in U.S. Pat. Nos. 6,580,811 and 6,714,661.
Facial expressions can also be used in conjunction with commercial methods, to sense which ads or products are pleasing (or not) to viewers. See, e.g., US20090118593, US20090112616 and US20040001616.
Motorola has proposed a phone that senses and communicates the user's emotional state, as indicated by facial expressions. See U.S. Pat. No. 7,874,983.
Verizon has suggested tailoring behavior of a user interface based on a user's sensed emotional state. For example, if the user's voice sounds stressed, a phone UI may address the user more slowly. See US20100037187. Related “affective computing” technology is detailed in Microsoft's U.S. Pat. No. 6,212,502, in which the user's emotional state is sensed, and a “help system” user interface responds accordingly. The Microsoft system relies on a Bayesian network to recognize the user's emotion. Additional mood-detecting technology is detailed in Microsoft's US20090002178.
A recent survey of affective computing techniques is provided in Robinson, The Emotional Computer, Ninth Int'l Conference on Pervasive Computing, June 2011.
Separately, smartphones are used to sense machine readable data from physical media. For example, consumers increasingly use smartphones to read QR codes and encoded digital watermarks from posters, magazines and newspapers, in order to link to related content. Such technology is detailed, e.g., in the assignee's U.S. Pat. Nos. 6,947,571 and 6,590,996, in patent publications 20110161076 and 20100150434, and in application Ser. Nos. 13/079,327, filed Apr. 4, 2011, and 13/011,618, filed Jan. 21, 2011.
In accordance with one aspect of the present technology, the LED “torch” (illuminator) of a smartphone is activated when a user seems to be having difficulty using the smartphone to sense machine-readable data. With additional illumination on the object being imaged, the smartphone processor may be better able to decode the encoded information from the captured imagery.
The foregoing and additional features and advantages of the present technology will be more readily apparent from the following description, which proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of an illustrative smartphone.
FIG. 2 is a flow chart of a process according to one particular embodiment of the present technology.
DETAILED DESCRIPTION
Referring to FIG. 1, an illustrative smartphone 10 includes a processor 12, a display 14, a touchscreen 16 and other physical user interface (UI) elements 18 (e.g., buttons, etc.). Also included are one or more microphones 20, a variety of other sensors 22 (e.g., motion sensors such as 3D accelerometers, gyroscopes and magnetometers), a network adapter 24, a location-determining module 26 (e.g., GPS), and an RF transceiver 28.
The depicted phone 10 also includes two cameras 30, 32. Camera 30 is front-facing, i.e., with a lens mounted on the side of the smartphone that also includes the screen. The second camera 32 has a lens on a different side of the smartphone, commonly on the back side. The front-facing camera is lower in resolution than the back-facing camera (e.g., 640×480 pixels for the front-facing camera, vs. 1280×720 pixels for the back-facing camera). Accordingly, imagery from the front-facing camera can be processed more simply than imagery from the back-facing camera, with less power consumption and less computational complexity.
Associated with the second camera 32 is an LED “torch” 34 that is mounted so as to illuminate the second camera's field of view. Commonly, this torch is positioned on the same side of the smartphone as the lens of the second camera, although this is not essential.
Smartphone 10 also includes a memory 36 that stores software and data. The software includes both operating system software and application software. The former includes software that controls the user interface. The latter includes content processing software—such as a QR code reader and/or a digital watermark decoder. It similarly may include music recognition software.
In operation, the smartphone captures first image data from a physical object (e.g., a newspaper) using the second (e.g., rear-facing) camera 32. The smartphone then attempts to decode encoded information from the captured imagery (e.g., a QR code or digital watermark). An associated result is presented to the user, e.g., on the smartphone screen 14.
Meanwhile, the smartphone captures imagery of the user's face, from the front-facing camera 30—both before and after the decoding attempt. This facial expression information is analyzed to discern whether an emotion indicated by the user changes negatively. For example, the user's facial expression may change from a neutral expression to a slight frown or grimace. If the smartphone thereby discerns that the user is becoming frustrated with the smartphone, the smartphone processor 12 issues a signal that turns on the torch 34. This torch illuminates the field of view of the camera 32, including the newspaper being imaged.
The increased illumination will often allow the smartphone to extract the encoded information from the imagery captured from the newspaper, when the smartphone was previously unable to do so.
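A minimal Python sketch of this flow follows; the hooks passed in (capture_front, capture_rear, decode, expression_score, set_torch, present_result) are hypothetical stand-ins for platform camera, decoder and torch APIs, not names drawn from this specification.

def sense_and_illuminate(capture_front, capture_rear, decode, expression_score,
                         set_torch, present_result):
    # "Before" facial sample from the front-facing camera 30.
    baseline = expression_score(capture_front())
    # First image data from the rear-facing camera 32, and a decode attempt.
    payload = decode(capture_rear())
    present_result(payload)                       # show result (or failure) on the screen
    # "After" facial sample; a drop in score models a negative change in emotion.
    reaction = expression_score(capture_front())
    if payload is None and reaction < baseline:
        set_torch(True)                           # illuminate the rear camera's field of view
        payload = decode(capture_rear())          # retry under added illumination
    return payload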
The torch 34 can be extinguished when the processor 12 indicates that a decoding operation has been performed successfully. Alternatively, the torch can be turned off if imagery captured by the camera 30 reveals a change in the user's facial expression, e.g., from a frown to a neutral expression, or a smile. Still further, the torch can be turned off based on a time interval—such as 3, 5 or 10 seconds following its enablement. The torch can also be extinguished if the processor senses (e.g., by reference to one of the motion sensors) that the phone has been moved from the pose in which the user was holding it when a negative emotion was sensed, to a different pose—indicating that the user has ceased the attempt to extract information from the object.
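The several turn-off conditions above can be expressed as a simple predicate; the Python sketch below is illustrative only, and the 5-second default is one of the example intervals mentioned, not a required value.

def should_extinguish_torch(decode_succeeded, expression_improved,
                            seconds_since_on, pose_changed, timeout_s=5.0):
    # Any one of the conditions suffices: a successful decode, a facial expression
    # returning to neutral or a smile, a timeout, or the phone leaving its earlier pose.
    return (decode_succeeded
            or expression_improved
            or seconds_since_on >= timeout_s
            or pose_changed)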
Enabling the torch is one action the smartphone can take based on the user's sensed emotion. Alternatively, or additionally, the smartphone can change one or more other parameters. For example, the smartphone may change the focus or zoom of the second camera 32—trying to capture information depicted in a different focal plane. (Such change can be achieved by conventional mechanical arrangements, or by computational photography techniques). Or a different lens aperture or a different exposure interval can be tried. Likewise, different image processing operations may be triggered, such as spatial-domain or frequency-domain filtering, averaging, or analysis in different color planes (or greyscale). Still further, several captured image frames can be combined, such as by averaging, or using high dynamic range combination techniques, in an attempt to obtain imagery from which better recognition results can be obtained.
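As one concrete example of combining several captured frames, a simple average of co-registered frames reduces sensor noise; the NumPy sketch below assumes equal-size 8-bit frames and ignores the alignment and exposure bracketing that a fuller high dynamic range combination would require.

import numpy as np

def average_frames(frames):
    # frames: list of same-shape uint8 images; averaging suppresses
    # uncorrelated noise before a further decode attempt.
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)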
In a variant embodiment, other facial expressions control other aspects of image processing. For example, the zoom function of camera 32 can be controlled in accordance with eyelid gestures sensed by camera 30 (e.g., with zoom increasing as the user's eyes are opened further). Similarly, changes to the user's lip posture can vary a parameter of operation (e.g., with zoom increasing as the user's lips move apart).
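A toy mapping from such a gesture to a camera parameter might look like the following; the 1x-4x zoom range and the normalized openness measure are illustrative assumptions, not values from this specification.

def zoom_from_eye_openness(openness, min_zoom=1.0, max_zoom=4.0):
    # openness: 0.0 (eyes nearly closed) .. 1.0 (eyes wide open), as estimated
    # from front-camera imagery; wider-open eyes request more zoom.
    openness = max(0.0, min(1.0, openness))
    return min_zoom + openness * (max_zoom - min_zoom)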
In the detailed arrangement, it will be recognized that the smartphone analyzes camera data to turn on a torch. However, non-obviously, the analyzed camera data is not from the camera 32 with which the torch is associated, but rather is from a camera 30 facing a different direction (towards the user).
The detailed arrangement benefits the user by responding automatically to the user's reflexive reaction to disappointment—without requiring any deliberate action on the user's part. It also conserves battery power, by not energizing the LED unnecessarily.
While described in the context of reading barcode or digital watermark data from a printed object, the technology finds other applications as well. One is in performing OCR-based text recognition. Another is in connection with a pattern-matching operation (e.g., based on extracting characteristic feature data from imagery, such as by SURF). A great variety of other smartphone operations can likewise be altered based on the user's sensed emotional state.
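For the pattern-matching case, characteristic feature extraction can be sketched as below. The specification mentions SURF; the sketch substitutes ORB simply because it ships with stock OpenCV, so it is an assumption-laden illustration rather than the particular arrangement described above.

import cv2

def characteristic_features(gray_image, n_features=500):
    # Extract keypoints and binary descriptors from a grayscale image, for
    # later matching against a reference database (ORB stands in for SURF here).
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    return keypoints, descriptors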
Other Comments
Having described and illustrated the principles of my inventive work with reference to an illustrative example, it will be recognized that the technology is not so limited.
For example, while the detailed embodiment senses mood/emotion by reference to facial image data, other embodiments can use other techniques, e.g., based on voice parameters, heart rate, skin conductivity, and/or other biometrics. (Apple's patent publication 20100113950 details technology for capturing and analyzing EKG data from a user, using a smartphone.) A user's gestures with the phone can also be sensed and analyzed to discern likely emotion (e.g., hard shaking of the device can indicate frustration).
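The gesture example can be made concrete with a crude shake detector over 3-axis accelerometer samples; the 25 m/s^2 threshold and the half-window criterion are illustrative guesses, not parameters given in this specification.

import math

def is_hard_shake(accel_samples, threshold=25.0):
    # accel_samples: sequence of (x, y, z) accelerations in m/s^2.
    # Sustained high magnitude over much of the window suggests hard shaking,
    # which may be taken as a sign of frustration.
    if not accel_samples:
        return False
    magnitudes = [math.sqrt(x*x + y*y + z*z) for (x, y, z) in accel_samples]
    return sum(m > threshold for m in magnitudes) >= len(magnitudes) // 2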
Particular arrangements for recognizing emotions (e.g., joy, sadness, anticipation, surprise, trust, disgust, anger, fear, etc.) from facial imagery are detailed in US20070066916. Other particular arrangements for facial expression analysis are familiar to artisans in the field from publications including Cohen, et al, “Facial Expression Recognition from Video Sequences: Temporal and Static Modeling,” Computer Vision and Image Understanding 91 (2003), pp. 160-187, and from Chapter 11 (Facial Expression Analysis) in the book Handbook of Face Recognition, Li and Jain, eds., Springer Verlag 2005.
Analysis of the user's emotion typically is based on a “before” and “after” comparison of sampled information (e.g., facial expression data). However, this is not essential. The smartphone can decide to change a parameter of operation (e.g., turn on the torch) based on detection of a frown after the smartphone presents an original processing result (e.g., OCR extraction), regardless of the user's expression before presentation of that result. In some embodiments, a negative emotion may be inferred from the lack of a positive facial expression—or a change from positive facial expression to a neutral facial expression.
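The decision logic just described, covering both the before/after comparison and the single "after" observation, can be illustrated as follows; the coarse labels (smile, neutral, frown) are assumptions made for the sketch.

def reaction_is_negative(after_label, before_label=None):
    # Without a baseline, a frown after presentation of the result suffices.
    if before_label is None:
        return after_label == "frown"
    # With a baseline, look for a negative change, including a lapse from a
    # positive expression to a neutral one.
    negative_changes = {("neutral", "frown"), ("smile", "frown"), ("smile", "neutral")}
    return (before_label, after_label) in negative_changes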
Upcoming smartphones will doubtless have stereo cameras for 3D image capture—perhaps both front-facing and back-facing. The availability of stereo imagery of the user's facial expressions allows for more accurate, and nuanced, inferencing of user emotion.
In an illustrative embodiment, a classifier arrangement is used to recognize different emotional states. (A classifier is a function that maps an input attribute vector, x=(x1, x2, x3, x4, . . . , xn), to a confidence that the input belongs to a class, that is, f(x)=confidence(class). Such classification can employ a probabilistic and/or statistical-based analysis to infer an action or state that corresponds to the user. A support vector machine (SVM) is an example of a classifier that can be employed.)
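A toy SVM classifier of the kind referenced above can be sketched with scikit-learn; the two-element feature vectors (e.g., mouth-corner and brow displacements) and the tiny training set are invented for illustration.

from sklearn.svm import SVC

# Invented training data: [mouth_corner_lift, brow_raise] -> emotion label.
X_train = [
    [0.10, 0.05], [0.12, 0.02], [0.08, 0.06],        # neutral
    [-0.20, -0.30], [-0.15, -0.25], [-0.22, -0.28],  # frown
    [0.40, 0.35], [0.35, 0.30], [0.42, 0.33],        # smile
]
y_train = ["neutral"] * 3 + ["frown"] * 3 + ["smile"] * 3

clf = SVC().fit(X_train, y_train)           # learns f(x) -> class
sample = [[-0.18, -0.27]]
print(clf.predict(sample))                  # e.g., ['frown']
print(clf.decision_function(sample))        # per-class confidence scores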
While reference has been made to a smartphone-based embodiment, it will be recognized that this technology finds utility with all manner of devices. Game consoles, desktop computers, laptop computers, tablet computers, set-top boxes, televisions, netbooks, wearable computers, etc., can all make use of the principles detailed herein. The term “smartphone” should be construed to encompass all such devices, even those that are not strictly-speaking telephones.
Exemplary smartphones include the Apple iPhone 4, and smartphones following Google's Android specification (e.g., the Verizon Droid Eris phone, manufactured by HTC Corp., and the Motorola Droid 3 phone). (Details of the iPhone, including its touch interface, are provided in Apple's published patent application 20080174570.)
As is familiar to artisans, the processes and arrangements detailed in this specification can be implemented as instructions for computing devices, including general purpose processor instructions for a variety of programmable processors, including microprocessors (e.g., the Atom and A4), graphics processing units (GPUs, such as the nVidia Tegra APX 2600), and digital signal processors (e.g., the Texas Instruments TMS320 series devices), etc. These instructions can be implemented as software, firmware, etc. These instructions can also be implemented in various forms of processor circuitry, including programmable logic devices, field programmable gate arrays (e.g., the Xilinx Virtex series devices), field programmable object arrays, and application specific circuits—including digital, analog and mixed analog/digital circuitry. Execution of the instructions can be distributed among processors and/or made parallel across processors within a device or across a network of devices. Processing of data can also be distributed among different processor and memory devices. “Cloud” computing resources can be used as well. References to “processors,” “modules” or “components” should be understood to refer to functionality, rather than requiring a particular form of implementation.
Software instructions for implementing the detailed functionality can be authored by artisans without undue experimentation from the description provided herein, e.g., written in C, C++, Visual Basic, Java, Python, Tcl, Perl, Scheme, Ruby, etc. Smartphones according to certain implementations of the present technology can include software modules for performing the different functions and acts.
Different portions of the functionality can be implemented on different devices. For example, image processing or music recognition operations can involve one or more remote devices, between which execution can be distributed. Extraction of watermark data from image content is one example of a process that can be distributed in such fashion. Another example is image analysis to discern emotion. Thus, it should be understood that description of an operation as being performed by a particular device (e.g., a smartphone) is not limiting but exemplary; performance of the operation by another device (e.g., a remote server), or shared between devices, is also expressly contemplated.
While this disclosure has detailed particular ordering of acts and particular combinations of elements, it will be recognized that other contemplated methods may re-order acts (possibly omitting some and adding others), and other contemplated combinations may omit some elements and add others, etc.
Although disclosed as complete systems, sub-combinations of the detailed arrangements are also separately contemplated.
While detailed in the context of a smartphone that extracts information from imagery, corresponding arrangements are equally applicable to systems that extract information from audio, or from combinations of media.
For example, in connection with a music-recognition app or a speech-to-text app, a user's facial response to the app can be captured by a front-facing camera and—if it turns negative—the device can employ alternate strategies to try to obtain a result that is more pleasing to the user. For a music app, one strategy is for the smartphone to attempt to characterize non-music audio captured by the microphone, and then apply a corresponding filter to reduce interference from such audio. Another strategy is to involve nearby smartphones in the detection task, e.g., requesting (such as by Bluetooth) that they sample audio from their locations, and forward captured audio—perhaps after initial processing—to the original smartphone. The original smartphone can then combine such audio with its own captured audio to perhaps increase the signal-to-noise ratio of the music, to which a recognition process can be applied—hopefully with a more pleasing result.
(Music recognition is taught in Shazam's U.S. Pat. Nos. 6,990,453 and 7,359,889.)
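The audio-combination strategy can be illustrated with a naive average of clips contributed by nearby phones; the sketch assumes the clips are already time-aligned, equal-length and gain-matched, which a practical system would have to arrange first.

import numpy as np

def combine_audio(clips):
    # clips: list of equal-length 1-D sample arrays from different phones.
    # Averaging reinforces the common (music) component while uncorrelated
    # background noise partially cancels, improving signal-to-noise ratio.
    stack = np.stack([np.asarray(c, dtype=np.float64) for c in clips])
    return stack.mean(axis=0)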
More generally, the detailed embodiment may be regarded as employing a first, front-facing camera as a user-feedback sensor device, and employing a second camera as an environment sensor device.
A related embodiment is a variation on the “smile shutter” concept. In this embodiment, a user positions a smartphone so that the second (e.g., rear-facing) camera points towards a desired scene (which is displayed on the phone screen). While prior art smartphone cameras normally require the user to touch the screen to capture an image of the scene, this variant embodiment instead triggers image capture by analyzing imagery from the front-facing camera—looking for a particular facial signal, such as a smile. When the smartphone operator smiles, the second camera takes a picture. It will be recognized that this arrangement avoids the shake problem inherent in the prior art (in which image capture is triggered by the user touching the screen).
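A rough front-camera smile trigger of this kind can be sketched with OpenCV's stock Haar cascades; the scale factors and neighbor counts are commonly used illustrative values, and the function only decides when to fire; the actual rear-camera capture call is left to the platform API.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def operator_is_smiling(front_frame_bgr):
    # Returns True when a smile is found inside a detected face in the
    # front-camera frame; the caller would then trigger the rear camera.
    gray = cv2.cvtColor(front_frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        if len(smile_cascade.detectMultiScale(roi, 1.7, 20)) > 0:
            return True
    return False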
To provide a comprehensive disclosure, while complying with the statutory requirement of conciseness, applicant incorporates-by-reference the patents, patent applications and other documents referenced herein. (Such materials are incorporated in their entireties, even if cited above in connection with specific of their teachings.) These references disclose technologies, teachings and systems that can be incorporated into the arrangements detailed herein, and into which the technologies, teachings and systems detailed herein can be incorporated. The reader is presumed to be familiar with such prior work.
In view of the wide variety of embodiments to which the principles and features discussed above can be applied, it should be apparent that the detailed embodiments are illustrative only, and should not be taken as limiting the scope of the invention. Rather, I claim as my invention all such modifications as may come within the scope and spirit of the following claims and equivalents thereof.

Claims (3)

I claim:
1. A method comprising:
(a) capturing first image data from a printed object using a first camera arrangement;
(b) attempting to decode steganographically-encoded digital watermark data from the captured first image data, and presenting an associated result to a user;
(c) capturing facial image data from the user, both before and after said attempting, using a second camera arrangement;
(d) analyzing said captured facial image data to discern that an emotion indicated by the user changed negatively; and
(e) when such analysis indicates the emotion indicated by the user changed negatively, issuing a signal—from a processor configured to perform such act—that enables a light source for illuminating a field of view towards which the first camera arrangement is directed, said field of view including the printed object;
(f) wherein the first and second camera arrangements comprise two different camera portions of a smartphone.
2. A phone for use in capturing first image data from an object that is steganographically encoded with digital watermark data, the phone comprising:
a first camera on a first side of the phone capable of capturing the first image data;
a light source on the first side of the phone;
a second camera on a second side of the phone, capable of capturing facial image data from a user of the phone;
a display;
a processor; and
a memory containing stored instructions;
wherein the first side of the phone is opposite the second side of the phone; and
wherein the instructions are executable by the processor to cause the phone to:
capture the first image data using the first camera;
attempt to decode the steganographically-encoded digital watermark data from the captured first image data;
present an associated result to the user on the display;
capture facial image data, both before and after said attempt, using the second camera;
analyze the captured facial image data from both before and after said attempt, to discern that an emotion indicated by the user changed negatively;
and
activate the light source to illuminate a field of view of the first camera upon a determination by said analyze act that the emotion indicated by the user changed negatively.
3. A non-transitory computer readable medium containing instructions for use with a phone having a first camera and a light source on one side, and a second camera on a second side, wherein said instructions—if executed by a processor in said phone—cause the phone to perform acts including:
(a) capturing first image data using the first camera;
(b) attempting to decode steganographically-encoded digital watermark data from the captured first image data, and presenting an associated result to a user;
(c) capturing facial image data from the user, both before and after said attempting, using the second camera;
(d) analyzing said captured facial image data to discern that an emotion indicated by the user changed negatively; and
(e) when such analysis indicates the emotion indicated by the user changed negatively, activating the light source to illuminate a field of view of the first camera.
US13/212,119 2011-08-17 2011-08-17 Emotional illumination, and related arrangements Expired - Fee Related US8564684B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/212,119 US8564684B2 (en) 2011-08-17 2011-08-17 Emotional illumination, and related arrangements
US14/058,595 US20140148219A1 (en) 2011-08-17 2013-10-21 Emotional illumination, and related arrangements

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/212,119 US8564684B2 (en) 2011-08-17 2011-08-17 Emotional illumination, and related arrangements

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/058,595 Continuation US20140148219A1 (en) 2011-08-17 2013-10-21 Emotional illumination, and related arrangements

Publications (2)

Publication Number Publication Date
US20130044233A1 US20130044233A1 (en) 2013-02-21
US8564684B2 (en) 2013-10-22

Family

ID=47712399

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/212,119 Expired - Fee Related US8564684B2 (en) 2011-08-17 2011-08-17 Emotional illumination, and related arrangements
US14/058,595 Abandoned US20140148219A1 (en) 2011-08-17 2013-10-21 Emotional illumination, and related arrangements

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/058,595 Abandoned US20140148219A1 (en) 2011-08-17 2013-10-21 Emotional illumination, and related arrangements

Country Status (1)

Country Link
US (2) US8564684B2 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9196028B2 (en) 2011-09-23 2015-11-24 Digimarc Corporation Context-based smartphone sensor logic
US9104467B2 (en) 2012-10-14 2015-08-11 Ari M Frank Utilizing eye tracking to reduce power consumption involved in measuring affective response
US9477993B2 (en) 2012-10-14 2016-10-25 Ari M Frank Training a predictor of emotional response based on explicit voting on content and eye tracking to verify attention
US20150035952A1 (en) * 2013-08-05 2015-02-05 Samsung Electronics Co., Ltd. Photographing apparatus, display apparatus, photographing method, and computer readable recording medium
KR102063102B1 (en) * 2013-08-19 2020-01-07 엘지전자 주식회사 Mobile terminal and control method for the mobile terminal
IL229115A0 (en) 2013-10-28 2014-03-31 Safe Code Systems Ltd Real - time presence verification
US20150215514A1 (en) * 2014-01-24 2015-07-30 Voxx International Corporation Device for wirelessly controlling a camera
US9311639B2 (en) 2014-02-11 2016-04-12 Digimarc Corporation Methods, apparatus and arrangements for device to device communication
US9269009B1 (en) * 2014-05-20 2016-02-23 Amazon Technologies, Inc. Using a front-facing camera to improve OCR with a rear-facing camera
US10334158B2 (en) * 2014-11-03 2019-06-25 Robert John Gove Autonomous media capturing
DE102014222426A1 (en) * 2014-11-04 2016-05-04 Bayerische Motoren Werke Aktiengesellschaft Radio key for adapting a configuration of a means of transportation
US10180339B1 (en) 2015-05-08 2019-01-15 Digimarc Corporation Sensing systems
US10885915B2 (en) 2016-07-12 2021-01-05 Apple Inc. Intelligent software agent
US10805367B2 (en) * 2017-12-29 2020-10-13 Facebook, Inc. Systems and methods for sharing content
US20220329678A1 (en) * 2021-03-02 2022-10-13 Apple Inc. Handheld electronic device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5839000A (en) * 1997-11-10 1998-11-17 Sharp Laboratories Of America, Inc. Automatic zoom magnification control using detection of eyelid condition
US6614466B2 (en) * 2001-02-22 2003-09-02 Texas Instruments Incorporated Telescopic reconstruction of facial features from a speech pattern
US20090041428A1 (en) * 2007-08-07 2009-02-12 Jacoby Keith A Recording audio metadata for captured images

Patent Citations (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6212502B1 (en) * 1998-03-23 2001-04-03 Microsoft Corporation Modeling and projecting emotion and personality from a computer user interface
US6580811B2 (en) * 1998-04-13 2003-06-17 Eyematic Interfaces, Inc. Wavelet-based facial motion capture for avatar animation
US6714661B2 (en) * 1998-11-06 2004-03-30 Nevengineering, Inc. Method and system for customizing facial feature tracking using precise landmark finding on a neutral face image
US6947571B1 (en) * 1999-05-19 2005-09-20 Digimarc Corporation Cell phones with optical capabilities, and related applications
US6590996B1 (en) * 2000-02-14 2003-07-08 Digimarc Corporation Color adaptive watermarking
US6990453B2 (en) * 2000-07-31 2006-01-24 Landmark Digital Services Llc System and methods for recognizing sound and music signals in high noise and distortion
US7359889B2 (en) * 2001-03-02 2008-04-15 Landmark Digital Services Llc Method and apparatus for automatically creating database for use in automated media recognition system
US20040001616A1 (en) * 2002-06-27 2004-01-01 Srinivas Gutta Measurement of content ratings through vision and speech recognition
US20100037187A1 (en) * 2002-07-22 2010-02-11 Verizon Services Corp. Methods and apparatus for controlling a user interface based on the emotional state of a user
US7874983B2 (en) * 2003-01-27 2011-01-25 Motorola Mobility, Inc. Determination of emotional and physiological states of a recipient of a communication
US20070066916A1 (en) * 2005-09-16 2007-03-22 Imotions Emotion Technology Aps System and method for determining human emotion by analyzing eye properties
US20070201725A1 (en) * 2006-02-24 2007-08-30 Eran Steinberg Digital Image Acquisition Control and Correction Method and Apparatus
US20080174570A1 (en) * 2006-09-06 2008-07-24 Apple Inc. Touch Screen Device, Method, and Graphical User Interface for Determining Commands by Applying Heuristics
US20080212831A1 (en) * 2007-03-02 2008-09-04 Sony Ericsson Mobile Communications Ab Remote control of an image capturing unit in a portable electronic device
JP2008234401A (en) * 2007-03-22 2008-10-02 Fujifilm Corp User interface device and its operation control method
US20080309796A1 (en) * 2007-06-13 2008-12-18 Sony Corporation Imaging device, imaging method and computer program
US20090002512A1 (en) * 2007-06-28 2009-01-01 Sony Corporation Image pickup apparatus, image pickup method, and program thereof
US20090002178A1 (en) * 2007-06-29 2009-01-01 Microsoft Corporation Dynamic mood sensing
US20090112616A1 (en) * 2007-10-30 2009-04-30 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Polling for interest in computational user-health test output
US20090118593A1 (en) * 2007-11-07 2009-05-07 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Determining a demographic characteristic based on computational user-health testing of a user interaction with advertiser-specified content
US20110212717A1 (en) * 2008-08-19 2011-09-01 Rhoads Geoffrey B Methods and Systems for Content Processing
US20100110265A1 (en) * 2008-11-05 2010-05-06 Sony Corporation Imaging apparatus and display control method thereof
US20100113950A1 (en) * 2008-11-05 2010-05-06 Apple Inc. Seamlessly Embedded Heart Rate Monitor
US20100150434A1 (en) * 2008-12-17 2010-06-17 Reed Alastair M Out of Phase Digital Watermarking in Two Chrominance Directions
US20110034176A1 (en) * 2009-05-01 2011-02-10 Lord John D Methods and Systems for Content Processing
US20110007142A1 (en) * 2009-07-09 2011-01-13 Microsoft Corporation Visual representation expression based on player expression
US20120242818A1 (en) * 2009-07-15 2012-09-27 Mediatek Inc. Method for operating electronic device and electronic device using the same
US20110058051A1 (en) * 2009-09-08 2011-03-10 Pantech Co., Ltd. Mobile terminal having photographing control function and photographing control system
US20120154633A1 (en) * 2009-12-04 2012-06-21 Rodriguez Tony F Linked Data Methods and Systems
US20120004575A1 (en) * 2010-06-30 2012-01-05 Sony Ericsson Mobile Communications Ab System and method for indexing content viewed on an electronic device
US20120046071A1 (en) * 2010-08-20 2012-02-23 Robert Craig Brandis Smartphone-based user interfaces, such as for browsing print media

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Cohen, et al, "Facial Expression Recognition from Video Sequences: Temporal and Static Modeling," Computer Vision and Image Understanding 91 (2003), pp. 160-187. *
Robinson, The Emotional Computer, Ninth Int'l Conference on Pervasive Computing, Jun. 2011. *
Stan Z. Li and Anil K. Jain, ed., "Handbook of Face Recognition", 2005, Springer, pp. 247-243 (chapter 11). *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9348479B2 (en) * 2011-12-08 2016-05-24 Microsoft Technology Licensing, Llc Sentiment aware user interface customization
US9378290B2 (en) 2011-12-20 2016-06-28 Microsoft Technology Licensing, Llc Scenario-adaptive input method editor
US10108726B2 (en) 2011-12-20 2018-10-23 Microsoft Technology Licensing, Llc Scenario-adaptive input method editor
US9921665B2 (en) 2012-06-25 2018-03-20 Microsoft Technology Licensing, Llc Input method editor application platform
US10867131B2 (en) 2012-06-25 2020-12-15 Microsoft Technology Licensing Llc Input method editor application platform
US9767156B2 (en) 2012-08-30 2017-09-19 Microsoft Technology Licensing, Llc Feature-based candidate selection
US10656957B2 (en) 2013-08-09 2020-05-19 Microsoft Technology Licensing, Llc Input method editor providing language assistance
US11816678B2 (en) 2020-06-26 2023-11-14 Capital One Services, Llc Systems and methods for providing user emotion information to a customer service provider

Also Published As

Publication number Publication date
US20130044233A1 (en) 2013-02-21
US20140148219A1 (en) 2014-05-29

Legal Events

Date Code Title Description
AS Assignment

Owner name: DIGIMARC CORPORATION, OREGON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BAI, YANG;REEL/FRAME:027299/0847

Effective date: 20111129

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20211022