CN103105930A - Non-contact type intelligent inputting method based on video images and device using the same - Google Patents

Non-contact type intelligent inputting method based on video images and device using the same

Info

Publication number
CN103105930A
CN103105930A CN2013100167058A CN201310016705A
Authority
CN
China
Prior art keywords
user
keyboard
video image
input
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013100167058A
Other languages
Chinese (zh)
Inventor
王东琳
杜学亮
郭若杉
林啸
蒿杰
倪素萍
张森
林忱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS filed Critical Shenyang Institute of Automation of CAS
Priority to CN2013100167058A priority Critical patent/CN103105930A/en
Publication of CN103105930A publication Critical patent/CN103105930A/en
Pending legal-status Critical Current

Abstract

The invention discloses an interactive intelligent input method based on video images that allows a user to operate without physical contact. By capturing information such as the motion trajectories, velocities and accelerations of the user's two hands, it reproduces the input function of a physical keyboard and outputs the captured input as a character stream for upper-layer software to invoke in custom applications. The method can also dynamically adjust the operating range of the virtual keyboard as the range of motion of the user's hands changes. When the user's hands are within the working area of the image capture device, a display shows the relative positions of the virtual keyboard and the user's virtual fingers, and the user operates the corresponding keys of the virtual keyboard with pressing gestures. To accommodate the typing habits of different users, the device supervises the virtual-keyboard input and corrects accuracy errors introduced by movements that are not keyboard-input actions. The user needs no auxiliary positioning area or positioning object for input: input actions can be performed at any position within the working area of the image capture device. The method is low in cost, widely applicable, and interactively intelligent.
The invention further provides a device implementing the method.

Description

Non-contact intelligent input method and device based on video images
Technical field
The present invention relates to the field of non-contact intelligent input, and in particular to a non-contact intelligent input method and device based on video images.
Background technology
With the rapid development of smart televisions, smartphones and related fields, touch screens have reached every corner of society. Excellent interaction experience has greatly raised the added value and technological content of consumer products, offering more differentiated and intelligent experiences. At present there are two mainstream virtual input methods: the soft keyboard and the add-on keyboard. Most smartphones (for example Apple's iPhone series) use a touch screen to draw a virtual full keyboard as the input medium. This method has also been adopted in portable devices such as tablet computers, but it requires touch-screen technology, its precision and sensitivity still fall short of user expectations, and frequent contact operation can cause irreversible harm to the user's fingers. The add-on keyboard combines an additional projection device with sensors as the input medium to realize a virtual input function, but the cost rises accordingly and the projected light also affects the user's eyes, so its adoption has been difficult to promote. Neither the soft keyboard nor the add-on keyboard therefore achieves low-cost, non-contact intelligent input, and their precision and response times also significantly degrade the user experience.
Besides these two classes of virtual input, there are virtual input methods based on cameras and video image processing, but existing video-processing methods all restrict how the user may operate. Some require an auxiliary positioning object, such as a keyboard template or a sheet of paper printed with a keyboard image. Others need no auxiliary object but still require a fixed supporting surface such as a desk (see patent documents CN1439151A and US5767842). This demand for auxiliary objects and workplaces does not match users' habits in some settings. For television users, for example, many like to lean back on a sofa or chair in a comfortable, casual posture while watching their favorite programs; requiring a desk-like surface for data input would diminish their experience.
Summary of the invention
The invention provides a non-contact intelligent input method based on video images that enables remote, non-contact keyboard input by the user, automatically adjusts the effective keyboard input range according to the user's habits, and corrects ineffective actions in the user's input. It solves the problem of remote non-contact input, offers a more humanized user experience, and is highly interactive, low in cost, and widely adaptable.
The non-contact intelligent input method based on video images disclosed by the invention comprises:
Step 1: collecting video images, and performing hand detection and hand modeling;
Step 2: identifying the range of motion of the user's two hands from the collected video images, determining an effective input area, and determining the spatial coordinates of a virtual keyboard within the effective input area;
Step 3: recognizing tapping actions of the fingers from the collected video images and the hand model;
Step 4: determining the user's input text from the positions of the tapping actions and the spatial coordinates of the virtual keyboard.
The invention also discloses a non-contact intelligent input system based on video images, comprising:
a video capture device for collecting video images;
a video analysis device for performing hand detection and hand modeling on the collected video images; identifying the range of motion of the user's two hands, determining an effective input area, and determining the spatial coordinates of a virtual keyboard within it; recognizing finger tapping actions from the collected video images and the hand model; and determining the user's input text from the positions of the tapping actions and the keyboard coordinates;
a display device for compositing the user's input text, the virtual keyboard and the video data to be displayed, and showing the result.
Advantages and beneficial effects of the present invention:
The non-contact intelligent input method based on video images requires no extra auxiliary positioning material, which widens the range of application of the system and lowers its implementation cost.
The method does not rely on laser projection to form a virtual keyboard around the hands, avoiding laser stimulation of the eyes and improving user comfort.
The method automatically identifies and locates the fingers within the working area of the image capture device and extracts relevant features (such as motion vectors, velocity and acceleration) from the video stream of the recognized finger movements, so keyboard-free input can be realized quickly and accurately.
The method displays, in picture-in-picture mode, a virtual keyboard matched to the range of the user's finger movements, guiding the user toward accurate taps and enabling intelligent interaction and machine learning of the user's habits.
The method adjusts the effective range of finger motion in real time, so the user can type comfortably whether moving or stationary. From the extracted motion vector information it estimates the user's next likely tap in advance, realizing intelligent judgment and intelligent input.
The invention addresses the poor user experience and the high cost and implementation difficulty in today's consumer field. It makes full use of the collected video information to judge and decide in advance, improving the interactive response time and removing restrictions on the user's working area and auxiliary material. Virtual-keyboard input without auxiliary devices can markedly improve the multi-screen user experience in the consumer field, allowing the user, under a variety of working conditions, remote non-contact keyboard input and remote interaction with the screen.
Description of drawings
Fig. 1 is a structural diagram of the video-image-based non-contact intelligent input system of the present invention;
Fig. 2 is a flow chart of the video-image-based non-contact intelligent input method of the present invention;
Fig. 3 is a schematic diagram of the composite image produced by the video-image-based non-contact intelligent input method of the present invention.
Embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in more detail below with reference to specific embodiments and the accompanying drawings.
Fig. 1 shows an example design of the video-image-based non-contact intelligent input system proposed by the invention. It consists of a display device, a 3D video capture system and a 3D video analysis system.
The 3D video capture system comprises two cameras.
The video analysis system comprises a CPU, ROM, DDR3 SDRAM, a camera interface, an HDMI interface and a serial port. The video capture system and video analysis system may be integrated into the display device, or built from hardware resources already present in the display device (such as a television).
The two cameras simultaneously capture video from two angles and feed the video data to the 3D video analysis system through the camera interface. From the two views, the analysis system tracks and locates the finger movements, recognizes keystroke behavior, generates a schematic image of the keyboard and hands, composites it with the source video arriving on the HDMI interface, and outputs the final video to the display device. At the same time, the keystroke information is output as a character stream over the serial port.
The video analysis system implements the following functions:
1) accepting the video data input by the two cameras;
2) accepting the video source input (a media player, or cable television);
3) analyzing the camera video data;
4) producing keystroke information and outputting the key character stream to application programs;
5) producing the keyboard image, superimposing an image of one or both hands on the keyboard according to the detected hand positions, and producing a text input box image according to the result of the input method;
6) compositing the keyboard image, the text input box image and the video source image, and outputting the composite image to the display device.
The video analysis steps performed by the video analysis system are as follows:
Step 1: when the system enters the keyboard input state, detect and model the hands from the camera video data, and match them against the existing database.
A specific gesture may be used at this point to help the video processing algorithm detect the hands and build the model. For example, one may stipulate that when the left and right hands are simultaneously waved parallel to the display screen, hand detection and modeling is started.
Hand detection proceeds as follows: first, differences between successive frames of the video sequence reveal the moving region of each frame; comparing the color of the moving region with a standard skin color then further segments the hand region in each frame.
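The two-stage segmentation just described, frame differencing followed by a skin-color check, can be sketched as follows. This is a minimal illustration on toy frames; the thresholds and the crude red-dominance skin test are assumptions chosen for illustration, not values taken from the patent.

```python
def motion_mask(prev, curr, thresh=25):
    """Mark pixels whose intensity changed between consecutive frames."""
    return [[abs(c - p) > thresh for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def skin_mask(rgb_frame):
    """Very rough skin test: red dominant over green and blue (an assumption)."""
    return [[(r > 95 and r > g and r > b) for (r, g, b) in row]
            for row in rgb_frame]

def hand_region(prev_gray, curr_gray, curr_rgb):
    """Hand pixels = moving AND skin-colored, per the two-stage scheme."""
    m = motion_mask(prev_gray, curr_gray)
    s = skin_mask(curr_rgb)
    return [[mi and si for mi, si in zip(mr, sr)] for mr, sr in zip(m, s)]
```

A production system would of course operate on real camera frames and a calibrated skin-color model rather than on nested lists.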
Gesture recognition obtains motion features from the changes of the hand region across frames, compares them with the motion features of the stipulated gesture, and decides whether to carry out hand modeling.
After gesture recognition is complete, the hand is modeled, including determining the size of each joint of the hand, and in particular of the finger joints that most affect tapping actions. This improves the accuracy of the supervision algorithm.
Step 2: identify the effective input area from the positions of the hands, and generate a 3D coordinate system.
A specific gesture can again assist the video processing algorithm in detecting the hands and fixing the effective input area. For example, one may stipulate that after hand modeling is complete, holding both hands parallel to the ground and still signals the start of keyboard sizing, while moving both hands parallel to the ground toward or away from each other shrinks or enlarges the keyboard.
The effective input area contains both hands: it is the high-frequency range of two-hand activity plus a certain margin of movement above, below and to either side, and finger keystrokes within this area are considered valid.
Step 3: determine the 3D spatial coordinates of the virtual keyboard.
The virtual keyboard is centered in the effective input area; its tilt angle can be specified in advance according to each user's habits, and its spatial coordinates are then determined.
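As a hypothetical sketch of this step (the patent does not fix the keyboard geometry, so the grid layout, key pitch and tilt convention below are assumptions), the key centers of a tilted virtual keyboard centered in the effective input area could be generated like this:

```python
import math

def keyboard_key_centers(center, rows, cols, pitch, tilt_deg=0.0):
    """Return {(row, col): (x, y, z)} key centers for a grid keyboard.

    The grid lies in a plane through `center`, tilted about the x-axis
    by `tilt_deg` (the user-specified angle); `pitch` is the key spacing.
    """
    cx, cy, cz = center
    t = math.radians(tilt_deg)
    keys = {}
    for r in range(rows):
        for c in range(cols):
            # Offsets in the (untilted) keyboard plane, centered on the grid.
            u = (c - (cols - 1) / 2.0) * pitch
            v = (r - (rows - 1) / 2.0) * pitch
            # The tilt rotates the row offset out of the horizontal plane.
            keys[(r, c)] = (cx + u, cy + v * math.cos(t), cz + v * math.sin(t))
    return keys
```

The resulting coordinates are what the later keystroke-matching step compares tap positions against.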
Step 4: recognize and track the finger movements from the camera video data, and extract the finger motion information.
Because the effective input area has already been determined, only a small part of the large scene captured by the cameras belongs to it, which greatly reduces the difficulty of recognizing and tracking fingers near the effective input area.
First, motion and skin-color information are used to segment the hand region in the input image, and the hand model built in step 1 guides the extraction of the important contour features. Then the finger contour points of the two images (one from each camera) are matched (e.g. fingertips and joints), allowing the 3D coordinates of the feature points to be recovered. Once the 3D coordinates of each finger are determined for every frame of the scene (each frame comprising the two camera images), the motion information of each finger over the video sequence can be determined: current position, velocity and acceleration.
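The per-finger motion information (position, velocity, acceleration) follows from the recovered per-frame 3D coordinates by simple finite differences, as in this sketch; the frame interval `dt` and the plain backward-difference scheme are assumptions, not prescriptions from the patent:

```python
def finger_motion(positions, dt):
    """Finite-difference velocity and acceleration from per-frame 3D positions.

    positions: list of (x, y, z) fingertip coordinates, one per frame.
    dt: time between frames. Returns (velocities, accelerations),
    each one entry shorter than its input, as 3-vectors.
    """
    vel = [tuple((b - a) / dt for a, b in zip(p, q))
           for p, q in zip(positions, positions[1:])]
    acc = [tuple((b - a) / dt for a, b in zip(v, w))
           for v, w in zip(vel, vel[1:])]
    return vel, acc
```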
Step 5: recognize keystroke behavior from the finger motion information, determine the keystroke position of each valid keystroke, compare it with the spatial coordinates of the virtual keyboard, determine the keystroke, and output the keystroke character stream.
A valid keystroke proceeds as follows: the finger is first raised, then falls. While rising, the finger's motion first accelerates and then decelerates to zero; while falling, it likewise first accelerates and then decelerates to zero. A finger exhibiting this motion pattern is considered to have completed one keystroke, and the position where it finally decelerates to zero is taken as the keystroke position. This keystroke position is projected onto the virtual keyboard plane produced in step 3, the projected position is compared with the positions of the keys of the virtual keyboard, and the nearest key is chosen as the key that was hit. Recognition of such valid keystrokes also supports basic action-recognition functions such as confirmation and cancellation.
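Once a keystroke position has been found, mapping it to a key reduces to a nearest-neighbor search after projecting onto the keyboard plane. A minimal sketch, assuming for simplicity that the keyboard lies in a plane of constant z, so that projection simply drops the z coordinate:

```python
def nearest_key(tap_point, key_centers):
    """Map a 3D keystroke landing point to the closest virtual key.

    tap_point: (x, y, z) where the fingertip decelerated to zero.
    key_centers: {key_label: (x, y, z)} from the keyboard layout step.
    The tap is projected along z onto the keyboard plane and compared
    against each key center in (x, y); the nearest key wins.
    """
    tx, ty, _ = tap_point
    return min(key_centers,
               key=lambda k: (key_centers[k][0] - tx) ** 2
                           + (key_centers[k][1] - ty) ** 2)
```

For a tilted keyboard the projection would instead be along the plane normal, but the nearest-key comparison is unchanged.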
Step 6: produce the keyboard image from the spatial coordinates of the virtual keyboard determined in step 3; at the same time, superimpose an image of one or both hands on the keyboard according to the detected finger positions, and produce a text input box image according to the result of the input method.
While the user is typing, the key currently under the user's finger can be highlighted in one color in the displayed keyboard image, and the key just struck shown in another color.
When the user strikes a key, the corresponding text should also appear on the display device, so a text input box and an image of the entered text are produced as well.
Step 7: composite the keyboard image, the text box image and the video source image, and output the composite image to the display device. The composite image is shown in Fig. 3. The display device shows the virtual soft keyboard superimposed on the current picture in real time according to the generated 3D coordinate system, in picture-in-picture form. It also presents, in real time, the position on the virtual keyboard of each actively moving finger of the user according to its position in the coordinate system, providing correction and prompting for the user's key input; and it adjusts the displayed soft-keyboard layout according to the input commands recognized from the tapping gestures, providing quick switching between languages and between upper and lower case.
The cameras may be a high-resolution camera group or video recording equipment, and need not themselves provide a 3D video input function.
After recognizing the tapping actions of the user's fingers, the intelligent input system not only identifies the intended key by comparing the finger tap point with the aforementioned 3D coordinate system, but also outputs the relevant information as a character stream to upper-layer applications to realize further functions.
The intelligent input system also counts the frequency of the user's delete and cancel operations, extracts the user's tapping habits, and builds a user feature database, rejecting spurious movements that are easily misrecognized as intended taps, thereby improving the accuracy and response time of the action recognition system.
The intelligent input system also monitors the range of motion of the user's hands in real time and, according to changes in that range over a period of time, adjusts the 3D coordinate system of the recognition workspace and hence the key interpretation of the user's taps.
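Re-fitting the recognition workspace to the user's recent range of motion can be as simple as recomputing a padded bounding box over the hand positions seen in the last time window. The 2D simplification and the fixed margin in this sketch are assumptions:

```python
def update_workspace(samples, margin=0.5):
    """Recompute the effective input area from recent hand positions.

    samples: list of (x, y) hand positions observed in the last window.
    Returns an axis-aligned bounding box (xmin, ymin, xmax, ymax),
    expanded by `margin` on each side, so the workspace follows the
    user's recent range of motion.
    """
    xs = [p[0] for p in samples]
    ys = [p[1] for p in samples]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```

Calling this periodically over a sliding window of observations lets the keyboard drift with the user as described above.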
Besides recognizing the user's keyboard taps, the intelligent input system supports recognition of common actions such as confirmation, cancellation and page turning.
The specific embodiments described above further explain the objects, technical solutions and beneficial effects of the present invention. It should be understood that they are only specific embodiments of the invention and do not limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (9)

1. A non-contact intelligent input method based on video images, comprising:
Step 1: collecting video images, and performing hand detection and hand modeling;
Step 2: identifying the range of motion of the user's two hands from the collected video images, determining an effective input area, and determining the spatial coordinates of a virtual keyboard within the effective input area;
Step 3: recognizing tapping actions of the fingers from the collected video images and the hand model;
Step 4: determining the user's input text from the positions of the tapping actions and the spatial coordinates of the virtual keyboard.
2. the method for claim 1, is characterized in that, described method also comprises: the text of described dummy keyboard and user's input is presented on display device in real time.
3. the method for claim 1, is characterized in that, is presented on display device after the text of described dummy keyboard and user's input and the demonstration data that display device receives are synthetic.
4. the method for claim 1, is characterized in that, the zone of action of described user's both hands is the high-frequency range of both hands activity, and described effective input area is to comprise both hands and the zone of certain scope of activities up and down thereof.
5. the method for claim 1, is characterized in that, in step 1, according to the video image that collects, detect the certain gestures of hand, and after detecting certain gestures in one's hands, the opponent carries out modeling.
6. the method for claim 1, is characterized in that, described method comprises that also the position according to user's finger is presenting its position on dummy keyboard in real time on display device.
7. the method for claim 1, is characterized in that, shown in the collection of video image adopt high-resolution camera group or video record equipment.
8. the method for claim 1, it is characterized in that, described method also comprises the frequency of the deletion cancellation operation of counting user, extracts the custom that the user knocks, and set up user feature database, reject the violate-action that the user is easily identified by mistake in the intention hammer action.
9. A non-contact intelligent input system based on video images, comprising:
a video capture device for collecting video images;
a video analysis device for performing hand detection and hand modeling on the collected video images; identifying the range of motion of the user's two hands, determining an effective input area, and determining the spatial coordinates of a virtual keyboard within it; recognizing finger tapping actions from the collected video images and the hand model; and determining the user's input text from the positions of the tapping actions and the keyboard coordinates;
a display device for compositing the user's input text, the virtual keyboard and the video data to be displayed, and showing the result.
CN2013100167058A 2013-01-16 2013-01-16 Non-contact type intelligent inputting method based on video images and device using the same Pending CN103105930A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013100167058A CN103105930A (en) 2013-01-16 2013-01-16 Non-contact type intelligent inputting method based on video images and device using the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013100167058A CN103105930A (en) 2013-01-16 2013-01-16 Non-contact type intelligent inputting method based on video images and device using the same

Publications (1)

Publication Number Publication Date
CN103105930A true CN103105930A (en) 2013-05-15

Family

ID=48313855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013100167058A Pending CN103105930A (en) 2013-01-16 2013-01-16 Non-contact type intelligent inputting method based on video images and device using the same

Country Status (1)

Country Link
CN (1) CN103105930A (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013189437A2 (en) * 2013-05-16 2013-12-27 中兴通讯股份有限公司 Keyboard customization method and device for touch screen terminal
CN104199550A (en) * 2014-08-29 2014-12-10 福州瑞芯微电子有限公司 Man-machine interactive type virtual touch device, system and method
CN104423627A (en) * 2013-09-02 2015-03-18 联想(北京)有限公司 Information processing method and electronic equipment
CN104571482A (en) * 2013-10-22 2015-04-29 中国传媒大学 Digital device control method based on somatosensory recognition
CN104866075A (en) * 2014-02-21 2015-08-26 联想(北京)有限公司 Input method, device and electronic equipment
CN105224069A (en) * 2014-07-03 2016-01-06 王登高 The device of a kind of augmented reality dummy keyboard input method and use the method
CN105323619A (en) * 2014-08-04 2016-02-10 深圳市同方多媒体科技有限公司 Gesture control method and gesture control television based on analog button board
CN106325488A (en) * 2015-07-09 2017-01-11 北京搜狗科技发展有限公司 Input method, input device, server and input system
CN106358088A (en) * 2015-07-20 2017-01-25 阿里巴巴集团控股有限公司 Input method and device
CN107122044A (en) * 2017-04-01 2017-09-01 原国太郎 Input equipment and input method
CN107960124A (en) * 2016-05-16 2018-04-24 深圳维盛半导体科技有限公司 A kind of mouse and method of DPI automatic adjustments
WO2018098861A1 (en) * 2016-11-29 2018-06-07 歌尔科技有限公司 Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus
CN104866075B (en) * 2014-02-21 2018-08-31 联想(北京)有限公司 A kind of input method, device and electronic equipment
CN108519855A (en) * 2018-04-17 2018-09-11 北京小米移动软件有限公司 Characters input method and device
WO2019000153A1 (en) * 2017-06-26 2019-01-03 Orange Method for displaying virtual keyboard on mobile terminal screen
CN109164924A (en) * 2018-08-29 2019-01-08 陈介水 A kind of character entry method and the system for identifying character entry method
CN109669537A (en) * 2018-12-03 2019-04-23 浙江万里学院 A kind of man-machine interactive system based on computer virtual interface
CN109992101A (en) * 2018-01-02 2019-07-09 西安中兴新软件有限责任公司 A kind of method and device, the terminal of contactless interaction
CN110096166A (en) * 2019-04-23 2019-08-06 广东工业大学华立学院 A kind of virtual input method
CN111801725A (en) * 2018-09-12 2020-10-20 株式会社阿尔法代码 Image display control device and image display control program
CN112183447A (en) * 2020-10-15 2021-01-05 尚腾 Information input system based on image recognition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1664755A (en) * 2005-03-11 2005-09-07 西北工业大学 Video recognition input system
CN101140617A (en) * 2007-09-29 2008-03-12 东莞市步步高教育电子产品有限公司 Electronic equipments and text inputting method
US20120271810A1 (en) * 2009-07-17 2012-10-25 Erzhong Liu Method for inputting and processing feature word of file content

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013189437A3 (en) * 2013-05-16 2014-04-24 中兴通讯股份有限公司 Keyboard customization method and device for touch screen terminal
CN104168363A (en) * 2013-05-16 2014-11-26 中兴通讯股份有限公司 Keyboard customization method and apparatus of touch-screen mobile phone
CN104168363B (en) * 2013-05-16 2019-02-15 中兴通讯股份有限公司 A kind of the keyboard method for customizing and device of touch-screen mobile phone
WO2013189437A2 (en) * 2013-05-16 2013-12-27 中兴通讯股份有限公司 Keyboard customization method and device for touch screen terminal
CN104423627A (en) * 2013-09-02 2015-03-18 联想(北京)有限公司 Information processing method and electronic equipment
CN104571482B (en) * 2013-10-22 2018-05-29 中国传媒大学 A kind of digital device control method based on somatosensory recognition
CN104571482A (en) * 2013-10-22 2015-04-29 中国传媒大学 Digital device control method based on somatosensory recognition
CN104866075A (en) * 2014-02-21 2015-08-26 联想(北京)有限公司 Input method, device and electronic equipment
CN104866075B (en) * 2014-02-21 2018-08-31 联想(北京)有限公司 A kind of input method, device and electronic equipment
CN105224069A (en) * 2014-07-03 2016-01-06 王登高 The device of a kind of augmented reality dummy keyboard input method and use the method
CN105224069B (en) * 2014-07-03 2019-03-19 王登高 A kind of augmented reality dummy keyboard input method and the device using this method
CN105323619A (en) * 2014-08-04 2016-02-10 深圳市同方多媒体科技有限公司 Gesture control method and gesture control television based on analog button board
CN104199550B (en) * 2014-08-29 2017-05-17 福州瑞芯微电子股份有限公司 Virtual keyboard operation device, system and method
CN104199550A (en) * 2014-08-29 2014-12-10 福州瑞芯微电子有限公司 Man-machine interactive type virtual touch device, system and method
WO2017005207A1 (en) * 2015-07-09 2017-01-12 北京搜狗科技发展有限公司 Input method, input device, server and input system
CN106325488A (en) * 2015-07-09 2017-01-11 北京搜狗科技发展有限公司 Input method, input device, server and input system
CN106358088A (en) * 2015-07-20 2017-01-25 阿里巴巴集团控股有限公司 Input method and device
CN107960124B (en) * 2016-05-16 2021-02-26 深圳维盛半导体科技有限公司 Mouse and method for automatically adjusting DPI
CN107960124A (en) * 2016-05-16 2018-04-24 深圳维盛半导体科技有限公司 A kind of mouse and method of DPI automatic adjustments
WO2018098861A1 (en) * 2016-11-29 2018-06-07 歌尔科技有限公司 Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus
CN107122044A (en) * 2017-04-01 2017-09-01 原国太郎 Input equipment and input method
WO2019000153A1 (en) * 2017-06-26 2019-01-03 Orange Method for displaying virtual keyboard on mobile terminal screen
CN109992101A (en) * 2018-01-02 2019-07-09 西安中兴新软件有限责任公司 A kind of method and device, the terminal of contactless interaction
CN108519855A (en) * 2018-04-17 2018-09-11 北京小米移动软件有限公司 Characters input method and device
CN109164924A (en) * 2018-08-29 2019-01-08 陈介水 A kind of character entry method and the system for identifying character entry method
CN109164924B (en) * 2018-08-29 2022-06-24 陈介水 Character input method and system for recognizing character input method
CN111801725A (en) * 2018-09-12 2020-10-20 株式会社阿尔法代码 Image display control device and image display control program
CN109669537A (en) * 2018-12-03 2019-04-23 浙江万里学院 A kind of man-machine interactive system based on computer virtual interface
CN110096166A (en) * 2019-04-23 2019-08-06 广东工业大学华立学院 A kind of virtual input method
CN112183447A (en) * 2020-10-15 2021-01-05 尚腾 Information input system based on image recognition

Similar Documents

Publication Publication Date Title
CN103105930A (en) Non-contact type intelligent inputting method based on video images and device using the same
US20220164032A1 (en) Enhanced Virtual Touchpad
US9857868B2 (en) Method and system for ergonomic touch-free interface
CN105224069B (en) A kind of augmented reality dummy keyboard input method and the device using this method
EP3527121B1 (en) Gesture detection in a 3d mapping environment
US9760214B2 (en) Method and apparatus for data entry input
CN104199550B (en) Virtual keyboard operation device, system and method
CN102779000B (en) User interaction system and method
US20120169583A1 (en) Scene profiles for non-tactile user interfaces
US9405400B1 (en) Method and apparatus of providing and customizing data input touch screen interface to multiple users
Guimbretière et al. Bimanual marking menu for near surface interactions
CN103858074A (en) System and method for interfacing with a device via a 3d display
EP3007441A1 (en) Interactive displaying method, control method and system for achieving displaying of a holographic image
CN104898879A (en) Method and apparatus for data input
CN102880304A (en) Character inputting method and device for portable device
CN104460307B (en) A kind of input block, wearable intelligent equipment and the input method in integrative display area
US20210117078A1 (en) Gesture Input Method for Wearable Device and Wearable Device
CN104571823A (en) Non-contact virtual human-computer interaction method based on smart television set
CN101847057A (en) Method for touchpad to acquire input information
KR20100075281A (en) Apparatus having function of space projection and space touch and the controlling method thereof
Zhang et al. A novel human-3DTV interaction system based on free hand gestures and a touch-based virtual interface
CN105630134A (en) Operation event identification method and apparatus
CN111007977A (en) Intelligent virtual interaction method and device
CN102981662A (en) Hand-hold device and method of adjusting position information
WO2023273638A1 (en) Content display method and apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130515