CN103207662A - Method and device for obtaining physiological characteristic information - Google Patents
- Publication number: CN103207662A
- Authority: CN (China)
- Prior art keywords: evaluation, estimate, file, information, physiological characteristic
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention provides a method and a device for obtaining physiological characteristic information. The method comprises the following steps: when a playback device plays a file, detecting whether there is a user corresponding to the file; and, when there is a user corresponding to the file, collecting at least one kind of physiological characteristic information of the user, where the at least one kind of physiological characteristic information corresponds to the file or to a part of the content of the file and is used to characterize the user's evaluation of the file or of that part of its content.
Description
Technical field
The present invention relates to the technical field of video clipping, and in particular to a method and a device for obtaining physiological characteristic information.
Background art
With the continuous development of playback technology, in order to better satisfy the preferences of different users, it is common practice, after a film or television program has been played, to collect users' evaluations of that film or program, to clip out the most popular segment of the film or program according to those evaluations, and to release that segment so as to attract more users.
In the process of realizing the present invention, the applicant found that the prior art has at least the following technical problems:
The collection of user evaluations is done entirely by hand and cannot be carried out automatically and in real time, and the following problems may be encountered in the manual collection process:
1. After finishing a film or television program, users may be unwilling to rate it, so that no rating can be collected from them.
2. Even if a user's rating is collected, that rating is only the user's subjective assessment; the evaluation result is inaccurate, the evaluation structure is rather one-dimensional, and it cannot fully reflect whether the user likes the content or to what degree.
Summary of the invention
In view of this, the present invention provides a method and a device for obtaining physiological characteristic information, so as to solve the technical problem that the prior art cannot collect users' evaluations automatically and in real time.
One aspect of the present invention provides a method for obtaining physiological characteristic information, comprising:
when a playback device is playing a file, detecting whether there is a user corresponding to the file;
when there is a user corresponding to the file, collecting at least one kind of physiological characteristic information of the user, where the at least one kind of physiological characteristic information corresponds to the file or to a part of the content of the file and is used to characterize the user's evaluation of the file or of that part of its content.
Optionally, after collecting the at least one kind of physiological characteristic information of the user when there is a user corresponding to the file, the method further comprises: processing the at least one kind of physiological characteristic information to obtain the user's evaluation value for the file or for a part of the content of the file.
Optionally, the at least one kind of physiological characteristic information comprises: the user's sound information, facial expression information, eye expression information and/or body movement information.
Optionally, processing the at least one kind of physiological characteristic information to obtain the user's evaluation value for the file or for a part of the content of the file specifically comprises: processing the user's sound information, facial expression information, eye expression information and/or body movement information to obtain a first sub-evaluation value corresponding to the sound information, a second sub-evaluation value corresponding to the facial expression information, a third sub-evaluation value corresponding to the eye expression information and/or a fourth sub-evaluation value corresponding to the body movement information; and, based on a preset rule, processing the first sub-evaluation value, the second sub-evaluation value, the third sub-evaluation value and/or the fourth sub-evaluation value to obtain the user's evaluation value for the file or for a part of the content of the file.
Optionally, when the at least one kind of physiological characteristic information is a single kind of physiological characteristic information, the evaluation value is specifically the first sub-evaluation value, the second sub-evaluation value, the third sub-evaluation value or the fourth sub-evaluation value.
Optionally, when the at least one kind of physiological characteristic information is two, three or four kinds among the sound information, the facial expression information, the eye expression information and the body movement information, processing the first sub-evaluation value, the second sub-evaluation value, the third sub-evaluation value and/or the fourth sub-evaluation value based on a preset rule is specifically: multiplying the sub-evaluation value corresponding to each kind of physiological characteristic information by its corresponding weight to obtain a weighted score for each kind of physiological characteristic information; and adding the weighted scores of all the kinds of physiological characteristic information to obtain the user's evaluation value for the file or for a part of the content of the file.
Optionally, after processing the at least one kind of physiological characteristic information to obtain the user's evaluation value for the file or for a part of the content of the file, the method further comprises: clipping a part of the content out of the file according to the evaluation value to obtain at least one sub-file; and synthesizing the at least one sub-file into at least one preview file.
Another aspect of the present invention provides a device for obtaining physiological characteristic information, comprising:
a detection module, configured to detect, when a playback device is playing a file, whether there is a user corresponding to the file;
a collection module, configured to collect at least one kind of physiological characteristic information of the user when there is a user corresponding to the file, where the at least one kind of physiological characteristic information corresponds to the file or to a part of the content of the file and is used to characterize the user's evaluation of the file or of that part of its content.
Optionally, the device further comprises: an acquisition module, configured to process the at least one kind of physiological characteristic information to obtain the user's evaluation value for the file or for a part of the content of the file.
Optionally, the at least one kind of physiological characteristic information comprises: the user's sound information, facial expression information, eye expression information and/or body movement information.
Optionally, the acquisition module specifically further comprises: a first processing module, configured to process the user's sound information, facial expression information, eye expression information and/or body movement information to obtain a first sub-evaluation value corresponding to the sound information, a second sub-evaluation value corresponding to the facial expression information, a third sub-evaluation value corresponding to the eye expression information and/or a fourth sub-evaluation value corresponding to the body movement information; and a second processing module, configured to process, based on a preset rule, the first sub-evaluation value, the second sub-evaluation value, the third sub-evaluation value and/or the fourth sub-evaluation value to obtain the user's evaluation value for the file or for a part of the content of the file.
Optionally, when the at least one kind of physiological characteristic information is a single kind of physiological characteristic information, the evaluation value is specifically the first sub-evaluation value, the second sub-evaluation value, the third sub-evaluation value or the fourth sub-evaluation value.
Optionally, when the at least one kind of physiological characteristic information is two, three or four kinds among the sound information, the facial expression information, the eye expression information and the body movement information, the second processing module specifically further comprises: a first obtaining module, configured to multiply the sub-evaluation value corresponding to each kind of physiological characteristic information by its corresponding weight to obtain a weighted score for each kind of physiological characteristic information; and a second obtaining module, configured to add the weighted scores of all the kinds of physiological characteristic information to obtain the user's evaluation value for the file or for a part of the content of the file.
Optionally, the device further comprises: a clipping module, configured to clip the file or a part of the content of the file according to the evaluation value to obtain at least one sub-file; and a synthesis module, configured to synthesize the at least one sub-file into at least one preview file.
Through the one or more technical solutions provided in the embodiments of the present application, the present application achieves at least the following technical effects:
In one or more technical solutions of the present invention, while a user is watching a played file, a collection device of the playback device automatically collects, in real time, at least one kind of physiological characteristic information of the user, which is used to characterize the user's evaluation of the file or of a part of its content; instead of passively relying on manually collected user ratings, the user's physiological characteristic information can be actively collected for scoring.
Further, after the user's physiological characteristic information has been actively collected, because one or more kinds of this physiological characteristic information can be processed in flexible and varied ways, the user's evaluation of the file or of a part of its content can be characterized accurately, and whether, and to what degree, the user likes the content can be reflected comprehensively and objectively.
Further, the file or a part of its content is clipped according to the user's evaluation of the file or of that part of its content; because the evaluations obtained after collecting and processing the physiological characteristic information are relatively objective and accurate, the clipping result is accurate and can satisfy the demands of most users.
Description of drawings
Fig. 1 is a flow chart of the method for obtaining physiological characteristic information in embodiment one of the present invention;
Fig. 2 is a schematic diagram of the device for obtaining physiological characteristic information in embodiment two of the present invention.
Embodiment
In order to solve the technical problem in the prior art that users' evaluations cannot be collected automatically and in a timely manner, the embodiments of the present invention propose a method and a device for obtaining physiological characteristic information. The main realization principle and specific implementation process of the embodiments of the present invention, and the beneficial effects they can achieve, are explained in detail below in conjunction with the accompanying drawings.
Embodiment one:
With reference to Fig. 1, a method for obtaining physiological characteristic information is described, specifically comprising the steps of:
Step 101: when a playback device is playing a file, detecting whether there is a user corresponding to the file.
Step 102: when there is a user corresponding to the file, collecting at least one kind of physiological characteristic information of the user, where the at least one kind of physiological characteristic information corresponds to the file or to a part of the content of the file and is used to characterize the user's evaluation of the file or of that part of its content.
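The two steps above can be sketched as follows. This is a minimal illustration only: the `detect_user` and `collect_physiological_info` helpers, the frame layout, and the signal labels are hypothetical stand-ins for the playback device's actual collection hardware.

```python
# Sketch of steps 101-102: detect a user corresponding to the played file,
# then collect whichever of the four signal kinds is currently observable.

def detect_user(frame) -> bool:
    """Step 101 (hypothetical): face-presence check on a captured frame."""
    return frame is not None and frame.get("face_present", False)

def collect_physiological_info(frame) -> dict:
    """Step 102 (hypothetical): keep only the physiological signals present."""
    kinds = ("sound", "facial expression", "eye expression", "body movement")
    return {k: frame[k] for k in kinds if k in frame}

frame = {"face_present": True, "sound": "loud laugh", "facial expression": "laugh"}
if detect_user(frame):                        # step 101
    info = collect_physiological_info(frame)  # step 102
    print(sorted(info))  # ['facial expression', 'sound']
```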
In step 101, the played file can be of many types, such as a song, a television series, a film, a music video, a photograph or an e-book, as long as it can be played; and the users watching the file can be an ordinary user group or a specific user group, such as a group established according to social relations (e.g. following or liking relations), geographic position, and so on.
The at least one kind of physiological characteristic information in step 102 can be sound information, facial expression information, eye expression information and/or body movement information; while the user is watching the content of the file, the corresponding physiological characteristic information appears accordingly.
As for the user's sound information, such as laughing or wailing, the playback device, or a collection device connected to the playback device, can collect the volume of the sound, the time for which it lasts, and so on.
As for the user's facial expression information, the playback device, or a collection device connected to it, can obtain images or video containing the user; the playback device can then analyze the images or video to obtain the user's facial expression information, such as bewilderment, terror, happiness or sadness, together with the degree of the expression, e.g. slightly happy versus very happy.
As for the user's eye expression information, the playback device, or a collection device connected to it, can collect what the user's eyes convey, such as bewilderment, terror, happiness or sadness, together with the degree of the above expressions.
As for the user's body movement information, the playback device, or a collection device connected to it, can collect the user's body movements, such as rocking with laughter, or wiping away tears when sad.
Such physiological characteristic information can characterize the user's evaluation of a given fragment objectively and accurately, so by collecting it the user's evaluation of the file, or of a part of its content, can be characterized objectively.
Therefore, while the user is watching a film, the playback device, or a collection device connected to it, can be actively used to collect the user's physiological characteristic information.
After this physiological characteristic information has been collected, the user's evaluation value for the file, or for a part of its content, can be obtained by processing it.
First, the physiological characteristic information is processed to obtain the corresponding evaluation results.
By processing the user's sound information, facial expression information, eye expression information and/or body movement information, a first sub-evaluation value corresponding to the sound information, a second sub-evaluation value corresponding to the facial expression information, a third sub-evaluation value corresponding to the eye expression information and/or a fourth sub-evaluation value corresponding to the body movement information can be obtained.
The above processing covers the following cases:
First case:
While watching a film, the user may display only one kind of physiological characteristic, i.e. only one of the four kinds (sound information, facial expression information, eye expression information, body movement information) and none of the others. In that case only that one kind of physiological characteristic is processed and the corresponding sub-evaluation value is obtained: processing the sound information yields the first sub-evaluation value, processing the facial expression information yields the second sub-evaluation value, processing the eye expression information yields the third sub-evaluation value, and processing the body movement information yields the fourth sub-evaluation value.
As a concrete example, when a user watches a film and the film, or a certain fragment of it, has a comic effect, the user will display laughter of varying degrees of happiness. In order to evaluate the user's degree of happiness, a mapping table from laughter to happiness values can be established, e.g.: smiling corresponds to a happiness value of 60 points; laughing corresponds to 70 points; laughing loudly corresponds to 80 points; guffawing corresponds to 90 points.
After the user's laughter has been captured by a sound collection device, such as a recording device, it can be determined by analyzing several audio-related parameters such as volume, frequency or pitch whether the user is smiling, laughing, laughing loudly or guffawing. If the analysis shows that the user is laughing loudly, the first sub-evaluation value obtained is 80 points; and since only the user's sound information was collected, the first sub-evaluation value is the user's evaluation value for the film or for that part of it, i.e. the user's evaluation value for the film or for that part of it is 80 points.
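The single-signal case just described can be sketched as follows. The score table (smile = 60 up to guffaw = 90) comes from the example above; the volume-only `classify_laugh` heuristic and its thresholds are hypothetical, since the text mentions volume, frequency and pitch without giving concrete decision rules.

```python
# Sketch of the laughter-to-happiness mapping table and the first
# sub-evaluation value derived from sound information alone.

LAUGH_SCORES = {      # mapping table from the example in the text
    "smile": 60,
    "laugh": 70,
    "loud laugh": 80,
    "guffaw": 90,
}

def classify_laugh(volume_db: float) -> str:
    """Toy classifier: bucket laughter by volume alone (hypothetical thresholds)."""
    if volume_db < 40:
        return "smile"
    if volume_db < 55:
        return "laugh"
    if volume_db < 70:
        return "loud laugh"
    return "guffaw"

def first_sub_evaluation(volume_db: float) -> int:
    """First sub-evaluation value: the happiness score of the detected laugh."""
    return LAUGH_SCORES[classify_laugh(volume_db)]

print(first_sub_evaluation(62))  # a "loud laugh" maps to 80 points
```

When sound is the only signal collected, this first sub-evaluation value is directly the user's evaluation value, as the text states.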
Second case:
While watching a film, the user may display two, three or four of the four kinds of physiological characteristic information.
In that case these two, three or four kinds of physiological characteristic information are processed, and the corresponding sub-evaluation values are obtained.
Further, once the sub-evaluation values corresponding to the various kinds of physiological characteristic information have been obtained, they are processed based on a preset rule to obtain the user's evaluation value for the file or for a part of its content.
As in the example above, when a user watches a film and the film, or a certain fragment of it, has a comic effect, the user will display laughter of varying degrees of happiness; to evaluate the user's degree of happiness, a mapping table from laughter to happiness values is established: smiling corresponds to a happiness value of 60 points; laughing to 70 points; laughing loudly to 80 points; guffawing to 90 points.
After the user's laughter has been captured by a sound collection device, such as a recording device, analysis of volume, frequency, pitch and other audio-related parameters determines whether the user is smiling, laughing, laughing loudly or guffawing. If the analysis shows that the user is laughing loudly, the first sub-evaluation value obtained is 80 points.
Meanwhile, when a user watches such a comic fragment, the facial expression will also display different degrees of happiness. In order to evaluate the user's degree of happiness, a separate mapping table from facial expression to happiness value can also be established; this mapping table may be the same as, or different from, the mapping table for laughter, e.g.: when the user's facial expression is a smile, the corresponding happiness value is 50 points; a laugh, 60 points; a loud laugh, 70 points; and a guffaw, 80 points.
With a video or image capture device, the varying degrees of happiness displayed by the user's facial expression can be obtained; by analyzing the images or video it can be determined whether the user is smiling, laughing, laughing loudly or guffawing. If the analysis shows that the user is laughing loudly, the second sub-evaluation value obtained is 70 points.
The preset rule can be configured to process these sub-evaluation values with weights, or to process them with an averaging rule; here, processing with weights is used as the example.
The specific processing is as follows:
First, the sub-evaluation value corresponding to each kind of physiological characteristic information is multiplied by its corresponding weight to obtain the weighted score of that kind of physiological characteristic information;
Second, the weighted scores of all the kinds of physiological characteristic information are added to obtain the user's evaluation value for the file or for a part of its content.
Depending on the physiological characteristic information collected, the processing breaks down into the following cases:
First case:
When the collected physiological characteristic information is only one of sound information, facial expression information, eye expression information or body movement information, as in the sound-information example above:
Since only the user's sound information was collected, the first sub-evaluation value is the user's evaluation value for the film or for that part of it, i.e. the user's evaluation value is 80 points.
Second case:
When the collected physiological characteristic information is two, three or four kinds among sound information, facial expression information, eye expression information and body movement information; two kinds are taken as the example here.
Take two of the kinds of physiological characteristic information listed above: sound information and facial expression information.
As collected above, the sub-evaluation value corresponding to the sound information is 80 points, and the sub-evaluation value corresponding to the facial expression information is 70 points.
The weights of the sound information and the facial expression information can be the same, e.g. both 50%, or different, e.g. 75% for the sound information and 25% for the facial expression information.
When the weights of the sound information and the facial expression information are the same:
the weighted score of the sound information is 80 × 50% = 40;
the weighted score of the facial expression information is 70 × 50% = 35;
therefore the user's evaluation value for the file or for a part of its content is 40 + 35 = 75.
When the weights of the sound information and the facial expression information are different:
the weighted score of the sound information is 80 × 75% = 60;
the weighted score of the facial expression information is 70 × 25% = 17.5;
therefore the user's evaluation value for the file or for a part of its content is 60 + 17.5 = 77.5.
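The weighted preset rule and both worked examples above can be reproduced with a short sketch; the `fuse_evaluations` helper is a hypothetical name for the processing performed by the first and second obtaining modules (multiply each sub-evaluation value by its weight, then sum).

```python
# Sketch of the preset weighting rule: each sub-evaluation value is multiplied
# by its weight and the weighted scores are summed into one evaluation value.

def fuse_evaluations(sub_values: dict, weights: dict) -> float:
    """Combine per-signal sub-evaluation values into a single evaluation value."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 100%"
    return sum(sub_values[k] * weights[k] for k in sub_values)

subs = {"sound": 80, "facial expression": 70}

# Equal weights: 80 x 50% + 70 x 50% = 75
print(fuse_evaluations(subs, {"sound": 0.5, "facial expression": 0.5}))    # 75.0

# Unequal weights: 80 x 75% + 70 x 25% = 77.5
print(fuse_evaluations(subs, {"sound": 0.75, "facial expression": 0.25}))  # 77.5
```

An averaging rule, also mentioned in the text, is simply the special case in which every collected signal gets the same weight.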
It should be noted that the examples listed here are only intended to describe and explain the present application and not to limit it; the present application can equally take as examples one, two, three or all of the other kinds of physiological characteristic information listed above, and such examples should likewise fall within the scope of protection of the present application.
In addition, besides weights, the present application can also use other methods, such as an averaging rule, to process the sub-evaluation values, and such examples should likewise fall within the scope of protection of the present application.
Since the evaluation value here is obtained by actively collecting and then processing the user's physiological characteristic information, and since physiological characteristic information can characterize the user's evaluation of a fragment objectively and accurately, the evaluation value obtained by processing it can truly reflect the user's evaluation of the played file or of a part of its content.
Further, once the user's evaluation value for the file or for a part of its content has been obtained, a part of the content can be clipped out of the file according to the evaluation value to obtain at least one sub-file, and the at least one sub-file can then be synthesized into at least one preview file.
This preview file can be released in a targeted manner in combination with the program the current user is watching, or played as recommended material, so as to attract more attention.
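The clipping and synthesis step can be sketched as follows, assuming per-segment evaluation values are available; the segment boundaries, the selection threshold and the `synthesize_preview` stand-in are all hypothetical illustrations of the step described above.

```python
# Sketch of the clipping step: per-segment evaluation values select the
# best-rated parts of a file as sub-files, which form a preview.

def clip_by_evaluation(segments, threshold=75.0):
    """Keep the (start, end) spans whose evaluation value reaches the threshold."""
    return [(start, end) for start, end, score in segments if score >= threshold]

def synthesize_preview(subfiles):
    """Stand-in for preview synthesis: report the clips and their total length."""
    total = sum(end - start for start, end in subfiles)
    return {"clips": subfiles, "duration_s": total}

# (start_s, end_s, evaluation value) for a few fragments of one file
segments = [(0, 30, 62.0), (30, 55, 77.5), (55, 90, 80.0), (90, 120, 40.0)]

preview = synthesize_preview(clip_by_evaluation(segments))
print(preview)  # two clips totalling 60 seconds
```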
The above embodiment describes in detail the method for obtaining physiological characteristic information and the series of processing operations performed on that information to evaluate a file objectively and accurately. The following embodiment describes the device corresponding to the processing in the above embodiment.
Embodiment two:
Embodiment two describes a device for obtaining physiological characteristic information which, with reference to Fig. 2, comprises the detection module and the collection module set out in the summary above.
The at least one kind of physiological characteristic information comprises: the user's sound information, facial expression information, eye expression information and/or body movement information.
Besides the above modules, the device also comprises an acquisition module, configured to process the at least one kind of physiological characteristic information to obtain the user's evaluation value for the file or for a part of its content.
The acquisition module further comprises a first processing module and a second processing module.
The first processing module is configured to process the user's sound information, facial expression information, eye expression information and/or body movement information to obtain a first sub-evaluation value corresponding to the sound information, a second sub-evaluation value corresponding to the facial expression information, a third sub-evaluation value corresponding to the eye expression information and/or a fourth sub-evaluation value corresponding to the body movement information.
The second processing module is configured to process, based on a preset rule, the first sub-evaluation value, the second sub-evaluation value, the third sub-evaluation value and/or the fourth sub-evaluation value to obtain the user's evaluation value for the file or for a part of its content.
When the at least one kind of physiological characteristic information is a single kind of physiological characteristic information, the evaluation value is specifically the first, second, third or fourth sub-evaluation value.
When the at least one kind of physiological characteristic information is two, three or four kinds among the sound information, the facial expression information, the eye expression information and the body movement information, the second processing module further comprises a first obtaining module and a second obtaining module.
The first obtaining module is configured to multiply the sub-evaluation value corresponding to each kind of physiological characteristic information by its corresponding weight to obtain a weighted score for each kind of physiological characteristic information;
the second obtaining module is configured to add the weighted scores of all the kinds of physiological characteristic information to obtain the user's evaluation value for the file or for a part of its content.
Besides the above modules, the device also comprises:
a clipping module, configured to clip the file or a part of the content of the file according to the evaluation value to obtain at least one sub-file; and
a synthesis module, configured to synthesize the at least one sub-file into at least one preview file.
In one or more embodiments of the present invention, while a user is watching a played file, a collection device of the playback device automatically collects, in real time, at least one kind of physiological characteristic information of the user, which is used to characterize the user's evaluation of the file or of a part of its content; instead of passively relying on manually collected user ratings, the user's physiological characteristic information can be actively collected for scoring.
Further, after the user's physiological characteristic information has been actively collected, because one or more kinds of this information can be processed in flexible and varied ways, the user's evaluation of the file or of a part of its content can be characterized accurately, and whether, and to what degree, the user likes the content can be reflected comprehensively and objectively.
Further, the file or a part of its content is clipped according to the user's evaluation of it; because the evaluations obtained after collecting and processing the physiological characteristic information are relatively objective and accurate, the clipped file or content is accurate and can satisfy the demands of most users.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (14)
1. A method for obtaining physiological characteristic information, characterized in that the method comprises:
when a playback device is playing a file, detecting whether there is a user corresponding to the file;
when there is a user corresponding to the file, collecting at least one kind of physiological characteristic information of the user, wherein the at least one kind of physiological characteristic information corresponds to the file or to a part of the content of the file and is used to characterize the user's evaluation of the file or of the part of the content of the file.
2. The method of claim 1, characterized in that, after collecting the at least one kind of physiological characteristic information of the user when there is a user corresponding to the file, the method further comprises:
processing the at least one kind of physiological characteristic information to obtain the user's evaluation value for the file or for a part of the content of the file.
3. The method of claim 2, characterized in that the at least one kind of physiological characteristic information comprises: the user's voice information, facial expression information, gaze information and/or body movement information.
4. The method of claim 3, characterized in that processing the at least one kind of physiological characteristic information to obtain the user's evaluation value for the file or for a part of the content of the file specifically comprises:
processing the user's voice information, facial expression information, gaze information and/or body movement information to obtain a first sub-evaluation value corresponding to the voice information, a second sub-evaluation value corresponding to the facial expression information, a third sub-evaluation value corresponding to the gaze information and/or a fourth sub-evaluation value corresponding to the body movement information;
processing, based on a preset rule, the first sub-evaluation value, the second sub-evaluation value, the third sub-evaluation value and/or the fourth sub-evaluation value to obtain the user's evaluation value for the file or for a part of the content of the file.
5. The method of claim 4, characterized in that, when the at least one kind of physiological characteristic information is a single kind of physiological characteristic information, the evaluation value is specifically: the first sub-evaluation value, the second sub-evaluation value, the third sub-evaluation value or the fourth sub-evaluation value.
6. The method of claim 4, characterized in that, when the at least one kind of physiological characteristic information is two, three or four of the voice information, the facial expression information, the gaze information and the body movement information, processing the first sub-evaluation value, the second sub-evaluation value, the third sub-evaluation value and/or the fourth sub-evaluation value based on a preset rule is specifically:
multiplying the sub-evaluation value corresponding to each kind of physiological characteristic information by its corresponding weight to obtain a weighted score for each kind of physiological characteristic information;
adding up the weighted scores of all the kinds of physiological characteristic information to obtain the user's evaluation value for the file or for a part of the content of the file.
7. The method of any one of claims 2-6, characterized in that, after processing the at least one kind of physiological characteristic information to obtain the user's evaluation value for the file or for a part of the content of the file, the method further comprises:
clipping the file or a part of the content of the file according to the evaluation value to obtain at least one sub-file;
combining the at least one sub-file into at least one preview file.
8. A device for obtaining physiological characteristic information, characterized by comprising:
a detection module, configured to detect, when a playback device is playing a file, whether there is a user corresponding to the file;
a collection module, configured to collect at least one kind of physiological characteristic information of the user when there is a user corresponding to the file, wherein the at least one kind of physiological characteristic information corresponds to the file or to a part of the content of the file and is used to characterize the user's evaluation of the file or of the part of the content of the file.
9. The device of claim 8, characterized in that the device further comprises:
an acquisition module, configured to process the at least one kind of physiological characteristic information to obtain the user's evaluation value for the file or for a part of the content of the file.
10. The device of claim 9, characterized in that the at least one kind of physiological characteristic information comprises: the user's voice information, facial expression information, gaze information and/or body movement information.
11. The device of claim 10, characterized in that the acquisition module specifically comprises:
a first processing module, configured to process the user's voice information, facial expression information, gaze information and/or body movement information to obtain a first sub-evaluation value corresponding to the voice information, a second sub-evaluation value corresponding to the facial expression information, a third sub-evaluation value corresponding to the gaze information and/or a fourth sub-evaluation value corresponding to the body movement information;
a second processing module, configured to process, based on a preset rule, the first sub-evaluation value, the second sub-evaluation value, the third sub-evaluation value and/or the fourth sub-evaluation value to obtain the user's evaluation value for the file or for a part of the content of the file.
12. The device of claim 11, characterized in that, when the at least one kind of physiological characteristic information is a single kind of physiological characteristic information, the evaluation value is specifically: the first sub-evaluation value, the second sub-evaluation value, the third sub-evaluation value or the fourth sub-evaluation value.
13. The device of claim 11, characterized in that, when the at least one kind of physiological characteristic information is two, three or four of the voice information, the facial expression information, the gaze information and the body movement information, the second processing module specifically comprises:
a first acquisition module, configured to multiply the sub-evaluation value corresponding to each kind of physiological characteristic information by its corresponding weight to obtain a weighted score for each kind of physiological characteristic information;
a second acquisition module, configured to add up the weighted scores of all the kinds of physiological characteristic information to obtain the user's evaluation value for the file or for a part of the content of the file.
14. The device of any one of claims 9-13, characterized in that the device further comprises:
a clipping module, configured to clip the file or a part of the content of the file according to the evaluation value to obtain at least one sub-file;
a synthesis module, configured to combine the at least one sub-file into at least one preview file.
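The clipping and preview-synthesis steps recited above can be sketched in the same spirit. This is an illustrative sketch, not the patented implementation: the segment boundaries, evaluation values, and threshold below are all assumptions, and a real device would cut actual media streams rather than lists of time ranges.

```python
# Sketch of clipping by evaluation value: keep the parts of the file whose
# evaluation value passes a threshold as "sub-files", then combine them
# into a single preview.

def clip_segments(segments, threshold):
    """Return the (start, end) ranges whose evaluation value meets the threshold."""
    return [(start, end) for start, end, score in segments if score >= threshold]

def synthesize_preview(subfiles):
    """Stand-in for media concatenation: gather clip ranges into one preview."""
    return {"clips": subfiles,
            "duration": sum(end - start for start, end in subfiles)}

# (start_sec, end_sec, evaluation value) for each part of the played file.
segments = [(0, 60, 0.35), (60, 150, 0.82), (150, 200, 0.40), (200, 290, 0.91)]

subfiles = clip_segments(segments, threshold=0.8)
print(subfiles)                                  # → [(60, 150), (200, 290)]
print(synthesize_preview(subfiles)["duration"])  # → 180
```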
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012100073643A | 2012-01-11 | 2012-01-11 | Method and device for obtaining physiological characteristic information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103207662A (en) | 2013-07-17 |
Family
ID=48754914
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2012100073643A | Method and device for obtaining physiological characteristic information | 2012-01-11 | 2012-01-11 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103207662A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1345513A (en) * | 1999-03-29 | 2002-04-17 | Q网络电视公司 | System and method for near-real time capture and reporting of large population consumer behaviors concerning television use |
WO2003043336A1 (en) * | 2001-11-13 | 2003-05-22 | Koninklijke Philips Electronics N.V. | Affective television monitoring and control |
CN1586078A (en) * | 2001-11-13 | 2005-02-23 | 皇家飞利浦电子股份有限公司 | Affective television monitoring and control |
CN1942970A (en) * | 2004-04-15 | 2007-04-04 | 皇家飞利浦电子股份有限公司 | Method of generating a content item having a specific emotional influence on a user |
CN101420579A (en) * | 2007-10-22 | 2009-04-29 | 皇家飞利浦电子股份有限公司 | Method, apparatus and system for detecting exciting part |
CN102130897A (en) * | 2010-04-26 | 2011-07-20 | 上海理滋芯片设计有限公司 | Cloud computing-based video acquisition and analysis system and method |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103716661A (en) * | 2013-12-16 | 2014-04-09 | 乐视致新电子科技(天津)有限公司 | Video scoring reporting method and device |
CN106104422A (en) * | 2014-03-31 | 2016-11-09 | 奥迪股份公司 | Gesture assessment system, the method assessed for gesture and vehicle |
CN106104422B (en) * | 2014-03-31 | 2019-02-05 | 奥迪股份公司 | Gesture assessment system, for gesture assessment method and vehicle |
CN104185064A (en) * | 2014-05-30 | 2014-12-03 | 华为技术有限公司 | Media file identification method and device |
CN104185064B (en) * | 2014-05-30 | 2018-04-27 | 华为技术有限公司 | Media file identification method and apparatus |
CN105589898A (en) * | 2014-11-17 | 2016-05-18 | 中兴通讯股份有限公司 | Data storage method and device |
CN104504112A (en) * | 2014-12-30 | 2015-04-08 | 何业文 | Cinema information acquisition system |
CN107071534A (en) * | 2017-03-17 | 2017-08-18 | 深圳市九洲电器有限公司 | A kind of user and the interactive method and system of set top box |
CN107071534B (en) * | 2017-03-17 | 2019-12-10 | 深圳市九洲电器有限公司 | Method and system for interaction between user and set top box |
CN109116974A (en) * | 2017-06-23 | 2019-01-01 | 中兴通讯股份有限公司 | The determination method and method for pushing of screen locking picture, terminal, network server apparatus |
CN110019897A (en) * | 2017-08-01 | 2019-07-16 | 北京小米移动软件有限公司 | Show the method and device of picture |
CN108681390A (en) * | 2018-02-11 | 2018-10-19 | 腾讯科技(深圳)有限公司 | Information interacting method and device, storage medium and electronic device |
US11353950B2 (en) | 2018-02-11 | 2022-06-07 | Tencent Technology (Shenzhen) Company Limited | Information interaction method and device, storage medium and electronic device |
CN108563687A (en) * | 2018-03-15 | 2018-09-21 | 维沃移动通信有限公司 | A kind of methods of marking and mobile terminal of resource |
CN108848416A (en) * | 2018-06-21 | 2018-11-20 | 北京密境和风科技有限公司 | The evaluation method and device of audio-video frequency content |
CN111507143A (en) * | 2019-01-31 | 2020-08-07 | 北京字节跳动网络技术有限公司 | Expression image effect generation method and device and electronic equipment |
CN111507143B (en) * | 2019-01-31 | 2023-06-02 | 北京字节跳动网络技术有限公司 | Expression image effect generation method and device and electronic equipment |
CN111107400A (en) * | 2019-12-30 | 2020-05-05 | 深圳Tcl数字技术有限公司 | Data collection method and device, smart television and computer readable storage medium |
CN111107400B (en) * | 2019-12-30 | 2022-06-10 | 深圳Tcl数字技术有限公司 | Data collection method and device, smart television and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103207662A (en) | Method and device for obtaining physiological characteristic information | |
CN104768082B (en) | A kind of audio and video playing information processing method and server | |
US9251406B2 (en) | Method and system for detecting users' emotions when experiencing a media program | |
CN104021162B (en) | A kind of method and device given a mark for multimedia resource | |
CN103945240B (en) | A kind of video broadcasting method and device based on video aggregation | |
CN104091596B (en) | A kind of melody recognition methods, system and device | |
US11212596B2 (en) | Methods and apparatus to synthesize reference media signatures | |
KR20160000399A (en) | Method and device for recommending multimedia resource | |
US20100014840A1 (en) | Information processing apparatus and information processing method | |
CN105788610B (en) | Audio-frequency processing method and device | |
CN104506894A (en) | Method and device for evaluating multi-media resources | |
RU2011135032A (en) | JOINT USE OF VIDEO | |
KR20170027649A (en) | Method and apparatus for synchronous putting of real-time mobile advertisement based on audio fingerprint | |
CN206378900U (en) | A kind of advertisement delivery effect evaluation system based on mobile terminal | |
TWI629899B (en) | Method and device for evaluating quality of multimedia resources | |
CN105608121A (en) | Personalized recommendation method and apparatus | |
CN104123949B (en) | card frame detection method and device | |
KR20210038990A (en) | Media identification using watermarks and signatures | |
CN107786895A (en) | A kind of method for evaluating quality and device of broadcast page video recommendations | |
CN103686238B (en) | Video playback detection method and device | |
CN103353868A (en) | Method and equipment for determining resource evaluation information on multimedia resource | |
US8306992B2 (en) | System for determining content topicality, and method and program thereof | |
CN104202628B (en) | The identifying system and method for client terminal playing program | |
CN110765171B (en) | Bad user discrimination method, storage medium, electronic device and system | |
CN110139160A (en) | A kind of forecasting system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20130717 |