CN104967922A - Subtitle adding position determining method and device - Google Patents

Subtitle adding position determining method and device Download PDF

Info

Publication number
CN104967922A
CN104967922A
Authority
CN
China
Prior art keywords
subtitle
pixel
candidate region
energy value
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510375489.5A
Other languages
Chinese (zh)
Inventor
朱柏涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201510375489.5A priority Critical patent/CN104967922A/en
Publication of CN104967922A publication Critical patent/CN104967922A/en
Pending legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics

Abstract

The invention provides a method and device for determining a subtitle insertion position. The method comprises the steps of: determining the set of pixels a subtitle will cover; selecting a subtitle candidate region within the picture; dividing the candidate region, according to the subtitle pixel coverage set, into at least two subtitle insertion regions, where consecutive insertion regions are iterated pixel by pixel; computing the pixel energy value of each insertion region; and selecting, according to the pixel energy values, an insertion region in which to add the subtitle. By dividing the candidate region according to the subtitle pixel coverage set and computing the pixel energy value of each resulting region, the method and device locate a visually less important area of the video picture and add the subtitle there, so that the added subtitle obscures fewer key elements.

Description

Method and device for determining a subtitle insertion position
Technical field
The present invention relates to the field of video technology, and in particular to a method and a device for determining a subtitle insertion position.
Background art
Subtitles display a video's dialogue content in written form; the term also covers text added in post-production, such as credits and annotations.
In China, pronunciation differs greatly between regional languages, and many people cannot understand Mandarin. Written characters, however, differ little, and most people can read them, so subtitles in Mandarin (or a dialect) are attached to videos. Likewise, adding subtitles to a foreign-language video lets viewers who do not understand that language enjoy the video with its original audio.
When a user watches a video, attention is usually drawn to the most important, most representative, most eye-catching part of the picture, commonly called the visually salient region of the image. Conventional methods fix subtitles at the bottom center of the picture, but because video content is rich and varied, the subtitles may occlude the visually salient region. Moreover, since successive frames are usually continuous, such occlusion tends to persist, harming the viewing experience.
The technical problem urgently to be solved by those skilled in the art is therefore: how to prevent subtitles from occluding the visually salient region of the image.
Summary of the invention
Embodiments of the present invention provide a method and a device for determining a subtitle insertion position, to solve the technical problem that subtitles may occlude the visually salient region of the image.
To solve the above problem, an embodiment of the invention discloses a method for determining a subtitle insertion position, comprising:
determining the set of pixels a subtitle will cover;
selecting a subtitle candidate region within the picture;
dividing the candidate region into at least two subtitle insertion regions according to the subtitle pixel coverage set, consecutive insertion regions being iterated pixel by pixel;
computing the pixel energy value of each subtitle insertion region; and
selecting, according to the pixel energy values, a subtitle insertion region in which to add the subtitle.
Preferably, determining the set of pixels the subtitle will cover comprises:
rasterizing the subtitle to generate a subtitle mask comprising blank pixels and subtitle-color-filled pixels; and
counting the set of blank pixels and subtitle-color-filled pixels as the subtitle pixel coverage set.
Preferably, computing the pixel energy value of each subtitle insertion region comprises:
for each frame the subtitle covers, computing the sum of the energy values of the pixels in the subtitle insertion region as the energy value of that region;
wherein the energy value of a pixel I(x, y) in the subtitle insertion region of a frame is E(I(x, y)):
E(I(x, y)) = |∂I(x, y)/∂x| + |∂I(x, y)/∂y|
wherein ∂I(x, y)/∂x is the horizontal difference of pixel I(x, y), ∂I(x, y)/∂y is its vertical difference, x is the horizontal coordinate component of pixel I(x, y), and y is its vertical coordinate component.
Preferably, dividing the candidate region into at least two subtitle insertion regions according to the subtitle pixel coverage set, with consecutive regions iterated pixel by pixel, comprises:
according to the subtitle pixel coverage set, dividing the candidate region into at least two subtitle insertion regions in left-to-right, top-to-bottom order, consecutive insertion regions being iterated pixel by pixel.
Preferably, selecting a subtitle candidate region within the picture comprises:
partitioning a subtitle candidate region at the top and/or bottom of the picture, the ratio of the candidate region's height to the picture's height being a preset value.
An embodiment of the invention further provides a device for determining a subtitle insertion position, comprising:
a pixel set determination module, configured to determine the set of pixels the subtitle will cover;
a candidate region selection module, configured to select a subtitle candidate region within the picture;
a region division module, configured to divide the candidate region into at least two subtitle insertion regions according to the subtitle pixel coverage set, consecutive insertion regions being iterated pixel by pixel;
a pixel energy value computation module, configured to compute the pixel energy value of each subtitle insertion region; and
a subtitle addition module, configured to select, according to the pixel energy values, a subtitle insertion region in which to add the subtitle.
Preferably, the pixel set determination module comprises:
a mask generation unit, configured to rasterize the subtitle and generate a subtitle mask comprising blank pixels and subtitle-color-filled pixels; and
a pixel set counting unit, configured to count the set of blank pixels and subtitle-color-filled pixels as the subtitle pixel coverage set.
Preferably, the pixel energy value computation module is specifically configured to compute, for each frame the subtitle covers, the sum of the energy values of the pixels in the subtitle insertion region as the energy value of that region;
wherein the energy value of a pixel I(x, y) in the subtitle insertion region of a frame is E(I(x, y)):
E(I(x, y)) = |∂I(x, y)/∂x| + |∂I(x, y)/∂y|
wherein ∂I(x, y)/∂x is the horizontal difference of pixel I(x, y), ∂I(x, y)/∂y is its vertical difference, x is the horizontal coordinate component of pixel I(x, y), and y is its vertical coordinate component.
Preferably, the region division module is specifically configured to divide the candidate region, according to the subtitle pixel coverage set, into at least two subtitle insertion regions in left-to-right, top-to-bottom order, consecutive insertion regions being iterated pixel by pixel.
Preferably, the candidate region selection module is specifically configured to partition a subtitle candidate region at the top and/or bottom of the picture, the ratio of the candidate region's height to the picture's height being a preset value.
Compared with the prior art, embodiments of the present invention have the following advantages:
The candidate region is divided into at least two subtitle insertion regions according to the subtitle pixel coverage set, and the pixel energy value of each insertion region is computed. From these energy values, a visually less important area of the video picture is located and the subtitle is added there, effectively reducing the occlusion of key content by the added subtitle.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for the embodiments or the prior art description are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow diagram of a method for determining a subtitle insertion position provided by an embodiment of the invention;
Fig. 2 is a diagram of subtitle candidate regions provided by an embodiment of the invention;
Fig. 3 is a diagram of dividing subtitle insertion regions provided by an embodiment of the invention;
Fig. 4a is a diagram of the subtitle effect in the prior art;
Fig. 4b is a diagram of the subtitle effect provided by an embodiment of the invention;
Fig. 5 is a flow diagram of a method for determining the subtitle pixel coverage set provided by an embodiment of the invention;
Fig. 6 is a subtitle mask provided by an embodiment of the invention;
Fig. 7 is a structural diagram of a device for determining a subtitle insertion position provided by an embodiment of the invention;
Fig. 8 is a structural diagram of the pixel set determination module provided by an embodiment of the invention.
Embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the invention.
Embodiment one
Embodiment one of the present invention provides a method for determining a subtitle insertion position which, as shown in Fig. 1, may comprise the following steps:
Step S101: determine the set of pixels the subtitle will cover.
In this step, a subtitle has a specific color with sufficient contrast against the picture so that users can read it clearly; subtitle text is typically white. Because video frames are continuous, a subtitle is displayed for a certain duration and therefore has a span on the timeline; every frame played within that span carries the same subtitle. Since the subtitle's color and character count are fixed, the number of pixels it covers is the same in every frame of its span. This fixed collection of pixels is called the subtitle pixel coverage set, i.e. the pixels the subtitle covers within a single frame.
Step S102: select a subtitle candidate region within the picture.
In this step, the center of the picture is usually the visually salient region, so to prevent subtitles from occluding it, an edge region of the picture is generally chosen as the subtitle candidate region. Candidate regions may be partitioned at the top and/or bottom of the picture, with the ratio of candidate-region height to picture height set to a preset value. As shown in Fig. 2, the picture is divided into two equally sized candidate regions, at the top and bottom respectively; if the central region's height ratio is 0.618, each candidate region's height ratio is (1 − 0.618) / 2 = 0.191.
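As an illustration only, the geometry just described can be sketched in Python; the 0.618 central ratio comes from the embodiment, while the function name and the (y0, x0, y1, x1) tuple layout are our own assumptions.

```python
def candidate_regions(frame_h, frame_w, central_ratio=0.618):
    """Split a frame into top and bottom subtitle candidate strips.

    The central region keeps `central_ratio` of the height (0.618 in the
    embodiment), leaving (1 - 0.618) / 2 = 0.191 of the height for each
    strip. Regions are (y0, x0, y1, x1) with y1/x1 exclusive.
    """
    strip_h = round(frame_h * (1.0 - central_ratio) / 2.0)
    top = (0, 0, strip_h, frame_w)
    bottom = (frame_h - strip_h, 0, frame_h, frame_w)
    return top, bottom
```

For a 1000-pixel-tall frame this yields 191-pixel strips at the top and bottom, matching the 0.191 ratio in the text.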
Step S103: divide the candidate region into at least two subtitle insertion regions according to the subtitle pixel coverage set, consecutive insertion regions being iterated pixel by pixel.
In this step, each subtitle insertion region is large enough to hold the subtitle pixel coverage set. According to the coverage set, the candidate region is divided into at least two insertion regions in left-to-right, top-to-bottom order, with consecutive regions iterated pixel by pixel. Fig. 3 illustrates the division provided by Embodiment one: position 1 is the starting position and position 6 the ending position of the division (position 6 is not fixed at the lower-right corner), and the insertion regions are enumerated in the order 1-2-3-4-5-6; the start and end positions are not limited to the upper-left and lower-right corners. At each step of the division, the pixels no longer covered are subtracted and the newly covered pixels are added, which realizes the pixel-by-pixel iteration between insertion regions.
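The "subtract pixels no longer covered, add newly covered pixels" update can be sketched for a horizontal one-pixel slide as follows. This is a minimal Python illustration of the incremental idea; the patent does not prescribe this exact decomposition, and the names are ours.

```python
def window_energies_row(energy, sub_w):
    """Energies of all sub_w-wide windows across one candidate strip,
    computed incrementally: each one-pixel shift subtracts the pixel
    column that left the window and adds the column that entered,
    mirroring the pixel-by-pixel iteration between insertion regions.

    `energy` is a list of rows of per-pixel energy values.
    """
    h, w = len(energy), len(energy[0])
    col = [sum(energy[y][x] for y in range(h)) for x in range(w)]
    e = sum(col[:sub_w])           # energy of the leftmost window
    out = [e]
    for x in range(sub_w, w):
        e += col[x] - col[x - sub_w]   # incremental column update
        out.append(e)
    return out
```

Each subsequent window costs O(height) instead of O(height × width), which is what makes enumerating every one-pixel offset practical.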
Step S104: compute the pixel energy value of each subtitle insertion region.
In this step, because frames are continuous, the energy value of an insertion region may be computed, over every frame the subtitle covers, as the sum of the energy values of the pixels in the region, Σ_{I(x,y)∈R} E(I(x, y)),
where R is the subtitle pixel coverage set and E(I(x, y)) is the energy value of a pixel I(x, y) in the insertion region of a frame:
E(I(x, y)) = |∂I(x, y)/∂x| + |∂I(x, y)/∂y|
where ∂I(x, y)/∂x is the horizontal difference of pixel I(x, y), ∂I(x, y)/∂y is its vertical difference, x is the horizontal coordinate component of pixel I(x, y), and y is its vertical coordinate component.
Step S105: select, according to the pixel energy values, a subtitle insertion region in which to add the subtitle.
In this step, a higher pixel energy value indicates higher visual saliency, and a lower value indicates lower saliency. The insertion region with the lowest pixel energy value is therefore generally the least visually significant area of the whole picture, and adding the subtitle in a low-energy insertion region prevents it from occluding the visually salient region. Fig. 4a shows the subtitle effect in the prior art and Fig. 4b the effect provided by Embodiment one: in Fig. 4a the subtitle occludes part of an animated character, while in Fig. 4b the character is shown in full.
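Under the definitions above, the final selection step amounts to a minimum over candidate windows, accumulated across every frame the subtitle covers. A sketch (function and parameter names are ours, not the patent's):

```python
import numpy as np

def best_position(energy_maps, positions, sub_h, sub_w):
    """Pick the (y, x) window whose summed energy, accumulated over
    every frame's energy map, is smallest -- i.e. the visually least
    salient subtitle insertion region."""
    def window_energy(y, x):
        return sum(float(e[y:y + sub_h, x:x + sub_w].sum())
                   for e in energy_maps)
    return min(positions, key=lambda p: window_energy(*p))
```

In practice the incremental column update shown earlier would replace the naive `window_energy` sum when many one-pixel offsets are evaluated.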
The method for determining a subtitle insertion position provided by Embodiment one takes into account both the importance of the video content and the continuity between frames, locates a visually less important area of the video picture, and adds the subtitle there, effectively reducing the occlusion of key content by the added subtitle.
Embodiment two
Embodiment two of the present invention provides a method for determining the subtitle pixel coverage set which, as shown in Fig. 5, refines step S101 of Embodiment one into the following steps:
Step S1011: rasterize the subtitle to generate a subtitle mask comprising blank pixels and subtitle-color-filled pixels.
In this step, a subtitle consists of at least one character. Fig. 6 shows the mask generated for the subtitle character "中": pixels labeled 0 are blank pixels, and pixels labeled 1 are subtitle-color-filled pixels.
Step S1012: count the set of blank pixels and subtitle-color-filled pixels as the subtitle pixel coverage set.
In this step, the set of subtitle-color-filled pixels is R_mask, and the set of blank plus subtitle-color-filled pixels is R_sub; the coverage set R may then be either R_sub or R_mask.
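Counting the two candidate coverage sets from a mask like Fig. 6 is straightforward. A minimal sketch, assuming the mask is a nested list of 0/1 values (names are ours):

```python
def coverage_sets(mask):
    """From a rasterized subtitle mask (1 = subtitle-color pixel,
    0 = blank, as in Fig. 6) derive the sizes of the two candidate
    coverage sets: R_mask counts only the filled pixels, R_sub the
    whole bounding rectangle (blank + filled)."""
    r_mask = sum(sum(row) for row in mask)
    r_sub = sum(len(row) for row in mask)
    return r_mask, r_sub
```

Using R_sub treats the subtitle as an opaque box; using R_mask restricts the energy sum to the glyph strokes themselves.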
In the method for determining a subtitle insertion position provided by Embodiment two, rasterizing the subtitle and generating its mask allows the subtitle pixel coverage set to be determined accurately from the mask.
Embodiment three
Embodiment three of the present invention provides a device for determining a subtitle insertion position, capable of performing the method provided by Embodiment one. As shown in Fig. 7, the device comprises the following modules: a pixel set determination module 71, a candidate region selection module 72, a region division module 73, a pixel energy value computation module 74, and a subtitle addition module 75.
The pixel set determination module 71 determines the set of pixels the subtitle will cover; the candidate region selection module 72 selects a subtitle candidate region within the picture; the region division module 73 divides the candidate region into at least two subtitle insertion regions according to the subtitle pixel coverage set, consecutive regions being iterated pixel by pixel; the pixel energy value computation module 74 computes the pixel energy value of each insertion region; and the subtitle addition module 75 selects, according to the pixel energy values, an insertion region in which to add the subtitle.
In the pixel set determination module 71, a subtitle has a specific color with sufficient contrast against the picture so that users can read it clearly; subtitle text is typically white. Because video frames are continuous, a subtitle is displayed for a certain duration and therefore has a span on the timeline; every frame played within that span carries the same subtitle. Since the subtitle's color and character count are fixed, the number of pixels it covers is the same in every frame of its span. This fixed collection of pixels is the subtitle pixel coverage set, i.e. the pixels the subtitle covers within a single frame.
In the candidate region selection module 72, the center of the picture is usually the visually salient region, so to prevent subtitles from occluding it, an edge region of the picture is generally chosen as the subtitle candidate region. Candidate regions may be partitioned at the top and/or bottom of the picture, with the ratio of candidate-region height to picture height set to a preset value.
In the region division module 73, each subtitle insertion region is large enough to hold the subtitle pixel coverage set. According to the coverage set, the candidate region is divided into at least two insertion regions in left-to-right, top-to-bottom order, with consecutive regions iterated pixel by pixel.
In the pixel energy value computation module 74, because frames are continuous, the energy value of an insertion region may be computed, over every frame the subtitle covers, as the sum of the energy values of the pixels in the region, Σ_{I(x,y)∈R} E(I(x, y)),
where R is the subtitle pixel coverage set and E(I(x, y)) is the energy value of a pixel I(x, y) in the insertion region of a frame:
E(I(x, y)) = |∂I(x, y)/∂x| + |∂I(x, y)/∂y|
where ∂I(x, y)/∂x is the horizontal difference of pixel I(x, y), ∂I(x, y)/∂y is its vertical difference, x is the horizontal coordinate component of pixel I(x, y), and y is its vertical coordinate component.
In the subtitle addition module 75, a higher pixel energy value indicates higher visual saliency, and a lower value indicates lower saliency. The insertion region with the lowest pixel energy value is therefore generally the least visually significant area of the whole picture, and adding the subtitle in a low-energy insertion region prevents it from occluding the visually salient region.
The device for determining a subtitle insertion position provided by Embodiment three takes into account both the importance of the video content and the continuity between frames, locates a visually less important area of the video picture, and adds the subtitle there, effectively reducing the occlusion of key content by the added subtitle.
As shown in Fig. 8, the pixel set determination module 71 may comprise a mask generation unit 711 and a pixel set counting unit 712.
The mask generation unit 711 rasterizes the subtitle and generates a subtitle mask comprising blank pixels and subtitle-color-filled pixels; the pixel set counting unit 712 counts the set of blank pixels and subtitle-color-filled pixels as the subtitle pixel coverage set.
In the mask generation unit 711, a subtitle consists of at least one character. Fig. 6 shows the mask generated for the subtitle character "中": pixels labeled 0 are blank pixels, and pixels labeled 1 are subtitle-color-filled pixels.
In the pixel set counting unit 712, the set of subtitle-color-filled pixels is R_mask, and the set of blank plus subtitle-color-filled pixels is R_sub; the coverage set R may then be either R_sub or R_mask.
In the pixel set determination module provided by this embodiment, rasterizing the subtitle and generating its mask allows the subtitle pixel coverage set to be determined accurately from the mask.
The embodiments in this specification are described progressively; each embodiment focuses on its differences from the others, and for the parts they share, the embodiments may be referred to one another.
The method and device for determining a subtitle insertion position provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the invention; the description of the embodiments is only intended to help understand the method of the invention and its core idea. Meanwhile, those of ordinary skill in the art may, in accordance with the idea of the invention, make changes to the specific implementation and scope of application. In summary, the contents of this specification should not be construed as limiting the invention.

Claims (10)

1. A method for determining a subtitle insertion position, characterized by comprising:
determining the set of pixels a subtitle will cover;
selecting a subtitle candidate region within the picture;
dividing the candidate region into at least two subtitle insertion regions according to the subtitle pixel coverage set, consecutive insertion regions being iterated pixel by pixel;
computing the pixel energy value of each subtitle insertion region; and
selecting, according to the pixel energy values, a subtitle insertion region in which to add the subtitle.
2. The method according to claim 1, characterized in that determining the set of pixels the subtitle will cover comprises:
rasterizing the subtitle to generate a subtitle mask comprising blank pixels and subtitle-color-filled pixels; and
counting the set of blank pixels and subtitle-color-filled pixels as the subtitle pixel coverage set.
3. The method according to claim 1, characterized in that computing the pixel energy value of each subtitle insertion region comprises:
for each frame the subtitle covers, computing the sum of the energy values of the pixels in the subtitle insertion region as the energy value of that region;
wherein the energy value of a pixel I(x, y) in the subtitle insertion region of a frame is E(I(x, y)):
E(I(x, y)) = |∂I(x, y)/∂x| + |∂I(x, y)/∂y|
wherein ∂I(x, y)/∂x is the horizontal difference of pixel I(x, y), ∂I(x, y)/∂y is its vertical difference, x is the horizontal coordinate component of pixel I(x, y), and y is its vertical coordinate component.
4. The method according to claim 1, characterized in that dividing the candidate region into at least two subtitle insertion regions according to the subtitle pixel coverage set, with consecutive insertion regions iterated pixel by pixel, comprises:
according to the subtitle pixel coverage set, dividing the candidate region into at least two subtitle insertion regions in left-to-right, top-to-bottom order, consecutive insertion regions being iterated pixel by pixel.
5. The method according to claim 1, characterized in that selecting a subtitle candidate region within the picture comprises:
partitioning a subtitle candidate region at the top and/or bottom of the picture, the ratio of the candidate region's height to the picture's height being a preset value.
6. A device for determining a subtitle insertion position, characterized by comprising:
a pixel set determination module, configured to determine the set of pixels the subtitle will cover;
a candidate region selection module, configured to select a subtitle candidate region within the picture;
a region division module, configured to divide the candidate region into at least two subtitle insertion regions according to the subtitle pixel coverage set, consecutive insertion regions being iterated pixel by pixel;
a pixel energy value computation module, configured to compute the pixel energy value of each subtitle insertion region; and
a subtitle addition module, configured to select, according to the pixel energy values, a subtitle insertion region in which to add the subtitle.
7. The device according to claim 6, characterized in that the pixel set determination module comprises:
a mask generation unit, configured to rasterize the subtitle and generate a subtitle mask comprising blank pixels and subtitle-color-filled pixels; and
a pixel set counting unit, configured to count the set of blank pixels and subtitle-color-filled pixels as the subtitle pixel coverage set.
8. The device according to claim 6, characterized in that the pixel energy value computation module is specifically configured to compute, for each frame the subtitle covers, the sum of the energy values of the pixels in the subtitle insertion region as the energy value of that region;
wherein the energy value of a pixel I(x, y) in the subtitle insertion region of a frame is E(I(x, y)):
E(I(x, y)) = |∂I(x, y)/∂x| + |∂I(x, y)/∂y|
wherein ∂I(x, y)/∂x is the horizontal difference of pixel I(x, y), ∂I(x, y)/∂y is its vertical difference, x is the horizontal coordinate component of pixel I(x, y), and y is its vertical coordinate component.
9. The device according to claim 6, characterized in that the region division module is specifically configured to divide the candidate region, according to the subtitle pixel coverage set, into at least two subtitle insertion regions in left-to-right, top-to-bottom order, consecutive insertion regions being iterated pixel by pixel.
10. The device according to claim 6, characterized in that the candidate region selecting module is specifically configured to mark off the subtitle adding candidate region at the top and/or the bottom of the picture, the ratio of the height of the subtitle adding candidate region to the height of the picture being a preset value.
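Claim 10 places the candidate region as a band at the top and/or bottom of the picture whose height is a preset fraction of the picture height. A minimal sketch; the 20% default ratio is an arbitrary example, not a value from the claim:

```python
def candidate_regions(pic_w, pic_h, ratio=0.2, top=True, bottom=True):
    """Return (x, y, w, h) bands at the top and/or bottom of the picture,
    each `ratio` * picture-height tall (claim 10's preset height ratio)."""
    band = int(pic_h * ratio)
    regions = []
    if top:
        regions.append((0, 0, pic_w, band))
    if bottom:
        regions.append((0, pic_h - band, pic_w, band))
    return regions
```

For a 1920x1080 picture with a 0.2 ratio this yields a 216-pixel band at the top and another at the bottom.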
CN201510375489.5A 2015-06-30 2015-06-30 Subtitle adding position determining method and device Pending CN104967922A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510375489.5A CN104967922A (en) 2015-06-30 2015-06-30 Subtitle adding position determining method and device


Publications (1)

Publication Number Publication Date
CN104967922A true CN104967922A (en) 2015-10-07

Family

ID=54221845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510375489.5A Pending CN104967922A (en) 2015-06-30 2015-06-30 Subtitle adding position determining method and device

Country Status (1)

Country Link
CN (1) CN104967922A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101102419A (en) * 2007-07-10 2008-01-09 北京大学 A method for caption area of positioning video
CN101510299A (en) * 2009-03-04 2009-08-19 上海大学 Image self-adapting method based on vision significance
US20090276805A1 (en) * 2008-05-03 2009-11-05 Andrews Ii James K Method and system for generation and playback of supplemented videos
CN101917557A (en) * 2010-08-10 2010-12-15 浙江大学 Method for dynamically adding subtitles based on video content


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jiang Xiaoxi, Feng Jingyi, Feng Jieqing: "Video Content-Aware Dynamic Subtitles", Journal of Computer-Aided Design & Computer Graphics *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017107523A1 (en) * 2015-12-24 2017-06-29 深圳市金立通信设备有限公司 Method of displaying overlay comment and terminal
CN106604107A (en) * 2016-12-29 2017-04-26 合智能科技(深圳)有限公司 Caption processing method and apparatus
CN108600727A (en) * 2018-04-13 2018-09-28 天津大学 A kind of three-dimensional subtitle adding method based on viewing comfort level
CN108600727B (en) * 2018-04-13 2020-11-27 天津大学 Stereoscopic subtitle adding method based on viewing comfort
CN109688457A (en) * 2018-12-28 2019-04-26 武汉斗鱼网络科技有限公司 A kind of anti-occlusion method of video, device, electronic equipment and medium
CN109688457B (en) * 2018-12-28 2021-07-23 武汉斗鱼网络科技有限公司 Video anti-blocking method and device, electronic equipment and medium
CN112399265A (en) * 2019-08-15 2021-02-23 国际商业机器公司 Method and system for adding content to image based on negative space recognition
CN112770146A (en) * 2020-12-30 2021-05-07 广州酷狗计算机科技有限公司 Method, device and equipment for setting content data and readable storage medium
CN112770146B (en) * 2020-12-30 2023-10-03 广州酷狗计算机科技有限公司 Method, device, equipment and readable storage medium for setting content data
CN114615520A (en) * 2022-03-08 2022-06-10 北京达佳互联信息技术有限公司 Subtitle positioning method, subtitle positioning device, computer equipment and medium
CN114615520B (en) * 2022-03-08 2024-01-02 北京达佳互联信息技术有限公司 Subtitle positioning method, subtitle positioning device, computer equipment and medium

Similar Documents

Publication Publication Date Title
CN104967922A (en) Subtitle adding position determining method and device
TWI490772B (en) Method and apparatus for adapting custom control components to a screen
KR101289542B1 (en) Systems and methods for providing closed captioning in three-dimensional imagery
JP5560771B2 (en) Image correction apparatus, image display system, and image correction method
US8997021B2 (en) Parallax and/or three-dimensional effects for thumbnail image displays
KR101686693B1 (en) Viewer-centric user interface for stereoscopic cinema
KR101656167B1 (en) Method, apparatus, device, program and recording medium for displaying an animation
US10007978B2 (en) Method and device for processing image of upper garment
US20080062205A1 (en) Dynamic pixel snapping
CN105578172B Naked-eye 3D image display method based on the Unity3D engine
CN104427230A (en) Reality enhancement method and reality enhancement system
CN102209249A (en) Stereoscopic image display device
CN105898338A (en) Panorama video play method and device
CN103647960B (en) A kind of method of compositing 3 d images
US10043298B2 (en) Enhanced document readability on devices
CN105930464A (en) Web rich media multi-screen adaptation method and apparatus
WO2015076588A1 (en) Method and apparatus for normalizing size of content in multi-projection theater and computer-readable recording medium
US20170024843A1 (en) Method and device for removing video watermarks
CN112891946A (en) Game scene generation method and device, readable storage medium and electronic equipment
US20200068205A1 (en) Geodesic intra-prediction for panoramic video coding
CN104243949A (en) 3D display method and device
WO2015080476A1 (en) Method and apparatus for normalizing size of content in multi-projection theater and computer-readable recording medium
CN110910485A (en) Immersive cave image manufacturing method
US8497874B2 (en) Pixel snapping for anti-aliased rendering
KR20140058744A (en) A system for stereoscopic images with hole-filling and method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151007