WO2012140678A1 - Method of improving the content of videographic images with encoding of additional contents on a main videographic image - Google Patents

Method of improving the content of videographic images with encoding of additional contents on a main videographic image

Info

Publication number
WO2012140678A1
Authority
WO
WIPO (PCT)
Prior art keywords
videographic
differentiated
encoding
content
image
Prior art date
Application number
PCT/IT2011/000107
Other languages
French (fr)
Inventor
Giuseppe Biava
Roberto CHIODINI
Stefano Ivaldi
Simone IANNUNZIO
Original Assignee
Aesys Spa
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aesys Spa filed Critical Aesys Spa
Priority to PCT/IT2011/000107 priority Critical patent/WO2012140678A1/en
Priority to EP11721579.8A priority patent/EP2697707A1/en
Publication of WO2012140678A1 publication Critical patent/WO2012140678A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1431Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display using a single graphics controller
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4122Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4622Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0407Resolution change, inclusive of the use of different resolutions for different screen areas
    • G09G2340/0428Gradation resolution change
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/10Mixing of images, i.e. displayed pixel being the result of an operation, e.g. adding, on the corresponding input pixels
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/12Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • G09G2340/125Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels wherein one of the images is motion video
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2380/00Specific applications
    • G09G2380/06Remotely controlled electronic signs other than labels
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/14Display of multiple viewports

Definitions

  • In Figure 6 an example of the creation of an encoded image is shown.
  • An area is defined in the main image, for example 250/200 pixels wide by 50/100 pixels high, wherein, for example, a directional arrow of a defined color is shown (Figure 6, a). A differentiated content of the same size as the defined area is then created, such as a directional arrow of another color pointing in the direction opposite to the previous one (Figure 6, b). At this point, the differentiated content is encoded in the defined area on the main image, thus generating the encoded image to be transmitted (Figure 6, c).
  • The real encoding is carried out at the level of each color component constituting the pixels of the main image and of the differentiated contents to be encoded.
  • The pixels defined in the differentiated content area of the encoded image are composed using the 4 most significant bits (MSBs) of each chromatic channel constituting the main image and the 4 most significant bits of each chromatic channel forming the differentiated content.
  • The resulting encoded image is then ready to be transmitted to the various graphic reproduction devices.
  • In Figure 7 an example of encoding is shown, wherein the 4 bits obtained from the main image are placed in the higher position of the byte constituting the chromatic component of the encoded image, whereas the 4 bits obtained from the differentiated contents are placed in the lower position.
  • Table 1 shows an example of encoding, with a single differentiated content, of a 24-bit pixel (thus consisting of 8 bits for the red chromatic component, 8 bits for the green one and 8 bits for the blue one) that will form the differentiated content area in the encoded image.
  • Each image (which can be static or dynamic) could be enhanced with more differentiated contents and then differently played on various devices for graphic playback.
  • The encoding of positional data can be carried out by entering, in binary form, the pixel coordinates on the graphic matrix corresponding to the two opposite vertices of each area inside which a differentiated content is encoded. It is also possible to use different information for determining each differentiated content area, such as, by way of example, one vertex, its position in relation to the area and the pixel dimensions (width and height) of the area itself.
  • The coordinates could be defined according to the format "row, column" and reported on the pixel matrix having as reference (0, 0) the first pixel in the upper left corner, i.e. the first transmitted pixel containing the image information, as shown in Figure 6.
  • Figure 8 shows an example of encoding corresponding to the location of a single area with differentiated content. It also shows how a controlling byte may be inserted, through which the presence of differentiated content areas in the image is indicated to the graphics processor during the receiving step and, consequently, the decoding and processing of the areas is started.
  • The encoding may be implemented on any RGB color component, depending on the number of differentiated encoded areas, or possibly on all three color components, depending on the amount of positional information determining them.
  • The encoding may be implemented using the least significant bit (LSB) of the red color component of each pixel constituting the image, encoding the whole string of 40 bits in the first row (thus using 40 pixels) starting from the reference pixel in the upper left corner (0, 0).
  • The receiving graphics processor may then proceed with the processing of the encoded image, discriminating the display of differentiated contents on the different devices.
  • The most significant bits of the main image and the most significant bits of the differentiated contents for each color component of every pixel can be decoded, and thereby the various images can be reconstructed.
  • The 4 most significant bits of each chromatic component of the encoded image are used to reconstruct the 4 most significant bits of each chromatic component of the main image, while the 4 least significant bits are used to reconstruct the 4 most significant bits of each chromatic component of the differentiated content.
  • The complete reconstruction of both the main image and the differentiated content images is carried out by also replicating, in the bottom of the byte, the same 4 decoded bits placed in its most significant portion. In this way an 8-bit binary approximation of the chromatic level is obtained for each color component, as shown in Figure 9.
  • The reconstruction of the main image and of the differentiated content images can be carried out, depending on the application and the type of encoded image, by
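The positional and controlling encoding outlined in the bullets above can be sketched as follows. The exact bit layout is not fixed by the text, so this sketch assumes a hypothetical layout matching the 40-bit example: one controlling byte followed by the (row, column) coordinates of the two opposite vertices of the area, each as an 8-bit value, written bit by bit into the least significant bit of the red component of the first 40 pixels of the first row. All names are illustrative, not taken from the patent.

```python
# Hedged sketch: embed a 40-bit metadata string (1 controlling byte +
# 4 coordinate bytes) into the LSB of the red channel of the first
# 40 pixels of the image's first row, then read it back.

def embed_metadata(red_row, control, top_left, bottom_right):
    """Return a copy of the first row's red values with metadata in the LSBs."""
    payload = bytes([control, top_left[0], top_left[1],
                     bottom_right[0], bottom_right[1]])
    # Serialize the 5 bytes MSB-first into 40 individual bits.
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    assert len(red_row) >= len(bits), "row too short for the 40 metadata bits"
    out = list(red_row)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the least significant bit
    return out

def extract_metadata(red_row):
    """Inverse operation: read the 40 LSBs back into (control, vertex1, vertex2)."""
    bits = [value & 1 for value in red_row[:40]]
    data = [sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
            for j in range(0, 40, 8)]
    return data[0], (data[1], data[2]), (data[3], data[4])

# Round trip on a uniform red row of 48 pixels.
row = [200] * 48
enc = embed_metadata(row, 0x01, (10, 20), (60, 120))
assert extract_metadata(enc) == (0x01, (10, 20), (60, 120))
```

Because only the least significant bit of each red value is overwritten, the visible impact on the main image is at most one chromatic level per affected pixel.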

Abstract

The invention refers to a method for enhancing the information content of videographic images comprising the steps of: prearranging a main videographic image; prearranging one or more videographic images having additional information contents; video - encoding the main videographic image added to the additional differentiated videographic contents; and subsequently video - decoding for the selective playback of the additional differentiated videographic contents in the same main image on various display devices.

Description

METHOD OF IMPROVING THE CONTENT OF VIDEOGRAPHIC IMAGES WITH ENCODING OF ADDITIONAL CONTENTS ON A MAIN VIDEOGRAPHIC IMAGE
* * * *
Technical field of the invention
The present invention generally refers to devices for the multiple display of differentiated graphic contents, such as texts, images and movies, and particularly refers to a method for enhancing the information content of videographic images in such devices.
The invention may have useful applications in the diffusion to the public of differentiated information contents based on the installation area, such as for example stations, airports, subways, etc., as well as in devices with opposite display directions mounted on means of public transport such as buses, trams, trains, etc., where signals must necessarily be displayed, such as for example directional arrows or, more generally, information signs, such as for the ascent or descent of passengers, messages about the next stop, etc.
Still more generally, the invention may be applied in all cases wherein a main graphic content must be displayed on a plurality of devices placed in different installation solutions and, for this reason, requiring a display of information differentiated from case to case.
A typical example of such applications can be composed of two graphic panels with TFT LCD technology mounted with opposite display directions (in a configuration called "back-to-back") aboard a metro. In this case, the same displayed image, for example one having an arrow to indicate to the passengers the descent side, should be able to be turned in the opposite direction on one of the two devices, precisely to indicate a common descent direction.
State of the Art
Currently the reproduction of different graphic contents, images or videos, on as many reproducing devices is carried out using a video transmission channel for each graphic content to be played.
Figure 1 of the appended drawings shows schematically an example of one of the currently used techniques for the playback of two video contents 1 and 2 on as many respective display devices 1a, 2a. This is in particular a system including a graphics processor 3, such as a Dual Output graphic chipset with two independent integrated video outputs, as is commonly the case with Embedded PCs.
Another typical example of the known art for playing two video contents 1 and 2 on as many respective display devices 1a, 2a is illustrated in Figure 2. In this other example a graphics processor 4 is used, composed of one or more additional graphics cards, each of which handles a single independent video output, as often happens in the case of common desktop PCs.
Based on the above, to display a plurality of different graphic contents within a single visual channel, some techniques for processing the visual contents are necessarily used.
According to the state of the art, different methods for handling and implementing multiple textual or graphic information contents within a single visual channel are already known and widely used, so that the content can then be reproduced by one or more playback devices. Some of these methodologies are reviewed here below.
- Chroma Key: It is one of the techniques used to create the so-called "keying effects", through which it is possible to combine two video sources using a particular color, precisely a "color key", to indicate to the video mixer which source is to be used at a determined time.
From the main video source, in fact, only the areas coinciding with the "color key" are removed (or made "transparent"), thus allowing the secondary video source to be displayed in their place. However, with this technique a discrimination of the chromatic contents to be displayed is made, linked precisely to the chosen "color key", so a full customization of the created graphic content is not possible. Further, the two video sources, once mixed together in one final video content, cannot be selectively reconstructed or discriminated.
- Teletext: It is an interactive service used in the television area, consisting of text pages viewable on a television screen at the user's request, and it is used to provide users with information of various kinds.
This information is transmitted periodically within video frames, in particular within the vertical blanking interval. This technique, however, provides an additional content of an only textual or very low graphic resolution nature. It also requires a specific decoder able to detect and decode the additional information content. Further, such content does not interact in any way with the images in the video frames and, therefore, provides no differentiated graphic content in relation to the displayed images.
- Video mixer: It is a device used to switch different video sources on a single output signal and, in some cases, to mix them together or add some special effects, such as the addition of texts or graphics, or the handling of chromatic or structural properties of the image. As referred above for the Chroma Key, also in this case the video sources, once mixed together in one final video content, cannot be selectively reconstructed or discriminated anymore. In fact, to discriminate between different video sources on various display devices it is necessary to have a number of video outputs at least equal to the number of sources, as shown in the example represented in Figure 2.
Thus, where a mix of video sources is performed in the techniques and methods above, a permanent and unidirectional union is de facto obtained, which does not allow the video sources to be selectively reconstructed or discriminated for later display on different playback devices.
Other currently and widely used methods, wherein two video contents differentiated from one another are used, fall under the term "3D Imaging". In fact this term represents the set of techniques for making and viewing images, drawings, photographs and videos designed to communicate an illusion of three dimensions, similar to that generated by the binocular vision of the human visual system, by presenting two different images separately to the left and right eyes, respectively, of the beholder.
Because of the range of existing implementation techniques of these stereoscopic display technologies, it becomes necessary to specify some of them, particularly those related to dynamic images (video), which are more relevant to the present treatise than those related to static images.
- Anaglyph: it consists in discriminating, by the use of glasses with additional filtering, two images (one intended for the right eye and the other intended for the left eye) previously filtered with two different colors.
- Alternate darkening system: it consists in discriminating, by glasses provided with active synchronized shutters (active shutter glasses), two images, one intended for the right eye and the other intended for the left eye, projected in rapid sequence.
- Polarized lens system: it consists in discriminating, by polarized glasses oriented orthogonally one relative to the other, two images, one intended for the right eye and the other intended for the left eye, projected in rapid sequence.
- Autostereoscopy or AS-3D: it consists in "hiding" from each eye the image intended for the other eye through the implementation of appropriate technologies directly on the medium (paper print or monitor), thus not requiring the use of other devices. Some implementing technologies of this system are based on: lenticular network, parallax barrier, illumination and holographic screen. In the case of a video display device, however, the application of autostereoscopic techniques involves, as mentioned, the implementation of special technologies directly on the display medium. In addition, the overall image must necessarily be larger, since it must be composed of a plurality of points of view of the original image, observed by the left eye and the right eye respectively, using specific devices depending on the implemented technology. This also means that the illusion of three-dimensionality is achieved only when observing the device within determined observation angles.
It is however obvious that the above described techniques involve the use of optical devices, such as glasses with additional filtering or polarized lenses, or opto-electronic devices such as active synchronized shutters. In addition, the alternate darkening system does not provide information contents in real time, but discriminates, through active synchronized shutters, two images shown at different times.
Object and Summary of the Invention
Starting from this preamble, and in particular having considered some "3D Imaging" techniques, the present inventors looked for a further technique for displaying differentiated videographic contents, able to allow a real discrimination of graphic contents on different playback devices, advantageously without the need for optical or opto-electronic devices and additional resources and without worsening the graphics processor performance.
The present invention is the result of such research and has the object of providing an innovative method for enhancing the information content of videographic images with differentiated graphical contents, and more particularly a method for reconstructing the aforesaid differentiated images for their display on different playback devices.
This object is achieved, according to the invention, with a method for enhancing the information content of videographic images comprising: prearranging a main videographic image,
prearranging one or more videographic images having additional information contents, video - encoding of the main videographic image added to additional videographic contents, and
subsequently video - decoding for the selective reproduction of additional videographic contents in the same main image on various display devices.
Therefore it should be noted that the method herein proposed is not intended for the implementation of a stereoscopic image (although it can also be implemented in this ambit) but, as mentioned above, for the discrimination of different graphic contents on various display devices.
Brief Description of the Drawings
An embodiment of the invention will be better illustrated in the course of the description, referring to the attached drawings in which, in addition to Figures 1 and 2 illustrating the above commented state of the art, are represented:
Figure 3 a block diagram of the whole process according to the invention;
Figure 4 a block diagram of the video - encoding process;
Figure 5 a block diagram of the video - decoding process;
Figure 6 an example of creating an encoded image;
Figure 7 an example of encoding a single differentiated content in a single 8 bit chromatic channel.
Figure 8 an example of encoding the positional and controlling information, and
Figure 9 an example of decoding and reconstructing a single 8 bit chromatic channel.
Detailed Description of the Invention
In Figure 3 a block diagram of the whole process is therefore represented whereby, for example, at least two differentiated video contents 11 and 12 are mixed and encoded into a video encoding system 13, then processed in real-time in a video - decoding system 14 so as to separate and reconstruct the distinct and independent video information on as many display devices 15 and 16.
Video Encoding
As shown in Figure 4, the video encoding 13 comprises the following steps:
- encoding differentiated graphic contents and the respective positional and controlling data on the main image;
- transmitting the encoded image to the reproduction devices.
Video Decoding
As shown in Figure 5, the video decoding 14 comprises the following steps:
- decoding positional and controlling data from the received encoded video stream;
- transmitting differentiated contents to display devices in case of correspondence with the previously decoded respective positional and controlling data or transmitting the common original content (not differentiated).
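The selection step above can be sketched as a minimal illustration in Python, under assumptions of ours not stated in the original: each decoded area carries the identifier of the display device it is addressed to, decoded images are stored as pixel dictionaries keyed by coordinates, and areas are given as (x1, y1, x2, y2) vertex pairs. All names below are hypothetical.

```python
def select_pixel(x, y, areas, main_pixels, diff_pixels, device_id):
    """Return the pixel a given display device should show at (x, y):
    the differentiated content inside an area addressed to that device,
    the common (non-differentiated) main content everywhere else."""
    for (x1, y1, x2, y2), content_id in areas:
        # Inside a differentiated area addressed to this device?
        if x1 <= x <= x2 and y1 <= y <= y2 and content_id == device_id:
            return diff_pixels[content_id][(x, y)]
    # Default: the common original content
    return main_pixels[(x, y)]
```

A device whose identifier matches no area, or a pixel outside every area, simply receives the main image unchanged, which is exactly the fallback the second step describes.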
As underlined in detail below, the proposed method does not change the bit or byte size of the main image nor its resolution. In consequence, the transmission bandwidth necessary to send the image to the display devices is unaffected too.
The method may be implemented by any data processing device (microprocessor), any programmable logic device (FPGA, CPLD, etc.) or, more generally, by any digital processing logic circuit.
Particularly, the proposed encoding provides for the enhancement of the main image with one or more differentiated images, information about the position of these differentiated images and possible controlling information.
The differentiated images are encoded within the main image in the exact position wherein they will have to be displayed as differentiated contents on different playback devices.
In Figure 6 an example of creation of an encoded image is shown. An area is defined in the main image, for example 250/200 pixels wide by 50/100 pixels high, wherein, for example, a directional arrow of a defined color is shown (Figure 6, a). A differentiated content of the same size as the defined area is then created, such as a directional arrow with another color and opposite direction to the previous one (Figure 6, b). At this point, the differentiated content is encoded in the defined area on the main image, thus generating the encoded image to be transmitted (Figure 6, c).
The real encoding is carried out at the level of each color component constituting the pixel of the main image and the differentiated contents to be encoded.
Assuming the use of encoded images with the RGB color model (Red, Green, Blue) with 24-bit depth (8 bits per color), the pixels defined in the differentiated content area on the encoded image are composed using the 4 most significant bits (MSBs) of each chromatic channel constituting the main image and the 4 most significant bits of each chromatic channel forming the differentiated content.
The 3 Bytes so obtained (8 bits for red color, 8 bits for green color and 8 bits for blue color), will then form each single pixel defined in the differentiated image present in the encoded image.
On the contrary, the pixels not belonging to the differentiated content areas are carried over unchanged to the encoded image, as they are composed in the main image.
The resulting encoded image is then ready to be transmitted to the various devices of graphic reproduction.
For purposes of illustration, in Figure 7 an example of encoding is shown, wherein the 4 bits obtained from the main image are placed at the higher position of the byte constituting the chromatic component of the encoded image, whereas the 4 bits obtained from the differentiated contents are placed at the lower position.
It will also be possible to implement other byte-encoding combinations for each chromatic component of the encoded image, such as encoding the 4 bits obtained from the main image in the 4 even positions of the byte in the encoded image, and encoding the 4 bits obtained from the differentiated content in the remaining 4 odd positions.
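The two byte layouts just described can be sketched per chromatic channel as follows; a minimal illustration in Python, with function names of our own choosing (they do not appear in the original):

```python
def encode_channel_high_low(main: int, diff: int) -> int:
    """Figure 7 layout: the 4 MSBs of the main-image channel go in the
    high nibble, the 4 MSBs of the differentiated content in the low one."""
    return (main & 0xF0) | (diff >> 4)

def encode_channel_interleaved(main: int, diff: int) -> int:
    """Alternative layout: the 4 main-image MSBs occupy the even bit
    positions, the 4 differentiated-content MSBs the odd positions."""
    out = 0
    for i in range(4):
        out |= ((main >> (4 + i)) & 1) << (2 * i)      # even positions 0, 2, 4, 6
        out |= ((diff >> (4 + i)) & 1) << (2 * i + 1)  # odd positions 1, 3, 5, 7
    return out
```

For example, with a main-image channel value of 0xD6 and a differentiated-content value of 0x93, the first layout yields 0xD9. Applied to all three chromatic channels, this produces the 3 bytes of each encoded pixel without changing the pixel size.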
Following the encoding structure shown in Figure 7, Table 1 shows an example of encoding with a single differentiated content of a 24-bit pixel (thus consisting of 8 bits for the red chromatic component, 8 bits for the green one and 8 bits for the blue one), which will form the differentiated content area in the encoded image.
Table 1
As before mentioned, many differentiated contents may be encoded and they could be differently encoded depending on the image types they are intended to be used, the needs and the type of application.
The table below shows an encoding example of a 10-bit single chromatic component (sub-pixel) with two different differentiated contents, using only 2 bits to encode each of them:

Bit position                             10 9 8 7 6 5 4 3 2 1 0
Main image sub-pixels                     1 1 0 1 0 1 1 0 1 1 1
Sub-pixels of differentiated content 1    1 1 0 0 1 0 0 1 0 0 1
Sub-pixels of differentiated content 2    0 1 1 0 0 1 0 1 1 1 0

Table 2
As stated above, each image (which can be static or dynamic) could be enhanced with more differentiated contents, and then differently played on various devices for graphic playback.
In order to properly play each differentiated content it is however necessary to be able to distinguish exactly the areas wherein these contents have been included. To do so, positional information, and possibly controlling information, related to the areas containing the differentiated contents, is also included in the composition of the encoded image.
The encoding of positional data can be carried out by entering, in binary encoding, the pixel coordinates on the graphic matrix corresponding to the two opposite vertices of each area inside which the differentiated content is encoded. It is still possible to use different information for the determination of each differentiated content area, such as, by way of example, a vertex, its position in relation to the area and the pixel dimensions (width and height) of the area itself.
For example, the coordinates could be defined according to format "row, column" and reported on the pixel matrix having as reference (0, 0) the first pixel in the upper left corner, i.e. the usually transmitted first pixel containing the image information, as shown in Figure 6.
It is better to enter such an encoding already in the first lines of the encoded image, in order to decode the exact position of the differentiated content areas earlier than the image content, thus being able to use them during the processing thereof.
Figure 8 shows an example of encoding corresponding to the location of a single area with differentiated content. It is also shown how it may be possible to insert a controlling byte, through which the presence of differentiated content areas in the image itself is indicated to the graphics processor during the receiving step, and, consequently, the decoding and processing of the areas is started.
It is obvious that, in the same way, it could be possible to enter other additional information (such as an error checking on the received information or information for the chromatic change of the image) depending on the type of application and the receiving graphics processing device.
In practice, the encoding may be implemented on any RGB color component, depending on the number of differentiated encoded areas, or possibly on all three color components depending on the number of positional information determining them.
In the previous example in Figure 8, the encoding may be implemented using the less significant bit (LSB) of the red color component of each pixel constituting the image, encoding the whole string of 40 bits in the first row (then using 40 pixels) starting from the reference pixel in the upper left corner (0, 0).
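A sketch of this positional encoding in Python follows. The 40-bit field layout used here (one controlling byte followed by four 8-bit coordinate values) is only a plausible assumption, since the exact layout is given in Figure 8 of the original; the function names are likewise ours. Pixels are modeled as (R, G, B) tuples.

```python
def pack_area_info(ctrl: int, x1: int, y1: int, x2: int, y2: int) -> list:
    """Serialize a controlling byte and the two opposite vertices of one
    differentiated area into a 40-bit string, MSB first (assumed layout)."""
    bits = []
    for byte in (ctrl, x1, y1, x2, y2):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

def embed_lsb_red(first_row, bits):
    """Write each bit into the LSB of the red component of successive
    pixels of the first row, starting from the reference pixel (0, 0)."""
    out = list(first_row)
    for i, bit in enumerate(bits):
        r, g, b = out[i]
        out[i] = ((r & 0xFE) | bit, g, b)
    return out

def extract_lsb_red(first_row, n):
    """Recover the first n embedded bits on the receiving side."""
    return [r & 1 for (r, _g, _b) in first_row[:n]]
```

Because only the least significant bit of one channel is touched, each of the 40 carrier pixels changes its red level by at most 1, which is visually negligible and leaves the image size and bandwidth unchanged.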
As shown in Figure 5, in order to decode and process the differentiated content areas from the received video stream, it is firstly necessary to correctly decode the positional information and possible controlling information. The decoding of positional and controlling data is carried out in real time during the receiving step of the encoded video stream. As shown above, once the desired encoding protocol is determined, the receiving graphics processor can simply decode the useful information needed for discriminating the differentiated graphic areas and for controlling the display, or any other encoded information useful for the specific application.
After positional information related to each differentiated content area has been decoded and acquired, as well as possible controlling information, the receiving graphics processor may proceed with the processing of the encoded image discriminating the display of differentiated content on different devices.
Performing the described procedure in backward order with respect to the encoding of differentiated content and, of course, only for the areas with differentiated content, the most significant bits of the main image and the most significant bits of the differentiated contents can be decoded for each color component of every pixel, and thereby the various images can be reconstructed. Referring to the earlier example illustrated in Figure 7, the 4 most significant bits of each chromatic component of the encoded image are used to reconstruct the 4 most significant bits of each chromatic component of the main image, while the 4 less significant bits are used to reconstruct the 4 most significant bits of each chromatic component of the differentiated content. The complete reconstruction of both the main and differentiated content images is carried out by also replicating at the bottom of the byte the same 4 decoded bits placed in its most significant portion. In this way an 8-bit binary approximation of the chromatic level is obtained for each color component, as shown in Figure 9.
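The reconstruction by nibble replication just described can be sketched in Python as follows (the function name is ours, not from the original):

```python
def decode_channel(enc: int) -> tuple:
    """Split an encoded 8-bit channel (Figure 7 layout) into approximate
    main-image and differentiated-content values: recover each 4-MSB
    nibble, then replicate it into the low nibble as an 8-bit estimate."""
    main_hi = enc & 0xF0          # 4 MSBs of the main image
    diff_hi = (enc & 0x0F) << 4   # 4 MSBs of the differentiated content
    return main_hi | (main_hi >> 4), diff_hi | (diff_hi >> 4)
```

For an encoded channel value of 0xD9, this yields 0xDD for the main image and 0x99 for the differentiated content: each reconstructed value matches the original in its 4 most significant bits, with the low nibble approximated by replication.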
The reconstruction of the main image and the differentiated content images can be carried out, depending on the application and the type of encoded image, by such bit replication or by default values.

Claims

"METHOD FOR IMPROVING THE CONTENT OF VIDEOGRAPHIC IMAGES"
CLAIMS
1. Method for enhancing the information content of videographic images comprising the steps of:
prearranging a main videographic image,
prearranging one or more videographic images having additional information contents,
video - encoding the main videographic image added to additional differentiated videographic contents, and
subsequently video - decoding for the selective playback of additional differentiated videographic contents in the same main image on various display devices.
2. Method according to claim 1, characterized by implementing a main videographic image with at least one additional differentiated videographic content in at least one area of a determined pixel number of said main image, in order to differently play the additional videographic content in the same area of the main image on various display devices.
3. Method according to claim 1 or 2, characterized by mixing and encoding information in a video - encoding system of at least two differentiated videographic contents, and processing and then separating in a video - decoding system, in real time, information of such differentiated videographic contents in order to distinctly and independently reconstruct the differentiated videographic contents on various display devices.
4. Method according to one of the preceding claims, characterized by defining at least one area of the main image inside which the at least one differentiated videographic content is played, encoding the differentiated videographic content in the defined area on the main image, and then generating the encoded image to be differently transmitted and displayed on various devices.
5. Method according to the preceding claims, characterized in that the video - encoding system includes an encoding of differentiated videographic contents and related positional and controlling data on the main image, and further the transmission of the encoded image to the playback devices of the enhanced image.
6. Method according to claim 5, characterized in that the encoding of positional data can be carried out by entering in binary encoding the pixel coordinates on the graphic matrix corresponding to, e.g., the two opposite vertices related to each area inside which the differentiated content is encoded.
7. Method according to claim 5 or 6, characterized in that the encoding can be carried out at the level of each color component constituting the pixel of the main image and the differentiated videographic content, and that the defined pixels in the area with differentiated videographic content on the encoded image can be composed by using the Most Significant Bits (MSBs) of each chromatic channel constituting the main image and the Most Significant Bits of each chromatic channel constituting the differentiated videographic content.
8. Method according to claim 7, characterized in that the encoding can be carried out on any color component (RGB), depending on the number of differentiated encoded areas, and the number of positional information determining them, or in case on all three color components.
9. Method according to the preceding claims, characterized in that the video - decoding system includes a decoding of positional and controlling data coming from the video - encoding, and the transmission of decoded data to devices for displaying the differentiated videographic content in case of correspondence with the just decoded positional and controlling respective data.
PCT/IT2011/000107 2011-04-11 2011-04-11 Method of improving the content of videographic images with encoding of additional contents on a main videographic image WO2012140678A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/IT2011/000107 WO2012140678A1 (en) 2011-04-11 2011-04-11 Method of improving the content of videographic images with encoding of additional contents on a main videographic image
EP11721579.8A EP2697707A1 (en) 2011-04-11 2011-04-11 Method of improving the content of videographic images with encoding of additional contents on a main videographic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IT2011/000107 WO2012140678A1 (en) 2011-04-11 2011-04-11 Method of improving the content of videographic images with encoding of additional contents on a main videographic image

Publications (1)

Publication Number Publication Date
WO2012140678A1 true WO2012140678A1 (en) 2012-10-18

Family

ID=44579242

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IT2011/000107 WO2012140678A1 (en) 2011-04-11 2011-04-11 Method of improving the content of videographic images with encoding of additional contents on a main videographic image

Country Status (2)

Country Link
EP (1) EP2697707A1 (en)
WO (1) WO2012140678A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU655338B2 (en) * 1991-11-25 1994-12-15 Sony (Australia) Pty Limited Audio and multiple video monitoring on a computer display
US5666548A (en) * 1990-06-12 1997-09-09 Radius Inc. Process of extracting and processing information in a vertical interval of a video signal transmitted over a personal computer bus
WO1998017058A1 (en) * 1996-10-16 1998-04-23 Thomson Consumer Electronics, Inc. Apparatus and method for generating on-screen-display messages using line doubling
US6333750B1 (en) * 1997-03-12 2001-12-25 Cybex Computer Products Corporation Multi-sourced video distribution hub
US20020174439A1 (en) * 1999-11-05 2002-11-21 Ryuhei Akiyama Television system for accumulation-oriented broadcast, information display system, distribution system, and information distribution method
US20050140567A1 (en) * 2003-10-28 2005-06-30 Pioneer Corporation Drawing apparatus and method, computer program product, and drawing display system
US20050280650A1 (en) * 2004-06-18 2005-12-22 Fujitsu Limited Image display system and image processing device
US20050285980A1 (en) * 2004-06-25 2005-12-29 Funai Electric Co., Ltd. Digital broadcast receiver


Also Published As

Publication number Publication date
EP2697707A1 (en) 2014-02-19

Similar Documents

Publication Publication Date Title
US10567728B2 (en) Versatile 3-D picture format
CN104333746B (en) Broadcast receiver and 3d subtitle data processing method thereof
CN102292977B (en) Systems and methods for providing closed captioning in three-dimensional imagery
RU2667605C2 (en) Method for coding video data signal for use with multidimensional visualization device
KR101651442B1 (en) Image based 3d video format
US9438879B2 (en) Combining 3D image and graphical data
KR101630866B1 (en) Transferring of 3d image data
ES2927481T3 (en) Handling subtitles on 3D display device
US20110293240A1 (en) Method and system for transmitting over a video interface and for compositing 3d video and 3d overlays
CN102918855B (en) For the method and apparatus of the activity space of reasonable employment frame packing form
US20100091012A1 (en) 3 menu display
US20110298795A1 (en) Transferring of 3d viewer metadata
KR20110114673A (en) Transferring of 3d image data
KR20070041745A (en) System and method for transferring video information
US20090207237A1 (en) Method and Device for Autosterioscopic Display With Adaptation of the Optimal Viewing Distance
JP2005175566A (en) Three-dimensional display system
US20160057488A1 (en) Method and System for Providing and Displaying Optional Overlays
US11936936B2 (en) Method and system for providing and displaying optional overlays
US10742953B2 (en) Transferring of three-dimensional image data
US20120081513A1 (en) Multiple Parallax Image Receiver Apparatus
ITMO20080267A1 (en) SYSTEM TO CODIFY AND DECODE STEREOSCOPIC IMAGES
EP2697707A1 (en) Method of improving the content of videographic images with encoding of additional contents on a main videographic image
KR101567710B1 (en) Display system where the information can be seen only by the viewer wearing the special eyeglasses
US20120154383A1 (en) Image processing apparatus and image processing method
ITBS20110048A1 (en) METHOD TO ENRICH THE INFORMATION CONTENT OF VIDEOGRAPHIC IMAGES

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11721579

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2011721579

Country of ref document: EP