WO2011064379A1 - System for assisting production of stereoscopic images - Google Patents

System for assisting production of stereoscopic images

Info

Publication number
WO2011064379A1
WO2011064379A1 PCT/EP2010/068467 EP2010068467W
Authority
WO
WIPO (PCT)
Prior art keywords
data
image
viewing
optics
images
Prior art date
Application number
PCT/EP2010/068467
Other languages
French (fr)
Inventor
Jacques Delacoux
Original Assignee
Transvideo
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Transvideo filed Critical Transvideo
Priority to EP10785406A priority Critical patent/EP2508004A1/en
Publication of WO2011064379A1 publication Critical patent/WO2011064379A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/246Calibration of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/296Synchronisation thereof; Control thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to a device for processing digital images, itself consisting of several modules, which creates in real time graphic indications allowing appreciation of the quality of the images and of certain disparities or anomalies which may occur during their capture by a system intended to create images producing a sensation of relief for the spectator.

Description

SYSTEM FOR ASSISTING PRODUCTION OF STEREOSCOPIC IMAGES.
DESCRIPTION
TECHNICAL FIELD AND PRIOR ART
The invention relates to the field of technical means for producing multidimensional images, commonly called 3D or stereoscopic images, aiming at recreating the sensation of relief specific to human vision.
The creation of this type of images resorts to image capture systems coupled to each other via an adequate system. These image capture systems are generically called « cameras » and are equipped with optical systems, « the objectives », which have several adjustments: adjustment of the focus (« focusing », i.e. producing a sharp image), adjustment of the aperture (« iris diaphragm ») and possibly of the focal length (« zoom »).
Adjustment differences between the optics of the capture systems cause anomalies in the perception of relief by the spectator. It is therefore sought to produce images as perfect, or as good, as possible during shooting.
The risk of error becomes all the more significant given the number of parameters to be constantly monitored.
To date, there is no device providing such monitoring in an easy and time-saving way.
DISCUSSION OF THE INVENTION
The invention first relates to a device for processing at least two individual digital images obtained by respective means for acquiring an individual image, each having optics, this device including means for calculating from data of at least two individual digital images:
- focusing area data of each acquisition means (for example the distance or the average distance at which each acquisition means is focused) and/or a difference between focusing area data of at least two acquisition means,
- and/or focusing data on each optics and/or a difference between focusing data of at least two optics of at least two acquisition means,
- and/or focal length data (zoom) of each optics and/or a difference between focal length data of two optics of at least two acquisition means,
- and/or aperture (iris) data of each optics and/or a difference between aperture data of two optics of at least two acquisition means,
- and/or image histogram data from each of the means for acquiring digital images.
Each individual image may be obtained by a means for acquiring digital images, including optics, positioned in space differently from the image acquisition means of another individual image.
Viewing means may allow display of an image resulting from the combination of individual images, in order to form a so-called « three-dimensional » image. A device according to the invention may be associated with means for providing a user with 3D vision.
A device according to the invention is therefore notably designed so as to be applied with a device for producing multidimensional images or further stereoscopic images, aiming at recreating the sensation of relief specific to human vision.
The invention therefore notably relates to a system consisting of one or several modules, with which:
- the areas of the images from each camera which are « in focus » are shown unambiguously,
- and/or a focusing disparity of each image may be diagnosed immediately,
- and/or zoom adjustment deviations may be diagnosed immediately,
- and/or iris adjustment deviations may be diagnosed immediately.
Each diagnostic module may be used individually or in combination with one or several other ones.
The invention therefore notably relates to a device for processing digital images including at least one module with which it is possible to generate, in real time, graphical indications for appreciating the quality of the images and certain disparities or anomalies which may occur in their capture by a 3D image viewing system. Means may further be provided for correcting these anomalies, on the basis of these graphical indications.
A device according to the invention may include means for calculating, from digital image data, data of focused areas of each acquisition means and/or a difference between data of focused areas of at least two acquisition means; these means may themselves include at least one filter capable of discriminating the contours of the objects which have the characteristics of focusing.
With a device according to the invention, it is possible in an embodiment to produce at least one portion of a contour of one or more focused areas of a 3D image, this contour portion being visually different from at least one portion of a contour of one or more areas of the same image which are not focused. A first type of contour for one or more focused areas of an image and a second type of contour, different from the first, for one or more other areas of the image may therefore be made. The first type of contour may be in a first colour, the second type of contour in a second colour.
Advantageously, a device according to the invention further includes means for forming, on the viewing means, or on or in an image formed by these viewing means, vertical and/or horizontal lines, or lines along at least one first and/or one second direction on the screen, superimposed on said image, these lines forming what is called a grid, or viewing grid.
Such a device may then include means for displaying portions of lines of a first type and portions of lines of a second type depending on the characteristics of the portions of the image covered by these portions of lines.
It is possible, with this type of grid, to directly form on the image at least one indication of a first type, for example on the focused areas of an image. Simultaneously, indications of a second type may be formed and displayed, for example relating to data of focused areas and/or to focal length data and/or aperture data, these indications being for example located in the margin of the image or beside it, but on the viewing means.
In particular, a device according to the invention may include means for forming, on the viewing means, or on or in an image formed by these means, or close to such an image, histogram data for at least one of the images obtained by the digital image acquisition means.
In a particular embodiment, a device according to the invention includes means for forming, on the viewing means, or on or in an image formed by these means, or close to such an image, at least one mark or reference mark or scale and cursor-forming means on this mark, in order to represent, for example as a function of the distance to the corresponding image acquisition means, differences between data of focused areas of images obtained by the digital image acquisition means and/or differences between focal length data and/or aperture data of the digital image acquisition means.
The invention also relates to a device for acquiring, processing and viewing images, including:
- at least two means for acquiring digital images,
- means for forming and viewing a 3D image of an object from digital image data,
- a device for processing data of digital images according to the invention, as described above.
A device according to the invention may be combined with means allowing an observer to view three-dimensional images.
The invention also relates to a method for processing synchronous and simultaneous digital images from a same scene, these images being obtained by at least two digital image acquisition means, each including optics, positioned differently in space, this method including the determination from digital image data:
- of focusing area data of each acquisition means and/or a difference between focusing data of at least two optics of at least two acquisition means,
- and/or focusing data of each optics and/or a difference between focusing data of at least two optics of at least two acquisition means,
- and/or focal length data (zoom) of each optics and/or a difference between focal length data of two optics of at least two acquisition means,
- and/or aperture data (iris diaphragm) of each optics and/or a difference between aperture data of two optics of at least two acquisition means,
- and/or histogram data of each image,
- and the display of an image of the scene, and of at least one of the data above.
In such a method, a distance between at least two acquisition means and/or a relative position of the optical axes of at least two acquisition means may be adjusted. It is also possible to proceed with adjusting one or several parameters of the system, in order to correct defects or deficiencies in the viewed data.
A method according to the invention may therefore also include a correction of at least one of the means for acquiring digital images, or of its optics, depending on the displayed focusing area data of each acquisition means and/or on displayed focusing data of each optics and/or on displayed focal length data (zoom) of each optics and/or on displayed aperture (iris) data of each optics and/or on displayed histogram data of each image.
In a particular embodiment, data of focused areas or adjusted areas of each acquisition means and/or a difference between data of focused areas of at least two acquisition means are calculated from digital image data and viewed.
It is further possible to produce at least one portion of a contour of one or more focused areas of a 3D image, this contour portion being visually different from at least one portion of a contour of one or more areas of the same image which are not focused.
Parallel lines may advantageously be formed, superposed on an image formed on these viewing means. Portions of lines of a first type and portions of lines of a second type may then be displayed, depending on the characteristics of the portions of the image covered by these portions of lines.
On the viewing means, histogram data may further be formed for at least one of the images obtained by the digital image acquisition means.
According to another aspect of the invention, at least one mark or reference mark or scale and cursor-forming means on this mark or scale are formed in order to represent, for example as a function of the distance to the corresponding image acquisition means, differences between data of focused areas of images obtained by the digital image acquisition means and/or differences between focal length data and/or aperture data of the digital image acquisition means.
Aspects of application of these different method characteristics are already described above in connection with the presentation of a device according to the invention.
SHORT DESCRIPTION OF THE DRAWINGS
Figs. 1A and 1B illustrate two embodiments of the invention.
Fig. 2 illustrates the display of an image obtained by a system according to the invention.
Fig. 3 illustrates in more detail two cameras of the system according to the invention.
Fig. 4 illustrates, in a more detailed way, a screen for displaying an image according to the invention.
Fig. 5 illustrates another embodiment of the invention.
Fig. 6 illustrates an exemplary image obtained with a device according to the invention.
Fig. 7 illustrates a screen with a viewing grid superposed on the field of the screen.
Fig. 8 illustrates an exemplary embodiment of a dual cursor for displaying data differences between the images obtained by two viewing means.
Fig. 9 illustrates an example of images displayed on a viewing screen, on which so-called « focused » areas appear in a way which is different from the other areas.
DETAILED DISCUSSION OF PARTICULAR EMBODIMENTS
For reasons of simplification, the present document describes an application to a two-camera system (or, more generally, two image sensors, or two digital image acquisition means).
But the teaching of the invention may be generalized without any difficulty to a system equipped with more than two cameras or, more generally, more than two image sensors or digital image acquisition means.
Generally, these different means for acquiring digital images, in any number n (n ≥ 2), are physically separated from each other, but positioned in an adjustable way relatively to each other. Each of them may produce an individual image of a same object 2 (see Figs. 2, 4, 5) or of a same scene, the different images being acquired simultaneously and in a synchronized way, each image notably depending on the position and on the characteristics of the image acquisition means which generates it. Each of the thereby obtained individual images will be combined with the other individual images in order to make an image having the characteristics of a so-called three-dimensional or « 3D » image, or which may be perceived as such by a user.
As illustrated in Fig. 1A, a first embodiment of the invention includes two image sensors 1, 3 (for example cameras) equipped with their respective optics 10, 30.
Each camera allows acquisition of digital images. These images may then be processed as explained below.
Both cameras are associated with means 5 coupling them in space, for example substantially in parallel so that their viewing axes are parallel with each other. Alternatively, their optical axes intersect at a convergence point C as in Fig. 3. According to an embodiment, at least one camera, or each camera, is rotatably mounted around at least one axis of rotation, for example locally perpendicular to a lateral displacement direction of this camera, or substantially coincident with, or in the vicinity of, this displacement direction.
The means 5 may include a rectilinear or curved bar on which the cameras are mounted, for example in an adjustable way along the bar, in order to be able to view an object 2.
In one alternative, Fig. 3 illustrates both cameras 1, 3 with a variable spacing d between them, but also with a variable angle α between them. Point C represents the convergence point of the optical axes of both cameras 1, 3. This adjustment of the angle may be obtained by means for orienting the cameras.
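As an illustrative order-of-magnitude relation (an assumption for this sketch, not stated in the description above): if α denotes the full angle between the two optical axes and d the spacing between the cameras, simple trigonometry places the convergence point C at roughly (d/2)/tan(α/2) in front of the camera baseline.

```python
import math

def convergence_distance(spacing_d: float, angle_alpha_deg: float) -> float:
    """Approximate distance from the camera baseline to the convergence point C,
    assuming the optical axes converge symmetrically with a full angle alpha."""
    half_angle = math.radians(angle_alpha_deg) / 2.0
    return (spacing_d / 2.0) / math.tan(half_angle)

# Example: a 65 mm spacing with a 2 degree convergence angle places C at about 1.86 m.
print(convergence_distance(0.065, 2.0))
```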
Therefore, means such as the means 5 of Fig. 1A may give the possibility of varying the distance between both cameras, but also the viewing angle of each of the cameras relatively to the other. Means are then provided for maintaining the cameras fixed in a determined position once the latter is reached. For example, with each camera is associated a system including one or several blocking screws for blocking its translational displacement and one or several blocking screws for blocking its rotational displacement.
Alternatively (Fig. 1B), both cameras may be positioned along different directions by being mounted on a frame 5' including apertures corresponding to the objectives 10, 30 of the cameras. For example, a semi-transparent mirror 15 allows each camera 1, 3 to receive part of the radiation of an incident image which propagates along an optical axis 17.
Both image capture devices 1, 3 are synchronous with each other and simultaneously produce images which correspond to those which would be seen by the left eye and by the right eye of an observer, with a gap and convergence, variable depending on the desired effect.
Each image will have characteristics which depend on the position of the camera which generates it. As illustrated in Fig. 2, different images may be displayed on a same viewing device, so as to create a sensation of relief for a spectator. The latter may be provided with means delivering a different content for his/her right eye and his/her left eye, for example through specific spectacles 6 or by direct viewing, the human brain recombining the images in order to form what is currently called « the relief » and to allow appreciation of the distances.
Several anomalies may occur during the capture of the images, making the immersion difficult for the spectator, or even a painful experience in certain cases, and in any case at least discouraging for the disappointed spectator. It is recalled that immersion is the capability of a spectator of being led to believe that the images which he/she sees are real images. Or, expressed in a different way, if the images have certain characteristics and if certain defects are absent, such as vertical parallax errors, the brain of the spectator accepts « playing the game » and representing or reconstructing a three-dimensional (3D) image or « an image in relief ».
Precautions should therefore be taken during capture of the images so that they are of optimum or good quality, allowing this « 3D » reconstruction.
As possible defects, there are those which may result from adjustment disparities of the objectives 10, 30.
A lack of focusing adjustment may first occur. A lack of adjustment of the focusing of one of the two objectives 10, 30 produces a blurred or shifted image for one of the eyes of the observer, creating visual discomfort similar to that due to a lack of accommodation of an eye.
Another type of defect is a lack of adjustment of the aperture (iris) of one of the two objectives 10, 30, which produces an overexposed or underexposed resulting image for one of the eyes of the observer, causing a more or less bothersome stroboscopic phenomenon depending on its intensity. Moreover, a difference in exposure will be associated with a difference in depth of field between both images.
Still another type of defect is a lack of adjustment of the focal length (zoom) of one of the two objectives 10, 30, which produces two images of different sizes, making three-dimensional perception impossible.
A device according to the invention may include different electronic modules for processing an image, either associated with each other or not. With each module (or set of means) it is possible to detect one of the aforementioned defects.
Certain modules may use resources common to several modules.
As illustrated in Fig. 5, a casing 20 may accommodate one or more of these modules.
Each module may be made as an electronic circuit and/or a set of software means programmed for producing the corresponding function. The modules are formed by FPGA-type programmable circuits associated with microprocessors or microcontrollers. The processing may also be performed in « digital signal processors » (DSP).
A first module, a so-called focusing module, includes a set of filters enabling discrimination of the contours of the objects which have the characteristics of focusing. According to the invention, these filters are applied to each of the images which will make up the three-dimensional image. The image resulting from the transfer function TF of these filters is added to the resulting image, producing a more or less enhanced coloured contour on the so-called « focused » areas. An image 19 including thick contours 19₁, which correspond to focused areas of the image, is illustrated in Fig. 9, while the areas with finer contours 19₂ are considered as being out of focus.
With a method according to the invention, it is possible to process the incident images of both cameras 1, 3 through these filters and to add, to the images corresponding to each eye, the contours in a coded colour which allows differentiation of each camera. Thus, it becomes possible to estimate whether the adjustment of the focusing is carried out on the same plane.
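As an illustration of this kind of processing, the sketch below is an assumption-laden example rather than the patented filter set: it marks sharp contours of each camera's image in a camera-specific colour, using a Laplacian magnitude as the sharpness criterion and OpenCV/NumPy for the image handling.

```python
# Illustrative sketch (not the patented filter): highlight in-focus contours
# of each camera's image in a camera-specific colour, so an operator can check
# whether both objectives are focused on the same plane.
import cv2
import numpy as np

def focus_contours(image_bgr, threshold=60.0):
    """Return a binary mask of contours sharp enough to be considered 'in focus'."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # The Laplacian magnitude is high only where contours are sharp;
    # blurred (out-of-focus) contours give a weak response.
    lap = np.abs(cv2.Laplacian(gray, cv2.CV_32F, ksize=3))
    return (lap > threshold).astype(np.uint8)

def overlay_focus(left_bgr, right_bgr):
    """Overlay the focused contours of both cameras in coded colours."""
    out = cv2.addWeighted(left_bgr, 0.5, right_bgr, 0.5, 0)
    out[focus_contours(left_bgr) > 0] = (0, 255, 0)   # camera 1: green contours
    out[focus_contours(right_bgr) > 0] = (0, 0, 255)  # camera 2: red contours
    return out
```

An operator could then check visually whether the green and red contours outline the same plane of the scene.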
A second module is a so-called « iris » module.
With this module, it is possible to recover digital data from the objectives 10, 30. The aperture data of the iris are then interpreted graphically and show the difference between both objectives 10, 30. Optionally, a programmable alarm allows definition of acceptable tolerances between both adjustments.
A third module is a so-called « zoom » module. From digital data stemming from the objectives, the zoom adjustment data may be interpreted graphically and show the difference between both objectives. It may thus be immediately identified which of the objectives would have to be corrected. Optionally, with a programmable alarm, it is possible to define acceptance tolerances between both adjustments.
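A minimal sketch of the comparison such « iris » and « zoom » modules perform, assuming the objectives report their settings digitally; the field names (f_stop, focal_length_mm, focus_distance_m) and the tolerance values are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class LensState:
    """Digital metadata assumed to be reported by an objective."""
    f_stop: float             # aperture (iris) setting
    focal_length_mm: float    # zoom setting
    focus_distance_m: float   # focusing distance

def adjustment_deviations(left: LensState, right: LensState) -> dict:
    """Deviation of each adjustment between the two objectives."""
    return {
        "iris": abs(left.f_stop - right.f_stop),
        "zoom": abs(left.focal_length_mm - right.focal_length_mm),
        "focus": abs(left.focus_distance_m - right.focus_distance_m),
    }

def check_tolerances(deviations: dict, tolerances: dict) -> list:
    """Programmable alarm: list the adjustments whose deviation exceeds its tolerance."""
    return [name for name, dev in deviations.items()
            if dev > tolerances.get(name, float("inf"))]

# Example: alarm if apertures differ by more than 1/3 stop or zooms by more than 1 mm.
devs = adjustment_deviations(LensState(2.8, 35.0, 3.2), LensState(4.0, 35.5, 3.2))
alarms = check_tolerances(devs, {"iris": 1 / 3, "zoom": 1.0, "focus": 0.1})
```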
A fourth module is a so-called « Dual Histogram » module: with this module, it is possible to statistically analyze the digital data from both cameras. These data are displayed simultaneously, superposed on each other, in coded colours allowing each source to be recognized. With this representation method it is possible, inter alia, to compensate for the possible lack of digital data from the objectives.
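A minimal sketch of such a dual histogram, assuming 8-bit luminance images from the two cameras and using matplotlib for display; the colour coding (red/blue) is an illustrative choice.

```python
import numpy as np
import matplotlib.pyplot as plt

def dual_histogram(left_gray, right_gray, bins=256):
    """Overlay the luminance histograms of both cameras in coded colours,
    so that differences in black level, dynamics and white level are visible."""
    hist_l, edges = np.histogram(left_gray, bins=bins, range=(0, 255))
    hist_r, _ = np.histogram(right_gray, bins=bins, range=(0, 255))
    centers = (edges[:-1] + edges[1:]) / 2
    plt.plot(centers, hist_l, color="red", label="camera 1")
    plt.plot(centers, hist_r, color="blue", label="camera 2")
    plt.xlabel("luminance level")
    plt.ylabel("pixel count")
    plt.legend()
    plt.show()
```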
Figs. 4 and 5 illustrate the image of an object, but also the display of a dual histogram 11, for example as a function of the distance to the respective camera or image acquisition means: one of the histograms relates to the image obtained by one (1) of the viewing means, the other histogram relates to the image obtained by the other viewing means (3). An illustration of such a dual histogram is given in Fig. 6. Because of the simultaneous viewing of both histograms, the differences in black levels, in dynamics and in white levels of the image are easily interpreted. It is also possible to diagnose synchronization errors of the camera sensors.
It is also possible to display data (for example aperture and/or focusing and/or zoom adjustment data) of the objectives, in another viewing field 13. An example of such a display is given in Fig. 8: these are two horizontal marks or lines 130, 132 (or horizontal scales), one of these marks or lines 130 indicating the difference between the focusing distances of both objectives and the other mark or line 132 the difference between the aperture data of the same two objectives. A cursor 131, 133 indicates, on each mark or line, the difference in focusing distance and in aperture respectively, with respect to a central value (marked as « 0 ») for which there is agreement between both objectives (if each of the cursors is in this position, then the focusing distances of both objectives are equal, as well as the aperture data). In order to improve visibility, the cursor 131 may be of a first colour (red for example) and the cursor 133 may be of a second colour (blue for example). This type of display may be achieved for other pairs of data, or for a single datum (for example, it is possible to have the sole mark or scale 130, with the sole cursor 131, which would indicate the differences between the focusing distances of both objectives).
Vertical marks or scales can also be implemented.
In another example of such a display, one horizontal or vertical mark (or horizontal or vertical scale), for example of the same type as in Fig. 8, indicates the position of a specific zone, for example as a function of the distance to the corresponding camera or image acquisition means.
As illustrated in Fig. 4, a device according to the invention may be integrated entirely or partly in a viewing system 4 such as a video monitor, a projector, or any other system for reproducing digital images.
But, as already explained above and as illustrated in Fig. 5, means or modules for image processing according to the invention may also be integrated in a self-contained module 20 intended to produce an image which may be used with a viewing system 4 as described earlier.
Means 21, such as adjustment buttons, allow correction of the defects of the image viewed on the screen, ascribable to either one of the phenomena described above. Alternatively, adjustment of each camera may also be performed. In other words, in a method according to the invention it is possible to correct one or several adjustments of the individual cameras from the information viewed on the screen 4.
In an embodiment, a viewing grid, a so-called « 3D grid », may be displayed on the screen, superposed on an image. An exemplary display of a grid 50, superposed on an image 59 on a screen 4, is given in Fig. 6.
Such a grid includes lines 50₁, 50₂, ... 50ₙ, parallel with each other, here vertical lines (it is possible to have horizontal lines, instead of vertical lines or in combination with them). The spacing (or spacing value) between two consecutive lines 50ᵢ, 50ᵢ₊₁ of the grid may be adjustable, for example in the number of pixels of the incident image and/or in the percentage of the number of pixels making up a line of the incident signal.
The lines of the grid may be parameterizable according to one or more parameters, for example different colours and/or intensities, so as to facilitate reading by an operator regardless of the contents of the image. For example, focused areas are identified on the lines by a particular colour. As a certain area of the image gradually becomes out of focus, the colour of the portions of lines which cover this area is modified. Other types of indication may be applied, in connection with lines parallel to each other, in order to differentiate areas of an image having a first characteristic from areas of the image which do not have this first characteristic. For example, it is possible to have more or less thick lines.
Fig. 7 schematically illustrates a grid 50 on a screen 4. For the sake of clarity, the image which results from the combination of the data of images provided by both cameras 1, 3 is not represented.
In focused areas of the image, the portions of lines of the grid are in a first colour, for example in green, which is illustrated in Fig. 7 by portions 50₂ of bold lines.
In areas of the image which are not in focus, the portions of lines of the grid are in a second colour, for example in red, which is illustrated in Fig. 7 by portions 50₁ of dashed lines. As seen in Fig. 6, a central portion 51 of the grid may be outlined so as to let the centre of the image remain visible.
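A possible way of colouring such a grid is sketched below, under the assumption that a local Laplacian-based sharpness measure stands in for the focusing filter; the spacing, block size, threshold and colours are illustrative parameters, not values from the description.

```python
import cv2
import numpy as np

def draw_focus_grid(image_bgr, spacing=64, block=16, threshold=40.0):
    """Draw vertical grid lines whose segments are green over in-focus areas
    and red over out-of-focus areas (an illustrative colouring scheme)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = np.abs(cv2.Laplacian(gray, cv2.CV_32F, ksize=3))
    out = image_bgr.copy()
    h, w = gray.shape
    for x in range(spacing // 2, w, spacing):        # one vertical line per column step
        for y in range(0, h, block):                 # colour the line block by block
            local = sharpness[y:y + block, max(0, x - block // 2):x + block // 2]
            colour = (0, 255, 0) if local.mean() > threshold else (0, 0, 255)
            cv2.line(out, (x, y), (x, min(y + block, h - 1)), colour, 1)
    return out
```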
The grid 50 may be used in a stereoscopic monitor 4 in order to determine the spacing between two objects from two different viewing devices.
In Fig. 6 is also shown a histogram 11, which indicates the intensity for each camera, for example as a function of the distance to the respective camera. This gives the operator the possibility to select the appropriate zone.
The display, on a same screen, of the focused areas (with the grid 50) and of a histogram gives an operator full information, both in time (with the characteristics, for example the colour, of the lines of the grid 50, which may change over time) and as a histogram, for example as a function of the distance to the respective camera.
In a method or a device according to the invention, the determination of focusing area data of each acquisition means and/or of focusing data of each optics and/or of focal length (zoom) data of each optics and/or of aperture (iris) data of each optics and/or of histogram data of each image is carried out:
- in any viewing system such as a screen, or a monitor, or a projector, moreover connected to the cameras through cables 27; this is the case of Fig. 4;
- or in a self-contained system 20 not having any specific viewing means, but means for calculating the characteristics or data indicated above and possibly for modifying the contents of the resulting image. This is the case of Fig. 5. The self-contained system 20 is connected to a viewing means 4 via cables 23 forming a video connection. It is moreover connected to the cameras through other cables 27. The system 20 is self-contained: it may be displaced in space independently of the cameras 1, 3 and independently of the viewing means 4.
By applying the invention, it is possible to avoid problems of adjustment disparities between the cameras and the objectives, by providing a synthetic, simultaneous and intuitive view of the working conditions of both cameras and of their objectives. The invention provides improved reliability of the image and allows correction of errors during the shooting of stereoscopic images. It also avoids many tedious tests which otherwise would have been necessary.

Claims

1. A device for acquiring, processing and viewing images, including:
- at least two means (1, 3) for acquiring synchronous and simultaneous digital images,
- means (4) for forming and viewing an image, a so-called 3D image, of an object (2) from digital image data of both acquisition means,
- means (4, 20) for calculating, from the digital image data, and for viewing:
- focal length (or zoom) data of each optics (10, 30) and/or a difference between focal length data of two optics (10, 30) of at least two acquisition means,
- and/or aperture (or iris) data of each optics (10, 30) and/or a difference between aperture data of two optics (10, 30) of at least two acquisition means,
- and/or simultaneous histogram data (11) of images of each of the means (1, 3) for acquiring digital images.
2. The device according to claim 1, including means (5, 5') forming a support common to the means (1, 3) for acquiring digital images.
3. The device according to claim 2, further including means for adjusting a distance between at least two acquisition means (1, 3) and/or a relative position of the optical axes of at least two acquisition means (1, 3).
4. The device according to any of claims 1 to 3, further including means (4, 20) for calculating, from digital image data, and for viewing, focused area data of each acquisition means (1, 3) and/or a difference between focused area data of at least two acquisition means.
5. The device according to claim 4, including means (4, 20) for calculating, from digital image data, focused area data of each acquisition means (1, 3) and/or a difference between focused area data of at least two acquisition means, including at least one filter with which the contours of the objects which have the characteristics of focusing may be discriminated.
6. The device according to any of claims 4 or 5, including means for producing at least one portion of a contour of one or more focused areas of a three-dimensional image, this contour portion being visually different from at least one portion of a contour of one or more areas of a same image which are out of focus.
7. The device according to claim 5, wherein the resulting image of the transfer function TF of the filter applied to both initial images is added to a resulting image, producing a first type of contour (19₁) for so-called focused areas of a three-dimensional image and a second type of contour (19₂) for the other areas.
8. The device according to any of claims 1 to 7, further including means (20) for forming, on the viewing means (4), a grid (50) including lines parallel with each other (50₁, 50₂, ... 50ₙ), superposed on an image formed on these viewing means.
9. The device according to claim 8, including means (20) for displaying portions of the lines of a first type (50₁) and portions of lines of a second type (50₂) depending on the characteristics of the portions of the image covered by these portions of lines.
10. The device according to any of claims 1 to 9, including means (20) for forming, on the viewing means (4), histogram data (11) for at least one of the images obtained by the means (1, 3) for acquiring digital data.
11. The device according to any of claims 1 to 10, including means (20) for forming, on the viewing means (4), at least one mark (130, 132) and cursor-forming means (131, 133) on this mark, for representing differences between data of focused areas of images obtained by the means (1, 3) for acquiring digital data and/or differences between focal length data and/or aperture data of the means (1, 3) for acquiring digital data.
12. The device according to any of claims 1 to 11, further including means (6) for providing a user with 3D viewing.
13. A method for processing synchronous and simultaneous digital images, from a same scene (2), these images being obtained by at least two means (1, 3) for acquiring digital images, each including optics (10, 30), differently positioned in space, this method including the determination from digital image data:
- of focal length (zoom) data of each optics (10, 30), and/or of a difference between focal length data of both acquisition means,
- and/or of aperture (iris) data of each optics (10, 30), and/or of a difference between aperture data of both acquisition means,
- and/or of histogram data (11) of each image,
and the display of an image of the scene (2) and of at least one of the data above.
14. The method according to claim 13, wherein a distance between at least two acquisition means (1, 3) and/or a relative position of the optical axes of at least two acquisition means (1, 3) is adjusted.
15. The method according to any of claims 13 or 14, wherein at least one of the means for acquiring digital images, or its optics, is corrected, depending on focused area data of each acquisition means and/or on focusing data of each optics and/or on focal length (zoom) data of each optics and/or on aperture (iris) data of each optics and/or on differences between these data and/or on histogram data (11) of each image in the production of multidimensional images.
16. The method according to any of claims 13 to 15, wherein data of focused areas of each acquisition means (1, 3) and/or a difference between data of focused areas of at least two acquisition means are calculated from digital image data and viewed.
17. The method according to any of claims 13 to 16, wherein at least one portion of a contour of one or more focused areas of a three-dimensional image is produced, this contour portion being visually different from at least one portion of a contour of one or more areas of a same image which are out of focus.
18. The method according to any of claims 13 to 17, wherein on the viewing means (4), a grid (50) is formed, including lines parallel with each other (50₁, 50₂, ... 50ₙ), superposed on an image formed on these viewing means.
19. The method according to claim 18, wherein portions of lines of a first type (50₁) and portions of lines of a second type (50₂) are displayed depending on the characteristics of the portions of the image covered by these portions of lines.
20. The method according to any of claims 13 to 19, wherein, on the viewing means (4), histogram data (11) are formed for at least one of the images obtained by the means (1, 3) for acquiring digital data.
21. The method according to any of claims 13 to 20, wherein, on the viewing means (4), at least one reference mark (130, 132) and cursor-forming means (131, 133) on this mark are formed for representing differences between data of focused areas of images obtained by the means (1, 3) for acquiring digital data and/or differences between focal length data and/or aperture data of the means (1, 3) for acquiring digital data.
22. The method according to any of claims 13 to 21, wherein the determination of focused area data of each acquisition means and/or of focusing data of each optics and/or of focal length (zoom) data of each optics (10, 30) and/or of aperture (iris) data of each optics (10, 30) and/or of differences between these data and/or of histogram data (11) of each image is achieved:
- in any viewing system (4) such as a screen, or a monitor or a projector which further allows display of the image of the scene (2), and of at least one of the data above,
- or in a self-contained system (20) not including any viewing means but means for calculating said data and possibly for modifying the contents of the resulting image, the display of an image of the scene, and of at least one of the data above, being achieved with viewing means (4) to which the self-contained system is connected via cables (23) forming a video link.
PCT/EP2010/068467 2009-11-30 2010-11-30 System for assisting production of stereoscopic images WO2011064379A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP10785406A EP2508004A1 (en) 2009-11-30 2010-11-30 System for assisting production of stereoscopic images

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
FR0958479 2009-11-30
FR0958479A FR2953359B1 (en) 2009-11-30 2009-11-30 SYSTEM FOR ASSISTING STEREOSCOPIC IMAGES
US32328710P 2010-04-12 2010-04-12
US61/323,287 2010-04-12

Publications (1)

Publication Number Publication Date
WO2011064379A1 true WO2011064379A1 (en) 2011-06-03

Family

ID=41693172

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/068467 WO2011064379A1 (en) 2009-11-30 2010-11-30 System for assisting production of stereoscopic images

Country Status (3)

Country Link
EP (1) EP2508004A1 (en)
FR (1) FR2953359B1 (en)
WO (1) WO2011064379A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015128897A1 (en) * 2014-02-27 2015-09-03 Sony Corporation Digital cameras having reduced startup time, and related devices, methods, and computer program products

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5496106A (en) * 1994-12-13 1996-03-05 Apple Computer, Inc. System and method for generating a contrast overlay as a focus assist for an imaging device
EP0642275B1 (en) * 1993-09-01 1999-03-10 Canon Kabushiki Kaisha Multi-eye image pick-up apparatus
US6526232B1 (en) * 1999-04-16 2003-02-25 Fuji Photo Optical Co., Ltd. Lens control unit
US20030174233A1 (en) * 2002-03-12 2003-09-18 Casio Computer Co., Ltd. Photographing apparatus, and method and program for displaying focusing condition
US20090278958A1 (en) * 2008-05-08 2009-11-12 Samsung Electronics Co., Ltd. Method and an apparatus for detecting a composition adjusted

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6937284B1 (en) * 2001-03-20 2005-08-30 Microsoft Corporation Focusing aid for camera
KR101167243B1 (en) * 2005-04-29 2012-07-23 삼성전자주식회사 Method of controlling digital image processing apparatus to effectively display histogram, and digital image processing apparatus using the method
US8350945B2 (en) * 2007-10-15 2013-01-08 Panasonic Corporation Camera body used in an imaging device with which the state of an optical system is verified by an operation member

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0642275B1 (en) * 1993-09-01 1999-03-10 Canon Kabushiki Kaisha Multi-eye image pick-up apparatus
US5496106A (en) * 1994-12-13 1996-03-05 Apple Computer, Inc. System and method for generating a contrast overlay as a focus assist for an imaging device
US6526232B1 (en) * 1999-04-16 2003-02-25 Fuji Photo Optical Co., Ltd. Lens control unit
US20030174233A1 (en) * 2002-03-12 2003-09-18 Casio Computer Co., Ltd. Photographing apparatus, and method and program for displaying focusing condition
US20090278958A1 (en) * 2008-05-08 2009-11-12 Samsung Electronics Co., Ltd. Method and an apparatus for detecting a composition adjusted

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2508004A1 *

Also Published As

Publication number Publication date
FR2953359B1 (en) 2012-09-21
EP2508004A1 (en) 2012-10-10
FR2953359A1 (en) 2011-06-03

Similar Documents

Publication Publication Date Title
TWI444661B (en) Display device and control method of display device
EP2494402B1 (en) Stereo display systems
US8000521B2 (en) Stereoscopic image generating method and apparatus
US8026950B2 (en) Method of and apparatus for selecting a stereoscopic pair of images
US10134180B2 (en) Method for producing an autostereoscopic display and autostereoscopic display
CN104683786B (en) The tracing of human eye method and device of bore hole 3D equipment
US20050190258A1 (en) 3-D imaging arrangements
JP2010531102A (en) Method and apparatus for generating and displaying stereoscopic image with color filter
US11962746B2 (en) Wide-angle stereoscopic vision with cameras having different parameters
GB2489930A (en) Analysis of Three-dimensional Video to Produce a Time-Varying Graphical Representation of Displacements
JP2002223458A (en) Stereoscopic video image generator
CN106713894B (en) A kind of tracking mode stereo display method and equipment
JP2004333661A (en) Stereoscopic image display device, stereoscopic image display method, and stereoscopic image display program
JP2005026756A (en) Apparatus and method for outputting distortion in reproduced stereoscopic image, and program for outputting distortion in reproduced stereoscopic image
JP2006013851A (en) Imaging display device, and imaging display method
TWI462569B (en) 3d video camera and associated control method
CN108259888A (en) The test method and system of stereo display effect
EP2508004A1 (en) System for assisting production of stereoscopic images
CN110602478A (en) Three-dimensional display device and system
JP3425402B2 (en) Apparatus and method for displaying stereoscopic image
JP2012227653A (en) Imaging apparatus and imaging method
EP3419287A1 (en) An apparatus and a method for displaying a 3d image
EP2408217A2 (en) Method of virtual 3d image presentation and apparatus for virtual 3d image presentation
KR101414094B1 (en) Apparatus for testing align in three-dimensional device without glasses
CN104717488B (en) Show equipment and display methods

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10785406

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2010785406

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2010785406

Country of ref document: EP