US20150154761A1 - Scene scan - Google Patents

Scene scan

Info

Publication number
US20150154761A1
US20150154761A1 (application US13/721,607; US201213721607A)
Authority
US
United States
Prior art keywords
photographic images
photographic
image
factor
rotation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/721,607
Other versions
US9047692B1 (en)
Inventor
Steven Maxwell Seitz
Rahul Garg
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to US13/721,607
Assigned to GOOGLE INC. reassignment GOOGLE INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GARG, RAHUL, SEITZ, STEVEN MAXWELL
Application granted
Publication of US9047692B1
Publication of US20150154761A1
Assigned to GOOGLE LLC reassignment GOOGLE LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: GOOGLE INC.
Legal status: Active (current)
Anticipated expiration: adjusted

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images

Definitions

  • the embodiments described herein generally relate to organizing photographic images.
  • Users wishing to stitch together a collection of photographic images captured from the same optical center may utilize a variety of computer programs that determine a set of common features in the photographic images and stitch the photographic images together into a single panorama.
  • the photographic images may be aligned by matching the common features between the photographic images.
  • These computer programs are not designed to stitch photographic images together when the photographic images are captured from different optical centers.
  • Panorama creation programs known in the art require that an image capture device rotate about the optical center of its lens, thereby maintaining the same point of perspective for all photographs. If the image capture device does not rotate about its optical center, its images may become impossible to align perfectly. These misalignments are called parallax error.
  • a method includes determining a set of common features for at least one pair of photographic images from the group of photographic images.
  • the set of common features includes at least a portion of an object captured in each of a first and second photographic image included in the at least one pair of photographic images, where the first and second photographic images may be captured from different optical centers.
  • a similarity transform for the at least one pair of photographic images is then determined.
  • the similarity transform includes a rotation factor between the first and second photographic images.
  • the rotation factor describes a rotation that, when applied to the first or second photographic image, aligns, at least in part, the set of common features between the first and second photographic images.
  • the similarity transform also includes a scaling factor between the first and second photographic images.
  • the scaling factor describes a zoom level that, when applied to the first or second photographic image, aligns, at least in part, the set of common features between the first and second photographic images.
  • the similarity transform further includes a translation factor between the first and second photographic images.
  • the translation factor describes a change in position that, when applied to the first or second photographic image, aligns, at least in part, the set of common features between the first and second photographic images.
  • the similarity transform is then provided in order to render the scene scan from the at least one pair of photographic images.
  • At least one of the rotation factor, the scaling factor, or the translation factor associated with the similarity transform is used to position the first and second photographic images such that the set of common features between the first and second photographic images, at least in part, align.
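  • The three factors above compose into a single planar similarity transform. The sketch below is an illustrative aside, not part of the patent text: the 3×3 matrix convention, the function names, and the sample numbers are assumptions made for the example.

```python
import numpy as np

def similarity_matrix(rotation_deg, scale, tx, ty):
    """Build a 3x3 similarity transform from a rotation factor (degrees),
    a scaling factor, and a translation factor (tx, ty) in pixels."""
    theta = np.radians(rotation_deg)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[scale * c, -scale * s, tx],
                     [scale * s,  scale * c, ty],
                     [0.0,        0.0,       1.0]])

def apply_transform(matrix, points):
    """Apply the transform to an (N, 2) array of pixel coordinates."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates
    return (pts @ matrix.T)[:, :2]

# Example: position the second image of a pair by rotating 3 degrees,
# scaling by 5%, and shifting 240 px right and 12 px down.
M = similarity_matrix(rotation_deg=3.0, scale=1.05, tx=240.0, ty=12.0)
corners = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=float)
print(apply_transform(M, corners))
```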
  • FIG. 1 illustrates a scene scan according to an embodiment.
  • FIG. 2A illustrates a scene scan with a rotation bias according to an embodiment.
  • FIG. 2B illustrates the scene scan in FIG. 2A with a counter rotation applied according to an embodiment.
  • FIG. 3A illustrates a scene scan with a rotation bias where a viewport is set to zoom into the scene scan according to an embodiment.
  • FIG. 3B illustrates the scene scan in FIG. 3A with a counter rotation applied according to an embodiment.
  • FIG. 4A illustrates an example system for creating a scene scan from a group of photographic images according to an embodiment.
  • FIG. 4B illustrates an example system for creating a scene scan from a group of photographic images according to an embodiment.
  • FIG. 5 is a flowchart illustrating a method that may be used to create a scene scan from a group of photographic images according to an embodiment.
  • FIG. 6 is a flowchart illustrating a method that may be used to create a scene scan from a group of photographic images according to an embodiment.
  • FIG. 7 illustrates an example computer in which the embodiments described herein, or portions thereof, may be implemented as computer-readable code.
  • Embodiments described herein may be used to create a scene scan from a group of photographic images.
  • the photographic images utilized by the embodiments may include photographic images captured from different optical centers.
  • a first photographic image captured from a first optical center may be different from a second photographic image captured from a second optical center when, for example, the first and second photographic images are captured from different locations.
  • To position photographic images captured from different optical centers, a set of common features is detected between the photographic images. If a set of common features is located, a similarity transform is determined such that, when it is applied to at least one of the photographic images, the set of common features align.
  • the similarity transform may be provided with the photographic images and used to render the photographic images on a display device.
  • references to “one embodiment,” “an embodiment,” “an example embodiment,” etc. indicate that the embodiment described may include a particular feature, structure, or characteristic. Every embodiment, however, may not necessarily include the particular feature, structure, or characteristic. Thus, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • the first section describes example scene scans that may be rendered by the embodiments.
  • the second and third sections describe example system and method embodiments, respectively, that may be used to render a scene scan from a collection of photographic images.
  • the fourth section describes an example computer system that may be used to implement the embodiments described herein.
  • FIG. 1 illustrates a scene scan 100 according to an embodiment.
  • Scene scan 100 is created by overlaying photographic images 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124, and 126 on top of each other.
  • Photographic images 102-126 are each captured from a different optical center.
  • the optical center used to capture each photographic image 102-126 changes in a horizontal direction as each image is captured.
  • scene scan 100 shows a scene that is created by aligning each photographic image 102-126 based on common features captured in neighboring photographic images.
  • photographic images 102-126 are each positioned on top of one another based on the common features found between each pair. For example, photographic images 114 and 116 each capture a portion of the same building along a street. This common building may be detected by a system configured to create scene scans such as, for example, system 400 in FIG. 4A or system 450 in FIG. 4B.
  • scene scan 100 may be rendered by positioning photographic images 102-126 such that the common features align.
  • common features exist between photographic images 102 and 104, photographic images 104 and 106, photographic images 106 and 108, photographic images 108 and 110, etc.
  • Scene scan 100 may be rendered to display on a display device such that the photographic image with an image center closest to the center of a viewport is placed on top.
  • the image center of photographic image 116 is closest to the center of viewport 130 and thus, photographic image 116 is displayed on top of photographic images 102-114 and 118-126.
  • a user interface may be utilized that allows a user to interact with scene scan 100.
  • the user interface may allow a user to, for example, pan or zoom scene scan 100 . If the user selects to pan scene scan 100 , the photographic image with the image center closest to the center of viewport 130 may be moved to the top of the rendered photographic images.
  • photographic image 114 may be placed on top of photographic image 116 when the image center of photographic image 114 is closer to the center of viewport 130 than the image center of photographic image 116 .
  • FIG. 2A illustrates scene scan 200 with a rotation bias according to an embodiment. Similar to scene scan 100, scene scan 200 includes photographic images 204 arranged such that the features that are common between at least two photographic images align. Scene scan 200 is displayed through viewport 202. Photographic images 204 are aligned with a rotation bias showing a downward direction. The rotation bias is due to one or more photographic images 204 having a stitching plane that is not parallel to the image plane. The rotation bias can occur when, for example, two photographic images are captured from different rotation angles about a capture device's optical axis.
  • FIG. 2B illustrates scene scan 250 that shows scene scan 200 in FIG. 2A with a counter rotation applied.
  • Scene scan 250 is rendered with the counter rotation to counter-act the rotation bias in scene scan 200 .
  • the counter rotation may be determined based on, for example, photographic images 204 shown in viewport 202 .
  • the counter rotation is based on a rotation factor and a weight factor associated with each photographic image 204 displayed in viewport 202 .
  • the rotation factor is determined based on, for example, aligning common features between the photographic images.
  • the weight factor may be based on, for example, the distance between the image center of a photographic image and the center of viewport 202 .
  • the rotation factor and the weight factor may be combined to determine the counter-rotation.
  • FIG. 3A illustrates scene scan 300 with a rotation bias according to an embodiment.
  • Scene scan 300 is similar to scene scan 200 in FIG. 2A except that viewport 302 is zoomed into photographic images 304 .
  • Photographic images 304 are aligned with a rotation bias showing a downward direction.
  • the rotation bias is due to one or more photographic images 304 having a stitching plane that is not parallel to the image plane.
  • the rotation bias occurs because, for example, at least two photographic images 304 are captured from different rotation angles about a capture device's optical axis.
  • FIG. 3B illustrates scene scan 350 that shows scene scan 300 in FIG. 3A with a counter rotation applied.
  • Scene scan 350 is rendered with the counter rotation to counter-act the rotation bias in scene scan 300 .
  • the counter rotation for scene scan 300 is determined based on photographic images 304 shown in viewport 302 .
  • the counter rotation is based on a rotation factor and a weight factor associated with each photographic image 304 displayed in viewport 302 .
  • the weight factor is determined for each photographic image 304 by finding the distance between the image center of a photographic image in viewport 302 and the center of viewport 302 .
  • the rotation factor corresponds to the rotation used to align common features between photographic images 304 .
  • FIGS. 1, 2A, 2B, 3A, and 3B are provided as examples and are not intended to limit the embodiments described herein.
  • FIG. 4A illustrates an example system 400 for creating a scene scan from a group of photographic images according to an embodiment.
  • System 400 includes computing device 402 .
  • Computing device 402 includes feature detector module 406, similarity transform module 408, data output module 410, rendering module 412, user-interface module 414, counter-rotation module 416, and camera 418.
  • FIG. 4B illustrates an example system 450 for creating a scene scan from a group of photographic images according to an embodiment.
  • System 450 is similar to system 400 except that some functions are carried out by a server.
  • System 450 includes computing device 452, image processing server 454, scene scan database 456, and network 430.
  • Computing device 452 includes rendering module 412, user-interface module 414, and camera 418.
  • Image processing server 454 includes feature detector module 406, similarity transform module 408, data output module 410, and counter-rotation module 416.
  • Computing devices 402 and 452 can be implemented on any computing device capable of processing photographic images.
  • Computing devices 402 and 452 may include, for example, a mobile computing device (e.g. a mobile phone, a smart phone, a personal digital assistant (PDA), a navigation device, a tablet, or other mobile computing devices).
  • Computing devices 402 and 452 may also include, but are not limited to, a central processing unit, an application-specific integrated circuit, a computer, workstation, a distributed computing system, a computer cluster, an embedded system, a stand-alone electronic device, a networked device, a rack server, a set-top box, or other type of computer system having at least one processor and memory.
  • a computing process performed by a clustered computing environment or server farm may be carried out across multiple processors located at the same or different locations.
  • Hardware can include, but is not limited to, a processor, memory, and a user interface display.
  • Computing devices 402 and 452 each include camera 418 .
  • Camera 418 may include any digital image capture device such as, for example, a digital camera or an image scanner. While camera 418 is included in computing devices 402 and 452 , camera 418 is not intended to limit the embodiments in any way. Alternative methods may be used to acquire photographic images such as, for example, retrieving photographic images from a local or networked storage device.
  • Network 430 can include any network or combination of networks that can carry data communication. These networks can include, for example, a local area network (LAN) or a wide area network (WAN), such as the Internet. LAN and WAN networks can include any combination of wired (e.g., Ethernet) or wireless (e.g., Wi-Fi, 3G, or 4G) network components.
  • Image processing server 454 can include any server system capable of processing photographic images.
  • Image processing server 454 may include, but is not limited to, a central processing unit, an application-specific integrated circuit, a computer, workstation, a distributed computing system, a computer cluster, an embedded system, a stand-alone electronic device, a networked device, a rack server, a set-top box, or other type of computer system having at least one processor and memory.
  • a computing process performed by a clustered computing environment or server farm may be carried out across multiple processors located at the same or different locations.
  • Hardware can include, but is not limited to, a processor, memory, and a user interface display.
  • Image processing server 454 may process photographic images into scene scans and store the scene scan information on scene scan database 456 . Scene scans stored on scene scan database 456 may be transmitted to computing device 452 for display.
  • Feature detector module 406 is configured to determine a set of common features for at least one pair of photographic images from a group of photographic images.
  • the pair of photographic images may include any two photographic images from the group of photographic images. Additionally, feature detector module 406 may detect a set of common features between multiple pairs of photographic images.
  • the set of common features includes at least a portion of an object captured in each photographic image in the pair of photographic images, where each photographic image may be captured from a different optical center.
  • the set of common features may include, for example, an outline of a structure, intersecting lines, or other features captured in the photographic images.
  • Feature detector module 406 may utilize any number of feature detection methods known to those of skill in the art such as, for example, Features from Accelerated Segment Test (“FAST”), Speeded-Up Robust Features (“SURF”), or Scale-Invariant Feature Transform (“SIFT”).
  • two features are determined between the photographic images. Other features are then determined and used to verify that the photographic images captured at least a portion of the same subject matter.
  • the set of common features is determined for a pair of photographic images as the photographic images are being captured by computing devices 402 or 452 . In some embodiments, as a new photographic image is captured, a set of common features is determined between the newly captured photographic image and the next most recently captured photographic image. In some embodiments, the set of common features is determined between the newly captured photographic image and a previously captured photographic image.
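  • As a concrete illustration of the feature-detection step, the following sketch matches SIFT keypoints between a pair of images using OpenCV. It is only one plausible realization of what feature detector module 406 could do: the library choice, function names, and file names are assumptions, not details taken from the patent.

```python
import cv2

def common_features(path_a, path_b, max_matches=200):
    """Detect and match features between two photographic images,
    returning matched (x, y) keypoint coordinates from each image."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Brute-force matching with cross-checking keeps only mutual best matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    matches = matches[:max_matches]

    pts_a = [kp_a[m.queryIdx].pt for m in matches]
    pts_b = [kp_b[m.trainIdx].pt for m in matches]
    return pts_a, pts_b

# Hypothetical usage with two neighboring captures.
pts_a, pts_b = common_features("image_114.jpg", "image_116.jpg")
```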
  • similarity transform module 408 is configured to determine a similarity transform for the pair of photographic images.
  • the similarity transform is determined by calculating a rotation factor, a scaling factor, and a translation factor that, when applied to either or both of the photographic images in the pair, align the set of common features between the photographic images in the pair.
  • Similarity transform module 408 is configured to determine a rotation factor between a first and second photographic image in the pair.
  • the rotation factor describes a rotation that, when applied to either or both of the first and second photographic images, aligns, at least in part, the set of common features between the first and second photographic images.
  • the rotation factor may be determined between the first and second photographic images when, for example, the first and second photographic images are captured about parallel optical axes but at different rotation angles applied to each optical axis. For example, if the first photographic image is captured at an optical axis and at a first angle of rotation and the second photographic image is captured at a parallel optical axis but at a second angle of rotation, the image planes of the first and second photographic images may not be parallel.
  • the rotation factor may be used to rotate either or both of the photographic images such that the set of common features, at least in part, align. For example, if the rotation factor is applied to the second photographic image, the set of common features will align, at least in part, when the set of common features appear at approximately the same rotation angle.
  • Similarity transform module 408 is also configured to determine a scaling factor between the first and second photographic images in the pair.
  • the scaling factor describes a zoom level that, when applied to either or both of the first and second photographic images, aligns, at least in part, the set of common features between the first and second photographic images. For example, if the set of common features between the first and second photographic images are at different levels of scale, the set of common features between the photographic images may appear at different sizes.
  • the scale factor may be determined such that, when the scale factor is applied to either or both of the first and second photographic images, the set of common features are approximately at the same level of scale.
  • Similarity Transform module 408 is also configured to determine a translation factor between the first and second photographic images in the pair.
  • the translation factor describes a change in position that, when applied to either or both of the first and second photographic images, aligns, at least in part, the set of common features between the first and second photographic images. For example, in order to align the set of common features between the first and second photographic images, the photographic images may be positioned such that the set of common features overlap.
  • the translation factor determines, for example, the horizontal and vertical (e.g., x and y) coordinates that, when applied to either or both of the photographic images, positions the photographic images such that the set of common features overlap.
  • the translation factor may utilize other coordinate systems such as, for example, latitude/longitude or polar coordinates.
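  • One way to estimate the rotation, scaling, and translation factors jointly is to fit a partial (similarity) affine transform to the matched feature coordinates. The sketch below uses OpenCV's RANSAC-based estimator and then reads the three factors back out of the fitted matrix; it is an assumed implementation, not the patent's own method.

```python
import cv2
import numpy as np

def similarity_factors(pts_a, pts_b):
    """Estimate the rotation (degrees), scale, and translation that map
    feature points in the first image onto their matches in the second."""
    src = np.float32(pts_a).reshape(-1, 1, 2)
    dst = np.float32(pts_b).reshape(-1, 1, 2)

    # Fits rotation + uniform scale + translation; RANSAC discards bad matches.
    matrix, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if matrix is None:
        return None

    # matrix = [[s*cos(t), -s*sin(t), tx],
    #           [s*sin(t),  s*cos(t), ty]]
    scale = float(np.hypot(matrix[0, 0], matrix[1, 0]))
    rotation = float(np.degrees(np.arctan2(matrix[1, 0], matrix[0, 0])))
    translation = (float(matrix[0, 2]), float(matrix[1, 2]))
    return {"rotation_deg": rotation, "scale": scale,
            "translation": translation, "inliers": int(inliers.sum())}
```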
  • Data output module 410 is configured to output the similarity transform for each pair of photographic images in order to render the scene scan.
  • Each of the rotation factor, the scaling factor, and the translation factor may be used to render a scene scan from each pair of photographic images.
  • each of the rotation factor, the scaling factor, or the translation factor may be used to position a first and second photographic image in a pair such that the set of common features between the first and second photographic images, at least in part, align.
  • Each of the rotation factor, scaling factor, and translation factor may be output separately or combined into a single data value such as, for example, a matrix.
  • the rotation, scaling, and translation factors are output to scene scan database 456 .
  • the factors may then be retrieved by a user along with the corresponding photographic images so that a scene scan can be rendered on a computing device.
  • the factors may be determined by computing device 402 and output to a database such as, for example, scene scan database 456.
  • Scene scans output to scene scan database 456 may be associated with a user profile and shared with one or more other users, or made publicly available to all users.
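  • The patent does not specify how scene scan database 456 stores this data; the record below is a purely hypothetical layout showing how the per-pair factors could be serialized alongside the identifiers of the two photographic images.

```python
import json

# Hypothetical per-pair record for a scene scan; all field names are assumptions.
pair_record = {
    "scene_scan_id": "scan-0001",
    "image_a": "image_114.jpg",
    "image_b": "image_116.jpg",
    "rotation_deg": 2.7,               # rotation factor
    "scale": 1.04,                     # scaling factor
    "translation_px": [238.0, 11.5],   # translation factor (x, y)
}
print(json.dumps(pair_record, indent=2))
```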
  • Rendering module 412 is configured to render a scene scan such that each pair of photographic images is positioned to align the set of common features between a first and second photographic image included in a pair.
  • the set of common features between the first and second photographic images is aligned using at least one of the rotation factor, the scaling factor, or the translation factor.
  • the scene scan is rendered by stitching the photographic images together and displaying the stitched photographic images.
  • the photographic images are maintained separately and positioned on top of one another such that the set of common features between the photographic images align.
  • rendering module 412 is also configured to apply the counter-rotation, at least in part, to at least one photographic image.
  • the counter rotation, described below, rotates, for example, a photographic image in a direction opposite to the rotation factor in order to counter-act the rotation bias resulting from the rotation factor.
  • the counter-rotation may instead be applied to the scene scan or a portion of the scene scan.
  • the portion of the scene scan for which the counter-rotation is applied may correspond to the portion of the scene scan displayed through a viewport.
  • the viewport defines a window that is displayed on a display device.
  • Counter-rotation module 416 is configured to determine a counter rotation for the scene scan.
  • the counter-rotation, when applied to at least one photographic image, adjusts the photographic image such that the photographic image displays with a smaller rotation bias.
  • the counter rotation is based on the rotation factor and a weight factor associated with each photographic image.
  • the weight factor is based on a distance between an image center of a photographic image and the center of the viewport. In some embodiments, the counter-rotation is determined from the following equation:
  • ‘w’ represents the weight factor associated with each photographic image in the viewport and ‘r’ represents the rotation factor associated with each photographic image in the viewport.
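  • The equation itself does not survive in this text. A plausible reading of the surrounding description is a weighted average of the per-image rotation factors, negated so that the applied rotation counters the bias: counter_rotation = -(Σ w·r) / (Σ w). The sketch below assumes that form and assumes the weight falls off with distance from the viewport center; neither detail is confirmed by the patent.

```python
import math

def counter_rotation(images, viewport_center):
    """Weighted counter rotation (degrees) for the images in the viewport.
    Each image dict carries its image-center position and rotation factor."""
    num, den = 0.0, 0.0
    for img in images:
        dx = img["center"][0] - viewport_center[0]
        dy = img["center"][1] - viewport_center[1]
        weight = 1.0 / (1.0 + math.hypot(dx, dy))  # assumed: nearer images dominate
        num += weight * img["rotation_deg"]
        den += weight
    return -num / den if den else 0.0  # rotate opposite to the weighted bias

bias = counter_rotation(
    [{"center": (320, 200), "rotation_deg": 4.0},
     {"center": (900, 260), "rotation_deg": 6.5}],
    viewport_center=(512, 384))
```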
  • user-interface module 414 is configured to display at least a portion of the scene scan that falls within a viewport used to display the rendered photographic images.
  • the viewport is a window or boundary that defines the area that is displayed on a display device.
  • the viewport may be configured to display all or a portion of a scene scan or may be used to zoom or pan the scene scan.
  • user-interface module 414 may also be configured to receive user input to navigate through the scene scan.
  • the user input may include, for example, commands to pan through the photographic images, change the order of overlap between photographic images, zoom into or out of the photographic images, or select portions of the scene scan to interact with.
  • the photographic image displayed on top may be determined based on the distance between the image center of a photographic image and the center of the viewport. For example, when the image center of a first photographic image is closest to the center of a viewport used to display the scene scan, user-interface module 414 may be configured to position the first photographic image over a second photographic image. Similarly, when the image center of the second photographic image is closest to the center of the viewport used to display the scene scan, user-interface module 414 may be configured to position the second photographic image over the first photographic image. In some embodiments the order of overlap between the photographic images included in the scene scan is determined as the user navigates through the scene scan.
  • user-interface module 414 is configured to position each photographic image such that the photographic image with the image center closest to the center of a viewport is placed over the photographic image with the image center next closest to the center of the viewport. For example, if a first photographic image has an image center closest to the center of the viewport, user-interface module 414 may be configured to place the first photographic image on top of all other photographic images in the scene scan. Similarly, if a second photographic image has an image center next closest to the center of the viewport, the second photographic image may be positioned over all but the first photographic image.
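  • The ordering described above can be realized by sorting the images by the distance from each image center to the viewport center and drawing them back to front. The sketch below uses an assumed data layout and is only illustrative.

```python
import math

def draw_order(images, viewport_center):
    """Sort images back to front: the image whose center is closest to the
    viewport center is drawn last, so it ends up on top."""
    def distance(img):
        dx = img["center"][0] - viewport_center[0]
        dy = img["center"][1] - viewport_center[1]
        return math.hypot(dx, dy)
    return sorted(images, key=distance, reverse=True)

# Recompute the order whenever the user pans, so the nearest image moves on top.
ordered = draw_order(
    [{"name": "image_114", "center": (300, 240)},
     {"name": "image_116", "center": (520, 250)}],
    viewport_center=(512, 240))
print([img["name"] for img in ordered])  # farthest first, nearest (top) last
```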
  • FIG. 5 is a flowchart illustrating a method 500 that may be used to create a scene scan from a group of photographic images according to an embodiment. While method 500 is described with respect to an embodiment, method 500 is not meant to be limiting and may be used in other applications. Additionally, method 500 may be carried out by, for example, system 400 in FIG. 4A or system 450 in FIG. 4B .
  • Method 500 first determines a set of common features for at least one pair of photographic images included in the group of photographic images (stage 510 ).
  • the set of common features includes at least a portion of an object captured in each of a first and a second photographic image included in the at least one pair, where the first and second photographic images may be captured from different optical centers.
  • Any feature detection method may be used to determine the set of common features for the photographic images included in a pair. Such methods may include, for example, Features from Accelerated Segment Test (“FAST”), Speeded-Up Robust Features (“SURF”), or Scale-Invariant Feature Transform (“SIFT”). These feature detection methods are merely provided as examples and are not intended to limit the embodiments in any way.
  • Stage 510 may be carried out by, for example, feature detector module 406 embodied in systems 400 and 450 .
  • Method 500 determines a similarity transform for the at least one pair of photographic images (stage 520 ).
  • the similarity transform includes determining a rotation factor, a scaling factor, and a translation factor between at least the first and second photographic images included in the pair.
  • the similarity transform, when applied to either or both of the first and second photographic images, may be used to align the set of common features between the first and second photographic images.
  • the rotation factor describes a rotation that, when applied to at least one of the first or second photographic images, aligns, at least in part, the set of common features between the first and second photographic images.
  • the scaling factor describes a zoom level that, when applied to either or both of the first and second photographic images, aligns, at least in part, the set of common features between the first and second photographic images.
  • the translation factor describes a change in position that, when applied to either or both of the first and second photographic images, aligns, at least in part, the set of common features between the first and second photographic images.
  • Stage 520 may be carried out by, for example, similarity transform module 408 embodied in systems 400 and 450 .
  • Method 500 also provides the similarity transform in order to render the scene scan from the at least one pair of photographic images (stage 530 ).
  • At least one of the rotation factor, the scaling factor, or the translation factor may be used to position the first and second photographic images included in each pair such that the set of common features between the first and second photographic images, at least in part, align.
  • the scene scan may be rendered in a viewport and displayed on a display device.
  • Stage 530 may be carried out by, for example, data output module 410 embodied in systems 400 and 450 .
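  • Tying stages 510-530 together, one plausible rendering strategy is to chain each pairwise similarity transform onto the previous one so that every image receives a placement in the coordinate frame of the first image. The composition order and the matrix convention below are assumptions made for the example.

```python
import numpy as np

def chain_transforms(pairwise_matrices):
    """Accumulate per-pair 3x3 similarity transforms into placements that
    position every image relative to the first image in the sequence."""
    placements = [np.eye(3)]              # image 0 defines the reference frame
    for matrix in pairwise_matrices:
        # Assumed convention: `matrix` maps image i+1 coordinates into image i.
        placements.append(placements[-1] @ matrix)
    return placements

# Two pairs -> three images; each pair shifts the next image 240 px to the right.
shift = np.array([[1.0, 0.0, 240.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
for i, m in enumerate(chain_transforms([shift, shift])):
    print(f"image {i} translation: ({m[0, 2]:.0f}, {m[1, 2]:.0f})")
```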
  • FIG. 6 is a flowchart illustrating a method 600 that may be used to create a scene scan from a group of photographic images.
  • the group of photographic images may be organized according to a time value associated with each photographic image. The time value may indicate when each photographic image was captured. While method 600 is described with respect to an embodiment, method 600 is not meant to be limiting and may be used in other applications. Additionally, method 600 may be carried out by, for example, system 400 in FIG. 4A or system 450 in FIG. 4B.
  • Method 600 first determines a set of common features between two photographic images (stage 610 ).
  • the two photographic images include a most recently captured photographic image and a previously captured photographic image.
  • the features include at least a portion of an object captured in each of the two photographic images, where each of the two photographic images may be captured from different optical centers.
  • Any feature detection method may be used to determine the set of common features between the photographic images. Such methods may include, for example, Features from Accelerated Segment Test (“FAST”), Speeded-Up Robust Features (“SURF”), or Scale-Invariant Feature Transform (“SIFT”). These feature detection methods are merely provided as examples and are not intended to limit the embodiments in any way.
  • Stage 610 may be carried out by, for example, feature detector module 406 embodied in systems 400 and 450 .
  • Method 600 determines a rotation factor between the two photographic images (stage 620 ).
  • the rotation factor describes a rotation that, when applied to at least one of the two photographic images, aligns, at least in part, the set of common features between the two photographic images.
  • Stage 620 may be carried out by, for example, similarity transform module 408 embodied in systems 400 and 450 .
  • Method 600 determines a scaling factor between the two adjacent photographic images (stage 630 ).
  • the scaling factor describes a zoom level that, when applied to at least one of the two photographic images, aligns, at least in part, the set of common features between the two photographic images.
  • Stage 630 may be carried out by, for example, similarity transform module 408 embodied in systems 400 and 450 .
  • Method 600 determines a translation factor between the two photographic images (stage 640 ).
  • the translation factor describes a change in position that, when applied to at least one of the two photographic images, aligns, at least in part, the set of common features between the two photographic images.
  • Stage 640 may be carried out by, for example, similarity transform module 408 embodied in systems 400 and 450 .
  • Method 600 finally renders the scene scan from the group of photographic images such that each two photographic images are positioned to align the set of common features between them (stage 650 ).
  • the alignment is determined by using at least one of the rotation factor, the scaling factor, or the translation factor.
  • the scene scan may be rendered in a viewport and displayed on a display device.
  • Stage 650 may be carried out by, for example, rendering module 412 embodied in systems 400 and 450.
  • FIG. 7 illustrates an example computer 700 in which the embodiments described herein, or portions thereof, may be implemented as computer-readable code.
  • feature detector module 406, similarity transform module 408, data output module 410, rendering module 412, user-interface module 414, or counter-rotation module 416 may be implemented in one or more computer systems 700 using hardware, software, firmware, computer readable storage media having instructions stored thereon, or a combination thereof.
  • a computing device having at least one processor device and a memory may be used to implement the above described embodiments.
  • a processor device may be a single processor, a plurality of processors, or combinations thereof.
  • Processor devices may have one or more processor “cores.”
  • processor device 704 may be a single processor in a multi-core/multiprocessor system, with such a system operating alone or in a cluster of computing devices operating in a server farm.
  • Processor device 704 is connected to a communication infrastructure 706 , for example, a bus, message queue, network, or multi-core message-passing scheme.
  • Computer system 700 may also include display interface 702 and display unit 730 .
  • Computer system 700 also includes a main memory 708 , for example, random access memory (RAM), and may also include a secondary memory 710 .
  • Secondary memory 710 may include, for example, a hard disk drive 712 , and removable storage drive 714 .
  • Removable storage drive 714 may include a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory drive, or the like.
  • the removable storage drive 714 reads from and/or writes to a removable storage unit 718 in a well-known manner.
  • Removable storage unit 718 may include a floppy disk, magnetic tape, optical disk, flash memory drive, etc. which is read by and written to by removable storage drive 714 .
  • removable storage unit 718 includes a computer readable storage medium having stored thereon computer software and/or data.
  • secondary memory 710 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 700 .
  • Such means may include, for example, a removable storage unit 722 and an interface 720 .
  • Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 722 and interfaces 720 which allow software and data to be transferred from the removable storage unit 722 to computer system 700 .
  • Computer system 700 may also include a communications interface 724 .
  • Communications interface 724 allows software and data to be transferred between computer system 700 and external devices.
  • Communications interface 724 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, or the like.
  • Software and data transferred via communications interface 724 may be in the form of signals, which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 724 . These signals may be provided to communications interface 724 via a communications path 726 .
  • Communications path 726 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link or other communications channels.
  • “Computer storage medium” and “computer readable storage medium” are used to generally refer to media such as removable storage unit 718, removable storage unit 722, and a hard disk installed in hard disk drive 712.
  • “Computer storage medium” and “computer readable storage medium” may also refer to memories, such as main memory 708 and secondary memory 710, which may be memory semiconductors (e.g., DRAMs, etc.).
  • Computer programs are stored in main memory 708 and/or secondary memory 710 . Computer programs may also be received via communications interface 724 . Such computer programs, when executed, enable computer system 700 to implement the embodiments described herein. In particular, the computer programs, when executed, enable processor device 704 to implement the processes of the embodiments, such as the stages in the methods illustrated by flowchart 500 of FIG. 5 and flowchart 600 of FIG. 6 , discussed above. Accordingly, such computer programs represent controllers of computer system 700 . Where an embodiment is implemented using software, the software may be stored in a computer storage medium and loaded into computer system 700 using removable storage drive 714 , interface 720 , and hard disk drive 712 , or communications interface 724 .
  • Embodiments of the invention also may be directed to computer program products including software stored on any computer readable storage medium.
  • Such software when executed in one or more data processing device, causes a data processing device(s) to operate as described herein.
  • Examples of computer readable storage mediums include, but are not limited to, primary storage devices (e.g., any type of random access memory) and secondary storage devices (e.g., hard drives, floppy disks, CD ROMS, ZIP disks, tapes, magnetic storage devices, and optical storage devices, MEMS, nanotechnological storage device, etc.).

Abstract

Systems, methods, and computer storage mediums are provided for creating a scene scan from a group of photographic images. An exemplary method includes determining a set of common features for at least one pair of photographic images. The features include a portion of an object captured in each of a first and a second photographic image included in the pair. The first and second photographic images may be captured from different optical centers. A similarity transform for the at least one pair of photographic images is then determined. The similarity transform is provided in order to render the scene scan from each pair of photographic images. At least one of the rotation factor, the scaling factor, or the translation factor associated with the similarity transform is used to position each pair of photographic images such that the set of common features between each pair, at least in part, align.

Description

  • This application claims the benefit of U.S. Provisional Application No. 61/577,931 filed Dec. 20, 2011, which is incorporated herein in its entirety by reference.
  • FIELD
  • The embodiments described herein generally relate to organizing photographic images.
  • BACKGROUND
  • Users wishing to stitch together a collection of photographic images captured from the same optical center may utilize a variety of computer programs that determine a set of common features in the photographic images and stitch the photographic images together into a single panorama. The photographic images may be aligned by matching the common features between the photographic images. These computer programs, however, are not designed to stitch photographic images together when the photographic images are captured from different optical centers. Panorama creation programs known in the art require that an image capture device rotate about the optical center of its lens, thereby maintaining the same point of perspective for all photographs. If the image capture device does not rotate about its optical center, its images may become impossible to align perfectly. These misalignments are called parallax error.
  • BRIEF SUMMARY
  • The embodiments described herein include systems, methods, and computer storage mediums for creating a scene scan from a group of photographic images. A method includes determining a set of common features for at least one pair of photographic images from the group of photographic images. The set of common features includes at least a portion of an object captured in each of a first and second photographic image included in the at least one pair of photographic images, where the first and second photographic images may be captured from different optical centers.
  • A similarity transform for the at least one pair of photographic images is then determined. The similarity transform includes a rotation factor between the first and second photographic images. The rotation factor describes a rotation that, when applied to the first or second photographic image, aligns, at least in part, the set of common features between the first and second photographic images. The similarity transform also includes a scaling factor between the first and second photographic images. The scaling factor describes a zoom level that, when applied to the first or second photographic image, aligns, at least in part, the set of common features between the first and second photographic images. The similarity transform further includes a translation factor between the first and second photographic images. The translation factor describes a change in position that, when applied to the first or second photographic image, aligns, at least in part, the set of common features between the first and second photographic images.
  • The similarity transform is then provided in order to render the scene scan from the at least one pair of photographic images. At least one of the rotation factor, the scaling factor, or the translation factor associated with the similarity transform is used to position the first and second photographic images such that the set of common features between the first and second photographic images, at least in part, align.
  • Further features and advantages of the embodiments described herein, as well as the structure and operation of various embodiments, are described in detail below with reference to the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES
  • Embodiments are described with reference to the accompanying drawings. In the drawings, like reference numbers may indicate identical or functionally similar elements. The drawing in which an element first appears is generally indicated by the left-most digit in the corresponding reference number.
  • FIG. 1 illustrates a scene scan according to an embodiment.
  • FIG. 2A illustrates a scene scan with a rotation bias according to an embodiment.
  • FIG. 2B illustrates the scene scan in FIG. 2A with a counter rotation applied according to an embodiment.
  • FIG. 3A illustrates a scene scan with a rotation bias where a viewport is set to zoom into the scene scan according to an embodiment.
  • FIG. 3B illustrates the scene scan in FIG. 3A with a counter rotation applied according to an embodiment.
  • FIG. 4A illustrates an example system for creating a scene scan from a group of photographic images according to an embodiment.
  • FIG. 4B illustrates an example system for creating a scene scan from a group of photographic images according to an embodiment.
  • FIG. 5 is a flowchart illustrating a method that may be used to create a scene scan from a group of photographic images according to an embodiment.
  • FIG. 6 is a flowchart illustrating a method that may be used to create a scene scan from a group of photographic images according to an embodiment.
  • FIG. 7 illustrates an example computer in which the embodiments described herein, or portions thereof, may be implemented as computer-readable code.
  • DETAILED DESCRIPTION
  • Embodiments described herein may be used to create a scene scan from a group of photographic images. The photographic images utilized by the embodiments may include photographic images captured from different optical centers. A first photographic image captured from a first optical center may be different from a second photographic image captured from a second optical center when, for example, the first and second photographic images are captured from different locations. To position photographic images captured from different optical centers, a set of common features is detected between the photographic images. If a set of common features is located, a similarity transform is determined such that, when it is applied to at least one of the photographic images, the set of common features align. The similarity transform may be provided with the photographic images and used to render the photographic images on a display device.
  • In the following detailed description, references to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic. Every embodiment, however, may not necessarily include the particular feature, structure, or characteristic. Thus, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • The following detailed description refers to the accompanying drawings that illustrate example embodiments. Other embodiments are possible, and modifications can be made to the embodiments within the spirit and scope of this description. Those skilled in the art with access to the teachings provided herein will recognize additional modifications, applications, and embodiments within the scope thereof and additional fields in which embodiments would be of significant utility. Therefore, the detailed description is not meant to limit the embodiments described below.
  • This Detailed Description is divided into sections. The first section describes example scene scans that may be rendered by the embodiments. The second and third sections describe example system and method embodiments, respectively, that may be used to render a scene scan from a collection of photographic images. The fourth section describes an example computer system that may be used to implement the embodiments described herein.
  • Example Scene Scans
  • FIG. 1 illustrates a scene scan 100 according to an embodiment. Scene scan 100 is created by overlaying photographic images 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124, and 126 on top of each other. Photographic images 102-126 are each captured from a different optical center. In scene scan 100, the optical center used to capture each photographic image 102-126 changes in a horizontal direction as each image is captured. As a result, scene scan 100 shows a scene that is created by aligning each photographic image 102-126 based on common features captured in neighboring photographic images.
  • To create scene scan 100, photographic images 102-126 are each positioned on top of one another based on the common features found between each pair. For example, photographic images 114 and 116 each capture a portion of the same building along a street. This common building may be detected by a system configured to create scene scans such as, for example, system 400 in FIG. 4A or system 450 in FIG. 4B. Once common features are identified between photographic images 102-126, scene scan 100 may be rendered by positioning photographic images 102-126 such that the common features align. In scene scan 100, common features exist between photographic images 102 and 104, photographic images 104 and 106, photographic images 106 and 108, photographic images 108 and 110, etc.
  • Scene scan 100 may be rendered to display on a display device such that the photographic image with an image center closest to the center of a viewport is placed on top. In FIG. 1, the image center of photographic image 116 is closest to the center of viewport 130 and thus, photographic image 116 is displayed on top of photographic images 102-114 and 118-126. A user interface may be utilized that allows a user to interact with scene scan 100. The user interface may allow a user to, for example, pan or zoom scene scan 100. If the user selects to pan scene scan 100, the photographic image with the image center closest to the center of viewport 130 may be moved to the top of the rendered photographic images. For example, if a user selects to pan along scene scan 100 to the left of photographic image 116, photographic image 114 may be placed on top of photographic image 116 when the image center of photographic image 114 is closer to the center of viewport 130 than the image center of photographic image 116.
  • FIG. 2A illustrates scene scan 200 with a rotation bias according to an embodiment. Similar to scene scan 100, scene scan 200 includes photographic images 204 arranged such that the features that are common between at least two photographic images align. Scene scan 200 is displayed through viewport 202. Photographic images 204 are aligned with a rotation bias showing a downward direction. The rotation bias is due to one or more photographic images 204 having a stitching plane that is not parallel to the image plane. The rotation bias can occur when, for example, two photographic images are captured from different rotation angles about a capture device's optical axis.
  • FIG. 2B illustrates scene scan 250 that shows scene scan 200 in FIG. 2A with a counter rotation applied. Scene scan 250 is rendered with the counter rotation to counter-act the rotation bias in scene scan 200. The counter rotation may be determined based on, for example, photographic images 204 shown in viewport 202. In some embodiments, the counter rotation is based on a rotation factor and a weight factor associated with each photographic image 204 displayed in viewport 202. The rotation factor is determined based on, for example, aligning common features between the photographic images. The weight factor may be based on, for example, the distance between the image center of a photographic image and the center of viewport 202. The rotation factor and the weight factor may be combined to determine the counter-rotation.
  • FIG. 3A illustrates scene scan 300 with a rotation bias according to an embodiment. Scene scan 300 is similar to scene scan 200 in FIG. 2A except that viewport 302 is zoomed into photographic images 304. Photographic images 304 are aligned with a rotation bias showing a downward direction. The rotation bias is due to one or more photographic images 304 having a stitching plane that is not parallel to the image plane. The rotation bias occurs because, for example, at least two photographic images 304 are captured from different rotation angles about a capture device's optical axis.
  • FIG. 3B illustrates scene scan 350 that shows scene scan 300 in FIG. 3A with a counter rotation applied. Scene scan 350 is rendered with the counter rotation to counter-act the rotation bias in scene scan 300. The counter rotation for scene scan 300 is determined based on photographic images 304 shown in viewport 302. The counter rotation is based on a rotation factor and a weight factor associated with each photographic image 304 displayed in viewport 302. In scene scan 350, the weight factor is determined for each photographic image 304 by finding the distance between the image center of a photographic image in viewport 302 and the center of viewport 302. The rotation factor corresponds to the rotation used to align common features between photographic images 304. Once the counter rotation is determined for the photographic images 304 in viewport 302, the counter rotation is applied by rotating photographic images 304 in a direction opposite to the rotation bias.
  • FIGS. 1, 2A, 2B, 3A, and 3B are provided as examples and are not intended to limit the embodiments described herein.
  • Example System Embodiments
  • FIG. 4A illustrates an example system 400 for creating a scene scan from a group of photographic images according to an embodiment. System 400 includes computing device 402. Computing device 402 includes feature detector module 406, similarity transform module 408, data output module 410, rendering module 412, user-interface module 414, counter-rotation module 416, and camera 418.
  • FIG. 4B illustrates an example system 450 for creating a scene scan from a group of photographic images according to an embodiment. System 450 is similar to system 400 except that some functions are carried out by a server. System 450 includes computing device 452, image processing server 454, scene scan database 456 and network 430. Computing device 452 includes rendering module 412, user-interface module 414, and camera 418. Image processing server 454 includes feature detector module 406, similarity transform module 408, data output module 410, and counter-rotation module 416.
  • Computing devices 402 and 452 can be implemented on any computing device capable of processing photographic images. Computing devices 402 and 452 may include, for example, a mobile computing device (e.g. a mobile phone, a smart phone, a personal digital assistant (PDA), a navigation device, a tablet, or other mobile computing devices). Computing devices 402 and 452 may also include, but are not limited to, a central processing unit, an application-specific integrated circuit, a computer, workstation, a distributed computing system, a computer cluster, an embedded system, a stand-alone electronic device, a networked device, a rack server, a set-top box, or other type of computer system having at least one processor and memory. A computing process performed by a clustered computing environment or server farm may be carried out across multiple processors located at the same or different locations. Hardware can include, but is not limited to, a processor, memory, and a user interface display.
  • Computing devices 402 and 452 each include camera 418. Camera 418 may include any digital image capture device such as, for example, a digital camera or an image scanner. While camera 418 is included in computing devices 402 and 452, camera 418 is not intended to limit the embodiments in any way. Alternative methods may be used to acquire photographic images such as, for example, retrieving photographic images from a local or networked storage device.
  • Network 430 can include any network or combination of networks that can carry data communication. These networks can include, for example, a local area network (LAN) or a wide area network (WAN), such as the Internet. LAN and WAN networks can include any combination of wired (e.g., Ethernet) or wireless (e.g., Wi-Fi, 3G, or 4G) network components.
  • Image processing server 454 can include any server system capable of processing photographic images. Image processing server 454 may include, but is not limited to, a central processing unit, an application-specific integrated circuit, a computer, workstation, a distributed computing system, a computer cluster, an embedded system, a stand-alone electronic device, a networked device, a rack server, a set-top box, or other type of computer system having at least one processor and memory. A computing process performed by a clustered computing environment or server farm may be carried out across multiple processors located at the same or different locations. Hardware can include, but is not limited to, a processor, memory, and a user interface display. Image processing server 454 may process photographic images into scene scans and store the scene scan information on scene scan database 456. Scene scans stored on scene scan database 456 may be transmitted to computing device 452 for display.
  • A. Feature Detector Module
  • Feature detector module 406 is configured to determine a set of common features for at least one pair of photographic images from a group of photographic images. The pair of photographic images may include any two photographic images from the group of photographic images. Additionally, feature detector module 406 may detect a set of common features between multiple pairs of photographic images.
  • The set of common features includes at least a portion of an object captured in each photographic image in the pair of photographic images, where each photographic image may be captured from a different optical center. The set of common features may include, for example, an outline of a structure, intersecting lines, or other features captured in the photographic images. Feature detector module 406 may utilize any number of feature detection methods known to those of skill in the art such as, for example, Features from Accelerated Segment Test (“FAST”), Speeded-Up Robust Features (“SURF”), or Scale-Invariant Feature Transform (“SIFT”). In some embodiments, two features are determined between the photographic images. Other features are then determined and used to verify that the photographic images captured at least a portion of the same subject matter.
  • In some embodiments, the set of common features is determined for a pair of photographic images as the photographic images are being captured by computing devices 402 or 452. In some embodiments, as a new photographic image is captured, a set of common features is determined between the newly captured photographic image and the next most recently captured photographic image. In some embodiments, the set of common features is determined between the newly captured photographic image and a previously captured photographic image.
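  • The detection and matching step can be made concrete with a short sketch. The following is a minimal illustration, not part of the described embodiments, assuming the OpenCV library is available; the function name find_common_features and its parameters are hypothetical. It detects keypoints in each photographic image of a pair and returns the matched coordinates that serve as the set of common features.

```python
# Minimal sketch of pairwise feature matching (OpenCV assumed).
# The function name and parameters are illustrative, not part of the embodiments.
import cv2

def find_common_features(image_a, image_b, max_matches=200):
    """Return matched keypoint coordinates between two photographic images."""
    detector = cv2.ORB_create(nfeatures=2000)   # FAST-based detector; SIFT or SURF could be substituted
    kp_a, desc_a = detector.detectAndCompute(image_a, None)
    kp_b, desc_b = detector.detectAndCompute(image_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)[:max_matches]

    pts_a = [kp_a[m.queryIdx].pt for m in matches]
    pts_b = [kp_b[m.trainIdx].pt for m in matches]
    return pts_a, pts_b
```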
  • B. Similarity Transform Module
  • Once a set of common features is determined for a pair of photographic images, similarity transform module 408 is configured to determine a similarity transform for the pair of photographic images. The similarity transform is determined by calculating a rotation factor, a scaling factor, and a translation factor that, when applied to either or both of the photographic images in the pair, align the set of common features between the photographic images in the pair.
  • 1. Rotation Factor
  • Similarity transform module 408 is configured to determine a rotation factor between a first and second photographic image in the pair. The rotation factor describes a rotation that, when applied to either or both of the first and second photographic images, aligns, at least in part, the set of common features between the first and second photographic images. The rotation factor may be determined between the first and second photographic images when, for example, the first and second photographic images are captured about parallel optical axes but at different rotation angles about each optical axis. For example, if the first photographic image is captured about an optical axis and at a first angle of rotation and the second photographic image is captured about a parallel optical axis but at a second angle of rotation, the image planes of the first and second photographic images may not be parallel. If the image planes are not parallel, the rotation factor may be used to rotate either or both of the photographic images such that the set of common features, at least in part, align. For example, if the rotation factor is applied to the second photographic image, the set of common features will align, at least in part, when the set of common features appear at approximately the same rotation angle.
  • 2. Scaling Factor
  • Similarity transform module 408 is also configured to determine a scaling factor between the first and second photographic images in the pair. The scaling factor describes a zoom level that, when applied to either or both of the first and second photographic images, aligns, at least in part, the set of common features between the first and second photographic images. For example, if the set of common features between the first and second photographic images is captured at different levels of scale, the set of common features between the photographic images may appear at different sizes. The scaling factor may be determined such that, when the scaling factor is applied to either or both of the first and second photographic images, the set of common features appears at approximately the same level of scale.
  • 3. Translation Factor
  • Similarity transform module 408 is also configured to determine a translation factor between the first and second photographic images in the pair. The translation factor describes a change in position that, when applied to either or both of the first and second photographic images, aligns, at least in part, the set of common features between the first and second photographic images. For example, in order to align the set of common features between the first and second photographic images, the photographic images may be positioned such that the set of common features overlap. The translation factor determines, for example, the horizontal and vertical (e.g., x and y) coordinates that, when applied to either or both of the photographic images, position the photographic images such that the set of common features overlap. The translation factor may utilize other coordinate systems such as, for example, latitude/longitude or polar coordinates.
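  • One conventional way to obtain all three factors at once is to fit a two-dimensional similarity transform to the matched features. The sketch below assumes OpenCV and NumPy and uses cv2.estimateAffinePartial2D, which is restricted to rotation, uniform scale, and translation; the function name estimate_similarity_transform and the decomposition into separate factors are illustrative only, not the embodiments' API.

```python
# Sketch of estimating rotation, scaling, and translation factors from matched features.
# OpenCV and NumPy assumed; names are illustrative.
import math
import cv2
import numpy as np

def estimate_similarity_transform(pts_a, pts_b):
    """Fit a similarity transform mapping pts_b onto pts_a and decompose it."""
    src = np.float32(pts_b).reshape(-1, 1, 2)
    dst = np.float32(pts_a).reshape(-1, 1, 2)
    # Restricted to rotation + uniform scale + translation (4 degrees of freedom).
    matrix, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

    scaling_factor = math.hypot(matrix[0, 0], matrix[1, 0])
    rotation_factor = math.degrees(math.atan2(matrix[1, 0], matrix[0, 0]))
    translation_factor = (matrix[0, 2], matrix[1, 2])
    return matrix, rotation_factor, scaling_factor, translation_factor
```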
  • C. Data Output Module
  • Data output module 410 is configured to output the similarity transform for each pair of photographic images in order to render the scene scan. Each of the rotation factor, the scaling factor, and the translation factor may be used to render a scene scan from each pair of photographic images. For example, each of the rotation factor, the scaling factor, or the translation factor may be used to position a first and second photographic image in a pair such that the set of common features between the first and second photographic images, at least in part, align. Each of the rotation factor, scaling factor, and translation factor may be output separately or combined into a single data value such as, for example, a matrix.
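  • As a minimal sketch of combining the three factors into a single data value, the helper below packs them into a 3×3 homogeneous matrix; NumPy is assumed and the function name is hypothetical.

```python
# Sketch of packing the rotation, scaling, and translation factors into one matrix.
import math
import numpy as np

def compose_similarity_matrix(rotation_deg, scale, tx, ty):
    theta = math.radians(rotation_deg)
    c, s = scale * math.cos(theta), scale * math.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])
```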
  • In some embodiments, the rotation, scaling, and translation factors are output to scene scan database 456. The factors may then be retrieved by a user along with the corresponding photographic images so that a scene scan can be rendered on a computing device. In some embodiments, the factors may be determined by computing device 402 and output to a database such as, for example, scene scan database 456. Scene scans output to scene scan database 456 may be associated with a user profile and shared with one or more other users, or made publicly available to all users.
  • D. Rendering Module
  • Rendering module 412 is configured to render a scene scan such that each pair of photographic images is positioned to align the set of common features between a first and second photographic image included in a pair. The set of common features between the first and second photographic images is aligned using at least one of the rotation factor, the scaling factor, or the translation factor. In some embodiments, the scene scan is rendered by stitching the photographic images together and displaying the stitched photographic images. In some embodiments, the photographic images are maintained separately and positioned over one another such that the set of common features between the photographic images align.
  • In some embodiments, rendering module 412 is also configured to apply the counter-rotation, at least in part, to at least one photographic image. The counter rotation, described below, rotates, for example, a photographic image in a direction opposite to the rotation factor in order to counter-act the rotation bias resulting from the rotation factor. The counter-rotation may instead be applied to the scene scan or a portion of the scene scan. In some embodiments, the portion of the scene scan for which the counter-rotation is applied may correspond to the portion of the scene scan displayed through a viewport. The viewport defines a window that is displayed on a display device.
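  • A minimal sketch of this rendering step follows, assuming OpenCV and NumPy. It warps one photographic image by its stored similarity matrix and then applies a counter rotation about the viewport center; the function name, parameters, and single-image compositing strategy are illustrative only.

```python
# Sketch of positioning one photographic image by its similarity transform and then
# applying a counter rotation about the viewport center (OpenCV/NumPy assumed).
import cv2
import numpy as np

def render_image(image, similarity_3x3, counter_rotation_deg, viewport_size):
    width, height = viewport_size
    counter = cv2.getRotationMatrix2D((width / 2.0, height / 2.0),
                                      counter_rotation_deg, 1.0)       # 2x3 rotation matrix
    counter_3x3 = np.vstack([counter, [0.0, 0.0, 1.0]])
    combined = counter_3x3 @ similarity_3x3                            # counter rotation applied last
    return cv2.warpAffine(image, combined[:2, :], (width, height))
```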
  • E. Counter-Rotation Module
  • Counter-rotation module 416 is configured to determine a counter rotation for the scene scan. The counter-rotation, when applied to at least one photographic image, adjusts the photographic image such that the photographic image displays with a smaller rotation bias. The counter rotation is based on the rotation factor and a weight factor associated with each photographic image. The weight factor is based on a distance between an image center of a photographic image and the center of the viewport. In some embodiments, the counter-rotation is determined from the following equation:

  • w₁×r₁ + w₂×r₂ + w₃×r₃ + . . . + wₙ×rₙ
  • In the equation, ‘w’ represents the weight factor associated with each photographic image in the viewport and ‘r’ represents the rotation factor associated with each photographic image in the viewport. Once the counter-rotation is determined, it is applied to at least one photographic image within the viewport. In some embodiments, the counter-rotation is determined separately for each photographic image. Example illustrations showing counter-rotations applied to the photographic images in a scene scan may be found in FIGS. 2B and 3B.
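  • A minimal sketch of this computation is shown below. The weighted sum follows the equation above; how the distance maps to a weight, and the normalization by the total weight, are not specified in this description and are assumptions of the sketch, as are the attribute and function names.

```python
# Sketch of the counter-rotation computation: a weighted sum of rotation factors.
# The inverse-distance weighting and the normalization are assumptions.
import math

def counter_rotation(images, viewport_center):
    """images: objects with .center (x, y) and .rotation_factor (degrees); names hypothetical."""
    cx, cy = viewport_center
    weights, rotations = [], []
    for image in images:
        distance = math.hypot(image.center[0] - cx, image.center[1] - cy)
        weights.append(1.0 / (1.0 + distance))      # assumed: closer images weigh more
        rotations.append(image.rotation_factor)
    total = sum(weights) or 1.0
    return sum(w * r for w, r in zip(weights, rotations)) / total
```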
  • F. User-Interface Module
  • In some embodiments, user-interface module 414 is configured to display at least a portion of the scene scan that falls within a viewport used to display the rendered photographic images. The viewport is a window or boundary that defines the area that is displayed on a display device. The viewport may be configured to display all or a portion of a scene scan or may be used to zoom or pan the scene scan.
  • In some embodiments, user-interface module 414 may also be configured to receive user input to navigate through the scene scan. The user input may include, for example, commands to pan through the photographic images, change the order of the overlap between photographic images, zoom into or out of the photographic images, or select portions of the scene scan to interact with.
  • In some embodiments, the photographic image displayed on top may be determined based on the distance between the image center of a photographic image and the center of the viewport. For example, when the image center of a first photographic image is closest to the center of a viewport used to display the scene scan, user-interface module 414 may be configured to position the first photographic image over a second photographic image. Similarly, when the image center of the second photographic image is closest to the center of the viewport used to display the scene scan, user-interface module 414 may be configured to position the second photographic image over the first photographic image. In some embodiments the order of overlap between the photographic images included in the scene scan is determined as the user navigates through the scene scan.
  • In some embodiments, user-interface module 414 is configured to position each photographic image such that the photographic image with the image center closest to the center of a viewport is placed over the photographic image with the image center next closest to the center of the viewport. For example, if a first photographic image has an image center closest to the center of the viewport, user-interface module 414 may be configured to place the first photographic image on top of all other photographic images in the scene scan. Similarly, if a second photographic image has an image center next closest to the center of the viewport, the second photographic image may be positioned over all but the first photographic image.
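  • The ordering described above amounts to drawing the photographic images back to front by their distance from the viewport center, as in the short sketch below; the attribute and function names are hypothetical.

```python
# Sketch of the overlap ordering: draw the farthest image first so that the image
# whose center is closest to the viewport center is rendered on top.
import math

def draw_order(images, viewport_center):
    cx, cy = viewport_center

    def distance(image):
        return math.hypot(image.center[0] - cx, image.center[1] - cy)

    return sorted(images, key=distance, reverse=True)   # farthest first, closest last (on top)
```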
  • Various aspects of embodiments described herein can be implemented by software, firmware, hardware, or a combination thereof. The embodiments, or portions thereof, can also be implemented as computer-readable code. The embodiments in systems 400 and 450 are not intended to be limiting in any way.
  • Example Method Embodiments
  • FIG. 5 is a flowchart illustrating a method 500 that may be used to create a scene scan from a group of photographic images according to an embodiment. While method 500 is described with respect to an embodiment, method 500 is not meant to be limiting and may be used in other applications. Additionally, method 500 may be carried out by, for example, system 400 in FIG. 4A or system 450 in FIG. 4B.
  • Method 500 first determines a set of common features for at least one pair of photographic images included in the group of photographic images (stage 510). The set of common features includes at least a portion of an object captured in each of a first and a second photographic image included in the at least one pair, where the first and second photographic images may be captured from different optical centers. Any feature detection method may be used to determine the set of common features for the photographic images included in a pair. Such methods may include, for example, Features from Accelerated Segment Test (“FAST”), Speeded-Up Robust Features (“SURF”), or Scale-Invariant Feature Transform (“SIFT”). These feature detection methods are merely provided as examples and are not intended to limit the embodiments in any way. Stage 510 may be carried out by, for example, feature detector module 406 embodied in systems 400 and 450.
  • Method 500 then determines a similarity transform for the at least one pair of photographic images (stage 520). Determining the similarity transform includes determining a rotation factor, a scaling factor, and a translation factor between at least the first and second photographic images included in the pair. The similarity transform, when applied to either or both of the first and second photographic images, may be used to align the set of common features between the first and second photographic images.
  • The rotation factor describes a rotation that, when applied to at least one of the first or second photographic images, aligns, at least in part, the set of common features between the first and second photographic images. The scaling factor describes a zoom level that, when applied to either or both of the first and second photographic images, aligns, at least in part, the set of common features between the first and second photographic images. The translation factor describes a change in position that, when applied to either or both of the first and second photographic images, aligns, at least in part, the set of common features between the first and second photographic images. Stage 520 may be carried out by, for example, similarity transform module 408 embodied in systems 400 and 450.
  • Method 500 also provides the similarity transform in order to render the scene scan from the at least one pair of photographic images (stage 530). At least one of the rotation factor, the scaling factor, or the translation factor may be used to position the first and second photographic images included in each pair such that the set of common features between the first and second photographic images, at least in part, align. In some embodiments, the scene scan may be rendered in a viewport and displayed on a display device. Stage 530 may be carried out by, for example, data output module 410 embodied in systems 400 and 450.
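  • Stages 510 through 530 can be sketched for a single pair by chaining the helpers illustrated earlier (find_common_features and estimate_similarity_transform); the function name and the returned dictionary layout are hypothetical.

```python
# Sketch of method 500 for one pair of photographic images, reusing the helpers above.
def process_pair(image_a, image_b):
    pts_a, pts_b = find_common_features(image_a, image_b)                               # stage 510
    matrix, rotation, scale, translation = estimate_similarity_transform(pts_a, pts_b)  # stage 520
    # Stage 530: provide the similarity transform so the scene scan can be rendered.
    return {"rotation": rotation, "scale": scale, "translation": translation, "matrix": matrix}
```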
  • FIG. 6 is a flowchart illustrating a method 600 that may be used to create a scene scan from a group of photographic images. The group of photographic images may be organized according to a time value associated with each photographic image. The time value may indicate when each photographic image was captured. While method 600 is described with respect to an embodiment, method 600 is not meant to be limiting and may be used in other applications. Additionally, method 600 may be carried out by, for example, system 400 in FIG. 4A or system 450 in FIG. 4B.
  • Method 600 first determines a set of common features between two photographic images (stage 610). The two photographic images include a most recently captured photographic image and a previously captured photographic image. The features include at least a portion of an object captured in each of the two photographic images, where each of the two photographic images may be captured from different optical centers. Any feature detection method may be used to determine the set of common features between the photographic images. Such methods may include, for example, Features from Accelerated Segment Test (“FAST”), Speeded-Up Robust Features (“SURF”), or Scale-Invariant Feature Transform (“SIFT”). These feature detection methods are merely provided as examples and are not intended to limit the embodiments in any way. Stage 610 may be carried out by, for example, feature detector module 406 embodied in systems 400 and 450.
  • Method 600 then determines a rotation factor between the two photographic images (stage 620). The rotation factor describes a rotation that, when applied to at least one of the two photographic images, aligns, at least in part, the set of common features between the two photographic images. Stage 620 may be carried out by, for example, similarity transform module 408 embodied in systems 400 and 450.
  • Method 600 then determines a scaling factor between the two photographic images (stage 630). The scaling factor describes a zoom level that, when applied to at least one of the two photographic images, aligns, at least in part, the set of common features between the two photographic images. Stage 630 may be carried out by, for example, similarity transform module 408 embodied in systems 400 and 450.
  • Method 600 then determines a translation factor between the two photographic images (stage 640). The translation factor describes a change in position that, when applied to at least one of the two photographic images, aligns, at least in part, the set of common features between the two photographic images. Stage 640 may be carried out by, for example, similarity transform module 408 embodied in systems 400 and 450.
  • Method 600 finally renders the scene scan from the group of photographic images such that each two photographic images are positioned to align the set of common features between them (stage 650). The alignment is determined by using at least one of the rotation factor, the scaling factor, or the translation factor. In some embodiments, the scene scan may be rendered in a viewport and displayed on a display device. Stage 650 may be carried out by, for example, rendering module 412 embodied in systems 400 and 450.
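  • A minimal sketch of method 600 is given below, assuming the images are supplied in capture order and reusing the helpers sketched earlier; the chaining of each pairwise transform into a common scene-scan frame is an assumption of the sketch, and all names are hypothetical.

```python
# Sketch of method 600: each newly captured image is matched against the previously
# captured one, and the pairwise transforms are chained for rendering (stage 650).
import numpy as np

def build_scene_scan(images_by_time):
    transforms = [np.eye(3)]                             # the first image anchors the scene scan
    for previous, current in zip(images_by_time, images_by_time[1:]):
        pts_prev, pts_cur = find_common_features(previous, current)      # stage 610
        matrix, *_ = estimate_similarity_transform(pts_prev, pts_cur)    # stages 620-640
        matrix_3x3 = np.vstack([matrix, [0.0, 0.0, 1.0]])
        transforms.append(transforms[-1] @ matrix_3x3)                   # map current image into scene space
    return transforms
```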
  • Example Computer System
  • FIG. 7 illustrates an example computer 700 in which the embodiments described herein, or portions thereof, may be implemented as computer-readable code. For example, feature detector module 406, similarity transform module 408, data output module 410, rendering module 412, user-interface module 414, or counter-rotation module 416 may be implemented in one or more computer systems 700 using hardware, software, firmware, computer readable storage media having instructions stored thereon, or a combination thereof.
  • One of ordinary skill in the art may appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations, including multi-core multiprocessor systems, minicomputers, mainframe computers, computers linked or clustered with distributed functions, as well as pervasive or miniature computers that may be embedded into virtually any device.
  • For instance, a computing device having at least one processor device and a memory may be used to implement the above described embodiments. A processor device may be a single processor, a plurality of processors, or combinations thereof. Processor devices may have one or more processor “cores.”
  • Various embodiments are described in terms of this example computer system 700. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the invention using other computer systems and/or computer architectures. Although operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, and with program code stored locally or remotely for access by single or multi-processor machines. In addition, in some embodiments the order of operations may be rearranged without departing from the spirit of the disclosed subject matter.
  • As will be appreciated by persons skilled in the relevant art, processor device 704 may be a single processor in a multi-core/multiprocessor system, with such a system operating alone or in a cluster of computing devices, such as a server farm. Processor device 704 is connected to a communication infrastructure 706, for example, a bus, message queue, network, or multi-core message-passing scheme. Computer system 700 may also include display interface 702 and display unit 730.
  • Computer system 700 also includes a main memory 708, for example, random access memory (RAM), and may also include a secondary memory 710. Secondary memory 710 may include, for example, a hard disk drive 712, and removable storage drive 714. Removable storage drive 714 may include a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory drive, or the like. The removable storage drive 714 reads from and/or writes to a removable storage unit 718 in a well-known manner. Removable storage unit 718 may include a floppy disk, magnetic tape, optical disk, flash memory drive, etc. which is read by and written to by removable storage drive 714. As will be appreciated by persons skilled in the relevant art, removable storage unit 718 includes a computer readable storage medium having stored thereon computer software and/or data.
  • In alternative implementations, secondary memory 710 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 700. Such means may include, for example, a removable storage unit 722 and an interface 720. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 722 and interfaces 720 which allow software and data to be transferred from the removable storage unit 722 to computer system 700.
  • Computer system 700 may also include a communications interface 724. Communications interface 724 allows software and data to be transferred between computer system 700 and external devices. Communications interface 724 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, or the like. Software and data transferred via communications interface 724 may be in the form of signals, which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 724. These signals may be provided to communications interface 724 via a communications path 726. Communications path 726 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link or other communications channels.
  • In this document, the terms “computer storage medium” and “computer readable storage medium” are used to generally refer to media such as removable storage unit 718, removable storage unit 722, and a hard disk installed in hard disk drive 712. Computer storage medium and computer readable storage medium may also refer to memories, such as main memory 708 and secondary memory 710, which may be memory semiconductors (e.g. DRAMs, etc.).
  • Computer programs (also called computer control logic) are stored in main memory 708 and/or secondary memory 710. Computer programs may also be received via communications interface 724. Such computer programs, when executed, enable computer system 700 to implement the embodiments described herein. In particular, the computer programs, when executed, enable processor device 704 to implement the processes of the embodiments, such as the stages in the methods illustrated by flowchart 500 of FIG. 5 and flowchart 600 of FIG. 6, discussed above. Accordingly, such computer programs represent controllers of computer system 700. Where an embodiment is implemented using software, the software may be stored in a computer storage medium and loaded into computer system 700 using removable storage drive 714, interface 720, hard disk drive 712, or communications interface 724.
  • Embodiments of the invention also may be directed to computer program products including software stored on any computer readable storage medium. Such software, when executed on one or more data processing devices, causes the data processing device(s) to operate as described herein. Examples of computer readable storage media include, but are not limited to, primary storage devices (e.g., any type of random access memory) and secondary storage devices (e.g., hard drives, floppy disks, CD-ROMs, ZIP disks, tapes, magnetic storage devices, optical storage devices, MEMS, nanotechnological storage devices, etc.).
  • CONCLUSION
  • The Summary and Abstract sections may set forth one or more but not all embodiments as contemplated by the inventor(s), and thus, are not intended to limit the present invention and the appended claims in any way.
  • The foregoing description of specific embodiments so fully reveals the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
  • The breadth and scope of the present invention should not be limited by any of the above-described example embodiments.

Claims (20)

1. A computer-implemented method for creating a scene scan from a group of photographic images comprising:
determining, by at least one computer processor, a set of common features for at least one pair of photographic images from the group of photographic images, the features including at least a portion of an object captured in each of a first and second photographic image included in the at least one pair of photographic images, wherein the first and second photographic images are captured from different optical centers;
determining, by at least one computer processor, a similarity transform for the at least one pair of photographic images, wherein determining the similarity transform includes:
determining a rotation factor between the first and second photographic images, wherein the rotation factor describes a rotation that, when applied to the first or second photographic image, aligns, at least in part, the set of common features between the first and second photographic images;
determining a scaling factor between the first and second photographic images, wherein the scaling factor describes a zoom level that, when applied to the first or second photographic image, aligns, at least in part, the set of common features between the first and second photographic images;
determining a translation factor between the first and second photographic images, wherein the translation factor describes a change in position that, when applied to the first or second photographic image, aligns, at least in part, the set of common features between the first and second photographic images; and
providing, by at least one computer processor, the similarity transform in order to render the scene scan from the at least one pair of photographic images, wherein at least one of the rotation factor, the scaling factor, or the translation factor associated with the similarity transform is used to position the first and second photographic images such that the set of common features between the first and second photographic images, at least in part, align;
determining, by at least one computer processor, a counter rotation for the scene scan, the counter rotation based on the rotation factor and a weight factor for each photographic image included in the scene scan, wherein the weight factor for each photographic image is based on a distance of an image center of the photographic image from a center of the viewport;
rendering, by at least one computer processor, the scene scan from the at least one pair of photographic images such that at least the first and second photographic images are positioned to align the set of common features between the first and second photographic images, wherein the set of common features between the first and second photographic images is aligned using at least one of the rotation factor, the scaling factor, or the translation factor;
wherein rendering the scene scan includes applying the counter-rotation, at least in part, to at least one photographic image included in the scene scan, wherein the counter rotation rotates at least the one photographic image in a direction opposite to the rotation factor; and
displaying at least a portion of the scene scan, wherein a viewport determines the portion of the scene scan that is displayed.
2. (canceled)
3. (canceled)
4. The computer-implemented method of claim 1, wherein determining the counter rotation includes determining the counter rotation for a portion of the scene scan included in the viewport.
5. The computer-implemented method of claim 1, wherein rendering the scene scan includes maintaining each photographic image included in the scene scan as separate photographic images.
6. The computer-implemented method of claim 1, further comprising:
when an image center of the first photographic image is closest to a center of a viewport used to display the scene scan, positioning the first photographic image over the second photographic image such that the set of common features between the first and second photographic images align; and
when the image center of the second photographic image is closest to the center of the viewport used to display the scene scan, positioning the second photographic image over the first photographic image such that the set of common features between the first and second photographic images align.
7. The computer-implemented method of claim 1, further comprising:
positioning each photographic image in the scene scan such that the photographic image with an image center closest to a center of a viewport is placed over the photographic image with the image center next closest to the center of the viewport, wherein the viewport is used to display at least a portion of the scene scan.
8. The computer-implemented method of claim 1, wherein the set of common features includes at least two features captured in the first and second photographic images.
9. A computer system for creating a scene scan from a group of photographic images comprising:
a computing device having one or more computer processors, the one or more computer processors, during operation, implementing:
a feature detector module configured to determine a set of common features for at least one pair of photographic images from the group of photographic images, the features including at least a portion of an object captured in each of a first and second photographic image included in the at least one pair of photographic images, wherein the first and second photographic images are captured from different optical centers;
a similarity transform module configured to determine a similarity transform for the at least one pair of photographic images, the similarity transform determined by:
determining a rotation factor between the first and second photographic images, wherein the rotation factor describes a rotation that, when applied to the first or second photographic image, aligns, at least in part, the set of common features between the first and second photographic images;
determining a scaling factor between the first and second photographic images, wherein the scaling factor describes a zoom level that, when applied to the first or second photographic image, aligns, at least in part, the set of common features between the first and second photographic images; and
determining a translation factor between the first and second photographic images, wherein the translation factor describes a change in position that, when applied to the first or second photographic image, aligns, at least in part, the set of common features between the first and second photographic images;
a data output module configured to output the similarity transform in order to render the scene scan from the at least one pair of photographic images, wherein at least one of the rotation factor, the scaling factor, or the translation factor associated with the similarity transform is used to position the first and second photographic images such that the set of common features between the first and second photographic images, at least in part, align;
a counter-rotation module configured to determine a counter rotation for the scene scan, the counter rotation based on the rotation factor and a weight factor for each photographic image included in the scene scan, wherein the weight factor for each photographic image is based on a distance of an image center of the photographic image from a center of the viewport; and
a rendering module configured to render the scene scan from the at least one pair of photographic images such that at least the first and second photographic images are positioned to align the set of common features between the first and second photographic images, wherein the set of common features between the first and second photographic images is aligned using at least one of the rotation factor, the scaling factor, or the translation factor;
wherein the rendering module is further configured to apply the counter-rotation, at least in part, to at least one photographic image included in the scene scan, wherein the counter rotation rotates at least the one photographic image in a direction opposite to the rotation factor; and
a user-interface module configured to display at least a portion of the scene scan, wherein a viewport determines the portion of the scene scan that is displayed.
10. (canceled)
11. (canceled)
12. The computer system of claim 9, wherein the counter rotation module is further configured to determine the counter rotation for the portion of the scene scan included in the viewport.
13. The computer system of claim 9, wherein the rendering module is further configured to maintain each photographic image included in the scene scan as separate photographic images.
14. The computer system of claim 9, wherein:
when an image center of the first photographic image is closest to a center of a viewport used to display the scene scan, the user-interface module is configured to position the first photographic image over the second photographic image such that the set of common features between the first and second photographic images align; and
when the image center of the second photographic image is closest to the center of the viewport used to display the scene scan, the user-interface module is configured to position the second photographic image over the first photographic image such that the set of common features between the first and second photographic images align.
15. The computer system of claim 9, wherein:
the user-interface module is further configured to position each photographic image in the scene scan such that the photographic image with an image center closest to a center of a viewport is placed over the photographic image with the image center next closest to the center of the viewport, wherein the viewport is used to display at least a portion of the scene scan.
16. The computer system of claim 9, wherein the set of common features includes at least two features captured in the first and second photographic images.
17. A computer-implemented method for creating a scene scan from a group of photographic images, the group of photographic images organized according to a time value associated with each photographic image that indicates when each photographic image was captured, the method comprising:
determining, by at least one computer processor, a set of common features between two photographic images, the two photographic images including a most recently captured photographic image and a previously captured photographic image, wherein the features include at least a portion of an object captured in each of the two photographic images, and wherein each of the two photographic images are captured from different optical centers;
determining, by at least one computer processor, a rotation factor between the two photographic images, wherein the rotation factor describes a rotation that, when applied to at least one of the two photographic images, aligns, at least in part, the set of common features between the two photographic images;
determining, by at least one computer processor, a scaling factor between the two photographic images, wherein the scaling factor describes a zoom level that, when applied to at least one of the two photographic images, aligns, at least in part, the set of common features between the two photographic images;
determining, by at least one computer processor, a translation factor between the two photographic images, wherein the translation factor describes a change in position that, when applied to at least one of the two photographic images, aligns, at least in part, the set of common features between the two photographic images;
determining, by at least one computer processor, a counter rotation for the scene scan, the counter rotation based on the rotation factor and a weight factor for each photographic image included in the scene scan, wherein the weight factor for each photographic image is based on a distance of an image center of the photographic image from a center of the viewport;
rendering, by at least one computer processor, the scene scan from the group of photographic images such that each two photographic images are positioned to align the set of common features between each two photographic images, the alignment determined by using at least one of the rotation factor, the scaling factor, or the translation factor between each two photographic images;
rendering the scene scan from the two photographic images such that a first and a second photographic image are positioned to align the set of common features between the first and second photographic images, wherein the set of common features between the first and second photographic images is aligned using at least one of the rotation factor, the scaling factor, or the translation factor;
wherein rendering the scene scan includes applying the counter-rotation, at least in part, to at least one photographic image included in the scene scan, wherein the counter rotation rotates at least the one photographic image in a direction opposite to the rotation factor; and
displaying at least a portion of the scene scan, wherein a viewport determines the portion of the scene scan that is displayed.
18. (canceled)
19. (canceled)
20. The computer-implemented method of claim 17, further comprising:
positioning each photographic image in the scene scan such that the photographic image with an image center closest to a center of a viewport is placed over the photographic image with the image center next closest to the center of the viewport, wherein the viewport is used to display at least a portion of the scene scan.
US13/721,607 2011-12-20 2012-12-20 Scene scan Active 2033-04-30 US9047692B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/721,607 US9047692B1 (en) 2011-12-20 2012-12-20 Scene scan

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161577931P 2011-12-20 2011-12-20
US13/721,607 US9047692B1 (en) 2011-12-20 2012-12-20 Scene scan

Publications (2)

Publication Number Publication Date
US9047692B1 US9047692B1 (en) 2015-06-02
US20150154761A1 true US20150154761A1 (en) 2015-06-04

Family

ID=53190707

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/721,607 Active 2033-04-30 US9047692B1 (en) 2011-12-20 2012-12-20 Scene scan

Country Status (1)

Country Link
US (1) US9047692B1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150154736A1 (en) * 2011-12-20 2015-06-04 Google Inc. Linking Together Scene Scans
US9508172B1 (en) * 2013-12-05 2016-11-29 Google Inc. Methods and devices for outputting a zoom sequence
USD780777S1 (en) 2014-04-22 2017-03-07 Google Inc. Display screen with graphical user interface or portion thereof
USD781317S1 (en) 2014-04-22 2017-03-14 Google Inc. Display screen with graphical user interface or portion thereof
US9972121B2 (en) * 2014-04-22 2018-05-15 Google Llc Selecting time-distributed panoramic images for display
US9934222B2 (en) 2014-04-22 2018-04-03 Google Llc Providing a thumbnail image that follows a main image
USD781318S1 (en) 2014-04-22 2017-03-14 Google Inc. Display screen with graphical user interface or portion thereof
US9940695B2 (en) * 2016-08-26 2018-04-10 Multimedia Image Solution Limited Method for ensuring perfect stitching of a subject's images in a real-site image stitching operation
US10275856B2 (en) * 2017-08-03 2019-04-30 Facebook, Inc. Composited animation

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010010546A1 (en) * 1997-09-26 2001-08-02 Shenchang Eric Chen Virtual reality camera
US6393162B1 (en) * 1998-01-09 2002-05-21 Olympus Optical Co., Ltd. Image synthesizing apparatus
US6424752B1 (en) * 1997-10-06 2002-07-23 Canon Kabushiki Kaisha Image synthesis apparatus and image synthesis method
US20030107586A1 (en) * 1995-09-26 2003-06-12 Hideo Takiguchi Image synthesization method
US20040062454A1 (en) * 1992-04-09 2004-04-01 Olympus Optical Co., Ltd. Image processing apparatus
US20050143136A1 (en) * 2001-06-22 2005-06-30 Tvsi Lev Mms system and method with protocol conversion suitable for mobile/portable handset display
US20070031062A1 (en) * 2005-08-04 2007-02-08 Microsoft Corporation Video registration and image sequence stitching
US20090022422A1 (en) * 2007-07-18 2009-01-22 Samsung Electronics Co., Ltd. Method for constructing a composite image
US20100020097A1 (en) * 2002-09-19 2010-01-28 M7 Visual Intelligence, L.P. System and method for mosaicing digital ortho-images
US20100097444A1 (en) * 2008-10-16 2010-04-22 Peter Lablans Camera System for Creating an Image From a Plurality of Images
US7813589B2 (en) * 2004-04-01 2010-10-12 Hewlett-Packard Development Company, L.P. System and method for blending images into a single image
USRE43206E1 (en) * 2000-02-04 2012-02-21 Transpacific Ip Ltd. Apparatus and method for providing panoramic images
US8139084B2 (en) * 2006-03-29 2012-03-20 Nec Corporation Image display device and method of displaying image
US20120207386A1 (en) * 2011-02-11 2012-08-16 Microsoft Corporation Updating A Low Frame Rate Image Using A High Frame Rate Image Stream


Also Published As

Publication number Publication date
US9047692B1 (en) 2015-06-02

Similar Documents

Publication Publication Date Title
US9047692B1 (en) Scene scan
US10834317B2 (en) Connecting and using building data acquired from mobile devices
US8989506B1 (en) Incremental image processing pipeline for matching multiple photos based on image overlap
US11405549B2 (en) Automated generation on mobile devices of panorama images for building locations and subsequent use
US9189853B1 (en) Automatic pose estimation from uncalibrated unordered spherical panoramas
US7554575B2 (en) Fast imaging system calibration
US20150153172A1 (en) Photography Pose Generation and Floorplan Creation
US9055216B1 (en) Using sensor data to enhance image data
US10757327B2 (en) Panoramic sea view monitoring method and device, server and system
US20180033203A1 (en) System and method for representing remote participants to a meeting
US20150154736A1 (en) Linking Together Scene Scans
CN110084797B (en) Plane detection method, plane detection device, electronic equipment and storage medium
CN115439543B (en) Method for determining hole position and method for generating three-dimensional model in meta universe
CN111429518A (en) Labeling method, labeling device, computing equipment and storage medium
EP3177005B1 (en) Display control system, display control device, display control method, and program
US8630458B2 (en) Using camera input to determine axis of rotation and navigation
US10748333B2 (en) Finite aperture omni-directional stereo light transport
US8824794B1 (en) Graduated color correction of digital terrain assets across different levels of detail
US9852542B1 (en) Methods and apparatus related to georeferenced pose of 3D models
US8751301B1 (en) Banner advertising in spherical panoramas
EP4266239A1 (en) Image splicing method, computer-readable storage medium, and computer device
EP4148379A1 (en) Visual positioning method and apparatus
RU2759965C1 (en) Method and apparatus for creating a panoramic image
US11869137B2 (en) Method and apparatus for virtual space constructing based on stackable light field
US20190101765A1 (en) A method and an apparatus for generating data representative of a pixel beam

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SEITZ, STEVEN MAXWELL;GARG, RAHUL;SIGNING DATES FROM 20130329 TO 20130731;REEL/FRAME:031669/0083

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044334/0466

Effective date: 20170929

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8