US20100014774A1 - Methods and Systems for Content-Boundary Detection - Google Patents

Methods and Systems for Content-Boundary Detection

Info

Publication number
US20100014774A1
US20100014774A1 (application US12/175,386 / US17538608A)
Authority
US
United States
Prior art keywords
tile
coordinate
projection
histogram
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/175,386
Other versions
US9547799B2 (en)
Inventor
Lawrence Shao-Hsien Chen
John E. Dolan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sharp Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US12/175,386
Assigned to SHARP LABORATORIES OF AMERICA, INC. (assignors: CHEN, LAWRENCE SHAO-HSIEN; DOLAN, JOHN E.)
Publication of US20100014774A1
Application granted
Publication of US9547799B2
Assigned to SHARP KABUSHIKI KAISHA (assignor: SHARP LABORATORIES OF AMERICA, INC.)
Legal status: Active
Adjusted expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/242 Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G06V10/243 Aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations

Abstract

Aspects of the present invention are related to systems and methods for automatically determining the content boundaries in a digital image.

Description

    FIELD OF THE INVENTION
  • Embodiments of the present invention comprise methods and systems for automatically determining image-content boundaries.
  • BACKGROUND
  • It may be desirable to crop off extraneous portions of a digital page, also considered a digital image, digital document and image. In particular, it may be desirable to retain the content of the digital page while eliminating extraneous page margins. Exemplary applications in which this may be useful include applications in which the page content may be repositioned on a different size page than the original, applications in which the page content may be composited with additional material and other document layout applications. It may be desirable to perform cropping automatically without user interaction. It also may be desirable to perform cropping on a digital page comprising an arbitrarily shaped content region, and it may be desirable to perform cropping when the digital page content is skewed with respect to the orthogonal image axes. Methods and systems for automatically determining image-content boundaries, therefore, may be desirable.
  • SUMMARY
  • Some embodiments of the present invention comprise methods and systems for determining content boundaries in a digital image. In some embodiments of the present invention, an edge detector based on local gradient computation may be used to generate a gradient field which may be thresholded by magnitude to retain strong edges. The resulting localized edge positions may be projected onto a first direction and a second direction, which may be normal to the first direction, to form two projection histograms. In some embodiments of the present invention, the first direction may be related to a skew vector which describes the skew of the image content relative to the image axes. The projection histograms may be analyzed to determine the boundaries of the image content. In some embodiments of the present invention, the corners of a cropping rectangle may be computed, wherein the cropping rectangle may contain the desired content from the image. In some embodiments of the present invention, the digital image may be cropped according to the content boundaries. In some embodiments of the present invention, the digital image may be simultaneously cropped and corrected for skew.
  • The foregoing and other objectives, features, and advantages of the invention will be more readily understood upon consideration of the following detailed description of the invention taken in conjunction with the accompanying drawings.
  • BRIEF DESCRIPTION OF THE SEVERAL DRAWINGS
  • FIG. 1 is a chart showing exemplary embodiments of the present invention comprising extracting edges from an image, forming projection histograms from the edge maps and determining content boundaries from the projection histograms;
  • FIG. 2 is a picture depicting exemplary projection histograms;
  • FIG. 3 is a picture depicting skewed image content;
  • FIG. 4 is a chart showing exemplary embodiments of the present invention comprising forming a low-resolution representation of an input image prior to determining content boundaries;
  • FIG. 5 is a chart showing exemplary embodiments of the present invention comprising smoothing a low-resolution representation of an input image prior to determining content boundaries;
  • FIG. 6 is a chart showing exemplary embodiments of the present invention comprising smoothing an input image prior to determining content boundaries;
  • FIG. 7 is a chart showing exemplary embodiments of the present invention comprising partitioning an image into non-overlapping blocks and determining block content boundaries; and
  • FIG. 8 is a chart showing exemplary embodiments of the present invention comprising partitioning an image into overlapping blocks and determining block content boundaries.
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • Embodiments of the present invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The figures listed above are expressly incorporated as part of this detailed description.
  • It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the methods and systems of the present invention is not intended to limit the scope of the invention but it is merely representative of the presently preferred embodiments of the invention.
  • Elements of embodiments of the present invention may be embodied in hardware, firmware and/or software. While exemplary embodiments revealed herein may only describe one of these forms, it is to be understood that one skilled in the art would be able to effectuate these elements in any of these forms while resting within the scope of the present invention.
  • It may be desirable to crop off extraneous portions of a digital page, also considered a digital image, digital document and image. In particular, it may be desirable to retain the content of the digital page while eliminating extraneous page margins. Exemplary applications in which this may be useful include applications in which the page content may be repositioned on a different size page than the original, applications in which the page content may be composited with additional material and other document layout applications. It may be desirable to perform cropping automatically without user interaction. It also may be desirable to perform cropping on a digital page comprising an arbitrarily shaped content region, and it may be desirable to perform cropping when the digital page content is skewed with respect to the orthogonal image axes.
  • Some embodiments of the present invention described in relation to FIG. 1 comprise methods and systems for automatically determining the content boundaries in a digital page. In these embodiments, the location of the edges in the digital page may be extracted 4, also considered detected or determined, thereby producing an edge mask, or other representation, indicating locations of large gradient magnitude in the digital page. The edge mask may be projected onto a skew vector and the vector normal to the skew vector to form 6 two projection histograms. The content boundaries may be detected 8, also considered determined, from the two projection histograms.
  • In some embodiments of the present invention described in relation to FIG. 1, edge-location determination 4 may comprise computing a gradient magnitude at each pixel in the digital page and thresholding the gradient magnitude results to form an edge mask. In some embodiments of the present invention, the gradient field in the x-direction, which may be denoted Gx, and the gradient field in the y-direction, which may be denoted Gy, may be determined independently, and the gradient magnitude, which may be denoted G, may be determined according to:

  • $G = \lVert \nabla \rVert_1 = \lvert G_x \rvert + \lvert G_y \rvert$,
  • where $\lvert \cdot \rvert$ denotes absolute value.
  • In some embodiments of the present invention, the digital page, which may be denoted I, may be independently convolved with two edge kernels to determine the gradient fields in the x-direction and the y-direction. In some embodiments the edge kernels may comprise Sobel operators, and the gradient fields may be determined according to:
  • $G_x = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix} * I \quad \text{and} \quad G_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} * I$.
  • In alternative embodiments, edge detection may comprise other edge operators and methods known in the art, for example, a Canny edge detector, a Prewitt edge detector, a Roberts Cross kernel and a Hough transform.
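  • A minimal sketch of the Sobel-based variant just described is given below, assuming a NumPy/SciPy environment and a 2-D grayscale array I holding the page; the border handling (mode='nearest') and function name are assumptions, not details from the patent.

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_gradient_magnitude(I):
    """Convolve the page I with the two Sobel kernels and return the
    L1 gradient magnitude G = |G_x| + |G_y| (illustrative sketch)."""
    I = np.asarray(I, dtype=np.float64)
    kx = np.array([[ 1, 0, -1],
                   [ 2, 0, -2],
                   [ 1, 0, -1]], dtype=np.float64)
    ky = np.array([[ 1,  2,  1],
                   [ 0,  0,  0],
                   [-1, -2, -1]], dtype=np.float64)
    Gx = convolve(I, kx, mode='nearest')   # gradient field in the x-direction
    Gy = convolve(I, ky, mode='nearest')   # gradient field in the y-direction
    return np.abs(Gx) + np.abs(Gy)         # L1 norm of the gradient
```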
  • In some embodiments of the present invention, the gradient magnitude, G, may be thresholded to form a binary image, also considered edge map, which may be denoted G′. In these embodiments, the binary image, G′, may be set equal to one of the binary values when a first condition is satisfied and may be set to the other of the binary values when the first condition is not satisfied. In some embodiments of the present invention, the binary image, G′, may be determined according to:
  • $G'(i,j) = \begin{cases} 1, & G(i,j) > \theta \\ 0, & G(i,j) \le \theta, \end{cases}$
  • where θ denotes an adaptive threshold based on the content of the image and (i, j) denotes a location in the gradient-magnitude image, G.
  • In some embodiments of the present invention, the adaptive threshold, θ, may be determined according to:
  • $\theta = -\mu \, \log\!\left(\frac{100 - p}{100}\right), \quad \text{where} \quad \mu = \frac{1}{w \cdot h} \sum_{x,y} G(x,y),$
  • in which w and h are, respectively, the width and height of the gradient-magnitude image, G, and p is a parameter which may control the rejection of the weakest p percent of edges. In some embodiments, the value of p may be set to 95. In alternative embodiments, p may be set in the range of 93 to 97.
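  • A sketch of this adaptive threshold and the resulting binary edge map G' follows, assuming the gradient magnitude G from the previous sketch; p defaults to 95, matching the value mentioned in the text.

```python
import numpy as np

def adaptive_edge_map(G, p=95.0):
    """Threshold the gradient magnitude G with theta = -mu * log((100 - p) / 100),
    where mu is the mean of G, rejecting roughly the weakest p percent of edges."""
    mu = G.mean()                               # mu = (1 / (w*h)) * sum_{x,y} G(x, y)
    theta = -mu * np.log((100.0 - p) / 100.0)   # adaptive threshold
    return (G > theta).astype(np.uint8)         # G'(i, j) = 1 where G(i, j) > theta
```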
  • Two projection histograms may be formed 6 by projecting the edge map, G′, onto a skew vector and a vector normal to the skew vector. Two exemplary projection histograms 10, 11 are shown in FIG. 2. The horizontal axis 12, 13 of each histogram 10, 11 indicates a coordinate in the direction of the axis, and the vertical axis 14, 15 of each histogram 10, 11 indicates a pixel count. The content boundaries in the directions of the skew vector and the skew vector normal may be determined 8 by the first and last histogram bins which contain pixel counts. For the exemplary histograms 10, 11 shown in FIG. 2 these bins are indicated 16, 17, 18, 19.
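  • As a purely illustrative sketch of this projection step, the following assumes a NumPy environment, a binary edge map as produced above, and a known skew angle in radians; the one-unit bin width and the handling of the image extent are assumptions, since the patent does not fix those details.

```python
import numpy as np

def projection_histograms(edge_map, skew_angle_rad):
    """Project edge-pixel locations onto the skew vector and its normal,
    then accumulate pixel counts into one histogram per direction (sketch)."""
    h, w = edge_map.shape
    ys, xs = np.nonzero(edge_map)                   # locations of edge pixels
    c, s = np.cos(skew_angle_rad), np.sin(skew_angle_rad)
    u = c * xs + s * ys                             # coordinate along the skew vector
    v = -s * xs + c * ys                            # coordinate along its normal
    # Bin edges span the projection of the whole image, one projected unit per bin.
    cx = np.array([0, w - 1, 0, w - 1], dtype=np.float64)
    cy = np.array([0, 0, h - 1, h - 1], dtype=np.float64)
    cu, cv = c * cx + s * cy, -s * cx + c * cy
    bins_u = np.arange(np.floor(cu.min()), np.ceil(cu.max()) + 2)
    bins_v = np.arange(np.floor(cv.min()), np.ceil(cv.max()) + 2)
    hist_u, edges_u = np.histogram(u, bins=bins_u)  # histogram along the skew vector
    hist_v, edges_v = np.histogram(v, bins=bins_v)  # histogram along the normal
    return (hist_u, edges_u), (hist_v, edges_v)
```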
  • Embodiments of the present invention may be further understood in relation to FIG. 3. FIG. 3 shows an exemplary image 20 comprising a page region 22 which is skewed relative to the image axes. The skew vector 24 and the normal to the skew vector 26 are shown for an exemplary skew angle 28. The skew vector 24 and the normal to the skew vector 26 are shown relative to an origin 30 which may be the same origin of the image coordinate system. The locations of the first and last histogram bins which contain pixel counts in the projection histogram associated with the skew vector 24 are labeled A 31 and B 32. The locations of the first and last histogram bins which contain pixel counts in the projection histogram associated with the normal to the skew vector 26 are labeled C 33 and D 34. In relation to the exemplary histograms 10, 11 shown in FIG. 2, assuming that the top histogram 10 corresponds to the projection histogram associated with the skew vector 24 and the bottom histogram 11 corresponds to the projection histogram associated with the normal to the skew vector 26, then A 31 may correspond to the location in the top histogram 10 of the first bin 16 from the origin which has a non-zero pixel count. B 32 may correspond to the location in the top histogram 10 of the last bin 18 from the origin which has a non-zero pixel count. C 33 may correspond to the location in the bottom histogram 11 of the first bin 17 from the origin which has a non-zero pixel count. D 34 may correspond to the location in the bottom histogram 11 of the last bin 19 from the origin which has a non-zero pixel count.
  • In some embodiments of the present invention, the content boundaries may be described by the corners of a bounding rectangle. These corners may be denoted in relation to the locations determined from the projection histograms. Denoting the location of the first and last histogram bins with non-zero count in the projection histogram associated with the skew vector as left and right, respectively, and the location of the first and last histogram bins with non-zero count in the projection histogram associated with the skew vector normal as bottom and top, respectively, then the corners of the bounding rectangle may be given according to:
  • bottom-left corner is (left, bottom) in the skewed coordinate system,
  • bottom-right corner is (right, bottom) in the skewed coordinate system,
  • top-left corner is (left, top) in the skewed coordinate system and
  • top-right corner is (right, top) in the skewed coordinate system.
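  • Continuing the sketch above under the same assumptions (and reusing the histograms and bin edges returned by the hypothetical projection_histograms helper), the corners listed here could be assembled as follows; whether the smaller normal-direction coordinate is called "bottom" or "top" depends on the image coordinate convention in use.

```python
import numpy as np

def content_corners(hist_u, edges_u, hist_v, edges_v):
    """Read left/right from the skew-vector histogram and bottom/top from the
    normal-vector histogram, then form the bounding-rectangle corners in the
    skewed coordinate system (sketch; naming follows the list above)."""
    nz_u, nz_v = np.nonzero(hist_u)[0], np.nonzero(hist_v)[0]
    left, right = edges_u[nz_u[0]], edges_u[nz_u[-1] + 1]   # first/last non-zero bins
    bottom, top = edges_v[nz_v[0]], edges_v[nz_v[-1] + 1]
    return {"bottom-left": (left, bottom), "bottom-right": (right, bottom),
            "top-left": (left, top), "top-right": (right, top)}

def to_image_coords(u, v, skew_angle_rad):
    """Map a corner from the skewed (u, v) system back to image (x, y) coordinates
    by inverting the rotation used in the projection sketch."""
    c, s = np.cos(skew_angle_rad), np.sin(skew_angle_rad)
    return c * u - s * v, s * u + c * v
```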
  • Some embodiments of the present invention may be described in relation to FIG. 4. In these embodiments, a low-resolution representation of an image may be derived 42 prior to determining 44 edge locations in the low-resolution representation. Projection histograms may be formed 46 from the edge map, and the content boundaries may be detected 48 using the projection histograms. In some embodiments of the present invention, the low-resolution representation may be derived 42 through sub-sampling and down-sampling techniques known in the art. In some embodiments of the present invention, the low-resolution representation of the input image may be a 75 dots-per-inch image.
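  • For illustration only, one way to derive such a low-resolution representation is sketched below, assuming SciPy is available, a single-channel image, and a known input resolution in dots per inch; the bilinear interpolation order and the helper name are assumptions.

```python
from scipy.ndimage import zoom

def to_low_resolution(image, input_dpi, target_dpi=75):
    """Down-sample a single-channel page to roughly target_dpi (sketch; any
    standard sub-sampling or down-sampling technique could be substituted)."""
    factor = target_dpi / float(input_dpi)
    return zoom(image, factor, order=1)     # bilinear down-sampling
```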
  • Some embodiments of the present invention may be described in relation to FIG. 5. In these embodiments, a low-resolution representation of an image may be derived 50, and the low-resolution representation of the image may be smoothed 52 prior to determining 54 edge locations in the low-resolution representation. Projection histograms may be formed 56 from the edge map, and the content boundaries may be detected 58 using the projection histograms. In some embodiments of the present invention, the smoothed version of the low-resolution representation may be derived 52 by smoothing the low-resolution representation of the input image using a 3×3 Gaussian filter. In alternative embodiments, smoothing may comprise Gaussian filters of other size, smoothing filters of other types and other smoothing techniques known in the art.
  • Some embodiments of the present invention may be described in relation to FIG. 6. In these embodiments, an input image may be smoothed 62 prior to determining 64 edge locations in the smoothed image. Projection histograms may be formed 66 from the edge map, and the content boundaries may be detected 68 using the projection histograms. In some embodiments of the present invention, the smoothed version of the input image may be derived 62 by smoothing the input image using a 3×3 Gaussian filter. In alternative embodiments, smoothing may comprise Gaussian filters of other size, smoothing filters of other types and other smoothing techniques known in the art.
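  • A sketch of the 3×3 Gaussian smoothing mentioned for FIG. 5 and FIG. 6 follows; the specific kernel coefficients below are a common approximation and are an assumption, since the patent specifies only the kernel size.

```python
import numpy as np
from scipy.ndimage import convolve

# A common 3x3 Gaussian approximation (coefficients are an assumption).
GAUSS_3x3 = np.array([[1, 2, 1],
                      [2, 4, 2],
                      [1, 2, 1]], dtype=np.float64) / 16.0

def smooth(image):
    """Smooth the (full- or low-resolution) image prior to edge detection."""
    return convolve(np.asarray(image, dtype=np.float64), GAUSS_3x3, mode='nearest')
```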
  • In some embodiments of the present invention described in relation to FIG. 7, an image may be partitioned 72 into non-overlapping blocks. The block content boundaries may be determined 74 according to embodiments of the present invention described above. The block boundaries may be combined 76 to generate the content boundary for the image. In some of these embodiments, the corners of the content boundaries for each block may be determined 74 and designated $R_i = [\mathrm{top}_i\ \mathrm{bottom}_i\ \mathrm{left}_i\ \mathrm{right}_i]$. The content boundaries, which may be designated by the bounding rectangle corners and denoted R, for the image may be determined 76 from the block boundaries according to:

  • $R = [\max(\mathrm{top}_i)\ \ \min(\mathrm{bottom}_i)\ \ \min(\mathrm{left}_i)\ \ \max(\mathrm{right}_i)]$
  • for a coordinate origin in the lower-left of an image, and according to:

  • $R = [\min(\mathrm{top}_i)\ \ \max(\mathrm{bottom}_i)\ \ \min(\mathrm{left}_i)\ \ \max(\mathrm{right}_i)]$
  • for a coordinate origin in the upper-left of an image. In some embodiments of the present invention, determination 74 of the block content boundaries may be performed in parallel by a plurality of processors. In alternative embodiments, the determination 74 of the block content boundaries may be performed serially.
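  • A sketch of this combination rule is given below, assuming each block rectangle is supplied as R_i = [top_i, bottom_i, left_i, right_i] and that the origin argument selects between the two formulas above; the function name is illustrative only. Because each block's boundary is computed independently, the per-block step could be dispatched to a process pool before this combine step, which matches the parallel variant mentioned in the text.

```python
import numpy as np

def combine_block_boundaries(block_rects, origin="upper-left"):
    """Combine per-block rectangles R_i = [top_i, bottom_i, left_i, right_i]
    into the image rectangle R, following the two formulas above (sketch)."""
    r = np.asarray(block_rects, dtype=np.float64)   # shape: (num_blocks, 4)
    top, bottom, left, right = r[:, 0], r[:, 1], r[:, 2], r[:, 3]
    if origin == "lower-left":
        return [top.max(), bottom.min(), left.min(), right.max()]
    return [top.min(), bottom.max(), left.min(), right.max()]  # upper-left origin
```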
  • In some embodiments of the present invention described in relation to FIG. 8, an image may be partitioned 82 into overlapping blocks. The block content boundaries may be determined 84 according to embodiments of the present invention described above. The block boundaries may be combined 86 to generate the content boundary for the image. In some of these embodiments, the corners of the content boundaries for each block may be determined 84 and designated $R_i = [\mathrm{top}_i\ \mathrm{bottom}_i\ \mathrm{left}_i\ \mathrm{right}_i]$. The content boundaries, which may be designated by the bounding rectangle corners and denoted R, for the image may be determined 86 from the block boundaries according to:

  • $R = [\max(\mathrm{top}_i)\ \ \min(\mathrm{bottom}_i)\ \ \min(\mathrm{left}_i)\ \ \max(\mathrm{right}_i)]$
  • for a coordinate origin in the lower-left of an image, and according to:

  • $R = [\min(\mathrm{top}_i)\ \ \max(\mathrm{bottom}_i)\ \ \min(\mathrm{left}_i)\ \ \max(\mathrm{right}_i)]$
  • for a coordinate origin in the upper-left of an image. In some embodiments of the present invention, determination 84 of the block content boundaries may be performed in parallel by a plurality of processors. In alternative embodiments, the determination 84 of the block content boundaries may be performed serially.
  • In some embodiments of the present invention, the input image may be a color image. In alternative embodiments of the present invention, the input image may be a gray-scale image. In still alternative embodiments of the present invention, the input image may be a binary image.
  • In some embodiments of the present invention, the input image may be a luminance image corresponding to a color image. In alternative embodiments of the present invention, the input image may be a binary image corresponding to a color image. In still alternative embodiments of the present invention, the input image may be a binary image corresponding to a gray-scale image.
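  • Where the input is a luminance image corresponding to a color image, one common conversion (Rec. 601 weights) is sketched below; the specific weights are an assumption, since the patent does not name a color space or conversion.

```python
import numpy as np

def to_luminance(rgb):
    """Convert an RGB image to a luminance image using Rec. 601 weights
    (the weights are an assumption; the text only says the input may be a
    luminance image corresponding to a color image)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```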
  • In some embodiments of the present invention, an image may be cropped according to the determined content boundaries.
  • In some embodiments of the present invention, an image may be simultaneously cropped according to the determined content boundaries and skew corrected.
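  • One way to realize such a simultaneous crop and skew correction is sketched below, under the assumptions that the skew angle and the content rectangle in the skewed (u, v) coordinate system are known and that the rotation convention matches the projection sketch earlier; scipy.ndimage.affine_transform is chosen purely for illustration and is not the patent's stated mechanism.

```python
import numpy as np
from scipy.ndimage import affine_transform

def crop_and_deskew(image, skew_angle_rad, left, right, v0, v1):
    """Resample the image so that the content rectangle, given in the skewed
    (u, v) system as [left, right] x [v0, v1] with v1 > v0, becomes an
    axis-aligned crop; cropping and deskew happen in one interpolation pass."""
    c, s = np.cos(skew_angle_rad), np.sin(skew_angle_rad)
    # Output pixel (i, j) samples the input at (y, x), with u = left + j, v = v0 + i:
    #   x = c*u - s*v,   y = s*u + c*v
    matrix = np.array([[c,  s],      # row mapping (i, j) -> y
                       [-s, c]])     # row mapping (i, j) -> x
    offset = np.array([s * left + c * v0,
                       c * left - s * v0])
    out_shape = (int(np.ceil(v1 - v0)), int(np.ceil(right - left)))
    return affine_transform(image, matrix, offset=offset,
                            output_shape=out_shape, order=1, mode='constant')
```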
  • The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding equivalence of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.

Claims (26)

1. A method for content-boundary detection in a digital image, said method comprising:
a) determining the location of edges in a first image related to a digital image, thereby producing an edge map;
b) forming a first projection histogram of said edge map in a first projection direction;
c) forming a second projection histogram of said edge map in a second projection direction, wherein said second projection direction is normal to said first projection direction; and
d) determining a content boundary associated with said digital image using said first projection histogram and said second projection histogram.
2. A method as described in claim 1 further comprising:
a) receiving a skew parameter;
b) determining a skew vector associated with said skew parameter; and
c) wherein:
i) said first projection direction is related to said skew vector; and
ii) said second projection direction is related to the normal of said skew vector.
3. A method as described in claim 1, wherein said first image is a low-resolution representation of said digital image.
4. A method as described in claim 1, wherein said first image is a smoothed version of said digital image.
5. A method as described in claim 4, wherein said smoothed version of said digital image is formed by convolving said digital image with a Gaussian filter.
6. A method as described in claim 1, wherein said first image is a smoothed version of a low-resolution representation of said digital image.
7. A method as described in claim 1 further comprising clipping said digital image according to said content boundary.
8. A method as described in claim 1, wherein said determining the location of edges comprises:
a) determining a first gradient field of said first image in a first direction;
b) determining a second gradient field of said first image in a second direction;
c) computing a gradient magnitude from said first gradient field and said second gradient field; and
d) thresholding said gradient magnitude.
9. A method as described in claim 8, wherein:
a) said determining a first gradient field comprises convolving said first image with a first edge kernel; and
b) said determining a second gradient field comprises convolving said first image with a second edge kernel.
10. A method as described in claim 8, wherein said thresholding comprises a threshold based on the mean of said gradient magnitude and an adjustable rejection parameter associated with edge strength.
11. A method as described in claim 1, wherein said determining a content boundary associated with said digital image using said first projection histogram and said second projection histogram comprises:
a) determining a first coordinate in a first coordinate direction, wherein said first coordinate is associated with the first histogram bin having a non-zero count in said first projection histogram;
b) determining a second coordinate in said first coordinate direction, wherein said second coordinate is associated with the last histogram bin having a non-zero count in said first projection histogram;
c) determining a third coordinate in a second coordinate direction, wherein said third coordinate is associated with the first histogram bin having a non-zero count in said second projection histogram;
d) determining a fourth coordinate in said second coordinate direction, wherein said fourth coordinate is associated with the last histogram bin having a non-zero count in said second projection histogram; and
e) wherein said content boundary is a bounding rectangle described by the four vertices given by:
i) said first coordinate and said third coordinate;
ii) said first coordinate and said fourth coordinate;
iii) said second coordinate and said third coordinate; and
iv) said second coordinate and said fourth coordinate.
12. A method for content-boundary detection in a digital image, said method comprising:
a) partitioning a first image into a plurality of image tiles, said plurality of image tiles comprising a first tile and a second tile;
b) determining the location of edges in said first tile, thereby producing a first edge map;
c) forming a first first-tile projection histogram of said first edge map in a first projection direction;
d) forming a second first-tile projection histogram of said first edge map in a second projection direction, wherein said second projection direction is normal to said first projection direction;
e) determining a first-tile content boundary associated with said first tile using said first first-tile projection histogram and said second first-tile projection histogram;
f) determining the location of edges in said second tile, thereby producing a second edge map;
g) forming a first second-tile projection histogram of said second edge map in said first projection direction;
h) forming a second second-tile projection histogram of said second edge map in said second projection direction;
i) determining a second-tile content boundary associated with said second tile using said first second-tile projection histogram and said second second-tile projection histogram; and
j) determining an image-content boundary using said first-tile content boundary and said second-tile content boundary.
13. A method as described in claim 12 further comprising:
a) receiving a skew parameter;
b) determining a skew vector associated with said skew parameter; and
c) wherein:
i) said first projection direction is related to said skew vector; and
ii) said second projection direction is related to the normal of said skew vector.
14. A method as described in claim 12, wherein said first image tile and said second image tile are non-overlapping.
15. A method as described in claim 12, wherein said first image tile and said second image tile are overlapping.
16. A method as described in claim 12, wherein said first image is a low-resolution representation of a digital image.
17. A method as described in claim 12, wherein said first image is a smoothed version of said digital image.
18. A method as described in claim 12, wherein said first image is a smoothed version of a low-resolution representation of said digital image.
19. A method as described in claim 12 further comprising clipping said digital image according to said image-content boundary.
20. A method as described in claim 12, wherein said determining the location of edges in said first tile comprises:
a) determining a first gradient field of said first tile in a first direction;
b) determining a second gradient field of said first tile in a second direction;
c) computing a gradient magnitude from said first gradient field and said second gradient field; and
d) thresholding said gradient magnitude.
21. A method as described in claim 20, wherein:
a) said determining a first gradient field comprises convolving said first tile with a first edge kernel; and
b) said determining a second gradient field comprises convolving said first tile with a second edge kernel.
22. A method as described in claim 20, wherein said thresholding comprises a threshold based on the mean of said gradient magnitude and an adjustable rejection parameter associated with edge strength.
23. A method as described in claim 12, wherein said determining a first-tile content boundary using said first first-tile projection histogram and said second first-tile projection histogram comprises:
a) determining a first coordinate in a first coordinate direction, wherein said first coordinate is associated with the first histogram bin having a non-zero count in said first first-tile projection histogram;
b) determining a second coordinate in said first coordinate direction, wherein said second coordinate is associated with the last histogram bin having a non-zero count in said first first-tile projection histogram;
c) determining a third coordinate in a second coordinate direction, wherein said third coordinate is associated with the first histogram bin having a non-zero count in said second first-tile projection histogram; and
d) determining a fourth coordinate in said second coordinate direction, wherein said fourth coordinate is associated with the last histogram bin having a non-zero count in said second first-tile projection histogram.
24. A method as described in claim 12, wherein said determining an image-content boundary using said first-tile content boundary and said second-tile content boundary comprises:
a) comparing a first-tile first coordinate in a first coordinate direction, wherein said first-tile first coordinate is associated with the first histogram bin having a non-zero count in said first first-tile projection histogram, and a second-tile first coordinate in said first coordinate direction, wherein said second-tile first coordinate is associated with the first histogram bin having a non-zero count in said first second-tile projection histogram;
b) comparing a first-tile second coordinate in said first coordinate direction, wherein said first-tile second coordinate is associated with the last histogram bin having a non-zero count in said first first-tile projection histogram, and a second-tile second coordinate in said first coordinate direction, wherein said second-tile second coordinate is associated with the last histogram bin having a non-zero count in said first second-tile projection histogram;
c) comparing a first-tile third coordinate in a second coordinate direction, wherein said first-tile third coordinate is associated with the first histogram bin having a non-zero count in said second first-tile projection histogram, and a second-tile third coordinate in said second coordinate direction, wherein said second-tile third coordinate is associated with the first histogram bin having a non-zero count in said second second-tile projection histogram; and
d) comparing a first-tile fourth coordinate in said second coordinate direction, wherein said first-tile fourth coordinate is associated with the last histogram bin having a non-zero count in said second first-tile projection histogram, and a second-tile fourth coordinate in said second coordinate direction, wherein said second-tile fourth coordinate is associated with the last histogram bin having a non-zero count in said second second-tile projection histogram.
25. A system for content-boundary detection in a digital image, said system comprising:
a) an edge extractor for determining the location of edges in a first image related to a digital image, thereby producing an edge map;
b) a first projection histogram generator for forming a first projection histogram of said edge map in a first projection direction;
c) a second projection histogram generator for forming a second projection histogram of said edge map in a second projection direction, wherein said second projection direction is normal to said first projection direction; and
d) a boundary determiner for determining a content boundary associated with said digital image using said first projection histogram and said second projection histogram.
26. A system as described in claim 25 further comprising:
a) a skew parameter receiver for receiving a skew parameter; and
b) a skew vector determiner for determining a skew vector associated with said skew parameter.
US12/175,386 2008-07-17 2008-07-17 Methods and systems for content-boundary detection Active 2033-02-28 US9547799B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/175,386 US9547799B2 (en) 2008-07-17 2008-07-17 Methods and systems for content-boundary detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/175,386 US9547799B2 (en) 2008-07-17 2008-07-17 Methods and systems for content-boundary detection

Publications (2)

Publication Number Publication Date
US20100014774A1 (en) 2010-01-21
US9547799B2 (en) 2017-01-17

Family

ID=41530360

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/175,386 Active 2033-02-28 US9547799B2 (en) 2008-07-17 2008-07-17 Methods and systems for content-boundary detection

Country Status (1)

Country Link
US (1) US9547799B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110142341A1 (en) * 2009-12-16 2011-06-16 Dolan John E Methods and Systems for Automatic Content-Boundary Detection
US20130004071A1 (en) * 2011-07-01 2013-01-03 Chang Yuh-Lin E Image signal processor architecture optimized for low-power, processing flexibility, and user experience
US8554005B1 (en) * 2009-04-02 2013-10-08 Hewlett-Packard Development Company, L.P. Digital image enhancement method and system that embolden or thin image features
WO2022173521A1 (en) * 2021-02-11 2022-08-18 Hewlett-Packard Development Company, L.P. Image objects extraction

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10270965B2 (en) 2015-12-04 2019-04-23 Ebay Inc. Automatic guided capturing and presentation of images
JP2018055496A (en) * 2016-09-29 2018-04-05 日本電産サンキョー株式会社 Medium recognition device and medium recognition method
US10095925B1 (en) 2017-12-18 2018-10-09 Capital One Services, Llc Recognizing text in image data

Citations (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5335290A (en) * 1992-04-06 1994-08-02 Ricoh Corporation Segmentation of text, picture and lines of a document image
US5452374A (en) * 1992-04-06 1995-09-19 Ricoh Corporation Skew detection and correction of a document image representation
US5528387A (en) * 1994-11-23 1996-06-18 Xerox Corporation Electronic image registration for a scanner
US5781665A (en) * 1995-08-28 1998-07-14 Pitney Bowes Inc. Apparatus and method for cropping an image
US5825914A (en) * 1995-07-12 1998-10-20 Matsushita Electric Industrial Co., Ltd. Component detection method
US5828776A (en) * 1994-09-20 1998-10-27 Neopath, Inc. Apparatus for identification and integration of multiple cell patterns
US5880858A (en) * 1997-12-31 1999-03-09 Mustek Systems Inc. Method of auto-cropping images for scanners
US5892854A (en) * 1997-01-21 1999-04-06 Xerox Corporation Automatic image registration using binary moments
US5901253A (en) * 1996-04-04 1999-05-04 Hewlett-Packard Company Image processing system with image cropping and skew correction
US5974199A (en) * 1997-03-31 1999-10-26 Eastman Kodak Company Method for scanning and detecting multiple photographs and removing edge artifacts
US5978519A (en) * 1996-08-06 1999-11-02 Xerox Corporation Automatic image cropping
US6011635A (en) * 1995-12-27 2000-01-04 Minolta Co., Ltd. Image reading apparatus and method for correcting a read image
US6043823A (en) * 1995-07-17 2000-03-28 Kabushiki Kaisha Toshiba Document processing system which can selectively extract and process regions of a document
US6178270B1 (en) * 1997-05-28 2001-01-23 Xerox Corporation Method and apparatus for selecting text and image data from video images
US6201901B1 (en) * 1998-06-01 2001-03-13 Matsushita Electronic Industrial Co., Ltd. Border-less clock free two-dimensional barcode and method for printing and reading the same
US6282326B1 (en) * 1998-12-14 2001-08-28 Eastman Kodak Company Artifact removal technique for skew corrected images
US6298157B1 (en) * 1998-02-27 2001-10-02 Adobe Systems Incorporated Locating and aligning embedded images
US6310984B2 (en) * 1998-04-09 2001-10-30 Hewlett-Packard Company Image processing system with image cropping and skew correction
US6360026B1 (en) * 1998-03-10 2002-03-19 Canon Kabushiki Kaisha Method for determining a skew angle of a bitmap image and de-skewing and auto-cropping the bitmap image
US6373590B1 (en) * 1999-02-04 2002-04-16 Seiko Epson Corporation Method and apparatus for slant adjustment and photo layout
US6556721B1 (en) * 2000-01-07 2003-04-29 Mustek Systems Inc. Method for image auto-cropping
US6560376B2 (en) * 1998-09-11 2003-05-06 Hewlett Packard Development Company, L.P. Automatic rotation of images for printing
US6674919B1 (en) * 1999-09-21 2004-01-06 Matsushita Electric Industrial Co., Ltd. Method for determining the skew angle of a two-dimensional barcode
US6901168B1 (en) * 1998-05-07 2005-05-31 France Telecom Method for segmenting and identifying a document, in particular a technical chart
US20050168775A1 (en) * 2004-01-26 2005-08-04 Liu K. C. Method and computer program product for in-house digital photo/card processing and printing/cutting production
US6956587B1 (en) * 2003-10-30 2005-10-18 Microsoft Corporation Method of automatically cropping and adjusting scanned images
US20050244079A1 (en) * 2004-04-30 2005-11-03 Tsung-Wei Lin Method for image cropping
US6973222B2 (en) * 2000-04-28 2005-12-06 Shutterfly, Inc. System and method of cropping an image
US6987880B2 (en) * 2001-03-22 2006-01-17 Sharp Laboratories Of America, Inc. Efficient document boundary determination
US20060072847A1 (en) * 2004-10-01 2006-04-06 Microsoft Corporation System for automatic image cropping based on image saliency
US7034848B2 (en) * 2001-01-05 2006-04-25 Hewlett-Packard Development Company, L.P. System and method for automatically cropping graphical images
US20060098844A1 (en) * 2004-11-05 2006-05-11 Huitao Luo Object detection utilizing a rotated version of an image
US20060109282A1 (en) * 2004-11-23 2006-05-25 Xiaofan Lin Non-rectangular image cropping methods and systems
US7065261B1 (en) * 1999-03-23 2006-06-20 Minolta Co., Ltd. Image processing device and image processing method for correction of image distortion
US7068855B2 (en) * 2002-07-16 2006-06-27 Hewlett-Packard Development Company, L.P. System and method for manipulating a skewed digital image
US20060188173A1 (en) * 2005-02-23 2006-08-24 Microsoft Corporation Systems and methods to adjust a source image aspect ratio to match a different target aspect ratio
US20060228044A1 (en) * 2005-04-12 2006-10-12 Newsoft Technology Corporation Method for automatically cropping image objects
US7133050B2 (en) * 2003-07-11 2006-11-07 Vista Print Technologies Limited Automated image resizing and cropping
US7133571B2 (en) * 2000-12-22 2006-11-07 Hewlett-Packard Development Company, L.P. Automated cropping of electronic images
US20060280364A1 (en) * 2003-08-07 2006-12-14 Matsushita Electric Industrial Co., Ltd. Automatic image cropping system and method for use with portable devices equipped with digital cameras
US20070013974A1 (en) * 2005-07-11 2007-01-18 Canon Kabushiki Kaisha Image processing apparatus and its program and control method
US20070076979A1 (en) * 2005-10-03 2007-04-05 Microsoft Corporation Automatically cropping an image
US7201323B2 (en) * 2004-12-10 2007-04-10 Mitek Systems, Inc. System and method for check fraud detection using signature validation
US7209149B2 (en) * 2000-06-05 2007-04-24 Fujifilm Corporation Image cropping and synthesizing method, and imaging apparatus
US7239726B2 (en) * 2001-12-12 2007-07-03 Sony Corporation System and method for effectively extracting facial feature information
US7305146B2 (en) * 2001-06-30 2007-12-04 Hewlett-Packard Development Company, L.P. Tilt correction of electronic images
US20090180694A1 (en) * 2008-01-11 2009-07-16 Sharp Laboratories Of America, Inc. Method and apparatus for determining an orientation of a document including Korean characters
US7657091B2 (en) * 2006-03-06 2010-02-02 Mitek Systems, Inc. Method for automatic removal of text from a signature area
US7720291B2 (en) * 2004-02-17 2010-05-18 Corel Corporation Iterative fisher linear discriminant analysis
US20110142341A1 (en) * 2009-12-16 2011-06-16 Dolan John E Methods and Systems for Automatic Content-Boundary Detection

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4600019B2 (en) 2004-12-07 2010-12-15 カシオ計算機株式会社 Imaging apparatus, image processing method, and program
JP4884305B2 (en) 2007-06-06 2012-02-29 シャープ株式会社 Image processing apparatus, image forming apparatus, computer program, and recording medium

Patent Citations (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5854854A (en) * 1992-04-06 1998-12-29 Ricoh Corporation Skew detection and correction of a document image representation
US5452374A (en) * 1992-04-06 1995-09-19 Ricoh Corporation Skew detection and correction of a document image representation
US5465304A (en) * 1992-04-06 1995-11-07 Ricoh Corporation Segmentation of text, picture and lines of a document image
US5335290A (en) * 1992-04-06 1994-08-02 Ricoh Corporation Segmentation of text, picture and lines of a document image
US5828776A (en) * 1994-09-20 1998-10-27 Neopath, Inc. Apparatus for identification and integration of multiple cell patterns
US5528387A (en) * 1994-11-23 1996-06-18 Xerox Corporation Electronic image registration for a scanner
US5825914A (en) * 1995-07-12 1998-10-20 Matsushita Electric Industrial Co., Ltd. Component detection method
US6043823A (en) * 1995-07-17 2000-03-28 Kabushiki Kaisha Toshiba Document processing system which can selectively extract and process regions of a document
US5781665A (en) * 1995-08-28 1998-07-14 Pitney Bowes Inc. Apparatus and method for cropping an image
US6011635A (en) * 1995-12-27 2000-01-04 Minolta Co., Ltd. Image reading apparatus and method for correcting a read image
US5901253A (en) * 1996-04-04 1999-05-04 Hewlett-Packard Company Image processing system with image cropping and skew correction
US5978519A (en) * 1996-08-06 1999-11-02 Xerox Corporation Automatic image cropping
US5892854A (en) * 1997-01-21 1999-04-06 Xerox Corporation Automatic image registration using binary moments
US5974199A (en) * 1997-03-31 1999-10-26 Eastman Kodak Company Method for scanning and detecting multiple photographs and removing edge artifacts
US6178270B1 (en) * 1997-05-28 2001-01-23 Xerox Corporation Method and apparatus for selecting text and image data from video images
US5880858A (en) * 1997-12-31 1999-03-09 Mustek Systems Inc. Method of auto-cropping images for scanners
US6298157B1 (en) * 1998-02-27 2001-10-02 Adobe Systems Incorporated Locating and aligning embedded images
US6360026B1 (en) * 1998-03-10 2002-03-19 Canon Kabushiki Kaisha Method for determining a skew angle of a bitmap image and de-skewing and auto-cropping the bitmap image
US6430320B1 (en) * 1998-04-09 2002-08-06 Hewlett-Packard Company Image processing system with automatic image cropping and skew correction
US6310984B2 (en) * 1998-04-09 2001-10-30 Hewlett-Packard Company Image processing system with image cropping and skew correction
US6901168B1 (en) * 1998-05-07 2005-05-31 France Telecom Method for segmenting and identifying a document, in particular a technical chart
US20010007116A1 (en) * 1998-06-01 2001-07-05 Jiangying Zhou Border-less clock free two-dimensional barcode and method for printing and reading the same
US20010005867A1 (en) * 1998-06-01 2001-06-28 Jiangying Zhou Border-less clock free two-dimensional barcode and method for printing and reading the same
US6201901B1 (en) * 1998-06-01 2001-03-13 Matsushita Electronic Industrial Co., Ltd. Border-less clock free two-dimensional barcode and method for printing and reading the same
US6560376B2 (en) * 1998-09-11 2003-05-06 Hewlett Packard Development Company, L.P. Automatic rotation of images for printing
US6282326B1 (en) * 1998-12-14 2001-08-28 Eastman Kodak Company Artifact removal technique for skew corrected images
US6373590B1 (en) * 1999-02-04 2002-04-16 Seiko Epson Corporation Method and apparatus for slant adjustment and photo layout
US7065261B1 (en) * 1999-03-23 2006-06-20 Minolta Co., Ltd. Image processing device and image processing method for correction of image distortion
US6674919B1 (en) * 1999-09-21 2004-01-06 Matsushita Electric Industrial Co., Ltd. Method for determining the skew angle of a two-dimensional barcode
US6556721B1 (en) * 2000-01-07 2003-04-29 Mustek Systems Inc. Method for image auto-cropping
US6973222B2 (en) * 2000-04-28 2005-12-06 Shutterfly, Inc. System and method of cropping an image
US7209149B2 (en) * 2000-06-05 2007-04-24 Fujifilm Corporation Image cropping and synthesizing method, and imaging apparatus
US7133571B2 (en) * 2000-12-22 2006-11-07 Hewlett-Packard Development Company, L.P. Automated cropping of electronic images
US7034848B2 (en) * 2001-01-05 2006-04-25 Hewlett-Packard Development Company, L.P. System and method for automatically cropping graphical images
US6987880B2 (en) * 2001-03-22 2006-01-17 Sharp Laboratories Of America, Inc. Efficient document boundary determination
US7305146B2 (en) * 2001-06-30 2007-12-04 Hewlett-Packard Development Company, L.P. Tilt correction of electronic images
US7239726B2 (en) * 2001-12-12 2007-07-03 Sony Corporation System and method for effectively extracting facial feature information
US7068855B2 (en) * 2002-07-16 2006-06-27 Hewlett-Packard Development Company, L.P. System and method for manipulating a skewed digital image
US7133050B2 (en) * 2003-07-11 2006-11-07 Vista Print Technologies Limited Automated image resizing and cropping
US20060280364A1 (en) * 2003-08-07 2006-12-14 Matsushita Electric Industrial Co., Ltd. Automatic image cropping system and method for use with portable devices equipped with digital cameras
US6956587B1 (en) * 2003-10-30 2005-10-18 Microsoft Corporation Method of automatically cropping and adjusting scanned images
US20050168775A1 (en) * 2004-01-26 2005-08-04 Liu K. C. Method and computer program product for in-house digital photo/card processing and printing/cutting production
US7720291B2 (en) * 2004-02-17 2010-05-18 Corel Corporation Iterative fisher linear discriminant analysis
US20050244079A1 (en) * 2004-04-30 2005-11-03 Tsung-Wei Lin Method for image cropping
US20060072847A1 (en) * 2004-10-01 2006-04-06 Microsoft Corporation System for automatic image cropping based on image saliency
US20060098844A1 (en) * 2004-11-05 2006-05-11 Huitao Luo Object detection utilizing a rotated version of an image
US20060109282A1 (en) * 2004-11-23 2006-05-25 Xiaofan Lin Non-rectangular image cropping methods and systems
US7201323B2 (en) * 2004-12-10 2007-04-10 Mitek Systems, Inc. System and method for check fraud detection using signature validation
US20060188173A1 (en) * 2005-02-23 2006-08-24 Microsoft Corporation Systems and methods to adjust a source image aspect ratio to match a different target aspect ratio
US20060228044A1 (en) * 2005-04-12 2006-10-12 Newsoft Technology Corporation Method for automatically cropping image objects
US20070013974A1 (en) * 2005-07-11 2007-01-18 Canon Kabushiki Kaisha Image processing apparatus and its program and control method
US20070076979A1 (en) * 2005-10-03 2007-04-05 Microsoft Corporation Automatically cropping an image
US7657091B2 (en) * 2006-03-06 2010-02-02 Mitek Systems, Inc. Method for automatic removal of text from a signature area
US20090180694A1 (en) * 2008-01-11 2009-07-16 Sharp Laboratories Of America, Inc. Method and apparatus for determining an orientation of a document including Korean characters
US20110142341A1 (en) * 2009-12-16 2011-06-16 Dolan John E Methods and Systems for Automatic Content-Boundary Detection

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8554005B1 (en) * 2009-04-02 2013-10-08 Hewlett-Packard Development Company, L.P. Digital image enhancement method and system that embolden or thin image features
US20110142341A1 (en) * 2009-12-16 2011-06-16 Dolan John E Methods and Systems for Automatic Content-Boundary Detection
US8873864B2 (en) * 2009-12-16 2014-10-28 Sharp Laboratories Of America, Inc. Methods and systems for automatic content-boundary detection
US20130004071A1 (en) * 2011-07-01 2013-01-03 Chang Yuh-Lin E Image signal processor architecture optimized for low-power, processing flexibility, and user experience
CN103733189A (en) * 2011-07-01 2014-04-16 英特尔公司 Image signal processor architecture optimized for low-power, processing flexibility, and user experience
WO2022173521A1 (en) * 2021-02-11 2022-08-18 Hewlett-Packard Development Company, L.P. Image objects extraction

Also Published As

Publication number Publication date
US9547799B2 (en) 2017-01-17

Similar Documents

Publication Title
US9547799B2 (en) Methods and systems for content-boundary detection
US8873864B2 (en) Methods and systems for automatic content-boundary detection
US8290265B2 (en) Method and apparatus for segmenting an object region of interest from an image
US6400848B1 (en) Method for modifying the perspective of a digital image
US9122921B2 (en) Method for detecting a document boundary
US9390342B2 (en) Methods, systems and apparatus for correcting perspective distortion in a document image
US8755563B2 (en) Target detecting method and apparatus
US20160314563A1 (en) Method for correcting fragmentary or deformed quadrangular image
US20120294528A1 (en) Method of Detecting and Correcting Digital Images of Books in the Book Spine Area
US20140064623A1 (en) Image feature extraction apparatus and image feature extraction method, and image processing system using the same
US8620080B2 (en) Methods and systems for locating text in a digital image
US8265416B2 (en) Method, apparatus and integrated circuit for improving image sharpness
US8520953B2 (en) Apparatus and method for extracting edges of image
US9111165B2 (en) Method and system for filtering detection patterns in a QR code
WO2016197571A1 (en) Image interpolation device and method thereof
US20160259990A1 (en) Region-of-interest detection apparatus, region-of-interest detection method, and recording medium
KR101822909B1 (en) Method and device for detecting elliptical structure in an image
US20120320433A1 (en) Image processing method, image processing device and scanner
US8275170B2 (en) Apparatus and method for detecting horizon in sea image
US9378405B2 (en) Determining barcode locations in documents
JP3659426B2 (en) Edge detection method and edge detection apparatus
CN113298787A (en) Plate edge sealing defect detection method, controller and computer readable storage medium
Fang et al. 1-D barcode localization in complex background
US20140147056A1 (en) Depth image noise removal apparatus and method based on camera pose
US11134170B2 (en) Correction of feed skewed images

Legal Events

Date Code Title Description
AS Assignment

Owner name: SHARP LABORATORIES OF AMERICA, INC., WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHEN, LAWRENCE SHAO-HSIEN;DOLAN, JOHN E.;REEL/FRAME:021255/0874

Effective date: 20080715

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: SHARP KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SHARP LABORATORIES OF AMERICA, INC.;REEL/FRAME:041024/0183

Effective date: 20170120

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4