CN101320425B - Image processing apparatus, image forming apparatus, and image processing method - Google Patents


Info

Publication number
CN101320425B
CN101320425B (application CN2008100997414A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2008100997414A
Other languages
Chinese (zh)
Other versions
CN101320425A (en)
Inventor
早崎真
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sharp Corp
Original Assignee
Sharp Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Corp filed Critical Sharp Corp
Publication of CN101320425A publication Critical patent/CN101320425A/en
Application granted granted Critical
Publication of CN101320425B publication Critical patent/CN101320425B/en

Abstract

An image processing device, an image forming device, and an image processing method are provided. The image processing device includes a pattern detection process section that extracts, from input image data, a partial image made up of pixels including a target pixel; a rotated-image generating section that generates a self-rotated image by rotating the partial image; and a matching test determination section that determines whether the image pattern contained in the partial image matches the image pattern contained in the self-rotated image. When a match is found, the target pixel of the partial image, or a block of pixels including the target pixel, is treated as a feature point. Consequently, even when the image data was read while skewed with respect to the prescribed placement angle of the reading position of an image reading apparatus, or has been enlarged or reduced, feature points that reliably identify the image data can be extracted regardless of such skew, enlargement, or reduction.

Description

Image processing apparatus, image forming apparatus, and image processing method
Technical field
The present invention relates to an image processing apparatus, an image reading apparatus, an image forming apparatus, and an image processing method that have a feature-quantity calculating section for extracting feature quantities of image data.
Background art
Various techniques have been proposed for comparing input image data, obtained by reading an original with a scanner, with images registered in advance, and judging the similarity between the two.
Known similarity determination methods include, for example, extracting character images and matching keywords extracted from the image by OCR (Optical Character Reader) or the like, and extracting and comparing the features of ruled lines contained in the image.
Patent Document 1 (Japanese Unexamined Patent Publication No. 8-255236, published October 1, 1996) discloses a technique that identifies text, text-line frames, ruled-line frames and the like in an input image and compares the frames on the basis of frame information, thereby performing format identification of business-form images and the like.
Patent Document 2 (International Publication No. WO2006/092957A1, published September 8, 2006) discloses a technique that extracts, as feature points, the centroids of words in an English document, the centroids of connected components of black pixels, closed spaces in kanji characters, recurrent particular portions in an image, and the like; determines a set of local feature points for each extracted feature point; selects partial sets of feature points from each determined set; obtains, for each selected partial set as a characterizing quantity, invariants under geometric transformation for a plurality of combinations of feature points in the partial set; uses each obtained invariant as a feature quantity; and performs document matching on the basis of the feature quantities thus obtained.
However, with the techniques of Patent Documents 1 and 2, when the input image data was read while skewed with respect to the prescribed placement angle of the reading position of the image reading apparatus, or when the data has been enlarged, reduced, or otherwise processed, feature points cannot be extracted with high accuracy.
For example, in the technique of Patent Document 1, such skew, enlargement, or reduction changes the recognition results for text, text-line frames, ruled-line frames and the like, so format identification cannot be performed with high accuracy.
Likewise, in the technique of Patent Document 2, such skew, enlargement, or reduction changes the extraction results for the centroids of words in an English document, the centroids of connected components of black pixels, closed spaces in kanji characters, recurrent particular portions in the image, and the like, so the accuracy of document matching deteriorates.
Further, when feature points are extracted from an image containing handwriting (for example, a document printed in a prescribed font to which handwritten entries have been added), the handwriting differs greatly in shape from the fonts registered in the image processing apparatus, so misjudgment is inherently likely to occur; in the techniques of Patent Documents 1 and 2, the judgment accuracy is further lowered by the above skew, enlargement, and reduction, making misjudgment especially likely.
Moreover, in the technique of Patent Document 2, extracting feature points requires binarizing the image data and then, after performing labeling, extracting the centroids of words in an English document, the centroids of connected components of black pixels, closed spaces or recurrent particular portions in the image; the processing is therefore complicated, and the circuit scale needed to carry it out increases.
In addition, when the centroids of words, the centroids of connected components of black pixels, and the like are extracted as feature points as in the technique of Patent Document 2, the number of extracted feature points is small for, e.g., input image data of an original occupied mostly by tables and containing little text, so the matching accuracy for such image data is low.
Summary of the invention
The present invention has been made in view of the above problems, and its object is to provide a technique that, for image data read while skewed with respect to the prescribed placement angle of the reading position of an image reading apparatus, or image data that has been enlarged, reduced, or otherwise processed, extracts feature points that reliably identify the image data regardless of such skew, enlargement, or reduction.
To solve the above problems, an image processing apparatus of the present invention includes: a feature point detecting section that detects feature points contained in input image data; and a feature-quantity calculating section that calculates feature quantities of the input image data from the relative positions of the feature points detected by the feature point detecting section. The feature point detecting section includes: a partial-image extracting section that extracts, from the input image data, a partial image made up of a plurality of pixels including a target pixel; a rotated-image generating section that generates a self-rotated image by rotating the partial image by a predetermined angle; a coincidence determining section that determines, with the partial image and the self-rotated image superimposed, whether the image pattern contained in the partial image matches the image pattern contained in the self-rotated image; and a detecting section that detects, as the feature point, the target pixel of a partial image judged by the coincidence determining section to match, or a block made up of a plurality of pixels including that target pixel.
With this configuration, the partial-image extracting section extracts from the input image data a partial image made up of a plurality of pixels including a target pixel; the rotated-image generating section generates a self-rotated image by rotating the partial image by a predetermined angle; and the coincidence determining section determines, with the partial image and the self-rotated image superimposed, whether the image pattern contained in the partial image matches the image pattern contained in the self-rotated image. The detecting section then detects, as a feature point, the target pixel of a partial image judged to match, or a block of pixels including that target pixel.
Thus, even when the input image data was read while skewed with respect to the prescribed placement angle of the reading position of the image reading apparatus, or has been enlarged, reduced, or otherwise processed, the target pixel (or the block containing it) of a partial image whose pattern is unaffected, or only slightly affected, by such skew, enlargement, or reduction can be detected as a feature point. Therefore, by calculating feature quantities of the input image data from the relative positions of these feature points, feature quantities that identify the input image data with high accuracy can be calculated regardless of such skew, enlargement, or reduction.
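As an illustration of this match test, the following is a minimal sketch, not the patent's implementation: it assumes binarized patches and rotation angles that are multiples of 90 degrees, and the function names are my own.

```python
import numpy as np

def is_feature_point(patch: np.ndarray, angle: int = 180) -> bool:
    """Judge whether the patch's pattern coincides with its self-rotated copy.

    A patch that matches its own rotation is insensitive to document skew,
    so its centre (target) pixel would be kept as a feature point.
    `angle` is restricted to multiples of 90 degrees in this sketch.
    """
    k = (angle // 90) % 4
    rotated = np.rot90(patch, k)  # the "self-rotated image"
    return bool(np.array_equal(patch, rotated))

# A point-symmetric pattern (unchanged by a 180-degree rotation) qualifies:
symmetric = np.array([[1, 0, 1],
                      [0, 1, 0],
                      [1, 0, 1]])

# An asymmetric pattern does not:
asymmetric = np.array([[1, 1, 0],
                       [0, 1, 0],
                       [0, 0, 0]])
```

For arbitrary rotation angles the patent's coincidence determination would require interpolation when superimposing the two images; the 90-degree restriction here avoids that detail.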
Other objects, features, and advantages of the present invention will be fully understood from the following description, and the advantages of the present invention will become apparent from the following description with reference to the accompanying drawings.
Description of drawings
Fig. 1 is a block diagram showing the schematic configuration of the document matching process section provided in an image processing apparatus according to one embodiment of the present invention.
Fig. 2 is a block diagram showing the schematic configuration of the image processing apparatus according to one embodiment of the present invention.
Fig. 3 is a block diagram showing the schematic configuration of the preprocessing section provided in the document matching process section shown in Fig. 1.
Fig. 4 is an explanatory diagram showing an example of the filter coefficients of the compound filter provided in the MTF process section of the preprocessing section shown in Fig. 3.
Fig. 5 is a block diagram showing the schematic configuration of the feature point calculating section (feature point detecting section) provided in the document matching process section shown in Fig. 1.
Fig. 6 is an explanatory diagram showing an example of the self-rotated image generated by the rotated-image generating section provided in the feature point calculating section shown in Fig. 5.
Figs. 7(a) to 7(c) are explanatory diagrams showing the sizes of the partial images used in the feature point calculating section shown in Fig. 5.
Fig. 8 is an explanatory diagram showing an example of a target feature point and peripheral feature points extracted when feature quantities are calculated by the feature-quantity calculating section provided in the document matching process section shown in Fig. 1.
Figs. 9(a) to 9(c) are explanatory diagrams showing an example of combinations of a target feature point and peripheral feature points extracted when feature quantities are calculated by the feature-quantity calculating section of the image processing apparatus shown in Fig. 2.
Figs. 10(a) to 10(c) are explanatory diagrams showing an example of combinations of a target feature point and peripheral feature points extracted when feature quantities are calculated by the feature-quantity calculating section of the image processing apparatus shown in Fig. 2.
Figs. 11(a) and 11(b) are explanatory diagrams showing an example of the hash values and the indices of input image data registered in the hash table in the document matching process section shown in Fig. 1.
Fig. 12 is a histogram showing an example of the votes for each registered image (the number of votes obtained by each registered image) in the voting process section provided in the document matching process section shown in Fig. 1.
Fig. 13 is a flowchart showing the flow of processing in the document matching process section shown in Fig. 1.
Fig. 14 is a flowchart showing the flow of processing in the feature point calculating section provided in the document matching process section shown in Fig. 1.
Fig. 15 is an explanatory diagram showing an example of the relationship among the skew angle of an original with respect to the prescribed placement angle of the reading position of an image reading apparatus, a partial image contained in the input image data read from that original, and a self-rotated image obtained by rotating that partial image.
Fig. 16 is an explanatory diagram showing the relationship among a partial image contained in input image data, the corresponding partial image contained in image data obtained by scaling that input image data, and a self-rotated image obtained by rotating that partial image.
Fig. 17(a) is an explanatory diagram showing an example of a partial image extracted from binarized image data, and Fig. 17(b) is an explanatory diagram showing an example of a self-rotated image obtained by rotating the partial image shown in Fig. 17(a).
Fig. 18 is a block diagram showing a modification of the image processing apparatus according to one embodiment of the present invention.
Fig. 19 is an explanatory diagram showing the structure of an image forming apparatus according to one embodiment of the present invention.
Fig. 20 is a block diagram showing a configuration example of an image processing system according to one embodiment of the present invention.
Fig. 21 is a block diagram showing another configuration example of the image processing system according to one embodiment of the present invention.
Figs. 22(a) to 22(d) are explanatory diagrams showing an example of combinations of a target feature point and peripheral feature points extracted when feature quantities are calculated by the feature-quantity calculating section of an image processing apparatus according to one embodiment of the present invention.
Figs. 23(a) to 23(d) are explanatory diagrams showing an example of combinations of a target feature point and peripheral feature points extracted when feature quantities are calculated by the feature-quantity calculating section of an image processing apparatus according to one embodiment of the present invention.
Fig. 24 is an explanatory diagram showing an example of a partial image.
Fig. 25 is an explanatory diagram showing an example of a partial image.
Fig. 26 is an explanatory diagram showing an example of a partial image.
Fig. 27(a) is an explanatory diagram showing an example of input image data consisting of a color image, and Figs. 27(b) to 27(d) are explanatory diagrams showing the multilevel image data of the R, G, and B channels corresponding to the image data of Fig. 27(a).
Embodiment
(embodiment 1)
An embodiment of the present invention will now be described. In this embodiment, an example in which the present invention is applied to a digital color multifunction peripheral (MFP: Multi-Function Printer) is described.
(1-1. Structure of the digital color multifunction peripheral 1)
Fig. 2 is a block diagram showing the schematic configuration of the digital color multifunction peripheral (image processing apparatus, image forming apparatus, image reading apparatus) 1 of this embodiment. The digital color multifunction peripheral 1 has a copy function, a print function, a facsimile transmission function, a scan function, a scan-to-e-mail function, and the like.
As shown in Fig. 2, the digital color multifunction peripheral 1 includes a color image input device 2, a color image processing device 3, a color image output device 4, a communication device 5, and an operation panel 6.
The color image input device (image reading apparatus) 2 is composed of, for example, a scanner section (not shown) having a device, such as a CCD (Charge Coupled Device), that converts optical information into an electric signal; it outputs a reflected light image of an original to the color image processing device 3 as RGB (R: red, G: green, B: blue) analog signals.
The color image processing device 3 includes an A/D conversion section 11, a shading correction section 12, a document matching process section 13, an input tone correction section 14, a segmentation process section 15, a color correction section 16, a black generation and under color removal section 17, a spatial filter process section 18, an output tone correction section 19, and a tone reproduction process section 20. The analog signals output from the color image input device 2 to the color image processing device 3 are sent, in this order, through the A/D conversion section 11, the shading correction section 12, the document matching process section 13, the input tone correction section 14, the segmentation process section 15, the color correction section 16, the black generation and under color removal section 17, the spatial filter process section 18, the output tone correction section 19, and the tone reproduction process section 20, converted into CMYK digital color signals, and output to the color image output device 4.
The A/D (analog/digital) conversion section 11 converts the RGB analog signals into digital signals.
The shading correction section 12 removes from the digital RGB signals sent from the A/D conversion section 11 the various distortions produced in the illumination system, imaging system, and image sensing system of the color image input device 2. The shading correction section 12 also adjusts the color balance and converts the signals into signals, such as density signals, that the image processing system adopted in the color image processing device 3 can handle easily.
The document matching process section 13 extracts feature points from the input image data and calculates feature quantities from the extracted feature points. The document matching process section 13 stores (registers) the feature quantities thus calculated in a hash table, described later, in association with the image data. Further, by comparing the feature quantities calculated from the input image data as described above with the feature quantities of the registered images stored in the hash table, the document matching process section 13 judges the similarity between the input image and the registered images. The document matching process section 13 also outputs the input RGB signals to the subsequent stage without modification; this configuration will be described later.
The input tone correction section 14 applies image-quality adjustment processing, such as removal of the background density (the density component of the background color) and contrast adjustment, to the RGB signals from which the shading correction section has removed the various distortions.
The segmentation process section 15 separates, on the basis of the RGB signals, each pixel of the input image into one of a text region, a halftone-dot region, and a photograph region. On the basis of the separation result, the segmentation process section 15 outputs a segmentation class signal, indicating to which region each pixel belongs, to the color correction section 16, the black generation and under color removal section 17, the spatial filter process section 18, and the tone reproduction process section 20, and outputs the input signals received from the input tone correction section 14 to the subsequent color correction section 16 without modification.
To realize faithful color reproduction, the color correction section 16 performs a process of removing color impurity on the basis of the spectral characteristics of the CMY (C: cyan, M: magenta, Y: yellow) color materials, which contain unnecessary absorption components.
The black generation and under color removal section 17 performs black generation, which generates a black (K) signal from the three color-corrected CMY signals, and subtracts the K signal obtained by black generation from the original CMY signals to generate new CMY signals. The three CMY signals are thereby converted into four CMYK signals.
The spatial filter process section 18 applies spatial filter processing using digital filters, in accordance with the segmentation class signal, to the image data of the CMYK signals input from the black generation and under color removal section 17, and corrects the spatial frequency characteristics. This mitigates blur and graininess deterioration in the output image. Like the spatial filter process section 18, the tone reproduction process section 20 applies predetermined processing to the image data of the CMYK signals in accordance with the segmentation class signal.
For example, for a region separated as text by the segmentation process section 15, the amount of high-frequency emphasis is increased by the sharpness enhancement in the spatial filter processing of the spatial filter process section 18, in order to improve the reproducibility of black text and color text in particular. At the same time, the tone reproduction process section 20 can select binarization or multilevel processing with a high-resolution screen suitable for reproducing high frequencies.
For a region separated as a halftone-dot region by the segmentation process section 15, the spatial filter process section 18 applies low-pass filter processing to remove the input halftone-dot components. Then, after the output tone correction section 19 performs output tone correction that converts signals such as density signals into the halftone-dot area ratio, which is a characteristic value of the color image output device 4, the tone reproduction process section 20 applies tone reproduction processing (halftone generation) so that the image is finally separated into pixels and each of their tones can be reproduced. For a region separated as a photograph by the segmentation process section 15, binarization or multilevel processing with a screen that emphasizes tone reproducibility is performed.
The image data that has undergone the above processing is temporarily stored in a storage device (not shown), read out at prescribed timing, and input to the color image output device 4.
The color image output device 4 outputs the image data input from the color image processing device 3 onto a recording material (for example, paper). The structure of the color image output device 4 is not particularly limited; for example, an electrophotographic or ink-jet color image output device can be used.
The communication device 5 is composed of, for example, a modem and a network card. The communication device 5 performs data communication, via a network card, a LAN cable, or the like, with other devices connected to a network (for example, personal computers, server devices, other digital multifunction peripherals, facsimile machines, and the like).
When transmitting image data, the communication device 5 carries out a transmission procedure with the destination to secure a state in which transmission can be performed, then reads image data compressed in a prescribed format (image data read by a scanner) from a memory, applies necessary processing such as conversion of the compression format, and transmits the data to the destination sequentially via the communication line.
When receiving image data, the communication device 5 carries out a communication procedure, receives the image data sent from the originating party, and inputs it to the color image processing device 3. The received image data is subjected in the color image processing device 3 to predetermined processing such as decompression, rotation, resolution conversion, output tone correction, and tone reproduction, and is then output by the color image output device 4. The received image data may also be stored in a storage device (not shown), read out as necessary by the color image processing device 3, and subjected to the above prescribed processing.
The operation panel 6 is composed of, for example, a display section such as a liquid crystal display and setting buttons (none of which are shown); it displays on the display section information corresponding to instructions from the main control section (not shown) of the digital color multifunction peripheral 1, and passes information entered by the user via the setting buttons to the main control section. Via the operation panel 6, the user can input processing requests for the input image data (for example, the processing mode (copy, print, transmission, editing, etc.), the number of copies or prints, the transmission destination of the input image data, and so on). The main control section is composed of, for example, a CPU (Central Processing Unit), and controls the operation of each section of the digital color multifunction peripheral 1 in accordance with programs and various data stored in a ROM or the like (not shown), information input from the operation panel 6, and so on.
(1-2. Structure of the document matching process section 13)
The document matching process section 13 will now be described in detail. The document matching process section 13 of this embodiment extracts a plurality of feature points from the input image data, determines a set of local feature points for each extracted feature point, selects partial sets of feature points from each determined set, obtains, for each selected partial set as a characterizing quantity, invariants under geometric transformation for a plurality of combinations of feature points in the partial set, and calculates hash values (feature quantities) by combining the obtained invariants. Then, by voting for the registered images corresponding to the calculated hash values, it searches for registered images similar to the input image data and performs determination processing of the similarity to those registered images (judging similar/dissimilar). Alternatively, it can perform a process of storing (registering) the calculated hash values in the hash table in association with the image from which they were extracted.
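The register-and-vote flow described above can be sketched as follows. This is a simplified model with names of my own; the actual computation of hash values from geometric invariants is omitted.

```python
from collections import defaultdict

# Hash table: feature quantity (hash value) -> indices of registered images.
hash_table: dict = defaultdict(list)

def register(image_id: str, hashes: list) -> None:
    """Store (register) each feature quantity of an image in the hash table."""
    for h in hashes:
        hash_table[h].append(image_id)

def vote(query_hashes: list) -> dict:
    """Cast one vote per matching hash value for each registered image."""
    votes: dict = defaultdict(int)
    for h in query_hashes:
        for image_id in hash_table.get(h, []):
            votes[image_id] += 1
    return dict(votes)
```

The registered image with the most votes would then be handed to the similarity determination step, which compares the vote count against a threshold.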
Fig. 1 is a block diagram showing the schematic configuration of the document matching process section 13. As shown in the figure, the document matching process section 13 includes a preprocessing section 30, a feature point calculating section (feature point detecting section) 31, a feature-quantity calculating section 32, a voting process section 33, a similarity determination process section 34, a registration process section 37, a control section 7, and a memory 8.
The control section 7 controls the operation of each section of the document matching process section 13. The control section 7 may be provided in the main control section that controls the operation of each section of the digital color multifunction peripheral 1, or may be provided separately from the main control section and control the operation of the document matching process section 13 in cooperation with the main control section.
The memory 8 contains a hash table 103, in which indices for identifying registered images and the feature quantities extracted from those registered images are stored in association with each other. Besides the hash table 103, the memory 8 also contains a storage section (not shown) that stores the various data used in the processing of each section of the document matching process section 13, the processing results, and the like. The details of the hash table 103 will be described later.
Fig. 3 is a block diagram showing the schematic configuration of the preprocessing section 30. As shown in the figure, the preprocessing section 30 includes a signal conversion process section (achromatizing process section) 41, a resolution conversion section 42, and an MTF process section 43.
When the image data (RGB signals) input from the shading correction section 12 is a color image, the signal conversion process section 41 achromatizes the image data, converting it into a lightness signal or a luminance signal.
For example, the signal conversion process section 41 converts the RGB signals into a luminance signal Y using the following formula:
Yi = 0.30 Ri + 0.59 Gi + 0.11 Bi
where Y is the luminance signal of each pixel; R, G, and B are the color components of the RGB signals of each pixel; and the subscript i is a value assigned to each pixel (i is an integer of 1 or more).
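A minimal sketch of this per-pixel conversion (the function name is my own):

```python
def rgb_to_luminance(r: float, g: float, b: float) -> float:
    """Yi = 0.30*Ri + 0.59*Gi + 0.11*Bi for one pixel i."""
    return 0.30 * r + 0.59 * g + 0.11 * b
```

Since the coefficients sum to 1.0, a neutral gray (equal R, G, B) maps to a luminance equal to that gray level.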
Alternatively, the RGB signals may be converted into CIE 1976 L*a*b* signals (CIE: Commission Internationale de l'Eclairage; L*: lightness; a*, b*: chromaticity).
The resolution conversion section 42 scales the input image data. For example, when the input image data has been optically scaled by the color image input device 2, the resolution conversion section 42 scales it again so that it has a prescribed resolution. Further, to reduce the amount of processing in the subsequent sections, the resolution conversion section 42 may perform resolution conversion that lowers the resolution below the resolution read by the color image input device 2 at unity magnification (for example, converting image data read at 600 dpi (dots per inch) into 300 dpi).
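A minimal sketch of such a resolution reduction, under the assumption of an integer scaling factor and simple block averaging (the patent does not specify the interpolation method):

```python
import numpy as np

def downsample(image: np.ndarray, src_dpi: int = 600, dst_dpi: int = 300) -> np.ndarray:
    """Reduce resolution by block averaging, e.g. 600 dpi -> 300 dpi."""
    assert src_dpi % dst_dpi == 0, "integer ratios only in this sketch"
    f = src_dpi // dst_dpi
    h, w = image.shape
    h, w = h - h % f, w - w % f  # crop to a multiple of the factor
    # Group pixels into f x f blocks and average each block.
    return image[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
```
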
The MTF (modulation transfer function) process section 43 absorbs (adjusts) machine-to-machine differences in the spatial frequency characteristics of the color image input apparatus 2. In the image signal output by the CCD, MTF deterioration arises from the optical components such as lenses and mirrors, the aperture opening of the light-receiving surface of the CCD, transfer efficiency and afterimage, and the integration effect and unevenness of the physical scan. This MTF deterioration blurs the image that is read. The MTF process section 43 performs appropriate filtering (enhancement processing) to repair the blurring caused by the MTF deterioration. It is also used to suppress high-frequency components that are unnecessary for the feature point extraction process described later. That is, a mixed filter (not shown) is used to perform enhancement and smoothing. Fig. 4 shows an example of the filter coefficients of this mixed filter.
The configuration of the preprocessing section 30 is not limited to the above. For example, in addition to, or in place of part or all of, the above processes, it may perform differential processing for extracting edge portions, noise reduction using an unsharp mask, binarization for reducing the amount of data to be processed, and the like.
Fig. 5 is a block diagram showing a schematic configuration of the feature point calculation section 31. As shown in the figure, the feature point calculation section 31 includes a pattern detection process section 45, a rotated-image generation section 46, a matching degree calculation section 47, and a matching degree determination section (detection section) 48.
The pattern detection process section 45 extracts from the input image a partial image of M × M pixels (M is an integer of 3 or more) centered on a target pixel, and performs a pattern detection process to determine whether an effective pattern exists in the partial image. By raster-scanning the target pixel one pixel at a time, the pattern detection process section 45 performs this pattern detection process for every pixel. When a partial image at the edge of the image contains fewer than M × M pixels, the missing pixels may be supplemented, for example, by folding back or duplicating the image at the edge. The size of the partial image is set to M × M here, but the invention is not limited to this; for example, a partial image of M × N pixels (N is an integer of 3 or more, M ≠ N) may be extracted instead. However, with M × M pixels, the number of pixels used in the determination can be kept equal to the pixel count of the partial image even when the self-rotation angle described later is other than 180 degrees, so it is desirable to set the size to M × M pixels.
The pattern detection process will now be described in detail. In this embodiment, the pattern detection process section 45 first calculates a dispersion value busy, representing the complexity of the image data, using the following formula (1), in which N is the number of pixels in the partial image, I is the signal value of each pixel, and i is a value identifying each pixel (i is an integer from 1 to N).
busy = N·Σ_{i=1}^{N} (I_i × I_i) − (Σ_{i=1}^{N} I_i) × (Σ_{i=1}^{N} I_i) ... (1)
Then, the pattern detection process section 45 determines the presence or absence of a pattern by comparing the dispersion value busy calculated as above with a preset threshold TH1. For example, when busy ≥ TH1, it determines that an effective pattern exists; when busy < TH1, it determines that no effective pattern exists. The threshold TH1 need only be set appropriately so that patterns can be extracted reliably. Although the use of the dispersion value busy is described here, the invention is not limited to this; for example, an index of image complexity other than the dispersion value may be used for the determination.
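A minimal sketch of this check in Python (function and variable names are illustrative only):

```python
def dispersion(pixels):
    """Dispersion value 'busy' of formula (1):
    busy = N * sum(I_i^2) - (sum(I_i))^2,
    computed over the flattened pixel values of the partial image."""
    n = len(pixels)
    total = sum(pixels)
    sq_total = sum(p * p for p in pixels)
    return n * sq_total - total * total

def has_effective_pattern(pixels, th1):
    """Pattern present when busy >= TH1, absent otherwise."""
    return dispersion(pixels) >= th1
```

A flat partial image gives busy = 0, so any positive TH1 rejects featureless regions.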
The rotated-image generation section 46 generates a self-rotated image, which is an image obtained by rotating a partial image determined by the pattern detection process section 45 to contain a pattern by a predetermined angle R° (the predetermined self-rotation angle) about the center coordinates (the target pixel) of that partial image.
Fig. 6 is an explanatory diagram showing an example of an input image and the self-rotated images generated by the rotated-image generation section 46 (a 90-degree rotated image, a 180-degree rotated image, and a 120-degree rotated image).
The method used to rotate the image is not particularly limited; for example, an affine transformation using a rotation matrix may be employed. Moreover, for a 90-degree rotation, if the pixel at column C, row R of the input partial image (I) is denoted (I)_{CR}, the self-rotated image (the 90-degree self-rotated image) can also be generated by the operation (T)_{CR} = (I)_{R, M−C+1}.
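Under a zero-based indexing convention (one assumed reading of the one-based formula above), the 90-degree self-rotation can be sketched as:

```python
def rotate90(img):
    """90-degree rotation of a square M x M partial image.
    Zero-based equivalent of (T)_{C,R} = (I)_{R, M-C+1}:
    result[c][r] = img[r][m - 1 - c]."""
    m = len(img)
    return [[img[r][m - 1 - c] for r in range(m)] for c in range(m)]
```

Applying the function four times returns the original partial image, as a 90-degree rotation should.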
When the numbers of pixels in the vertical and horizontal directions do not match before and after the self-rotation (for example, in the case of a 120-degree rotation), the size of the partial image may be set in advance larger than the image size used in the matching degree determination process described later, and the partial image may be extracted again after the self-rotation so as to match the image size used in the matching determination process.
For example, if the size of the partial image extracted from the input image is set to the default image size (M × M), as shown by the broken line in Fig. 7(a), then self-rotating this partial image by 120 degrees yields the broken-line portion shown in Fig. 7(b). As a result, comparing the solid line and broken line in Fig. 7(c) (or the solid line and broken line in Fig. 7(b)) shows that the finally cut-out partial image (the partial image used in the matching degree determination) contains portions where no image is present.
In contrast, if the size of the partial image extracted from the input image is set larger than the default image size (M × M), for example to N × N (N is an integer greater than M), as shown by the solid line in Fig. 7(a), then self-rotating this partial image by 120 degrees yields the solid-line portion shown in Fig. 7(b). By cutting out from this a region corresponding to the size of the partial image used in the matching degree determination, a suitable partial image for the matching degree determination can be obtained.
The matching degree calculation section 47 calculates the correlation value (normalized correlation: matching degree) S between the input image (input partial image) and the self-rotated image.
Here, the method of calculating the correlation value and the method of determining the matching degree will be described more specifically. In general, the correlation value S of two images Input (I) and Target (T), each consisting of N pixels, is expressed by the following formula (2), where A, B, and C are the values given by formulas (3) to (5).
S = {A / √(B × C)} × 1000 ... (2)
A = N·Σ_{i=1}^{N} (I_i × T_i) − (Σ_{i=1}^{N} I_i) × (Σ_{i=1}^{N} T_i) ... (3)
B = N·Σ_{i=1}^{N} (I_i × I_i) − (Σ_{i=1}^{N} I_i) × (Σ_{i=1}^{N} I_i) ... (4)
C = N·Σ_{i=1}^{N} (T_i × T_i) − (Σ_{i=1}^{N} T_i) × (Σ_{i=1}^{N} T_i) ... (5)
In the case of this embodiment, since Target (T) is the self-rotated image of Input (I), clearly B = C. Therefore, the correlation value S need only be computed as S = (A/B) × 1000, which simplifies the computation. Moreover, since B is identical to the dispersion value busy described above, the already-calculated value of busy can be used as B, and B need not be calculated again.
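The simplified correlation can be sketched as follows (a hypothetical helper; inp and tgt are the flattened pixel lists of the partial image and its self-rotated copy, so B = C holds by construction):

```python
def correlation(inp, tgt):
    """Normalized correlation of formulas (2)-(5), using the
    B = C simplification valid for a self-rotated target:
    S = (A / B) * 1000, where B equals the dispersion value 'busy'."""
    n = len(inp)
    si, st = sum(inp), sum(tgt)
    a = n * sum(i * t for i, t in zip(inp, tgt)) - si * st
    b = n * sum(i * i for i in inp) - si * si  # identical to 'busy'
    return (a / b) * 1000 if b else 0.0
```

An image identical to its rotated copy yields S = 1000, the maximum; TH_est is chosen somewhere below that.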
The matching degree determination section 48 compares the correlation value S calculated by the matching degree calculation section 47 with a preset threshold TH_est to determine whether the input image matches the self-rotated image. That is, it determines whether the image pattern contained in the partial image of the input image coincides (overlaps) with the image pattern contained in the self-rotated image when the partial image is superposed on the copy of itself rotated by the predetermined angle. Specifically, when S > TH_est, the two images are determined to match, and the center pixel (target pixel) of the partial image is taken as a feature point; when S ≤ TH_est, the two images are determined not to match. The matching degree determination section 48 outputs information indicating the pixels taken as feature points to the feature quantity calculation section 32. Alternatively, it may store this information in the memory 8, and the feature quantity calculation section 32 may read it from the memory 8. The threshold TH_est need only be set appropriately so that feature points can be extracted reliably.
The feature quantity calculation section 32 includes a feature point extraction section 32a, an invariant calculation section 32b, and a hash value calculation section 32c. Using the feature points calculated by the feature point calculation section 31, it calculates feature quantities (hash values and/or invariants) that are invariant to geometric transformations of the original image, such as rotation, translation, enlargement, and reduction.
As shown in Fig. 8, the feature point extraction section 32a takes one feature point as a target feature point and extracts, as peripheral feature points, a predetermined number of feature points around the target feature point (here, four points), in order of increasing distance from the target feature point. In the example of Fig. 8, when feature point a is the target feature point, the four feature points b, c, d, and e are extracted as peripheral feature points; when feature point b is the target feature point, the four feature points a, c, e, and f are extracted as peripheral feature points.
The feature point extraction section 32a further extracts combinations of three points selectable from the four peripheral feature points extracted as above. For example, as shown in Figs. 9(a) to 9(c), when feature point a in Fig. 8 is the target feature point, combinations of three of the peripheral feature points b, c, d, and e are extracted, namely the combinations b, c, d; b, c, e; and b, d, e.
Then, the invariant calculation section 32b calculates, for each extracted combination, an invariant (feature quantity 1) Hij with respect to geometric transformation. Here, i is a number identifying the target feature point (i is an integer of 1 or more), and j is a number identifying the combination of three peripheral feature points (j is an integer of 1 or more). In this embodiment, the ratio of the lengths of two line segments connecting peripheral feature points is used as the invariant Hij; the lengths of the line segments can be calculated from the coordinate values of the peripheral feature points. For example, in the example of Fig. 9(a), if the length of the line segment connecting feature points c and d is A11 and the length of the line segment connecting feature points c and b is B11, the invariant H11 is H11 = A11/B11. In the example of Fig. 9(b), if the length of the line segment connecting feature points c and b is A12 and the length of the line segment connecting feature points b and e is B12, the invariant H12 is H12 = A12/B12. In the example of Fig. 9(c), if the length of the line segment connecting feature points d and b is A13 and the length of the line segment connecting feature points b and e is B13, the invariant H13 is H13 = A13/B13. In this way, the invariants H11, H12, and H13 can be calculated for the examples of Figs. 9(a) to 9(c). In the above examples, the line segment connecting the peripheral feature point on the horizontal left and the peripheral feature point at the horizontal center is taken as Aij, and the line segment connecting the peripheral feature point at the horizontal center and the peripheral feature point on the horizontal right is taken as Bij; however, the invention is not limited to this, and the line segments used in calculating the invariant Hij may be selected by any method.
Then, the hash value calculation section 32c calculates the remainder of (Hi1 × 10^2 + Hi2 × 10^1 + Hi3 × 10^0)/D as a hash value (feature quantity 1) Hi and stores it in the memory 8. Here, D is a constant set in advance according to the desired range of remainder values.
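The invariant and hash computation can be sketched as below. The quantization of each ratio Hij to a single decimal digit is an assumption (the text only gives the positional-weight formula), and the pairing of segments within each triple is one of the arbitrary conventions the text permits:

```python
import math
from itertools import combinations

def invariants(peripherals):
    """For each 3-point combination of the peripheral feature points,
    return the ratio Aij / Bij of two connecting segment lengths."""
    ratios = []
    for p, q, r in combinations(peripherals, 3):
        a = math.dist(p, q)  # segment Aij
        b = math.dist(q, r)  # segment Bij
        ratios.append(a / b)
    return ratios

def hash_value(peripherals, d=10):
    """Hi = (Hi1*10^2 + Hi2*10^1 + Hi3*10^0) mod D, with each Hij
    quantized here to one digit (an illustrative choice)."""
    h1, h2, h3 = (int(r * 10) % 10 for r in invariants(peripherals)[:3])
    return (h1 * 100 + h2 * 10 + h3) % d
```

Because the Hij are ratios of lengths, uniformly scaling or rotating all coordinates leaves the hash unchanged, which is exactly the invariance the matching relies on.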
The method of calculating the invariant Hij is not particularly limited. For example, the value calculated from the cross-ratio of five nearby points of the target feature point, the cross-ratio of five points extracted from n nearby points (n is an integer with n ≥ 5), or the arrangement and cross-ratios of five points extracted from m points (m is an integer with m < n and m ≥ 5) extracted from n nearby points may be used as the invariant Hij for the target feature point. A cross-ratio is a value obtained from four points on a straight line or five points on a plane, and is known as an invariant with respect to projective transformation, a kind of geometric transformation.
The formula used to calculate the hash value Hi is not limited to the above; other hash functions (for example, any one of the hash functions described in Patent Document 2) may also be used.
After completing the extraction of peripheral feature points and the calculation of the hash value Hi for one target feature point, each section of the feature quantity calculation section 32 changes the target feature point to another feature point, performs the extraction of peripheral feature points and the calculation of the hash value again, and thereby calculates hash values for all feature points.
In the example of Fig. 8, after the extraction of peripheral feature points and the hash value have been completed with feature point a as the target feature point, the extraction of peripheral feature points and a hash value is performed with feature point b as the target feature point. In this example, when feature point b is the target feature point, the four feature points a, c, e, and f are extracted as peripheral feature points. Then, as shown in Figs. 10(a) to 10(c), combinations of three points selectable from these peripheral feature points a, c, e, and f (peripheral feature points a, e, f; peripheral feature points c, e, f; peripheral feature points a, c, f) are extracted, a hash value Hi is calculated from the invariants of these combinations, and the result is stored in the memory 8. This process is then repeated for every feature point, obtaining the hash value for each feature point as the target feature point and storing them in the memory 8.
The method of calculating the invariants when feature point a is the target feature point is not limited to the above. For example, as shown in Figs. 22(a) to 22(d), when feature point a in Fig. 8 is the target feature point, all four combinations of three of the peripheral feature points b, c, d, and e may be extracted, namely b, c, d; b, c, e; b, d, e; and c, d, e, and the invariant (feature quantity 1) Hij with respect to geometric transformation calculated for each extracted combination.
Similarly, when feature point b of Fig. 8 is the target feature point, as shown in Figs. 23(a) to 23(d), the four combinations of three points from the peripheral feature points a, c, e, and f (a, e, f; a, c, e; a, f, c; e, f, c) may be extracted, and the invariant Hij with respect to geometric transformation calculated for each extracted combination. In this case, the remainder of (Hi1 × 10^3 + Hi2 × 10^2 + Hi3 × 10^1 + Hi4 × 10^0)/D need only be calculated as the hash value and stored in the memory 8.
Further, in the above example, the line segment connecting the peripheral feature point nearest the target feature point and the second-nearest peripheral feature point is taken as Aij, and the line segment connecting the peripheral feature point nearest the target feature point and the third-nearest peripheral feature point is taken as Bij; however, the invention is not limited to this. The line segments used in calculating the invariant Hij may be selected by any method, for example on the basis of the lengths of the line segments connecting the peripheral feature points.
When performing the process of registering the input image data as a registered image, the feature quantity calculation section 32 sends the hash values (feature quantities) of the feature points of the input image data, calculated as above, to the registration process section 37. When performing the process of determining whether the input image data is the image data of an already-registered image (the similarity determination process), the feature quantity calculation section 32 sends the hash values of the feature points of the input image data, calculated as above, to the voting process section 33.
The registration process section 37 sequentially registers, in the hash table 103 provided in the memory 8, the hash value of each feature point calculated by the feature quantity calculation section 32 together with an index (document ID) representing the document (input image data) (see Fig. 11(a)). When a hash value has already been registered, the document ID is registered in association with that hash value. Document IDs are assigned sequentially without repetition. When the number of documents registered in the hash table 103 exceeds a predetermined value (for example, 80% of the number of documents that can be registered), old document IDs may be searched for and deleted in order; deleted document IDs may then be reused as document IDs of new input image data. When calculated hash values are equal (H1 = H5 in the example of Fig. 11(b)), they may be combined into one entry and registered in the hash table 103.
The voting process section 33 compares the hash value of each feature point calculated from the input image data with the hash values registered in the hash table 103, and votes for the registered images having the same hash value. In other words, for each registered image, it counts the number of times the same hash value as one of that registered image's hash values was calculated from the input image data, and stores the accumulated value in the memory 8. Fig. 12 is a histogram showing an example of the numbers of votes for registered images ID1, ID2, and ID3.
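The voting step can be sketched as a dictionary lookup (hash_table here maps each hash value to the list of document IDs registered under it; the names are illustrative):

```python
from collections import defaultdict

def vote(input_hashes, hash_table):
    """For every hash computed from the input image, add one vote to
    each registered document stored under the same hash value."""
    votes = defaultdict(int)
    for h in input_hashes:
        for doc_id in hash_table.get(h, []):
            votes[doc_id] += 1
    return dict(votes)
```

A registered document that shares many hash values with the input accumulates many votes, which is what the similarity determination then thresholds.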
The similarity determination process section 34 reads the voting result of the voting process section 33 (the index of each registered image and the number of votes for each registered image: the similarity) from the memory 8, and extracts the maximum number of votes and the index of the registered image that obtained it. It then compares the extracted maximum number of votes with a predetermined threshold THa to determine the similarity (whether the input image data is the image data of a registered image), and sends a determination signal representing the result to the control section 7. That is, when the maximum number of votes is greater than or equal to the predetermined threshold THa, it determines that "there is similarity (the input image data is the image data of a registered image)"; when it is less than the threshold THa, it determines that "there is no similarity (the input image data is not the image data of a registered image)".
Alternatively, the similarity determination process section 34 may normalize the number of votes for each registered image by dividing it by the total number of votes (the total number of feature points extracted from the input image data) to calculate a similarity, and determine the similarity by comparing this similarity with a predetermined threshold THa (for example, 80% of the total number of votes).
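This normalized variant can be sketched as follows (names illustrative; total is the number of feature points extracted from the input image):

```python
def is_similar(votes, total, tha=0.8):
    """Normalize the best vote count by the total number of feature
    points and compare it with the threshold THa (e.g. 0.8)."""
    if not votes or total == 0:
        return False, None
    best = max(votes, key=votes.get)
    return votes[best] / total >= tha, best
```

The returned pair carries both the determination result and the index of the best-matching registered image.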
Further, the similarity determination process section 34 may normalize the number of votes for each registered image by dividing it by the number of registered hash values of the registered image having the largest number of registered hash values (the maximum registration number) to calculate a similarity, and determine the similarity by comparing this similarity with the predetermined threshold THa (for example, 80% of the total number of votes). That is, when the calculated similarity is greater than or equal to the threshold THa, it determines that "there is similarity"; when it is less than the threshold THa, it determines that "there is no similarity". In this case, since the total number of hash values extracted from the input image data may exceed the maximum registration number (in particular, when at least part of the document and/or registered image contains handwritten portions), the calculated similarity may exceed 100%.
The threshold THa used for the similarity determination may be constant for all registered images, or may be set for each registered image according to the importance of that registered image. The importance of a registered image may be set in stages: for example, for banknotes, securities, top-secret documents, documents not to be disclosed outside the company, and the like, the importance is set to a maximum, while for confidential documents it is set lower than for banknotes and the like. In this case, a weighting coefficient corresponding to the importance of each registered image is stored in the memory 8 in association with the index of that registered image, and the similarity determination process section 34 determines the similarity using the threshold THa corresponding to the registered image that obtained the maximum number of votes.
Alternatively, when determining the similarity, the threshold THa may be kept constant while the number of votes for each registered image (the number of votes obtained by each registered image) is multiplied by the weighting coefficient of that registered image. In this case, the weighting coefficient corresponding to the importance of each registered image is stored in advance in the memory 8 in association with the index of that registered image; the similarity determination process section 34 need only calculate a corrected number of votes by multiplying the number of votes for each registered image by the weighting coefficient of that registered image, and determine the similarity from the corrected numbers of votes. For example, the maximum corrected number of votes may be compared with the threshold THa, or a value obtained by normalizing the maximum corrected number of votes by the total number of votes may be compared with the threshold THa. In this case, the weighting coefficient need only be set, for example, to a value greater than 1 that increases with the importance of the registered image.
In this embodiment, one hash value is calculated for one feature point (target feature point), but the invention is not limited to this; a plurality of hash values may be calculated for one feature point (target feature point). For example, a method may be employed in which six points are extracted as peripheral feature points of the target feature point, and, for each of the six combinations of five points extracted from these six points, invariants are obtained by extracting three points from the five points and a hash value is calculated. In this case, six hash values are calculated for one feature point.
(1-3. Processing in the color digital multifunction peripheral 1)
Next, the processing in the color digital multifunction peripheral 1 will be described with reference to the flowchart shown in Fig. 13.
First, the control section 7 obtains input image data and a processing request from the user (instruction input) entered through the operation panel 6 or the communication apparatus 5 (S1, S2). The input image data may be obtained by reading a document image with the color image input apparatus 2, may be input image data sent from an external apparatus and obtained through the communication apparatus 5, or may be obtained by reading it from various recording media through a card reader (not shown) provided in the color digital multifunction peripheral 1.
Then, the control section 7 causes the preprocessing section 30 to perform preprocessing on the input image data (for example, achromatization, resolution conversion, and MTF processing) (S3), causes the feature point calculation section 31 to perform the feature point calculation process (S4), and causes the feature quantity calculation section 32 to calculate the feature quantities (S5). The detailed steps of the feature point calculation process are described later.
Then, the control section 7 determines whether the process requested by the above processing request is a registration process (S6). When it determines that the request is a registration process, the control section 7 registers the feature quantities calculated by the feature quantity calculation section 32 in the hash table 103 in association with a document ID (the ID of the registered image) (S7).
On the other hand, when it determines that the request is not a registration process (that is, that it is a similarity determination process), the control section 7 causes the voting process section 33 to perform the voting process (S8) and causes the similarity determination process section 34 to perform the similarity determination process (S9).
Then, when the images are determined to be similar, execution of image processing on the input image data (for example, copying, printing, electronic transmission, facsimile transmission, filing, correction or editing of the image data, and the like) is prohibited (S10), and the process ends. When they are determined to be dissimilar, image processing on the input image data is permitted (S11), and the process ends. This embodiment describes an example in which execution of image processing is prohibited when the images are similar and permitted when they are dissimilar, but the invention is not limited to this. For example, the determination result of the similarity may be notified to a predetermined notification destination. Further, depending on the determination result of the similarity, it may be determined whether to record the input image data, whether a predetermined mark needs to be superimposed on the output image corresponding to the input image data, whether user authentication is required, whether the similarity determination result needs to be displayed, and so on.
Fig. 14 is a flowchart showing the flow of the feature point calculation process (the process of S4 above) in the feature point calculation section 31.
As shown in the figure, when input image data is input from the preprocessing section 30 to the feature point calculation section 31, the control section 7 causes the pattern detection process section 45 to extract a partial image (for example, cutting out a partial image of M × M pixels) (S21) and perform the pattern detection process (S22). Then, based on the result of the pattern detection process, the control section 7 determines whether an effective pattern exists in the partial image (S23).
When it is determined that an effective pattern exists, the control section 7 causes the rotated-image generation section 46 to generate the self-rotated image of the partial image extracted in S21 (S24). Then, the control section 7 causes the matching degree calculation section 47 to calculate the matching degree between the partial image extracted in S21 and the self-rotated image generated in S24 (S25), and causes the matching degree determination section 48 to determine whether the partial image and the self-rotated image match (S26).
When it is determined in S26 that the partial image matches the self-rotated image, the control section 7 registers the center pixel (target pixel) of the partial image in the memory 8 as a feature point (S27). Alternatively, information indicating that the center pixel (target pixel) of the partial image is a feature point may be output to the feature quantity calculation section 32.
After the process of S27, or when it is determined in S23 that no effective pattern exists, or when it is determined in S26 that the partial image and the self-rotated image do not match, the control section 7 determines whether the pattern detection process has been performed for all pixels in the input image data (S28). That is, it determines whether the pattern detection process has been performed for the partial images in which every pixel of the input image data has in turn served as the target pixel.
When pixels for which the pattern detection process has not been performed remain, the target pixel is raster-scanned to the next target pixel, and the processing from S22 onward is performed again (S29). On the other hand, when it is determined that the pattern detection process has been performed for all pixels, the control section 7 ends the feature point calculation process.
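Putting steps S21 to S29 together, the feature point loop can be sketched as below (interior pixels only, so the edge padding described earlier is omitted; the threshold values are illustrative):

```python
def extract_feature_points(image, m, th1, th_est):
    """Raster-scan every target pixel, cut an M x M partial image,
    skip it when no effective pattern exists (busy < TH1), rotate it
    90 degrees, and keep the center pixel as a feature point when the
    correlation with the rotated copy exceeds TH_EST (B = C
    simplification, formula (2) with formulas (3)-(5))."""
    h, w = len(image), len(image[0])
    half = m // 2
    feats = []
    for y in range(half, h - half):
        for x in range(half, w - half):
            part = [row[x - half:x + half + 1]
                    for row in image[y - half:y + half + 1]]
            flat = [p for row in part for p in row]
            n = len(flat)
            s = sum(flat)
            busy = n * sum(v * v for v in flat) - s * s
            if busy < th1:
                continue  # no effective pattern (S23)
            rot = [[part[b][m - 1 - a] for b in range(m)] for a in range(m)]
            rflat = [p for row in rot for p in row]
            a_val = n * sum(i * t for i, t in zip(flat, rflat)) - s * sum(rflat)
            if (a_val / busy) * 1000 > th_est:
                feats.append((x, y))  # center pixel becomes a feature point (S27)
    return feats
```

A 90-degree-symmetric pattern such as a cross survives the test at its center, while off-center windows and blank regions are rejected.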
As described above, in the document matching process section 13 of the color digital multifunction peripheral 1 of this embodiment, a partial image is extracted from the input image data, it is determined whether the extracted partial image contains a pattern, and, when it does, it is determined whether the partial image matches the self-rotated image obtained by rotating that partial image. When they match, the target pixel of the partial image is extracted as a feature point.
Thus, even when the input image data is data read with the document tilted relative to the predetermined placement angle of the reading position of the image reading apparatus, or data that has undergone enlargement, reduction, or the like, the center point (target pixel) of a partial image unaffected (or little affected) by such tilt, enlargement, or reduction can be extracted as a feature point. That is, feature points of the same pattern can be extracted with high accuracy regardless of such tilt, enlargement, or reduction, keeping the ratios of the distances between the calculated feature points constant. Therefore, by calculating the similarity of images from feature quantities based on feature points extracted in this way, the similarity between the input image and a registered image can be determined with high accuracy regardless of such tilt, enlargement, or reduction.
Figure 15 is an explanatory diagram showing an example of a partial image in input image data read at the proper angle relative to the prescribed placement angle of the reading position of the image reading apparatus, together with the 90-degree self-rotated image obtained by rotating that partial image by 90 degrees, and a partial image in input image data read with the document tilted by 30 degrees relative to the prescribed placement angle of the reading position, together with its 90-degree self-rotated image. As shown in the figure, even for input image data read with the document placed at a tilt relative to the prescribed placement angle of the reading position, treating as a feature point the target pixel of each partial image that matches its self-rotated image allows feature points of identical patterns to be extracted with high accuracy, unaffected by the tilt.
Figure 16 is an explanatory diagram showing an example of a partial image in input image data reduced to 70% together with its 90-degree self-rotated image, a partial image in input image data enlarged to 150% together with its 90-degree self-rotated image, and the corresponding partial image in the unscaled input image. As shown in the figure, even when the input image data has undergone scaling (reduction or enlargement), treating as a feature point the target pixel of each partial image that matches its self-rotated image allows feature points of identical patterns to be extracted with high accuracy, unaffected by the scaling.
In this embodiment, moreover, a set of local feature points is determined for each feature point extracted as described above; a subset of feature points is selected from each determined set; for each selected subset, invariants with respect to geometric transformation are computed from a plurality of combinations of feature points within the subset; and a hash value (feature value) is calculated by combining the computed invariants. This makes it possible to judge the similarity between the input image and a registered image with still higher accuracy.
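The feature value computation summarized in the preceding paragraph can be illustrated roughly as follows. The sketch below is an assumption-laden simplification: it uses ratios of squared distances between a feature point's nearest neighbours as crude invariants (such ratios are unchanged by rotation, translation, and uniform scaling) and packs their quantized values into one integer per feature point. The embodiment's actual choice of point subsets, invariants, and hash function differs in detail.

```python
import itertools

def hash_from_feature_points(points, num_bins=10):
    """For each feature point, take its 4 nearest neighbours, form the
    squared distances between all pairs of them, quantize each distance
    as a fraction of the largest one, and pack the quantized ratios into
    a single integer hash. Neighbour count, binning, and packing are all
    illustrative assumptions."""
    def sqdist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    hashes = []
    for p in points:
        neighbours = sorted((q for q in points if q != p),
                            key=lambda q: sqdist(p, q))[:4]
        ds = sorted(sqdist(a, b)
                    for a, b in itertools.combinations(neighbours, 2))
        dmax = ds[-1]
        h = 0
        for d in ds:
            # ratio d/dmax is a similarity invariant; quantize to a bin
            h = h * num_bins + min(num_bins * d // dmax, num_bins - 1)
        hashes.append(h)
    return hashes
```

Because only distance ratios enter the hash, translating and uniformly scaling the whole point set leaves every hash value unchanged, which is the property the voting-based matching depends on.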
Furthermore, with the above configuration, all that is required is to extract a partial image and judge whether the extracted partial image matches the self-rotated image obtained by rotating it (that is, whether autocorrelation exists), so feature points can be extracted easily. The circuit configuration of the feature point calculating section 31 can therefore be kept simple, which facilitates hardware implementation.
In general, handwriting has a low degree of matching with its self-rotated image (low autocorrelation). Therefore, even when the input image contains handwritten portions, the extraction of inappropriate feature points from the handwriting can be prevented. Accordingly, with the above configuration, the feature points of the input image can be extracted effectively even when handwritten portions are present in the input image.
In this embodiment, when a partial image matches its self-rotated image, the target pixel of that partial image is extracted as a feature point, but the invention is not limited to this. For example, a block made up of a plurality of pixels including the target pixel in the partial image may instead be extracted as the feature point.
Also, in this embodiment, 90 degrees, 120 degrees, and 180 degrees have been described as examples of the self-rotation angle R, but there is no particular restriction on how the self-rotation angle R is set; it may be set arbitrarily.
Further, in this embodiment, feature points are extracted by judging whether a partial image matches the self-rotated image obtained by rotating it by a single prescribed self-rotation angle R, but the invention is not limited to this. For example, the degree of matching between the partial image and each of a plurality of self-rotated images, obtained by rotating it by a plurality of self-rotation angles, may be calculated, and the calculation results for the respective self-rotated images may then be used to judge whether to treat the center pixel of the partial image as a feature point.
Table 1 shows the matching determination results for patterns A to E shown in Fig. 6 with respect to the 90-degree, 180-degree, and 120-degree self-rotated images.
[table 1]
In the table, ○ indicates a pattern judged to match, and × indicates a pattern judged not to match.
As the table shows, the matching determination result differs depending on the self-rotation angle R. Therefore, by calculating the degree of matching with respect to a plurality of self-rotated images, different patterns can be extracted at each self-rotation angle as partial images that match the self-rotated image.
For instance, a target pixel whose pattern is judged to match at every one of the plurality of self-rotation angles may be treated as a feature point; a target pixel whose pattern is judged to match at any one of the plurality of self-rotation angles may be treated as a feature point; or a target pixel whose pattern is judged to match at a prescribed number or more of the plurality of self-rotation angles may be treated as a feature point.
In this way, by extracting feature points using the matching determination results for self-rotated images at a plurality of angles, the number of feature points can easily be increased. Since the similarity between the input image and a registered image can then be judged from more feature points, the accuracy of the similarity determination can be further improved.
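One possible reading of this multi-angle scheme is sketched below, under the assumption that only right-angle rotations (multiples of 90 degrees) are tested so that no interpolation is needed; as the embodiment notes, other angles such as 120 degrees would require interpolation. The normalized-correlation match test, its threshold, and the `min_matches` policy parameter are illustrative assumptions.

```python
import numpy as np

def match_at_angle(block: np.ndarray, quarter_turns: int,
                   threshold: float = 0.9) -> bool:
    """Match test against the self-rotated image for R = 90 * quarter_turns
    degrees, using a normalized correlation (an assumed measure)."""
    rotated = np.rot90(block, quarter_turns)
    a = block.astype(float).ravel() - block.mean()
    b = rotated.astype(float).ravel() - rotated.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return denom > 0 and float(np.dot(a, b)) / denom >= threshold

def is_feature_point_multi(block, angles=(1, 2, 3), min_matches=1):
    """Combine the per-angle results: the target pixel is a feature point
    when the partial image matches its self-rotated image at at least
    min_matches of the tested angles (the selectable policy described
    above)."""
    return sum(match_at_angle(block, q) for q in angles) >= min_matches
```

A plus-shaped pattern matches at 90, 180, and 270 degrees, while a diagonal line matches only at 180 degrees, so varying `min_matches` changes which patterns are accepted, mirroring Table 1.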
In this embodiment, the case where the control section 7 causes the pattern detection process section 45 to cut out one partial image of M × M size when extracting a partial image has been described, but the method of cutting out partial images is not limited to a single one. For example, in addition to the M × M partial image, a partial image of L × L size (L > M) centered on the same target pixel may be subjected to the same computation, and feature points set accordingly.
Suppose, for example, that M = 7 and L = 11, and that partial images are extracted with the center points of patterns A to C shown in Figs. 24 to 26 as the starting points. In Figs. 24 to 26, the region enclosed by the broken line is the 7 × 7 partial image and the region enclosed by the solid line is the 11 × 11 partial image.
Consider, for the partial images of each of these sizes, the degree of matching between the extracted partial image and the self-rotated image obtained by rotating it by 180 degrees. For pattern A, the images match at 7 × 7 pixels but not at 11 × 11 pixels. For pattern B, they match at neither 7 × 7 nor 11 × 11 pixels. For pattern C, they match at both 7 × 7 and 11 × 11 pixels. Thus, even for partial images extracted from the same pattern, whether the partial image matches its self-rotated image can change depending on the cut-out size of the partial image (its pixel dimensions).
Furthermore, in the case of pattern B, although the images match at neither 7 × 7 nor 11 × 11 pixels, if the 7 × 7 partial image region is excluded (masked) from the computation within the 11 × 11 partial image, the determination against the self-rotated image becomes a match.
As the computation method for this exclusion, attention is paid to the number of pixels involved, and the masked portion is subtracted from each Σ term in the above formulas (3) to (5). Taking formula (3) as an example:
For 7 × 7 pixels, with pixel count Nm (= 49):
Am = Nm · Σ_{i=1}^{Nm} (Tm_i × Im_i) − Σ Tm_i × Σ Im_i   … (6)
For 11 × 11 pixels, with pixel count Nl (= 121):
Al = Nl · Σ_{i=1}^{Nl} (Tl_i × Il_i) − Σ Tl_i × Σ Il_i   … (7)
For the masked image, with pixel count Nk (= 121 − 49 = 72):
Ak = Nk · Σ_{i=1}^{Nk} (Tk_i × Ik_i) − Σ Tk_i × Σ Ik_i   … (8)
   = (Nl − Nm) · { Σ_{i=1}^{Nl} (Tl_i × Il_i) − Σ_{i=1}^{Nm} (Tm_i × Im_i) } − (Σ Tl_i − Σ Tm_i) × (Σ Il_i − Σ Im_i)
Here, each Σ value used in calculating Ak has already been obtained in calculating Am and Al, so no recomputation is needed. B in the above formula (4) and C in the above formula (5) can be calculated in the same way.
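The reuse of partial sums expressed by formula (8) can be checked numerically. The sketch below, with the assumed sizes M = 7 and L = 11, computes the correlation term for the masked ring (the 11 × 11 region with the inner 7 × 7 region excluded) both directly over the ring pixels and via the formula (8) identity that reuses the sums from the 7 × 7 and 11 × 11 computations; the two agree.

```python
import numpy as np

def corr_term(t: np.ndarray, i: np.ndarray) -> float:
    """A = N * sum(T_i * I_i) - sum(T_i) * sum(I_i), the common form of
    formulas (6) to (8), over whatever pixel set is passed in."""
    n = t.size
    return n * float(np.sum(t * i)) - float(np.sum(t)) * float(np.sum(i))

def masked_corr_term(t11: np.ndarray, i11: np.ndarray, m: int = 7) -> float:
    """Ak for the L x L partial image with the centered M x M region masked
    out, computed per formula (8) from sums already available for the
    M x M and L x L images, so nothing is recomputed per pixel."""
    l = t11.shape[0]
    lo = (l - m) // 2
    t7, i7 = t11[lo:lo + m, lo:lo + m], i11[lo:lo + m, lo:lo + m]
    nk = l * l - m * m                      # Nl - Nm = 121 - 49 = 72
    sum_ti = float(np.sum(t11 * i11) - np.sum(t7 * i7))
    return nk * sum_ti - (float(np.sum(t11)) - float(np.sum(t7))) * \
                         (float(np.sum(i11)) - float(np.sum(i7)))
```

This confirms the claim in the text that the Σ values computed for Am and Al suffice to obtain Ak.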
Table 2 shows whether the partial images obtained as described above match the self-rotated images obtained by rotating them by 180 degrees. In the table, ○ indicates a pattern judged to match, and × indicates a pattern judged not to match.
[table 2]

               Pattern A   Pattern B   Pattern C
  7 × 7            ○           ×           ○
  11 × 11          ×           ×           ○
  Masked           ×           ○           ○
As Table 2 shows, the matching determination result differs depending on the cut-out size of the partial image and on whether masking is applied. Therefore, by calculating the degree of matching for a plurality of cut-out methods, different patterns can be extracted under each cut-out method as partial images that match the self-rotated image.
For instance, a target pixel whose pattern is judged to match under every one of the plurality of cut-out methods may be treated as a feature point; a target pixel whose pattern is judged to match under any one of the plurality of cut-out methods may be treated as a feature point; or a target pixel whose pattern is judged to match under a prescribed number or more of the plurality of cut-out methods may be treated as a feature point.
In this way, by extracting feature points using the matching determination results for a plurality of cut-out methods, the number of extracted feature points can easily be increased. Since the similarity between the input image and a registered image can then be judged from more feature points, the accuracy of the similarity determination can be further improved.
In this embodiment, an example has been described in which the feature point calculating section 31 extracts feature points from the multilevel image data input from the preprocessing section 30, but the feature point extraction method is not limited to this.
For example, binarization may be performed in the preprocessing section 30, and feature points extracted from the binarized image.
In this case, the preprocessing section 30 binarizes the image data by comparing the achromatized image data (the luminance value (luminance signal) or lightness value (lightness signal)) with a predetermined threshold.
The pattern detection process section 45 of the feature point calculating section 31 then extracts a partial image from the image data binarized by the preprocessing section 30, accumulates the ON pixel count (black pixel count) CountOn within the prescribed range, and judges from the accumulated count whether a pattern is present. For example, thresholds TH2 and TH3 are set in advance; when TH2 ≤ CountOn ≤ TH3, it is judged that a pattern is present, and when CountOn < TH2 or CountOn > TH3, it is judged that no pattern is present. The thresholds TH2 and TH3 need only be set appropriately so that patterns can be extracted reliably.
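The ON-pixel-count test for pattern presence is simple enough to state directly; the threshold values TH2 and TH3 used in the sketch below are arbitrary placeholders, since the text only requires that they be set so patterns are extracted reliably.

```python
def has_pattern(binary_block, th2: int, th3: int) -> bool:
    """Pattern-presence test on a binarized partial image: accumulate the
    ON (black) pixel count CountOn and require TH2 <= CountOn <= TH3.
    A block that is nearly blank or nearly solid is judged to contain
    no usable pattern."""
    count_on = sum(int(v) for row in binary_block for v in row)
    return th2 <= count_on <= th3
```

The two-sided check rejects both blank regions (too few ON pixels) and solid fills (too many), which is why both TH2 and TH3 appear.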
The rotated image generating section 46 then generates, for each partial image judged by the pattern detection process section 45 to contain a pattern, the self-rotated image obtained by rotating it by the prescribed self-rotation angle R. At this time, the rotation processing may be performed by floating-point arithmetic, followed by processing such as rounding off the fractional part.
The matching degree calculating section 47 then accumulates the number Sn of pixels whose values match between the partial image (the image before rotation) and the self-rotated image.
Figure 17(a) is an explanatory diagram showing an example of a partial image, and Fig. 17(b) is an explanatory diagram showing the self-rotated image obtained by rotating the partial image of Fig. 17(a) by 90 degrees. In the example of Figs. 17(a) and 17(b), the pixels whose values match between the partial image and the self-rotated image are the blackened pixels in Fig. 17(b) (row 5, column 5 and row 8, column 8). Accordingly, in this example the number of pixels whose values match between the partial image and the self-rotated image is Sn = 2.
The matching determination section 48 then judges whether the partial image matches the self-rotated image by comparing the matching count Sn accumulated by the matching degree calculating section 47 with a preset threshold TH_est2. For example, when Sn > TH_est2 the two images are judged to match, and when Sn ≤ TH_est2 they are judged not to match. The threshold TH_est2 need only be set appropriately so that feature points can be extracted reliably.
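A sketch of the Sn-based match test follows. One assumption should be flagged: the worked example of Fig. 17 yields Sn = 2 for the two blackened pixels, which suggests that only pixels that are ON in both the partial image and the self-rotated image are counted (rather than all positions with equal values, which would include the background); that convention is adopted here, together with an assumed 90-degree rotation angle.

```python
import numpy as np

def matching_count(block: np.ndarray) -> int:
    """Sn: count of pixels whose values agree between the binarized
    partial image and its 90-degree self-rotated image, counting only
    pixels that are ON in both (the convention suggested by Fig. 17)."""
    rotated = np.rot90(block, 1)
    return int(np.sum((block == 1) & (rotated == 1)))

def matches(block: np.ndarray, th_est2: int) -> bool:
    """The two images are judged to match when Sn > TH_est2."""
    return matching_count(block) > th_est2
```

For a 90-degree rotation-symmetric pattern, every ON pixel survives the rotation, so Sn equals the full ON count and the match passes for any reasonable TH_est2.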
Alternatively, the sum of the absolute values of the pixel-value differences between mutually corresponding pixels of the partial image and the self-rotated image, that is, the residual difference Sz, may be calculated, and feature points calculated from this residual difference Sz.
In this case, the processing of the pattern detection process section 45 and the rotated image generating section 46 is the same as described above. The matching degree calculating section 47 calculates the sum (residual difference Sz) of the absolute values of the differences between the pixel values of mutually corresponding pixels of the partial image and the self-rotated image, as shown in the following formula. The image data input to the pattern detection process section 45 may be either binary or multilevel.
Sz = Σ_{i=1}^{N} | I_i − T_i |   … (9)
The matching determination section 48 then judges whether the partial image matches the self-rotated image by comparing the residual difference Sz calculated by the matching degree calculating section 47 with a preset threshold TH_est3. When the residual difference Sz is used, the smaller Sz is, the higher the degree of matching between the two images. Therefore, for example, when Sz < TH_est3 the two images are judged to match, and when Sz ≥ TH_est3 they are judged not to match. The threshold TH_est3 need only be set appropriately so that feature points can be extracted reliably.
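Formula (9) and the Sz-based determination translate directly into code; the 90-degree rotation angle below is an assumption, and the function works for binary or multilevel data alike, as the text allows.

```python
import numpy as np

def residual_difference(block: np.ndarray) -> float:
    """Sz = sum_i |I_i - T_i|  (formula (9)): sum of absolute differences
    between corresponding pixels of the partial image and its self-rotated
    image. Smaller Sz means a higher degree of matching."""
    rotated = np.rot90(block, 1)   # 90-degree self-rotated image (assumed R)
    return float(np.sum(np.abs(block.astype(float) - rotated.astype(float))))

def matches_by_residual(block: np.ndarray, th_est3: float) -> bool:
    """Match when Sz < TH_est3 (note the reversed comparison versus Sn)."""
    return residual_difference(block) < th_est3
```

A rotation-symmetric pattern gives Sz = 0, while a diagonal line leaves every arm pixel unmatched after rotation.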
In this embodiment, the case has been described in which the input image data is a single-channel multilevel signal and a binary signal obtained by binarizing this input image data is used for the similarity determination, but the structure of the input image data is not limited to this. For example, the input image data may be a color signal composed of a plurality of channels (for example, the three channels R, G, B or the four channels C, M, Y, K), or such a color signal combined with data based on a signal from a light source outside the visible range. In such cases, the processing of the signal conversion process section 41 in the preprocessing section 30 is not performed. Whether to perform the processing of the signal conversion process section 41 may be decided, for example, according to the user's selection of "color image" as the document type via the operation panel 6 of the image forming apparatus 3 (that is, when a choice between color image and monochrome image is offered and color image has been selected); alternatively, an automatic color selection section (not shown) for judging whether the document image (input image data) is a color image may be provided upstream of the document matching process section 13, and the decision made according to its determination result.
As the automatic color selection method, for example, the method described in Patent Document 3 can be used. In this method, each pixel is classified as a chromatic pixel or a monochrome pixel; when a run of a prescribed number or more of consecutive chromatic pixels is detected, the run is recognized as a color block, and a line containing a prescribed number or more of color blocks is counted as a color line. If the document contains a prescribed number of color lines, it is judged to be a color image; otherwise it is judged to be a monochrome image.
For example, when the color image input signal is a common three-channel R, G, B signal, feature points can be extracted in the same manner as in the above embodiment by processing each channel independently in the preprocessing section 30 as a multilevel signal, or by binarizing each channel, as described above.
Figure 27(a) is an explanatory diagram showing an example of input image data consisting of a color image, and Figs. 27(b) to 27(d) are explanatory diagrams showing the multilevel image data of the R, G, and B channels corresponding to the image data of Fig. 27(a). As shown in Figs. 27(b) to 27(d), the positions of the extracted feature points differ from channel to channel. That is, for the three colored marks shown in Fig. 27(a) (A: black, B: green, C: red), the feature point extraction results for the respective channel images (Figs. 27(b) to 27(d)) are as shown in Table 3. In the table, ○ indicates a pattern judged to be a feature point, and × indicates a pattern judged not to be a feature point.
[table 3]

               R channel   G channel   B channel
  A (black)        ○           ○           ○
  B (green)        ○           ×           ○
  C (red)          ×           ○           ○
As this table shows, the partial images that become feature point extraction targets differ from channel to channel, and therefore the feature points extracted from these partial images and their self-rotated images also differ from channel to channel. Accordingly, by performing feature point extraction using a plurality of channels, more feature points can be extracted.
For example, the feature points extracted in the respective channels may each be treated as separate feature points; a target pixel whose partial image is judged to match the self-rotated image in any one of the plurality of channels may be treated as a feature point; or target pixels whose partial images are judged to match the self-rotated image in a prescribed number of the plurality of channels may each be treated as separate feature points.
In this way, by extracting feature points using the determination results of the degree of matching between the partial image and the self-rotated image in a plurality of channels, the number of feature points, or the information accompanying the feature points, can easily be increased. Since the similarity between the input image and a registered image can then be judged from more feature points, the accuracy of the similarity determination can be further improved.
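The channel-combination policies listed above amount to counting, for each candidate target pixel, in how many channels it was judged to be a feature point, then thresholding that count. A minimal sketch under that reading (names and the `min_channels` parameter are assumptions):

```python
from collections import Counter

def combine_channel_features(per_channel_points, min_channels=1):
    """Combine feature points extracted independently from each channel
    (e.g. R, G, B). min_channels = 1 keeps a point found in any channel
    (the union policy); min_channels = len(channels) keeps only points
    found in every channel; intermediate values give the
    'prescribed number of channels' policy described above."""
    counts = Counter(p for pts in per_channel_points for p in set(pts))
    return {p for p, c in counts.items() if c >= min_channels}
```

Passing each channel's point list through `set()` first guards against a channel reporting the same coordinate twice.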
In this embodiment, the case where the present invention is applied to the digital color multifunction peripheral 1 has been described, but the application of the present invention is not limited to this. For example, it may also be applied to a monochrome multifunction peripheral. Moreover, it is not limited to multifunction peripherals; it may also be applied to stand-alone image processing apparatuses such as facsimile communication apparatuses, copiers, and image reading apparatuses.
Figure 18 is a block diagram showing a structure example in the case where the present invention is applied to a flatbed scanner (image reading apparatus, image processing apparatus) 1'.
As shown in the figure, the flatbed scanner 1' has a color image input device 2 and a color image processing apparatus 3'. The color image processing apparatus 3' consists of an A/D conversion section 11, a shading correction section 12, a document matching process section 13, a control section 7 (not shown in Fig. 18), and a memory 8 (not shown in Fig. 18); the color image input device 2 is connected to it, and together they constitute the image reading apparatus 1'. The functions of the color image input device (image reading unit) 2, the A/D conversion section 11, the shading correction section 12, the document matching process section 13, the control section 7, and the memory 8 are substantially the same as in the digital color multifunction peripheral 1 described above, and their explanation is therefore omitted here.
The functions of the document matching process section 13 may also be realized by an image processing system made up of an image processing apparatus and a server apparatus communicably connected to that image processing apparatus. Figure 19 is an explanatory diagram showing the structure of an image processing system 100 constituted by communicably connecting image processing apparatuses (multifunction peripherals (MFPs) A, B, ..., printers A, B, ..., facsimile machines A, B, ..., computers A, B, ..., digital cameras A, B, ..., scanners A, B, ...) with a server apparatus 50. The structure of the image processing system 100 is not limited to this; for example, it may be constituted by the server apparatus 50 together with any one or more of multifunction peripherals, printers (image forming apparatuses), facsimile machines, computers, digital cameras (image reading apparatuses), and scanners.
The scanner has a document platen, an optical scanning section, a CCD (charge-coupled device), and the like, and reads a document image by scanning, with the optical scanning section, the document placed on the document platen, thereby generating image data. The digital camera has a photographic lens, a CCD, and the like (image input devices), and generates image data by photographing document images, persons, landscapes, and so on. The scanner and the digital camera may also have functions for prescribed image processing (for example, various correction processes) performed so that images are reproduced properly. The printer prints images based on image data generated by a computer, scanner, or digital camera onto sheets (recording paper). The facsimile machine performs processing such as binarization, resolution conversion, and rotation on image data read in through an image input device and transmits the image data, compressed into a prescribed format, to the other party, or decompresses image data received from the other party and, in accordance with the capabilities of the corresponding image output device, performs rotation processing, resolution conversion, and halftone processing on it, and outputs it as page-unit images. The multifunction peripheral has at least two of the scanner function, the facsimile transmission function, and the printing function (copy function, print function). The computer can edit image data read by a scanner or digital camera, or create documents using application software.
In the image processing system 100, the various parts of the document matching process section 13 are distributed between the server apparatus 50 and the image processing apparatuses connected to the server apparatus 50 through the network, and the functions of the document matching process section 13 are realized by the image processing apparatuses and the server apparatus 50 in cooperation with each other.
Figure 20 is a block diagram showing a structure example in the case where the functions of the document matching process section 13 are distributed between the server apparatus 50 and the digital color multifunction peripheral 1.
As shown in Fig. 20, the color image processing apparatus 3 of the digital color multifunction peripheral 1 has: a document matching process section 13a comprising the preprocessing section 30, the feature point calculating section 31, and the feature value calculating section 32; a control section 7a for controlling the operation of the document matching process section 13a; a memory 8a for storing information necessary for the processing of the document matching process section 13a; and a communication device 5 for communicating with external devices. The server apparatus 50 has: a communication device 51 for communicating with external devices; a document matching process section 13b comprising the voting process section 33, the similarity determination process section 34, and the registration process section 37; a control section 7b for controlling the document matching process section 13b; and a memory 8b for storing information necessary for the processing of the document matching process section 13b. When data needs to be exchanged between the functional blocks of the digital color multifunction peripheral 1 and those of the server apparatus 50, the control sections 7a and 7b control the communication devices 5 and 51 to transmit and receive the data as appropriate. The other functions are the same as in the structure described above.
In the example of Fig. 20, the whole of the feature value calculating section 32 (the feature point extraction section 32a, the invariant calculating section 32b, and the hash value calculating section 32c) is provided in the digital color multifunction peripheral 1, but the arrangement is not limited to this; for example, as shown in Fig. 21, the feature point extraction section 32a and the invariant calculating section 32b may be provided in the digital color multifunction peripheral 1 and the hash value calculating section 32c in the server apparatus 50.
Alternatively, all parts of the feature value calculating section 32 may be provided in the server apparatus 50 in advance; data on the feature points calculated by the feature point calculating section 31 is then sent from the digital color multifunction peripheral 1 to the server apparatus 50, and the feature value calculating section 32 provided in the server apparatus 50 calculates hash values from the received feature point data and the hash table 103 stored in the memory 8b. It is also possible to provide both the feature point calculating section 31 and the feature value calculating section 32 in the server apparatus 50; in that case, the input image data is sent from the digital color multifunction peripheral 1 to the server apparatus 50, and the feature point calculating section 31 and the feature value calculating section 32 of the server apparatus 50 calculate hash values from the received input image data and the hash table 103 stored in the memory 8b.
The above explanation has described the case where the similarity determination processing is performed; when the registration processing is performed, the registration process section 37 of the server apparatus 50 need only register, in the hash table 103 provided in the memory 8b, the document ID and hash values received from the digital color multifunction peripheral 1 (or hash values calculated by the hash value calculating section 32c of the server apparatus 50). Whether the similarity determination processing or the registration processing is to be performed may be specified by the user of the digital color multifunction peripheral 1 through the operation panel 6, with a signal indicating which processing is to be performed sent to the server apparatus 50; alternatively, the server apparatus 50 may perform the registration processing on input images judged to be dissimilar on the basis of the similarity determination result.
When the hash value calculating section 32c is provided in the server apparatus 50, hash values may also be calculated by a method different from the calculation method of the hash values stored in the hash table 103 (that is, using a different hash function), and the hash table 103 updated with the calculated hash values. This makes it possible, for example, to register (update) in the hash table 103 hash values based on feature values (invariants) that are more reliable for the kind of document image and so on, and to perform the voting processing with reference to those hash values; the matching accuracy (the determination accuracy of the similarity) can therefore be improved.
In each of the above embodiments, the sections (components) of the document matching process sections and control sections provided in the digital color multifunction peripheral 1 and/or the server apparatus 50 may be realized by software using a processor such as a CPU. That is, the digital color multifunction peripheral 1 and/or the server apparatus 50 include a CPU (central processing unit) that executes the instructions of control programs realizing the various functions, a ROM (read-only memory) storing the programs, a RAM (random access memory) into which the programs are loaded, and storage devices (recording media) such as memories storing the programs and various data. The object of the present invention can also be achieved by supplying to the digital color multifunction peripheral 1 and/or the server apparatus 50 a recording medium on which the program code (executable program, intermediate code program, source program) of the control programs of the digital color multifunction peripheral 1 and/or the server apparatus 50, that is, software realizing the functions described above, is recorded in computer-readable form, and having the computer (or a CPU or MPU) read out and execute the program code recorded on the recording medium.
As the recording medium, it is possible to use, for example, tapes such as magnetic tapes and cassette tapes; disks including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical disks such as CD-ROM/MO/MD/DVD/CD-R; cards such as IC cards (including memory cards) and optical cards; or semiconductor memories such as mask ROM/EPROM/EEPROM/flash ROM.
The digital color multifunction peripheral 1 and/or the server apparatus 50 may also be configured to be connectable to a communication network, with the program code supplied through the communication network. The communication network is not particularly limited; for example, the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network, a telephone network, a mobile communication network, or a satellite communication network can be used. The transmission medium constituting the communication network is likewise not particularly limited; for example, wired media such as IEEE 1394, USB, power-line carrier, cable TV lines, telephone lines, and ADSL lines can be used, as can wireless media such as infrared (IrDA, remote control), Bluetooth (registered trademark), 802.11 radio, HDR, mobile telephone networks, satellite links, and terrestrial digital networks. The present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
The components of the digital color multifunction peripheral 1 and/or the server apparatus 50 are not limited to realization by software; they may be constituted by hardware logic, or by a combination of hardware performing part of the processing and computing means executing software that controls the hardware and performs the remaining processing.
The computer system of the present invention may be constituted by an image input device such as a flatbed scanner, a film scanner, or a digital camera; a computer that performs various processing such as the similarity calculation and similarity determination described above by loading a predetermined program; an image display device such as a CRT display or a liquid crystal display that displays the results of the computer's processing; and an image forming apparatus such as a printer that outputs the results onto paper or the like. The system may further include a network card, a modem, or the like as communication means for connecting to a server or the like via a network.
An image processing apparatus of the present invention includes: feature point detecting means for detecting feature points contained in input image data; and feature quantity calculating means for calculating a feature quantity of the input image data from the relative positions of the feature points detected by the feature point detecting means. The feature point detecting means includes: a partial image extracting section that extracts, from the input image data, a partial image made up of a plurality of pixels including a target pixel; a rotated image generating section that generates a self-rotated image by rotating the partial image by a predetermined angle; a matching determination section that determines, with the partial image and the self-rotated image superimposed, whether the image pattern contained in the partial image matches the image pattern contained in the self-rotated image; and a detecting section that detects, as a feature point, the target pixel of a partial image determined by the matching determination section to match, or a block made up of a plurality of pixels including that target pixel.
With this configuration, the partial image extracting section extracts from the input image data a partial image made up of a plurality of pixels including a target pixel; the rotated image generating section generates a self-rotated image obtained by rotating the partial image by a predetermined angle; and the matching determination section determines, with the partial image and the self-rotated image superimposed, whether the image pattern contained in the partial image matches the image pattern contained in the self-rotated image. In other words, the partial image, rotated by the predetermined angle, is overlaid on itself, and it is judged whether the two image patterns coincide. The detecting section then detects, as a feature point, the target pixel of a partial image judged to match by the matching determination section, or a block made up of a plurality of pixels including that target pixel.
Thus, even when the input image data has been read with the document skewed relative to the prescribed placement angle of the reading position of an image reading apparatus, or has been subjected to processing such as enlargement or reduction, the target pixel (or the block including it) corresponding to a partial image whose image pattern is unaffected, or only slightly affected, by such skew, enlargement, or reduction can be detected as a feature point. Accordingly, by calculating the feature quantity of the input image data from the relative positions of these feature points, a feature quantity that identifies the input image data with high accuracy can be calculated regardless of such skew, enlargement, or reduction.
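The detection flow just described (extract a window of pixels around each target pixel, rotate it by a predetermined angle, and mark the target pixel as a feature point when the window coincides with its rotated copy) can be sketched in a few lines. This is a minimal illustration only: the 5x5 window, the 180-degree rotation, the binary input, and all names are assumptions of this sketch, not details fixed by the patent.

```python
import numpy as np

def detect_feature_points(image, win=5, angle_k=2, tol=0):
    """Scan a binary image with a win x win window centered on each
    candidate target pixel; rotate the window by angle_k * 90 degrees
    (the self-rotated image) and mark the target pixel as a feature
    point when the window's pattern coincides with its rotated copy."""
    h, w = image.shape
    r = win // 2
    points = []
    for y in range(r, h - r):
        for x in range(r, w - r):
            part = image[y - r:y + r + 1, x - r:x + r + 1]  # partial image
            rotated = np.rot90(part, k=angle_k)             # self-rotated image
            # matching determination: skip blank windows, then compare
            if part.any() and np.count_nonzero(part != rotated) <= tol:
                points.append((x, y))
    return points
```

A plus-shaped mark, for instance, matches its own 180-degree rotation, so its center pixel is detected regardless of how the page was skewed or scaled when it was read.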
In addition to the above configuration, the apparatus may further include: at least one of a storage section that stores feature quantities of registered images and a registered-image obtaining section that obtains feature quantities of registered images from an external device connected so as to be able to communicate; and a similarity calculating section that calculates the similarity between the two images by comparing the feature quantity of the input image data calculated by the feature quantity calculating means with the feature quantity of a registered image.
With this configuration, the similarity between the input image and a registered image can be calculated with high accuracy even when the image data has been read with the document skewed relative to the prescribed placement angle of the reading position of the image reading apparatus, or has been subjected to processing such as enlargement or reduction.
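As one illustrative reading of the similarity calculation, each image can be reduced to a multiset of feature quantities (for example, hashes derived from the relative positions of feature points), with the similarity taken as the fraction of the input image's feature quantities that also occur for the registered image. This voting scheme is an assumption of the sketch; the patent does not fix the exact measure.

```python
from collections import Counter

def similarity(input_features, registered_features):
    """Vote-counting similarity between two multisets of integer
    feature quantities: each input feature that finds an unused
    matching registered feature casts one vote."""
    if not input_features:
        return 0.0
    remaining = Counter(registered_features)
    votes = 0
    for f in input_features:
        if remaining[f] > 0:
            remaining[f] -= 1
            votes += 1
    return votes / len(input_features)
```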
The apparatus may also include: a storage section that stores feature quantities of image data and identification information for identifying the image data; and a registration processing section that stores the feature quantity calculated by the feature quantity calculating means for the input image data in the storage section in association with identification information for identifying the input image data.
With this configuration, a feature quantity that identifies the input image data with high accuracy can be calculated, and the feature quantity and the input image can be stored in the storage section, even when the image data has been read with the document skewed relative to the prescribed placement angle of the reading position of the image reading apparatus, or has been subjected to processing such as enlargement or reduction.
The apparatus may further include a pattern detection processing section that determines whether the partial image contains an image pattern, with the rotated image generating section generating the self-rotated image only for partial images judged by the pattern detection processing section to contain an image pattern.
With this configuration, the pattern detection processing section determines whether each partial image contains an image pattern, and the rotated image generating section generates self-rotated images only for partial images judged to contain one. Since the self-rotated image generation and the matching determination can be omitted for partial images containing no image pattern, the processing is simplified.
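The pre-check described above can be as simple as rejecting uniform windows, since a window that is entirely background (or entirely foreground) trivially matches every rotation of itself and yields no useful feature point. The criterion below is an illustrative assumption; the patent leaves the exact pattern test open.

```python
import numpy as np

def contains_pattern(part):
    """Pattern pre-check before rotation: a window carries an image
    pattern only if it is neither all background nor all foreground."""
    nonzero = np.count_nonzero(part)
    return 0 < nonzero < part.size
```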
The rotated image generating section may also generate, for each partial image, a plurality of self-rotated images with different rotation angles; the matching determination section then determines, for each self-rotated image obtained by rotating the partial image, whether it matches the partial image; and the detecting section detects, as a feature point, the target pixel of the partial image, or a block made up of a plurality of pixels including that target pixel, when the partial image matches at least one self-rotated image.
With this configuration, more feature points can be extracted than when only one self-rotated image is generated per partial image. Moreover, since at least the partial image extraction and the matching determination can process each self-rotated image with the same algorithm or processing circuit, more feature points can be extracted without complicating the algorithm or enlarging the scale of the processing circuit.
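Testing one window against several rotation angles, as described above, reuses the same comparison for each angle. In the sketch below the angles are multiples of 90 degrees (convenient with np.rot90); that choice of angles is an assumption of the sketch rather than a requirement of the patent.

```python
import numpy as np

def matches_any_rotation(part, angles=(1, 2, 3)):
    """Compare the partial image with several self-rotated copies
    (k * 90 degrees each); a match with at least one copy suffices
    for the target pixel to be detected as a feature point."""
    return any(np.array_equal(part, np.rot90(part, k)) for k in angles)
```

A four-fold symmetric mark (a plus sign) matches all three rotations, while an asymmetric corner mark matches none, so varying the angle set trades feature point count against selectivity.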
The partial image extracting section may also extract plural kinds of partial images by varying the size of the extraction target region of the partial image.
If the size of the extraction target region differs, the partial image extraction result also differs. With this configuration, therefore, extracting plural kinds of partial images with different extraction target region sizes increases the number of extracted feature points. Since the similarity between the input image and a registered image is then judged from more feature points, the accuracy of the similarity determination can be further improved.
The partial image extracting section may also perform: a first process of extracting partial images with the size of the extraction target region set to a first size; and a second process of extracting partial images with the size of the extraction target region set to a second size larger than the first size, wherein in the second process a partial image is extracted from the extraction target region of the second size centered on the target pixel, excluding the extraction target region of the first size centered on that target pixel.
With this configuration, the number of extracted feature points can be further increased, and the accuracy of the similarity determination can be further improved.
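The two-pass extraction above can be pictured as taking, in the second pass, an annular window: the second-size region around the target pixel with the first-size central region excluded. A sketch follows, in which zeroing out the inner region stands in for "exclusion" (an illustrative choice; the patent only requires that the inner region be excluded).

```python
import numpy as np

def annular_partial_image(image, x, y, size1, size2):
    """Extract the second-size partial image around target pixel (x, y)
    with the first-size central region masked out. Sizes are assumed odd
    so that the window is centered on the target pixel."""
    r2 = size2 // 2
    part = image[y - r2:y + r2 + 1, x - r2:x + r2 + 1].copy()
    r1 = size1 // 2
    c = r2  # index of the target pixel within the window
    part[c - r1:c + r1 + 1, c - r1:c + r1 + 1] = 0  # exclude inner region
    return part
```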
The partial image extracting section may also extract partial images for each of a plurality of color components in the input image data.
Even for identical input image data, the partial image extraction result differs when a different color component is considered. With this configuration, therefore, performing the extraction for each of a plurality of color components increases the number of extracted feature points. Since the similarity between the input image and a registered image can then be judged from more feature points, the accuracy of the similarity determination can be further improved.
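Per-color-component extraction as described above amounts to running the same single-channel detector once per channel and pooling the feature points. A sketch, assuming the image is an H x W x C numpy array and `detect` is any caller-supplied single-channel detector:

```python
import numpy as np

def feature_points_per_channel(color_image, detect):
    """Run a single-channel feature point detector independently on
    each color component of an H x W x C array and pool the results."""
    points = []
    for c in range(color_image.shape[2]):
        points.extend(detect(color_image[:, :, c]))
    return points
```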
The apparatus may further include a smoothing processing section that applies smoothing to the input image data, with the feature point detecting means detecting the feature points from the smoothed input image data.
With this configuration, detecting feature points from input image data that has undergone smoothing prevents inappropriate feature quantities from being extracted under the influence of halftone dots and noise components.
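The smoothing pass can be any low-pass filter; a 3x3 box (mean) filter is a common minimal choice for suppressing halftone dots, used here purely as an illustration (the patent does not prescribe a kernel).

```python
import numpy as np

def box_smooth(image, k=3):
    """k x k box-filter smoothing pass applied before feature point
    detection; edge padding keeps the output the same size as the input."""
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode='edge')
    out = np.zeros(image.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)
```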
An image forming apparatus of the present invention includes any of the image processing apparatuses described above and an image output section that forms an image corresponding to the input image data on a recording material.
With this configuration, even when the input image data has been read with the document skewed relative to the prescribed placement angle of the reading position of the image reading apparatus, or has been subjected to processing such as enlargement or reduction, a feature quantity that identifies the input image data with high accuracy can be calculated regardless of such skew, enlargement, or reduction.
An image processing system of the present invention includes an image processing apparatus and a server apparatus connected so as to be able to communicate with the image processing apparatus, wherein feature point detecting means for detecting feature points contained in input image data and feature quantity calculating means for calculating a feature quantity of the input image data from the relative positions of the feature points detected by the feature point detecting means are provided in the image processing apparatus or the server apparatus, or are distributed between them. The feature point detecting means includes: a partial image extracting section that extracts, from the input image data, a partial image made up of a plurality of pixels including a target pixel; a rotated image generating section that generates a self-rotated image by rotating the partial image by a predetermined angle; a matching determination section that determines, with the partial image and the self-rotated image superimposed, whether the image pattern contained in the partial image matches the image pattern contained in the self-rotated image; and a detecting section that detects, as a feature point, the target pixel of a partial image determined by the matching determination section to match, or a block made up of a plurality of pixels including that target pixel.
With this configuration, the partial image extracting section extracts from the input image data a partial image made up of a plurality of pixels including a target pixel; the rotated image generating section generates a self-rotated image obtained by rotating the partial image by a predetermined angle; and the matching determination section determines, with the partial image and the self-rotated image superimposed, whether the image patterns of the two coincide. The detecting section then detects, as a feature point, the target pixel of a partial image judged to match, or a block made up of a plurality of pixels including that target pixel.
Thus, even when the input image data has been read with the document skewed relative to the prescribed placement angle of the reading position of an image reading apparatus, or has been subjected to processing such as enlargement or reduction, the target pixel (or the block including it) corresponding to a partial image whose image pattern is unaffected, or only slightly affected, by such skew, enlargement, or reduction can be detected as a feature point. Accordingly, by calculating the feature quantity of the input image data from the relative positions of these feature points, a feature quantity that identifies the input image data with high accuracy can be calculated regardless of such skew, enlargement, or reduction.
An image processing method of the present invention includes: a feature point detecting step of detecting feature points contained in input image data; and a feature quantity calculating step of calculating a feature quantity of the input image data from the relative positions of the feature points detected in the feature point detecting step. The feature point detecting step includes: a partial image extracting step of extracting, from the input image data, a partial image made up of a plurality of pixels including a target pixel; a rotated image generating step of generating a self-rotated image by rotating the partial image by a predetermined angle; a matching determination step of determining, with the partial image and the self-rotated image superimposed, whether the image pattern contained in the partial image matches the image pattern contained in the self-rotated image; and a detecting step of detecting, as a feature point, the target pixel of a partial image judged to match in the matching determination step, or a block made up of a plurality of pixels including that target pixel.
According to this method, in the partial image extracting step a partial image made up of a plurality of pixels including a target pixel is extracted from the input image data; in the rotated image generating step a self-rotated image is generated by rotating the partial image by a predetermined angle; and in the matching determination step it is determined, with the partial image and the self-rotated image superimposed, whether the image patterns of the two coincide. In the detecting step, the target pixel of a partial image judged to match in the matching determination step, or a block made up of a plurality of pixels including that target pixel, is detected as a feature point.
Thus, even when the input image data has been read with the document skewed relative to the prescribed placement angle of the reading position of an image reading apparatus, or has been subjected to processing such as enlargement or reduction, the target pixel (or the block including it) corresponding to a partial image whose image pattern is unaffected, or only slightly affected, by such skew, enlargement, or reduction can be detected as a feature point. Accordingly, by calculating the feature quantity of the input image data from the relative positions of these feature points, a feature quantity that identifies the input image data with high accuracy can be calculated regardless of such skew, enlargement, or reduction.
The image processing apparatus may also be realized by a computer; in that case, the scope of the present invention also covers an image processing program that causes a computer to operate as each of the above means, thereby realizing the image processing apparatus by computer, and a computer-readable recording medium on which the program is recorded.
The embodiments and examples described in the detailed description of the invention serve solely to clarify the technical content of the present invention. The invention should not be interpreted narrowly as being limited to those concrete examples; it can be carried out with various modifications within the scope set forth in the claims.

Claims (11)

1. An image processing apparatus comprising: feature point detecting means for detecting feature points contained in input image data; and feature quantity calculating means for calculating a feature quantity of the input image data from the relative positions of the feature points detected by the feature point detecting means, characterized in that
the feature point detecting means comprises:
a partial image extracting section that extracts, from the input image data, a partial image made up of a plurality of pixels including a target pixel;
a rotated image generating section that generates a self-rotated image by rotating the partial image by a predetermined angle;
a matching determination section that determines, with the partial image and the self-rotated image superimposed, whether the image pattern contained in the partial image matches the image pattern contained in the self-rotated image; and
a detecting section that detects, as a feature point, the target pixel of a partial image determined by the matching determination section to match, or a block made up of a plurality of pixels including that target pixel.
2. The image processing apparatus according to claim 1, characterized by comprising:
at least one of a storage section that stores feature quantities of registered images and a registered-image obtaining section that obtains feature quantities of registered images from an external device connected so as to be able to communicate; and
a similarity calculating section that calculates the similarity between the two images by comparing the feature quantity of the input image data calculated by the feature quantity calculating means with the feature quantity of a registered image.
3. The image processing apparatus according to claim 1, characterized by comprising:
a storage section that stores feature quantities of image data and identification information for identifying the image data; and
a registration processing section that stores the feature quantity calculated by the feature quantity calculating means for the input image data in the storage section in association with identification information for identifying the input image data.
4. The image processing apparatus according to claim 1, characterized in that
the apparatus comprises a pattern detection processing section that determines whether the partial image contains an image pattern, and
the rotated image generating section generates the self-rotated image for partial images judged by the pattern detection processing section to contain an image pattern.
5. The image processing apparatus according to claim 1, characterized in that
the rotated image generating section generates, for each partial image, a plurality of self-rotated images with different rotation angles,
the matching determination section determines, for each self-rotated image obtained by rotating the partial image, whether the partial image matches it, and
the detecting section detects, as a feature point, the target pixel of the partial image, or a block made up of a plurality of pixels including that target pixel, when the partial image matches at least one self-rotated image.
6. The image processing apparatus according to claim 1, characterized in that
the partial image extracting section extracts plural kinds of partial images by varying the size of the extraction target region of the partial image.
7. The image processing apparatus according to claim 6, characterized in that
the partial image extracting section performs:
a first process of extracting partial images with the size of the extraction target region of the partial image set to a first size; and
a second process of extracting partial images with the size of the extraction target region of the partial image set to a second size larger than the first size,
wherein in the second process, a partial image is extracted from the extraction target region of the second size centered on the target pixel, excluding the extraction target region of the first size centered on that target pixel.
8. The image processing apparatus according to claim 1, characterized in that
the partial image extracting section extracts partial images for each of a plurality of color components in the input image data.
9. The image processing apparatus according to claim 1, characterized in that
the apparatus comprises a smoothing processing section that applies smoothing to the input image data, and
the feature point detecting means detects the feature points from the input image data to which the smoothing has been applied.
10. An image forming apparatus characterized by comprising the image processing apparatus according to claim 1 and an image output section that forms an image corresponding to the input image data on a recording material.
11. An image processing method comprising: a feature point detecting step of detecting feature points contained in input image data; and a feature quantity calculating step of calculating a feature quantity of the input image data from the relative positions of the feature points detected in the feature point detecting step, characterized in that
the feature point detecting step comprises:
a partial image extracting step of extracting, from the input image data, a partial image made up of a plurality of pixels including a target pixel;
a rotated image generating step of generating a self-rotated image by rotating the partial image by a predetermined angle;
a matching determination step of determining, with the partial image and the self-rotated image superimposed, whether the image pattern contained in the partial image matches the image pattern contained in the self-rotated image; and
a detecting step of detecting, as a feature point, the target pixel of a partial image judged to match in the matching determination step, or a block made up of a plurality of pixels including that target pixel.
CN2008100997414A 2007-06-06 2008-06-04 Image processing apparatus, image forming apparatus, and image processing method Expired - Fee Related CN101320425B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2007-150969 2007-06-06
JP2007150969 2007-06-06
JP2007150969 2007-06-06
JP2008-110739 2008-04-21
JP2008110739 2008-04-21
JP2008110739A JP4362537B2 (en) 2007-06-06 2008-04-21 Image processing apparatus, image forming apparatus, image transmitting apparatus, image reading apparatus, image processing system, image processing method, image processing program, and recording medium thereof

Publications (2)

Publication Number Publication Date
CN101320425A CN101320425A (en) 2008-12-10
CN101320425B true CN101320425B (en) 2012-05-16

Family

ID=40180466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008100997414A Expired - Fee Related CN101320425B (en) 2007-06-06 2008-06-04 Image processing apparatus, image forming apparatus, and image processing method

Country Status (2)

Country Link
JP (1) JP4362537B2 (en)
CN (1) CN101320425B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106504223B (en) * 2016-09-12 2019-06-14 北京小米移动软件有限公司 The reference angle determination method and device of picture
JP6923159B2 (en) * 2017-09-26 2021-08-18 株式会社エクォス・リサーチ Information processing device
JP6638851B1 (en) 2018-08-31 2020-01-29 ソニー株式会社 Imaging device, imaging system, imaging method, and imaging program
TWI820194B (en) 2018-08-31 2023-11-01 日商索尼半導體解決方案公司 Electronic equipment and solid-state imaging devices
WO2020054067A1 (en) * 2018-09-14 2020-03-19 三菱電機株式会社 Image information processing device, image information processing method, and image information processing program
CN109727232B (en) * 2018-12-18 2023-03-31 上海出版印刷高等专科学校 Method and apparatus for detecting dot area ratio of printing plate
JP7381211B2 (en) * 2019-03-18 2023-11-15 セイコーエプソン株式会社 Image processing device and image processing method
CN112150464B (en) * 2020-10-23 2024-01-30 腾讯科技(深圳)有限公司 Image detection method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5850466A (en) * 1995-02-22 1998-12-15 Cognex Corporation Golden template comparison for rotated and/or scaled images
CN1888814A (en) * 2006-07-25 2007-01-03 深圳大学 Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5850466A (en) * 1995-02-22 1998-12-15 Cognex Corporation Golden template comparison for rotated and/or scaled images
CN1888814A (en) * 2006-07-25 2007-01-03 深圳大学 Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor

Also Published As

Publication number Publication date
JP4362537B2 (en) 2009-11-11
JP2009015819A (en) 2009-01-22
CN101320425A (en) 2008-12-10

Similar Documents

Publication Publication Date Title
CN101320426B (en) Image processing device and method, image forming device and image processing system
CN101388073B (en) Image checking device, image checking method and image data input processing device
CN101320425B (en) Image processing apparatus, image forming apparatus, and image processing method
USRE44982E1 (en) Mixed code, and method and apparatus for generating the same
CN101382770B (en) Image matching apparatus, image matching method, and image data output processing apparatus
USRE44139E1 (en) Method and apparatus for decoding mixed code
CN101539996B (en) Image processing method, image processing apparatus, image forming apparatus
CN101571698B (en) Method for matching images, image matching device, image data output apparatus, and recording medium
US8351706B2 (en) Document extracting method and document extracting apparatus
US8103108B2 (en) Image processing apparatus, image forming apparatus, image processing system, and image processing method
CN101582117A (en) Image processing apparatus, image forming apparatus, image processing system, and image processing method
JP4913094B2 (en) Image collation method, image collation apparatus, image data output processing apparatus, program, and storage medium
CN101404020B (en) Image processing method, image processing apparatus, image forming apparatus, image reading apparatus
JP2008301476A (en) Image processing apparatus, image forming apparatus, image reading apparatus, image processing system, and image processing method, image processing program and recording medium therefor
CN101364268B (en) Image processing apparatus and image processing method
CN101369314B (en) Image processing apparatus, image forming apparatus, image processing system, and image processing method
CN101520846B (en) Image processing method, image processing apparatus and image forming apparatus
CN101393414B (en) Image data output processing apparatus and image data output processing method
JP4393556B2 (en) Image processing method, image processing apparatus, image reading apparatus, image forming apparatus, computer program, and computer-readable recording medium
JP2008228211A (en) Image output method, image processing apparatus, image forming apparatus, image reading apparatus, computer program, and record medium
JP4340714B2 (en) Document extraction method, document extraction apparatus, computer program, and recording medium
JP2008154216A (en) Image processing method and device, image forming device, document reading device, computer program, and recording medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120516

CF01 Termination of patent right due to non-payment of annual fee